I would like to handle two server names, say "web1.example.com" and "web2.example.com", on the same port (443) in the same nginx config, where the first should be served by a local http server and the second should be forwarded to an external upstream without terminating the SSL connection.
How do I configure this?
Details:
I can use nginx to look at the first SSL message (ClientHello) and use it to proxy/forward the entire connection without terminating SSL. It can even look at the SNI and choose a different upstream based on the server name in it. This uses the ngx_stream_ssl_preread_module with proxy_pass and ssl_preread on. The config is something like this:
stream {
    upstream web1 {
        server 10.0.0.1:443;
    }
    upstream web2 {
        server 10.0.0.2:443;
    }

    map $ssl_preread_server_name $upstream {
        web1.example.com        web1;
        web1-alias.example.com  web1;
        web2.example.com        web2;
    }

    server {
        listen 443;
        resolver 1.1.1.1;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
This is configured in the stream config section of nginx.
But I can also configure a local http server in the http config section of nginx.
So what if I want web1 ("web1.example.com" in the example) to be served by such a "local nginx http server" rather than an external "upstream server"? ("web2" should still be forwarded as before.) In other words, I want to configure "web1.example.com" in the http config section of nginx and "forward" to it from the stream config section of nginx.
To be clear, I want "web1.example.com" to be configured like this:
http {
    server {
        listen 443 ssl;
        server_name web1.example.com web1-alias.example.com;
        ssl_certificate ...
        location ...
        ...
    }
}
This all works fine if either stream or http alone listens on the port. But how do I do both on the same port?
How can I "call" the http config section from the stream config section? Can proxy_pass refer to a local nginx http server somehow?
I don't think you can use both on the same port, but maybe something like this would work?
stream {
    upstream web1 {
        server 127.0.0.1:8443;
    }
    upstream web2 {
        server 10.0.0.2:443;
    }

    map $ssl_preread_server_name $upstream {
        web1.example.com        web1;
        web1-alias.example.com  web1;
        web2.example.com        web2;
    }

    server {
        listen 443;
        resolver 1.1.1.1;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

http {
    server {
        listen 8443 ssl;
        server_name web1.example.com web1-alias.example.com;
        ssl_certificate ...
        location ...
        ...
    }
}
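For what it's worth, this loopback arrangement appears to be a common way to combine the two: the stream block owns port 443 and hands the "local" names to an http server listening on a private port. One caveat is that the http server then sees every client as 127.0.0.1. A minimal, untested sketch of how that is usually handled with the PROXY protocol and the realip module (the 8443 port and the loopback addresses are assumptions carried over from the example above):

stream {
    ...
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
        proxy_protocol on;                   # pass the original client address downstream
    }
}

http {
    server {
        # Only reachable through the stream block above.
        listen 127.0.0.1:8443 ssl proxy_protocol;
        server_name web1.example.com web1-alias.example.com;

        # Recover the real client IP from the PROXY protocol header (ngx_http_realip_module).
        set_real_ip_from 127.0.0.1;
        real_ip_header proxy_protocol;

        ssl_certificate ...
        ...
    }
}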
Related
I have an nginx config that looks similar to this (simplified):
http {
    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://my-bff.azurewebsites.net;
            proxy_ssl_server_name on;
        }
    }
}
Essentially, I have a reverse proxy to an API endpoint that uses https.
Now, I would like to convert this to an upstream group to gain access to keepalive and other features. So I tried this:
http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
    }

    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://bff-app;
            proxy_ssl_server_name on;
        }
    }
}
Yet it doesn't work. Clearly I'm missing something.
In summary, how do I correctly do this "conversion" i.e. from url to defined upstream?
I have tried switching between http and https in the proxy_pass directive, but that didn't work either.
I was honestly expecting this to be a simple replacement. One upstream for another, but I'm doing something wrong it seems.
Richard Smith pointed me in the right direction.
Essentially, the issue was that the host header was being set to "bff-app" instead of "my-bff.azurewebsites.net" and this caused the remote server to close the connection.
Fixed by specifying the header manually, as below:
http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
    }

    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://bff-app;
            proxy_ssl_server_name on;

            # Manually set the Host header to "my-bff.azurewebsites.net",
            # otherwise it will default to "bff-app".
            proxy_set_header Host my-bff.azurewebsites.net;
        }
    }
}
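As a side note, if the motivation for the upstream block was keepalive, connection reuse toward the backend presumably also needs a keepalive pool plus HTTP/1.1 with the Connection header cleared, and the SNI name can be pinned with proxy_ssl_name for the same reason the Host header had to be set. A sketch of those additions (the pool size of 16 is an arbitrary assumption):

http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
        keepalive 16;                                  # idle connection pool (size is an assumption)
    }

    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://bff-app;
            proxy_http_version 1.1;                    # keepalive needs HTTP/1.1 ...
            proxy_set_header Connection "";            # ... and no "Connection: close" upstream
            proxy_ssl_server_name on;
            proxy_ssl_name my-bff.azurewebsites.net;   # SNI would otherwise default to "bff-app" too
            proxy_set_header Host my-bff.azurewebsites.net;
        }
    }
}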
Here's the setup:
forwarding_proxy -> server_1, server_2
server_1 -> app1.domain.com, app2.domain.com
server_2 -> app3.domain.com, app4.domain.com
Where each server is running a docker daemon with an nginx reverse-proxy based on the jwilder/nginx-proxy + letsencrypt setup.
Both servers sit behind the same router and I need a way to route traffic correctly to each one based on the host name. I've been trying to use the nginx stream module since I don't want the forwarding proxy to handle any SSL termination, but the $ssl_preread_server_name variable doesn't seem to capture the host name on http traffic, and I can't do a 301 redirect from server blocks in the stream module. What's the best way to approach this?
I've included an example of the config I'm currently working with and I've tried multiple iterations. Open to any suggestions.
(Also, as an aside, nothing logs to access.log)
Forward_proxy nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    # bare bones content, still nothing written to the log.
    log_format main '[$time_local] $remote_addr';
    access_log /var/log/nginx/access.log main;

    map $ssl_preread_server_name $name {
        app1.domain.com server1;
        app2.domain.com server1;
        app3.domain.com server2;
        app4.domain.com server2;
    }

    upstream server1 {
        server server1:80;
    }
    upstream server2 {
        server server2:80;
    }
    upstream server1_ssl {
        server server1:443;
    }
    upstream server2_ssl {
        server server2:443;
    }

    server {
        listen 80;
        proxy_pass $name;
        ssl_preread on;
    }

    server {
        listen 443;
        proxy_pass "${name}_ssl";
        ssl_preread on;
    }
}
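Note that on the port 80 listener there is no TLS handshake for ssl_preread to inspect, so $ssl_preread_server_name is presumably always empty there, which leaves proxy_pass $name with nothing to resolve. Giving the map a default at least avoids the empty value, roughly as below, though a default can only pick one backend for all plain-HTTP traffic, which is why the solution that follows moves port 80 into the http module and routes on $host instead:

map $ssl_preread_server_name $name {
    default         server1;    # plain HTTP (no SNI) falls back here
    app1.domain.com server1;
    app2.domain.com server1;
    app3.domain.com server2;
    app4.domain.com server2;
}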
Came up with a solution, happy to hear of better ones.
Instead of a single forwarding proxy, I created two new nginx containers, one for HTTP traffic and one for HTTPS traffic, and put them both in a single docker-compose file for easier management.
HTTP-forwarding-proxy
http {
    map $host $name {
        default                     server1;
        app3.strangedreamsinc.com   server2;
        app4.strangedreamsinc.com   server2;
    }

    upstream server1 {
        server server1_ip:8080;
    }
    upstream server2 {
        server server2_ip:8080;
    }

    server {
        listen 80 default_server;
        server_name _;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://$name;
        }
    }
}
HTTPS-forwarding-proxy
stream {
    map $ssl_preread_server_name $name {
        default                     server1;
        app1.strangedreamsinc.com   server1;
        app2.strangedreamsinc.com   server1;
        app3.strangedreamsinc.com   server2;
        app4.strangedreamsinc.com   server2;
    }

    upstream server1 {
        server server1_ip:8443;
    }
    upstream server2 {
        server server2_ip:8443;
    }

    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
    }
}
I'm not convinced there isn't a better way and there's probably something I'm overlooking, but this allows me to transparently route traffic to the correct reverse-proxy and still supports the letsencrypt protocols to apply SSL to my servers.
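For reference, the same split does not strictly require two containers: http {} and stream {} are both top-level contexts, so a single nginx.conf can carry the $host-based HTTP proxy and the SNI-based HTTPS passthrough side by side, as long as they listen on different ports. A rough sketch under the same naming assumptions as above:

stream {
    map $ssl_preread_server_name $name {
        default                     server1_ssl;
        app3.strangedreamsinc.com   server2_ssl;
        app4.strangedreamsinc.com   server2_ssl;
    }

    upstream server1_ssl { server server1_ip:8443; }
    upstream server2_ssl { server server2_ip:8443; }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $name;           # HTTPS passed through untouched
    }
}

http {
    # Same idea as the HTTP-forwarding-proxy above: map $host, upstreams on :8080, listen 80.
    ...
}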
I'm a bit new to using nginx so I'm likely missing something obvious. I'm trying to create an nginx server that will reverse proxy to a set of web servers that use https.
I've been able to get it to work with one server like this:
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://<server1>.herokuapp.com;
    }
}
However, as soon I try to add in the 'upstream' configuration element it no longer works.
upstream backend {
    server <server1>.herokuapp.com;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
    }
}
I've tried adding in 443, but that also fails.
upstream backend {
    server <server1>.herokuapp.com:443;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
    }
}
Any ideas what I'm doing wrong here?
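This looks like the same Host/SNI problem as the Azure question above: with an upstream block, nginx derives "backend" for the Host header (and for the SNI name once proxy_ssl_server_name is on), and Heroku routes requests by Host, so it cannot find the app. A sketch of what would likely be needed, keeping the question's placeholders:

upstream backend {
    server <server1>.herokuapp.com:443;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
        proxy_set_header Host <server1>.herokuapp.com;   # Heroku routes on the Host header
        proxy_ssl_server_name on;                        # send SNI ...
        proxy_ssl_name <server1>.herokuapp.com;          # ... with the real hostname, not "backend"
    }
}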
I am using an Nginx reverse proxy with Kubernetes services. The config is the following:
events {
}

http {
    upstream my-service-3000 {
        server my-service:3000;
    }

    server {
        listen 443 ssl;
        server_name myserver.net;
        ssl_certificate /key.pem;
        ssl_certificate_key /key.pem;

        location / {
            allow myIP;
            deny all;
            proxy_pass http://my-service-3000;
        }
    }

    server {
        ...
    }
}
It works fine (doing the reverse proxy, terminating SSL, changing the port, finding the Kubernetes service) until the moment I try to whitelist only my IP. When I try to access the service via https, I get a 403 from Nginx. I've tried moving the allow/deny commands around, but it does not help. Any suggestions as to where the problem could be?
Also, I am behind a proxy myself, so I am using my external organisation IP.
The whitelisting should be under the http directive, not under the location directive.
http {
    allow MyIp;
    deny all;

    upstream my-service-3000 {
        server my-service:3000;
    }

    server {
        listen 443 ssl;
        server_name myserver.net;
        ssl_certificate /key.pem;
        ssl_certificate_key /key.pem;

        location / {
            proxy_pass http://my-service-3000;
        }
    }

    server {
        ...
    }
}
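One more thing hinted at in the question: when the client traffic itself arrives through another proxy (or through a Kubernetes load balancer that rewrites the source address), nginx may be comparing allow/deny against that intermediary's IP rather than the expected organisation IP. If that intermediary sets X-Forwarded-For, the realip module can be told to trust it; a sketch, where PROXY_IP stands for the trusted intermediary's address and is an assumption:

http {
    upstream my-service-3000 {
        server my-service:3000;
    }

    server {
        listen 443 ssl;
        server_name myserver.net;
        ssl_certificate /key.pem;
        ssl_certificate_key /key.pem;

        # Trust X-Forwarded-For only from the known intermediary (PROXY_IP is a placeholder).
        set_real_ip_from PROXY_IP;
        real_ip_header X-Forwarded-For;

        location / {
            allow myIP;
            deny all;
            proxy_pass http://my-service-3000;
        }
    }
}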
Why is nginx placing the upstream name in the redirected URL?
This is my nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream servs {
        server facebook.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://servs;
        }
    }
}
When I access port 80, I get:
This site can’t be reached
servs.facebook.com’s server DNS address could not be found.
Why is it placing "servs." before facebook.com?
You are not setting the Host header in the upstream request, so nginx constructs a value from the proxy_pass directive. As you are using an upstream block, this value is the name of the upstream block, rather than the name of the server you are trying to access.
If you are using an upstream block, it may be advisable to set the Host header explicitly:
proxy_set_header Host example.com;
See this document for more.
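Applied to the config in the question, a minimal sketch of that suggestion would look like this (facebook.com is kept only because it is the upstream used above):

http {
    upstream servs {
        server facebook.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://servs;
            # Send the real hostname upstream instead of the upstream block's name.
            proxy_set_header Host facebook.com;
        }
    }
}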