nginx isn't forwarding to static server

I have two servers running in the background, and I would like nginx to reverse proxy to both of them.
I want nginx to listen on port 80: when a user navigates to http://localhost:80/, the request should be forwarded to http://localhost:3501. However, I am still seeing the default nginx page at http://localhost:80. I have nginx installed on my localhost and am testing from the same box.
server {
    listen 80;
    server_name _;

    location ^~/api/* {
        proxy_pass http://localhost:8000;
    }
    location ^~/* {
        proxy_pass http://localhost:3501;
    }
}

Add an upstream block:

upstream backend-testserver {
    server 127.0.0.1:3501 weight=1 max_fails=2 fail_timeout=30s; # server 1
    server 127.0.0.1:3502 weight=1 max_fails=2 fail_timeout=30s; # server 2
}

Then add proxy_pass in "server -> location":

location / {
    root html;
    index index.html index.htm;
    proxy_pass http://backend-testserver;
}
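Put together, a minimal version of that advice might look like this (ports taken from the question; sending /api/ requests directly to port 8000 rather than through the upstream is an assumption):

```nginx
upstream backend-testserver {
    server 127.0.0.1:3501 weight=1 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:3502 weight=1 max_fails=2 fail_timeout=30s;
}

server {
    listen 80;
    server_name _;

    # longest-prefix match: /api/ requests go to the API backend
    location /api/ {
        proxy_pass http://localhost:8000;
    }

    # everything else goes to the static-server upstream
    location / {
        proxy_pass http://backend-testserver;
    }
}
```

If the default nginx page still appears, check for the distribution's default server block (often /etc/nginx/sites-enabled/default), which usually wins for unmatched requests until it is removed or overridden.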

Related

Nginx proxy pass directive: Invalid port in upstream error

I am doing load balancing with Nginx. Here is my config
upstream web_backend {
    least_conn;
    server localhost:8001 max_fails=3 fail_timeout=60s;
    server localhost:8002 max_fails=3 fail_timeout=60s;
}
server {
    listen 8545;
    server_name _;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}
server {
    listen 8001;
    server_name localhost;
    location / {
        proxy_pass https://some_other_url/v3/cxnmcdinwrtyf93vcwdiyfx8q6xqwxv9qg7c93fgcb;
    }
}
server {
    listen 8002;
    server_name localhost;
    location / {
        proxy_pass 'https://chipdunk-dude:gorgeous-serpents-clubbed-orphans#nd-657-555-555-777.dogify.com';
    }
}
As you can see, the URL at port 8002 is odd (I don't even know what this kind of URL is called). Because it contains a ":" in the host part, Nginx gives me this error:
nginx: [emerg] invalid port in upstream "chipdunk-dude:gorgeous-serpents-clubbed-orphans#nd-657-555-555-777.dogify.com" in /etc/nginx/sites-enabled/default:60
The URL at port 8001 works fine.
Everything before the # is userinfo, which a browser would encode and send as a separate request header according to RFC 7617.
Nginx is not a browser and will not do that for you.
You could Base64-encode the credentials and use proxy_set_header to set the Authorization header yourself.
For example:
proxy_set_header Authorization "Basic Y2hpcGR1bmstZHVkZTpnb3JnZW91cy1zZXJwZW50cy1jbHViYmVkLW9ycGhhbnM=";
proxy_pass https://nd-657...;
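That Base64 token is just the userinfo from the question's URL, encoded; for instance:

```python
import base64

# userinfo portion of the URL from the question
userinfo = "chipdunk-dude:gorgeous-serpents-clubbed-orphans"

# value to place after "Basic " in the Authorization header
token = base64.b64encode(userinfo.encode()).decode()
print(token)  # Y2hpcGR1bmstZHVkZTpnb3JnZW91cy1zZXJwZW50cy1jbHViYmVkLW9ycGhhbnM=
```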

Is there a way to add a response header containing proxied server name on nginx?

We are trying to figure out how to add the chosen upstream server to the response headers.
For now we use $upstream_addr to get the IP address and port, and it works, but is there a way to get the server hostname instead (just as declared in the 'upstream' block)?
Here is our (simplified) nginx configuration:
upstream my_upstream {
    ip_hash;
    server production001 max_fails=2 fail_timeout=15s;
    server production002 max_fails=2 fail_timeout=15s;
}
server {
    listen 80;
    server_name domain.com;
    location / {
        proxy_pass http://my_upstream;
        add_header X-Upstream $upstream_addr always;
    }
}
This produces the following response header: "x-upstream: XX.XX.XX.XX:XXXX"
What we would like to get instead: "x-upstream: production001"
If you know the IP addresses of the upstream servers, you could use a map:
upstream my_upstream {
    ip_hash;
    server prod1:80;
    server prod2:80;
    server prod3:80;
}
map $upstream_addr $upstream_name {
    ~192\.168\.1\.1:80$ production1;
    ~192\.168\.1\.2:80$ production2;
    ~192\.168\.1\.3:80$ production3;
    default $upstream_addr;
}
server {
    listen 80;
    server_name domain.com;
    location / {
        proxy_pass http://my_upstream;
        add_header X-Upstream $upstream_name always;
    }
}
You need the regex in the map because NGINX may try more than one server: $upstream_addr then contains all of the addresses tried, separated by commas, and the last one in the list is the server that actually responded.
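To see why matching on the end of $upstream_addr matters, here is a small Python sketch of the lookup (addresses hypothetical, mirroring the map above):

```python
import re

# After a failed attempt, $upstream_addr lists every server tried,
# comma-separated, e.g. "192.168.1.1:80, 192.168.1.3:80".
# Anchoring each pattern to the end of the string picks out the
# server that actually produced the response.
PATTERNS = [
    (r"192\.168\.1\.1:80$", "production1"),
    (r"192\.168\.1\.2:80$", "production2"),
    (r"192\.168\.1\.3:80$", "production3"),
]

def upstream_name(upstream_addr: str) -> str:
    for pattern, name in PATTERNS:
        if re.search(pattern, upstream_addr):
            return name
    return upstream_addr  # the map's default branch

print(upstream_name("192.168.1.2:80"))                  # production2
print(upstream_name("192.168.1.1:80, 192.168.1.3:80"))  # production3
```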

NGINX location directive in stream

I've installed Nginx on one of my servers in order to be used as a load balancer for my Rancher application.
I based my configuration on the one found here: https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/
And so my config is:
load_module /usr/lib/nginx/modules/ngx_stream_module.so;
worker_processes 4;
worker_rlimit_nofile 40000;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server <ipnode1>:443 max_fails=3 fail_timeout=5s;
        server <ipnode2>:443 max_fails=3 fail_timeout=5s;
        server <ipnode3>:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
My configuration is working as expected, but I've recently installed Nextcloud on my cluster, which is giving me the following errors:
Your web server is not properly set up to resolve “/.well-known/caldav”. Further information can be found in the
documentation.
Your web server is not properly set up to resolve “/.well-known/carddav”. Further information can be found in the
documentation.
So I would like to add a "location" directive, but I'm not able to do it.
I tried to update my config as follows:
...
stream {
    upstream rancher_servers_http {
        ...
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
    upstream rancher_servers_https {
        ...
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But it's telling me
"location" directive is not allowed here in /etc/nginx/nginx.conf:21
Assuming the location directive is not allowed in a stream configuration, I tried to add an http block like this:
...
stream {
    ...
}
http {
    server {
        listen 443;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
    server {
        listen 80;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But then I got this message:
bind() to 0.0.0.0:443 failed (98: Address already in use)
(same for port 80).
Can someone help me with this? How can I add the location directive without affecting my current configuration?
Thank you for reading.
Edit
It seems that the stream block prevents me from adding other standard directives. I tried to add client_max_body_size inside server, but I get the same issue:
directive is not allowed here
Right now your setup uses nginx as a TCP proxy. In this mode nginx passes traffic through without analysis: it can be ssh, rdp, or any other traffic, and it will work regardless of protocol because nginx does not inspect the stream's content.
That is why the location directive does not work in the stream context: it is an HTTP-specific feature.
To take advantage of application-level routing, nginx needs to understand the protocol passing through it, i.e. it must be configured as an HTTP reverse proxy.
For that to work, the server blocks should be placed in the http scope instead of the stream scope.
http {
    server {
        listen 0.0.0.0:443 ssl;
        include /etc/nginx/snippets/letsencrypt.conf;
        root /var/www/html;
        server_name XXXX;
        location / {
            proxy_pass http://rancher_servers_http;
        }
        location /.well-known/carddav {
            proxy_pass http://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            proxy_pass http://$host:$server_port/remote.php/dav;
        }
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        location ^~ /.well-known/acme-challenge/ {
            default_type "text/plain";
            root /var/www/letsencrypt;
        }
        root /var/www/html;
        server_name xxxx;
        location / {
            proxy_pass http://rancher_servers_http;
        }
    }
}
The drawback of this approach is that you will need to reconfigure your certificate management.
In return, you offload SSL termination to nginx and gain intelligent balancing based on HTTP requests.
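One detail worth noting: upstreams declared inside the stream block are not visible to http-context server blocks, so the upstream would need to be redeclared under http as well. A minimal sketch, reusing the names and placeholders from the question:

```nginx
http {
    # redeclared here: the copy in the stream block
    # cannot be referenced from http-context servers
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }

    # ... server blocks as in the answer ...
}
```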

nginx configuration: one location with different server IP

I have the main app running on a server with IP 127.0.0.1, and the domain is http://myexample.com.
I want to add another service to the app under the URL http://myexample.com/service.
However, the service runs on another server, with IP 127.0.0.2 and port 5000.
How can I make the nginx configuration work in this situation?
Here is what I have tried:
server {
    listen 80;
    server_name myexample.com;
    location / {
        proxy_pass http://127.0.0.1:3002;
        client_max_body_size 100m;
    }
    location /service {
        proxy_pass http://127.0.0.2:5000;
    }
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
When I open myexample.com/service, it returns a 404 or 500.
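No answer is recorded here, but one common cause of a 404 in this setup is that the /service prefix is forwarded to the backend verbatim, and the backend has no route for it. If the service expects requests at its root (an assumption), adding a trailing slash to both the location and the proxy_pass URL makes nginx strip the prefix:

```nginx
location /service/ {
    # with a URI part on proxy_pass, nginx replaces the matched
    # /service/ prefix, so the backend sees /foo instead of /service/foo
    proxy_pass http://127.0.0.2:5000/;
}
```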

Nginx Reverse Proxy with Multiple Backend Domains

I have 2 servers:
Server 1: NGINX reverse proxy.
Server 2: NGINX with 5-6 websites (different domains).
Basically, all users hit Server 1, which will proxy_pass the traffic to Server 2 and return the response. Server 1 will also do caching, WAF, etc.
Here is my configuration for Server 1:
server {
    listen 80;
    server_name example.com www.example.com;
    location ~* {
        proxy_pass http://mysite:80;
    }
}
server {
    listen 80;
    server_name server.com www.server.com;
    location ~* {
        proxy_pass http://mysite:80;
    }
}
In Server 2's NGINX virtual.conf, I have the following config:

index index.php index.html;
server {
    listen 80;
    server_name example.com www.example.com;
    location / {
        root /var/www/websites/example/;
        include location-php;
    }
}
server {
    listen 80;
    server_name server.com www.server.com;
    location / {
        root /var/www/websites/server/;
        include location-php;
    }
}
However, whenever I go to http://example.com or http://server.com (directed via Server 1 acting as reverse proxy), it shows Server 2's default NGINX page. I am not sure what I am doing wrong. Also, is this type of setup a proper way of doing things?
This is a Host header problem.
Because your upstream name is mysite, the Host header in the upstream request is mysite too.
So the Host doesn't match any server_name on the backend servers.
You can solve the problem by adding this directive before proxy_pass:
proxy_set_header Host server.com;
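Rather than hard-coding a domain per server block, forwarding the client's original Host header covers all the sites at once; a sketch based on the question's config:

```nginx
server {
    listen 80;
    server_name example.com www.example.com server.com www.server.com;

    location / {
        # pass along the Host the client asked for, so Server 2's
        # server_name matching picks the right site
        proxy_set_header Host $host;
        proxy_pass http://mysite:80;
    }
}
```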
