Nginx proxy_next_upstream with different URI modification - nginx

We need to set up multiple upstream servers and use proxy_next_upstream to fall back to a backup if the main server returns 404. However, the URI for the backup upstream server differs from the one for the main server, so I don't know whether this is possible.
In detail, the config snippet below works fine (when the URI is the same for all upstream servers):
upstream upstream-proj-a {
    server server1.test.com;
    server server2.test.com backup;
}
server {
    listen 80;
    listen [::]:80;
    server_name www.test.com;
    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
For a request to http://test.com/proj/proj-a/file, it will first try http://server1.test.com/lib/proj/proj-a/file; if that returns 404 or times out, it will then try http://server2.test.com/lib/proj/proj-a/file. This is good.
However, server2 can only accept URLs like http://server2.test.com/lib/proj/proj-a-internal/file, which is a different URI than the one for the main server. If I only had to consider the backup server, I could write:
proxy_pass http://server2.test.com/lib/proj/proj-a-internal;
However, it looks like I cannot have a different proxy_pass per upstream server when combining it with proxy_next_upstream.
How can I achieve this?

I found a workaround using a plain proxy_pass: set localhost as the backup upstream server, then rewrite there on behalf of the real backup upstream server.
The config is like below:
upstream upstream-proj-a {
    server server1.test.com:9991;
    # Use localhost as backup
    server localhost backup;
}
server {
    listen 80;
    listen [::]:80;
    resolver 127.0.1.1;
    server_name www.test.com;
    location /lib/proj/proj-a {
        # Do rewrite then proxy_pass to real upstream server
        rewrite /lib/proj/proj-a/(.*) /lib/proj/proj-a-internal/$1 break;
        proxy_pass http://server2.test.com:9992;
    }
    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
It works fine; the only side effect is that when a request needs to go to the backup server, it creates a second HTTP request from localhost to localhost, which seems to double the load on nginx. The goal is to transfer quite large files, and I am not sure whether this impacts performance, especially if all the protocols are https instead of http.
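A possible way to avoid the extra localhost hop entirely (a sketch not taken from the setup above, reusing the same hostnames) is to drop proxy_next_upstream and instead intercept the 404 with error_page and a named location, which can proxy to the backup with its own URI:

location /proj/proj-a {
    proxy_pass http://server1.test.com/lib/proj/proj-a;
    # Let error_page also handle 404s coming back from the upstream
    proxy_intercept_errors on;
    error_page 404 502 504 = @proj-a-backup;
}
location @proj-a-backup {
    # Map the original URI onto the backup server's internal path
    rewrite ^/proj/proj-a/(.*)$ /lib/proj/proj-a-internal/$1 break;
    proxy_pass http://server2.test.com;
}

This stays within a single request, so there is no second nginx round-trip; the trade-off is that failures are handled via error_page status codes rather than proxy_next_upstream.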

Related

NGINX Timeout on proxy_pass timeout because of DNS change

So I have the following proxy_pass:
server {
    listen 443 ssl;
    server_name test.test.com;
    location /api/testapi/v1/users {
        proxy_connect_timeout 120;
        proxy_pass https://api.test.com/testapi/v1/users;
    }
}
I've noticed that, out of the blue, I get 503 or 504 timeouts on the service I'm proxying to. I suspect it is because the IP address of api.test.com has changed, since restarting my NGINX brings everything back to normal.
What's the proper way to get a 0 TTL, or some way to re-resolve the name every time proxy_pass is used, since I won't know when the IP changes?
I did notice you can do this:
resolver 10.0.0.2 valid=10s;
server {
    location / {
        set $backend_servers backends.example.com;
        proxy_pass http://$backend_servers:8080;
    }
}
However, will it work if I don't put a resolver in there? I just want to use whatever my default resolver is without specifying resolver.
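As far as I can tell, it will not: when proxy_pass contains a variable, nginx resolves the hostname at request time using only the resolver directive; it does not fall back to the system resolver from /etc/resolv.conf (that is used only for names resolved once at startup). So a resolver must be specified, e.g.:

resolver 8.8.8.8 valid=10s;   # replace with your own DNS server's address
server {
    location / {
        set $backend_servers backends.example.com;
        proxy_pass http://$backend_servers:8080;
    }
}

The valid= parameter overrides the DNS record's TTL; omit it to honor the TTL the record itself carries.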

NGINX Configuration

I am new to NGINX and I am trying to load balance our ERP web servers.
I have 3 web servers running on port 80, powered by WebSphere, which are a black box to me:
* web01.example.com/path/apphtml
* web02.example.com/path/apphtml
* web03.example.com/path/apphtml
NGINX is listening for the virtual URL ourerp.example.com and proxying it to the cluster.
Here is my config:
upstream myCluster {
    ip_hash;
    server web01.example.com:80;
    server web02.example.com:80;
    server web03.example.com:80;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name ourerp.example.com;
    location / {
        rewrite ^(.*)$ /path/apphtml break;
        proxy_pass http://myCluster;
    }
}
When I only use proxy_pass, NGINX load balances but forwards the request to web01.example.com rather than web01.example.com/path/apphtml.
When I try adding a URL rewrite, it simply rewrites the virtual URL and I end up with ourerp.example.com/path/apphtml.
Is it possible to do the URL rewrite, or append the path to the app, at the upstream level?
If you are trying to map / to /path/apphtml/ through the proxy, use:
proxy_pass http://myCluster/path/apphtml/;
See this document for more.
The problem with your rewrite statement is a missing $1 at the end of the replacement string. See this document for more, but as I indicated above, you do not need the rewrite statement: proxy_pass is capable of doing the same job anyway.
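For completeness, if the rewrite were kept, a corrected version (a sketch of the $1 fix mentioned above) would look like:

location / {
    # Capture the original URI and append it after the path prefix
    rewrite ^(.*)$ /path/apphtml$1 break;
    proxy_pass http://myCluster;
}

With break, proxy_pass then forwards the rewritten URI to the upstream group.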

nginx host not found in upstream

I tried to follow the instructions here, but I still can't seem to get it to work. As in that question, I don't expect some of the servers to be running when nginx starts. I would actually prefer that a local 404 page be returned if the server is not running.
upstream main {
    server my_main:8080;
}
server {
    listen 80;
    server_name www.my_site.com my_site.com;
    location / {
        resolver 8.8.8.8 valid=30s;
        set $upstream_main main;
        proxy_pass http://$upstream_main;
    }
}
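One common way to let nginx start (and keep serving) when a backend host is down or unresolvable is to avoid the upstream block altogether and put the full hostname in the variable, so DNS resolution is deferred to request time; a local error page can then cover the case where the backend is unreachable. A sketch, assuming my_main is resolvable by the configured resolver and that /usr/share/nginx/html/404.html is the local page to serve:

server {
    listen 80;
    server_name www.my_site.com my_site.com;
    resolver 8.8.8.8 valid=30s;
    location / {
        # The variable defers the DNS lookup until a request arrives
        set $upstream_main my_main:8080;
        proxy_pass http://$upstream_main;
        # Serve a local page when the backend is down or unresolvable
        error_page 502 504 /404.html;
    }
    location = /404.html {
        root /usr/share/nginx/html;   # hypothetical location of the local page
    }
}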

Nginx load balancer keeps changing the original URL to the load-balanced URL

I have run into an annoying issue with the Nginx load balancer; please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;
        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }
    server {
        listen 7777;
        server_name localhost;
        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }
    upstream node {
        server localhost:3000;
        server localhost:3001;
    }
    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
What I want is to provide two load balancers: one sends requests on port 3333 to internal ports 3000 and 3001, and the second sends requests on port 7777 to internal ports 8079 and 8080.
When I test this setup, I notice that all requests to http://localhost:3333 work great, and the URL in the address bar stays that one; but when I visit http://localhost:7777, all the requests are redirected to the internal URLs, http://localhost:8080 or http://localhost:8079.
I don't know why the two load balancers behave differently. I just want visitors to see only http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
Why do the node servers on ports 3000 and 3001 work fine, while the Java server on ports 8080 and 8079 redirects instead of keeping the URL?
As you can see from the configuration, the two are exactly the same.
Thanks.
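A likely explanation (a guess based only on the config shown): the Java app issues absolute redirects such as Location: http://localhost:8080/..., and proxy_redirect off tells nginx not to rewrite those Location headers, so the browser follows them straight to the internal port; the node app presumably never sends such redirects, which is why port 3333 looks fine. A sketch of the 7777 server block with proxy_redirect mapping the internal ports back to the public one:

server {
    listen 7777;
    server_name localhost;
    location / {
        proxy_pass http://auth;
        # Rewrite Location/Refresh headers from the backends to the public port
        proxy_redirect http://localhost:8079/ http://localhost:7777/;
        proxy_redirect http://localhost:8080/ http://localhost:7777/;
    }
}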

uwsgicluster - no live upstreams while connecting to upstream client

Below is a simple nginx config for a cluster; I then turn off the 192.168.1.77:3032 server.
From time to time I get a 502 error and "no live upstreams while connecting to upstream" in the logs, while "server unix:///var/tmp/site.sock backup;" is working and, as I understand it, should handle the request, but nginx doesn't consider it live. What could be the problem?
nginx config:
upstream uwsgicluster {
    server 192.168.1.77:3032;
    server unix:///var/tmp/site.sock backup;
}
server {
    listen 80;
    server_name site.com www.site.com;
    access_log /var/log/nginx/site.log;
    error_log /var/log/nginx/site-error.log;
    location / {
        uwsgi_pass uwsgicluster;
        include uwsgi_params;
    }
}
If I remove the 192.168.1.77:3032 server from the upstream and restart nginx, it works fine; but with the 192.168.1.77:3032 server switched off, errors occur periodically.
I think nginx will still try both of the servers in the upstream block even if one isn't working. When it fails to connect to one of them, it will try the other, but it will still log the error you are seeing.
By default, the proxy_next_upstream setting tries the next upstream server on error or timeout. You can override this:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
Are you only seeing error logs, or are you also seeing undesired behavior/load-balancing?
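Note that for uwsgi_pass the corresponding directive is uwsgi_next_upstream (same syntax and defaults as proxy_next_upstream). A sketch that keeps the default error/timeout behavior but loosens the failure accounting on the primary, so it is not marked dead as quickly while it is down:

upstream uwsgicluster {
    # Allow more connection failures before the server is considered down
    server 192.168.1.77:3032 max_fails=5 fail_timeout=10s;
    server unix:///var/tmp/site.sock backup;
}
server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass uwsgicluster;
        uwsgi_next_upstream error timeout;
    }
}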
