NGINX timeout on proxy_pass because of DNS change - nginx

So I have the following proxy_pass:
server {
    listen 443 ssl;
    server_name test.test.com;

    location /api/testapi/v1/users {
        proxy_connect_timeout 120;
        proxy_pass https://api.test.com/testapi/v1/users;
    }
}
I've noticed that, out of the blue, I'd get 503 or 504 timeouts on the service I'm proxying to. I suspect it's because the IP address of api.test.com is being switched, since restarting my NGINX brings everything back to normal.
What's the proper way of having either a 0 TTL or some way to resolve the name every time the proxy_pass is done, since I wouldn't know if the IP changed?
I did notice you can do this:
resolver 10.0.0.2 valid=10s;

server {
    location / {
        set $backend_servers backends.example.com;
        proxy_pass http://$backend_servers:8080;
    }
}
However, will it work if I don't put a resolver in there? I just want nginx to use my system's default resolver without specifying one.
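For what it's worth, nginx performs runtime DNS lookups only through an explicitly configured resolver directive; it does not fall back to /etc/resolv.conf for variable-based proxy_pass, so the resolver line is required. A minimal sketch of the original server block adapted this way (the resolver address 8.8.8.8 is a placeholder, substitute your own DNS server):

resolver 8.8.8.8 valid=10s;   # placeholder DNS server; use your own

server {
    listen 443 ssl;
    server_name test.test.com;

    location /api/testapi/v1/users {
        proxy_connect_timeout 120;
        # Using a variable forces nginx to re-resolve the name at request
        # time instead of caching the IP resolved at startup.
        set $api_backend api.test.com;
        proxy_pass https://$api_backend/testapi/v1/users;
    }
}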

Related

How to configure nginx to expose multiple services on Jelastic?

Through Jelastic's dashboard, I created this:
I just clicked "New environment", then I selected nodejs. I added a docker image (of mailhog).
Now, I would like port 80 of my environment to serve the nodejs application. This is the default behaviour, so there is nothing to do there.
In addition, I would like port 8080 (or any other port than 80, like port 5000 for example) of my environment to serve mailhog, hosted on the docker image. To do that, I added the following lines to nginx-jelastic.conf (right after the first server serving the nodejs app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;

    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/;
    keepalive 100;
}
If I now browse my environment's 8080 port, then I see ... the nodejs app. If I try any other port than 80 or 8080, I see nothing. Putting another server_name doesn't help. I tried several things but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;

    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the IP of the nodejs' app with that of my mailhog service, then mailhog runs on port 80. I don't understand how I can make the nodejs app run on port 80 and the mailhog service on port 5000 (or any other port than 80).
Could someone enlighten me please?
After all those failures, I tried another ansatz. Assume my env's URL is example.com. What I tried above was to get mailhog to work upon calling example.com:5000, which I failed to do. Then I tried to make mailhog available through a call to example.com/mailhog. In order to do that, I got rid of all my modifications above and completed the existing server block in nginx-jelastic.conf with
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, I get something on the page, but not exactly what I want: it's mailhog's page without any styling. Also, when I call mailhog's API through example.com/mailhog/api/v2/messages, I get a successful response without a body, when I should've received
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put together the following manifest that exhibits the second problem with the nginx location.
The full locations list for your case is the following (please pay attention to the URIs in the upstreams, they are different):
location /mailhog {
    proxy_pass http://172.25.2.128:8025/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /css {
    proxy_pass http://172.25.2.128:8025;
}

location /js {
    proxy_pass http://172.25.2.128:8025;
}

location /images {
    proxy_pass http://172.25.2.128:8025;
}
That works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
endpoints - maps the container's internal port to a random external one via the Jelastic Shared LB
Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".

Nginx proxy_next_upstream with different URI modification

We need to set up multiple upstream servers and use proxy_next_upstream to fall back to a backup if the main server returns 404. However, the URI for the backup upstream server is different from the one for the main server, so I don't know whether this is possible.
In detail, the config snippet below works fine (if the URIs are the same for all upstream servers):
upstream upstream-proj-a {
    server server1.test.com;
    server server2.test.com backup;
}

server {
    listen 80;
    listen [::]:80;
    server_name www.test.com;

    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
For a request to http://test.com/proj/proj-a/file, it will first try http://server1.test.com/lib/proj/proj-a/file; if that returns 404 or times out, it will then try http://server2.test.com/lib/proj/proj-a/file. This is good.
However, now server2 can only accept URLs like http://server2.test.com/lib/proj/proj-a-internal/file, which is a different URI than the one for the main server. Considering only the backup server, I could write something like:
proxy_pass http://server2.test.com/lib/proj/proj-a-internal;
However, it looks like I cannot have a different proxy_pass per upstream server in combination with proxy_next_upstream.
How can I achieve this?
I found a workaround using a simple proxy_pass: set localhost as the backup upstream server, then do the rewrite on behalf of the real backup upstream server.
The config is like below:
upstream upstream-proj-a {
    server server1.test.com:9991;
    # Use localhost as backup
    server localhost backup;
}

server {
    listen 80;
    listen [::]:80;
    resolver 127.0.1.1;
    server_name www.test.com;

    location /lib/proj/proj-a {
        # Do rewrite, then proxy_pass to the real backup upstream server
        rewrite /lib/proj/proj-a/(.*) /lib/proj/proj-a-internal/$1 break;
        proxy_pass http://server2.test.com:9992;
    }

    location /proj/proj-a {
        proxy_next_upstream error timeout http_404;
        proxy_pass http://upstream-proj-a/lib/proj/proj-a;
    }
}
It works fine; the only side effect is that when a request needs to go to the backup server, it creates another HTTP request from localhost to localhost, which seems to double the load on nginx. The goal is to transfer quite big files, and I am not sure whether this impacts performance, especially if all the protocols are https instead of http.
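For reference, a commonly suggested alternative that avoids the loopback hop is an error_page-based fallback to a named location, where each target can get its own URI. A sketch under the same hosts and ports as above (untested):

location /proj/proj-a {
    proxy_pass http://server1.test.com:9991/lib/proj/proj-a;
    # let nginx handle the backend's 404 instead of passing it through
    proxy_intercept_errors on;
    error_page 404 502 504 = @proj_a_backup;
}

location @proj_a_backup {
    # the backup server expects the -internal URI
    rewrite ^/proj/proj-a/(.*)$ /lib/proj/proj-a-internal/$1 break;
    proxy_pass http://server2.test.com:9992;
}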

Nginx: Listen to Specific host's specific port and do proxy_pass for that host:port only

I've been hitting a wall for 3 days on this. Allow me to explain the matter:
We have a domain named demo1.example.com. We want demo1.example.com:90 to proxy_pass to 123.123.123.123:90, but not any other vhost on the server, like demo2.example.com.
What I mean is that this port should only work for that vhost; if someone tries to access demo2.example.com:90, it should not work. Currently, it does the proxy_pass for any vhost on port 90.
I hope I have explained the situation and that there is an actual solution for this.
Here's my current code:
server {
    listen ip:80;
    server_name subdomain.url.here;
    # ... and other normal server stuff for port 80
}

server {
    listen ip:90;

    location / {
        proxy_pass http://123.123.123.123:90;
        proxy_set_header Host $host:$server_port;
    }
}
I will really appreciate any help.
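A sketch of the usual approach (hedged, since the rest of the config isn't shown): give the :90 server an explicit server_name and add a catch-all default_server on the same port that rejects every other host:

server {
    listen ip:90;
    server_name demo1.example.com;

    location / {
        proxy_pass http://123.123.123.123:90;
        proxy_set_header Host $host:$server_port;
    }
}

# catch-all for any other vhost hitting port 90
server {
    listen ip:90 default_server;
    server_name _;
    return 444;   # nginx-specific code: close the connection without replying
}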

nginx host not found in upstream

I tried to follow the instructions here, but I still can't seem to get it to work. As in that question, I don't expect some of the servers to be running when nginx starts. I would actually prefer that a local 404 page be returned instead if the server is not running.
upstream main {
    server my_main:8080;
}

server {
    listen 80;
    server_name www.my_site.com my_site.com;

    location / {
        resolver 8.8.8.8 valid=30s;
        set $upstream_main main;
        proxy_pass http://$upstream_main;
    }
}
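A hedged sketch of the workaround usually suggested for this: drop the upstream block entirely so the hostname sits in a variable and is resolved at request time, and map backend failures to a local 404 page. The resolver address 127.0.0.11 (Docker's embedded DNS) and the /offline.html path are assumptions:

server {
    listen 80;
    server_name www.my_site.com my_site.com;

    location / {
        resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS; adjust as needed
        set $upstream_main my_main:8080; # variable defers resolution to request time
        proxy_pass http://$upstream_main;
        error_page 502 504 =404 /offline.html;
    }

    location = /offline.html {
        internal;
        root /usr/share/nginx/html;      # assumed location of the local 404 page
    }
}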

Nginx load balancer keeps changing original URL to load-balanced URL

I have run into an annoying issue with an Nginx load balancer; please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;

        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }

    server {
        listen 7777;
        server_name localhost;

        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }

    upstream node {
        server localhost:3000;
        server localhost:3001;
    }

    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
So what I want is to provide two load balancers: one sends port 3333 to internal ports 3000 and 3001, and the second sends requests on port 7777 to internal ports 8079 and 8080.
When I test this setup, all requests to http://localhost:3333 work great, and the URL in the address bar always stays that one; but when I visit http://localhost:7777, all requests are redirected to the internal URLs, http://localhost:8080 or http://localhost:8079.
I don't know why the two load balancers behave differently. I just want visitors to only ever see http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
But why do the node servers on ports 3000 and 3001 work fine, while the Java server on ports 8079 and 8080 responds with redirects instead of having its URLs rewritten?
As you can see, the two configurations are exactly the same.
Thanks.
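A plausible explanation, hedged since the Java app isn't shown: the Java server is issuing absolute redirects (Location: http://localhost:8079/...), and proxy_redirect off tells nginx not to rewrite those Location headers, so the browser follows them straight to the internal ports. A sketch that lets nginx map them back to the public port:

server {
    listen 7777;
    server_name localhost;

    location / {
        proxy_pass http://auth;
        # rewrite absolute Location headers from the backends back
        # to the externally visible address
        proxy_redirect http://localhost:8079/ http://$host:7777/;
        proxy_redirect http://localhost:8080/ http://$host:7777/;
    }
}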
