Nginx reverse proxy causing 504 Gateway Timeout - nginx

I am using Nginx as a reverse proxy that takes requests and then does a proxy_pass to fetch the actual web application from the upstream server running on port 8001.
If I go to mywebsite.example or do a wget, I get a 504 Gateway Timeout after 60 seconds... However, if I load mywebsite.example:8001, the application loads as expected!
So something is preventing Nginx from communicating with the upstream server.
All this started after my hosting company reset the machine my stuff was running on, prior to that no issues whatsoever.
Here's my vhost's server block:
server {
    listen 80;
    server_name mywebsite.example;
    root /home/user/public_html/mywebsite.example/public;
    access_log /home/user/public_html/mywebsite.example/log/access.log upstreamlog;
    error_log /home/user/public_html/mywebsite.example/log/error.log;

    location / {
        proxy_pass http://xxx.xxx.xxx.xxx:8001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And the output from my Nginx error log:
2014/06/27 13:10:58 [error] 31406#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxx.xx.xxx.xxx, server: mywebsite.example, request: "GET / HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:8001/", host: "mywebsite.example"

You can probably add a few more lines to increase the timeout period for the upstream. The example below sets the timeouts to 300 seconds:
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
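As a minimal sketch (not a definitive fix), those directives could sit directly inside the location block from the question; all four are valid at location level:

location / {
    proxy_pass http://xxx.xxx.xxx.xxx:8001;
    # longer timeouts for connecting to, sending to and reading from the upstream
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
}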

Increasing the timeout will likely not solve your issue since, as you say, the actual target web server is responding just fine.
I had this same issue and found it had to do with not using keep-alive on the connection. I can't actually explain why, but clearing the Connection header solved the issue and the request was proxied just fine:
server {
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://localhost:5000;
    }
}
Have a look at these posts, which explain it in more detail:
nginx close upstream connection after request
Keep-alive header clarification
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
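For reference, a minimal sketch of the upstream keepalive setup those docs describe (the upstream name, address and pool size are placeholders):

upstream app_backend {
    server 127.0.0.1:5000;
    keepalive 16;   # idle keepalive connections cached per worker process
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_backend;
    }
}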

user2540984, as well as many others, has pointed out that you can try increasing your timeout settings. I myself faced a similar issue and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This, however, did not help me a single bit; there was no apparent change in NGINX's timeout settings. After many hours of searching, I finally managed to solve my issue.
The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
This might not be the solution to your particular problem, but if anyone else notices that the timeout changes in /etc/nginx/nginx.conf don't do anything, I hope this answer helps!

If you want to increase or add a time limit for all sites, you can add the lines below to the nginx.conf file.
Add the following lines to the http section of /usr/local/etc/nginx/nginx.conf or /etc/nginx/nginx.conf.
fastcgi_read_timeout 600;
proxy_read_timeout 600;
If the above lines don't already exist in the conf file, add them; otherwise increase fastcgi_read_timeout and proxy_read_timeout to make sure that nginx and php-fpm do not time out.
To increase the time limit for only one site, edit /etc/nginx/sites-available/example.com:
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}
After adding these lines, don't forget to reload php-fpm and nginx:
service php7-fpm reload
service nginx reload
Or, if you're using Valet, simply run valet restart.

You can also face this situation if your upstream server uses a domain name and its IP address changes (e.g. your upstream points to an AWS Elastic Load Balancer).
The problem is that nginx will resolve the IP address once, and keep it cached
for subsequent requests until the configuration is reloaded.
You can tell nginx to use a name server to re-resolve the domain once the cached
entry expires:
location /mylocation {
    # use Google DNS to re-resolve the host after the cached IP expires
    resolver 8.8.8.8;
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}
The docs on proxy_pass explain why this trick works:
Parameter value can contain variables. In this case, if an address is specified
as a domain name, the name is searched among the described server groups, and,
if not found, is determined using a resolver.
Kudos to "Nginx with dynamic upstreams" (tenzer.dk) for the detailed explanation, which also contains some relevant information on a caveat of this approach regarding forwarded URIs.
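The caveat in short: when proxy_pass uses a variable that includes a URI part (here the trailing /), that URI replaces the original request URI, so every request gets forwarded to /. A hedged sketch of one possible workaround, keeping the hostname in a variable but forwarding the original URI explicitly (the hostname is a placeholder):

location /mylocation {
    resolver 8.8.8.8;
    set $upstream_host your.backend.server;        # hostname only, no URI part
    proxy_pass http://$upstream_host$request_uri;  # pass the original URI and query string
}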

In nginx:
proxy_read_timeout 300;
In my case with AWS, I also edited the load balancer setting: Attributes => Idle timeout.

Had the same problem. It turned out to be caused by iptables connection tracking on the upstream server. After removing --state NEW,ESTABLISHED,RELATED from the firewall script and flushing with conntrack -F, the problem was gone.

NGINX itself may not be the root cause.
If the "minimum ports per VM instance" setting on the NAT gateway which stands between your NGINX instance and the proxy_pass destination is too small for the number of concurrent requests, it has to be increased.
Solution: increase the available number of ports per VM on the NAT gateway.
Context: in my case, on Google Cloud, a reverse-proxy NGINX was placed inside a subnet, with a NAT gateway. The NGINX instance was redirecting requests to a domain associated with our backend API (upstream) through the NAT gateway.
This documentation from GCP will help you understand how NAT is relevant to the NGINX 504 timeout.

In my case I restarted PHP and it was OK.

If nginx_ajp_module is used, try adding
ajp_read_timeout 10m;
to the nginx.conf file.


How do I fix this NGINX 502 Bad Gateway error?

I'm working on web sockets in an Angular app. I have it connect to a Python back-end through nginx. I'm finding that I'm getting 502 "Bad Gateway" errors about 90% of the time. I'll do this:
Load page in browser and connect web socket
Python back-end sends data to angular front-end
Disconnect web socket
Attempt to re-connect web socket <-- 502 Bad Gateway error
Hard-reload in Chrome
Load page in browser and connect web socket <-- No 502 error
I can't figure out why this is happening. I can't tell why I'm getting a 502 error. Nor can I figure out why doing a hard-reload fixes the problem. Things I've tried:
Increase nginx log-level to debug. Still the logs don't have any useful information.
I don't keep any web socket objects in state. I do this in case something is being cached somewhere.
I always close the web socket with close code 1000
I manually run the python service on the server so that I can watch it. When the 502 error happens, the service doesn't show anything unusual.
Setting the nginx max_fails to 0. Setting the fail_timeout to 0. Neither of these changes seems to have any effect. (I found this suggestion in other SO answers)
What should I be looking for that will help me fix this problem?
EDIT: Here's my nginx conf.d file:
server {
    listen 80;
    index index.html;
    root /var/www/mysite;

    location / {
        access_log /var/log/nginx/mysite/ui.access.log;
        error_log /var/log/nginx/mysite/ui.error.log;
        try_files $uri $uri/ /index.html;
    }

    location /ws/ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Host $proxy_host;
        proxy_pass http://WEBSOCKET/;
        access_log /var/log/nginx/mysite/ws_services.access.log;
        error_log /var/log/nginx/mysite/ws_services.error.log;
        proxy_read_timeout 300s;
    }
}

upstream WEBSOCKET {
    ip_hash;
    server 127.0.0.1:8765;
}
Not the same problem the OP had, but just in case anyone comes across this and has the same setup as I had:
I was using WebSockets over SSL (so wss:// protocol) and had 502 popping up, even though the config had worked before. The config was as follows:
...
proxy_pass http://127.0.0.1:8080;
...
In the backend I was using Node with the ws package to create a WebSocket server.
As I said: it was working before but suddenly stopped working. Additionally, nginx wrote upstream prematurely closed connection while reading response header from upstream errors into the error log. I suppose that either nginx or Node closed some kind of security hole, which led to the setup not working anymore.
What I had to do in order to make it work was to use https instead of http in the proxy_pass config:
...
proxy_pass https://127.0.0.1:8080;
...
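For context, a hedged sketch of the full WebSocket location block this change sits in (the address is a placeholder, and the upgrade headers follow the question's config):

location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    # the backend now terminates TLS itself, so proxy over https
    proxy_pass https://127.0.0.1:8080;
}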

Icecast2 running under nginx not able to connect

I want to start by saying that I've looked all over the place to find an answer to this problem, and it just seems like either nobody else has run into it or nobody is doing this. I recently installed icecast2 on my Debian server. I'm completely able to broadcast to the server from my local network by connecting to its local IP on port 8000, and to hear the stream over the internet on radio.example.com since I proxy it with nginx; so far no problems at all. The problem arises when I want to broadcast to the domain I set up with nginx, stream.example.com.
I have two theories: one is that the proxy is not passing the source IP to icecast, so it thinks the broadcast is coming from 127.0.0.1; the other is that nginx is doing something strange with the data stream and thus not delivering the correct format to icecast.
Any thoughts? Thanks in advance!
Here is the nginx config
server {
    listen 80;
    listen [::]:80;
    server_name radio.example.com;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://127.0.0.1:8000/radio;
        subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
        subs_filter ':80/' '/' gi;
        subs_filter '#localhost' '#stream.example.com' gi;
        subs_filter 'localhost' $host gi;
        subs_filter 'Mount Point ' $host gi;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name stream.example.com;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://localhost:8000/;
        subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
        subs_filter ':8000/' ':80/' gi;
        subs_filter '#localhost' '#stream.example.com' gi;
        subs_filter 'localhost' $host gi;
        subs_filter 'Mount Point ' $host gi;
    }
}
And this is what I get on icecast error.log
[2018-08-10 14:15:45] INFO source/get_next_buffer End of Stream /radio
[2018-08-10 14:15:45] INFO source/source_shutdown Source from 127.0.0.1 at "/radioitavya" exiting
Not sure how much of this is directly relevant to the OP's question, but here's a few snippets from my config.
These are the basics of my block to serve streams to clients over SSL on port 443.
In the first location block any requests with a URI of anything other than /ogg, /128, /192 or /320 are rewritten to prevent clients accessing any output from the Icecast server other than the streams themselves.
server {
    listen 443 ssl http2;
    server_name stream.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        rewrite ~*(ogg) https://stream.example.com/ogg last;
        rewrite ~*([0-1][0-5]\d) https://stream.example.com/128 last;
        rewrite ~*(?|([1][6-9]\d)|([2]\d\d)) https://stream.example.com/192 last;
        rewrite ~*([3-9]\d\d) https://stream.example.com/320 break;
        return https://stream.example.com/320;
    }

    location ~ ^/(ogg|128|192|320)$ {
        proxy_bind $remote_addr transparent;
        set $stream_url http://192.168.100.100:8900/$1;
        types { }
        default_type audio/mpeg;
        proxy_pass_request_headers on;
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_set_header Host $host;
        proxy_set_header Range bytes=0-;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering off;
        tcp_nodelay on;
        proxy_pass $stream_url;
    }
}
Setting proxy_bind with the transparent flag:
allows outgoing connections to a proxied server originate from a
non-local IP address, for example, from a real IP address of a client
This addresses the issue of local IP addresses appearing in your logs/stats instead of client IPs. For this to work you also need to reconfigure your kernel routing tables to capture the responses sent from the upstream server and route them back to Nginx.
This requires root access and a reasonable understanding of Linux networking configuration, which I appreciate not everyone has. I also appreciate not everyone who uses Icecast and might want to reverse proxy will read this. A much better solution would be making Icecast more Nginx friendly, so I had a go.
I cloned Icecast from github and had a look over the code. I've maybe missed some but these lines looked relevant to me:
./src/logging.c:159: client->con->ip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(client->con->ip));
For servers which do not support the PROXY protocol the Nginx default method of passing the client IP upstream is via the X-Real-IP header. Icecast seems to be using the value of client->con->ip for logging listener IPs. Let's change things up a bit. I added this:
const char *realip;
realip = httpp_getvar (client->parser, "x-real-ip");
if (realip == NULL)
    realip = client->con->ip;
And changed the previous lines to this:
./src/logging.c:163: realip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(realip));
Then I built Icecast from source as per the docs. The proxy_set_header X-Real-IP $remote_addr; directive in my Nginx conf passes the client IP. If you have additional upstream servers also handling the request, you will need to add some set_real_ip_from directives specifying each IP, set real_ip_recursive on;, and use $proxy_add_x_forwarded_for, which will capture the IP address of each server that handles the request.
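As a hedged sketch of that multi-proxy case (the trusted front-proxy addresses are hypothetical), the realip directives sit alongside the proxy headers:

# trust X-Forwarded-For only from these front proxies (hypothetical addresses)
set_real_ip_from 10.0.0.1;
set_real_ip_from 10.0.0.2;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;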
Fired up my new Icecast build and this seems to work perfectly. If the X-Real-IP header is set, Icecast logs it as the listener IP, and if not it logs the client request IP, so it should work for both reverse-proxy and normal setups. Seems too simple, maybe I missed something #TBR?
OK, so you should now have working listener streams served over SSL with correct stats/logs. You have done the hard bit. Now let's stream something to them!
Since the addition of the stream module to Nginx, handling incoming connections is simple regardless of whether or not they use PUT/SOURCE.
If you specify a server within a stream directive, Nginx will simply tunnel the incoming stream to the upstream server without inspecting or modifying the packets. Nginx stream config lesson 101 is all you need:
stream {
    server {
        listen pub.lic.ip:port;
        proxy_pass ice.cast.ip:port;
    }
}
I guess one problem unsuspecting people may encounter with SOURCE connections in Nginx is specifying the wrong port in their Nginx config. Don't feel bad, Shoutcast v1 is just weird. Point to remember is:
Instead of the port you specify in the
client encoder it will actually attempt to connect to port+1
So if you were using port 8000 for incoming connections, either set the port to 7999 in client encoders using the Shoutcast v1 protocol, or set up your Nginx stream directives with two blocks, one for port 8000 and one for port 8001, as sketched below.
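A sketch of the two-block variant, reusing the placeholder addresses from above:

stream {
    # plain Icecast/HTTP source clients
    server {
        listen pub.lic.ip:8000;
        proxy_pass ice.cast.ip:8000;
    }
    # Shoutcast v1 encoders connect to port+1
    server {
        listen pub.lic.ip:8001;
        proxy_pass ice.cast.ip:8001;
    }
}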
Your Nginx install must be built with the stream module; it's not part of the standard build. Unsure? Run:
nginx -V 2>&1 | grep -qF -- --with-stream && echo ":)" || echo ":("
If you see a smiley face you are good to go. If not you'll need to build Nginx and include it. Many repositories have an nginx-extras package which includes the stream module.
Almost finished; all we need now is access to the admin pages. I serve these from https://example.com/icecast/, but Icecast generates all the URIs in the admin page links using the root path, not including icecast/, so they won't work. Let's fix that using the Nginx sub filter module to add icecast/ to the links in the returned pages:
location /icecast/ {
    sub_filter_types text/xhtml text/xml text/css;
    sub_filter 'href="/' 'href="/icecast/';
    sub_filter 'url(/' 'url(/icecast/';
    sub_filter_once off;
    sub_filter_last_modified on;
    proxy_set_header Accept-Encoding "";
    proxy_pass http://ice.cast.ip:port/;
}
The trailing slash at the end of proxy_pass http://ice.cast.ip:port/; is vitally important for this to work.
If a proxy_pass directive is specified as just server:port, the full original client request URI is appended and passed to the upstream server. If the proxy_pass has any URI appended (even just /), then Nginx will replace the part of the client request URI that matches the location block (in this case /icecast/) with the URI appended to the proxy_pass. So, by appending a slash, a request to https://example.com/icecast/admin/ is proxied to http://ice.cast.ip:port/admin/.
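A minimal illustration of the two behaviours (placeholder addresses; these are alternative blocks, not meant to be used together):

# Without a URI part: the full client URI is passed upstream.
# request /icecast/admin/  ->  upstream /icecast/admin/
location /icecast/ {
    proxy_pass http://ice.cast.ip:port;
}

# With a URI part (even just "/"): the matched prefix is replaced by it.
# request /icecast/admin/  ->  upstream /admin/
location /icecast/ {
    proxy_pass http://ice.cast.ip:port/;
}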
Finally, I don't want my admin pages accessible to the world, just to my IP and the LAN, so I also include these in the location block above:
allow 127.0.0.1;
allow 192.168.1.0/24;
allow my.ip.add.ress;
deny all;
That's it.
sudo nginx -s reload
Have fun.
tl;dr - Don't reverse proxy Icecast.
Icecast, for various reasons, is better not reverse proxied. It is a purpose-built HTTP server, and generic HTTP servers tend to have significant issues with the intricacies of continuous HTTP streaming.
This has been repeatedly answered. People like to try anyway and invariably fail in various ways.
If you need it on port 80/443, then run it on those ports directly
If you already have something running on port 80/443, then use another of the remaining 2^64 IPv6 addresses in your /64, and if you are still using legacy IP, get another address, e.g. by spinning up a virtual server in the cloud.
Need HTTPS? Icecast supports TLS (on Debian and Ubuntu make sure to install the official Xiph.org packages, as distro packages come without OpenSSL support).
Make sure to put both private and public key into one file.
This line....
subs_filter '#localhost' '#stream.example.com' gi;
Should probably be....
subs_filter '#localhost' '#example.com' gi;
I am not familiar with nginx, so my best guess would be that this line is linking radio.example.com to the main site of example.com. By adding stream.example.com you are confusing it by directing it to a site that doesn't exist.
I got this from a config file posted here:
Anyway, it wouldn't hurt to try it.

ActiveMQ and NGINX

This is something I can't get my head wrapped around ... We have a requirement to use ActiveMQ hidden behind an NGINX proxy, but I have no idea how to set it up.
For ActiveMQ I've set up different ports for all the protocols:
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:62716?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5782?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:62713?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1993?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:62714?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
And the nginx configuration looks like this:
server {
    listen *:61616;
    server_name 192.168.210.15;
    index index.html index.htm index.php;
    access_log /var/log/nginx/k1.access.log combined;
    error_log /var/log/nginx/k1.error.log;

    location / {
        proxy_pass http://localhost:62716;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_method stream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Proxy "";
    }
}
(same for all other five redefined ports)
I thought that this would expose the default ActiveMQ ports and Nginx would map them to the new definitions, but this doesn't work.
For communication we're using the Node.js library amqp10, version 3.1.4.
And all the ports are enabled on the server ... if we use the standard ports without the nginx proxy, it works.
Any idea what I am missing? Thanks for any thoughts.
You can hide ActiveMQ behind an nginx proxy, even if you are trying to proxy OpenWire for an AMQP client.
If you are adding your configuration inside the http block, it's bound to fail.
But note that nginx supports not only an http block but also a stream (TCP) block.
If you proxy ActiveMQ over TCP, then what happens at the HTTP level won't matter and you will still be able to proxy.
Of course, you lose the flexibility that comes with HTTP.
Open your nginx.conf (at /etc/nginx/nginx.conf).
It will have an http block, which in turn has some include statements.
Outside this http block, add another include statement:
$ pwd
/etc/nginx
$ cat nginx.conf | tail -1
include /etc/nginx/tcpconf.d/*;
The include statement directs nginx to look for additional configuration in the directory /etc/nginx/tcpconf.d/.
Add the desired configuration in this directory. Let's call it amq_stream.conf.
$ pwd
/etc/nginx/tcpconf.d
$ cat amq_stream.conf
stream {
    upstream amq_server {
        # activemq server
        server <amq-server-ip>:<port, e.g. 61616>;
    }
    server {
        listen 61616;
        proxy_pass amq_server;
    }
}
Restart your nginx service.
$ sudo service nginx restart
You are done.
Nginx is an HTTP server that is capable of proxying WebSocket and HTTP.
But you are trying to proxy OpenWire for an AMQP client, which does not work with Nginx or Node.js.
So, if you really need to use Nginx, you need to change the client protocol to STOMP or MQTT over WebSocket, and then set up a WebSocket proxy in Nginx.
An Nginx example with TLS follows. More details at https://www.nginx.com/blog/websocket-nginx/
upstream websocket {
    server amqserver.example.com:62714;
}

server {
    listen 8883 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/certificate.cer;
    ssl_certificate_key /etc/nginx/ssl/key.key;

    location / {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection upgrade;
        proxy_read_timeout 120s;
    }
}
However, since you would have to rewrite all the client code, I would rethink the Nginx idea. There is other software (and hardware) that can front TCP-based servers and do TLS termination and whatnot.

Elasticsearch head plugin not working through nginx reverse proxy

I have elasticsearch with the head plugin installed, running on a different server. I have also set up an nginx reverse proxy for my ES instance. The configuration looks like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name es.mydomain.net;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:9200;
        }
    }
}
Hitting http://es.mydomain.net/ works fine and I get a status 200 response. However, if I try to hit http://es.mydomain.net/_plugin/head/, I seemingly get a blank page. Note that the page loads fine if I access the head plugin directly, without the reverse proxy, via http://SERVERIP:PORT/_plugin/head/.
EDIT:
After doing some more debugging, I saw a net::ERR_CONTENT_LENGTH_MISMATCH error in the console for the page. After looking at nginx's log to see what the error was, I came upon the true culprit, which is this error:
2015/05/27 16:26:48 [crit] 29765#0: *655 open() "/home/web/nginx/proxy_temp/6/00/0000000006" failed (13: Permission denied) while reading upstream, client: 10.183.6.63, server: es.mydomain.com, request: "GET /_plugin/head/dist/app.js HTTP/1.1", upstream: "http://127.0.0.1:9200/_plugin/head/dist/app.js", host: "es.mydomain.com", referrer: "http://es.mydomain.com/_plugin/head/"
I googled this one in particular, and it seems it can happen because the worker process runs as nobody, and the folder it is trying to read/write may not have the right permissions. Still looking into this, but will update with an answer when found.
EDIT 2: Removed unnecessary information to make issue more direct.
I was able to work out two solutions to get around the permission issue, so I'll present them both.
One thing to know about my nginx setup is that I did not use sudo to install it. I unarchived the tar file, configured it, and ran make install, so it resides in /home/USERNAME/nginx/.
The issue was that starting nginx created a worker process running as "nobody", which then tried to read/write in /home/USERNAME/nginx/proxy_temp/, which it did not have permission to do. Solutions on the web said to just chown the temp folders to nobody, but that wasn't really appropriate in my particular case since we were inside USERNAME's home directory.
Solution 1:
Add user USERNAME; to the top of nginx.conf so that the worker process runs as the specified user. This no longer led to a permission issue, as USERNAME has the permissions to read/write in the desired temp folders.
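For clarity, a minimal sketch of where the directive goes (USERNAME is the placeholder from above):

# very top of nginx.conf, in the main context (outside http {})
user USERNAME;

worker_processes auto;
events { }
http {
    # ... rest of the configuration ...
}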
Solution 2:
Add proxy_temp_path to the server config. With this, you can specify a folder for the nobody process to create, where it will have read/write permission. Note that you might still run into permission issues if the other *_temp folders are used by your nginx server.
server {
    listen 80;
    server_name es.mydomain.net;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:9200;
        proxy_temp_path /foo/bar/proxy_temp;
    }
}
I personally preferred solution 1, as it applies to all the server blocks and I would not have to worry about the other *_temp folders once the conf file got more complex.
You have to install the head plugin on all ES nodes.

NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream

I have Puma running as the upstream app server and Riak as my background db cluster. When I send a request that map-reduces a chunk of data for about 25K users and returns it from Riak to the app, I get an error in the Nginx log:
upstream timed out (110: Connection timed out) while reading
response header from upstream
If I query my upstream directly without nginx proxy, with the same request, I get the required data.
The Nginx timeout occurs once the proxy is put in.
**nginx.conf**
http {
    keepalive_timeout 10m;
    proxy_connect_timeout 600s;
    proxy_send_timeout 600s;
    proxy_read_timeout 600s;
    fastcgi_send_timeout 600s;
    fastcgi_read_timeout 600s;
    include /etc/nginx/sites-enabled/*.conf;
}
**virtual host conf**
upstream ss_api {
    server 127.0.0.1:3000 max_fails=0 fail_timeout=600;
}

server {
    listen 81;
    server_name xxxxx.com; # change to match your URL

    location / {
        # match the name of upstream directive which is defined above
        proxy_pass http://ss_api;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache cloud;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_bypass $http_authorization;
        proxy_cache_bypass http://ss_api/account/;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
Nginx has a bunch of timeout directives. I don't know if I'm missing something important. Any help would be highly appreciated....
This happens because your upstream takes too long to answer the request and NGINX thinks the upstream already failed in processing the request, so it responds with an error.
Just include and increase proxy_read_timeout in location config block.
Same thing happened to me and I used 1 hour timeout for an internal app at work:
proxy_read_timeout 3600;
With this, NGINX will wait for an hour (3600s) for its upstream to return something.
You should always refrain from increasing the timeouts; I doubt your backend server's response time is the issue here in any case.
I got around this issue by clearing the connection keep-alive flag and specifying http version as per the answer here:
https://stackoverflow.com/a/36589120/479632
server {
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://localhost:5000;
    }
}
Unfortunately I can't explain why this works, and I didn't manage to decipher it from the docs mentioned in the linked answer either, so if anyone has an explanation I'd be very interested to hear it.
First figure out which upstream is slow by consulting the nginx error log file, and adjust the read timeout accordingly.
In my case it was FastCGI:
2017/09/27 13:34:03 [error] 16559#16559: *14381 upstream timed out (110: Connection timed out) while reading response header from upstream, client:xxxxxxxxxxxxxxxxxxxxxxxxx", upstream: "fastcgi://unix:/var/run/php/php5.6-fpm.sock", host: "xxxxxxxxxxxxxxx", referrer: "xxxxxxxxxxxxxxxxxxxx"
So I had to adjust the fastcgi_read_timeout in my server configuration:
location ~ \.php$ {
    fastcgi_read_timeout 240;
    ...
}
See: original post
In your case a little optimization of the proxy might help, or you can use the "# time out settings" below:
location / {
    # time out settings
    proxy_connect_timeout 159s;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    proxy_buffer_size 64k;
    proxy_buffers 16 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_pass_header Set-Cookie;
    proxy_redirect off;
    proxy_hide_header Vary;
    proxy_set_header Accept-Encoding '';
    proxy_ignore_headers Cache-Control Expires;
    proxy_set_header Referer $http_referer;
    proxy_set_header Host $host;
    proxy_set_header Cookie $http_cookie;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
I would recommend looking at the error logs, specifically at the upstream part, which shows the specific upstream that is timing out.
Then, based on that, you can adjust proxy_read_timeout, fastcgi_read_timeout or uwsgi_read_timeout.
Also make sure your config is loaded.
More details here: Nginx upstream timed out (why and how to fix)
I think this error can happen for various reasons, but it can be specific to the module you're using. For example, I saw this using the uwsgi module, so I had to set uwsgi_read_timeout.
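As a hedged sketch of that uwsgi variant (the socket path is a placeholder):

location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/myapp.sock;   # placeholder socket
    uwsgi_read_timeout 300s;
}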
As many others have pointed out here, increasing the timeout settings for NGINX can solve your issue.
However, increasing your timeout settings might not be as straightforward as many of these answers suggest. I myself faced this issue and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This did not help me a single bit; there was no apparent change in NGINX's timeout settings. Now, many hours later, I finally managed to fix this problem.
The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
Please also check the keepalive_timeout of the upstream server.
I got a similar issue: random 502s, with Connection reset by peer errors in the nginx logs, happening when the server was under heavy load. Eventually I found it was caused by a mismatch between nginx's and the upstream's (gunicorn in my case) keepalive_timeout values. Nginx was at 75s and the upstream only a few seconds. This caused the upstream to sometimes fall into a timeout and drop the connection, while nginx didn't understand why.
Raising the upstream server's value to match nginx's solved the issue.
I had the same problem, and it turned out to be an "every day" error in the Rails controller. I don't know why, but on production Puma runs into the error again and again, causing the message:
upstream timed out (110: Connection timed out) while reading response header from upstream
Probably because Nginx tries to get the data from Puma again and again. The funny thing is that the error caused the timeout message even if I was calling a different action in the controller, so a single typo blocks the whole app.
Check your log/puma.stderr.log file to see if that is the situation.
If you're using an AWS EC2 instance running Linux like I am, you may also need to restart Nginx for the changes to take effect after adding proxy_read_timeout 3600; to /etc/nginx/nginx.conf. I did: sudo systemctl restart nginx
Hopefully it helps someone:
I ran into this error and the cause was wrong permissions on the log folder for php-fpm; after changing them so php-fpm could write to it, everything was fine.
On our side, it was caused by using SPDY with the proxy cache. When the cache expires, we get this error until the cache has been updated.
For the proxy upstream timeout, I tried the settings above but they didn't work.
Setting resolver_timeout worked for me, since it was taking 30s to produce the upstream timeout message, e.g. me.atwibble.com could not be resolved (110: Operation timed out).
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver_timeout
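A hedged sketch of where resolver_timeout fits, with a placeholder resolver and backend name:

location / {
    resolver 8.8.8.8;
    resolver_timeout 10s;                       # default is 30s
    set $backend http://backend.example.com;    # placeholder upstream
    proxy_pass $backend;
}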
We faced this issue while saving content (a custom content type), which gave a timeout error. We fixed it by adding all the above timeouts, setting the HTTP client config to 600s, and increasing the memory for the PHP process to 3 GB.
If you are using WSL 2 on Windows 10, check your version with this command:
wsl -l -v
You should see 2 under the version column.
If you don't, you need to install wsl_update_x64.
Then add a config line to your location block or nginx.conf, for example:
proxy_read_timeout 900s;
