How do I fix this NGINX 502 Bad Gateway error?

I'm working with web sockets in an Angular app. It connects to a Python back-end through nginx. I'm finding that I get 502 "Bad Gateway" errors about 90% of the time. The sequence is:
Load page in browser and connect web socket
Python back-end sends data to angular front-end
Disconnect web socket
Attempt to re-connect web socket <-- 502 Bad Gateway error
Hard-reload in Chrome
Load page in browser and connect web socket <-- No 502 error
I can't figure out why this is happening, why I'm getting a 502 error, or why a hard reload fixes the problem. Things I've tried:
Increasing the nginx log level to debug; the logs still don't contain any useful information.
Not keeping any web socket objects in state, in case something is being cached somewhere.
Always closing the web socket with close code 1000.
Running the Python service manually on the server so I can watch it; when the 502 error happens, the service doesn't show anything unusual.
Setting the nginx max_fails to 0 and the fail_timeout to 0 (a suggestion I found in other SO answers); neither change seems to have any effect. (Applied as sketched just below.)
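For reference, a sketch of how those two parameters were applied, assuming they were set on the upstream server line (this mirrors the upstream block shown further down):

upstream WEBSOCKET {
    ip_hash;
    # max_fails=0 disables the failure accounting that would otherwise mark the
    # backend unavailable; fail_timeout=0 was set alongside it as suggested
    server 127.0.0.1:8765 max_fails=0 fail_timeout=0;
}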
What should I be looking for that will help me fix this problem?
EDIT: Here's my nginx conf.d file:
server {
    listen 80;
    index index.html;
    root /var/www/mysite;

    location / {
        access_log /var/log/nginx/mysite/ui.access.log;
        error_log /var/log/nginx/mysite/ui.error.log;
        try_files $uri $uri/ /index.html;
    }

    location /ws/ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Host $proxy_host;
        proxy_pass http://WEBSOCKET/;
        access_log /var/log/nginx/mysite/ws_services.access.log;
        error_log /var/log/nginx/mysite/ws_services.error.log;
        proxy_read_timeout 300s;
    }
}

upstream WEBSOCKET {
    ip_hash;
    server 127.0.0.1:8765;
}

Not the same problem the OP had, but just in case anyone comes across this and has the same setup as I had:
I was using WebSockets over SSL (the wss:// protocol) and had 502s popping up, even though the config had worked before. The config was as follows:
...
proxy_pass http://127.0.0.1:8080;
...
In the backend I was using Node with the ws package to create a websocket server.
As I said, it was working before but suddenly stopped. Additionally, nginx wrote "upstream prematurely closed connection while reading response header from upstream" errors into the error log. I suppose that either nginx or Node closed some kind of security hole, which led to the setup no longer working.
What I had to do to make it work was to use https instead of http in the proxy_pass config:
...
proxy_pass https://127.0.0.1:8080;
...
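For context, a minimal sketch of the kind of server/location block this ends up in; the path, port, and certificate locations are placeholders, not taken from the original answer:

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location /ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        # https (not http) because the Node ws backend itself speaks TLS, as described above
        proxy_pass https://127.0.0.1:8080;
    }
}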

Related

502 error bad gateway with nginx on a server behind corporate proxy

I'm trying to install a custom service on one of our corporate servers (the kind of server that is not connected to the internet, unless all the traffic passes through a corporate proxy).
This proxy has been set up with the classic export http_proxy=blablabla in the .bashrc file, among other things.
Now the interesting part: I'm trying to configure nginx to redirect all traffic from server_name to the local url localhost:3000.
Here is the basic configuration I use. Nothing too tricky.
server {
    listen 443 ssl;
    ssl_certificate /crt.crt;
    ssl_certificate_key /key.key;
    server_name x.y.z;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
When I try to access the server_name from my browser, I get a 502 error (an nginx error, so the request does hit my server).
When I try to access the local url from the server with curl --noproxy '*' https://localhost:3000, it works. (I have to pass the --noproxy '*' flag because of the export http_proxy=blablabla setup in the .bashrc file; if I don't, the localhost request is sent to our distant proxy, causing the request to fail.)
My guess is that this has to be related to the corporate proxy configuration, but I might be mistaken.
Do you have any insights you could share with me about this?
Thanks a lot!
PS: the issue is not related to any kind of SSL configuration; that part is working great.
PS2: I'm not a sysadmin, so all these issues are confusing.
PS3: the server I'm working on runs RHEL 7.9.
It turned out to have nothing to do with the proxy; I found my solution here:
https://stackoverflow.com/a/24830777/4991067
Thanks anyway

Sometimes 502 Bad Gateway, sometimes 504 Gateway Timeout, sometimes the website loads successfully

I have a website hosted on Google Cloud Platform behind an NGINX server. A few hours ago it was working well, but suddenly a 502 Bad Gateway error occurred.
The NGINX server is hosted on one instance and the main project on another; the following is the configuration of my server:
server {
    listen 443 ssl;
    server_name www.domain.com;
    ssl_certificate /path-to/fullchain.pem;
    ssl_certificate_key /path-to/privkey.pem;

    # REST API Redirect
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:3000;
    }

    # Server-side CMS Redirect
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:4400;
    }
}
When I restarted the nginx instance, the website loaded successfully, but after three or four refreshes it started giving Bad Gateway, and after that it gives the Bad Gateway error every time I open it.
Sometimes it recovers on its own, but then goes down again.
I checked the nginx error log; it showed two recurring errors (originally posted as screenshots, which are not reproduced here).
For the first error I tried some recommendations, such as increasing the proxy send and read timeouts to higher values in the server configuration, as suggested elsewhere.
The backend code itself is working fine, because I can access the deployed backend services locally during development, but the hosted website cannot reach any backend service.
Nothing worked, and sadly my website is down.
Please suggest any solution.
By default nginx has 1024 worker connections; you can change this with:
events {
    worker_connections 4096;
}
You can also try increasing the number of worker processes, since workers * worker_connections gives the total number of connections you can handle. All of this applies if your site is receiving traffic and you are simply running out of connections.
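A minimal sketch of how those two settings sit together at the top level of nginx.conf (the values are illustrative, not prescriptive):

# main context of nginx.conf
worker_processes auto;        # roughly one worker per CPU core

events {
    # per-worker limit; total capacity is about workers * worker_connections
    worker_connections 4096;
}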

Meteor, WebSocket, Nginx 502 Error

We are trying to run a Meteor application on a Debian server behind Nginx. The application is running, but GET requests to http://url/sockjs?info?cb=[random_string] return 502 Bad Gateway.
The Nginx configuration is set up as follows:
# this section is needed to proxy web-socket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream app_server {
    server 127.0.0.1:3000; # for a web port socket (we'll use this first)
    # server unix:/var/run/app/app.sock;
}

server {
    listen 80;
    server_name app_server.com;
    charset utf-8;
    client_max_body_size 75M;

    access_log /var/log/nginx/app.access.log;
    error_log /var/log/nginx/app.error.log;

    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr; # preserve client IP
        proxy_read_timeout 60s;
        keepalive_timeout 65;
        proxy_redirect off;

        # the root path (/) MUST NOT be cached
        if ($uri != '/') {
            expires 30d;
        }
    }
}
We have tried various configurations and could not figure out where the fault lies. The solution at "Meteor WebSocket handshake error 400 with nginx" didn't work either.
Edit: Tried the configuration found at "recommended nginx configuration for meteor" and it was still returning 502.
Edit: The application works fine when not loading images from Meteor CFS, which is used to store images uploaded via an admin dashboard. When loading images, a POST to domain/sockjs/img_location/cb/xhr_send causes a 502 error.
Are you sure the issue is really coming from NGINX and the websocket?
First, you can try wscat as a websocket CLI to check whether the websockets are working.
You can also try to run the app in a console, or look at its log (debug/verbose at max level), to see if there is an underlying error.
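For example, assuming the Meteor app listens locally on port 3000 and exposes the standard SockJS raw-websocket endpoint (the exact path is an assumption; adjust it to your setup):

# install the wscat CLI from npm
npm install -g wscat

# connect directly to the app, bypassing nginx, to check the socket itself
wscat -c ws://127.0.0.1:3000/sockjs/websocket

# then connect through nginx to compare behaviour
wscat -c ws://app_server.com/sockjs/websocket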
As per your question edit, CFS uses an HTTP transport as the underlying data transfer layer.
Unfortunately, depending on how you get CFS into your stack, you might end up with an old and buggy version of its dependencies, namely cfs:http-methods, which sometimes tries to end an already-ended response, which then surfaces as a cryptic error message.
Fortunately, the bug has been resolved from version 0.0.30 onwards, and to ensure that Meteor loads that version as the minimum dependency, all you need to do is edit your .meteor/packages file and add the following:
cfs:http-methods@0.0.30
This will ensure any version equal to or greater than 0.0.30, which as of this moment is the latest on Atmosphere (Meteor's package server).

Elasticsearch head plugin not working through nginx reverse proxy

I have Elasticsearch with the head plugin installed, running on a different server. I have also set up an nginx reverse proxy for my ES instance. The configuration looks like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name es.mydomain.net;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:9200;
        }
    }
}
Hitting http://es.mydomain.net/ works fine and I get a 200 status response. However, if I hit http://es.mydomain.net/_plugin/head/, I seemingly get a blank page. Note that the page loads fine if I access the head plugin directly, without the reverse proxy, via http://SERVERIP:PORT/_plugin/head/.
EDIT:
After doing some more debugging, I saw a net::ERR_CONTENT_LENGTH_MISMATCH error in the console for the page. After looking at nginx's log, to see what the error was, I came upon the true culprit, which is this error:
2015/05/27 16:26:48 [crit] 29765#0: *655 open() "/home/web/nginx/proxy_temp/6/00/0000000006" failed (13: Permission denied) while reading upstream, client: 10.183.6.63, server: es.mydomain.com, request: "GET /_plugin/head/dist/app.js HTTP/1.1", upstream: "http://127.0.0.1:9200/_plugin/head/dist/app.js", host: "es.mydomain.com", referrer: "http://es.mydomain.com/_plugin/head/"
I googled this particular error, and it seems it can happen because the worker process runs as nobody, and the folder it is trying to read from and write to may not have the right permissions. Still looking into this, but I will update with an answer when found.
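Two quick checks that follow from that theory (illustrative commands; the path is the one from the log above):

# which user are the nginx worker processes running as?
ps -o user,pid,cmd -C nginx

# who owns the proxy_temp directory the worker is trying to write to?
ls -ld /home/web/nginx/proxy_temp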
EDIT 2: Removed unnecessary information to make issue more direct.
I was able to work out two solutions to get around the permission problem, so I'll present them both.
One thing to know about my nginx setup is that I did not use sudo to install it. I unarchived the tar file, configured, and make-installed it, so it resides in /home/USERNAME/nginx/.
The issue was that starting nginx created a worker process running as "nobody", which then tried to read and write in /home/USERNAME/nginx/proxy_temp/, which it did not have permission to do. Solutions on the web said to just chown the temp folders to nobody, but that wasn't really appropriate in my case since they live inside USERNAME's home directory.
Solution 1:
Add user USERNAME; to the top of nginx.conf so that the worker process runs as the specified user. This no longer leads to a permission issue, since USERNAME has permission to read and write in the desired temp folders.
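A minimal sketch of where that directive goes (USERNAME is a placeholder):

# main (top-level) context of nginx.conf
user USERNAME;   # worker processes run as USERNAME instead of nobody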
Solution 2:
Add proxy_temp_path to the server config. With this, you can point the nobody process at a folder where it has permission to create and write files. Note that you might still run into permission issues if the other *_temp folders are used by your nginx server.
server {
    listen 80;
    server_name es.mydomain.net;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:9200;
        proxy_temp_path /foo/bar/proxy_temp;
    }
}
I personally preferred solution 1, as it applies to all the server blocks and I don't have to worry about the other *_temp folders once the conf file gets more complex.
You have to install the head plugin on all ES nodes.

Nginx reverse proxy causing 504 Gateway Timeout

I am using Nginx as a reverse proxy that takes requests then does a proxy_pass to get the actual web application from the upstream server running on port 8001.
If I go to mywebsite.example or do a wget, I get a 504 Gateway Timeout after 60 seconds... However, if I load mywebsite.example:8001, the application loads as expected!
So something is preventing Nginx from communicating with the upstream server.
All this started after my hosting company reset the machine my stuff was running on, prior to that no issues whatsoever.
Here's my vhosts server block:
server {
    listen 80;
    server_name mywebsite.example;

    root /home/user/public_html/mywebsite.example/public;
    access_log /home/user/public_html/mywebsite.example/log/access.log upstreamlog;
    error_log /home/user/public_html/mywebsite.example/log/error.log;

    location / {
        proxy_pass http://xxx.xxx.xxx.xxx:8001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And the output from my Nginx error log:
2014/06/27 13:10:58 [error] 31406#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxx.xx.xxx.xxx, server: mywebsite.example, request: "GET / HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:8001/", host: "mywebsite.example"
You can probably add a few more lines to increase the timeout period to the upstream. The examples below set the timeout to 300 seconds:
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
Increasing the timeout will not likely solve your issue since, as you say, the actual target web server is responding just fine.
I had this same issue and found it had to do with not using keep-alive on the connection. I can't actually say why, but clearing the Connection header solved the issue and the request was proxied just fine:
server {
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://localhost:5000;
    }
}
Have a look at these posts, which explain it in more detail:
nginx close upstream connection after request
Keep-alive header clarification
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
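For reference, a minimal sketch of the pattern the linked upstream-module docs describe: reusing keep-alive connections to the backend needs an upstream block with keepalive, HTTP/1.1, and an empty Connection header (names and port are placeholders):

upstream backend_app {
    server 127.0.0.1:5000;
    keepalive 16;                      # pool of idle keep-alive connections per worker
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # allow nginx to reuse upstream connections
        proxy_pass http://backend_app;
    }
}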
user2540984, as well as many others, has pointed out that you can try increasing your timeout settings. I myself faced a similar issue and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This, however, did not help me a single bit; there was no apparent change in NGINX's timeout settings. After many hours of searching, I finally managed to solve my issue.
The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
This might not be the solution to your particular problem, but if anyone else notices that the timeout changes in /etc/nginx/nginx.conf don't do anything, I hope this answer helps!
If you want to increase or add a time limit for all sites, you can add the lines below to the nginx.conf file.
Add them to the http section of /usr/local/etc/nginx/nginx.conf or /etc/nginx/nginx.conf.
fastcgi_read_timeout 600;
proxy_read_timeout 600;
If these lines don't already exist in the conf file, add them; otherwise increase fastcgi_read_timeout and proxy_read_timeout to make sure that nginx and php-fpm do not time out.
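A minimal sketch of where those directives sit (values as in this answer):

http {
    # applies to every server block defined or included below
    fastcgi_read_timeout 600;
    proxy_read_timeout 600;

    # server { ... } blocks and includes follow here
}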
To increase the time limit for only one site, you can edit it with vim /etc/nginx/sites-available/example.com:
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}
After adding these lines, don't forget to reload php-fpm and nginx:
service php7-fpm reload
service nginx reload
or, if you're using valet then simply type valet restart.
You can also face this situation if your upstream server uses a domain name and its IP address changes (e.g. your upstream points to an AWS Elastic Load Balancer).
The problem is that nginx will resolve the IP address once and keep it cached for subsequent requests until the configuration is reloaded.
You can tell nginx to use a name server to re-resolve the domain once the cached entry expires:
location /mylocation {
    # use google dns to resolve host after IP cached expires
    resolver 8.8.8.8;
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}
The docs on proxy_pass explain why this trick works:
Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver.
Kudos to "Nginx with dynamic upstreams" (tenzer.dk) for the detailed explanation, which also contains some relevant information on a caveat of this approach regarding forwarded URIs.
In nginx, I set:
proxy_read_timeout 300;
In my case with AWS, I also edited the load balancer settings (Attributes => Idle timeout).
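If the load balancer is an ALB/NLB managed with the AWS CLI, the same change can be scripted; a sketch, with a placeholder ARN:

# raise the idle timeout to 300 seconds to match proxy_read_timeout
aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:123456789012:loadbalancer/app/my-lb/abc123 \
    --attributes Key=idle_timeout.timeout_seconds,Value=300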
Had the same problem. It turned out to be caused by iptables connection tracking on the upstream server. After removing --state NEW,ESTABLISHED,RELATED from the firewall script and flushing the tracking table with conntrack -F, the problem was gone.
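For context, the kind of rule involved (a hypothetical example, not the poster's actual firewall script):

# the upstream host had a stateful rule along these lines:
#   iptables -A INPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
# the fix was to drop the "-m state --state NEW,ESTABLISHED,RELATED" match from the
# firewall script and then flush the connection tracking table:
conntrack -F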
NGINX itself may not be the root cause.
IF "minimum ports per VM instance" set on the NAT Gateway -- which stand between your NGINX instance & the proxy_pass destination -- is too small for the number of concurrent requests, it has to be increased.
Solution: Increase the available number of ports per VM on NAT Gateway.
Context In my case, on Google Cloud, a reverse proxy NGINX was placed inside a subnet, with a NAT Gateway. The NGINX instance was redirecting requests to a domain associated with our backend API (upstream) through the NAT Gateway.
This documentation from GCP will help you understand how NAT is relevant to the NGINX 504 timeout.
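If the NAT gateway is Cloud NAT, that setting can be raised with gcloud; a sketch with placeholder names and values:

# raise the minimum number of NAT ports reserved per VM
gcloud compute routers nats update my-nat \
    --router=my-router \
    --region=us-central1 \
    --min-ports-per-vm=4096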
In my case, I restarted PHP-FPM and it was OK again.
If nginx_ajp_module is used, try adding
ajp_read_timeout 10m;
in nginx.conf file.
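A minimal sketch of where that directive would sit, assuming the third-party nginx_ajp_module and an AJP backend on the usual connector port (names are placeholders):

upstream tomcat_ajp {
    server 127.0.0.1:8009;        # typical AJP connector port
}

server {
    listen 80;

    location / {
        ajp_pass tomcat_ajp;      # provided by nginx_ajp_module
        ajp_read_timeout 10m;
    }
}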
