nginx "500 internal server error" on large request - nginx

I am sending a 14K request to my backend through nginx and I get the following error:
500 Internal Server Error
I am running nginx 1.6.2. If I send my request directly to my backend, everything works fine, and the request takes about 3-4 seconds round trip.
This is my nginx config:
$ cat /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

http {
    proxy_temp_path /tmp/nginx;

    upstream my_servers {
        server <server1>:9000 down;
        server <server2>:9000 down;
        server <server3>:9000 down;
        server <server1>:9001;
        server <server2>:9001;
        server <server3>:9001;
    }

    server {
        access_log /var/log/nginx/access.log combined;
        listen 9080;

        location / {
            proxy_pass http://my_servers;
        }
    }
}
Any idea what is going on? I assume I can't be hitting any default timeouts at 3-4 seconds?
BTW, when I tried looking at the access log file, it was empty.

The issue was related to permissions for client_body_temp_path as described here:
https://wincent.com/wiki/Fixing_nginx_client_body_temp_permission_denied_errors
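For context, a request body larger than client_body_buffer_size is spooled to disk under client_body_temp_path, so a permission problem there can surface as a 500. Below is a minimal sketch of the kind of fix involved, assuming the workers run as the nginx user; the paths shown are illustrative, not taken from the config above:
user nginx;

events {
    worker_connections 1024;
}

http {
    # both directories must exist and be writable by the "nginx" user,
    # e.g. owned by nginx:nginx
    client_body_temp_path /tmp/nginx/client_body;
    proxy_temp_path       /tmp/nginx/proxy;
}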

Related

Nginx: limit_conn vs upstream max_conns (in location context)

Environment: Nginx 1.14.0 (see the Dockerfile for more details).
To limit the number of concurrent connections for a specific location in a server, one can use two methods: limit_conn (the third example, for all IPs) and upstream max_conns.
Is there a difference in the way the two methods work?
Can someone explain, or point to an explanation?
Example of limiting using upstream max_conns:
http {
    upstream foo {
        zone upstream_foo 32m;
        server some-ip:8080 max_conns=100;
    }

    server {
        listen 80;
        server_name localhost;

        location /some_path {
            proxy_pass http://foo/some_path;
            return 429;
        }
    }
}
Limiting using limit_conn:
http {
    limit_conn_zone $server_name zone=perserver:32m;

    server {
        listen 80;
        server_name localhost;

        location /some_path {
            proxy_pass http://some-ip:8080/some_path;
            limit_conn perserver 100;
            limit_conn_status 429;
        }
    }
}
upstream max_conns limits the number of connections from the nginx server to an upstream server. max_conns is mainly there to make sure backend servers do not get overloaded. Say you have an upstream of 5 servers that nginx can send requests to; maybe one is underpowered, so you limit the total number of connections to it to keep from overloading it.
limit_conn limits the number of connections from a client to the nginx server and is there to limit abuse of requests to the nginx server. For example, you can say that for a location an IP can only have 10 open connections before maxing out.
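A minimal sketch of that per-IP variant, assuming a zone keyed on $binary_remote_addr (the zone name, size, and backend address are illustrative):
http {
    # one entry per client IP address, kept in a 10 MB shared memory zone
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        listen 80;

        location /some_path {
            proxy_pass http://some-ip:8080/some_path;
            # allow at most 10 concurrent connections per client IP
            limit_conn per_ip 10;
            limit_conn_status 429;
        }
    }
}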
Also note that, if the max_conns limit has been reached, the request can be placed in a queue for further processing, provided that the queue directive (NGINX Plus only) is also included to set the maximum number of requests that can be held in the queue at the same time:
upstream backend {
    server backend1.example.com max_conns=3;
    server backend2.example.com;
    queue 100 timeout=70;
}
If the queue fills up with requests, or the upstream server cannot be selected within the time set by the optional timeout parameter, or the queue parameter is omitted, the client receives an error (502).

nginx can't access the backend server on a VPS, but we can access the backend directly from a browser

The nginx configuration is as follows:
events {
    worker_connections 4096;
}

http {
    upstream myproject {
        server ip_adder:80;
        server ip_adder:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myproject;
        }
    }
}
I run the servers on a VPS.
When I request ip_adder:80 directly from a browser, I see a response,
but when I send the request through nginx so it proxies to the upstream, I get a 502 Bad Gateway response.
Can anyone help me?
This problem was solved as follows: your upstream server ports are probably filtered or closed. Please refer to this link for further details:
https://nmap.org/book/man-port-scanning-basics.html
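On the nginx side, a quick way to confirm that diagnosis (a sketch, not part of the original answer; the log path and level are illustrative) is to enable an error log, since a closed or filtered backend port shows up there as a connection error while connecting to the upstream:
http {
    # entries such as "connect() failed (111: Connection refused) while
    # connecting to upstream" or upstream timeouts point at a closed or
    # filtered backend port
    error_log /var/log/nginx/error.log warn;

    upstream myproject {
        server ip_adder:80;
        server ip_adder:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myproject;
        }
    }
}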

Nginx load balancer keeps changing the original URL to the load-balanced URL

I have run into an annoying issue with the Nginx load balancer; please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;

        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }

    server {
        listen 7777;
        server_name localhost;

        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }

    upstream node {
        server localhost:3000;
        server localhost:3001;
    }

    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
So what I want is to provide two load balancers: one sends requests on port 3333 to internal ports 3000 and 3001, and the second sends requests on port 7777 to internal ports 8079 and 8080.
When I test this setup, I notice that all requests to http://localhost:3333 work great and the URL in the address bar stays the same, but when I visit http://localhost:7777, all the requests are redirected to the internal URLs, http://localhost:8080 or http://localhost:8079.
I don't know why the two load balancers behave differently. I just want visitors to see only http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
But why do the node servers on ports 3000 and 3001 work fine, while the Java server on ports 8080 and 8079 does not rewrite the URL and only redirects?
As you can see from the configuration, the two blocks are exactly the same.
Thanks.
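One hedged guess, not an answer from the original thread: if the Java backend issues absolute redirects to its own host and port, then proxy_redirect off tells nginx not to rewrite the Location header on the way back, so the browser ends up on the internal port. A minimal sketch of letting nginx rewrite those redirects instead, assuming the backend redirects to localhost:8079 or localhost:8080:
server {
    listen 7777;
    server_name localhost;

    location / {
        proxy_pass http://auth;
        # rewrite absolute redirects from the backend back to the public
        # port instead of disabling Location rewriting entirely
        proxy_redirect http://localhost:8079/ http://localhost:7777/;
        proxy_redirect http://localhost:8080/ http://localhost:7777/;
    }
}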

nginx best practices for reloading servers

I have an nginx config which has:
events {
    worker_connections 1024;
}

http {
    upstream myservers {
        server server1.com:9000;
        server server2.com:9000;
        server server3.com:9000;
    }

    server {
        access_log /var/log/nginx/access.log combined;
        listen 9080;

        location / {
            proxy_pass http://myservers;
        }
    }
}
I need to reload the servers and the method I am using is to bring up the new servers on port 9001 and then do nginx -s reload with the following modification to the config:
upstream myservers {
    server server1.com:9000 down;
    server server2.com:9000 down;
    server server3.com:9000 down;
    server server1.com:9001;
    server server2.com:9001;
    server server3.com:9001;
}
Then I bring down the old servers. However, before doing that, I need to make sure all workers that were handling requests to the old servers are done. How do I check this? Also, is this the best way to reload backend servers with the free version of nginx?

uwsgicluster - no live upstreams while connecting to upstream client

Below is a simple nginx config for a cluster; then I turn off the 192.168.1.77:3032 server.
From time to time I catch a 502 error and "no live upstreams while connecting to upstream client" in the logs, even though "server unix:///var/tmp/site.sock backup;" is working and, as I understand it, should handle the request, but nginx does not treat it as live. What could be the problem?
nginx config:
upstream uwsgicluster {
    server 192.168.1.77:3032;
    server unix:///var/tmp/site.sock backup;
}

server {
    listen 80;
    server_name site.com www.site.com;

    access_log /var/log/nginx/sire.log;
    error_log /var/log/nginx/site-error.log;

    location / {
        uwsgi_pass uwsgicluster;
        include uwsgi_params;
    }
}
If I remove the 192.168.1.77:3032 server from the upstream and restart nginx, it works fine, but with the 192.168.1.77:3032 server switched off, the errors occur periodically.
I think that nginx will still try both of the servers in the upstream block even if one isn't working. When it fails to connect to one of them, it will try the other one, but will still log the error you are seeing.
By default, the proxy_next_upstream setting will try the next upstream server on error or timeout. You can override this:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
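A minimal sketch of that override for this config (hedged: since the location uses uwsgi_pass, the corresponding directive is uwsgi_next_upstream rather than proxy_next_upstream, and the values shown are only an illustration):
server {
    listen 80;
    server_name site.com www.site.com;

    location / {
        uwsgi_pass uwsgicluster;
        include uwsgi_params;
        # default is "error timeout"; this also fails over to the next
        # (backup) server when the backend answers with a 500 or 503
        uwsgi_next_upstream error timeout http_500 http_503;
    }
}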
Are you only seeing error logs, or are you also seeing undesired behavior/load-balancing?
