I use nginx as a reverse proxy.
From time to time, especially on large files and MJPEG streams, the warning "an upstream response is buffered to a temporary file" is logged.
What could be limiting nginx's memory usage?
I use docker-compose to start nginx; by default the container can use all available memory, and there is no nginx directive to change this default memory behaviour.
I could work around this by setting proxy_cache_path to /dev/shm, but I want to understand what is limiting nginx's memory usage.
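For what it's worth, the warning comes from nginx's proxy buffering: each proxied connection gets a fixed set of in-memory buffers, and any part of the response that does not fit is written to a temporary file. A minimal sketch of the directives involved (the upstream name and the values shown are illustrative assumptions, not recommendations):

    # Directives that bound per-request proxy buffering memory
    location / {
        proxy_pass               http://backend;   # placeholder upstream
        proxy_buffering          on;               # default; "off" streams the response instead
        proxy_buffer_size        8k;               # buffer used for the response headers
        proxy_buffers            8 8k;             # in-memory buffers per connection
        proxy_max_temp_file_size 1024m;            # overflow beyond the buffers goes to disk;
                                                   # 0 disables temporary files entirely
    }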
I'm seeing a weird situation where either Nginx or uWSGI seems to be building up a long queue of incoming requests and attempting to process them long after the client connection has timed out. I'd like to understand and stop that behavior. Here's more info:
My Setup
My server uses Nginx to pass HTTPS POST requests to uWSGI and Flask via a Unix file socket. I have basically the default configurations on everything.
I have a Python client sending 3 requests per second to that server.
The Problem
After running the client for about 4 hours, the client machine started reporting that all the connections were timing out. (It uses the Python requests library with a 7-second timeout.) About 10 minutes later, the behavior changed: the connections began failing with 502 Bad Gateway.
I powered off the client. But for about 10 minutes AFTER powering off the client, the server-side uWSGI logs showed uWSGI attempting to answer requests from that client! And top showed uWSGI using 100% CPU (25% per worker).
During those 10 minutes, each uwsgi.log entry looked like this:
Thu May 25 07:36:37 2017 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /api/polldata (ip 98.210.18.212) !!!
Thu May 25 07:36:37 2017 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /api/polldata (98.210.18.212)
IOError: write error
[pid: 34|app: 0|req: 645/12472] 98.210.18.212 () {42 vars in 588 bytes} [Thu May 25 07:36:08 2017] POST /api/polldata => generated 0 bytes in 28345 msecs (HTTP/1.1 200) 2 headers in 0 bytes (0 switches on core 0)
And the Nginx error.log shows a lot of this:
2017/05/25 08:10:29 [error] 36#36: *35037 connect() to unix:/srv/my_server/myproject.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 98.210.18.212, server: example.com, request: "POST /api/polldata HTTP/1.1", upstream: "uwsgi://unix:/srv/my_server/myproject.sock:", host: "example.com:5000"
After about 10 minutes the uWSGI activity stops. When I turn the client back on, Nginx happily accepts the POST requests, but uWSGI gives the same "writing to a closed pipe" error on every request, as if it's permanently broken somehow. Restarting the webserver's docker container does not fix the problem, but rebooting the host machine fixes it.
Theories
In the default Nginx -> socket -> uWSGI configuration, is there a long queue of requests with no timeout? I looked in the uWSGI docs and I saw a bunch of configurable timeouts, but all default to around 60 seconds, so I can't understand how I'm seeing 10-minute-old requests being handled. I haven't changed any default timeout settings.
The application uses almost all the 1GB RAM in my small dev server, so I think resource limits may be triggering the behavior.
Either way, I'd like to change my configuration so that requests > 30 seconds old get dropped with a 500 error, rather than getting processed by uWSGI. I'd appreciate any advice on how to do that, and theories on what's happening.
This appears to be an issue downstream of nginx, on the uWSGI side.
It sounds like your backend code may be at fault: it takes too long to process requests, implements no rate limiting, and does not detect when the underlying connection has been terminated. Hence the errors about writing to closed pipes, and possibly even new requests being processed long after the underlying connections have been closed.
As per http://lists.unbit.it/pipermail/uwsgi/2013-February/005362.html, you might want to abort processing within your backend if not uwsgi.is_connected(uwsgi.connection_fd()).
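A rough sketch of that check in a Flask view, assuming the app runs under uWSGI's Python plugin (the uwsgi module is only importable inside uWSGI); the handler body is a hypothetical placeholder, not your code:

    import uwsgi                 # only available when running under uWSGI
    from flask import Flask, abort

    app = Flask(__name__)

    def handle_poll_data():
        # Hypothetical placeholder for the real request processing.
        return "ok"

    @app.route("/api/polldata", methods=["POST"])
    def polldata():
        # If the peer (nginx, in this setup) has already closed the connection,
        # skip the work instead of generating a response nobody will read.
        if not uwsgi.is_connected(uwsgi.connection_fd()):
            abort(500)
        return handle_poll_data()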
You might want to explore https://uwsgi-docs.readthedocs.io/en/latest/Options.html#harakiri.
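For the "drop requests older than 30 seconds" goal, a minimal uwsgi.ini sketch, assuming your app is configured through an ini file (the socket path is copied from your nginx error log; everything else about your setup is assumed):

    [uwsgi]
    socket   = /srv/my_server/myproject.sock
    # recycle any worker that spends more than 30 seconds on a single request
    harakiri = 30

Note that when harakiri fires it kills the whole worker, so the client typically sees a 502 from nginx rather than a clean 500 from the application.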
As a last resort, as per Re: Understanding "proxy_ignore_client_abort" functionality (2014), you might want to change uwsgi_ignore_client_abort from off to on. That way nginx does not drop ongoing uWSGI requests that have already been passed to the upstream, even if the client subsequently disconnects, so you stop receiving the closed-pipe errors from uWSGI, and any concurrent-connection limits are enforced within nginx itself. (Otherwise, nginx drops its connection to uWSGI as soon as the client disconnects, and has no idea how many requests are still queued up within uWSGI for subsequent processing.)
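A sketch of that change on the nginx side, reusing the socket path from your error log (the location block and everything else around it are assumed):

    location /api/ {
        include                   uwsgi_params;
        uwsgi_pass                unix:/srv/my_server/myproject.sock;
        # keep passing the request to uWSGI even if the client disconnects
        # (the default is "off")
        uwsgi_ignore_client_abort on;
    }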
This looks like it could be a DoS attack: uWSGI pinned at 100% CPU while nginx returns 502/504/500 is a common symptom. IP spoofing is common in DoS attacks, so rule that out by checking the logs.
I need some suggestions for setting up auto-scaling of websocket connections in nginx. Let's say I have nginx configured to proxy websocket connections to 4 upstream backend servers. When I try to add a 5th server to the upstream block and reload nginx, I see that nginx keeps the existing worker processes running and additionally creates new ones to serve new websocket connections. I guess the old workers remain until their existing connections close.
Ideally, as we auto-scale, we want the number of nginx worker processes to remain the same. Is there a way to transfer the socket connections from the older worker processes to the newer worker processes?
Thanks.
I plan to use nginx for proxying websockets. When performing an nginx reload (HUP), I understand that nginx waits for the old worker processes to finish processing all their requests. With websocket connections, however, this may not happen for a long time, since the connections are persistent. Is there an option, or anything on the roadmap, to forcibly kill old worker processes after a timeout on reload?
References:
http://nginx.org/en/docs/control.html
http://forum.nginx.org/read.php?21,247573,247651#msg-247651
Thanks
Unless you either raise proxy_read_timeout (e.g. to 1d) or have the backend send periodic ping messages to keep the connection alive, nginx closes idle connections after 60 seconds. That default value was chosen for a reason.
See what an nginx core developer says:
There is proxy_read_timeout (http://nginx.org/r/proxy_read_timeout)
which as well applies to WebSocket connections. You have to bump it
if your backend do not send anything for a long time. Alternatively,
you may configure your backend to send websocket ping frames
periodically to reset the timeout (and check if the connection is
still alive).
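For example, a minimal websocket proxy location along those lines (the location path and upstream name are placeholders, and 1d is just a deliberately long value):

    location /ws/ {
        proxy_pass          http://websocket_backend;   # placeholder upstream
        proxy_http_version  1.1;
        proxy_set_header    Upgrade $http_upgrade;
        proxy_set_header    Connection "upgrade";
        # without this, nginx closes the tunnel after 60s of upstream silence
        proxy_read_timeout  1d;
        proxy_send_timeout  1d;
    }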
That said, nothing stops you from using the USR2+QUIT signal combination that is normally used for a graceful binary upgrade of nginx. Nginx master/worker processes rarely consume more than 50 MB of memory, so keeping multiple masters around isn't that expensive. USR2 forks a new master and spawns its workers; QUIT then gracefully shuts down the old workers and master.
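A sketch of that signal sequence, following http://nginx.org/en/docs/control.html (the pid file location varies between installs, so treat the paths as assumptions):

    # fork a new master process (and its workers) alongside the old one
    kill -USR2 "$(cat /var/run/nginx.pid)"
    # the old master's pid is now in nginx.pid.oldbin; shut its workers down gracefully
    kill -QUIT "$(cat /var/run/nginx.pid.oldbin)"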
If I call nginx reload after updating my config to remove a reverse proxy from my list of upstreams, what happens to the requests that upstream is currently processing? Will they be served to the user without error?
Reference: http://wiki.nginx.org/CommandLine#Stopping_or_Restarting_Nginx
HUP is the signal sent to nginx when you reload the config file. Nginx will start new worker processes and gracefully shut down the old ones, so yes, requests that are currently in flight are still served without error.
HUP: configuration reload; start new worker processes with the new configuration, then gracefully shut down the old worker processes.