client closed connection in nginx error log - nginx

I am using openresty with redis for a project.
I am getting these messages in nginx error.log.
2016/07/13 23:08:05 [info] 28306#0: *110027 client closed connection while waiting for request, client: 27.97.70.20, server: 0.0.0.0:80
The total number of connections opened and the number of these messages are almost the same. I see that lots of people hit this message in different contexts, and the responses vary from place to place.
How should I proceed? Is this issue serious?
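For reference, these entries are logged at [info] level and normally just mean that a client dropped an idle keep-alive connection (or a speculative one it never used) before sending a request, so by themselves they are not a sign of trouble. If the goal is simply to quiet them, a minimal sketch (the log path and values are assumptions):

    error_log /var/log/nginx/error.log warn;   # log warnings and above, hiding [info] noise
    keepalive_timeout 30s;                     # close idle keep-alive connections sooner (default is 75s)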

Related

nginx recv() failed (104 connection reset by peer) while sending request to upstream

I am using Google App Engine (Flexible Environment) to deploy my application. My application is written in Flask and I am using Gunicorn. I have a requirement where a user can upload a file, and I save that file to a GCS bucket. It worked fine for me locally, but the moment I deployed it on App Engine it started giving me a 502 Bad Gateway.
When I looked into the logs it showed a warning first and then an error as below:
[warn] 34#34: *346 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000001
[error] 34#34: *346 writev() failed (104: Connection reset by peer) while sending request to upstream,
So apparently nginx is not able to send the request body to the upstream. I have tried increasing my server timeout, but it did not work. Please help!
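For what it's worth, the [warn] line about buffering the body to a temporary file is harmless by itself; the 502 comes from the writev() reset while nginx forwards the body to Gunicorn. If you control the nginx in front of Gunicorn (on App Engine Flexible the bundled proxy is managed for you, so treat this purely as a sketch of the relevant directives, with assumed values), the request-body limits and upstream timeouts are the usual things to check, along with the Gunicorn worker timeout (--timeout, default 30s):

    client_max_body_size 32m;       # uploads above this are rejected with 413 (default is 1m)
    client_body_buffer_size 1m;     # keep small bodies in memory instead of a temp file
    proxy_read_timeout 120s;        # give the upstream longer to handle a large upload (default 60s)
    proxy_send_timeout 120s;        # same for sending the buffered body upstream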

xxxx could not be resolved (110: Operation timed out)

When my app invokes other services over HTTP, a DNS lookup happens every time. At the beginning the lookups were normal and there were no timeouts, but after a while more and more of them started timing out. If I take the affected domain names and resolve them with the 'dig' command on the same Linux host, they all resolve fine. In my nginx.conf file I have resolver_timeout 60s (the default is 30s) and resolver 8.8.8.8. My app is deployed with openresty. How should I investigate this?
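One nginx-side thing worth checking is resolver caching and the timeout: with only 8.8.8.8 configured, every expired entry means a round trip to a public resolver, and a 60s resolver_timeout holds the request for the full minute whenever that round trip is lost. A sketch with assumed values:

    resolver 8.8.8.8 valid=300s ipv6=off;   # cache answers for 5 minutes and skip AAAA lookups
    resolver_timeout 10s;                   # fail fast instead of stalling the request for 60s

Pointing resolver at a local caching resolver (for example dnsmasq on 127.0.0.1) instead of a public one is another common way to take the flaky network hop out of the request path.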

Nginx is giving uWSGI very old requests?

I'm seeing a weird situation where either Nginx or uwsgi seems to be building up a long queue of incoming requests, and attempting to process them long after the client connection timed out. I'd like to understand and stop that behavior. Here's more info:
My Setup
My server uses Nginx to pass HTTPS POST requests to uWSGI and Flask via a Unix file socket. I have basically the default configurations on everything.
I have a Python client sending 3 requests per second to that server.
The Problem
After running the client for about 4 hours, the client machine started reporting that all the connections were timing out. (It uses the Python requests library with a 7-second timeout.) About 10 minutes later, the behavior changed: the connections began failing with 502 Bad Gateway.
I powered off the client. But for about 10 minutes AFTER powering off the client, the server-side uWSGI logs showed uWSGI attempting to answer requests from that client! And top showed uWSGI using 100% CPU (25% per worker).
During those 10 minutes, each uwsgi.log entry looked like this:
Thu May 25 07:36:37 2017 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /api/polldata (ip 98.210.18.212) !!!
Thu May 25 07:36:37 2017 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /api/polldata (98.210.18.212)
IOError: write error
[pid: 34|app: 0|req: 645/12472] 98.210.18.212 () {42 vars in 588 bytes} [Thu May 25 07:36:08 2017] POST /api/polldata => generated 0 bytes in 28345 msecs (HTTP/1.1 200) 2 headers in 0 bytes (0 switches on core 0)
And the Nginx error.log shows a lot of this:
2017/05/25 08:10:29 [error] 36#36: *35037 connect() to unix:/srv/my_server/myproject.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 98.210.18.212, server: example.com, request: "POST /api/polldata HTTP/1.1", upstream: "uwsgi://unix:/srv/my_server/myproject.sock:", host: "example.com:5000"
After about 10 minutes the uWSGI activity stops. When I turn the client back on, Nginx happily accepts the POST requests, but uWSGI gives the same "writing to a closed pipe" error on every request, as if it's permanently broken somehow. Restarting the webserver's docker container does not fix the problem, but rebooting the host machine fixes it.
Theories
In the default Nginx -> socket -> uWSGI configuration, is there a long queue of requests with no timeout? I looked in the uWSGI docs and I saw a bunch of configurable timeouts, but all default to around 60 seconds, so I can't understand how I'm seeing 10-minute-old requests being handled. I haven't changed any default timeout settings.
The application uses almost all the 1GB RAM in my small dev server, so I think resource limits may be triggering the behavior.
Either way, I'd like to change my configuration so that requests > 30 seconds old get dropped with a 500 error, rather than getting processed by uWSGI. I'd appreciate any advice on how to do that, and theories on what's happening.
This appears to be an issue on the uWSGI side of the stack.
It sounds like your backend code may be at fault in that it takes too long to process requests, does not implement any sort of rate limiting, and does not properly detect when the underlying connection has been terminated (hence the errors about your code writing to closed pipes, and possibly even starting to process new requests long after the underlying connections have been closed).
As per http://lists.unbit.it/pipermail/uwsgi/2013-February/005362.html, you might want to abort processing within your backend if not uwsgi.is_connected(uwsgi.connection_fd()).
You might want to explore https://uwsgi-docs.readthedocs.io/en/latest/Options.html#harakiri.
As a last resort, as per Re: Understanding "proxy_ignore_client_abort" functionality (2014), you might want to change uwsgi_ignore_client_abort from off to on. That keeps the ongoing uWSGI requests that have already been passed upstream alive even if the client subsequently disconnects, so you stop receiving the closed-pipe errors from uWSGI, and any concurrent-connection limits are enforced within nginx itself (otherwise nginx drops its connection to uWSGI as soon as the client disconnects, and has no clue how many requests are still queued up inside uWSGI for subsequent processing).
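A minimal sketch of where that directive would sit, reusing the socket path from the error log above (the location path itself is an assumption):

    location /api/ {
        include uwsgi_params;
        uwsgi_pass unix:/srv/my_server/myproject.sock;
        uwsgi_ignore_client_abort on;   # finish requests already handed to uWSGI even if the client goes away
    }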
This looks like it could be a DoS attack on nginx/uWSGI: 100% CPU usage together with nginx returning 502, 504 and 500 errors. IP spoofing is common in DoS attacks, so rule that out by checking the logs.

Handling PUT requests returning 403 errors with Django Rest Framework, nginx and uwsgi

I am testing porting an access-controlled web service implemented using Django REST Framework to nginx/uwsgi. When I'm testing PUT requests which return 403 errors because the user doesn't have permission for that endpoint, I sometimes get errors like this in the logs:
2016/02/09 06:42:05 [error] 574#0: *14978766 readv() failed (104: Connection reset by peer) while reading upstream, client: 10.10.10.10, server: test.whatever.com, request: "PUT /api/1.0/domains/name/Quest/page_content/name/Resit/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/ipp_api_uwsgi.soc:", host: "test.whatever.com"
There are a few existing questions about this problem. The suggested solutions are:
1. make sure you consume the request's POST data in the application, or
2. use the --post-buffering command-line option for uWSGI.
Option 1 does not seem like the right way to go: DRF's permissions module checks whether the user has access rights to the endpoint and rejects the PUT if they don't. The POST data is never accessed and should just be discarded.
Option 2 seems to fix the problem but I am concerned about performance and the impact on other, successful PUT requests.
Is option 2 the approach I should follow? Any other suggestions?
post-buffering will cause uWSGI to consume and buffer request bodies, so yes, it can affect performance, for example if someone makes a lot of requests they have no permission to perform: uWSGI will buffer them all instead of just rejecting them.
But you can handle it in the Django app, using middleware that simply dumps the whole request body to /dev/null when the user has no permission to perform the action.

upstream prematurely closed connection while reading response header from upstream, client

I'm getting this error in /var/log/messages on my FreeBSD box. I'm using nginx and spawn-fcgi with the memcache and APC modules enabled.
upstream prematurely closed connection while reading response header from upstream,
client HTTP/1.1", upstream: "fastcgi://unix:/tmp/fcgi.sock:", host:
I've had a similar error with unicorn + nginx.
The end result was that Unicorn was timing out due to a firewall misconfiguration, dying off and leaving nginx clueless as to what to do (nginx would then throw a 503).
Once the port was opened, my issue was resolved.
I've also seen this happen with an API call that takes a long time (longer than my 30s Unicorn timeout). I ended up shipping it off to a background job so that Unicorn didn't time out.
I had a similar issue with Nginx timing out with a RoR app when using an EC2 + Amazon RDS database instance.
The issue was resolved by editing the security group for the RDS instance to allow the EC2 instance's IP over port 5432. Just add a custom rule to the security group for the port you use to talk to the RDS instance, and whitelist the EC2 server's private IP address. It worked instantly after that!
In my case it was related to the PHP version. I was running the latest version of nginx with a slightly older version of PHP. The issue was fixed by updating PHP to the latest version.
