nginx uwsgi upstream timed out (110: Connection timed out) while reading upstream

I have a Python Flask application which outputs around 300 KB per request.
This application is hosted via the uWSGI Emperor with the configuration below.
[uwsgi]
chdir = /var/www/%n
socket = /etc/uwsgi/sockets/%n.sock
chmod-socket = 660
vacuum = true
processes = 4
threads = 20
virtualenv = /var/www/%n/.venv
module = app:app
logto = /var/log/uwsgi/%n.log
The uWSGI log has this line:
[pid: 16668|app: 0|req: 1/1] 127.0.0.1 () {46 vars in 700 bytes} [Wed May 2 04:56:24 2018] POST /context-path => generated 293595 bytes in 34172 msecs (HTTP/1.1 200) 2 headers in 75 bytes (3 switches on core 0)
announcing my loyalty to the Emperor...
And the nginx configuration is
location /context-path {
include uwsgi_params;
uwsgi_pass unix:/etc/uwsgi/sockets/app.sock;
uwsgi_read_timeout 300;
}
At the end of 5 minutes, I see the following in nginx error.log
2018/05/02 05:13:42 [error] 16994#16994: *7 upstream timed out (110: Connection timed out) while reading upstream, client: 127.0.0.1, server: 127.0.0.1, request: "POST /context-path HTTP/1.1", upstream: "uwsgi://unix:/etc/uwsgi/sockets/app.sock:", host: "127.0.0.1"
I receive partial data at the end of 5 minutes.
Increasing uwsgi_read_timeout doesn't affect anything.
Help?
Stack configuration:
nginx version: nginx/1.10.3 (Ubuntu)
uwsgi 2.0.17
Python 2.7.12
Flask==0.12.2

As far as I know, this error message can occur with any of the *_pass upstream directives, not just uwsgi_pass.
The ngx_http_upstream_module module is used to define groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives.
In my case I am using nginx as a load balancer in front of other servers via proxy_pass, so I needed the corresponding proxy_* timeout directive, proxy_read_timeout, to fix this error.
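For illustration, a minimal sketch of the kind of block I mean, with placeholder names and values (app_backend, the server addresses, and the timeouts are only examples, not recommendations):
upstream app_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
location / {
    proxy_pass http://app_backend;
    # how long nginx waits for a response from the proxied server
    proxy_read_timeout 300s;
    # related timeouts for connecting and for sending the request
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
}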

Related

uWSGI + Nginx, all upstreams become unavailable (Denial-of-Service) if there are too many headers in the request

If uWSGI receives a request with more headers than specified in the max-vars option, it does not return a 400 Bad Request; instead it breaks the socket connection (or something along those lines).
uWSGI error:
max vec size reached. skip this var.
Nginx perceives this as an error, marks the upstream as "unavailable" and passes the request to the next one; as a result, all the upstreams become unavailable for ~11s from a single request.
Nginx error:
2022/12/07 10:22:22 [error] 28#28: *4 upstream prematurely closed connection while reading response header from upstream, client: .....
2022/12/07 10:22:22 [warn] 28#28: *4 upstream server temporarily disabled while reading response header from upstream, client: .....
This does not happen if you have just one upstream in Nginx.
Increasing the max_fails option in the upstream changes nothing, because you can simply send more requests; filtering with proxy_next_upstream is not possible either, because Nginx perceives this as a generic error, and an error is always counted as an unsuccessful attempt.
The only option I see is setting max-vars (uWSGI) to a really high value (if possible).
Is there a way to set the max number of headers in Nginx, or "handle" this strange behaviour in uWSGI?
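For reference, a minimal sketch of what raising max-vars would look like on the uWSGI side (the values are placeholders, not tested recommendations):
[uwsgi]
# allow more per-request variables (headers); the uWSGI default is 64
max-vars = 256
# raising the header buffer as well may be needed for very large headers
buffer-size = 32768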

connection reset by peer upstream issue

I've installed the OSticket application on my nginx server.
The webpage opens only the first time; if I simply refresh the page, it gives a "connection reset by peer" upstream error.
I tried changing fastcgi_read_timeout and max_execution_time as described in https://laracasts.com/discuss/channels/forge/502-bad-gateway-with-large-file-uploads and https://www.scalescale.com/tips/nginx/configure-max_execution_time-in-php-fpm-using-nginx/#, but that didn't help.
nginx error log:
2017/08/07 22:15:08 [error] 26877#26877: *42 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.231.1.79, server: web.com, request: "GET /ticket/logo.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock:", host: "web.com", referrer: "https://web.com/ticket/"
PHP-FPM error log:
WARNING: [pool www] child 26883 exited on signal 11 (SIGSEGV) after 4270.119171 seconds from start
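For reference, a sketch of the two settings mentioned above as already tried (the 300-second values are only illustrative):
# nginx, inside the PHP location block
fastcgi_read_timeout 300;
# php.ini (or the FPM pool config)
max_execution_time = 300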

HHVM 502 gateway error on high load

I have a codebase that's running fine on HHVM.
I'm running some load tests with the Apache ab tool, but when I test with 100 concurrent requests I start to get 502 gateway errors.
I generally only see two types of entries in nginx's error log:
2015/01/10 09:42:48 [crit] 29794#0: *302 connect() to unix:/var/run/hhvm/hhvm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.56.211, server: , request: "GET /api/v2/checkaccess HTTP/1.1", upstream: "fastcgi://unix:/var/run/hhvm/hhvm.sock:", host: "instela.com"
2015/01/10 09:42:26 [error] 29794#0: *264 connect() to unix:/var/run/hhvm/hhvm.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.56.211, server: , request: "GET /api/v2/checkaccess HTTP/1.1", upstream: "fastcgi://unix:/var/run/hhvm/hhvm.sock:", host: "instela.com"
Under high load this error occurs on all load-balanced backend servers, so it's not a server-specific problem (I hope).
Usually HHVM comes back after some time, but many times I had to restart the daemon to bring the server back.
I am mostly using the default configuration. Here is my configuration: http://pastie.org/9823561

502 Bad Gateway error with nginx and unicorn

I have nginx 1.4.1 and unicorn set up on CentOS. I am getting a 502 Bad Gateway error and my nginx log shows this:
1 connect() to unix:/tmp/unicorn.pantry.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.192, server: , request: "GET / HTTP/1.1", upstream: "http://unix:/tmp/unicorn.pantry.sock:/", host: "192.168.1.30"
There is no /tmp/unicorn.pantry.sock file or directory. I'm thinking it might be a permission error and the file therefore can't be written; if so, who needs what permission? I have also read that I could use a TCP socket instead.
Also, I don't understand where the 192.168.1.192 comes from.
I just want to make it work. How can I do it?
OK, I figured this out: unicorn.sock was in my shared directory, so I needed to point unix: at it.
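For anyone hitting the same thing, a minimal sketch of what that change looks like, assuming the shared socket lives at /var/www/pantry/shared/unicorn.sock (the path is a placeholder for wherever unicorn actually creates its socket):
upstream unicorn {
    # point at the socket unicorn really creates, not /tmp/unicorn.pantry.sock
    server unix:/var/www/pantry/shared/unicorn.sock fail_timeout=0;
}
server {
    listen 80;
    location / {
        proxy_pass http://unicorn;
    }
}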

NGINX + uWSGI Connection Reset by Peer

I'm trying to host a Bottle application on NGINX using uWSGI.
Here's my nginx.conf:
location /myapp/ {
include uwsgi_params;
uwsgi_param X-Real-IP $remote_addr;
uwsgi_param Host $http_host;
uwsgi_param UWSGI_SCRIPT myapp;
uwsgi_pass 127.0.0.1:8080;
}
I'm running uwsgi as this
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request using the Dev HTTP Client, which hangs indefinitely when I send the request to
http://localhost/myapp
uWSGI server receives the request and prints
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but in nginx error log
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
from django.http import HttpResponse

def my_view(request):
    # ensure the POST data is read, even if you don't need it;
    # without this you get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better "fix" your app (even if I don't consider this a bug).
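A sketch of that workaround applied to the command line from the question (the 4096-byte threshold is only an example value):
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py --post-buffering 4096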
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this you need to make sure to always either read the whole request body, or not send a body if it is not necessary (e.g. for a DELETE).
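Since the app in this question is Bottle rather than Django, here is a minimal sketch of the same idea for a Bottle handler (the route and names are placeholders):
from bottle import post, request, default_app

@post('/myapp')
def handler():
    # read the request body even if you don't need it,
    # otherwise nginx may log "Connection reset by peer"
    _ = request.body.read()
    return "Hello World"

# uWSGI's --wsgi-file loader looks for a callable named "application"
application = default_app()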
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there are no connection resets.
Example uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
print: Myproject Configuration Started
socket: /var/tmp/myproject_uwsgi.sock
pythonpath: /sites/myproject/myproj
env: DJANGO_SETTINGS_MODULE=settings
module: wsgi
chdir: /sites/myproject/myproj
daemonize: /sites/myproject/log/uwsgi.log
max-requests: 4000
buffer-size: 32768
harakiri: 30
harakiri-verbose: true
reload-mercy: 8
vacuum: true
master: 1
post-buffering: 8192
processes: 4
no-orphans: 1
touch-reload: /sites/myproject/log/uwsgi
