NGINX Ingress Error: upstream sent invalid chunked response while reading upstream - nginx

We are deploying a Flask server using uWSGI. This works fine when we call the service directly, but we started getting a 502 Bad Gateway with an empty response when calling it through NGINX.
Looking at the NGINX logs we see:
upstream sent invalid chunked response while reading upstream
We already tried the solution from NGINX Error: upstream sent invalid chunked response while reading upstream, but it didn't work.
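One commonly suggested thing to check (a sketch under assumptions, not a confirmed diagnosis for this setup) is that nginx and uWSGI agree on the protocol spoken over the upstream socket: if uWSGI is started with --socket (the binary uwsgi protocol), nginx should talk to it with uwsgi_pass rather than proxy_pass, otherwise nginx receives responses it cannot parse as HTTP. Addresses and paths below are placeholders.
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8080;    # nginx speaks the uwsgi protocol; no HTTP chunked framing involved
}

# Alternatively, keep proxy_pass and make uWSGI speak HTTP on that socket:
#   uwsgi --http-socket 127.0.0.1:8080 --module app:app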

Related

uWSGI + Nginx, all upstreams become unavailable (Denial-of-Service) if there are too many headers in the request

If uWSGI receives a request with more headers than specified in the max-vars option, it does not return a 400 Bad Request; it breaks the socket connection (or something like that).
uWSGI error:
max vec size reached. skip this var.
Nginx perceives this as an error, marks the upstream as "unavailable", and passes the request to the next one; as a result, all of the upstreams become unavailable for ~11 s after just one request.
Nginx error:
2022/12/07 10:22:22 [error] 28#28: *4 upstream prematurely closed connection while reading response header from upstream, client: .....
2022/12/07 10:22:22 [warn] 28#28: *4 upstream server temporarily disabled while reading response header from upstream, client: .....
This does not happen if you have just one upstream in Nginx.
Increasing the max_fails option on the upstream changes nothing, because an attacker can simply send more requests. Filtering with proxy_next_upstream is not possible either, because Nginx perceives this as an error, and an error is always counted as an unsuccessful attempt.
The only option I see is setting max-vars (uWSGI) to a really high value (if possible).
Is there a way to set the max number of headers in Nginx, or "handle" this strange behaviour in uWSGI?
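One possible mitigation (a sketch with illustrative values, assuming stock nginx and uWSGI options): raise max-vars on the uWSGI side so legitimate requests fit, and cap client header sizes on the nginx side so oversized requests are rejected by nginx itself (with a 400) before they ever reach an upstream. nginx has no directive for the number of headers, but large_client_header_buffers effectively bounds the total header volume it will accept.
# uWSGI side (the uwsgi request packet must also fit, hence buffer-size):
#   uwsgi ... --max-vars 256 --buffer-size 32768

# nginx side:
http {
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;    # at most 4 buffers of 8k each for oversized headers
}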

PayPal API causes 502 on nginx/fpm

I'm developing PayPal Adaptive Payments on one of my sites. Problems came when I deployed the app to a server with NGINX + PHP-FPM. When I try to process a PayPal payment, nginx throws a 502 error.
18777#0: *711 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: xx.xx.xx.xx, server: www.domain.com, request: "GET /payment/5584 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "domain.com", referrer: "http://domain.com/process/5584"
PHP-FPM uses a socket file to communicate with nginx. All other apps on the server run correctly. PHP has json, curl and openssl enabled.
Has anyone had a similar problem with PayPal? Any tips on what to look for when configuring nginx/fpm for use with the PayPal API?
SOLVED
Uncommenting one line in the php-fpm pool conf solved the first problem:
catch_workers_output = yes
Then I saw the next error:
Message Unknown cipher in list: TLSv1
Removing:
CURLOPT_SSL_CIPHER_LIST => 'TLSv1'
from /sdk-core-php/lib/PPHttpConfig.php solved this problem, and now my payments run correctly :)

Nginx Error [1049#0]

I am using Nginx as Proxy for Websocket SSL upgrade to Asterisk backend.
However, at times my users just can't connect to Asterisk. On the Asterisk end, I do not see any connection attempt.
So I looked at the nginx error log and found a lot of errors like this:
[error] 1049#0: *28726 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 103.246.xx.xx, server: xxx.xxx.io, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8088/ws", host: "yyy.xxx.io"
Is there any clue on how I can solve this?
Is this correct? http://0.0.0.0:8088/ws
In the Internet Protocol version 4 the address 0.0.0.0 is a non-routable
meta-address used to designate an invalid, unknown or non applicable target.
To give a special meaning to an otherwise invalid piece of data is
an application of in-band signaling.
Source: http://en.wikipedia.org/wiki/0.0.0.0
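For reference, a minimal WebSocket proxy sketch (assuming Asterisk's HTTP/WebSocket interface listens on 127.0.0.1:8088; the address, port and names are placeholders) that points proxy_pass at a routable address and forwards the upgrade headers:
location /ws {
    proxy_pass http://127.0.0.1:8088/ws;       # a routable backend address, not 0.0.0.0
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;    # forward the WebSocket upgrade
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}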

NGINX + uWSGI Connection Reset by Peer

I'm trying to host a Bottle application on NGINX using uWSGI.
Here's my nginx.conf
location /myapp/ {
    include uwsgi_params;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_param UWSGI_SCRIPT myapp;
    uwsgi_pass 127.0.0.1:8080;
}
I'm running uwsgi like this:
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request (using the Dev HTTP Client), and it hangs indefinitely when I send the request to
http://localhost/myapp
The uWSGI server receives the request and prints:
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but in the nginx error log:
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
from django.http import HttpResponse

def my_view(request):
    # Ensure the POST body is read, even if you don't need it.
    # Without this you get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot POST data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better "fix" your app (even if I do not consider this a bug).
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this you need to make sure to always either read the whole request body or not to send a body if it is not necessary (for a DELETE e.g.).
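If changing the application is not an option, the --post-buffering workaround mentioned above can be sketched like this (socket, paths and size are illustrative): it tells uWSGI to read the request body from the socket itself, so the connection is not clobbered even when the app ignores the body.
uwsgi --socket 127.0.0.1:8080 --plugin python --wsgi-file ./myApp/myapp.py --post-buffering 4096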
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there are no connection resets.
Example uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
  print: Myproject Configuration Started
  socket: /var/tmp/myproject_uwsgi.sock
  pythonpath: /sites/myproject/myproj
  env: DJANGO_SETTINGS_MODULE=settings
  module: wsgi
  chdir: /sites/myproject/myproj
  daemonize: /sites/myproject/log/uwsgi.log
  max-requests: 4000
  buffer-size: 32768
  harakiri: 30
  harakiri-verbose: true
  reload-mercy: 8
  vacuum: true
  master: 1
  post-buffering: 8192
  processes: 4
  no-orphans: 1
  touch-reload: /sites/myproject/log/uwsgi

nginx - read custom header from upstream server

I am using nginx as a reverse proxy and trying to read a custom header from the response of an upstream server (Apache) without success. The Apache response is the following:
HTTP/1.0 200 OK
Date: Fri, 14 Sep 2012 20:18:29 GMT
Server: Apache/2.2.17 (Ubuntu)
X-Powered-By: PHP/5.3.5-1ubuntu7.10
Connection: close
Content-Type: application/json; charset=UTF-8
My-custom-header: 1
I want to read the value of My-custom-header and use it in an if clause:
location / {
    # ...
    # get the My-custom-header value here
    # ...
}
Is this possible?
It's not only possible, it's easy:
in nginx the response header values are available through a variable (one per header).
See http://wiki.nginx.org/HttpCoreModule#.24sent_http_HEADER for the details on those variables.
In your example the variable would be $sent_http_my_custom_header.
I was facing the same issue. I tried both $http_my_custom_header and $sent_http_my_custom_header, but neither worked for me.
I eventually solved the issue by using $upstream_http_my_custom_header.
When using NGINX as a proxy, there are four sets of headers:
client -> nginx: the client request headers
nginx -> upstream: the upstream request headers
upstream -> nginx: the upstream response headers
nginx -> client: the client response headers
You appear to be asking about the upstream response headers. Those are found in the $upstream_http_name variables.
However, take into account that any response headers are only set after the headers from the upstream server response have been received. Any if directives are run before sending the upstream request, and will not have access to any response headers! In other words, if directives are run after the client request has been received, before making the upstream request.
If you need to change how a response is handled, you can however use a map directive to set variables based on response headers, then use those variables in add_header (to set client response headers), log_format, or any other directives that are active during the response phases (internally, the NGX_HTTP_CONTENT_PHASE and NGX_HTTP_LOG_PHASE phases). For more complex control you'll have to use a scripting add-on such as the Lua module (e.g. the header_filter_by_lua_block directive).
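A sketch of that map approach (the header, variable and upstream names are made up for illustration): because map variables are evaluated lazily, a variable derived from $upstream_http_... can be used in directives that run during the response phase, such as add_header or log_format.
map $upstream_http_my_custom_header $custom_header_flag {
    default "absent";
    "1"     "present";
}

server {
    location / {
        proxy_pass http://backend;                     # placeholder upstream
        add_header X-Custom-Seen $custom_header_flag;  # evaluated once the upstream response headers are in
    }
}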
To read or set individual headers, use:
from        to          type        read (variable)        write (directive)
client      nginx       request     $http_name             –
nginx       upstream    request     –                      proxy_set_header
upstream    nginx       response    $upstream_http_name    –
nginx       client      response    $sent_http_name        add_header
NGINX copies certain headers from client request to upstream request, and from upstream response to client response using various proxy_ directives, giving you options to omit or explicitly include headers for either direction. So if an upstream response header is only found in $upstream_http_name variables, then those headers were specifically not copied to the client response, and the set of available $sent_http_name variables will include any extra headers set by NGINX that are not present in the upstream response.
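A short sketch of those copy/omit controls (the header names and upstream are illustrative):
location / {
    proxy_pass http://backend;                  # placeholder upstream
    proxy_set_header X-Real-IP $remote_addr;    # add/override a header on the upstream request
    proxy_hide_header X-Internal-Debug;         # keep an upstream response header out of the client response
    add_header X-Served-By "front-nginx";       # extra client response header, visible as $sent_http_x_served_by
}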
Use $http_my_custom_header (note that $http_... variables hold request headers sent by the client; for an upstream response header use $upstream_http_..., as described above).
You can write something like:
set $my_header $http_my_custom_header;
if ($my_header != 'some-value') {
    # do something
}
