Why does nginx proxy_pass close my connection?

The documentation says the following:
Sets the HTTP protocol version for proxying. By default, version 1.0 is used. Version 1.1 is recommended for use with keepalive connections and NTLM authentication.
In my nginx config I have:
location / {
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:1980;
}
Hitting http://127.0.0.1:1980 directly, I can see my app gets many requests (when I refresh) on one connection. This is the response I send:
HTTP/1.1 200 OK\nContent-Type:text/html\nContent-Length: 14\nConnection: keep-alive\n\nHello World!
However, nginx makes one request and then closes the connection. WTH? I can see nginx sends the "Connection: keep-alive" header, and I can see it adds the Server and Date headers. I tried adding proxy_set_header Connection "keep-alive"; but that didn't help.
How do I get nginx to not close the connection every time?

In order for Nginx to keep the connection alive, the following configuration is required:
Configure the appropriate headers: HTTP/1.1, and a Connection header that does not contain the value "close" (the actual value doesn't matter; "keep-alive" or just an empty value works)
Use an upstream block with the keepalive directive; a plain proxy_pass url won't work
The origin server should have keep-alive enabled (see the sketch at the end of this answer)
So the following Nginx configuration makes keepalive work for you:
upstream keepalive-upstream {
    server 127.0.0.1:1980;
    keepalive 64;
}

server {
    location / {
        proxy_pass http://keepalive-upstream;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
    }
}
Make sure your origin server doesn't finalise the connection itself; RFC 793, Section 3.5, describes how a connection terminates:
A TCP connection may terminate in two ways: (1) the normal TCP close
sequence using a FIN handshake, and (2) an "abort" in which one or
more RST segments are sent and the connection state is immediately
discarded. If a TCP connection is closed by the remote site, the local
application MUST be informed whether it closed normally or was
aborted.
A bit more detail can be found in this other answer on Stack Overflow.
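To illustrate that third requirement, here is a minimal sketch of an origin server that honours keep-alive. It is a hypothetical stand-in for the app on 127.0.0.1:1980, written with Python's standard http.server; the body is made up to match the Content-Length from the question:

# Hypothetical stand-in for the origin app on 127.0.0.1:1980.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent connections require HTTP/1.1

    def do_GET(self):
        body = b"Hello World!\r\n"  # 14 bytes, matching "Content-Length: 14"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        # With HTTP/1.1 and an accurate Content-Length, http.server keeps
        # the connection open instead of closing it after every response.
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 1980), Handler).serve_forever()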

keepalive should be enabled in the upstream block, not with a direct proxy_pass http://ip:port.
For HTTP, the proxy_http_version directive should be set to “1.1” and the “Connection” header field should be cleared, like this:
upstream keepalive-upstream {
    server 127.0.0.1:1980;
    keepalive 23;
}

location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://keepalive-upstream;
}

How to make Nginx reverse proxy wait until upstream comes online

I have a server app that listens on a UNIX socket, with Nginx serving as a reverse proxy.
Now I want Nginx to wait until my app comes online when e.g. I deploy an update and restart it, without returning any errors to the clients.
This is what I have in my Nginx config:
location / {
    # proxy_pass http://localhost:8080;
    proxy_pass http://unix:/tmp/MyApp.sock;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache_bypass $http_upgrade;
    proxy_connect_timeout 60;
    proxy_send_timeout 60;
    proxy_read_timeout 60;
}
However, whenever my app is down Nginx returns 502 Bad Gateway immediately. Apparently none of the proxy_*_timeout settings help.
Same happens with a local TCP socket. With UNIX sockets, when I shut down the app I make sure the socket file is deleted, so that Nginx can see there's no app running.
How can I tell it to actually wait for a certain period of time until the socket becomes available?
I don't think core nginx has such functionality. However, something similar can be achieved using the lua-nginx-module. Even if using that module isn't applicable for you, I'll post a working example here in case it helps someone else.
error_page 502 = @error_502;
location / {
    proxy_pass http://localhost:8080;
    ...
}
location = /heartbeat {
    internal;
    proxy_pass http://localhost:8080;
}
location @error_502 {
    rewrite_by_lua_block {
        local timeout = 10
        local uri = ngx.var.uri
        local args = ngx.var.args
        local res = { status = 0 }
        while timeout > 0 do
            ngx.sleep(1)
            res = ngx.location.capture("/heartbeat")
            if res.status == 200 then break end
            timeout = timeout - 1
        end
        if res.status == 200 then
            ngx.exec(uri, args)
        end
    }
    access_by_lua_block {
        ngx.status = ngx.HTTP_SERVICE_UNAVAILABLE
        ngx.say("I'd waited too long... exiting.")
        ngx.exit(ngx.OK)
    }
}
This code should be straightforward enough not to require additional comments. The ngx.sleep used here is non-blocking and takes its argument in seconds (fractional values down to millisecond resolution are allowed). Your app should be able to serve the /heartbeat route for this to work (ideally consuming as little processing time as possible). I'm sure this can be adapted to use the UNIX socket too (you may need to move your upstream definition into a separate upstream block).
Important note. Since this solution relies on ngx.location.capture for making subrequests, it is incompatible with the HTTP/2 protocol because of this limitation (read the whole discussion to find out possible workarounds if needed).
I would not call it a solution, but there is a way to achieve this with a resolver. You need a DNS server for that; Docker provides one, but in your case you would need to set one up on your own server.
server {
    resolver 127.0.0.11 valid=120s;  # DNS IP
    resolver_timeout 60;             # timeout for the resolver's response

    location / {
        set $upstream_service URI;   # URI = DNS-Name:Port
        proxy_pass http://$upstream_service;
    }
}
This way nginx can't check the availability of the service up front and has to ask the resolver instead. The resolver's answer is awaited for up to its timeout. If there is no answer within that time, nginx returns a 502; if the service comes back within that period, the name resolves and nginx responds with 200.
But I have no clue whether this works with a UNIX socket.

How to add the 'upstream try' to the request which I send to the backend server

I have an nginx server which acts as a load balancer.
nginx is configured for 3 upstream tries:
proxy_next_upstream_tries 3;
I am looking for a way to pass to the backend server the current try number of this request, i.e. first, second or last.
I believe it can be done by passing this data in a header, but how can I configure this in nginx, and where do I take this data from?
Thanks
I sent this question to Nginx support and they provided me this explanation:
As long as you are using proxy_next_upstream mechanism for
retries, what you are trying to do is not possible. The request
which is sent to next servers is completely identical to the one
sent to the first server nginx tries - or, more precisely, this is the
same request, created once and then sent to different upstream
servers as needed.
If you want to know on the backend if it is handling the first
request or it processes a retry request after an error, a working
option would be to switch proxy_next_upstream off, and instead
retry requests on 502/504 errors using the error_page directive.
See http://nginx.org/r/error_page for examples on how to use
error_page.
So, I did as they advised me:
proxy_intercept_errors on;

location / {
    proxy_pass http://example.com;
    proxy_set_header NlbRetriesCount 0;
    error_page 502 404 @fallback;
}

location @fallback {
    proxy_pass http://example.com;
    proxy_set_header NlbRetriesCount 1;
}

Nginx Reverse Proxy WebSocket Timeout

I'm using java-websocket for my websocket needs, inside a Wowza application, and using nginx for SSL, proxying the requests to Java.
The problem is that the connection seems to be cut after exactly 1 hour, server-side. The client-side doesn't even know that it was disconnected for quite some time. I don't want to just adjust the timeout on nginx, I want to understand why the connection is being terminated, as the socket is functioning as usual until it isn't.
EDIT:
Forgot to post the configuration:
location /websocket/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    include conf.d/proxy_websocket;
    proxy_connect_timeout 1d;
    proxy_send_timeout 1d;
    proxy_read_timeout 1d;
}
And that included config:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:1938/;
Nginx/1.12.2
CentOS Linux release 7.5.1804 (Core)
Java WebSocket 1.3.8 (GitHub)
The timeout could be coming from the client, nginx, or the back-end. When you say that it is being cut "server side" I take that to mean that you have demonstrated that it is not the client. Your nginx configuration looks like it shouldn't time out for a day, so that leaves only the back-end.
Test the back-end directly
My first suggestion is that you try connecting directly to the back-end and confirm that the problem still occurs (taking nginx out of the picture for troubleshooting purposes). Note that you can do this with command line utilities like curl, if using a browser is not practical. Here is an example test command:
time curl --trace-ascii curl-dump.txt -i -N \
    -H "Host: example.com" \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: BOGUS+KEY+HERE+IS+FINE==" \
    http://127.0.0.1:8080
In my (working) case, running the above example stayed open indefinitely (I stopped with Ctrl-C manually) since neither curl nor my server was implementing a timeout. However, when I changed this to go through nginx as a proxy (with default timeout of 1 minute) as shown below I saw a 504 response from nginx after almost exactly 1 minute.
time curl -i -N --insecure \
    -H "Host: example.com" \
    https://127.0.0.1:443/proxied-path
HTTP/1.1 504 Gateway Time-out
Server: nginx/1.14.2
Date: Thu, 19 Sep 2019 21:37:47 GMT
Content-Type: text/html
Content-Length: 183
Connection: keep-alive
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
real 1m0.207s
user 0m0.048s
sys 0m0.042s
Other ideas
Someone mentioned trying proxy_ignore_client_abort but that shouldn't make any difference unless the client is closing the connection. Besides, although that might keep the inner connection open I don't think it is able to keep the end-to-end stream intact.
You may want to try proxy_socket_keepalive, though that requires nginx >= 1.15.6.
Finally, there's a note in the WebSocket proxying doc that hints at a good solution:
Alternatively, the proxied server can be configured to periodically send WebSocket ping frames to reset the timeout and check if the connection is still alive.
If you have control over the back-end and want connections to stay open indefinitely, periodically sending "ping" frames to the client should prevent the connection from being closed due to inactivity (making proxy_read_timeout unnecessary), no matter how long it stays open or how many middle-boxes are involved. If the client is a web browser, no change is needed on the client side, since responding to pings is part of the WebSocket spec.
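As an illustration of that approach, here is a minimal sketch of a back-end that sends periodic ping frames. It uses Python's third-party websockets package rather than the asker's java-websocket stack, and the port and ping interval are assumptions:

import asyncio
import websockets  # pip install websockets

async def echo(ws):
    # Application logic: echo messages back; pings are sent by the library.
    # (Older websockets versions used the handler signature (ws, path).)
    async for message in ws:
        await ws.send(message)

async def main():
    # ping_interval tells the library to send a ping frame every 30 seconds,
    # so the proxied connection never looks idle to nginx.
    async with websockets.serve(echo, "127.0.0.1", 1938, ping_interval=30):
        await asyncio.Future()  # run forever

asyncio.run(main())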
Most likely it's because your configuration for the websocket proxy needs tweaking a little, but since you asked:
There are some challenges that a reverse proxy server faces in
supporting WebSocket. One is that WebSocket is a hop‑by‑hop protocol,
so when a proxy server intercepts an Upgrade request from a client it
needs to send its own Upgrade request to the backend server, including
the appropriate headers. Also, since WebSocket connections are long
lived, as opposed to the typical short‑lived connections used by HTTP,
the reverse proxy needs to allow these connections to remain open,
rather than closing them because they seem to be idle.
Within your location directive which handles your websocket proxying, you need to include the headers; this is the example Nginx gives:
location /wsapp/ {
    proxy_pass http://wsbackend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
This should now work because:
NGINX supports WebSocket by allowing a tunnel to be set up between a
client and a backend server. For NGINX to send the Upgrade request
from the client to the backend server, the Upgrade and Connection
headers must be set explicitly, as in this example
I'd also recommend you have a look at the Nginx Nchan module, which adds websocket functionality directly into Nginx. It works well.

Under tornado v4+ WebSocket connections get refused with 403

I have an older tornado server that handles vanilla WebSocket connections. I proxy these connections, via Nginx, from wss://info.mydomain.com to wss://mydomain.com:8080 in order to get around customer proxies that block non-standard ports.
After the recent upgrade to Tornado 4.0 all connections get refused with a 403. What is causing this problem and how can I fix it?
Tornado 4.0 introduced an on-by-default same-origin check. This checks that the Origin header set by the browser is the same as the Host header.
The code looks like:
def check_origin(self, origin):
    """Override to enable support for allowing alternate origins.

    The ``origin`` argument is the value of the ``Origin`` HTTP header,
    the url responsible for initiating this request.

    .. versionadded:: 4.0
    """
    parsed_origin = urlparse(origin)
    origin = parsed_origin.netloc
    origin = origin.lower()

    host = self.request.headers.get("Host")

    # Check to see that origin matches host directly, including ports
    return origin == host
In order for your proxied websocket connection to still work, you will need to override check_origin on the WebSocketHandler and whitelist the domains that you care about. Something like this:
import re

from tornado import websocket

class YouConnection(websocket.WebSocketHandler):

    def check_origin(self, origin):
        return bool(re.match(r'^.*?\.mydomain\.com', origin))
This will let the connections coming in from info.mydomain.com get through as before.
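As a quick, purely illustrative sanity check of that pattern (the origin strings below are made up):

import re

pattern = r'^.*?\.mydomain\.com'

# Matches: the Origin header includes the scheme, and the string
# "https://info.mydomain.com" contains ".mydomain.com".
print(bool(re.match(pattern, "https://info.mydomain.com")))  # True

# No match: the bare apex domain has no "." right before "mydomain.com".
print(bool(re.match(pattern, "https://mydomain.com")))  # False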
I would like to propose an alternative solution: instead of messing with the Tornado application code, I solved the issue by telling nginx to fix the Host header:
location /ws {
    proxy_set_header Host $host;
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

Play! Framework returns empty header for websocket connection if request comes through nginx

I'm using Nginx 1.3.7 and Play! Framework 2.1, and I need to proxy my HTTP, HTTPS and WebSocket connections to the Play! server through nginx.
I rely on the websocket proxy feature of the nginx trunk, and I set the "Upgrade" and "Connection" headers to correctly forward them for the websocket connections (http://nginx.org/en/docs/http/websocket.html):
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

location / {
    proxy_pass http://my-backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
I made sure that Play! gets the correct headers during the websocket initialization. The request.headers object in Play! looks exactly the same with or without nginx.
Map(
    Cache-Control -> Buffer(no-cache),
    Connection -> Buffer(Upgrade),
    Host -> Buffer(my-backend),
    Origin -> Buffer(https://my-host:8443),
    Pragma -> Buffer(no-cache),
    Sec-WebSocket-Extensions -> Buffer(x-webkit-deflate-frame),
    Sec-WebSocket-Key -> Buffer(nk5JVO4I5QRMQnSxAJaRCg==),
    Sec-WebSocket-Version -> Buffer(13),
    Upgrade -> Buffer(websocket)
)
Here is the problem: in case the request comes through nginx, the response from Play! is empty and doesn't contain any headers, just the protocol version: "HTTP/1.1 0 ".
The correct response from Play! would look like this:
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Sec-WebSocket-Accept: YHVb1xdsVqaObgQxqksBQPhdkvc=
Upgrade: websocket
Yep, the solution is of course to use the right version of nginx: 1.3.7 fails to forward the "Connection: Upgrade" header, because the feature was only introduced in nginx 1.3.13.
I recommend using the latest dev/trunk version.
