NGINX closes the upstream connection in 60 seconds - nginx

I am using an Nginx proxy server as a reverse proxy and HTTPS load balancer. Clients connect to the backend server through the reverse proxy in a load-balanced environment. I have set up the HTTPS configuration correctly (SSL certificates and all), so my SSL traffic goes through the proxy. In my case, the server gracefully disconnects the connection after 120 seconds (the IDLE_TIMEOUT of my server), but before that, the nginx proxy itself closes the connection after 60 seconds. This happens on every connection cycle. Because of this, my client doesn't get an SSL disconnect event and only receives a TCP socket close event. If I set the IDLE_TIMEOUT of my server to less than 60 seconds, everything works fine.
I want to know if there is a timeout on the nginx server that I need to configure to keep the connection open for more than 60 seconds.
Ajay

I found the solution; copying it here.
Set proxy_read_timeout and client_body_timeout to the timeout you want.
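For example, a minimal sketch of where those directives go (the location block and upstream name are placeholders; only the two directive names come from the answer above). The values just need to exceed the upstream's 120-second idle timeout:

location / {
    proxy_pass https://backend;     # placeholder upstream
    proxy_read_timeout 180s;        # time nginx waits between two reads from the upstream (default 60s)
    client_body_timeout 180s;       # time nginx waits between two reads of the client request body (default 60s)
}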

Related

Nginx tcp stream proxy module - proxy_timeout counter not resetting after interacting with application

I have an nginx proxy server that is load balancing (LB) an application over TCP port 1200. However, my application experiences a "network/server side error" disconnect around the 10-minute mark.
When I directly connect to the application (skipping the LB), I do not get any kind of timeout. So there isn't a timeout setting built into the application.
When I connect to the application through the LB, I found it took about 10 minutes before the application reported a "network/server side error" followed by a disconnect of the application.
I've correlated it to this particular nginx directive:
http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout
Syntax:  proxy_timeout timeout;
Default: proxy_timeout 10m;
Context: stream, server
Sets the timeout between two successive read or write operations on client or proxied server connections.
If no data is transmitted within this time, the connection is closed.
So I set the proxy_timeout value to 14 minutes, which correctly increased the time before I got the "network/server side error" message from 10 to 14 minutes.
After figuring this out, I reconnected to the application through the LB. I sat idle for about 6 minutes, interacted with the application for a moment, and then proceeded to watch for any timeouts. Despite interacting with the application, my session still disconnected at the 14-minute mark.
What's going on here?
My preference is to set some kind of "idle timeout" for the application in nginx: if you don't interact with the application for X continuous minutes, it disconnects, and if you do interact, the timeout counter resets to 0. But this proxy_timeout counter doesn't seem to reset when you interact with the application.
Here is my nginx config:
stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.xxx.yyy.20:1200 weight=1 max_fails=0;
        server 10.xxx.yyy.21:1200 weight=1 max_fails=0;
    }

    server {
        listen 1200;
        proxy_timeout 14m;
        proxy_pass backend;
    }
}
Is there another directive I should be using?
PS:
Here is the debug statement that corresponds with the connection closure:
2023/01/20 14:36:28 [info] 1788#0: *29 connection timed out (110: Connection timed out)
while proxying connection, client: 10.xxx.yyy.30, server: 0.0.0.0:1200, upstream: "10.xxx.yyy.21:1200",
bytes from/to client:1408/1481, bytes from/to upstream:1481/1408

Can nginx explicitly close websocket connection when graceful shutdown via nginx -s quit?

I configured an nginx instance as a reverse proxy for a WebSocket server and established a WebSocket connection between client and server, following the official tutorial https://www.nginx.com/blog/websocket-nginx/.
Then I ran nginx -s quit to gracefully shut down nginx.
I found that a worker process stays in the "shutting down" state, and I can still send messages over the established WebSocket connection; the nginx master and worker processes then hang until the connection times out.
I'd like to know whether nginx can tell both the client and the server to close the socket connection at the transport level and exit normally, instead of waiting for the WebSocket to time out.
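For context, the reverse-proxy setup from that tutorial looks roughly like the sketch below (the port, address, and upstream name are placeholders); the Upgrade/Connection headers are what keep the WebSocket connection open through nginx:

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream websocket {
        server 127.0.0.1:8010;     # placeholder backend
    }

    server {
        listen 8020;
        location / {
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}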

How to make the nginx upstream module send responses to the client synchronously

I'm setting up a live broadcast website. I use nginx as a reverse proxy and deploy multiple flv-live-stream processes behind nginx (binary programs written in C++). Clients maintain long-lived connections with nginx. In my flv-live-stream program, I count the video frames already sent to predict whether the client is playing smoothly.
But I found there is a strange buffer in the upstream module. Even if the client loses 100% of its packets, the back-end process can still send to nginx for 2~3 seconds, almost 2.5~3 MB.
Is there a method by which the response can be passed to the client synchronously, as soon as it is received from the back-end, so that when nginx cannot send data to the client (e.g. the client is losing packets), nginx immediately stops accepting data from the back-end?
I have already set
listen 80 sndbuf=64k rcvbuf=64k;
proxy_buffering off;
fastcgi_buffering off;
Can anyone help? Thanks!
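For reference, a sketch of the relevant directives in one place (the upstream name and location are placeholders; proxy_buffer_size is an addition of mine, since with buffering disabled nginx still reads at most that much from the upstream at a time):

server {
    listen 80 sndbuf=64k rcvbuf=64k;

    location /live {
        proxy_pass http://flv_backend;   # placeholder upstream
        proxy_buffering off;             # pass the response to the client as soon as it is received
        proxy_buffer_size 4k;            # with buffering off, this caps how much nginx reads from the backend at once
        tcp_nodelay on;
    }
}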

Freezing haproxy traffic with maxconn 0 and keepalive connections

Since haproxy v1.5.0 it has been possible to temporarily stop reverse-proxying traffic to frontends using the
set maxconn frontend <frontend_name> 0
command.
I've noticed that if haproxy is configured to maintain keepalive connections between haproxy and a client, those connections continue to be served, whereas new ones keep waiting for the frontend to be "un-paused".
The question is: is it possible to terminate the current keepalive connections gracefully, so that clients are required to establish new connections?
I've only found the shutdown session and shutdown sessions commands, but they are obviously not graceful at all.
The purpose of all of this is to make some changes on the server seamlessly; otherwise, with the current configuration, it would require a scheduled maintenance window.
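To illustrate the mechanism being discussed: the command is issued over the haproxy runtime (stats) socket, for example via socat (the socket path, frontend name, and the limit of 2000 are assumptions):

# pause the frontend: existing keepalive connections keep being served,
# new connections wait until the limit is raised again
echo "set maxconn frontend fe_main 0" | socat stdio /var/run/haproxy.sock

# resume
echo "set maxconn frontend fe_main 2000" | socat stdio /var/run/haproxy.sock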

nginx upstream server "out of ports"

I'm using nginx as a reverse proxy, and I find more than 30k ports in the TIME_WAIT state on the upstream server (Windows 2003). I know my servers are "out of ports", as discussed here (http://nginx.org/pipermail/nginx/2009-April/011255.html), and I have set both nginx and the upstream server to reuse TIME_WAIT ports and recycle them more quickly.
[sysctl -p]
……
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
But nginx hangs and the "connection timed out while connecting to upstream" error can still be found in the nginx error log when the upstream RPS is higher than 1000 for about a minute. When the upstream is Windows, the server runs "out of ports" within seconds.
Any ideas? A connection pool with a waiting queue? Maxim Dounin wrote a useful module to keep connections to memcached alive, but why can't it support a web server?
I am new to nginx, but from what I know so far, you need to reduce your net.ipv4.tcp_fin_timeout value, which defaults to 60 seconds. Out of the box, nginx doesn't support HTTP connection pooling with the backend, so every request to the backend creates a new connection. With 64K ports and a 60-second wait before each port can be reused, the average rate cannot exceed roughly 64K / 60 ≈ 1K requests per second. You can either reduce the net.ipv4.tcp_fin_timeout value on both the nginx server and the backend server, or assign multiple IP addresses to the backend box and configure nginx to treat these "same servers" as different servers.
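A sketch of that second suggestion (the addresses and upstream name are placeholders): the same backend box is listed under several of its IP addresses, so each entry gets its own pool of source ports for nginx-to-backend connections:

upstream backend_pool {
    # the same physical backend reachable via multiple addresses
    server 10.0.0.21:80;
    server 10.0.0.22:80;
    server 10.0.0.23:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}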
