I'm using nginx as a reverse proxy and I see more than 30k ports in the TIME_WAIT state on the upstream server (Windows 2003). I know my servers are "out of ports", as discussed here (http://nginx.org/pipermail/nginx/2009-April/011255.html), so I configured both nginx and the upstream server to reuse TIME_WAIT sockets and to recycle them more quickly.
[sysctl -p]
……
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
But nginx still hangs, and the "connection timed out while connecting to upstream" error still shows up in the nginx error log whenever the upstream RPS stays above 1000 for a minute. When the upstream is Windows, the server runs "out of ports" within seconds.
Any ideas? A connection pool with a waiting queue? Maxim Dounin wrote a useful module that keeps connections to memcached alive, but why can't it support a web server as the upstream?
I am new to nginx, but from what I know so far, you need to reduce your net.ipv4.tcp_fin_timeout value, which defaults to 60 seconds. Out of the box, nginx doesn't support HTTP connection pooling to the backend, so every request to the backend creates a new connection. With 64K ports and a 60-second wait before a port can be reused, the average rate can't exceed roughly 1K requests per second. You can either reduce net.ipv4.tcp_fin_timeout on both the nginx server and the backend server, or you can assign multiple IP addresses to the backend box and configure nginx to treat these "same servers" as different servers.
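A rough sketch of the two options (the 30-second value, the upstream name backend_pool and the extra 10.0.0.x addresses are illustrative assumptions, not taken from the question):

# option 1: sysctl on both the nginx box and the backend box
net.ipv4.tcp_fin_timeout = 30

# option 2: the same physical backend reachable on several addresses,
# so each address gets its own pool of local ports on the nginx side
upstream backend_pool {
    server 10.0.0.5:80;
    server 10.0.0.6:80;
    server 10.0.0.7:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}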
Related
I am using an Nginx proxy server as a reverse proxy and HTTPS load balancer. Clients connect to the backend server through the reverse proxy in a load-balanced environment. I have set up the correct HTTPS configuration (with SSL certificates and all) so that my SSL traffic goes through the proxy. In my case the server gracefully disconnects after 120 seconds (my server's IDLE_TIMEOUT), but before that the nginx proxy itself closes the connection after 60 seconds. This happens on every connect cycle. Because of this my client never gets the SSL disconnect event and only receives a TCP socket close event. If I set my server's IDLE_TIMEOUT to less than 60 seconds, everything works fine.
I want to know if there is any timeout on the nginx server that I need to configure to keep the connection open for more than 60 seconds.
Ajay
I found the solution, copying it here.
Set proxy_read_timeout and client_body_timeout to the timeout you want.
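For illustration only (the 180s value and the upstream name are assumptions, chosen to be larger than the backend's 120-second idle timeout; adjust to taste):

location / {
    proxy_pass https://backend;   # hypothetical upstream name
    proxy_read_timeout 180s;      # how long nginx waits between reads from the upstream (default 60s)
    client_body_timeout 180s;     # how long nginx waits between reads of the client request body (default 60s)
}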
From nginx.org, the default value of the keepalive directive is "—", but I don't quite understand what this means.
Syntax: keepalive connections;
Default: —
Context: upstream
This directive appeared in version 1.1.4.
For nginx to keep the TCP connection alive, both the upstream section and the origin server must be configured not to finalize the connection. The "—" default of keepalive in the upstream section means no keepalive: connections won't be reused, and you can see the TCP stream number increase with every request to the origin server, the opposite of what happens with keepalive enabled. You can check this with tcpdump.
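A minimal sketch of what that configuration could look like (the upstream name, the address and the value 32 are assumptions):

upstream origin {
    server 10.0.0.10:8080;   # hypothetical origin server
    keepalive 32;            # keep up to 32 idle connections open per worker process
}

server {
    listen 80;
    location / {
        proxy_pass http://origin;
        proxy_http_version 1.1;          # keepalive to the upstream needs HTTP/1.1
        proxy_set_header Connection "";  # strip the "Connection: close" nginx would otherwise send
    }
}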
The 10 Tips for 10x Application Performance blog post describes it very well:
Client keepalives – Keepalive connections reduce overhead, especially
when SSL/TLS is in use. For NGINX, you can increase the maximum number
of keepalive_requests a client can make over a given connection from
the default of 100, and you can increase the keepalive_timeout to
allow the keepalive connection to stay open longer, resulting in
faster subsequent requests.
Upstream keepalives – Upstream connections – connections to
application servers, database servers, and so on – benefit from
keepalive connections as well. For upstream connections, you can
increase keepalive, the number of idle keepalive connections that
remain open for each worker process. This allows for increased
connection reuse, cutting down on the need to open brand new
connections. For more information, refer to our blog post, HTTP
Keepalive Connections and Web Performance.
See also RFC-793 Section 3.5:
A TCP connection may terminate in two ways: (1) the normal TCP close
sequence using a FIN handshake, and (2) an "abort" in which one or
more RST segments are sent and the connection state is immediately
discarded. If a TCP connection is closed by the remote site, the local
application MUST be informed whether it closed normally or was
aborted.
Two examples; take a look at the Application Data below.
Without keepalive:
With keepalive:
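Captures like the two above can be collected with tcpdump and inspected in Wireshark; a sketch, assuming the origin server sits at 10.0.0.10:8080:

# write everything exchanged with the origin server to a pcap file
tcpdump -i any -nn -w keepalive-test.pcap host 10.0.0.10 and port 8080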
Our HAProxy load balancer opens thousands of connections to its backends even though its settings say to open no more than 10 connections per server instance (see below). When I uncomment "option http-server-close" the number of backend connections drops, but I would like to keep keep-alive connections to the backends.
Why is maxconn not respected with http-keep-alive? I verified with ss that the open backend connections are in the ESTABLISHED state.
defaults
    log global
    mode http
    option http-keep-alive
    timeout http-keep-alive 60000
    timeout connect 6000
    timeout client 60000
    timeout server 20000

frontend http_proxy
    bind *:80
    default_backend backends

backend backends
    option prefer-last-server
    # option http-server-close
    timeout http-keep-alive 1000
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10
In keep-alive mode, idle connections are not counted against maxconn. As explained in this HAProxy mailing-list thread:
The thing is, you don't want
to leave requests waiting in a server's queue while the server has a ton
of idle connections.
This makes even more sense knowing that browsers open preconnections to improve page performance. So in keep-alive mode only outstanding/active connections are taken into account.
You can still enforce maxconn limits regardless of connection state by using tcp mode, especially since I don't see a particular reason to use mode http in your current configuration (apart from richer logs).
Or you can use http-reuse with http mode to achieve the lowest possible number of concurrent connections.
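A sketch of that second option, reusing the backend section from the question ("safe" is one of several http-reuse policies; which ones are available depends on your HAProxy version):

backend backends
    option prefer-last-server
    http-reuse safe              # let idle server-side connections be shared across client connections
    timeout http-keep-alive 1000
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10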
I need some suggestions for setting up auto-scaling of websocket connections in nginx. Let's say I have nginx configured to proxy websocket connections to 4 upstream backend servers. When I add a 5th server to the upstream block and reload nginx, I see that nginx keeps the existing worker processes running and additionally creates new ones to serve new websocket connections. I guess the old workers remain until their earlier connections close.
Ideally, as we auto-scale, we want the number of nginx worker processes to remain the same. Is there a way to transfer the socket connections from the older worker processes to the newer worker processes?
thanks.
Since HAProxy v1.5.0 it has been possible to temporarily stop reverse-proxying traffic to a frontend using the
set maxconn frontend <frontend_name> 0
command.
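For context, that command is sent over the runtime/stats socket; a sketch, assuming the admin socket is enabled at /var/run/haproxy.sock and the frontend is named http_proxy:

# requires "stats socket /var/run/haproxy.sock level admin" in the global section
echo "set maxconn frontend http_proxy 0" | socat stdio /var/run/haproxy.sock

# later, restore the previously configured limit (2000 is just a placeholder) to resume accepting connections
echo "set maxconn frontend http_proxy 2000" | socat stdio /var/run/haproxy.sock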
I've noticed that if HAProxy is configured to maintain keepalive connections between HAProxy and a client, those connections continue to be served, whereas new ones keep waiting for the frontend to be "un-paused".
The question is: is it possible to terminate the current keepalive connections gracefully, so that clients are required to establish new connections?
I've only found the shutdown session and shutdown sessions commands, but they are obviously not graceful at all.
The purpose of all this is to make some changes on the server seamlessly; otherwise, with the current configuration, it would require a scheduled maintenance window.