About Nginx status

Below is my nginx status:
Active connections: 6119
server accepts handled requests
418584709 418584709 455575794
Reading: 439 Writing: 104 Waiting: 5576
The value of Waiting is much higher than Reading and Writing. Is that normal?
Is it because keep-alive is enabled?
But when I send a large number of requests to the server, the Reading and Writing values don't increase, so I suspect there is a bottleneck in nginx or somewhere else.

Waiting is Active - (Reading + Writing), i.e. connections still open, waiting for either a new request or the keepalive expiration.
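With your numbers: 6119 - (439 + 104) = 5576, which is exactly the Waiting figure reported.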
You could change the keepalive default (which is 75 seconds)
keepalive_timeout 20s;
or tell the browser when it should close the connection by adding an optional second timeout, which is sent to the browser in the Keep-Alive response header
keepalive_timeout 20s 20s;
but as the nginx documentation on keepalive notes, some browsers do not care about the header (and your site wouldn't gain much from this optional parameter anyway).
Keepalive is a way to reduce the overhead of creating connections, since most of the time a user will navigate through the site page after page (plus the multiple requests a single page triggers to download CSS, JavaScript, images, and so on).
It depends on your site: you could reduce the keepalive, but keep in mind that establishing connections is expensive. This is a trade-off you have to refine depending on the site's statistics. You could also decrease the timeout little by little (75s -> 50s, then a week later 30s...) and watch how the server behaves.
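If you go that gradual route, it helps to watch the counters while you tune. Here is a minimal Go sketch along those lines; it assumes stub_status is exposed at http://localhost/nginx_status (adjust the URL to your setup) and polls the status page every ten seconds so you can see how Waiting reacts to each timeout change:

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    for {
        resp, err := http.Get("http://localhost/nginx_status") // assumed stub_status location
        if err != nil {
            fmt.Println("poll failed:", err)
        } else {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            // Prints the raw stub_status block: Active connections,
            // accepts/handled/requests, then Reading/Writing/Waiting.
            fmt.Printf("%s\n%s\n", time.Now().Format(time.RFC3339), body)
        }
        time.Sleep(10 * time.Second)
    }
}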

You don't really want to fix it, as "waiting" means keep-alive connections. They consume almost no resources (a socket plus about 2.5 MB of memory per 10,000 connections in nginx).
Are the requests short-lived? It's possible they're reading/writing and then closing within a short amount of time.
If you're genuinely interested in fixing it, you can test whether nginx is the bottleneck by setting the keep-alive timeout to 0 in your nginx config:
keepalive_timeout 0;

Related

go doesn't seem to honor persistent connections setting

I have a reverse proxy built in Go, and I have set it to use persistent connections. But when I monitor the open connections (using lsof -i) on my machine, it looks like it's closing connections within a few minutes, despite my setting the idle timeout to 60 minutes.
If I set other timeouts in the settings below, like ResponseHeaderTimeout, those are being honored.
myProxy.Transport = &transport{&http.Transport{
    Proxy: http.ProxyFromEnvironment,
    DialContext: (&net.Dialer{
        Timeout: 200 * time.Millisecond,
    }).DialContext,
    MaxIdleConns:        0, // no limit
    MaxIdleConnsPerHost: 10,
    IdleConnTimeout:     60 * time.Minute,
}}
Idle connections in http.Transport rely on HTTP keep-alive, as mentioned in the http.Transport documentation. These settings are used by the transport while exchanging HTTP payloads with peers (adding the appropriate keep-alive headers).
That said, HTTP keep-alive has to be agreed upon by both parties in the exchange (the client and the server). Most servers restrict both the maximum number of open connections AND the duration (timeout) of connections held open by keep-alive requests. This is done to avoid potential DoS attacks and to give all consumers a fair chance at the server's limited system resources (each keep-alive connection adds overhead on the server).
A 60-minute idle timeout is far too high, and I doubt any server will honor it (unless you control the server configuration on the other end as well). Typically this is kept to a few seconds: enough time for a single HTML page and all its resources (CSS, JS, images) to load from the server over a single connection. Once a page has loaded successfully, there's generally no reason to keep the connection open in the general browsing world. Apache HTTPD, by default, sets KeepAliveTimeout to 5 seconds.
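Since your proxy is written in Go, it's worth seeing the mirror-image knob on the server side. A minimal sketch, assuming you also control a Go backend (the address is a placeholder): http.Server's IdleTimeout plays the role Apache's KeepAliveTimeout does, and it is the value that ultimately wins over whatever the client's transport asks for:

package main

import (
    "net/http"
    "time"
)

func main() {
    srv := &http.Server{
        Addr: ":8080", // placeholder address
        // IdleTimeout is the server-side keep-alive limit: how long an
        // idle connection is held before the server closes it. Five
        // seconds mirrors Apache's default KeepAliveTimeout.
        IdleTimeout:       5 * time.Second,
        ReadHeaderTimeout: 10 * time.Second, // separate guard against slow clients
    }
    srv.ListenAndServe()
}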
It's also worth noting, as stated in Mozilla's Keep-Alive documentation, that timeouts longer than the TCP timeout may be ignored if no TCP keep-alive message is set at the transport level.
Given all these facts, it is entirely normal for you to see what you see (connections being closed automatically). The transport will simply open new connections as and when needed.
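Instead of watching lsof, you can also ask the transport directly whether a pooled connection was reused. A small sketch using net/http/httptrace (example.com stands in for your upstream); on the second request it should print reused=true if the server kept the connection alive:

package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptrace"
)

func main() {
    trace := &httptrace.ClientTrace{
        GotConn: func(info httptrace.GotConnInfo) {
            // Reused is true when the request went out on a pooled
            // keep-alive connection rather than a fresh dial.
            fmt.Printf("reused=%v wasIdle=%v idle=%v\n", info.Reused, info.WasIdle, info.IdleTime)
        },
    }
    for i := 0; i < 2; i++ {
        req, _ := http.NewRequest("GET", "http://example.com/", nil)
        req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Println(err)
            return
        }
        io.Copy(io.Discard, resp.Body) // drain so the connection can return to the pool
        resp.Body.Close()
    }
}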

How long before http timeouts when server sends response slowly?

The standard HTTP timeout seems to be 30 seconds, if the server does not respond at all. But what is the "standard" timeout if a server is responding, but sending the response very slowly? When does the client give up? When it reaches a certain time between packets? Never?
HTTP does not standardize on a timeout; nothing prevents clients from waiting forever. Some clients may do an application-level timeout of 30 seconds, but my Firefox, for example, shows network.http.response.timeout as 300 seconds.
The lack of standard applies even more to slow responses. For example, various scanning and reverse proxies employ trickling techniques to drip feed a few bytes to the client to prevent it from timing out while they do heavy processing. Usually 100 bytes or so every ten seconds suffices, though of course it's very much ad-hoc (see also comment above on lack of standard).
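In Go, for example (one client's knobs, not any standard), the distinction shows up as two separate settings: http.Client.Timeout bounds the entire exchange including a slowly trickling body, while Transport.ResponseHeaderTimeout fires only if no headers arrive at all. A minimal sketch (the URL is a placeholder):

package main

import (
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        // Caps the whole exchange: dialing, headers, and reading the body.
        // A server trickling 100 bytes every ten seconds still gets cut off here.
        Timeout: 30 * time.Second,
        Transport: &http.Transport{
            // Fires only when the server sends no response headers in time;
            // a slow body does not trip this one.
            ResponseHeaderTimeout: 10 * time.Second,
        },
    }
    resp, err := client.Get("http://example.com/slow") // placeholder URL
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    io.Copy(io.Discard, resp.Body)
}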

TCP Persistent Connections with HTTP?

So I thought that with HTTP/1.1 your TCP connections are sustained for as long as you are communicating with that server? How does it actually work? How does the TCP connection know when you are done writing into the socket? Any information would be awesome; I have done research, but I can't find what I'm looking for short of reading the RFC.
The end of each individual request or response is signalled by the message framing itself (a Content-Length header, or chunked transfer encoding), not by closing the socket. The connection then simply stays open: the typical implementation is that the HTTP server has a timeout (typically called KeepAliveTimeout or such) after which it will close an idle connection.
On a server which reserves a thread or an entire process per connection (such as Apache with the usual mpm_prefork or mpm_worker), keepalives are usually disabled entirely or kept quite short (a few seconds). On an event-based server such as nginx, which uses much less memory per connection, the keepalive timeout can be left at a much higher value (typically a minute or so).
See section 8.1 of RFC 2616. Basically, HTTP/1.1 treats all connections as persistent, but the language of the RFC doesn't mandate this behaviour, since it uses the word "SHOULD". If it were mandated, it would use "MUST".
However, the RFC does not specify in detail how an implementation does this. As can be seen from the HTTP Persistent Connection page on Wikipedia, Apache's default timeout (beyond which it reclaims persistent connections for other uses) may be as low as five seconds (though this is almost certainly configurable, given all the other knobs and dials that Apache provides).
In other words, it's meant for numerous requests to the same address within a short time frame, so as not to waste time opening and closing a bucket-load of sessions where one will do. Increasing this timeout is not a "free ride", since resources are tied up while the connection is held open. In an environment where you expect lots of incoming clients, tying up these resources can be fatal to performance.
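To make the mechanics concrete, here is a minimal Go sketch (example.com as a stand-in) that sends two requests over one raw TCP connection. Each response is delimited by its Content-Length or chunked encoding, which is how both sides know a message has ended without closing the socket; the socket itself stays open until the keep-alive timeout fires or one side closes it:

package main

import (
    "bufio"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    conn, err := net.Dial("tcp", "example.com:80")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    r := bufio.NewReader(conn)
    for i := 0; i < 2; i++ {
        // Two requests, one socket: HTTP/1.1 keeps the connection open by default.
        fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n")
        resp, err := http.ReadResponse(r, nil)
        if err != nil {
            panic(err)
        }
        // ReadResponse uses the Content-Length / chunked framing to find the
        // end of the body; draining it leaves the reader positioned at the
        // start of the next response.
        io.Copy(io.Discard, resp.Body)
        resp.Body.Close()
        fmt.Println("response", i+1, "status:", resp.Status)
    }
}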

HTTP Keep-Alive for <1KB calls every 1 second

I am optimizing my web server settings to handle a large number of concurrent users, and one of the problems I'm running into is deciding whether or not to disable HTTP Keep-Alive.
I am using a CDN for all the images on the site, so when my HTML page is requested I download approximately 5 files (JS, CSS, etc.) on first load... and then only the HTML on each successive load.
Other than that, the only thing I have is an HTTP POST update invoked every second (the resulting JSON is typically less than 1 KB).
So, with these constraints, do you think disabling HTTP Keep-Alive on the server would be a good idea? Would that improve the number of concurrent users the server can handle?
(By the way, I've reduced KeepAliveTimeout/ConnectionTimeout to 15 seconds in the IIS 7.5 settings)
From what you are describing, you are making one call per client per second, so it all boils down to how long it takes to serve a request. Let's say it takes 100 ms. An HTTP Keep-Alive of 15 seconds then accommodates 15 calls without re-establishing the connection, but the connection is actually active (being used) for only 1.5 of those 15 seconds; the rest of the time, it is blocking some other client/connection (assuming there is one). Without keep-alive, you could probably accommodate 8-9 times more concurrent clients.
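To spell out that estimate: 1.5 s of work out of 15 s held is 10% utilization, so reclaiming the idle 90% gives roughly 10x capacity in theory; the cost of re-establishing a connection for every request eats some of that back, hence the 8-9x figure.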
All that said, you have to look at the actual parameters to make the decision: how many concurrent clients you are likely to have, what the response time is, and so on. The best way is to measure performance through simulation/load testing, because if your server can handle the anticipated maximum concurrent load with keep-alive enabled, you may as well keep keep-alive.
BTW, also see this related question on SO: http keep-alive in the modern age

HTTP Keep-Alive in large Web Applications

I have a web application deployed on IIS 7.0. The application is accessed by a large number of users and manipulates large amounts of data. My question concerns the HTTP Keep-Alive option, which is set to true by default.
Is it a better approach to set HTTP Keep-Alive to false or true?
And if true, is it a good approach to use a timeout?
Keep-Alive is normally used to handle the requests that immediately follow an HTML request. Say on the first visit to your site I get an HTML page with 5 CSS files, 5 JS files, and 25 images; I will use my still-alive HTTP connection to request these (well, depending on the browser, I may open 3 connections to speed things up).
To handle this, we usually use a keepalive of 2 s or 3 s. A longer keepalive means the connection sits waiting for the next page the user might request. That can be a valid way of thinking: next time the user wants a page, we avoid losing time establishing the HTTP connection (which can be the longest part of the request/response time). But for your server it means most of the HTTP connections it is handling are doing... nothing. And you will hit your MaxConnections limit (W3SVC/MaxConnections, with a ridiculously low default of 10) with connections doing nothing. Really bad. So a long keep-alive needs a big web server and should be used only if your application really needs it.
If you use keepalive on a 'classical website', you must change the connection timeout (2 minutes by default). In Apache you would have 2 settings: a keepalive timeout (5 s by default) and a connection timeout (2 min). In IIS it seems the one timeout setting is used for both, so do not set it to 2 s (a client that is really slow in sending its request would time out); something like 10 s is probably enough. Now, one answer is to disallow keep-alive and make the browser open more connections. Another is to use a modern web server (nginx or Cherokee, for example) that handles keep-alive connections in a more elegant and resource-frugal way than Apache or IIS.
Even if you do not use keepalive, what reason is there to wait 2 minutes for a client timeout? That is certainly too high; decrease it to something like 60 s.
Then you should check several timeout-related settings (ConnectionTimeout, HeaderWaitTimeout, MinFileBytesPerSec) and this nice answer on performance settings in the registry.
This article brings more insight; don't forget to check its "How do we fix it?" section:
http://mocko.org.uk/b/2011/01/23/http-keepalive-considered-harmful/
I don't think it's a good idea to keep all users connected.
Because:
A user may just open your site and not use it; why should we keep the connection open for a long time?
It's hard to keep that many connections open (more memory).
Use a connection timeout (5 minutes at most will be OK).
BUT: if your application is a live chat, you should keep all connections alive. In that case it's better to use Ajax long-polling requests + Node.js + some fast NoSQL DB to store the chat messages.
