JBoss preventing keep-alive when no more threads are available

After experimenting with my JBoss 5.1 server, I noticed that an HTTP response contains the Connection: close header if the thread serving it is the last available one.
For instance, if I set maxThreads="4" in the HTTP connector config and perform more than 4 simultaneous requests, then:
the first 3 responses do not contain any Connection header (meaning the connection can be reused by the client for future requests)
all subsequent responses contain the Connection: close header (meaning the client will have to create a new connection, on a different local port, for its next request)
I could not find any documentation for this. Is this behaviour explained somewhere? And is it possible to avoid it (i.e. prevent this Connection: close header) so that clients can reuse their sockets for future requests?
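For reference, a minimal Java sketch of the kind of probe that reproduces this observation; the URL, port and request count are placeholders, not taken from the original setup:

```java
// Fire more simultaneous requests than maxThreads="4" allows and print the
// Connection header of each response.
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveProbe {
    public static void main(String[] args) {
        for (int i = 0; i < 8; i++) { // more requests than maxThreads
            final int id = i;
            new Thread(() -> {
                try {
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL("http://localhost:8080/test").openConnection();
                    conn.getInputStream().readAllBytes(); // complete the exchange
                    System.out.println("request " + id + " -> Connection: "
                            + conn.getHeaderField("Connection"));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}
```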

I had a quick look at the Tomcat code (on which JBoss Web, the web container of JBoss, is based).
It shows that the Http11Processor does not return from its process method while the connection is allowed to be kept alive. So a kept-alive connection occupies a thread from the HTTP pool for as long as the connection is open.
To prevent the pool from being drained by inactive kept-alive connections, the server most probably (I have spotted some code that may do it in the PooledSender) disables keep-alive for the last available thread in the pool before it starts processing the new request, as sketched below. Otherwise it would be too easy to block Tomcat/JBoss by opening a small number of kept-alive connections.
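To picture the check being described, here is an illustrative sketch; it is not the actual JBoss Web/Tomcat source, and the class and parameter names are invented:

```java
// Illustrative only: before a request is processed, keep-alive is refused
// when the pool is nearly exhausted, so the worker thread is freed after
// this single response instead of parking on the open socket.
final class KeepAlivePolicy {
    private final int maxThreads;

    KeepAlivePolicy(int maxThreads) {
        this.maxThreads = maxThreads;
    }

    /** Decide whether this response may advertise a persistent connection. */
    boolean allowKeepAlive(int busyThreads, boolean clientRequestedKeepAlive) {
        if (busyThreads >= maxThreads - 1) {
            return false; // last available thread: answer "Connection: close"
        }
        return clientRequestedKeepAlive;
    }
}
```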

Related

What is an idle http connection?

I am working with HTTP connections, using a MultiThreadedHttpConnectionManager and HttpClient.
For my purposes I am closing all idle connections after 1 ms with the following method: closeIdleConnections(1).
I am wondering what is considered an "idle connection" in HTTP? It seems that waiting for a response does not make a connection idle.
HTTP (1.1) specifies that connections should remain open until explicitly closed by either party. Beyond that, the specification offers only one example of a policy: a timeout value beyond which an inactive (idle) connection should be closed. A connection kept open until the next HTTP request reduces latency and TCP connection-establishment overhead. However, an idle open TCP connection consumes a socket and buffer-space memory.
Excerpt from RFC 7230:
6.5. Failures and Timeouts
Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. Proxy servers might make this a higher value since it is likely that the client will be making more connections through the same server. The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server.
When a client or server wishes to time-out it SHOULD issue a graceful close on the transport connection. Clients and servers SHOULD both constantly watch for the other side of the transport close, and respond to it as appropriate. If a client or server does not detect the other side's close promptly it could cause unnecessary resource drain on the network.
A client, server, or proxy MAY close the transport connection at any time. For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress.
Studying the source code of the HttpClient MultiThreadedHttpConnectionManager implementation, a connection in the pool is simply considered idle when its age exceeds the idleTime. The idleTime is passed to the method closeIdleConnections(idleTime) as an argument.
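For context, here is roughly how that manager is typically wired up with Commons HttpClient 3.x; the URL is a placeholder:

```java
// Connections returned to the pool are timestamped; closeIdleConnections(t)
// closes those that have sat unused for more than t milliseconds.
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class IdleCloseExample {
    public static void main(String[] args) throws Exception {
        MultiThreadedHttpConnectionManager manager =
                new MultiThreadedHttpConnectionManager();
        HttpClient client = new HttpClient(manager);

        GetMethod get = new GetMethod("http://example.com/"); // placeholder
        client.executeMethod(get);
        get.releaseConnection(); // idle time is counted from here

        Thread.sleep(5);
        manager.closeIdleConnections(1); // close connections idle longer than 1 ms
    }
}
```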

Keep-alive header clarification

I was asked to build a site, and one of the co-developers told me that I would need to include the keep-alive header.
Well, I read a lot about it and still have questions.
MSDN says:
The open connection improves performance when a client makes multiple requests for Web page content, because the server can return the content for each request more quickly. Otherwise, the server has to open a new connection for every request.
Looking at the (omitted) diagram of the path from my computer (A) through intermediate hops (B, C, E) to the IIS server (F):
When IIS (F) sends the keep-alive header (or the user sends keep-alive), does it mean that (E, C, B) save a connection which is only for my session?
Where is this info kept ("this connection belongs to Royi")?
Does it mean that no one else can use that connection?
If so, does it mean that the keep-alive header reduces the number of overlapped connection users?
If so, for how long is the connection reserved for me? (In other words, if I set keep-alive, "keep" until when?)
P.S., for those who are interested: clicking this sample page will return the keep-alive header.
Where is this info kept ("this connection is between computer A and server F")?
A TCP connection is recognized by source IP and port and destination IP and port. Your OS, all intermediate session-aware devices and the server's OS will recognize the connection by this.
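You can observe this four-tuple on any live socket; a small Java illustration (the host is a placeholder):

```java
// Print the four values that identify a TCP connection.
import java.net.Socket;

public class FourTuple {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("example.com", 80)) { // placeholder host
            System.out.println("local  = " + s.getLocalAddress() + ":" + s.getLocalPort());
            System.out.println("remote = " + s.getInetAddress() + ":" + s.getPort());
        }
    }
}
```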
HTTP works with request-response: client connects to server, performs a request and gets a response. Without keep-alive, the connection to an HTTP server is closed after each response. With HTTP keep-alive you keep the underlying TCP connection open until certain criteria are met.
This allows for multiple request-response pairs over a single TCP connection, eliminating some of TCP's relatively slow connection startup.
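As an illustration, the JDK's HttpURLConnection does this transparently: when the server allows keep-alive and each response is fully drained, a later request to the same host can reuse the TCP connection. The URLs below are placeholders:

```java
// Two sequential requests; the JDK silently pools and reuses the underlying
// TCP connection when the server keeps it alive.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReuseDemo {
    static void get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            in.readAllBytes(); // drain fully so the socket can go back to the pool
        }
        System.out.println(url + " -> " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        get("http://example.com/a"); // placeholder URLs
        get("http://example.com/b"); // likely reuses the same TCP connection
    }
}
```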
When IIS (F) sends the keep-alive header (or the user sends keep-alive), does it mean that (E, C, B) save a connection
No. Routers don't need to remember sessions. In fact, multiple TCP packets belonging to the same TCP session need not all go through the same routers; that is for TCP to manage. Routers just choose the best IP path and forward packets. Keep-alive is only for the client, the server and any other intermediate session-aware devices.
which is only for my session?
Does it mean that no one else can use that connection?
That is the intention of TCP connections: it is an end-to-end connection intended for only those two parties.
If so, does it mean that the keep-alive header reduces the number of overlapped connection users?
Define "overlapped connections". See HTTP persistent connection for some advantages and disadvantages, such as:
Lower CPU and memory usage (because fewer connections are open simultaneously).
Enables HTTP pipelining of requests and responses.
Reduced network congestion (fewer TCP connections).
Reduced latency in subsequent requests (no handshaking).
If so, for how long is the connection reserved for me? (In other words, if I set keep-alive, "keep" until when?)
A typical keep-alive response looks like this:
Keep-Alive: timeout=15, max=100
See Hypertext Transfer Protocol (HTTP) Keep-Alive Header for example (an Internet-Draft where the keep-alive header is explained in greater detail than in both RFC 2616 and RFC 2068):
A host sets the value of the timeout parameter to the time that the host will allow an idle connection to remain open before it is closed. A connection is idle if no data is sent or received by a host.
The max parameter indicates the maximum number of requests that a client will make, or that a server will allow to be made on the persistent connection. Once the specified number of requests and responses have been sent, the host that included the parameter could close the connection.
However, the server is free to close the connection after an arbitrary time or number of requests (as long as it returns the response to the current request). How this is implemented depends on your HTTP server.
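A small client-side sketch of reading and parsing those parameters; the URL is a placeholder, and servers are not obliged to send a Keep-Alive header at all:

```java
// Read the Keep-Alive response header (e.g. "timeout=15, max=100") and
// print its parameters, if the server sent any.
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveParams {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/").openConnection(); // placeholder
        conn.getInputStream().readAllBytes();

        String keepAlive = conn.getHeaderField("Keep-Alive");
        if (keepAlive != null) {
            for (String param : keepAlive.split(",")) {
                String[] kv = param.trim().split("=", 2);
                System.out.println(kv[0] + " = " + (kv.length > 1 ? kv[1] : ""));
            }
        }
    }
}
```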

netty client + keep-alive=true

I'm confused about how to deal with lots of connections in Netty (3.6.2.Final) with keep-alive=true.
I'm working on a Netty client used as a server-side connector, making HTTP calls to another service, and it should always keep the connection open for performance (keep-alive=true).
The issue: there is a hard limit on the number of open channels, after which the client will hang when attempting to open a channel. Why does it hang with no exception? Is this controlled by some channel timeout setting?
I can't seem to understand Netty's overall management of connections within worker threads:
With a blocking write/read client ChannelHandler (HTTP request/response), how do you detect that the connection pool is empty?
The handler can receive ChannelEvent(s), but it learns nothing about the overall count available in the connection pool (which is very non-deterministic anyway). And if the channel is not open, does it make sense for the handler to initiate opening a new channel, given that it is running in a worker thread?
And if the connection pool is exhausted, how do you go and clean up some idle connections (within the handler)?
I had to completely rip apart my handler to get the client's blocking call to work without hanging. The issue was mostly resolved by not holding onto a local channel reference within the handler.
Now we just pass a ConnectionInterface#openConnection() [returns a new ChannelFuture] into the shared custom ChannelHandler#call( ConnectionInterface connectionInterface, HttpRequest request ).
It is better to open the channel within the handler's call method and to pass that channel along, checking its state before channel.write(x): if !channel.isWritable(), then recycle the channel (obtain a new client connection, e.g. ConnectionInterface#openConnection()) and retry the write. There isn't even a need to close the channel (that is handled in the pool).
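A sketch of that write-with-recycle pattern against the Netty 3.x API; ConnectionInterface is the interface named above, but its assumed shape and the single-retry policy here are illustrative:

```java
// Open the channel inside the handler's call method, check isWritable()
// before writing, and recycle the channel from a fresh connection if not.
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.handler.codec.http.HttpRequest;

final class WriteWithRecycle {
    /** Assumed shape of the factory mentioned above. */
    interface ConnectionInterface {
        ChannelFuture openConnection();
    }

    private final ConnectionInterface connections;

    WriteWithRecycle(ConnectionInterface connections) {
        this.connections = connections;
    }

    void call(HttpRequest request) {
        Channel channel = connections.openConnection()
                .awaitUninterruptibly().getChannel();
        if (!channel.isWritable()) {
            // Stale or saturated channel: take a fresh connection and retry.
            // No explicit close is needed; the pool handles the old channel.
            channel = connections.openConnection()
                    .awaitUninterruptibly().getChannel();
        }
        channel.write(request);
    }
}
```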
Just ran it with 500 threads / 5000 requests and it succeeds fine.

Azure Web Role - Long Running Request (Load Balancer Timeout?)

Our front-end MVC3 web application is using AsyncController, because each of our instances is servicing many hundreds of long-running, IO-bound processes.
Since Azure will terminate "inactive" HTTP sessions after some pre-determined interval (which seems to vary depending on which website you read), how can we keep the connections alive?
Our clients MUST stay connected, and our processes will run from 30 seconds to 5 minutes or more. How can we keep the client connected/alive? I initially thought of having a timeout on the async method and just hitting the Response object with a few bytes of output, sort of like chunking the response, and then going back and waiting some more. However, I don't think this will work, since MVC3 handles hooking an IIS thread back up to the asynchronous response, which will already have rendered a view at that point.
How can we run a really long process on an AsyncController but have the client not be disconnected by the Azure load balancer? Sending an immediate response to the caller and asking the caller to poll or check another resource URL is not acceptable.
The Azure load balancer idle timeout is 4 minutes. Can you try configuring TCP keep-alive on the client side with an interval of less than 4 minutes? That should keep the connection alive.
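For example, with a plain Java client socket; the host is a placeholder, TCP_KEEPIDLE requires a recent JDK, and 180 s keeps the probes comfortably under the 4-minute limit:

```java
// Enable OS-level TCP keep-alive and start probing well before Azure's
// 4-minute idle timeout can cut the connection.
import java.net.Socket;
import jdk.net.ExtendedSocketOptions;

public class AzureKeepAlive {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("myapp.cloudapp.net", 80)) { // placeholder
            socket.setKeepAlive(true); // turn on TCP keep-alive probes
            // JDK 11+ (also some JDK 8 updates): idle seconds before probing.
            socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 180);
            System.out.println("keep-alive enabled: " + socket.getKeepAlive());
        }
    }
}
```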
On the other hand, it's pretty expensive to keep a connection open per client for a long time. This will limit the number of clients you can handle per server. Also, I think IIS may still decide to close a connection, regardless of keep-alives, if it thinks it needs the connection to serve other requests.

HTTP Keep Alive in a large Web Applications

I have a web application deployed on IIS 7.0. The application is accessed by a large number of users and manipulates large amounts of data. My question concerns the HTTP Keep-Alive option, which is set to true by default.
Is it a better approach to set HTTP Keep-Alive to false or to true?
If true, is it a good approach to use a timeout?
Keep-alive should normally be used to handle the requests that immediately follow an HTML request. Let's say on the first visit to your site I get an HTML page with 5 CSS files, 5 JS files and 25 images; I will use my HTTP connection, which is still alive, to request these things (well, depending on the browser, I'll maybe use 3 connections to speed this up).
To handle this we usually use a keep-alive of 2 s or 3 s. A longer keep-alive means the connection is waiting for the next page that the user may request. That may be a valid way of thinking: next time the user wants a page, we avoid losing time establishing the HTTP connection (which can be the longest part of the request/response time). But for your server it means most of the HTTP connections it is handling are doing... nothing. And you will reach your MaxConnections limit (W3SVC/MaxConnections, with a ridiculous default of 10) with connections doing nothing. Really bad. So long keep-alives need big web servers and should be used only if your application really needs them.
If you use keep-alive on a 'classical' website you must change the connection timeout (2 min by default). In Apache you would have two settings: a keep-alive timeout (5 s by default) and a connection timeout (2 min). In IIS it seems the same timeout setting is used for both. So do not set it to 2 s (a client really slow in sending its request would time out); something like 10 s is maybe enough. One response is to disallow keep-alive and make the browser open more connections. Another is to use a modern web server (like nginx or Cherokee, for example) which handles keep-alive connections in a more elegant and resource-efficient way than Apache or IIS.
Even if you do not use keep-alive, what is the reason for waiting 2 minutes for a client timeout? It is certainly too high; decrease this value to something like 60 s.
Then you should check several settings related to timeouts (ConnectionTimeout, HeaderWaitTimeout, MinFileBytesPerSec) and this nice response on performance settings in the registry.
This article will bring more insight; don't forget to check the "How do we fix it?" section:
http://mocko.org.uk/b/2011/01/23/http-keepalive-considered-harmful/
I think it's not a good idea to keep all users connected, because:
A user can just open your site but not use it; why should we keep the connection open for a long time?
It's hard to keep many connections open (more memory).
Use a connection timeout (a maximum of 5 minutes will be OK).
BUT: if your application is a live chat, you should keep all connections alive. In that case it is better to use Ajax long-polling requests + Node.js + some fast NoSQL DB to store the chat messages.
