Managing a server's HTTP keep-alive timeout with Netty

I am running an application server using the Play! Framework, which uses Netty for the actual IO heavy lifting.
The HTTP connections have keep-alive turned on (which is the default for HTTP 1.1), and I'm happy with this. However, I would like these kept-alive connections to time out after a certain amount of inactivity (e.g. 15 seconds). As I understand it, this would involve the server closing the connection actively.
This seems like a standard config option, and indeed there is such a setting for Apache. However, I can't see any way to do this in Netty/Play. It seems like the connections stay open until either the client closes them, or the socket times out at the OS level (about two hours).
Is this functionality supported out of the box? And if not, is it feasible to implement by hand (in particular, how do I know when a Channel was last used, or even if it's in use right now)?

You can put an IdleStateHandler in the application pipeline. It raises an IdleStateEvent after a configurable period of channel inactivity, and a handler further down the pipeline can react to that event by closing the connection.
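For illustration, here is a minimal sketch of that idea using the Netty 4 API (the handler names and the 15-second timeout are just examples, and wiring this into Play's own pipeline setup is a separate step):

    import io.netty.channel.ChannelDuplexHandler;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.http.HttpServerCodec;
    import io.netty.handler.timeout.IdleState;
    import io.netty.handler.timeout.IdleStateEvent;
    import io.netty.handler.timeout.IdleStateHandler;

    public class KeepAliveTimeoutInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            // Fires an IdleStateEvent when nothing has been read or written for 15 seconds.
            ch.pipeline().addLast("idle", new IdleStateHandler(0, 0, 15));
            ch.pipeline().addLast("codec", new HttpServerCodec());
            // Closes the kept-alive connection when that idle event arrives.
            ch.pipeline().addLast("idleCloser", new ChannelDuplexHandler() {
                @Override
                public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                    if (evt instanceof IdleStateEvent
                            && ((IdleStateEvent) evt).state() == IdleState.ALL_IDLE) {
                        ctx.close();
                    } else {
                        super.userEventTriggered(ctx, evt);
                    }
                }
            });
            // ... the rest of the HTTP handlers go here
        }
    }

The IdleStateHandler keeps track of when the channel was last read from or written to, so you don't have to track "last used" timestamps yourself.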

Related

Isn't the HTTP keep-alive feature against three rules of thumb: asynchronous programming, reactive programming and scalability?

I know that in HTTP 1.1 keep-alive is the default behaviour, unless the client explicitly asks the server to close the connection by including a Connection: close header in its request, or the server decides to include a Connection: close header in its response. I am wondering whether this isn't something of an obstacle to scalability when growing servers horizontally.
My scenario: we are developing all new services following microservices patterns, in either Java or Python. It is desirable that we can design and implement them in such a way that we can scale horizontally. For instance, I can use Docker in order to easily scale up, or use Spring Boot Cloud Config. Whatever the physical host implementation, the basic idea is to favour scalability.
My understanding: I must keep server and client as agnostic as possible, and when I set up HTTP keep-alive I understand there is an advantage in reusing the same HTTP connection (and saving some CPU), but I guess I am forcing the client (e.g. another service) to keep using the same connection, which may reduce the advantage of having several Docker instances of the same service, since I am encouraging the client to keep consuming the same initial connection.
Assuming my understanding is correct, it seems like a bad idea, since we develop services whose responses can be reused by different consumers with different approaches: some consumers consume asynchronously or follow reactive design paradigms, which makes me question keeping the same connection alive. In practical terms: the connection should be freed as soon as possible in order to really balance the demand over all providers.
***edited after first comment
Let's assume I have multiple different consumer services (CS1, CS2 ... CSn) connecting to a single load balancer instance (LB), which forwards the requests to multiple Docker containers running the same provider service (D1, D2 ... Dn). Since keep-alive is the default behaviour in HTTP/1.1, we have keep-alive in effect on every connection (both between CSx and LB and between LB and Dx). As far as I know, the only advantage of keep-alive is saving the CPU cost of opening/closing a connection. If I send Connection: close after each request there is no advantage at all in using keep-alive. If, on the other hand, I don't send Connection: close, it means I encourage LB to stay connected to a specific Dx, using exactly the same connection for a while, right? (I choose the word 'encourage' here because I guess 'force' might not be the appropriate one, since keep-alive has a timeout and LB might route to another Dx after it anyway.) So at some moment I have CS1 -> LB -> D1 kept alive and persisted for a while, right? Coming back to my original question, isn't that against the idea of the asynchronous/parallel/reactive paradigm? For instance, I have a scenario where a single consumer service calls another service a few times before returning a single answer to a page. Today we do it sequentially, but if we decide to call in parallel then, depending on the first answer, either there is already an answer for the page, or we compose an answer for the page from all of them and I don't care about the order. The caller service waits for every answer before returning to a controller, and the order doesn't matter. Isn't it strange that I have keep-alive = true?
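(Just to make that parallel-call idea concrete, a rough Java sketch of the pattern; callProvider and the request parts are made up for illustration, and each call could land on a different provider instance behind the load balancer:)

    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class ParallelCallsSketch {

        // Stand-in for an HTTP call to the provider service (hypothetical).
        static String callProvider(String request) {
            return "answer for " + request;
        }

        public static void main(String[] args) {
            // Fire the calls in parallel instead of sequentially.
            List<CompletableFuture<String>> calls = List.of(
                    CompletableFuture.supplyAsync(() -> callProvider("part-1")),
                    CompletableFuture.supplyAsync(() -> callProvider("part-2")),
                    CompletableFuture.supplyAsync(() -> callProvider("part-3")));

            // Wait for every answer; the completion order doesn't matter.
            CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();

            calls.forEach(c -> System.out.println(c.join()));
        }
    }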
I am forcing the client (e.g. another service) to keep using the same connection
You are not forcing. The client can easily avoid persistent connections by sending HTTP/1.0 and/or Connection: close. There are many practical HTTP applications that work just like that.
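As a hypothetical illustration of that point in plain Java, a client can opt out of a persistent connection on a per-request basis simply by setting the header itself (the endpoint URL below is made up):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class NoKeepAliveClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://provider-service.local/api/resource"); // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Explicitly opt out of HTTP/1.1 keep-alive for this request:
            // the server should close the connection after the response.
            conn.setRequestProperty("Connection", "close");

            System.out.println("Status: " + conn.getResponseCode());
            try (InputStream in = conn.getInputStream()) {
                in.readAllBytes(); // drain the body
            }
            conn.disconnect();
        }
    }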
keep using the same connection, which may reduce the advantage of several Docker instances of the same service, since I am encouraging the client to keep consuming the same initial connection
Assuming your load balancer works well, it will usually distribute connections evenly across your instances. Of course, this may not work when you only have a few connections altogether, but a few connections can hardly pose a scalability or performance problem.

gSoap: is the keep-alive header mandatory for asynchronous messages?

Initially, I had a problem with the keep-alive option enabled (it blocks the next clients' calls; only the first call receives an answer).
And now, I need to implement some asynchronous web services using gSoap.
So am I obliged to enable keep-alive in order to implement asynchronous web services?
Thank you a lot!
To give some background, establishing a TCP connection has a significant setup overhead. The purpose of keep-alive is to reduce latency by avoiding that overhead on subsequent requests: the already open TCP connection is reused instead of constructing a new connection completely from scratch.
You can get the functionality of a web service without using keep alive (after all, keep alive was introduced in HTTP/1.1, and HTTP/1.0 has worked for a long time without keep alive). However, you will definitely experience worse performance than if you properly support keep alive. It should also be noted that, when it comes to establishing connections on mobile, tearing down previous connections and creating new connections completely from scratch rather than keeping a connection open and reusing it may also have implications for the battery. In particular, closing and opening a connection may cause the radio to go to sleep and then wake up again, and the radio usually spends more power when it transitions from sleep to wake than in the steady state.
Your service should be multithreaded to support multiple clients; the gSoap documentation explains how here: http://www.cs.fsu.edu/~engelen/soapdoc2.html#tth_sEc19.11

Implementing a background process responding to the client in an Atmosphere + Netty/Jetty application

We have a requirement to support 10k+ users, where every user initiates a request and waits for a response from the server (the response can take as long as 20-30 seconds to arrive). It is only one request from the client, and after lengthy processing by the server a response will be transmitted and then the connection will be closed.
In the background, the server will do some DB searches and wait for other background processes to notify it of completion before responding to the client.
After doing some research I figured out we will need to use something like the Atmosphere framework to support WebSockets/SSE events/long polling, along with an asynchronous server like Netty (=> Nettosphere) or Jetty.
As for my experience - mostly Java EE world and Tomcat server.
My questions are:
What will be easier to implement given my experience and our requirements: Atmosphere + Netty or Atmosphere + Jetty? Which one scales better, has an easier learning curve, and makes it easier to integrate other Java technologies?
How do you implement in Atmosphere a response that is sent only to the originating client and not broadcast to the rest of the clients? (All the examples I found broadcast.)
How can I implement our response flow in Netty (or Jetty) when using the Atmosphere framework? I.e., the client sends a request; after it is received by the server some background processes are run, and when they finish I need to locate the connection and transmit the response. Is that achievable?
Some thoughts:
At 10k+ users, with 20-30 second response latency, you likely hit file descriptor limits if using just 1 network interface. Consider a solution that uses multiple network interfaces.
Your described request/response flow can be handled entirely with standard Servlet 3.0, standard HTTP/1.1, async request handling, and large timeouts (see the sketch after these notes).
If your clients are web browsers, and you don't start sending a response from the server until the 20-30 second window, you might hit browser idle timeouts.
Atmosphere and Cometd do the same things, supporting long duration connections, with connection technique fallbacks, and with logical channel APIs.
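To make that Servlet 3.0 suggestion concrete, here is a rough sketch under stated assumptions: the servlet name, URL pattern, pool size and doDatabaseSearchAndWait placeholder are all invented, and real code needs proper error handling and a pool sized to your workload.

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/work", asyncSupported = true)
    public class LongTaskServlet extends HttpServlet {

        // Hypothetical worker pool for the 20-30 second background jobs.
        private final ExecutorService workers = Executors.newFixedThreadPool(50);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            // Detach the request from the container thread so it is free to serve others.
            final AsyncContext ctx = req.startAsync();
            ctx.setTimeout(60_000); // generous timeout, longer than the expected 20-30s

            workers.submit(() -> {
                String result = doDatabaseSearchAndWait(); // placeholder for the real work
                try {
                    ctx.getResponse().getWriter().write(result);
                } catch (IOException ignored) {
                } finally {
                    ctx.complete(); // flushes the response and releases the connection
                }
            });
        }

        private String doDatabaseSearchAndWait() {
            return "done";
        }
    }

Because the AsyncContext is tied to the originating request, the response naturally goes only to that client rather than being broadcast, which is also what the third question above is asking for.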
I believe the Akka framework will handle this sort of need. I am looking at using it to handle scaling issues, possibly with RabbitMQ to help offload work to other servers that may be added later to scale as needed.

HTTP Keep-Alive in large web applications

I have a web application deployed on IIS 7.0. The application is accessed by a large number of users and manipulates large amounts of data. My question concerns the HTTP Keep-Alive option, which is set to true by default.
Is it a better approach to set HTTP Keep-Alive to false or to true?
And if true, is it a good approach to use a timeout?
Keep-alive is normally there to handle the requests that immediately follow an HTML request. Say that on the first visit to your site I get an HTML page with 5 CSS files, 5 JS files and 25 images; I will use my HTTP connection, which is still alive, to request these things (well, depending on the browser, I may use 3 connections to speed this up).
To handle this we usually use a keep-alive timeout of 2s or 3s. A longer keep-alive means the connection is waiting for the next page that the user may request. That may be a valid way of thinking: next time the user wants a page, we avoid losing time establishing the HTTP connection (and this can be the longest part of the request/response time). But for your server it means that most of the HTTP connections it is handling are doing... nothing. And you will reach your connection limit (W3SVC/MaxConnections, with a ridiculously low default of 10) with connections that are doing nothing. Really bad. So long keep-alives need big web servers and should be used only if your application really needs them.
If you use keep-alive on a 'classical' website you must change the connection timeout (2 minutes by default). In Apache you would have two settings, a keep-alive timeout (5s by default) and a connection timeout (2 minutes). In IIS it seems the same timeout setting is used for both, so do not set it to 2s (a client that is really slow in sending its request would time out); something like 10s is maybe enough. One answer is to disallow keep-alive and make the browser open more connections. Another answer is to use a modern web server (nginx or Cherokee, for example) which handles keep-alive connections in a more elegant and resource-frugal way than Apache or IIS.
Even if you do not use keep-alive, why wait 2 minutes for a client timeout? That is certainly too high; decrease it to something like 60s.
Then you should check several settings related to timeouts (ConnectionTimeout, HeaderWaitTimeout, MinFileBytesPerSec) and this nice answer on performance settings in the registry.
This article will bring more insight; don't forget to check the "How do we fix it?" section:
http://mocko.org.uk/b/2011/01/23/http-keepalive-considered-harmful/
I think it's not a good idea to keep all users connected.
Because:
A user can simply open your site and not use it, so why should we keep the connection alive for a long time?
It's hard to keep many connections open (it takes more memory).
Instead, use a connection timeout (5 minutes at most will be OK).
BUT: if your application is a live chat, you should keep all connections alive. In that case it is better to use Ajax long-polling requests + Node.js + some fast NoSQL DB to store the chat messages.

Are socket connections faster than http on Blackberry?

I'm writing an app for Blackberry that was originally implemented in standard J2ME. The network connection was done using Connector.open("socket://...:80/...") instead of http://
Now, I've implemented the connection using both methods, and it seems like sometimes the socket method is more responsive, and sometimes it doesn't work at all. Is there a significant difference between the two? Mostly what I'm trying to achieve is responsiveness from the connection, to get a smooth progress bar.
BlackBerry's implementations of http and https provide more options for connecting to the target server than socket does and, of course, implement all the HTTP protocol handling for you. I've not benchmarked them, but it makes a certain amount of sense that direct TCP via socket would be quicker in some cases, especially if what is listening on port 80 isn't an HTTP server (no protocol overhead).
I've had difficulty in the past with different network providers, some requiring deviceside=true others deviceside=false, and no real way to know until the first support call for that network came in.
Mostly what I'm trying to achieve is responsiveness from the connection to get a smooth progress bar.
Pardon my saying so, but a "smooth progress bar" is "gilding the lily" - nice to have and look at, but not critical to the application's function, reliability or robustness. Go with what is more robust and reduces code size - likely http in this case.
Since both operate over a network I don't think you can guarantee a smooth progress bar. You might have more chance of that if you remind the person to stay in one place so you have a chance of a consistent connection ;)
There is less overhead with a socket connection than an HTTP one. In fact, HTTP connections run over the socket connection. You can take advantage of the reduced overhead of the socket connection to appear more responsive, but you will likely have more work to do than you would with HTTP. The API is more low-level so coding is more complex.
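To give a feel for the difference in effort, here is a rough Java ME sketch of the two approaches. The host and path are placeholders, on BlackBerry the connection string usually also needs a transport suffix such as ";deviceside=true" (see the earlier comment about network providers), and the socket variant has to speak enough raw HTTP by itself.

    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;
    import javax.microedition.io.SocketConnection;

    public class ConnectionSketch {

        // High-level: HttpConnection does the protocol work for you.
        static InputStream viaHttp() throws Exception {
            HttpConnection conn = (HttpConnection) Connector.open("http://example.com/data");
            conn.setRequestMethod(HttpConnection.GET);
            return conn.openInputStream(); // status line and headers are parsed for you
        }

        // Low-level: with a socket you write and parse the HTTP exchange yourself.
        static InputStream viaSocket() throws Exception {
            SocketConnection sock = (SocketConnection) Connector.open("socket://example.com:80");
            OutputStream out = sock.openOutputStream();
            out.write("GET /data HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n".getBytes());
            out.flush();
            return sock.openInputStream(); // the response, including headers, is yours to parse
        }
    }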
One difference between a socket and an HTTP connection on the BlackBerry is that HTTP connections may be transparently routed via an HTTP proxy in the case of BES and BIS connections.
In theory sockets will be faster, but then you're responsible for managing the overhead of rolling your own protocol (depending on complexity). Though sockets are more lightweight, I've found that HTTP and all that comes with it greatly reduces the headache.
