gSoap: is the keep-alive header mandatory for asynchronous messages?

Initially, I had a problem with the keep-alive option enabled (it blocks subsequent client calls; only the first call receives an answer).
And now, I need to implement some asynchronous web services using gSoap.
So am I obliged to enable keep-alive in order to implement asynchronous web services?
Thanks a lot!

To give some background, establishing a TCP connection has a significant setup overhead. The purpose of keep-alive is to reduce latency by reusing the already-open TCP connection for subsequent requests instead of constructing a new connection from scratch each time.
You can get the functionality of a web service without keep-alive (after all, persistent connections only became the default in HTTP/1.1, and HTTP/1.0 worked for a long time without them). However, you will likely see worse performance than if you properly support keep-alive. It is also worth noting that on mobile, tearing down connections and creating new ones from scratch rather than keeping a connection open and reusing it may have implications for the battery: closing and opening a connection may cause the radio to go to sleep and then wake up again, and the radio usually spends more power transitioning from sleep to wake than it does in the steady state.

Your service should be multithreaded to support multiple clients; the gSOAP documentation explains how here: http://www.cs.fsu.edu/~engelen/soapdoc2.html#tth_sEc19.11
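For reference, a minimal sketch of the multithreaded pattern from that section of the documentation, assuming soapH.h is the header generated for your own service and 8080 is just a placeholder port; the SOAP_IO_KEEPALIVE flags are optional and can be dropped to run without keep-alive:

    /* Minimal multithreaded gSOAP server sketch, following the pattern in the
     * documentation section linked above. soapH.h is the header generated for
     * your own service; port 8080 is just a placeholder. */
    #include "soapH.h"
    #include <pthread.h>
    #include <stdio.h>

    static void *process_request(void *arg)
    {
        struct soap *tsoap = (struct soap *)arg;
        pthread_detach(pthread_self());
        soap_serve(tsoap);    /* serve the request(s) on this connection */
        soap_destroy(tsoap);  /* clean up deserialized class instances */
        soap_end(tsoap);      /* clean up deserialized data and temporaries */
        soap_free(tsoap);     /* free the thread's copy of the context */
        return NULL;
    }

    int main(void)
    {
        struct soap soap;
        /* Keep-alive is optional here: pass 0, 0 instead to run without it. */
        soap_init2(&soap, SOAP_IO_KEEPALIVE, SOAP_IO_KEEPALIVE);

        if (!soap_valid_socket(soap_bind(&soap, NULL, 8080, 100)))
        {
            soap_print_fault(&soap, stderr);
            return 1;
        }
        for (;;)
        {
            pthread_t tid;
            if (!soap_valid_socket(soap_accept(&soap)))
            {
                soap_print_fault(&soap, stderr);
                break;
            }
            /* Hand each client its own copy of the context and its own thread,
             * so a kept-alive connection only ties up that one thread. */
            pthread_create(&tid, NULL, process_request, soap_copy(&soap));
        }
        soap_done(&soap); /* release the master context */
        return 0;
    }

With this shape, enabling keep-alive should no longer block other clients, because each kept-alive connection only occupies its own worker thread.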

Related

In the TechEmpower benchmarks, the Netty application does not honor keepalive connections—is this on purpose?

I am in the process of load testing a Netty application I have. I have been looking to various authorities to try to follow best practices both to learn more about Netty and just to become a better developer.
One of those authorities I'm consulting is the blazingly fast Netty application that is part of the TechEmpower framework benchmarks.
I've noticed that this application does not honor the Connection: Keep-Alive header that is sent as part of the test. Specifically, at the end of any given write operation, it closes the connection, even though the test requests that connections be kept alive. This is of course permitted, but oftentimes seemingly odd choices like this exist for good performance reasons. Is there a reason the Netty application here chooses to close every connection instead of keeping them alive?
As far as I can see, it only does this when it receives a request for an unexpected path, in which case it just closes the connection after the response has been written.

Isn't the HTTP keep-alive feature against three rules of thumb: asynchronous, reactive programming, and scalability?

I know that in HTTP/1.1, keep-alive is the default behavior, unless the client explicitly asks the server to close the connection by including a Connection: close header in its request, or the server decides to include a Connection: close header in its response. I am wondering whether this isn't something of an obstacle to scalability when growing servers horizontally.
My scenario: we are developing all new services following microservices patterns, either in Java or Python. It is desirable that we design and implement them in such a way that we can scale horizontally. For instance, I can use Docker in order to easily scale up, or use Spring Boot Cloud Config. Whatever the physical host implementation, the basic idea is to favour scalability.
My understanding: I must keep server and client as agnostic as possible, and when I set up HTTP keep-alive I understand there will be an advantage in reusing the same HTTP connection (and saving some CPU), but I guess I am forcing the client (e.g. another service) to keep using the same connection, which may reduce the advantage of having several Docker instances of the same service, since I will be promoting the client to keep consuming the same initial connection.
Assuming my understanding is correct, I assume it is not a good idea, since we develop services providing responses that can be reused by different consumers with different approaches: some consumers may consume asynchronously or follow reactive design paradigms, which makes me wonder whether keeping the same connection alive makes sense. In practical terms: the connection used should be freed as soon as possible in order to really balance the demand over all providers.
***edited after first comment
Let's assume I have multiple different consumer services (CS1, CS2 ... CSn) connecting to a single load balancer instance (LB), which forwards each request to multiple Docker containers running the same provider service (D1, D2 ... Dn). Since keep-alive is the default behaviour in HTTP/1.1+, we have keep-alive enabled on every connection (both between CSx and LB and between LB and Dx). As far as I know, the only advantage of keep-alive is saving the CPU cost of opening and closing connections. If I send Connection: close after each request, there is no advantage at all in using keep-alive. If I don't send Connection: close, it means I promote the LB to stay connected to a specific Dx, using exactly the same connection for a while, right? (I choose the word promote because I guess force might not be the appropriate one, since keep-alive has a timeout and the LB might then route to another Dx anyway.) So at some moment I have CS1 -> LB -> D1 kept alive and persisted for a while, right?

Coming back to my original question, isn't that against the idea of the asynchronous/parallel/reactive paradigm? For instance, I have a scenario where a single consumer service calls another service a few times before returning a single answer to a page. Today we do it sequentially, but we may decide to call in parallel; depending on the first answer there may already be an answer for the page, or we may compose an answer from all of them, but I don't care about the order. The caller service will wait for every answer before returning to a controller, and the order doesn't matter. Isn't it strange that I have keep-alive = true?
I am forcing the client (e.g. another service) to keep using the same connection
You are not forcing anything. The client can easily avoid persistent connections by sending HTTP/1.0 and/or Connection: close. There are many practical HTTP applications that work exactly like that.
keep using the same connection, which may reduce the advantage of having several Docker instances of the same service, since I will be promoting the client to keep consuming the same initial connection
Assuming your load balancer works well, it will usually distribute connections evenly across your instances. Of course, this may not work when you only have a few connections altogether, but a few connections can hardly pose a scalability or performance problem.

Managing server's HTTP keep-alive timeout with Netty

I am running an application server using the Play! Framework, which uses Netty for the actual IO heavy lifting.
The HTTP connections have keep-alive turned on (which is the default for HTTP 1.1), and I'm happy with this. However, I would like these kept-alive connections to time out after a certain amount of inactivity (e.g. 15 seconds). As I understand it, this would involve the server closing the connection actively.
This seems like a standard config option, and indeed there is such a setting for Apache. However, I can't see any way to do this in Netty/Play. It seems like the connections stay open until either the client closes them, or the socket times out at the OS level (about two hours).
Is this functionality supported out of the box? And if not, is it feasible to implement by hand (in particular, how do I know when a Channel was last used, or even if it's in use right now)?
You can put an IdleStateHandler in the application's channel pipeline.

Is there an HTTP header field / hack to tell the browser NOT to pipeline its requests?

I am implementing a minimalistic web server application on a microcontroller. When I have several images (or CSS/JS) on the web page, the browser creates several connections and fetches them. But the microcontroller cannot keep up with this. Is there a way to tell the browser to stop pipelining and fetch them one by one?
Note: "Connection: close" is already in place.
I think Connection: close is exactly the wrong message. When the browser creates multiple connections, it is precisely not pipelining its requests, so it seems to me that you want the browser to pipeline instead of creating parallel connections.
So one step towards that would be to use HTTP 1.1, and keep the connection open. The browser would then reuse the TCP connection for further requests. This should allow the microcontroller to catch up.
Now, the browser might still try to create additional, parallel connections. The best reaction to that is to not accept any of these connections. So limit the number of parallel connections that you are serving (independent of client), and only read new requests when you are done reading the previous ones. In doing so, prefer to read from established connections over accepting new connections.
If you have access to the TCP stack of the controller, you might be able to tell what host a connection comes from, so you can accept connections from other browsers while limiting the number of connections from the same browser (something that you cannot do in the regular socket API).
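As an illustration of the approach described above, here is a rough sketch using plain POSIX sockets and poll(); a real microcontroller TCP stack will look different, and MAX_CONNS, the port, and the buffer size are placeholders. The listening socket is only polled while a slot is free, so established connections are always served before new ones are accepted:

    /* Sketch: serve at most MAX_CONNS clients at a time and prefer reading
     * from established connections over accepting new ones. Plain POSIX
     * sockets and poll() are used for illustration only. */
    #include <poll.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define MAX_CONNS 2            /* placeholder limit */

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 1);       /* tiny backlog: excess clients simply wait */

        int conns[MAX_CONNS], nconns = 0;

        for (;;)
        {
            struct pollfd fds[MAX_CONNS + 1];
            int nfds = 0, listener_idx = -1;

            for (int i = 0; i < nconns; i++)
                fds[nfds++] = (struct pollfd){ .fd = conns[i], .events = POLLIN };

            /* Only watch the listener while a slot is free, so established
             * connections always get served before new ones are accepted. */
            if (nconns < MAX_CONNS)
            {
                listener_idx = nfds;
                fds[nfds++] = (struct pollfd){ .fd = listener, .events = POLLIN };
            }

            poll(fds, nfds, -1);

            for (int i = 0; i < nconns; i++)
            {
                if (!(fds[i].revents & (POLLIN | POLLHUP)))
                    continue;
                char buf[512];
                ssize_t n = read(conns[i], buf, sizeof buf);
                if (n <= 0)             /* client closed or errored */
                {
                    close(conns[i]);
                    conns[i] = -1;      /* mark the slot for removal below */
                }
                /* else: parse the request here and write the response */
            }

            int kept = 0;               /* drop the slots we marked above */
            for (int i = 0; i < nconns; i++)
                if (conns[i] != -1)
                    conns[kept++] = conns[i];
            nconns = kept;

            if (listener_idx >= 0 && (fds[listener_idx].revents & POLLIN) && nconns < MAX_CONNS)
                conns[nconns++] = accept(listener, NULL, NULL);
        }
    }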
"Pipelining" is something else; it means that the user agent sends additional requests on the same connection although the first one didn't complete yet (see http://greenbytes.de/tech/webdav/rfc2616.html#pipelining).
"Connection: close" doesn't seem to be relevant; that being said: is there a reason why you don't want the connection reused?
With respect to your question: no, I don't think you can prevent clients from doing that. Did you try limiting the maximum number of open connections on your server?
Same problem here... However, Firefox loads my site very fast, unlike Opera. I have not come up with anything better than rejecting connections at the initial stage (SYN) by simply answering with an RST flag. But that apparently doesn't suit Opera.
My device supports only two simultaneous connections.

TCP Connection Life

How long can I expect a client/server TCP connection to last in the wild?
I want it to stay permanently connected, but things happen, so the client will have to reconnect. At what point do I say that there's a problem in the code rather than there's a problem with some external equipment?
I agree with Zan Lynx. There's no guarantee, but you can keep a connection alive almost indefinitely by sending data over it, assuming there are no connectivity or bandwidth issues.
Generally I've gone for the application-level keep-alive approach, although that has usually been because it was in the client spec, so I've had to do it. But just send some short piece of data every minute or two, to which you expect some sort of acknowledgement.
Whether you count one failure to acknowledge as the connection having failed is up to you. Generally this is what I have done in the past, although there was a case where I had to wait for three failed responses in a row before dropping the connection, because the app at the other end was extremely flaky about responding to "are you there?" requests.
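A minimal sketch of that pattern over a plain TCP socket; the one-byte ping/ack exchange, the 60-second interval, and the three-strike threshold are all made-up placeholders rather than any standard protocol:

    /* Application-level keep-alive sketch: send a short probe every minute and
     * treat three consecutive missed acknowledgements as a dead link. The PING
     * and ACK bytes, the interval and the threshold are all placeholders. */
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define PING       "?"   /* "are you there?" */
    #define ACK        '!'   /* expected reply */
    #define INTERVAL_S 60
    #define MAX_MISSES 3

    /* Blocks, probing the peer periodically; returns -1 once the link looks dead. */
    int keepalive_loop(int sock)
    {
        int misses = 0;
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv); /* 5 s reply timeout */

        while (misses < MAX_MISSES)
        {
            sleep(INTERVAL_S);
            if (send(sock, PING, 1, 0) != 1)
                break;                    /* send failed: connection is gone */

            char reply = 0;
            if (recv(sock, &reply, 1, 0) == 1 && reply == ACK)
                misses = 0;               /* peer answered, reset the count */
            else
                misses++;                 /* no (or wrong) answer this round */
        }
        return -1;                        /* give up and let the caller reconnect */
    }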
If the connection fails, which at some point it probably will, even with machines on the same network, then just try to re-establish it. If that fails a set number of times, then you have a problem. If your connection persistently fails after it's been connected for a while, then again, you have a problem. Most likely in both cases it's some network issue rather than your code, or maybe a problem with the TCP/IP stack on your machine (it has been known: I encountered issues with this on an old version of QNX, which would just randomly fall over). Having said that, you might have a software problem, and the only way to know for sure is often to attach a debugger or get some logging in there. E.g. if you can always connect successfully, but after a time you stop getting ACKs, even after reconnecting, then maybe your server is deadlocking, or getting stuck in a loop or something.
What's really useful is to set up a series of long-running tests under a variety of load conditions, from just sending the keep-alive "are you there?"/ack requests and responses to absolutely battering the server. This will generally give you more confidence in your software components, and can be useful in shaking out some really weird problems which won't necessarily cause trouble with your connection, although they might result in problems with the transactions taking place. For example, I was once writing a telecoms application server that provided services such as number translation, and we'd just leave it running for days at a time. The thing was that when Saturday came round, for the whole day, it would reject every call request that came in, which amounted to millions of calls, and we had no idea why. It turned out to be a single typo in some date conversion code that only caused a problem on Saturdays.
Hope that helps.
I think the most important idea here is theory vs. practice.
The original theory was that the connections had no lifetimes. If you had a connection, it stayed open forever, even if there was no traffic, until an event caused it to close.
The new theory is that most OS releases have turned on the keep-alive timer. This means that connections will last forever, as long as the system on the other end responds to an occasional TCP-level exchange.
In reality, many connections will be terminated after a while, for a variety of reasons and under a variety of circumstances.
Two really good examples: the remote client is using DHCP, the lease expires, and the IP address changes.
The other is firewalls, which seem to be increasingly intelligent, can distinguish keep-alive traffic from real data, and will close connections based on any number of high-level criteria, especially idle time.
How you want to implement reconnect logic depends a lot on your architecture, the working environment, and your performance goals.
It shouldn't really matter; you should design your code to automatically reconnect if that is the desired behavior.
There really is no way to tell. There is nothing inherent to TCP that would cause the connection to just drop after a certain amount of time. Someone on a reliable connection could have years of uptime, while someone on a different connection could have to reconnect every 5 minutes. There is no way to tell or even guess.
You will need some data going over the connection periodically to keep it alive; many OSes or firewalls will drop an inactive connection.
Pick a value. One drop every hour is probably fine. Ten unexpected connection drops in 5 minutes probably indicates a problem.
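A rough sketch of that kind of reconnect policy; connect_fn is a hypothetical stand-in for whatever establishes your connection, and the retry limit and back-off values are placeholders:

    /* Reconnect sketch: retry with a capped back-off and give up after a fixed
     * number of consecutive failures, at which point the problem is probably
     * not transient. All the numbers are placeholders. */
    #include <unistd.h>

    #define MAX_RETRIES   10
    #define MAX_BACKOFF_S 60

    /* connect_fn is hypothetical: whatever establishes your connection,
     * returning a connected descriptor or -1 on failure. */
    int reconnect(int (*connect_fn)(void))
    {
        unsigned backoff = 1;
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++)
        {
            int sock = connect_fn();
            if (sock >= 0)
                return sock;              /* reconnected successfully */
            sleep(backoff);               /* wait before the next attempt */
            if (backoff < MAX_BACKOFF_S)
                backoff *= 2;             /* exponential back-off, capped */
        }
        return -1;   /* persistent failure: treat it as a real problem, not a blip */
    }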
TCP connections will generally last about two hours without any traffic. Either end can send keep-alive packets, which are, I think, just an ACK on the last received packet. This can usually be set per socket or by default on every TCP connection.
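For what it's worth, a sketch of enabling TCP-level keep-alive on a single socket; SO_KEEPALIVE itself is portable, while the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs shown here are Linux-specific and the values are only examples:

    /* Enable TCP keep-alive on one socket. SO_KEEPALIVE is portable; the
     * TCP_KEEP* options below are Linux-specific and the values are examples. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_tcp_keepalive(int sock)
    {
        int on = 1;
        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;

        int idle  = 600;  /* start probing after 10 minutes of idleness     */
        int intvl = 60;   /* then probe once a minute                       */
        int cnt   = 5;    /* declare the peer dead after 5 unanswered probes */
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);
        return 0;
    }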
An application-level keep-alive is also possible. For a telnet-style protocol like FTP, SMTP, POP or IMAP, that might mean sending a return/newline and getting back a command prompt.
