Tomcat NIO/RESTEasy disconnects TCP after each request

I'm using RESTEasy asynchronous (Comet) IO support on Tomcat 6 via the NIO Connector. Currently, TCP connections are getting dropped by the server after each response is sent back to the client.
All documentation I've read on HTTP Connector configuration for Tomcat suggests that it should keep connections alive by default, so I'm puzzled as to what the problem is.
Here's my connector config:
<Connector connectionTimeout="20000" port="6080"
emptySessionPath="true" enableLookups="false"
protocol="org.apache.coyote.http11.Http11NioProtocol"
acceptorThreadCount="4" pollerThreadCount="12"/>
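For reference, keep-alive behaviour on the connector can also be pinned down explicitly rather than relying on defaults. The attribute names below are standard Tomcat HTTP connector attributes; the values are illustrative only:
<Connector connectionTimeout="20000" port="6080"
emptySessionPath="true" enableLookups="false"
protocol="org.apache.coyote.http11.Http11NioProtocol"
acceptorThreadCount="4" pollerThreadCount="12"
keepAliveTimeout="20000" maxKeepAliveRequests="100"/>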
Thanks for any suggestions!

It turns out the root of the problem is elsewhere (still investigating; I'll post a separate question on that directly to avoid confusion!).
Tomcat is actually releasing the connections after a period of a few seconds rather than immediately after responding to the HTTP request. The client in this case is at fault: it creates a new TCP connection for each request rather than reusing the connections it has already established.
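For anyone hitting the same symptom from the client side: the JDK's HttpURLConnection reuses the underlying socket transparently, provided http.keepAlive is not disabled and each response body is fully read and closed. A minimal sketch (the URL is a hypothetical placeholder):
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:6080/some/resource"); // hypothetical endpoint
        for (int i = 0; i < 2; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[4096];
            // Drain the body completely; otherwise the JDK cannot return
            // the socket to its keep-alive cache for reuse.
            while (in.read(buf) != -1) { /* discard */ }
            in.close(); // the second iteration should reuse the same TCP connection
        }
    }
}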

Related

Close HTTP request socket connection

I'm implementing an HTTP-over-TLS proxy server (sni-proxy) that makes two socket connections:
Client to ProxyServer
ProxyServer to TargetServer
and transfers data between the Client and the TargetServer (the TargetServer is detected using the server_name extension in the ClientHello).
The problem is that the client doesn't close the connection after the response has been received, so the proxy server keeps waiting for data to transfer and consumes resources after the request has completed.
What is the best practice for implementing this project?
The client behavior is perfectly normal: HTTP keep-alive inside the TLS connection, or maybe even a WebSocket connection. Given that the proxy transparently forwards the encrypted traffic, it is not possible to inspect the HTTP traffic to determine exactly when the connection can be closed. A good approach is therefore to keep the connection open as long as resources allow, and on resource shortage to close the connections which have been idle (no traffic) the longest.
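A rough sketch of that eviction policy, in Java for illustration (the touch/evict bookkeeping here is hypothetical; any per-tunnel activity timestamp would do):
import java.io.IOException;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdleReaper {
    // One entry per client<->target tunnel, refreshed whenever bytes
    // flow in either direction.
    private final Map<Socket, Long> lastActivity = new ConcurrentHashMap<>();

    public void touch(Socket s) {
        lastActivity.put(s, System.nanoTime());
    }

    // On resource shortage, close the tunnel that has been idle longest
    // (i.e. the entry with the oldest activity timestamp).
    public void evictLongestIdle() {
        lastActivity.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .ifPresent(e -> {
                    try {
                        e.getKey().close();
                    } catch (IOException ignored) {
                    }
                    lastActivity.remove(e.getKey());
                });
    }
}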

Does Nginx close client TCP connection before sending back the http response?

I found the following documentation from the Nginx website itself: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
Question:
The above point is not correct, right? Since HTTP is a synchronous protocol, after a client sends a request over an established TCP connection with the server (here the Nginx reverse proxy), the client expects a response on that TCP connection. If this is the case, the Nginx server cannot close the connection just after receiving the request, correct? Shouldn't the Nginx server keep the connection open until it gets a response from the upstream server connection, and relay that data back over the same client connection?
I believe the way that paragraph is phrased is inaccurate.
The NGINX blog post mentioned in the question is referencing the behavior of UDP in the context of Direct Server Return (DSR). It is not part of their official documentation. I suspect that the author didn't do a good job of communicating how a conventional layer 7 reverse proxy connection works because they were focusing on explaining how DSR works.

IBrowse and persistent connection per client process

I need to talk to a SOAP service from Erlang. The SOAP implementation is not the subject here; I have a problem with the HTTP requests on the client side.
I use IBrowse as the HTTP client. This SOAP service uses a specific authorization mechanism which ties an open session to a client connection (socket). So the client should use only one persistent connection (socket) to the server; if it tries to send a request via another socket (e.g., a connection from a pool), authorization will fail.
I use IBrowse in this way:
Spawn a connection process to the server (ibrowse:spawn_worker_process/1)
Send requests to the server via the spawned process with {max_sessions, 1} and {max_pipeline_size, 0}.
If I understand the docs right, this should use one socket for the server connection with pipelining disabled; I also send the Connection: Keep-Alive header and explicitly set the HTTP version to 1.0. But my connection is always closed after the response is received.
How can I use IBrowse (or another http-client) the way I described above?
I think you could do that with hackney by reusing a connection.
gun is also quite a nice HTTP client; it's easy to use and keeps the connection open, but offers a little less connection control.

Apache Camel: why is TCP connection not closed after receiving 200 OK

We are using Apache Camel as an orchestration engine. Typically, the following scenario:
client sends HTTP request <-> CAMEL code <-> external server(s)
The ball starts to roll when our client sends an HTTP request to our CAMEL code.
The Camel code will trigger external servers via REST HTTP calls.
Eventually, the Camel code will send a reply back to the client.
As the last action before sending the response back to the client, the Camel code sends an HTTP GET towards an external server. So a TCP connection is set up first, then the data is sent. After some time (this might take up to 5 or 10 seconds), the external server replies with a 200 OK.
Problem: Camel does not send a TCP FIN to the external server after receiving the 200 OK. As a result, the TCP connection remains open; the external server then closes it itself after a 200-second timeout, but that means a TCP resource is tied up for 200 seconds.
So, at TCP level, it goes like this:
Camel <----------> external server
TCP SYN -->
<-- TCP SYN,ACK
TCP ACK -->
HTTP GET -->
<-- 200 OK
TCP ACK -->
<200 seconds later>
<-- TCP FIN,ACK
TCP ACK -->
Any idea how I can have Camel close the TCP connection after it has received the 200 OK?
Note: I tried adding the "Connection: close" header, but Camel did not add the header; it seemed to ignore it.
This was the code to add the header:
exchange.getOut().setHeader("Connection","Close");
I am using Camel 2.9.1 in a Spring framework with Eclipse IDE.
Unfortunately, I did not see another solution than to create a custom HttpHeaderFilterStrategy class which does not filter out the Connection header.
Then, before sending out my request to the external server, I set the header "Connection: close". As soon as that request gets its reply, the Camel code sends a TCP FIN, ACK to close the TCP connection.
More details:
1) create a custom HttpHeaderFilterStrategy class, eg: CustomHttpHeaderFilterStrategy (a sketch of this class follows these steps)
2) adapt the applicationContext.xml so it points to that class, eg:
<bean id="http" class="org.apache.camel.component.http.HttpComponent">
<property name="camelContext" ref="camel"/>
<property name="headerFilterStrategy" ref="myHeaderFilterStrategy"/>
</bean>
<bean id="myHeaderFilterStrategy" class="com.alu.iptc.com.CustomHttpHeaderFilterStrategy">
</bean>
3) adapt your code, so that the Connection: close header is set, eg:
exchange.getOut().setHeader("Connection","close");
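For completeness, a sketch of what step 1 can look like. This assumes Camel 2.x's stock HttpHeaderFilterStrategy registers the Connection header (lower-cased) in its out-filter, which is what it does out of the box:
import org.apache.camel.component.http.HttpHeaderFilterStrategy;

public class CustomHttpHeaderFilterStrategy extends HttpHeaderFilterStrategy {
    public CustomHttpHeaderFilterStrategy() {
        // The parent constructor populates the default filter sets;
        // removing "connection" lets a Connection header set on the
        // exchange pass through to the wire.
        getOutFilter().remove("connection");
    }
}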
HTTP/1.1 connections are kept alive after the first message for a while so that multiple files can be delivered over one TCP session, for performance reasons. Normally, an HTTP server might cut connections after a few seconds to save threads while still allowing multiple files to be downloaded. The Camel HTTP component will probably behave the same way.
http://en.wikipedia.org/wiki/HTTP_persistent_connection
The standard Java HTTP client which Camel relies on can be configured to use or not to use persistent connections; the default is true:
http://docs.oracle.com/javase/1.5.0/docs/guide/net/http-keepalive.html
Although I have not tried it, it should be possible to set a system property to configure this:
http.keepAlive=<boolean>
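If you go that route, note that the JDK's HTTP client reads this flag once, in a static initializer, so it has to be set before the first request goes out: either as -Dhttp.keepAlive=false on the command line, or very early in startup (sketch):
public class Bootstrap {
    public static void main(String[] args) {
        // Must run before the first HttpURLConnection is opened,
        // because the keep-alive flag is only read once.
        System.setProperty("http.keepAlive", "false");
        // ... start Camel / issue HTTP requests after this point
    }
}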
You should be able to set it on the camel context if you want
<camelContext>
<properties>
<property key="http.keepAlive" value="false"/>
</properties>
</camelContext>
Note that I have not tried it. If you make it work, it would be nice to hear the results!

IIS HTTP Keep-Alives

I am reading that Keep-Alives are meant for performance: connections don't need to be recreated, existing ones are simply reused. What if there is a traffic spike, will new connections be created?
Additionally, if I don't turn on Keep-Alive in a high-traffic environment, will the client side eventually run out of connections/socket ports? After all, a new connection has to be created for each HTTP/web request.
HTTP is a stateless protocol.
In HTTP 1.0 each request meant opening a new TCP connection.
That caused performance issues (e.g. having to redo the 3-way handshake for each GET or POST), so the Keep-Alive header was added to maintain the connection across requests, and in HTTP/1.1 persistent connections are the default.
This means that the connection is reused across requests.
I am not really familiar with IIS, but if there is a configuration to close the connection after each HTTP response, it will have an impact on performance.
Concerning running out of sockets/ports on the client side: that could occur if the client fires a huge number of requests and a new TCP connection must be opened per HTTP request.
After a while the ephemeral ports would be depleted.
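The difference is easy to see at socket level. A small sketch (host and path are placeholders) that sends two HTTP/1.1 requests over a single TCP connection; under HTTP/1.0 without the Keep-Alive header, the same exchange would have cost two connections and two 3-way handshakes:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PersistentConnectionDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) { // placeholder host
            OutputStream out = socket.getOutputStream();
            // HTTP/1.1 keeps the connection alive by default ...
            String first = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
            // ... and "Connection: close" on the last request makes the
            // server close the socket so the read loop below terminates.
            String second = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
            // Sent back-to-back for brevity (strictly, this is pipelining);
            // both requests travel over the same TCP connection.
            out.write(first.getBytes(StandardCharsets.US_ASCII));
            out.write(second.getBytes(StandardCharsets.US_ASCII));
            out.flush();
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n); // both responses, one connection
            }
        }
    }
}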
