I use the JMeter HTTP Sampler to test a sequence of HTTP requests and selected "Use KeepAlive". But in a few threads JMeter closed the connection with a TCP FIN before all the requests had been sent.
As the picture shows, 172.19.0.101 is JMeter and 172.19.0.111 is the server. The remaining requests can only be sent over a new connection, and they fall outside the session.
There can be two reasons for this:
First reason - timeout
The keep-alive timeout may have been reached (the default value is 60 seconds and it is configurable; if not configured, Tomcat uses the value of the connectionTimeout attribute on the connector).
The default connection timeout of Apache httpd 1.3 and 2.0 is as little as 15 seconds, and just 5 seconds for Apache httpd 2.2 and above.
I observed that the request got its response, and after about 10 seconds (15 -> 29 seconds) the FIN was sent to terminate the connection.
References:
https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#p-timeout
https://en.wikipedia.org/wiki/HTTP_persistent_connection
https://tomcat.apache.org/tomcat-7.0-doc/config/http.html
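For illustration, in Tomcat this timeout is set on the HTTP connector in server.xml; the attribute names below are from the Tomcat connector documentation linked above, while the values are only example figures, not recommendations:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="60000"
           maxKeepAliveRequests="100" />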
Second reason - 'max' Parameter
Maybe it reached the maximum number of requests that can be sent over a single persistent connection.
https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#p-max
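For context, the draft above describes a server advertising both limits in the Keep-Alive response header; a response using the 'max' parameter would look roughly like this (the numbers are just examples):
HTTP/1.1 200 OK
Keep-Alive: timeout=60, max=100
Connection: Keep-Alive
Once 'max' requests have been served on the connection, the server closes it even if the timeout has not yet been reached.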
Set Implementation in the HTTP Samplers to HttpClient4 and try again.
From the JMeter HTTP Sampler documentation:
JMeter sets the Connection: keep-alive header. This does not work properly with the default HTTP implementation, as connection re-use is not under user-control. It does work with the Apache HttpComponents HttpClient implementations.
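The implementation can be selected per sampler in the GUI or, if I recall correctly, globally via a property in jmeter.properties (treat the exact property name as something to verify against your JMeter version):
jmeter.httpsampler=HttpClient4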
In the jmeter.properties file shipped with JMeter 5.4.1 you will find the property that controls HTTP state across iterations:
# Reset HTTP State when starting a new Thread Group iteration which means:
# true means next iteration is associated to a new user
# false means next iteration is associated to same user
# true involves:
# - Closing opened connection
# - resetting SSL State
#httpclient.reset_state_on_thread_group_iteration=true
Set it to:
httpclient.reset_state_on_thread_group_iteration=false
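If you would rather not edit jmeter.properties itself, the same property can normally be overridden in user.properties or passed on the command line with -J, for example (testplan.jmx is just a placeholder name):
jmeter -n -t testplan.jmx -Jhttpclient.reset_state_on_thread_group_iteration=false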
Related
I have a DataPower MPGW service that takes in JSON POST and GET HTTPS requests. Persistent connections are enabled. It sets the backend URL using the dp routing-url variable. How do retries work for this? Is there some specific retry setting? Does it retry automatically up to a certain point? What if I don't want it to retry?
The backend app is taking about 1.5 minutes to return a 500 when it can't connect, but I want it to fail more quickly. I have the "backside timeout" set to 30 seconds. I'm wondering if it's because it's retrying a couple of times, but I can't find info on how retries work or are configured in this case.
I'm open to more answers, but what I found here seems to say that with persistent connections enabled, DataPower will retry after the backend timeout expires, up until the persistent connection timeout is reached.
Setup
Suppose I have two different HTTP clients using the same Squid instance.
The first client, ClientA, has an aggressive HTTP read/write and connection timeout of 5 seconds. The other client, ClientB, has a very relaxed timeout of 120 seconds.
My Squid server configuration looks like this:
connect_timeout 1 minute
read_timeout 1 minute
write_timeout 1 minute
Scenario 1
ClientA sends a request (through Squid) to ServerX, which waits 45 seconds before accepting the connection and then answers immediately.
Question 1
ClientA will time out after 5 seconds, but will Squid notice that and close the outbound connection, or will it wait for ServerX's response (roughly 40 seconds later) and fail to write the result back to ClientA, which is no longer listening?
Scenario 2
ClientB sends a request (through Squid) to ServerY, which waits 61 seconds before accepting the connection and then answers immediately.
Question 2
ClientB will NOT time out, but Squid should time out after 60 seconds and send an HTTP 408 Request Timeout to ClientB, right?
Global question
Is there any way to set up Squid so that the timeouts can be set per request instead of globally at the service level?
Important note: Squid's behavior regarding timeouts can change depending on the protocol used. In the case of HTTPS, it cannot inspect the tunneled connection's HTTP headers, so it cannot honor any of the Connection, Proxy-Connection, or Keep-Alive: timeout=xx values.
Scenario 1
Squid will close the outbound connection as soon as it notices the client going away.
Scenario 2
Squid will indeed time out, but the result will depend on the protocol used. In the case of HTTP, it will return "504 Gateway Time-out" and add an additional header "X-Squid-Error: ERR_READ_TIMEOUT 0".
In the case of HTTPS, it will simply close the connection, because it cannot read or inject headers that would be meaningful to the client.
Global question
Not in the Squid configuration itself. If you want fine-grained control over your persistent connections' timeouts, you should implement it in your clients.
For the record, here are the time-out settings squid will try to honor for persistent connections:
client <-> squid:
client_persistent_connections, persistent_request_timeout
squid <-> server:
server_persistent_connections, pconn_timeout, read_timeout (for HTTPS)
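As a rough sketch, those directives would be tuned in squid.conf like this (the values are only examples, not recommendations):
client_persistent_connections on
server_persistent_connections on
persistent_request_timeout 1 minute
pconn_timeout 1 minute
read_timeout 1 minute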
After experimenting with my JBoss 5.1 server I noticed that the HTTP responses contain the Connection: close header if the current thread is the last available one.
For instance, if I set maxThreads="4" in the HTTP connector config and perform more than 4 simultaneous requests, then:
the first 3 responses do not contain any Connection header (meaning the connection can be reused by the client for future requests)
all subsequent responses contain the Connection: close header (meaning the client will have to create a new connection on a different port for the next request)
I could not find any documentation for that. Is this behaviour explained somewhere? And is it possible to avoid it (i.e. prevent this Connection: close header) so that clients can reuse the sockets for future requests?
I had a quick look at the Tomcat code (on which JBossWeb, the web container of JBoss, is based).
It shows that Http11Processor does not return from its process method while the connection is allowed to be kept alive. So a kept-alive connection occupies a thread from the HTTP pool for as long as the connection is open.
To prevent the pool from being drained by inactive kept-alive connections, the thread pool most probably (I have spotted parts of the code that may do it, in the PooledSender) disables keep-alive for the last available thread in its pool before starting to process the new request. Otherwise it would be too easy to block Tomcat/JBoss by opening a limited number of kept-alive connections.
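A quick way to see this from the client side is to fire a handful of concurrent requests and print the Connection response header. The following is only a rough probe (the URL and thread count are assumptions), and it only shows the effect if the requests actually overlap on the server:
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveProbe {
    public static void main(String[] args) throws Exception {
        // Fire more requests than maxThreads so the last worker threads answer
        // with "Connection: close" as described above.
        for (int i = 0; i < 6; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL("http://localhost:8080/").openConnection();
                    conn.getResponseCode();
                    System.out.println("request " + id + " -> Connection: "
                            + conn.getHeaderField("Connection"));
                    conn.disconnect();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}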
Our application is based on ExtJS 3.4, and we frequently get a "Communication Failure" error in the UI. The application is deployed on several domains, but on some domains we get this error very frequently.
Without HTTP Keep-Alive we do not get that error.
But with keep-alive values of 1 second and 5 seconds we get it quite frequently in different scenarios.
What we observed in Wireshark was that, due to a high RTT (round-trip time), the requests were taking more time than expected.
There was an inconsistency in the packet flow; the scenario was:
If keep-alive was 5 seconds:
When a request is successfully served, the server returns 200 OK with a keep-alive timeout parameter of 5 seconds (the server telling the client that it will wait 5 seconds before closing this connection).
As soon as the 5 seconds have elapsed, the server sends a FIN packet (the packet that closes the connection, sent from the server to the client, which is the browser in our case).
Here is the catch: the time taken for the client's ACK (acknowledgement packet) to reach the server and close the connection is high (high RTT).
The server has initiated the close, but because of the high RTT the client sends a new HTTP request (e.g. an ExampleABC.do request) before the server receives the ACK for the FIN.
Because of this, the server was not able to handle that request, since it had already initiated the connection close.
Setting keep-alive to 1 second reduces the time the server waits before closing the connection: after 1 second the connection is closed and a fresh connection is set up for the next request, which avoids requests arriving unexpectedly after 5 seconds.
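For reference, the 5-second case described above corresponds to the server sending a response like the following (header values illustrative), after which it starts its own 5-second countdown before sending the FIN:
HTTP/1.1 200 OK
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive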
Thanks in advance.
This is my first post, please correct me if needed.
Sorry for bad English :)
Image for communication failure:
We solved this issue by synchronizing the browser and server timeouts.
The fix was to make sure the server's keep-alive time and the browser's timeout coincide (expire at the same time), so that the TCP connection is dropped completely on both sides.
I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP keep-alive timeout value through some configuration. How is this value used by the web server? Is this value simply set on the underlying TCP/IP socket, i.e. are the HTTP keep-alive timeout and the TCP/IP keep-alive timeout the same? Or are they treated differently?
My understanding is (maybe incorrect):
The Web server uses the default timeout on the underlying TCP socket (i.e. indefinite) regardless of the configured HTTP Keep Alive timeout and creates a Worker thread that counts down the specified HTTP timeout interval. When the Worker thread hits zero, it closes the connection.
EDIT:
My question is about the relation or difference between the two timeout durations, i.e. what happens when the HTTP keep-alive timeout and the socket timeout (SO_TIMEOUT) used by the web server are different? Should I even worry about whether these two are the same?
An open TCP socket does not require any communication whatsoever between the two parties (let's call them Alice and Bob) unless actual data is being sent. If Alice has received acknowledgments for all the data she's sent to Bob, there's no way she can distinguish among the following cases:
Bob has been unplugged, or is otherwise inaccessible to Alice.
Bob has been rebooted, or otherwise forgotten about the open TCP socket he'd established with Alice.
Bob is connected to Alice, and knows he has an open connection, but doesn't have anything he wants to say.
If Alice hasn't heard from Bob in a while and wants to distinguish among the above conditions, she can resend her last byte of data, wrapped in a suitable TCP frame to be recognizable as a retransmission, essentially pretending she hasn't heard the acknowledgment. If Bob is unplugged, she'll hear nothing back, even if she repeatedly sends the packet over a period of many seconds. If Bob has rebooted or forgotten the connection, he will immediately respond saying the connection is invalid. If Bob is happy with the connection and simply has nothing to say, he'll respond with an acknowledgment of the retransmission.
The Timeout indicates how long Alice is willing to wait for a response when she sends a packet which demands a reply. The Keepalive time indicates how much time she should allow to lapse before she retransmits her last bit of data and demands an acknowledgment. If Bob goes missing, the sum of the Keepalive and Timeout values will indicate the worst-case time between Alice receiving her last bit of data and her deciding that Bob is dead.
They're two separate mechanisms; the name is a coincidence.
HTTP keep-alive (also known as persistent connections) is keeping the TCP socket open so that another request can be made without setting up a new connection.
TCP keep-alive is a periodic check to make sure that the connection is still up and functioning. It's often used to assure that a NAT box (e.g., a DSL router) doesn't "forget" the mapping between an internal and external IP/port.
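To make the distinction concrete, here is a minimal sketch (host and port are placeholders) showing that the two live at different layers: one is a socket option handled by the OS, the other is just an HTTP header:
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

public class KeepAliveKinds {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("example.com", 80);

        // TCP keep-alive: ask the OS to send periodic probes on an otherwise idle
        // connection, so a dead peer (or a NAT box that forgot the mapping) is detected.
        socket.setKeepAlive(true);

        // HTTP keep-alive (persistent connection): an application-level header asking
        // the server to leave the TCP connection open for further requests.
        Writer out = new OutputStreamWriter(socket.getOutputStream(), "US-ASCII");
        out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n");
        out.flush();

        // Reading the response is omitted; this only illustrates where each setting lives.
        socket.close();
    }
}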
KeepAliveTimeout Directive
Description: Amount of time the server will wait for subsequent requests on a persistent connection
Syntax: KeepAliveTimeout seconds
Default: KeepAliveTimeout 15
Context: server config, virtual host
Status: Core
Module: core
The number of seconds Apache will wait for a subsequent request before closing the connection. Once a request has been received, the timeout value specified by the Timeout directive applies.
Setting KeepAliveTimeout to a high value may cause performance problems in heavily loaded servers. The higher the timeout, the more server processes will be kept occupied waiting on connections with idle clients.
In a name-based virtual host context, the value of the first defined virtual host (the default host) in a set of NameVirtualHost will be used. The other values will be ignored.
TimeOut Directive
Description: Amount of time the server will wait for certain events before failing a request
Syntax: TimeOut seconds
Default: TimeOut 300
Context: server config, virtual host
Status: Core
Module: core
The TimeOut directive currently defines the amount of time Apache will wait for three things:
The total amount of time it takes to receive a GET request.
The amount of time between receipt of TCP packets on a POST or PUT request.
The amount of time between ACKs on transmissions of TCP packets in responses.
We plan on making these separately configurable at some point down the road. The timer used to default to 1200 before 1.2, but has been lowered to 300, which is still far more than necessary in most situations. It is not set any lower by default because there may still be odd places in the code where the timer is not reset when a packet is sent.
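Put together, the two directives quoted above would appear in httpd.conf roughly like this (the numbers follow the defaults in the quoted documentation; MaxKeepAliveRequests is shown for completeness, and none of these values are recommendations):
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
Timeout 300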