JMeter causing SYN flood - tcp

While making HTTP calls to a server using JMeter we see a SYN flood.
This is most probably happening because JMeter doesn't send back the ACK as part of the 3-way handshake.
Is there any way we can force JMeter to send the ACK back to the server for the TCP connection?

Given that the connection succeeded, JMeter should send the ACK normally; there is no need to "force" it by any means (at least this is true for JMeter 5.2 and the HttpClient4 implementation).
I would recommend double-checking the incoming and outgoing packets using a lower-level sniffer tool like Wireshark.
You can also get extra information on what's going on under the hood by adding the following line to the log4j2.xml file (it lives in the "bin" folder of your JMeter installation) to enable debug logging for the HTTP protocol:
<Logger name="org.apache.http" level="debug" />
A JMeter restart will be required to pick up the change.

Related

POP3 commands in TCP sampler in JMeter

Using the TCP Sampler in JMeter, I connected successfully to a POP3 server and set "USER username" as the text to send; it works and the server responds, inviting me to send the second command.
Can anyone help me send the second command, "PASS password"? Can one TCP Sampler send multiple commands, or do I need to add a separate TCP Sampler for each command?
I doubt you will be able to do this using JMeter's TCP Sampler. You can try the HTTP Raw Request plugin, which has an option to keep the connection open, so you will be able to pass multiple commands within the same TCP session. The HTTP Raw Request sampler can be installed using the JMeter Plugins Manager.
However, I'm under the impression that you're doing something weird. Connecting to a POP3 server via telnet is not something the majority of users will do, and if you need to load test an email server you should be simulating user actions with 100% accuracy. JMeter comes with the Mail Reader Sampler, which can fetch messages using the POP3 or IMAP protocols; this is much closer to real email clients and much easier to implement. See Load Testing Your Email Server: How to Send and Receive E-mails with JMeter for more information if needed.

HTTP and Sessions

I just went through the specification of HTTP/1.1 at http://www.w3.org/Protocols/rfc2616/rfc2616.html and came across a section about connections, http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8, which says
" A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
Persistent connections provide a mechanism by which a client and a server can signal the close of a TCP connection. This signaling takes place using the Connection header field (section 14.10). Once a close has been signaled, the client MUST NOT send any more requests on that connection. "
Then I also went through a section on HTTP state management at https://www.rfc-editor.org/rfc/rfc2965, which says in its section 2 that
"Currently, HTTP servers respond to each client request without relating that request to previous or subsequent requests;"
A section about the need for persistent connections in RFC 2616 also said that, prior to persistent connections, a client had to establish a new TCP connection for each and every URL it wished to fetch.
Now my question is: if we have persistent connections in HTTP/1.1, then as mentioned above a client does not need to make a new connection for every request; it can send multiple requests over the same connection. So if the server knows that every subsequent request is coming over the same connection, would it not be obvious that the request is from the same client? Would this not suffice to maintain state, and be enough for the server to understand that the request was from the same client? In that case, why is a separate state management mechanism required at all?
Basically, yes, it would make sense, but HTTP persistent connections are used to eliminate the administrative TCP/IP overhead of connection handling (connect/disconnect/reconnect, etc.). They are not meant to say anything about the state of the data moving across the connection, which is what you're talking about.
No. For instance, there might be an intermediary (such as a proxy or a reverse proxy) in the request path that aggregates requests from multiple TCP connections.
See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-21.html#intermediaries.
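To make the contrast concrete: the state management mechanism RFC 2965 describes (cookies) works at the message level rather than the connection level. The server hands the client a token once, and the client presents it on every later request, regardless of which TCP connection (or intermediary) that request travels over. Schematically, with an invented token value:
GET /login HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Set-Cookie: SID=31d4d96e407aad42

GET /cart HTTP/1.1
Host: example.com
Cookie: SID=31d4d96e407aad42
The second request may arrive on a brand new connection, or via a proxy that multiplexes many clients onto one connection; the Cookie header still tells the server which client it is dealing with.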

Apache Camel: why is TCP connection not closed after receiving 200 OK

We are using Apache Camel as an orchestration engine. Typically, the following scenario:
client sends HTTP request <-> CAMEL code <-> external server(s)
The ball starts to roll when our client sends an HTTP request to our CAMEL code.
The Camel code will trigger external servers via REST HTTP calls.
Eventually, the Camel code will send a reply back to the client.
The last action before sending the response back to the client is that the Camel code sends an HTTP GET towards an external server. So a TCP connection is set up first, then the data is sent. After some time (this might take 5 to 10 seconds), the external server replies with a 200 OK.
Problem: Camel does not send a TCP FIN to the external server after receiving the 200 OK. As a result, the TCP connection remains open (the external server then closes the TCP connection itself after a timeout of 200 seconds, but this means a TCP resource is lost for 200 seconds).
So, at TCP level, it goes like this:
Camel <----------> external server
TCP SYN -->
<-- TCP SYN,ACK
TCP ACK -->
HTTP GET -->
<-- 200 OK
TCP ACK -->
<200 seconds later>
<-- TCP FIN,ACK
TCP ACK -->
Any idea how I can have Camel close the TCP connection after it has received the 200 OK?
Note: I tried adding the "Connection: close" header, but Camel did not add it?! It seemed to ignore it ...
This was the code to add the header:
exchange.getOut().setHeader("Connection","Close");
I am using Camel 2.9.1 in a Spring framework with Eclipse IDE.
Unfortunately, I did not see any other solution than to create a custom HttpHeaderFilterStrategy class which does not filter out the Connection header.
Then, before sending out my request to the external server, I set the header "Connection: close". As soon as this request is replied to, the Camel code sends a TCP FIN, ACK in order to close the TCP connection.
More details:
1) create a custom HttpHeaderFilterStrategy class, e.g. CustomHttpHeaderFilterStrategy (a sketch of such a class is shown after step 3)
2) adapt the applicationContext.xml so it points to that class, e.g.:
<bean id="http" class="org.apache.camel.component.http.HttpComponent">
<property name="camelContext" ref="camel"/>
<property name="headerFilterStrategy" ref="myHeaderFilterStrategy"/>
</bean>
<bean id="myHeaderFilterStrategy" class="com.alu.iptc.com.CustomHttpHeaderFilterStrategy">
</bean>
3) adapt your code so that the Connection: close header is set, e.g.:
exchange.getOut().setHeader("Connection","close");
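For reference, here is a minimal sketch of what such a class could look like (assuming the Camel 2.9 DefaultHeaderFilterStrategy API; the idea is to mirror the stock HttpHeaderFilterStrategy of your Camel version while leaving "connection" out of the filter list):
package com.alu.iptc.com;

import org.apache.camel.impl.DefaultHeaderFilterStrategy;

// Like Camel's stock HttpHeaderFilterStrategy, but "connection" is deliberately
// NOT added to the out filter, so a "Connection: close" header set on the
// exchange is copied onto the outgoing HTTP request instead of being dropped.
public class CustomHttpHeaderFilterStrategy extends DefaultHeaderFilterStrategy {

    public CustomHttpHeaderFilterStrategy() {
        getOutFilter().add("content-length");
        getOutFilter().add("content-type");
        getOutFilter().add("host");
        getOutFilter().add("cache-control");
        getOutFilter().add("date");
        getOutFilter().add("pragma");
        getOutFilter().add("trailer");
        getOutFilter().add("transfer-encoding");
        getOutFilter().add("upgrade");
        getOutFilter().add("via");
        getOutFilter().add("warning");
        // "connection" is intentionally absent from this list
        setLowerCase(true);
        // still drop Camel's own internal headers
        setOutFilterPattern("(?i)(Camel|org\\.apache\\.camel)[\\.|a-z|A-z|0-9]*");
    }
}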
HTTP/1.1 connections are considered to be kept alive for a while after the first message, to allow multiple files to be delivered in one TCP session for performance reasons. Normally, an HTTP server might cut connections after a few seconds to save threads while still allowing multiple files to be downloaded. The Camel http component will probably behave the same way.
http://en.wikipedia.org/wiki/HTTP_persistent_connection
The standard HTTP client that Camel relies on can be configured to use or not use persistent connections; the default is true:
http://docs.oracle.com/javase/1.5.0/docs/guide/net/http-keepalive.html
Although I have not tried it, it should be possible to set a system property to configure this:
http.keepAlive=<boolean>
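For instance, as a JVM argument (-Dhttp.keepAlive=false) or programmatically; whether the Camel http component actually honours this java.net setting is the untested assumption here:
// set early, before any HTTP traffic is generated
System.setProperty("http.keepAlive", "false");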
You should be able to set it on the Camel context if you want:
<camelContext>
<properties>
<property key="http.keepAlive" value="false"/>
</properties>
</camelContext>
Note that I have not tried it. If you make it work, it would be nice to hear the results!

What's the difference between an HTTP request and writing HTTP request text to a TCP/IP socket on port 80

Can someone explain the difference between an HTTP request and its handling, and socket requests on port 80? As I understand it, an HTTP server listens on port 80, and when someone sends an HTTP request to this port the server handles it. So when we open a socket to port 80 and write an HTTP-formatted message to it, does that mean we have sent a usual HTTP request? According to Fiddler, it does not. What's the difference at the packet level, or at any level lower than the presentation level, between an HTTP request and HTTP-formatted text written to a socket? Thanks.
First of all, port 80 is the default port for HTTP, but it is not required; you can have HTTP servers listening on other ports as well.
Regarding the difference between "regular" HTTP requests and the ones you make yourself over a socket - there is no difference. The "regular" HTTP requests you are referring to (made by a web browser for example) are also implemented over sockets, just like you would do it manually yourself. And the same goes for the server. The implementation of the HTTP server listens for incoming socket connections and parses the data that passes there just like you would.
As long as you send in your socket valid HTTP protocol (according to the RFC), there should be no difference in the packet level (if the lower network stack is identical).
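To see this for yourself, here is a minimal sketch (example.com is just a placeholder host) that hand-writes an HTTP request to a plain TCP socket and prints whatever comes back; a browser fetching the same URL does nothing fundamentally different at the socket level:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

public class RawHttpGet {
    public static void main(String[] args) throws Exception {
        // A plain TCP connection to port 80 - nothing HTTP-specific about the socket itself
        try (Socket socket = new Socket("example.com", 80)) {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), "US-ASCII");
            // A valid HTTP/1.1 request is just text with CRLF line endings
            out.write("GET / HTTP/1.1\r\n");
            out.write("Host: example.com\r\n");
            out.write("Connection: close\r\n");
            out.write("\r\n");
            out.flush();

            // Read back whatever the server sends: status line, headers, body
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "US-ASCII"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
Captured with a sniffer, the packets this program produces differ from a browser's only in which optional headers each side chooses to send.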
Keep in mind that the socket layer is just the layer the HTTP data always passes over. It doesn't really matter who put the data there; it comes out the other side the same way it was put in.
Please note that you have some degree of freedom when implementing HTTP yourself. There are many optional fields, and the order of the headers doesn't matter. So it is possible that two different HTTP implementations will differ at the packet level but behave basically the same.
The best way to actually see what's going on at the packet level is to use a network sniffer like Wireshark or Packetyzer. A sniffer records the packets on the network and shows you their content. So if you record several HTTP implementations (from various browsers) and your own socket implementation, you can make the changes required to make them identical at the packet level.

Simulating HTTP/TCP re-transmission timeout

I am working on Linux.
I have an HTTP client which requests some data from an HTTP server. The HTTP client is written in C and the HTTP server is written in Perl.
I want to simulate TCP re-transmission timeouts at the client end.
I assume that closing the socket gracefully would not cause the client to re-transmit the requests.
So I tried the following scenarios:
1) Exit the server as soon as it gets the HTTP GET request. However, I noticed that once the application exits, the socket is still closed gracefully: the server initiates FIN, ACK messages towards the client even though the application never called "close" on the socket. I have noticed this behaviour with a simple TCP server and client written in C as well.
2) The server does not send any response to the client's GET request. In this case I notice that a FIN, ACK is still sent by the server.
It seems that in these cases the OS (Linux) takes care of closing the socket with the peer.
Is there any way to suppress this behaviour (using ioctl or setsockopt options), or any other way to simulate TCP re-transmission timeouts?
You could try setting firewall rules that block the packets going from the server to the client, which would cause the client to re-transmit the requests. On Linux this would probably be done using iptables, but different distributions have different methods of controlling it.
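A rule along these lines on the client machine should do it (assuming iptables is available; 8080 is a placeholder for whatever port the Perl server listens on):
iptables -A INPUT -p tcp --sport 8080 -j DROP
The client's requests then go unanswered, so its TCP stack keeps retransmitting them until its timeout logic gives up. Remember to delete the rule afterwards (iptables -D INPUT -p tcp --sport 8080 -j DROP).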
This issue was previously discussed here
