TCP Termination - HTTP

What happens to a TCP connection after
the end of an HTTP session?
For example, after loading a static web page from a web server.
Thanks

An HTTP session usually refers to the server keeping an association with a specific user, and it could potentially be of any length (using, for instance, cookies as association tokens).
An HTTP session therefore usually spans multiple TCP connections. For non-persistent HTTP connections, every request has its own TCP connection (which is closed afterwards). For persistent HTTP connections, on the other hand, multiple HTTP resources can be fetched within a single TCP connection, and either side will close it once an idle timeout threshold is reached.
Wikipedia article on persistent HTTP connections (Connection: keep-alive)

You can have several HTTP requests in one TCP connection. So if you define an HTTP session as a set of HTTP requests/responses, the TCP connection will be closed once that set is complete.
At the TCP level, the closing side sends a packet with the FIN flag set; the other side acknowledges this with an ACK and, immediately or eventually, sends its own FIN, which the first side acknowledges again with an ACK. It's also possible that the connection is abandoned with the RST flag instead of FIN. The side that sent the first FIN goes into the TIME_WAIT state. This is used to reject packets that arrive subsequently and would otherwise be misinterpreted as packets of a new connection. After the timeout, the connection goes from the TIME_WAIT state to the CLOSED state.
Edit: Normal termination is indicated by the FIN flag.
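For illustration, here is a minimal Python sketch of the two ways a connection can be ended from the application's point of view, a graceful FIN close versus an abortive RST close (the address is a placeholder):

    import socket
    import struct

    # Graceful close: close() makes the kernel send a FIN; after the final ACK
    # this side (the active closer) typically sits in TIME_WAIT for a while.
    s1 = socket.create_connection(("example.com", 80))   # placeholder address
    s1.close()                                            # FIN / ACK / FIN / ACK on the wire

    # Abortive close: SO_LINGER with a zero timeout makes close() send an RST
    # instead of a FIN, discarding unsent data and skipping TIME_WAIT.
    s2 = socket.create_connection(("example.com", 80))
    s2.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    s2.close()                                            # RST on the wire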

How to keep TCP connections alive on an AWS network load balancer

Architecture:
We have a bunch of IoT devices connected via an AWS Network Load Balancer (NLB) to our backend servers.
This is a bidirectional channel (not request/response style; messages are passed from either party to the other).
Objective:
How do we keep the connections (on both sides of the NLB) alive during inactivity?
Description:
Frequently, clients go into an inactive mode and do not send (or receive) anything to (or from) the servers. If this state lasts longer than 350 seconds (the connection idle timeout of NLBs), the LB silently kills the connection. This is bad, because we see a lot of RST packets everywhere.
Questions:
I'm aware of the SO_KEEPALIVE feature and can enable it on our backend servers. This keeps the connection between the backend servers and the NLB alive. But what about the clients? Do NLBs forward TCP keep-alive packets to the other party? (Here it says they do not.) If they do not, how do we keep the client connections open? (At the moment, I'm thinking of sending an empty message to keep the connection alive.)
Is this behavior specific to AWS NLBs, or do load balancers generally work this way?
The AWS docs say that the NLB TCP listener has the ability to keep a connection alive with TCP keep-alive packets: link
For TCP listeners, clients or targets can use TCP keepalive packets to reset the idle timeout.
Based on my tests, the client receives the TCP keep-alive packets sent by the server and correctly responds to them.
The server doesn't drop the connection, which means it receives the response from the client.
This means the NLB TCP listener actually forwards keep-alive packets.
Based on the same docs, the NLB TLS listener shouldn't react the same way to TCP keep-alive packets.
TCP keepalive packets are not supported for TLS listeners.
But the actual test results shocked me: Wireshark showed keep-alive packets arriving on a client connected through a TLS listener.
My previous test results from 2 months ago don't correspond to what I'm experiencing now, and I think the behaviour may have changed.
(Previously the server kept the connection open even after the client became unavailable in an unexpected manner.)
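For reference, here is a minimal sketch of how a backend could enable and tune TCP keep-alive so that probes are sent well within the NLB's 350-second idle timeout. The TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT option names are Linux-specific, and the interval values and port are assumptions:

    import socket

    def enable_keepalive(sock, idle=60, interval=30, count=5):
        """Turn on TCP keep-alive and tune it (option names are Linux-specific)."""
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)      # first probe after `idle` s of silence
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval) # then probe every `interval` s
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)      # give up after `count` unanswered probes

    # Example: keep an accepted backend connection alive through idle periods.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 9000))   # placeholder port
    server.listen()
    conn, addr = server.accept()
    enable_keepalive(conn)           # probes start after 60 s, well under the 350 s timeout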
Not an answer, just to document what I found/did:
NLBs do not forward keep-alive packets, meaning you have to enable them on both the server and the clients.
The NLB's idle timeout cannot be changed; it's 350 seconds.
I couldn't find any way to forge an empty TCP packet that would fool the LB into forwarding it to the other side of the LB.
In the end, we implemented the keep-alive feature at the application layer (sending an empty message to clients periodically; see the sketch below).
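A minimal sketch of what such an application-layer keep-alive can look like on the server side, assuming the "empty message" is a single byte the clients simply ignore (the framing, interval, and names are illustrative, not from the original setup):

    import socket
    import threading
    import time

    HEARTBEAT_INTERVAL = 120   # seconds; an assumption, comfortably under the 350 s NLB timeout
    HEARTBEAT = b"\x00"        # placeholder "empty" message that clients simply ignore

    def keep_alive(conn, stop):
        """Periodically push a tiny message so the NLB never sees an idle flow."""
        while not stop.is_set():
            time.sleep(HEARTBEAT_INTERVAL)
            try:
                conn.sendall(HEARTBEAT)
            except OSError:
                break          # connection already gone; let the main loop clean up

    # Usage: one heartbeat thread per accepted client connection.
    # stop = threading.Event()
    # threading.Thread(target=keep_alive, args=(conn, stop), daemon=True).start()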

TCP reset just after receiving the ACK of the three-way handshake

I have a server with multiple clients. The simulated network is under heavy congestion. What I found is that the server resets some TCP connections right after receiving the ACK segment of the three-way handshake. This doesn't happen when the network is in good condition.
I also found that the ACK of the three-way handshake arrives about 3.5 s later than the SYN-ACK.
Is that because the SYN-ACK of the three-way handshake timed out? And if the SYN-ACK times out, why isn't it resent?
Thank you for any suggestions.
This looks like it's related to SYN cookies.
SYN cookies
When a Linux host receives too much SYN traffic, it activates the SYN cookies mechanism.
When SYN cookies are enabled, a server answers a SYN by issuing a SYN-ACK segment with specific data encoded in the TCP sequence field. In that field it encodes a timestamp, the MSS, and a cryptographic hash of the two endpoints (local and remote IPs and ports) plus the timestamp.
This is done so that the server does not have to store anything about the connection at this point; it simply sends the answer and forgets about it.
Then, when the client answers with its ACK, the server checks the hash in the acknowledgement field (the client's acknowledgement number echoes the server's sequence number). If it is correct, it creates the connection from the data stored in the field.
SYN cookies explain why the server does not resend SYN-ACK packets when they time out.
But why the reset after receiving the ACK?
Maybe the clients (or the server) are behind a NAT that rewrites ports and the NAT also gets congested, so that it cannot link the final ACK to the previous SYN and assigns a new source port. When the server receives it, it resets the connection (whether or not SYN cookies are enabled).
Or maybe the server process is not accepting connections as fast as they arrive, so the kernel's accept queue has filled up and newer connections are discarded that way.
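A small sketch of how one might check both hypotheses on a Linux server (the /proc path is the standard Linux toggle for SYN cookies; the port and backlog values are assumptions):

    import socket

    # Is the SYN-cookies mechanism enabled? (Linux: 0 = off, 1 = on when the
    # SYN queue overflows, 2 = always on.)
    with open("/proc/sys/net/ipv4/tcp_syncookies") as f:
        print("tcp_syncookies =", f.read().strip())

    # If the accept queue is the bottleneck, a larger backlog plus a prompt
    # accept loop keeps new connections from being dropped.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 9000))   # placeholder port
    server.listen(1024)              # backlog; effectively capped by net.core.somaxconn
    while True:
        conn, addr = server.accept() # accept quickly so the queue does not fill
        # ... hand `conn` off to a worker thread or process here ...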

Basic understanding of TCP

I don't fully understand when a TCP connection ends. That is, when a client sends a request to a server and the server responds, is that response part of the same TCP connection? Or is that response made through a brand new TCP connection?
A TCP connection ends when both sides have closed it. A response is sent over the same connection as the request, and there can be many request/response pairs over a single connection. Or none at all; for example, a connection that just carries a download.
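To illustrate, here is a minimal sketch of several request/response pairs reusing a single TCP connection over HTTP/1.1 (the host and paths are placeholders):

    import http.client

    # Two request/response pairs over one TCP connection; HTTP/1.1 keeps the
    # connection open by default. Host and paths are placeholders.
    conn = http.client.HTTPConnection("example.com")

    conn.request("GET", "/")
    resp1 = conn.getresponse()
    body1 = resp1.read()           # the body must be read before reusing the connection

    conn.request("GET", "/about")  # same TCP connection, no new handshake
    resp2 = conn.getresponse()
    body2 = resp2.read()

    conn.close()                   # only now is the TCP connection torn down (FIN exchange)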

Will keep-alive be useful with load balancers and firewalls

I have a client and a server component. The server may be installed behind a firewall or a load balancer. Many sites/forums suggest using the TCP keep-alive feature to avoid connection termination due to inactivity.
The question is whether the keep-alive message from the client will actually reach the server.
I tried to simulate the deployment using the tcptrace utility and found that the keep-alive messages do not reach the server, yet the client still gets an ACK for each keep-alive message.
I am not sure whether all LBs/FWs work in the same manner.
Is keep-alive a good option to avoid connection termination due to inactivity on a socket when firewalls and load balancers are involved?
The answer is, of course: "it depends".
Many firewalls and load balancers maintain separate frontend and backend TCP connections, e.g.:
client <-- TCP --> firewall/balancer <-- TCP --> server
For situations like this, using TCP keepalive will not work as you'd expect. Why not? The TCP keepalive works for that TCP session only, and the keepalive probe packets are more like "administrative overhead" packets than data-bearing packets. This means that a) using TCP keepalive on the client end only means keeping the TCP connection to the firewall/balancer alive, and b) the firewall/balancer does not "forward" those keepalive probe packets across to the backend connection.
So is using TCP keepalive useful? Yes. There are other types of proxies which work at lower layers in the OSI stack, and which do forward those packets; using TCP keepalive is good for keeping your idle connection alive through those types of network intermediaries.
If your client/server application uses a long-lived, possibly idle TCP connection through firewalls/balancers, the best way to ensure that that connection is not torn down (sometimes politely, e.g. with a RST packet sent by the firewall/balancer, sometimes silently) is to use a "ping" or "heartbeat" message at the application layer. (Think of this as an "application keepalive".) This is just some kind of message that is sent e.g. from the client to the server. A simple and effective technique is to have the client periodically send some bytes to the server, which the server echoes back to the client. The client knows which bytes it sent, and when it receives those same bytes back from the server, it knows that everything in the network path is still working as expected.
Hope this helps!
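Here is a minimal sketch of the echo-style application keepalive described above, assuming a server that simply echoes the heartbeat bytes back; the interval, address, and framing are illustrative:

    import os
    import socket
    import time

    PING_INTERVAL = 60               # seconds; an illustrative value

    def heartbeat_once(sock):
        """Send random bytes and check that the server echoes them back unchanged."""
        payload = os.urandom(8)
        sock.sendall(payload)
        echoed = sock.recv(len(payload))   # a real implementation would loop until all bytes arrive
        return echoed == payload           # True -> the whole network path still works

    # Usage (the address is a placeholder; the server is assumed to echo heartbeats):
    # sock = socket.create_connection(("server.example", 9000))
    # while True:
    #     time.sleep(PING_INTERVAL)
    #     if not heartbeat_once(sock):
    #         break   # reconnect or alert here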

Are the TCP client and server in equivalent status after the TCP 3-way handshake

When a TCP client wants to establish a TCP connection with a TCP server,
it needs to send a SYN and then an ACK,
while the TCP server only sends a SYN/ACK,
so they are different.
But after the 3-way handshake,
is this connection symmetric? Namely, are the TCP client and server in equal status?
For example, after the 3-way handshake, usually the client sends a packet first;
can the TCP server send a packet first?
No, the procedure is not different at all; instead of sending the SYN and then the ACK in two different packets, the server concatenates them by sending them in a single packet!
On the other hand, always remember that the client/server nomenclature is relative. The server is the party that remains in listening mode, while the client is the party that initiates the connection ...
After the establishment of the connection, both parties are equivalent (the same status, as you said: ESTABLISHED). For that reason, either side can send the FIN segment to close the connection ...
After the connection is established, both ends are indeed "symmetric". Who sends first is decided by the protocol running on top and differs among them.
For example, HTTP starts with the GET <path> HTTP/1.0 command, while other protocols let the server give a greeting line first, and only then the client sends its request.
So in general, both ends are free to send their stuff first.
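To make the "server speaks first" case concrete, here is a small sketch loosely in the style of an SMTP greeting (the port and message text are made up for illustration):

    import socket

    def serve_once(port=2525):
        """Server side: send a greeting immediately, before the client says anything."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        conn, addr = srv.accept()
        conn.sendall(b"220 hello, I speak first\r\n")   # server sends the first bytes
        request = conn.recv(1024)                       # only now does the client talk
        conn.close()
        srv.close()

    def client_once(port=2525):
        """Client side: read the greeting first, then send a request."""
        c = socket.create_connection(("127.0.0.1", port))
        greeting = c.recv(1024)                         # wait for the server's line
        c.sendall(b"HELO client\r\n")
        c.close()

    # Run serve_once() and client_once() in two separate processes or threads.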
