How to keep TCP connections alive through an AWS Network Load Balancer

Architecture:
We have a bunch of IoT devices connected via an AWS Network Load Balancer (NLB) to our backend servers.
This is a bidirectional channel (not a request/response style; messages are passed from either party to the other).
Objective:
Keep connections (on both sides of the NLB) alive during periods of inactivity.
Description:
Clients frequently go idle and do not send (or receive) anything to (or from) the servers. If this state lasts longer than 350 seconds (the connection idle timeout of NLBs), the LB silently kills the connection. This is bad, because we then see a lot of RST packets everywhere.
Questions:
I'm aware of the SO_KEEPALIVE feature and can enable it on our backend servers. This keeps the connection between the backend servers and the NLB alive. But what about the clients? Do NLBs forward TCP keepalive packets to the other party? (Here it says it does not.) If it does not, how do I keep the client connections open? (At the moment, I'm thinking of sending an empty message to keep the connection alive.)
Is this behavior specific to AWS NLBs, or do load balancers generally work this way?
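For reference, enabling SO_KEEPALIVE on the backend side is a single socket option. A minimal Python sketch (the port is a placeholder; how far the resulting probes reach through the NLB is exactly the question above):

```python
import socket

def serve(port: int = 9000) -> None:
    """Accept connections with SO_KEEPALIVE enabled (sketch; port is a placeholder)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        # Ask the kernel to probe this TCP session during idle periods.
        # Note: the probes belong to the server<->NLB TCP session; whether the
        # client side is kept alive is exactly the question discussed below.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        ...  # hand `conn` off to the application's message loop
```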

The AWS docs say that an NLB TCP listener can keep a connection alive with TCP keepalive packets: link
For TCP listeners, clients or targets can use TCP keepalive packets to reset the idle timeout.
Based on my tests, the client receives the TCP keepalive packets sent by the server and correctly responds to them.
The server does not drop the connection, which means it receives the responses from the client.
This means the NLB TCP listener actually forwards keepalive packets.
According to the same docs, an NLB TLS listener should not react the same way to TCP keepalive packets:
TCP keepalive packets are not supported for TLS listeners.
But my actual test results surprised me: Wireshark showed keepalive packets arriving at a client connected through a TLS listener.
My test results from 2 months ago do not match what I'm experiencing now, so I suspect the behaviour may have changed.
(Previously, the server kept the connection open even after the client became unavailable in an unexpected manner.)

Not an answer, just to document what I found/did:
NLBs do not forward keepalive packets, meaning you have to enable them on both the server and the clients.
The NLB idle timeout cannot be changed; it is 350 seconds.
I couldn't find any way to forge an empty TCP packet that would fool the LB into forwarding it to the other side.
In the end, we implemented the keepalive feature at the application layer (sending an empty message to clients periodically).
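A minimal sketch of that application-level keepalive, assuming a simple length-prefixed framing in which a zero-length payload is treated as a no-op by clients (the frame format and the 300-second period are assumptions, not part of the original post):

```python
import socket
import struct
import threading

HEARTBEAT_INTERVAL = 300  # seconds; comfortably below the NLB's 350 s idle timeout

def heartbeat_loop(conn: socket.socket, stop: threading.Event) -> None:
    """Periodically send an empty, length-prefixed frame so the NLB sees
    traffic on the connection and does not expire it (sketch)."""
    while not stop.wait(HEARTBEAT_INTERVAL):
        try:
            # 4-byte length prefix of 0 = "empty message" in this sketch's framing.
            conn.sendall(struct.pack("!I", 0))
        except OSError:
            break  # connection is gone; let the main handler clean up
```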

Related

What is the typical usage of TCP keepalive?

Consider a scenario with one server and multiple clients, where each client creates TCP connections to interact with the server. There are three usages of TCP keepalive:
1. Server-side keepalive: The server sends TCP keepalive to make sure that the client is alive. If the client is dead, the server closes the TCP connection to the client.
2. Client-side keepalive: The client sends TCP keepalive to prevent the server from closing the TCP connection to the client.
3. Both-side keepalive: Both server and client send TCP keepalive, as described in 1 and 2.
Which of the above usages of TCP keepalive are typical?
Actually, both server and client peers may use TCP keepalive. It is useful to ensure that the operating system will eventually release any resources associated with dead connections. Note that if a connection between two hosts gets lost because of some issue with a router between them, then both hosts have to independently detect that the connection is dead and clean up for themselves.
Now, each host will maintain a timer on each connection indicating when it last received a packet associated with that connection. A host will send a keepalive packet when that timer goes over a certain threshold, which is defined locally (that is, hosts do not exchange information about their own keepalive configuration). So the host with the lower keepalive time will take the initiative of sending a keepalive packet to the other host. If the packet indeed goes through, the other host (that is, the one with the higher keepalive time) will respond to that packet and reset its own timer; therefore, the host with the higher keepalive time will never actually need to send a keepalive packet itself, unless the connection has indeed been lost.
Arguably, servers are generally more aggressive on keepalive than client machines (that is, they will more often be configured with a lower keepalive time), because hanging connections often have undesirable effects on server software (for example, the software may accept only a limited number of concurrent connections, or the server may fork a new process instance for each connection).
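To make the timer discussion concrete, the keepalive time, probe interval, and probe count can be tuned per socket. A Python sketch using the Linux-specific option names (the values are illustrative, not recommendations):

```python
import socket

def tune_keepalive(sock: socket.socket, idle: int, interval: int, probes: int) -> None:
    """Set per-socket keepalive timers (Linux option names; values illustrative)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)       # idle time before the first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)  # gap between unanswered probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)      # unanswered probes before giving up

# A server tuned more aggressively than its clients: with idle=60, interval=10,
# probes=5, a dead peer is detected after roughly 60 + 10 * 5 = 110 s of silence.
```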
Server-side keepalive: The server sends TCP keepalive to make sure that the client is alive. If the client is dead, the server closes the TCP connection to the client.
If the client is dead, the server gets a 'connection reset' error, after which it should close the connection.
Client-side keepalive: The client sends TCP keepalive to prevent the server from closing the TCP connection to the client.
No. The client sends keepalive so that, if the server is dead, the client will get a 'connection reset' error, after which it should close the connection.
Both-side keepalive
Both sides are capable of getting a 'connection reset' due to keepalive failure, as above.
Which of the above usages is typical?
Any of them, or none. If a peer is sending regularly it doesn't really need keepalive as well. It is therefore often of more use to a server than a client.
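How that 'connection reset' shows up depends on the API; in Python's socket module, for instance, it surfaces as an exception on the next read or write. A sketch, not tied to any particular application:

```python
import socket

def read_or_cleanup(conn: socket.socket):
    """Read from a keepalive-enabled socket, closing it if the peer is gone (sketch)."""
    try:
        data = conn.recv(4096)
        if not data:          # empty read: orderly FIN from the peer
            conn.close()
            return None
        return data
    except OSError:
        # Covers ECONNRESET (an RST arrived) and ETIMEDOUT (keepalive probes went
        # unanswered): the OS considers the connection dead, so release our end too.
        conn.close()
        return None
```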

At a low level, how does the Websocket protocol detect the status of a connection?

I was discussing Websocket overhead with a fellow engineer and we were both uncertain of how a Websocket actually detects the status of a client connection.
Is there a "status" packet sent to the client/server periodically?
Does it have anything to do with ping or pong in the low level API?
What about frames?
How does the Websocket detect that it's disconnected on the client? the server?
I'm surprised I couldn't find this answered on SO, but that may be my mistake. I found this answer addresses scalability, but that's not what I'm asking here. This answer touches on implementation, but not at the depth I'm pursuing here.
A webSocket connection is a TCP connection that uses the webSocket protocol. By default, a server or client knows the connection has disappeared only when the underlying TCP stack realizes the connection has been closed; the webSocket layer listens for a close event on the connection and is informed that way.
The webSocket protocol does not, by itself, require heartbeat packets that can regularly test if the connection is still working. The TCP socket may still appear to be alive, but the connection may not actually still work. The other end could have disappeared or been interrupted in between and one or both endpoints might not know that at any given time.
Socket.io, which is built on top of webSocket, uses ping and pong packets to implement a heartbeat which regularly tests the connection and will, in fact, detect a non-functioning connection at the client, close the socket, and then reconnect automatically.
Is there a "status" packet sent to the client/server periodically?
Not by default for a regular webSocket connection.
Does it have anything to do with ping or pong in the low level API?
It is up to the client or server whether they want to send ping or pong packets themselves to implement some sort of connection validation.
What about frames?
webSocket frames are the data format for sending data over a webSocket. They don't have anything to do with this issue.
How does the Websocket detect that it's disconnected on the client? the server?
Described above. Unless the client/server implement their own ping/pong system to detect when a connection has gone awry, they just rely on TCP signaling to know when a connection has been closed by the other end. A webSocket connection can be non-functional without the client or server knowing it until they try to send.
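To illustrate the "implement their own ping/pong system" point, here is a minimal client-side heartbeat using the third-party Python websockets package (the URI and timings are placeholders; the library automatically answers pings arriving from the other side):

```python
import asyncio
import websockets  # third-party package: pip install websockets

async def heartbeat(uri: str = "wss://example.com/socket",
                    interval: float = 30, timeout: float = 10) -> None:
    """Ping the peer periodically; treat a missing pong as a dead connection (sketch)."""
    async with websockets.connect(uri) as ws:
        while True:
            await asyncio.sleep(interval)
            try:
                pong_waiter = await ws.ping()
                await asyncio.wait_for(pong_waiter, timeout)
            except (asyncio.TimeoutError, websockets.ConnectionClosed):
                break  # connection considered dead; reconnect logic would go here

# asyncio.run(heartbeat())
```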
When a browser window/tab opens a webSocket connection and then the window/tab is retargeted to a new URL, the browser will close all resources associated with that window/tab including any webSocket connections. If the link between client and server is functional at that point, then the server will be told the underlying TCP connection (and thus the webSocket) has been closed. If the network link goes down and then the user moves that window/tab to a new URL, the server will not necessarily know that the connection is non-functional without relying on ping/pong type signalling to regularly test the connection.
If the browser crashes, the OS should close any sockets opened by that process.
If the computer/OS crashes, the sockets will likely not get closed normally (though this may be somewhat OS dependent and may be crash-dependent too).

TCP Retransmission after Reset RST flag

I have around 20 clients communicating with a central server on the same LAN. The clients can make transactions with the server simultaneously. The server forwards each transaction to an external appliance on the network. Sometimes it works, and sometimes my application shows a "time out" message on a client's screen (seemingly at random).
I mirrored all the traffic and found TCP retransmissions after TCP Reset packets for the first TCP sequence. I immediately thought of packet loss, but all my cables/NICs are fine, and I do not see duplicate ACKs in the capture.
It seems that RST packets can have different meanings.
What causes those TCP Resets?
Where should I focus my investigation: the network or the application design?
I would appreciate any help. Thanks in advance.
Judging by the capture, I assume your central server is 137.56.64.31. What's happening is the clients are initiating a connection to the server with a SYN packet and the server responds with a RST. This is typical if the server has no application listening on that particular port e.g. the webserver application isn't running and a client tries to connect to port 80.
The clients are all connecting to different ports on the server, which is unusual for a central server, but not unheard of. The destination ports the clients are connecting to on the server are: 11007, 11012, 11014, 11108, and 11115. Is that normal for the application? If not, the clients should be connecting to whatever port the application server is listening on.
The reason for the retransmissions is that, instead of giving up on the connection upon receiving a RST from the server, the client tries to initiate the connection again, so Wireshark considers it a retransmission.
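From the client's point of view, a SYN answered with an RST surfaces as a "connection refused" error. A small Python sketch (host and port taken from the capture discussed above):

```python
import socket

# Connecting to a port where nothing is listening: the SYN is answered with an
# RST and connect() fails with "connection refused". A firewall silently
# dropping the SYN would instead produce a timeout.
try:
    socket.create_connection(("137.56.64.31", 11007), timeout=5)
except ConnectionRefusedError:
    print("server sent RST: no application is listening on that port")
except socket.timeout:
    print("no answer at all: the SYN was probably dropped, not reset")
```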

Will keep-alive be useful with load balancers and firewalls?

I have a client component and a server component. The server may be installed behind a firewall or a load balancer. Many sites/forums suggest using the TCP keep-alive feature to avoid connection termination due to inactivity.
The question is whether the keep-alive messages from the client will actually reach the server.
I tried to simulate the deployment using the tcptrace utility and found that the keep-alive messages do not reach the server, yet the client still gets ACKs for its keep-alive messages.
I am not sure whether all LBs/FWs work in the same manner.
Is keep-alive a good option to avoid connection termination due to inactivity on a socket when a firewall or load balancer is in the path?
The answer is, of course: "it depends".
Many firewalls and load balancers maintain separate frontend and backend TCP connections, e.g.:
client <-- TCP --> firewall/balancer <-- TCP --> server
For situations like this, using TCP keepalive will not work as you'd expect. Why not? The TCP keepalive works for that TCP session only, and the keepalive probe packets are more like "administrative overhead" packets than data-bearing packets. This means that a) using TCP keepalive on the client end only keeps the TCP connection to the firewall/balancer alive, and b) the firewall/balancer does not "forward" those keepalive probe packets across to the backend connection.
So is using TCP keepalive useful? Yes. There are other types of proxies which work at lower layers in the OSI stack, and which do forward those packets; using TCP keepalive is good for keeping your idle connection alive through those types of network intermediaries.
If your client/server application uses a long-lived, possibly idle TCP connection through firewalls/balancers, the best way to ensure that that connection is not torn down (sometimes politely, e.g. with a RST packet sent by the firewall/balancer, sometimes silently) is to use a "ping" or "heartbeat" message at the application layer. (Think of this as an "application keepalive".) This is just some kind of message that is sent e.g. from the client to the server. A simple and effective technique is to have the client periodically send some bytes to the server, which the server echoes back to the client. The client knows which bytes it sent, and when it receives those same bytes back from the server, it knows that everything in the network path is still working as expected.
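A sketch of that echo-style application keepalive from the client side (the framing, host, port, and timing are assumptions for illustration):

```python
import os
import socket
import time

def ping_once(conn: socket.socket) -> bool:
    """Send a small random payload and expect the server to echo it back.
    Returns True only if the full client -> intermediary -> server path works."""
    payload = os.urandom(8)
    conn.sendall(payload)
    echoed = conn.recv(len(payload), socket.MSG_WAITALL)  # wait for the full echo
    return echoed == payload

# Client loop: ping well inside the intermediary's idle timeout, e.g.:
# with socket.create_connection(("server.example.com", 9000)) as conn:
#     while ping_once(conn):
#         time.sleep(60)
```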
Hope this helps!

TCP Termination

What happens to a TCP connection after the end of an HTTP session? For example, after loading a static webpage from a webserver.
Thanks
An HTTP session usually refers to the server keeping an association with a specific user, and could potentially be of any length (using, for instance, cookies as association tokens).
An HTTP session therefore usually spans multiple TCP sessions. For non-persistent HTTP connections, every request has its own TCP session (which is closed afterwards). For persistent HTTP connections, on the other hand, multiple HTTP resources can be fetched within one TCP session, and either side will close it once its timeout threshold is reached.
Wikipedia article on Persistent HTTP connections (Keep-Alive: true)
You can have several HTTP requests in one TCP connection. So if by an HTTP session you mean a set of HTTP requests/responses, the TCP connection will be closed afterwards.
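As a quick illustration of several HTTP requests sharing one TCP connection, with Python's http.client (the host and paths are placeholders):

```python
import http.client

# One TCP connection, several HTTP/1.1 requests (persistent connection).
conn = http.client.HTTPConnection("example.com")   # placeholder host
for path in ("/", "/about", "/contact"):            # placeholder paths
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()            # the body must be fully read before reusing the connection
    print(path, resp.status)
conn.close()               # tears the TCP connection down (sends the FIN)
```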
At the TCP level, the closing side sends a packet with the FIN flag set, the other side acknowledges this with an ACK and, immediately or eventually, sends its own FIN, which the first side again acknowledges with an ACK. It is also possible for the connection to be abandoned with the RST flag instead of FIN. The side that sent the first FIN goes into the TIME_WAIT state. This is used to reject packets that arrive subsequently, which would otherwise be misinterpreted as packets of a new connection. After the timeout, the port goes from the TIME_WAIT state to the CLOSED state.
Edit: Normal termination is indicated by the FIN flag.
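At the socket API level, that FIN exchange corresponds to closing (or half-closing) the socket. A small Python sketch of a graceful close:

```python
import socket

def close_gracefully(conn: socket.socket) -> None:
    """Half-close, drain, then close: the socket-API view of the FIN exchange (sketch)."""
    conn.shutdown(socket.SHUT_WR)   # our FIN goes out; the peer ACKs it
    while conn.recv(4096):          # keep reading until the peer's FIN (recv returns b"")
        pass
    conn.close()                    # this side, having closed first, ends up in TIME_WAIT
```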
