TCP Socket Closed - tcp

I always thought that if you didn't implement a heartbeat, there was no way to know if one side of a TCP connection died unexpectedly. If the process was just killed on one side and didn't exit gracefully, there was no way for the socket to send a FIN or otherwise let the other side know that it was closed.
(See some of the comments here for example http://www.perlmonks.org/?node_id=566568 )
But there is a stock order server that I connect to that has a new "cancel all orders on disconnect" feature that cancels live orders if the client disconnects. It works even when I kill the process on my end, and there is definitely no heartbeat from my app to it.
So how is it able to detect when I've killed the process? My app is running on Windows Server 2003 and the order server is on Suse Linux Enterprise Server 10. Does Windows detect that the process associated with the socket is no longer alive and send the FIN?

When a process exits - for whatever reason - the OS will close the TCP connections it had open.
There are numerous other ways a TCP connection can go dead undetected:
someone yanks out a network cable in between.
the computer at the other end gets nuked.
a NAT gateway in between silently drops the connection.
the OS at the other end crashes hard.
the FIN packets get lost.
By enabling TCP keepalive, though, you'll detect it eventually - at least within a couple of hours.

It could be using a TCP Keep Alive to check for dead peers:
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
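For reference, here is a minimal sketch of turning keepalive on for an already-connected socket. It assumes a Linux/POSIX environment; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs are Linux-specific (Windows exposes the same idea through the SIO_KEEPALIVE_VALS ioctl), so treat the exact names and numbers as an example rather than a portable recipe.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch only: enable keepalive probes on a connected socket 'fd'.
 * With these numbers a dead peer is declared after roughly
 * 60 + 10*5 = 110 seconds instead of the ~2 hour system default. */
static int enable_keepalive(int fd)
{
    int on = 1, idle = 60, interval = 10, count = 5;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)        /* idle time before the first probe */
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0) /* gap between probes */
        return -1;
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));        /* probes before giving up */
}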

As far as I know, the OS detects the process termination and closes all the file descriptors/sockets/handles the process was using. So there isn't any difference between "killing" the application and terminating it gracefully. Of course, the kernel itself must be running (= PC turned on, wire connected...). But it's the OS's job to send the FIN and so on.
Also, if a host becomes unreachable (turned off, disconnected...), an intermediate gateway (or the client itself) may detect the event (e.g. loss of carrier, DHCP lease not renewed...) and reply to the packets sent to the dead host with an ICMP error (host/network unreachable). This causes the peer's TCP connection to die, but it happens only if the client has some packet to send to the host.

Related

Does TCP kill a dead idle connection after a keepalive packet is sent?

My understanding of TCP keepalive:
This keepalive does not really "keep the connection alive". Instead, "detect alive" may be a more proper term: the TCP layer exchanges heartbeat packets to detect whether an idle connection is dead or alive.
Here are my questions:
After TCP knows that the connection is dead, does the TCP level automatically close the connection?
After TCP knows that the connection is dead, how does the application level learn that information, so that it can close the socket and release the resources?
After TCP knows that the connection is dead, does the TCP level automatically close the connection?
TCP is a protocol. It "knows" nothing. The specific implementation in the OS, though, will detect if the TCP connection can no longer exchange data with or without payload (i.e. keepalive) and will close the local state of the connection. It will not do the normal TCP connection close, which involves sending FIN etc., because it can be assumed that the connection is broken.
After TCP knows that the connection is dead, how does the application level learn that information, so that it can close the socket and release the resources?
This depends on the application. The application needs to somehow monitor the state of the socket, i.e. by doing a write, read, select or similar. These functions will then no longer block, and the broken state of the connection can be determined, for example, from the error code returned by read/write. If the application does not care about the socket for some time, it will only notice the problem once it starts caring again.
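As a rough illustration (C on a POSIX system; the exact errno values depend on how the connection broke), a blocking read loop stops blocking once the local TCP state has been torn down, and the application sees the failure through the normal return values:
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Sketch: how an application blocked in recv() learns about a dead connection. */
static void read_until_broken(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n > 0)
            continue;                       /* normal data, keep going */
        if (n == 0) {
            printf("peer closed cleanly (FIN received)\n");
            break;
        }
        /* n < 0: the connection was declared broken; a keepalive failure
         * typically shows up as ETIMEDOUT, an RST as ECONNRESET */
        printf("connection broken: %s\n", strerror(errno));
        break;
    }
}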

How does a websocket know the server was taken down?

I was playing with websocket a bit (using Sails.js with its built-in socket thing, which is based on Socket.io).
I noticed Chrome receives two frames every 25 seconds. I thought this was some kind of polling to check whether the connection was still up.
But then I stopped the server and Chrome was notified immediately.
I also closed the Node process by force with the kill command, and Chrome was still notified, so that means it wasn't Node sending a signal before shutting down the server.
How does this happen?
Normal TCP socket connections do this, so it'd be surprising if websockets didn't.
The server kernel is responsible for cleaning up when the server process dies/exits/is killed. This includes releasing memory, closing files, and shutting down sockets. Cleanly shutting down a TCP socket requires sending a message to tell the peer.
Interestingly, on some old versions of Windows with a userspace winsock, this didn't happen if the server process crashed. On any OS with compliant TCP support, it should be guaranteed unless the kernel itself hangs, the machine loses power, or the network breaks.
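You can see the same thing with a plain TCP client, no WebSocket involved. The sketch below is POSIX C, and the 127.0.0.1:9000 address is just a placeholder for a test server: the client blocks in recv(), and the moment the server process is killed the server's kernel sends a FIN (or an RST if unread data was pending), so the blocked call returns immediately.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* placeholder test server address for this example */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[256];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);   /* blocks until data, FIN, or RST */
    printf("recv returned %zd - killing the server process unblocks this right away\n", n);
    close(fd);
    return 0;
}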

Listening Application (winsock2) behavior towards Port scanning (Syn Scan)

Should a server application that listens on a port be able to detect and log any connection attempt made by SYN scanning?
Test Scenario
I wrote a Windows program which I simply call "simpleServer.exe".
This program is just a simulation of a very basic server application.
It listens on a port and waits for incoming messages.
The listening socket is defined as a TCP stream socket.
That's all this program does.
I deployed this exact same program on 2 different machines, both running Windows 7 Professional 64-bit.
These machines act as the hosts,
and they are on the same network.
Then, using the program "nmap",
I used another machine on the same network to act as the client.
Using the "-sS" parameter of "nmap", I did a SYN scan against the IP and port of the listening simpleServer on both machines (one attempt at a time).
(Note that the 2 hosts already had Wireshark started, monitoring TCP packets from the client's IP and to the listening port.)
In the Wireshark capture on both machines, I saw the expected TCP packets for a SYN scan:
client ----(SYN)----> host
client <--(SYN/ACK)-- host
client ----(RST)----> host
The above packet exchange shows that the connection was not established.
But of the two instances of "simpleServer.exe", only one had "new incoming connection" printed in its logs, while the other instance was not alerted of any new incoming connection and hence produced no logs at all.
Code Snippets
// socket bind and listen was done above this loop
while (TRUE)
{
    // accept() returns only once a connection has been fully established
    sClient = accept(sListen, (SOCKADDR*)&remoteAddr, &nAddrLen);
    if (sClient == INVALID_SOCKET)
    {
        printf("Failed accept()");
        continue;
    }

    dwSockOpt(sListen);   // helper (defined elsewhere) that dumps the listening socket's options

    printf("recv a connection: %s\n", inet_ntoa(remoteAddr.sin_addr));
    closesocket(sClient);
}
side note:
Yes, since it is just a simple program, the flow might be a little funny, such as having no break in the while loop, so please don't mind the simple and flawed design.
Further Investigation
I also put a getsockopt() call in "simpleServer" right after it went into the listening state, to compare the listening socket's SOL_SOCKET options on the two hosts.
One notable difference I found between the two hosts is SO_MAX_MSG_SIZE.
The host that detects the incoming connection has the hex value 0x3FFFFFFF (1073741823), while the one with no logs has 0xFFFFFFFF (-1). I'm not sure if this is related or not; I just noted whatever differences I could find in my test environment. The other SOL_SOCKET values are more or less the same.
Side note: I tested on some other machines, covering another Windows 7 Professional, Windows Server 2008 R2 and Windows Server 2003. I am not sure if it is a coincidence or not, but the machines that have SO_MAX_MSG_SIZE == -1 all failed to detect the connection from the SYN scanning. Maybe it is just a coincidence; I have nothing to prove it either way.
Help That I Needed
Why is there different behavior between two instances of the same application on different machines with the same OS?
What determines the value of SO_MAX_MSG_SIZE, considering that two machines with the same OS end up with 2 different values?
If a connection is never established, accept() will never return. That disposes of 90% of your question.
The only explanation for the 'new incoming connection' (or 'recv a connection' or whatever it is) message is that something else connected.
SO_MAX_MSG_SIZE has no meaning for a TCP socket, let alone a listening TCP socket. So whatever variation you experienced is meaningless.

How does TCP connection terminate if one of the machine dies?

If a TCP connection is established between two hosts (A & B), and let's say host A has sent 5 octets to host B, and then host B crashes (for an unknown reason).
Host A will wait for acknowledgments, but on not getting them, will resend the octets and also reduce the sender window size.
This will repeat a couple of times until the window size shrinks to zero because of packet loss. My question is: what will happen next?
In this case, TCP eventually times out waiting for the ACKs and returns an error to the application. The application has to read/recv from the TCP socket to learn about that error; a subsequent write/send call will fail as well. Up to the point where TCP determines that the connection is gone, write/send calls will not fail; they'll succeed as seen from the application, or block if the socket buffer is full.
In the case where your host B vanishes after it has sent its ACKs, host A will not learn about that until it sends something to B, which will eventually also time out, or result in an ICMP error. (Typically the first write/send call will not fail, as TCP will not fail the connection immediately, and keep in mind that write/send calls do not wait for ACKs before they complete.)
Note also that retransmission does not reduce the window size.
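To make those last points concrete, here is a hedged sketch (POSIX C; the timings and error codes depend on the stack): after the peer has died, the first few send() calls still succeed because the data only has to reach the local socket buffer, and the failure only shows up on a later call once the retransmissions have timed out.
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: writes keep "succeeding" after the peer is dead, until TCP gives up. */
static void write_until_error(int fd)
{
    signal(SIGPIPE, SIG_IGN);               /* get EPIPE instead of being killed by the signal */
    const char msg[] = "ping\n";

    for (int i = 0; i < 1000; i++) {
        ssize_t n = send(fd, msg, sizeof(msg) - 1, 0);
        if (n < 0) {
            /* typically ETIMEDOUT once the retransmissions time out,
             * or EPIPE/ECONNRESET on the call after the error was recorded */
            printf("send %d failed: %s\n", i, strerror(errno));
            break;
        }
        printf("send %d succeeded (data is only in the local buffer)\n", i);
        sleep(5);
    }
}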
Please follow this link
Now, a very simple answer to your question in my view is: the connection will time out and will be closed. Another possibility is that some ICMP error might be generated due to the unresponsive machine.
Also, if the crashed machine comes online again, the procedure described in the link I pasted above will be observed.
It depends on the OS implementation. In short, it will wait for an ACK and resend packets until it times out. Then your connection will be torn down. To see exactly what happens in Linux look here; other OSes follow a similar algorithm.
In your case, a FIN will be generated (by the surviving node) and the connection will eventually migrate to the CLOSED state. If you keep grepping the netstat output for the destination IP address, you will watch the connection migrate from the ESTABLISHED state to TIME_WAIT and then finally disappear.
In your case, this will happen since TCP keeps a timer to get the ACK for the packet it has sent. This timer is not very long, so detection will happen pretty quickly.
However, if machine B dies after A gets the ACK and after that A doesn't send anything, then the above timer can't detect the event; another timer (called the idle timeout) will detect that condition and the connection will then be closed. This timeout period is high by default. But normally this is not the case: machine A will try to send something in between and will detect the error condition in the send path.
In short, TCP is smart enough to close the connection by itself (and let the application know about it) except for one case (the idle timeout, which by default is very high).
In normal cases, each side terminates its end of the connection by sending a special message with the FIN (finish) bit set.
The device receiving this FIN responds with an acknowledgement of the FIN to indicate that it has been received.
The connection as a whole is not considered terminated until both devices complete the shutdown procedure by sending a FIN and receiving an acknowledgement.
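For completeness, a minimal sketch of that orderly shutdown from one side's point of view (POSIX C): shutdown(SHUT_WR) sends our FIN, and recv() returning 0 tells us the peer's FIN has arrived.
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: graceful close of an established connection on descriptor 'fd'. */
static void graceful_close(int fd)
{
    char buf[1024];

    shutdown(fd, SHUT_WR);                      /* send our FIN; we can still read */
    while (recv(fd, buf, sizeof(buf), 0) > 0)   /* drain whatever the peer still sends */
        ;
    /* recv() returned 0 (peer's FIN) or an error; both FIN/ACK exchanges are done */
    close(fd);
}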

What is the cost of many TIME_WAIT on the server side?

Let's assume there is a client that makes a lot of short-living connections to a server.
If the client closes the connection, there will be many ports in the TIME_WAIT state on the client side. Once the client runs out of local ports, it becomes impossible to make new connection attempts quickly.
If the server closes the connection, I will see many TIME_WAITs on the server side. However, does this do any harm? The client (or other clients) can keep making connection attempts since they never run out of local ports, and the number of sockets in the TIME_WAIT state will increase on the server side. What happens eventually? Does something bad happen? (slowdown, crash, dropped connections, etc.)
Please note that my question is not "What is the purpose of TIME_WAIT?" but "What happens if there are so many TIME_WAIT states on the server?" I already know what happens when a connection is closed in TCP/IP and why the TIME_WAIT state is required. I'm not trying to troubleshoot it; I just want to know what the potential issue with it is.
To put it simply, let's say netstat -nat | grep :8080 | grep TIME_WAIT | wc -l prints 100000. What would happen? Does the OS's network stack slow down? A "Too many open files" error? Or just nothing to worry about?
Each socket in TIME_WAIT consumes some memory in the kernel, usually somewhat less than an ESTABLISHED socket yet still significant. A sufficiently large number could exhaust kernel memory, or at least degrade performance because that memory could be used for other purposes. TIME_WAIT sockets do not hold open file descriptors (assuming they have been closed properly), so you should not need to worry about a "too many open files" error.
The socket also ties up that particular src/dst IP address and port so it cannot be reused for the duration of the TIME_WAIT interval. (This is the intended purpose of the TIME_WAIT state.) Tying up the port is not usually an issue unless you need to reconnect with the same port pair. Most often one side will use an ephemeral port, with only one side anchored to a well-known port. However, a very large number of TIME_WAIT sockets can exhaust the ephemeral port space if you are repeatedly and frequently connecting between the same two IP addresses. Note this only affects this particular IP address pair, and will not affect establishment of connections with other hosts.
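As a rough worked example (assuming a Linux-like default ephemeral port range of 32768-60999, about 28,000 ports, and a TIME_WAIT interval of 60 seconds): a single client talking to a single server IP:port can sustain at most about 28000 / 60 ≈ 466 new connections per second before every ephemeral port is parked in TIME_WAIT; above that rate, new connection attempts to that destination start failing until old entries expire.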
Each connection is identified by a tuple (server IP, server port, client IP, client port). Crucially, the TIME_WAIT connections (whether they are on the server side or on the client side) each occupy one of these tuples.
With the TIME_WAITs on the client side, it's easy to see why you can't make any more connections - you have no more local ports. However, the same issue applies on the server side - once it has 64k connections in TIME_WAIT state for a single client, it can't accept any more connections from that client, because it has no way to tell the difference between the old connection and the new connection - both connections are identified by the same tuple. The server should just send back RSTs to new connection attempts from that client in this case.
Findings so far:
Even if the server closed the socket with a system call, its file descriptor will not be released if the socket enters the TIME_WAIT state. The file descriptor will be released later when the TIME_WAIT state is gone (i.e. after 2*MSL seconds). Therefore, too many TIME_WAITs will possibly lead to a 'too many open files' error in the server process.
I believe the OS TCP/IP stack has been implemented with proper data structures (e.g. a hash table), so the total number of TIME_WAITs should not affect the performance of the OS TCP/IP stack. Only the process (server) which owns the sockets in the TIME_WAIT state will suffer.
If you have a lot of connections from many different client IPs to the server IPs you might run into limitations of the connection tracking table.
Check:
sysctl net.ipv4.netfilter.ip_conntrack_count
sysctl net.ipv4.netfilter.ip_conntrack_max
Over all src IP/port and dst IP/port tuples you can only have net.ipv4.netfilter.ip_conntrack_max entries in the tracking table. If this limit is hit, you will see the message "nf_conntrack: table full, dropping packet." in your logs, and the server will not accept new incoming connections until there is space in the tracking table again.
This limitation might hit you long before the ephemeral ports run out.
In my scenario I ran a script which schedules files repeatedly; my product does some computations and sends a response to the client, i.e. the client makes a repetitive HTTP call to get the response for each file. When around 150 files are scheduled, the socket ports on my server go into the TIME_WAIT state and an exception is thrown in the client which opens an HTTP connection, i.e.:
Error : [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
The result was that my application hung. I do not know whether threads went into a wait state or what happened, but I needed to kill all processes or restart my application to make it work again.
I tried reducing the wait time to 30 seconds, since it is 240 seconds by default, but it did not work.
So basically the overall impact was critical, as it made my application non-responsive.
It looks like the server can just run out of ports to assign for incoming connections (for the duration of the existing TIME_WAITs) - an opening for a DoS attack.

Resources