My question is: I have created a TCP connection, and when it sits without transferring any data for about an hour, it gets disconnected from the server, but nothing notifies me that it's disconnected. Should I send keep-alive packets to the server? Should I send keep-alive packets from the server to the client? Or should I send them in both directions?
Yes, you should. A few days ago I created a TCP socket/server application and ran into the same problem. I fixed it by starting to send keep-alive packets.
If you send keep-alive packets, your problem will disappear.
I've heard some people say that the OS will send keep-alive packets for you. I am not very familiar with that, but sending keep-alive packets explicitly worked for me.
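As a concrete illustration of the OS-level approach: on a Berkeley-sockets API you can ask the kernel to send the keep-alives for you by enabling SO_KEEPALIVE. Here is a minimal Python sketch; the TCP_KEEP* tuning options are Linux-specific, and the host and port are placeholders.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the kernel to probe idle connections on our behalf.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: start probing after 60 s of idleness,
# probe every 10 s, give up after 5 unanswered probes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

sock.connect(("example.com", 12345))  # placeholder peer
# A dead peer now surfaces as an error on the next read or write
# within roughly 60 + 5 * 10 seconds, instead of never.
```

As to which side should do it: keep-alive is enabled per socket, so enable it on whichever end needs to find out that the other end is gone (either one, or both).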
it does not notify me that it's disconnected
It can't. There is no means by which it could do so, other than inducing an end-of-stream or a 'connection reset' the next time you attempt an I/O operation.
Should I send keep-alive packets ...
If you have this problem, yes. In general, probably not.
Related
This question asks what to do about losing XMPP messages on mobile devices when they don't have a stable connection, but I don't really see why the messages get lost in the first place.
I remember having read that the stream between the server and the client stays open when the connection is suddenly lost, and is only destroyed once the connection times out. This means the server keeps sending arriving messages over the stream, even though the disconnected client can no longer receive them.
I was happy with that explanation for some time, but started wondering why core XMPP would lack such an important feature. Eventually I realised that ensuring correct transmission at the XMPP level would seem redundant, as the underlying TCP should already ensure proper delivery of the message; but judging by the various problems that arise from lost messages, this apparently isn't true.
Why isn't TCP enough to ensure that the message is either transmitted correctly or fails in a detectable way, so the server knows it has to send the message again later?
The application hands the data to be sent across over to its TCP stack. TCP segments the data as needed and sends the segments out on the established connection. The application thereby passes the burden of ensuring the packet reaches the other end on to TCP. (This does not mean the application should have no retransmissions of its own: an application-level protocol can define re-sending of messages if the right response doesn't come back.)
TCP has a retransmission mechanism. Every packet sent to the peer needs to be acknowledged. Until the acknowledgements come in, TCP keeps the packets in its send queue; once the acknowledgement for a packet is received, it is removed from the queue.
If there is packet loss, the acknowledgements don't arrive and TCP retransmits. Eventually it gives up and notifies the application, which then needs to take action. Packet loss can happen beyond TCP's control, so TCP provides a best-effort reliable service.
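To make the last point concrete: a TCP acknowledgement only means the peer's kernel received the bytes, not that the peer application ever processed them, which is why protocols layer their own acknowledgements on top (XMPP does this with stream management, XEP-0198). Below is a hypothetical sketch of such an application-level acknowledgement; the wire format ("<id> <payload>" answered by "ACK <id>") is invented for the example.

```python
import socket

def send_reliably(sock, msg_id, payload, timeout=5.0, retries=3):
    """Send a message and resend until the peer application acks it."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendall(f"{msg_id} {payload}\n".encode())
        try:
            reply = sock.recv(128)          # wait for the peer's app-level ack
        except socket.timeout:
            continue                        # no ack in time: resend
        if reply.decode().strip() == f"ACK {msg_id}":
            return True                     # the peer *application* has the message
    return False                            # give up; caller should queue it for later
```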
(Original title: "Weird TCP connection close behavior")
I am troubleshooting the TCP connection process using Wireshark. A client opens a connection to a server (I tried two different servers) and starts receiving a long stream of data. At some point the client wants to stop and sends the server a [FIN, ACK] packet, but the server does not stop sending data; it continues until the end of its own full stream, and then sends its own completion packet [FIN, PSH, ACK]. I figured this out by continuing to read data from the client's socket after the client had sent the FIN packet. Also, after the client sends this FIN packet, its state is FIN_WAIT, i.e. waiting for the FIN response from the server...
Why do servers not stop sending data and respond to the FIN packet with an acknowledgment that has FIN set?
I would expect that, after the client sends the FIN packet, the server would still send the few packets that were in flight before it received the FIN, but not the entire remainder of the long data stream!
Edit: reading this I think the web server is stuck in the stage "CLOSE-WAIT: The server waits for the application process on its end to signal that it is ready to close" (third row), and its data-sending process "is done" only once it has flushed all contents to the socket on its end, and this process cannot be terminated. Weird.
Edit 1: it appears my question is really a slightly different one. I need to completely terminate the connection on the client's side, so that the server stops sending data, does not misbehave on account of the forceful termination from the client's side, aborts its data-sending thread, and is ready for the next connection.
Edit 2: the environment is HTTP servers.
The client has only shut down the connection for output, not closed it. So the server is fully entitled to keep sending.
If the client had closed the connection, it would issue an RST in response to any further data received, which would stop the server from sending any more, modulo buffering.
Why do servers not stop sending data and respond to the FIN packet with an acknowledgment that has FIN set?
Why should they? The client has said it won't send another request, but that doesn't mean it isn't interested in the response to any requests it has already sent.
Most protocols, such as HTTP, specify that the server should complete the response to the current request and only then close the connection. This is not an abnormal abort; it's just a promise not to send anything else.
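A minimal Python sketch of the half-close described above, with a placeholder host: shutdown(SHUT_WR) sends the FIN but leaves the socket readable, so the client can still drain the server's response to end-of-stream.

```python
import socket

sock = socket.create_connection(("example.com", 80))   # placeholder server
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
sock.shutdown(socket.SHUT_WR)      # FIN: "I won't send any more requests"

# The server is still entitled to finish its response; read until EOF.
chunks = []
while True:
    data = sock.recv(4096)
    if not data:                   # the server's own FIN arrived: stream done
        break
    chunks.append(data)
sock.close()                       # now fully release the connection
```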
I am trying to create a UDP server and client. How can I check whether a client is "connected" or has disconnected, given that UDP doesn't really have connections? How do multiplayer games do it?
Have you ever played a first-person game and at some point lost your internet connection? You should have gotten a message with a countdown before it automatically disconnected you. Have a look at this picture and notice, in the top right, the countdown to a timeout.
A UDP "connection" is maintained by continuously sending and receiving packets at the application level. So, as an example, if the server hasn't received anything from a client for 30 seconds, it can presume the client disconnected without letting the server know (disconnected by timeout).
The implementation is usually quite straightforward: use a timer and reset it each time you receive a packet; if the timer exceeds TIMEOUT_VALUE, disconnect that client.
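A minimal Python sketch of that timer scheme; the port and the 30-second TIMEOUT_VALUE are placeholders.

```python
import socket
import time

TIMEOUT_VALUE = 30.0               # seconds of silence before a client is dropped
last_seen = {}                     # client address -> time of last packet

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("0.0.0.0", 9999))     # placeholder port
server.settimeout(1.0)             # wake up periodically even when idle

while True:
    try:
        data, addr = server.recvfrom(2048)
        last_seen[addr] = time.monotonic()     # reset this client's timer
    except socket.timeout:
        pass                                   # no packet this second; just sweep
    now = time.monotonic()
    for addr in [a for a, t in last_seen.items() if now - t > TIMEOUT_VALUE]:
        print(f"{addr} timed out, treating as disconnected")
        del last_seen[addr]
```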
We are facing a random RST packet problem in our environments, which causes some unexpected behavior. The following image is a snapshot of the TCP data captured by Wireshark, which shows the problem:
1. The client (117.136.2.181) successfully sets up the connection with the server (192.168.40.16).
2. The client sends some data to the server, as well as the KEEP_ALIVE signal.
3. The server receives the data, processes it, and sends the result back to the client.
4. The server closes the socket.
5. The server does not receive the ACK signal from the client, so it retransmits the result data as well as the FIN signal; this is done automatically by the TCP protocol. However, the server still does not receive the ACK signal from the client.
6. The server sends a RST signal to the client, so the connection is closed.
After some analysis, we think some network problem happens after step 3, so none of the result data and FIN signals sent from the server are ack'd by the client; but we are very confused about the RST signal sent by the server. Based on our understanding, a RST signal is sent if a half-closed socket receives some data, or if there is data in the receive queue when a socket is closed. But neither of these seems to be the root cause in our case.
Can someone help explain why this is happening?
RST usually happens when close is called on the socket without shutdown, or after a shutdown while the other party is still trying to send data (i.e. has not yet replied with a FIN).
Some programming languages have a socket.close(timeout), for example .NET, which calls shutdown and then close after the timeout has passed.
So the client has up to timeout to finish sending and to close the connection with a FIN; if it fails to do so, the connection is forcibly closed with a RST.
See https://stackoverflow.com/a/23483487/1438522 for a more thorough explanation of the difference between close and shutdown.
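Python has no close(timeout) like .NET, but the FIN-versus-RST distinction can be illustrated with SO_LINGER; the zero-linger-causes-RST behavior shown below is the usual BSD/Linux semantics.

```python
import socket
import struct

def close_gracefully(sock):
    """Orderly teardown: send a FIN and let TCP finish normally."""
    sock.shutdown(socket.SHUT_RDWR)
    sock.close()

def close_abortively(sock):
    """Abortive teardown: linger on with a zero timeout makes close()
    discard unsent data and send a RST instead of a FIN."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))   # l_onoff=1, l_linger=0
    sock.close()                               # the peer sees ECONNRESET
```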
I know that TCP is very reliable and that whatever is sent is guaranteed to reach its destination. But what happens if, after a packet is sent but before it arrives at the server, the server goes down? Is the acknowledgment that the packet was successfully sent generated when the packet is initially sent, or only when the packet actually arrives at the server?
Basically, what I'm asking is: if the server goes down between the sending and the receiving of a packet, would the client know?
It really doesn't matter, but here are some finer details:
You need to distinguish between the Server-Machine going down and the Server-Process going down.
If the Server-Machine has crashed, then, clearly, there is nothing to receive the packet. The sending client will get no retry-requests, and no acknowledgment of success or failure. After having not received any feedback at all, the client will eventually receive a timeout, and consider the connection dropped. This is pretty much identical to the cable being physically cut unexpectedly.
If, however, the Server-Machine remains functioning but the Server-Process crashes due to a programming bug, then the receiving TCP stack, which is a function of the OS, not of the process, will likely ACK the packet, and any others that arrive. This will continue until the OS notifies the TCP stack that the process is no longer active; at that point the TCP stack will likely send a RST (reset) notice to the client, or may drop the connection (as described above).
This is basically what happens. The full reality is hard to describe without getting tied up in unnecessary detail.
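A client-side sketch of those outcomes in Python, with a placeholder host and port: an orderly end-of-stream, a silent timeout when the machine is gone, and a reset when only the process died.

```python
import socket

sock = socket.create_connection(("example.com", 12345))  # placeholder peer
sock.settimeout(10.0)              # don't block forever if the machine is gone

try:
    sock.sendall(b"hello")
    reply = sock.recv(4096)
    if not reply:                  # orderly FIN: the peer closed normally
        print("peer closed the connection")
except socket.timeout:
    # No feedback at all: machine down, cable cut, or network partition.
    print("no response; assuming the connection is dead")
except ConnectionResetError:
    # A RST arrived: the machine is up, but the process/port is gone.
    print("connection reset by peer")
```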
TCP manages connections which are defined as a 4-tuple (source-ip, source-port, dest-ip, dest-port).
When the server closes the connection, the connection is placed into the TIME_WAIT state, where it cannot be re-used for a certain time. That time is double the maximum segment lifetime (2*MSL), i.e. twice the longest time a packet can survive on the network. Any packets that arrive during that time are discarded by TCP itself.
So, when the connection becomes available for re-use, all packets have been destroyed (anywhere on the network) either by:
being received at the destination and thrown away due to the TIME_WAIT state; or
being destroyed by packet forwarders on the net due to expired lifetime.
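One practical consequence of TIME_WAIT: a restarted server often cannot immediately rebind its port (bind fails with "address already in use") unless it sets SO_REUSEADDR first. A minimal sketch with a placeholder port:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow binding even while connections from the previous run
# are still sitting in TIME_WAIT.
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))     # placeholder port
server.listen()
```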
When you send a packet onto the network, there is never a guarantee that it will get safely to the other side. The reliability of TCP is achieved exactly as you suggest: using acknowledgment packets.