I am trying to create a UDP server and client. How can I check whether a client is "connected" or has disconnected, since UDP isn't really a connection? How do multiplayer games do it?
Have you ever played a first-person game and lost your internet connection at some point? You should have gotten a message with a countdown before it automatically disconnects you. Have a look at this picture and notice the countdown to a timeout in the top right.
A UDP "connection" is maintained at the application level by continuously sending and receiving packets. So, as an example, if the server hasn't received anything from a client for 30 seconds, it can presume the client disconnected without letting the server know (disconnected by timeout).
The implementation is usually quite straightforward: use a timer and reset it each time you receive a packet; if the timer exceeds the TIMEOUT_VALUE, disconnect that client.
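Here is a minimal sketch of that idea in Python; the port, TIMEOUT_VALUE, and the sweep interval are illustrative assumptions, not from the answer:

```python
import socket
import time

TIMEOUT_VALUE = 30.0          # seconds of silence before a client is dropped
clients = {}                  # address -> time of last received packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # placeholder port
sock.settimeout(1.0)          # wake up periodically to sweep for timeouts

while True:
    try:
        data, addr = sock.recvfrom(4096)
        clients[addr] = time.monotonic()   # reset this client's timer
    except socket.timeout:
        pass                               # no packet this second; just sweep
    now = time.monotonic()
    for addr in [a for a, t in clients.items() if now - t > TIMEOUT_VALUE]:
        del clients[addr]                  # presumed disconnected by timeout
        print(f"{addr} timed out")
```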
I currently use WebSocket, and RFC 6455 says:
Upon receipt of a Ping frame, an endpoint MUST send a Pong frame in response, unless it already received a Close frame. It SHOULD respond with Pong frame as soon as is practical.
So WebSocket expects a Pong in response to a Ping. But if a Ping can be sent successfully over a TCP connection, doesn't that already mean the connection is alive? Why does the remote end need to respond with a Pong?
Because in the case of proxies, PING/PONG gives you an end-to-end check, while TCP mechanisms only assure that you can send to the next hop.
PING/PONG is also intended to measure performance: you can send a PING carrying a timestamp, and the PONG will echo it back.
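For illustration, a minimal round-trip-time sketch, assuming Python's third-party websockets library and a placeholder URL for a peer that answers Pings per RFC 6455:

```python
import asyncio
import time

import websockets

async def measure_rtt(url: str) -> float:
    async with websockets.connect(url) as ws:
        start = time.monotonic()
        pong_waiter = await ws.ping()   # sends a Ping frame
        await pong_waiter               # resolves when the matching Pong arrives
        return time.monotonic() - start

print(asyncio.run(measure_rtt("ws://example.com/ws")))  # placeholder URL
```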
I found a similar question. The application layer may need Ping/Pong because when we call send, we only know the data was copied to the network stack successfully; the TCP layer doesn't report to us when the data has actually been delivered.
Maybe someone can explain the design in more detail...
This question asks what to do about losing XMPP messages on mobile devices when they don't have a stable connection, but I don't really see why the messages get lost in the first place.
I remember having read that the stream between the server and the client stays open when the connection is suddenly lost and will only be destroyed once the connection times out. This means that the server sends arriving messages over the stream, even though the disconnected client can't receive those messages anymore.
I was happy with that explanation for some time, but then started wondering why core XMPP would lack such an important feature. Eventually I figured that ensuring correct transmission at the XMPP layer would be redundant, since the underlying TCP should already ensure proper transmission of the message; but judging from the various problems that arise from lost messages, it seems that this isn't true.
Why isn't TCP enough to ensure that the message is either correctly sent or fails properly so the server knows it has to send the message later?
Why isn't TCP enough to ensure a proper transmission (or proper error handling, so the server knows the message has to be sent again) in this scenario?
The application hands the data that needs to be sent across to its TCP layer. TCP segments the data as needed and sends the segments out on the established connection. The application passes the burden of ensuring the packets reach the other end over to TCP. (This does not mean the application should not have retransmissions: an application-level protocol can define resending of messages if the right response didn't come, as sketched below.)
TCP has a retransmission mechanism. Every segment sent to the peer needs to be acknowledged; until the acknowledgements come in, TCP keeps the segments in its send queue. Once the acknowledgement for a segment is received, it is removed.
If there is packet loss, acknowledgements don't arrive and TCP retransmits. Eventually it gives up and notifies the application, which needs to take action. Packet loss can happen beyond TCP's control, so TCP provides a best-effort reliable service.
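As a rough sketch of such an application-level resend in Python (the timeout, retry count, and framing are assumptions, not part of the answer):

```python
import socket

MAX_RETRIES = 3

def send_with_retry(sock: socket.socket, message: bytes) -> bytes:
    sock.settimeout(5.0)                  # per-attempt response timeout
    for attempt in range(MAX_RETRIES):
        sock.sendall(message)             # TCP delivers or errors; it can't confirm a reply
        try:
            response = sock.recv(4096)    # wait for the application-level reply
            if response:
                return response
        except socket.timeout:
            continue                      # no reply in time: resend the message
            # note: the peer may then see duplicates, so the
            # application protocol must be able to tolerate them
    raise RuntimeError("peer did not respond after retries")
```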
My question is: I have created a TCP connection, and when it sits without transferring any data for about an hour, it gets disconnected from the server, but it does not notify me that it's disconnected. Should I send keep-alive packets to the server? Or should I send keep-alive packets from the server to the client? Or both?
Yes, you should. A few days ago I created a TCP socket/server application and ran into the same problem. I fixed it by sending keep-alive packets.
If you send keep-alive packets, your problem will disappear.
I've heard some people say that the OS will send keep-alive packets for you. I am not very familiar with that, but sending keep-alive packets explicitly worked for me.
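For reference, a sketch of what enabling that OS-level keep-alive can look like in Python; SO_KEEPALIVE is portable, while the tuning constants below are Linux-specific, and the host, port, and values are placeholder assumptions:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 12345))      # placeholder host and port

sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # turn probes on
if hasattr(socket, "TCP_KEEPIDLE"):       # the tuning knobs exist on Linux
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # secs between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before reset
```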
it does not notify me that it's disconnected
It can't. There is no means whereby it could do so other than inducing an EOS or a 'connection reset' next time you try an I/O.
Should I send keep-alive packets ...
If you have this problem, yes. In general, probably not.
We are facing a random RST packet problem in our environment, which causes some unexpected behavior. The following image is a snapshot of the TCP data captured by Wireshark, which shows the problem:
1. The client (117.136.2.181) successfully sets up the connection with the server (192.168.40.16).
2. The client sends some data to the server, as well as the KEEP_ALIVE signal.
3. The server receives the data, processes it, and sends the result back to the client.
4. The server closes the socket.
5. The server does not receive the ACK from the client, so it retransmits the result data as well as the FIN; this is done automatically by the TCP protocol. However, the server still does not receive the ACK from the client.
6. The server sends an RST to the client, so the connection is closed.
After some analysis, we think some network problem happens after step 3, so the result data and the FIN sent from the server are never ACK'd by the client. But we are very confused about the RST sent by the server. Based on our understanding, an RST is sent when a half-closed socket receives data, or when there is data in the receive queue as a socket is closed. Neither of these seems to be the root cause in our case.
Can someone help elaborate why this is happening?
RST usually happens when close is called on the socket without shutdown, or after a shutdown while the other party is still trying to send data (i.e., has not yet replied with a FIN).
Some programming languages have a socket.close(timeout), for example .NET, which calls shutdown and then close after the timeout has passed.
So the client has up to timeout to finish sending and close the connection with a FIN; if it fails to do so, the connection is forcibly closed with an RST.
See https://stackoverflow.com/a/23483487/1438522 for a more thorough explanation of the difference between close and shutdown.
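To make the distinction concrete, here is a hedged Python sketch of a graceful shutdown-then-close versus the SO_LINGER trick that forces an RST; whether you ever want the latter depends on your protocol:

```python
import socket
import struct

def close_gracefully(sock: socket.socket) -> None:
    sock.shutdown(socket.SHUT_WR)     # send FIN, stop writing
    while sock.recv(4096):            # drain until the peer's FIN (EOF)
        pass
    sock.close()

def close_with_rst(sock: socket.socket) -> None:
    linger = struct.pack("ii", 1, 0)  # l_onoff=1, l_linger=0 seconds
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger)
    sock.close()                      # discards unsent data and emits RST
```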
I know that TCP is very reliable, and whatever is sent is guaranteed to reach its destination. But what happens if, after a packet is sent but before it arrives at the server, the server goes down? Is the acknowledgement that the packet was successfully sent triggered merely by the server existing when the packet is initially sent, or only when the packet successfully arrives at the server?
Basically, what I'm asking is: if the server goes down between the sending and the receiving of a packet, would the client know?
It really doesn't matter, but here are some finer details:
You need to distinguish between the Server-Machine going down and the Server-Process going down.
If the Server-Machine has crashed, then, clearly, there is nothing to receive the packet. The sending client will get no retry-requests, and no acknowledgment of success or failure. After having not received any feedback at all, the client will eventually receive a timeout, and consider the connection dropped. This is pretty much identical to the cable being physically cut unexpectedly.
If, however, the Server-Machine remains functioning, but the Server-Process crashes due to a programming bug, then the receiving TCP stack, which is a function of the OS, not of the process, will likely ACK the packet, and any others that arrive. This will continue until the OS notifies the TCP stack that the process is no longer active. The TCP stack will likely send an RST (reset) notice to the client, or may drop the connection (as described above).
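A small sketch of how those two cases surface on the client side in Python (host, port, and timeout are placeholder assumptions):

```python
import socket

sock = socket.create_connection(("example.com", 12345), timeout=30)
try:
    data = sock.recv(4096)
    if not data:
        print("peer closed cleanly (FIN / EOF)")
except ConnectionResetError:
    print("peer process died or reset the connection (RST)")
except socket.timeout:
    print("no response at all: machine down or cable cut")
```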
This is basically what happens. The full reality is hard to describe without getting tied up in unnecessary detail.
TCP manages connections which are defined as a 4-tuple (source-ip, source-port, dest-ip, dest-port).
When the server closes the connection, the connection is placed into a TIME_WAIT state where it cannot be re-used for a certain time. That time is double the maximum segment lifetime, i.e. twice the longest time a packet can survive on the network. Any packets that arrive during that time are discarded by TCP itself.
So, when the connection becomes available for re-use, all packets have been destroyed (anywhere on the network) either by:
being received at the destination and thrown away due to the TIME_WAIT state; or
being destroyed by packet forwarders on the net due to expired lifetime.
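One practical consequence, sketched in Python: a server restarting while its old connection still sits in TIME_WAIT can't rebind the port unless it opts in with SO_REUSEADDR (the port number is a placeholder):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # tolerate TIME_WAIT leftovers
sock.bind(("0.0.0.0", 8080))   # would raise "Address already in use" without the option
sock.listen()
```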
When you send a packet onto the network, there is never a guarantee that it will get safely to the other side. The reliability of TCP is achieved exactly as you suggest, using acknowledgement packets.