I know that TCP is very reliable, and whatever is sent is guaranteed to get to its destination. But what happens if, after a packet is sent but before it arrives at the server, the server goes down? Is the acknowledgment that the packet was successfully sent tied to the server being up when the packet is initially sent, or to the packet actually arriving at the server?
Basically what I'm asking is - if the server goes down in between the sending and the receiving of a packet, would the client know?
It really doesn't matter, but here are some finer details:
You need to distinguish between the Server-Machine going down and the Server-Process going down.
If the Server-Machine has crashed, then, clearly, there is nothing to receive the packet. The sending client will get no response at all, neither an acknowledgment of success nor a notice of failure. Having received no feedback, the client's TCP will eventually time out and consider the connection dropped. This is pretty much identical to the cable being physically cut unexpectedly.
If, however, the Server-Machine remains functioning but the Server-Process crashes due to a programming bug, then the receiving TCP stack, which is a function of the OS, not of the process, will likely ACK the packet, and any others that arrive. This will continue until the OS notifies the TCP stack that the process is no longer active. The TCP stack will then likely send a RST (reset) notice to the client, or may simply drop the connection (as described above).
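For illustration, here is a minimal sketch of what the client side might observe in each case, assuming an already-connected TCP socket fd (the variable name and buffer are invented for the example):

/* After the server process dies, its OS sends a RST; the next read on the
   client typically fails with ECONNRESET. An orderly close would instead
   return 0 (EOF), and a dead machine eventually shows up as ETIMEDOUT once
   the client's retransmissions give up. */
char buf[512];
ssize_t n = recv(fd, buf, sizeof(buf), 0);
if (n == 0) {
    /* orderly close: the peer sent a FIN */
} else if (n < 0 && errno == ECONNRESET) {
    /* reset: the peer process crashed or aborted the connection */
} else if (n < 0 && errno == ETIMEDOUT) {
    /* no feedback at all: machine down or cable cut */
}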
This is basically what happens. The full reality is hard to describe without getting tied up in unnecessary detail.
TCP manages connections which are defined as a 4-tuple (source-ip, source-port, dest-ip, dest-port).
When the server closes the connection, the connection is placed into a TIME_WAIT state where it cannot be re-used for a certain time. That time is twice the maximum segment lifetime (2*MSL). Any packets that arrive during that time are discarded by TCP itself.
So, when the connection becomes available for re-use, all packets have been destroyed (anywhere on the network) either by:
being received at the destination and thrown away due to the TIME_WAIT state; or
being destroyed by packet forwarders on the net due to expired lifetime.
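As a side note, the practical consequence most people run into with TIME_WAIT is that a restarted server cannot immediately bind() the port it was just using while old connections linger in that state. A minimal sketch of the usual workaround, assuming a plain IPv4 listener (the port number is arbitrary):

int s = socket(AF_INET, SOCK_STREAM, 0);
int on = 1;
/* allow bind() to succeed even while old connections on this port
   are still lingering in TIME_WAIT */
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
struct sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(5000);
addr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(s, (struct sockaddr *)&addr, sizeof(addr));
listen(s, 16);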
When you send a packet onto the network, there is never a guarantee it will get safely to the other side. The reliability of TCP is achieved exactly as you suggest: using acknowledgment packets.
Related
This question asks what to do about losing XMPP messages on mobile devices when they don't have a stable connection, but I don't really see why the messages get lost in the first place.
I remember having read that the stream between the server and the client stays open when the connection is suddenly lost and will only be destroyed once the connection times out. This means that the server sends arriving messages over the stream, even though the disconnected client can't receive those messages anymore.
I was happy with that explanation for some time, but started wondering why core XMPP would be lacking such an important feature. Eventually I noticed that ensuring correct transmission in the XMPP protocol would be redundant, as the underlying TCP should already ensure the proper transmission of the message; but given the various problems that arise from lost messages, it seems that this isn't true.
Why isn't TCP enough to ensure that the message is either correctly delivered or fails properly, so the server knows it has to send the message again later?
The application gives the data that needs to be sent across to its TCP. TCP segments the data as needed and sends the segments out on the established connection. The application passes the burden of ensuring the packet reaches the other end to TCP. (This does not mean the application should never re-transmit: an application-level protocol can define re-sending of messages if the right response didn't come.)
TCP has a re-transmission mechanism. Every packet sent to the peer needs to be acknowledged. Until the acknowledgements come in, TCP keeps the packets in its send queue. Once the acknowledgement for the packets sent is received, they are removed.
If there is packet loss, acknowledgements don't arrive. TCP does the re-transmissions, eventually gives up, and notifies the application, which needs to take action. Packet loss can happen beyond TCP's control. Thus TCP provides a best-effort reliable service.
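As a rough illustration of the application-level resend mentioned above, a sender might do something like this (the retry count, timeout, and the fd/request variables are invented for the sketch):

/* Hypothetical application-level retry: send the request, wait up to
   2 seconds for an application-level reply, retry a few times, give up. */
int tries;
for (tries = 0; tries < 3; tries++) {
    send(fd, request, request_len, 0);

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    struct timeval tv = { 2, 0 };
    if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
        break;                      /* reply arrived; read and check it */
    /* timed out: loop and resend */
}
if (tries == 3) {
    /* no usable response: report the failure to the caller */
}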
We have an application which periodically sends TCP messages at a defined rate (using MODBUS TCP). If a message is not received within a set period, an alarm is raised. However, every once in a while there appears to be a delay in messages being received. Investigation has shown that this is associated with the ARP cache being refreshed, causing a resend of the TCP message.
The IP stack provider has told us that this is the expected behaviour for TCP. The questions are:
Is this expected behaviour for an IP stack? If not, how do other stacks work around the period when IP/MAC address translation is not available?
If this is the expected behaviour, how can we reduce the delay in TCP messages during this period? (Permanent ARP entries have been tried, but are not the best solution.)
In my last job I worked with a company building routers and switches. Our implementation would queue packets waiting for ARP replies and send them when the ARP reply was received. Therefore, no TCP retransmit required.
Retransmission in TCP occurs when an ACK is not received within a given time. If the ARP reply takes a long time, or is itself lost, you might be getting a retransmission even though the device waiting for the ARP reply is queuing the packet.
It would appear from your question that the period of the TCP messages is shorter than the ARP refresh time. This implies that using the ARP entry is not keeping it refreshed; some stacks do refresh an entry on use, which would help in your situation.
A packet trace of the situation occurring could be helpful - are you actually losing the first packet? How long does the ARP reply take?
In order to stop the ARP cache timing out, you might want to try to find something that will refresh it, such as another ARP request for the same address, or a gratuitous ARP.
I found a specification for MODBUS TCP but it didn't help. Can you post some details of your network - media, devices, speeds?
Your description suggests that the peer ARP entries expire between TCP segments and cause some subsequent segments to fail due to the lack of a current MAC destination.
If you have the MODBUS devices on a separate subnet, then perhaps the destination router will be kind enough to queue the segment until it receives a valid MAC. If you cannot use a separate subnet, you could try to force the session to have keep-alives activated - this would cause a periodic empty message to be sent that would keep the ARP timers resetting. If the overhead of the keep-alive is too high and you completely control the application in your system, you could try to force zero-length messages through to the peer.
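If you do go the keep-alive route, here is a hedged sketch of enabling it on Linux with an interval short enough to keep the ARP entry warm (TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific socket options, and the values are placeholders you would tune against your ARP timeout):

int on = 1, idle = 10, intvl = 10, cnt = 3;
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));          /* enable keep-alive probes */
setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));     /* idle seconds before the first probe */
setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));  /* seconds between probes */
setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));        /* failed probes before the connection is dropped */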
What would be a good list of failure scenarios for testing a reliable UDP layer? I have thought of the cases below:
Drop data packets
Drop ACK/NAK packets
Send packets out of sequence
Drop initial handshake packets
Drop close/shutdown packets
Duplicate packets
Please help me identify other cases that a reliable UDP layer needs to handle.
The list you've given sounds pretty good. Also think about:
Very delayed packets (where most packets come through fine, but one or two are delayed by several minutes);
Very delayed duplicates (where the original came through quickly, but the duplicate arrived after several minutes delay);
Silent dropping of all packets above a certain size (both unidirectional and bidirectional cases);
Highly variable delays;
Sequence number wrapping tests.
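One way to exercise several of these cases at once is to run the traffic through a small impairment relay that randomly drops and duplicates datagrams. A one-directional sketch (ports, addresses, and probabilities are arbitrary, includes and error handling are omitted, and delay/reordering would need a timed queue):

/* Lossy UDP relay: listen on port 9000, forward to 127.0.0.1:9001,
   dropping roughly 10% of datagrams and duplicating roughly 5%. */
int s = socket(AF_INET, SOCK_DGRAM, 0);
struct sockaddr_in listen_addr, fwd_addr;
memset(&listen_addr, 0, sizeof(listen_addr));
listen_addr.sin_family = AF_INET;
listen_addr.sin_port = htons(9000);
listen_addr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(s, (struct sockaddr *)&listen_addr, sizeof(listen_addr));

memset(&fwd_addr, 0, sizeof(fwd_addr));
fwd_addr.sin_family = AF_INET;
fwd_addr.sin_port = htons(9001);
inet_pton(AF_INET, "127.0.0.1", &fwd_addr.sin_addr);

char buf[65536];
for (;;) {
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
    if (n < 0)
        continue;
    if (rand() % 100 < 10)
        continue;                                        /* drop */
    sendto(s, buf, n, 0, (struct sockaddr *)&fwd_addr, sizeof(fwd_addr));
    if (rand() % 100 < 5)                                /* duplicate */
        sendto(s, buf, n, 0, (struct sockaddr *)&fwd_addr, sizeof(fwd_addr));
}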
Have you tried intentionally corrupting packets in transit?
Also, have you considered a scenario where only one-way communication is possible? In this case, the sending host thinks that the send failed, but the receiving end successfully processes the message. For instance:
1. Host A sends a message to host B
2. B successfully receives the message and replies with an ACK
3. The ACK gets dropped in the network
4. A waits for the timeout and re-sends the message (repeating steps 1-3)
5. Host A exceeds its retry count and thinks the send failed, but host B has in fact processed the message
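This is why a reliable-UDP receiver typically has to deduplicate by sequence number and always re-acknowledge, rather than assume a retransmission means the original was lost. A toy sketch of the receiving side (buf, n, sock, and peer_addr are assumed to exist; extract_seq, already_seen, mark_seen, and send_ack are made-up helpers for the example):

/* Process a datagram only the first time its sequence number is seen,
   but ACK it every time, so the sender stops retrying even when an
   earlier ACK was lost in the network. */
uint32_t seq = extract_seq(buf, n);
if (!already_seen(seq)) {
    process_message(buf, n);
    mark_seen(seq);
}
send_ack(sock, seq, &peer_addr);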
I had thought UDP was a connectionless and unreliable protocol that does not require any specific transport handshake between hosts, and hence that there is no such thing as a reliable UDP protocol.
I am trying to create an iterative server based on datagram sockets (UDP).
It calls connect() for the first client, whose address it gets from the first recvfrom() call (yes, I know this is not a real connection).
After having served this client, I disconnect the UDP socket (by calling connect() with AF_UNSPEC).
Then I call recvfrom() to get the first packet from the next client.
Now the problem is that the call to recvfrom() in the second iteration of the loop returns 0. My clients never send empty packets, so what could be going on?
This is what I am doing (pseudocode):
s = socket(PF_INET, SOCK_DGRAM, 0);
bind(s, ...);                                  // bind to the server's port
for (;;)
{
    recvfrom(s, header, ..., &client_address); // get the first packet from a client
    connect(s, client_address);                // "connect" to this client
    serve_client(s);
    connect(s, AF_UNSPEC);                     // disconnect, ready to serve the next client
}
EDIT: I found the bug: my client was accidentally sending an empty packet.
Now my problem is how to make a client wait to be served instead of sending its request into nowhere (while the server is connected to another client and is not serving anyone else yet).
connect() is really completely unnecessary on SOCK_DGRAM.
Calling connect does not stop you receiving packets from other hosts, nor does it stop you sending them. Just don't bother, it's not really helpful.
CORRECTION: yes, apparently it does stop you receiving packets from other hosts. But doing this in a server is a bit silly, because any other clients would be locked out while you were connect()ed to one. Also, you'll still need to catch the "chaff" which floats around. There are probably some race conditions associated with connect() on a DGRAM socket - what happens if you call connect() and packets from other hosts are already in the buffer?
Also, 0 is a valid return value from recvfrom(), as empty (no data) packets are valid and can exist (indeed, people often use them). So you can't check whether something has succeeded that way.
In all likelihood, a zero byte packet was in the queue already.
Your protocol should be engineered to minimise the chance of an errant datagram being misinterpreted; for this reason I'd suggest you don't use empty datagrams, and use a magic number instead.
UDP applications MUST be capable of recognising "chaff" packets and dropping them; they will turn up sooner or later.
man connect:
...
If the initiating socket is not connection-mode, then connect()
shall set the socket’s peer address, and no connection is made.
For SOCK_DGRAM sockets, the peer address identifies where all
datagrams are sent on subsequent send() functions, and limits
the remote sender for subsequent recv() functions. If address
is a null address for the protocol, the socket’s peer address
shall be reset.
...
Just a correction in case anyone stumbles across this like I did. To disconnect, connect() needs to be called with the sa_family member of the sockaddr set to AF_UNSPEC, not just passed AF_UNSPEC.
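In code, the disconnect that correction describes looks roughly like this (plain POSIX sockets; check your platform's man page, as systems differ in how strict they are about the address length):

/* Dissolve the association on a SOCK_DGRAM socket: pass a sockaddr whose
   sa_family is AF_UNSPEC, rather than passing the bare AF_UNSPEC constant. */
struct sockaddr unspec;
memset(&unspec, 0, sizeof(unspec));
unspec.sa_family = AF_UNSPEC;
connect(s, &unspec, sizeof(unspec));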
How does TCP/IP report errors when packet delivery fails permanently? All Socket.write() APIs I've seen simply pass bytes to the underlying TCP/IP output buffer and transfer the data asynchronously. How then is TCP/IP supposed to notify the developer if packet delivery fails permanently (i.e. the destination host is no longer reachable)?
Any protocol that requires the sender to wait for confirmation from the remote end will get an error message. But what happens for protocols where a sender doesn't have to read any bytes from the destination? Does TCP/IP just fail silently? Perhaps Socket.close() will return an error? Does the TCP/IP specification say anything about this?
TCP/IP is a reliable byte stream protocol. All your bytes will get to the receiver or you'll get an error indication.
The error indication will come in the form of a closed socket. Regardless of what the communication pattern (who does the sending), if the bytes can't be delivered, the socket will close.
So the question is, how do you see the socket close? If you're never reading, you'd eventually get an error trying to write to the closed socket (with ECONNRESET errno, I think).
If you have a need to sleep or wait for input on another file handle, you might want to do your waiting in a select() call where you include the socket in the list of sources you're waiting on (even if you never expect to receive anything). If the select() indicates that the socket is ready for a read call, you may get a -1 return (with ECONNRESET, I think). An EOF (a return of 0) would indicate an orderly close (the other side did a shutdown() or close()).
How to distinguish this error close from a clean close (other program exiting, for example)? The errno values may be enough to distinguish error from orderly close.
If you want an unambiguous indication of a problem, you'll probably need to build some sort of application level protocol above the socket layer. For example, a short "ack" message sent by the receiver back to the sender. Then the violation of that higher level application protocol (sender didn't see an ack) would be a confirmation that it was an error close vs a clean close.
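For example, a waiting peer might fold the socket into its select() loop like this (sock is assumed to be the connected socket; MSG_PEEK is used so no real data is consumed by the probe):

/* Watch the socket even though no data is expected, so a close or reset
   from the peer is noticed promptly. */
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(sock, &rfds);
if (select(sock + 1, &rfds, NULL, NULL, NULL) > 0 && FD_ISSET(sock, &rfds)) {
    char probe;
    ssize_t n = recv(sock, &probe, 1, MSG_PEEK);
    if (n == 0) {
        /* orderly close: the peer shut down or closed its end */
    } else if (n < 0 && errno == ECONNRESET) {
        /* error close: the connection was reset */
    }
}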
The sockets API has no way of informing the writer exactly how many bytes have been received as acknowledged by the peer. There are no guarantees made by the presence of a successful shutdown or close either.
The TCP/IP specification says nothing about the application interface (which is nearly always the sockets API).
SCTP is an alternative to TCP which attempts to address these shortcomings, among others.
In C, send() returns the number of bytes that were sent. If this does not match the number of bytes you meant to send, then you have a problem. Also, when you write to a socket whose connection has failed, you get SIGPIPE. Before you start socket handling, you need to have a signal handler in place that will alert you when you get SIGPIPE.
If you are reading from a socket, you really should wrap it with an alarm so you can timeout. Like "alarm(timeout_val); recv(); alarm(0)". Check the return code of recv, and if it's 0, that indicates that the connection has been closed. A negative return result indicates a read failure and you need to check errno.
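A hedged sketch of both points, assuming sock and buf already exist. Note that ignoring SIGPIPE is the simpler alternative to the alerting handler described above, and that the SIGALRM handler must be installed without SA_RESTART so the blocking recv() is actually interrupted:

void on_alarm(int signo) { (void)signo; }   /* empty handler: just interrupt recv() */

/* ... during setup ... */
signal(SIGPIPE, SIG_IGN);                   /* or install an alerting handler instead; with SIG_IGN, send() returns -1 with EPIPE */
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = on_alarm;                   /* no SA_RESTART, so recv() returns with EINTR */
sigaction(SIGALRM, &sa, NULL);

/* ... when reading ... */
alarm(5);                                   /* 5-second read timeout */
ssize_t n = recv(sock, buf, sizeof(buf), 0);
alarm(0);
if (n == 0) {
    /* connection closed by the peer */
} else if (n < 0 && errno == EINTR) {
    /* the alarm fired: treat as a timeout */
} else if (n < 0) {
    /* other read failure: check errno */
}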
TCP is built upon the IP protocol, which is the centerpiece for the Internet, providing much of the interoperability that drives routing, which is what determines how to get packets from their source to their destination. The IP protocol specifies that error messages should be sent back to the sender via the Internet Control Message Protocol (ICMP) in the case of a packet failing to reach its destination. Some of these reasons include the Time To Live (TTL) field being decremented to zero, often meaning that the packet got stuck in a routing loop, or the packet being dropped due to switch contention causing buffer overruns. As others have said, it is the responsibility of the Socket API that is being used to relay these errors at the IP layer up to the application interacting with the network at the TCP layer.
TCP/IP packets are either raw, UDP, or TCP. TCP requires each byte to be acked, and it will re-transmit bytes that are not acked in time. Raw and UDP are connectionless (i.e. best effort), so any lost packets (barring some ICMP cases, but many of these get filtered for security) are silently dropped. Upper-layer protocols can add reliability, such as is done with some raw OSPF packets.