On Windows, say an I/O completion port is used to process incoming TCP packets. Now suppose all worker threads associated with the IOCP are stuck and therefore no longer draining the TCP stack's receive buffers. Will this cause the "Packets Received Discarded" counter to go up?
From an experiment and a Wireshark capture, the receiver side eventually signals [TCP Zero Window] and the sender side stops sending. But I'm not sure whether any discards happen before that.
I also haven't found any in-depth, thorough description of what can cause "Packets Received Discarded".
Related
We're seeing this pattern happen a lot between two RHEL 6 boxes that are transferring data via a TCP connection. The client issues a TCP Window Full; 0.2s later the client sends TCP Keep-Alives, to which the server responds with what look like correctly shaped responses. The client is unsatisfied by this, however, and continues sending TCP Keep-Alives until it finally closes the connection with an RST nearly 9s later.
This is despite the RHEL boxes having the default TCP Keep-Alive configuration:
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75
...which says that keep-alives should only begin after 2 hours of silence. Am I reading my PCAP wrong (relevant packets available on request)?
Below is a Wireshark screenshot of the pattern, with my own packet notes in the middle.
Actually, these "keep-alive" packets are not used for TCP keep-alive! They are used to detect window-size updates.
Wireshark labels them as keep-alive packets only because they look like keep-alive packets.
A TCP keep-alive packet is simply an ACK with the sequence number set to one less than the current sequence number for the connection.
(Assume that IP 10.120.67.113 refers to host A and 10.120.67.132 refers to host B.) In packet No. 249511, A acks seq 24507484. In the next packet (No. 249512), B sends seq 24507483 (24507484 - 1).
Why are there so many "keep-alive" packets, and what are they used for?
A sends data to B, and B replies with a zero window size to tell A that it temporarily can't receive any more data. To make sure A knows when B can receive data again, A sends these "keep-alive" packets to B again and again under the persist timer, and B replies with its window size info (in our case, B's window size has remained zero).
The normal TCP exponential backoff is used when calculating the persist timer, so we can see that A sends its first "keep-alive" packet after 0.2s, its second after 0.4s, the third after 0.8s, the fourth after 1.6s, and so on.
This phenomenon is related to TCP flow control.
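To see this for yourself, here is a minimal sketch (POSIX sockets, an arbitrary loopback port, and deliberately minimal error handling) in which the receiver accepts a connection but never calls recv(); run it under Wireshark and you should see the receiver advertise a zero window and the sender's kernel emit window probes with the doubling intervals described above:

/* Minimal zero-window demo: the "receiver" (parent) accepts but never reads,
 * so its socket buffer fills and it advertises a zero window; the "sender"
 * (child) keeps writing until send() blocks. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define PORT 5555          /* arbitrary test port          */
#define CHUNK (16 * 1024)  /* bytes written per send() call */

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(PORT);
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 1);

    if (fork() == 0) {                 /* child = sender */
        char buf[CHUNK];
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        memset(buf, 'x', sizeof buf);
        connect(fd, (struct sockaddr *)&addr, sizeof addr);
        for (;;) {
            /* Once the receiver's buffer is full, this send() blocks; the
             * receiver advertises a zero window and the sender's kernel
             * starts sending periodic window probes. */
            ssize_t n = send(fd, buf, sizeof buf, 0);
            if (n < 0) { perror("send"); break; }
            fprintf(stderr, "sent %zd bytes\n", n);
        }
        return 0;
    }

    accept(lfd, NULL, NULL);           /* parent = receiver that never reads */
    pause();                           /* hold the connection open, never recv() */
    return 0;
}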
The source and destination IP addresses in the packets originating from the client do not match the destination and source IP addresses in the response packets, which indicates that there is some device between the boxes doing NAT. It is also important to understand where the packets were captured; a packet capture on the client itself will probably help in understanding the issue.
Please note that the client can generate TCP keepalives if it does not receive a data packet for two hours or more. Per RFC 1122, the client retries the keepalive if it does not receive a keepalive response from the peer, and it eventually disconnects after repeated retry failures.
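As a side note, the sysctl values quoted in the question are only the system-wide defaults; on Linux an application can override them per socket. A rough sketch, assuming a connected TCP socket fd (the numeric values are purely illustrative):

/* Per-socket keepalive override on Linux; these options shadow the
 * net.ipv4.tcp_keepalive_* sysctls for this socket only. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int fd) {
    int on = 1;
    int idle = 60;     /* seconds of silence before the first probe        */
    int interval = 10; /* seconds between unanswered probes                */
    int count = 5;     /* unanswered probes before the connection is reset */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
        return -1;
    return 0;
}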
NAT devices typically implement connection caches to maintain the state of ongoing connections. If the cache reaches its size limit, the NAT device drops old connections in order to service new ones. This could also lead to such a scenario.
The given packet capture indicates a high probability that packets are not reaching the client, so it would be helpful to capture packets on the client machine.
I read the trace slightly differently:
The sender sends more data than the receiver can handle and gets a zero-window response.
The sender sends window probes (not keep-alives; it is way too soon for that), and the application gives up after 10 seconds with no progress and closes the connection; the reset indicates there is data pending in the TCP send buffer.
If the application uses a large block size when writing to the socket, it may have seen no progress for even longer than the 10 seconds visible in the tcpdump.
If this is a straight connection (no proxies etc.), the most likely reason is that the receiving application stopped receiving (or is slower than the sender and the data transmission).
It looks to me like packet number 249522 provoked the application on 10.120.67.113 to abort the connection. All the window probes get a zero window response from .132 (with no payload) and then .132 sends (unsolicited) packet 249522 with 63 bytes (and still showing 0 window). The PSH flag suggests that this 63 bytes is the entire data written by the app on .132. Then .113, in the same millisecond, responds with an RST. I can't think of any reason why the TCP stack would send an RST immediately after receiving data (the sequence numbers are correct). In my view it is almost certain that the app on .113 decided to give up based on the 63-byte message sent by .132.
In my program the receiver has a bigger workload. Should I make the sender wait for the receiver through methods like an application-level ACK?
You shouldn't directly send TCP ACK messages; those are handled at a low level by the OS. I'd look at the following, in order of likelihood:
Is there some easy optimization on the receiver? It's pretty rare that you can really fill a network pipe more quickly than the receiver can handle the data. Make sure that the receiver has at least two threads: a network I/O thread and a work thread.
If the receiver is starting to panic, it could send a throttle message to the server, which makes the server cool its heels until the receiver catches up. This is more efficient than waiting for an ack message after every message, but it requires that the receiver know when it's about to fall behind, which might be difficult.
Alternately, the slowest but most reliable option is to have the receiver acknowledge every message from the server, sort of like you mentioned. This wouldn't be a TCP ACK, but a special message in the data format that your sender/receiver are using for communication.
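As a rough illustration, such a per-message acknowledgment could look like the sketch below; the framing (a 32-bit message id plus a length, answered by echoing the id back once processing is done) is purely hypothetical:

/* Application-level ack sketch. Wire format assumed here:
 * request = [id:4][len:4][payload:len], ack = [id:4]. */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

static int send_and_wait_ack(int fd, uint32_t id, const void *buf, uint32_t len) {
    uint32_t hdr[2] = { htonl(id), htonl(len) };
    if (send(fd, hdr, sizeof hdr, 0) != (ssize_t)sizeof hdr) return -1;
    if (send(fd, buf, len, 0) != (ssize_t)len) return -1;

    uint32_t ack;
    /* Block until the receiver confirms it has processed this message.
     * A real program would loop on short reads and add a timeout. */
    if (recv(fd, &ack, sizeof ack, MSG_WAITALL) != (ssize_t)sizeof ack) return -1;
    return ntohl(ack) == id ? 0 : -1;
}

A middle ground is to acknowledge only every Nth message, which bounds how far the sender can run ahead without paying a full round trip per message.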
What would be a good list of failure scenarios for testing a reliable UDP layer? I have thought of the cases below:
Drop Data packets
Drop ACK, NAK Packets
Send packets out of sequence
Drop initial handshake packets
Drop close / shutdown packets
Duplicate packets
Can you help identify any other cases that a reliable UDP layer needs to handle?
The list you've given sounds pretty good. Also think about:
Very delayed packets (where most packets come through fine, but one or two are delayed by several minutes);
Very delayed duplicates (where the original came through quickly, but the duplicate arrived after several minutes delay);
Silent dropping of all packets above a certain size (both unidirectional and bidirectional cases);
Highly variable delays;
Sequence number wrapping tests.
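If it helps, here is a rough sketch of a harness for exercising some of these cases: a lossy UDP forwarder that the sender is pointed at instead of the real receiver. The ports and probabilities are placeholders, and delays, reordering, and corruption could be added in the same loop:

/* One-directional lossy UDP forwarder: listens on LISTEN_PORT and relays
 * datagrams to TARGET_PORT on loopback, randomly dropping or duplicating. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>

#define LISTEN_PORT 9000   /* where the sender is pointed   */
#define TARGET_PORT 9001   /* the real receiver under test  */
#define DROP_PCT     10    /* drop roughly 10% of datagrams */
#define DUP_PCT       5    /* duplicate roughly 5% of them  */

int main(void) {
    srand((unsigned)time(NULL));

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in me = {0}, target = {0};
    me.sin_family = AF_INET;
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    me.sin_port = htons(LISTEN_PORT);
    bind(fd, (struct sockaddr *)&me, sizeof me);

    target.sin_family = AF_INET;
    target.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    target.sin_port = htons(TARGET_PORT);

    char buf[65536];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n < 0) { perror("recvfrom"); break; }

        if (rand() % 100 < DROP_PCT) {
            fprintf(stderr, "dropped %zd bytes\n", n);
            continue;                       /* simulate a lost datagram */
        }
        sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&target, sizeof target);
        if (rand() % 100 < DUP_PCT)         /* simulate a duplicated datagram */
            sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&target, sizeof target);
    }
    return 0;
}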
Have you tried intentionally corrupting packets in transit?
Also, have you considered a scenario where only one-way communication is possible? In this case, the sending host thinks that the send failed, but the receiving end successfully processes the message. For instance:
host A sends a message to host B
B successfully receives message and replies with ACK
ACK gets dropped in the network
A waits for the timeout and re-sends the message (repeating steps 1-3)
host A exceeds the retry count and thinks the send failed, but host B has in fact processed the message
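A minimal sketch of where that ambiguity shows up in code, assuming a plain UDP socket with a receive timeout and a one-byte ack (all values illustrative):

/* After MAX_RETRIES the caller is told "failed", even though the receiver
 * may in fact have processed one (or several) of the copies. */
#include <sys/socket.h>
#include <sys/time.h>

#define MAX_RETRIES 5

int send_with_ack(int fd, const void *msg, size_t len,
                  const struct sockaddr *peer, socklen_t peerlen) {
    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1 s ack timeout */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char ack;
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        sendto(fd, msg, len, 0, peer, peerlen);
        if (recv(fd, &ack, sizeof ack, 0) == (ssize_t)sizeof ack)
            return 0;   /* ack arrived: definite success                    */
        /* Timeout: either this datagram or its ack was lost; we cannot tell
         * which, so the receiver may already have handled the message.     */
    }
    return -1;          /* reported as failure, delivery status unknown     */
}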
I had thought UDP was a connectionless, unreliable protocol that does not require any specific transport handshake between hosts, and hence that there is no such thing as a reliable UDP protocol.
How does TCP/IP report errors when packet delivery fails permanently? All Socket.write() APIs I've seen simply pass bytes to the underlying TCP/IP output buffer and transfer the data asynchronously. How then is TCP/IP supposed to notify the developer if packet delivery fails permanently (i.e. the destination host is no longer reachable)?
Any protocol that requires the sender to wait for confirmation from the remote end will get an error message. But what happens for protocols where a sender doesn't have to read any bytes from the destination? Does TCP/IP just fail silently? Perhaps Socket.close() will return an error? Does the TCP/IP specification say anything about this?
TCP/IP is a reliable byte stream protocol. All your bytes will get to the receiver or you'll get an error indication.
The error indication will come in the form of a closed socket. Regardless of the communication pattern (who does the sending), if the bytes can't be delivered, the socket will close.
So the question is, how do you see the socket close? If you're never reading, you'd eventually get an error trying to write to the closed socket (with ECONNRESET errno, I think).
If you have a need to sleep or wait for input on another file handle, you might want to do your waiting in a select() call where you include the socket in the list of sources you're waiting on (even if you never expect to receive anything). If the select() indicates that the socket is ready for a read call, you may get a -1 return (with ECONNRESET, I think). An EOF would indicate an orderly close (the other side did a shutdown() or close()).
How do you distinguish this error close from a clean close (the other program exiting, for example)? The errno values may be enough to distinguish an error close from an orderly close.
If you want an unambiguous indication of a problem, you'll probably need to build some sort of application level protocol above the socket layer. For example, a short "ack" message sent by the receiver back to the sender. Then the violation of that higher level application protocol (sender didn't see an ack) would be a confirmation that it was an error close vs a clean close.
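A minimal sketch of that select()-based check, assuming a connected socket you never otherwise read from; it distinguishes an orderly close (recv() returning 0) from an error close (recv() returning -1, e.g. ECONNRESET):

/* Poll the socket for readability and classify what a read returns. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Returns 0 while the connection looks healthy, -1 once it has gone away. */
int check_peer(int fd) {
    fd_set rfds;
    struct timeval tv = { .tv_sec = 0, .tv_usec = 0 };   /* poll, don't block */
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0 || !FD_ISSET(fd, &rfds))
        return 0;                      /* nothing to report yet */

    char buf[256];
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    if (n == 0) {
        fprintf(stderr, "peer performed an orderly close\n");
        return -1;
    }
    if (n < 0) {
        fprintf(stderr, "connection error: %s\n", strerror(errno));  /* e.g. ECONNRESET */
        return -1;
    }
    return 0;                          /* unexpected data arrived; still open */
}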
The sockets API has no way of informing the writer exactly how many bytes have been received as acknowledged by the peer. There are no guarantees made by the presence of a successful shutdown or close either.
The TCP/IP specification says nothing about the application interface (which is nearly always the sockets API).
SCTP is an alternative to TCP which attempts to address these shortcomings, among others.
In C, send() returns the number of bytes that were actually sent. If this does not match the number of bytes you meant to send, then you have a problem. Also, when you write to a failed socket, you get SIGPIPE. Before you start socket handling, you need to have a signal handler in place that will alert you when you get SIGPIPE.
If you are reading from a socket, you really should wrap the call with an alarm so you can time out, like "alarm(timeout_val); recv(); alarm(0)". Check the return code of recv(): 0 indicates that the connection has been closed, and a negative result indicates a read failure, in which case you need to check errno.
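A small sketch of that advice for the sending side, assuming a connected socket: it installs a SIGPIPE handler (ignoring the signal and relying on EPIPE from send() would work too) and checks send()'s return value, including partial sends:

/* Send a full buffer while surviving SIGPIPE and handling partial writes. */
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>

static void on_sigpipe(int sig) {
    (void)sig;                        /* a write to a dead socket was attempted */
}

int send_all(int fd, const char *buf, size_t len) {
    signal(SIGPIPE, on_sigpipe);      /* or SIG_IGN, relying on EPIPE from send() */
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, 0);
        if (n < 0) {
            perror("send");           /* e.g. EPIPE/ECONNRESET: the peer is gone */
            return -1;
        }
        off += (size_t)n;             /* partial send: keep going */
    }
    return 0;
}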
TCP is built upon the IP protocol, which is the centerpiece of the Internet, providing much of the interoperability that drives routing, which is what determines how to get packets from their source to their destination. The IP protocol specifies that error messages should be sent back to the sender via the Internet Control Message Protocol (ICMP) when a packet fails to reach its destination. Some of the reasons include the Time To Live (TTL) field being decremented to zero, often meaning that the packet got stuck in a routing loop, or the packet being dropped due to switch contention causing buffer overruns. As others have said, it is the responsibility of the socket API being used to relay these errors at the IP layer up to the application interacting with the network at the TCP layer.
TCP/IP packets are either raw, UDP, or TCP. TCP requires each byte to be acked, and it will re-transmit bytes that are not acked in time. Raw and UDP are connectionless (a.k.a. best effort), so any lost packets (barring some ICMP cases, but many of those get filtered for security) are silently dropped. Upper-layer protocols can add reliability, as is done with some raw OSPF packets.
I know that TCP is very reliable, and whatever is sent is guaranteed to get to its destination. But what happens if, after a packet is sent but before it arrives at the server, the server goes down? Is the acknowledgment that the packet was successfully sent triggered by the server's existence when the packet is initially sent, or when the packet successfully arrives at the server?
Basically what I'm asking is - if the server goes down in between the sending and the receiving of a packet, would the client know?
It really doesn't matter, but here's some finer details:
You need to distinguish between the Server-Machine going down and the Server-Process going down.
If the Server-Machine has crashed, then, clearly, there is nothing to receive the packet. The sending client will get no retry-requests, and no acknowledgment of success or failure. After having not received any feedback at all, the client will eventually receive a timeout, and consider the connection dropped. This is pretty much identical to the cable being physically cut unexpectedly.
If, however, the Server-Machine remains functioning, but the Server-Process crashes due to a programming bug, then the receiving TCP stack, which is a function of the OS, not of the process, will likely ACK the packet, and any others that arrive. This will continue until the OS notifies the TCP stack that the process is no longer active. The TCP stack will likely send a RST (reset) notice to the client, or may drop the connection (as described above)
This is basically what happens. The full reality is hard to describe without getting tied up in unnecessary detail.
TCP manages connections which are defined as a 4-tuple (source-ip, source-port, dest-ip, dest-port).
When the server closes the connection, the connection is placed into a TIME_WAIT state where it cannot be re-used for a certain time. That time is double the maximum time-to-live value of the packets. Any packets that arrive during that time are discarded by TCP itself.
So, when the connection becomes available for re-use, all packets have been destroyed (anywhere on the network) either by:
being received at the destination and thrown away due to the TIME_WAIT state; or
being destroyed by packet forwarders on the net due to expired lifetime.
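One practical consequence, sketched below under the assumption of a simple IPv4 listener: while old connections on a port linger in TIME_WAIT, a plain bind() to that port can fail with EADDRINUSE, which is why servers commonly set SO_REUSEADDR before binding. The port value is a placeholder.

/* Create a listening socket that can be restarted while old connections on
 * the same port are still in TIME_WAIT. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int make_listener(unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    /* Without this, restarting the server right after it exited can fail
     * with EADDRINUSE because old (ip, port) pairs are still in TIME_WAIT. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return -1;
    }
    listen(fd, 16);
    return fd;
}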
When you send a packet onto the network there is never a guarantee it will get safely to the other side. The reliability of TCP is achieved exactly as you suggest, using acknowledgment packets.