How to identify pushback under Win IOCP?

How can I identify TCP pushback when using IOCP? That is, how can I find out that the receiver is not receiving, that the tx/rx buffers on both sides of the connection are full, and that the sender should stop sending more data?

With any async TCP send operation, the way to determine the rate at which the peer is receiving data is to monitor the rate of send completions on the sender.
I've written about this in depth here. In summary, when the receiver's buffers fill, TCP flow control comes into play and the TCP window shrinks; the sender can no longer send, which causes the sender's TCP buffers to fill in turn. Async send requests then cannot complete. If you track the number of send requests that are outstanding, you can spot this situation and throttle the sender.

When this happens, the send won't complete.
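As a rough illustration, here is a minimal sketch of that idea using Winsock and a per-connection atomic counter. The Connection struct, the kMaxPendingSends threshold, and the handler names are hypothetical, not part of any real API:

```cpp
#include <winsock2.h>
#include <windows.h>
#include <atomic>

struct Connection {
    SOCKET sock;
    std::atomic<int> pendingSends{0};   // sends issued but not yet completed
};

const int kMaxPendingSends = 16;        // throttle threshold (tune to taste)

// Returns false when the caller should back off: completions are piling
// up, which means the peer is not draining data fast enough.
// NB: in real code the data buffer must stay valid until completion.
bool TrySend(Connection& c, const char* data, int len)
{
    if (c.pendingSends.load() >= kMaxPendingSends)
        return false;                   // pushback detected: throttle

    WSABUF buf;
    buf.buf = const_cast<char*>(data);
    buf.len = static_cast<ULONG>(len);

    OVERLAPPED* ov = new OVERLAPPED{};  // freed in the completion handler
    DWORD sent = 0;
    c.pendingSends.fetch_add(1);
    int rc = WSASend(c.sock, &buf, 1, &sent, 0, ov, nullptr);
    if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
        c.pendingSends.fetch_sub(1);    // send failed outright
        delete ov;
        return false;
    }
    return true;
}

// Called from the IOCP worker thread when GetQueuedCompletionStatus
// dequeues a send completion for this connection.
void OnSendComplete(Connection& c, OVERLAPPED* ov)
{
    delete ov;
    c.pendingSends.fetch_sub(1);        // one fewer send in flight
}
```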

Related

Can application layer behavior cause Packets Received Discarded

On Windows, say an I/O completion port is used to process incoming TCP packets. Now suppose all worker threads associated with the IOCP are stuck and therefore no longer drain the TCP stack's receive buffers. Will this cause the "Packets Received Discarded" counter to go up?
From experiment and a Wireshark capture, eventually the receiver side signals [TCP Zero Window] and the sender stops sending. But I'm not sure whether any discards happen before that.
I also haven't found any in-depth, thorough description of what can cause "Packets Received Discarded".

ZeroMQ: Internal queuing policy before having a valid connection

Say I have an inproc PAIR messaging system. Before the Receiver connects to the Sender, the Sender is already bound and starts sending out messages right away.
Now, before the connection succeeds, will ZMQ choose to do one of the following:
queue up those messages internally
block Sender from sending those messages
simply discard them?
I know that ZMQ has an internal queue of 1000 messages, and I've read that with PUB-SUB it will send out messages right after binding happens and will therefore lose messages. But I'm not sure about the other socket types.
From the ZeroMQ documentation:
When a PAIR socket enters the mute state due to having reached the high water mark for the connected peer, or if no peer is connected, then any send operations on the socket will block until the peer becomes available for sending; messages are not discarded.
So your first two options are correct: ZeroMQ queues messages until you reach the high water mark, then blocks the sender.
It is not discarding messages.
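You can observe the mute state directly. A minimal sketch using the libzmq C API (assuming libzmq is installed; the inproc endpoint name is arbitrary):

```cpp
#include <zmq.h>
#include <cstdio>

int main()
{
    void* ctx = zmq_ctx_new();
    void* sender = zmq_socket(ctx, ZMQ_PAIR);
    zmq_bind(sender, "inproc://demo");        // bound, but no peer yet

    // With no peer connected the PAIR socket is in the mute state: a
    // non-blocking send returns -1 with EAGAIN rather than silently
    // discarding the message (a blocking send would simply wait).
    int rc = zmq_send(sender, "hello", 5, ZMQ_DONTWAIT);
    if (rc == -1 && zmq_errno() == EAGAIN)
        std::printf("mute state: send would block, nothing discarded\n");

    // Once a peer connects, sends proceed and are queued up to the
    // high water mark (ZMQ_SNDHWM, default 1000 messages).
    void* receiver = zmq_socket(ctx, ZMQ_PAIR);
    zmq_connect(receiver, "inproc://demo");
    zmq_send(sender, "hello", 5, 0);          // now succeeds

    char buf[16];
    int n = zmq_recv(receiver, buf, sizeof buf, 0);
    std::printf("received %d bytes\n", n);

    zmq_close(receiver);
    zmq_close(sender);
    zmq_ctx_term(ctx);
    return 0;
}
```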

Alternating bit protocol delayed packet

In the alternating bit protocol, how does the receiver tell the difference between a delayed packet and a correct one? For example, suppose the sender sends a packet with seq #0 and it gets severely delayed on the way, so much so that the sender and receiver complete two packets in the meantime and the receiver is again expecting a packet with seq #0, but instead receives the delayed one. Should the receiver keep temporary storage of the last few packets to check whether a packet is just delayed, or are there other ways to tell?

TCP send function retransmission logic?

When we send a packet and retransmission starts, do we come out of the send function or not?
In my case my application takes a lock, waits for send to return, and then releases the lock.
But in my scenario it never came back. I want to know: do we really come out of the send function when retransmission occurs?
The send function transfers data into the socket send buffer, blocking while there isn't enough room.
Data is removed from the socket send buffer when acknowledged.
Retransmission starts when data that has been sent to the peer hasn't been acknowledged within the appropriate timeout interval.
The interaction between retransmission and the send() function boils down to this: data that hasn't been acknowledged is still in the send buffer, occupying space there, which may cause the send() function to block.
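If blocking while holding a lock is the problem, one common workaround is a non-blocking send that reports "buffer full" instead of waiting. A minimal sketch, assuming POSIX sockets and a connected TCP socket fd:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cerrno>

// Returns the number of bytes queued into the kernel send buffer,
// 0 if the buffer is full (e.g. unacknowledged data awaiting
// retransmission is still occupying it), or -1 on a real error.
ssize_t try_send(int fd, const void* data, size_t len)
{
    ssize_t n = send(fd, data, len, MSG_DONTWAIT);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;   // buffer full: back off instead of blocking under a lock
    return n;
}
```

The caller can then release the lock and retry later instead of blocking indefinitely while the peer's window stays closed.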

Methods for implementing UDP multicast reliable

I am preparing for my university exam, and one of the questions last year was "how to make UDP multicast reliable" (like TCP, with retransmission of lost packets).
I thought about something like this:
The server sends the multicast using UDP.
Every client sends an acknowledgement of receiving those packets (using TCP).
If the server realizes that not everyone received the packets, it resends via multicast, or via unicast to the particular client.
The problem is that there might be one client that regularly loses packets and forces the server to resend.
Is this good?
Every client sends an acknowledgement of receiving those packets (using TCP)
Sending an ACK for each packet, and using TCP to do so, does not scale to a large number of receivers. A NACK-based scheme is more efficient.
Each packet sent from the server should have a sequence number associated with it. As clients receive packets, they keep track of which sequence numbers they missed. If packets are missed, a NACK message can then be sent back to the server via UDP. This NACK can be formatted either as a list of sequence numbers or as a bitmap of received / not received sequence numbers.
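A minimal sketch of that client-side bookkeeping; the struct and method names are illustrative, not from any particular protocol implementation:

```cpp
#include <cstdint>
#include <set>
#include <vector>

struct NackTracker {
    uint32_t nextExpected = 0;        // next sequence number we expect
    std::set<uint32_t> missing;       // gaps observed so far

    // Called for every data packet received from the multicast group.
    void onPacket(uint32_t seq) {
        if (seq == nextExpected) {
            ++nextExpected;
        } else if (seq > nextExpected) {
            // Record the gap, then advance past this packet.
            for (uint32_t s = nextExpected; s < seq; ++s)
                missing.insert(s);
            nextExpected = seq + 1;
        } else {
            missing.erase(seq);       // a retransmission filled a hole
        }
    }

    // Build the NACK payload as a list of missing sequence numbers;
    // a bitmap relative to a base sequence number works just as well.
    std::vector<uint32_t> buildNack() const {
        return {missing.begin(), missing.end()};
    }
};
```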
If the server realizes that not everyone received the packets, it resends via multicast, or via unicast to the particular client
When the server receives a NACK it should not immediately resend the missing packets but wait for some period of time, typically a multiple of the GRTT (Group Round Trip Time -- the largest round trip time among the receiver set). That gives it time to accumulate NACKs from other receivers. Then the server can multicast the missing packets so any clients missing them can receive them.
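A minimal sketch of that server-side aggregation window; the holdoff value and the names are illustrative, and a real implementation would derive the holdoff from a measured GRTT:

```cpp
#include <chrono>
#include <cstdint>
#include <set>
#include <vector>

struct RetransmitQueue {
    std::set<uint32_t> pending;                        // union of all NACKs
    std::chrono::steady_clock::time_point deadline{};  // end of holdoff
    std::chrono::milliseconds holdoff{200};            // e.g. 2-3x GRTT

    void onNack(const std::vector<uint32_t>& seqs) {
        if (pending.empty())   // first NACK opens the aggregation window
            deadline = std::chrono::steady_clock::now() + holdoff;
        pending.insert(seqs.begin(), seqs.end());
    }

    // Called periodically; returns the batch to multicast once the
    // window closes, so later NACKs for the same packets cost nothing.
    std::vector<uint32_t> drainIfDue() {
        if (pending.empty() ||
            std::chrono::steady_clock::now() < deadline)
            return {};
        std::vector<uint32_t> batch(pending.begin(), pending.end());
        pending.clear();
        return batch;
    }
};
```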
If this scheme is being used for file transfer as opposed to streaming data, the server can alternatively send the file data in passes. The complete file is sent on the first pass, during which any NACKs that are received are accumulated and the packets that need to be resent are marked. Then on subsequent passes, only retransmissions are sent. This has the advantage that clients with lower loss rates get the opportunity to finish receiving the file while high-loss receivers continue to receive retransmissions.
The problem is that there might be one client that regularly loses packets and forces the server to resend.
For very high loss clients, the server can set a threshold for the maximum percentage of packets missed. If a client sends back NACKs in excess of that threshold one or more times (how many times is up to the server), the server can drop that client and either not accept its NACKs or send a message to that client informing it that it was dropped.
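A minimal sketch of such a threshold; the loss cutoff, strike count, and field names are arbitrary choices for illustration:

```cpp
#include <cstdint>

struct ClientStats {
    uint64_t packetsSent = 0;    // packets addressed to this client
    uint64_t packetsNacked = 0;  // packets it reported missing
    int strikes = 0;             // consecutive intervals over threshold
    bool dropped = false;
};

const double kMaxLossRate = 0.20;  // tolerate up to 20% loss
const int kMaxStrikes = 3;         // intervals over threshold before drop

// Evaluate once per measurement interval, then reset the counters.
void evaluate(ClientStats& c) {
    if (c.packetsSent == 0) return;
    double loss = double(c.packetsNacked) / double(c.packetsSent);
    if (loss > kMaxLossRate) {
        if (++c.strikes >= kMaxStrikes)
            c.dropped = true;   // ignore further NACKs / notify the client
    } else {
        c.strikes = 0;
    }
    c.packetsSent = c.packetsNacked = 0;
}
```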
There are a number of protocols which implement these features:
UFTP - Encrypted UDP based FTP with multicast (disclosure: author)
NORM - NACK-Oriented Reliable Multicast
PGM - Pragmatic General Multicast
UDPCast
Relevant RFCs:
RFC 4654 - TCP-Friendly Multicast Congestion Control (TFMCC): Protocol Specification
RFC 5401 - Multicast Negative-Acknowledgment (NACK) Building Blocks
RFC 5740 - NACK-Oriented Reliable Multicast (NORM) Transport Protocol
RFC 3208 - PGM Reliable Transport Protocol Specification
To make UDP reliable, you have to handle a few things yourself (i.e., implement them in your application):
Connection handling: the connection between the sending and receiving processes can drop. Most reliable implementations send keep-alive messages to maintain the connection between the two ends.
Sequencing: messages need to be split into numbered chunks before sending.
Acknowledgement: after each message is received, an ACK message needs to be sent to the sending process. These ACK messages can themselves be sent over UDP; they don't have to go over TCP. The receiving process might realize that it has lost a message. In that case, it stops delivering messages from the holdback queue (the queue that holds received messages, like a waiting room for messages) and requests a retransmission of the missing message; see the sketch after this list.
Flow control: throttle the sending of data based on the ability of the receiving process to deliver it.
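A minimal sketch of such a holdback queue, assuming each message carries a sequence number; the names are illustrative:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct HoldbackQueue {
    uint32_t nextToDeliver = 0;
    std::map<uint32_t, std::string> held;   // seq -> payload

    // Returns every message that can now be delivered in order.
    std::vector<std::string> onReceive(uint32_t seq, std::string payload) {
        std::vector<std::string> deliverable;
        if (seq >= nextToDeliver)
            held.emplace(seq, std::move(payload));  // duplicates ignored
        // Drain while the next expected message is present.
        for (auto it = held.find(nextToDeliver); it != held.end();
             it = held.find(++nextToDeliver)) {
            deliverable.push_back(std::move(it->second));
            held.erase(it);
        }
        return deliverable;
    }

    // True when there is a hole in front of buffered messages, i.e. a
    // retransmission of nextToDeliver should be requested.
    bool hasGap() const {
        return !held.empty() && held.begin()->first > nextToDeliver;
    }
};
```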
Usually such processes are organized into groups; each group normally has a leader and a view of the entire group. This is called virtual synchrony.
