SR & GBN: Out-of-window ACKs - networking

I'm currently studying fairly basic networking, and I'm on the subject of reliable transmission. I'm using the book Computer Networking by Kurose & Ross, and two of the review questions were as follows:
With the selective-repeat/go-back-N protocol, is it possible for the
sender to receive an ACK for a packet that falls outside of its
current window?
For the SR version, my answer to the question was as follows:
Yes, if the window size is too big for the sequence number space. For
example, suppose a receiver gets a number of packets equal to the size
of the sequence number space. Its receive window has thus moved so that
it is expecting a new set of packets with the same sequence numbers as
the last one. The receiver now sends an ACK for each of the packets,
but all of them are lost along the way. This eventually causes the
sender to time out for each of the previous set of packets and
retransmit each of them. The receiver thinks that these duplicate
packets are really the new ones it is expecting, and it sends ACKs for
each of them, which successfully reach the sender. The sender now
experiences a similar kind of confusion: it thinks that the ACKs are
confirmations that each of the old packets has been received, when they
are really ACKs meant for the new, yet-to-be-sent packets.
I'm pretty sure this is correct (otherwise, please tell me!), since this kind of scenario seems to be the classic justification for why the window size should be at most half the size of the sequence number space in SR protocols.
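To make the wraparound concrete, here is a minimal C sketch (names and layout are mine, not from the book) of the receiver-side window test; once the window is larger than half the sequence space, a retransmitted old packet is indistinguishable from an expected new one:

#include <stdio.h>
#include <stdbool.h>

/* Does sequence number 'seq' fall inside the receive window
 * [base, base + win) in a sequence space of size 'space'?
 * All arithmetic is modulo the sequence-number space. */
static bool in_rcv_window(int seq, int base, int win, int space)
{
    int offset = (seq - base + space) % space;
    return offset < win;
}

int main(void)
{
    int space = 4; /* sequence numbers 0..3 */

    /* win = 3 (> space/2): the receiver accepted 0,1,2, so base = 3.
     * A retransmission of old packet 0 now looks like a new packet. */
    printf("win=3: old seq 0 accepted? %d\n", in_rcv_window(0, 3, 3, space));

    /* win = 2 (<= space/2): the receiver accepted 0,1, so base = 2.
     * The same retransmission of old packet 0 is correctly rejected. */
    printf("win=2: old seq 0 accepted? %d\n", in_rcv_window(0, 2, 2, space));
    return 0;
}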
But what about GBN? Can the same kind of wraparound issue occur for it, making the answers mostly identical? If not, are there any other cases that can cause a typical GBN sender to receive an ACK outside of its window?
Regarding the latter, the only example I can think of is the following:
A GBN sender sends packets A & B in order. The receiver receives both in order, and sends one cumulative ACK covering every packet before and up to A, and then another one covering every packet before and up to B (including A). The first one is so heavily delayed that the second one arrives at the sender first, causing its window to slide beyond A & B. When the first one finally arrives, it needlessly acknowledges that everything up to A has been correctly received, when A is already outside of the sender's window.
This example seems rather harmless and unlikely in contrast to the previous one, so I doubt that it's correct (but again, correct me if I'm wrong, please!).

In the practical world, what about a duplicated ACK delayed long enough to fall outside the window?
The protocol is between the sender and the receiver, but it has no control over how the medium (network path) behaves.
The protocol would still be reliable according to its design, but the implementation must be able to handle such out-of-window duplicated ACKs.
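As a rough sketch of what that handling might look like (field names are hypothetical, and sequence-number wraparound is ignored for brevity), the sender simply discards any ACK that falls outside its current window:

#include <stdbool.h>

/* Hypothetical sliding-window sender state. */
struct sender {
    unsigned base;      /* oldest unacknowledged sequence number */
    unsigned next_seq;  /* next sequence number to be sent       */
};

/* Process a (possibly duplicated or long-delayed) cumulative ACK.
 * Returns true if the ACK was useful; anything outside [base, next_seq)
 * is dropped silently instead of being acted upon. */
static bool handle_ack(struct sender *s, unsigned ack)
{
    if (ack < s->base || ack >= s->next_seq)
        return false;      /* out-of-window duplicate: ignore it   */
    s->base = ack + 1;     /* slide the window past the acked data */
    return true;
}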


Acknowledgments in flow control protocol

Can someone explain to me why, in selective repeat, it's not possible for an acknowledgment to arrive for a packet that falls outside the current window? It may be possible that there is some delayed acknowledgment; that's possible in all sliding window protocols, so why is only statement 2 true?
Moreover, in the solution they mention that statement 2 is true because GBN uses cumulative ACKs: if we receive ACK 2, the sender will assume that both packets 1 and 2 have been received successfully, so it slides the window to remove 1 and 2 from it; but later we might get ACK 1, which I feel is not possible, because here we are talking about cumulative ACKs, not independent ones.
So how is this reason valid?
Let's start from the back. Cumulative acknowledgments do not imply that there are delayed acknowledgements. GoBackN can, in theory, not acknowledge every single packet, but that is an optimization of practical protocols. So I would assume that GoBackN acknowledges every single packet.
Assuming that GoBackN acknowledges every single packet, the situation you are describing can happen. The receiver has received all packets in order, and sends ACKs for every single packet in order. The channel, however, does not guarantee (reliable) in-order delivery, i.e., the ACKs can arrive in arbitrary order. If two ACKs are reordered, exactly what they describe will happen.
In selective repeat, each ACK acknowledges only the packet received. And this only happens if the packet was sent. And the packet can only be sent as part of the sender window. Also, since ACKs are not cumulative, if the ACK for the second packet in the window is received first, the window will not move (since the first packet is not acknowledged).
Edit: Actually, what could happen in selective repeat is the following. The sender sends a packet and receives no ACK. Then the timer fires, and the packet is retransmitted. After the timer has fired, the first ACK arrives. The window moves. Then, some time later, the second ACK arrives, and it is outside the window. This can happen if the timer is set incorrectly, or if the ACK spent too much time somewhere in transit (which should be covered by the channel model). So, I guess you are correct in saying that it is possible.
Also, delayed ACKs usually refer to a TCP receiver that does not acknowledge every received packet, but instead sends a single ACK for several of them. With cumulative acknowledgements this works trivially, since an ACK for every single packet is not required. I don't see any way to implement delayed acknowledgements for selective repeat, except sending two ACKs in the same packet, but then those ACKs would be for packets inside the window.

Which TCP window update is most recent?

I was writing a TCP implementation, did all the fancy slow-start and fast retransmission stuff, and it all worked, so I thought I was done. But then I reviewed my packet receive function (almost half of the 400 lines of total code), and realized that my understanding of basic flow control is incomplete...
Suppose we have a TCP connection with a "sender" and "receiver". Suppose that the "sender" is not sending anything, and the receiver is stalling and then unstalling.
Since the "sender" is not sending anything, the "receiver" sees no ack_no delta. So the two window updates from the "receiver" look like:
ack_no = X, window = 0
ack_no = X, window = 8K
since both packets have the same ack_no, and they could be reordered in transit, how does the sender know which came first?
If the sender doesn't know which came first, then, after receiving both packets, how does it know whether it's allowed to send?
One guess is that maybe the window's upper endpoint is never allowed to decrease? Once the receiver has allocated a receive buffer and advertised it, it can never un-advertise it? In that case the window update could be reliably handled via the following code (assume no window scale, for simplicity):
// window update (https://stackoverflow.com/questions/63931135/)
int ack_delta = pkt_ack_no - c->tx_sn_ack;
c->tx_window = MAX(BE16(PKT.l4.window), c->tx_window - ack_delta);
if (c->tx_window)
Net_Notify(); // wake up transmission
But this is terrible from a receiver standpoint: it vastly increases the memory you'd need to support 10K connections reliably. Surely the protocol is smarter than that?
There is an assumption that the receive buffer never shrinks, which is intentionally undocumented to create an elite "skin in the game" club in order to limit the number of TCP implementations.
The original standard says that shrinking the window is "discouraged" but doesn't point out that it can't work reliably:
The mechanisms provided allow a TCP to advertise a large window and
to subsequently advertise a much smaller window without having
accepted that much data. This, so called "shrinking the window," is
strongly discouraged.
Even worse, the standard is actually missing the MAX operation proposed in the question, and just sets the window from the most recent packet if the acknowledgement number isn't increasing:
If SND.UNA < SEG.ACK =< SND.NXT, the send window should be
updated. If (SND.WL1 < SEG.SEQ or (SND.WL1 = SEG.SEQ and
SND.WL2 =< SEG.ACK)), set SND.WND <- SEG.WND, set
SND.WL1 <- SEG.SEQ, and set SND.WL2 <- SEG.ACK.
Note that SND.WND is an offset from SND.UNA, that SND.WL1
records the sequence number of the last segment used to update
SND.WND, and that SND.WL2 records the acknowledgment number of
the last segment used to update SND.WND. The check here
prevents using old segments to update the window.
so it will fail to grow the window if packets having the same ack number are reordered.
Bottom line: implement something that actually works robustly, not what's in the standard.
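As a sketch of what "works robustly" could mean here (hypothetical names, sequence wraparound ignored for brevity): track the advertised right edge (ack + window) and never let it move backwards, which is effectively the MAX from the question:

/* Hypothetical connection state, loosely following RFC 793 naming. */
struct conn {
    unsigned snd_una;       /* oldest unacknowledged sequence number     */
    unsigned snd_wnd_right; /* highest advertised right edge seen so far */
};

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Update from an incoming segment carrying (seg_ack, seg_wnd). The right
 * edge never decreases, so two reordered segments with the same ack_no
 * cannot shrink the usable window. Returns the usable send window. */
static unsigned update_send_window(struct conn *c, unsigned seg_ack,
                                   unsigned seg_wnd)
{
    c->snd_wnd_right = MAX(c->snd_wnd_right, seg_ack + seg_wnd);
    if (seg_ack > c->snd_una)
        c->snd_una = seg_ack;               /* cumulative ACK advanced   */
    return c->snd_wnd_right - c->snd_una;   /* space the sender may fill */
}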

Selective repeat buffer size

Why, in the selective repeat algorithm, do the sending and receiving windows not need to be the same size, while the sending and receiving buffers should be the same size?
Window size means how many packets the sender can send, and how many packets the receiver can receive, before every packet in the window has been acknowledged and received.
But in the case of selective repeat, the receiver doesn't need to keep track of received packets, because it just has to keep putting them into the buffer, or into the file itself, in sequence. The sender, on the other hand, needs to keep track of the window and its base in order to track acknowledgments.
So in selective repeat, it will work even if you don't use a window on the receiver side at all.
But the sending and receiving buffers do need to be the same size, because they are going to hold data of similar size and type. That's why the sending and receiving buffers need to be the same size.
I hope this will help.
Good Question.

Dealing with network packet loss in realtime games - TCP and UDP

Reading lots on this for my first network game, I understand the core difference of guaranteed delivery versus time-to-deliver for TCP vs UDP. I've also read diametrically opposed views on whether realtime games should use UDP or TCP! ;)
What no-one has covered well is how to handle the issue of a dropped packet.
TCP: I read an article on using TCP for an FPS that recommended only using TCP. How would an authoritative server using TCP for client input handle a packet drop and a sudden epic spike in lag? Does the game just stop for a moment and then pick up where it left off? Is TCP packet loss so rare that it's not really that much of an issue, and an FPS over TCP actually works well?
UDP: Another article suggested only ever using UDP. Clearly one-shot UDP events like "grenade thrown" aren't reliable enough, as they won't fire some of the time. Do you have to implement a message-received/resend protocol manually? Or some other solution?
My game is a tick-based authoritative server with 1/10th second updates from the server to clients and local simulation to keep things seeming more responsive, although the question is applicable to a lot more applications.
I did a real-time TV editing system. All real-time communication was via UDP, but non-real-time communication used TCP, as it is simpler. With UDP we would send a state packet every frame, e.g. "start video in 100 frames", then 99, 98, … 3, 2, 1, 0, -1, -2, -3, so even if no message got through until -3, the receiver would start on the 4th frame (just skipping the first 3), hoping that no one would notice, and knowing that this was better than lagging from there on in. We even began the countdown around +¼ second early (as no one will notice); this way hardly any frames were dropped.
So in summary, we sent the same status packet every frame. It contained all real-time data about past, current, and future events.
The trick is keeping this data set small. So instead of sending a "play button pressed" event (there is an unbounded number of these), we send the video-id, frame-number, start-mask and stop-mask. (Start/stop mask are frame numbers: if start-mask is positive and stop-mask is negative, then show the video, at frame frame-number.)
Now we need to be able to start a video during another one, or shortly after it stops. So we consider how many videos can be playing at the same time. We need a slot for each, but can we reuse a slot immediately? Suppose stop has just been pressed (so the stop mask was not known until then) and we reuse the slot: will the old video stop? Well, there is no longer any slot for that video, so we should stop it. So yes, we can reuse the slot immediately, as long as we use unique IDs.
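Something like the following layout (field names are my guess at what is described above, not the actual system) keeps that per-frame state packet small, and makes any single received copy sufficient on its own:

#include <stdint.h>

#define MAX_SLOTS 4   /* how many clips may overlap at once (assumed) */

/* One slot per concurrently playable clip; reused as soon as it is free. */
struct video_slot {
    uint32_t video_id;     /* unique id, so a reused slot is unambiguous */
    int32_t  frame_number; /* current frame of the clip in this slot     */
    int32_t  start_mask;   /* frame numbers: per the text, show the clip */
    int32_t  stop_mask;    /*   while start_mask > 0 and stop_mask < 0   */
};

/* Sent every frame: the complete real-time state, past, current and future. */
struct state_packet {
    uint32_t          tick;             /* sender's frame counter */
    struct video_slot slots[MAX_SLOTS];
};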
Other tips: Do not send "+1" events; instead send the current total. If two players have to update the same total, then each should have their own total; sum all the totals at the point of use, but never edit someone else's total.
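A tiny sketch of that last tip (hypothetical names): every player owns its own counter, and the overall total is computed at the point of use rather than kept as a shared value anyone can edit:

#define MAX_PLAYERS 8

/* Each player only ever writes its own entry; state packets carry the
 * whole array, so a lost update is repaired by the next packet. */
static int kills[MAX_PLAYERS];

/* Sum the per-player totals at the point of use instead of maintaining a
 * single shared total that several players would have to increment. */
static int total_kills(void)
{
    int sum = 0;
    for (int i = 0; i < MAX_PLAYERS; i++)
        sum += kills[i];
    return sum;
}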

Congestion Control Algorithm at Receiver

Assume we are talking about a situation where many senders send packets to one receiver.
Often the sender is the one that controls congestion, by using a sliding window that limits its sending rate.
We have:
snd_cwnd = min(cwnd,rwnd)
Using explicit or implicit feedback from the network (routers, switches), the sender adjusts cwnd to control its sending rate.
Normally, rwnd is big enough that the sender only cares about cwnd. But if we consider rwnd and use it to limit snd_cwnd, it would make congestion control more efficient.
rwnd is the number of packets (or bytes) that the receiver is able to receive. What I'm concerned about is the capability of the senders.
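In code terms, the sender-side rule above is just the following (a sketch, not any real stack's code):

/* The amount of unacknowledged data a sender may have in flight is
 * limited both by congestion control (cwnd) and by the receiver's
 * advertised buffer space (rwnd). */
static unsigned effective_send_window(unsigned cwnd, unsigned rwnd)
{
    return cwnd < rwnd ? cwnd : rwnd;   /* snd_cwnd = min(cwnd, rwnd) */
}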
Questions:
1. How does the receiver know how many flows are sending packets to it?
2. Is there any way for the receiver to know the snd_cwnd of a sender?
This is all very confused.
The number of flows into a receiver isn't relevant to the rwnd of any specific flow. The rwnd is simply the amount of space left in the receive buffer for that flow.
The receiver has no need to know the sender's cwnd. That's the sender's problem.
Your statement that 'normally rwnd is always big enough that sender only cares about cwnd' is simply untrue. The receive window changes with every receive; it is re-advertised with every ACK; and it frequently drops to zero.
Your following statement 'if we consider rwnd, using it to limit cwnd ...' is simply a description of what already happens, as per 'snd_cwnd = min(cwnd, rwnd)'.
Or else it may constitute a completely unexplained proposal to needlessly modify TCP's flow control, which has been working for 25 years, and which didn't work for several years before that: I remember several Arpanet freezes in the mid-1980s.
