Go-Back-N, with an ACK loss and a packet loss - networking

While refreshing some old theory and solving a couple of problems, one instance had me confused: "7 packets (0 to 6) with window size 3 are sent. The ACK for packet 2 is lost the first time it is sent, and packet 4 is lost the first time it is sent."
I am aware that the window will now encompass packets [2,3,4]: since ACK 2 has not been received, a timeout would occur and that same window would be re-sent. But packet 4 is lost.
I have tried to depict what I think happens (forgive my sketching skills).

For completeness: a lost ACK will not by itself cause a retransmission, because the cumulative acknowledgment for the next packet implicitly acknowledges the missing one.
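To make the timeline concrete, here is a coarse, round-based sketch of the scenario. The loss sets, the ACK-per-packet behavior, and the round model (the whole current window is transmitted each round, as GBN does after a timeout) are my own illustrative assumptions, not part of the original problem.

```python
# Coarse Go-Back-N sketch: 7 packets (0-6), window size 3, cumulative
# ACKs. ACK 2 and packet 4 are each lost on their first transmission.

N, WINDOW = 7, 3
lost_acks = {2}          # ACK numbers lost the first time they are sent
lost_pkts = {4}          # packet numbers lost the first time they are sent

base = 0                 # sender: oldest unacknowledged packet
expected = 0             # receiver: next in-order sequence number

while base < N:
    window = list(range(base, min(base + WINDOW, N)))
    print(f"transmit {window}")
    last_ack = None
    for seq in window:
        if seq in lost_pkts:             # data packet lost in transit
            lost_pkts.discard(seq)       # only the first copy is lost
            print(f"  packet {seq} lost")
            continue
        if seq == expected:              # in-order: receiver accepts it
            expected += 1
        ack = expected - 1               # cumulative ACK (duplicates too)
        if ack in lost_acks:             # ACK lost on the way back
            lost_acks.discard(ack)
            print(f"  ACK {ack} lost")
            continue
        last_ack = ack
    if last_ack is not None and last_ack >= base:
        base = last_ack + 1              # slide past everything acked
        print(f"  cumulative ACK {last_ack} -> window base {base}")
    else:
        print(f"  nothing usable acked -> timeout, resend from {base}")
```

Running it shows ACK 3 implicitly covering the lost ACK 2, and the loss of packet 4 forcing a retransmission of [4, 5, 6].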

Acknowledgments in flow control protocol

Can someone explain to me why, in selective repeat, it is not possible for an acknowledgment to arrive for a packet that falls outside the current window? It seems possible that some acknowledgment is simply delayed. That can happen in all sliding window protocols, so why is only statement 2 true?

Moreover, in the solution they mention that statement 2 is true because GBN uses cumulative ACKs: if we receive ACK 2, the sender assumes that both packets 1 and 2 have been received successfully, so it slides the window to remove 1 and 2 from it. But later we might get ACK 1, which I feel is not possible, because here we are talking about cumulative ACKs, not independent ones.

So how is this reasoning valid?
Let's start from the back. Cumulative acknowledgments do not imply that there are delayed acknowledgments. Go-Back-N can, in theory, avoid acknowledging every single packet, but that is an optimization of practical protocols. So I will assume that Go-Back-N acknowledges every single packet.
Assuming that Go-Back-N acknowledges every single packet, the situation you describe can happen. The receiver has received all packets in order and sends an ACK for every single packet, in order. The channel, however, does not guarantee reliable in-order delivery, i.e., the ACKs can arrive in arbitrary order. If two ACKs are reordered, exactly what they describe will happen.
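A minimal sketch of that reordering for a cumulative-ACK Go-Back-N sender; the window position and the ACK arrival order are illustrative assumptions:

```python
# Cumulative-ACK sender receiving ACKs that were reordered in the
# channel: ACK 2 overtakes ACK 1.

base, window_size = 1, 3          # window currently covers [1, 2, 3]

for ack in [2, 1]:                # ACK 2 arrives before ACK 1
    if ack >= base:
        base = ack + 1            # cumulative: covers everything <= ack
        print(f"ACK {ack}: window slides, base -> {base}")
    else:
        print(f"ACK {ack}: below window [{base}..{base + window_size - 1}], ignored")
```

ACK 2 slides the window past packets 1 and 2, so when ACK 1 finally arrives it falls outside (below) the current window.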
In selective repeat, each ACK acknowledges only the packet received. That only happens if the packet was sent, and the packet can only be sent as part of the sender window. Also, since ACKs are not cumulative, if the ACK for the second packet in the window is received first, the window will not move (since the first packet is not yet acknowledged).
Edit: actually, the following could happen in selective repeat. The sender sends a packet and receives no ACK. Then the timer fires and the packet is retransmitted. After the timer has fired, the first ACK arrives, and the window moves. Then, some time later, the second ACK arrives, and it is outside the window. This can happen if the timer is set incorrectly, or if the ACK spends too much time somewhere in transit (which should be covered by the channel model). So I guess you are correct in saying that it is possible.
Also, delayed ACKs usually refer to a TCP receiver that does not acknowledge every received packet, but instead sends a single ACK for several of them. With cumulative acknowledgments this works trivially, since an ACK for every single packet is not required. I don't see any way to implement delayed acknowledgments for selective repeat, except to send two ACKs in the same packet, but then those ACKs will be for packets inside a window.
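For contrast, a minimal selective-repeat sketch (the sequence numbers and arrival order are assumed for illustration), showing that the window only slides once the lowest outstanding packet is acknowledged:

```python
# Selective repeat: per-packet ACKs; the window slides only over a
# contiguous prefix of acknowledged packets.

base, acked = 1, set()            # window starts at packet 1

for ack in [2, 3, 1]:             # ACK for packet 1 arrives last
    acked.add(ack)
    while base in acked:          # slide only past the lowest acked run
        acked.remove(base)
        base += 1
    print(f"ACK {ack} received -> window base {base}")
```

ACKs 2 and 3 leave the window base at 1; only the late ACK 1 lets it slide to 4.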

A theoretical question using TCP sliding window: how is a mismatching window size handled?

I know this probably doesn't happen in real life at all, but I am curious. I was studying for an exam and, after writing a sliding window problem down incorrectly, came across a theoretical question: what happens if the RWS (receive window size) and the SWS (send window size) are somehow mismatched? As an example, let's say I'm sending 10 packets of information with a timeout of 1 RTT, but the SWS is 4 while the RWS is 3. For this purpose, let's say that the RX side is not able to send an ACK for packet 1 before packet 4 arrives, so the window is not "slid" in time.

My idea was that the extra 4th packet is simply discarded and never ACKed, so it would time out on the TX end; the sender would resend packet 4 after 1 RTT and continue this process for 5, 6, 7, 8, with 8 being the one discarded and going through the same process. Would this be a correct assumption? Or is there something here that I'm maybe not understanding?
Thanks in advance.
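A minimal round-based sketch of the hypothesis above, assuming the receiver silently drops (and never ACKs) any packet that overflows its 3-packet window, its window is not slid within the round (as stipulated), and the sender retransmits the unACKed packet in the next round after the 1 RTT timeout. The model and parameters are illustrative, not how any particular TCP behaves:

```python
# SWS = 4, RWS = 3, 10 packets (0-9): every fourth in-flight packet
# overflows the receiver window, is dropped, and is retransmitted.

SWS, RWS, TOTAL = 4, 3, 10
send_base = 0        # sender: oldest unacknowledged packet
recv_next = 0        # receiver: next expected in-order packet
round_no = 0

while send_base < TOTAL:
    round_no += 1
    limit = recv_next + RWS          # receiver window frozen this round
    for seq in range(send_base, min(send_base + SWS, TOTAL)):
        if seq == recv_next and seq < limit:
            recv_next += 1           # accepted in order
        else:
            print(f"round {round_no}: packet {seq} dropped (RWS full)")
    send_base = recv_next            # cumulative ACK slides sender window
    print(f"round {round_no}: ACKed through {recv_next - 1}, base -> {send_base}")
```

Under these assumptions the transfer proceeds three packets per round, matching the intuition that the overflowing packet simply times out and is resent.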

Understanding TCP Slow Start

I was trying to get my head around TCP congestion control and came across the slow start phase, where TCP starts by sending just 1 MSS and then keeps adding 1 MSS to the congestion window on receipt of each ACK. This much is clear. But after this, almost all books and articles that I refer to go on to say that this results in doubling the cwnd every RTT, showing a diagram of the congestion window doubling each round, which is where I got confused.

The first segment is clear: TCP sends it, receives the ACK after one RTT, and then doubles the cwnd, which is now 2. Now it transmits two segments; the ACK for the first one comes after an RTT, making the cwnd 3, but the ACK for the second segment comes after that, making the cwnd 4 (i.e., doubling it). So I am not able to understand how the cwnd doubles every RTT, since as per my understanding, in this example, the cwnd doubled on the first RTT, got incremented by one on the second RTT, and doubled again at some other time (RTT + transmission time of the first segment, I believe). Is this understanding correct? Please explain.
After the two segments' ACKs had been received by the sender, the cwnd was increased by 2, not 1. Note that the second ACK in round 2 arrived right after the first ACK in round 2; that's why they were considered part of the same round and together cost 1 RTT.
I could not agree with you more. I think there is no explicit doubling mechanism, and those descriptions give the wrong impression of slow start.

When the sender receives an ACK, the cwnd increases by 1 MSS. So if you send n segments and receive all n ACKs, the cwnd becomes (n + n) = 2n MSS.

During slow start, all the segments of a round are usually acknowledged within 1 RTT, because the network is not yet congested. So it looks like the cwnd doubles per RTT, but the real mechanism is additive, not multiplicative.
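A minimal sketch of this additive rule, assuming an idealized network where every segment sent in a round trip is acknowledged within that round trip; it shows why +1 MSS per ACK looks like doubling per RTT:

```python
# Slow start: cwnd grows by 1 MSS per ACK. In an idealized round trip,
# cwnd segments are sent and cwnd ACKs return, so cwnd -> 2 * cwnd.

cwnd = 1                      # congestion window, in MSS units
for rtt in range(1, 5):
    acks = cwnd               # one ACK per segment, none lost (assumed)
    cwnd += acks              # additive: +1 MSS per ACK
    print(f"after RTT {rtt}: received {acks} ACKs, cwnd = {cwnd} MSS")
```

The printed cwnd sequence is 2, 4, 8, 16: exponential per RTT, even though each individual step is an addition.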

What happens to the TCP congestion window when packet loss occurs at slow start stage?

In TCP, packet loss can be detected in two ways: a timeout, or three duplicate ACKs (for one certain packet, i.e., the lost packet).

Assume that the timeout has not been reached yet. What happens to the congestion window if a packet is lost during the slow start stage? Will the congestion window still increase by 1 when the first duplicate ACK is received?
For example, from the sender's point of view, the window size is initially 3:
[1 2 3]
Packet 1 is sent and its ACK (the ACK for packet 1) is received. Therefore the window size increases by 1, i.e., to 4:
[2 3 4 5]
Packet 2 is sent but is lost. Then, when packet 3 arrives successfully, a duplicate ACK (still for packet 1) comes back. What is the window size at this point?

1) If the window size increases on receipt of this first duplicate ACK (note that the sender does not yet know about the loss, because there is only one duplicate ACK and the timeout has not been reached), it should be:
[2 3 4 5 6]
2) Otherwise, perhaps because the ACK for packet 1 has already been received (packet 1 was delivered successfully), the window size may remain 4:
[2 3 4 5]
Which one is true for TCP?
Many thanks!
First, the receive window size is set by flow control. It is commonly on the order of 4096 or 8192 bytes and may change, so that window does not depend on the congestion parameters or lost packets.

Now, at the very beginning, one packet is sent (a packet of 1 MSS). If the ACK for the first packet is received successfully, the sender increases the rate at which packets are sent: the congestion window grows by 1 MSS for every acknowledgment, so it grows exponentially, roughly doubling every RTT. At some point, due to congestion, the sender detects a loss via a timeout or duplicate ACKs. If it receives three duplicate ACKs, it halves the congestion window and then grows it by adding 1 MSS per RTT, i.e., linearly; in this case it runs the fast recovery algorithm. If the loss is detected by a timeout, it sets the congestion window to 1 MSS and performs slow start again.

Slow start is the stage where the sending rate grows exponentially; after detecting congestion through a loss event, the sender halves the congestion window value and then increases it linearly, at which point it is in the congestion avoidance stage.
Although this might be off topic on Stack Overflow, I still believe this is an important and interesting issue. I am posting the answer I found here:
A similar (actually almost the same) question:
Does TCP increase its congestion window when Dup Acks arrive?
Answer:
https://www.rfc-editor.org/rfc/rfc5681#section-3.2
In more detail:
On the first and second duplicate ACKs received at a sender ... the
TCP sender MUST NOT change cwnd to reflect these two segments
For each additional duplicate ACK received (after the third),
cwnd MUST be incremented by SMSS.
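A minimal sketch of those rules from RFC 5681, section 3.2 (in MSS units; the state dictionary and names are mine): the first two duplicate ACKs leave cwnd untouched, the third triggers fast retransmit and sets ssthresh and cwnd, and each additional duplicate inflates cwnd by one SMSS.

```python
# Duplicate-ACK handling, following RFC 5681 section 3.2 (sketch).
SMSS = 1  # sender maximum segment size, in MSS units for readability

def on_dup_ack(s):
    s["dup_acks"] += 1
    if s["dup_acks"] <= 2:
        pass                                   # MUST NOT change cwnd
    elif s["dup_acks"] == 3:
        # fast retransmit: ssthresh = max(FlightSize / 2, 2 * SMSS),
        # then inflate cwnd by the 3 segments that left the network
        s["ssthresh"] = max(s["flight_size"] // 2, 2 * SMSS)
        s["cwnd"] = s["ssthresh"] + 3 * SMSS
    else:
        s["cwnd"] += SMSS                      # per additional duplicate

state = {"cwnd": 4, "ssthresh": 64, "flight_size": 4, "dup_acks": 0}
for _ in range(5):
    on_dup_ack(state)
    print(f"dup ACK #{state['dup_acks']}: cwnd = {state['cwnd']} MSS")
```

So in the scenario above, the first duplicate ACK leaves the window at 4, i.e., option 2).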

Why does a SYN or FIN bit in a TCP segment consume a byte in the sequence number space?

I am trying to understand the rationale behind such a design. I skimmed through a few RFCs but did not find anything obvious.
It's not particularly subtle - it's so that the SYN and FIN bits themselves can be acknowledged (and therefore re-sent if they're lost).
For example, if the connection is closed without sending any more data, then if the FIN did not consume a sequence number the closing end couldn't tell the difference between an ACK for the FIN, and an ACK for the data that was sent prior to the FIN.
SYNs and FINs require acknowledgement, thus they increment the stream's sequence number by one when used.
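A small arithmetic sketch of the argument above (the sequence number and payload length are illustrative): because the FIN occupies one sequence number, the ACK covering only the data and the ACK covering the FIN are distinct values.

```python
# Why the FIN must consume a sequence number: otherwise the two ACKs
# below would be identical and indistinguishable to the closing end.

seq_after_syn = 1001        # illustrative: ISN 1000, SYN consumed one
data_len = 50               # illustrative payload sent before the FIN

ack_for_data = seq_after_syn + data_len       # acknowledges the data only
ack_for_fin = seq_after_syn + data_len + 1    # FIN takes one more number

print(f"ACK covering the data alone: {ack_for_data}")
print(f"ACK covering data + FIN:     {ack_for_fin}")
```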
