The TCP fast recovery algorithm is described as follows (from TCP/IP Illustrated, Vol. 1). What I can't understand is step 1: why is the congestion window (cwnd) inflated by three times the segment size?
1. When the third duplicate ACK is received, set ssthresh to one-half the current congestion window, cwnd. Retransmit the missing segment. Set cwnd to ssthresh plus 3 times the segment size.
2. Each time another duplicate ACK arrives, increment cwnd by the segment size and transmit a packet (if allowed by the new value of cwnd).
3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh. This should be the ACK of the retransmission from step 1, one round-trip time after the retransmission. Additionally, this ACK should acknowledge all the intermediate segments sent between the lost packet and the receipt of the first duplicate ACK. This step is congestion avoidance, since we're slowing down to one-half the rate we were at when the packet was lost.
From [RFC 2001][1]
When the third duplicate ACK in a row is received, set ssthresh to one-half the current congestion window, cwnd, but no less than two segments. Retransmit the missing segment. Set cwnd to ssthresh plus 3 times the segment size. This inflates the congestion window by the number of segments that have left the network and which the other end has cached.
So, when you receive 3 duplicate ACKs in a row you cut cwnd in half and perform a fast retransmit; from then on you try not to just sit idle while waiting for the next new ACK (1 RTT at best). Once you enter fast recovery, you send new data with
cwnd = ssthresh + (number of duplicate ACKs received) * segment size
until either you receive the ACK you were waiting for or the timer for that ACK expires.
Basically, that "+3" accounts for the 3 duplicate ACKs that made you enter fast recovery in the first place, so that you can transmit an amount of new data equal to the lost bytes plus the ones that reached the receiver and are sitting in its buffer.
[1]: https://www.rfc-editor.org/rfc/rfc2001
Related
I received an assignment from college where I have to implement reliable transfer over UDP, a.k.a. TCP over UDP (I know, reinventing the wheel, since TCP already does this), in order to learn in depth how TCP works. Some of the requirements are: 3-way handshake, congestion control (TCP Tahoe, in particular) and connection teardown ("waved hands"). I am thinking about doing this in Java or Python.
Some more specific requirements are:
After each ACK is received:
(Slow start) If CWND < SS-THRESH: CWND += 512
(Congestion Avoidance) If CWND >= SS-THRESH: CWND += (512 * 512) / CWND
After timeout, set SS-THRESH -> CWND / 2, CWND -> 512, and retransmit data after the last acknowledged byte.
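Here is a minimal Python sketch of those rules as I read them; the initial ssthresh value is my own assumption, since the assignment doesn't state one:

```
SEGMENT = 512            # segment size fixed by the assignment, in bytes

cwnd = SEGMENT           # congestion window
ssthresh = 8 * SEGMENT   # assumed initial threshold; pick whatever the assignment expects

def on_new_ack():
    """Called once for each ACK that acknowledges new data."""
    global cwnd
    if cwnd < ssthresh:
        cwnd += SEGMENT                        # slow start
    else:
        cwnd += (SEGMENT * SEGMENT) // cwnd    # congestion avoidance

def on_timeout():
    """Tahoe reaction to a retransmission timeout."""
    global cwnd, ssthresh
    ssthresh = cwnd // 2
    cwnd = SEGMENT
    # ...then retransmit data starting after the last acknowledged byte.
```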
I couldn't find more specific information about the TCP Tahoe implementation. But from what I understand, TCP Tahoe is based on Go-Back-N, so I found the following pseudo-algorithm for the sender and receiver:
My question is: should the Slow Start and Congestion Avoidance steps happen right after the if sendbase == nextseqnum check, that is, right after confirming the receipt of an expected ACK?
My other question is about the window size: Go-Back-N uses a fixed window, whereas TCP Tahoe uses a dynamic window. How can I calculate the window size based on cwnd?
Note: your pictures are unreadable, please provide higher-resolution images.
I don't think that algorithm is correct. A timer should be associated with each packet and stopped when the ACK for that packet is received. Congestion control is triggered when the timer for any of the packets fires.
TCP is not exactly a Go-Back-N receiver. In TCP the receiver has a buffer too. This does not require any changes to the Go-Back-N sender. However, TCP is also supposed to implement flow control, in which the receiver tells the sender how much space remains in its buffer, and the sender adjusts its window accordingly.
Note that Go-Back-N sequence numbers count packets, while TCP sequence numbers count bytes in the packets; you have to change your algorithm accordingly.
I would advise getting somewhat familiar with RFC 793. It does not cover congestion control, but it specifies how the rest of the TCP machinery is supposed to work. Also, this link has a nice illustration of the TCP window and all the variables associated with it.
My question is: should the Slow Start and Congestion Avoidance steps happen right after the if sendbase == nextseqnum check, that is, right after confirming the receipt of an expected ACK?
Your algorithm only does something when it receives the ACK for the last packet. As I said, this is incorrect.
Regardless, every ACK that acknowledges a new packet should trigger a window increase. You can check this by testing whether send_base was increased as a result of an ACK.
I don't know if every Tahoe implementation does this, but you may need it as well: after three consecutive duplicate ACKs, i.e., ACKs that do not increase send_base, you trigger the congestion response.
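A rough Python sketch of that ACK handler, reusing the assignment's Tahoe rules; the state fields (send_base, dup_acks, cwnd, ssthresh) mirror the variables discussed above, and the actual retransmission from send_base is only noted in a comment:

```
MSS = 512

def on_ack(state, ack_no):
    if ack_no > state.send_base:
        # ACK for new data: slide the window and grow cwnd.
        state.send_base = ack_no
        state.dup_acks = 0
        if state.cwnd < state.ssthresh:
            state.cwnd += MSS                          # slow start
        else:
            state.cwnd += (MSS * MSS) // state.cwnd    # congestion avoidance
    else:
        # Duplicate ACK: send_base did not move.
        state.dup_acks += 1
        if state.dup_acks == 3:
            # Tahoe-style congestion response: shrink, re-enter slow start,
            # then retransmit starting from send_base (not shown here).
            state.ssthresh = max(state.cwnd // 2, 2 * MSS)
            state.cwnd = MSS
```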
My other question is about the window size: Go-Back-N uses a fixed window, whereas TCP Tahoe uses a dynamic window. How can I calculate the window size based on cwnd?
You make N a variable instead of a constant and assign the congestion window to it.
In a real TCP with flow control you compute N = min(cwnd, receiver_window).
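In Python terms, something like this (receiver_window would come from the window field of the peer's last segment; bytes already in flight count against the result):

```
def usable_window(cwnd, receiver_window, send_base, next_seq_num):
    # Effective window: the smaller of the congestion and flow-control limits,
    # minus what is already in flight and not yet acknowledged.
    n = min(cwnd, receiver_window)
    in_flight = next_seq_num - send_base
    return max(n - in_flight, 0)
```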
Host A is sending data to host B over a full duplex link. A and B are using the sliding window protocol for flow control. The send and receive window sizes are 5 packets each. Data packets (sent only from A to B) are all 1000 bytes long and the transmission time for such a packet is 50 µs. Acknowledgment packets (sent only from B to A) are very small and require negligible transmission time. The propagation delay over the link is 200 µs. What is the maximum achievable throughput in this communication?
This question was asked in GATE. I have calculated it, but what is the meaning of the word 'maximum'? The calculation was just for throughput. How would one calculate the minimum throughput?
I think maximum means assuming no packet loss and therefore no retries. Also, no additional transmission time above the 50 µs. Basically, given the above transmission time and propagation delay, how many bytes can be sent and acknowledged per second?
My intuition is to figure out how long one window cycle takes: from the moment the sender starts transmitting the first packet until the acknowledgement for that packet arrives back, i.e., the packet's transmission time plus the propagation delay in each direction (the ACK itself takes negligible time). That's your basic send-and-acknowledge cycle, because as soon as the acknowledgement arrives the window slides forward, and the whole 5-packet window can be transmitted within that cycle.
Since the window is 5 packets and each packet is 1,000 bytes, the maximum throughput should be 5,000 bytes divided by the time you calculated for the above cycle.
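Putting numbers on that (assuming the garbled units in the question are microseconds, as in the usual version of this problem):

```
t_transmit = 50e-6   # s, transmission time of one 1000-byte data packet
t_prop = 200e-6      # s, one-way propagation delay; ACK transmission is negligible

# One cycle: first packet is transmitted, propagates to B, and its ACK
# propagates back. The whole 5-packet window fits inside this cycle.
cycle = t_transmit + 2 * t_prop      # 450 microseconds
window_bytes = 5 * 1000              # 5 packets x 1000 bytes

throughput = window_bytes / cycle
print(f"{throughput:.3e} bytes/s")   # ~1.11e7 bytes/s
```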
I was trying to get my head around TCP congestion control and came across what is called the slow start phase, where TCP starts by sending just 1 MSS and then keeps adding 1 MSS to the congestion window on receipt of each ACK. This much is clear. But after this, almost all books/articles I refer to go ahead and say that this results in doubling cwnd every RTT, showing an image something like the one below, and that is where I got confused.
The first segment is clear: TCP sends it and receives the ACK after one RTT, then doubles cwnd, which is now 2. Now it transmits two segments; the ACK for the first one comes after one RTT, making cwnd 3, but the ACK for the second segment comes after this, making cwnd 4 (i.e., doubling it). So I am not able to understand how cwnd doubles every RTT, since as per my understanding, in this example cwnd doubled on the first RTT, got incremented by one on the second RTT, and doubled again at some other time (one RTT plus the transmission time of the first segment, I believe). Is this understanding correct? Please explain.
After the two segments' ACKs had been received by the sender, cwnd was increased by 2, not 1. Note that the second ACK in round 2 arrived right after the first ACK in round 2; that's why they were considered part of the same round and together cost 1 RTT.
I could not agree with you more. I think there is no actual doubling mechanism, and those descriptions give the wrong impression of Slow Start.
When the sender receives an ACK, cwnd increases by 1 MSS. So if you send n segments and receive all the ACKs, cwnd becomes (n + n) MSS.
During Slow Start those n segments are usually all acknowledged within one RTT, because congestion is unlikely at this point. So it looks like cwnd doubles per RTT, but the real mechanism is additive per ACK, not multiplicative per RTT.
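A toy Python loop makes the point: cwnd is only ever increased by 1 MSS per ACK, yet it doubles every round, provided (my assumption here) that every segment sent in a round is ACKed within that round and nothing is lost:

```
cwnd = 1  # congestion window, in MSS units
for rtt in range(1, 5):
    acks = cwnd        # one ACK comes back for each segment sent this round
    cwnd += acks       # +1 MSS per ACK received
    print(f"after RTT {rtt}: cwnd = {cwnd} MSS")
# prints 2, 4, 8, 16: additive per ACK, doubling per RTT
```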
In TCP, packet loss can be detected in two ways: a timeout, or three duplicate ACKs (all for the same packet, i.e., the lost packet).
Assume that the timeout has not been reached yet. What happens to the congestion window if a packet loss happens during the slow start stage? Will the congestion window still increase by 1 when the first duplicate ACK is received?
For example, at the view of the sender, initially the window size is 3:
[1 2 3]
Packet 1 and its ACK (the ACK for packet 1) are sent and received, so the window size increases by 1, i.e., to 4:
[2 3 4 5]
Packet 2 is sent but it's lost. Then, when packet 3 is delivered successfully, a duplicate ACK (still for packet 1) arrives. What is the window size at this point?
1) If the window size could increase on receipt of this first duplicate ACK (note that the sender doesn't know about the loss yet, because there is only one duplicate ACK and the timeout has not been reached), it should be:
[2 3 4 5 6]
2) Otherwise, perhaps because the ACK for packet 1 has already been received (because packet 1 was delivered successfully), the window size may remain 4:
[2 3 4 5]
Which one is true for TCP?
Many thanks!
Firstly, the receive window size is set by flow control. Generally it is 4096 or 8192 bytes, and it may change. So that window size does not depend on the congestion parameters or lost packets.
Now, at the very beginning, one packet is sent (a packet of 1 MSS). If the ACK for the first packet is received successfully, the sender increases the rate at which packets are sent: cwnd grows by 1 MSS for every acknowledgement, which roughly doubles it every RTT, so the congestion window grows exponentially. At some point, due to congestion, a loss is detected either by a timeout or by duplicate ACKs. If duplicate ACKs are received, the sender halves the congestion window and then increases it by adding 1 MSS per RTT, i.e., it grows linearly. If the loss is detected by a timeout, the sender sets cwnd to 1 MSS and performs the slow start algorithm again. When duplicate ACKs are received it runs the fast recovery algorithm instead.
The slow start stage is where the transmission rate grows exponentially; after a loss event indicates congestion, the sender halves the congestion window value and then increases it linearly. It is then in the congestion avoidance stage.
Although this might be off-topic on Stack Overflow, I still believe this is an important and interesting issue. I am posting the answer I found here:
A similar (actually almost the same) question:
Does TCP increase its congestion window when Dup Acks arrive?
Answer:
https://www.rfc-editor.org/rfc/rfc5681#section-3.2
In more detail:
On the first and second duplicate ACKs received at a sender ... the TCP sender MUST NOT change cwnd to reflect these two segments.
For each additional duplicate ACK received (after the third), cwnd MUST be incremented by SMSS.
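In rough Python form, with smss, flight_size and the retransmission itself left as placeholders, the RFC 5681 duplicate-ACK handling looks something like this:

```
def on_duplicate_ack(state):
    state.dup_acks += 1
    if state.dup_acks <= 2:
        # First and second duplicate ACKs: MUST NOT change cwnd.
        pass
    elif state.dup_acks == 3:
        # Third duplicate ACK: set ssthresh, fast-retransmit the lost segment
        # (not shown), then inflate by the three segments known to have left
        # the network.
        state.ssthresh = max(state.flight_size // 2, 2 * state.smss)
        state.cwnd = state.ssthresh + 3 * state.smss
    else:
        # Each additional duplicate ACK inflates cwnd by one SMSS.
        state.cwnd += state.smss
```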
I was reading Computer Networking by Kurose, and while reading the TCP chapter about the differences between TCP and Go-Back-N I found something that I don't fully understand. The book says the following about some of the differences between the two protocols:
"many TCP implementations buffer correctly received but out-of-order segs rather than discard.
also, suppose a seqof segs 1, 2, …N, are received correctively in-order,ACK(n),
n < N, gets lost, and remaining N-1 acks arrive at sender before their respective timeouts
TCP retransmit most one seg, i.e., seg n, instead of pkts, n, n+1, …, N
TCP wouldn’t even retransmit seg n if ACK(n+1) arrived before timeout for seg n"
I understand the buffering of out-of-order segments, but I don't understand the other behavior, and I think it is because I don't fully understand Go-Back-N. Following that example, if ACK(n+1) arrives before the Go-Back-N timeout, the protocol would continue as if segment n had in fact been received, which is the case, because of the cumulative ACKs... so Go-Back-N wouldn't retransmit that segment either... or am I missing something?
I was looking for this question's answer, and after finding it I thought that even though this is old it might help someone, so I copied a fragment from Kurose & Ross, Computer Networking: A Top-Down Approach:
Is TCP a GBN or an SR protocol? Recall that TCP acknowledgments are cumulative and correctly received but out-of-order segments are not individually ACKed by the receiver. Consequently, the TCP sender need only maintain the smallest sequence number of a transmitted but unacknowledged byte (SendBase) and the sequence number of the next byte to be sent (NextSeqNum). In this sense, TCP looks a lot like a GBN-style protocol. But there are some striking differences between TCP and Go-Back-N. Many TCP implementations will buffer correctly received but out-of-order segments [Stevens 1994]. Consider also what happens when the sender sends a sequence of segments 1, 2, . . . , N, and all of the segments arrive in order without error at the receiver. Further suppose that the acknowledgment for packet n < N gets lost, but the remaining N – 1 acknowledgments arrive at the sender before their respective timeouts. In this example, GBN would retransmit not only packet n, but also all of the subsequent packets n + 1, n + 2, . . . , N. TCP, on the other hand, would retransmit at most one segment, namely, segment n. Moreover, TCP would not even retransmit segment n if the acknowledgment for segment n + 1 arrived before the timeout for segment n.
My conclusion: in practice, TCP is a mixture of both GBN and SR.
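A small sketch of the timeout behaviour the quote contrasts (send_base, next_seq_num and the buffer are the textbook's GBN-style sender variables; retransmit stands in for whatever actually puts the packet back on the wire):

```
def gbn_on_timeout(send_base, next_seq_num, buffer, retransmit):
    # Go-Back-N: resend every packet that is sent but not yet acknowledged.
    for seq in range(send_base, next_seq_num):
        retransmit(buffer[seq])

def tcp_on_timeout(send_base, buffer, retransmit):
    # TCP: resend only the oldest unacknowledged segment; a later cumulative
    # ACK may then cover the rest without retransmitting them.
    retransmit(buffer[send_base])
```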
See the link below; it makes GBN and SR easy to understand.
Selective Repeat protocol (SR):
https://www.youtube.com/watch?v=Cs8tR8A9jm8
In the GBN and SR protocols, the receiver has to send an ACK for every segment it receives within the sliding window.
In TCP, the receiver doesn't have to ACK every received segment individually; it sends cumulative ACKs indicating the next segment it expects. This means fewer ACK messages are sent to the sender, which helps reduce network congestion.
In abnormal cases, when some segments are lost (due to network congestion or bit errors), TCP's transmission time is longer than GBN's and SR's, because the receiver cannot acknowledge two separate segments at the same time.
In my opinion, losing a segment rarely happens, so TCP optimizes for the normal case instead of the abnormal one. In the normal case, TCP does better than GBN and SR.
The quote says that ACK(n) got lost, not that the nth segment got lost. In that case, nothing needs to be retransmitted, because ACK(n + x) means that everything up to n + x was successfully received.
I was confused by the statement from the book too, but I think I have found the answer:
Consider also what happens when the sender sends a sequence of segments 1, 2, . . . , N, and all of the segments arrive in order without error at the receiver. Further suppose that the acknowledgment for packet n < N gets lost, but the remaining N – 1 acknowledgments arrive at the sender before their respective timeouts. In this example, GBN would retransmit not only packet n, but also all of the subsequent packets n + 1, n + 2, . . . , N. TCP, on the other hand, would retransmit at most one segment, namely, segment n. Moreover, TCP would not even retransmit segment n if the acknowledgment for segment n + 1 arrived before the timeout for segment n.
Actually, in the above example, even though the ACK for packet n+1 arrives at the sender before its timeout, one has to be aware that the timer for packet n could have expired before that arrival. So, because packet n timed out and GBN has not yet seen ACK(n+1), ACK(n+2), and so on, it will trigger the retransmission of packet n and all packets after it.
However, at that same moment TCP would only send packet n again.
P.S. This question is very old, but hopefully this helps someone.
ACK(n) acknowledges arrival of the entire stream up to n. So ACK(n+1) says that everything up to n+1 has arrived, including n.