TCP slow start and congestion avoidance problems? - networking

Having a little problem with a trace I'm examining. I know a connection is in slow start if the window size is increasing along with the number of ACKs sent between each segment, and that it will increase by the size of the ACKed segment. However, the beginning of my trace shows numbers that do not add up (screenshot below). What I do not know is how packet 6's window size was calculated, as the maths does not work out from the previous window size and the ACKs in between. Can anyone shed any light on this?
Also, I have no idea how to spot when slow start becomes congestion avoidance. Is there something I can look out for in the trace?
Slow start only seems to go until packet 13, so should I just assume that congestion avoidance has taken over?
http://img10.imageshack.us/f/tcptrace.jpg/
Thanks for any help given! I really appreciate it.

Your sentence starting 'I know' is incorrect, hence your confusion. You are conflating the receive window advertised by the receiver with the congestion window maintained by the sender, which does not appear in packets at all and which grows by roughly one segment per ACK during slow start, doubling about once per round trip. This is not the place to reiterate all of RFC 2001, but I suggest you take another look at it.
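To make the distinction concrete, here is a minimal sketch of sender-side cwnd growth in the spirit of RFC 5681, with assumed values for the MSS, the initial cwnd and ssthresh (none of these come from the trace in the screenshot):

```python
MSS = 1460                               # assumed maximum segment size, in bytes

def on_ack(cwnd, ssthresh):
    """Grow the sender's congestion window on receipt of one new ACK."""
    if cwnd < ssthresh:
        # Slow start: roughly one MSS per ACK, i.e. doubling about once per RTT.
        return cwnd + MSS
    # Congestion avoidance: roughly one MSS per RTT,
    # approximated here as MSS*MSS/cwnd per ACK.
    return cwnd + MSS * MSS // cwnd

cwnd, ssthresh = 2 * MSS, 16 * MSS       # assumed starting values
for ack in range(1, 21):
    phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
    cwnd = on_ack(cwnd, ssthresh)
    print(f"ACK {ack:2d}: {phase:20s} cwnd -> {cwnd:6d} bytes")
```

The switch you are asking about in your second question is simply the point where cwnd crosses ssthresh. Neither value is carried in the packets, so in a trace you can only infer it from the change in the sending pattern, not from the advertised window column.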

Related

Why isn't cwnd restricted by rwnd in a TCP connection?

I'm trying to understand how TCP works, and I'm a bit surprised by the (absence of an) effect of the receive window (rwnd) on the congestion window (cwnd).
From what I've read (mainly Wikipedia and RFC 5681), I understand that if the slow start threshold (ssthresh) has not been reached but the transmission rate is restricted by rwnd (since the effective window is the minimum of rwnd and cwnd), then cwnd continues to increase during the slow start phase (and even during congestion avoidance) as long as there is no loss or timeout. This means cwnd could potentially reach a very high value, since the initial value of ssthresh is extremely big.
See the following citation, which seems to confirm my deduction:
Implementation Note: An easy mistake to make is to simply use cwnd, rather than FlightSize, which in some implementations may incidentally increase well beyond rwnd.
[from RFC 5681; this part of the RFC is about setting a new value for ssthresh after a loss]
In this case, wouldn't it be possible to:
keep a connection at a relatively low transmission rate (e.g. setting rwnd to 10 MSS in every ACK) so that there is no loss and the connection stays in the slow start phase,
wait long enough for cwnd to become extremely big (like 10 times what the link can handle), and then
set rwnd to an even bigger value so that the transmission rate is restricted only by cwnd?
This would lead to a massive amount of congestion on the link, especially since it will take quite a lot of time for the server to notice the loss via a timeout and reset cwnd back to its initial value, and it may have a huge impact on other connections using the same link, or at least the same bottleneck link.
I would have imagined that once rwnd is reached, the slow start algorithm stops and congestion avoidance begins to react to any new change in the network (or an increase in rwnd).
According to https://stackoverflow.com/a/21775731/20003316, the Linux implementation of TCP does not allow cwnd to increase while the sending rate is application-limited (i.e. the sending rate is controlled by rwnd rather than cwnd).
Looking into this in more depth, I found that there is in fact an RFC addressing this question: https://www.rfc-editor.org/rfc/rfc7661#page-10
When in slow start:
if the amount of data ACKed in the current window is smaller than 0.5 * cwnd, then a TCP implementation must not increase the value of cwnd.
if the amount of data ACKed in the current window is greater than or equal to 0.5 * cwnd, then a TCP implementation must increase the value of cwnd as it normally would.
When not in slow start:
if the sending rate is restricted by rwnd rather than cwnd and the amount of data ACKed in the current window is smaller than 0.5 * cwnd, then a TCP implementation must not increase cwnd.
otherwise, proceed as usual.
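A rough sketch of the effect of that rule, with made-up numbers and per-RTT growth (RFC 7661 actually works in terms of a measured pipeACK variable, which this simplification glosses over):

```python
MSS = 1460                                # assumed segment size, in bytes

def grow(cwnd, ssthresh, acked_this_rtt, validate=True):
    """Per-RTT cwnd growth; 'validate' applies the rule sketched above:
    do not grow cwnd when less than half of it was actually used."""
    if validate and acked_this_rtt < cwnd / 2:
        return cwnd                       # rwnd-limited: cwnd is not validated
    if cwnd < ssthresh:
        return cwnd * 2                   # slow start: roughly doubles per RTT
    return cwnd + MSS                     # congestion avoidance: ~1 MSS per RTT

rwnd, ssthresh = 10 * MSS, 1 << 30        # receiver pins the flow to 10 MSS
for validate in (False, True):
    cwnd = 10 * MSS
    for _ in range(10):                   # 10 RTTs of an rwnd-limited transfer
        cwnd = grow(cwnd, ssthresh, min(cwnd, rwnd), validate)
    print(f"validate={validate}: cwnd after 10 RTTs = {cwnd // MSS} MSS")
```

Without the rule, cwnd climbs to thousands of MSS that the path has never actually carried, which is exactly the scenario described in the question; with it, cwnd stays within a small factor of the rate the receiver has allowed.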

A theoretical question about the TCP sliding window: how is a mismatched window size handled?

I know this probably doesn't happen in real life at all, but I am curious. While studying for an exam I wrote down one of my sliding window problems incorrectly, which led me to a theoretical question: what happens if the RWS (receive window size) and the SWS (send window size) are somehow mismatched? As an example, let's say I'm sending 10 packets of information with a timeout of 1 RTT, but the SWS is 4 while the RWS is 3. For this purpose, let's say the receiving side is not able to send an ACK for packet 1 before packet 4 arrives, so the window is not "slid" in time.
My idea was that the extra 4th packet is just discarded and never ACKed, so it would time out on the TX end and packet 4 would be resent after 1 RTT; the process then continues for 5, 6, 7, 8, with 8 being the one discarded and going through the same process. Would this be a correct assumption? Or is there something to this that I'm maybe not understanding?
Thanks in advance.
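There is a simple way to model the scenario under its stated assumptions. The sketch below is a toy generic sliding-window protocol with cumulative ACKs (not real TCP, where the sender's window is taken from what the receiver advertises), with the hypothetical rule that the receiver silently discards anything beyond its 3-packet window and the sender resends after a 1 RTT timeout:

```python
# Toy sliding-window model: the sender bursts SWS packets per RTT, the
# receiver accepts only what fits in its RWS-sized window and silently
# discards the rest; discarded packets time out and are resent.
SWS, RWS, TOTAL = 4, 3, 10

base, rtt = 1, 0                   # first unACKed packet, elapsed round trips
while base <= TOTAL:
    rtt += 1
    burst = list(range(base, min(base + SWS, TOTAL + 1)))
    accepted, dropped = burst[:RWS], burst[RWS:]
    print(f"RTT {rtt}: sent {burst}, delivered {accepted}, dropped {dropped}")
    base = accepted[-1] + 1        # cumulative ACK moves the sender forward;
                                   # the dropped packet leads the next burst
print(f"all {TOTAL} packets delivered after {rtt} round trips")
```

Under these assumptions the intuition in the question holds: the last packet of each burst is discarded, times out, and comes back at the head of the next burst, so the mismatch costs retransmissions and time but not correctness.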

Benefit of small TCP receive window?

I was trying to learn how TCP flow control works when I came across the concept of the receive window.
My question is, why is the TCP receive window scalable? Are there any advantages to implementing a small receive window size?
Because as I understand it, the larger the receive window size, the higher the throughput. While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
My question is, why is the TCP receive window scalable?
There are two questions there. Window scaling is the ability to multiply the advertised window by a power of 2 so you can have window sizes greater than 64 KB. However, the rest of your question indicates that you are really asking why it is resizable, to which the answer is 'so the application can choose its own receive window size'.
Are there any advantages to implementing a small receive window size?
Not really.
Because as I understand it, the larger the receive window size, the higher the throughput.
Correct, up to the bandwidth-delay product. Beyond that, increasing it has no effect.
While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
Yes, up to the bandwidth-delay product (see above).
A small receive window ensures that when a packet loss is detected (which happens frequently on a high-collision network),
No, it doesn't. Simulations show that if packet loss gets above a few per cent, TCP becomes unusable.
the sender will not need to resend a lot of packets.
It doesn't happen like that. There aren't any advantages to small window sizes except lower memory occupancy.
After much reading around, I think I might just have found an answer.
Throughput is not just a function of receive window. Both small and large receive windows have their own benefits and harms.
A small receive window ensures that when a packet loss is detected (which happens frequently on a high-collision network), the sender will not need to resend a lot of packets.
A large receive window ensures that the sender will not be idle most of the time as it waits for the receiver to acknowledge that a packet has been received.
The receive window needs to be adjustable to get the optimal throughput for any given network.
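To put a number on "up to the bandwidth-delay product": a window-limited flow can deliver at most one receive window per round trip, so past the BDP a larger window buys nothing. A quick back-of-the-envelope sketch with assumed figures (100 Mbit/s path, 50 ms RTT):

```python
def max_throughput_bps(rwnd_bytes, rtt_s):
    """A window-limited flow moves at most one full window per round trip."""
    return rwnd_bytes * 8 / rtt_s

link_bps, rtt = 100e6, 0.050                 # assumed: 100 Mbit/s path, 50 ms RTT
bdp_bytes = link_bps / 8 * rtt               # bandwidth-delay product: 625,000 B
for rwnd in (16 * 1024, 64 * 1024, 256 * 1024, 1024 * 1024):
    bound = min(max_throughput_bps(rwnd, rtt), link_bps)
    limit = "rwnd-limited" if rwnd < bdp_bytes else "link-limited"
    print(f"rwnd = {rwnd:>8} B -> at most {bound / 1e6:6.2f} Mbit/s ({limit})")
```

On this example path a 64 KB window (the largest possible without window scaling) caps throughput at roughly 10 Mbit/s, which is the practical reason for window scaling; beyond the BDP, a larger window only costs memory, as the answer above notes.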

Identifying the end of a slow-start phase

I have a good understanding of the slow-start phase, namely how it only sends a few packets at first and then increases that amount until the maximum is found, in order to avoid congestion.
For the graph below, however, how do I identify when the slow-start phase ends? I'm assuming it starts right at the beginning, at 0 seconds, which is when the connection is established. I'm going to guess that slow start ends at 0.65 seconds, which is when we start seeing only two dots (packets) one after another... or would this just be because of congestion avoidance?
I agree. It would be easier if you had joined the dots, but I see the maximum slope being achieved at t = 0.65 s, then a slowdown as congestion avoidance kicks in.
It's somewhat hard to see (no gridlines on the figure), but it seems the maximum window size is reached at around 0.76 s (150,000 bytes), so slow start ends there.
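What both answers are looking for is the bend in the curve: during slow start the amount sent per round trip grows exponentially, and after it the growth is roughly linear. A minimal idealised sketch of that knee, with assumed MSS and ssthresh rather than values read off the graph in the question:

```python
MSS = 1460                                  # assumed segment size, in bytes
ssthresh = 32 * MSS                         # assumed slow start threshold
cwnd, total_sent = 2 * MSS, 0

for rtt in range(10):
    total_sent += cwnd                      # one window's worth per round trip
    phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
    print(f"t = {rtt:2d} RTT: {phase:20s} cwnd = {cwnd // MSS:3d} MSS, "
          f"total sent = {total_sent // MSS:4d} MSS")
    cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + MSS

# The cumulative total bends from exponential to nearly straight-line growth
# at the round trip where cwnd first reaches ssthresh; that bend is what you
# look for in a sequence-number-versus-time plot.
```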

SR & GBN: Out-of-window ACKs

I'm currently studying fairly basic networking, and I'm on the subject of reliable transmission. I'm using the book Computer Networking by Kurose & Ross, and two of the review questions (one for SR and one for GBN) were essentially as follows:
With the selective-repeat/go-back-N protocol, is it possible for the sender to receive an ACK for a packet that falls outside of its current window?
For the SR version, my answer to the question was as follows:
Yes, if the window size is too big for the sequence number space. For example, a receiver gets a number of packets equal to the size of the sequence number space. Its receive window has thus moved so that it is expecting a new set of packets with the same sequence numbers as the last one. The receiver now sends an ACK for each of the packets, but all of them are lost along the way. This eventually causes the sender to time out for each of the previous set of packets, and it retransmits each of them. The receiver thinks that this duplicate set of packets is really the new one that it is expecting, and it sends ACKs for each of them that successfully reach the sender. The sender now experiences a similar kind of confusion, where it thinks that the ACKs are confirmations that each of the old packets has been received, when they are really ACKs meant for the new, yet-to-be-sent packets.
I'm pretty sure this is correct (otherwise, please tell me!), since this kind of scenario seems to be the classic justification for why the window size should be less than or equal to half the size of the sequence number space when it comes to SR protocols, but what about GBN?
Can the same kind of wraparound issue occur for it, making the answers mostly identical? If not, are there any other cases that can cause a typical GBN sender to receive an ACK outside of its window?
Regarding the latter, the only example I can think of is the following:
A GBN sender sends packets A & B in order. The receiver receives both in order, and sends one cumulative ACK covering every packet before and up to A, and then another one covering every packet before and up to B (including A). The first one is so heavily delayed that the second one arrives first to the sender, causing its window to slide beyond A & B. When the first one finally arrives, it needlessly acknowledges that everything up to A has been correctly received, when A is already outside of the sender's window.
This example seems rather harmless and unlikely in contrast to the previous one, so I doubt that it's correct (but again, please correct me if I'm wrong!).
In the practical world, how about a duplicate ACK delayed long enough to fall outside the window?
The protocol is between the sender and the receiver, but it has no control over how the medium (network path) behaves.
The protocol would still be reliable according to its design, but the implementation must be able to handle such out-of-window duplicate ACKs.
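To see the wraparound ambiguity from the SR answer concretely, here is a toy example with a sequence space of 4 and a window of 3, deliberately larger than half the space (the numbers are made up for illustration):

```python
# Toy selective-repeat receiver: sequence space {0,1,2,3}, window of 3
# (more than half the space), showing the wraparound ambiguity.
SEQ_SPACE, WINDOW = 4, 3

recv_base = 0
window = [(recv_base + i) % SEQ_SPACE for i in range(WINDOW)]
print("receiver initially expects:", window)           # [0, 1, 2]

# Packets 0, 1, 2 arrive and are delivered; the window slides forward.
recv_base = (recv_base + WINDOW) % SEQ_SPACE
window = [(recv_base + i) % SEQ_SPACE for i in range(WINDOW)]
print("after delivering 0-2, expects:", window)         # [3, 0, 1]

# All three ACKs are lost, so the sender times out and retransmits the OLD
# packets 0 and 1 -- their sequence numbers now sit inside the receiver's
# NEW window, so the stale copies are indistinguishable from fresh data.
for old_seq in (0, 1):
    print(f"retransmitted old packet {old_seq}: looks new? {old_seq in window}")
```

With a window no larger than half the sequence space (2 here), the old numbers could never land inside the new window, which is exactly the rule cited in the question. For GBN with cumulative ACKs, a stale or duplicate ACK below the sender's window, as in the delayed-ACK example above, is simply ignored by the sender.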
