Wireshark Time-Sequence-Graph (Stevens) - tcp

I need help explaining this graph: http://www.picamatic.com/view/9018094_Untitled/
I need to find out where TCP's slow-start phase begins and ends, and where congestion avoidance takes over.

It's somewhat tough to see since there are no grid lines, but we can estimate. Slow start is characterized by exponential growth, so it appears that at the 6th burst (the packets sent between 1.0 and 1.1 seconds), the exponential growth in packets sent has stopped and turned linear, indicating the connection has entered congestion avoidance.

Related

How can propagation time be greater than transmission time?

Please help me understand a concept in computer networks. I understand what propagation delay and transmission time mean, but I can't see how propagation time can be greater than transmission time. Please explain with an example.
Transmission time = time it takes to send (dispatch) the data.
Propagation delay = time it takes for the data to reach the other side.
For example, suppose you're sending 10 bits of data from the North Pole to the South Pole. If your computer (which is very old) transmits at 1 bit/ms, the transmission time is 10 ms. However, due to the distance and the great number of hops between the poles, it could take 30 ms for the first bit (and, assuming all goes well, each subsequent bit) to reach the South Pole.
Thus, propagation time > transmission time.
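The arithmetic in the example above can be sketched in a few lines; the numbers (10 bits, 1 bit/ms, 30 ms one-way delay) are the hypothetical values from the answer, not measured ones:

```python
# Transmission time vs. propagation delay, using the example's numbers.
data_bits = 10
bit_rate_bits_per_ms = 1                                 # a very old computer: 1 bit/ms
transmission_time_ms = data_bits / bit_rate_bits_per_ms  # time to push all bits onto the wire

propagation_delay_ms = 30                                # travel time of the first bit, pole to pole

print(transmission_time_ms)                              # 10.0
print(propagation_delay_ms > transmission_time_ms)       # True: propagation > transmission
```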

Identifying the end of a slow-start phase

I have a good understanding of the slow-start phase, namely that it only lets a few packets be sent at first, and this amount increases until the maximum is found, in order to avoid congestion.
For the graph below, however, how do I identify when the slow-start phase ends? I'm assuming it starts right at 0 seconds, which is when the connection is established. I'm going to guess that slow start ends at 0.65 seconds, which is when we start seeing only two dots (packets) one after another... or would this just be congestion avoidance?
I agree. It would be easier if you had joined the dots, but I see maximum slope being achieved at t=0.65s, then a slowdown as congestion avoidance kicks in.
It's somewhat hard to see (no gridlines on the figure), but it seems the max window size is reached around ~0.76s (150000 bytes) - so slow start ends there.

What is the rationale behind bandwidth delay product

My understanding is that Bandwidth delay product refers to the maximum amount of data "in-transit" at any point in time, between two endpoints.
The thing I don't get is why we multiply bandwidth by RTT. Bandwidth is a function of the underlying medium, such as copper wire or fiber optics, while RTT is a function of how busy the intermediate nodes are, any scheduling applied at those nodes, distance, etc. RTT can change, but bandwidth can, for practical purposes, be considered fixed. So how does multiplying a constant value (capacity, a.k.a. bandwidth) by a fluctuating value (RTT) represent the total amount of data in transit?
Based on this, would a really, really slow link have a very large capacity? Chances are whatever is causing the high RTT will start dropping packets.
Look at the units:
[bandwidth] = bytes / second
[round trip time] = seconds
[data volume] = bytes
[data volume] = [bandwidth] * [round trip time].
Unit-wise, it is correct. Semantically: what is bandwidth * round trip time? It's the amount of data that left the sender before the first acknowledgement was received by the sender. That is, bandwidth * round trip time = the desired window size under perfect conditions.
If the round trip time is measured from the last packet, and the sender's outbound bandwidth is perfectly stable and fully used, then the bandwidth-delay product exactly gives the amount of data (data and ACKs together) in transit. If you want only one direction, divide the quantity by two.
Since the round trip time is a measured quantity, it naturally fluctuates (and gets smoothed out). The measured bandwidth could fluctuate as well, and thus the estimated total volume of data in transit fluctuates as well.
Note that the amount of data in transit can vary with the data transfer rate. If the bottleneck is wire delay, then RTT can be considered constant, and the amount of data in transit will be proportional to the speed with which it's sent to the network.
Of course, if a round trip time suddenly rises dramatically, the estimated max. amount of data in transit rises as well, but that is correct. If there is no accompanying packet loss, the sliding window needs to expand. If there is packet loss, you need to reconsider the bandwidth estimate (and the bandwidth delay product drops accordingly).
To add to Jan Dvorak's answer, you can think of the 'big fat pipe' as a garden hose. We are interested in how much water is in the pipe. So, we take its 'bandwidth' i.e. how fast it can deliver water, which for a hose is determined by its cross-sectional area, and multiply by its length, which corresponds to the RTT, i.e. how 'long' a drop of water takes to get from one end to the other. The result is the volume of the hose, the volume of the pipe, the amount of data 'in the pipe'.
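The unit analysis above reduces to a one-line calculation. As a sketch, with assumed example numbers (a 100 Mbit/s link and a 50 ms RTT; neither figure comes from the thread):

```python
# Bandwidth-delay product: bytes that fit "in the pipe".
bandwidth_bits_per_s = 100e6   # 100 Mbit/s link (assumed)
rtt_s = 0.050                  # 50 ms round trip (assumed)

bdp_bits = bandwidth_bits_per_s * rtt_s
bdp_bytes = bdp_bits / 8
print(bdp_bytes)               # 625000.0 -> ~625 kB can be in flight at once
```

A TCP window smaller than this value leaves the pipe partly empty; this is why high-bandwidth, high-latency paths need window scaling.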
First, BDP is a calculated value used in performance tuning to determine the upper bound on data that can be outstanding/unacknowledged. It almost always does not represent the quantity of "in-transit" data, but rather a target to which tuning parameters are applied. If it always represented "in-transit" data, there would be no room for performance tuning.
RTT does in fact fluctuate. This is why the expected worst-case RTT is used in calculations. By tuning to the worst case, throughput efficiency will be at its maximum when the RTT is poorest. If the RTT improves, outstanding ACKs arrive sooner, the pipe remains full, and maximum throughput (efficiency) is maintained.
"Full pipe" is a misnomer. The goal is to keep the Tx side full, as the Rx contains Ack packets which are typically smaller than the transmitted packets.
RTT also aggregates asymmetric upstream and downstream bandwidths (ADSL, satellite modem, cable modem, etc.).

Does TCP randomly select a time from an interval to decide when a timeout has occurred?

In TCP, how is the timeout interval determined? I was told it is randomly selected from an interval that doubles after each timeout, but nothing I found on Google mentions random selection; instead, the sources say it's calculated using the smoothed round-trip time after the first acknowledgment is received. Does it do this for each packet, or is there some randomness in the design?
An initial value of the RTT is calculated during the TCP 3-way handshake that starts a connection. It is updated thereafter when qualifying send/acks are seen.
Most modern implementations don't use this method directly, but rather use a statistical analysis of the maximum time it should take to get an ACK, and retransmit after that interval. The "exponential backoff" (the doubling of the wait interval) happens for further retransmissions of the same data.
A connection "times out" after some number of transmissions with no ACK being received.
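The smoothed, non-random estimator the answer describes is standardized in RFC 6298. Below is a minimal sketch of it; the smoothing constants (1/8, 1/4, K=4) and the 1-second floor are the RFC's, while the sample RTT values are made up:

```python
# Sketch of the RFC 6298 retransmission-timeout (RTO) estimator.
class RtoEstimator:
    def __init__(self, first_rtt):
        # Initialised from the first RTT measurement (e.g. the handshake).
        self.srtt = first_rtt
        self.rttvar = first_rtt / 2
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

    def update(self, rtt):
        # Weighted moving averages: no randomness anywhere.
        self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
        self.srtt = 0.875 * self.srtt + 0.125 * rtt
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

    def backoff(self):
        # Exponential backoff: the wait doubles for each retransmission.
        self.rto *= 2

est = RtoEstimator(first_rtt=0.5)   # seconds (made-up sample)
est.update(0.6)                     # a later, slower measurement
print(round(est.rto, 4))            # 1.3625
```

The doubling the questioner heard about is `backoff()`, applied per retransmission of the same segment; the base timeout itself is deterministic.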

TCP slow start and congestion avoidance problems?

Having a little problem with a trace I'm examining. I know a connection is in slow start if the window size is increasing along with the number of ACKs sent between each segment, and that it will increase by the size of the ACK'd segment. However, the beginning of my trace shows numbers that do not add up (screenshot below). What I do not know is how packet 6's window size was calculated, as the math does not work out from the previous window size and the ACKs in between. Can anyone shed any light on this?
Also, I have no idea how to spot when slow start becomes congestion avoidance. Is there something I can look for in the trace?
Slow start seems to only go until packet 13, so should I just assume that congestion avoidance has taken over?
http://img10.imageshack.us/f/tcptrace.jpg/
Thanks for any help given! I really appreciate it.
Your sentence starting "I know" is incorrect, hence your confusion. You are conflating the receive window advertised by the receiver with the congestion window maintained by the sender, which does not appear in packets and which grows by one MSS for each ACK during slow start (roughly doubling every round trip). This is not the place to reiterate all of RFC 2001, but I suggest you take another look at it.
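The distinction drawn above can be sketched as a per-ACK update rule for the sender-side congestion window; the MSS and ssthresh values here are illustrative, not from the trace:

```python
# Sender-side congestion window (cwnd): never visible in packet headers,
# unlike the receiver's advertised window.
MSS = 1460  # bytes; a typical Ethernet-path MSS (assumed)

def on_ack(cwnd, ssthresh):
    if cwnd < ssthresh:
        return cwnd + MSS                # slow start: +1 MSS per ACK
    return cwnd + MSS * MSS // cwnd      # congestion avoidance: ~+1 MSS per RTT

cwnd = MSS                               # initial window of one segment
for _ in range(4):                       # four ACKs arrive during slow start
    cwnd = on_ack(cwnd, ssthresh=64 * 1024)
print(cwnd)                              # 7300 (= 5 * MSS)
```

Because every ACK adds one MSS, and each round trip acknowledges a full window's worth of segments, the window roughly doubles per RTT, which is the exponential growth the traces in this thread show.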
