Round trip time (sliding window)?

I have 2 questions:
While calculating RTT, should we take transmission time into consideration or not?
The distance between two stations M and N is L kilometers. All frames are K bits long. The propagation delay per kilometer is t seconds. Let T bits/sec be the channel capacity. Assuming that processing delay is negligible, what is the minimum number of bits for the sequence number field in a frame for maximum utilization, when the sliding window protocol is used?

1) RTT is the measured time from segment transmission until the ACK is received, so transmission time will obviously be considered. One can ignore the retransmission time.

RTT is calculated by considering the values of propagation delay, queuing delay, transmission delay and processing delay:
RTT = 2 * [propagation delay + N * (queuing delay + transmission delay + processing delay)]
(here N is the number of intermediate hops along the path)
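For the second question, the answer above stops at RTT, so here is a minimal sketch of the usual calculation, assuming the Go-Back-N style utilization condition (window >= 1 + 2a, where a = propagation delay / transmission time). The numbers in main() are made-up examples, not values from the question:
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Example values (assumed): L km, K bits/frame, t s/km, T bits/s */
    double L = 1000, K = 1000, t = 5e-6, T = 1e6;

    double tt = K / T;                    /* transmission time of one frame */
    double tp = L * t;                    /* one-way propagation delay */
    double window = 1.0 + 2.0 * tp / tt;  /* frames needed to keep the link busy */
    int bits = (int)ceil(log2(window));   /* sequence field must cover the window */

    printf("window >= %.1f frames -> %d sequence bits\n", window, bits);
    return 0;
}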


Maximum throughput meaning

Host A is sending data to host B over a full duplex link. A and B are using the sliding window protocol for flow control. The send and receive window sizes are 5 packets each. Data packets (sent only from A to B) are all 1000 bytes long and the transmission time for such a packet is 50 μs. Acknowledgment packets (sent only from B to A) are very small and require negligible transmission time. The propagation delay over the link is 200 μs. What is the maximum achievable throughput in this communication?
This question was asked in GATE. I have calculated it, but what is the meaning of the word 'maximum'? The calculation was just for throughput. How would one calculate minimum throughput?
I think maximum means assuming no packet loss and therefore no retries. Also, no additional transmission time above the 50 μs. Basically, given the above transmission time and propagation delay, how many bytes can be sent and acknowledged per second?
My intuition is to figure out how long it takes to send 5 packets to fill up the window with the propagation delay added. Then add the time for the acknowledgement for the first packet to arrive at the sender. That's your basic window send and acknowledgement time because as soon as the acknowledgement arrives the window will slide forward by one packet.
Since the window is 5 packets and each packet is 1,000 bytes then the maximum throughput should be 5,000 bytes / the time you calculated for the above cycle.
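As a quick sketch of that cycle in code, using the numbers from the question (50 μs transmission time, 200 μs propagation delay, window of 5 packets of 1000 bytes):
#include <stdio.h>

int main(void) {
    double tt = 50e-6;     /* transmission time per data packet, seconds */
    double tp = 200e-6;    /* one-way propagation delay, seconds */
    int window = 5;        /* send/receive window, packets */
    int packet_bytes = 1000;

    /* The first ACK arrives tt + 2*tp after the first packet starts;
       sending all 5 packets (250 us) fits inside that cycle, and the
       window then slides, so each cycle moves window * packet_bytes. */
    double cycle = tt + 2.0 * tp;                        /* 450 us */
    double throughput = window * packet_bytes / cycle;   /* bytes per second */
    printf("max throughput = %.2f MB/s\n", throughput / 1e6);  /* ~11.11 */
    return 0;
}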

CSMA/CD: Minimum frame size to hear all collisions?

Question from a networking class:
"In a csma/cd lan of 2 km running at 100 megabits per second, what would be the minimum frame size to hear all collisions?"
Looked all over and can't find info anywhere on how to do this. Is there a formula for this problem? Thanks for any help.
The bandwidth-delay product is the amount of data in transit.
Propagation delay is the amount of time it takes for the signal to propagate over the network:
propagation delay = length of wire / speed of signal
Assuming copper wire, the signal speed is about 2/3 the speed of light, i.e. (2/3) * 3 * 10^8 = 2 * 10^8 m/s:
propagation delay = 2000 / (2 * 10^8) = 10 μs
Round trip time is the time taken for a message to travel from sender to receiver and back from receiver to sender:
round trip time = 2 * propagation delay = 20 μs
The minimum frame size is the bandwidth-delay product taken over the RTT:
minimum frame size = bandwidth * RTT = 100 Mbps * 20 μs = 2000 bits
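A minimal sketch of the same arithmetic in C (the 2/3-of-c signal speed on copper is the assumption made above):
#include <stdio.h>

int main(void) {
    double length_m = 2000.0;      /* 2 km of cable */
    double signal_speed = 2.0e8;   /* ~ (2/3) * 3e8 m/s on copper */
    double bandwidth = 100e6;      /* 100 Mbps */

    double rtt = 2.0 * length_m / signal_speed;   /* 20 us round trip */
    double min_frame_bits = bandwidth * rtt;      /* sender must still be
                                                     transmitting when a
                                                     collision returns */
    printf("minimum frame size = %.0f bits\n", min_frame_bits);  /* 2000 */
    return 0;
}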

Bandwidth estimation with multiple TCP connections

I have a client which issues parallel requests for data from a server. Each request uses a separate TCP connection. I would like to estimate the available throughput (bandwidth) based on the received data.
I know that for one TCP connection I can do so by dividing the amount of data that has been downloaded by the time it took to download that data. But given that there are multiple concurrent connections, would it be correct to sum up all the data that has been downloaded by the connections and divide the sum by the duration between sending the first request and the arrival of the last byte (i.e., the last byte of the download that finishes last)? Or am I overlooking something here?
[This is a rewrite of my previous answer, which was getting too messy]
There are two components that we want to measure in order to calculate throughput: the total number of bytes transferred, and the total amount of time it took to transfer those bytes. Once we have those two figures, we just divide the byte-count by the duration to get the throughput (in bytes-per-second).
Calculating the number of bytes transferred is trivial; just have each TCP connection tally the number of bytes it transferred, and at the end of the sequence, we add up all of the tallies into a single sum.
Calculating the amount of time it takes for a single TCP connection to do its transfer is likewise trivial: just record the time (t0) at which the TCP connection received its first byte, and the time (t1) at which it received its last byte, and that connection's duration is (t1-t0).
Calculating the amount of time it takes for the aggregate process to complete, OTOH, is not so obvious, because there is no guarantee that all of the TCP connections will start and stop at the same time, or even that their download-periods will intersect at all. For example, imagine a scenario where there are five TCP connections, and the first four of them start immediately and finish within one second, while the final TCP connection drops some packets during its handshake, and so it doesn't start downloading until 5 seconds later, and it also finishes one second after it starts. In that scenario, do we say that the aggregate download process's duration was 6 seconds, or 2 seconds, or ???
If we're willing to count the "dead time" where no downloads were active (i.e. the time between t=1 and t=5 above) as part of the aggregate duration, then calculating the aggregate duration is easy: just subtract the smallest t0 value from the largest t1 value. (This would yield an aggregate duration of 6 seconds in the example above.) This may not be what we want, though, because a single delayed download could drastically reduce the reported bandwidth estimate.
A possibly more accurate way to do it would be say that the aggregate duration should only include time periods when at least one TCP download was active; that way the result does not include any dead time, and is thus perhaps a better reflection of the actual bandwidth of the network path.
To do that, we need to capture the start-times (t0s) and end-times (t1s) of all TCP downloads as a list of time-intervals, and then merge any overlapping time-intervals as shown in the sketch below. We can then add up the durations of the merged time-intervals to get the aggregate duration.
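The diagram from the original answer isn't reproduced here, so below is a minimal C sketch of that merge step, using the five-connection example above (four downloads over [0,1] s, one over [5,6] s):
#include <stdio.h>
#include <stdlib.h>

typedef struct { double t0, t1; } Interval;   /* one download's start/end time */

static int by_start(const void *a, const void *b) {
    double d = ((const Interval *)a)->t0 - ((const Interval *)b)->t0;
    return (d > 0) - (d < 0);
}

/* Total time during which at least one download was active (assumes n >= 1):
   sort by start time, merge overlapping intervals, sum the merged durations. */
double active_duration(Interval *iv, int n) {
    qsort(iv, n, sizeof *iv, by_start);
    double total = 0.0, cur0 = iv[0].t0, cur1 = iv[0].t1;
    for (int i = 1; i < n; ++i) {
        if (iv[i].t0 <= cur1) {              /* overlaps current merged interval */
            if (iv[i].t1 > cur1) cur1 = iv[i].t1;
        } else {                             /* dead time: close out and restart */
            total += cur1 - cur0;
            cur0 = iv[i].t0;
            cur1 = iv[i].t1;
        }
    }
    return total + (cur1 - cur0);
}

int main(void) {
    Interval iv[] = { {0, 1}, {0, 1}, {0, 1}, {0, 1}, {5, 6} };
    printf("aggregate duration = %.1f s\n", active_duration(iv, 5));  /* 2.0 */
    return 0;
}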
You need to do a weighted average rather than a plain mean of the per-connection rates. Let B(n) be the bytes processed for connection 'n' and T(n) be the time required to process those bytes. Weighting each connection's rate B(n)/T(n) by its duration T(n) reduces to total bytes over total connection-time:
double total_bytes = 0, total_time = 0;
for (int n = 0; n < Nmax; ++n)
{
    total_bytes += B(n);   /* bytes transferred on connection n */
    total_time  += T(n);   /* time connection n spent transferring */
}
double throughput = total_bytes / total_time;

Bandwidth-Delay Product

It's given that the bandwidth-delay product defines the number of bits that can fill the link.
The sender should send a burst of data of (2 * bandwidth * delay) bits.
I am not getting why the term bandwidth * delay is multiplied by 2. Please explain the reason.
It depends what you mean by "delay". If delay is the round trip time (RTT) then you wouldn't multiply it by two. Presumably, in the formula you are looking at, the delay is the unidirectional transmission time, so you multiply it by 2 to estimate the RTT.
One RTT is the earliest time you could get an acknowledgement back for the first bit you transmitted, so that's why your window should be that big in order to fill the pipe.
Delay in your case is the propagation delay, which is the time taken by the signal (message) to propagate from sender to receiver.
It is multiplied by 2 because the link is bidirectional, i.e. sender and receiver can both send data at the same time. In order to completely fill the link you need to multiply the propagation delay by 2, and this term (2 * propagation delay) is known as the round trip time (RTT).
bandwidth-delay product = RTT * bandwidth
bandwidth-delay product = 2 * propagation delay * bandwidth
where
RTT = 2 * propagation delay
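As a quick numeric sketch (the link speed and delay here are assumed example values, not from the question):
#include <stdio.h>

int main(void) {
    double bandwidth = 100e6;    /* 100 Mbps link (assumed example) */
    double prop_delay = 10e-3;   /* 10 ms one-way propagation delay (assumed) */

    double rtt = 2.0 * prop_delay;
    double bdp_bits = bandwidth * rtt;   /* bits in flight over one RTT */
    printf("BDP = %.0f bits = %.0f KB\n",
           bdp_bits, bdp_bits / 8 / 1000);   /* 2000000 bits = 250 KB */
    return 0;
}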
I guess this product only applies to TCP/IP, not UDP/IP, as only TCP needs confirmation of the data it sends.

What is the rationale behind bandwidth delay product

My understanding is that Bandwidth delay product refers to the maximum amount of data "in-transit" at any point in time, between two endpoints.
The thing that I don't get is why multiply bandwidth by RTT. Bandwidth is a function of the underlying medium, such as copper wire, fiber optics, etc., and RTT is a function of how busy intermediate nodes are, any scheduling applied at the intermediate nodes, distance, etc. RTT can change, but bandwidth for practical purposes can be considered fixed. So how does multiplying a constant value (capacity, aka bandwidth) by a fluctuating value (RTT) represent the total amount of data in transit?
Based on this, will a really, really slow link have a very large capacity? Chances are the "causes" of the RTT will start dropping packets.
Look at the units:
[bandwidth] = bytes / second
[round trip time] = seconds
[data volume] = bytes
[data volume] = [bandwidth] * [round trip time].
Unit-wise, it is correct. Semantically, what is bandwidth * round trip time? It's the amount of data that left the sender before the first acknowledgement was received by the sender. That is, bandwidth * round trip time = the desired window size under perfect conditions.
If the round trip time is measured from the last packet and the sender's outbound bandwidth is perfectly stable and fully used, then the measured window size exactly calculates the number of packets (data and ACKs together) in transit. If you want only one direction, divide the quantity by two.
Since the round trip time is a measured quantity, it naturally fluctuates (and gets smoothed out). The measured bandwidth could fluctuate as well, and thus the estimated total volume of data in transit fluctuates as well.
Note that the amount of data in transit can vary with the data transfer rate. If the bottleneck is wire delay, then RTT can be considered constant, and the amount of data in transit will be proportional to the speed with which it's sent to the network.
Of course, if a round trip time suddenly rises dramatically, the estimated max. amount of data in transit rises as well, but that is correct. If there is no accompanying packet loss, the sliding window needs to expand. If there is packet loss, you need to reconsider the bandwidth estimate (and the bandwidth delay product drops accordingly).
To add to Jan Dvorak's answer, you can think of the 'big fat pipe' as a garden hose. We are interested in how much water is in the pipe. So, we take its 'bandwidth' i.e. how fast it can deliver water, which for a hose is determined by its cross-sectional area, and multiply by its length, which corresponds to the RTT, i.e. how 'long' a drop of water takes to get from one end to the other. The result is the volume of the hose, the volume of the pipe, the amount of data 'in the pipe'.
First, BDP is a calculated value used in performance tuning to determine the upper bound on data which could be outstanding/unacknowledged. This almost always does not represent the quantity of "in-transit" data, but a target to which tuning parameters are applied. If it always represented "in-transit" data, there would be no room for performance tuning.
RTT does in fact fluctuate. This is why the expected worst-case RTT is used in calculations. By tuning to the worst case, throughput efficiency will be at its maximum when RTT is poorest. If RTT improves, we get outstanding ACKs sooner, the pipe remains full, and maximum throughput (efficiency) is maintained.
"Full pipe" is a misnomer. The goal is to keep the Tx side full, as the Rx side carries ACK packets which are typically smaller than the transmitted packets.
RTT also aggregates asymmetrical upstream and downstream bandwidths (ADSL, satellite modem, cable modem, etc.).
