How can propagation time be greater than transmission time?

Please help me understand a concept in computer networks. I understand what propagation delay and transmission time mean, but I can't see how propagation time can be greater than transmission time. Please explain with an example.

Transmission time = time it takes to send (dispatch) the data.
Propagation delay = time it takes for the data to reach the other side.
For example, suppose you're sending 10 bits of data from the North Pole to the South Pole. If your computer (which is very old) has a bit rate of 1 bit/ms, the transmission time would be 10 ms. However, due to the distance and the great number of hops between the North and South Poles, it could take 30 ms for the first bit (and, assuming everything goes well, for the other bits as well) to reach the South Pole.
Thus, propagation time > transmission time.
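To make the comparison concrete, here is a small Python sketch of the same arithmetic; the link parameters are just the illustrative numbers from the example above, not real measurements:

```python
# Transmission time vs. propagation delay, using the made-up numbers above.

frame_size_bits = 10          # 10 bits of data
bit_rate_bps = 1000           # 1 bit/ms = 1000 bits/s (the "very old" computer)
propagation_delay_s = 0.030   # 30 ms for the first bit to cross the pole-to-pole path

transmission_time_s = frame_size_bits / bit_rate_bps   # 10 bits / 1000 bps = 10 ms

print(f"transmission time: {transmission_time_s * 1000:.0f} ms")   # 10 ms
print(f"propagation delay: {propagation_delay_s * 1000:.0f} ms")   # 30 ms
print("propagation > transmission:", propagation_delay_s > transmission_time_s)
```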

Related

Vulnerable time in ALOHA depends on the frame transmission time (Tfr), but in CSMA it depends on frame propagation time

Q: Why does the vulnerable time in ALOHA depend on the frame transmission time (Tfr), while in CSMA it depends on the frame propagation time (Tp)?
I've understood that the vulnerable time is the time during which there is a possibility of collision.
I haven't found a proper explanation of transmission time vs. propagation time anywhere.
Please help
This is because of the difference in magnitude between transmission time and propagation time. Most propagation happens via electromagnetic waves, so for most distances the propagation delay is negligible compared to the transmission time, which is limited by the bandwidth of the channel.
In ALOHA, both times contribute to the vulnerable time, so the dominant factor (transmission time) is what remains. In CSMA, the transmission-time factor is avoided, so only the propagation delay is considered; although it is very small, it must be taken into account to derive the efficiency of CSMA protocols, as the sketch below illustrates.
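A rough numerical illustration, assuming the standard textbook formulas (vulnerable time = 2 × Tfr for pure ALOHA, Tp for CSMA) and made-up link parameters:

```python
# Vulnerable-time comparison (textbook formulas: 2*Tfr for pure ALOHA, Tp for CSMA).
# The link parameters below are made up for illustration.

frame_size_bits = 1000            # frame length
bandwidth_bps = 1e6               # 1 Mbps channel
distance_m = 2000                 # 2 km of cable
signal_speed_mps = 2e8            # ~2/3 the speed of light in copper/fiber

tfr = frame_size_bits / bandwidth_bps     # frame transmission time = 1 ms
tp = distance_m / signal_speed_mps        # propagation time = 10 us

print(f"pure ALOHA vulnerable time: {2 * tfr * 1e3:.3f} ms")   # 2 * Tfr
print(f"CSMA vulnerable time:       {tp * 1e3:.3f} ms")        # Tp (much smaller)
```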

CSMA/CD: Minimum frame size to hear all collisions?

Question from a networking class:
"In a csma/cd lan of 2 km running at 100 megabits per second, what would be the minimum frame size to hear all collisions?"
Looked all over and can't find info anywhere on how to do this. Is there a formula for this problem? Thanks for any help.
The bandwidth-delay product is the amount of data in transit.
Propagation delay is the time it takes for the signal to propagate over the network:
propagation delay = length of wire / speed of signal
Assuming copper wire, the signal speed is about 2/3 the speed of light, i.e. 2 × 10^8 m/s:
propagation delay = 2000 m / (2 × 10^8 m/s) = 10 µs
Round-trip time is the time taken for a message to travel from sender to receiver and back from receiver to sender:
round-trip time = 2 × propagation delay = 20 µs
The minimum frame size is the bandwidth-delay product, using the RTT as the delay:
minimum frame size = bandwidth × RTT = 100 Mbps × 20 µs = 2000 bits
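The same arithmetic as a small Python sketch, assuming (as above) a 2 km copper segment with a signal speed of about 2/3 the speed of light:

```python
# Minimum CSMA/CD frame size: the sender must still be transmitting when a
# collision from the far end gets back, so frame time >= round-trip time.

distance_m = 2000                 # 2 km LAN
signal_speed_mps = 2e8            # ~2/3 of 3*10^8 m/s in copper
bandwidth_bps = 100e6             # 100 Mbps

propagation_delay_s = distance_m / signal_speed_mps      # 10 us
rtt_s = 2 * propagation_delay_s                          # 20 us
min_frame_bits = bandwidth_bps * rtt_s                   # 2000 bits

print(f"propagation delay:  {propagation_delay_s * 1e6:.0f} us")
print(f"round-trip time:    {rtt_s * 1e6:.0f} us")
print(f"minimum frame size: {min_frame_bits:.0f} bits ({min_frame_bits / 8:.0f} bytes)")
```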

Wireshark Time-Sequence-Graph (Stevens)

I need help explaining this graph: http://www.picamatic.com/view/9018094_Untitled/
I need to find out where TCP's slow-start phase begins and ends, and where congestion avoidance takes over.
Somewhat tough to see since there are no grid lines, but we can estimate. Slow start is characterized by exponential growth, so it appears that on the 6th burst (the packets being sent between 1.0 and 1.1 seconds) the exponential growth in packets sent out stops and turns linear, indicating the connection has entered congestion avoidance.
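For intuition, here is a tiny sketch (not Wireshark output, just the idealized congestion-window growth) of why the curve looks exponential during slow start and linear afterwards; the ssthresh value is arbitrary:

```python
# Idealized TCP congestion-window growth per round trip: doubles during slow
# start, then grows by one segment per RTT in congestion avoidance.
# ssthresh is an arbitrary value chosen for illustration.

cwnd = 1          # congestion window, in segments
ssthresh = 32     # slow-start threshold (illustrative)

for rtt in range(10):
    phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
    print(f"RTT {rtt}: cwnd = {cwnd:3d} segments ({phase})")
    if cwnd < ssthresh:
        cwnd *= 2        # exponential growth
    else:
        cwnd += 1        # linear growth
```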

What is the rationale behind bandwidth delay product

My understanding is that Bandwidth delay product refers to the maximum amount of data "in-transit" at any point in time, between two endpoints.
The thing that I don't get is why we multiply bandwidth by RTT. Bandwidth is a function of the underlying medium, such as copper wire or fiber optics, while RTT is a function of how busy the intermediate nodes are, any scheduling applied at those nodes, distance, etc. RTT can change, but bandwidth can, for practical purposes, be considered fixed. So how does multiplying a constant value (capacity, aka bandwidth) by a fluctuating value (RTT) represent the total amount of data in transit?
Based on this, would a really, really slow link (one with a very large RTT) have a very large capacity? Chances are whatever is causing the high RTT will start dropping packets.
Look at the units:
[bandwidth] = bytes / second
[round trip time] = seconds
[data volume] = bytes
[data volume] = [bandwidth] * [round trip time].
Unit-wise, it is correct. Semantically, what is bandwidth * round trip time? It's the amount of data that left the sender before the first acknowledgement was received by the sender. That is, bandwidth * round trip time is the desired window size under perfect conditions.
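A quick sketch of that arithmetic; the link numbers below are illustrative only:

```python
# Bandwidth-delay product: how much data can be "in flight" before the first
# ACK comes back. Link parameters below are illustrative.

bandwidth_bps = 100e6     # 100 Mbps link
rtt_s = 0.050             # 50 ms round-trip time

bdp_bits = bandwidth_bps * rtt_s
bdp_bytes = bdp_bits / 8

print(f"BDP = {bdp_bits:.0f} bits = {bdp_bytes / 1024:.0f} KiB")
# A TCP window smaller than this cannot keep the pipe full on this path.
```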
If the round trip time is measured from the last packet, and the sender's outbound bandwidth is perfectly stable and fully used, then the calculated window size exactly equals the amount of data (data and ACKs together) in transit. If you want only one direction, divide the quantity by two.
Since the round trip time is a measured quantity, it naturally fluctuates (and gets smoothed out). The measured bandwidth could fluctuate as well, and thus the estimated total volume of data in transit fluctuates as well.
Note that the amount of data in transit can vary with the data transfer rate. If the bottleneck is wire delay, then RTT can be considered constant, and the amount of data in transit will be proportional to the speed with which it's sent to the network.
Of course, if a round trip time suddenly rises dramatically, the estimated max. amount of data in transit rises as well, but that is correct. If there is no accompanying packet loss, the sliding window needs to expand. If there is packet loss, you need to reconsider the bandwidth estimate (and the bandwidth delay product drops accordingly).
To add to Jan Dvorak's answer, you can think of the 'big fat pipe' as a garden hose. We are interested in how much water is in the hose. So we take its 'bandwidth', i.e. how fast it can deliver water (which for a hose is determined by its cross-sectional area), and multiply by its length, which corresponds to the RTT, i.e. how long a drop of water takes to get from one end to the other. The result is the volume of the hose: the amount of data 'in the pipe'.
First, BDP is a calculated value used in performance tuning to determine the upper bound on the data that could be outstanding/unacknowledged. This almost always does not represent the quantity of "in-transit" data, but a target to which tuning parameters are applied. If it always represented "in-transit" data, there would be no room for performance tuning.
RTT does in fact fluctuate. This is why the expected worst-case RTT is used in calculations. By tuning to the worst case, throughput efficiency will be at its maximum when RTT is poorest. If RTT improves, we get outstanding ACKs sooner, the pipe remains full, and maximum throughput (efficiency) is maintained.
"Full pipe" is a misnomer. The goal is to keep the Tx side full, since the Rx side carries ACK packets, which are typically smaller than the transmitted packets.
RTT also aggregates asymmetric upstream and downstream bandwidths (ADSL, satellite modem, cable modem, etc.).

Measuring time difference between networked devices

I'm adding networked multiplayer to a game I've made. When the server sends an update packet to the client, I include a timestamp so that the client knows exactly when that information is valid. However, the server computer and the client computer might have their clocks set to different times (maybe even just a few seconds difference), so the timestamp from the server needs to be translated to the client's local time.
So, I'd like to know the best way to calculate the time difference between the server and the client. Currently, the client pings the server for a time stamp during initialization, takes note of when the request was sent and when it was answered, and guesses that the time stamp was generated roughly halfway along the journey. The client also runs 10 of these trials and takes the average.
But, the problem is that I'm getting different results over repeated runs of the program. Within each set of 10, each measurement rarely diverges by more than 400 milliseconds, which might be acceptable. But if I wait a few minutes between each run of the program, the resulting averages might disagree by as much as 2 seconds, which is not acceptable.
Is there a better way to figure out the difference between the clocks of two networked devices? Or is there at least a way to tweak my algorithm to yield more accurate results?
Details that may or may not be relevant: The devices are iPod Touches communicating over Bluetooth. I'm measuring pings to be anywhere from 50-200 milliseconds. I can't ask the users to sync up their clocks. :)
Update: With the help of the below answers, I wrote an objective-c class to handle this. I posted it on my blog: http://scooops.blogspot.com/2010/09/timesync-was-time-sink.html
I recently took a one-hour class on this and it wasn't long enough, but I'll try to boil it down to get you pointed in the right direction. Get ready for a little algebra.
Let s equal the time according to the server. Let c equal the time according to the client. Let d = s - c. d is what is added to the client's time to correct it to the server's time, and is what we need to solve for.
First we send a packet from the server to the client with a timestamp. When that packet is received at the client, it stores the difference between the given timestamp and its own clock as t1.
The client then sends a packet to the server with its own timestamp. The server sends the difference between the timestamp and its own clock back to the client as t2.
Note that t1 and t2 both include the "travel time" t of the packet plus the time difference between the two clocks d. Assuming for the moment that the travel time is the same in both directions, we now have two equations in two unknowns, which can be solved:
t1 = t - d
t2 = t + d
t1 + d = t2 - d
d = (t2 - t1)/2
The trick comes because the travel time is not always constant, as evidenced by your pings between 50 and 200 ms. It turns out to be most accurate to use the timestamps with the minimum ping time. That's because your ping time is the sum of the "bare metal" delay plus any delays spent waiting in router queues. Every once in a while, a lucky packet gets through without any queuing delays, so you use that minimum time as the most repeatable time.
Also keep in mind that clocks run at different rates. For example, I can reset my computer at home to the millisecond and a day later it will be 8 seconds slow. That means you have to continually readjust d. You can use the slope of various values of d computed over time to calculate your drift and compensate for it in between measurements, but that's beyond the scope of an answer here.
Hope that helps point you in the right direction.
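A minimal sketch of the idea in Python; the sample collection and transport are assumed and out of scope here, and the numbers are made up. It combines the questioner's "halfway along the journey" guess with the minimum-RTT filtering suggested above:

```python
# Estimate the clock offset d (server time minus client time) from ping samples.
# Each sample is (client_send, server_timestamp, client_receive), all in seconds;
# how the samples are collected over the network is assumed/out of scope here.

def estimate_offset(samples):
    best = None
    for c_send, s_time, c_recv in samples:
        rtt = c_recv - c_send
        # Assume the server stamped the packet roughly halfway through the trip.
        offset = s_time - (c_send + rtt / 2.0)
        # Keep the sample with the smallest RTT: it is the least likely to have
        # been delayed in queues, so its "halfway" assumption is most accurate.
        if best is None or rtt < best[0]:
            best = (rtt, offset)
    return best[1]

# Example with made-up numbers: the server clock is ~1.5 s ahead of the client.
samples = [
    (10.000, 11.560, 10.120),   # 120 ms RTT
    (12.000, 13.525, 12.050),   # 50 ms RTT  <- minimum, used for the estimate
    (14.000, 15.600, 14.200),   # 200 ms RTT
]
print(f"estimated offset: {estimate_offset(samples):.3f} s")
```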
Your algorithm will not be much more accurate unless you can use some statistical methods. First of all, 10 is probably not sufficient. The first and simplest change would be to gather 100 transit time samples and toss out the x longest and shortest.
Another thing to add would be that both clients send their own timestamp in each packet. Then you can also calculate how different their clocks are and check the average difference between the clocks.
You can also look at SNTP and NTP implementations, as these protocols are designed to do exactly this.