Recall that propagation delay, d/s, is the time for one bit to travel across a link,
and transmission delay is the time to transmit a whole packet onto a link.
Then why isn't packet length * propagation delay = transmission delay?
Because they're measuring different things.
Propagation delay is how long it takes one bit to travel from one end of the "wire" to the other (it's proportional to the length of the wire, crudely).
Transmission delay is how long it takes to get all the bits into the wire in the first place (it's packet_length/data_rate).
The transmission delay is the amount of time required for the router to push the packet out onto the link.
The propagation delay is the time it takes a bit to propagate from one router to the next.
The transmission delay and the propagation delay are completely different things! If we denote the length of the packet by L bits and the transmission rate of the link from the first router to the second by R bits/sec, then the transmission delay is L/R; it depends on the transmission rate of the link and the length of the packet.
If we denote the distance between the two routers by d and the propagation speed by s, then the propagation delay is d/s. It is a function of the distance between the two routers, but has no dependence on the packet's length or the transmission rate of the link.
Obviously, packet length * propagation delay = transmission delay is wrong.
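A quick numeric check makes this concrete (the numbers below are illustrative, not from the question):

```python
# Illustrative numbers: a 1500-byte packet, a 100 Mbps link, 1000 km of fiber.
L = 1500 * 8        # packet length in bits
R = 100e6           # transmission rate, bits/sec
d = 1_000_000       # link length, meters
s = 2e8             # propagation speed in the medium, meters/sec

transmission_delay = L / R      # 0.00012 s  (120 microseconds)
propagation_delay  = d / s      # 0.005 s    (5 milliseconds)

# The product the question asks about doesn't even have units of time
# (bits * seconds), and its value is unrelated to L/R:
print(L * propagation_delay)    # 60.0
print(transmission_delay)       # 0.00012
```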
Let us assume that you have a packet of 4 bits, 1010, and you have to send it from A to B.
In this scenario, the transmission delay is the time taken by the sender to place the packet on the link (the transmission medium): the bits (1010) have to be converted into signals, and that takes some time. Note that at this point the packet has only been placed on the link; it is not yet moving towards the receiver.
The propagation delay is the time taken by a bit (usually the MSB, here 1) to travel from the sender (A) to the receiver (B).
Transmission delay: the amount of time required to pump all of the packet's bits into the electric wire / optical fibre.
Propagation delay: the amount of time needed for a bit, once on the wire, to reach the destination.
If the propagation delay is much higher than the transmission delay, the chance of losing the packet is higher.
Transmission Delay:
This is the amount of time required to transmit all of the packet's bits into the link. Transmission delays are typically on the order of microseconds or less in practice.
L: packet length (bits)
R: link bandwidth (bps)
so transmission delay = L/R
Propagation Delay:
This is the time it takes a bit to propagate over the transmission medium from the source router to the destination router; it is a function of the distance between the two routers, but has nothing to do with the packet's length or the transmission rate of the link.
d: length of physical link
s: propagation speed in the medium (~2x10^8 m/sec for copper wire, ~3x10^8 m/sec for wireless media)
so propagation delay = d/s
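Putting the two formulas together, the total one-way delay across a single link is simply their sum (ignoring queueing and processing delay). A minimal sketch with assumed example values:

```python
def link_delays(packet_bits, rate_bps, distance_m, prop_speed_mps=2e8):
    """Return (transmission, propagation, total) delay in seconds.

    prop_speed_mps defaults to ~2e8 m/s, the rough copper figure quoted above.
    """
    transmission = packet_bits / rate_bps      # L / R
    propagation = distance_m / prop_speed_mps  # d / s
    return transmission, propagation, transmission + propagation

# Example: 1500-byte packet, 1 Gbps link, 100 km span.
tx, prop, total = link_delays(1500 * 8, 1e9, 100_000)
print(f"transmission = {tx * 1e6:.0f} us, propagation = {prop * 1e6:.0f} us, "
      f"total = {total * 1e6:.0f} us")
# -> transmission = 12 us, propagation = 500 us, total = 512 us
```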
The transmission delay is the amount of time required for the router to push out the packet; it has nothing to do with the distance between the two routers.
The propagation delay is the time taken by a bit to propagate from one router to the next.
DCTCP is a variant of TCP for the data center environment. The source is here
DCTCP uses the ECN feature of commodity switches to keep the queue length of the switch buffer around a threshold K. As a result, packet loss rarely happens, because K is much smaller than the buffer's capacity, so the buffer is almost never full.
DCTCP achieves low latency for small flows while maintaining high throughput for big flows. The reason is that when the queue length exceeds the threshold K, a congestion notification is fed back to the sender. The sender computes an estimate of the congestion probability over time and decreases its sending rate in proportion to the extent of the congestion.
DCTCP claims that a small queue length decreases the latency, i.e. the transmission time of flows. I doubted that, because latency should only become high when packet loss leads to retransmission, and in DCTCP packet loss rarely happens.
A small queue at the switch forces senders to decrease their sending rates, so packets queue up in the senders' TX buffers instead.
A bigger queue at the switch lets senders keep higher sending rates, and packets, instead of queueing in the senders' TX buffers, now queue in the switch's buffer.
So I think the delay is still the same with both a small and a big queue.
What do you think?
The buffer in the switch does not increase the capacity of the network; it only helps you not lose too many packets when there is a traffic burst. But TCP can deal with packet loss by sending more slowly, which is exactly what it needs to do when the network capacity is reached.
If you continuously run the network at its limit, the switch's queue will be full or nearly full all the time, so you still lose packets whenever the queue overflows. But you also increase the latency, because a packet needs some time to get from the end of the queue, where it arrives, to the front, where it is forwarded. This latency in turn makes the TCP stack react more slowly to congestion, which again increases congestion, packet loss, and so on.
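The latency penalty of a standing queue is easy to estimate: it is just the queued bytes divided by the drain rate of the link. A rough sketch with assumed example numbers:

```python
# Assumed example: a 256 KB buffer draining onto a 10 Mbps uplink.
buffer_bytes = 256 * 1024
link_rate_bps = 10e6

# Once the buffer is full, every arriving packet waits behind all of it:
added_latency_s = buffer_bytes * 8 / link_rate_bps
print(f"{added_latency_s * 1000:.0f} ms of extra queueing delay")   # ~210 ms
```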
So the ideal switch behaves like a network cable, i.e. it has no buffer at all.
You can read more about the problems caused by large buffers by searching for "bufferbloat", e.g. http://en.wikipedia.org/wiki/Bufferbloat.
And when in doubt, benchmark it yourself.
It depends on the queue occupancy. DCTCP aims to maintain a small queue occupancy, because the authors think that queueing delay is the reason for long latency.
So it does not matter what the maximum queue size is. Whether the maximum is 16 MB or just 32 KB, if we can keep the queue occupancy around 8 KB or some similarly small value, the queueing delay will be the same.
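In other words, the queueing delay is driven by the occupancy, not by the configured maximum. A small sketch of that point (the port speed is an assumption for illustration):

```python
def queueing_delay_us(occupancy_bytes, link_rate_bps):
    # Delay seen by an arriving packet = bytes ahead of it / drain rate.
    return occupancy_bytes * 8 / link_rate_bps * 1e6

# Whether the maximum queue size is 16 MB or 32 KB makes no difference as
# long as the standing occupancy stays around 8 KB on, say, a 10 Gbps port:
print(queueing_delay_us(8 * 1024, 10e9))   # ~6.6 microseconds either way
```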
Read the HULL paper from NSDI 2012 by M. Alizadeh, the first author of DCTCP. HULL also aims to maintain a short queue occupancy.
When they talk about small buffers, it is because the trend in data center switches is shifting from 'store and forward' buffering to 'cut-through' buffering. Just google it and you can find documents from Cisco and other related web pages.
A lot of ISPs sell their products saying: 100 Mbit/s speed.
However, compare the internet to a package delivery service, UPS for example.
The amount of packages you can send every second (bandwidth) is something different from the time it takes them to arrive (speed).
I know there are multiple meanings of the term 'bandwidth', so is it wrong to advertise with speed?
Wikipedia (http://en.wikipedia.org/wiki/Bandwidth_(computing)):
In computer networking and computer science, bandwidth, network bandwidth, data bandwidth, or digital bandwidth is a measurement of bit-rate of available or consumed data communication resources expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
This part tells me that bandwidth is measured in Mbit/s, Gbit/s, etc.
So does this mean the majority of ISPs are advertising wrongly, and that they should advertise 'bandwidth' instead of speed?
Short answer: Yes.
Long answer: There are several aspects of data transfer that can be measured on an amount-per-time basis; the amount of data per second is one of them, but it can be misleading if not properly explained.
From the network performance point of view, these are the important factors (quoting Wikipedia here):
Bandwidth - maximum rate that information can be transferred
Throughput - the actual rate that information is transferred
Latency - the delay between the sender and the receiver decoding it
Jitter - variation in the time of arrival at the receiver of the information
Error rate - corrupted data expressed as a percentage or fraction of the total sent
So you may have a 10 Mb connection, but if 50% of the sent packets are corrupted, your final throughput is actually just 5 Mb (and even less if you consider that a substantial part of the data may be control structures instead of data payload).
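As a back-of-the-envelope sketch with assumed figures, the usable rate falls out of the advertised rate roughly like this:

```python
# Assumed figures for illustration, not measurements.
advertised_bps = 10e6     # the "10 Mb" line rate
loss_fraction = 0.5       # half the packets corrupted and useless
overhead_fraction = 0.06  # rough share of each packet taken by headers/control

throughput_bps = advertised_bps * (1 - loss_fraction)   # 5 Mbps actually delivered
goodput_bps = throughput_bps * (1 - overhead_fraction)  # ~4.7 Mbps of useful payload
print(throughput_bps, goodput_bps)
```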
Latency may be affected by mechanisms such as Nagle's algorithm and ISP-side buffering:
As specified in RFC 1149, an ISP could sell you an IPoAC package at 9 Gbit/s and still be true to its word, if it sent you 16 pigeons with 32 GB SD cards attached to them, with an average air time of around 1 hour - or ~3,600,000 ms of latency.
What is the potential maximum speed of the RS-232 serial port on a modern PC? I know that the specification says it is 115200 bps, but I believe it can be faster. What influences the speed of the RS-232 port? I believe it is the quartz resonator, but I am not sure.
This goes back to the original IBM PC. The engineers who designed it needed a cheap way to generate a stable frequency, and they turned to crystals that were widely in use at the time, in every color TV set in the USA: a crystal made to run an oscillator circuit at the color burst frequency of the NTSC television standard, which is 315/88 = 3.579545 megahertz. From there, the signal first went through a programmable divider, the one you change to set the baud rate. The UART itself then divides it by 16 to generate the sub-sampling clock for the data line.
So the highest baud rate you can get is obtained by setting the divider to its smallest value, 2, which produces 3579545 / 2 / 16 = 111861 baud. That is roughly a 2.9% error from the ideal baud rate, but close enough; the clock rate doesn't have to be exact. That is the point of asynchronous signalling, the A in UART: the start bit always re-synchronizes the receiver.
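For anyone who wants to reproduce the arithmetic, a small sketch (assuming the divider and the fixed /16 stage described above):

```python
COLOR_BURST_HZ = 315 / 88 * 1e6        # 3.579545 MHz NTSC color burst crystal

def baud_rate(divisor, oversample=16):
    """Crystal -> programmable divider -> the UART's fixed /16 sampling clock."""
    return COLOR_BURST_HZ / divisor / oversample

top_speed = baud_rate(2)               # smallest divider mentioned above
error_pct = (115200 - top_speed) / 115200 * 100
print(f"{top_speed:.0f} baud, {error_pct:.1f}% below the ideal 115200")
# -> 111861 baud, 2.9% below the ideal 115200
```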
Getting real RS-232 hardware running at 115200 baud reliably is a significant challenge. The electrical standard is very sensitive to noise, there is no attempt at canceling induced noise and no attempt at creating an impedance-matched transmission line. The maximum recommended cable length at 9600 baud is only 50 feet. At 115200 only very short cables will do in practice. To go further you need a different approach, like RS-422's differential signals.
This is all ancient history and doesn't exactly apply to modern hardware anymore. True serial hardware based on a UART chip like the 16550 has been disappearing rapidly, replaced by USB emulators, which use a custom driver to emulate a serial port. They do accept a baud rate selection but simply ignore it for the USB bus itself; it only applies to the last half-inch in the dongle you plug into the device. Whether or not the driver accepts 115200 as the maximum value is a driver implementation detail; they usually accept higher values.
The maximum speed is limited by the specs of the UART hardware.
I believe the "classical" PC UART (the 16550) in modern implementations can handle at least 1.5 Mbps. If you use a USB-based serial adapter, there's no 16550 involved and the limit is instead set by the specific chip(s) used in the adapter, of course.
I regularly use an RS-232 link running at 460,800 bps with a USB-based adapter.
In response to the comment about clocking (with a caveat: I'm a software guy): asynchronous serial communication doesn't transmit the clock (that's the asynchronous part right there) along with the data. Instead, transmitter and receiver are supposed to agree beforehand about which bitrate to use.
A start bit on the data line signals the start of each "character" (typically a byte, but with start/stop/parity bits framing it). The receiver then starts sampling the data line in order to determine whether it's a 0 or a 1. This sampling is typically done at least 16 times faster than the actual bit rate, to make sure the reading is stable. So for a UART communicating at 460,800 bps like the one I mentioned above, the receiver will be sampling the RX signal at around 7.4 MHz. This means that even if you clock the actual UART with a raw frequency f, you can't expect it to reliably receive data at that rate. There is overhead.
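A quick sanity check of that sampling clock, under the usual 16x oversampling assumption:

```python
baud = 460_800
oversample = 16                        # typical UART oversampling factor
sampling_clock_hz = baud * oversample
print(sampling_clock_hz / 1e6)         # 7.3728 -> roughly 7.4 MHz, as above
```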
Yes, it is possible to run at higher speeds, but the major limitation is the environment: in a noisy environment there will be more corrupted data, limiting the speed. Another limitation is the length of the cable between the devices; you may need to add a repeater or some other device to strengthen the signal.
I have usually heard that network bandwidth is measured in bits per second, for example 500 Mbps. But when reading some network-related text, I came across this:
"Coaxial cable supports bandwidths up to 600 MHz."
Why do they say that? And what is the relationship between MHz and Kb?
Coaxial cable is not a "network" but a transmission medium. For physical reasons its bandwidth must be measured in Hz; consider, in fact, that we're talking here about electromagnetic signals and not bits.
When you move to the "network" side, in particular digital networks, capacity is measured in bps. Note that while an increase in bandwidth (MHz) will generally lead to a bps increase, the final bps depends on many factors, for example the digital modulation scheme (at a low level) and the network protocol (at a higher level). A typical case is the "symbol" representation, which tells you how many bits are sent in a single "pulse".
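To make the modulation point concrete, two textbook formulas tie an analog bandwidth in Hz to an upper bound in bits per second: Nyquist's limit for a noiseless channel and Shannon's capacity for a noisy one. A sketch with assumed example values:

```python
import math

def nyquist_bps(bandwidth_hz, levels):
    """Noiseless upper bound: 2 * B * log2(M) for M signalling levels."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_bps(bandwidth_hz, snr_linear):
    """Shannon capacity of a noisy channel: B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 600e6                                      # the 600 MHz coax figure above
print(nyquist_bps(B, levels=16) / 1e9)         # 4.8  -> 4.8 Gbps with 16-level symbols
print(shannon_bps(B, snr_linear=1000) / 1e9)   # ~6.0 -> ~6 Gbps at ~30 dB SNR
```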
But the subject is really huge and cannot be covered in a single answer here; I recommend you read a good book on electrical communications to get a clear picture of it.
That's the bandwidth of the signal that can be sent through the cable. You might want to have a read of the Nyquist-Shannon sampling theorem to see how that relates to the amount of data that can be transmitted.
How the "MHz relate to Kb" depends on the method for transmitting the data, which is why you'll see cables rated with a bandwidth in MHz like you've seen.
We are dealing with a bit of an abuse of terminology. Originally, "bandwidth" meant the width of the frequency band you have available to transmit (or receive) on. The term has been co-opted to also mean the amount of digital data you can transmit (or receive) on a line per unit time.
Here's an example of the original meaning. FM radio stations are spaced 200 kHz apart. You can have a station on 95.1 MHz, another one on 94.9 MHz, and another one on 95.3 MHz, but none in between. The bandwidth available to any given FM radio station is 200 kHz (actually it may be less than that if there is a built-in buffer zone of no-man's-land frequencies between stations; I don't know).
The bandwidth rating of something like a coaxial cable is the range of frequencies of the electromagnetic waves that it is designed to transmit reliably. Outside that range the physical properties of the cable cause it to not reliably transmit signals.
With (digital) computers, bandwidth almost always has the alternate meaning of data capacity per unit time. The two are related, though, because obviously, if you have more analog bandwidth available, you can use a technology that transmits more (digital) data at the same time over that carrier.
I understand latency - the time it takes for a message to go from sender to recipient - and bandwidth - the maximum amount of data that can be transferred over a given time - but I am struggling to find the right term to describe a related thing:
If a protocol is conversation-based - the payload is split up over many to-and-fros between the ends - then latency affects 'throughput' [1].
[1] What is this called, and is there a nice, concise explanation of it?
Surfing the web, trying to optimize the performance of my NAS (nas4free), I came across a page that described the answer to this question (IMHO). Specifically, this section caught my eye:
"In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn’t send more until it receives an acknowledgement from the remote host that all data was received. This is called the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together. This “windowing” happens so fast we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called “Bandwidth Delay Product,” or BDP."
This sounds like the answer to your question.
BDP as Wikipedia describes it
To conclude, it's called Bandwidth Delay Product (BDP), and the shortest explanation I've found is the one above. (Flexo noted this in his comment too.)
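A concrete BDP calculation (illustrative numbers) shows how much data has to be "in flight" to keep a long, fast path busy, and why a small TCP window caps throughput:

```python
link_bps = 100e6     # 100 Mbps path
rtt_s = 0.080        # 80 ms round-trip time

bdp_bytes = link_bps * rtt_s / 8
print(f"BDP = {bdp_bytes / 1024:.0f} KB in flight to fill the pipe")   # ~977 KB

# With a classic 64 KB TCP window the sender stalls waiting for ACKs:
window_bytes = 64 * 1024
max_throughput_bps = window_bytes * 8 / rtt_s
print(f"max throughput = {max_throughput_bps / 1e6:.1f} Mbps")         # ~6.6 Mbps
```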
Could goodput be the term you are looking for?
According to Wikipedia:
In computer networks, goodput is the application level throughput, i.e. the number of useful bits per unit of time forwarded by the network from a certain source address to a certain destination, excluding protocol overhead, and excluding retransmitted data packets.
Wikipedia Goodput link
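As a rough example (typical header sizes assumed, no retransmissions or TCP options), the goodput of a bulk transfer over Ethernet/IP/TCP works out like this:

```python
payload = 1460              # TCP payload bytes per segment (typical MSS)
tcp_ip = 20 + 20            # TCP + IPv4 headers, no options
ethernet = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap

wire_bytes = payload + tcp_ip + ethernet   # 1538 bytes on the wire per segment
efficiency = payload / wire_bytes
print(f"goodput on a 1 Gbps link ~ {efficiency * 1000:.0f} Mbps")   # ~949 Mbps
```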
The problem you describe arises in communications that are synchronous in nature. If there were no need to acknowledge receipt of information, and it were certain to arrive, then the sender could send as fast as possible and the throughput would be good regardless of the latency.
When things do need to be acknowledged, it is this synchronisation that causes the drop in throughput, and the degree to which the communication (i.e. the sending of acknowledgments) is allowed to be asynchronous controls how much it hurts the throughput.
'Round-trip time' links latency and number of turns.
Or: Network latency is a function of two things:
(i) round-trip time (the time it takes to complete a trip across the network); and
(ii) the number of times the application has to traverse it (aka turns).
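One way to see the effect of turns: the completion time of a chatty exchange is roughly turns x RTT plus the serialization time of the data, so on a high-latency path the turns dominate no matter how fat the pipe is. A sketch with assumed numbers:

```python
def completion_time_s(turns, rtt_s, payload_bytes, bandwidth_bps):
    # Each request/response turn costs a full round trip, on top of the time
    # needed to push the bytes themselves through the pipe.
    return turns * rtt_s + payload_bytes * 8 / bandwidth_bps

# 1 MB exchanged over a 100 Mbps link with a 50 ms RTT:
print(completion_time_s(turns=1, rtt_s=0.05, payload_bytes=1_000_000,
                        bandwidth_bps=100e6))   # 0.13 s
print(completion_time_s(turns=20, rtt_s=0.05, payload_bytes=1_000_000,
                        bandwidth_bps=100e6))   # 1.08 s - latency dominates
```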