A lot of ISPs sell their products saying: 100 Mbit/s speed.
However, compare the internet to a package delivery service, UPS for example.
The amount of packages you can send every second (bandwidth) is something different than the time it takes for them to arrive (speed).
I know there are multiple meanings of the term 'bandwidth', so is it wrong to advertise with speed?
Wikipedia (http://en.wikipedia.org/wiki/Bandwidth_(computing)):
In computer networking and computer science, bandwidth,[1] network bandwidth,[2] data bandwidth,[3] or digital bandwidth[4][5] is a measurement of bit-rate of available or consumed data communication resources expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
This part tells me that bandwidth is measured in Mbit/s, Gbit/s.
So does this mean the majority of ISPs are advertising incorrectly, when they should be advertising 'bandwidth' instead of speed?
Short answer: Yes.
Long answer: There are several aspects of data transfer that can be measured on an amount-per-time basis; the amount of data per second is one of them, but it is perhaps misleading if not properly explained.
From the network performance point of view, these are the important factors (quoting Wikipedia here):
Bandwidth - maximum rate that information can be transferred
Throughput - the actual rate that information is transferred
Latency - the delay between the sender transmitting the information and the receiver decoding it
Jitter - variation in the time of arrival at the receiver of the information
Error rate - corrupted data expressed as a percentage or fraction of the total sent
So you may have a 10 Mbit/s connection, but if 50% of the sent packets are corrupted, your final throughput is actually just 5 Mbit/s (even less, if you consider that a substantial part of the data may be control structures instead of data payload).
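To make that arithmetic concrete, here is a minimal Python sketch; the 5% header-overhead figure and the function name are assumptions for illustration, not measured values:

    # Rough estimate of the usable rate on a link, given the advertised rate,
    # the fraction of packets lost or corrupted, and the share of each packet
    # taken up by headers / control structures.
    def effective_throughput(link_rate_mbps, error_rate, overhead_fraction):
        surviving = link_rate_mbps * (1.0 - error_rate)   # data that arrives intact
        return surviving * (1.0 - overhead_fraction)      # minus headers / control data

    # The 10 Mbit/s example above: 50% corruption, plus an assumed 5% of overhead.
    print(effective_throughput(10.0, 0.50, 0.05))  # ~4.75 Mbit/s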
Latency may be affected by mechanisms such as Nagle's algorithm and ISP-side buffering:
As specified in RFC 1149, an ISP could sell you an IPoAC package advertising roughly 1 Gbit/s and still be true to its word if it sent you 16 pigeons with 32 GB SD cards attached to them: 16 × 32 GB ≈ 4,096 Gbit per trip, with an average air time of around one hour, works out to about 1.1 Gbit/s of bandwidth - at roughly 3,600,000 ms of latency.
I have an application installed on my phone which provides the following details every minute: bandwidth, packet loss, signal strength, and RTT for google.com.
I am trying to predict congestion based on these 4 attributes, but somehow it doesn't look accurate to me; previously I had only used bandwidth.
I want to predict congestion at any point more appropriately, and I'd appreciate any recommendations.
I think you are saying you are trying to measure network 'responsiveness', and from these measurements get a sense of how congested the network is. You also mention you want to predict, which I guess means you want to estimate future 'responsiveness' based on your measurements and observations.
The items you are measuring look sensible, although you may want to include jitter if you are interested in VoIP or other real time streamed media.
The issue you have is that there are many variables which can affect your measurements, for example:
congestion in the radio cell you are in at the time
congestion in the backhaul network
delays in the server you are using to measure the RTT
congestion or faults with the particular APN your mobile is using to access data services
network faults
As some of these occur irregularly but can have a large impact, it is quite hard to build up an accurate view of the overall network 'responsiveness' with a single handset. For example, your local cell may be busy or have a problem while other users of Google.com in other cells get perfectly good responses, or Google.com may be busy or delayed while other users in your cell accessing a different server again get perfectly good responses.
It would likely be useful for you to look at some of the generally available web speedtest applications to see the type of information they provide - they have the advantage of being able to gather results from many thousands of users, and also generally have access to the servers to understand any issues on that side.
Depending on what you are trying to achieve, measurements from one of the general speedtest services, combined with your own measurements, may give you enough data to draw some meaningful conclusions.
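If it helps as a starting point, here is a minimal Python sketch of one naive way to fold the four measurements into a single 'congestion score'; the baseline values, weights and function name are invented for illustration and would need tuning against your own data:

    # Naive congestion score: compare each measurement against a baseline taken
    # when the network felt healthy, and combine the deviations with weights.
    def congestion_score(sample, baseline, weights):
        score = 0.0
        score += weights["rtt"] * (sample["rtt_ms"] / baseline["rtt_ms"])             # higher RTT -> worse
        score += weights["loss"] * (1.0 + sample["loss_pct"] / 100.0)                 # more loss -> worse
        score += weights["bw"] * (baseline["bw_mbps"] / max(sample["bw_mbps"], 0.1))  # less bandwidth -> worse
        score += weights["signal"] * (sample["signal_dbm"] / baseline["signal_dbm"])  # both negative dBm: weaker signal -> ratio > 1
        return score

    baseline = {"rtt_ms": 40, "loss_pct": 0, "bw_mbps": 20, "signal_dbm": -70}
    weights = {"rtt": 0.4, "loss": 0.3, "bw": 0.2, "signal": 0.1}
    sample = {"rtt_ms": 120, "loss_pct": 5, "bw_mbps": 8, "signal_dbm": -95}
    print(congestion_score(sample, baseline, weights))  # ~2.15; a healthy sample scores ~1.0

Anything much above 1.0 relative to the healthy baseline would then hint at congestion, though as noted above a single handset can easily be misled by cell-local or server-side effects.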
Maybe it's a stupid question!
Assume a P2P network in which peers independently try to find and connect to good nodes.
Good nodes are those that are closer (in terms of RTT) and have higher bandwidth.
In this scenario RTT is more important (for example, RTT has 90% weight).
I want to obtain a linear combination of RTT and bandwidth in a meaningful way,
but it's obvious that the nature of these two metrics is inconsistent.
How can I combine these two metrics?
One of the key unicast routing protocols (it is point-to-point, not P2P), EIGRP, uses several metrics to come up with a single composite metric. EIGRP is a very popular routing protocol, and delay (akin to RTT) and bandwidth are typically its two key parameters. So your question is a relevant one!
This should help:
http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6554/ps6599/ps6630/whitepaper_C11-720525.html
Given the nature of bandwidth and delay, you could have multiple peers scoring the same final metric. For example, you could have one peer, P1, with values B and D, and another peer, P2, with values 2B and 2D. Assuming the bandwidth represents the capacity of the pipe, you would be able to transport a given amount of data with both of these peers in the same time!
You should certainly start with a default pair of weights, let us say 0.5/0.5, for these two parameters. But you might find that for a given goal (faster routing convergence, say), a different set of weights works better.
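As a rough illustration of how to make the two units comparable before applying weights, here is a minimal Python sketch; the reference values (100 ms, 100 Mbit/s) and the function name are arbitrary assumptions, not part of any protocol:

    # Normalise each metric against a reference value so the units cancel out,
    # then weight them. Lower score = better peer (low RTT, high bandwidth).
    def peer_score(rtt_ms, bw_mbps, ref_rtt_ms=100.0, ref_bw_mbps=100.0,
                   w_rtt=0.9, w_bw=0.1):
        rtt_part = rtt_ms / ref_rtt_ms              # smaller is better
        bw_part = ref_bw_mbps / max(bw_mbps, 0.1)   # inverted: more bandwidth -> smaller part
        return w_rtt * rtt_part + w_bw * bw_part

    # (rtt_ms, bw_mbps) per peer - made-up numbers:
    peers = {"A": (20, 50), "B": (80, 200), "C": (40, 10)}
    best = min(peers, key=lambda p: peer_score(*peers[p]))
    print(best)  # "A" with these values and a 0.9/0.1 weighting

EIGRP does something conceptually similar: with its default K values the composite metric reduces to a function of the minimum bandwidth along the path and the cumulative delay, scaled into a single dimensionless number (see the Cisco whitepaper linked above).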
I usually hear that network bandwidth is measured in bits per second, for example 500 Mbps. But when I was reading some network-related text, it said:
"Coaxial cable supports bandwidths up to 600 MHz"?
Why do they say that? And what is the relationship between MHz and Kb?
Coaxial cable is not a "network" but a transmission medium. For physical reasons its bandwidth must be measured in Hz; consider that we are talking here about electromagnetic signals, not bits.
When you move to the "network" side, in particular digital networks, the capacity is measured in bps. Note that while a bandwidth (MHz) increase will generally lead to a bps increase, the final bps depends on many factors, for example the digital modulation scheme (at a low level) and the network protocol (at a higher level). A typical case is the "symbol" representation, which tells you how many bits are sent in a single "pulse".
But the subject is really huge and cannot be covered in a single answer here; I recommend reading a good book on electrical communications to get a clear picture of it.
That's the bandwidth of the signal that can be sent through the cable. You might want to have a read of the Nyquist-Shannon sampling theorem to see how that relates to the data that can be transmitted.
How the MHz relate to the Kb depends on the method used for transmitting the data, which is why you'll see cables rated with a bandwidth in MHz like the one you've seen.
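To make the MHz-to-bit/s relationship concrete, here is a minimal Python sketch using the Shannon-Hartley capacity formula; the 600 MHz bandwidth and 30 dB SNR describe an idealized, hypothetical channel, not real coax:

    import math

    # Shannon-Hartley: the maximum error-free bit rate C of a channel is
    # C = B * log2(1 + SNR), with B the analog bandwidth in Hz and SNR the
    # linear (not dB) signal-to-noise ratio.
    def channel_capacity_bps(bandwidth_hz, snr_db):
        snr_linear = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # A hypothetical 600 MHz channel with a 30 dB signal-to-noise ratio:
    print(channel_capacity_bps(600e6, 30) / 1e9, "Gbit/s")  # ~5.98 Gbit/s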
We are dealing with a bit of abuse of terminology. Originally "bandwidth" means the width of the band that you have available to transmit (or receive) on. The term has been co-opted to also mean the amount of digital data you can transmit (or receive) on a line per unit time.
Here's an example of the original meaning. FM radio stations are spaced 200 kHz apart. You can have a station on 95.1 MHz and another one on 94.9 MHz and another one on 95.3 MHz, but none in between. The bandwidth available to any given FM radio station is 200 kHz (actually it may be less than that if there is a built-in buffer zone of no-man's-land frequencies between stations, I don't know).
The bandwidth rating of something like a coaxial cable is the range of frequencies of the electromagnetic waves that it is designed to transmit reliably. Outside that range the physical properties of the cable cause it to not reliably transmit signals.
With (digital) computers, bandwidth almost always has the alternate meaning of data capacity per unit time. It's related, though, because obviously more available analog bandwidth lets you use a technology that transmits more (digital) data at the same time over that carrier.
I often heard people talking about a network's speed in terms of "bandwidth", and I read the following definition from the book "Computer Networks: A Systems Approach":
The bandwidth of a network is given by the number of bits that can be transmitted over the network in a certain period of time.
AFAIK, the word "bandwidth" is used to describe the width of the frequency range that can be passed over some kind of medium, and the above definition describes something more like throughput. So is it a misuse?
I have been thinking about this question for some time. I don't know where to post it. So forgive me if it is off topic.
Thanks.
Update - 1 - 9:56 AM 1/13/2011
I recall that if a signal's cycle is shorter in the time domain, its frequency band will be wider in the frequency domain. So if the bit rate (digital bandwidth) is high, the signal's cycle should be quite short, and thus the analog bandwidth required will be quite wide. But a medium has its physical limits: there is a maximum frequency range it allows to pass, and therefore a maximum bit rate it allows to transmit. From this point of view, I think the 'misuse' of bandwidth in the digital world is acceptable.
The word bandwidth has more than one definition:
Bandwidth has several related meanings:
Bandwidth (computing) or digital bandwidth: a rate of data transfer, throughput or bit rate, measured in bits per second (bps), by analogy to signal processing bandwidth
Bandwidth (signal processing) or analog bandwidth, frequency bandwidth or radio bandwidth: a measure of the width of a range of frequencies, measured in hertz
...
With both definitions, having more bandwidth means that you can send more data.
In computer networking and other digital fields, the term bandwidth often refers to a data rate measured in bits per second, for example network throughput, sometimes denoted network bandwidth, data bandwidth or digital bandwidth. The reason is that according to Hartley's law, the digital data rate limit (or channel capacity) of a physical communication link is proportional to its bandwidth in hertz, sometimes denoted radio frequency (RF) bandwidth, signal bandwidth, frequency bandwidth, spectral bandwidth or analog bandwidth. For bandwidth as a computing term, less ambiguous terms are bit rate, throughput, maximum throughput, goodput or channel capacity.
(Source)
Bandwidth is only one aspect of network speed. Delay is also important.
The term "bandwidth" is not a precise term; it may mean:
the clock frequency multiplied by the no-of-bits-transmitted-in-a-clock-tick - physical bandwidth,
minus bytes used for low-level error corrections, checksums (e.g. FEC in DVB),
minus bytes used by transmit protocol for addressing or other meta info (e.g. IP headers),
minus the time overhead of the handshake/transmit control (see TCP),
minus the time overhead of the administration of connection (e.g. DNS),
minus time spent on authentication (seeking user name on the host side),
minus time spent on receiving and handling the packet (e.g. an FTP server/client writes out the block of data received) - effective bandwidth, or throughput.
The best we can do is to always explain what kind of bandwidth we mean: with or without protocol overhead, etc. Also, users are often interested only in the final bottom-line value: how long does it take to download that stuff?
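As a rough illustration of how much the protocol-overhead items alone can cost, here is a minimal Python sketch for full-size TCP/IPv4 segments over Ethernet; it assumes standard header sizes with no options and ignores retransmissions, handshakes, DNS and the other time overheads listed above:

    # Fraction of a raw 100 Mbit/s Ethernet link actually available to
    # application data when sending full-size frames.
    LINK_RATE_MBPS = 100.0
    MTU = 1500                        # Ethernet payload per frame (bytes)
    ETH_OVERHEAD = 14 + 4 + 8 + 12    # header + FCS + preamble + inter-frame gap (bytes)
    IP_HEADER = 20                    # IPv4 header without options
    TCP_HEADER = 20                   # TCP header without options

    payload = MTU - IP_HEADER - TCP_HEADER    # 1460 bytes of application data
    on_the_wire = MTU + ETH_OVERHEAD          # 1538 bytes actually transmitted
    efficiency = payload / on_the_wire        # ~0.949
    print(LINK_RATE_MBPS * efficiency, "Mbit/s of application data")  # ~94.9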
I understand latency - the time it takes for a message to go from sender to recipient - and bandwidth - the maximum amount of data that can be transferred over a given time - but I am struggling to find the right term to describe a related thing:
If a protocol is conversation-based - the payload is split up over many to-and-fros between the ends - then latency affects 'throughput'.[1]
[1] What is this called, and is there a nice concise explanation of this?
Surfing the web, trying to optimize the performance of my NAS (nas4free), I came across a page that described the answer to this question (imho). Specifically, this section caught my eye:
"In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn’t send more until it receives an acknowledgement from the remote host that all data was received. This is called the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together. This “windowing” happens so fast we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called “Bandwidth Delay Product,” or BDP."
This sounds like the answer to your question.
BDP as Wikipedia describes it
To conclude, it's called Bandwidth Delay Product (BDP) and the shortest explanation I've found is the one above. (Flexo has noted this in his comment too.)
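Here is a quick sketch of the arithmetic behind BDP, and of why a fixed TCP window caps throughput as RTT grows; the 64 KiB window and the RTT values are just illustrative assumptions:

    # Bandwidth-delay product: how much data must be 'in flight' to keep a link busy.
    # If the TCP window is smaller than the BDP, throughput is capped at window / RTT.
    def bdp_bytes(bandwidth_bps, rtt_s):
        return bandwidth_bps * rtt_s / 8

    def window_limited_throughput_bps(window_bytes, rtt_s):
        return window_bytes * 8 / rtt_s

    link = 100e6            # 100 Mbit/s link
    window = 64 * 1024      # classic 64 KiB TCP window (no window scaling)
    for rtt_ms in (1, 20, 100):
        rtt = rtt_ms / 1000
        cap = min(link, window_limited_throughput_bps(window, rtt))
        print(rtt_ms, "ms RTT: BDP =", round(bdp_bytes(link, rtt) / 1024, 1),
              "KiB, max throughput =", round(cap / 1e6, 1), "Mbit/s")
    # At 1 ms the window easily fills the pipe (100 Mbit/s);
    # at 100 ms the same window caps you at ~5.2 Mbit/s.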
Could goodput be the term you are looking for?
According to wikipedia:
In computer networks, goodput is the application level throughput, i.e. the number of useful bits per unit of time forwarded by the network from a certain source address to a certain destination, excluding protocol overhead, and excluding retransmitted data packets.
Wikipedia Goodput link
The problem you describe arises in communications which are synchronous in nature. If there was no need to acknowledge receipt of information and it was certain to arrive then the sender could send as fast as possible and the throughput would be good regardless of the latency.
When there is a requirement for things to be acknowledged, it is this synchronisation that causes the drop in throughput, and the degree to which the communication (i.e. the sending of acknowledgments) is allowed to be asynchronous controls how much it hurts the throughput.
'Round-trip time' links latency and number of turns.
Or: Network latency is a function of two things:
(i) round-trip time (the time it takes to complete a trip across the network); and
(ii) the number of times the application has to traverse it (aka turns).
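As a small worked example of how the two factors combine for a chatty protocol: total time is roughly turns × RTT plus payload / bandwidth. The numbers below are purely illustrative:

    # Rough response-time model for a 'chatty' protocol: every turn costs one RTT,
    # on top of the time needed to push the payload through the pipe.
    def response_time_s(turns, rtt_ms, payload_bytes, bandwidth_bps):
        return turns * rtt_ms / 1000 + payload_bytes * 8 / bandwidth_bps

    # 1 MB transferred over a 100 Mbit/s link:
    print(response_time_s(1, 20, 1_000_000, 100e6))   # ~0.10 s - bandwidth dominates
    print(response_time_s(50, 20, 1_000_000, 100e6))  # ~1.08 s - latency dominates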