Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have often heard that network bandwidth is measured in bits per second, for example 500 Mbit/s. But when reading some networking texts, I see statements like:
"Coaxial cable supports bandwidths up to 600 MHz."
Why do they say that? And what is the relationship between MHz and kbit/s?
Coaxial cable is not a "network" but a transmission medium. For physical reasons its bandwidth must be measured in Hz: we are talking here about electromagnetic signals, not bits.
When you move to the "network" side, in particular digital networks, capacity is measured in bit/s. Note that while an increase in bandwidth (MHz) will generally lead to an increase in bit/s, the final bit rate depends on many factors, for example the digital modulation scheme (at a low level) and the network protocol (at a higher level). A typical case is the "symbol" representation, which tells you how many bits are carried by a single "pulse".
But the subject is really huge and cannot be covered in a single answer here; I recommend reading a good book on electrical communications to get a clear picture of it.
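To make the "symbol" idea concrete, here is a minimal sketch (Python, with assumed example numbers): a modulation scheme with a 16-point constellation carries log2(16) = 4 bits per symbol, so the bit rate is the symbol rate times four.

```python
import math

def bit_rate_bps(symbol_rate: float, constellation_size: int) -> float:
    """Bit rate = symbol rate (symbols/s) x bits carried per symbol."""
    bits_per_symbol = math.log2(constellation_size)
    return symbol_rate * bits_per_symbol

# 1 Msymbol/s with a 16-point (16-QAM-style) constellation -> 4 Mbit/s
print(bit_rate_bps(1_000_000, 16))  # 4000000.0
```

The same symbol rate with a larger constellation yields a higher bit rate, which is exactly why MHz alone does not determine bit/s.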
That's the bandwidth of the signal that can be sent through the cable. You might want to read about the Nyquist–Shannon sampling theorem to see how that relates to the data that can be transmitted.
How MHz relates to kbit/s depends on the method used for transmitting the data, which is why you'll see cables rated with a bandwidth in MHz like the one you've seen.
We are dealing with a bit of abuse of terminology. Originally "bandwidth" means the width of the band that you have available to transmit (or receive) on. The term has been co-opted to also mean the amount of digital data you can transmit (or receive) on a line per unit time.
Here's an example of the original meaning. FM radio stations are spaced 200 kHz apart. You can have a station on 95.1 MHz and another one on 94.9 MHz and another one on 95.3 MHz, but none in between. The bandwidth available to any given FM radio station is 200 kHz (actually it may be less than that if there is a built-in buffer zone of no-man's-land frequencies between stations, I don't know).
The bandwidth rating of something like a coaxial cable is the range of frequencies of the electromagnetic waves that it is designed to transmit reliably. Outside that range the physical properties of the cable cause it to not reliably transmit signals.
With (digital) computers, bandwidth almost always has the alternate meaning of data capacity per unit time. The two are related, though: more available analog bandwidth lets you use a technology that transmits more digital data at the same time over that carrier.
Closed 8 years ago.
A lot of ISPs sell their products saying: 100 Mbit/s speed.
However, compare the Internet to a package delivery service, UPS for example.
The amount of packages you can send every second (bandwidth) is something different than the time it takes for them to arrive (speed).
I know there are multiple meanings of the term 'bandwidth', so is it wrong to advertise with speed?
Wikipedia (http://en.wikipedia.org/wiki/Bandwidth_(computing)):
In computer networking and computer science, bandwidth, network bandwidth, data bandwidth, or digital bandwidth is a measurement of bit-rate of available or consumed data communication resources expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
This part tells me that bandwidth is measured in Mbit/s, Gbit/s.
So does this mean the majority of ISPs are advertising wrongly, and that they should advertise 'bandwidth' instead of speed?
Short answer: Yes.
Long answer: there are several aspects of data transfer that can be measured on an amount-per-time basis. The amount of data per second is one of them, but it can be misleading if not properly explained.
From the network performance point of view, these are the important factors (quoting Wikipedia here):
Bandwidth - maximum rate that information can be transferred
Throughput - the actual rate that information is transferred
Latency - the delay between the sender transmitting the information and the receiver decoding it
Jitter - variation in the time of arrival at the receiver of the information
Error rate - corrupted data expressed as a percentage or fraction of the total sent
So you may have a 10 Mbit/s connection, but if 50% of the sent packets are corrupted, your final throughput is actually just 5 Mbit/s (even less, if you consider that a substantial part of the data may be control structures rather than payload).
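As a rough sketch of that arithmetic (Python; the figures are the example ones from this answer):

```python
def throughput_mbps(link_rate_mbps: float, error_rate: float,
                    payload_fraction: float = 1.0) -> float:
    """Usable throughput after discarding corrupt packets and
    (optionally) subtracting the non-payload share of each packet."""
    return link_rate_mbps * (1 - error_rate) * payload_fraction

print(throughput_mbps(10, 0.5))        # 5.0 -- half the packets corrupt
print(throughput_mbps(10, 0.5, 0.94))  # lower still once headers are counted
```

The 94% payload fraction in the second call is an assumed illustration of the "control structures" caveat, not a measured number.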
Latency may be affected by mechanisms such as Nagle's algorithm and ISP-side buffering:
As specified in RFC 1149, an ISP could sell you an IPoAC package with 1 Gbit/s, and still be true to its word, if it sent you 16 pigeons with 32 GB SD cards attached to them, with an average air time of around one hour, i.e. roughly 3,600,000 ms of latency.
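A quick check of the pigeon arithmetic (Python; 16 pigeons, 32 GB cards and one hour of flight are the figures assumed above):

```python
def pigeon_bandwidth_bps(pigeons: int, card_gb: float, flight_s: float) -> float:
    """Total payload carried, in bits, divided by the flight time."""
    total_bits = pigeons * card_gb * 1e9 * 8
    return total_bits / flight_s

rate = pigeon_bandwidth_bps(16, 32, 3600)
print(rate / 1e9)  # ~1.14 Gbit/s of bandwidth -- with an hour of latency
```

Enormous bandwidth, unusable latency: exactly the distinction the question asks about.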
In EE, bandwidth = the width of a frequency band, measured in Hz; in CS, bandwidth = information-carrying capacity, in bit/s.
So what is the relation between Hz and bit/s, and how do you convert between them?
The relation depends on the modulation scheme and is called modulation efficiency. A figure like 20 (bit/s)/Hz is reasonable. So if your modem has a bandwidth of 3 kHz, you could expect 3000 × 20 bits per second, or 60 kbit/s.
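In sketch form (Python; the 20 (bit/s)/Hz figure is just this answer's example, real schemes vary widely):

```python
def bit_rate_bps(bandwidth_hz: float, efficiency_bps_per_hz: float) -> float:
    """Digital rate = analog bandwidth (Hz) x modulation efficiency ((bit/s)/Hz)."""
    return bandwidth_hz * efficiency_bps_per_hz

print(bit_rate_bps(3_000, 20))  # 60000 -- the 3 kHz modem example above
```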
Closed 10 years ago.
What is the potential maximum speed of an RS-232 serial port on a modern PC? I know that the specification says 115200 bps, but I believe it can go faster. What influences the speed of the RS-232 port? I believe it is the quartz resonator, but I am not sure.
This goes back to the original IBM PC. The engineers who designed it needed a cheap way to generate a stable frequency, and turned to crystals that were already in wide use at the time: the kind used in every color TV in the USA, made to run an oscillator circuit at the color-burst frequency of the NTSC television standard, which is 315/88 = 3.579545 MHz. From there, the clock first went through a programmable divider, the one you change to set the baud rate. The UART itself then divides it by 16 to generate the sub-sampling clock for the data line.
So the highest baud rate you can get is obtained by setting the divider to its smallest value, 2, which produces 3579545 / 2 / 16 = 111861 baud. That is about a 2.9% error from the ideal 115200 baud rate, but close enough: the clock rate doesn't have to be exact, since the point of asynchronous signalling, the A in UART, is that the start bit always re-synchronizes the receiver.
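The divider arithmetic above can be reproduced directly (Python sketch using the figures from this answer):

```python
CRYSTAL_HZ = 3_579_545  # NTSC color-burst frequency, 315/88 MHz

def baud_rate(divisor: int) -> float:
    """Crystal -> programmable divider -> the UART's fixed divide-by-16."""
    return CRYSTAL_HZ / divisor / 16

print(round(baud_rate(2)))  # 111861 -- the highest rate, vs. a nominal 115200
```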
Getting real RS-232 hardware running at 115200 baud reliably is a significant challenge. The electrical standard is very sensitive to noise, there is no attempt at canceling induced noise and no attempt at creating an impedance-matched transmission line. The maximum recommended cable length at 9600 baud is only 50 feet. At 115200 only very short cables will do in practice. To go further you need a different approach, like RS-422's differential signals.
This is all ancient history and doesn't exactly apply to modern hardware anymore. True serial hardware based on a UART chip like the 16550 has been disappearing rapidly, replaced by USB emulators, which use a custom driver to emulate a serial port. They accept a baud-rate selection but simply ignore it for the USB bus itself; it only applies to the last half-inch inside the dongle you plug into the device. Whether or not the driver accepts values above 115200 is a driver implementation detail; they usually accept higher values.
The maximum speed is limited by the specs of the UART hardware.
I believe the "classical" PC UART (the 16550) in modern implementations can handle at least 1.5 Mbps. If you use a USB-based serial adapter, there's no 16550 involved and the limit is instead set by the specific chip(s) used in the adapter, of course.
I regularly use an RS-232 link running at 460,800 bps, with a USB-based adapter.
In response to the comment about clocking (with a caveat: I'm a software guy): asynchronous serial communication doesn't transmit the clock (that's the asynchronous part right there) along with the data. Instead, transmitter and receiver are supposed to agree beforehand about which bitrate to use.
A start bit on the data line signals the start of each "character" (typically a byte, framed by start/stop/parity bits). The receiver then starts sampling the data line in order to determine if it's a 0 or a 1. This sampling is typically done at least 16 times faster than the actual bit rate, to make sure the reading is stable. So for a UART communicating at 460,800 bps like I mentioned above, the receiver will be sampling the RX signal at around 7.4 MHz. This means that even if you clock the actual UART with a raw frequency f, you can't expect it to reliably receive data at that rate: there is oversampling overhead.
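The oversampling arithmetic from that paragraph (Python, assuming the usual 16x factor):

```python
def sampling_rate_hz(baud: int, oversample: int = 16) -> int:
    """A UART receiver samples the RX line `oversample` times per bit."""
    return baud * oversample

print(sampling_rate_hz(460_800))  # 7372800, i.e. roughly 7.4 MHz
```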
Yes, it is possible to run at higher speeds, but the major limitation is the environment: in a noisy environment there will be more corrupt data, limiting the usable speed. Another limitation is the length of the cable between the devices; you may need to add a repeater or some other device to strengthen the signal.
I often hear people talking about a network's speed in terms of "bandwidth", and I read the following definition from Computer Networks: A Systems Approach:
The bandwidth of a network is given by the number of bits that can be transmitted over the network in a certain period of time.
AFAIK, the word "bandwidth" is used to describe the width of the frequency range that can pass through some kind of medium. The above definition describes something more like throughput. So is it a misuse?
I have been thinking about this question for some time. I don't know where to post it. So forgive me if it is off topic.
Thanks.
Update - 1 - 9:56 AM 1/13/2011
I recall that if a signal's cycle is shorter in the time domain, its band will be wider in the frequency domain. So if the bit rate (digital bandwidth) is high, the signal's cycle must be quite short, and the analog bandwidth it requires will be quite wide. But a medium has its physical limits: there is a widest frequency range it allows to pass, and hence a highest bit rate it allows to transmit. From this point of view, I think the "misuse" of bandwidth in the digital world is acceptable.
The word bandwidth has more than one definition:
Bandwidth has several related meanings:
Bandwidth (computing) or digital bandwidth: a rate of data transfer, throughput or bit rate, measured in bits per second (bps), by analogy to signal processing bandwidth
Bandwidth (signal processing) or analog bandwidth, frequency bandwidth or radio bandwidth: a measure of the width of a range of frequencies, measured in hertz
...
With both definitions having more bandwidth means that you can send more data.
In computer networking and other digital fields, the term bandwidth often refers to a data rate measured in bits per second, for example network throughput, sometimes denoted network bandwidth, data bandwidth or digital bandwidth. The reason is that according to Hartley's law, the digital data rate limit (or channel capacity) of a physical communication link is proportional to its bandwidth in hertz, sometimes denoted radio frequency (RF) bandwidth, signal bandwidth, frequency bandwidth, spectral bandwidth or analog bandwidth. For bandwidth as a computing term, less ambiguous terms are bit rate, throughput, maximum throughput, goodput or channel capacity.
(Source)
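Hartley's law, in its refined form as the Shannon–Hartley theorem, makes that proportionality concrete: C = B·log2(1 + S/N). A sketch in Python, with an assumed 30 dB SNR roughly typical of an analog phone line:

```python
import math

def channel_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley: C = B * log2(1 + S/N), with S/N converted to linear."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A ~3.1 kHz voice channel at 30 dB SNR tops out near 31 kbit/s:
print(round(channel_capacity_bps(3_100, 30)))  # 30898
```

Doubling the Hz doubles the capacity at a fixed SNR, which is exactly why the analog and digital senses of "bandwidth" track each other.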
Bandwidth is only one aspect of network speed. Delay is also important.
The term "bandwidth" is not a precise term, it may mean:
the clock frequency multiplied by the number of bits transmitted per clock tick - the physical bandwidth,
minus bytes used for low-level error corrections, checksums (e.g. FEC in DVB),
minus bytes used by transmit protocol for addressing or other meta info (e.g. IP headers),
minus the time overhead of the handshake/transmit control (see TCP),
minus the time overhead of the administration of connection (e.g. DNS),
minus time spent on authentication (seeking user name on the host side),
minus time spent on receiving and handling the packet (e.g. an FTP server/client writes out the block of data received) - effective bandwidth, or throughput.
The best we can do is to always explain which kind of bandwidth we mean: with or without protocol overhead, etc. Also, users are often interested only in the final real-world value: how long does it take to download that stuff?
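A toy model of the layered subtraction listed above (Python; every overhead fraction here is assumed purely for illustration):

```python
def effective_bps(physical_bps: float, *overhead_fractions: float) -> float:
    """Peel each layer of overhead off the physical bit rate in turn."""
    rate = physical_bps
    for fraction in overhead_fractions:  # e.g. FEC, headers, transmit control
        rate *= 1 - fraction
    return rate

# 100 Mbit/s link minus 4% FEC, 3% headers, 5% transmit control:
print(effective_bps(100e6, 0.04, 0.03, 0.05) / 1e6)  # ~88.5 Mbit/s
```

Time-based costs (handshakes, DNS, authentication) don't really multiply this way, so treat this as a first-order estimate of the physical-vs-effective gap, not a full model.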
I am writing a networking application. It has some unexpected lags. I need to calculate some figures, but I can't find one piece of information: how many bits can be transferred through an Ethernet connection on each tick.
I know that the resulting transfer rate is 100 Mbit/s or 1 Gbit/s. But Ethernet must use hardware clock ticks to keep both ends in sync, I suppose, so it moves data in ticks.
So the question is: how many ticks per second, or how many bits per tick, does Ethernet use?
The actual connection is 100 Mbit/s full-duplex.
I found the answer myself. Here is the article: http://discountcablesusa.com/ethernetcables.html
The first table says 100BaseTX has a frequency of 31.25 MHz and a signal rate of 125 Mbit/s (the extra 25 Mbit/s is line-coding overhead from 4B/5B encoding, so we receive 100 Mbit/s of "useful" data).
Hence 125 / 31.25 = 4 bits per transmission cycle, which is roughly the answer to my question.
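That arithmetic, checked (Python; figures as given in the linked table):

```python
signal_rate_mbps = 125   # bits on the wire per microsecond
fundamental_mhz = 31.25  # fundamental frequency from the table

bits_per_cycle = signal_rate_mbps / fundamental_mhz
payload_mbps = signal_rate_mbps * 4 / 5  # 4B/5B: 4 data bits per 5 line bits

print(bits_per_cycle)  # 4.0
print(payload_mbps)    # 100.0
```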