What is the relation between Hz and bps? - networking

In EE, bandwidth means the width of a frequency band, measured in Hz; in CS it means information-carrying capacity, measured in bits/sec.
So what is the relation between Hz and bps, and how do you convert between the two?

This is a place for software not hardware
You did not give enough information or context and showed no effort towards a solution
The relation depends on the modulation scheme and is called modulation efficiency. A number like 20 (bits per second)/(Hz) is reasonable. So, if I say your modem has a bandwidth of 3 kHz, you should expect 3000 * 20 bits per second, or 60 kbit/s.
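To make the arithmetic concrete, here is a minimal Python sketch of that conversion; the 20 (bit/s)/Hz figure is simply the number from the answer above, and real efficiencies depend on the modulation scheme:

    # Sketch: convert analog bandwidth (Hz) to an estimated data rate (bit/s)
    # using a modulation/spectral efficiency figure. The efficiency value is
    # taken from the answer above, not a universal constant.
    def data_rate_bps(bandwidth_hz: float, efficiency_bps_per_hz: float) -> float:
        """Estimate data rate as bandwidth times modulation efficiency."""
        return bandwidth_hz * efficiency_bps_per_hz

    # Example from the answer: a 3 kHz channel at 20 (bit/s)/Hz.
    print(data_rate_bps(3_000, 20))  # 60000 bit/s, i.e. 60 kbit/s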

Related

Finding distance between 2 devices

I wanted to know if there is any efficient way of finding the distance between two devices (a transmitter and a receiver) which is accurate to at least the order of a couple of inches.
Basically, I want to detect the movement of the transmitter relative to the receiver and how far it has moved from its original position.
I was thinking in terms of using a wireless hotspot/Bluetooth connection. I cannot use any form of audio or other medium that can be detected by humans.
Could anybody help me with this?
To my mind, assuming there is no common synchronisation signal between the devices, there are two different ways to do this (neither is really easy):
1. Measure received power: some receivers provide RSSI (Received Signal Strength Indication), a measure of how much power you received. If you know the transmitted power, you can estimate the transmission loss (from the transmission channel) by taking several RSSI measurements at different distances. The loss really depends on the channel (environment, frequency, throughput, ...), so don't change it between measurements. Once you have enough points, fit a curve to them (a sketch of such a fit follows after this list). You can then predict distance from RSSI.
2. Measure round-trip time: this is called RADAR, and is really more difficult, but it is the classic way to measure distance and speed. Broadband systems (like WiFi) are better for this kind of measurement. By the way, you can also do the same with audio for short distances (SONAR), without being detected, if you use frequencies higher than 20 kHz.
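For option 1, here is a minimal Python sketch of such a curve fit, assuming a log-distance path-loss model and using NumPy; the calibration numbers are invented purely for illustration:

    # Fit RSSI-vs-distance measurements to a log-distance path-loss model,
    # then invert the fit to predict distance from a new RSSI reading.
    # The calibration points below are hypothetical.
    import numpy as np

    calibration = [(0.5, -40.0), (1.0, -47.0), (2.0, -53.5), (4.0, -60.0), (8.0, -66.5)]
    d = np.array([c[0] for c in calibration])     # distance in metres
    rssi = np.array([c[1] for c in calibration])  # measured RSSI in dBm

    # Model: RSSI = A - 10*n*log10(d); fit the slope (-10*n) and intercept (A).
    slope, intercept = np.polyfit(np.log10(d), rssi, 1)

    def estimate_distance(rssi_dbm: float) -> float:
        """Invert the fitted model to estimate distance (metres) from RSSI (dBm)."""
        return 10 ** ((rssi_dbm - intercept) / slope)

    print(estimate_distance(-50.0))  # roughly 1-2 m for this made-up fit

Keep in mind that RSSI-based ranging is usually much coarser than a couple of inches; the fit only gives a rough estimate.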

What is the rationale behind bandwidth delay product

My understanding is that Bandwidth delay product refers to the maximum amount of data "in-transit" at any point in time, between two endpoints.
The thing that I don't get is why we multiply bandwidth by RTT. Bandwidth is a function of the underlying medium, such as copper wire or fiber optics, while RTT is a function of how busy the intermediate nodes are, any scheduling applied at those nodes, distance, etc. RTT can change, but bandwidth for practical purposes can be considered fixed. So how does multiplying a constant value (capacity, aka bandwidth) by a fluctuating value (RTT) represent the total amount of data in transit?
Based on this, would a really, really slow link have a very large capacity? Chances are the "causes" of the RTT will start dropping packets.
Look at the units:
[bandwidth] = bytes / second
[round trip time] = seconds
[data volume] = bytes
[data volume] = [bandwidth] * [round trip time].
Unit-wise, it is correct. Semantically, what is bandwidth * round trip time? It's the amount of data that left the sender before the first acknowledgement was received by the sender. That is, bandwidth * round trip time = the desired window size under perfect conditions.
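A small worked example of that product in Python (the link figures are illustrative, not from the question):

    # Bandwidth-delay product: the window needed to keep the link busy while
    # waiting for the first acknowledgement. Figures are illustrative.
    bandwidth_bps = 100_000_000   # 100 Mbit/s link
    rtt_s = 0.050                 # 50 ms round trip time

    bdp_bits = bandwidth_bps * rtt_s
    bdp_bytes = bdp_bits / 8
    print(f"BDP = {bdp_bits:.0f} bits = {bdp_bytes:.0f} bytes")
    # 100e6 bit/s * 0.050 s = 5,000,000 bits = 625,000 bytes that can leave
    # the sender before the first ACK can possibly return.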
If the round trip time is measured from the last packet, and the sender's outbound bandwidth is perfectly stable and fully used, then the computed window size exactly matches the number of packets (data and ACKs together) in transit. If you want only one direction, divide the quantity by two.
Since the round trip time is a measured quantity, it naturally fluctuates (and gets smoothed out). The measured bandwidth could fluctuate as well, and thus the estimated total volume of data in transit fluctuates as well.
Note that the amount of data in transit can vary with the data transfer rate. If the bottleneck is wire delay, then RTT can be considered constant, and the amount of data in transit will be proportional to the speed with which it's sent to the network.
Of course, if a round trip time suddenly rises dramatically, the estimated max. amount of data in transit rises as well, but that is correct. If there is no accompanying packet loss, the sliding window needs to expand. If there is packet loss, you need to reconsider the bandwidth estimate (and the bandwidth delay product drops accordingly).
To add to Jan Dvorak's answer, you can think of the 'big fat pipe' as a garden hose. We are interested in how much water is in the pipe. So we take its 'bandwidth', i.e. how fast it can deliver water (for a hose this is determined by its cross-sectional area), and multiply by its length, which corresponds to the RTT, i.e. how long a drop of water takes to get from one end to the other. The result is the volume of the hose: the amount of data 'in the pipe'.
First, BDP is a calculated value used in performance tuning to determine the upper bound on data which could be outstanding/unacknowledged. It almost never represents the quantity of "in-transit" data; it is a target to which tuning parameters are applied. If it always represented the "in-transit" data, there would be no room for performance tuning.
RTT does in fact fluctuate. This is why the expected worst-case RTT is used in calculations. By tuning to the worst case, throughput efficiency will be at its maximum when the RTT is poorest. If the RTT improves, we get the outstanding ACKs sooner, the pipe remains full, and maximum throughput (efficiency) is maintained.
"Full pipe" is a bit of a misnomer. The goal is to keep the Tx side full, as the Rx side carries ACK packets which are typically smaller than the transmitted packets.
RTT also aggregates asymmetric upstream and downstream bandwidths (ADSL, satellite modem, cable modem, etc.).
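As one concrete example of using the BDP as a tuning target, here is a sketch that sizes a TCP window for a worst-case RTT; it assumes TCP window scaling (the unscaled window field tops out at 65535 bytes), and the link figures are invented:

    # Use the BDP, computed with the worst-case RTT, as the window target and
    # derive the TCP window scale factor needed to cover it.
    # Link figures are invented for illustration.
    import math

    bandwidth_bps = 10_000_000   # 10 Mbit/s
    worst_case_rtt_s = 0.600     # e.g. a satellite hop

    bdp_bytes = bandwidth_bps * worst_case_rtt_s / 8
    scale = max(0, math.ceil(math.log2(bdp_bytes / 65535)))
    print(f"BDP target: {bdp_bytes:.0f} bytes, window scale factor: {scale}")
    # 10e6 * 0.6 / 8 = 750,000 bytes; 750000/65535 is about 11.4, so a
    # scale factor of 4 (windows up to 65535 * 16 bytes) covers it.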

Why is network bandwidth measured in MHz? [closed]

I usually hear that network bandwidth is measured in bits per second, for example 500 Mbps. But when reading some network-related text, I see statements like:
"Coaxial cable supports bandwidths up to 600 MHz."
Why do they say that? And what is the relationship between MHz and Kb?
Coaxial cable is not a "network" but a transmission medium. For physical reasons its bandwidth must be measured in Hz; consider that we are talking here about electromagnetic signals, not bits.
When you move to the "network" side, in particular digital networks, the capacity is measured in bps. Note that while a bandwidth (MHz) increase will generally lead to a bps increase, the final bps depends on many factors, for example the digital modulation scheme (at a low level) and the network protocol (at a higher level). A typical case is the "symbol" representation, which tells you how many bits are sent in a single "pulse".
But the subject is really huge and cannot be covered in a single answer here; I recommend reading a good book on electrical communications to get a clear picture of the subject.
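To illustrate the "bits per pulse" point, here is a minimal sketch; the symbol rate and constellation size are assumptions chosen for illustration, not properties of coaxial cable in general:

    # Raw bit rate = symbol rate * bits per symbol. The figures are assumptions.
    import math

    symbol_rate_baud = 1_000_000      # 1 Msymbol/s, set by the usable analog bandwidth
    constellation_points = 16         # e.g. a 16-point constellation (16-QAM)

    bits_per_symbol = math.log2(constellation_points)        # 4 bits per symbol
    raw_bit_rate = symbol_rate_baud * bits_per_symbol
    print(f"{raw_bit_rate:.0f} bit/s before protocol or coding overhead")  # 4,000,000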
That's the bandwidth of the signal that can be sent through the cable. You might want to read about the Nyquist-Shannon sampling theorem to see how that relates to the data that can be transmitted.
How the MHz relate to Kb depends on the method used for transmitting the data, which is why you'll see cables rated with a bandwidth in MHz, as you've seen.
We are dealing with a bit of an abuse of terminology. Originally, "bandwidth" meant the width of the band that you have available to transmit (or receive) on. The term has been co-opted to also mean the amount of digital data you can transmit (or receive) on a line per unit of time.
Here's an example of the original meaning. FM radio stations are spaced 200 kHz apart. You can have a station on 95.1 MHz and another one on 94.9 MHz and another one on 95.3 MHz, but none in between. The bandwidth available to any given FM radio station is 200 kHz (actually it may be less than that if there is a built-in buffer zone of no-man's-land frequencies between stations, I don't know).
The bandwidth rating of something like a coaxial cable is the range of frequencies of the electromagnetic waves that it is designed to transmit reliably. Outside that range the physical properties of the cable cause it to not reliably transmit signals.
With (digital) computers, bandwidth almost always has the alternate meaning of data capacity per unit of time. The two are related, though, because if you have more available analog bandwidth, it obviously lets you use a technology that transmits more (digital) data at the same time over that carrier.

Is it a misuse to use "bandwidth" to describe the speed of a network?

I often hear people talking about a network's speed in terms of "bandwidth", and I read the following definition in Computer Networks: A Systems Approach:
The bandwidth of a network is given by the number of bits that can be transmitted over the network in a certain period of time.
AFAIK, the word "bandwidth" is used to describe the width of the range of frequencies that can be passed over some kind of medium, and the above definition describes something more like throughput. So is it a misuse?
I have been thinking about this question for some time. I don't know where to post it, so forgive me if it is off-topic.
Thanks.
Update - 1 - 9:56 AM 1/13/2011
I recall that if a signal's period is shorter in the time domain, its frequency band will be wider in the frequency domain. So if the bit rate (digital bandwidth) is high, the signal's period must be quite short, and thus the analog bandwidth it requires will be quite wide. But a medium has its physical limits: there is a highest frequency it can pass, and therefore a highest bit rate it can carry. From this point of view, I think the "misuse" of bandwidth in the digital world is acceptable.
The word bandwidth has more than one definition:
Bandwidth has several related meanings:
Bandwidth (computing) or digital bandwidth: a rate of data transfer, throughput or bit rate, measured in bits per second (bps), by analogy to signal processing bandwidth
Bandwidth (signal processing) or analog bandwidth, frequency bandwidth or radio bandwidth: a measure of the width of a range of frequencies, measured in hertz
...
With both definitions having more bandwidth means that you can send more data.
In computer networking and other digital fields, the term bandwidth often refers to a data rate measured in bits per second, for example network throughput, sometimes denoted network bandwidth, data bandwidth or digital bandwidth. The reason is that according to Hartley's law, the digital data rate limit (or channel capacity) of a physical communication link is proportional to its bandwidth in hertz, sometimes denoted radio frequency (RF) bandwidth, signal bandwidth, frequency bandwidth, spectral bandwidth or analog bandwidth. For bandwidth as a computing term, less ambiguous terms are bit rate, throughput, maximum throughput, goodput or channel capacity.
(Source)
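The Hartley/Shannon relationship quoted above can be made concrete with the Shannon-Hartley capacity formula; the numbers below are illustrative, and real links stay well below this bound:

    # Shannon-Hartley channel capacity: the theoretical ceiling tying analog
    # bandwidth (Hz) to digital capacity (bit/s). Figures are illustrative.
    import math

    def channel_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
        """C = B * log2(1 + S/N)."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    snr_db = 30.0                       # assumed signal-to-noise ratio
    snr_linear = 10 ** (snr_db / 10)    # = 1000
    print(channel_capacity_bps(1_000_000, snr_linear))  # ~9.97 Mbit/s for 1 MHz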
Bandwidth is only one aspect of network speed. Delay is also important.
The term "bandwidth" is not a precise term, it may mean:
the clock frequency multiplied by the no-of-bits-transmitted-in-a-clock-tick - physical bandwidth,
minus bytes used for low-level error corrections, checksums (e.g. FEC in DVB),
minus bytes used by transmit protocol for addressing or other meta info (e.g. IP headers),
minus the time overhead of the handshake/transmit control (see TCP),
minus the time overhead of the administration of connection (e.g. DNS),
minus time spent on authentication (seeking user name on the host side),
minus time spent on receiving and handling the packet (e.g. an FTP server/client writes out the block of data received) - effective bandwidth, or throughput.
The best we can do is to always explain what kind of bandwidth we mean: with or without protocol overhead, etc. Also, users are often interested only in the final, bottom-line value: how long does it take to download that stuff?
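A back-of-the-envelope sketch of the "physical bandwidth minus overhead" idea from the list above; the header sizes are the usual Ethernet/IPv4/TCP figures, while the payload size and link rate are assumptions:

    # Per-packet protocol overhead shrinks physical bandwidth toward effective
    # throughput (goodput). Header sizes are typical Ethernet/IPv4/TCP values.
    link_rate_bps = 100_000_000        # 100 Mbit/s physical rate
    payload_bytes = 1460               # TCP payload per packet (typical MSS)
    overhead_bytes = 14 + 20 + 20      # Ethernet + IPv4 + TCP headers, no options

    efficiency = payload_bytes / (payload_bytes + overhead_bytes)
    goodput_bps = link_rate_bps * efficiency
    print(f"efficiency: {efficiency:.1%}, goodput: {goodput_bps / 1e6:.1f} Mbit/s")
    # ~96.4%, before retransmissions, handshakes, DNS lookups and the rest of
    # the overheads in the list above.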

What is the relation between MCPS (million cycles per second) and power consumed?

I have been working on an ARM Cortex-A8 board on an MP3 decoder.
While doing this I have a requirement saying the MP3 decoder solution I am building should consume 50 milliwatts of power. This raised a few questions in my mind when I thought about it:
1.) I recall that there is some relation between the core voltage applied (V), the clock frequency (f) of a processor, and the power consumed (P), something like P being directly proportional to the voltage and the frequency squared. But what is the exact relation? Given the operating clock frequency and voltage of a processor, how can we calculate the power it consumes?
2.) Now if I get the power consumed from step 1.) at some clock frequency, and I am told that the decoder solution I am delivering can consume only 50 milliwatts, how can I get the maximum limit on MCPS, which will be the upper bound on the MCPS of my decoder solution running on that hardware board?
Can I deduce that if the power obtained as in step 1.), say P, is consumed at frequency F, then for 50 milliwatts I can calculate the corresponding clock frequency, and call this frequency my code's MHz (MCPS) upper bound?
Basically, how does one map (is there any equation?) the power consumed by a piece of software to the MCPS it consumes?
I hope this is relevant here, or should it go to superuser?
Thank you.
-AD.
It really depends on the architecture.
From their own page:
Core area, frequency range and power consumption are dependent on process, libraries and optimizations.
Power with cache (mW/MHz): <0.59 / <0.45 (depending on process)
Basically, it states that you can't accurately calculate the power consumption, so your best bet is to do some measurements yourself. Try running a full-CPU-usage application and measure the power consumption. That will give you some idea of the maximum load, which will be a good starting point (to know how much you need to optimize your code and where to insert idle points).
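Using the quoted mW/MHz figure as a rough coefficient, the 50 mW budget can be turned into a frequency (and hence MCPS) ceiling; this is only a sketch, since it ignores leakage and everything outside the core:

    # Turn a power budget into a rough MCPS ceiling using the quoted
    # "power with cache" coefficient. Treat the result as an upper-bound estimate.
    power_budget_mw = 50.0        # requirement from the question
    mw_per_mhz = 0.59             # figure quoted from the ARM page above

    max_mhz = power_budget_mw / mw_per_mhz
    print(f"Rough ceiling: about {max_mhz:.0f} MHz of core activity")
    # 50 / 0.59 is about 85 MHz, i.e. roughly 85 MCPS for the decoder on this
    # coefficient; measure on the real board to confirm.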
