I am writing a networking application. It has some unexpected lags. I need to calculate some figures, but I can't find the information: how many bits can be transferred through an Ethernet connection at each tick?
I know that the resulting transfer rate is 100 Mbps/1 Gbps. But Ethernet should use hardware ticks to sync both ends, I suppose, so it moves data in ticks.
So the question is: how many ticks per second, or how many bits per tick, does Ethernet use?
The actual connection is 100 Mbps full-duplex.
I found the answer myself. Here is the article: http://discountcablesusa.com/ethernetcables.html
The first table says 100BaseTX has a frequency of 31.25 MHz and a signal rate of 125 Mbit/s (25 Mbit/s of that is Ethernet encoding overhead, so we receive 100 Mbit/s of "useful" data).
Hence 125/31.25 = 4 bits per transmission, which is roughly the answer to my question.
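For a quick sanity check of that arithmetic, here is a small Python snippet; the figures are simply the ones quoted from the linked article, so treat them as assumptions rather than authoritative numbers:

```python
# Figures taken from the linked article (assumed values):
# 100BASE-TX clock frequency of 31.25 MHz and a line rate of 125 Mbit/s,
# of which 25 Mbit/s is encoding overhead.

clock_hz = 31.25e6          # ticks per second (article's figure)
line_rate_bps = 125e6       # raw signal rate on the wire
useful_rate_bps = 100e6     # payload rate after encoding overhead

bits_per_tick = line_rate_bps / clock_hz            # 4.0 raw bits per tick
useful_bits_per_tick = useful_rate_bps / clock_hz   # 3.2 "useful" bits per tick

print(bits_per_tick, useful_bits_per_tick)
```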
I am studying wireless networks, specifically IEEE 802.11. I cannot understand whether two users in different BSSs that work at the same frequency and in the same location can interfere with each other or not. I know that a BSS is formed from users that use the same frequency, but I cannot figure out whether a nearby BSS can use the same frequency as one of its neighbours.
Thank you for your time!
802.11 WiFi uses CSMA/CA or "Carrier-Sense Multiple Access with Collision Avoidance" to ensure all stations using the same or similar frequency can co-operate.
Before sending onto the network, a station will listen to the medium to see if something else is using it; this is called "Clear Channel Assessment" (CCA).
If the station detects energy on the medium, it assumes the medium is in use and will back off for a random (very short, on the order of microseconds) time and then try again. Eventually it should see that the medium is clear and be able to proceed with its transmission.
Every unicast frame sent on a WiFi network is ACKnowledged as soon as it is received by the destination, with an ACK frame. If a station transmits and doesn't receive an ACK, it will retransmit. This avoids problems where something has decided to use the medium mid-way through a station transmitting a packet, causing corruption.
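As a purely illustrative sketch of the listen/back-off/retransmit loop described above, something like the following captures the idea. The radio object and its methods are hypothetical stand-ins, not a real WiFi driver API, and the timing constants are only indicative:

```python
import random
import time

SLOT_TIME_US = 9          # typical 802.11 slot time, in microseconds
CW_MIN = 15               # initial contention window, in slots
MAX_RETRIES = 7

def send_with_csma_ca(radio, frame):
    """Illustrative only: 'radio' and its methods are hypothetical."""
    cw = CW_MIN
    for attempt in range(MAX_RETRIES):
        # Clear Channel Assessment: wait while energy is detected on the medium.
        while radio.energy_detected():
            backoff_slots = random.randint(0, cw)
            time.sleep(backoff_slots * SLOT_TIME_US / 1e6)
        radio.transmit(frame)
        if radio.wait_for_ack(timeout_us=50):
            return True                    # destination ACKed the unicast frame
        cw = min(2 * cw + 1, 1023)         # no ACK: assume corruption, widen the window
    return False                           # give up after MAX_RETRIES attempts
```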
All this operates outside of the concept of a BSS: regardless of which BSS a station is in, it still needs to play fair with all the other stations on the frequency, whether they are in the same BSS or another one.
The net effect is that you can have many stations in many BSSs all on the same channel happily co-habiting; the downside is performance degradation, as it gets harder to get a clear channel and the likelihood of corrupt frames and retransmits increases.
I am reading a book about networking, and it says that, in a circuit-switching environment, the number of links and switches a signal has to go through before reaching the destination does not affect the overall time it takes for the whole signal to be received. On the other hand, in a packet-switching scenario the number of links and switches does make a difference. I spent quite a bit of time trying to figure it out, but I can't seem to get it. Why is that?
To drastically over-simplify: a circuit-switched environment effectively has a direct line from the transmitter to the receiver once a connection has been established; imagine an old-fashioned phone call going through switchboards. Therefore the transmission time is the same regardless of the number of hops (well, ignoring the physical time it takes the signal to move over the wire, which is very small since it's moving at close to the speed of light).
In a packet-switched environment, there is no direct connection. A packet of data is sent from the transmitter to the first hop, which tries to calculate an open route to the destination. It then passes the data on to the next hop, which again has to calculate the next available hop, and so on. This takes time that increases linearly with the number of hops. Think of sending a letter through the US postal system: it has to go from your house to a post office, then from the post office to a local distribution center, then from the local distribution center to the national one, then from that to the recipient's local distribution center, then to the recipient's post office, and finally to their house.
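To make the difference concrete, here is a rough back-of-the-envelope calculation; the link rate, message size, packet size, and hop count are made-up example numbers, and propagation and queuing delays are ignored:

```python
# Minimal comparison of circuit switching vs store-and-forward packet switching.
link_rate_bps = 10e6         # each link runs at 10 Mbit/s (example value)
message_bits = 8e6           # 1 MB message
packet_bits = 8e3            # 1 KB packets in the packet-switched case
hops = 4                     # number of links between sender and receiver

# Circuit switching: once the circuit is set up, bits stream end to end,
# so the transfer time is roughly independent of the number of hops.
circuit_time = message_bits / link_rate_bps

# Packet switching (store-and-forward): each intermediate node must receive a
# whole packet before forwarding it, adding one packet-time per extra hop.
packet_time = message_bits / link_rate_bps + (hops - 1) * (packet_bits / link_rate_bps)

print(circuit_time, packet_time)
```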
Another difference is that only one connection at a time per circuit can exist on a circuit-switched network; again, think of a phone line with someone using it to make a call. In a packet-switched network, many transmitters and receivers can be sending data at the same time; again, think of many people sending/receiving letters.
According to Wikipedia, jitter is the undesired deviation from true periodicity of an assumed periodic signal; according to a paper on QoS that I am reading, jitter is referred to as delay variation. Is there a definition of jitter in the context of real-time applications? Are there applications that are sensitive to jitter but not sensitive to delay? If, for example, a streaming application uses some kind of buffer to store packets before showing them to the user, is it possible that this application is not sensitive to delay but is sensitive to jitter?
Delay: the amount of time the data (signal) takes to reach the destination. A higher delay generally means congestion of some sort, or a break in the communication link.
Jitter: the variation in delay over time. This happens when a system is not in a deterministic state; e.g. video streaming suffers from jitter a lot because the amount of data transferred is quite large, so there is no way of saying how long a transfer might take.
If your application is sensitive to jitter it is definitely sensitive to delay.
In the Real-time Transport Protocol (RTP, RFC 3550), the header contains a timestamp field. Its value usually comes from a monotonically incremented counter, and the frequency of the increments is the clock rate. This clock rate must be the same for every participant that wants to do anything with the timestamp field. The counters have different base offsets, because the start times differ, or because a random offset is added for security reasons, etc. All in all, we say the clocks are not synchronized.
To show this in an example, let snd_timestamp and rcv_timestamp be, for the most recent packet, the sender timestamp from the RTP header field and the receiver timestamp generated by the receiver using the same clock rate.
The wrong conclusion is that
delay_in_timestamp_unit = rcv_timestamp - snd_timestamp
If the receiver and sender clocks have different base offsets (and they do), this does not give you the delay; it also doesn't account for wraparound of the 32-bit unsigned integer.
But monitoring the time it takes to deliver packets is necessary in some form if we want a proper playout adaptation algorithm, or if we want to detect and avoid congestion.
Also note that even with synchronized clocks, delay_in_timestamp_unit might not exactly represent the pure network delay, because components at the sender and/or receiver side retain packets after the timestamp is added and/or before it is examined. So if you calculate a 2-second delay between the participants, but you know your network delay is around 100 ms, then your packets suffer additional delays at the sender and/or receiver side. That additional delay is (or at least you hope it is) roughly constant, so the only delay that changes over time is, hopefully, the network delay. Therefore you should not say "if packet delay > 500 ms then we have congestion", because you have no idea what the actual network delay is if you only use a single packet's sender and receiver timestamps.
But the difference between the delays of two consecutive packets might give you some information about whether something is wrong in the network or not.
diff_delay = delay_t0 - delay_t1
If diff_delay equals 0, the delay is the same; if it is greater than 0, the newly arrived packet needed more time than the previous one; and if it is smaller than 0, it needed less time.
And from that relative information based on two consecutive delays you could say something.
How do you determine the difference between two delays if the clocks are not synchronized?
Say you stored the previous timestamps in rcv_timestamp_t1 and snd_timestamp_t1:
diff_delay = (rcv_timestamp_t0 - snd_timestamp_t0) - (rcv_timestamp_t1 - snd_timestamp_t1)
but that would be a problem without knowing the base offsets of the sender and the receiver, so we reorder it:
diff_delay = (rcv_timestamp_t0 - rcv_timestamp_t1) - (snd_timestamp_t0 - snd_timestamp_t1)
Here you subtract the rcv timestamps from each other and the snd timestamps from each other, which eliminates the base offsets contained in rcv and snd; then subtracting snd_diff from rcv_diff gives you the difference between the delays of two consecutive packets, in clock-rate units.
Now, according to RFC3550 jitter is "An estimate of the statistical variance of the RTP data packet interarrival time".
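Putting the two ideas together, here is a minimal Python sketch of the diff_delay calculation (with 32-bit wraparound handled) and the RFC 3550 interarrival jitter estimator built on top of it; the helper names are my own, and timestamps are assumed to be 32-bit values in the same clock-rate units:

```python
MOD32 = 1 << 32

def ts_diff(a, b):
    """(a - b) modulo 2^32, interpreted as signed so it survives wraparound."""
    d = (a - b) % MOD32
    return d - MOD32 if d >= MOD32 // 2 else d

def diff_delay(rcv_t0, snd_t0, rcv_t1, snd_t1):
    # (rcv_t0 - rcv_t1) - (snd_t0 - snd_t1): the unknown base offsets cancel out.
    return ts_diff(rcv_t0, rcv_t1) - ts_diff(snd_t0, snd_t1)

def update_jitter(jitter, rcv_t0, snd_t0, rcv_t1, snd_t1):
    # RFC 3550, section 6.4.1: J = J + (|D| - J) / 16
    d = diff_delay(rcv_t0, snd_t0, rcv_t1, snd_t1)
    return jitter + (abs(d) - jitter) / 16.0
```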
To finally get to the point, your question was:
"What is the difference between the delay and the jitter in the context of real time applications?"
A tiny note: "real-time applications" usually refers to systems processing data on the order of nanoseconds, so I think you are really referring to end-to-end systems.
Also, despite the several slightly different definitions of jitter, they all use the difference between the delays of arriving packets and thus give you information about the relative changes of the network delay, whereas delay itself is an absolute measure of the delivery time.
I usually hear that network bandwidth is measured in bits per second, for example 500 Mbps. But when I was reading some network-related text, it said:
"Coaxial cable supports bandwidths up to 600 MHz"?
Why do they say that? And what is the relationship between MHz and Kb?
Coaxial cable is not a "network" but a transmission medium. For physical reasons its bandwidth must be measured in Hz; consider that we are talking here about electromagnetic signals, not bits.
When you move to the "network" side, in particular digital networks, capacity is measured in bps. Note that while an increase in bandwidth (MHz) will generally lead to an increase in bps, the final bps depends on many factors, such as the digital modulation scheme (at a low level) and the network protocol (at a higher level). A typical case is the "symbol" representation, which tells you how many bits are sent in a single "pulse".
But the subject is really huge and cannot be covered in a single answer here; I recommend you read a good book on electrical communications to get a clear picture of it.
That's the bandwidth of the signal that can be sent through the cable. You might want to have a read about the Nyquist-Shannon sampling theorem to see how that relates to the data that can be transmitted.
How the "MHz relate to Kb" depends on the method used for transmitting the data, which is why you'll see cables rated with a bandwidth in MHz like the one you've seen.
We are dealing with a bit of abuse of terminology. Originally "bandwidth" means the width of the band that you have available to transmit (or receive) on. The term has been co-opted to also mean the amount of digital data you can transmit (or receive) on a line per unit time.
Here's an example of the original meaning. FM radio stations are spaced 200 kHz apart. You can have a station on 95.1 MHz and another one on 94.9 MHz and another one on 95.3 MHz, but none in between. The bandwidth available to any given FM radio station is 200 kHz (actually it may be less than that if there is a built-in buffer zone of no-man's-land frequencies between stations; I don't know).
The bandwidth rating of something like a coaxial cable is the range of frequencies of the electromagnetic waves that it is designed to transmit reliably. Outside that range the physical properties of the cable cause it to not reliably transmit signals.
With (digital) computers, bandwidth almost always has the alternate meaning of data capacity per unit time. The two are related, though: more available analog bandwidth lets you use a technology that transmits more (digital) data at the same time over that carrier.
I understand latency - the time it takes for a message to go from sender to recipient - and bandwidth - the maximum amount of data that can be transferred over a given time - but I am struggling to find the right term to describe a related thing:
If a protocol is conversation-based - the payload is split up over many to-and-fros between the ends - then latency affects 'throughput'1.
1 What is this called, and is there a nice concise explanation of this?
Surfing the web, trying to optimize the performance of my NAS (nas4free), I came across a page that described the answer to this question (IMHO). Specifically, this section caught my eye:
"In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn’t send more until it receives an acknowledgement from the remote host that all data was received. This is called the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together. This “windowing” happens so fast we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called “Bandwidth Delay Product,” or BDP."
This sounds like the answer to your question.
BDP as Wikipedia describes it
To conclude, it's called the Bandwidth Delay Product (BDP), and the shortest explanation I've found is the one quoted above. (Flexo noted this in his comment too.)
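As a quick illustration of the BDP idea with made-up example numbers (a 100 Mbit/s link, an 80 ms RTT, and a classic 64 KiB TCP window):

```python
bandwidth_bps = 100e6     # link capacity: 100 Mbit/s (example value)
rtt_s = 0.080             # round-trip time: 80 ms (example value)

# BDP: how many bits can be "in flight" before the first ACK comes back.
bdp_bits = bandwidth_bps * rtt_s
print(f"BDP = {bdp_bits / 8 / 1024:.0f} KiB")            # ~977 KiB

# If the TCP window is smaller than the BDP, throughput is capped at window/RTT.
window_bytes = 64 * 1024
max_throughput_bps = window_bytes * 8 / rtt_s
print(f"Throughput cap = {max_throughput_bps / 1e6:.1f} Mbit/s")   # ~6.6 Mbit/s
```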
Could goodput be the term you are looking for?
According to Wikipedia:
In computer networks, goodput is the application level throughput, i.e. the number of useful bits per unit of time forwarded by the network from a certain source address to a certain destination, excluding protocol overhead, and excluding retransmitted data packets.
Wikipedia Goodput link
The problem you describe arises in communications that are synchronous in nature. If there were no need to acknowledge receipt of information, and it were certain to arrive, then the sender could send as fast as possible and the throughput would be good regardless of the latency.
When things do need to be acknowledged, it is this synchronisation that causes the drop in throughput, and the degree to which the communication (i.e. the sending of acknowledgments) is allowed to be asynchronous controls how much the latency hurts throughput.
'Round-trip time' links latency and number of turns.
Or: Network latency is a function of two things:
(i) round-trip time (the time it takes to complete a trip across the network); and
(ii) the number of times the application has to traverse it (aka turns).
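As a minimal worked example of that last point (the RTT and turn count are made-up numbers):

```python
rtt_s = 0.050        # 50 ms round trip (example value)
turns = 20           # request/response exchanges the protocol needs (example value)

# Each turn costs roughly one RTT, regardless of how little data it carries.
total_latency_s = rtt_s * turns
print(f"Time spent on turns alone: {total_latency_s * 1000:.0f} ms")   # 1000 ms
```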