I understand latency - the time it takes for a message to go from sender to recipient - and bandwidth - the maximum amount of data that can be transferred over a given time - but I am struggling to find the right term to describe a related thing:
If a protocol is conversation-based - the payload is split up over many to-and-fros between the ends - then latency affects 'throughput'.¹
¹ What is this called, and is there a nice concise explanation of this?
Surfing the web, trying to optimize the performance of my NAS (NAS4Free), I came across a page that (IMHO) describes the answer to this question. Specifically, this section caught my eye:
"In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn’t send more until it receives an acknowledgement from the remote host that all data was received. This is called the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together. This “windowing” happens so fast we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called “Bandwidth Delay Product,” or BDP."
This sounds like the answer to your question.
BDP as Wikipedia describes it
To conclude, it's called Bandwidth Delay Product (BDP) and the shortest explanation I've found is the one above. (Flexo has noted this in his comment too.)
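To see roughly how the window and the delay interact, here is a quick back-of-the-envelope calculation (the numbers are my own example, not from the quoted page):

```python
# Rough illustration: how much data must be "in flight" to keep a link busy,
# and how a too-small window caps throughput. Example numbers only.

link_bps = 100e6        # 100 Mbit/s link
rtt_s = 0.05            # 50 ms round-trip time

# Bandwidth-delay product: the bytes that fit "on the wire" during one RTT.
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")                          # ~610 KiB

# With only a 64 KiB window you can push at most one window per RTT:
window_bytes = 64 * 1024
max_throughput_bps = window_bytes * 8 / rtt_s
print(f"Throughput cap: {max_throughput_bps / 1e6:.1f} Mbit/s")    # ~10.5 Mbit/s
```

In other words, unless the sender is allowed to keep at least one BDP's worth of unacknowledged data in flight, latency rather than bandwidth sets the throughput.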
Could goodput be the term you are looking for?
According to Wikipedia:
In computer networks, goodput is the application level throughput, i.e. the number of useful bits per unit of time forwarded by the network from a certain source address to a certain destination, excluding protocol overhead, and excluding retransmitted data packets.
Wikipedia Goodput link
The problem you describe arises in communications which are synchronous in nature. If there were no need to acknowledge receipt of information, and it were certain to arrive, then the sender could send as fast as possible and throughput would be good regardless of the latency.
When things do need to be acknowledged, it is this synchronisation that causes the drop in throughput, and the degree to which the communication (i.e. the sending of acknowledgements) is allowed to be asynchronous controls how much it hurts the throughput.
'Round-trip time' links latency and number of turns.
Or: Network latency is a function of two things:
(i) round-trip time (the time it takes to complete a round trip across the network); and
(ii) the number of times the application has to traverse it (aka turns).
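A back-of-the-envelope sketch of that, with made-up numbers:

```python
# Lower bound on completion time for a "chatty" exchange: every turn costs
# at least one round trip, no matter how much bandwidth is available.

rtt_s = 0.05            # 50 ms round-trip time
turns = 20              # application-level request/response pairs
payload_bytes = 1e6     # total data moved
bandwidth_bps = 100e6   # 100 Mbit/s

min_time_s = turns * rtt_s + payload_bytes * 8 / bandwidth_bps
print(f"Lower bound: {min_time_s:.2f} s")   # 1.08 s, dominated by the turns
```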
Related
According to Wikipedia, jitter is the undesired deviation from true periodicity of an assumed periodic signal; according to a paper on QoS that I am reading, jitter is referred to as delay variation. Is there a definition of jitter in the context of real-time applications? Are there applications that are sensitive to jitter but not sensitive to delay? If, for example, a streaming application uses some kind of buffer to store packets before showing them to the user, is it possible that this application is not sensitive to delay but is sensitive to jitter?
Delay: the amount of time data (a signal) takes to reach the destination. A higher delay generally means congestion of some sort, or a break in the communication link.
Jitter: the variation in delay over time. This happens when a system is not in a deterministic state; e.g. video streaming suffers from jitter a lot because the amount of data transferred is quite large, so there is no way of saying how long it might take to transfer.
If your application is sensitive to jitter it is definitely sensitive to delay.
In the Real-time Transport Protocol (RTP, RFC 3550), the header contains a timestamp field. Its value usually comes from a monotonically incremented counter, and the frequency of that increment is the clock rate. This clock rate must be the same for every participant that wants to do anything with the timestamp field. The counters have different base offsets, because the start times differ, or because the offsets are randomized for security reasons, etc. All in all, we say the clocks are not synchronized.
To show it in an example, let snd_timestamp and rcv_timestamp be, for the most recent packet, the sender timestamp from the RTP header field and the receiver timestamp generated by the receiver using the same clock rate.
The wrong conclusion is that
delay_in_timestamp_unit = rcv_timestamp - snd_timestamp
Since the receiver's and sender's clocks have different base offsets (and they do), this does not give you the delay; it also doesn't account for wraparound of the 32-bit unsigned integer.
But monitoring the delivery time of packets is necessary if we want a proper playout adaptation algorithm, or if we want to detect and avoid congestion.
Also note that even with synchronized clocks, delay_in_timestamp_unit might not exactly represent the pure network delay, because components at the sender or the receiver side may hold packets after the timestamp is added and/or before it is examined. So if you calculate a 2-second delay between the participants, but you know your network delay is around 100 ms, then your packets suffer additional delays at the sender and/or the receiver side. That additional delay is (or at least you hope it is) roughly constant, so the only delay that changes over time is, hopefully, the network delay. You should therefore not say "if packet delay > 500 ms then we have congestion", because you have no idea what the actual network delay is if you use only a single pair of sender and receiver timestamps.
But the difference between the delays of two consecutive packets can give you some information about whether something is wrong in the network or not.
diff_delay = delay_t0 - delay_t1
If diff_delay equals 0, the delay is the same; if it is greater than 0, the newly arrived packet needed more time than the previous one; and if it is smaller than 0, it needed less time.
And from that relative information based on two consecutive delays you could say something.
How do you determine the difference between two delays if the clocks are not synchronized?
Suppose you stored the previous packet's timestamps in rcv_timestamp_t1 and snd_timestamp_t1:
diff_delay = (rcv_timestamp_t0 - snd_timestamp_t0) - (rcv_timestamp_t1 - snd_timestamp_t1)
but that would be a problem without knowing the base offsets of the sender and the receiver, so rearrange it:
diff_delay = (rcv_timestamp_t0 - rcv_timestamp_t1) - (snd_timestamp_t0 - snd_timestamp_t1)
Here you subtract the rcv timestamps from each other and the snd timestamps from each other, which eliminates the base offsets each of them contains; subtracting snd_diff from rcv_diff then gives you the difference between the delays of two consecutive packets, in units of the clock rate.
Now, according to RFC3550 jitter is "An estimate of the statistical variance of the RTP data packet interarrival time".
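As a minimal sketch of both ideas (the helper names and example numbers are mine; it assumes 32-bit RTP timestamps at a shared clock rate):

```python
# Sketch: relative delay difference between consecutive packets, and the
# RFC 3550 interarrival-jitter estimate. Sender and receiver timestamps share
# a clock rate but not a base offset, and wrap around at 2^32.

MOD = 1 << 32

def ts_diff(a, b):
    """(a - b) modulo 2^32, interpreted as a signed difference."""
    d = (a - b) % MOD
    return d - MOD if d >= MOD // 2 else d

def transit(rcv_ts, snd_ts):
    # Meaningless as an absolute delay (different base offsets), but the
    # difference of two consecutive transits cancels the offsets out.
    return ts_diff(rcv_ts, snd_ts)

def update_jitter(jitter, transit_prev, transit_now):
    # RFC 3550, section 6.4.1: J = J + (|D| - J) / 16
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Two consecutive packets (made-up timestamp values, in clock-rate units):
t1 = transit(rcv_ts=120_500, snd_ts=90_000)   # previous packet
t0 = transit(rcv_ts=124_700, snd_ts=94_000)   # newest packet
diff_delay = t0 - t1                          # +200: the new packet took longer
jitter = update_jitter(0.0, t1, t0)           # 12.5 after one sample
```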
To finally get to the point, your question is:
"What is the difference between delay and jitter in the context of real-time applications?"
A tiny note: "real-time application" usually refers to systems processing data on a nanosecond scale, so I think you are actually referring to end-to-end systems.
Also, despite the several slightly different definitions of jitter, they all use the difference between the delays of arriving packets, and thus give you information about the relative changes of the network delay, whereas delay itself is an absolute value of the time of delivery.
I have an application installed on my phone which provides the following details every minute: bandwidth, packet loss, signal strength, and RTT to google.com.
I am trying to predict congestion based on these four attributes, but somehow it doesn't look accurate to me; previously I only used bandwidth.
I want to predict congestion at any given point more accurately, and would appreciate any recommendations.
I think you are saying you are trying to measure network 'responsiveness', and from these measurements get a sense of how congested the network is. You also mention you want to predict, which I guess means you want to estimate future 'responsiveness' based on your measurements and observations.
The items you are measuring look sensible, although you may want to include jitter if you are interested in VoIP or other real time streamed media.
The issue you have is that there are many variables which can affect your measurements, for example:
congestion in the radio cell you are in at the time
congestion in the backhaul network
delays in the server you are using to measure the RTT
congestion or faults with the particular APN your mobile is using to access data services
network faults
As some of these can occur irregularly but can have a large impact, it is quite hard to build up an accurate view of the overall network 'responsiveness' with a single handset. For example, your local cell may be busy or have a problem while other users of Google.com in other cells have a perfectly good response, or Google.com may be busy or delayed while other users in your cell accessing a different server again have a perfectly good response.
It would likely be useful for you to look at some of the generally available web speedtest applications to see the type of information they provide - they have the advantage of being able to gather results from many thousands of users, and also generally have access to the servers to understand any issues on that side.
Depending on what you are trying to achieve it might be that a combination of measurements from one of the general speedtest services, combined with your own measurements will give you enough data to draw some sort of meaningful conclusions.
I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say if this is an application you are developing yourself or one which you are simply using - you will obviously have more control over the former so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before users perceive an issue.
Jitter is also important - in simple terms it is the variance in end to end packet delay. For example the delay between packet 1 and packet 2 may be 20ms but the delay between packet 2 and packet 3 may be 30 ms. Having a jitter buffer of 40ms would mean your application would wait up to 40ms between packets so would not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually ignored, so there is a relationship between jitter and the effective packet loss value for your connection. Packet loss also typically affects users' perception of VoIP quality - different codecs have different tolerances - a common target might be that it should be lower than 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers will either be static or dynamic (adaptive) - in either case, the bigger they get the greater the chance they will introduce delay into the call and you get back to the delay issue above. A typical jitter buffer might be between 20 and 50ms, either set statically or adapting automatically based on network conditions.
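If you are developing the application yourself, the playout logic of a simple static jitter buffer boils down to something like the sketch below (the constants and names are illustrative, not a recommendation):

```python
# Sketch of a static jitter buffer: each packet is played a fixed time after
# it should nominally have arrived; anything later than that is treated as
# lost (and possibly concealed). Values are illustrative only.

JITTER_BUFFER_MS = 40      # playout delay on top of the nominal arrival time
PACKET_INTERVAL_MS = 20    # e.g. one RTP packet per 20 ms of audio

def playout_time_ms(first_arrival_ms, seq_no):
    """When packet `seq_no` (0-based) should be handed to the audio device."""
    nominal_arrival = first_arrival_ms + seq_no * PACKET_INTERVAL_MS
    return nominal_arrival + JITTER_BUFFER_MS

def classify(first_arrival_ms, seq_no, actual_arrival_ms):
    if actual_arrival_ms <= playout_time_ms(first_arrival_ms, seq_no):
        return "play"
    return "late: treat as lost / conceal"

# Packet 3 arrives 35 ms late relative to its nominal slot: still playable.
print(classify(first_arrival_ms=0, seq_no=3, actual_arrival_ms=3 * 20 + 35))
```

An adaptive buffer would additionally grow or shrink JITTER_BUFFER_MS based on the measured jitter, trading added delay against effective packet loss.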
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common internet connection online speed tests available as many will have specific VoIP test that will give you an idea if your local connection is good enough for VoIP (although bear in mind that these tests only indicate the conditions at the exact time you are running your test).
I've been creating a reliable networking protocol similar to TCP, and was wondering what a good default value for a re-transmit threshold should be on a packet (the number of times I resend the packet before assuming that the connection was broken). How can I find the optimal number of retries on a network? Also, not all networks have the same reliability, so I'd imagine this 'optimal' value would vary between networks. Is there a good way to calculate the optimal number of retries? Also, how many milliseconds should I wait before re-trying?
This question cannot be answered as presented as there are far, far too many real world complexities that must be factored in.
If you want TCP, use TCP. If you want to design a custom-protocol for transport layer, you will do worse than 40 years of cumulative experience coded into TCP will do.
If you don't look at the existing literature, you will miss a good hundred design considerations that will never occur to you sitting at your desk.
I ended up allowing the application to set this value, with a default value of 5 retries. This seemed to work across a large number of networks in our testing scenarios.
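For the "how many milliseconds should I wait" part, the usual pattern (roughly what TCP itself does, see RFC 6298) is to estimate the RTT and back off exponentially on each retry rather than use a fixed interval. A hedged sketch, with constants and names of my own choosing:

```python
# Sketch: estimate the RTT (RFC 6298-style smoothing), derive a
# retransmission timeout (RTO), and back off exponentially per retry.

class RtoEstimator:
    ALPHA, BETA = 1 / 8, 1 / 4      # smoothing factors from RFC 6298
    MIN_RTO, MAX_RTO = 0.2, 60.0    # seconds; clamp to sane bounds

    def __init__(self):
        self.srtt = None            # smoothed RTT
        self.rttvar = None          # RTT variance estimate

    def on_rtt_sample(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    def rto(self, retries=0):
        base = 1.0 if self.srtt is None else self.srtt + 4 * self.rttvar
        return min(max(base * (2 ** retries), self.MIN_RTO), self.MAX_RTO)

est = RtoEstimator()
est.on_rtt_sample(0.08)                          # 80 ms measured round trip
print([round(est.rto(r), 2) for r in range(5)])  # [0.24, 0.48, 0.96, 1.92, 3.84]
```

The retry limit then bounds the total time you are willing to wait before declaring the connection broken, which is a policy decision (hence letting the application set it, as above, is reasonable).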
I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream up into packets, and timestamp each packet with "play later at time T", where T is for example 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network, to all computers in the chorus. The computers will all play the sound at the same time because the application clock is synced.
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
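A minimal sketch of the "play later at time T" idea, assuming the machines' clocks have already been synchronized (e.g. with NTP) and using made-up packet fields and a made-up multicast address:

```python
# Sketch: the sender stamps each audio chunk with an absolute "play at" time
# slightly in the future; every receiver with a synchronized clock sleeps
# until that time and then plays. Clock sync (NTP or similar) is assumed.

import json
import time

PLAYOUT_DELAY_S = 0.1          # 100 ms of slack to absorb network glitches
GROUP = ("239.0.0.1", 5005)    # example multicast group on the local network

def send_chunk(sock, seq, pcm_bytes):
    packet = {
        "seq": seq,
        "play_at": time.time() + PLAYOUT_DELAY_S,   # absolute wall-clock time
        "audio": pcm_bytes.hex(),
    }
    sock.sendto(json.dumps(packet).encode(), GROUP)

def receive_and_play(sock, play_fn):
    data, _ = sock.recvfrom(65536)
    packet = json.loads(data)
    wait = packet["play_at"] - time.time()
    if wait > 0:
        time.sleep(wait)        # all receivers wake at (roughly) the same instant
    play_fn(bytes.fromhex(packet["audio"]))
```

Here play_fn stands in for whatever audio API you use; the residual error is then the per-machine OS/driver/soundcard latency discussed below.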
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
synchronize the machines with manual beat-style intervention
synchronize the machines with a network-clock sync protocol
for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
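A toy version of the firefly rule, with the sound I/O abstracted behind two stand-in callbacks (beep and heard_neighbour_beep are placeholders, not a real audio API):

```python
# Toy sketch of the firefly rule: each node beeps when its phase wraps, and
# nudges its phase forward whenever it hears a neighbour's beep, so over
# time all nodes converge on beeping together.

import time

PERIOD_S = 1.0      # beep once per period when converged
NUDGE_S = 0.02      # how far to advance the phase per heard beep

def firefly_loop(beep, heard_neighbour_beep):
    phase = 0.0
    last = time.monotonic()
    while True:
        now = time.monotonic()
        phase += now - last
        last = now
        if heard_neighbour_beep():      # microphone detected another node's beep
            phase += NUDGE_S            # "flash slightly earlier" next time
        if phase >= PERIOD_S:
            beep()                      # emit our own beep through the soundcard
            phase -= PERIOD_S
        time.sleep(0.005)
```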
The synchronization is relative to the position of the listener with respect to each speaker. I don't think the reliability of the network would have as much to do with this synchronization as the content of the audio stream would. In order to synchronize, you need to find the distance between each speaker and the listener. Find the difference between each of those values and the value for the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1 ms. This will ensure that the audio stream reaches the listener at the same time. This all assumes an open area, as any objects near the speakers or listener will generate reflections of the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
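The "1.1 feet per millisecond" rule of thumb from this answer, as a quick calculation (the distances are made-up examples):

```python
# Delay the closer speakers so their sound arrives at the listener together
# with the sound from the farthest speaker. Distances are example values.

SPEED_OF_SOUND_FT_PER_MS = 1.1

def playout_delays_ms(distances_ft):
    farthest = max(distances_ft)
    return [(farthest - d) / SPEED_OF_SOUND_FT_PER_MS for d in distances_ft]

print(playout_delays_ms([8.0, 15.0, 22.0]))   # ~[12.7, 6.4, 0.0] ms
```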