How does iperf calculate throughput and jitter? - networking

I've read that iperf basically tries to send as much data down a connection as quickly as possible, reporting on the throughput achieved. This tool is especially useful for determining the maximum volume of data that the link between two machines can carry.
Is it possible to gather the same results by sending regular data, i.e. real application data rather than dedicated test data?
What I'm trying to do is send data in the foreground while gathering statistics (throughput and jitter) in the background.
How does iperf calculate these two values?

This is the closest thing I've found
http://openmaniak.com/iperf.php

I had a similar question about how iperf works. Please refer to the following post, where I did some research and gave an overview.
How iperf calculates network statistics
Generally, iperf embeds a timestamp and a sequence number in the payload on the sender side. When the receiver receives a packet, it extracts these fields and calculates the statistics. You can find more detail in the post.
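For illustration only, here is a minimal Python sketch of that idea; the packet layout, names and sizes are my own and not iperf's actual wire format:

    # Illustration only, NOT iperf's actual wire format: the sender packs a
    # sequence number and a send timestamp at the start of each UDP payload;
    # the receiver unpacks them to update its loss, throughput and jitter stats.
    import struct
    import time

    HEADER = struct.Struct("!Id")  # 32-bit sequence number + 64-bit float seconds

    def make_payload(seq, size=1470):
        return HEADER.pack(seq, time.time()) + b"\x00" * (size - HEADER.size)

    def parse_payload(data):
        seq, snd_time = HEADER.unpack_from(data)
        return seq, snd_time, len(data)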

Throughput is simple: assuming the client is saturating the network, the server only needs to count the received bytes and divide that by the duration of the reporting interval.
This post explains the topic in greater detail.
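A minimal sketch of that per-interval calculation (function name and units are mine): the receiver accumulates bytes over a reporting interval and divides by its length:

    # Receiver-side throughput: bytes counted in an interval, divided by the
    # interval length, expressed in bits per second.
    def throughput_bps(bytes_received, interval_seconds):
        return bytes_received * 8 / interval_seconds

    print(throughput_bps(12_500_000, 1.0))  # 12.5 MB in 1 s -> 100,000,000 bps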
Iperf 2 calculates jitter for UDP only. It is based on what is prescribed for RTP implementations, as stated in the code.
RTP is used in implementations of audio streaming, where jitter plays a major role, so it's a good place to take the algorithm from: what iperf reports is what many jitter-sensitive applications would see.
See RFC 1889, section 6.3.1, "interarrival jitter" field:
The interarrival jitter J is defined to be the mean deviation (smoothed
absolute value) of the difference D in packet spacing at the receiver compared
to the sender for a pair of packets. As shown in the equation below, this is
equivalent to the difference in the "relative transit time" for the two
packets; the relative transit time is the difference between a packet's RTP
timestamp and the receiver's clock at the time of arrival, measured in the same
units.
If Si is the RTP timestamp from packet i, and Ri is the time of arrival in RTP
timestamp units for packet i, then for two packets i and j, D may be expressed
as
D(i,j)=(Rj-Ri)-(Sj-Si)=(Rj-Sj)-(Ri-Si)
The interarrival jitter is calculated continuously as each data packet i is
received from source SSRC_n, using this difference D for that packet and the
previous packet i-1 in order of arrival (not necessarily in sequence),
according to the formula
J=J+(|D(i-1,i)|-J)/16
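Transcribed directly from the formula above into Python (class and method names are mine), assuming sender and receiver timestamps are already in the same units:

    # Running interarrival jitter J as defined above: keep the previous
    # relative transit time and fold in |D| with a 1/16 gain.
    class JitterEstimator:
        def __init__(self):
            self.jitter = 0.0
            self.prev_transit = None

        def update(self, snd_ts, rcv_ts):
            transit = rcv_ts - snd_ts                   # Ri - Si
            if self.prev_transit is not None:
                d = abs(transit - self.prev_transit)    # |D(i-1, i)|
                self.jitter += (d - self.jitter) / 16   # J = J + (|D| - J)/16
            self.prev_transit = transit
            return self.jitter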

Related

Packet Loss ratio in VEINS/Omnet++

I am new to VEINS/Omnet++ and trying various broadcast suppression techniques and would like to calculate the packet loss ratio. I assume I have to use this formula :
Packet Loss Ratio = TotalLostPackets / SentPackets
But since some nodes send 0 packets, is there an easy way to specify this in the Omnet++ .anf config file, or maybe in VEINS, without doing manual adjustments? Otherwise, if any node sends 0 packets, all graphs appear as infinity.
Thank you!
This does not directly answer your question, but I would warn against using this equation in a simulation where not all nodes might send the same number of packets or where broadcasts are sent. Each packet sent as a broadcast can potentially be received by many other nodes, meaning that even a simulation where only 1 packet is sent might record 7 successful receptions and 5 packet losses. Your equation would calculate the loss rate as 5/1 = 500%, whereas I would find a rate of 5/12 = 42% more reasonable.
As a side effect of calculating the loss rate as "fail/(success+fail)", you will not need to take special care of nodes that did not send or receive any packets.
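A tiny sketch of that "fail/(success+fail)" ratio (function name is mine):

    # Loss ratio as "fail / (success + fail)"; a node with no receptions at
    # all simply contributes 0 attempts instead of a division by zero.
    def packet_loss_ratio(received, lost):
        attempts = received + lost
        return lost / attempts if attempts else 0.0

    print(packet_loss_ratio(7, 5))  # 0.4166... i.e. ~42% for the example above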

Timestamp usage in rtp

I've read about RTP and I have one question.
From what I understand, the timestamp in RTP is used to calculate jitter and then to de-jitter the packets. I basically need it for TSoIP, where I just need to extract ASI from IP and pass it to a modulator for processing.
I would really appreciate it if someone could help me understand how the receiver uses the timestamp in order to receive ASI over IP. In other words, I didn't find any good reference explaining what the timestamp is and how it works.
Here you have a detailed discussion about jitter and RTP.
The main aspect you are interested in is the following:
In the Real Time Protocol, jitter is measured in timestamp units. For example, if you transmit audio sampled at the usual 8000 Hertz, the unit is 1/8000 of a second.
On that page they also mention the relation between the timestamp and the receiver: the difference of relative transit times for the two packets is computed as:
D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
Si is the timestamp from packet i and Ri is the time of arrival for packet i. You can also check some examples there.
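To make the units concrete, a small conversion sketch (function name is mine), assuming you know the stream's clock rate:

    # Jitter reported in RTP timestamp units converted to milliseconds,
    # given the stream's clock rate (8000 Hz for the audio example above).
    def jitter_ms(jitter_ts_units, clock_rate_hz=8000):
        return jitter_ts_units * 1000.0 / clock_rate_hz

    print(jitter_ms(40))  # 40 units at 8 kHz -> 5.0 ms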

What is the difference between the delay and the jitter in the context of real time applications?

According to Wikipedia, jitter is the undesired deviation from true periodicity of an assumed periodic signal; according to a paper on QoS that I am reading, jitter is referred to as delay variation. Is there a definition of jitter in the context of real-time applications? Are there applications that are sensitive to jitter but not sensitive to delay? If, for example, a streaming application uses some kind of buffer to store packets before showing them to the user, is it possible that this application is not sensitive to delay but is sensitive to jitter?
Delay: the amount of time data (a signal) takes to reach the destination. A higher delay generally means congestion of some sort or a break in the communication link.
Jitter: the variation in delay. This happens when a system is not in a deterministic state, e.g. video streaming suffers from jitter a lot because the amount of data transferred is quite large, and hence there is no way of saying how long the transfer might take.
If your application is sensitive to jitter it is definitely sensitive to delay.
In the Real-time Transport Protocol (RTP, RFC3550), the header contains a timestamp field. Its value usually comes from a monotonically incremented counter, and the frequency of the increments is the clock-rate. This clock-rate must be the same for every participant that wants to do anything with the timestamp field. The counters have different base offsets, because the start times may differ, or an offset is included for security reasons, etc. All in all, we say the clocks are not synchronized.
To show this in an example, let snd_timestamp be the most recent packet's sender timestamp taken from the RTP header field, and rcv_timestamp the timestamp generated by the receiver using the same clock-rate.
The wrong conclusion is that
delay_in_timestamp_unit = rcv_timestamp - snd_timestamp
Since the receiver's and sender's counters have different base offsets (and they do), this does not give you the delay; it also does not account for wrap-around of the 32-bit unsigned integer.
But monitoring packet delivery time is necessary if we want a proper playout adaptation algorithm or if we want to detect and avoid congestion.
Also note that even if we had synchronized clocks, delay_in_timestamp_unit might not exactly represent the pure network delay, because components at the sender and/or receiver side retain the packets after the timestamp is added and/or before it is examined. So if you calculate a 2-second delay between the participants, but you know your network delay is around 100 ms, then your packets suffer additional delays at the sender and/or the receiver side. But that additional delay is (or at least you hope it is) roughly constant, so the only delay that changes over time is, hopefully, the network delay. Therefore you should not say "if packet delay > 500 ms then we have congestion", because you have no idea what the actual network delay is if you use only one packet's sender and receiver timestamps.
But the difference between the delays of two consecutive packets might give you some information about whether something is wrong in the network or not.
diff_delay = delay_t0 - delay_t1
If diff_delay equals 0, the delay is the same; if it is greater than 0, the newly arrived packet needed more time than the previous one; and if it is smaller than 0, it needed less time.
And from that relative information, based on two consecutive delays, you can say something.
How do you determine the difference between two delays if the clocks are not synchronized?
Suppose you stored the previous timestamps in rcv_timestamp_t1 and snd_timestamp_t1:
diff_delay = (rcv_timestamp_t0 - snd_timestamp_t0) - (rcv_timestamp_t1 - snd_timestamp_t1)
but that would be a problem without knowing the base offsets of the sender and the receiver, so reorder it:
diff_delay = (rcv_timestamp_t0 - rcv_timestamp_t1) - (snd_timestamp_t0 - snd_timestamp_t1)
Here you subtract the rcv timestamps from each other, which eliminates the receiver's base offset, and the snd timestamps from each other, which eliminates the sender's; subtracting snd_diff from rcv_diff then gives you the difference of the delays of two consecutive packets, in clock-rate units.
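A short Python sketch of that calculation (names are mine), using 32-bit modular arithmetic so the timestamp wrap-around mentioned earlier is handled:

    # Difference of the delays of two consecutive packets, computed only from
    # timestamps; 32-bit modular arithmetic handles timestamp wrap-around.
    MOD = 1 << 32

    def ts_diff(later, earlier):
        d = (later - earlier) % MOD              # unsigned 32-bit difference
        return d - MOD if d >= MOD // 2 else d   # reinterpret as signed

    def diff_delay(rcv_t0, snd_t0, rcv_t1, snd_t1):
        # (rcv_t0 - rcv_t1) - (snd_t0 - snd_t1): base offsets cancel out
        return ts_diff(rcv_t0, rcv_t1) - ts_diff(snd_t0, snd_t1)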
Now, according to RFC3550 jitter is "An estimate of the statistical variance of the RTP data packet interarrival time".
To finally get to the point, your question was:
"What is the difference between the delay and the jitter in the context of real time applications?"
A tiny note: "real-time applications" usually refers to systems processing data within nanosecond ranges, so I think you are referring to end-to-end systems.
Also, despite several slightly different definitions of jitter, they all use the difference between the delays of arriving packets and thus give you information about the relative changes of the network delay, whereas delay itself is an absolute value of the delivery time.

does TCP randomly select a time from an interval to decide when a timeout has occured?

In TCP how is the time for a timeout to happen determined? I was told it is randomly selected from an interval that doubles after each timeout, but nothing I found on Google mentions anything about random selection; instead it says it's calculated using the Smoothed Round Trip Time after the first acknowledgment is received. Does it do this for each packet, or is there some randomness to the design?
An initial value of the RTT is calculated during the TCP 3-way handshake that starts a connection. It is updated thereafter whenever qualifying send/ACK pairs are seen.
Most modern implementations don't use this method directly, but rather use a statistical analysis of the maximum time it should take to get an ACK, and retransmit after that interval. The "exponential backoff" (the doubling of the wait interval) happens for further retransmissions of the same data.
A connection "times out" after some number of transmissions with no ACK being received.

defining the time it takes to do something (latency, throughput, bandwidth)

I understand latency - the time it takes for a message to go from sender to recipient - and bandwidth - the maximum amount of data that can be transferred over a given time - but I am struggling to find the right term to describe a related thing:
If a protocol is conversation-based - the payload is split up over many to-and-fros between the ends - then latency affects 'throughput' [1].
[1] What is this called, and is there a nice concise explanation of this?
Surfing the web, trying to optimize the performance of my NAS (nas4free), I came across a page that described (imho) the answer to this question. Specifically, this section caught my eye:
"In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn’t send more until it receives an acknowledgement from the remote host that all data was received. This is called the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together. This “windowing” happens so fast we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called “Bandwidth Delay Product,” or BDP."
This sounds like the answer to your question.
BDP as Wikipedia describes it
To conclude, it's called Bandwidth Delay Product (BDP) and the shortest explanation I've found is the one above. (Flexo has noted this in his comment too.)
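A quick worked example (function name is mine) of what the quote describes:

    # BDP = bandwidth * RTT: the amount of data that must be "in flight"
    # (and hence the minimum window) to keep the link full.
    def bdp_bytes(bandwidth_bps, rtt_seconds):
        return bandwidth_bps * rtt_seconds / 8

    print(bdp_bytes(100e6, 0.080))  # 100 Mbit/s at 80 ms RTT -> 1,000,000 bytes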
Could goodput be the term you are looking for?
According to Wikipedia:
In computer networks, goodput is the application level throughput, i.e. the number of useful bits per unit of time forwarded by the network from a certain source address to a certain destination, excluding protocol overhead, and excluding retransmitted data packets.
Wikipedia Goodput link
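A small sketch of the distinction (names and the byte-count breakdown are mine):

    # Goodput counts only useful application bytes, so it excludes protocol
    # headers and retransmitted data and is always <= raw throughput.
    def throughput_and_goodput(wire_bytes, header_bytes, retrans_bytes, seconds):
        throughput = wire_bytes * 8 / seconds
        goodput = (wire_bytes - header_bytes - retrans_bytes) * 8 / seconds
        return throughput, goodput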
The problem you describe arises in communications which are synchronous in nature. If there were no need to acknowledge receipt of information, and it were certain to arrive, then the sender could send as fast as possible and the throughput would be good regardless of the latency.
When there is a requirement for things to be acknowledged, it is this synchronisation that causes the drop in throughput, and the degree to which the communication (i.e. the sending of acknowledgments) is allowed to be asynchronous controls how much it hurts the throughput.
'Round-trip time' links latency and number of turns.
Or: Network latency is a function of two things:
(i) round-trip time (the time it takes to complete a trip across the network); and
(ii) the number of times the application has to traverse it (aka turns).
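A short sketch of why the turns matter (function name and example numbers are mine):

    # With a fixed window of data per round trip, effective throughput is
    # capped at window / RTT, no matter how fast the underlying link is.
    def max_throughput_bps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds

    print(max_throughput_bps(65_535, 0.080))  # 64 KiB window, 80 ms RTT -> ~6.5 Mbit/s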
