I have read about RTP and I have one question.
From what I understand, the timestamp in RTP is used to calculate jitter so that we can de-jitter our packets. Basically, I need it for TS over IP (TSoIP): I just need to extract the ASI from the IP stream and pass it to a modulator for processing.
I would really appreciate it if someone could help me understand how the timestamp is used at the receiver in order to receive ASI over IP. In other words, I couldn't find any good reference explaining what the timestamp is and how it works.
Here you have a detailed discussion about the jitter and RTP.
The main aspect you are interested in is the following:
In the Real Time Protocol, jitter is measured in timestamp units. For example, if you transmit audio sampled at the usual 8000 Hertz, the unit is 1/8000 of a second.
On that page they also mention the relation between the timestamp and the receiver. The difference of relative transit times for the two packets is computed as:
D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
Si is the timestamp from packet i and Ri is the time of arrival for packet i. You can also check some examples there.
According to Wikipedia, jitter is the undesired deviation from true periodicity of an assumed periodic signal; according to a paper on QoS that I am reading, jitter is referred to as delay variation. Is there a definition of jitter in the context of real-time applications? Are there applications that are sensitive to jitter but not sensitive to delay? If, for example, a streaming application uses some kind of buffer to store packets before showing them to the user, is it possible that this application is not sensitive to delay but is sensitive to jitter?
Delay: the amount of time data (a signal) takes to reach the destination. A higher delay generally means congestion of some sort or a break in the communication link.
Jitter: the variation in delay over time. This happens when a system is not in a deterministic state; video streaming, for example, suffers from jitter a lot because the amount of data transferred is quite large, so there is no way of saying how long the transfer might take.
If your application is sensitive to jitter it is definitely sensitive to delay.
In the Real-time Transport Protocol (RTP, RFC 3550), the header contains a timestamp field. Its value usually comes from a monotonically incremented counter, and the frequency of the increment is the clock rate. This clock rate must be the same for every participant that wants to do anything with the timestamp field. The counters have different base offsets, because the start times may differ, because they are randomized for security reasons, etc. All in all, we say the clocks are not synchronized.
To show this in an example, let snd_timestamp and rcv_timestamp be, for the most recent packet, the sender timestamp taken from the RTP header field and the receiver timestamp generated by the receiver using the same clock rate.
The wrong conclusion would be that
delay_in_timestamp_unit = rcv_timestamp - snd_timestamp
Since the receiver and sender clocks have different base offsets (and they do), this does not give you the delay; it also doesn't account for the wrap-around of the 32-bit unsigned integer.
But monitoring packet delivery times is necessary if we want a proper playout adaptation algorithm or if we want to detect and avoid congestion.
Also note that even if we had synchronized clocks, delay_in_timestamp_unit might not represent the pure network delay exactly, because components on the sender or receiver side may hold the packets after the timestamp is added or before it is examined. So if you calculate a 2-second delay between the participants, but you know your network delay is around 100 ms, then your packets suffer additional delays on the sender and/or receiver side. But that additional delay is (or at least you hope it is) constant, so the only delay that changes over time is, hopefully, the network delay. You should therefore not say "if packet delay > 500 ms then we have congestion", because you have no idea what the actual network delay is if you only use a single pair of sender and receiver timestamps.
But the difference between the delays of two consecutive packets can give you some information about whether something is wrong in the network or not.
diff_delay = delay_t0 - delay_t1
If diff_delay equals 0, the delay is the same; if it is greater than 0, the newly arrived packet needed more time than the previous one; and if it is smaller than 0, it needed less time.
From that relative information, based on two consecutive delays, you can already say something.
How do you determine the difference between two delays if the clocks are not synchronized?
Suppose you stored the previous timestamps in rcv_timestamp_t1 and snd_timestamp_t1. Then:
diff_delay = (rcv_timestamp_t0 - snd_timestamp_t0) - (rcv_timestamp_t1 - snd_timestamp_t1)
but that would be a problem without keeping track of the base offsets of the sender and the receiver, so we reorder it:
diff_delay = (rcv_timestamp_t0 - rcv_timestamp_t1) - (snd_timestamp_t0 - snd_timestamp_t1)
Here, subtracting the rcv timestamps from each other eliminates the receiver's offset, and subtracting the snd timestamps eliminates the sender's offset; then subtracting snd_diff from rcv_diff gives you the difference between the delays of two consecutive packets in units of the clock rate.
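A minimal C sketch of this calculation (function and variable names are mine, not taken from any particular RTP stack); unsigned 32-bit subtraction wraps around naturally, which is also what takes care of the timestamp wrap-around mentioned above:

    #include <stdint.h>
    #include <stdio.h>

    /* Relative delay change between two consecutive packets, in clock-rate
     * units. Unsigned 32-bit subtraction wraps modulo 2^32, so the sender
     * and receiver base offsets cancel out even across the wrap boundary. */
    static int32_t diff_delay(uint32_t rcv_t0, uint32_t snd_t0,
                              uint32_t rcv_t1, uint32_t snd_t1)
    {
        uint32_t rcv_diff = rcv_t0 - rcv_t1;    /* receiver-side spacing */
        uint32_t snd_diff = snd_t0 - snd_t1;    /* sender-side spacing   */
        return (int32_t)(rcv_diff - snd_diff);  /* > 0: packet was slower */
    }

    int main(void)
    {
        /* Example: packets 160 ticks apart at an 8000 Hz clock (20 ms), but
         * the second packet arrives 40 ticks (5 ms) later than expected. */
        printf("%d\n", diff_delay(10200, 5160, 10000, 5000)); /* prints 40 */
        return 0;
    }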
Now, according to RFC3550 jitter is "An estimate of the statistical variance of the RTP data packet interarrival time".
To finally get to the point, your question was:
"What is the difference between the delay and the jitter in the context of real time applications?"
A tiny note: "real-time applications" usually refers to systems processing data within nanoseconds, so I think you actually mean end-to-end systems.
Also, despite the several slightly different definitions of jitter, they all use the difference of the delays of arriving packets and thus give you information about the relative changes of the network delay, whereas delay itself is an absolute value of the delivery time.
I'm trying to capture packets and reassemble them to obtain the original HTTP request.
I'm capturing packets with IPQUEUE (via an iptables rule), and I found that the packets are not captured in order.
I already know that in the TCP protocol packets have to be re-sequenced, so I'm trying to re-sequence them by sequence number.
According to Wikipedia, the TCP sequence number is a 32-bit number. So what happens when the sequence number reaches the maximum 32-bit value?
Because the sequence number of a SYN packet is a random number, I think this limit can be reached very quickly.
If anybody has a comment on this, or some helpful links, please leave me an answer.
From RFC-1185
Avoiding reuse of sequence numbers within the same connection is
simple in principle: enforce a segment lifetime shorter than the
time it takes to cycle the sequence space, whose size is
effectively 2**31.
If the maximum effective bandwidth at which TCP
is able to transmit over a particular path is B bytes per second,
then the following constraint must be satisfied for error-free
operation:
2**31 / B > MSL (secs)
So, in simpler words, TCP will take care of it.
In addition to this condition, TCP also has the concept of timestamps to handle the sequence number wrap-around. From the same RFC:
Timestamps carried from sender to receiver in TCP Echo options can
also be used to prevent data corruption caused by sequence number
wrap-around, as this section describes.
Specifically, TCP uses the PAWS mechanism to handle the wrap-around case.
You can find more information about PAWS in RFC-1323
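As a rough, simplified sketch of the PAWS idea from RFC 1323 (a real TCP stack does considerably more bookkeeping; the names here are mine): a segment whose timestamp value is older than the last timestamp accepted on the connection is assumed to belong to an earlier cycle of the sequence space and is discarded.

    #include <stdint.h>
    #include <stdbool.h>

    /* Modular "s1 is older than s2" test; timestamps, like sequence
     * numbers, wrap modulo 2^32. */
    static bool tsval_before(uint32_t s1, uint32_t s2)
    {
        return (int32_t)(s1 - s2) < 0;
    }

    /* Simplified PAWS acceptance check: ts_recent is the timestamp of
     * the last segment accepted in sequence on this connection. */
    static bool paws_accept(uint32_t seg_tsval, uint32_t ts_recent)
    {
        return !tsval_before(seg_tsval, ts_recent);
    }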
RFC793 Section 3.3:
It is essential to remember that the actual sequence number space is
finite, though very large. This space ranges from 0 to 2**32 - 1.
Since the space is finite, all arithmetic dealing with sequence
numbers must be performed modulo 2**32. This unsigned arithmetic
preserves the relationship of sequence numbers as they cycle from
2**32 - 1 to 0 again. There are some subtleties to computer modulo
arithmetic, so great care should be taken in programming the
comparison of such values.
Any arithmetic done on the sequence numbers is performed modulo 2^32.
In simple terms, the 32-bit unsigned number will wrap around:
...
0xFFFFFFFE
0xFFFFFFFF
0x00000000
0x00000001
...
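So a receiver that re-sequences TCP segments should compare sequence numbers with modular (serial-number) arithmetic rather than a plain less-than. A minimal C sketch of such a comparison (my own names, but the same trick the Linux kernel's before()/after() macros use):

    #include <stdint.h>
    #include <stdio.h>

    /* "a comes before b" in the circular 32-bit sequence space:
     * take the wrapped difference and check its sign as a signed value. */
    static int seq_before(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;
    }

    int main(void)
    {
        /* 0xFFFFFFF0 was sent first, 0x00000010 was sent after the wrap. */
        printf("%d\n", seq_before(0xFFFFFFF0u, 0x00000010u)); /* 1: earlier */
        printf("%d\n", seq_before(0x00000010u, 0xFFFFFFF0u)); /* 0: later   */
        return 0;
    }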
Have a look at the TCP timestamps section of https://en.wikipedia.org/wiki/Transmission_Control_Protocol
I've read that iperf basically tries to send as much data down a connection as quickly as possible and reports the throughput achieved. This tool is especially useful for determining the volume of data that the links between two machines can carry.
Is it possible to gather the same results by sending regular data, i.e. real traffic rather than test data?
What I'm trying to do is sending data in the foreground while gathering statistics in the background (throughput and jitter).
How does iperf calculate these two values?
This is the closest thing I've found
http://openmaniak.com/iperf.php
I had a similar question about how iperf works. Please refer to the following post, where I did some research and gave an overview.
How iperf calculates network statistics
Generally, iperf embeds a timestamp and a sequence number in the payload on the sender side. When the receiver receives a packet, it extracts this content and calculates the statistics. You can find more detail in the post.
Throughput is simple: assuming the client is saturating the network, the server only needs to count the received bytes, and divide that by some duration.
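As a rough illustration (this is not iperf's actual code), the receiver-side throughput calculation boils down to counting bytes over an interval:

    #include <stdint.h>
    #include <stdio.h>

    /* Bytes received during an interval, converted to megabits per second.
     * In a real tool the byte counter is updated per received packet and
     * the interval comes from a monotonic clock. */
    static double throughput_mbps(uint64_t bytes, double interval_sec)
    {
        return (bytes * 8.0) / (interval_sec * 1e6);
    }

    int main(void)
    {
        /* e.g. 125 MB received in 10 s -> 100 Mbit/s */
        printf("%.1f Mbit/s\n", throughput_mbps(125000000ULL, 10.0));
        return 0;
    }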
This post explains this topic in a greater detail.
Iperf 2 calculates jitter for UDP only. It is based on what the RTP specification prescribes, as stated in the code.
RTP is used in implementations of audio streaming, where jitter plays a major role, so it's a good place to take the algorithm from - what iperf reports is what many jitter-sensitive applications would see.
See RFC 1889, section 6.3.1, "interarrival jitter" field:
The interarrival jitter J is defined to be the mean deviation (smoothed
absolute value) of the difference D in packet spacing at the receiver compared
to the sender for a pair of packets. As shown in the equation below, this is
equivalent to the difference in the "relative transit time" for the two
packets; the relative transit time is the difference between a packet's RTP
timestamp and the receiver's clock at the time of arrival, measured in the same
units.
If Si is the RTP timestamp from packet i, and Ri is the time of arrival in RTP
timestamp units for packet i, then for two packets i and j, D may be expressed
as
D(i,j)=(Rj-Ri)-(Sj-Si)=(Rj-Sj)-(Ri-Si)
The interarrival jitter is calculated continuously as each data packet i is
received from source SSRC_n, using this difference D for that packet and the
previous packet i-1 in order of arrival (not necessarily in sequence),
according to the formula
J=J+(|D(i-1,i)|-J)/16
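A minimal C sketch of that running estimator (variable names are mine, not iperf's): transit is the relative transit time Ri - Si, and everything stays in the same wrapping 32-bit timestamp units.

    #include <stdint.h>

    /* Running interarrival-jitter estimate as in RFC 1889 / RFC 3550:
     * J = J + (|D(i-1,i)| - J) / 16, kept in timestamp units. */
    struct jitter_state {
        uint32_t prev_transit;   /* R(i-1) - S(i-1), wraps modulo 2^32 */
        double   jitter;         /* current estimate J                 */
        int      have_prev;      /* set once the first packet is seen  */
    };

    static void jitter_update(struct jitter_state *st,
                              uint32_t rtp_timestamp,  /* S(i) from the header   */
                              uint32_t arrival)        /* R(i), same clock units */
    {
        uint32_t transit = arrival - rtp_timestamp;    /* R(i) - S(i) */
        if (st->have_prev) {
            int32_t d = (int32_t)(transit - st->prev_transit);  /* D(i-1,i) */
            if (d < 0)
                d = -d;
            st->jitter += ((double)d - st->jitter) / 16.0;
        }
        st->prev_transit = transit;
        st->have_prev = 1;
    }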
I'm adding networked multiplayer to a game I've made. When the server sends an update packet to the client, I include a timestamp so that the client knows exactly when that information is valid. However, the server computer and the client computer might have their clocks set to different times (maybe even just a few seconds difference), so the timestamp from the server needs to be translated to the client's local time.
So, I'd like to know the best way to calculate the time difference between the server and the client. Currently, the client pings the server for a time stamp during initialization, takes note of when the request was sent and when it was answered, and guesses that the time stamp was generated roughly halfway along the journey. The client also runs 10 of these trials and takes the average.
But, the problem is that I'm getting different results over repeated runs of the program. Within each set of 10, each measurement rarely diverges by more than 400 milliseconds, which might be acceptable. But if I wait a few minutes between each run of the program, the resulting averages might disagree by as much as 2 seconds, which is not acceptable.
Is there a better way to figure out the difference between the clocks of two networked devices? Or is there at least a way to tweak my algorithm to yield more accurate results?
Details that may or may not be relevant: The devices are iPod Touches communicating over Bluetooth. I'm measuring pings to be anywhere from 50-200 milliseconds. I can't ask the users to sync up their clocks. :)
Update: With the help of the below answers, I wrote an objective-c class to handle this. I posted it on my blog: http://scooops.blogspot.com/2010/09/timesync-was-time-sink.html
I recently took a one-hour class on this and it wasn't long enough, but I'll try to boil it down to get you pointed in the right direction. Get ready for a little algebra.
Let s equal the time according to the server. Let c equal the time according to the client. Let d = s - c. d is what is added to the client's time to correct it to the server's time, and is what we need to solve for.
First we send a packet from the server to the client with a timestamp. When that packet is received at the client, the client stores the difference between its own clock and the given timestamp as t1.
The client then sends a packet to the server with its own timestamp. The server sends the difference between its own clock and that timestamp back to the client as t2.
Note that t1 and t2 both include the "travel time" t of the packet plus the time difference between the two clocks d. Assuming for the moment that the travel time is the same in both directions, we now have two equations in two unknowns, which can be solved:
t1 = t - d
t2 = t + d
t1 + d = t2 - d
d = (t2 - t1)/2
The trick comes because the travel time is not always constant, as evidenced by your pings between 50 and 200 ms. It turns out to be most accurate to use the timestamps with the minimum ping time. That's because your ping time is the sum of the "bare metal" delay plus any delays spent waiting in router queues. Every once in a while, a lucky packet gets through without any queuing delays, so you use that minimum time as the most repeatable time.
Also keep in mind that clocks run at different rates. For example, I can reset my computer at home to the millisecond and a day later it will be 8 seconds slow. That means you have to continually readjust d. You can use the slope of various values of d computed over time to calculate your drift and compensate for it in between measurements, but that's beyond the scope of an answer here.
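Putting those pieces together, here is a hedged sketch (hypothetical structure and function names, not from any SDK) of estimating d from a batch of ping exchanges while keeping only the sample with the smallest round-trip time; the formula used is equivalent to d = (t2 - t1)/2 above when the server stamps the reply on receipt.

    #include <stdint.h>

    /* One ping exchange, all times in milliseconds: when the client sent
     * the request and received the reply (client clock), and the server's
     * clock value embedded in the reply. */
    struct ping_sample {
        int64_t client_send;
        int64_t client_recv;
        int64_t server_time;
    };

    /* Estimate d = server_clock - client_clock from the sample with the
     * smallest round-trip time; that sample is the least likely to have
     * sat in queues, so the "halfway" assumption holds best for it. */
    static int64_t estimate_offset(const struct ping_sample *s, int n)
    {
        int64_t best_rtt = INT64_MAX;
        int64_t offset = 0;

        for (int i = 0; i < n; i++) {
            int64_t rtt = s[i].client_recv - s[i].client_send;
            if (rtt < best_rtt) {
                best_rtt = rtt;
                /* Assume the server stamped the reply halfway through the
                 * round trip: server_time ~ client_send + rtt/2 + d. */
                offset = s[i].server_time - (s[i].client_send + rtt / 2);
            }
        }
        return offset;
    }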
Hope that helps point you in the right direction.
Your algorithm will not get much more accurate unless you can use some statistical methods. First of all, 10 samples are probably not sufficient. The first and simplest change would be to gather 100 transit-time samples and toss out the x longest and shortest.
Another thing to add would be that both clients send their own timestamp in each packet. Then you can also calculate how different their clocks are and check the average difference between the clocks.
You can also look specifically at SNTP and NTP implementations, as these protocols are designed to do exactly this.
The Ethernet II frame format does not contain a length field, and I'd like to understand how the end of a frame can be detected without it.
Unfortunately, I have no background in physics, but the following sounds reasonable to me: we assume that Layer 1 (the physical layer) provides us with a way of transmitting raw bits such that it is possible to distinguish the situation where bits are being sent from the situation where nothing is sent (if digital data were coded into analog signals via phase modulation, this would be true, for example, but I don't know whether that is really what's done). In this case, an Ethernet card could simply wait until a certain time interval occurs in which no more bits are transmitted, and then decide that the frame transmission has finished.
Is this really what's happening?
If yes: where can I find these things, and what are common values for the length of that "certain time interval"? And why does IEEE 802.3 have a length field?
If not: how is it done instead?
Thank you for your help!
Hanno
Your assumption is right. The length field inside the frame is not needed by layer 1.
Layer1 uses other means to detect the end of a frame which vary depending on the type of physical layer.
With 10Base-T, a frame is followed by a TP_IDL waveform. The lack of further Manchester-coded data bits can be detected.
With 100Base-T, a frame ends with an End of Stream Delimiter bit pattern that cannot occur in payload data (because of its 4B/5B encoding).
You can find a rough description here, for example:
http://ww1.microchip.com/downloads/en/AppNotes/01120a.pdf ("Ethernet Theory of Operation")