Wireshark Epochtime vs MAC Timestamp - networking

I am using Wireshark to capture and analyse data from the radiotap header and the frame metadata. I would like to know whether the epoch time in the frame metadata is the arrival time at the router or the arrival time at my device.
I know the epoch time is the time elapsed since the Epoch (1 January 1970), so I am treating it as the time my device captured the packet.
I am working on a project that uses the difference between the MAC timestamp (the stamp applied by the router) and the time my device captured the packet, and I am using those two fields to compute that difference.
My doubt comes from the fact that this difference always falls in the interval of 0.25 to 2.75 microseconds, regardless of where my computer was capturing the packets. So I wanted to make sure that:
The MAC timestamp is a timestamp the access point (my router) puts into the packet before sending it out, and the epoch time is a timestamp my computer adds to the frame metadata when the packet is captured.
Is that correct? If not, how can I determine the time my packet was captured?
Edit: Fixing field name for Epoch time.

As it turns out, both are the same. The MAC timestamp is part of the metadata added by the capturing software, as are the epoch time and the arrival time.
The timestamp that the access point actually writes into the packet is in base 16 (hexadecimal) and is found in the frame details, under the Fixed Parameters of Beacon and Probe frames. In Wireshark, the field is simply labelled "Timestamp:".

Related

What do the adjust reasons in the BLE current time characteristic mean?

I am implementing a CTS (current time service) synchronization method. Here is the documentation on the current time characteristic (one of the characteristics in the CTS).
What do 'Manual time update' and 'External reference time update' mean, and what do they change on the device I am writing the current time on?
I have searched SO, the web, and the Bluetooth SIG group, but have not found any further explanation beyond the names of those fields.
The specification document for the Current Time Service defines these values (the PDF can be downloaded from bluetooth.com).
Specifically, section 3.1.2 Characteristic Behavior -- Notification:
The server device shall set the Adjust Reason field in the Current Time to reflect the
reason for the last adjustment of the local time on the server device.
...
3.1.2.1 Manual Time Update
If the time information on the server device was set / changed manually,
the “Manual Time Update” bit shall be set.
Note: If the time zone or DST offset were changed manually, this bit shall also be set.
3.1.2.2 External Reference Time Update
If the server device received time information from an
external time reference source,
the External Reference Time Update bit shall be set.
3.1.2.3 Change of Time Zone
If the time information on the server device was set / adjusted
because of change of time zone, the “Change of Time Zone” bit shall be set.
Note: Following 3.1.2.1, if the time zone was changed manually the “Manual Time
Update” bit will also be set.
3.1.2.4 Change of DST Offset
If the time information on the server device was set / adjusted
because of change of DST offset, the “Change of DST offset” bit shall be set.
Note: Following 3.1.2.1, if the DST offset was changed manually, the “Manual Time
Update” bit will also be set.
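To make the field concrete, here is a minimal sketch of how a client might decode the Adjust Reason octet. The bit positions are an assumption based on the order in which the spec lists the reasons (bit 0 = manual, bit 1 = external reference, bit 2 = time zone, bit 3 = DST offset); verify them against your revision of the Current Time Service spec.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical decoder for the CTS Adjust Reason octet. Bit positions assumed
// from the order the reasons appear in the spec text above.
std::vector<std::string> decodeAdjustReason(uint8_t adjustReason)
{
    std::vector<std::string> reasons;
    if (adjustReason & 0x01) reasons.push_back("Manual time update");
    if (adjustReason & 0x02) reasons.push_back("External reference time update");
    if (adjustReason & 0x04) reasons.push_back("Change of time zone");
    if (adjustReason & 0x08) reasons.push_back("Change of DST offset");
    return reasons;
}

int main()
{
    // Example: time zone changed manually -> both bits set, per 3.1.2.3.
    for (const auto& r : decodeAdjustReason(0x01 | 0x04))
        std::cout << r << '\n';
}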

How does timestamp in TCP header work?

I am a newbie to networking. I was analysing Wireshark TCP dumps and found a TCP header timestamp value of 4016140, which dates back to 1970. Am I missing something?
TCP timestamps are reset to 0 on host reboot and incremented by an OS-dependent algorithm (for example, incremented 100 times per second); they are not based on the host clock / wall time.
Agree with what @VenkatC said. Note that the standard (RFC 1323) does not specify the unit of the timestamp value. It is possible to measure the unit this way: with Wireshark, capture a TCP session, look at two packets sent by your PC, and compare the difference between their Wireshark arrival timestamps with the difference between the TCP Timestamp value fields of the two packets.
In my case, the Wireshark timestamp difference is 160 ms and the difference in TCP header timestamp values is 40, so the unit is 4 ms. It may be different on your system.
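A minimal sketch of that calculation, assuming you have already read the arrival times and TSval fields of two packets off the capture by hand (the figures below are examples consistent with the numbers quoted above):
#include <cstdint>
#include <iostream>

int main()
{
    // Values read manually from a Wireshark capture (illustrative figures).
    double   arrival1 = 0.000;      // arrival time of first packet, seconds
    double   arrival2 = 0.160;      // arrival time of second packet, seconds
    uint32_t tsval1   = 4016140;    // TCP Timestamps option TSval of first packet
    uint32_t tsval2   = 4016180;    // TSval of second packet

    double   elapsedMs = (arrival2 - arrival1) * 1000.0;   // 160 ms
    uint32_t ticks     = tsval2 - tsval1;                   // 40 ticks

    // Approximate duration of one timestamp tick on the sending host.
    std::cout << "ms per tick: " << elapsedMs / ticks << '\n';   // ~4 ms
}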

Timestamp usage in rtp

I have been reading about RTP and I have one question.
From what I understand, the timestamp in RTP is used to calculate jitter and to de-jitter the packets. I basically need it for TSoIP: I just need to extract the ASI from the IP stream and pass it to the modulator for processing.
I would really appreciate it if someone could help me understand how the timestamp is used at the receiver in order to receive ASI over IP. In other words, I didn't find any good reference explaining what the timestamp is and how it works.
Here you have a detailed discussion about the jitter and RTP.
The main aspect you are interested is the following:
In the Real Time Protocol, jitter is measured in timestamp units. For example, if you transmit audio sampled at the usual 8000 Hertz, the unit is 1/8000 of a second.
In that page they also mention the relation between the timestamp and the receiver: The difference of relative transit times for the two packets is computed as:
D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
Si is the timestamp from packet i and Ri is the time of arrival for packet i. You can also check some examples there.
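For completeness, here is a small sketch of the interarrival-jitter calculation as RFC 3550 describes it. Both the RTP timestamp S and the arrival time R are expressed in timestamp units (e.g. 1/8000 s for 8 kHz audio), and the running estimate is smoothed with a 1/16 gain:
#include <cmath>
#include <iostream>

// Running interarrival jitter estimate per RFC 3550, section 6.4.1.
// S: RTP timestamp of the packet, R: arrival time converted to the same units.
struct JitterEstimator
{
    double jitter      = 0.0;
    bool   havePrev    = false;
    double prevTransit = 0.0;   // R - S of the previous packet

    void onPacket(double R, double S)
    {
        double transit = R - S;
        if (havePrev)
        {
            double D = transit - prevTransit;            // D(i-1, i)
            jitter += (std::fabs(D) - jitter) / 16.0;    // J = J + (|D| - J)/16
        }
        prevTransit = transit;
        havePrev    = true;
    }
};

int main()
{
    JitterEstimator est;
    // Arrival times and RTP timestamps, both in timestamp units (illustrative).
    double arrivals[]   = {1000, 1165, 1322, 1485};
    double timestamps[] = {   0,  160,  320,  480};
    for (int i = 0; i < 4; ++i)
        est.onPacket(arrivals[i], timestamps[i]);
    std::cout << "jitter (timestamp units): " << est.jitter << '\n';
}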

Bandwidth estimation with multiple TCP connections

I have a client which issues parallel requests for data from a server. Each request uses a separate TCP connection. I would like to estimate the available throughput (bandwidth) based on the received data.
I know that for a single TCP connection I can do so by dividing the amount of data that has been downloaded by the time it took to download it. But given that there are multiple concurrent connections, would it be correct to sum up all the data downloaded across the connections and divide that sum by the duration between sending the first request and the arrival of the last byte (i.e., the last byte of the download that finishes last)? Or am I overlooking something here?
[This is a rewrite of my previous answer, which was getting too messy]
There are two components that we want to measure in order to calculate throughput: the total number of bytes transferred, and the total amount of time it took to transfer those bytes. Once we have those two figures, we just divide the byte-count by the duration to get the throughput (in bytes-per-second).
Calculating the number of bytes transferred is trivial; just have each TCP connection tally the number of bytes it transferred, and at the end of the sequence, we add up all of the tallies into a single sum.
Calculating the amount of time it takes for a single TCP connection to do its transfer is likewise trivial: just record the time (t0) at which the TCP connection received its first byte, and the time (t1) at which it received its last byte, and that connection's duration is (t1-t0).
Calculating the amount of time it takes for the aggregate process to complete, OTOH, is not so obvious, because there is no guarantee that all of the TCP connections will start and stop at the same time, or even that their download-periods will intersect at all. For example, imagine a scenario where there are five TCP connections, and the first four of them start immediately and finish within one second, while the final TCP connection drops some packets during its handshake, and so it doesn't start downloading until 5 seconds later, and it also finishes one second after it starts. In that scenario, do we say that the aggregate download process's duration was 6 seconds, or 2 seconds, or ???
If we're willing to count the "dead time" where no downloads were active (i.e. the time between t=1 and t=5 above) as part of the aggregate-duration, then calculating the aggregate-duration is easy: Just subtract the smallest t0 value from the largest t1 value. (this would yield an aggregate duration of 6 seconds in the example above). This may not be what we want though, because a single delayed download could drastically reduce the reported bandwidth estimate.
A possibly more accurate way to do it would be to say that the aggregate duration should only include time periods when at least one TCP download was active; that way the result does not include any dead time, and is thus perhaps a better reflection of the actual bandwidth of the network path.
To do that, we need to capture the start-times (t0s) and end-times (t1s) of all TCP downloads as a list of time-intervals, and then merge any overlapping time-intervals as shown in the sketch below. We can then add up the durations of the merged time-intervals to get the aggregate duration.
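Here is a sketch of that interval-merging step, assuming each download has recorded its own (t0, t1) pair in seconds; divide the total byte count by the returned duration to get the aggregate throughput:
#include <algorithm>
#include <iostream>
#include <vector>

// One (start, end) interval per TCP download, in seconds.
struct Interval { double t0, t1; };

// Sum of the durations of the merged (union of) intervals, i.e. the aggregate
// time during which at least one download was active.
double activeDuration(std::vector<Interval> iv)
{
    if (iv.empty()) return 0.0;
    std::sort(iv.begin(), iv.end(),
              [](const Interval& a, const Interval& b) { return a.t0 < b.t0; });
    double total = 0.0;
    double curStart = iv[0].t0, curEnd = iv[0].t1;
    for (size_t i = 1; i < iv.size(); ++i)
    {
        if (iv[i].t0 <= curEnd)                     // overlaps the current run
            curEnd = std::max(curEnd, iv[i].t1);
        else                                        // gap: close the current run
        {
            total   += curEnd - curStart;
            curStart = iv[i].t0;
            curEnd   = iv[i].t1;
        }
    }
    total += curEnd - curStart;
    return total;
}

int main()
{
    // The example from above: four fast downloads plus one delayed one.
    std::vector<Interval> downloads = {{0, 1}, {0, 1}, {0, 1}, {0, 1}, {5, 6}};
    std::cout << "aggregate duration: " << activeDuration(downloads) << " s\n";  // 2 s
}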
You need to do a weighted average. Let B(n) be the bytes processed for connection 'n' and T(n) be the time required to process those bytes. The total throughput is:
// B(n): bytes transferred by connection n; T(n): time taken by connection n (seconds)
double throughput = 0;
for (int n = 0; n < Nmax; ++n)
{
    throughput += B(n) / T(n);   // per-connection throughput
}
throughput /= Nmax;              // average across the Nmax connections

Measuring time difference between networked devices

I'm adding networked multiplayer to a game I've made. When the server sends an update packet to the client, I include a timestamp so that the client knows exactly when that information is valid. However, the server computer and the client computer might have their clocks set to different times (maybe even just a few seconds difference), so the timestamp from the server needs to be translated to the client's local time.
So, I'd like to know the best way to calculate the time difference between the server and the client. Currently, the client pings the server for a time stamp during initialization, takes note of when the request was sent and when it was answered, and guesses that the time stamp was generated roughly halfway along the journey. The client also runs 10 of these trials and takes the average.
But, the problem is that I'm getting different results over repeated runs of the program. Within each set of 10, each measurement rarely diverges by more than 400 milliseconds, which might be acceptable. But if I wait a few minutes between each run of the program, the resulting averages might disagree by as much as 2 seconds, which is not acceptable.
Is there a better way to figure out the difference between the clocks of two networked devices? Or is there at least a way to tweak my algorithm to yield more accurate results?
Details that may or may not be relevant: The devices are iPod Touches communicating over Bluetooth. I'm measuring pings to be anywhere from 50-200 milliseconds. I can't ask the users to sync up their clocks. :)
Update: With the help of the answers below, I wrote an Objective-C class to handle this. I posted it on my blog: http://scooops.blogspot.com/2010/09/timesync-was-time-sink.html
I recently took a one-hour class on this and it wasn't long enough, but I'll try to boil it down to get you pointed in the right direction. Get ready for a little algebra.
Let s equal the time according to the server. Let c equal the time according to the client. Let d = s - c. d is what is added to the client's time to correct it to the server's time, and is what we need to solve for.
First we send a packet from the server to the client with a timestamp. When that packet is received at the client, it stores the difference between the given timestamp and its own clock as t1.
The client then sends a packet to the server with its own timestamp. The server sends the difference between the timestamp and its own clock back to the client as t2.
Note that t1 and t2 both include the "travel time" t of the packet plus the time difference between the two clocks d. Assuming for the moment that the travel time is the same in both directions, we now have two equations in two unknowns, which can be solved:
t1 = t - d
t2 = t + d
t1 + d = t2 - d
d = (t2 - t1)/2
The trick comes because the travel time is not always constant, as evidenced by your pings between 50 and 200 ms. It turns out to be most accurate to use the timestamps with the minimum ping time. That's because your ping time is the sum of the "bare metal" delay plus any delays spent waiting in router queues. Every once in a while, a lucky packet gets through without any queuing delays, so you use that minimum time as the most repeatable time.
Also keep in mind that clocks run at different rates. For example, I can reset my computer at home to the millisecond and a day later it will be 8 seconds slow. That means you have to continually readjust d. You can use the slope of various values of d computed over time to calculate your drift and compensate for it in between measurements, but that's beyond the scope of an answer here.
Hope that helps point you in the right direction.
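Here is a sketch of that idea in code, combining the question's "assume the timestamp was generated halfway through the round trip" approach with the advice above to trust only the exchange with the smallest ping. The Sample field names are hypothetical; clientSend/clientRecv are the client's clock readings around the request and serverTime is the timestamp the server returned:
#include <iostream>
#include <limits>
#include <vector>

// One round-trip sample: the client's clock when the request was sent and when
// the reply arrived, plus the server timestamp carried in the reply.
struct Sample { double clientSend, serverTime, clientRecv; };

// Estimate d = serverClock - clientClock, using only the sample with the
// smallest round-trip time (least likely to have sat in router queues).
double estimateOffset(const std::vector<Sample>& samples)
{
    double bestRtt = std::numeric_limits<double>::max();
    double offset  = 0.0;
    for (const Sample& s : samples)
    {
        double rtt = s.clientRecv - s.clientSend;
        if (rtt < bestRtt)
        {
            bestRtt = rtt;
            // Assume the server stamped the reply roughly halfway through the trip.
            offset = s.serverTime - (s.clientSend + rtt / 2.0);
        }
    }
    return offset;
}

int main()
{
    std::vector<Sample> samples = {
        {10.000, 12.530, 10.180},   // 180 ms RTT
        {11.000, 13.526, 11.052},   // 52 ms RTT -> this one is used
    };
    std::cout << "estimated offset: " << estimateOffset(samples) << " s\n";
}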
Your algorithm will not be much more accurate unless you can use some statistical methods. First of all, 10 is probably not sufficient. The first and simplest change would be to gather 100 transit time samples and toss out the x longest and shortest.
Another thing to add would be that both clients send their own timestamp in each packet. Then you can also calculate how different their clocks are and check the average difference between the clocks.
You can also look at SNTP and NTP implementations, as these protocols are designed to do exactly this.
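A sketch of that trimming step, assuming the samples are per-exchange clock-offset estimates and we discard the x smallest and x largest before averaging:
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// Average of the offset samples after discarding the x smallest and x largest,
// so outliers caused by unusually slow round trips don't skew the result.
double trimmedMean(std::vector<double> samples, size_t x)
{
    std::sort(samples.begin(), samples.end());
    if (samples.size() <= 2 * x) return 0.0;    // not enough data to trim
    double sum = std::accumulate(samples.begin() + x, samples.end() - x, 0.0);
    return sum / (samples.size() - 2 * x);
}

int main()
{
    // Gather ~100 offset estimates in practice; a handful here for illustration.
    std::vector<double> offsets = {2.01, 1.98, 2.00, 2.45, 1.55, 2.02, 1.99};
    std::cout << "trimmed mean offset: " << trimmedMean(offsets, 1) << " s\n";  // 2.00
}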
