How to read the ntptime status code?

The NTP advanced configuration page contains the following output:
windl@elf:~ > ntptime
ntp_gettime() returns code 0 (OK)
time bd6b9cf2.9c3c6c60 Thu, Sep 14 2000 20:52:34.610, (.610297702),
maximum error 3480 us, estimated error 0 us.
ntp_adjtime() returns code 0 (OK)
modes 0x0 (),
offset 1.658 us, frequency 17.346 ppm, interval 128 s,
maximum error 3480 us, estimated error 0 us,
status 0x2107 (PLL,PPSFREQ,PPSTIME,PPSSIGNAL,NANO),
time constant 6, precision 3.530 us, tolerance 496 ppm,
pps frequency 17.346 ppm, stability 0.016 ppm, jitter 1.378 us,
intervals 57, jitter exceeded 29, stability exceeded 0, errors 0.
This is followed by: "The first thing you should look at is the status (0x2107 in our case). The magic words in parentheses explain the meaning of the individual bits."
They list 5 bit positions, but provide a 4-digit hex status code, which translates into 16 bits. Which bits represent the keywords in parentheses, and what do the other bits mean?

Here is a list of the NTP status codes:
STA_PLL 0x0001 enable PLL updates (rw)
STA_PPSFREQ 0x0002 enable PPS freq discipline (rw)
STA_PPSTIME 0x0004 enable PPS time discipline (rw)
STA_FLL 0x0008 select frequency-lock mode (rw)
STA_INS 0x0010 insert leap (rw)
STA_DEL 0x0020 delete leap (rw)
STA_UNSYNC 0x0040 clock unsynchronized (rw)
STA_FREQHOLD 0x0080 hold frequency (rw)
STA_PPSSIGNAL 0x0100 PPS signal present (ro)
STA_PPSJITTER 0x0200 PPS signal jitter exceeded (ro)
STA_PPSWANDER 0x0400 PPS signal wander exceeded (ro)
STA_PPSERROR 0x0800 PPS signal calibration error (ro)
STA_CLOCKERR 0x1000 clock hardware fault (ro)
STA_NANO 0x2000 resolution (0 = us, 1 = ns) (ro)
Source: ftp://ftp.ripe.net/test-traffic/ROOT/libDelay/Delay.h
From your listed example: status 0x2107 (PLL,PPSFREQ,PPSTIME,PPSSIGNAL,NANO).
Logically OR the corresponding bit masks together, 0x0001 | 0x0002 | 0x0004 | 0x0100 | 0x2000, and the result is 0x2107.
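As a cross-check, here is a minimal C sketch (the lookup table and the loop are my own; the mask values are the ones listed above) that decodes a status word back into the flag names ntptime prints:

    #include <stdio.h>

    /* Status bit masks as listed above. */
    #define STA_PLL       0x0001
    #define STA_PPSFREQ   0x0002
    #define STA_PPSTIME   0x0004
    #define STA_FLL       0x0008
    #define STA_INS       0x0010
    #define STA_DEL       0x0020
    #define STA_UNSYNC    0x0040
    #define STA_FREQHOLD  0x0080
    #define STA_PPSSIGNAL 0x0100
    #define STA_PPSJITTER 0x0200
    #define STA_PPSWANDER 0x0400
    #define STA_PPSERROR  0x0800
    #define STA_CLOCKERR  0x1000
    #define STA_NANO      0x2000

    int main(void)
    {
        const struct { unsigned bit; const char *name; } flags[] = {
            { STA_PLL, "PLL" },             { STA_PPSFREQ, "PPSFREQ" },
            { STA_PPSTIME, "PPSTIME" },     { STA_FLL, "FLL" },
            { STA_INS, "INS" },             { STA_DEL, "DEL" },
            { STA_UNSYNC, "UNSYNC" },       { STA_FREQHOLD, "FREQHOLD" },
            { STA_PPSSIGNAL, "PPSSIGNAL" }, { STA_PPSJITTER, "PPSJITTER" },
            { STA_PPSWANDER, "PPSWANDER" }, { STA_PPSERROR, "PPSERROR" },
            { STA_CLOCKERR, "CLOCKERR" },   { STA_NANO, "NANO" },
        };
        unsigned status = 0x2107;   /* value from the ntptime output above */

        printf("status 0x%04x (", status);
        for (size_t i = 0, n = 0; i < sizeof flags / sizeof flags[0]; i++)
            if (status & flags[i].bit)
                printf("%s%s", n++ ? "," : "", flags[i].name);
        printf(")\n");   /* prints: status 0x2107 (PLL,PPSFREQ,PPSTIME,PPSSIGNAL,NANO) */
        return 0;
    }

The bits not named in the output (FLL, INS, DEL, UNSYNC, FREQHOLD, and the PPS jitter/wander/error and clock-error bits) are simply clear in 0x2107.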
Some additional descriptions of the status codes are found here:
http://man7.org/linux/man-pages/man2/adjtimex.2.html

Related

How to convert 0x80 into a 7-bit variable-length integer?

I'm reading a book about network protocol structures.
There is an illustration in a chapter about variable-length quantities which I don't fully understand (see attachment).
The subject is to convert different numbers to variable-length 7-bit integers.
The first line shows that 0x3F is stored in a single octet as 0x3F.
The second line shows that 0x80 is stored in two octets one as 0x80 and second as 0x01.
However, I don't understand why it's not 0x81 in the first octet and 0x00 in the second.
According to Wikipedia, converting numbers into variable-length 7-bit integers works as follows:
1. Represent the value in binary notation (e.g. 137 as 10001001).
2. Break it up in groups of 7 bits starting from the least significant bit (e.g. 137 as 0000001 0001001). This is equivalent to representing the number in base 128.
3. Take the lowest 7 bits; that gives you the least significant byte (0000 1001). This byte comes last.
4. For all the other groups of 7 bits (in the example, this is 000 0001), set the MSB to 1 (which gives 1000 0001 in our example). Thus 137 becomes 1000 0001 0000 1001, where the leading bit of each byte is the one we added. These added bits denote whether there is another byte to follow. Thus, by definition, the very last byte of a variable-length integer will have 0 as its MSB.
So let's apply these steps to 0x80:
1. binary notation: 1000 0000
2. groups of 7 bits starting from the LSB: 0000001 0000000
3. and 4. set the MSB as described: 1000 0001 0000 0000
Converting that binary number into two hex octets gives me 0x81 and 0x00.
Which leads me to the question: is there a printing error in the book, or did I misunderstand something?
Which book is that?
There may be many possible encoding schemes. One of them could go like this:
1. Represent the value in binary notation (e.g. 0x80 as 10000000)
2. Break it up in groups of 7 bits starting from the lowest significant bit: 0000001 0000000
3. Start with the lowest 7 bits: if this is *not* the last group of 7 bits, then set its MSB: 10000000; if it's the last, then leave it alone: 00000001
4. Output starting LSB first: 10000000 00000001, i.e. 0x80 0x01
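A small C sketch of that scheme (essentially the common LEB128-style layout; the function name encode_vlq_lsb_first is my own) reproduces the book's bytes for 0x80:

    #include <stdio.h>
    #include <stdint.h>

    /* Encode an unsigned value least-significant group first, setting the
     * continuation bit (MSB) on every byte except the last, as in the
     * scheme sketched above. Returns the number of bytes written. */
    static int encode_vlq_lsb_first(uint32_t value, uint8_t *out)
    {
        int n = 0;
        do {
            uint8_t b = value & 0x7F;
            value >>= 7;
            if (value != 0)
                b |= 0x80;          /* more groups follow */
            out[n++] = b;
        } while (value != 0);
        return n;
    }

    int main(void)
    {
        uint8_t buf[5];
        int n = encode_vlq_lsb_first(0x80, buf);

        for (int i = 0; i < n; i++)
            printf("0x%02X ", buf[i]);   /* prints: 0x80 0x01, as in the book */
        printf("\n");
        return 0;
    }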
So what does the book say? What encoding scheme are they using?
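For comparison, the MSB-first procedure quoted from Wikipedia in the question can be sketched the same way (again, the function name is my own); encoding 0x80 with it gives 0x81 0x00, exactly as the asker computed by hand, while Wikipedia's own example 137 comes out as 0x81 0x09:

    #include <stdio.h>
    #include <stdint.h>

    /* MSB-first variable-length 7-bit encoding, following the Wikipedia
     * steps quoted in the question: most significant group first, with
     * the MSB set on every byte except the last. Returns bytes written. */
    static int encode_vlq_msb_first(uint32_t value, uint8_t *out)
    {
        uint8_t groups[5];
        int n = 0;

        do {                          /* split into 7-bit groups, LSB group first */
            groups[n++] = value & 0x7F;
            value >>= 7;
        } while (value != 0);

        for (int i = 0; i < n; i++) { /* emit most significant group first */
            uint8_t b = groups[n - 1 - i];
            out[i] = (i < n - 1) ? (b | 0x80) : b;
        }
        return n;
    }

    int main(void)
    {
        uint32_t samples[] = { 0x80, 137 };
        for (size_t s = 0; s < 2; s++) {
            uint8_t buf[5];
            int n = encode_vlq_msb_first(samples[s], buf);
            printf("%u ->", (unsigned)samples[s]);
            for (int i = 0; i < n; i++)
                printf(" 0x%02X", buf[i]);
            printf("\n");             /* 128 -> 0x81 0x00, 137 -> 0x81 0x09 */
        }
        return 0;
    }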

Calculating total transmission time of a packet

I'm having some difficulty calculating the total time it takes a packet to get from A to B. The question is:
"We have 200 bytes of data to send from A to B, with a distance of 200km between them. Calculate the total transmission time, assuming the speed of the signal is 200,000 km/s and that the data rate is 1Mbps and that a header of 40 bytes has to be added to the data before it is sent."
My understanding is that at some point you need to factor in propagation and the speed of light (??), but I'm unsure whether it's needed in this case. Is there a formula which can be used to work these types of questions out?
So we have a total of 200 bytes of payload + 40 bytes of header = 240 bytes. The data can be put on the wire at a rate of 1 Mbps, which equals 1,000,000 bits per second (unless the question actually means Mibps, which is 1,048,576 bits per second; we'll work on the assumption that Mbps is correct and it's 1,000,000).
240 bytes is equal to 1920 bits (240 * 8), so it takes
1920 bits / 1,000,000 bits per second = 0.00192 seconds
to get the data on the wire.
Now, for the data to be transmitted, it has to travel 200 km at a rate of 200,000 km/s.
200km / 200,000(km/s) = 0.001 seconds.
Now, to take the data from the wire and read it into the computer at location B takes the same amount of time as putting the data on the wire = 0.00192 seconds.
So the total amount of time is equal to
0.00192 + 0.001 + 0.00192 = 0.00484 seconds = 4.84 milliseconds.
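The same arithmetic as a short C sketch (the variable names are mine; it follows this answer's model of counting the serialization time at both A and B plus one propagation delay):

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the question. */
        const double payload_bytes = 200.0;
        const double header_bytes  = 40.0;
        const double data_rate_bps = 1e6;        /* 1 Mbps */
        const double distance_km   = 200.0;
        const double signal_speed  = 200000.0;   /* km/s */

        double bits            = (payload_bytes + header_bytes) * 8.0;  /* 1920 bits  */
        double serialization_s = bits / data_rate_bps;                  /* 0.00192 s  */
        double propagation_s   = distance_km / signal_speed;            /* 0.001 s    */

        /* Put the data on the wire, let it propagate, then read it off at B. */
        double total_s = serialization_s + propagation_s + serialization_s;

        printf("total = %.5f s = %.2f ms\n", total_s, total_s * 1000.0); /* 4.84 ms */
        return 0;
    }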

Baud Rate Calculation

I have a 3-axis serial accelerometer dongle connected using an RS-232 cable. I have set the baud rate to 9600 and I'm getting 80 XXXX-YYYY-ZZZZ readouts per second. I am trying to justify why it shows 80 readings in a second, and here is my calculation:
2 Bytes of data x (1 Start bit + 1 Stop bit + 8 bits) = 20 bits
20 bits x 3 axis x 80 readouts = 4800 bits
I'm getting 4800 bits instead of 9600 bits, so I am wondering: did I miss anything in justifying the 80 readouts?
Thanks guys :)
You indicate that you're getting 80 XXXX-YYYY-ZZZZ readouts per second. I'm assuming this is ASCII, so each digit is one byte.
So each "message" is len('XXXX-YYYY-ZZZZ')*8 = 112 bits long. Add a start and stop bit and you have 114. Multiply that times 80 messages per second, and you're transmitting 9120 bits per second, which is much close to the theoretical limit.

Maximum throughput for a sliding window data transmission

I am trying to understand how to calculate the throughput for a sliding window data transmission, by solving some numerical examples. Below is the example followed by my analysis.
Example
Host A is sending data to Host B over a full duplex link. A and B are using sliding window protocol, with send and receive window sizes of 5 each. Data packets sent only from A to B, are 1000 bytes each in size, and transmission time for one such packet is 50 us. Propagation delay is 200 us. Assume Ack packets need negligible transmission time. What is the maximum achievable throughput in Mbps?
A. 7.69 B. 11.11 C. 12.33 D. 15.00
Analysis
Round-trip time is 2 * 200 us = 400 us. ... A
Time required to fill the sending window = window size (5) * transmission time of 1 packet (50 us) = 250 us. ... B
Since B < A, the sender has to wait for the ack of the 1st packet before sending the 6th packet. This ack arrives at 450 us (50 us to transmit the packet + 400 us round-trip propagation).
Between 250 us and 450 us the sender is idle, that is, no new data is being sent over the line.
Assuming sender has an unlimited supply of data frames, the above cycle would repeat.
Thus, in every 450 us, sender sends 5 packets = 5 * 1000 * 8 = 40000 bits of data.
Hence, throughput = 40000 bits / 450 us = 84.7710 megabits per second. (84.7710 Mbps)
However, this is not one of the given options, not even close! Is there any mistake in my analysis above?
As I stated in my comment, your analysis and calculation method is correct. However, I'd check my calculator if I were you because 40000 bits / 450 us = 88.88...Mbps, not 84.7710 Mbps.
I do not think it is mere coincidence that 88.88 is just 11.11*8, so it's a fair assumption that the question was actually asking to get megabytes per second instead of megabits per second.
Throughput = Window / RTT
Here window size = 5 * 1000 bytes = 5000 bytes.
RTT = 50 us + 2 * 200 us = 450 us.
Therefore throughput = 5000 bytes / 450 us = 11.11 MB/s (megabytes per second).
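Both figures (88.89 Mbps, i.e. 11.11 MB/s) fall out of the same cycle arithmetic; here is a short C sketch with my own variable names:

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the question. */
        const double window_packets = 5.0;
        const double packet_bytes   = 1000.0;
        const double transmit_us    = 50.0;    /* per packet */
        const double propagation_us = 200.0;

        /* One cycle: transmit the 1st packet, then wait one RTT for its ack. */
        double cycle_us       = transmit_us + 2.0 * propagation_us;        /* 450 us     */
        double bits_per_cycle = window_packets * packet_bytes * 8.0;       /* 40000 bits */

        double mbps = bits_per_cycle / cycle_us;   /* bits per microsecond == Mbps */
        printf("throughput = %.2f Mbps = %.2f MB/s\n", mbps, mbps / 8.0);
        /* prints: throughput = 88.89 Mbps = 11.11 MB/s */
        return 0;
    }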

Hexadecimal calculation for a checksum

I'm not understanding how this result can be zero. This was presented to me as an example to validate a checksum of a message.
ED(12+01+ED=0)
How can this result be zero?
"1201 is the message" ED is the checksum, my question is more on, how can I determine the checksum?
Thank you for any help.
Best regards,
FR
How can this result be zero?
The checksum is presumably represented by a byte.
A byte can store 256 different values, so the calculation is probably done modulo 256.
Since 0x12 + 0x01 + 0xED = 256, the result becomes 0.
how can I determine the checksum?
The checksum is the specific byte value B that makes the sum of the bytes in the message + B = 0 (modulo 256).
So, as @LanceH says in the comment, to figure out the checksum B, you...
add up the values of the bytes in the message (say it adds up to M)
compute M' = M % 256
Now, the checksum B is computed as 256 - M' (or 0 when M' is already 0).
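A short C sketch of that procedure (the helper name checksum is mine; the final modulo handles the case where the message already sums to a multiple of 256):

    #include <stdio.h>
    #include <stdint.h>

    /* Checksum byte B such that (sum of message bytes + B) % 256 == 0. */
    static uint8_t checksum(const uint8_t *msg, size_t len)
    {
        unsigned sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += msg[i];
        return (uint8_t)((256 - (sum % 256)) % 256);  /* 0 if sum is already a multiple of 256 */
    }

    int main(void)
    {
        const uint8_t msg[] = { 0x12, 0x01 };         /* the "12 01" message from the question */
        uint8_t b = checksum(msg, sizeof msg);

        printf("checksum = 0x%02X\n", b);             /* 0xED */
        printf("check: (0x12 + 0x01 + 0x%02X) %% 256 = %d\n",
               b, (0x12 + 0x01 + b) % 256);           /* 0 */
        return 0;
    }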
I'm not sure about your checksum details but in base-16 arithmetic (and in base-10):
    base-16   base-10
    -------   -------
       12        18
       01         1
     + ED     + 237
    -------   -------
      100       256
If your checksum is modulo-256 (16^2), you only keep the last 2 base-16 digits, so you have 00
Well, obviously, when you add up 12 + 01 + ED the result overflows one byte; it's actually the hex number 100. So, if you only take the final byte of 0x0100, you get 0.
