Please help, I am trying to decipher this advertising packet data from the Movesense sensor.
Somewhere in these next lines of data I have to find the accelerometer data and the heart rate data.
Can someone please point me in the right direction.
Thank you.
9/28/2021, 11:05:23 AM, node: e8e9a966.328e18
msg : Object
peripheral: "0c8cdc3872e8"
address: "0c:8c:dc:38:72:e8"
rssi: -75
connectable: true
name: "Movesense 204730000081"
manufacturerData: buffer[19]
[0 … 9]
0: 0x9f
1: 0x0
2: 0xff
3: 0xd4
4: 0xd
5: 0x0
6: 0x0
7: 0xed
8: 0x24
9: 0x4
[10 … 18]
10: 0x3c
11: 0x0
12: 0x0
13: 0x0
14: 0x0
15: 0x45
16: 0x95
17: 0x88
18: 0x3c
services: array[1]
0: "fe06"
_msgid: "cdbbd743.5fdd88"
Based on the documentation you provided, bytes 7-10 contain an incrementing counter, bytes 11-14 the accelerometer data, and bytes 15-18 the average heart rate. The last two are stored as 32-bit floats.
The accelerometer data is 0 so I assume you did not move the sensor.
The average heart rate contains the bytes 0x45 0x95 0x88 0x3c. Interpreted as a 32-bit float in little-endian byte order (an online converter will do this for you), this gives the value 0.01667274. Since this is almost 0, I would assume that you did not measure anything.
You can now check the changing of the values by actually measuring the heart rate and/or moving the sensor.
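To check the decoding programmatically, here is a quick sketch in Python using the byte layout described above (the byte order of the counter is an assumption; the two floats are little-endian as stated):

```python
import struct

# The 19-byte advertising payload from the debug dump above.
manufacturer_data = bytes([
    0x9F, 0x00, 0xFF, 0xD4, 0x0D, 0x00, 0x00,  # bytes 0-6: header
    0xED, 0x24, 0x04, 0x3C,                    # bytes 7-10: counter
    0x00, 0x00, 0x00, 0x00,                    # bytes 11-14: accelerometer (float)
    0x45, 0x95, 0x88, 0x3C,                    # bytes 15-18: avg heart rate (float)
])

# '<' = little-endian, 'I' = 32-bit unsigned counter, 'f' = 32-bit float.
# Offset 7 skips the header bytes.
counter, accel, heart_rate = struct.unpack_from("<Iff", manufacturer_data, 7)

print(counter, accel, round(heart_rate, 8))  # accel is 0.0, heart_rate ~ 0.01667274
```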
I have the following 3x2 Int16 matrix as "test_matrix":
10 4
10 8
4 10
And I am expecting a binary output of 12 bytes (six little-endian Int16 values):
0x0A 0x00 0x04 0x00 0x0A 0x00 0x08 0x00 0x04 0x00 0x0A 0x00
I tried the following option:
write("test.bin", htol(test_matrix))
And the output becomes
What I have found are:
The matrix gets serialized (which is what I want)
The matrix gets transposed (which I don't want...)
The integers become 64 bits instead of 16 bits
The first 15 bytes are useless bytes for me
Any idea how I should correctly export a serialized matrix to binary?
To answer your questions:
Ad 2) the matrix is not transposed - Julia uses column-major order, like e.g. Fortran. You can use transpose to get row-major order.
Ad 3) htol works only because you are on a little-endian machine; on a big-endian machine it would error - use htol.(test_matrix) instead to broadcast it over the elements. Also, most probably your matrix actually stores 64-bit integers: on a 64-bit machine a plain literal like [10 4; 10 8; 4 10] creates an Int64 matrix, so construct it as Int16[10 4; 10 8; 4 10].
With these comments it works as you expected on my machine:
julia> test_matrix = Int16[10 4; 10 8; 4 10]
3×2 Array{Int16,2}:
10 4
10 8
4 10
julia> write("test.bin", htol.(transpose(test_matrix)))
12
julia> stat("test.bin")
StatStruct(mode=0o100666, size=12)
julia> read("test.bin")
12-element Array{UInt8,1}:
0x0a
0x00
0x04
0x00
0x0a
0x00
0x08
0x00
0x04
0x00
0x0a
0x00
(If you get a different result when running this code, please specify which Julia version, OS, and machine you are working on.)
I'm reading a book about network protocol structures.
There is an illustration in a chapter about variable-length quantities which I don't fully understand (see attachment).
The subject is to convert different numbers to variable-length 7-bit integers.
The first line shows that 0x3F is stored in a single octet as 0x3F.
The second line shows that 0x80 is stored in two octets, the first as 0x80 and the second as 0x01.
However I don't understand why it's not 0x81 in the first octet and 0x00 in the second.
Because according to Wikipedia, converting numbers into variable-length 7-bit integers goes as follows:
Represent the value in binary notation (e.g. 137 as 10001001)
Break it up in groups of 7 bits starting from the lowest significant bit (e.g. 137 as 0000001 0001001). This is equivalent to representing the number in base 128.
Take the lowest 7 bits and that gives you the least significant byte (0000 1001). This byte comes last.
For all the other groups of 7 bits (in the example, this is 000 0001), set the MSB to 1 (which gives 1000 0001 in our example). Thus 137 becomes 1000 0001 0000 1001, where the MSBs we set are the added bits. These added bits denote whether there is another byte to follow. Thus, by definition, the very last byte of a variable-length integer has 0 as its MSB.
So let's do these steps for 0x80:
binary notation: 1000 0000
in groups of 7 bits starting from LSB: 0000001 0000000
3. and 4. set the MSB as described: 1000 0001 0000 0000
Converting that binary number into two hex octets gives me 0x81 and 0x00.
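That derivation can be checked with a small encoder sketch (Python; this implements the MSB-first, MIDI-style VLQ described in the Wikipedia steps):

```python
def encode_vlq(n: int) -> bytes:
    """MSB-first variable-length quantity: 7 data bits per byte,
    continuation bit (MSB) set on every byte except the last."""
    groups = [n & 0x7F]                   # least significant 7 bits, MSB clear
    n >>= 7
    while n:
        groups.append((n & 0x7F) | 0x80)  # higher groups get the MSB set
        n >>= 7
    return bytes(reversed(groups))        # most significant group first

print(encode_vlq(137).hex())   # '8109' -> 0x81 0x09, as in the Wikipedia example
print(encode_vlq(0x3F).hex())  # '3f'   -> fits in a single octet
print(encode_vlq(0x80).hex())  # '8100' -> 0x81 0x00, the result derived above
```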
Which leads me to the question: is there a misprint in the book, or did I misunderstand something?
Which book is that?
There may be many possible encoding schemes. One of them could go like this:
1. Represent the value in binary notation (e.g. 0x80 as 10000000)
2. Break it up in groups of 7 bits starting from the lowest significant bit: 0000001 0000000
3. Start with the lowest 7 bits: if this is *not* the last group of 7 bits, then set the MSB: 10000000; if it is the last, then leave it alone: 00000001
4. Output starting LSB first: 10000000 00000001, i.e. 0x80 0x01
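This LSB-first scheme (it matches what is commonly called LEB128) can be sketched as:

```python
def encode_lsb_first(n: int) -> bytes:
    """LSB-first 7-bit encoding: the MSB of each output byte means
    'another byte follows'; the last byte has it clear."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more groups remain, set continuation bit
        else:
            out.append(b)         # final group, continuation bit clear
            return bytes(out)

print(encode_lsb_first(0x80).hex())  # '8001' -> 0x80 0x01, matching the book
print(encode_lsb_first(0x3F).hex())  # '3f'   -> single octet, same in both schemes
```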
So what does the book say? What encoding scheme are they using?
From the NTP advanced configuration page, there is the following text:
windl#elf:~ > ntptime
ntp_gettime() returns code 0 (OK)
time bd6b9cf2.9c3c6c60 Thu, Sep 14 2000 20:52:34.610, (.610297702),
maximum error 3480 us, estimated error 0 us.
ntp_adjtime() returns code 0 (OK)
modes 0x0 (),
offset 1.658 us, frequency 17.346 ppm, interval 128 s,
maximum error 3480 us, estimated error 0 us,
status 0x2107 (PLL,PPSFREQ,PPSTIME,PPSSIGNAL,NANO),
time constant 6, precision 3.530 us, tolerance 496 ppm,
pps frequency 17.346 ppm, stability 0.016 ppm, jitter 1.378 us,
intervals 57, jitter exceeded 29, stability exceeded 0, errors 0.
Followed by: "The first thing you should look at is the status (0x2107 in our case). The magic words in parentheses explain the meaning of the individual bits."
They list 5 bit positions but provide a 4-digit hex status code, which translates into 16 bits. Which bits represent the keywords in parentheses, and what do the other bits mean?
Here is a list of the NTP status codes:
STA_PLL 0x0001 enable PLL updates (rw)
STA_PPSFREQ 0x0002 enable PPS freq discipline (rw)
STA_PPSTIME 0x0004 enable PPS time discipline (rw)
STA_FLL 0x0008 select frequency-lock mode (rw)
STA_INS 0x0010 insert leap (rw)
STA_DEL 0x0020 delete leap (rw)
STA_UNSYNC 0x0040 clock unsynchronized (rw)
STA_FREQHOLD 0x0080 hold frequency (rw)
STA_PPSSIGNAL 0x0100 PPS signal present (ro)
STA_PPSJITTER 0x0200 PPS signal jitter exceeded (ro)
STA_PPSWANDER 0x0400 PPS signal wander exceeded (ro)
STA_PPSERROR 0x0800 PPS signal calibration error (ro)
STA_CLOCKERR 0x1000 clock hardware fault (ro)
STA_NANO 0x2000 resolution (0 = us, 1 = ns) (ro)
Source: ftp://ftp.ripe.net/test-traffic/ROOT/libDelay/Delay.h
From your listed example: status 0x2107 (PLL,PPSFREQ,PPSTIME,PPSSIGNAL,NANO).
Logically OR the status bit codes together: 0x0001 | 0x0002 | 0x0004 | 0x0100 | 0x2000, and the result is 0x2107.
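Decoding a status word is just masking against that table; a quick sketch in Python:

```python
# Bit values copied from the STA_* table above.
STA_FLAGS = [
    (0x0001, "PLL"), (0x0002, "PPSFREQ"), (0x0004, "PPSTIME"),
    (0x0008, "FLL"), (0x0010, "INS"), (0x0020, "DEL"),
    (0x0040, "UNSYNC"), (0x0080, "FREQHOLD"), (0x0100, "PPSSIGNAL"),
    (0x0200, "PPSJITTER"), (0x0400, "PPSWANDER"), (0x0800, "PPSERROR"),
    (0x1000, "CLOCKERR"), (0x2000, "NANO"),
]

def decode_status(status: int) -> list:
    """Return the names of all flag bits set in the status word."""
    return [name for bit, name in STA_FLAGS if status & bit]

print(decode_status(0x2107))
# ['PLL', 'PPSFREQ', 'PPSTIME', 'PPSSIGNAL', 'NANO']
```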
Some additional descriptions of the status codes are found here:
http://man7.org/linux/man-pages/man2/adjtimex.2.html
I have this 3-axis serial accelerometer dongle connected using an RS-232 cable. I set the baud rate to 9600 and I'm getting 80 XXXX-YYYY-ZZZZ readouts per second. I am trying to justify why it shows 80 readings per second; here is my calculation:
2 bytes of data x (1 start bit + 1 stop bit + 8 bits) = 20 bits
20 bits x 3 axes x 80 readouts = 4800 bits
Since I'm getting 4800 bits instead of 9600 bits, I am wondering: did I miss anything in justifying the 80 readouts?
Thanks guys :)
You indicate that you're getting 80 XXXX-YYYY-ZZZZ readouts per second. With typical 8N1 framing, the start and stop bits are added per character, not per message, so every byte costs 10 bits on the wire (1 start + 8 data + 1 stop). If the data were ASCII, each 14-character message would take 14 x 10 = 140 bits, and 80 messages per second would need 11,200 bits per second, which is more than 9600 baud can carry. Your own calculation assumes binary data (2 bytes per axis), which gives 6 x 10 x 80 = 4800 bits per second and fits comfortably within 9600 baud. Note that nothing forces the link to be saturated: 80 readouts per second is simply the sensor's sample rate, and the baud rate only has to be high enough to carry it.
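The arithmetic can be checked with a quick sketch (Python; the 8N1 framing and the binary 2-bytes-per-axis format are assumptions):

```python
BITS_PER_BYTE_8N1 = 10  # 1 start bit + 8 data bits + 1 stop bit
SAMPLE_RATE = 80        # readouts per second

# Binary format: 2 bytes per axis, 3 axes = 6 bytes per readout.
binary_bps = 6 * BITS_PER_BYTE_8N1 * SAMPLE_RATE
print(binary_bps)   # 4800 -> fits easily in 9600 baud

# ASCII format: 14 characters "XXXX-YYYY-ZZZZ" per readout.
ascii_bps = 14 * BITS_PER_BYTE_8N1 * SAMPLE_RATE
print(ascii_bps)    # 11200 -> would exceed 9600 baud
```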
I'm not understanding how this result can be zero. This was presented to me as an example of validating the checksum of a message:
ED (12 + 01 + ED = 0)
How can this result be zero?
"12 01" is the message and ED is the checksum; my question is more: how can I determine the checksum?
Thank you for any help.
Best regards,
FR
How can this result be zero?
The checksum is presumably represented by a byte.
A byte can store 256 different values, so the calculation is probably done modulo 256.
Since 0x12 + 0x01 + 0xED = 0x100 = 256, the result modulo 256 becomes 0.
how can I determine the checksum?
The checksum is the specific byte value B that makes the sum of the bytes in the message + B = 0 (modulo 256).
So, as @LanceH says in the comment, to figure out the checksum B, you...
add up the values of the bytes in the message (say it adds up to M)
compute M' = M % 256
Now, the checksum B is computed as (256 - M') % 256 (the final modulo handles the case where M' is 0).
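Both directions can be sketched in a few lines (Python, modulo-256 arithmetic as described above):

```python
def checksum(payload: bytes) -> int:
    """The byte that makes the total sum come out to 0 modulo 256."""
    return (256 - sum(payload) % 256) % 256

def verify(message: bytes) -> bool:
    """A message with its checksum appended must sum to 0 mod 256."""
    return sum(message) % 256 == 0

msg = bytes([0x12, 0x01])
print(hex(checksum(msg)))            # 0xed, matching the example
print(verify(msg + bytes([0xED])))   # True
```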
I'm not sure about your checksum details but in base-16 arithmetic (and in base-10):
base-16 base-10
-----------------------
12 18
01 1
+ ED 237
------------------------
100 256
If your checksum is modulo 256 (16^2), you only keep the last 2 base-16 digits, so you have 00.
Well, obviously, when you add up 12 + 01 + ED the result overflows one byte; it's actually the hex number 0x100. So, if you only take the final byte of 0x0100, you get 0.