I'm developing a web app to connect to a bluetooth low energy thermometer. It's a HESTA Smart Band.
I can already connect and read its value. The problem is that I have no clue how to convert the raw bytes into a temperature value.
Here are some BLE data that I got from the device.
68 9F 06 00 5D 19 48 00 93 14 72 16
68 9F 06 00 68 1B 48 00 93 14 7F 16
68 9F 06 00 C1 1C 48 01 93 14 DA 16
68 9F 06 00 56 11 48 01 93 14 64 16
68 9F 06 00 6C 14 4C 01 6C 14 5A 16
68 9F 06 00 DB 18 4C 00 65 15 C6 16
Obviously, Bytes [5], [6], [9] and [10] may represent the temperatures. If I simply convert Bytes [5] and [6] to decimal, the results are as follows.
195D => 25.93
1B68 => 27.104
1CC1 => 28.193
1156 => 17.86
146C => 20.108
18DB => 24.219
But I'm not sure about that. Why do Bytes [9] and [10] also contain temperature values? And what about the other bytes?
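For what it's worth, here is a minimal Python sketch of how the candidate fields could be parsed, assuming (purely an assumption, since the vendor doesn't document the format) a fixed 4-byte header followed by little-endian 16-bit words:

```python
import struct

# One captured 12-byte notification
frame = bytes.fromhex("689F06005D19480093147216")

header = frame[:4]                           # 68 9F 06 00 in every capture
val_a, = struct.unpack_from("<H", frame, 4)  # Bytes [5],[6] -> 0x195D = 6493
val_b, = struct.unpack_from("<H", frame, 8)  # Bytes [9],[10] -> 0x1493 = 5267

# Candidate scalings; which (if any) is right would have to be checked
# against a reference thermometer:
print(val_a / 100)   # 64.93
print(val_a / 256)   # ~25.36
```

If val_a / 256 tracks a plausible temperature across all six captures, the field may be a fixed-point value with 8 fractional bits; otherwise the scale is something else entirely.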
Edit:
I used bluetooth debugger and this is what I got:
0000fff0-0000-1000-8000-00805f9b34fb Unknown Service
Unknown Characteristics UUID: 0000fff1-0000-1000-8000-00805f9b34fb HANDLE: 11
Properties: Value: (0x) 689F060050144C0150142216
--
0000180f-0000-1000-8000-00805f9b34fb Battery Service
Battery Level UUID: 00002a19-0000-1000-8000-00805f9b34fb HANDLE: 15
Properties: Value: (0x) 00
As you can see, it detects the Battery Service and the Battery Level characteristic (0x2a19), but it cannot identify characteristic 0xfff1, which (I think) contains the temperature value.
I am working on an object storage project where I need to understand the Reed-Solomon error correction algorithm.
I have gone through this doc as a starter and also some thesis papers:
1. content.sakai.rutgers.edu
2. theseus.fi
but I can't seem to understand where the lower part below the identity matrix (red box) comes from. How is this calculation done?
Can anyone please explain?
The encoding matrix is a 6 x 4 Vandermonde matrix using the evaluation points {0 1 2 3 4 5}, modified so that the upper 4 x 4 portion of the matrix is the identity matrix. To create it, a 6 x 4 Vandermonde matrix is generated (where matrix[r][c] = pow(r,c)), then multiplied by the inverse of its upper 4 x 4 portion to produce the encoding matrix. This is the equivalent of "systematic encoding" with Reed-Solomon's "original view" as mentioned in the Wikipedia article you linked to, which is different from Reed-Solomon's "BCH view", which links 1. and 2. refer to. Wikipedia's example systematic encoding matrix is a transposed version of the encoding matrix used in the question.
https://en.wikipedia.org/wiki/Vandermonde_matrix
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction#Systematic_encoding_procedure:_The_message_as_an_initial_sequence_of_values
The code to generate the encoding matrix is near the bottom of this github source file:
https://github.com/Backblaze/JavaReedSolomon/blob/master/src/main/java/com/backblaze/erasure/ReedSolomon.java
Vandermonde      inverse upper  encoding
matrix           part of matrix matrix

01 00 00 00                      01 00 00 00
01 01 01 01      01 00 00 00    00 01 00 00
01 02 04 08   x  7b 01 8e f4 =  00 00 01 00
01 03 05 0f      00 7a f4 8e    00 00 00 01
01 04 10 40      7a 7a 7a 7a    1b 1c 12 14
01 05 11 55                     1c 1b 14 12
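This construction can be sketched in Python (assuming GF(2^8) with reduction polynomial 0x11D, the field used later in this answer; the helper names are made up):

```python
def gmul(a, b, poly=0x11D):
    """Multiply in GF(2^8) modulo the given reduction polynomial."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return p

def gpow(a, n):
    r = 1
    for _ in range(n):
        r = gmul(r, a)
    return r

def ginv(a):
    return gpow(a, 254)  # a^254 = a^-1 in GF(2^8)

def _dot(u, v):
    s = 0
    for x, y in zip(u, v):
        s ^= gmul(x, y)  # addition in GF(2^8) is XOR
    return s

def mat_mul(A, B):
    return [[_dot(row, [B[k][c] for k in range(len(B))])
             for c in range(len(B[0]))] for row in A]

def mat_inv(M):
    """Invert a square matrix over GF(2^8) by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col])
        A[col], A[pivot] = A[pivot], A[col]
        inv_p = ginv(A[col][col])
        A[col] = [gmul(x, inv_p) for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x ^ gmul(f, y) for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# 6 x 4 Vandermonde matrix, then multiply by the inverse of its upper 4 x 4
vandermonde = [[gpow(r, c) for c in range(4)] for r in range(6)]
encoding = mat_mul(vandermonde, mat_inv(vandermonde[:4]))
for row in encoding:
    print(' '.join('%02x' % x for x in row))  # prints the matrix shown above
```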
Decoding is performed in two steps. First, any missing rows of data are recovered, then any missing rows of parity are regenerated using the now recovered data.
For 2 missing rows of data, the 2 corresponding rows of the encoding matrix are removed, and the remaining 4 x 4 sub-matrix is inverted and used to multiply the 4 surviving rows of data and parity, recovering the 2 missing rows of data. If only 1 row of data is missing, a second data row is treated as if it were missing too, in order to produce an invertible 4 x 4 matrix. The actual regeneration of data only requires the corresponding rows of the inverted matrix to be used in the matrix multiply.
Once the data is recovered, then any missing parity rows are regenerated from the now recovered data, using the corresponding rows of the encoding matrix.
Based on the data shown, the math is based on finite field GF(2^8) modulo 0x11D. For example, encoding using the last row of the encoding matrix with the last column of the data matrix is (0x1c·0x44)+(0x1b·0x48)+(0x14·0x4c)+(0x12·0x50) = 0x25 (using finite field math).
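That finite-field sum can be checked with a few lines of Python; gmul below is a generic GF(2^8) multiply modulo 0x11D (a sketch, not the question's actual code):

```python
def gmul(a, b, poly=0x11D):
    # Carryless ("Russian peasant") multiply with reduction in GF(2^8)
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return p

# Last row of the encoding matrix dotted with a data column
# (addition in GF(2^8) is XOR):
row = [0x1c, 0x1b, 0x14, 0x12]
col = [0x44, 0x48, 0x4c, 0x50]
acc = 0
for r, c in zip(row, col):
    acc ^= gmul(r, c)
print(hex(acc))  # 0x25
```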
The question's example doesn't make it clear that the 6 x 4 encoding matrix can encode a 4 x n matrix, where n is the number of bytes per row. Here is an example where n == 8:
encode           data                       encoded data

01 00 00 00                                 31 32 33 34 35 36 37 38
00 01 00 00      31 32 33 34 35 36 37 38    41 42 43 44 45 46 47 48
00 00 01 00   x  41 42 43 44 45 46 47 48 =  49 4a 4b 4c 4d 4e 4f 50
00 00 00 01      49 4a 4b 4c 4d 4e 4f 50    51 52 53 54 55 56 57 58
1b 1c 12 14      51 52 53 54 55 56 57 58    e8 eb ea ed ec ef ee dc
1c 1b 14 12                                 f5 f6 f7 f0 f1 f2 f3 a1
assume rows 0 and 4 are erasures and deleted from the matrices:
00 01 00 00 41 42 43 44 45 46 47 48
00 00 01 00 49 4a 4b 4c 4d 4e 4f 50
00 00 00 01 51 52 53 54 55 56 57 58
1c 1b 14 12 f5 f6 f7 f0 f1 f2 f3 a1
invert encode sub-matrix:
inverse encode identity
46 68 8f a0 00 01 00 00 01 00 00 00
01 00 00 00 x 00 00 01 00 = 00 01 00 00
00 01 00 00 00 00 00 01 00 00 01 00
00 00 01 00 1c 1b 14 12 00 00 00 01
reconstruct data using sub-matrices:
inverse encoded data restored data
46 68 8f a0 41 42 43 44 45 46 47 48 31 32 33 34 35 36 37 38
01 00 00 00 x 49 4a 4b 4c 4d 4e 4f 50 = 41 42 43 44 45 46 47 48
00 01 00 00 51 52 53 54 55 56 57 58 49 4a 4b 4c 4d 4e 4f 50
00 00 01 00 f5 f6 f7 f0 f1 f2 f3 a1 51 52 53 54 55 56 57 58
The actual process only uses the rows of the matrices that correspond
to the erased rows that need to be reconstructed.
First data is reconstructed:
sub-inverse encoded data reconstructed data
41 42 43 44 45 46 47 48
46 68 8f a0 x 49 4a 4b 4c 4d 4e 4f 50 = 31 32 33 34 35 36 37 38
51 52 53 54 55 56 57 58
f5 f6 f7 f0 f1 f2 f3 a1
Once the data is reconstructed, reconstruct the parity:
sub-encode    data                       reconstructed parity
31 32 33 34 35 36 37 38
1b 1c 12 14 x 41 42 43 44 45 46 47 48 = e8 eb ea ed ec ef ee dc
49 4a 4b 4c 4d 4e 4f 50
51 52 53 54 55 56 57 58
One alternative to this approach is BCH-view Reed-Solomon. For an odd number of parities, such as RS(20,17) (3 parities), encoding needs 2 matrix multiplies and one XOR, and a single erasure can be corrected with XOR alone. For e > 1 erasures, an (e-1) x n matrix multiply is done, followed by an XOR. For an even number of parities, if an XOR is used as part of the encode, then an e x n matrix multiply is needed to correct; alternatively, the e x n matrix used for encode can be reused, allowing one XOR in the correction.
Another alternative is RAID 6, where "syndromes" are appended to the matrix of data but do not form a proper code word. One of the syndrome rows, called P, is just XOR; the other row, called Q, uses successive powers of 2 in GF(2^8). For a 3-parity RAID 6, the third row, called R, uses successive powers of 4 in GF(2^8). Unlike standard BCH view, if a Q or R row is lost, it has to be recalculated (as opposed to being corrected with XOR). By using a diagonal pattern, if 1 of n disks fails, only 1/n of the Q's and R's need to be regenerated when the disk is replaced.
http://alamos.math.arizona.edu/RTG16/ECC/raid6.pdf
Note that this pdf file's alternate method uses the same finite field as the method above, GF(2^8) mod 0x11D, which may make it easier to compare the methods.
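A minimal sketch of the P/Q syndrome computation described above, in the same field GF(2^8) mod 0x11D (the three-byte stripe is a made-up example, and the diagonal-pattern optimization is omitted):

```python
def gmul(a, b, poly=0x11D):
    # GF(2^8) multiply, reduction polynomial 0x11D
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return p

def raid6_syndromes(stripe):
    """P is the plain XOR of the data; Q weights disk i by 2^i in GF(2^8)."""
    p = q = 0
    g = 1                      # 2^0
    for d in stripe:
        p ^= d
        q ^= gmul(g, d)
        g = gmul(g, 2)         # advance to the next power of 2
    return p, q

p, q = raid6_syndromes([1, 2, 3])
print(hex(p), hex(q))  # 0x0 0x9
```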
I am trying to gain access to in-game chat information from Dota 2 packets. I know this used to be possible, since there were multiple projects that intercepted Dota 2 network traffic and translated the chat text to print on an overlay over the game. Right now I am using Wireshark with the protobuf addon installed. I can see a few packets here and there to Valve servers outside the USA, and I can see the protobuf addon working on those, but I get an "unknown wiretype" error for 95% of the packets I believe to be related to Dota. In almost all of these packets, the UDP data payload starts with 56 53 30 31.
Here is an example hex dump from Wireshark. Are these 4 bytes some sort of header, after which the proto messages start?
0000 c8 a7 0a a4 63 ed 6c fd b9 4b 6e 16 08 00 45 00
0010 00 70 58 db 40 00 40 11 85 1a c0 a8 01 f5 d0 40
0020 c9 a9 9e 96 69 89 00 5c 72 7c **56 53 30 31** 30 00
0030 06 00 00 02 00 00 00 1d fe 11 11 10 00 00 d7 0a
0040 00 00 01 00 00 00 11 10 00 00 30 00 00 00 24 fd
0050 37 3c b4 30 a5 48 fa 3d ea 30 1a 1f d8 a9 41 e0
0060 e0 6c 44 ba bb 4e ba fc e7 ac ed f9 40 19 86 20
0070 84 71 52 5d b3 1f da 36 40 d9 b6 2e e1 e5
That is the ASCII code for "VS01", so yes, it might be some kind of version identifier.
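This is easy to verify, e.g. in Python:

```python
# The first 4 bytes of the UDP payload, decoded as ASCII
marker = bytes.fromhex("56533031")
print(marker)  # b'VS01'
```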
I'm developing a C project to read/write Desfire Contactless Cards.
So far I have managed to authenticate, and I am able to read data from the card, but it's encrypted with 3DES.
I want to decrypt the following message:
EB 54 DF DD 07 6D 7C 0F BD D6 D1 D1 90 C6 C7 80 92 F3 89 4D 6F 16 7C BF AA 3E 7C 48 A8 71 CF A2 BD D0 43 07 1D 65 B8 7F
My SessionKey (generated in Authentication step) is:
44 E6 30 21 4A 89 57 38 61 7A B8 7C A9 91 B2 C0
I know the IV={ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
With this information, I can go here and, choosing 3DES in CBC mode, decrypt the message, and I have means to know that the result is right.
Decrypted, it should be:
10 1a 01 31 32 ae 03 de 39 b0 00 97 7f 65 e9 43 93 89 53 5c 9e 04 a9 3f 95 71 24 0f 0a 9b f7 ee d4 5b 1b c6 78 7a f4 36
Anyhow, I tried to implement the C code using the OpenSSL DES library and ran into the following difficulty:
I need 3 keys of 8 bytes each, but I have 1 SessionKey of 16 bytes.
I tried to split SessionKey into Key1/Key2/Key1 without success.
I have read a lot about it; the only clue I found is that I have to generate those 3 keys from my 16-byte SessionKey (taking it as a password), but I feel that is too advanced for me.
If this is the only way, is there any tutorial about OpenSSL key derivation (EVP_BytesToKey)? Is there any other way?
Thanks
Edit:
So, right now I'm in a very weird spot. As noted by many of you, I had already taken the first 8 bytes of the SessionKey as Key 3 (that's what I referred to with Key1/Key2/Key1). It still seemed not to work, yet it partly did, which is what puzzles me.
I get:
Decrypted : 11 1B 00 30 33 AF 02 DF DE 01 00 00 00 01 01 00 14 C1 26 8F 03 20 20 41 00 30 39 01 00 00 00 00 00 00 00 00 00 00 75 B1
When
Expected : 10 1a 01 31 32 ae 03 de de 01 00 00 00 01 01 00 14 c1 26 8f 03 20 20 41 00 30 39 01 00 00 00 00 00 00 00 00 00 00 75 b1
So I get the expected result by XORing the first 8 bytes with 01. Does that make any sense? The OpenSSL docs say: "Note that there is both a DES_cbc_encrypt() and a DES_ncbc_encrypt() in libcrypto. I recommend you only use the ncbc version (n stands for new). See the BUGS section of the OpenSSL DES manpage and the source code for these functions."
But I only have access to an older version... Could that be the problem?
Perhaps the encryption is two-key 3DES. In that case, repeat the first 8 bytes (bytes 0-7) as bytes 16-23: 44 E6 30 21 4A 89 57 38 61 7A B8 7C A9 91 B2 C0 44 E6 30 21 4A 89 57 38.
Some 3DES implementations will do this automatically; with others you must do it yourself.
If this does not work, you will need to provide more information in the question.
Size of session key
Since you refer to MIFARE DESFire and you are using a 16-byte session key, you are probably using 2-key triple DES. This means that the 16-byte session key actually consists of two 8-byte keys (effectively 56 bits each, with 8 unused "parity" bits).
In order to map this to 3DES with 3 keys, you simply need to append the first 8 bytes to the end of your session key, so that you get
+-------------------------+-------------------------+
16 byte session key: | 8 bytes | 8 bytes |
| 44 E6 30 21 4A 89 57 38 | 61 7A B8 7C A9 91 B2 C0 |
+-------------------------+-------------------------+-------------------------+
24 byte 3DES key: | 8 bytes | 8 bytes | 8 bytes |
| 44 E6 30 21 4A 89 57 38 | 61 7A B8 7C A9 91 B2 C0 | 44 E6 30 21 4A 89 57 38 |
+-------------------------+-------------------------+-------------------------+
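A sketch of this mapping in Python; the resulting 24-byte key can then be fed to any 3DES implementation that expects three 8-byte keys:

```python
session_key = bytes.fromhex("44E630214A895738617AB87CA991B2C0")

# 2-key 3DES: K1 | K2 | K1 -- append the first 8 bytes again
des3_key = session_key + session_key[:8]

print(des3_key.hex(" ").upper())
# 44 E6 30 21 4A 89 57 38 61 7A B8 7C A9 91 B2 C0 44 E6 30 21 4A 89 57 38
```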
First block of decrypted plaintext
If the first 8 bytes of the decrypted plaintext differ from the expected value but the remaining bytes match, this is a clear indication that you are using an incorrect initialization vector for CBC mode.
Have a look at how CBC mode decryption works.
So for the first block, the plaintext is calculated as
P0 = DecK(C0) XOR IV
For the remaining blocks, the plaintext is calculated as
Pn = DecK(Cn) XOR Cn-1
This means that only the decryption of the first block depends on the IV. The decryption of the remaining blocks depends on the preceding ciphertext instead.
Since you assumed the IV to be all zeros, the XOR operation does nothing. Hence, in your case, the plaintext of the first block is calculated as
P0 = DecK(C0) XOR {0} = DecK(C0) = '10 1A 01 31 32 AE 03 DE'
This expected value deviates from the actual value that you get ('11 1B 00 30 33 AF 02 DF'), which most likely means that you used an incorrect IV for decryption:
P0 = DecK(C0) = '10 1A 01 31 32 AE 03 DE'
P'0 = DecK(C0) XOR IV = '11 1B 00 30 33 AF 02 DF'
You can calculate the IV that you used by XORing the two values:
P'0 = P0 XOR IV
P'0 XOR P0 = IV
IV = '11 1B 00 30 33 AF 02 DF' XOR '10 1A 01 31 32 AE 03 DE'
= '01 01 01 01 01 01 01 01'
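The XOR in this derivation is easy to reproduce in Python:

```python
# First plaintext block: expected vs. actually obtained
p_expected = bytes.fromhex("101A013132AE03DE")
p_actual   = bytes.fromhex("111B003033AF02DF")

# XOR the two blocks to recover the IV that was actually used
iv = bytes(a ^ b for a, b in zip(p_actual, p_expected))
print(iv.hex(" "))  # 01 01 01 01 01 01 01 01
```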
As this IV has exactly the least significant bit of each byte set to one, I wonder if you accidentally applied DES_set_odd_parity() to the IV. That would explain why the LSB (i.e. the parity bit, if the value were a DES key) of each byte was changed.
It's possible that you don't need 3 keys of 8 bytes each, but a single key of 3 x 8 = 24 bytes, with the bytes in the right order.
Best regards
I am trying to calculate the checksum of a TCP packet, and I can't get the same value as in the packet captured with Wireshark.
The original captured packet is:
"6c f0 49 e8 a3 0d 24 b6 fd 52 40 cb 08 00 45 00 00 28 02 22 40 00 80 06 00 00 00 0a 2a 00 1c 1f 0d 5a 24 ca 7d 01 bb 3f 44 f8 6e 6c 83 75 20 50 10 01 02 83 91 00 00"
As I saw in Wireshark:
The first 14 bytes are ETH.
After that (in the IP part) there are 12 bytes of 'header length', 'DSCP', 'total length', 'identification', 'fragment offset', 'TTL', 'protocol' and 'header checksum',
and then there are 4 bytes of IP-src and 4 bytes of IP-dst (which are the only ones in the IP header that matter for the calculation).
We are left with 20 bytes of TCP header (no data).
I created the new packet for the calculation with the pseudo header in the form:
IPsrc/IPdst/reserved(0x00)/protocol(0x06)/TCP-length(0x0014)/TCP-header
Which got me:
"0a 2a 00 1c 1f 0d 5a 24 00 06 00 14 ca 7d 01 bb 3f 44 f8 6e 6c 83 75 20 50 10 01 02 83 91 00 00"
Zeroing the tcp checksum field (the 0x8391 according to wireshark) gets:
"0a 2a 00 1c 1f 0d 5a 24 00 06 00 14 ca 7d 01 bb 3f 44 f8 6e 6c 83 75 20 50 10 01 02 00 00 00 00"
Calculating the checksum on the new packet got me the value 0xcc45, which is different from the one in the original packet (0x8391).
data = "0a 2a 00 1c 1f 0d 5a 24 00 06 00 14 ca 7d 01 bb 3f 44 f8 6e 6c 83 75 20 50 10 01 02 00 00 00 00"

def carry_around_add(a, b):
    c = a + b
    return (c & 0xffff) + (c >> 16)

def checksum(msg):
    # Sum 16-bit words with end-around carry, then one's complement
    s = 0
    for i in range(0, len(msg), 2):
        w = msg[i] + (msg[i + 1] << 8)
        s = carry_around_add(s, w)
    return ~s & 0xffff

data = bytes(int(x, 16) for x in data.split())
print(' '.join('%02X' % b for b in data))
print("Checksum: 0x%04x" % checksum(data))
What am I doing wrong?
Your total test data length is 55 bytes; it should be 14 (ETH) + 20 (IP) + 20 (TCP) = 54. It's not simply one byte too long, since that would not match the 0x8391 checksum. The IP protocol byte (0x06, TCP) at IP offset 9 is still in place, but the IP source address seems rather unlikely: 0.10.40.0.
The 'which got me' part starts with '0a 2a', which are the 2nd and 3rd bytes of the IP source address. So let's assume the three zero bytes before this address should actually be two zero bytes. If that's correct, we're still on the right track.
The 12-byte TCP pseudo-header is IPsrc/IPdst/0x00/0x06/TCP-length(0x0014) (without the TCP header), so far so good. The TCP checksum is calculated over pseudo-header + TCP header + TCP data, but your test data uses only the pseudo-header and the TCP header; the TCP data itself is missing.
I didn't really check your Python code, but I don't see any obvious problems. The main issue seems to be the missing TCP payload data in the calculation.
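For reference, a minimal sketch of the standard Internet checksum (in the style of RFC 1071): sum 16-bit words in network byte order with end-around carry, then take the one's complement:

```python
def inet_checksum(data: bytes) -> int:
    # Pad odd-length input with a trailing zero byte
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]   # big-endian 16-bit word
        s = (s & 0xffff) + (s >> 16)        # end-around carry
    return ~s & 0xffff
```

A useful sanity check: a segment that already contains its correct checksum sums to zero under this function.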
Message from syslogd#saskappcu at Mar 18 13:24:54 ...
kernel:BUG: soft lockup - CPU#30 stuck for 61s! [events/30:161]
Message from syslogd#saskappcu at Mar 18 13:24:54 ...
kernel:Process events/30 (pid: 161, ti=f4ea4000 task=f4e5faa0 task.ti=f4ea4000)
Message from syslogd#saskappcu at Mar 18 13:24:54 ...
kernel:Stack:
Message from syslogd#saskappcu at Mar 18 13:24:54 ...
kernel:Call Trace:
Message from syslogd#saskappcu at Mar 18 13:24:54 ...
kernel:Code: 00 89 51 04 89 0a 89 43 20 89 43 24 8b 43 08 39 d8 74 23 83 40 7c 01 31 c9 8b 7b 0c 8b 15 58 09 ac c0 8b 02 39 c2 75 09 eb 31 90 <8b> 00 39 d0 74 2a 3b 78 0c 75 f5 89 d8 ba 00 00 04 00 e8 b9 a0
Message from syslogd#saskappcu at Mar 18 13:24:58 ...
kernel:BUG: soft lockup - CPU#8 stuck for 61s! [buildop:2223]
Message from syslogd#saskappcu at Mar 18 13:24:58 ...
kernel:Process buildop (pid: 2223, ti=e9724000 task=f3ba0aa0 task.ti=e9724000)
Message from syslogd#saskappcu at Mar 18 13:24:58 ...
kernel:Stack:
Message from syslogd#saskappcu at Mar 18 13:24:58 ...
kernel:Call Trace:
Message from syslogd#saskappcu at Mar 18 13:24:58 ...
kernel:Code: 26 00 89 c8 f0 81 28 00 00 00 01 74 05 e8 2c fe ff ff c3 8d b4 26 00 00 00 00 66 ba 00 01 f0 66 0f c1 10 38 f2 74 0e f3 90 8a 10 <eb> f6 66 83 38 00 75 f4 eb e5 c3 8d 74 26 00 f0 81 28 00 00 00
A soft lockup is defined as a bug that causes the kernel to loop in kernel mode for more than 20 seconds without giving other tasks a chance to run.
The log
kernel:BUG: soft lockup - CPU#30 stuck for 61s! [events/30:161]
is generated by the following line in kernel/kernel/watchdog.c:
pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
smp_processor_id(),
duration,
current->comm,
task_pid_nr(current));
It means that CPU core 30 in the system has been busy executing kernel code for the past 61 seconds, and the thread currently running on it is events/30, whose process id is 161.
For more details, check out kernel/Documentation/lockup-watchdogs.txt.