I'm trying to do some reverse engineering on my heating system. Monitoring the CAN bus results in receiving hexadecimal strings, for example:
00 D0 68 D6 86 83 61 8F
61 C0 02 5C 12 B5 02 5C
12 78 04 39 04 03 05 02
05 C4 04 5C 12 5C 12 5C
12 5C 12 D0 68 00 00 00
00 18 08 37 D2 00 00 00
00 00 00 00 00 15 75 F2
F0 01 00 01 00 00 00 1F
I know that for example the temperature value of 22.5°C should be somewhere in there.
So far I have tried to look for the following conversions:
Possibility 1: ASCII to hex
22.5 = 32 32 2E 35
Possibility 2: float to hex conversion
22.5 = 0x 41 b4 00 00
However, none of these resulted in a match.
What other possibilities are there for converting a float to a hex string?
Thanks
Note: the given string is just a small part of my CAN sniffer output, so don't look for 22.5 in the string above. I'm just looking for other possible conversions.
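For reference, a small sketch like the following can generate candidate byte patterns to search for in a capture. The fixed-point scale factors here are guesses (scaled-integer encodings are common on heating buses), not something taken from any protocol documentation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 22.5f;
    uint8_t b[4];
    memcpy(b, &f, sizeof b);            /* IEEE 754 bits in host order (little-endian assumed) */

    printf("float LE: %02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
    printf("float BE: %02X %02X %02X %02X\n", b[3], b[2], b[1], b[0]);

    uint16_t x10 = (uint16_t)(f * 10);  /* guess: 0.1 degC steps, 225 = 0x00E1 */
    printf("x10 LE:   %02X %02X\n", x10 & 0xFF, x10 >> 8);

    uint8_t x2 = (uint8_t)(f * 2);      /* guess: 0.5 degC steps, 45 = 0x2D */
    printf("x2:       %02X\n", x2);
    return 0;
}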
Has anyone seen (and fixed?) a problem with strings of zeroes in audio data captured with the AL5645 codec microphone input on the Coral dev board? It's happening for me with default settings using arecord, as well as with my Python code using PyAudio: 16-bit mono samples, at sample rates of 16000 Hz and 44100 Hz. For example:
83 ce 34 0b 09 3f 00 00 00 00 00 00 2b 0e 2b 0e b0 d0 5a b9 ee d9 00 00
00 00 75 44 75 44 75 44 ba 38 8a ff e6 c6 00 00 00 00 00 e7 00 e7 00 e7
85 26 f4 46 bc 2e
Cheers,
Mark
I have the following function to compute the partial TCP checksum:
static inline uint32_t
csum_part(const void *buf, size_t len, uint32_t sum)
{
    uintptr_t p = (uintptr_t)buf;
    while (len > 1)
    {
        sum += *(uint16_t *)p;   /* accumulate 16-bit words */
        len -= 2;
        p += 2;
    }
    if (len)
        sum += *(uint8_t *)p;    /* odd trailing byte */
    return sum;
}
and the following function to fold it into 16 bits:
uint16_t calc(uint32_t x)
{
    while ((x >> 16) != 0)           /* fold carries back into the low 16 bits */
        x = (x & 0xffff) + (x >> 16);
    return ~x;                       /* one's complement of the folded sum */
}
When I calculate the checksum for the header, I use the following code:
uint32_t calc_tcp_checksum(char *pkt, int hdrlen, int pktlen) {
    struct ip *ih = (struct ip *)
        (pkt + hdrlen - sizeof(struct tcphdr) - sizeof(struct ip));
    struct tcphdr *th = (struct tcphdr *)
        (pkt + hdrlen - sizeof(struct tcphdr));
#ifndef __FAVOR_BSD
    th->check = 0;                   /* checksum field must be zero while summing */
#else
    th->th_sum = 0;
#endif
    uint32_t header_chksum = csum_part(th, sizeof(struct tcphdr), 0);
    uint32_t pseudo = (uint32_t)ih->ip_src.s_addr + ih->ip_dst.s_addr +
                      htons(IPPROTO_TCP) + htons(pktlen);
    header_chksum += pseudo;
    return header_chksum;
}
I have a packet which is the following
0000 58 f3 9c 81 2b bc 00 1c 73 13 1f 94 08 00 45 00
0010 00 dc 00 00 40 00 40 06 40 19 0a e6 35 90 ac 13
0020 0d 7a b9 be 2a 44 63 36 c2 98 c7 82 d0 1e 50 18
0030 10 00 eb 15 00 00 00 b4 00 00 09 cd 1c fb 66 40
0040 ec c7 0d 30 cb 0b e4 cb 88 74 13 3d 4e 20 00 00
0050 9a d6 00 00 00 00 9f db 4f 50 54 49 44 58 42 41
0060 4e 4b 4e 49 46 54 59 20 4d 03 8a e8 00 2d ed d0
0070 43 45 46 4e 45 30 30 30 37 20 20 20 00 01 00 02
0080 00 00 00 00 00 00 00 4b 00 00 a7 7b 00 00 00 00
0090 02 00 00 02 00 00 9a d6 39 30 30 35 39 4f 49 43
00a0 49 43 49 30 30 30 30 35 32 30 00 01 02 00 b0 6d
00b0 c8 04 42 f6 bd f9 52 7c 42 80 41 41 45 43 45 32
00c0 34 31 33 51 00 00 a1 e4 00 00 00 00 00 00 00 00
00d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00e0 00 00 00 00 00 00 00 00 00 00 72 dd 89 69
In the example above,
pktlen = 180
hdrlen = 54
I get the checksum to be 0xeb15, but Wireshark says it's 0xea15. What am I doing wrong? Note that it isn't always incorrect, just sometimes.
Section 4.1 of RFC 1071 - Computing the Internet Checksum provides an implementation example in C, which seems to be the method you're basing your implementation on. Except that the RFC 1071 example combines the folding step into the same function that computes the checksum, whereas your implementation does not. RFC 1071 obviously assumes that the pseudo-header is already included in the buffer pointed to by addr, but again, yours does not. This would all be OK, except that you never actually fold the final result by calling your calc() function, at least not that I can see.
So for your implementation, it would seem that any computed TCP checksum that doesn't have any bits set in the upper 16 bits of the 32-bit accumulator will be correct, but any computed checksum that does have at least one bit set in the upper 16 bits of the accumulator will be wrong. I believe this explains why some checksums your code computes are correct and some are not.
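Concretely, the fix would be to fold and complement the accumulator before returning it, reusing the calc() function you already have. A minimal sketch of the changed tail of calc_tcp_checksum():

    uint32_t header_chksum = csum_part(th, sizeof(struct tcphdr), 0);
    header_chksum += pseudo;
    /* fold carries and complement; without this step, any carry into the
       upper 16 bits of the accumulator is silently lost on truncation */
    return calc(header_chksum);

(The function's return type would then naturally become uint16_t rather than uint32_t.)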
And in case you're interested, you can have a look at Wireshark's implementation of Internet checksums in in_cksum.c as well as how it's called from the TCP dissector.
It's currently 04:40 AM and I am stuck on something I simply do not understand. I am trying to look up a domain's nameservers directly using the DNS protocol. If I run host -t ns google.com 1.1.1.1 and monitor it with Wireshark, I can see the full contents of the DNS query. However, I cannot figure out why some ASCII characters are used one time, but not another time. Here is an example:
0000 70 4d 7b 94 dd e0 00 d8 61 a9 c5 ec 08 00 45 00 pM{.....a.....E.
0010 00 38 d6 ff 00 00 80 11 9f 50 c0 a8 01 bb 01 01 .8.......P......
0020 01 01 e8 40 00 35 00 24 a0 19 9e f7 01 00 00 01 ...@.5.$........
0030 00 00 00 00 00 00 06 67 6f 6f 67 6c 65 03 63 6f .......google.co
0040 6d 00 00 02 00 01 m.....
In this DNS query, I am looking up the nameservers for google.com. The actual query starts at 06 67.
06 in ASCII is ACK/Acknowledgment.
Now, if we take a look at gmail.com instead:
0000 70 4d 7b 94 dd e0 00 d8 61 a9 c5 ec 08 00 45 00 pM{.....a.....E.
0010 00 37 d7 00 00 00 80 11 9f 50 c0 a8 01 bb 01 01 .7.......P......
0020 01 01 e8 58 00 35 00 23 8f cc 6f e2 01 00 00 01 ...X.5.#..o.....
0030 00 00 00 00 00 00 05 67 6d 61 69 6c 03 63 6f 6d .......gmail.com
0040 00 00 02 00 01 .....
the query starts at 05 67 instead.
05 is ENQ/Enquiry.
Why are they different? If I try to send 06 instead of 05, the DNS server gives me no response, but Wireshark tells me:
Unknown extended label
I've seen 05, 06, and 09 so far. 09 is my biggest "wat" of all time, because it's an HT/Horizontal Tab.
Anyone with a lot of DNS knowledge who can help me here? I'm not looking for "just use dig/nslookup/host command". I'm currently trying to research a bit on the DNS protocol, and this is a thing I do not understand.
Good read where I got a lot of help: http://dev.lab427.net/dns-query-wth-netcat.html
For a binary protocol like this, you can't assume each byte corresponds to the matching ASCII character.
Take a look at section 4.1.2 of the DNS RFC (https://www.ietf.org/rfc/rfc1035.txt).
The domain name in a DNS request is broken up into "labels". For each label, the first byte is the length of the label, then the bytes for the string are written.
For your Google.com example, the labels are "google" and "com". The 06 is the number of bytes in the first label. This is followed by the bytes for "google". Then the 03 is the number of bytes in the "com" label. After the "com" bytes, the 00 byte is the NULL label to mark the end.
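To make the layout concrete, here's a quick illustrative sketch (my own code, not from the RFC) that encodes a dotted hostname into wire-format labels:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Encode a dotted name into DNS wire-format labels; returns bytes written.
   No bounds checking -- out must be large enough. */
static size_t encode_qname(const char *name, uint8_t *out)
{
    size_t o = 0;
    while (*name) {
        const char *dot = strchr(name, '.');
        size_t len = dot ? (size_t)(dot - name) : strlen(name);
        out[o++] = (uint8_t)len;      /* length prefix for this label */
        memcpy(out + o, name, len);   /* the label bytes themselves   */
        o += len;
        name += len + (dot ? 1 : 0);
    }
    out[o++] = 0;                     /* the null (root) label ends the name */
    return o;
}

int main(void)
{
    uint8_t buf[256];
    size_t n = encode_qname("google.com", buf);
    for (size_t i = 0; i < n; i++)
        printf("%02x ", buf[i]);      /* 06 67 6f 6f 67 6c 65 03 63 6f 6d 00 */
    printf("\n");
    return 0;
}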
I'm trying to read public transport cards. I've mostly figured out the data format, but the record dates and times are a mystery. Some data:
e1 a2 00 00 ce 04 05 b1 7e 00 68 22 0a 10 00 ce - 01.03.2014 23:36
e4 a2 00 00 ce 04 e5 7b 7e 00 e4 2e 0a 10 00 e9 - 04.03.2014 16:31
e4 a2 00 00 4c 04 43 8c d0 07 30 00 01 00 00 72 - 04.03.2014 18:42
e4 a2 00 00 ce 04 65 8d 7e 00 7c 17 0a 10 00 a2 - 04.03.2014 18:51
ea a2 00 00 ce 04 25 63 7e 00 70 09 0a 10 00 f1 - 10.03.2014 13:13
ec a2 00 00 ce 04 25 63 7e 00 70 09 0a 10 00 da - 12.03.2014 13:13
f3 a2 00 00 ce 04 85 69 7e 00 64 3b 0a 10 00 9d - 19.03.2014 14:04
f5 a2 00 00 ce 04 e5 89 7e 00 70 22 0a 10 00 ba - 21.03.2014 18:23
f6 a2 00 00 ce 04 6a 00 82 01 68 22 2a 10 00 df - 22.03.2014 00:03
fb a2 00 00 ce 04 85 75 7e 00 84 17 0a 10 00 2a - 27.03.2014 15:40
fb a2 00 00 ce 04 a5 91 7e 00 78 17 0a 10 00 a6 - 27.03.2014 19:25
c1 a2 28 00 ce 04 0b 6b 00 00 74 17 08 10 04 94 - 28.01.2014 14:16
c7 a2 00 00 ce 04 a5 5d 7e 00 6c 09 0a 10 00 1b - 03.02.2014 12:29
c7 a2 00 00 ce 04 25 6c 7e 00 68 2d 0a 10 00 68 - 03.02.2014 14:25
c7 a2 0e 00 ce 04 eb 6d 00 00 88 17 08 10 04 45 - 03.02.2014 14:39
ce a2 00 00 ce 04 85 52 7e 00 68 09 0a 10 00 77 - 10.02.2014 11:00
ce a2 00 00 ce 04 e5 5c 7e 00 64 09 0a 10 00 58 - 10.02.2014 12:23
eb a2 00 00 ce 04 85 41 7e 00 80 22 0a 10 00 dd - 11.03.2014 08:44
eb a2 00 00 ce 04 85 6a 7e 00 a4 28 0a 10 00 66 - 11.03.2014 14:12
eb a2 20 00 ce 04 8b 6e 00 00 7c 17 08 10 04 e0 - 11.03.2014 14:44
|| ||             || ||    ** ** ** ** **
Date?             Time?
Stars represent known data (as in I know what those mean and they aren't relevant to date and time)
The dates and times given are correct, because they're from a usage-history printout.
I've tried converting the values to Unix timestamps, seconds, milliseconds and much more, but I can't determine the format. The data might also be little-endian.
I'm not sure about the timezone; the data might be in UTC, UTC+2 or UTC+3.
I appreciate any help.
I figured out the format; it goes like this:
All data is little-endian.
To get the time in minutes, the value must be shifted right by five bits.
For example:
6e8b >> 5 = 884
884 minutes = 14 hours, 44 minutes (14:44)
The date is the number of days since 1.1.1900. For example:
a2eb = 41707 (11.03.2014)
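Putting it together, a quick sketch of the decoding, assuming (as the samples above suggest) that the date is in the first two bytes and the time in the two bytes at offset 6, both little-endian:

#include <stdint.h>
#include <stdio.h>

static uint16_t le16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }

int main(void)
{
    /* last sample record from the question */
    const uint8_t rec[16] = {0xeb,0xa2,0x20,0x00,0xce,0x04,0x8b,0x6e,
                             0x00,0x00,0x7c,0x17,0x08,0x10,0x04,0xe0};

    unsigned days = le16(rec);           /* days since 1.1.1900 -> 41707     */
    unsigned mins = le16(rec + 6) >> 5;  /* minutes since midnight -> 884    */

    printf("day %u since 1900-01-01, time %02u:%02u\n",
           days, mins / 60, mins % 60);  /* prints 14:44, matching that row  */
    return 0;
}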
I started a thread in the NI support forums about my project, but my current problem is broader than just driver writing in LabVIEW. I have an anemometer that uses a USB UART bridge
to interface with the computer. I asked Extech for any kind of documentation for it and received only the communication protocol below.
Serial Communication Protocol
I encountered several problems working with this, so I took the software included with the anemometer and used Portmon to sniff the commands going back and forth, and here's where it gets worse. To simplify matters as best I could, I only took ambient temperature readings. The following is what Portmon captured when I used the manufacturer's software to connect to the instrument:
(This is the 'upload protocol' in the protocol documentation above.)
AA 61 64 6A 67 08 40 00 40 00 01 00 00 C6 41 00 00 00 00 00 3C 1C C6 9A 19 99 42 00 3C 1C C6 00 00
AA 61 64 6A 67 08 40 10 40 00 01 7D 0C C6 41 00 00 00 00 00 3C 1C C6 39 1F 99 42 00 3C 1C C6 00 00
AA 61 64 6A 67 08 40 10 40 00 01 00 00 C6 41 00 00 00 00 00 3C 1C C6 9A 19 99 42 00 3C 1C C6 00 00
AA 61 64 6A 67 08 40 10 40 00 01 83 F3 C5 41 00 00 00 00 00 3C 1C C6 FB 13 99 42 00 3C 1C C6 00 00
This is slightly truncated, but the important parts should be there. The ambient temperature read about 76.5 F at the time. According to the documentation, this should be in the 10th-13th bytes, so I believe:
0000c641
7d0cc641
0000c641
83f3c541
to be the recorded ambient temperatures, but I have no idea how to read them. I see no reason why a conversion from Kelvin or Celsius would be necessary, as there seems to be a bit for that in F1. Also of note is the fact that I get values completely different from anything documented for several fields, so either I'm reading something wrong or the documentation is just wrong. I haven't been able to get any more answers from the manufacturer about the protocol, so I have no idea why my data only half resembles what is expected.
41C60000 converts to 24.75 as an IEEE 754 standard 32-bit single-precision float. This looks like a Celsius value, which would map to 76.55 F.
For the rest of the data you would have:
41C60000 = 24.7500000000000000000
41C60C7D = 24.7560977935791015625
41C5F383 = 24.7439022064208984375
I think that sorts out the endianness and formatting for you.
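If it helps, a minimal sketch of the conversion, assuming the device sends the float little-endian on the wire (so the captured bytes 7D 0C C6 41 are the value 0x41C60C7D) and that the host is little-endian too:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* the four temperature bytes from the second captured frame, wire order */
    const uint8_t wire[4] = {0x7D, 0x0C, 0xC6, 0x41};

    float celsius;
    memcpy(&celsius, wire, sizeof celsius);   /* 0x41C60C7D -> 24.756... */

    printf("%.4f C = %.2f F\n", celsius, celsius * 9.0f / 5.0f + 32.0f);
    return 0;
}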