My checksum of 5 bytes of data does not match - math error?

I'm processing 6-byte messages from a piece of serial hardware.
In the manual, the manufacturer states that the checksum of each message (its 6th byte) is 'the low byte of the summation of the rest of the message.'
Here is one of their examples, dissected
Here are some others
I haven't tried all those examples yet, so let me show my work on the first 'dissected' example:
This is the formula as provided:
Low byte of 0xB2 + 0x00 + 0x69 + 0x1A + 0x83 = 0x68
So the summation is 0x1B8; if I take the low 8 bits, I get 0xB8.
Hmmm... am I doing that wrong?
I thought for a bit and guessed that maybe they just do a bitwise operation instead - that's pretty common on older hardware, right?
So I wrote out the bits of each part and XORed the series together...
0xB2 ^ 0x00 = 0xB2 (duh)
0xB2 ^ 0x69 = 0xDB
0xDB ^ 0x1A = 0xC1
0xC1 ^ 0x83 = 0x42
I did this by hand, and by calculator. Same result.
I was able to reproduce my computations in my program, but my checksums are quite different from what the hardware is outputting. The model number in the manual matches the hardware I have...
Looking at the binary of each part of the summation, I can't see a clear pattern relating them to the documented output. In some checksums, like the IPv4 header checksum, the carry is shifted or added back into the checksum - could that be the case here?
My question is:
Am I making a math error in how this checksum is being calculated?
Any help would be greatly appreciated! Thank you.

I just attacked all the samples with the Windows RT calculator, and all of the others (here) are fine - it's just the first example (which you dissected) that is erroneous. This looks like a simple documentation typo.
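For what it's worth, here is a minimal C sketch (mine, not from the original posts) of the documented rule, taking the low byte of the summation, applied to the dissected example. It prints 0xB8 rather than the documented 0x68, which supports the documentation-typo explanation.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* First five bytes of the dissected example message */
    const uint8_t msg[5] = {0xB2, 0x00, 0x69, 0x1A, 0x83};
    uint16_t sum = 0;
    for (int i = 0; i < 5; i++)
        sum += msg[i];
    /* The checksum is the low byte of the summation */
    printf("sum = 0x%03X, checksum = 0x%02X\n", (unsigned)sum, (unsigned)(sum & 0xFF));
    return 0;
}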

Related

How do those bit-wise operations work, and why wouldn't it use little endian instead?

I found these in the Arduino.h library and was confused about the lowByte macro:
#define lowByte(w) ((uint8_t) ((w) & 0xff))
#define highByte(w) ((uint8_t) ((w) >> 8))
At lowByte: wouldn't the conversion from WORD to uint8_t just take the low byte anyway? I know they use w & 0x00ff to get the low byte, but wouldn't the cast alone take the low byte?
At both the low/high: why wouldn't they use little endians and read with a size/offset?
i.e. if w is 0x12345678, the high is 0x1234 and the low is 0x5678; it is written to memory as 78 56 34 12 at, say, offset x
to read w, you read a word-sized value at location x
to read the high, you read a byte/uint8_t at location x
to read the low, you read a byte/uint8_t at location x + 2
At lowByte: wouldn't the conversion from WORD to uint8_t just take the low byte anyway? I know they use w & 0x00ff to get the low byte, but wouldn't the cast alone take the low byte?
Yes. Some people like to be extra explicit in their code anyway, but you are right.
At both the low/high: why wouldn't they use little endians and read with a size/offset?
I don't know what that means, "use little endians".
But simply aliasing a WORD as a uint8_t and using pointer arithmetic to "move around" the original object generally has undefined behaviour. You can't alias objects like that. I know your teacher probably said you can because it's all just bits in memory, but your teacher was wrong; C and C++ are abstractions over computer code, and have rules of their own.
Bit-shifting is the conventional way to achieve this.
In the case of lowByte, yes, the cast to uint8_t is equivalent to (w) & 0xff.
Regarding "using little endians", you don't want to access individual bytes of the value, because you don't necessarily know whether your system is big endian or little endian.
For example:
uint16_t n = 0x1234;
unsigned char *p = (unsigned char *)&n;  /* inspect the stored bytes directly */
printf("0x%02x 0x%02x\n", p[0], p[1]);
If you ran this code on a little endian machine it would output:
0x34 0x12
But if you ran it on a big endian machine you would instead get:
0x12 0x34
By using shifts and bitwise operators, you operate on the value, which must be the same on all implementations, instead of the representation of the value, which may differ.
So don't operate on individual bytes unless you have a very specific reason to.
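To make that concrete, here is a small stand-alone example (not part of the original answer) that extracts the low and high bytes of a 16-bit word with a mask and a shift, just like the lowByte/highByte macros. It prints the same thing on big-endian and little-endian machines, because it works on the value rather than the representation.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t w = 0x1234;
    uint8_t lo = (uint8_t)(w & 0xff);  /* 0x34 on every platform */
    uint8_t hi = (uint8_t)(w >> 8);    /* 0x12 on every platform */
    printf("low = 0x%02x, high = 0x%02x\n", (unsigned)lo, (unsigned)hi);
    return 0;
}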

How can this CRC (Cyclic Redundancy Check) calculation be solved?

I want to send data to a TCP 105 circuit.
The following hex command is OK to send data 123:
7F30001103 313233 45D4
Here, 313233 is the hex representation of 123 and 45D4 is the CRC value.
I'm having trouble obtaining this 45D4 when I calculate the CRC. After searching the web for a long time, I keep getting other CRC values in different standards, but those CRC values are not accepted by my LED display circuit.
Please help me understand how it is possible to get 45D4 from 7F30001103313233.
Thanks in advance.
The command matches an algorithm called CRC-16/CMS.
$ reveng -w 16 -s 7f30001103313233d445
width=16 poly=0x8005 init=0xffff refin=false refout=false xorout=0x0000 check=0xaee7 name="CRC-16/CMS"
This is probably, but not certainly, the correct algorithm, since you've only given one codeword (and because I've assumed that the CRC has been byte-swapped).
To generate code that computes this CRC, see Mark Adler's crcany tool, for instance.
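For illustration, here is a bit-at-a-time C sketch of the parameters reveng reported (width 16, poly 0x8005, init 0xFFFF, no reflection, no final XOR). It is only a sketch based on that model, not vendor code; it yields 0xD445 for the message above, which appears on the wire byte-swapped as 45 D4.

#include <stdint.h>
#include <stdio.h>

/* CRC-16/CMS: width=16, poly=0x8005, init=0xFFFF, refin=false,
   refout=false, xorout=0x0000 (the model reported by reveng). */
static uint16_t crc16_cms(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++ << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8005)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    const uint8_t msg[] = {0x7F, 0x30, 0x00, 0x11, 0x03, 0x31, 0x32, 0x33};
    /* Prints 0xD445; sent low byte first, it appears on the wire as 45 D4. */
    printf("0x%04X\n", (unsigned)crc16_cms(msg, sizeof msg));
    return 0;
}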

Cyclic redundancy check in the DLL (data link layer)

A bit stream 11100110 is to be transmitted using the CRC method. The generator polynomial is X^4 + X^3 + 1.
1. What is the actual bit stream transmitted?
2. Suppose the third bit from the left is inverted during transmission. How is the error detected?
3. How is the generator polynomial already known to both the sender side and the receiver side? Please make this clear.
Solution:
Here, the FCS will be 0110, since n = 4: append four zero bits to the message and take the remainder of 111001100000 divided by the generator 11001.
So the actual bit stream transmitted is 11100110 0110, i.e. 111001100110.
I am confused about problems 2 and 3 - please answer those two questions.
Thank You!
If you know how to generate the 0110, then invert the bit and generate a new CRC. You will see that it's different. On the other end, when you compute the CRC of the eight bits sent, it will not match the four-bit CRC that was sent.
The two sides agree a priori on a protocol, which includes the definition of the CRC to be used.
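If it helps, here is a small C sketch (mine, not part of the original answer) that does the division over GF(2) for this generator, 11001. It reproduces the FCS 0110 for 11100110, and it shows that the message with the third bit from the left inverted (11000110) produces a different remainder, which is how the receiver detects the error.

#include <stdio.h>

/* Remainder of (data with 4 zero bits appended) divided by 11001,
   i.e. long division over GF(2) for the generator X^4 + X^3 + 1. */
static unsigned crc4(unsigned data, int nbits)
{
    unsigned reg = data << 4;             /* append four zero bits */
    for (int i = nbits + 3; i >= 4; i--)  /* reduce from the top bit down */
        if (reg & (1u << i))
            reg ^= 0x19u << (i - 4);      /* 0x19 = binary 11001 */
    return reg & 0xFu;
}

int main(void)
{
    unsigned msg = 0xE6;                   /* 11100110 */
    printf("FCS of 11100110 = %X\n", crc4(msg, 8));   /* 6 = 0110 */

    unsigned bad = msg ^ 0x20u;            /* 11000110: third bit from left flipped */
    printf("FCS of 11000110 = %X\n", crc4(bad, 8));   /* differs, so the error is detected */
    return 0;
}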

Simple SDLC CRC calculation not giving the correct value

I am trying to figure out how to calculate the CRC for very simple SDLC frames.
Using an MLT, I am capturing the stream and I see some simple frames being sent out, like 0x3073F9E3 and 0x3011EDE3.
From my understanding, F9E3 and EDE3 are the 2-byte checksums of 3073 and 3011, since that is all that was in each frame.
Using numerous CRC calculators and calculations, I have been able to get the first byte of the checksum (the F9 and the ED), but not the last byte.
Using this calculator (http://www.zorc.breitbandkatze.de/crc.html):
Select CRC-CCITT
Change Final XOR Value to: FFFF
Check Reverse Data Bytes and reverse CRC result before Final XOR
Then type the input: %30%11
This gives the output B8ED, so its last byte is the ED.
Any ideas?
You are getting the correct CRC-16s (F9 F8 and ED B8). I don't know why your last byte is E3 in both cases; that is perhaps a clue that the packets are not being disassembled correctly.
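For reference, here is a short C sketch (not from the original answer) of the model you selected on the zorc calculator - reflected CRC-CCITT with init 0xFFFF and final XOR 0xFFFF, i.e. the X.25/SDLC FCS. It reproduces the two values quoted above.

#include <stdint.h>
#include <stdio.h>

/* CRC-16/X-25 (SDLC/HDLC FCS): reflected poly 0x1021 -> 0x8408,
   init 0xFFFF, final XOR 0xFFFF. */
static uint16_t crc16_x25(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= *data++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0x8408)
                            : (uint16_t)(crc >> 1);
    }
    return (uint16_t)~crc;
}

int main(void)
{
    const uint8_t a[] = {0x30, 0x73};
    const uint8_t b[] = {0x30, 0x11};
    /* Prints 0xF8F9 and 0xB8ED; sent low byte first, they appear
       in the frames as F9 F8 and ED B8. */
    printf("0x%04X 0x%04X\n", (unsigned)crc16_x25(a, 2), (unsigned)crc16_x25(b, 2));
    return 0;
}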

One's complement instead of just a sum of bits

A question in my university homework asks why the one's complement is used instead of just the sum of bits in the TCP checksum. I can't find it in my book, and Google isn't helping. Any chance someone can point me in the right direction?
Thanks,
Mike
Since this is a homework question, here is a hint:
Suppose you calculated a second checksum over the entire packet, including the first checksum? Is there a mathematical expression which would determine the result?
Probably the most important reason is that it is endian-independent.
Little-endian computers store the least significant byte at the lowest address (Intel processors, for example), while big-endian computers store the most significant byte first (IBM mainframes, for example). When the carry is added back into the LSB to form the one's complement sum, it doesn't matter whether we add 03 + 01 or 01 + 03: the result is the same.
Other benefits include the ease of verifying the transmission and the checksum calculation, plus a variety of ways to speed up the calculation by updating only the IP fields that have changed.
Ref: http://www.netfor2.com/checksum.html
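To make the end-around carry concrete, here is a hedged C sketch of an RFC 1071 style Internet checksum over a few made-up 16-bit words (the data values are only an illustration). Running it also hints at the property mentioned in the first answer: sum the data together with its checksum and see what you get.

#include <stdint.h>
#include <stdio.h>

/* RFC 1071 style checksum: add 16-bit words, fold the carries back in
   (end-around carry), then take the one's complement. */
static uint16_t inet_checksum(const uint16_t *words, size_t count)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += words[i];
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    const uint16_t data[4] = {0x0001, 0xF203, 0xF4F5, 0xF6F7};  /* arbitrary example words */
    uint16_t ck = inet_checksum(data, 4);
    printf("checksum = 0x%04X\n", (unsigned)ck);

    /* Check it as a receiver would: fold the one's complement sum over
       the data plus the checksum and look at the result. */
    uint32_t sum = ck;
    for (int i = 0; i < 4; i++)
        sum += data[i];
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    printf("data + checksum folds to 0x%04X\n", (unsigned)sum);
    return 0;
}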

Resources