I have the following frame:
7e 01 00 00 01 00 18 ef 00 00 00 b5 20 c1 05 10 02 71 2e 1a c2 05 10 01 71 00 6e 87 02 00 01 42 71 2e 1a 01 96 27 be 27 54 17 3d b9 93 ac 7e
If I understand correctly, then it is this portion of the frame on which the FCS is calculated:
010000010018ef000000b520c1051002712e1ac205100171006e8702000142712e1a019627be2754173db9
I've tried entering this into a number of online calculators, but I can't produce 0x93ac from the above data.
http://www.lammertbies.nl/comm/info/crc-calculation.html with input type hex.
How is 0x93ac arrived at?
Thanks,
Barry
Answering mainly for the benefit of others who get here while searching for advice.
The key is what several points in the closely related ITU-T recommendations (e.g. Q.921, available online for quite some time now) say:
1. the lowest order bit is transmitted (and thus received) first
This legacy behaviour is contrary to everyday convention, where the highest-order digits are written first in reading order. Generic online calculators and libraries perform the calculation in the conventional order and provide optional settings to facilitate the reversed one.
Therefore, you must ask the online calculator:
- to reverse the order of bits in the message you've input in the "conventional" format before performing the calculation,
- to reverse the order of bits of the result, so that you get them in the same order as in the message itself.
Quite reasonably, some calculators offer just a single common setting for both.
This explains the settings "reverse data bytes" and "reverse CRC result before Final XOR" recommended in the other answer.
2. the result of the CRC calculation must be bit-inverted before sending
Bit inversion is another name for "XOR with 0xffff...". There is a purpose in bit-inverting the CRC calculation result before sending it as the message FCS (the last two bytes of the message, the '93 ac' in your example).
See point 4 for details.
This explains the setting "Final value ffff", whose name is quite misleading, as it actually defines the pattern to be XORed with the result of the calculation. Such an operation is required by several CRC types, with only the XOR patterns varying from 0 (no-op) through 0xffff... (complete inversion), so generic calculators/libraries offer it for simplicity of use.
3. the calculation must include processing of a leading sequence of 0xffff
This explains the setting "Initial value ffff".
4. on the receiving (checking) side, it is recommended to push the complete message, i.e. including the FCS, through the CRC calculation, and expect the result to be 0x1d0f
There is some clever thinking behind this:
- the intrinsic property of the CRC algorithm is that CRC( x.CRC(x) ) is always 0, where x represents the original message and "." represents concatenation;
- running the complete message through the calculation, rather than calculating only the message itself and comparing with the FCS received separately, means a much simpler algorithm (or even circuitry) at the receiving side;
- however, it is too easy to make a coding mistake causing a result to become 0. Luckily, thanks to the intrinsic properties of the CRC algorithm again, CRC( x.(CRC(x))' ) yields a constant value independent of x and different from 0 (at least for CRC-CCITT, which we talk about here). The "'" sign represents the bit inversion required in point 2.
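To see points 1-4 in action, here is a minimal Python sketch (not from the recommendations, just an illustration; the function name is mine) of the LSB-first CRC-CCITT computation applied to the frame from the question:

def crc16_x25(data):
    # LSB-first bit order (point 1): 0x8408 is the polynomial 0x1021 bit-reversed
    crc = 0xFFFF                          # initial value ffff (point 3)
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF                   # bit-invert the result (point 2)

frame = bytes.fromhex(
    "010000010018ef000000b520c1051002712e1a"
    "c205100171006e8702000142712e1a019627be2754173db9")
fcs = crc16_x25(frame)
print(hex(fcs))                           # 0xac93, sent low byte first: 93 ac
full = frame + bytes([fcs & 0xFF, fcs >> 8])
# running the complete frame through the register yields a constant (point 4):
print(hex(crc16_x25(full) ^ 0xFFFF))      # 0xf0b8, i.e. 0x1d0f bit-reversed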
First of all, the CRC value is 0xac93.
Use this calculator: http://www.zorc.breitbandkatze.de/crc.html
Set CRC order 16
Polynomial 1021
Initial value ffff
Final value ffff
"reverse data bytes"
"reverse CRC result before Final XOR"
Enter your sequence as:
%01%00%00%01%00%18%ef%00%00%00%b5%20%c1%05%10%02%71%2e%1a%c2%05%10%01%71%00%6e%87%02%00%01%42%71%2e%1a%01%96%27%be%27%54%17%3d%b9
Press "calculate" and you get 0xAC93
This is a simple Python script for HDLC CRC calculation. You can use it for DLMS.
def byte_mirror(c):
    # reverse the bit order within one byte (HDLC sends LSB first)
    c = (c & 0xF0) >> 4 | (c & 0x0F) << 4
    c = (c & 0xCC) >> 2 | (c & 0x33) << 2
    c = (c & 0xAA) >> 1 | (c & 0x55) << 1
    return c

CRC_INIT = 0xffff
POLYNOMIAL = 0x1021
DATA_VALUE = 0xA0   # not used below

SNRM_request = [0x7E, 0xA0, 0x08, 0x03, 0x02, 0xFF, 0x93, 0xCA, 0xE4, 0x7E]

print("sent>>", end=" ")
for x in SNRM_request:
    if x > 15:
        print(hex(x), end=" ")
    else:
        a = str(hex(x))
        a = a[:2] + "0" + a[2:]   # pad single hex digits with a leading zero
        print(a, end=" ")

lenn = len(SNRM_request)
print(" ")

crc = CRC_INIT
for i in range(lenn):
    # skip the opening flag, the two FCS bytes, and the closing flag
    if (i != 0) and (i != (lenn - 1)) and (i != (lenn - 2)) and (i != (lenn - 3)):
        print("i>>", i)
        c = SNRM_request[i]
        c = byte_mirror(c)        # bit-reverse the byte (LSB-first wire order)
        c = c << 8
        print(hex(c))
        for j in range(8):        # process the byte bit by bit
            print(hex(c))
            print("CRC", hex(crc))
            if (crc ^ c) & 0x8000:
                crc = (crc << 1) ^ POLYNOMIAL
            else:
                crc = crc << 1
            c = c << 1
            crc = crc % 65536     # keep crc and c within 16 bits
            c = c % 65536

print("CRC-CALC", hex(crc))
crc = 0xFFFF - crc                # final bit inversion (same as XOR with 0xffff)
print("CRC- NOT", hex(crc))
crc_HI = crc // 256
crc_LO = crc % 256
print("CRC-HI", hex(crc_HI))
print("CRC-LO", hex(crc_LO))
crc_HI = byte_mirror(crc_HI)      # mirror each FCS byte back to wire bit order
crc_LO = byte_mirror(crc_LO)
print("CRC-HI-zrc", hex(crc_HI))
print("CRC-LO-zrc", hex(crc_LO))
crc = 256 * crc_HI + crc_LO
print("CRC-END", hex(crc))
For future readers, there's code in appendix C of RFC1662 to calculate FCS for HDLC.
I was looking at some code today for integrating a real-time clock with an Arduino, and it had some binary-to-decimal (and vice versa) conversion code that I don't fully understand.
The code in question is below:
byte decToBcd(byte val)
{
  return ( (val/10*16) + (val%10) );
}

byte bcdToDec(byte val)
{
  return ( (val/16*10) + (val%16) );
}
ex: decToBcd(12);
I really fail to grasp how this works. I am not sure I understand the math, or if some sort of assumptions are being taken advantage of.
Would someone mind explaining how exactly the math and data types above are supposed to work? If possible, touch on why the value "16" is used in the conversions instead of "8" when we are supposed to be working with a byte value.
For context, the full code can be found here: http://www.codingcolor.com/microcontrollers/an-arduino-lcd-clock-using-a-chronodot-rtc/
The key hint here is BCD - Binary-coded decimal - in the function name. In BCD each decimal digit is represented by four bits (half of a byte, also called a nibble). As a result, the maximum (decimal) number you can store in a single byte using BCD notation is 99: 9 in the upper nibble and 9 in the lower nibble.
Let's take a look at number 12 as an example. Number 12 looks as follows in the binary notation:
12 = %00001100
However in BCD it looks as follows:
12 = %00010010
because
0001 0010
   1    2
Now if you look at the decToBcd function val%10 is responsible for calculating the value of the ones place (i.e. the last digit). Since this goes to the lower part of the byte we don't need to do anything special here. val/10*16 first calculates the value of the tens place - val/10. However since the value has to go to the upper half of the byte it needs to be shifted up by four bits - hence *16. Another (in my opinion more readable) way of writing this function would be:
((val / 10) << 4) | (val % 10)
The bcdToDec function does the reverse conversion.
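If it helps to see the arithmetic run, here is a quick Python sketch of both conversions (the names simply mirror the Arduino functions):

def dec_to_bcd(val):
    # tens digit goes into the high nibble, ones digit into the low nibble
    return ((val // 10) << 4) | (val % 10)

def bcd_to_dec(val):
    # high nibble holds the tens, low nibble the ones
    return (val >> 4) * 10 + (val & 0x0F)

print(bin(dec_to_bcd(12)))       # 0b10010, i.e. 0001 0010
print(bcd_to_dec(0b00010010))    # 12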
An RTC usually stores the year in one byte as two digits only, i.e. 2014 is stored as 14.
Some of them store it as an offset from the year 1970, so 2014 = 44.
Either way, the maximum it can hold is 99.
I'm writing some crypto (known algorithm - not rolling my own) but I couldn't find any specific documentation on this case.
One method of padding (although any of the schemes could have the same problem) works like this:
If your block is < 8 bytes, pad the end with the number of padding bytes
So FF E2 B8 AA becomes FF E2 B8 AA 04 04 04 04
Which is great and gives you a pretty obvious window through which you can remove the padding during decryption. But instead of the above example, say I have this:
10 39 ff ef 09 64 aa (7 bytes in length). In this situation the above algorithm says to convert this to 10 39 ff ef 09 64 aa 01. But then, when decrypting, if you get a 01 byte at the end of a decrypted message, how do you know whether it's meant to be padding (and should be stripped) or part of the actual message that you should keep?
The most reasonable solutions I can think of are to append/prepend the size of the actual message during encryption, or to add a parity block stating whether there's padding or not; both have their own problems in my mind.
I'm assuming this problem has been encountered before but I was wondering what the solution was.
PKCS #5/7 padding is always added – if the length of the plaintext is a multiple of the block size, a whole block of padding is added. This way there is no ambiguity, which is the main benefit of PKCS #7 over, say, zero padding.
Quoted from the PKCS #7 specification:
2. Some content-encryption algorithms assume the
input length is a multiple of k octets, where k > 1, and
let the application define a method for handling inputs
whose lengths are not a multiple of k octets. For such
algorithms, the method shall be to pad the input at the
trailing end with k - (l mod k) octets all having value k -
(l mod k), where l is the length of the input. In other
words, the input is padded at the trailing end with one of
the following strings:
01 -- if l mod k = k-1
02 02 -- if l mod k = k-2
.
.
.
k k ... k k -- if l mod k = 0
The padding can be removed unambiguously since all input is
padded and no padding string is a suffix of another. This
padding method is well-defined if and only if k < 256;
methods for larger k are an open issue for further study.
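To make the rule concrete, here is a minimal Python sketch of PKCS #7 padding and unpadding (block size k = 8, as in the question's examples; the function names are mine):

def pkcs7_pad(data, k=8):
    n = k - (len(data) % k)        # n is always 1..k, never 0
    return data + bytes([n]) * n

def pkcs7_unpad(padded, k=8):
    n = padded[-1]
    if not 1 <= n <= k or padded[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return padded[:-n]

# a 7-byte message gets a single 01 byte:
print(pkcs7_pad(bytes.fromhex("1039ffef0964aa")).hex())
# an aligned message ending in 01 gets a whole block of 08s,
# so there is never any doubt about what to strip:
print(pkcs7_pad(bytes.fromhex("1039ffef0964aa01")).hex())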
I'm programming my Arduino micro controller and I found some code for accepting accelerometer sensor data for later use. I can understand all but the following code. I'd like to have some intuition as to what is happening but after all my searching and reading I can't wrap my head around what is going on and truly understand.
I have taken a class in C++ and we did very little with bitwise operations or bit shifting or whatever you'd like to call it. Let me try to explain what I think I understand and you can correct me where it is needed.
So:
I think we are storing a value in x, pretty sure in fact.
It appears that the data in array "buff", slot number 1, is being set to the datatype of integer.
The value in slot 1 is being bit shifted 8 places to the left.(does this point to buff slot 0?)
This new value is being compared to the data in buff slot 0 and if either bits are true then the bit in the data stored in x will also be true so, 0 and 1 = 1, 0 and 0 = 0 and 1 and 0 = 1 in the end stored value.
The code does this for all three axis: x, y, z but I'm not sure why...I need help. I want full understanding before I progress.
//each axis reading comes in 10 bit resolution, ie 2 bytes.
// Least Significant Byte first!!
//thus we are converting both bytes in to one int
x = (((int)buff[1]) << 8) | buff[0];
y = (((int)buff[3]) << 8) | buff[2];
z = (((int)buff[5]) << 8) | buff[4];
This code is being used to convert the raw accelerometer data (in an array of 6 bytes) into three 10-bit integer values. As the comment says, the data is LSB first. That is:
buff[0] // least significant 8 bits of x data
buff[1] // most significant 2 bits of x data
buff[2] // least significant 8 bits of y data
buff[3] // most significant 2 bits of y data
buff[4] // least significant 8 bits of z data
buff[5] // most significant 2 bits of z data
It's using bitwise operators to put the two parts together into a single variable. The (int) typecasts are unnecessary and (IMHO) confusing. This simplified expression:
x = (buff[1] << 8) | buff[0];
Takes the data in buff[1], shifts it left 8 bits, and then puts the 8 bits from buff[0] in the space so created. Let's label the 10 bits a through j for example's sake:
buff[0] = cdefghij
buff[1] = 000000ab
Then:
buff[1] << 8 = ab00000000
And:
buff[1] << 8 | buff[0] = abcdefghij
The value in slot 1 is being bit shifted 8 places to the left.(does this point to buff slot 0?)
Nah. Bitwise operators aren't pointer arithmetic; don't confuse the two. Shifting N places to the left is (roughly) equivalent to multiplying by 2 to the Nth power (except for some corner cases in C, but let's not talk about those yet).
This new value is being compared to the data in buff slot 0 and if either bits are true then the bit in the data stored in x will also be true
No. | is not the logical OR operator (that would be ||) but the bitwise OR. All the code does is combine the two bytes in buff[0] and buff[1] into a single 2-byte integer, where buff[1] holds the MSB of the number.
The device result is in 6 bytes and the bytes need to be rearranged into 3 integers (having values that can only take up 10 bits at most).
So the first two bytes look like this:
00: xxxx xxxx <- binary value
01: ???? ??xx
The ??? part isn't part of the result because the xxx part comprise the 10 bits. I guess the hardware is built in such a way that the ??? part is all zero bits.
To get this into a single integer variable, we need all 8 of the low bits plus the 2 high-order bits, shifted left by 8 positions so they don't interfere with the low-order 8 bits. The bitwise OR (| - vertical bar) joins those two parts into a single integer that looks like this:
x: ???? ??xx xxxx xxxx <- binary value of a single 16 bit integer
Actually it doesn't matter how big the 'int' is (in bits), as the remaining bits (beyond those 16) will be zero in this case.
To expand and clarify the reply by Carl Norum:
The (int) typecast is required because the source is a byte. The bit shift is performed on the source data type before the result is saved into x. Therefore it must be cast to at least 16 bits (an int) in order to shift by 8 bits and retain all the data before the OR operation is executed and the result saved.
What the code is not telling you is whether this should be an unsigned int, or whether there is a sign in the bit data. I'd expect negative values to be possible with an accelerometer.
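On that last point, here is a small Python sketch (the buffer contents are made up, and it assumes the device delivers 10-bit two's-complement readings, which the datasheet would have to confirm) showing how the combined value would be sign-extended:

def to_signed_10bit(raw):
    # interpret a 10-bit value as two's complement
    return raw - 1024 if raw & 0x200 else raw

buff = [0xFF, 0x03, 0x00, 0x02, 0x34, 0x01]   # hypothetical sample, LSB first
x = (buff[1] << 8) | buff[0]                  # 0x03FF = 1023 unsigned
print(x, to_signed_10bit(x))                  # 1023 -1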
I spent a long time decoding IR codes with Ken Shirriff's Arduino library. I modified the code a bit so that I was able to dump a Samsung air conditioner's (MH026FB) 56-bit signals.
The results of my work are located in the Google Docs document Samsung MH026FB AirCon IR Codes Dump.
It is a spreadsheet with all dumped values and the interpretation of the results. AFAIK, the air conditioner unit sends out two or three "bursts" of 56-bit data, depending on the command. I was able to decode the bits properly, figuring out where the air conditioner temperature, fan, function and other options are located.
The problem I have is related to the checksum. In all those 7-byte codes, the second byte is somehow computed from the latter 5 bytes, for example:
BF B2 0F FF FF FF F0 (lead-in code)
7F B8 8A 71 F6 4F F0 (auto mode - 25 degrees)
7F B2 80 71 7A 4F F0 (auto mode - 26 degrees)
7F B4 80 71 FA 7D F0 (heat mode - 26 degrees - fan auto)
Since I re-create the IR codes at runtime, I need to be able to compute checksum for these codes.
I tried many standard checksum algorithms; none of them gave meaningful results. The checksum seems to be related to the number of zeroes in the rest of the code (bytes 3 to 7), but I really can't figure out how.
Is there a solution to this problem?
Ken Shirriff sorted this out. The algorithm is as follows:
1. Count the number of 1 bits in all the bytes except #2 (the checksum).
2. Compute count mod 15. If the value is 0, use 15 instead.
3. Take the value from step 2, flip the 4 bits, and reverse the order of the 4 bits.
4. The checksum is Bn, where n is the value from the previous step.
Congratulations to him for his cleverness and sharpness.
When the bit order in bytes/packets and the 0/1 interpretation are taken properly into account (from the algorithm it appears that both are reversed), the algorithm would be just the sum of 0 bits modulo 15.
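That recipe is easy to verify in Python against the three mode codes from the question; a quick sketch (the function name is mine):

def samsung_checksum(code):
    # count the 1 bits in every byte except byte #2 (the checksum itself)
    ones = sum(bin(b).count("1") for i, b in enumerate(code) if i != 1)
    n = ones % 15 or 15                    # mod 15; use 15 instead of 0
    n = ~n & 0x0F                          # flip the 4 bits
    n = int(format(n, "04b")[::-1], 2)     # reverse the 4 bits
    return 0xB0 | n                        # the checksum byte is Bn

for line in ("7F B8 8A 71 F6 4F F0",
             "7F B2 80 71 7A 4F F0",
             "7F B4 80 71 FA 7D F0"):
    code = bytes.fromhex(line.replace(" ", ""))
    print(line, "->", hex(samsung_checksum(code)))   # 0xb8, 0xb2, 0xb4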
Ken's algorithm is nearly correct.
Count the 0's / 1's (you can call them what you like, but it is the short signals).
Do not count the 2nd byte, nor the first/last bit of the 3rd byte (depending on whether you read it as big- or little-endian).
Take the result and subtract 30 (29 - 30 wraps to 15 when looking at only 4 bits!).
Reverse the result.
Checksum = 0x4 followed by the reversed result if the short signals are 0, and 0xB followed by the reversed result if the long signals are 0.
I used Ken's method, but mod 15 didn't work for me.
1. Count the number of 1 bits in all the bytes except #2 (the checksum).
2. Compute count mod 17. If the value is 16, use the first byte of the mode result (0).
3. Take the value and flip the 4 bits.
4. The checksum is 0xn9, where n is the value from the previous step.
I'm not understanding how this result can be zero. This was presented to me as an example to validate a checksum of a message.
ED(12+01+ED=0)
How can this result be zero?
"1201 is the message" ED is the checksum, my question is more on, how can I determine the checksum?
Thank you for any help.
Best regards,
FR
How can this result be zero?
The checksum is presumably represented by a byte.
A byte can store 256 different values, so the calculation is probably done modulo 256.
Since 0x12 + 0x01 + 0xED = 0x100 (256), the result becomes 0.
how can I determine the checksum?
The checksum is the specific byte value B that makes the sum of the bytes in the message + B = 0 (modulo 256).
So, as #LanceH says in the comment, to figure out the checksum B, you...
add up the values of the bytes in the message (say it adds up to M)
compute M' = M % 256
Now, the checksum B is computed as (256 - M') % 256; the final "% 256" just covers the case where M' is already 0.
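In Python, for instance (a trivial sketch):

def checksum(msg):
    # the byte that makes the modulo-256 sum of message + checksum zero
    return (256 - sum(msg) % 256) % 256

msg = bytes([0x12, 0x01])
b = checksum(msg)
print(hex(b))                  # 0xed
print((sum(msg) + b) % 256)    # 0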
I'm not sure about your checksum details but in base-16 arithmetic (and in base-10):
  base-16    base-10
  -------    -------
     12         18
     01          1
  +  ED      + 237
  -------    -------
    100        256
If your checksum is modulo 256 (16^2), you only keep the last 2 base-16 digits, so you have 00.
Well, obviously, when you add up 12 + 01 + ED the result overflows 1 byte, and is actually the hex number 100. So, if you only take the final byte of 0x0100, you get 0.