CRC16 checksum: HCS08 vs. Kermit vs. XMODEM - microcontroller

I'm trying to add CRC16 error detection to a Motorola HCS08 microcontroller application. My checksums don't match, though. One online CRC calculator provides both the result I see in my PC program and the result I see on the micro.
It calls the micro's result "XModem" and the PC's result "Kermit."
What is the difference between the way those two ancient protocols specify the use of CRC16?

you can implement 16 bit IBM, CCITT, XModem, Kermit, and CCITT 1D0F using the same basic code base. see http://www.acooke.org/cute/16bitCRCAl0.html which uses code from http://www.barrgroup.com/Embedded-Systems/How-To/CRC-Calculation-C-Code
the following table shows how they differ:
name         polynomial   initial val   reverse byte?   reverse result?   swap result?
CCITT        1021         ffff          no              no                no
XModem       1021         0000          no              no                no
Kermit       1021         0000          yes             yes               yes
CCITT 1D0F   1021         1d0f          no              no                no
IBM          8005         0000          yes             yes               no
where 'reverse byte' means that each byte is bit-reversed before processing; 'reverse result' means that the 16 bit result is bit-reversed after processing; 'swap result' means that the two bytes in the result are swapped after processing.
all the above was validated with test vectors against http://www.lammertbies.nl/comm/info/crc-calculation.html (if that is wrong, we are all lost...).
so, in your particular case, you can convert code for XModem to Kermit by bit-reversing each byte, bit reversing the final result, and then swapping the two bytes in the result.
[i believe, but haven't checked or worked out the details, that reversing each byte is equivalent to reversing the polynomial (plus some extra details). which is why you'll see very different explanations in different places for what is basically the same algorithm.
also, the approach above is not efficient, but is good for testing. if you want efficient the best thing to do is translate the above to lookup-tables.]
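as an illustration, here is a minimal bit-by-bit sketch in C of the engine behind that table (my own translation of the approach above, not the code from the links; the function and parameter names are mine):

#include <stdint.h>

static uint8_t reflect8(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i)) r |= (uint8_t)(1u << (7 - i));
    return r;
}

static uint16_t reflect16(uint16_t w)
{
    uint16_t r = 0;
    for (int i = 0; i < 16; i++)
        if (w & (1u << i)) r |= (uint16_t)(1u << (15 - i));
    return r;
}

/* one engine, five CRCs: the three flags are the columns of the table above */
uint16_t crc16(const uint8_t *data, int len, uint16_t poly, uint16_t init,
               int rev_byte, int rev_result, int swap_result)
{
    uint16_t crc = init;
    while (len--) {
        uint8_t b = rev_byte ? reflect8(*data) : *data;
        data++;
        crc ^= (uint16_t)(b << 8);          /* xor byte into the high end */
        for (int i = 0; i < 8; i++)         /* long division, one bit at a time */
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ poly)
                                 : (uint16_t)(crc << 1);
    }
    if (rev_result)  crc = reflect16(crc);
    if (swap_result) crc = (uint16_t)((crc << 8) | (crc >> 8));
    return crc;
}

/* XModem: crc16(data, len, 0x1021, 0x0000, 0, 0, 0)
   Kermit: crc16(data, len, 0x1021, 0x0000, 1, 1, 1) */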
edit: what i have called CCITT above is documented in the RevEng catalogue as CCITT-FALSE. for more info, see the update to my blog post at the link above.

My recollection (I used to do modem stuff way back when) is that Kermit processes the bits in each byte of the data using the least significant bit first.
Most software CRC implementations (Xmodem, probably) run through the data bytes most significant bit first.
When looking at the library source (download it from http://www.lammertbies.nl/comm/software/index.html) used for the CRC Calculation page you linked to, you'll see that XModem uses CRC16-CCITT, the polynomial for which is:
x^16 + x^12 + x^5 + 1 /* the '^' character here represents exponentiation, not xor */
The polynomial is represented by the bitmap (note that bit 16 is implied)
0x1021 == 0001 0000 0010 0001 binary
The Kermit implementation uses:
0x8408 == 1000 0100 0000 1000 binary
which is the same bitmap as XModem's, only reversed.
The text file that accompanies the library also mentions the following difference for Kermit:
Only for CRC-Kermit and CRC-SICK: After all input processing, the one's complement of the CRC is calculated and the two bytes of the CRC are swapped.
So it should probably be easy to modify your CRC routine to match the PC result. Note that the source in the CRC library seems to have a pretty liberal license - it might make sense to use it more or less as is (at least the portions that apply for your application).
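For illustration, here is a sketch of that LSB-first loop using the reversed bitmap 0x8408 (my own code, not the library source; whether you also need to swap the two result bytes depends on which presentation you are matching):

#include <stdint.h>

/* CRC16 "Kermit" core: poly 0x8408 (0x1021 bit-reversed), init 0x0000,
   data bits processed least significant bit first */
uint16_t crc16_kermit(const uint8_t *data, int len)
{
    uint16_t crc = 0x0000;
    while (len--) {
        crc ^= *data++;                       /* xor into the LOW byte */
        for (int i = 0; i < 8; i++)           /* LSB-first division */
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0x8408)
                            : (uint16_t)(crc >> 1);
    }
    return crc;   /* swap the two bytes if you need Kermit's wire order */
}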

X-Modem 1K CRC16.
Process for bytewise CRC-16 using input data {0x01, 0x02} and polynomial 0x1021:
1. Init crc = 0x0000.
2. Handle first input byte 0x01:
2.1 'Xor-in' first input byte 0x01 into MSB(!) of crc:
0000 0000 0000 0000 (crc)
0000 0001 0000 0000 (input byte 0x01 left-shifted by 8)
0000 0001 0000 0000 = 0x0100
The MSB of this result is our current dividend: MSB(0x0100) = 0x01.
2.2 So 0x01 is the dividend. Get the remainder for the dividend from our table: crctable16[0x01] = 0x1021. (This value is familiar from the manual computation above.)
Remember the current crc value is 0x0000. Shift out the MSB of current crc and xor it with the current remainder to get the new CRC:
0001 0000 0010 0001 (0x1021)
0000 0000 0000 0000 (CRC 0x0000 left-shifted by 8 = 0x0000)
0001 0000 0010 0001 = 0x1021 = intermediate crc.
3. Handle next input byte 0x02:
Currently we have intermediate crc = 0x1021 = 0001 0000 0010 0001.
3.1 'Xor-in' input byte 0x02 into MSB(!) of crc:
0001 0000 0010 0001 (crc 0x1021)
0000 0010 0000 0000 (input byte 0x02 left-shifted by 8)
0001 0010 0010 0001 = 0x1221
The MSB of this result is our current dividend: MSB(0x1221) = 0x12.
3.2 So 0x12 is the dividend. Get the remainder for the dividend from our table: crctable16[0x12] = 0x3273.
Remember the current crc value is 0x1021. Shift out the MSB of current crc and xor it with the current remainder to get the new CRC:
0011 0010 0111 0011 (0x3273)
0010 0001 0000 0000 (CRC 0x1021 left-shifted by 8 = 0x2100)
0001 0011 0111 0011 = 0x1373 = final crc.
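A minimal C sketch of the bytewise process above (the table-building routine is my assumption of the standard MSB-first construction; the crctable16 name follows the walkthrough):

#include <stdint.h>

static uint16_t crctable16[256];

/* build the remainder table for polynomial 0x1021 */
void crc16_build_table(void)
{
    for (int b = 0; b < 256; b++) {
        uint16_t r = (uint16_t)(b << 8);
        for (int i = 0; i < 8; i++)
            r = (r & 0x8000) ? (uint16_t)((r << 1) ^ 0x1021)
                             : (uint16_t)(r << 1);
        crctable16[b] = r;        /* e.g. crctable16[0x01] == 0x1021 */
    }
}

uint16_t crc16_xmodem(const uint8_t *data, int len)
{
    uint16_t crc = 0x0000;                                   /* 1. init    */
    while (len--) {
        uint8_t dividend = (uint8_t)((crc >> 8) ^ *data++);  /* x.1 xor-in */
        crc = (uint16_t)((crc << 8) ^ crctable16[dividend]); /* x.2 divide */
    }
    return crc;   /* {0x01, 0x02} -> 0x1373, as above */
}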


How to transition a hexadecimal number to a new hexadecimal number where the new nibbles represent the original hex number's decimal equivalent?

As an example, take hex number 0x04D2 equal to decimal 1234.
Can someone think of a process to transition 0x04D2 to 0x1234?
I am writing a program in MIPS and plan on taking each nibble of the new hex number and converting it to ASCII for printing, but I can't seem to figure out how to make the transition to the new hex number. Once I have the new hex number, the ASCII transition should be a piece of cake. Even though I intend to implement this in MIPS, I'm more interested in a universal bitwise process or algorithm.
Also, I do know that MIPS can print the decimal number by integer_print syscall'ing the register, but I would rather do it the hard way. Plus, I need the ASCII in the register for what I am doing.
So, starting with only 0x04D2, is it possible to make this transition to 0x1234?
Here are the conversions for hex, dec, bin:
0x04D2 = 1234 = 0000 0100 1101 0010
0x1234 = 4660 = 0001 0010 0011 0100
Thanks!
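A common bitwise approach for exactly this binary-to-BCD conversion is the "double dabble" (shift-and-add-3) algorithm. A minimal C sketch (the function name is mine; translating it to MIPS shifts, masks and adds is straightforward):

#include <stdint.h>

/* Convert a 16-bit binary value (0..9999) to packed BCD using
   "double dabble" (shift-and-add-3), e.g. 0x04D2 -> 0x1234. */
uint16_t bin2bcd(uint16_t bin)
{
    uint32_t scratch = bin;              /* BCD digits build up in bits 16-31 */
    for (int i = 0; i < 16; i++) {
        /* if any BCD nibble is >= 5, add 3 so the coming shift
           carries correctly into the next decimal digit */
        for (int n = 16; n < 32; n += 4)
            if (((scratch >> n) & 0xF) >= 5)
                scratch += (uint32_t)3 << n;
        scratch <<= 1;                   /* shift the next binary bit in */
    }
    return (uint16_t)(scratch >> 16);
}

bin2bcd(0x04D2) returns 0x1234; adding 0x30 to each nibble then yields the ASCII digits.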

Decode SNMP PDUs - Where to Start? [closed]

Hello, my first-ever question on here; I'm in need of a bit of guidance.
I'm working on a packet sniffer, mainly to decode SNMP PDUs; however, I am not entirely sure where to go with it.
Simply put, my packet sniffer can extract information from packets, but I am interested in the data payload field. It is written in C++ and I am using Winsock.
Which way should I go about this? Are the SNMP fields encoded with the Basic Encoding Rules (BER), or will I have to delve into ASN.1?
I am only looking to decode those SNMP fields within the data payload field into human readable form. They are going to be dumped into a text file. So I will be looking at decoding OIDs also. I am verifying everything as I go along with Wireshark and using GETIF to query my SNMP node.
Any guidance is appreciated.
EDIT:
Thanks user1793963 very well explained. Sorry to all who have marked this as too broad.
To elaborate on my original question could anyone explain the initial part of the PDU itself.
Example: My program outputs these hex values 30 82 00 A3 02 01 00, which is SEQUENCE (30), LENGTH (82) and two other values. This is from a GetRequest PDU.
The GetResponse PDU shows these values 30 81 B7 02 01 00, SEQUENCE, 81 in LENGTH and another value.
Could someone explain the values 82 00 A3 and 81 B7? If it uses the simple TLV structure, what do they represent? What I know is the start of the sequence (30) and the total PDU length (which I take to be 82 and 81), and I know 02 01 00 is an INTEGER of LENGTH 1 and VERSION 0; however, I do not understand 00 A3 (GetRequest) and B7 (GetResponse). What do these values represent?
Many thanks.
I am also using Wireshark to check values; however, it does not show the start of the PDU sequence.
Update, 9 years later :-)
In 30 82 00 A3 and 30 81 B7, the 30 is the data type (SEQUENCE). The bytes after it are long-form length fields, used whenever the length exceeds 127 bytes.
The BER rule for long-form lengths: if the first length byte has its top bit set, its lower 7 bits give the number of length bytes that follow, and those bytes hold the length as a plain big-endian number. So 81 B7 means "one length byte follows", for a length of 0xB7 = 183, and 82 00 A3 means "two length bytes follow", for a length of 0x00A3 = 163. A first length byte in the range 00-7F encodes the length directly (short form).
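As an illustrative sketch (my own code, not from the thread), a small C decoder for BER length fields:

#include <stddef.h>
#include <stdint.h>

/* Decode a BER length field starting at buf[0].  Stores the decoded
   length in *len and returns how many bytes the length field itself
   occupies, or 0 on malformed/truncated input. */
size_t ber_decode_length(const uint8_t *buf, size_t avail, uint32_t *len)
{
    if (avail == 0)
        return 0;
    if ((buf[0] & 0x80) == 0) {            /* short form: 0x00..0x7F */
        *len = buf[0];
        return 1;
    }
    size_t n = buf[0] & 0x7F;              /* long form: n length bytes follow */
    if (n == 0 || n > 4 || n + 1 > avail)
        return 0;
    uint32_t v = 0;
    for (size_t i = 1; i <= n; i++)
        v = (v << 8) | buf[i];             /* plain big-endian value */
    *len = v;
    return n + 1;
}

Fed the bytes 82 00 A3 it returns 3 with *len = 163; fed 81 B7 it returns 2 with *len = 183.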
SNMP packets are encoded in ASN.1, but it's a very simple protocol (at least for SNMP v1 and v2c; I do not have a lot of experience with v3). It uses a simple TLV structure: type, length, value. For example the bytes 0x4 0x6 0x70 0x75 0x62 0x6c 0x69 0x63 are a string (type 4) with length 6 and value "public". You can find a list of types here.
I find it useful to write out packets like this:
1. 0x30 0x34
2. 0x2 0x1 0x1
3. 0x4 0x6 0x70 0x75 0x62 0x6c 0x69 0x63
4. 0xa2 0x27
5. 0x2 0x4 0x1 0x2 0x3 0x4
6. 0x2 0x1 0x0
7. 0x2 0x1 0x0
8. 0x30 0x19
9. 0x30 0x17
10. 0x6 0x8 0x2b 0x6 0x1 0x2 0x1 0x1 0x2 0x0
11. 0x6 0xb 0x2b 0x6 0x1 0x4 0x1 0x85 0x22 0xd5 0xf 0x97 0x54
This is a response on a get request where I requested the OID 1.3.6.1.2.1.1.2.0 (sysObjectID).
1. A list (type 0x30) with a length of 52 bytes
2. The version: SNMPv2.c (0=v1, 1=v2.c)
3. The community string: "public" (note how this is sent in cleartext)
4. An SNMP get-response with length 39
5. The request ID, a 32-bit integer
6. The error code; 0 means no error
7. The error index
8. A list with length 25
9. A list with length 23
10. An OID with length 8: 1.3.6.1.2.1.1.2.0
11. An OID with length 11: 1.3.6.1.4.1.674.10895.3028
As you can see, integers and strings are easy, but OIDs are a bit trickier. First of all, the first two parts ("1.3") are represented by a single byte (0x2b): the first byte encodes 40*X + Y, so "1.3" becomes 40*1 + 3 = 43 = 0x2b. They did this to make each message a few bytes shorter.
The second problem is representing numbers larger than 255. To do this SNMP uses only the 7 least significant bits to store data, the most significant bit is a flag to signal that the data continues in the next byte. Numbers lower than 128 are stored in a single byte.
0x7f
= 0 111 1111
= 127
0x85 0x22
= 1 000 0101, 0 010 0010
= 000 0101 010 0010
= 674
0xc0 0x80 0x80 0x0
= 1 100 0000, 1 000 0000, 1 000 0000, 0 000 0000
= 100 0000 000 0000 000 0000 000 0000
= 0x8000000
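For illustration, a minimal C decoder for one such base-128 sub-identifier (my own sketch, not from the thread):

#include <stddef.h>
#include <stdint.h>

/* Decode one base-128 OID sub-identifier: bytes with the top bit set
   continue the number, a byte with the top bit clear ends it.
   Returns the number of bytes consumed, or 0 on truncated/oversized input. */
size_t oid_decode_subid(const uint8_t *buf, size_t avail, uint32_t *out)
{
    uint32_t v = 0;
    for (size_t i = 0; i < avail && i < 5; i++) {
        v = (v << 7) | (buf[i] & 0x7F);    /* append the low 7 bits */
        if ((buf[i] & 0x80) == 0) {        /* top bit clear: last byte */
            *out = v;
            return i + 1;
        }
    }
    return 0;
}

Fed 0x85 0x22 it returns 2 with *out = 674; fed 0xd5 0xf it yields 10895.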
Note that TLV lengths larger than 127 do not use this base-128 scheme: as described above, a length byte with the top bit set gives the count of plain big-endian length bytes that follow.
RFC1592 describes the structure of the messages, take a look at page 11 for a similar example.
I can also recommend using Wireshark to analyze packets, it does an excellent job of translating them to something readable.

explicitly setting and unsetting some pins of a port

I have two shields which conveniently (i.e. no pin clash) share a port, and I need to be able to manipulate just SOME pins on the port. I cannot be sure whether the pins I am manipulating are currently on or off; I just want to set them arbitrarily as the need arises, i.e. in one operation I may be turning some pins on and some off.
I do know:
PORTX |= B11110000 // turns on bits 4-7
PORTX &= B11000011 // turns off bits 2-5
PORTX ^= B00111111 // toggles bits 0-5
My challenge has been to turn on AND off only some pins, leaving others unchanged.
I have achieved the desired result, and I think I have done it in a safe way. I want to confirm that it is in fact SAFE, whether I have gone about it the right (best) way, and whether I could achieve this in a much simpler way.
First, I am using PORTD, pins 4-7. I set those pins as outputs and then set them all as low to ensure my program starts with them (4x relays) all off.
void initRelays(){
  RELAYDDR  |= B11110000;
  RELAYPORT &= ~RELAYDDR;
}
I believe this will set pins 4-7 low without modifying the lower bits: inverting RELAYDDR gives zeros in bits 4-7 and ones in bits 0-3, so the bitwise AND clears the relay pins and leaves bits 0-3 as they were previously set. I'm sure this second line is not strictly required; I am having it here for safety's sake :)
I have left the comments in the code below so you can follow what I am doing.
void relayPush(byte stack){
  // stack maps bit 1 to relay 1 (pin 4), through bit 4 to relay 4 (pin 7)
  // isolate the four bottom bits (the information we want to convert)
  stack &= B00001111;   // (1) I think this line is probably not required
  // now shift into the position we need
  stack <<= 4;          // (2)
  // OR the new stack with the PORT (this turns on any relays set in stack)
  RELAYPORT |= stack;   // (3)
  // we must NOT modify the bottom bits of the port:
  // mark those with a '1' so as to not turn them off
  // (bottom-of-stack mask = 0x0f; XOR stack and mask)
  stack ^= 0x0f;        // (4)
  // AND the new stack with the port to turn off the appropriate relays
  RELAYPORT &= stack;   // (5)
}
I know I have done it in two PORT operations; I could make it one by using a temp variable. That's not a major concern, since the first operation only turns on everything required to be on, and the second turns off everything required to be off.
Have I missed a simpler way of doing this?
edit: I have had a look at what @Ignacio said about changing the final operations, and this is what I've come up with:
0011 0011 current port assignment
xxxx 1010 current stack assignment (we only want the lower nibble)
1010 0011 desired result
0011 0011 current port
xxxx 1010 current stack
0000 1111 step 1 - apply this mask to stack
0000 1010 resultant stack
1010 0000 step 2 - stack << 4
0011 0011 PORT
1010 0000 STACK
1010 0011 step 3 - resultant PORT (port OR stack)
0000 1111 (MASK for step 4)
1010 0000 stack at step 4
1010 1111 step 4 - resultant stack (mask XOR stack)
1010 0011 port from step 3
1010 1111 stack from step 4
1010 0011 port AND stack (desired result)
/// changing steps 4 and 5 to drop XOR, and applying complement
1010 0000 stack prior to step 4
0101 1111 ~stack
1010 0011 port from step 3
0000 0011 stack AND port (not the desired result)
summary:
XOR is needed to set the bottom nibble to B00001111 while leaving the top nibble unchanged. Since we know the bottom nibble is ZERO (from the earlier shift), we could simply add 0x0F; XOR achieves the same thing.
For the final AND operation, the ZEROs in the top nibble are what switch relays off. Hence, no complement.
New idea from my comment to #Ignacio:
0011 0011 current port
xxxx 1010 current stack
1010 0000 shifted stack
0000 0011 temp = port AND 0x0F
1010 0011 stack OR temp (desired result)
Sorry for the long post, but I think that is a better solution, although it does use another variable.
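For what it's worth, that new idea is the classic masked read-modify-write idiom and can be written without a separate temp variable; a minimal sketch (same RELAYPORT macro and Arduino byte type as above, my own code):

void relayPush(byte stack){
  // keep the low nibble of the port, replace the high nibble with stack
  RELAYPORT = (RELAYPORT & 0x0F) | (byte)(stack << 4);
}

Note this is still a non-atomic read-modify-write: if an interrupt handler also writes the same port, wrap it in an atomic block.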
Your AND operation with RELAYPORT clears the upper 4 bits. You should not perform the earlier XOR operation; instead you should just AND with the complement:
RELAYPORT &= ~stack;
Some thoughts, based on my experience with megaAVRs (especially the AT90USB1287):
When you split a port and operate some bits as input and some as output, take extra care when writing the whole output port; the AVR has good bit-level instructions for working on single pins. If you do write a complete port, keep in mind that writing to bits configured as inputs does have an effect: it activates (PORTxy <- 1) or deactivates (PORTxy <- 0) the internal pull-up resistors. So, depending on your hardware, you have to choose what to write to the "unwanted" bits (with a further dependency on the PUD bit in MCUCR). In other words, the to-be-ignored input bits in a value you write out can't contain random values, but exactly the ones that support your intended pull-up configuration. For that reason it isn't useful to read in PINx before writing (parts of it) back to the port; that trick was used on older processors with port hardware less elaborate than the AVR's.
When writing out PORTx, insert a NOP before reading back PINx (because of the internal latch).
In initRelays() I'd use a constant, because it compiles to a single instruction, rather than an expression involving RELAYDDR, which requires reading RELAYDDR back into a register and then writing the register to RELAYPORT.
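For example, a sketch of that constant-based version (RELAY_MASK is a name I made up; the registers are the question's macros):

#define RELAY_MASK B11110000   // relay pins 4-7

void initRelays(){
  RELAYDDR  |= RELAY_MASK;           // pins 4-7 as outputs
  RELAYPORT &= (byte)~RELAY_MASK;    // drive them low, leave bits 0-3 alone
}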

Reading tcpdump header length command

This is my first post and I absolutely <3 this site! So much great content!
So, I have the following TCPDump command I want to understand what it is asking (in plain English).
tcpdump 'tcp[12] & 80 !=0'
Is it asking to grab all TCP packets whose byte at offset 12 (TCP header length and reserved bits) has a value of at least 80? I believe I am wrong.
If the above is true, can someone write out the possible binaries for it?
80 gives 0101 0000. My mentor also wrote down: 1111 0000 and 0111 0000. But I don't know why...
If it were "at least 80", the binary combinations for that would be countless...
Is it asking to grab all TCP packets whose byte at offset 12 (TCP header length and reserved bits) has a value of at least 80?
No. 80 in decimal is 50 in hexadecimal, so it's equivalent to tcp[12] & 0x50 != 0, which tests whether either the 0100 0000 bit or the 0001 0000 bit in the 12th byte of the TCP header is set. That's true of 0101 0000, but it is also true of 1111 0000 and 0111 0000, as well as 0100 0000 and 0001 0000 and 0100 1111 and so on.
If you want to test the uppermost bit of that byte, you'd use tcp[12] & 0x80 !=0. That would, in effect, match all values >= 0x80.
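To make the difference concrete, here is a small, purely illustrative C program (not part of tcpdump) that applies both masks to a few sample byte values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* sample values for the 12th byte of the TCP header */
    uint8_t samples[] = { 0x50, 0xF0, 0x70, 0x40, 0x10, 0x80, 0x0F };
    for (int i = 0; i < 7; i++) {
        uint8_t b = samples[i];
        printf("0x%02X: & 80 (0x50) -> %d, & 0x80 -> %d\n",
               b, (b & 80) != 0, (b & 0x80) != 0);
    }
    return 0;
}

Running it shows that 0x50, 0xF0, 0x70, 0x40 and 0x10 match the decimal-80 filter, while only 0x80 matches the top-bit test.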

net mask and TCP connection issue

Question1. Suppose computers A and B have IP addresses 10.105.1.113 and 10.105.1.91 respectively and they both use the same net mask N. Which of the values of N given below should not be used if A and B should belong to the same network?
255.255.255.0
255.255.255.128
255.255.255.192
255.255.255.224
Question2. While opening a TCP connection, the initial sequence number is to be derived using a time-of-day (ToD) clock that keeps running even when the host is down. The low-order 32 bits of the counter of the ToD clock is to be used for the initial sequence numbers. The clock counter increments once per millisecond. The maximum packet lifetime is given to be 64 s. Which one of the choices given below is closest to the minimum permissible rate at which sequence numbers used for packets of a connection can increase?
0.015/s
0.064/s
0.135/s
0.327/s
An interviewer asked me these questions during a company interview. How do I solve them? Please help me.
Thank you.
Really you should ask only one question per post...
For question 1: after masking, the IP addresses have to look the same. Masking is a bitwise AND operation, so you need to write down the numbers in question in binary. The first three groups don't matter here, since 255 == 11111111 and ANDing with it changes nothing. Let's focus on the last number only:
113 = 0111 0001
91 = 0101 1011
And for the mask:
0 = 0000 0000
128 = 1000 0000
192 = 1100 0000
224 = 1110 0000
Now for the masking:
Example:
1110 0000
0111 0001
========= AND
0110 0000
Since 0 AND 1 == 0, but 1 AND 1 == 1
Applying this mask to the two addresses, we get
mask   113 masked   91 masked
  0    0000 0000    0000 0000
128    0000 0000    0000 0000
192    0100 0000    0100 0000
224    0110 0000    0100 0000   **** when this mask is applied, the results differ
We conclude that the two addresses would end up on different subnets.
Conclusion: you can't use 255.255.255.224 as the mask if you want these two IP addresses on the same subnet. For more information you can go to https://en.wikipedia.org/wiki/Subnetwork for example.
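A tiny C sketch of the same check (the function name is mine; only the last octet matters here, since the first three octets of both addresses are identical):

#include <stdio.h>
#include <stdint.h>

/* two hosts are on the same subnet iff their masked addresses match */
static int same_subnet(uint8_t a, uint8_t b, uint8_t mask)
{
    return (a & mask) == (b & mask);
}

int main(void)
{
    const uint8_t masks[] = { 0, 128, 192, 224 };
    for (int i = 0; i < 4; i++)
        printf("mask .%u: %s\n", masks[i],
               same_subnet(113, 91, masks[i]) ? "same subnet" : "different");
    return 0;
}

It prints "different" only for the .224 mask.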
As for question 2, it is one of those badly phrased questions. Is a "minimum rate" the lowest number or the highest number? When you say "this is the maximum rate" you typically mean the lowest number, but it's open to interpretation. I think in this case they are asking for the smallest number, since the literal interpretation of the question makes no sense. Still, I am struggling to understand what they are asking: when two computers communicate, they increase the sequence number on each packet, so what is "permissible"? I don't know. But 0.015/s is close to 1/(64 s), the reciprocal of the maximum packet lifetime; if I were a betting man, that's where I'd put my money, though I can't fully explain it. I hope the answer to your first question at least is useful, and maybe the rambling on the second spurs some good discussion and an actual answer.
