
What is CRC? And how does it help in error detection?

CRC is a non-secure hash function designed to detect accidental changes to raw computer data, and is commonly used in digital networks and storage devices such as hard disk drives.
A CRC-enabled device calculates a short, fixed-length binary sequence, known as the CRC code, for each block of data and sends or stores them both together. When a block is read or received the device repeats the calculation; if the new CRC code does not match the one calculated earlier, then the block contains a data error and the device may take corrective action such as requesting the block be sent again.
Source: Wikipedia

CRC stands for Cyclic Redundancy Check.
It helps in error detection as follows. The quantities involved are:
b(x) -> transmitted code word
q(x) -> quotient (discarded after the division)
i(x) -> information polynomial
r(x) -> remainder polynomial
g(x) -> generator polynomial, of degree n-k
Step 1: compute x^(n-k) * i(x), i.e. shift the information bits left by n-k positions to make room for the check bits.
Step 2: r(x) = (x^(n-k) * i(x)) mod g(x)
Step 3: b(x) = (x^(n-k) * i(x)) XOR r(x)
which results in the transmitted code word.
This b(x) is sent from the sender to the receiver. At the receiver end you divide the received code word by g(x); if the remainder r(x) is equal to 0, then there is no error, otherwise an error occurred in the code word during the transmission from sender to receiver.
In this way CRC is helpful in error detection.
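To make the three steps concrete, here is a minimal Python sketch of the encoding and the receiver-side check, assuming an example generator g(x) = x^4 + x^3 + 1 and an arbitrary 6-bit information word (neither value comes from the text above):

    def mod2_remainder(value, gen):
        """Remainder of the GF(2) polynomial division value mod gen (ints as bit vectors)."""
        glen = gen.bit_length()
        while value.bit_length() >= glen:
            value ^= gen << (value.bit_length() - glen)  # cancel the leading term
        return value

    g = 0b11001                        # g(x) = x^4 + x^3 + 1, so n - k = 4
    i = 0b101101                       # i(x), an arbitrary information word

    shifted = i << 4                   # step 1: x^(n-k) * i(x)
    r = mod2_remainder(shifted, g)     # step 2: r(x)
    b = shifted ^ r                    # step 3: b(x), the transmitted code word

    print(f"b = {b:010b}, remainder at receiver = {mod2_remainder(b, g)}")    # 0: no error
    print(f"after a bit flip: remainder = {mod2_remainder(b ^ (1 << 7), g)}") # nonzero

Because b(x) is a multiple of g(x) by construction, the receiver's division leaves remainder 0 unless the word was corrupted in transit.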

Cyclic Redundancy Check is a hash function that lets you compute a value from some input which is guaranteed to always be the same for the same input. If the input somehow changes from the original, a different CRC checksum will (almost always) be generated. So if you have an input and a checksum, you can calculate a new checksum from the input and compare the two. If they are the same, the input almost certainly hasn't changed; since the checksum is short, distinct inputs can collide, but accidental changes rarely preserve it.

Related

1's complement checksum even bit errors

I am trying to wrap my head around 1's complement checksum error detection as used in UDP.
My understanding, using a simplified example of a UDP-like 1's complement checksum algorithm operating on 8-bit words (I know UDP uses 16-bit words):
Sum all 8-bit words of data, carrying the MSB rollover to the LSB.
Take the 1's complement of this sum, set the checksum, send the datagram.
The receiver adds, with carry rollover, all received 8-bit words of data in the incoming datagram, then adds the checksum.
If the sum = 0xFF, there are no errors. Else, an error occurred; throw away the packet.
It is obvious that this algorithm can detect 1-bit errors and by extension any odd-numbered bit errors. If just one bit in an 8-bit data word is corrupted, the sum + checksum will never equal 0xFF. A plain and simple example would be A = 00000000, B = 00000001, then ~(A + B) = 11111110. If A(receiver) = 00000001, B(receiver) = 00000001, the sum + checksum would be 0x01 (after the carry rollover), which != 0xFF.
My question is:
It's not clear to me whether this can detect 2-bit errors. My intuition says no, and a simple example is receiving A = 00000001, B = 00000000: the sum + checksum would be 0xFF, yet there are two total bit errors in A and B between sender and receiver. If the 2-bit error occurred in the same word, there's a chance it could be detected, but it doesn't seem guaranteed.
How robust is UDP error checking? Does it work for even numbers of bit errors?
Some even-bit changes can be detected, some can't.
Any error that changes the sum will be detected. So a 2-bit error that changes the sum will be detected, but a 2-bit error that does not change the sum will not be detected.
A 2-bit error in a single word (single byte in your simplified example) will change the value of that word, which will change the sum, and therefore will always be detected. Most 2-bit errors across different words will be detected, but a 2-bit error that changes the same bit in different directions (one 0->1, the other 1->0) in different words will not change the sum -- the change in value created by one of the changed bits will be cancelled out by the equal-but-opposite change in value created by the other changed bit -- and therefore that error will not be detected.
Because this checksum is simply an addition, it will also fail to detect the insertion or removal of words whose arithmetic value is zero (and since this is a one's complement computation, that means words whose content is all 0s or all 1s).
It will also fail to detect transpositions of words, (because a+b gives the same sum as b+a), or more generally it will fail to detect errors that add the same amount to one word as they subtract from the other (because a+b gives the same sum as (a+n)+(b-n), e.g. 3+3=4+2=5+1). You could consider the transposition and cancelling-error cases to be made up of multiple pairs of same-bit changes.
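The cases above are easy to reproduce with a short Python sketch of the question's simplified scheme (8-bit words, end-around carry); the word values are arbitrary illustrative choices:

    def ones_complement_sum(words):
        """Sum 8-bit words, folding the carry out of the MSB back into the LSB."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFF) + (total >> 8)  # end-around carry
        return total

    def checksum(words):
        return ~ones_complement_sum(words) & 0xFF  # 1's complement of the sum

    def passes(words, csum):
        return ones_complement_sum(words + [csum]) == 0xFF

    csum = checksum([0x00, 0x01])       # sender computes 0xFE
    print(passes([0x00, 0x01], csum))   # True:  no error
    print(passes([0x01, 0x01], csum))   # False: 1-bit error detected
    print(passes([0x01, 0x00], csum))   # True:  opposite flips cancel, undetected

The last line is exactly the cancelling 2-bit error described above: bit 0 flips 0->1 in one word and 1->0 in the other, so the sum is unchanged.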

LDPC: Hard-decision decoding algorithm

I am working with bit-flipping decoding (hard decision) for one bit flip.
I have followed this description for the H matrix below: "For the bit-flipping algorithm the messages passed along the Tanner graph edges are also binary: a bit node sends a message declaring if it is a one or a zero, and each check node sends a message to each connected bit node, declaring what value the bit is based on the information available to the check node.
The check node determines that its parity-check equation is satisfied if the modulo-2 sum of the incoming bit values is zero. If the majority of the messages received by a bit node are different from its received value, the bit node changes (flips) its current value. This process is repeated until all of the parity-check equations are satisfied."
Correct codeword = [10010101]
H matrix[4][8] = {{0,1,0,1,1,0,0,1},{1,1,1,0,0,1,0,0},{0,0,1,0,0,1,1,1}, {1,0,0,1,1,0,1,0}};
ReceivedCodeWord[8]={1,0,1,1,0,1,0,1}; //Error codeword
I need to get [10010101], but instead I am getting [10010001] for ReceivedCodeWord[8] = {1,0,1,1,0,1,0,1}.
For other possible ReceivedCodeWord values I am getting correct results, e.g.:
ReceivedCodeWord [00010101] gives the correct codeword [10010101]
ReceivedCodeWord [11010101] gives the correct codeword [10010101]
Doubt: why do I get [10010001] for ReceivedCodeWord {1,0,1,1,0,1,0,1}? It is totally wrong. Please explain.
Thanks
This is caused by the linear block code you used. Firstly, this code isn't an LDPC code, since the H matrix doesn't satisfy the row-column constraint (e.g., the 3rd column and the 6th column have 1s in the same two positions).
(1) When {1,0,1,1,0,1,0,1} is received, the check sums are {0,1,1,0}. If you use a multiple-bit-flipping decoding algorithm, which means more than one bit can be flipped during one iteration, the 3rd variable node and the 6th variable node will be flipped together, since both of them connect to the two unsatisfied check nodes; the result after the first iteration is {1,0,0,1,0,0,0,1}. During the second iteration the check sums are again {0,1,1,0}, so the 3rd and 6th variable nodes are flipped back, and this process recurs in all later iterations.
(2) The minimum Hamming distance of this code is 2, so it can detect 1 error and correct 0 errors (the correction radius is floor((2-1)/2) = 0). Even if you use a single-bit-flipping decoding algorithm, it has a 50% probability of decoding the received word {1,0,1,1,0,1,0,1} into another valid codeword, {1,0,1,1,0,0,0,1}.
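The oscillation described in (1) can be reproduced with a small Python sketch of parallel bit flipping. The flip rule assumed here, flip every bit whose unsatisfied checks outnumber its satisfied ones, is one common variant:

    H = [[0,1,0,1,1,0,0,1],
         [1,1,1,0,0,1,0,0],
         [0,0,1,0,0,1,1,1],
         [1,0,0,1,1,0,1,0]]
    r = [1,0,1,1,0,1,0,1]  # received word with an error

    for it in range(4):
        syndrome = [sum(row[j] * r[j] for j in range(8)) % 2 for row in H]
        if not any(syndrome):
            break  # all parity checks satisfied
        flips = [j for j in range(8)
                 if 2 * sum(syndrome[i] for i in range(4) if H[i][j])
                    > sum(H[i][j] for i in range(4))]  # majority unsatisfied
        for j in flips:
            r[j] ^= 1
        print(f"iteration {it}: syndrome={syndrome}, flipped={flips}, r={r}")

Every iteration reports syndrome [0, 1, 1, 0] and flips the 3rd and 6th bits (indices 2 and 5), so the decoder never converges, exactly as described.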

Calculating CRC / Checksum of hex stream (sniffed)

For a few days I have been trying to work out the type of the CRC for the following hex stream (sniffed with Wireshark):
The hex data I sniffed:
0000001ec001075465737431323308557365726e616d650850617373776f7264d224
This should be the DATA in HEX:
0000001ec001075465737431323308557365726e616d650850617373776f7264
So the last 4 hex digits are the checksum, in this case d224.
I have tried many code snippets (PHP, Java) and some online checksum calculation sites,
e.g.:
http://www.scadacore.com/field-applications/programming-calculators/online-checksum-calculator/
But I don't get the correct CRC value.
Thanks!
Update 1
Here are more hex streams with CRC included (the last 4 digits):
0000001dc001045465737409557365726e616d65310950617373776f726431cc96
0000001dc001045465737409557365726e616d65320950617373776f72643289d9
0000001dc001045465737409557365726e616d65330950617373776f726433b51c
0000001dc001045465737409557365726e616d65340950617373776f7264340347
0000001dc001045465737409557365726e616d65350950617373776f7264353f82
It appears to be the ARC CRC: polynomial 0x8005, reflected, zero initial value and no final XOR, provided I discard the initial 0000001d on each message and take the CRC at the end to be stored in the stream in little-endian byte order.
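One way to test that hypothesis (this parameter set is commonly called CRC-16/ARC) is the standard reflected CRC loop in Python; 0x8005 bit-reversed is 0xA001:

    def crc16_arc(data: bytes) -> int:
        """CRC-16/ARC: poly 0x8005 (reflected 0xA001), init 0, no final XOR."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc

    frame = bytes.fromhex(
        "0000001dc001045465737409557365726e616d65310950617373776f726431cc96")
    payload = frame[4:-2]  # drop the 4-byte prefix and the 2-byte trailing CRC
    expected = int.from_bytes(frame[-2:], "little")
    print(hex(crc16_arc(payload)), hex(expected), crc16_arc(payload) == expected)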

Cyclic redundancy check in the DLL (data link layer)

A bit stream 11100110 is to be transmitted using the CRC method. The generator polynomial is x^4 + x^3 + 1.
1. What are the actual bits transmitted?
2. Suppose the third bit from the left is inverted during transmission. How is the error detected?
3. How is the generator polynomial already known to the sender side as well as the receiver side? Please make this clear.
Solution:
Here, the FCS will be 0110, since n = 4 (the degree of the generator polynomial).
So the actual bits transmitted are 11100110 0110.
I am confused by questions 2 and 3; please answer those two.
Thank you!
If you know how to generate the 0110, then invert the bit and generate a new CRC. You will see that it's different. On the other end, when you compute the CRC of the eight bits sent, it will not match the four-bit CRC sent.
The two sides agree a priori on a protocol that includes the definition of the CRC to be used.
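For question 2, the following Python sketch works through this exact example: it reproduces the FCS 0110 and shows that inverting the third bit from the left leaves a nonzero remainder at the receiver:

    def mod2_div(bits, gen):
        """Remainder of GF(2) polynomial division (bits and gen as 0/1 lists)."""
        bits = bits[:]
        for i in range(len(bits) - len(gen) + 1):
            if bits[i]:
                for j, g in enumerate(gen):
                    bits[i + j] ^= g
        return bits[-(len(gen) - 1):]

    gen = [1, 1, 0, 0, 1]                    # x^4 + x^3 + 1
    msg = [1, 1, 1, 0, 0, 1, 1, 0]
    fcs = mod2_div(msg + [0, 0, 0, 0], gen)
    print(fcs)                               # [0, 1, 1, 0]

    sent = msg + fcs                         # 111001100110
    print(mod2_div(sent, gen))               # [0, 0, 0, 0]: accepted

    corrupted = sent[:]
    corrupted[2] ^= 1                        # invert the 3rd bit from the left
    print(mod2_div(corrupted, gen))          # [0, 1, 0, 1]: error detected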

Networking and CRC confusion

I am currently working on a project that requires data to be sent from A to B. Once B receives the data, it needs to be able to determine if an error occurred during transmission.
I have read about CRC and have decided that CRC16 is right for my needs; I can chop the data into chunks and send a chunk at a time.
However, I am confused about how B will be able to tell if an error occurred. My initial thought was to have A generate a CRC and then send the data to B. Once B receives the data, generate the CRC and send it back to A. If the CRCs match, the transmission was successful. BUT - what if the transmission of the CRC from B to A errors? It seems redundant to have the CRC sent back because it can become corrupted in the same way that the data can be.
Am I missing something or over-complicating the scenario?
Any thoughts would be appreciated.
Thanks,
P
You usually send the checksum with the data. Then you calculate the checksum out of the data on the receiving end, and compare it with the checksum that came along with it. If they don't match, either the data or the checksum was corrupted (unless you're unlucky enough to get a collision) - in which case you should ask for a retransmission.
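In code, the flow is one-directional; B never sends the checksum back. A minimal sketch of the send/verify round trip (using the standard library's CRC-32 for brevity; the logic is identical for CRC16):

    import zlib

    def make_frame(data: bytes) -> bytes:
        """Sender A: append the CRC of the data to the chunk."""
        return data + zlib.crc32(data).to_bytes(4, "big")

    def check_frame(frame: bytes) -> bool:
        """Receiver B: recompute the CRC locally and compare with the trailer."""
        data, trailer = frame[:-4], frame[-4:]
        return zlib.crc32(data) == int.from_bytes(trailer, "big")

    frame = make_frame(b"chunk of data")
    print(check_frame(frame))             # True: accept the chunk

    corrupted = bytearray(frame)
    corrupted[3] ^= 0x04                  # a bit error in transit
    print(check_frame(bytes(corrupted)))  # False: ask A to retransmit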
CRC is error detection and, note, it can only detect a limited class of errors; however, you can calculate the probability of an undetected error (a CRC16 collision), and it is small enough for most practical purposes.
CRC works by polynomial division. The divisor is a fixed generator polynomial, of degree 16 for CRC16, represented in binary by its coefficients; for example, x^3 + 0*x^2 + x + 1 corresponds to the bits 1011, a polynomial of degree 3. You divide your data chunk by the generator polynomial, and the remainder (of degree at most 15 for CRC16) is the CRC value, which you append to the data. When B performs the same division on the received chunk (data together with the remainder), the division should come out even, leaving remainder 0. If it does not, you have a transmission error.
This covers corruption of the CRC value itself as well as of the data: any corruption that changes the remainder is detected, and an error slips through only in the case of a collision. If the CRC check does not pass, simply send a retransmission request to A; otherwise, continue processing as normal. If a collision occurred, there is no way to tell from the checksum alone that the data is corrupted; you would have to inspect the received data manually, or send several, hopefully error-free, copies (note that this method incurs a lot of overhead and again gives only probabilistic protection).
