validating CRC calculation - networking

I am going over the following past paper question:
Consider the 4-bit generator, G = 1001, and suppose that the data portion
of a bit stream to be transmitted prior to Cyclic Redundancy Check (CRC)
calculation is 11001001. Show the actual bit stream transmitted. Suppose
the leftmost bit in the transmitted bit stream is inverted due to noise on the
transmission link. Show that this error is detected at the receiver’s end.
I have calculated the CRC to be appended to the end of the transmission using XOR as follows:
11001001000
1001
-----------
01011001000
 1001
-----------
00010001000
   1001
-----------
00000011000
      1001
-----------
00000001010
       1001
-----------
00000000011
So R = 011 is appended to the data, and the transmitted bit stream is 11001001011.
For the second part of the question I do the same thing, except that due to the error the leftmost bit is now 0:
01001001011
1001
----
1101
1001
----
01001
 1001
 ----
0000001011
      1001
      ----
      0010 therefore there is an error
Where do I go from here? If it is all zeroes, do I stop? But that would mean there is no error...

You keep dividing until the whole received stream is consumed; you stop when only the last three bits (the remainder) are left, not when you hit zeroes. A remainder of all zeroes means no error was detected; a nonzero remainder means an error. Since the leftmost bit is now 0, the first XOR lines up under the first 1:

01001001011
 1001
-----------
00000001011
       1001
-----------
00000000010 <- remainder 010 is nonzero: error detected!
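If you want to sanity-check divisions like this, here is a minimal Python sketch of the same modulo-2 long division (the crc_remainder helper is my own name, not a standard function). It reproduces R = 011 and the nonzero remainder at the receiver:

```python
def crc_remainder(bits: str, generator: str) -> str:
    """Remainder of a bit string divided by the generator, via XOR long division."""
    n = len(generator) - 1            # number of CRC bits: 3 for G = 1001
    work = list(bits)
    for i in range(len(work) - n):
        if work[i] == '1':            # only XOR when the current leading bit is 1
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return ''.join(work[-n:])         # the last n bits are the remainder

G = "1001"
data = "11001001"
r = crc_remainder(data + "000", G)    # append n zeros first -> '011'
sent = data + r                       # '11001001011'
flipped = ('0' if sent[0] == '1' else '1') + sent[1:]   # invert the leftmost bit
print(r, crc_remainder(sent, G), crc_remainder(flipped, G))
# prints: 011 000 010 -- an error-free stream leaves remainder 000; the flipped one does not
```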

Related

How can I encode 0000 to 11110 in 4B/5B encoding scheme

From the 4B/5B encoding scheme, dataword 0000 is encoded to the codeword 11110; similarly, 0001 is encoded to 01001, and so on.
Here the result of an XOR operation between two codewords will be another valid codeword.
For example, the XOR of 11110 and 01001 is another codeword, 10111, whose dataword is 1011. Here I have no problem.
Also, to avoid a DC component, the NRZI line coding scheme is used. As a result there are never more than three consecutive zeros in the encoded output.
No codeword has more than one leading zero or more than two trailing zeros. We don't have to worry about the number of ones in the NRZI coding scheme.
But how can I encode 0000 to 11110, or 0001 to 01001? Which algorithm should I apply for this encoding scheme?
I have searched Google and studied books too, but everywhere they say the same thing, and I did not get my answer.
Thanks in advance.
Decimal Representation
To understand this mechanism properly we should consider the decimal value of every codeword. Observe the table carefully: I have converted all the binary values of the table to decimal form.
Now, to avoid a DC component during transmission, we should consider only the codewords which don't have more than one leading zero or more than two trailing zeros.
So we find that every two consecutive datawords are assigned to two consecutive codewords, like this:
(2,3) to (20,21)
(4,5) to (10,11)
(6,7) to (14,15)
(8,9) to (18,19)
(10,11) to (22,23)
(12,13) to (26,27)
(14,15) to (28,29)
Exception
(0,1) to (30,9)
1 is assigned to 9 because all codewords from 0 to 8 (inclusive) are invalid due to having too many zeros, so the first valid codeword, 9, is assigned to 1.
If all valid codewords were assigned to consecutive datawords, then changing only one bit (a single-bit error) during transmission could convert a codeword into the next or previous codeword, and that error would remain undetected.
We know that in block coding, if a valid codeword is converted to another valid codeword during transmission as a result of an error, the error remains undetected; this is a limitation of block coding. So, to avoid this, the valid codewords are not all assigned to datawords consecutively.
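To answer the "which algorithm" part directly: there is no formula. The 4B/5B mapping is simply a fixed 16-entry lookup table chosen under the constraints described above. A minimal Python sketch using the standard table (the encode_4b5b name is my own):

```python
# Standard 4B/5B table: codewords have at most one leading zero
# and at most two trailing zeros, so the line never carries more
# than three consecutive zeros, even across codeword boundaries.
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits: str) -> str:
    """Encode a bit string whose length is a multiple of 4."""
    return "".join(FOUR_B_FIVE_B[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(encode_4b5b("00000001"))            # 1111001001
print(int(FOUR_B_FIVE_B["0001"], 2))      # 9 -- matches the decimal mapping above
```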

cyclic redundancy check in DLL

A bit stream 11100110 is to be transmitted using the CRC method. The generator polynomial is x^4 + x^3 + 1.
What is the actual bit stream transmitted?
Suppose the third bit from the left is inverted during transmission. How is the error detected?
How is the generator polynomial already known to the sender side as well as the receiver side? Please make this clear.
Solution :
Here, the FCS will be 0110 since n = 4.
So the actual bit stream transmitted is >> 11100110 0110
I am confused about problems 2 and 3. Please answer those two questions.
Thank You!
If you know how to generate the 0110, then invert the bit and generate a new CRC. You will see that it's different. On the other end, when you compute the CRC of the eight bits sent, it will not match the four-bit CRC sent.
The two sides agree a priori on a protocol that includes the definition of the CRC to be used.
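To see both parts concretely, here is a small Python check using the same modulo-2 long division as in the question above (crc_remainder is my own helper, redefined here so the snippet stands alone):

```python
def crc_remainder(bits: str, generator: str) -> str:
    """XOR long division; returns the last len(generator)-1 bits as the remainder."""
    n = len(generator) - 1
    work = list(bits)
    for i in range(len(work) - n):
        if work[i] == '1':
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return ''.join(work[-n:])

G = "11001"                                  # x^4 + x^3 + 1
data = "11100110"
fcs = crc_remainder(data + "0000", G)        # '0110'
frame = data + fcs                           # '111001100110'
bad = frame[:2] + ('0' if frame[2] == '1' else '1') + frame[3:]  # invert the 3rd bit
print(fcs, crc_remainder(frame, G), crc_remainder(bad, G))
# prints: 0110 0000 0101 -- the corrupted frame leaves a nonzero remainder
```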

Networking and CRC confusion

I am currently working on a project that requires data to be sent from A to B. Once B receives the data, it needs to be able to determine if an error occurred during transmission.
I have read about CRC and have decided that CRC16 is right for my needs; I can chop the data into chunks and send a chunk at a time.
However, I am confused about how B will be able to tell if an error occurred. My initial thought was to have A generate a CRC and then send the data to B. Once B receives the data, it generates the CRC and sends it back to A. If the CRCs match, the transmission was successful. BUT - what if the transmission of the CRC from B to A gets corrupted? It seems redundant to have the CRC sent back, because it can become corrupted in the same way that the data can be.
Am I missing something or over-complicating the scenario?
Any thoughts would be appreciated.
Thanks,
P
You usually send the checksum with the data. Then you calculate the checksum out of the data on the receiving end, and compare it with the checksum that came along with it. If they don't match, either the data or the checksum was corrupted (unless you're unlucky enough to get a collision) - in which case you should ask for a retransmission.
CRC is error detection and, note, your code can only detect a limited class of errors. However, you can calculate the probability of a CRC16 collision (it is relatively small for most practical purposes).
Now, how CRC works is by polynomial division. Your CRC value is the remainder of that division, a polynomial of degree at most 15 for CRC16. The polynomial is represented in binary by its coefficients. For example, x^3 + (0*x^2) + x + 1 = 1011 is a polynomial of degree 3. You divide your data chunk by the generator polynomial; the remainder is the CRC value. Thus, when B performs the same division on the received chunk (with the remainder appended), the division should come out even, leaving a remainder of 0. If it does not, you have a transmission error.
Note that this covers corruption of the CRC value itself: if bits are corrupted anywhere in the frame (and no collision occurs), the CRC check will detect the failure. If the CRC check does not pass, simply send a retransmission request to A; otherwise, continue processing as normal. If a collision occurred, there is no way to tell from the CRC alone that the data is corrupted; you would have to verify the received data some other way (or send several, hopefully error-free, copies; note that this incurs a lot of overhead, and the redundancy again only works to finite precision).
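As a sketch of that send/check flow (the frame layout and function names here are my own, not any particular library's API), using a hand-rolled CRC-16/CCITT-FALSE:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def make_frame(payload: bytes) -> bytes:
    """Sender A: append the 2-byte CRC to the payload."""
    return payload + crc16_ccitt(payload).to_bytes(2, "big")

def check_frame(frame: bytes) -> bool:
    """Receiver B: recompute the CRC over the payload and compare."""
    payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return crc16_ccitt(payload) == received

frame = make_frame(b"hello")
print(check_frame(frame))                          # True  -> accept
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
print(check_frame(corrupted))                      # False -> request retransmission
```

Note that B never sends the CRC back to A; it only replies with an ACK or a retransmission request.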

Simple SDLC CRC calculation not giving the correct value

I am trying to figure out how to calculate the CRC for very simple SDLC frames.
Using an MLT I am capturing the stream, and I see some simple frames being sent out, like 0x3073F9E3 and 0x3011EDE3.
From my understanding, the F9E3 and EDE3 are the 2-byte checksums of the 3073 and 3011, since that is all that was in each frame.
Using numerous CRC calculators and calculations, I have been able to get the first byte of the checksum, but not the last byte (the F9 and the ED).
Using this calculator (http://www.zorc.breitbandkatze.de/crc.html):
Select CRC-CCITT
Change Final XOR Value to FFFF
Check "reverse data bytes" and "reverse CRC result before Final XOR"
Then type the input: %30%11
This gives the output B8ED, so the last byte is the ED.
Any ideas?
You are getting the correct CRC-16s (F9 F8 and ED B8). I don't know why your last byte is E3 in both cases; that is perhaps a clue that the packets are not being disassembled correctly.
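For what it's worth, the parameter set that reproduces these values is CRC-16/X.25 (the CCITT polynomial bit-reflected to 0x8408, initial value 0xFFFF, final XOR 0xFFFF), which is the FCS commonly used for HDLC/SDLC framing. A minimal Python sketch that reproduces the B8ED above:

```python
def crc16_x25(data: bytes) -> int:
    """CRC-16/X.25: reflected polynomial 0x8408, init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

fcs = crc16_x25(bytes([0x30, 0x11]))
print(f"{fcs:04X}")   # B8ED -- sent on the wire least significant byte first: ED B8
```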

parity bit question

I have been reading about the "parity bit" method and how it is used to check if the "packet" is received correctly.
So, using odd parity (example from Wikipedia):
A wants to transmit: 1001
A computes parity bit value: ~(1^0^0^1) = 1
A adds parity bit and sends: 10011
B receives: 10011
B computes overall parity: 1^0^0^1^1 = 1
B reports correct transmission after observing expected odd result.
What if, during the transmission, "11001" is received instead of "10011"? How will the parity check catch that, since it checks only the number of 1's?
Or is it impossible for bits to change during transmission like I described? Thanks.
The parity bit is the simplest error detection technique. It only works if an odd number of bits (including the parity bit) are transmitted incorrectly. So if two bits are corrupted, as in your example, the error will not be detected.
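A tiny Python sketch of odd parity makes the failure mode visible (the helper names are my own):

```python
def odd_parity_bit(data: str) -> str:
    """Choose the bit that makes the total number of 1's odd."""
    return '1' if data.count('1') % 2 == 0 else '0'

def parity_ok(frame: str) -> bool:
    """Receiver check: the whole frame must contain an odd number of 1's."""
    return frame.count('1') % 2 == 1

sent = "1001" + odd_parity_bit("1001")   # '10011'
print(parity_ok(sent))      # True  -> accepted
print(parity_ok("11001"))   # True  -> two bits flipped, yet still accepted (undetected!)
print(parity_ok("11011"))   # False -> a single-bit flip is detected
```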
