How do I run error detection on this binary message using polynomial/CRC? - networking

"Show the new message to be sent by a sender after performing the CRC calculation >using the generator X3+1 on the message: 101110110:"
I have the following done but I am not sure if it is correct, some help would be appreciated:
I worked out the generator using the following steps:
Step one:
x³ + 1: the coefficient of x³ is 1; there is no x² term, so that coefficient is 0; there is no x¹ term, so that coefficient is 0; and the constant term (x⁰) is 1.
x³ + 1 = 1001
generator = 1001
Step two:
I divide the message 101110110 by 1001 and I get the remainder 0101.
The new message is 101110101 ??
Is this correct and which part is the CRC?

Note: because the polynomial you are using is of degree 3, the remainder has only 3 bits (the leading 0 of your 0101 can be dropped). So, the message you have to transmit is the original with the 3-bit remainder appended. In this case, the transmitted message is 101110110101, where the last 3 bits are the remainder you found (correctly!).
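For reference, the division can be checked mechanically. A minimal Python sketch (the helper name crc_remainder is mine, not from the question):

def crc_remainder(message, generator):
    # binary long division: XOR the generator in wherever the leading bit is 1
    r = len(generator) - 1                       # degree of the generator polynomial
    bits = [int(b) for b in message] + [0] * r   # append r zero bits to the message
    gen = [int(b) for b in generator]
    for i in range(len(message)):
        if bits[i]:
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return ''.join(str(b) for b in bits[-r:])    # the last r bits are the remainder

rem = crc_remainder("101110110", "1001")
print(rem)                  # 101
print("101110110" + rem)    # 101110110101, the message to transmit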

Related

How to do DED (double error detection) in Hamming code

I'm trying to create a Hamming code with 1-bit error correction and 2-bit error detection. However, I've encountered a problem with the algorithm. Without the 2-bit error detection part, 1-bit error correction works fine. However, after adding 2-bit error detection, for some cases it can neither correct a 1-bit error nor detect a 2-bit error.
My 2-bit error detection logic is as follows:
(say it is a (7,4) Hamming code, and the syndrome from the 3 parity bits is called c)
1) xor every bit from the receiver side (called it as X)
2) If X == 0 and c == 0 then it is correct
3) If X ~= 0 then it has 1 error
4) If X == 0 and c ~= 0 then 2 bit error occurs.
I followed this: https://www.tutorialspoint.com/hamming-code-for-single-error-correction-double-error-detection, but it doesn't seem to be working. Can someone help me out, or give me an idea of the correct logic for 2-bit error detection?
Thanks!
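In case it helps, here is a minimal Python sketch of the standard SEC-DED decision rule that the four steps above describe (not the poster's code; it assumes an extended (8,4) code with the overall parity bit stored last and even parity throughout):

def hamming_syndrome(bits):
    # syndrome of the (7,4) part: XOR together the 1-based positions of all set bits
    s = 0
    for pos in range(1, 8):
        if bits[pos - 1]:
            s ^= pos
    return s

def decode_secded(received):
    # received: 8 bits = Hamming(7,4) codeword plus one overall parity bit
    overall = 0                       # X in the question: parity over every bit
    for b in received:
        overall ^= b
    s = hamming_syndrome(received)    # c in the question
    if overall == 0 and s == 0:
        return "no error"
    if overall == 1:                  # odd overall parity -> exactly one bit flipped
        flip = (s - 1) if s else 7    # s == 0 means the parity bit itself flipped
        received[flip] ^= 1
        return "corrected 1 error"
    return "2 errors detected (uncorrectable)"

One common bug is checking the syndrome before the overall parity: in a 2-error case the syndrome is nonzero, but no correction must be attempted.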

Simple subtraction in Verilog

I've been working on a hex calculator for a while, but seem to be stuck on the subtraction portion, particularly when B>A. I'm trying to simply subtract two positive integers and display the result. It works fine for A>B and A=B. So far I'm able to use two 7-segment displays to show the integers to be subtracted, and I get the proper difference as long as A>=B.
When B>A I see a pattern that I'm not able to debug because of my limited knowledge of Verilog case/if-else statements. Forgive me if I'm not explaining this the best way, but what I'm observing is that once the first number, A, "reaches" 0 (after being subtracted from), it wraps around to F. The remainder of B is then subtracted from F rather than from 0.
For example: If A=1, B=3
A - B =
1 - 1 = 0
0 - 1 = F
F - 1 = E
Another example could be 4-8=C
Below are the important snippets of code I've put together thus far.
First, my subtraction statement
always @*
begin
    Cout1 = 7'b1000000; //0
    case(PrintDifference[3:0])
        4'b0000 : Cout0 = 7'b1000000; //0
        4'b0001 : Cout0 = 7'b1111001; //1
        ...
        4'b1110 : Cout0 = 7'b0000110; //E
        4'b1111 : Cout0 = 7'b0001110; //F
    endcase
end
My subtraction is pretty straightforward
output [4:0]Difference;
output [4:0] PrintDifference;
assign PrintDifference = A-B;
I was thinking I could just do something like
if A >= B, Difference = A - B
else, Difference = B - A
Thank you everyone in advance!
This is expected behaviour of two's complement addition/subtraction, which I would recommend reading up on since it is so essential.
The result can be converted back to an unsigned form by inverting all the bits and adding one. Checking the most significant bit tells you whether the number is negative.
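In Python rather than Verilog, a quick sketch of that suggestion for the 5-bit result (the function name is mine):

def signed_difference(a, b, width=5):
    mask = (1 << width) - 1
    raw = (a - b) & mask               # what 'assign PrintDifference = A-B' produces
    if raw >> (width - 1):             # most significant bit set -> result is negative
        return "-", (~raw + 1) & mask  # invert all the bits and add one for the magnitude
    return "+", raw

print(signed_difference(1, 3))         # ('-', 2)
print(signed_difference(4, 8))         # ('-', 4) instead of the wrapped digit C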

How to detect errors for Reed-Solomon Code?

I am using (7,5) Reed-Solomon Error Correction Code.
I think I can decode it to either "correct 1 error" or "find the 2 error positions".
However, there is a problem: my code cannot find the positions of 2 errors.
For example, the message is 1 3 5 2 1 and RS parity is 0 5. So RS code is 0513521.
Then two errors occur in the parity part, so the code is changed to 1113521.
I want to find these two errors, but my decoder said the answer is 1113621.
What should I do?
RS(7,5) can correct 1 error or detect up to 2 errors, but not determine the position of the 2 errors. In a two error case, there are multiple combinations of 2 error values and 2 error locations that produce the same 2 syndromes. Using your example, the two error cases 1113521 (errors in locations 0 and 1) and 0463521 (errors in locations 1 and 2) produce the same result: syndrome_1 = 4 and syndrome_2 = 6, and there's no way to determine where the errors are, only that they exist.
As commented, if a 1-error correction is attempted in a 2-error case, it's possible for the decoder to mis-correct and create a third error in order to produce a "valid" codeword; in this case it created 1113621. I got the same result with a test program I have.
The question is missing some information; based on the example, it's using GF(8) = GF(2^3) with modulus x^3 + x^2 + 1 (hex d), and the generator polynomial is (x-2)(x-4) = x^2 + 6x + 5. Note that for GF(2^m), addition and subtraction are both XOR. The data is displayed least significant term first, so 0513521 = 0 + 5x + 1x^2 + 3x^3 + 5x^4 + 2x^5 + 1x^6.
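A short Python sketch that reproduces those syndromes in the field described above (GF(8) with modulus hex d, codeword evaluated at the generator roots 2 and 4; the helper names are mine):

P = 0b1101                            # x^3 + x^2 + 1

def gf8_mul(a, b):
    # shift-and-add multiply, reducing modulo P whenever a overflows 3 bits
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= P
    return r

def syndrome(code, root):
    # evaluate the codeword polynomial at 'root'; digits are least significant term first
    s, x = 0, 1
    for c in code:
        s ^= gf8_mul(c, x)
        x = gf8_mul(x, root)
    return s

for word in ("0513521", "1113521", "0463521"):
    digits = [int(d) for d in word]
    print(word, syndrome(digits, 2), syndrome(digits, 4))
# 0513521 -> 0 0 (valid); both corrupted words -> 4 6, identical syndromes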

On the nature of Information and Entropy definitions

I was looking at Shannon's definitions of intrinsic information and entropy (of a "message").
Honestly, I fail to intuitively grasp why Shannon defined those two in terms of the logarithm (apart from the logarithm's property of splitting multiplication into sums, which is indeed desirable).
Can anyone help me to shed some light on this?
Thanks.
I believe that Shannon was working at Bell Labs when he developed the idea of Shannon entropy: the goal of his research was to find the best way to encode information with bits (0s and 1s).
This is the reason for the log2: it has to do with the binary encoding of a message. If numbers that can take 8 different values have to be transmitted over a telecommunication line, signals of length 3 bits (log2(8) = 3) are needed to transmit them.
Shannon entropy is the minimum average number of bits you need to encode each character of a message (for any message written in any alphabet).
Let us take an example. We have the following message to encode with bits:
"0112003333".
The characters of the message are in {0,1,2,3}, so we need at most log2(4) = 2 bits to encode each character of this message. For example, we could encode the characters as follows:
0 would be coded by 00
1 would be coded by 01
2 would be coded by 10
3 would be coded by 11
The message would then be encoded as: "00010110000011111111"
However, we can do better if we give the most frequent characters shorter codes, as long as no codeword is a prefix of another (otherwise the message could not be decoded unambiguously):
3 would be coded by 0
0 would be coded by 10
1 would be coded by 110
2 would be coded by 111
The message would then be encoded as: "1011011011110100000" (19 bits instead of 20).
So the entropy of "0112003333" is between 1 and 2 bits per character (it is 1.85, to be more precise).
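For reference, that value can be computed directly from the character frequencies with the usual formula H = -sum(p_i * log2(p_i)); a few lines of Python:

from collections import Counter
from math import log2

def shannon_entropy(message):
    n = len(message)
    return -sum(c / n * log2(c / n) for c in Counter(message).values())

print(shannon_entropy("0112003333"))   # ~1.85 bits per character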

Why we need to add 1 while doing 2's complement

The 2's complement of a number represented with N bits is 2^N - number.
For example: if the number is 7 (0111) and I'm representing it using 4 bits, then its 2's complement is (2^N - number), i.e. (2^4 - 7) = 9 (1001).
7==> 0111
1's complement of 7 ==> 1000
  1000
+    1
------
  1001 =====> (9)
While calculating the 2's complement of a number, we follow these steps:
1. Take the one's complement of the number.
2. Add one to the result of step 1.
I understand that we take the one's complement because we are doing a negation operation. But why do we add the 1?
This might be a silly question, but I'm having a hard time understanding the logic. To explain with the above example (for the number 7): we take the one's complement and get -7, and then add +1, so -7 + 1 = -6, yet we still get the correct answer, i.e. +9.
Your error is in "we take the one's complement and get -7". To see why this is wrong, take the one's complement of 7 and add 7 to it. If it were -7, you should get zero, because -7 + 7 = 0. You won't.
The one's complement of 7 was 1000. Add 7 to that, and you get 1111. Definitely not zero. You need to add one more to it to get zero!
The negative of a number is the number you need to add to it to get zero.
If you add 1 to ...11111, you get zero. Thus -1 is represented as all 1 bits.
If you add a number, say x, to its 1's complement ~x, you get all 1 bits.
Thus:
~x + x = -1
Add 1 to both sides:
~x + x + 1 = 0
Subtract x from both sides:
~x + 1 = -x
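You can check this identity mechanically; in Python, whose integers behave like arbitrarily wide two's complement values:

for x in range(-100, 100):
    assert ~x + x == -1    # x plus its bit flip is all 1 bits, i.e. -1
    assert ~x + 1 == -x    # so flipping the bits and adding 1 negates x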
The +1 is added so that the carry over in the technique is taken care of.
Take the 7 and -7 example.
If you represent 7 as 00000111
In order to find -7:
Invert all bits and add one
11111000 -> 11111001
Now you can add following standard math rules:
00000111
+ 11111001
-----------
00000000
For the computer this operation is relatively easy, as it involves only adding bit by bit and carrying; the final carry out of the top bit is simply discarded, which is why the sum above shows 00000000.
If you instead represented -7 as 10000111 (sign and magnitude), the addition wouldn't work out:
00000111
+ 10000111
-----------
10001110 (-14)
To add them correctly, you would need more complex rules, like examining the sign bit first and transforming the values.
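Here is the same arithmetic sketched in Python, masking to 8 bits so the carry out of the top bit is discarded just as the hardware does:

MASK = 0xFF                                   # keep 8 bits, drop the carry out
seven = 0b00000111
minus_seven = (~seven + 1) & MASK             # invert all bits, add one -> 0b11111001
print(bin((seven + minus_seven) & MASK))      # 0b0: 7 + (-7) == 0, as required
sign_magnitude = 0b10000111                   # the alternative representation of -7
print(bin((seven + sign_magnitude) & MASK))   # 0b10001110: not zero, so plain addition fails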
A more detailed explanation can be found here.
Short answer: If you don't add 1 then you have two different representations of the number 0.
Longer answer: In one's complement
the values from 0000 to 0111 represent the numbers from 0 to 7
the values from 1111 down to 1000 represent the numbers from -0 to -7
since their inverses are 0000 and 0111.
There is the problem: now you have 2 different ways of writing the same number; both 0000 and 1111 represent 0.
If you add 1 to these inverses, they become 0001 and 1000 and represent the numbers from -1 to -8, so you avoid the duplicates.
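A small enumeration in Python makes the duplicate zero visible (the two helper functions are mine, just interpreting the same 4-bit patterns both ways):

def ones_value(bits):                 # one's complement reading of a 4-bit pattern
    return bits if bits < 8 else -((~bits) & 0xF)

def twos_value(bits):                 # two's complement reading
    return bits if bits < 8 else bits - 16

for bits in (0b0000, 0b0111, 0b1000, 0b1111):
    print(f"{bits:04b}  ones={ones_value(bits):3d}  twos={twos_value(bits):3d}")
# 0000 and 1111 both read as 0 in one's complement; in two's complement 1111 is -1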
I'm going to directly answer what the title is asking (apologies that the details are aimed at understanding where "flip the bits and add one" comes from, rather than being fully general).
First, let's motivate two's complement by recalling that we can carry out standard (elementary school) arithmetic with it, i.e. adding the digits column by column and carrying over. Ease of computation is what motivates this representation: we only need one piece of hardware for addition instead of a second one for subtraction (note that we do add and subtract differently in elementary school arithmetic).
Now recall the meaning of each of the digits in two's complement, and consider some binary numbers in this form as an example (slides borrowed from MIT's 6.004 course):
Now notice that arithmetic works as normal here and the sign is included inside the binary number in two's complement itself. In particular notice that:
111...1 + 000...1 = 000...0
i.e.
-1 + 1 = 0
Using this fact let's try to derive what the two complement representation for -A should be. So the problem to solve is:
Q: Given the two's complement representation for A what is the two's complement's representation for -A?
To do this let's do some algebra using values we know:
A + (-A) = 0 = 1 + (-1) = 111...1 + 000...1 = 000...0
now let's make -A the subject, expressed in terms of numbers in two's complement:
-A = 1 + (-1 - A) = 000...1 + (111...1 - A)
where A is in two's complement. So what we need to compute is the subtraction of A from -1 in two's complement format. For that, notice how numbers are represented as a linear combination of their base values (i.e. powers 2^i):
1·(-2^(N-1)) + 1·2^(N-2) + ... + 1·2^0               = -1
a_(N-1)·(-2^(N-1)) + a_(N-2)·2^(N-2) + ... + a_0·2^0 = A
----------------------------------------------------- (subtract them)
(1-a_(N-1))·(-2^(N-1)) + (1-a_(N-2))·2^(N-2) + ... + (1-a_0)·2^0 = -1 - A
which essentially means we subtract each digit of A from the corresponding digit of -1 (which is always 1). Since each digit is 0 or 1, computing 1 - a_i simply flips the bit, which results in the following:
-A = 1 + (-1 - A) = 1 + ~ A
where ~ is the bit flip. This is why you need to flip the bits and add 1.
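The derivation also checks out at a fixed width, where -A should come out as 2^N - A; a quick sketch for N = 4:

N = 4
MASK = (1 << N) - 1
for a in range(1, 1 << N):
    assert ((~a + 1) & MASK) == (2**N - a) & MASK   # flip the bits, add 1 == 2^N - A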
Remark:
I think a comment that was helpful to me is that the complement is similar to the additive inverse, but instead of summing to 0 it sums to 2^N (by definition), e.g. with 3 bits, for the number A we want A + twos_complement(A) = 2^N, so 010 + 110 = 1000 = 8, which is 2^3. At least that clarifies what the word "complement" is supposed to mean here, as it's not just the inversion of the meaning of 0 and 1.
If you've forgotten what two's complement is perhaps this will be helpful: What is “2's Complement”?
Cornell's answer that I hope to read at some point: https://www.cs.cornell.edu/~tomf/notes/cps104/twoscomp.html#whyworks
