Finding the checksum - networking

We have three 16-bit words:
0110011001100000
0101010101010101
1000111100001100
sum of the first two
0110011001100000
0101010101010101
-----------------
1011101110110101
adding the sum to the third
1000111100001100
1011101110110101
-------------------
10100101011000001
but the book says the result for that step is:
0100101011000010
It says the last addition had an overflow which was wrapped around, but I don't understand that.
After that it takes the 1's complement:
1011010100111101
which becomes the checksum.
I don't understand the "adding the sum to the third" step. Can anyone explain?

Here's adding the sum to the third value.
Note the indentation. The overflow bit is the leftmost bit.
1000111100001100
1011101110110101
-----------------
10100101011000001
^
Add the overflow to the truncated result:
0100101011000001
0000000000000001
-----------------
0100101011000010
Which is the desired result for that step.
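
If it helps to see the whole procedure in one place, here is a short Python sketch of the wrap-around-and-complement steps, using the three words from the question (Python is just my choice for illustration):

    # One's-complement (Internet) checksum over 16-bit words.
    words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]

    total = 0
    for w in words:
        total += w
        while total > 0xFFFF:                         # overflow past 16 bits?
            total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around

    checksum = ~total & 0xFFFF       # take the 1's complement of the sum
    print(format(checksum, '016b'))  # -> 1011010100111101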

What does this logic equation of overflow mean?

Here is a paragraph from the textbook:
When two's complement numbers are added or subtracted...Overflow is defined as the situation in which the result of an arithmetic operation lies outside of the number range that can be represented by the number of bits in the word...
The logic function that indicates that the result of an operation is outside of the representable number range is:
OVR = C_s XOR C_(s+1)
where C_s is the carry-in to the sign bit and C_(s+1) is the carry-out of the sign bit.
I assume that by saying "sign bit" the author means the top bit. Now assume we have a 4-bit adder, 1100+1100, which leads to an overflow. The carry-in to the sign bit is 1 and the carry-out is also 1. This seems to contradict the formula. Where is the mistake?
(Please read the comments of the original question for more details)
In fact, as Raymond Chen mentioned, 1100 + 1100 does not cause an overflow, as the result (-8) still fits in a 4-bit signed value.
If we instead use 1000 + 1111, then the resulting -9 is indeed an overflow, and we can observe a carry-in of 0 and a carry-out of 1 for the sign bit.
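
If you want to check this mechanically, here is a small Python sketch (my own illustration, not from the textbook) that computes both carries for a 4-bit adder and applies the formula:

    # OVR = C_s XOR C_(s+1) for a 4-bit adder.
    def ovr_4bit(a, b):
        c_in  = ((a & 0b0111) + (b & 0b0111)) >> 3  # carry into the sign bit
        c_out = (a + b) >> 4                        # carry out of the sign bit
        return c_in ^ c_out

    print(ovr_4bit(0b1100, 0b1100))  # 0: -4 + -4 = -8, which still fits
    print(ovr_4bit(0b1000, 0b1111))  # 1: -8 + -1 = -9, which overflows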

How are the bits arranged in a QR code?

I've wondered how QR codes work, so I did some research and tried to paint my own in a table in Word.
On Wikipedia I found this picture.
I understand the configuration, but how you actually store a letter doesn't make sense to me.
Take the example letter w.
On even rows black is 0 and on odd rows 1.
So the example should give the binary number 01110011, which would be 115, but w is number 32.
So how do I get the right number?
I don't know much about this topic, but I found a video where someone explains it. From what I understood, the cells are read in the order of the numbers, depending on the arrow (there are 4 options here, and you posted those yourself). So you simply follow those numbers and write the 1s and 0s on paper, which results in an 8-bit number. The video has much more detail.
It is also worth pointing out that the reading is MSB-first. Following your example (just considering the numbers, not the colors, since you mislabeled those), the arrow points up, meaning you read from right/down to up/left, which leads to the number 01110011. Its most significant bit is at the left, which means it is 115.
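
If you want to verify the MSB-first reading yourself, a one-liner in Python (my illustration) does it:

    # MSB-first: the leftmost character is the most significant bit.
    print(int("01110011", 2))  # -> 115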

computer networking error detection cyclic redundancy check

Could anyone tell me whether the method given below is a right way to calculate the CRC?
I did this because the dataword length is large.
Is it right?
The bottom set of divisions are what you want for a CRC, and arrive at the correct remainders. Your quotients each need one more bit, but they are not used anyway.
The top division was not completed, but it is not relevant, since you need to append the four zeros first as you did in the bottom left, or the four-bit CRC as you did in the bottom right.
Ultimately, you are doing the same thing a long division does. See https://www.wikihow.com/Divide-Binary-Numbers for more on binary division. Note, however, that the dataword sent to the receiver should not be altered.
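
To make the append-zeros-and-divide procedure concrete, here is a Python sketch of CRC as mod-2 (XOR) long division. The dataword and generator below are illustrative values (the classic example from the Wikipedia CRC article), not the ones from your image:

    # CRC: append r zero bits to the dataword, divide by the generator
    # modulo 2 (XOR), and keep the r-bit remainder.
    def crc_remainder(data, data_bits, gen, gen_bits):
        rem = data << (gen_bits - 1)                 # append r = gen_bits - 1 zeros
        for shift in range(data_bits - 1, -1, -1):
            if rem & (1 << (shift + gen_bits - 1)):  # top bit of current window set?
                rem ^= gen << shift                  # mod-2 subtraction is XOR
        return rem

    # Dataword 11010011101100, generator 1011 -> remainder 100.
    print(format(crc_remainder(0b11010011101100, 14, 0b1011, 4), '03b'))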

Is a number sequence increasing or decreasing?

I'm asking from a non-programming point of view because I want to see the meaning - why is it that way?
There is a sequence in one book whose formula is (2n+3)/(6n-5). It is said to be decreasing, which can be seen from the obtained formula: -28/((6+1)(6n-5)). I see that the formula works for every member, but how can I obtain that formula which determines whether the sequence is decreasing or increasing?
What you're interested in is the difference between two sequential elements, take for example n and (n+1).
The nth term is (2n+3)/(6n-5)
The (n+1)th term is (2n+5)/(6n+1)
Now, you can find the difference between these two terms:
f(n+1)-f(n) = (2n+5)/(6n+1) - (2n+3)/(6n-5)
Notice that, conceptually, this value is the difference between one term and the next one.
This simplifies to the expression you wrote. Now, just to be pedantic, there is a small typo in the solution you gave, but it looks like an actual typo, not a misunderstanding or wrong answer. You have "(6+1)" where it should be "(6n+1)"
Now, when this value is positive, the sequence is increasing, and when it is negative the sequence is decreasing. This value, for example, will always be negative for n>5/6. There is a negative number in the numerator, and no way for the denominator to become negative to cancel it out.
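
If you want to verify the algebra by machine, a short sympy sketch (my addition, assuming sympy is installed) reproduces the difference:

    # Check that f(n+1) - f(n) simplifies to -28/((6n+1)(6n-5)).
    import sympy as sp

    n = sp.symbols('n')
    f = (2*n + 3) / (6*n - 5)
    diff = sp.factor(sp.together(f.subs(n, n + 1) - f))
    print(diff)  # -> -28/((6*n - 5)*(6*n + 1))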
Go to : http://www.wolframalpha.com/widgets/view.jsp?id=c44e503833b64e9f27197a484f4257c0
Under "derivative of" input your formula : (2*x+3)/(6*x-5)
Click "submit" button
Click the "Step-by-step solution" link
OP's question: how to get from (2*x+3)/(6*x-5) to -28/(5-6x)^2
Answer: find the first derivative of (2*x+3)/(6*x-5)
How: start with the quotient rule for finding derivatives (http://en.wikipedia.org/wiki/Quotient_rule); to simplify the result you'll need a few other rules (http://en.wikipedia.org/wiki/Category:Differentiation_rules)
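
To spell that out (my own working, following the quotient rule): with u = 2x+3 and v = 6x-5,

    u = 2x + 3,   u' = 2
    v = 6x - 5,   v' = 6

    f'(x) = (u'v - uv') / v^2
          = (2(6x-5) - 6(2x+3)) / (6x-5)^2
          = (12x - 10 - 12x - 18) / (6x-5)^2
          = -28 / (6x-5)^2

which is the same as -28/(5-6x)^2, since squaring removes the sign difference.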

error correction code upper bound

If I want to send a d-bit packet and add another r bits for an error correction code (d > r),
how many errors can I detect and correct at most?
You have 2^d different kinds of packets of length d bits you want to send. Adding your r bits to them makes them into codewords of length d+r, so now you have 2^d possible codewords you could send. The receiver could get 2^(d+r) different received words (codewords with possible errors). The question then becomes: how do you map those 2^(d+r) received words to the 2^d codewords?
This comes down to the minimum distance of the code. That is, for each pair of codewords, find the number of bits where they differ, then take the smallest of those values.
Let's say you had a minimum distance of 3. You receive a word and notice that it isn't one of the codewords; that is, there's an error. So, for lack of a better decoding algorithm, you flip the first bit and see if the result is a codeword. If it isn't, you flip it back and flip the next one. Eventually, you get a codeword. Since all codewords differ in at least 3 positions, you know this codeword is the "closest" to the received word, since you would have to flip at least 2 bits in the received word to reach any other codeword. If you didn't get a codeword by flipping just one bit at a time, you can't figure out where the errors are, since there are multiple codewords you could reach by flipping two bits, but you do know there are at least two errors.
This leads to the general principle that for a minimum distance md, you can detect md-1 errors and correct floor((md-1)/2) errors. Calculating the minimum distance depends on the details of how you generate the codewords, otherwise known as the code. There are various bounds you can use to figure out an upper limit on md based on d and (d+r).
Paul mentioned the Hamming code, which is a good example. It achieves the Hamming bound. For the (7,4) Hamming code, you have 4-bit messages and 7-bit codewords, and you achieve a minimum distance of 3. Obviously*, you are never going to get a minimum distance greater than the number of bits you are adding, so this is the very best you can do. Don't get too used to this, though. The Hamming code is one of the few examples of a non-trivial perfect code, and most codes have a minimum distance that is less than the number of bits you add.
*It's not really obvious, but I'm pretty sure it's true for non-trivial error correcting codes. Adding one parity bit gets you a minimum distance of two, allowing you to detect an error. The code consisting of {000,111} gets you a minimum distance of 3 by adding just 2 bits, but it's trivial.
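
For completeness, here is a tiny Python sketch (my illustration) that computes the minimum distance of a code, and from it the detect/correct counts, for the two small codes mentioned above:

    # Minimum distance of a code, and what it buys you.
    from itertools import combinations

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def min_distance(code):
        return min(hamming(a, b) for a, b in combinations(code, 2))

    for code in [["000", "111"],                 # repetition code: adds 2 bits
                 ["000", "011", "101", "110"]]:  # single parity bit
        md = min_distance(code)
        print(code, "md =", md, "-> detects", md - 1,
              "corrects", (md - 1) // 2)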
You should probably read the wikipedia page on this:
http://en.wikipedia.org/wiki/Error_detection_and_correction
It sounds like you specifically want a Hamming Code:
http://en.wikipedia.org/wiki/Hamming_code#General_algorithm
Using that scheme, you can look up some example values from the linked table.
