Computer networking error detection: cyclic redundancy check - networking

Could anyone tell me whether the method given below is a correct way to calculate the CRC?
I did it this way because the dataword length is large.
Is it right?

The bottom set of divisions is what you want for a CRC, and it arrives at the correct remainders. Your quotients each need one more bit, but they are not used anyway.
The top division was not completed, but it is not relevant, since you need to append the four zeros first as you did in the bottom left, or the four-bit CRC as you did in the bottom right.

Ultimately, you are doing the same thing a long division does. Refer to https://www.wikihow.com/Divide-Binary-Numbers for more on binary division. However, the dataword sent to the receiver should not be altered.
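For concreteness, here is a minimal Python sketch of CRC computation as modulo-2 long division. The dataword and the 5-bit generator 10011 are hypothetical stand-ins (a 5-bit generator yields the four-bit CRC discussed above); substitute your own values.

    def crc_remainder(dataword, generator):
        # Modulo-2 long division: XOR the generator in wherever the
        # leading bit of the working register is 1.
        r = len(generator) - 1
        bits = list(dataword + "0" * r)     # append r zero bits first
        for i in range(len(dataword)):
            if bits[i] == "1":
                for j, g in enumerate(generator):
                    bits[i + j] = str(int(bits[i + j]) ^ int(g))
        return "".join(bits[-r:])           # the last r bits are the remainder

    dataword = "11010011101100"   # hypothetical dataword
    generator = "10011"           # hypothetical 5-bit generator -> 4-bit CRC
    crc = crc_remainder(dataword, generator)
    print(crc)
    print(dataword + crc)         # codeword actually transmitted

The receiver runs the same division on the full codeword and expects a remainder of all zeros; the dataword itself is transmitted unaltered, with the CRC appended.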

Related

How are the bits arranged in a QR code?

I've wondered how QR codes work, so I did some research and tried to paint my own in a table in Word.
On Wikipedia I found this picture.
I understand the configuration, but how you actually store a letter doesn't make sense to me.
Take the example letter w.
On even rows black is 0 and on odd rows 1.
So the example should give the binary number 01110011, which would be 115, but w is number 32.
So how do I get the right number?
I don't know much about this topic, but I found a video where someone explains it. From what I understood, there are cells that are read in an order indicated by numbers, depending on the arrow (there are 4 options here, and you posted those yourself). You simply follow those numbers and write the 1s and 0s down on paper, which results in an 8-bit number. The video has much more detail.
It is also worth pointing out that the encoding is MSB-first. Following your example (considering just the numbers, not the colors, since you mislabeled those), the arrow points up, meaning you read from bottom-right to top-left, which gives the number 01110011. With the most significant bit on the left, that is 115.
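To make the MSB-first reading concrete, here is a small Python sketch that packs the bits from the example into a value (the bit string is the one from the question):

    bits = "01110011"      # bits read out of the modules, most significant first
    value = 0
    for b in bits:
        value = (value << 1) | int(b)   # shift left, then append the next bit
    print(value)                        # 115
    print(int(bits, 2))                 # same result using Python's parser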

Using steganography to find a secret message in a BMP image

I have been struggling with this assignment for a while now and I feel completely lost.
I have this image:
There is a hidden message in it, hidden using LSD coding. I understand the concept of changing the least significant bit (the one furthest to the right), but I can't quite figure out how to extract a message. I have looked at the image with this tool and located some weird-looking pixels at the very bottom. The problem is I don't know how to locate them and extract a message from them. I think BMP is stored bottom-up, so they should be near the top of the hex dump. Anyway, I would be happy if anyone could help me or point me in the right direction.
LSD steganography uses the least significant digit (i.e., the least significant bit). Go through the image file, ignoring the headers and looking only at the bytes forming the actual image. From each byte, extract the least significant bit. Every eight bytes you will have 8 bits, which make up a single byte of the hidden message. Assemble the bits into bytes, and assemble the bytes into the hidden message.
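A minimal Python sketch of that procedure. The filename, the MSB-first bit packing, and the NUL terminator are assumptions for illustration; the pixel-data offset really is stored in bytes 10-13 of a BMP header.

    def extract_lsb_message(data, pixel_offset):
        # Collect the least significant bit of every pixel-data byte.
        bits = [b & 1 for b in data[pixel_offset:]]
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit   # assumes MSB-first packing
            if byte == 0:                  # assume a NUL byte ends the message
                break
            out.append(byte)
        return bytes(out)

    with open("image.bmp", "rb") as f:     # placeholder filename
        raw = f.read()
    offset = int.from_bytes(raw[10:14], "little")  # BMP: offset to pixel data
    print(extract_lsb_message(raw, offset))

If the output is garbage, try the opposite bit order (LSB-first packing), or start from the weird-looking pixels you located rather than from the beginning of the pixel data.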

Are there any conventional rules for offset binary arithmetic with fixed-width registers?

Let's say we have two fixed sized binary numbers (8-bit), e.g.
00000101 (5) and 00100000 (32)
The task is to add them in offset binary (excess of 128). Are there any specific rules concerning how to go about this?
Would I for instance first convert both the numbers into offset binary notation, then add them and afterwards subtract the offset (because I added it twice)? But if so, what about overflow, given that the imaginary registers are only 8 bit wide?
Or would I first subtract the excess and then add the second number? Are there any conventional rules when it comes to offset binary arithmetic?
I'm preparing for an exam in computer architecture and computer data arithmetic. This was a task on an exercise sheet in a previous term. I've already searched the net extensively for answers but can't seem to find a solid one.
I do not know what the "conventional rules" are for this operation, but I can tell you how I did this operation back when I did machine code.
This method works when the offset is half of the register's modulus (the first number that overflows the register). That is the case for you, since the offset is 128 and an 8-bit register overflows at 256. It works especially well when the two numbers you want to add are already in the offset format.
The method is: add the two offset numbers, as unsigned addition and ignoring any overflow, then flip the most significant bit.
In your case, you are adding 10000101 (5 in offset) and 10100000 (32 in offset). Adding those results in 00100101, since there is overflow out of the most significant bit. Flipping the most-significant bit results in 10100101, which is indeed 37 in offset format.
This method may result in overflow, but only when the result is too positive or too negative to fit into the offset format anyway. And in most CPUs the two operations (unsigned addition and flipping the MSB) are practically trivial.
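A minimal Python sketch of that method, using the values from the question (the helper name offset_add is mine):

    EXCESS = 128  # excess-128 offset for an 8-bit register

    def offset_add(a, b):
        # Unsigned addition modulo 256, then flip the most significant bit.
        s = (a + b) & 0xFF
        return s ^ 0x80

    a = 5 + EXCESS                                  # 0b10000101
    b = 32 + EXCESS                                 # 0b10100000
    result = offset_add(a, b)
    print(format(result, "08b"), result - EXCESS)   # 10100101 37

The XOR with 0x80 is the whole trick: flipping the MSB is the same as subtracting the doubled offset modulo 256, which undoes the extra 128 introduced by adding two already-offset numbers.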

How big should epsilon be when checking if dot product is close to 0?

I am working on a raytracing project, and I need to check whether a dot product is 0. But an exact 0 will probably never happen, so I want to treat the product as 0 if its value lies in a small interval [-eps, +eps]. I am just not sure how big eps should be.
Thanks
Since you describe this as part of a ray-tracing project, the accuracy you need is likely dictated by the "world coordinates" of a scene, or perhaps even by the screen coordinates to which those are translated. Either of these would give an acceptable target for absolute error in your calculation.
It might be possible to backtrack from that to the accuracy required in the intermediate calculations you are doing, such as forming an inner product which is supposed "theoretically" to be zero. For example, you might be trying to find a shortest path (reflected light) between two smooth bodies, and the disappearance of the inner product (perpendicularity) gives the location of a point.
In a case like that the inner product may be a quadratic in the unknowns (location of a point) that you seek. It's possible that the unknowns form a "double root" (zero of multiplicity 2), making the location of that root extra sensitive to the computation of the inner product being zero.
For such cases you would want to get roughly twice the number of digits "zero" in the inner product as needed in the accuracy of the location. Essentially the inner product changes very slowly with location in the neighborhood of a double root.
But your application might not be so sensitive; analysis of the algorithm involved is necessary to give you a good answer. As a general rule I do the inner product in double precision to get answers that may be reliable as far as single precision, but this may be too costly if the ray-tracing is to be done in real time.
There is no definitive answer. I use two approaches.
If you are only concerned about floating-point error, then you can use a pretty small value, comparable to the smallest floating-point number the implementation can represent. In C/C++ you can use the definitions provided in float.h, such as DBL_MIN, for these numbers. I'd use a small multiple of it, e.g., 10. * DBL_MIN, as the value for eps.
If the problem is not floating-point rounding error, then I use a value that is small (say 1%) compared to the modulus of the smallest vector.
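A short Python sketch of the second approach; the 1% default and the helper name are my choices for illustration:

    import math

    def is_perpendicular(u, v, rel=0.01):
        # Treat the dot product as zero when it is small relative to
        # the modulus of the smaller of the two vectors.
        dot = sum(a * b for a, b in zip(u, v))
        eps = rel * min(math.sqrt(sum(a * a for a in u)),
                        math.sqrt(sum(b * b for b in v)))
        return abs(dot) <= eps

    print(is_perpendicular((1.0, 0.0, 0.0), (1e-6, 1.0, 0.0)))  # True
    print(is_perpendicular((1.0, 0.0, 0.0), (0.5, 1.0, 0.0)))   # False

Making the threshold relative to the vector magnitudes keeps the test meaningful whether your scene coordinates are in millimeters or kilometers; a fixed absolute eps would not scale.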

Error correction code upper bound

If I want to send a d-bit packet and add another r bits of error correction code (d > r),
how many errors can I detect and correct at most?
You have 2^d different packets of length d bits that you might want to send. Adding your r bits to them makes them into codewords of length d+r, so you have 2^d possible codewords you could send. The receiver could get 2^(d+r) different received words (codewords with possible errors). The question then becomes: how do you map those 2^(d+r) received words to the 2^d codewords?
This comes down to the minimum distance of the code. That is, for each pair of codewords, find the number of bits where they differ, then take the smallest of those values.
Let's say you had a minimum distance of 3. You receive a word and you notice that it isn't one of the codewords; that is, there's an error. So, for lack of a better decoding algorithm, you flip the first bit and see if it's a codeword. If it isn't, you flip it back and flip the next one. Eventually, you get a codeword. Since all codewords differ in at least 3 positions, you know this codeword is the "closest" to the received word, since you would have to flip 2 bits in the received word to get to another codeword. If you don't get a codeword from flipping just one bit at a time, you can't figure out where the errors are, since there are multiple codewords you could reach by flipping two bits, but you know there are at least two errors.
This leads to the general principle that for a minimum distance md, you can detect md-1 errors and correct floor((md-1)/2) errors. Calculating the minimum distance depends on the details of how you generate the codewords, otherwise known as the code. There are various bounds you can use to figure out an upper limit on md based on d and (d+r).
Paul mentioned the Hamming code, which is a good example. It achieves the Hamming bound. For the (7,4) Hamming code, you have 4-bit messages and 7-bit codewords, and you achieve a minimum distance of 3. Obviously*, you are never going to get a minimum distance greater than the number of bits you are adding, so this is the very best you can do. Don't get too used to this, though. The Hamming code is one of the few examples of a non-trivial perfect code, and most codes have a minimum distance that is less than the number of bits you add.
*It's not really obvious, but I'm pretty sure it's true for non-trivial error correcting codes. Adding one parity bit gets you a minimum distance of two, allowing you to detect an error. The code consisting of {000,111} gets you a minimum distance of 3 by adding just 2 bits, but it's trivial.
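To illustrate the minimum-distance principle with the (7,4) Hamming code mentioned above, here is a short Python sketch; the generator matrix is one common systematic form (others exist):

    from itertools import product

    # Generator matrix for the (7,4) Hamming code: identity | parity columns.
    G = [
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]

    def encode(msg):
        # Multiply a 4-bit message by G over GF(2).
        return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                     for col in zip(*G))

    codewords = [encode(m) for m in product((0, 1), repeat=4)]

    # Minimum distance: smallest Hamming distance between distinct codewords.
    md = min(sum(a != b for a, b in zip(c1, c2))
             for i, c1 in enumerate(codewords)
             for c2 in codewords[i + 1:])

    print(md)                 # 3
    print(md - 1)             # errors detectable: 2
    print((md - 1) // 2)      # errors correctable: 1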
You should probably read the wikipedia page on this:
http://en.wikipedia.org/wiki/Error_detection_and_correction
It sounds like you specifically want a Hamming Code:
http://en.wikipedia.org/wiki/Hamming_code#General_algorithm
Using that scheme, you can look up some example values from the linked table.
