Calculating the Checksum in the receiver - networking

I'm reading the book Data Communications and Networking, 4th Edition, by Behrouz Forouzan. I have a question about an exercise that asks the following: the receiver of a message uses the checksum technique for 8-bit characters and gets the following information
100101000011010100101000
. How can I tell whether the data sent is correct or not, and why?
I learned how to calculate the checksum with hexadecimal values, but I don't understand how to determine from a binary output whether the information is correct.

The sender calculates a checksum over the data and sends it together with the data in the same message.
The receiver calculates the checksum again over the received data and checks whether the result matches the received checksum.
There is still a chance that both the data and the checksum were modified during transmission in a way that still matches, but the likelihood of that happening due to random noise is extremely low.
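A minimal receiver-side sketch for the exercise above, assuming the 1's complement checksum over 8-bit words that Forouzan describes (the last 8-bit word is taken to be the transmitted checksum); if the exercise means a simple sum modulo 256 instead, the accept condition would be zero rather than all ones:

    def ones_complement_sum_8(words):
        """Add 8-bit words with end-around carry (1's complement arithmetic)."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFF) + (total >> 8)  # fold the carry back in
        return total

    received = "100101000011010100101000"
    # Split the received bit string into 8-bit words (data followed by the checksum).
    words = [int(received[i:i + 8], 2) for i in range(0, len(received), 8)]

    # The receiver sums everything, including the checksum word. If the result is
    # all ones (0xFF) the message is accepted; otherwise an error is assumed.
    result = ones_complement_sum_8(words)
    print("accepted" if result == 0xFF else "error detected")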

Related

Are CRC generator bits the same for all?

In CRC, if the sender has to send data, it divides the data bits by g(x), which gives some remainder bits. Those remainder bits are appended to the data bits. When this codeword is sent to the receiver, the receiver divides the codeword by the same g(x), giving some remainder. If this remainder is zero, the data is taken to be correct.
Now, if all systems can communicate with each other, does that mean every system in the world has the same g(x)? After all, the sender and receiver must share a common g(x).
(Please answer only if you have correct knowledge, with some valid proof.)
It depends on the protocol. CRC by itself works with different polynomials; the protocols that use it define which g(x) polynomial to use.
There is a list of examples at https://en.wikipedia.org/wiki/Cyclic_redundancy_check#Standards_and_common_use
This is not an issue, since systems obviously cannot communicate using different protocols on the sending and receiving ends. Potentially, a protocol could also use a variable polynomial, somehow decided at the start of the communication, but I can't see why that would be useful.
That's a big no. Furthermore, there are several other variations besides just g(x).
If someone tells you to compute a CRC, you have many questions to ask. You need to ask: what is g(x)? What is the initial value of the CRC? Is it exclusive-or'ed with a constant at the end? In what order are bits fed into the CRC, least-significant or most-significant first? In what order are the CRC bits put into the message? In what order are the CRC bytes put into the message?
Here is a catalog of CRCs, with (today) 107 entries. It is not all of the CRCs in use. There are many lengths of CRCs (the degree of g(x)), many polynomials (g(x)), and among those, many choices for the bit orderings, initial value, and final exclusive-or.
The person telling you to compute the CRC might not even know! "Isn't there just one?" they might naively ask (as you have). You then have to either find a definition of the CRC for the protocol you are using, and be able to interpret that, or find examples of correct CRC calculations for your message, and attempt to deduce the parameters.
By the way, not all CRCs will give zero when computing the CRC of a message with the CRC appended. Depending on the initial value and final exclusive-or, you will get a constant for all correct messages, but that constant is not necessarily zero. Even then, you will get a constant only if you compute the bits from the CRC in the proper order. It's actually easier and faster to ignore that property and compute the CRC on just the received message and then simply compare that to the CRC received with the message.
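To make the parameter question concrete, here is a sketch of a bit-at-a-time CRC with the usual knobs exposed (width, polynomial, initial value, final exclusive-or, bit reflection); the CRC-32 parameters in the example are the commonly published Ethernet/zlib ones, used only for illustration:

    def crc(data: bytes, width, poly, init, xor_out, reflect):
        """Bit-at-a-time CRC; changing any parameter changes the result."""
        topbit = 1 << (width - 1)
        mask = (1 << width) - 1
        reg = init
        for byte in data:
            if reflect:
                byte = int(f"{byte:08b}"[::-1], 2)   # feed least-significant bit first
            reg ^= byte << (width - 8)
            for _ in range(8):
                reg = ((reg << 1) ^ poly) & mask if reg & topbit else (reg << 1) & mask
        if reflect:
            reg = int(f"{reg:0{width}b}"[::-1], 2)   # reflect the output as well
        return reg ^ xor_out

    # Example: parameters commonly quoted for CRC-32 (Ethernet, zlib);
    # the widely published check value for b"123456789" is 0xcbf43926.
    print(hex(crc(b"123456789", 32, 0x04C11DB7, 0xFFFFFFFF, 0xFFFFFFFF, reflect=True)))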

Is the IP header checksum a foolproof method of error detection?

While going through the IP header checksum, i.e. the 1's complement of the 1's complement sum of the 16-bit words, I can't help but wonder how this method can detect errors/alterations in the data. For example, computer A sends a packet with data (12 and 7) and computer B receives the packet with the data altered to (13 and 6). At the receiver, the checksum still matches even though the data was altered. Could you please help me understand whether I am missing something here?
Thank you.
Is the IP header checksum a foolproof method of error detection?
No.
The IP header checksum's purpose is to enable detection of a damaged IP header. It does not protect against manipulation or damage to the data field (which often has its own checksum).
For protection against manipulation, a cryptographic method is required.
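A small sketch of the questioner's example, assuming the RFC 1071-style sum over 16-bit words; it shows why the compensating +1/-1 changes go undetected while a single-word change is caught:

    def internet_checksum(words):
        """1's complement of the 1's complement sum of 16-bit words (RFC 1071 style)."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return (~total) & 0xFFFF

    def passes_check(words, checksum):
        """Receiver side: sum data plus checksum; 0xFFFF means 'no error detected'."""
        total = 0
        for w in words + [checksum]:
            total += w
            total = (total & 0xFFFF) + (total >> 16)
        return total == 0xFFFF

    checksum = internet_checksum([12, 7])
    print(passes_check([12, 7], checksum))   # True  -- unmodified data passes
    print(passes_check([13, 6], checksum))   # True  -- compensating errors slip through
    print(passes_check([13, 7], checksum))   # False -- a single-word error is caught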

What happens in RS-232 if both sender and receiver are using odd parity and bits get swapped?

I am studying the USART, using RS-232 and a MAX232 for communication.
I want to know: in a scenario where the sender and receiver are both using odd parity, and the bits other than the parity, start, and stop bits get swapped, how will the receiver know that the data it received is wrong?
Odd/even parity is not particularly useful for exactly the reason you have identified: it detects only a subset of errors. In the days when far fewer gates could fit on a chip, it at least had the advantage of requiring minimal logic to implement.
However, even if you detect an error, what do you do about it? Normally a higher-level packet-based protocol is used, where the packets carry a more robust error check such as a CRC. In that case, on an error, the receiver can request a resend of the erroneous packet.
At the word rather than the packet level, it is possible to use a more sophisticated error-checking mechanism by using more bits for error checking and fewer for the data. This reduces the effective data rate further, and on a simple UART it requires a software implementation. It is even possible to implement error detection and correction at the word level, but this is seldom done for UART/USART comms.
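A tiny illustration (not RS-232 specific) of why swapping two data bits defeats the parity check while a single flipped bit is caught, assuming odd parity over 8 data bits:

    def odd_parity_bit(bits):
        """Choose the parity bit so the total number of 1s (data + parity) is odd."""
        return 0 if sum(bits) % 2 == 1 else 1

    def parity_ok(bits, parity):
        return (sum(bits) + parity) % 2 == 1

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    p = odd_parity_bit(data)

    swapped = data[:]
    swapped[1], swapped[2] = swapped[2], swapped[1]   # two bits exchanged: 1s count unchanged
    print(parity_ok(swapped, p))   # True  -- the error is NOT detected

    flipped = data[:]
    flipped[0] ^= 1                                   # single-bit error: 1s count changes
    print(parity_ok(flipped, p))   # False -- detected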

Why is the TCP/UDP checksum finally complemented?

In TCP/UDP, the sender adds the 16-bit words using 1's complement arithmetic, and the final result is complemented to get the checksum. This is done so that the receiver can recompute the sum over the data and the checksum, and if the result is all ones it can be (almost!) certain that there is no error. My question is: why do we have to do a final complement of the result at the sender? We might as well send it as is, so that when the receiver recomputes the checksum it would check for all zeros instead of all ones as in the other case.
Because 0 has a special meaning. It is used to indicate that checksum computation is to be ignored.
So that the receiver can just do a 1's complement sum of all the data (including the checksum field) and see whether it is -0 (0xffff).
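A short sketch (with made-up 16-bit words) of why the final complement is convenient: with it, the receiver's 1's complement sum over the data plus the checksum always comes out to 0xFFFF, whereas summing in the uncomplemented value gives a data-dependent result:

    def ones_complement_sum16(words):
        """1's complement sum of 16-bit words (end-around carry)."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)
        return total

    data = [0x4500, 0x0073, 0x0000, 0x4000]       # arbitrary example words

    s = ones_complement_sum16(data)
    checksum = (~s) & 0xFFFF                       # the final complement

    print(hex(ones_complement_sum16(data + [checksum])))   # always 0xffff
    print(hex(ones_complement_sum16(data + [s])))          # data-dependent, not a fixed constant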

How to determine the length of an Ethernet II frame?

The Ethernet II frame format does not contain a length field, and I'd like to understand how the end of a frame can be detected without it.
Unfortunately, I have no background in physics, but the following sounds reasonable to me: we assume that Layer 1 (the physical layer) provides us with a way of transmitting raw bits such that it is possible to distinguish between the situation where bits are being sent and the situation where nothing is sent (if digital data were coded into analog signals via phase modulation, this would be true, for example, but I don't know whether this is really what's done). In this case, an Ethernet card could simply wait until a certain time interval occurs during which no more bits are transmitted, and then decide that the frame transmission has finished.
Is this really what's happening?
If yes: where can I find these things, and what are common values for the length of that "certain time interval"? And why does IEEE 802.3 have a length field?
If not: how is it done instead?
Thank you for your help!
Hanno
Your assumption is right. The length field inside the frame is not needed by layer 1.
Layer 1 uses other means to detect the end of a frame, and these vary depending on the type of physical layer.
With 10Base-T, a frame is followed by a TP_IDL waveform; the lack of further Manchester-coded data bits can be detected.
With 100Base-T, a frame is ended with an End of Stream Delimiter bit pattern that cannot occur in payload data (because of its 4B/5B encoding).
A rough description can be found, for example, here:
http://ww1.microchip.com/downloads/en/AppNotes/01120a.pdf "Ethernet Theory of Operation"
