Is the IP header checksum a foolproof method of error detection?

While going through the IP header checksum, i.e. the one's complement of the one's complement sum of the 16-bit words, I can't help but wonder how this method can detect errors or alterations in the data. For example, computer A sends a packet with data (12 and 7) and computer B receives the packet with the data altered to (13 and 6). In the receiver the checksum still matches, even though the data has been altered. Could you please help me understand if I am missing something in this topic?
Thank you.

Is the IP header checksum a foolproof method of error detection?
No.
The IP header checksum's purpose is to enable detection of a damaged IP header. It does not protect against manipulation or damage to the data field (which often has its own checksum).
For protection against manipulation, a cryptographic method is required.
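To make the undetected alteration from the question concrete, here is a minimal Go sketch (assuming the RFC 1071 style one's complement arithmetic the checksum uses); the altered 16-bit words add up to the same value, so the final checksum is identical. Keep in mind that the real IP header checksum only covers the header words, not the payload.

package main

import "fmt"

// checksum16 returns the one's complement of the one's complement sum
// of the given 16-bit words.
func checksum16(words []uint16) uint16 {
	var sum uint32
	for _, w := range words {
		sum += uint32(w)
		// Fold any carry back into the low 16 bits (end-around carry).
		sum = (sum & 0xFFFF) + (sum >> 16)
	}
	return ^uint16(sum)
}

func main() {
	fmt.Printf("%#04x\n", checksum16([]uint16{12, 7})) // 0xffec
	fmt.Printf("%#04x\n", checksum16([]uint16{13, 6})) // 0xffec -- same checksum, alteration undetected
}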

Related

How do you read from a net.TCPConn in Go without specifying the length of the byte slice beforehand?

I was trying to read some messages from a TCP connection with a Redis client (a terminal just running redis-cli). However, the Read method in the net package requires me to pass a slice as an argument. Whenever I give a slice with no length, the connection crashes and the Go program halts. I am not sure what length my byte messages are going to be beforehand. So unless I specify some slice that is ridiculously large, the connection will always close, which seems wasteful. I was wondering, is it possible to keep a connection without having to know the length of the message beforehand? I would love a solution to my specific problem, but I feel that this question is more general. Why do I need to know the length beforehand? Can't the library just give me a slice of the correct size?
Or what other solution do people suggest?
Not knowing the message size is precisely the reason you must specify the Read size (this goes for any networking library, not just Go). TCP is a stream protocol. As far as the TCP protocol is concerned, the message continues until the connection is closed.
If you know you're going to read until EOF, use ioutil.ReadAll.
Calling Read isn't guaranteed to get you everything you're expecting. It may return less, it may return more, depending on how much data you've received. Libraries that do IO typically read and write through a "buffer"; you would have your "read buffer", which is a pre-allocated slice of bytes (up to 32k is common), and you re-use that slice each time you want to read from the network. This is why IO functions return the number of bytes, so you know how much of the buffer was filled by the last operation. If the buffer was filled, or you're still expecting more data, you just call Read again.
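A minimal Go sketch of that reusable read-buffer pattern; the Redis address here is only a placeholder for whichever peer you are reading from. If you really do want everything up to EOF, io.ReadAll (ioutil.ReadAll in older Go versions) replaces the loop with a single call.

package main

import (
	"fmt"
	"io"
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "localhost:6379") // hypothetical address, for illustration only
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 4096) // pre-allocated read buffer, reused on every call
	for {
		n, err := conn.Read(buf)
		if n > 0 {
			fmt.Printf("got %d bytes: %q\n", n, buf[:n]) // only buf[:n] is valid data
		}
		if err == io.EOF {
			break // peer closed the connection
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}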
A bit late but...
One of the questions was how to determine the message size. The answer given by JimB was that TCP is a streaming protocol, so there is no real end.
I believe this answer is incorrect. TCP divides up a bitstream into sequential packets. Each packet has an IP header and a TCP header (see Wikipedia and here). The IP header of each packet contains a field for the length of that packet. You would have to do some math to subtract out the TCP header length to arrive at the actual data length.
In addition, the maximum length of a message can be specified in the TCP header.
Thus you can provide a buffer of sufficient length for your read operation. However, you have to read the packet header information first. You probably should not accept a TCP connection if the max message size is longer than you are willing to accept.
Normally the sender would terminate the connection with a FIN packet (see 1), not an EOF character.
EOF in the read operation will most likely indicate that a packet was not fully transmitted within the allotted time.

Is it possible to send one bit from one computer to another computer through a socket?

I am working on a project in which I have to send data in bits.
Is it possible to send 1 bit from one computer to another through the internet? Most of the people said to me that the minimum internet packet size is 64 bytes. If I send 1 bit from one computer to another, the packet still takes 64 bytes of bandwidth.
A TCP or UDP packet consists of a header and data. Maybe you could have one bit of data in the data section, but you would need the header as well. Without the header it would be impossible to send the packet. The header contains all the information required for sending the packet where it is supposed to go and making sure it arrives safely.
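As a practical illustration of that point, here is a minimal Go sketch: the smallest unit a socket API lets you write is one byte, so the single bit has to be packed into a byte before it is handed to the connection, and the headers (and any padding) are added by the lower layers. The peer address is made up for illustration.

package main

import (
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:9000") // hypothetical peer
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	bit := byte(1) // the single bit to transmit (0 or 1), packed into the low bit of a byte
	if _, err := conn.Write([]byte{bit}); err != nil { // one byte is the minimum you can write
		log.Fatal(err)
	}
}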
I will take "internet packet" to mean Ethernet frame based on the value you give.
An Ethernet frame has a minimum total size of 64 bytes, including both header and payload; this is to ensure that the time to transmit a single frame is greater than the round-trip time between nodes.
This requirement is a feature of any network that uses CSMA/CD (specifically the CD, collision detection, part); it allows a sensing node to detect a collision whilst still transmitting a frame.
Whilst Ethernet can be used to send a frame smaller than 64 bytes, "padding" will be added to ensure the frame is at least 64 bytes.
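A minimal sketch of that padding rule, assuming the usual accounting of a 14-byte header and a 4-byte FCS inside the 64-byte minimum, which leaves 46 bytes as the smallest payload:

package main

import "fmt"

const (
	headerLen  = 14 // destination MAC + source MAC + EtherType
	fcsLen     = 4  // frame check sequence
	minFrame   = 64
	minPayload = minFrame - headerLen - fcsLen // 46 bytes
)

// padPayload appends zero bytes until the payload meets the minimum length.
func padPayload(payload []byte) []byte {
	if len(payload) >= minPayload {
		return payload
	}
	padded := make([]byte, minPayload)
	copy(padded, payload)
	return padded
}

func main() {
	data := []byte{0x01}               // a single byte of application data
	fmt.Println(len(padPayload(data))) // 46: padded so the whole frame reaches 64 bytes
}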

Is there a good way to frame a protocol so data corruption can be detected in every case?

Background: I've spent a while working with a variety of device interfaces and have seen a lot of protocols, many serial and UDP in which data integrity is handled at the application protocol level. I've been seeking to improve my receive routine handling of protocols in general, and considering the "ideal" design of a protocol.
My question is: is there any protocol framing scheme out there that can definitively identify corrupt data in all cases? For example, consider the standard framing scheme of many protocols:
Field: Length in bytes
<SOH>: 1
<other framing information>: arbitrary, but fixed for a given protocol
<length>: 1 or 2
<data payload etc.>: based on length field (above)
<checksum/CRC>: 1 or 2
<ETX>: 1
For the vast majority of cases, this works fine. When you receive some data, you search for the SOH (or whatever your start byte sequence is), move forward a fixed number of bytes to your length field, and then move that number of bytes (plus or minus some fixed offset) to the end of the packet to your CRC, and if that checks out you know you have a valid packet. If you don't have enough bytes in your input buffer to find an SOH or to have a CRC based on the length field, then you wait until you receive enough to check the CRC. Disregarding CRC collisions (not much we can do about that), this guarantees that your packet is well formed and uncorrupted.
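For illustration, a receive routine along those lines might look like this minimal Go sketch, under deliberately simplified assumptions: a 1-byte length field, a 1-byte XOR checksum standing in for a real CRC, and no extra framing bytes between SOH and the length.

package main

import (
	"bytes"
	"fmt"
)

const (
	SOH = 0x01
	ETX = 0x03
)

// tryParse returns the payload and the number of bytes consumed, or
// (nil, 0) if the buffer does not yet hold a complete frame.
func tryParse(buf []byte) (payload []byte, consumed int) {
	start := bytes.IndexByte(buf, SOH)
	if start < 0 {
		return nil, 0 // no start-of-frame yet
	}
	if len(buf) < start+2 {
		return nil, 0 // length byte not received yet
	}
	length := int(buf[start+1])
	end := start + 2 + length + 2 // SOH + length + payload + checksum + ETX
	if len(buf) < end {
		return nil, 0 // not enough bytes yet: wait for more (this is where a corrupted length stalls you)
	}
	payload = buf[start+2 : start+2+length]
	var sum byte
	for _, b := range payload {
		sum ^= b
	}
	if sum != buf[start+2+length] || buf[end-1] != ETX {
		return nil, start + 1 // bad frame: discard up to this SOH and resync
	}
	return payload, end
}

func main() {
	frame := []byte{SOH, 3, 'a', 'b', 'c', 'a' ^ 'b' ^ 'c', ETX}
	p, n := tryParse(frame)
	fmt.Printf("%q consumed=%d\n", p, n)
}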
However, if the length field itself is corrupt and has a high value (which I'm running into), then you can't check the (corrupt) packet's CRC until you fill up your input buffer with enough bytes to meet the corrupt length field's requirement.
So is there a deterministic way to get around this, either in the receive handler or in the protocol design itself? I can set a maximum packet length or a timeout to flush my receive buffer in the receive handler, which should solve the problem on a practical level, but I'm still wondering if there's a "pure" theoretical solution that works for the general case and doesn't require setting implementation-specific maximum lengths or timeouts.
Thanks!
The reason why all the protocols I know of, including those handling "streaming" data, chop the data stream up into smaller transmission units, each with its own checks on board, is exactly to avoid the problems you describe. Probably the fundamental flaw in your protocol design is that the blocks are too big.
The accepted answer of this SO question contains a good explanation and a link to a very interesting (but rather heavy on math) paper about this subject.
So in short, you should stick to smaller transmission units, not only because of practical programming-related arguments but also because of the message length's role in determining the security offered by your CRC.
One way would be to encode the length parameter so that it would be easily detected to be corrupted, and save you from reading in the large buffer to check the CRC.
For example, the XModem protocol embeds an 8-bit packet number followed by its one's complement.
It could mean doubling your length block size, but it's an option.
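A minimal Go sketch of that idea, with the length byte followed by its one's complement (as XModem does for the packet number), so a corrupted length can be rejected before any payload is buffered:

package main

import "fmt"

// encodeLen returns the length byte together with its one's complement.
func encodeLen(n byte) [2]byte {
	return [2]byte{n, ^n}
}

// lenValid reports whether the two received bytes are consistent.
func lenValid(n, complement byte) bool {
	return n == ^complement
}

func main() {
	enc := encodeLen(200)
	fmt.Println(lenValid(enc[0], enc[1]))      // true: accept, then wait for 200 payload bytes
	fmt.Println(lenValid(enc[0]|0x10, enc[1])) // false: corrupted length, discard immediately
}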

IPv4 header checksum

I'm a beginner with the TCP/IP suite.
One field of the IP header, named HEADER CHECKSUM, is formed by treating the header as a sequence of 16-bit integers, adding them together using one's complement arithmetic, and then taking the one's complement of the result.
But the IP header also includes the TTL field, which may change during transmission.
Why does this not lead to an inconsistency between the sender and the receiver?
The checksum is recomputed at every hop. As the TTL field is decremented on each hop, a new checksum must be computed each time. The method used to compute the checksum is defined by RFC 1071.
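Conceptually, each hop does something like the following minimal Go sketch: decrement the TTL, then recompute the checksum over the 20-byte header with the checksum field treated as zero (RFC 1071 arithmetic). The header bytes here are made up for illustration, and real routers typically use an incremental update rather than a full recomputation.

package main

import (
	"encoding/binary"
	"fmt"
)

// headerChecksum computes the IPv4 header checksum with the checksum
// field (bytes 10-11) treated as zero.
func headerChecksum(hdr []byte) uint16 {
	var sum uint32
	for i := 0; i < len(hdr); i += 2 {
		if i == 10 {
			continue // skip the checksum field itself
		}
		sum += uint32(binary.BigEndian.Uint16(hdr[i : i+2]))
	}
	for sum>>16 != 0 {
		sum = (sum & 0xFFFF) + (sum >> 16) // end-around carry
	}
	return ^uint16(sum)
}

func main() {
	hdr := make([]byte, 20)
	hdr[0] = 0x45 // version 4, IHL 5
	hdr[8] = 64   // TTL
	hdr[9] = 6    // protocol: TCP
	binary.BigEndian.PutUint16(hdr[10:], headerChecksum(hdr))
	fmt.Printf("before hop: TTL=%d checksum=%#04x\n", hdr[8], binary.BigEndian.Uint16(hdr[10:]))

	// What a forwarding router conceptually does:
	hdr[8]--                                                  // decrement TTL
	binary.BigEndian.PutUint16(hdr[10:], headerChecksum(hdr)) // recompute the header checksum
	fmt.Printf("after hop:  TTL=%d checksum=%#04x\n", hdr[8], binary.BigEndian.Uint16(hdr[10:]))
}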

Calculating the Checksum in the receiver

I'm reading the book Data Communications and Networking, 4th Edition, by Behrouz Forouzan. I have a question about an exercise that asked me the following: the receiver of a message uses the checksum technique for 8-bit characters and gets the following information
100101000011010100101000
How can I know whether the data sent is correct or not, and why?
I learned how to calculate the checksum with hexadecimal values, but I do not understand how to determine from a binary output whether the information is correct.
The sender calculates a checksum over the data and sends it with the data in the same message.
The receiver calculates the checksum again over the received data and checks whether the result matches the received checksum.
There is still a chance that both the data and the checksum were modified during transmission in such a way that they still match, but the likelihood of that happening because of random noise is extremely low.
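A minimal Go sketch of that receiver-side check for the exercise above, assuming the 24 received bits are three 8-bit words (data plus checksum): sum everything with end-around carry, complement the result, and accept the message only if the complement is zero.

package main

import "fmt"

func main() {
	received := "100101000011010100101000"

	// Split the bit string into 8-bit words.
	var words []uint16
	for i := 0; i+8 <= len(received); i += 8 {
		var w uint16
		for _, bit := range received[i : i+8] {
			w = w<<1 | uint16(bit-'0')
		}
		words = append(words, w)
	}

	// One's complement sum over 8-bit words.
	var sum uint16
	for _, w := range words {
		sum += w
		if sum > 0xFF {
			sum = (sum & 0xFF) + (sum >> 8) // end-around carry
		}
	}
	complement := ^sum & 0xFF

	// A complement of 0 means no error detected; anything else means corruption.
	fmt.Printf("words=%#02x sum=%#02x complement=%#02x\n", words, sum, complement)
}

Under this reading of the exercise the complement comes out nonzero, so the checksum test reports the data as corrupted.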
