Why is an empty TCP segment at the right edge of the receive window not acceptable?

The TCP specification (RFC 793) classifies a received segment as unacceptable if it has zero length and a sequence number equal to RCV.NXT+RCV.WND while the receive window is non-zero (second row of the acceptability table), because the test requires RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND.
This essentially means that the segment will be discarded, apart from possibly sending an ACK in response. No ACK processing or send window update will be done.
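For reference, here is the acceptability test as I implemented it. This is a minimal sketch following the table in RFC 793; the helper and field names (seq_between, rcv_nxt, and so on) are mine, not from any real stack:

```c
#include <stdint.h>
#include <stdbool.h>

/* Wraparound-safe test for lo <= seq < hi on 32-bit sequence numbers. */
static bool seq_between(uint32_t lo, uint32_t seq, uint32_t hi)
{
    return (uint32_t)(seq - lo) < (uint32_t)(hi - lo);
}

/* Segment acceptability test from RFC 793, section 3.3. */
static bool segment_acceptable(uint32_t seg_seq, uint32_t seg_len,
                               uint32_t rcv_nxt, uint32_t rcv_wnd)
{
    if (seg_len == 0 && rcv_wnd == 0)
        return seg_seq == rcv_nxt;
    if (seg_len == 0)
        /* Second row of the table: the strict upper bound is what
         * rejects an empty segment with SEG.SEQ = RCV.NXT+RCV.WND. */
        return seq_between(rcv_nxt, seg_seq, rcv_nxt + rcv_wnd);
    if (rcv_wnd == 0)
        return false;
    /* Some part of the segment's data must lie inside the window. */
    return seq_between(rcv_nxt, seg_seq, rcv_nxt + rcv_wnd) ||
           seq_between(rcv_nxt, seg_seq + seg_len - 1, rcv_nxt + rcv_wnd);
}
```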
What is the justification for this?
Consider this scenario:
Host A sends all possible data segments to host B, just exhausting the receive window of B.
Shortly afterwards, host A also sends an empty segment, e.g. a window update or an acknowledgement of received data. This segment has a sequence number equal to the right edge of the receive window of host B (RCV.NXT+RCV.WND), since it was set to the latest SND.NXT of host A.
The data segments just mentioned are lost or delayed in the network, and host B receives the empty segment first.
Host B will classify the empty segment as not acceptable and drop it, ignoring any acknowledgement or window update it carries.
Is there some part that I am not understanding correctly? Is this scenario really possible?
note: I ask here instead of on networkengineering.stackexchange.com since I encountered the issue while implementing a TCP/IP stack, and these protocol details seem closer to programming than to what is commonly understood as network engineering.

Related

TCP packet sequence number generation

I'm creating pseudo TCP packets from scratch (based on some existing data) that can later be analysed using Wireshark.
How should I fill out the sequence/acknowledgement numbers so that Wireshark shows a clean flow? Currently, I'm seeing TCP retransmission/out-of-order/previous-segment-not-captured warnings, etc.
What I've tried: incrementing the sequence number by the current TCP packet's length, but it didn't work.
(It doesn't necessarily need to make sense; I just want to get rid of the warnings.)
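A minimal sketch of the bookkeeping that keeps a crafted two-way flow clean (illustrative names, not a real API). The usual pitfalls are that SYN and FIN each consume one sequence number even though they carry no data, and that each side's ACK must equal the other side's next sequence number:

```c
#include <stdint.h>
#include <stdbool.h>

/* Per-direction state for a synthesized TCP conversation. */
struct flow {
    uint32_t seq;   /* sequence number of the next segment this side sends */
    uint32_t ack;   /* acknowledgement number this side should send        */
};

/* Account for one crafted segment sent by `self` towards `peer`.
 * Call after emitting the segment with self->seq and self->ack.  */
static void advance(struct flow *self, struct flow *peer,
                    uint32_t payload_len, bool syn, bool fin)
{
    /* SYN and FIN each occupy one sequence number; a pure ACK
     * (payload_len == 0, neither flag) does not advance anything. */
    self->seq += payload_len + (syn ? 1 : 0) + (fin ? 1 : 0);
    peer->ack = self->seq;   /* the peer acknowledges up to here   */
}
```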

Network layer: LAN network

When we send packets from one router to another on the network layer and the packet size is greater than the MTU (maximum transmission unit) of the outgoing link, we have to fragment the packet. My question is: suppose we need to add padding bits in the last fragment; where do we add the padding bits (at the LSB or the MSB end), and how does the destination router differentiate between packet bits and padding bits?
Please consider the following points first:
The limit on the maximum size of an IP datagram is imposed by the data link protocol.
IP is the highest layer protocol that is implemented at both routers and hosts.
Reassembly of the original datagram is done only at the destination host. This takes the extra work off the routers in the network core.
I will use the information from the following image to help get to the answer with an example.
Here the initial length of the packet is 2400 bytes, which needs to be fragmented according to an MTU limit of 1000 bytes.
There are only 13 bits available for the fragment offset, and the offset is given as a multiple of eight bytes. This is why the data fields of the first and second fragments have a size of 976 bytes: it is the largest multiple of 8 that fits within the 1000 - 20 = 980 bytes available for payload. This makes the first and second fragments 996 bytes each in total. The last fragment carries the remaining 428 bytes of payload (448 bytes in total).
The offsets are 0, 976/8 = 122, and 1952/8 = 244.
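The same arithmetic as a small, self-contained program (the constants are taken from the example above):

```c
#include <stdio.h>

#define MTU    1000   /* bytes, from the example            */
#define IP_HDR   20   /* IPv4 header without options        */

int main(void)
{
    int payload = 2400 - IP_HDR;               /* 2380 bytes of data */

    /* Per-fragment data must be a multiple of 8, except for the
     * last fragment: ((1000 - 20) / 8) * 8 = 976.                 */
    int max_data = ((MTU - IP_HDR) / 8) * 8;

    int offset = 0;
    while (payload > 0) {
        int data = payload > max_data ? max_data : payload;
        int more = payload > data;             /* more-fragments flag */
        printf("offset=%4d (field=%3d) total_len=%4d MF=%d\n",
               offset, offset / 8, data + IP_HDR, more);
        offset  += data;
        payload -= data;
    }
    return 0;   /* prints offsets 0/976/1952, fields 0/122/244 */
}
```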
When these fragments reach the destination host, reassembly needs to be done. The host uses the identification, flags, and fragment offset fields for this task. To determine which fragments belong to which datagram, the host uses the source address, destination address, and identification field to uniquely identify them. The offset values and more-fragments bits are used to determine whether all fragments have arrived.
Answer to your question
The payload only needs to be a multiple of 8 for the non-last fragments. The reason is that the offset, expressed in units of 8 bytes, tells the host where the next fragment's data starts; the host doesn't need a starting address beyond the last fragment, so there is no need for the last fragment's payload to be a multiple of 8. The host checks the more-fragments flag to identify the last fragment.
A bit of additional information: it is not the responsibility of the network layer to guarantee delivery of the datagram. If it finds that one or more fragments have not arrived, it simply discards the whole datagram. The transport layer working above the network layer will take care of this, if it is TCP, by asking the source to retransmit the data.
Reference: Computer Networking: A Top-Down Approach, James F. Kurose and Keith W. Ross (5th edition)
You don't need to add any padding bits. All the bits will be pushed on down the route until the full frame has been sent.

Retransmission mechanism in the TCP protocol

Can somebody briefly describe the retransmission mechanism in TCP?
I want to know how it deals with this situation:
A sends a packet to B:
A sends a packet.
B receives it and sends an ACK, but this ACK is lost.
A times out and resends.
In this situation B will receive the same packet twice. What can B do to avoid processing the same packet again?
Thanks.
Each packet has a sequence number associated with it. As data is sent, the sequence number is incremented by the amount of original data in the packet. You can think of the sequence number as the offset of the first byte in the packet from the beginning of the data stream although it may not, likely will not, start at zero. When A sends the retry, it will use the same sequence number it used the first time. B tracks the sequence numbers as it receives data and can know that it has seen the retry's sequence number before. If it has already made that data available to the (upper layer) client, then it knows that it should not do so again.
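A minimal receiver-side sketch of this duplicate handling (illustrative names; the stubs stand in for a real stack's ACK transmission and data delivery):

```c
#include <stdint.h>
#include <stdio.h>

/* Stubs so the sketch compiles; a real stack would transmit an ACK
 * segment and hand the bytes to the application here. */
static void send_ack(uint32_t ack)              { printf("ACK %u\n", ack); }
static void deliver(uint32_t seq, uint32_t len) { printf("deliver %u+%u\n", seq, len); }

/* rcv_nxt is the sequence number of the next byte B expects. */
static void on_segment(uint32_t *rcv_nxt, uint32_t seg_seq, uint32_t seg_len)
{
    if ((int32_t)(seg_seq + seg_len - *rcv_nxt) <= 0) {
        /* Everything in this segment was seen before: the earlier
         * ACK was lost.  Re-ACK so A stops retransmitting, but do
         * not deliver the data to the application again.          */
        send_ack(*rcv_nxt);
    } else if (seg_seq == *rcv_nxt) {
        /* In-order new data: deliver and acknowledge it. */
        deliver(seg_seq, seg_len);
        *rcv_nxt += seg_len;
        send_ack(*rcv_nxt);
    }
    /* Out-of-order and partially overlapping segments omitted. */
}

int main(void)
{
    uint32_t rcv_nxt = 1000;
    on_segment(&rcv_nxt, 1000, 100);  /* first copy: delivered, ACK 1100 */
    on_segment(&rcv_nxt, 1000, 100);  /* retransmit: only re-ACK 1100   */
    return 0;
}
```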

SR & GBN: Out-of-window ACKs

I'm currently studying fairly basic networking, and I'm on the subject of reliable transmission. I'm using the book Computer Networking by Kurose & Ross, and two of the review questions were as follows:
With the selective-repeat/go-back-N protocol, is it possible for the
sender to receive an ACK for a packet that falls outside of its
current window?
For the SR version, my answer to the question was as follows:
Yes, if the window size is too big for the sequence number space. For
example, a receiver gets a number of packets equal to the space of the
sequence numbers. Its receive window has thus moved so that it is
expecting a new set of packets with the same sequence numbers as the
last one. The receiver now sends an ACK for each of the packets, but
all of them are lost along the way. This eventually causes the sender
to timeout for each of the previous set of packets, and retransmits
each of them. The receiver thinks that this duplicate set of packets
is really the new set that it is expecting, and it sends ACKs for
each of them that successfully reaches the sender. The sender now
experiences a similar kind of confusion, where it thinks that the ACKs
are confirmations that each of the old packets have been received,
when they are really ACKs meant for the new, yet-to-be-sent packets.
I'm pretty sure this is correct (otherwise, please tell me!), since this kind of scenario seems to be the classic justification for why the window size should be less than or equal to half the size of the sequence number space when it comes to SR protocols, but what about GBN?
Can the same kind of wraparound issue occur for it, making the answers mostly identical? If not, are there any other cases that can cause a typical GBN sender to receive an ACK outside of its window?
Regarding the latter, the only example I can think of is the following:
A GBN sender sends packets A & B in order. The receiver receives both in order, and sends one cumulative ACK covering every packet up to and including A, and then another covering every packet up to and including B. The first ACK is so heavily delayed that the second one arrives at the sender first, causing its window to slide beyond A & B. When the first one finally arrives, it needlessly acknowledges that everything up to A has been correctly received, when A is already outside of the sender's window.
This example seems rather harmless and unlikely in contrast to the previous one, so I doubt that it's correct (but again, correct me if I'm wrong, please!).
In the practical world, how about a duplicate ACK delayed long enough to fall outside the window?
The protocol is between the sender and the receiver, but it has no control over how the medium (network path) behaves.
The protocol is still reliable according to the design, but the implementation must be able to handle such out-of-window duplicate ACKs.
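A minimal sketch of the sender-side check implied here, using 3-bit sequence numbers as in the textbook examples (names are illustrative). With modular arithmetic an ACK is "in window" exactly when its distance from the window base is less than the window size, which is also why the window must be at most half the sequence space; otherwise an old duplicate is indistinguishable from a new ACK:

```c
#include <stdbool.h>
#include <stdio.h>

#define SEQ_SPACE 8u   /* 3-bit sequence numbers           */
#define WINDOW    4u   /* must be <= SEQ_SPACE / 2 for SR  */

/* True if ack lies in [base, base + WINDOW) modulo SEQ_SPACE. */
static bool in_window(unsigned base, unsigned ack)
{
    return ((ack - base) % SEQ_SPACE) < WINDOW;
}

int main(void)
{
    unsigned send_base = 6;   /* window covers 6, 7, 0, 1 */
    /* A current ACK is accepted; a stale duplicate from the
     * previous wrap (e.g. 3) falls outside and is ignored.  */
    printf("ack 7: %s\n", in_window(send_base, 7) ? "in window" : "ignored");
    printf("ack 3: %s\n", in_window(send_base, 3) ? "in window" : "ignored");
    return 0;
}
```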

How to manage multi-packet sends with gsocket?

I've got a question regarding TCP/IP socket networking. Basically it is this: are there any parts of TCP/IP that I can leverage to help manage multi-packet sends? For example, I want to send a 100 MB binary file, which would take something like 70-80 thousand TCP packets. Meanwhile I have a relatively fast polling receive on the other side. Would my receiver have to receive each packet individually and "stitch" together the data packet by packet, looking for some size to be reached (it can look at the opcode and determine the size), or is there some way to tell TCP "hey, I'm sending 100 MB here, let them know when it is finished"?
I am using GLib's low-level socket library (GSocket).
When using a binary encoding like, say, protocol buffers, you would wrap the actual payload by prepending a header that includes the information necessary to decode the payload on the other end.
Say, 8 bytes, where the first 4 signify the type of the encoded message and the second 4 indicate the length of the entire message.
On the receiving side you then read this header, which is part of the payload from TCP's point of view, to determine the message type and the length of the message. This lets you combine multiple messages in one payload, or split messages across packets and reliably recombine them.
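A sketch of the receiving side with GLib's GSocket, assuming the 4+4-byte big-endian header described above (the framing is this answer's convention, not anything built into TCP or GSocket):

```c
#include <gio/gio.h>
#include <string.h>

/* Read exactly `len` bytes from a connected GSocket.  TCP is a byte
 * stream, so one g_socket_receive() call may return only part of a
 * message; loop until everything has arrived. */
static gboolean read_exact(GSocket *sock, guint8 *buf, gsize len, GError **err)
{
    gsize got = 0;
    while (got < len) {
        gssize n = g_socket_receive(sock, (gchar *)buf + got,
                                    len - got, NULL, err);
        if (n <= 0)
            return FALSE;   /* error, or the peer closed the connection */
        got += (gsize)n;
    }
    return TRUE;
}

/* Receive one framed message: 4-byte type + 4-byte length header
 * (big-endian), then `length` bytes of payload.  Caller frees the
 * returned buffer with g_free(). */
static guint8 *read_message(GSocket *sock, guint32 *type, guint32 *len,
                            GError **err)
{
    guint8 hdr[8];
    guint32 be;

    if (!read_exact(sock, hdr, sizeof hdr, err))
        return NULL;
    memcpy(&be, hdr, 4);
    *type = GUINT32_FROM_BE(be);
    memcpy(&be, hdr + 4, 4);
    *len = GUINT32_FROM_BE(be);

    guint8 *body = g_malloc(*len);
    if (!read_exact(sock, body, *len, err)) {
        g_free(body);
        return NULL;
    }
    return body;
}
```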
