TCP -- partial segments in receiver's sliding window

RFC 793 states that, at the receiver, an incoming segment is accepted based on the following check:
The first part of this test checks to see if the beginning of the segment
falls in the window, the second part of the test checks to see if the end of
the segment falls in the window; if the segment passes either part of the
test it contains data in the window.
However, there may be a case where the beginning of the segment falls in the window but the end of the segment doesn't. This happens when there is still room in the window, but the segment is longer than the remaining space in the buffer. What happens in that case?
Does TCP drop this segment, or does it arrange the buffer based on the Maximum Segment Size so that it can accept such partial segments?
TIA.

Normally the sender will not send more data than the receiver can accept in its window, since the current window size is carried in every TCP header. If the receiver's window is filling up, it advertises a smaller window value, so the sender always knows how much more data the receiver can accept before the window is full.
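For reference, the two-part acceptability test quoted from RFC 793 above can be sketched roughly as follows (a minimal sketch; the variable names are mine, not from any particular stack). Note that a segment whose beginning is in the window but whose end is not still passes the test; a typical receiver then trims the data that extends past the window edge rather than dropping the whole segment.

/* Sketch of the RFC 793 receive-side acceptability test.
 * rcv_nxt/rcv_wnd/seg_seq/seg_len are illustrative names.
 * Sequence-number comparisons must be done modulo 2^32. */
#include <stdint.h>
#include <stdbool.h>

/* true if sequence number a lies in [b, b + len) modulo 2^32 */
static bool seq_in_range(uint32_t a, uint32_t b, uint32_t len)
{
    return (uint32_t)(a - b) < len;
}

static bool segment_acceptable(uint32_t rcv_nxt, uint32_t rcv_wnd,
                               uint32_t seg_seq, uint32_t seg_len)
{
    if (seg_len == 0)
        return rcv_wnd == 0 ? seg_seq == rcv_nxt
                            : seq_in_range(seg_seq, rcv_nxt, rcv_wnd);
    if (rcv_wnd == 0)
        return false;                                          /* no room for data */
    return seq_in_range(seg_seq, rcv_nxt, rcv_wnd)                /* start in window */
        || seq_in_range(seg_seq + seg_len - 1, rcv_nxt, rcv_wnd); /* end in window   */
}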


Which TCP window update is most recent?

I was writing a TCP implementation, did all the fancy slow and fast retransmission stuff, and it all worked so I thought I was done. But then I reviewed my packet receive function (almost half of the 400 lines total code), and realized that my understanding of basic flow control is incomplete...
Suppose we have a TCP connection with a "sender" and "receiver". Suppose that the "sender" is not sending anything, and the receiver is stalling and then unstalling.
Since the "sender" is not sending anything, the "receiver" sees no ack_no delta. So the two window updates from the "receiver" look like:
ack_no = X, window = 0
ack_no = X, window = 8K
Since both packets have the same ack_no, and they could be reordered in transit, how does the sender know which came first?
If the sender doesn't know which came first, then, after receiving both packets, how does it know whether it's allowed to send?
One guess is that maybe the window's upper endpoint is never allowed to decrease? Once the receiver has allocated a receive buffer and advertised it, it can never un-advertise it? In that case the window update could be reliably handled via the following code (assume no window scale, for simplicity):
// window update (https://stackoverflow.com/questions/63931135/)
int ack_delta = pkt_ack_no - c->tx_sn_ack;
c->tx_window = MAX(BE16(PKT.l4.window), c->tx_window - ack_delta);
if (c->tx_window)
    Net_Notify(); // wake up transmission
But this is terrible from a receiver standpoint: it vastly increases the memory you'd need to support 10K connections reliably. Surely the protocol is smarter than that?
There is an assumption that the receive buffer never shrinks, which is intentionally undocumented to create an elite "skin in the game" club in order to limit the number of TCP implementations.
The original standard says that shrinking the window is "discouraged" but doesn't point out that it can't work reliably:
The mechanisms provided allow a TCP to advertise a large window and
to subsequently advertise a much smaller window without having
accepted that much data. This, so called "shrinking the window," is
strongly discouraged.
Even worse, the standard is actually missing the MAX operation proposed in the question, and just sets the window from the most recent packet if the acknowledgement number isn't increasing:
If SND.UNA < SEG.ACK =< SND.NXT, the send window should be
updated. If (SND.WL1 < SEG.SEQ or (SND.WL1 = SEG.SEQ and
SND.WL2 =< SEG.ACK)), set SND.WND <- SEG.WND, set
SND.WL1 <- SEG.SEQ, and set SND.WL2 <- SEG.ACK.
Note that SND.WND is an offset from SND.UNA, that SND.WL1
records the sequence number of the last segment used to update
SND.WND, and that SND.WL2 records the acknowledgment number of
the last segment used to update SND.WND. The check here
prevents using old segments to update the window.
so it will fail to grow the window if packets having the same ack number are reordered.
Bottom line: implement something that actually works robustly, not what's in the standard.
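For concreteness, here is a minimal sketch of one robust approach (my own variable names, mirroring the RFC's SND.* terms): track the window's right edge, SND.UNA + SND.WND, and never let an incoming update move it backwards. This is equivalent to the MAX trick in the question and is insensitive to reordering of updates that carry the same ack number.

/* Hedged sketch: sender-side window update that never shrinks the right edge.
 * snd_una/snd_wnd correspond to the RFC's SND.UNA/SND.WND; seg_ack/seg_wnd
 * come from the incoming segment. All arithmetic is modulo 2^32. */
#include <stdint.h>

struct tx_state {
    uint32_t snd_una;   /* oldest unacknowledged sequence number  */
    uint32_t snd_wnd;   /* send window, as an offset from snd_una */
};

static void window_update(struct tx_state *s, uint32_t seg_ack, uint32_t seg_wnd)
{
    uint32_t old_right = s->snd_una + s->snd_wnd;   /* current right edge       */
    uint32_t new_right = seg_ack + seg_wnd;         /* edge this segment offers */

    if ((int32_t)(seg_ack - s->snd_una) > 0)        /* does the ACK advance?    */
        s->snd_una = seg_ack;

    if ((int32_t)(new_right - old_right) > 0)       /* keep the larger edge;    */
        old_right = new_right;                      /* never move it back       */

    s->snd_wnd = old_right - s->snd_una;
}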

Selective repeat buffer size

Why, in the selective repeat algorithm, do the sending and receiving windows not need to have the same size, while the sending and receiving buffers should be the same size?
The window size determines how many packets a sender can have outstanding and how many a receiver will accept before every packet in the window has been acknowledged and received.
In selective repeat, however, the receiver does not need to keep track of which packets it has received; it just keeps putting them, in sequence, into its buffer or into the file itself. The sender, on the other hand, must keep track of its window and its base in order to track acknowledgments (see the sketch below).
So in selective repeat, even if you don't maintain a window on the receiver side, it will still work.
The sending and receiving buffers, however, do need to be the same size, because they hold the same amount and kind of data. That is why the sending and receiving buffers must match.
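To make the sender-side bookkeeping mentioned above concrete, here is a minimal sketch (my own structure and names, not from any textbook) of what a selective repeat sender tracks: its base, the next sequence number to use, and which packets inside the window have been individually acknowledged.

/* Hedged sketch of selective-repeat sender state; N is the window size. */
#include <stdbool.h>

#define N 8                              /* assumed sender window size */

struct sr_sender {
    unsigned base;                       /* oldest unacknowledged sequence number */
    unsigned next_seq;                   /* next sequence number to send          */
    bool     acked[N];                   /* per-slot ACK flags within the window  */
};

/* Record an ACK and slide the window past any contiguously acked packets. */
static void sr_on_ack(struct sr_sender *s, unsigned seq)
{
    if (seq < s->base || seq >= s->base + N)
        return;                              /* outside the window: ignore    */
    s->acked[seq % N] = true;
    while (s->acked[s->base % N]) {          /* advance base over acked slots */
        s->acked[s->base % N] = false;
        s->base++;
    }
}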
I hope this will help.
Good Question.

SR & GBN: Out-of-window ACKs

I'm currently studying fairly basic networking, and I'm on the subject of reliable transmission. I'm using the book Computer Networking by Kurose & Ross, and two of the review questions were as follows:
With the selective-repeat/go-back-n protocol, is it possible for the
sender to receive an ACK for a packet that falls outside of its
current window?
For the SR version, my answer to the question was as follows:
Yes, if the window size is too big for the sequence number space. For
example, a receiver gets a number of packets equal to the space of the
sequence numbers. Its receive window has thus moved so that it is
expecting a new set of packets with the same sequence numbers as the
last one. The receiver now sends an ACK for each of the packets, but
all of them are lost along the way. This eventually causes the sender
to time out for each packet in the previous set and to retransmit
each of them. The receiver thinks that this duplicate set of packets
is really the new set that it is expecting, and it sends ACKs for
each of them that successfully reaches the sender. The sender now
experiences a similar kind of confusion, where it thinks that the ACKs
are confirmations that each of the old packets has been received,
when they are really ACKs meant for the new, yet-to-be-sent packets.
I'm pretty sure this is correct (otherwise, please tell me!), since this kind of scenario seems to be the classic justification of why window size should be less than or equal to half the size of the sequence number space when it comes to SR protocols, but what about GBN?
Can the same kind of wraparound issue occur for it, making the answers mostly identical? If not, are there any other cases that can cause a typical GBN sender to receive an ACK outside of its window?
Regarding the latter, the only example I can think of is the following:
A GBN sender sends packets A & B in order. The receiver receives both in order, and sends one cumulative ACK covering every packet before and up to A, and then another one covering every packet before and up to B (including A). The first one is so heavily delayed that the second one arrives first to the sender, causing its window to slide beyond A & B. When the first one finally arrives, it needlessly acknowledges that everything up to A has been correctly received, when A is already outside of the sender's window.
This example seems rather harmless and unlikely compared to the previous one, so I doubt that it's correct (but again, please correct me if I'm wrong!).
In the practical world, what about a duplicate ACK delayed long enough to fall outside the window?
The protocol is between the sender and the receiver, but it has no control over how the medium (the network path) behaves.
The protocol is still reliable by design, but the implementation must be able to handle such out-of-window duplicate ACKs.
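As an illustration (a sketch with my own names, not from any particular implementation), a go-back-n style sender can simply ignore any cumulative ACK that does not fall inside its current window:

/* Hedged sketch: GBN sender dropping out-of-window (stale/duplicate) ACKs.
 * base is the oldest unacknowledged sequence number, next_seq the next one
 * to be sent; a cumulative ACK for "ack" covers everything up to and
 * including it. */
static void gbn_on_ack(unsigned *base, unsigned next_seq, unsigned ack)
{
    if (ack < *base || ack >= next_seq)
        return;              /* outside [base, next_seq): old or duplicate, ignore */
    *base = ack + 1;         /* slide the window forward */
}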

TCP - difference between Congestion window and Receive window

I'm trying to understand the difference between the congestion window and the receive window.
As I understand it, the receive window is a buffer where the receiver can store incoming packets. The congestion window seems to be the same kind of thing: it tells us the bound of the receiver's abilities and changes according to lost packets, etc.
So what is the difference between them?
To give a short answer: the receive window is managed by the receiver, who sends out window sizes to the sender. The window sizes announce the number of bytes still free in the receiver buffer, i.e. the number of bytes the sender can still send without needing an acknowledgement from the receiver.
The congestion window is a sender-imposed window that was introduced to avoid overrunning routers in the middle of the network path. The sender, with each acknowledgement received, increases the congestion window slightly, i.e. it allows itself more outstanding sent data. But if the sender detects packet loss, it will cut the window in half. The rationale behind this is that the sender assumes the packet loss occurred because of a buffer overflow somewhere along the path (which is almost always true), so it wants to keep less data "in flight" to avoid further packet loss in the future.
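Put differently, the two windows limit the sender together: the amount of unacknowledged data in flight may not exceed either of them. A minimal sketch (the names are mine):

/* Hedged sketch: how many bytes the sender may transmit right now.
 * cwnd = congestion window (sender-imposed), rwnd = receive window
 * (advertised by the receiver), in_flight = bytes sent but not yet acked. */
static unsigned sendable_bytes(unsigned cwnd, unsigned rwnd, unsigned in_flight)
{
    unsigned limit = cwnd < rwnd ? cwnd : rwnd;    /* effective window */
    return in_flight >= limit ? 0 : limit - in_flight;
}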
For more, start here: http://en.wikipedia.org/wiki/Slow-start
Initially, CongWindow is set to one packet. The sender then sends the first packet into the network and waits for an acknowledgment. If the acknowledgment arrives before the timer runs out, the sender increases CongWindow by one packet and sends out two packets. Once both of those packets are acknowledged before their timeouts, CongWindow is increased by two, one for each acknowledged segment. CongWindow is now four packets, so the sender transmits four packets. This exponential increase continues as long as CongWindow is below the threshold and acknowledgments keep arriving before their corresponding timeouts expire.
One important difference is that CongWindow changes constantly in response to network conditions, whereas the receive window is bounded by the size of the receiver's buffer and changes only as that buffer fills and empties.
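The growth pattern described above can be sketched as follows (a simplified model counted in whole packets; the names cc_state, cc_on_ack, and cc_on_loss are mine, and real TCP counts bytes and also implements fast retransmit/fast recovery):

/* Hedged sketch of slow start and the reaction to loss, in packet units. */
struct cc_state {
    unsigned cwnd;        /* congestion window, in packets    */
    unsigned ssthresh;    /* slow-start threshold, in packets */
};

static void cc_init(struct cc_state *c)
{
    c->cwnd = 1;          /* start with one packet     */
    c->ssthresh = 64;     /* assumed initial threshold */
}

static void cc_on_ack(struct cc_state *c)
{
    if (c->cwnd < c->ssthresh)
        c->cwnd += 1;     /* +1 per ACK: cwnd doubles every round trip */
    /* at or above ssthresh: additive increase (about +1 per RTT), omitted here */
}

static void cc_on_loss(struct cc_state *c)
{
    c->ssthresh = c->cwnd / 2 ? c->cwnd / 2 : 1;   /* "cut the window in half" */
    c->cwnd = 1;                                   /* then restart slow start  */
}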

How to calculate sampleTime and sampleDuration with an Ogg file

I have created an Ogg decoder in Media Foundation.
I read some packets as a sample (compressed data); now I need to know the sample's time and duration.
I know AvgBytesPerSec, SamplesPerSec, and so on, but those parameters apply to uncompressed data.
So how can I get the IMFSample's time and duration from the compressed data?
I'll assume you know a few things before answering:
How to read the Vorbis setup packets (1st and 3rd in the stream):
Sample Rate
Decoding parameters (specifically the block sizes and modes)
How to read Vorbis audio packet headers:
Validation bit
Mode selection
How to calculate the current timestamp for uncompressed PCM data based on sample number.
How to calculate the duration of a buffer of uncompressed PCM data based on sample count.
The Vorbis Specification should help with the first two. Since you are not decoding the audio, you can safely discard the time, floor, residue, and mapping configuration after you've read it in (technically you can discard the codebooks as well, but only after you've read the floor configuration in).
Granule position and sample position are interchangeable terms in Vorbis.
To calculate the number of samples in a packet, add the current packet's block size to the previous packet's block size, then divide by 4. There are two exceptions to this: The first audio packet is empty (0 samples), and the last audio packet's size is calculated by subtracting the second last page's granule position from the last page's granule position.
To calculate the last sample position of a packet, use the following logic:
The first audio packet in the stream is 0.
The last sample position of the last full audio packet in a page is the page's granule position (this includes the last page).
Packets in the middle of a page are calculated from the page's granule position. Start at the granule position of the last full audio packet in the page, then subtract the number of samples in each packet after the one you are calculating for.
If you need the initial position of a packet, use the granule position of the previous packet.
If you need an example of how this is all done, you might try reading through this one (public domain, C). If that doesn't help, I have a from-scratch implementation in C# that I can link to.
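To tie this back to the original question: once you know a packet's sample count and its ending granule (sample) position, the IMFSample time and duration follow directly from the sample rate. A minimal sketch (my own helper names; the conversion assumes the 100-nanosecond units that Media Foundation sample times use):

/* Hedged sketch: deriving a compressed Vorbis packet's time and duration.
 * prev_blocksize/cur_blocksize come from the packet's mode, sample_rate from
 * the identification header; all names are illustrative. */
#include <stdint.h>

static uint32_t packet_samples(uint32_t prev_blocksize, uint32_t cur_blocksize)
{
    /* Per the Vorbis spec: samples produced = (previous + current) / 4.
     * The very first audio packet produces 0 samples. */
    return (prev_blocksize + cur_blocksize) / 4;
}

static int64_t samples_to_hns(uint64_t samples, uint32_t sample_rate)
{
    return (int64_t)(samples * 10000000ULL / sample_rate);   /* 100-ns units */
}

/* end_granule is the packet's ending sample position (see the rules above);
 * nsamples is the packet's sample count. */
static void packet_time_and_duration(uint64_t end_granule, uint32_t nsamples,
                                     uint32_t sample_rate,
                                     int64_t *time_hns, int64_t *duration_hns)
{
    *duration_hns = samples_to_hns(nsamples, sample_rate);
    *time_hns     = samples_to_hns(end_granule - nsamples, sample_rate);
}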
