Why, in the selective repeat algorithm, do the sending and receiving windows not need to have the same size, while the sending and receiving buffers should be the same size?
Window size means how many packets a sender can send, and how many packets a receiver can receive, before every packet in the window has been acknowledged and received.
But in the case of selective repeat, the receiver does not need to keep track of which packets it has received, because it just keeps putting them into the buffer, or into the file itself, in sequence. The sender, on the other hand, needs to keep track of its window and base so it can keep track of acknowledgments.
So in Selective Repeat, even if you don't put a window on the receiver side at all, it will still work.
The sending and receiving buffers, however, do need to be the same size, because they hold the same amount and kind of data. That's why the sending and receiving buffers must be equal in size.
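To make the sender-side bookkeeping concrete, here is a minimal sketch in Python, assuming integer sequence numbers and a made-up WINDOW_SIZE; it only illustrates why the sender must remember its base and outstanding packets in order to match incoming ACKs, and it is not a complete Selective Repeat implementation.

```python
# Minimal sender-side sliding-window bookkeeping (illustrative sketch only).
WINDOW_SIZE = 4

class SelectiveRepeatSender:
    def __init__(self):
        self.base = 0        # oldest unacknowledged sequence number
        self.next_seq = 0    # next sequence number to assign
        self.unacked = {}    # seq -> packet, kept for possible retransmission

    def can_send(self):
        return self.next_seq < self.base + WINDOW_SIZE

    def send(self, packet):
        assert self.can_send(), "window full, wait for ACKs"
        self.unacked[self.next_seq] = packet
        self.next_seq += 1

    def on_ack(self, seq):
        # Selective ACK: forget just this packet, then slide the base past
        # any consecutively acknowledged sequence numbers.
        self.unacked.pop(seq, None)
        while self.base < self.next_seq and self.base not in self.unacked:
            self.base += 1
```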
I hope this will help.
Good Question.
I'm sending 1K blocks of data using TCP/IP (using FreeRTOS + LwIP). From the documentation I understood that the TCP/IP protocol has flow control inside the stack itself, but this flow control depends on the network buffers. I'm not sure how this is handled in my scenario, which is described below.
Receive data in 1K blocks over TCP/IP from WiFi (the data rate will be around 20 Mb/s)
The received WiFi data is put into a queue of 10K size (10 blocks, each block having a size of 1K)
From the queue, each block is taken and sent to another interface at a lower rate (1 Mb/s)
So in this scenario, do I have to implement flow control manually between the WiFi data and the queue? How can I achieve this?
No, you do not have to implement flow control yourself; the TCP algorithm takes care of it internally.
Basically, what happens is that when a TCP segment is received from your sender, LwIP will send back an ACK that includes the available space remaining in its buffers (the window size). Since the data is arriving faster than you can process it, the stack will eventually send back an ACK with a window size of zero. This tells the sender's stack to back off and try again later, which it will do automatically. When you get around to extracting more data from the network buffers, the stack should re-ACK the last segment it received, only this time it opens up the window to say that it can receive more data.
What you want to avoid is something called silly window syndrome because it can have a drastic effect on your network utilisation and performance. Try to read data off the network in big chunks if you can. Avoid tight loops that fill a buffer 1-byte at a time.
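As a rough illustration of the "big chunks" advice, here is a generic Python sockets sketch (not FreeRTOS/LwIP-specific; the CHUNK size and the handle_block callback are made up): draining the receive buffer with large recv() calls lets the stack reopen the advertised window in big steps instead of a byte at a time.

```python
import socket

CHUNK = 8 * 1024  # read up to 8 KB per call rather than one byte at a time

def drain(conn: socket.socket, handle_block):
    while True:
        data = conn.recv(CHUNK)   # one call frees a big chunk of buffer space
        if not data:
            break                 # peer closed the connection
        handle_block(data)        # hand the block to the slower consumer
```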
Assume we are talking about a situation where many senders are sending packets to one receiver.
Usually the senders are the ones that control congestion, by using a sliding window that limits the sending rate.
We have:
snd_cwnd = min(cwnd, rwnd)
Using explicit or implicit feedback from the network (routers, switches), the sender adjusts cwnd to control its sending rate.
Normally, rwnd is big enough that the sender only cares about cwnd. But if we take rwnd into account and use it to limit snd_cwnd, it could make congestion control more efficient.
rwnd is the number of packets (or bytes) that the receiver is able to receive. What I'm concerned about is the capability of the senders.
Questions:
1. How does the receiver know how many flows are sending packets to it?
2. Is there any way for the receiver to know the sender's snd_cwnd?
This is all very confused.
The number of flows into a receiver isn't relevant to the rwnd of any specific flow. The rwnd is simply the amount of space left in the receive buffer for that flow.
The receiver has no need to know the sender's cwnd. That's the sender's problem.
Your statement that 'normally rwnd is always big enough that sender only cares about cwnd' is simply untrue. The receive window changes with every receive; it is re-advertised with every ACK; and it frequently drops to zero.
Your following statement 'if we consider rwnd, using it to limit cwnd ...' is simply a description of what already happens, as per 'snd_cwnd = min(cwnd, rwnd)'.
Or else it may constitute a completely unexplained proposal to needlessly modify TCP's flow control which has been working for 25 years, and which didn't work for several years before that: I remember several Arpanet freezes in the middle 1980s.
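For reference, here is a small Python sketch of the relationship that 'snd_cwnd = min(cwnd, rwnd)' describes, with made-up byte counts: the sender never has more unacknowledged data in flight than the smaller of its own cwnd and the advertised rwnd.

```python
def usable_window(cwnd, rwnd, bytes_in_flight):
    snd_wnd = min(cwnd, rwnd)          # snd_cwnd = min(cwnd, rwnd)
    return max(0, snd_wnd - bytes_in_flight)

print(usable_window(cwnd=40_000, rwnd=65_535, bytes_in_flight=30_000))  # 10000
print(usable_window(cwnd=40_000, rwnd=0, bytes_in_flight=0))            # 0: receiver closed the window
```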
I was trying to learn how TCP flow control works when I came across the concept of the receive window.
My question is, why is the TCP receive window scale-able? Are there any advantages from implementing a small receive window size?
Because as I understand it, the larger the receive window size, the higher the throughput. While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
My question is, why is the TCP receive window scale-able?
There are two questions there. Window scaling is the ability to multiply the advertised window by a power of 2 so you can have window sizes > 64k. However, the rest of your question indicates that you are really asking why it is resizeable, to which the answer is 'so the application can choose its own receive window size'.
Are there any advantages from implementing a small receive window size?
Not really.
Because as I understand it, the larger the receive window size, the higher the throughput.
Correct, up to the bandwidth-delay product. Beyond that, increasing it has no effect.
While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
Yes, up to the bandwidth-delay product (see above).
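To put a number on the bandwidth-delay product, here is a back-of-the-envelope Python calculation with made-up link figures (100 Mbit/s and a 50 ms RTT): it shows the window needed to keep the pipe full, and the throughput ceiling a fixed 64 KB window would impose.

```python
bandwidth_bps = 100_000_000          # 100 Mbit/s path
rtt = 0.050                          # 50 ms round-trip time

bdp_bytes = bandwidth_bps / 8 * rtt
print(bdp_bytes)                     # 625000 bytes: window needed to fill the pipe

window_bytes = 65_535                # classic 64 KB limit without window scaling
max_throughput_bps = window_bytes * 8 / rtt
print(max_throughput_bps)            # ~10.5 Mbit/s, far below the 100 Mbit/s link
```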
A small receive window ensures that when a packet loss is detected (which happens frequently on a high-collision network),
No it doesn't. Simulations show that if packet loss gets above a few %, TCP becomes unusable.
the sender will not need to resend a lot of packets.
It doesn't happen like that. There aren't any advantages to small window sizes except lower memory occupancy.
After much reading around, I think I might just have found an answer.
Throughput is not just a function of receive window. Both small and large receive windows have their own benefits and harms.
A small receive window ensures that when a packet loss is detected (which happens frequently on a high-collision network), the sender will not need to resend a lot of packets.
A large receive window ensures that the sender will not be idle most of the time as it waits for the receiver to acknowledge that a packet has been received.
The receive window needs to be adjustable to get the optimal throughput for any given network.
I'm currently studying fairly basic networking, and I'm on the subject of reliable transmission. I'm using the book Computer Networking by Kurose & Ross, and two of the review questions were as follows:
With the selective-repeat/go-back-n protocol, is it possible for the sender to receive an ACK for a packet that falls outside of its current window?
For the SR version, my answer to the question was as follows:
Yes, if the window size is too big for the sequence number space. For example, a receiver gets a number of packets equal to the size of the sequence number space. Its receive window has thus moved so that it is expecting a new set of packets with the same sequence numbers as the last one. The receiver now sends an ACK for each of the packets, but all of them are lost along the way. This eventually causes the sender to time out for each packet in the previous set and retransmit each of them. The receiver thinks that this duplicate set of packets is really the new set it is expecting, and it sends ACKs for each of them that successfully reaches the sender. The sender now experiences a similar kind of confusion, where it thinks that the ACKs are confirmations that each of the old packets has been received, when they are really ACKs meant for the new, yet-to-be-sent packets.
I'm pretty sure this is correct (otherwise, please tell me!), since this kind of scenario seems to be the classic justification of why window size should be less than or equal to half the size of the sequence number space when it comes to SR protocols, but what about GBN?
Can the same kind of wraparound issue occur for it, making the answers mostly identical? If not, are there any other cases that can cause a typical GBN sender to receive an ACK outside of its window?
Regarding the latter, the only example I can think of is the following:
A GBN sender sends packets A & B in order. The receiver receives both in order, and sends one cumulative ACK covering every packet before and up to A, and then another one covering every packet before and up to B (including A). The first one is so heavily delayed that the second one arrives first to the sender, causing its window to slide beyond A & B. When the first one finally arrives, it needlessly acknowledges that everything up to A has been correctly received, when A is already outside of the sender's window.
This example seems rather harmless and unlikely in contrast to the previous one, so I doubt that it's correct (but again, correct me if I'm wrong, please!).
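As a quick sanity check of the first scenario, here is a tiny Python sketch with made-up numbers (a sequence space of 4 and a window of 3, i.e. more than half the space): the receiver's next window overlaps the sequence numbers the sender may retransmit, so old duplicates become indistinguishable from new packets.

```python
SEQ_SPACE = 4
WINDOW = 3

first_window  = [i % SEQ_SPACE for i in range(WINDOW)]             # [0, 1, 2]
second_window = [(WINDOW + i) % SEQ_SPACE for i in range(WINDOW)]  # [3, 0, 1]

# If every ACK for the first window is lost, the sender retransmits 0, 1, 2.
# Any of those that also fall inside the receiver's new window look like new data:
print(sorted(set(first_window) & set(second_window)))   # [0, 1]

# With WINDOW <= SEQ_SPACE // 2 the two windows never overlap, which is the
# usual justification for the half-the-sequence-space rule.
```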
In the practical world, what about a duplicate ACK delayed long enough to fall outside the window?
The protocol is between the sender and the receiver, but it does not have control over how the media (network path) behaves.
The protocol would still be reliable by design, but the implementation must be able to handle such out-of-window duplicate ACKs.
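A hedged sketch of that last point, with a made-up sequence-number space and window size: a sender can simply discard an ACK whose sequence number does not fall inside its current window, so a long-delayed duplicate does no harm.

```python
SEQ_SPACE = 256

def in_window(ack, base, window):
    # distance from the window base, computed modulo the sequence-number space
    return (ack - base) % SEQ_SPACE < window

def handle_ack(ack, base, window):
    if not in_window(ack, base, window):
        return base          # stale or duplicate ACK outside the window: ignore it
    # ... normal processing: mark the packet acknowledged, maybe slide base ...
    return base
```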
I'm trying to understand the difference between the congestion window and the receive window.
As I understand it, the receive window is a buffer where the receiver can put incoming packets. The congestion window seems to be the same kind of thing: it tells us the bound on the receiver's abilities, and changes according to lost packets, etc.
So what is the difference between them?
To give a short answer: the receive window is managed by the receiver, who sends out window sizes to the sender. The window sizes announce the number of bytes still free in the receiver buffer, i.e. the number of bytes the sender can still send without needing an acknowledgement from the receiver.
The congestion window is a sender imposed window that was implemented to avoid overrunning some routers in the middle of the network path. The sender, with each segment sent, increases the congestion window slightly, i.e. the sender will allow itself more outstanding sent data. But if the sender detects packet loss, it will cut the window in half. The rationale behind this is that the sender assumes that packet loss has occurred because of a buffer overflow somewhere (which is almost always true), so the sender wants to keep less data "in flight" to avoid further packet loss in the future.
For more, start here: http://en.wikipedia.org/wiki/Slow-start
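As a rough Python sketch of the sender behaviour described above (the window is counted in packets for simplicity, and the numbers are made up): grow the window gradually as data is acknowledged, and cut it in half when a loss is detected.

```python
def on_ack(cwnd):
    return cwnd + 1.0 / cwnd     # roughly one extra packet per round trip

def on_loss(cwnd):
    return max(1.0, cwnd / 2)    # halve the window, but never below one packet
```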
Initially, CongWindow is set equal to one packet. The sender then sends the first packet into the network and waits for an acknowledgment. If the acknowledgment for this packet arrives before the timer runs out, the sender increases CongWindow by one packet and sends out two packets. Once all of these packets are acknowledged before their timeouts, CongWindow is increased by two, one for each of the acknowledged segments. Now the size of CongWindow is four packets, and thus the sender transmits four packets. Such an exponential increase continues as long as the size of CongWindow is below the threshold and acknowledgments are received before their corresponding timeouts expire. One important difference is that CongWindow changes in size, whereas the receive window size stays comparatively constant.
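A small Python sketch of the growth pattern described above, with CongWindow counted in packets and a made-up threshold: it doubles every round trip during slow start and then grows linearly.

```python
def cong_window_growth(threshold, rounds):
    cwnd = 1
    for _ in range(rounds):
        if cwnd < threshold:
            cwnd *= 2            # slow start: one extra packet per ACK received
        else:
            cwnd += 1            # congestion avoidance: linear growth
        yield cwnd

print(list(cong_window_growth(threshold=16, rounds=7)))  # [2, 4, 8, 16, 17, 18, 19]
```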