I have some questions about TCP receive window

I am currently digging through pcaps, and may have misunderstood some things. Are the following assumptions about the receive window true?
RWND is basically the current free space in the receive buffer. The buffer has some maximum size, and each TCP segment advertises the space currently available (MAX - receivedOctets, i.e. data received but not yet read by the application).
Assuming the previous is correct, does the MAX buffer size change during communication? Is the MAX buffer size usually constant?
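If that assumption holds, the advertised window at any instant can be sketched as follows (names like `buffer_max` and `unread_octets` are illustrative, not from any real API):

```python
# Hypothetical sketch of how a receiver computes its advertised window:
# free space = maximum buffer size minus data received but not yet read
# by the application.

def advertised_window(buffer_max: int, unread_octets: int) -> int:
    """Return the receive window the receiver would advertise."""
    return max(buffer_max - unread_octets, 0)

# Example: a 64 KiB buffer with 20 KiB still waiting for the application.
print(advertised_window(65536, 20480))  # 45056
```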

Related

Why is TCP receive window a multiple of MSS?

Why is the TCP receive window considered to be a multiple of the MSS (Maximum Segment Size)?
Wikipedia states that, in order to fully utilize packet lengths and avoid IP fragmentation, an integral multiple of the Maximum Segment Size (MSS) is generally recommended for the receive window, and the value is therefore often given only as a factor/multiple.
Here it is stated that segments exceeding the MSS size are discarded.
For alignment: a window that is a multiple of the MSS means each full-sized segment fits neatly into a NIC buffer, i.e. a region of the card's memory which the host then reads out via a port or DMA.
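The "multiple of MSS" recommendation can be sketched as a toy calculation (not any stack's actual code; the 1460-byte MSS is the usual Ethernet value):

```python
# Toy illustration: clamp an unscaled 16-bit window down to the nearest
# multiple of the MSS, as tuning guides recommend, so the window is
# always a whole number of full-sized segments.

MSS = 1460            # typical MSS for Ethernet (1500 MTU - 40 bytes of headers)
MAX_UNSCALED = 65535  # largest window expressible without window scaling

window = (MAX_UNSCALED // MSS) * MSS
print(window)  # 64240, i.e. exactly 44 full-sized segments
```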

TCP buffer in play while using boost::asio::async_write?

I understand from this discussion that boost::asio::async_write only writes data to the kernel buffers. It does not mean that the peer has received the data. But suppose I am sending big packets of, say, 200000 bytes each, and then I pull the network cable to kill the connection abruptly. Will it still keep reporting, over and over, that 200000 bytes were written into the kernel buffers for each async_write? My testing says that it doesn't: with a large buffer like 200000 bytes it gives up and does not report all bytes sent, but with a small buffer like 30-40 bytes it keeps reporting OK.
Question:
The primary point of raising this question is: is there an underlying buffer which at some point fills up, so that async_write reports it cannot write any more because the earlier scheduled data has not gone out? If so, what is the size of this underlying buffer, and can I query it from the boost::asio::ip::tcp::socket?
You can query/change the underlying system socket buffer size with the send_buffer_size socket option.
The operating system though can dynamically adjust the socket buffer size and limit its maximum size:
tcp_wmem (since Linux 2.4)
This is a vector of 3 integers: [min, default, max]. These
parameters are used by TCP to regulate send buffer sizes. TCP
dynamically adjusts the size of the send buffer from the
default values listed below, in the range of these values,
depending on memory available.
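The boost::asio send_buffer_size option corresponds to the SO_SNDBUF socket option. A minimal sketch of the equivalent query (in Python rather than C++, purely to stay self-contained and runnable):

```python
import socket

# boost::asio's send_buffer_size maps to SO_SNDBUF. This sketch queries
# and then adjusts it on a plain TCP socket; the kernel may round or
# clamp the requested value (Linux doubles it to account for
# bookkeeping overhead), so always read the value back.

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

default_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default SO_SNDBUF:", default_size)

s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 262144)  # request 256 KiB
print("after setsockopt:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

s.close()
```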

TCP buffering on Linux

I have a peripheral over USB that is sending data samples at a rate of 183 Mbit/s. I would like to send this data over Ethernet, which is limited to < 100 Mbit/s. Is it possible to send this data without overflow (i.e. missing data) by increasing the TCP socket buffer?
It also depends on the receive window size. Even if the link offers 100 Mbit/s, the sender will only push data as fast as the window advertised by the receiver allows. Without window scaling, the TCP window can only go up to 64 KB. In your case that is not sufficient: the deficit between 183 and 100 Mbit/s accumulates roughly 10 MB of backlog per second. On Windows 7 and newer Linux, TCP enables window scaling by default, which can extend the window up to 1 GB. After enabling the window scaling option, you can increase the socket buffer to a bigger size, say 50 MB, which should provide the required buffering.
The short answer is, it depends.
Increasing buffers (at the transmitter) can help if the data is bursty. If the average rate is < 100 Mbit/s (actually less, since you need to allow for network contention and overhead), then buffering can help. You can do this by increasing the size of the buffers internal to the TCP stack, or by buffering inside your application.
If the data isn't bursty, or the average is still too high, you might need to compress the data before transmission. Depending on the nature of the data, you may be able to achieve significant compression.
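The arithmetic behind both answers can be sketched as follows (the rates are the question's; the 50 MB buffer is the size suggested above):

```python
# How fast does a buffer fill when a 183 Mbit/s source feeds a
# 100 Mbit/s link? The backlog accumulates at (183 - 100) Mbit/s.

in_rate = 183e6 / 8           # bytes per second arriving from USB
out_rate = 100e6 / 8          # bytes per second the link can drain
deficit = in_rate - out_rate  # bytes of backlog added per second

print(f"backlog grows at {deficit / 1e6:.2f} MB/s")  # ~10.38 MB/s

# A 50 MB buffer therefore absorbs a sustained burst of only:
burst_seconds = 50e6 / deficit
print(f"50 MB buffer survives ~{burst_seconds:.1f} s of sustained input")
```

This makes the "it depends" concrete: buffering only works if bursts above the link rate are shorter than a few seconds.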

TCP Receiver Window size

Is there a way to change the TCP receive window size using any of the Winsock API functions? The RCVBUF value just increases the capacity of the temporary storage. I need to improve the speed of data transfer, and I thought increasing the receive window size would help, but I couldn't find a way to do it using the Winsock API. Is there a way, or should I modify the registry?
The RCVBUF value just increases the capacity of the temporary storage.
No, the RCVBUF value sets the size of the receive buffer, which is the maximum receive window. No 'just' about it. The receive window is the amount of data the sender may send, which the receiver has to be able to store somewhere ... guess where? In the receive buffer.
On Windows it was historically 8k for decades, which was far too low, and gave rise to an entire sub-industry of 'download tweaks' which just raised the default (some of them also played dangerously with other settings, which isn't usually a good idea).
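A runnable sketch of the point above (in Python rather than Winsock to stay self-contained; SO_RCVBUF is the same option name setsockopt uses on Windows). One caveat worth showing: to influence the window scale negotiated in the handshake, the buffer should be set before connecting:

```python
import socket

# SO_RCVBUF sizes the receive buffer, which caps the receive window.
# Set it BEFORE connecting: the window scale factor is negotiated in
# the SYN exchange and cannot grow afterwards.

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MiB

granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer now:", granted)  # the OS may round or clamp the request

s.close()
```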

What is the maximum window size in segments of a TCP connection?

Consider a single TCP (Reno) connection that uses a 10 Mbps link.
Assume this link does not buffer data and that the receiver's receive buffer is much larger than the congestion window.
Let each TCP segment be of size 1500 bytes and the two-way propagation delay of the connection between sender and receiver be 200 msec.
Also, assume that the TCP connection is always in congestion avoidance phase (ignore slow start).
What is the maximum window size in segments that this TCP connection can achieve?
So we know the throughput of the connection and the delay,
I think we should be able to manipulate the following formula so that we can find the window size.
Throughput = Window Size / RTT
Throughput * RTT = Window Size
10 Mbps * 200 msec = Window Size
I am not sure if this is correct. I am having a hard time finding anything else that relates in finding Window Size other than this formula.
The maximum window size in segments can be up to 2^30 / MSS, where MSS is the maximum segment size. The 2^30 = 2^16 * 2^14 comes about as Michael mentioned in his answer. If your network's bandwidth-delay product exceeds the TCP receive window size, then the window scaling option is enabled for the TCP connection; most OSes support this feature. Scaling supports up to a 14-bit multiplicative shift for the window size. You can read the following for a better explanation:
http://en.wikipedia.org/wiki/TCP_window_scale_option
http://www.ietf.org/rfc/rfc1323.txt
I think what you are asking is how much data I can get end to end on the wire. In that case you are close: Throughput * RTT [units: B/s * s] is how much the wire holds. Ignoring PMTU, packet overhead, hardware encoding, etc., Throughput * RTT / PacketSize would give you the estimate. But hold on, I used RTT; the receive window is really about how much can fit on the wire in one direction, so divide that in half.
If your implementation doesn't support window scaling then min that with 2^16. If it does then you min it with 2^30.
Packets will be dropped if the maximum sending rate exceeds the link capacity:
(max window size * size of 1 segment) / RTT = link capacity
(max window size * 1500 * 8) / (200 * 10^-3) = 10 * 10^6
You can solve this for the max window size (about 166 full segments).
We divide by the RTT because after this time an ACK will be received so the sender can send more segments without the need to increase the window size.
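Solving the equation above numerically, as a sanity check on the arithmetic (all numbers are the question's):

```python
# Max window for a 10 Mbit/s link, 1500-byte segments, 200 ms RTT:
# the window must not let the sender exceed link capacity, so
#   window_segments * segment_bits / RTT <= link_rate

link_rate = 10e6         # bits per second
segment_bits = 1500 * 8  # bits per segment
rtt = 200e-3             # seconds

window_bytes = link_rate * rtt / 8                 # bandwidth-delay product
window_segments = link_rate * rtt / segment_bits

print(f"BDP = {window_bytes:.0f} bytes")               # 250000 bytes
print(f"max window = {window_segments:.1f} segments")  # 166.7
```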