TCP buffering on Linux

I have a peripheral over USB that is sending data samples at a rate of 183 Mbit/s. I would like to send this data over Ethernet, which is limited to < 100 Mbit/s. Is it possible to send this data without overflow (i.e. missing data) by increasing the TCP socket buffer?

It also depends on the receiver window size. Even if 100 Mbit/s of bandwidth is available, the sender will only push data as fast as the window advertised by the receiver allows. Without window scaling, the TCP window can only go up to 64 KB. In your case that is not sufficient: with data arriving at 183 Mbit/s and draining at 100 Mbit/s, you accumulate (183 - 100) Mbit/s ≈ 10 MB of backlog per second. On Windows 7 and newer Linux kernels, TCP enables window scaling by default, which can extend the window to up to 1 GB. After enabling the TCP window scaling option, you can increase the socket buffer to a bigger size, say 50 MB, which should provide the required buffering.
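As a concrete sketch of this suggestion (Linux, plain POSIX sockets; the 50 MB figure comes from the answer above, everything else is illustrative), you can request a large send buffer and read back what the kernel actually granted:

    // Sketch: request a ~50 MB send buffer and verify the result.
    // Note: Linux caps the value at net.core.wmem_max and reports roughly
    // double the requested size, because it accounts for bookkeeping overhead.
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int requested = 50 * 1024 * 1024;   // 50 MB, per the answer above
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));

        int granted = 0;
        socklen_t len = sizeof(granted);
        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &granted, &len);
        std::printf("SO_SNDBUF granted: %d bytes\n", granted);  // may be far less than requested
        close(fd);
        return 0;
    }

If the granted value comes back small, the system-wide maximum (net.core.wmem_max) needs to be raised first.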

The short answer is: it depends.
Increasing buffers (at the transmitter) can help if the data is bursty. If the average rate is below 100 Mbit/s (actually somewhat less, since you need to allow for network contention and protocol overhead), then buffering can help. You can do this by increasing the size of the buffers inside the TCP stack, or by buffering inside your application.
If the data isn't bursty, or the average rate is still too high, you will need to compress the data before transmission. Depending on the nature of the data, you may be able to achieve significant compression.
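If compression is the route taken, a minimal sketch using zlib (an assumption on my part; the answer names no particular library) looks like this:

    // Sketch: compress a buffer before sending it over the ~100 Mbit/s link.
    #include <zlib.h>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<unsigned char> sample(100000, 'A');   // dummy, highly compressible data
        uLongf outLen = compressBound(sample.size());     // worst-case compressed size
        std::vector<unsigned char> out(outLen);

        // Z_BEST_SPEED favors throughput, which matters at a 183 Mbit/s input rate.
        int rc = compress2(out.data(), &outLen, sample.data(), sample.size(), Z_BEST_SPEED);
        if (rc == Z_OK)
            std::printf("compressed %zu -> %lu bytes\n", sample.size(), outLen);
        return 0;
    }

How well this works depends entirely on the data: already-compressed or noise-like samples may not shrink at all.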

Related

TCP buffer in play while using boost::asio::async_write?

I understand from this discussion that boost::asio::async_write writes data to the kernel buffers only; it does not mean that the peer has received the data. But if I am sending big packets, say 200000 bytes each, and then I pull the network cable to kill the connection abruptly, will it still keep reporting, on and on, that 200000 bytes were written into the kernel buffers for each async_write? My testing says that it doesn't: it gives up with a large buffer like 200000 bytes and does not report all bytes sent. But if it's a small buffer, like 30-40 bytes, it keeps reporting okay.
Question:
The primary point of raising this question is: is there an underlying buffer size which gets filled up at some point, so that async_write says it is not able to write any more because the earlier scheduled data has not gone out? If yes, what is the size of this underlying buffer? Can I query it from the boost::asio::ip::tcp::socket?
You can query and change the underlying system socket buffer size with the send_buffer_size socket option.
The operating system, though, can dynamically adjust the socket buffer size and limit its maximum size:
tcp_wmem (since Linux 2.4)
This is a vector of 3 integers: [min, default, max]. These
parameters are used by TCP to regulate send buffer sizes. TCP
dynamically adjusts the size of the send buffer from the
default values listed below, in the range of these values,
depending on memory available.
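
A minimal sketch of querying and changing this option through boost::asio (the 1 MB value is illustrative):

    #include <boost/asio.hpp>
    #include <iostream>

    int main() {
        boost::asio::io_context io;
        boost::asio::ip::tcp::socket socket(io);
        socket.open(boost::asio::ip::tcp::v4());   // option calls need an open socket

        boost::asio::socket_base::send_buffer_size current;
        socket.get_option(current);
        std::cout << "current send buffer: " << current.value() << " bytes\n";

        socket.set_option(boost::asio::socket_base::send_buffer_size(1 << 20));  // ask for 1 MB
        socket.get_option(current);
        std::cout << "after set_option:    " << current.value() << " bytes\n";
        return 0;
    }

As the man-page excerpt says, the value you read back is what the OS granted, not necessarily what you asked for.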

How to increase TCP bandwidth in Iperf

In iperf there is an option to set the target bandwidth, "-b 100m", but for TCP I don't see such an option in either JPERF 2.0.2 or the CLI command. Please let me know how I can increase the bandwidth for my throughput testing, since I can only receive traffic at a rate of 20 Mbit/s.
Try setting the TCP window with -w. Multiply your desired throughput by the latency to get a starting point for the window value. If you wanted to get 50 Mbit/s on a link with 40 ms RTT:
50,000,000 bits/s × 0.04 s = 2,000,000 bits ≈ 250,000 bytes
For TCP, you cannot set a target bandwidth. TCP's sending rate is regulated by flow and congestion control, which are determined by RTT and loss. For example, in the slow-start phase the sender can double the number of packets sent every RTT, while in the congestion-avoidance state the congestion window is cut in half (or reduced by about 30% in TCP CUBIC) once a loss is detected.
However, -w sets the sending/receiving window size. If your window size is too small, the total throughput may be bottlenecked by it. So it is usually worth trying a large window size, e.g. 65535. Remember that a large window size just makes sure your TCP rate is not bottlenecked by the window; it does not guarantee a large throughput.
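As a usage sketch tying the two answers together (the address and duration are placeholders), a window sized for the 50 Mbit/s / 40 ms example would be requested like this:

    iperf -c 192.0.2.1 -w 250K -t 30

iperf reports the window size it actually obtained, which may differ from the request if the OS caps socket buffer sizes.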

Why are the TCP window size and segment size increasing in Wireshark?

I submitted a text file to a server while capturing with Wireshark. My computer's window size stays steady at 17408, but the server's window size keeps increasing: 6912, 9856, 12800, ...
I want to know why the server's window size is increasing. Also, the first TCP segment carries 502 bytes of data while the other TCP segments carry 1460 bytes.
Why is the window size increasing? Why is the first TCP segment's data size different from the others?
As far as I know, the growth of the TCP window size is related to the slow-start algorithm, which is described in detail here: TCP Slow start.
Wireshark also has an option to reassemble TCP packets according to their Layer 7 payload, so the size of a displayed "packet" can in principle be anything: for example, a series of TCP packets that encapsulate one huge HTTP request can be shown as a single huge synthetic TCP packet instead of as separate fragments. This option is called TCP reassembly.
As for the segment sizes: the 1460-byte segments correspond to the maximum segment size on a standard Ethernet link (1500-byte MTU minus 40 bytes of IP and TCP headers), while a shorter segment, like the first 502-byte one, likely just carries whatever data happened to be available when it was sent.
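To make the slow-start growth concrete, here is a toy model of the congestion window doubling every RTT (all numbers are illustrative, not taken from the capture):

    #include <cstdio>

    int main() {
        const int mss = 1460;            // typical Ethernet MSS (1500-byte MTU - 40 bytes of headers)
        int cwnd = 2 * mss;              // illustrative initial window of 2 segments
        const int ssthresh = 64 * 1024;  // illustrative slow-start threshold

        for (int rtt = 1; cwnd < ssthresh; ++rtt) {
            std::printf("RTT %d: cwnd = %d bytes\n", rtt, cwnd);
            cwnd *= 2;                   // exponential growth during slow start
        }
        return 0;
    }

Note this models the sender's congestion window; the advertised window you see growing in Wireshark is the receiver's side of the same ramp-up (many stacks also grow the advertised window as the receive buffer is auto-tuned).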

TCP Receiver Window size

Is there a way to change the TCP receive window size using any of the Winsock API functions? The RCVBUF value just increases the capacity of the temporary storage. I need to improve the speed of data transfer, and I thought increasing the receiver window size would help, but I couldn't find a way to do it using the Winsock API. Is there a way to do it, or should I modify the registry?
The RCVBUF value just increases the capacity of the temporary storage.
No, the RCVBUF value sets the size of the receive buffer, which is the maximum receive window. No 'just' about it. The receive window is the amount of data the sender may send, which the receiver has to be able to store somewhere ... guess where? In the receive buffer.
On Windows it was historically 8 KB for decades, which was far too low, and gave rise to an entire sub-industry of 'download tweaks' that just raised the default (some of them also played dangerously with other settings, which isn't usually a good idea).
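A minimal Winsock sketch of what this answer describes: set SO_RCVBUF before connecting, because the window-scale factor is negotiated on the SYN, so a buffer enlarged afterwards cannot widen the scaled window (the 1 MB value is illustrative):

    #include <winsock2.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

        int rcvbuf = 1 << 20;        // ask for a 1 MB receive buffer, hence a large window
        setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char*)&rcvbuf, sizeof(rcvbuf));   // Winsock takes a char* here

        int granted = 0, len = sizeof(granted);
        getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char*)&granted, &len);
        std::printf("SO_RCVBUF: %d bytes\n", granted);

        // ... connect(s, ...) would go here; the window is negotiated at this point.

        closesocket(s);
        WSACleanup();
        return 0;
    }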

Benefit of small TCP receive window?

I was trying to learn how TCP flow control works when I came across the concept of the receive window.
My question is: why is the TCP receive window scalable? Are there any advantages to implementing a small receive window size?
Because, as I understand it, the larger the receive window size, the higher the throughput. While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer has room before sending more data. So doesn't it make sense to keep the receive window at its maximum at all times to get the maximum transfer rate?
My question is: why is the TCP receive window scalable?
There are two questions there. Window scaling is the ability to multiply the advertised window by a power of 2 so you can have window sizes > 64 KB. However, the rest of your question indicates that you are really asking why it is resizable, to which the answer is: so the application can choose its own receive window size.
Are there any advantages to implementing a small receive window size?
Not really.
Because, as I understand it, the larger the receive window size, the higher the throughput.
Correct, up to the bandwidth-delay product. Beyond that, increasing it has no effect.
While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer has room before sending more data. So doesn't it make sense to keep the receive window at its maximum at all times to get the maximum transfer rate?
Yes, up to the bandwidth-delay product (see above).
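For concreteness (numbers assumed, not from the question): on a 100 Mbit/s path with a 50 ms RTT, the bandwidth-delay product is

    100,000,000 bits/s × 0.050 s = 5,000,000 bits ≈ 625 KB

so a receive window of roughly 625 KB is the point beyond which further increases stop improving throughput.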
A small receive window ensures that when a packet loss is detected (which happens frequently on high-collision networks),
No, it doesn't. Simulations show that if packet loss gets above a few percent, TCP becomes unusable.
the sender will not need to resend a lot of packets.
It doesn't happen like that. There aren't any advantages to small window sizes except lower memory occupancy.
After much reading around, I think I might just have found an answer.
Throughput is not just a function of the receive window. Both small and large receive windows have their own benefits and harms.
A small receive window ensures that when a packet loss is detected (which happens frequently on high-collision networks), the sender will not need to resend a lot of packets.
A large receive window ensures that the sender will not sit idle most of the time while it waits for the receiver to acknowledge that a packet has been received.
The receive window needs to be adjustable to get the optimal throughput for any given network.
