Is there a way to change the TCP receive window size using any of the Winsock API functions? The SO_RCVBUF value just increases the capacity of the temporary storage. I need to improve the speed of data transfer and I thought increasing the receive window size would help, but I couldn't find a way to do it using the Winsock API. Is there a way, or should I modify the registry?
The SO_RCVBUF value just increases the capacity of the temporary storage.
No, the SO_RCVBUF value sets the size of the receive buffer, which is the maximum receive window. No 'just' about it. The receive window is the amount of data the sender may send, which the receiver has to be able to store somewhere ... guess where? In the receive buffer.
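As a concrete illustration, here is a minimal Winsock sketch (error handling omitted) that sets the receive buffer with setsockopt(SO_RCVBUF) and reads back what the stack actually granted. The 1 MB figure is an arbitrary example; note that the option generally has to be set before connect() or listen() for it to influence the window negotiated during the handshake.

    /* Minimal sketch: set the per-socket receive buffer, which caps the
       receive window. Error handling omitted for brevity. */
    #include <winsock2.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

        /* Illustrative value: a 1 MB receive buffer. */
        int rcvbuf = 1024 * 1024;
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&rcvbuf, sizeof(rcvbuf));

        /* Read it back to see what the stack actually granted. */
        int granted = 0, len = sizeof(granted);
        getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&granted, &len);
        printf("SO_RCVBUF = %d bytes\n", granted);

        closesocket(s);
        WSACleanup();
        return 0;
    }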
On Windows the default was historically 8 KB for decades, which was far too low, and gave rise to an entire sub-industry of 'download tweaks' that just raised the default (some of them also played dangerously with other settings, which isn't usually a good idea).
I'm currently working on an STM32H747XI (Portenta H7). I am programming ADC1 with DMA1 to get 16-bit data at 1 Msps.
I'm sorry, I can't share my entire code, so I will try to describe my configuration as precisely as possible.
I'm using ADC1 triggered by a 1 MHz timer. The ADC is working in continuous mode with the DMA in circular and double-buffer mode. I tried direct mode and burst with a full FIFO. I have no DMA error interrupt and no ADC overrun.
My peripherals are running, but I'm stuck on two issues. First issue: I'm filling buffers of 8192 uint16_t and sending them over USB CDC with the Arduino function USBserial.Write(buf,len). In this case the USB transfer goes right, but I have some wrong data in my buffer. The DMA increments the memory address but doesn't write, so I don't have missing samples, but the value is stale (it belongs to the old buffer).
You can see the data plot below:
transfer with buffer of 8192 samples
If I double the buffer size, this issue is fixed but another appears: the USB VCP transfer fails if the data buffer is longer than 16384 bytes. Some data gets cut off. I tried to solve this with different send sizes and delays, but it doesn't work; I still get the same kind of cut.
Here is the data plot of the same script with a longer buffer: transfer with buffer of 16384 samples (32768 bytes)
Thank you for your help. I remain available.
For a fast check, try disabling the data cache. You're probably not managing the cache correctly, or you haven't disabled caching in the memory region you're using for DMA.
Peripherals are not aware of the cache, so you must manage it manually. You also have to align buffers to cache lines in this case.
Refer to AN4839
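If you'd rather keep the D-cache enabled, the usual pattern looks roughly like the sketch below; the buffer, callback and consumer names are invented for illustration, and the 32-byte alignment matches the Cortex-M7 cache line size. (AN4839 also describes the alternative of using the MPU to make the DMA buffer region non-cacheable.)

    /* Sketch: DMA receive buffer on a Cortex-M7 (e.g. STM32H747) with the
     * D-cache left on. The buffer is aligned to the 32-byte cache line size,
     * and its size is a multiple of it, so invalidation can't clip
     * neighbouring variables. */
    #include "stm32h7xx.h"   /* pulls in core_cm7.h for the SCB_* cache helpers */

    #define BUF_LEN 8192
    __attribute__((aligned(32))) static uint16_t adc_buf[BUF_LEN];

    extern void process_samples(uint16_t *buf, uint32_t len);  /* hypothetical consumer */

    /* Call this from the DMA half/full transfer-complete interrupt. */
    void on_dma_buffer_ready(void)
    {
        /* Discard stale cache lines so the CPU reads what the DMA actually
         * wrote to RAM, not an old cached copy. (For the TX direction you
         * would SCB_CleanDCache_by_Addr() before starting the transfer.) */
        SCB_InvalidateDCache_by_Addr((uint32_t *)adc_buf, sizeof(adc_buf));

        process_samples(adc_buf, BUF_LEN);
    }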
I've been using packETH for a while and I have always wondered one thing.
When I set the packet generation speed with the Gen-b option, I realized packETH doesn't really send packets at the configured rate.
I think that when I use packETH in a virtual machine, the maximum speed tends to decrease.
Even if I set the number of packets to send to 40000000 and packets per second to 4000000, the operation doesn't finish in 10 seconds. Instead, packETH seems to try to send packets as fast as possible, can't quite reach that speed, and ends up sending more slowly, so the operation takes longer to finish.
So, what decides packETH's maximum packet generation/transfer speed?
Does it automatically adjust the maximum speed so that the receiving server can intake all the packets correctly?
Thank you so much in advance.
I've read about packETH and didn't find anything saying it is a multi-threaded packet sender, so that could be the problem. What you want is a multithreaded packet sender which can take any number of packets and send them in parallel. But first, let's focus on packETH:
Which configuration have you tried?
In the Auto mode you can choose one of five distribution modes. Except for the random mode, you can see different timings by choosing different modes. In the random mode the generator tries to be smart :). Besides timing you can also specify the amount of traffic for each stream. In the manual mode you select all of the parameters by hand.
Here is where I found it: http://packeth.sourceforge.net/packeth/GUI_version.html
For a multithreaded sender I would suggest trafgen. Some relevant features:
This one (the -n/--num option) means you don't have to worry about a packet limit:
Process a number of packets and then exit. If the number of packets is 0, then this is equivalent to infinite packets resp. processing until interrupted. Otherwise, a number given as an unsigned integer will limit processing.
And this (the -P/--cpus option) ensures parallelism:
Specify the number of processes trafgen shall fork(2) off. By default trafgen will start as many processes as CPUs that are online and pin them to each, respectively. Allowed value must be within interval [1,CPUs].
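Putting those two options together, an invocation might look something like this (the interface name and config file are placeholders):

    # Send 40000000 packets described by packets.cfg out of eth0,
    # forked across 4 CPU-pinned processes:
    trafgen --dev eth0 --conf packets.cfg --num 40000000 --cpus 4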
I'm sending 1K blocks of data using TCP/IP (FreeRTOS + LwIP). From the documentation I understood that the TCP protocol has flow control built into the stack itself, but that this flow control depends on the network buffers. I'm not sure how this should be handled in my scenario, which is described below.
Receive data in 1K blocks using TCP/IP over WiFi (the data rate will be around 20 Mb/s)
The received WiFi data is put into a queue of 10 blocks, each block having a size of 1K (10K in total)
From the queue, each block is taken and sent to another interface at a lower rate, 1 Mb/s
So in this scenario, do I have to implement flow control manually between data from wifi <-> queue? How can I achieve this?
No you do not have to implement flow control yourself, the TCP algorithm takes care of it internally.
Basically what happens is that when a TCP segment is received from your sender, LwIP will send back an ACK that includes the space remaining in its buffers (the window size). Since the data is arriving faster than you can process it, the stack will eventually send back an ACK with a window size of zero. This tells the sender's stack to back off and try again later, which it does automatically. When you get around to extracting more data from the network buffers, the stack re-ACKs the last segment it received, only this time it opens up the window to say that it can receive more data.
What you want to avoid is something called silly window syndrome, because it can have a drastic effect on your network utilisation and performance. Try to read data off the network in big chunks if you can, and avoid tight loops that fill a buffer one byte at a time.
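With LwIP's raw API you can see this mechanism directly: the advertised window only re-opens when the application calls tcp_recved(). A rough sketch, assuming a hypothetical queue_try_put() wrapping your 10-block queue; the callback is registered with tcp_recv(pcb, recv_cb):

    #include "lwip/tcp.h"
    #include "lwip/pbuf.h"

    /* Hypothetical application queue; the consumer task calls pbuf_free()
     * once a block has been forwarded to the slow 1 Mb/s interface. */
    extern int queue_try_put(struct pbuf *p);

    static err_t recv_cb(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
    {
        (void)arg;
        if (p == NULL) {              /* remote end closed the connection */
            tcp_close(pcb);
            return ERR_OK;
        }
        if (err != ERR_OK) {          /* unexpected error: drop the data */
            pbuf_free(p);
            return err;
        }
        if (!queue_try_put(p)) {
            /* Queue full: refuse the segment. LwIP keeps it and re-delivers
             * it later; because tcp_recved() is not called, the advertised
             * window stays closed and the sender backs off. */
            return ERR_MEM;
        }
        /* Accepted: re-open the receive window by the amount consumed. */
        tcp_recved(pcb, p->tot_len);
        return ERR_OK;
    }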
I have a peripheral over USB that is sending data samples at a rate of 183 Mbit/s. I would like to send this data over Ethernet, which is limited to < 100 Mbit/s. Is it possible to send this data without overflow (i.e. missing data) by increasing the TCP socket buffer?
It also depends on the receive window size. Even if there is 100 Mbit/s of link capacity, the sender will only push data according to the window available at the receiver. Without window scaling the TCP window can only go up to 64 KB. In your case that is not sufficient: with 183 Mbit/s coming in and at most 100 Mbit/s going out, the shortfall of roughly 83 Mbit/s means about 10 MB of buffering for every second of transfer. In Windows 7 and in newer Linux kernels, TCP enables window scaling by default, which can extend the window up to 1 GB. With window scaling enabled, you can increase the socket buffer to a bigger size, say 50 MB, which should provide the required buffering.
The short answer is, it depends.
Increasing buffers (at the transmitter) can help if the data is bursty. If the average rate is < 100 Mbit/s (actually less: you need to allow for network contention and overhead), then buffering can help. You can do this by increasing the size of the buffers internal to the TCP stack, or by buffering inside your application.
If the data isn't bursty, or the average is still too high, you might need to compress the data before transmission. Depending on the nature of the data, you may be able to achieve significant compression.
I was trying to learn how TCP flow control works when I came across the concept of the receive window.
My question is, why is the TCP receive window scale-able? Are there any advantages from implementing a small receive window size?
Because as I understand it, the larger the receive window size, the higher the throughput. While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
My question is, why is the TCP receive window scale-able?
There are two questions there. Window scaling is the ability to multiply the advertised window by a power of 2 so you can have window sizes > 64k (for example, a scale factor of 7 multiplies the 16-bit window field by 128, allowing windows up to about 8 MB). However, the rest of your question indicates that you are really asking why it is resizeable, to which the answer is 'so the application can choose its own receive window size'.
Are there any advantages from implementing a small receive window size?
Not really.
Because as I understand it, the larger the receive window size, the higher the throughput.
Correct, up to the bandwidth-delay product. Beyond that, increasing it has no effect.
While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
Yes, up to the bandwidth-delay product (see above).
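For intuition, the bandwidth-delay product is just link rate × round-trip time: the most data that can be in flight at once. A back-of-the-envelope calculation with made-up example numbers:

    #include <stdio.h>

    /* Bandwidth-delay product: the amount of data that can be in flight.
     * A receive window larger than this no longer improves throughput. */
    int main(void)
    {
        double bandwidth_bps = 100e6;  /* example: 100 Mbit/s link  */
        double rtt_s         = 0.050;  /* example: 50 ms round trip */

        double bdp_bytes = bandwidth_bps * rtt_s / 8.0;
        printf("BDP = %.0f bytes (~%.0f KB)\n", bdp_bytes, bdp_bytes / 1024.0);
        /* Prints: BDP = 625000 bytes (~610 KB) */
        return 0;
    }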
A small receive window ensures that when a packet loss is detected (which happens frequently on high-collision networks),
No it doesn't. Simulations show that if packet loss gets above a few %, TCP becomes unusable.
the sender will not need to resend a lot of packets.
It doesn't happen like that. There aren't any advantages to small window sizes except lower memory occupancy.
After much reading around, I think I might just have found an answer.
Throughput is not just a function of the receive window. Both small and large receive windows have their own benefits and drawbacks.
A small receive window ensures that when a packet loss is detected (which happens frequently on high-collision networks), the sender will not need to resend a lot of packets.
A large receive window ensures that the sender will not be idle most of the time while it waits for the receiver to acknowledge that a packet has been received.
The receive window needs to be adjustable to get the optimal throughput for any given network.