TcpWindowSize vs Socket Buffer Size on Windows

What is the difference between TcpWindowSize and the socket buffer sizes?
I assume TcpWindowSize can only be changed using a Registry setting, while the socket buffer sizes can be changed using the SO_SNDBUF and SO_RCVBUF socket options?

1. The TcpWindowSize Registry setting controls the maximum advertised window of the interface, as described in the MSDN documentation. Being a Registry setting, you change it in the Registry.
2. SO_RCVBUF controls the size of the socket receive buffer. This is the maximum advertised TCP window of the connection, and it is evidently subject to being overridden by (1).
3. SO_SNDBUF controls the size of the socket send buffer. It has nothing directly to do with windowing.
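For concreteness, here is a minimal sketch of setting both options with setsockopt, written against the BSD sockets API (the same option names and semantics apply in Winsock; the buffer sizes are hypothetical):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int sndbuf = 256 * 1024;   /* hypothetical sizes; tune for your link */
        int rcvbuf = 256 * 1024;

        /* SO_SNDBUF: kernel-side send buffer; no direct windowing role. */
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
            perror("SO_SNDBUF");

        /* SO_RCVBUF: kernel-side receive buffer; it bounds the TCP
         * receive window this connection can advertise. */
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
            perror("SO_RCVBUF");

        close(fd);
        return 0;
    }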

Related

What are the differences between Kernel Buffer, TCP Socket Buffer and Sliding Window

Here's my understanding of the incoming data flow in TCP/IP:
1. The kernel reads data into its buffer from the network interface.
2. The kernel copies the data from its buffer to the TCP socket buffer, where the sliding window works.
3. The program blocked in read() wakes up and copies data from the socket buffer.
I'm a little confused about where the sliding window is located, or whether it is the same thing as the socket buffer.
Linux does not handle TCP's sliding window as a separate buffer, but rather as several indices indicating how much has already been received/read. The Linux kernel packet-handling process can be described in many ways and divided into smaller parts as you go deeper, but the general flow is as follows:
1. The kernel prepares to receive data over a network interface: it allocates SKB (socket buffer, struct sk_buff) data structures and maps them to the interface's Rx DMA buffer ring.
2. When packets arrive, they fill these preconfigured buffers and notify the kernel, in an interrupt context, of the packets' arrival. In this context, the buffers are moved to a receive queue for the network stack to handle outside of interrupt context.
3. The network stack retrieves these packets and handles them accordingly, eventually arriving at the TCP layer (if they are indeed TCP packets), which in turn handles the window. See the struct tcp_sock member u32 rcv_wnd, which is reflected in tp->rcvq_space.space as the per-connection space left in the window.
4. The buffer is added to the socket receive queue and is read as stream data in tcp_recvmsg().
The important thing to remember here is that copies are the worst thing for performance. Therefore, the kernel will avoid copies wherever possible (unless absolutely necessary) and use pointers instead.
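As a side note, this per-connection accounting can be observed from userspace through the Linux-specific TCP_INFO socket option; in the kernel, tcpi_rcv_space is filled in from the rcvq_space.space value mentioned above. A minimal sketch, assuming fd is an already-connected TCP socket:

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: dump the per-connection window accounting for an
     * already-connected TCP socket. TCP_INFO is Linux-specific. */
    void dump_window_info(int fd)
    {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0) {
            printf("rcv_space:    %u\n", ti.tcpi_rcv_space);
            printf("rcv_ssthresh: %u\n", ti.tcpi_rcv_ssthresh);
        }
    }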

How to increase TCP window size

I am working on a video streaming server which streams video at a 6 Mbps rate. When checked through Wireshark, I noticed that the window size does not go above 3100 or so. For testing purposes, I connected an IP camera and checked its window size; there, I found that the window size is approximately 6100.
I increased the send buffer size of my application's TCP socket. But no luck. It actually reduced the window size to 1560 or so. Any suggestions on how to increase the window size?
My application's target recipient is a device on the LAN.
I increased the send buffer size of my application's TCP socket. But no luck. It actually reduced the window size to 1560 or so.
The receive window size is controlled by the receive socket buffer size on the receiver, so increasing the send buffer on the sender does not help.
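A minimal sketch of requesting and then verifying the receive buffer on the receiving side, assuming Linux and the BSD sockets API (the size is hypothetical; the kernel may clamp the request):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int wanted = 512 * 1024;   /* hypothetical target size */
        int actual = 0;
        socklen_t len = sizeof(actual);

        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);

        /* On Linux this typically reports 2 * wanted (bookkeeping
         * overhead), capped by the net.core.rmem_max sysctl. */
        printf("effective receive buffer: %d bytes\n", actual);

        close(fd);
        return 0;
    }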

Is TCP Buffer In Address Space Of Process Memory?

I am told to increase the TCP buffer size in order to process messages faster.
My question is: no matter what buffer I use for the TCP message (ByteBuffer, DirectByteBuffer, etc.), whenever the CPU receives an interrupt from, say, the NIC to handle a network request and read the socket data, does the OS maintain any buffer in memory outside the address space of the requesting process (i.e. the process which is listening on that socket),
or
is the network data, however the CPU receives it, always written into a buffer in the process address space only, with no buffer (including the 'Recv-Q' and 'Send-Q' of the netstat command) maintained outside the address space for this communication?
The process by which the Linux network stack receives data is a bit complicated. I wrote a comprehensive guide to the Linux network stack that explains everything you need to know starting from the device driver up to a userland program's socket receive queue.
There are many places buffers are maintained in the kernel:
The DMA ring where packets are written by the NIC after they've arrived.
References to the packets on the DMA ring are used as they are processed through the stack.
Eventually, the packet data is added to the process's receive queue, if the receive queue is not already full.
Reads from the socket pull packets from the process's receive queue.
If packet sniffing is occurring, packet data is duplicated and sent to any filters added by the packet sniffing code.
The full process of how data is moved, accounted for, and dropped (when required) is described in the blog post linked above.
Now, if you want to process messages faster, I assume you mean you want to reduce your packet-processing latency, correct? If so, you should consider using SO_BUSY_POLL, which can help reduce packet-processing latency; see the sketch after this answer.
Increasing the receive buffer just increases the number of packets that can be queued for a userland socket. To increase packet-processing throughput, you need to carefully monitor and tune each component of the network stack. You may need to use something like RPS to increase the number of CPUs processing packets.
You will also want to monitor each component of your network stack to ensure that the available buffers and CPU processing power are sufficient to handle your packet workload.
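For reference, busy polling is enabled with an ordinary setsockopt call. A minimal sketch, assuming Linux 3.11+ and a NIC driver that supports busy polling (the microsecond budget is hypothetical):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int busy_usec = 50;   /* hypothetical budget: busy-poll up to 50 us */

        /* Trades CPU for latency: a blocking read spins on the device
         * queue for up to busy_usec instead of sleeping until the next
         * interrupt. Setting this may require CAP_NET_ADMIN. */
        if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                       &busy_usec, sizeof(busy_usec)) < 0)
            perror("SO_BUSY_POLL");

        close(fd);
        return 0;
    }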
See:
http://linux.die.net/man/3/setsockopt
The options are SO_SNDBUF and SO_RCVBUF. If you directly use the C API, the call is setsockopt itself. If you use some kind of framework, look up how to set socket options. This is indeed a kernel-side buffer, not one held by your process. It determines how many bytes the kernel can hold ready for you to fetch with a call to read/receive. It also affects the flow-control mechanism of TCP.
You are being told to increase the socket send or receive buffer sizes. These are associated with the socket, in the TCP part of the kernel. See setsockopt() and SO_RCVBUF and SO_SNDBUF.

TCP receive window and congestion window

I am trying to understand TCP's advertised receive window size and how CUBIC congestion control works.
Can we set the initially advertised receive window size? I tried setting SO_RCVBUF, but it didn't have any effect.
What can change the advertised receive window during transmission? What actions/events will affect the receive window size?
What is the relation between congestion control and receive window size?
I am using Linux 3.11.
Can we set the initially advertised receive window size? I tried setting SO_RCVBUF, but it didn't have any effect.
It does. You must have done it wrong. You have to set it before connecting the socket, or, in the case of a server, on the listening socket, from which all accepted sockets will inherit it. Setting it after the connect doesn't work if window scaling is required, as that is only negotiated during the connect handshake.
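A minimal sketch of that ordering on the server side, assuming the BSD sockets API (the buffer size and port are hypothetical; error checking is omitted for brevity):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int rcvbuf = 1024 * 1024;   /* hypothetical: ask for a large window */
        struct sockaddr_in addr;

        /* Must happen BEFORE listen()/accept(): the window scale factor
         * is negotiated only in the SYN/SYN-ACK exchange. */
        setsockopt(lfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);   /* hypothetical port */
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 16);

        int cfd = accept(lfd, NULL, NULL);   /* inherits the buffer size */
        close(cfd);
        close(lfd);
        return 0;
    }

A client sets the option the same way, just before calling connect().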
What can change the advertised receive window during transmission? What actions/events will affect the receive window size?
Reading from the socket. As the application drains the receive buffer, the window re-opens.
What is the relation between congestion control and receive window size?
None, directly. The two are maintained independently; at any moment the sender simply transmits no more than the minimum of the congestion window and the advertised receive window.

How can I retrieve current TCP congestion window size from the kernel? Any command or simple script?

Is there any command or script to retrieve the current TCP congestion window of a TCP connection? Suppose some communication is going on over TCP through a network interface (e.g. eth0); is there any way to dynamically (periodically) retrieve the TCP congestion window (on the Linux platform)?
Is there any command or script to retrieve the current TCP congestion window of a network interface?
No, because there isn't any such thing. Network interfaces don't have congestion windows. Connections do.
I recommend trying tcpprobe. By dynamically loading the tcp_probe kernel module, you can get the congestion window size.
http://www.linuxfoundation.org/collaborate/workgroups/networking/tcpprobe
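As a supplement: if you own the socket, the Linux-specific TCP_INFO socket option returns the current congestion window directly, and modern iproute2 also exposes these counters for arbitrary connections via ss -ti. A minimal sketch, assuming fd is a connected TCP socket:

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: print the congestion window of a connected TCP socket
     * you own. tcpi_snd_cwnd is counted in MSS-sized segments, not
     * bytes; multiply by tcpi_snd_mss for an approximate byte value. */
    void print_cwnd(int fd)
    {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
            printf("cwnd: %u segments (mss %u bytes)\n",
                   ti.tcpi_snd_cwnd, ti.tcpi_snd_mss);
    }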
