How to increase TCP bandwidth in Iperf - tcp

In iperf we have an option to set the target bandwidth with "-b 100m", but for TCP I don't see such an option in either JPerf 2.0.2 or the CLI. Please let me know how I can increase the bandwidth for my throughput testing, since I can only receive traffic at a rate of 20 Mbps.

Try setting the TCP window with -w. Multiply your desired throughput by the latency to get a starting point for the window value. If you wanted to get 50 Mbps on a link with a 40 ms RTT:
50,000,000 bits/s * 0.04 s = 2,000,000 bits = 250,000 bytes
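As a quick sanity check of that bandwidth-delay-product math, here is a minimal sketch in Python (the server name in the comment is a placeholder):

    # Window needed to sustain a target rate: the bandwidth-delay product.
    target_bps = 50_000_000        # desired throughput: 50 Mbit/s
    rtt_s = 0.040                  # round-trip time: 40 ms

    bdp_bits = target_bps * rtt_s            # bits "in flight" on the wire
    window_bytes = bdp_bits / 8
    print(f"window: {window_bytes:.0f} bytes (~{window_bytes / 1024:.0f} KB)")
    # -> window: 250000 bytes (~244 KB), so something like:
    #    iperf -c <server> -w 250K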

For TCP, you cannot set a target bandwidth: its sending rate is regulated by flow and congestion control, which are driven by RTT and packet loss. For example, in the slow-start phase the sender doubles the number of packets in flight every RTT. In congestion avoidance, the congestion window is cut in half (or by roughly 30% in TCP CUBIC) once a loss is detected.
However, -w can set the send/receive window size. If your window size is too small, total throughput may be bottlenecked by it, so it is usually worth trying a large window, e.g. 65535. Remember that a large window only ensures your TCP rate is not bottlenecked by the window size; it does not "guarantee" high throughput.
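To illustrate why TCP's rate cannot simply be dialed up, here is a toy model of a Reno-style congestion window (deliberately simplified; real stacks such as CUBIC differ in detail, and the 128-packet loss point is made up):

    # Toy Reno-style cwnd, in packets: slow start doubles every RTT,
    # a loss halves the window, then congestion avoidance adds 1 per RTT.
    cwnd, ssthresh = 1, 1024

    for rtt in range(12):
        if cwnd >= 128:              # pretend the path drops packets here
            ssthresh = cwnd // 2     # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                # slow start
        else:
            cwnd += 1                # congestion avoidance
        print(f"RTT {rtt:2d}: cwnd = {cwnd}")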

Related

TCP buffering on Linux

I have a peripheral over USB that is sending data samples at a rate of 183 Mbit/s. I would like to send this data over Ethernet, which is limited to < 100 Mbit/s. Is it possible to send this data without overflow (i.e. missing data) by increasing the TCP socket buffer?
It also depends on the receive window size. Even if 100 Mbit/s are available, the sender will only push data as fast as the window advertised by the receiver allows. Without window scaling, the TCP window can only go up to 64 KB. In your case that is not sufficient: the input exceeds the link by 183 - 100 = 83 Mbit/s, so the buffer must absorb roughly 10 MB for every second of sustained input. On Windows 7 and newer Linux kernels, TCP enables window scaling by default, which can extend the window up to 1 GB. After enabling the TCP window scaling option, you can increase the socket buffer to a bigger size, say 50 MB, which should provide the required buffering.
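A minimal sketch of requesting a larger socket buffer in Python (SO_SNDBUF on the sending side; the 50 MB figure is the one suggested above, and on Linux the grant is capped by net.core.wmem_max):

    import socket

    BUF_SIZE = 50 * 1024 * 1024    # request a ~50 MB send buffer

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)

    # The kernel may grant less than requested (and Linux reports
    # double the value to account for bookkeeping overhead).
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    print(f"requested {BUF_SIZE}, kernel granted {granted}")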
The short answer is, it depends.
Increasing buffers (at the transmitter) can help if the data is bursty. If the average rate is below 100 Mbit/s (actually less, since you need to allow for network contention and overhead), then buffering can help. You can do this by increasing the size of the buffers inside the TCP stack, or by buffering inside your application.
If the data isn't bursty, or the average is still too high, you might need to compress the data before transmission. Depending on the nature of the data, you may be able to achieve significant compression.
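A sketch of what that could look like with Python's zlib (the payload below is a placeholder; real compression ratios depend entirely on your data):

    import zlib

    samples = bytes(range(256)) * 4096          # 1 MB of placeholder "samples"
    compressed = zlib.compress(samples, level=6)
    print(f"{len(samples)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(samples):.1%})")
    # Send `compressed` over the socket instead of the raw samples.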

Does queue length really affect latency in DCTCP?

DCTCP is a variant of TCP for data center environments. The source is here.
DCTCP uses the ECN feature of commodity switches to keep the queue length of the switch buffer around a threshold K. As a result, packet loss rarely happens, because K is much smaller than the buffer's capacity, so the buffer is almost never full.
DCTCP achieves low latency for small flows while maintaining high throughput for large flows. The reason is that when the queue length exceeds the threshold K, a congestion notification is fed back to the sender. The sender computes an estimate of the probability of congestion over time and decreases its sending rate in proportion to the extent of congestion (a sketch of this update rule follows after this question).
DCTCP states that a small queue length will decrease the latency, i.e. the transmission time of flows. I doubt that, because it is packet loss that leads to retransmission and thus high latency, and in DCTCP packet loss rarely happens.
A small queue at the switch forces senders to decrease their sending rates, so packets queue up in the TX buffers of the senders instead.
A bigger queue at the switch lets senders keep higher sending rates; instead of queueing in the TX buffers of the senders, packets now queue in the buffer of the switch.
So I think the delay with both small and big queues is still the same.
What do you think?
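(For reference, a minimal sketch of the sender-side update rule from the DCTCP paper; g = 1/16 is the gain suggested there, and the per-window mark fractions below are made up:)

    # DCTCP keeps an EWMA `alpha` of the fraction F of ECN-marked
    # packets per window, and backs off in proportion to it.
    g = 1.0 / 16
    alpha, cwnd = 0.0, 100.0

    for F in [0.0, 0.2, 0.5, 0.1, 0.0]:     # made-up mark fractions
        alpha = (1 - g) * alpha + g * F     # congestion estimate
        if F > 0:
            cwnd *= 1 - alpha / 2           # gentle, proportional decrease
        print(f"F={F:.1f}  alpha={alpha:.3f}  cwnd={cwnd:.1f}")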
The buffer in the switch does not increase the capacity of the network; it only helps to not lose too many packets if you have a traffic burst. But TCP can deal with packet loss by sending slower, which is exactly what it needs to do in case the network capacity is reached.
If you continuously run the network at the limit, the queue of the switch will be full or nearly full all the time, so you still lose packets when the queue is full. But you also increase the latency, because a packet needs some time to get from the end of the queue, where it arrived, to the front, where it will be forwarded. This latency in turn causes the TCP stack to react more slowly to congestion, which again increases congestion, packet loss, etc.
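To put a rough number on that extra latency (the values are illustrative; the formula is just queued bits divided by line rate):

    # Worst-case queueing delay = queued bytes / egress line rate.
    queue_bytes = 16 * 10**6       # e.g. a nearly full 16 MB buffer
    line_rate_bps = 1e9            # 1 Gbit/s port

    delay_s = queue_bytes * 8 / line_rate_bps
    print(f"added latency: {delay_s * 1e3:.0f} ms")   # -> 128 ms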
So the ideal switch behaves like a network cable, i.e. it does not have any buffer at all.
You might read more about the problems caused by large buffers by searching for "bufferbloat", e.g. http://en.wikipedia.org/wiki/Bufferbloat.
And when in doubt, benchmark it yourself.
It depends on queue occupancy. DCTCP aims to maintain small queue occupancy, because the authors consider queueing delay to be the main cause of long latency.
So the maximum size of the queue does not matter. Whether the maximum queue size is 16 MB or just 32 KB, if we can keep the queue occupancy always around 8 KB or some similarly small value, the queueing delay will be the same.
Read the HULL paper from NSDI 2012 by M. Alizadeh, the first author of DCTCP. HULL also aims to maintain small queue occupancy.
The reason they talk about small buffers is that data center switches are shifting from 'store and forward' buffering to 'cut-through' buffering. Just google it, and you can find documents from Cisco and related webpages.

Why are the window size and TCP segment size increasing in Wireshark?

I submit a text file to a server while capturing with Wireshark. My computer's window size is steady at 17408, but the server's window size keeps increasing: 6912, 9856, 12800, ...
I want to know why the server's window size is increasing. Also, the first TCP segment carries 502 bytes of data, while the other TCP segments carry 1460 bytes.
Why is the window size increasing? Why is the first TCP segment's data different from the others?
As far as I know, the growth of the TCP window size is related to the so-called slow-start algorithm, which is described in detail here: TCP Slow start
Also, Wireshark has an option to reassemble TCP packets according to their layer-7 payload, so the packet size shown there can, in principle, be anything.
For example, a set of TCP packets that encapsulate a huge HTTP request can be shown as one huge artificial TCP packet instead of being broken into fragments.
This option is called TCP reassembly.

TCP Congestion Window Size Too Large?

I am trying to emulate a network comprising 2 hosts and 1 switch using Mininet.
One host is a sender, continuously sending packets to the other host (the receiver) using the iperf tool.
H1----------------------------Switch--------------------------H2
-------100Mbps|0.125ms-----------100Mbps|0.125ms------
Each link between a host and the switch has a bandwidth of 100 Mbps and a delay of 0.125 ms.
Each packet sent has a size of 1.5 KB, and the switch has a buffer of 400 packets.
The delay of each link is 0.125 ms, so the RTT between H1 and H2 is 4 * 0.125 = 0.5 ms.
CWND (the congestion window) is the number of packets the sender sends in one RTT, so the throughput is computed as throughput = CWND / RTT.
Because max(throughput) < bandwidth, we get CWND < RTT * bandwidth = 0.5*10^-3 s * 100*10^6 bit/s = 50,000 bits ≈ 6.25 KB ≈ 4 packets.
But when I monitor CWND using the tcp_probe tool, it surprisingly shows a CWND that is always bigger than 200 KB (~133 packets), much bigger than what I expected.
Even though the buffer is 400 packets, the CWND should not be that large.
Please explain this to me; I'm really stuck on this problem.
Thank you!
I don't think you can calculate CWND and RTT the way you do, because you effectively argue that the time a packet stays in the switch and in the network stacks of H1 and H2 is zero.
The congestion window (CWND) is the amount of data which can be transferred without packet loss, i.e. it is increased as long as everything gets ACKed and decreased on packet loss.
According to your data, the CWND gets reduced at about 600, so packet loss starts at about 400 packets, which is the buffer size of the switch. So at that moment there are not 4 packets in transit between H1 and H2, but about 400, and the RTT is probably much larger than 0.5 ms.
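A back-of-the-envelope check with the numbers from the question (100 Mbit/s links, 0.5 ms base RTT, 1500-byte packets, 400-packet switch buffer):

    pkt_bits = 1500 * 8
    link_bps = 100e6
    base_rtt_s = 0.5e-3
    buffer_pkts = 400

    bdp_pkts = link_bps * base_rtt_s / pkt_bits        # pipe alone: ~4 packets
    queue_delay_s = buffer_pkts * pkt_bits / link_bps  # full buffer adds ~48 ms
    max_inflight = bdp_pkts + buffer_pkts              # pipe + buffer

    print(f"BDP ~{bdp_pkts:.0f} pkts, "
          f"effective RTT ~{(base_rtt_s + queue_delay_s) * 1e3:.1f} ms, "
          f"in-flight before loss ~{max_inflight:.0f} pkts")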

What is the maximum window size in segments of a TCP connection?

Consider a single TCP (Reno) connection that uses a 10 Mbps link.
Assume this link does not buffer data and that the receiver's receive buffer is much larger than the congestion window.
Let each TCP segment be of size 1500 bytes and the two-way propagation delay of the connection between sender and receiver be 200 msec.
Also, assume that the TCP connection is always in congestion avoidance phase (ignore slow start).
What is the maximum window size in segments that this TCP connection can achieve?
So we know the throughput of the connection and the delay.
I think we should be able to manipulate the following formula so that we are able to find the window size.
Throughput = Window Size / RTT
Throughput * RTT = Window Size
10 Mbps * 200 msec = Window Size
I am not sure if this is correct. I am having a hard time finding anything else that relates to finding the window size other than this formula.
The maximum window size in terms of segments can be up to 2^30 / MSS, where MSS is the maximum segment size. The 2^30 = 2^16 * 2^14 comes about as Michael mentioned in his answer. If your network's bandwidth-delay product exceeds the TCP receive window size, then the window scaling option is enabled for the TCP connection; most OSes support this feature. Scaling supports up to a 14-bit multiplicative shift of the window size. You can read the following for a better explanation:
http://en.wikipedia.org/wiki/TCP_window_scale_option
http://www.ietf.org/rfc/rfc1323.txt
I think what you are asking is how much data you can get end to end on the wire. In that case you are close. Throughput * RTT [units: B/s * s] is how much the wire holds. Ignoring PMTU, packet overhead, hardware encoding, etc., Throughput * RTT / PacketSize would give you the estimate. But hold on, I used RTT; the receive window is really about how much can fit on the wire in one direction, so divide that in half.
If your implementation doesn't support window scaling, take the min of that and 2^16. If it does, take the min with 2^30.
Packets will be dropped if the maximum sending rate exceeds the link capacity:
(max window size * size of 1 segment) / RTT = link capacity
(max window size * 1500 * 8 bits) / (200 * 10^-3 s) = 10 * 10^6 bit/s
You can solve this for the max window size.
We divide by the RTT because after this time an ACK will be received, so the sender can send more segments without needing to increase the window size.
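Plugging in the numbers (a quick check in Python):

    segment_bits = 1500 * 8
    rtt_s = 0.200
    link_bps = 10e6

    # (max_window * segment_bits) / rtt_s = link_bps
    max_window = link_bps * rtt_s / segment_bits
    print(f"max window ~ {max_window:.1f} segments")   # -> ~166.7 segments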
