TCP Connection window size - networking

A 10 Mbps congested, non-buffered link is used to send a huge file between two hosts. The receiving host has a larger buffer than the congestion window. Assume the round-trip time is 100 ms and that the TCP Reno connection is always in the congestion avoidance phase.
What is the maximum window size (in segments) that this TCP connection can achieve?
I'm using the formula W * MSS / RTT = 10 Mbps.
I have the RTT, which is 100 ms, but I'm not sure where to get the MSS (maximum segment size) to be able to solve for W.
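The MSS isn't given in the problem, so you have to assume one; 1460 bytes (Ethernet MTU minus IP and TCP headers) is the usual choice. A quick sketch of the arithmetic under that assumption:

```python
# Solve W * MSS / RTT = link rate for W.
# Assumption: MSS = 1460 bytes (typical for Ethernet); the problem doesn't state one.

link_rate = 10e6          # 10 Mbps, in bits/s
rtt = 0.100               # 100 ms round-trip time, in seconds
mss_bits = 1460 * 8       # assumed MSS, converted to bits

w = link_rate * rtt / mss_bits
print(f"W = {w:.1f} segments")   # W = 85.6 -> about 85 segments
```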

Related

Avoiding TCP delayed acknowledgment

I have an application on a microcontroller with a small TCP/IP stack. This application waits for a connection on a specific TCP port. As soon as the TCP connection is established, the microcontroller needs to send 8 KB of data. Because the TX buffer of the TCP socket is only 1 KB, I need to send the data as 8 segments. The TX buffer size can't be changed!
But now I have the problem that after every segment a delay of 200 ms occurs. I know that this is caused by the delayed ACK (which is 200 ms on Windows). But in my use case this means the whole process includes 1400 ms of delay, which is just wasted time.
Is there any way to force the PC to acknowledge the data instantly (maybe a bit in the TCP header)? I can't change anything on the PC side.
Or should I instead send two 512-byte segments in place of each 1 KB segment? Would this work around the issue? I have read that the PC will acknowledge the data as soon as a second segment arrives.
What is the right way to handle such a use case?
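For what it's worth, the trick in the second-to-last paragraph relies on the delayed-ACK rule that a receiver ACKs (at least) every second segment immediately. Here is a rough sender-side sketch in Python of the splitting idea; the microcontroller stack will look different, and one write is not guaranteed to become exactly one segment, so treat this as an assumption to verify with a packet capture:

```python
import socket

# Sketch: send each 1 KB chunk as two 512-byte writes so the receiver's
# "ACK every second segment" rule fires immediately instead of the 200 ms timer.
def send_split(sock: socket.socket, data: bytes, piece: int = 512) -> None:
    # Disable Nagle, otherwise the stack may hold the second small write
    # back until the first one is ACKed, reintroducing the delay.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for i in range(0, len(data), piece):
        sock.sendall(data[i:i + piece])
```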

How to set the ssthresh value in TCP

I am trying to exercise the TCP slow start algorithm on my Raspberry Pi. As documented in RFC 2581, it requires setting the ssthresh value greater than the congestion window (cwnd). So I changed /sys/module/tcp_cubic/parameters/initial_ssthresh to 65000 (edited as root), and cwnd was set to 10 (checked with ss -i). After these settings I tried to send a large payload of 19000 bytes from the Raspberry Pi. According to slow start, it should first send 2 packets to the destination device, then 4, then 8, and so on.
But that's not what happens on the Raspberry Pi: it sends 10 packets. Did I do something wrong? How can I get the slow start algorithm going in this case?
Thanks
When cwnd is less than ssthresh, the connection is in slow start. When cwnd becomes greater than ssthresh, the connection goes into congestion avoidance.
What you're seeing is that newer versions of Linux set the initial congestion window to 10. Before that became the default, you could raise your initial congestion window from 3 with an ip route command. I haven't tried it, but I'm guessing you can do the opposite here.
Long story short, your machine is doing slow start; it is just starting with a larger initial congestion window.
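For reference, the ip route knob mentioned above is the initcwnd route attribute in iproute2, e.g. `ip route change default via <gateway> dev eth0 initcwnd 4` (gateway and interface are placeholders for your route). Whether the kernel honors values below its built-in default of 10 is something I haven't verified, so treat lowering it the same way the answer does: as a guess worth testing.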

What is the difference between TCP Tahoe and TCP Reno

What is the difference between TCP Tahoe and TCP Reno?
Specifically, I want to know the behavior on 3 duplicate ACKs and on a timeout: what happens to cwnd, and what happens to ssthresh?
Thanks!
TCP Tahoe and Reno are two approaches to TCP congestion control that differ specifically in how they handle 3 duplicate ACKs.
Tahoe: handles 3 duplicate ACKs similarly (identically?) to a timeout. It first performs a fast retransmit. Then it sets ssthresh to half the current congestion window, resets the window size to 1, and stays in slow start.
Reno: the successor to Tahoe, it goes into fast recovery mode upon receiving three duplicate ACKs, likewise setting ssthresh to half the current congestion window. For each successive duplicate ACK (fourth, fifth, sixth, ...), cwnd increases by 1. Once the sender finally receives an ACK for the missing packet, TCP moves to congestion avoidance, or back to slow start upon a timeout.
Source: TCP congestion control - TCP Tahoe and Reno
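A toy sketch of the state changes described above, just to make the two reactions concrete (illustrative only; real stacks track flight size, recovery exit points, and more):

```python
# Toy model of Tahoe vs. Reno reactions to loss signals (units: segments).

def on_timeout(cwnd: int) -> tuple[int, int]:
    # Both variants: ssthresh = half the window, then back to slow start.
    return 1, max(cwnd // 2, 2)

def on_three_dup_acks(cwnd: int, variant: str) -> tuple[int, int]:
    ssthresh = max(cwnd // 2, 2)
    if variant == "tahoe":
        # Treated like a timeout: fast retransmit, cwnd = 1, slow start.
        return 1, ssthresh
    # Reno: fast recovery; window set to ssthresh plus the 3 dup ACKs
    # already seen, and each further dup ACK would add 1 more.
    return ssthresh + 3, ssthresh

print(on_three_dup_acks(16, "tahoe"))  # (1, 8)
print(on_three_dup_acks(16, "reno"))   # (11, 8)
print(on_timeout(16))                  # (1, 8)
```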

TCP congestion window size in slow start phase

I have a question about the growth rate of the TCP sender's congestion window during the slow start phase.
Traditionally, cwnd grows exponentially, doubling every RTT. For instance, if the initial cwnd value is 1, it increases as 1 -> 2 -> 4 -> 8 -> 16 -> ....
In my case, since the sender uses Linux kernel 3.5, the initial cwnd is 10.
I expected cwnd to increase as 10 -> 20 -> 40 -> ... without delayed ACKs (I turned them off at the receiver). However, when the receiver downloads a large (over 1 MB) object from the sender over HTTP, cwnd increases as 10 -> 12 -> 19 -> 29 -> .... I cannot understand this sequence.
I set the RTT to 100 ms and the link bandwidth is high enough. There is no loss during a session. I estimated the sender's cwnd by counting the number of packets the receiver received within one RTT.
Does anyone have idea for this behavior?
Thanks.
The default congestion control algorithm in kernel 3.5 is not TCP Reno, but CUBIC.
CUBIC behaves differently. It stems from BIC, and as far as I know CUBIC shares BIC's slow start phase. Just look at the BIC and CUBIC code in /usr/src/yourkernelname/net/ipv4/tcp_cubic.c.
Moreover, delayed ACKs MAY result in such sequences. Remember, 'traditional' TCP congestion control behavior means TCP Reno without delayed ACKs or SACK.
Note that even the current 'TCP Reno' in Linux is not plain Reno but NewReno (Reno with SACK).
Check the delayed ACK and selective ACK options using the sysctl command.
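To make that last suggestion concrete (the specific knobs here are my addition, not part of the original answer): `sysctl net.ipv4.tcp_congestion_control` shows the active algorithm, and `sysctl net.ipv4.tcp_sack` shows whether SACK is enabled. Delayed ACK has no global sysctl on Linux; it is disabled per socket with TCP_QUICKACK, which only holds until the next ACK decision and so has to be re-armed, roughly like this:

```python
import socket

# Receiver-side sketch: keep delayed ACK off by re-arming TCP_QUICKACK
# around every read (on Linux the option resets itself after use).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("192.0.2.10", 8080))   # placeholder server address
while True:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
    chunk = sock.recv(4096)
    if not chunk:
        break
```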

What effects can inconsistent latency have on TCP applications?

I am testing a GNU Radio program which can tunnel TCP traffic over a wireless link. We are having some strange results in testing, and while looking for a culprit I became curious about inconsistent latency.
How can inconsistent latency affect TCP applications? By inconsistent I mean widely different RTTs for ACKs on a connection. For a while, ACKs seem to arrive at a normal rate; then they disappear, and we see retransmissions followed by the 'delayed' ACK.
For instance, say the first several ACKs received have a similar RTT. What would happen when the next ACK isn't received within twice the RTT of the previous ACKs? Whatever the issue is, I see lots of retransmissions after a long wait for an ACK.
Now, more specifically, how can RTTs for ACKs which bounce between fast and slow affect a TCP connection?
Having said that, is there any way to tune the IP stack to handle this environment better?
TCP maintains a smoothed RTT (SRTT) to tell it how fast the intervening network is, i.e. how fast it can transmit. If the SRTT goes up, TCP slows down; if the SRTT goes down, TCP speeds up. If the actual RTT goes up and down violently, TCP may not react quickly enough, due to the smoothing, and may transmit too fast. That causes packet loss, which in turn causes retransmission, which wastes the bandwidth used by the lost packets. RTT smoothing is done via exponential decay with a gain of (I think) 0.2, so the old SRTT value has four times the weight of the current RTT when computing the new SRTT value.
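The smoothing described here is the classic exponentially weighted moving average, SRTT_new = (1 - alpha) * SRTT_old + alpha * RTT_sample; with alpha = 0.2 the old value carries four times the weight of the new sample (RFC 6298 actually specifies alpha = 1/8, which smooths even harder). A minimal sketch of how SRTT lags violently swinging RTTs:

```python
# EWMA smoothing of RTT samples: srtt = (1 - alpha) * srtt + alpha * sample.
# alpha = 0.2 per the answer above; RFC 6298 specifies alpha = 1/8.

def update_srtt(srtt: float, sample: float, alpha: float = 0.2) -> float:
    return (1 - alpha) * srtt + alpha * sample

srtt = 100.0
for sample in [100, 400, 100, 400, 100, 400]:   # wildly alternating RTTs (ms)
    srtt = update_srtt(srtt, sample)
    print(f"RTT sample {sample:3d} ms -> SRTT {srtt:6.1f} ms")
```

Running this shows SRTT drifting between roughly 150 and 220 ms rather than tracking either extreme, which is exactly the lag the answer blames for mistimed retransmissions.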
