I am interested in TCP congestion control algorithms and am looking through the Linux source code. Why does Linux use the segment as the unit of the TCP cwnd? What is the difference between using segments and using bytes as the cwnd unit?
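For what it's worth, here is a rough illustration of the two conventions (not actual kernel code; the struct below is only a sketch, though the field names snd_cwnd and mss_cache do exist in Linux's struct tcp_sock). Linux counts the window in segments and multiplies by the MSS when it needs bytes; a byte-counting stack would store the window in bytes directly.

    /* Sketch: segment-counted vs byte-counted congestion window. */
    struct conn_sketch {
        unsigned int snd_cwnd;   /* congestion window in segments (Linux convention) */
        unsigned int mss_cache;  /* current maximum segment size, in bytes           */
    };

    /* A byte-counting stack would store this value directly instead. */
    static unsigned int cwnd_in_bytes(const struct conn_sketch *c)
    {
        return c->snd_cwnd * c->mss_cache;
    }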
Related
I have read a lot of blogs about the TCP protocol. They all mention congestion avoidance and say that once a timeout occurs, TCP sets ssthresh to half of its previous value. But none of them explain how ssthresh is increased. I am curious when and how ssthresh increases.
My assumption is that every TCP connection has its own ssthresh, and that there is a default value from which each connection's ssthresh starts. I don't know if I am right, or if there is some other mechanism?
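For reference, each Linux connection does start with its own ssthresh, and the initial value is effectively unlimited (TCP_INFINITE_SSTHRESH), so the first slow start is ended by loss rather than by a default threshold. The decrease the blogs describe is roughly the RFC 5681 rule; a minimal sketch in segments (not the actual Linux code):

    /* Sketch of the classic loss reaction (RFC 5681 style, in segments):
     * ssthresh becomes half of the data in flight, but never less than 2. */
    static unsigned int ssthresh_after_loss(unsigned int flight_size_segments)
    {
        unsigned int half = flight_size_segments / 2;
        return half > 2 ? half : 2;
    }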
What is the difference between TCP Tahoe and TCP Reno?
What I want to know is their behavior on 3 duplicate ACKs and on a timeout:
what happens to cwnd, and what happens to ssthresh?
Thanks!
TCP Tahoe and Reno are two ways of handling TCP congestion control, specifically when it comes to receiving 3 duplicate ACKs.
Tahoe: handles 3 duplicate ACKs similarly (identically?) to a timeout. It first performs a fast retransmit. Then it sets ssthresh to half of the current congestion window, sets the congestion window to 1, and goes back into slow start.
Reno: the successor to Tahoe. It enters fast recovery upon receiving three duplicate ACKs, setting ssthresh to half of the current congestion window. For each subsequent duplicate ACK (fourth, fifth, sixth, ...), cwnd increases by 1. Once the sender finally receives an ACK for the missing segment, TCP moves to congestion avoidance, or back to slow start if a timeout occurs.
Source: TCP congestion control - TCP Tahoe and Reno
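A hedged sketch of the two textbook reactions described above, counted in segments (RFC 5681 / RFC 6582 style; real implementations differ in detail):

    struct cc_state {
        unsigned int cwnd;      /* congestion window, in segments     */
        unsigned int ssthresh;  /* slow start threshold, in segments  */
    };

    static unsigned int half_with_floor(unsigned int cwnd)
    {
        unsigned int half = cwnd / 2;
        return half > 2 ? half : 2;
    }

    /* Tahoe: treat 3 duplicate ACKs like a timeout. */
    static void tahoe_on_triple_dupack(struct cc_state *s)
    {
        s->ssthresh = half_with_floor(s->cwnd);
        s->cwnd = 1;                      /* back to slow start */
    }

    /* Reno: enter fast recovery instead of restarting slow start. */
    static void reno_on_triple_dupack(struct cc_state *s)
    {
        s->ssthresh = half_with_floor(s->cwnd);
        s->cwnd = s->ssthresh + 3;        /* inflate by the 3 duplicate ACKs */
    }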
A 10 Mbps congested, non-buffered link is used to send a huge file between two hosts. The receiving host has a buffer larger than the congestion window. Assume the turnaround time (RTT) is 100 ms and that the TCP Reno connection is always in the congestion avoidance phase.
What is the maximum window size (in segments) that this TCP connection can achieve?
I'm using the formula W * MSS / RTT = 10 Mbps.
I have the RTT, which is 100 ms, but I'm not sure where to get the MSS (maximum segment size) to be able to solve for W.
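A worked version of that formula, assuming a typical Ethernet MSS of 1460 bytes (the MSS value is an assumption; the exercise does not state it):

    #include <stdio.h>

    int main(void)
    {
        double bandwidth_bps = 10e6;  /* 10 Mbps link                 */
        double rtt_s         = 0.1;   /* 100 ms turnaround time (RTT) */
        double mss_bytes     = 1460;  /* assumed Ethernet MSS         */

        /* Solve W from W * MSS / RTT = bandwidth. */
        double w = bandwidth_bps * rtt_s / (mss_bytes * 8.0);
        printf("W ~= %.1f segments\n", w);  /* about 85.6, so ~85 full segments */
        return 0;
    }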
As far as I know, the absolute limit on TCP/IP packet size is 64K (65,535 bytes), and in practice this is far larger than any packet you will actually see, because the lower layers (e.g. Ethernet) have smaller frame sizes. The MTU (Maximum Transmission Unit) for Ethernet, for instance, is 1500 bytes.
I want to know: is there any way, or any tool, to send packets larger than 64K?
I want to test how a device behaves when it receives a packet larger than 64K. I mean, if I send a packet larger than 64K, how does it behave? Does it drop part of it? Or something else?
So:
1. How can I send such large packets? What is the proper layer for this?
2. How does the receiver usually behave?
The IP packet format has only 16 bits for the size of the packet, so you will not be able to create a packet larger than 64K. See http://en.wikipedia.org/wiki/IPv4#Total_Length. Since TCP uses IP as its lower layer, this limit applies to TCP too.
There is no such thing as a TCP packet. TCP data is sent and received in segments, which can be as large as you like up to the limits of the API you're using, since they can be made up of multiple IP packets. At the receiver, TCP is indistinguishable from a byte stream.
NB: OSI has nothing to do with this, or with anything else.
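To see this in practice, here is a minimal sketch (address and port are placeholders) that hands the kernel a 1 MiB buffer in a single send() call; TCP chops it into MSS-sized segments on the wire, and the receiver just reads a byte stream:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 1 << 20;            /* 1 MiB, far beyond any single IP packet */
        char *buf = calloc(1, len);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(5000);                      /* placeholder port    */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

        if (connect(fd, (struct sockaddr *)&dst, sizeof dst) == 0) {
            /* One API call; the kernel segments it into many IP packets. */
            ssize_t sent = send(fd, buf, len, 0);
            printf("queued %zd bytes with a single send()\n", sent);
        }

        close(fd);
        free(buf);
        return 0;
    }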
TCP segments themselves are not size-limited. What imposes the limit is that IPv4 and IPv6 packets have 16-bit length fields, so a larger size cannot be expressed.
However, RFC 2675 is a Proposed Standard for IPv6 (jumbograms) that provides a 32-bit payload length, allowing much larger TCP segments.
See here for a talk about why this change could help improve performance, and here for a set of (experimental) patches to Linux implementing this RFC.
I have a question about the growth rate of the TCP sender's congestion window during the slow start phase.
Traditionally, cwnd grows exponentially every RTT. For instance, if the initial cwnd is 1, it increases 2 -> 4 -> 8 -> 16 -> ....
In my case, since the sender runs Linux kernel 3.5, the initial cwnd is 10.
I expected cwnd to increase as 10 -> 20 -> 40 -> ... without delayed ACKs (I turned them off at the receiver). However, when the receiver downloads a large object (over 1 MB) from the sender over HTTP, cwnd increases as 10 -> 12 -> 19 -> 29 -> .... I cannot understand this sequence.
I set the RTT to 100 ms and the link bandwidth is high enough. There is no loss during the session. I estimated the sender's cwnd by counting the number of packets the receiver received within one RTT.
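(Side note: on Linux the sender's cwnd can also be read directly with the TCP_INFO socket option instead of counting packets at the receiver; a minimal sketch, assuming fd is the sender's connected socket:)

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void print_cwnd(int fd)
    {
        struct tcp_info info;
        socklen_t len = sizeof info;

        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
            printf("snd_cwnd = %u segments, ssthresh = %u\n",
                   info.tcpi_snd_cwnd, info.tcpi_snd_ssthresh);
    }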
Does anyone have an idea about this behavior?
Thanks.
The default congestion control algorithm in kernel 3.5 is not TCP Reno but CUBIC.
CUBIC behaves differently. It stems from BIC, and as far as I know CUBIC shares BIC's slow start behavior. Just look at the code of BIC and CUBIC in /usr/src/yourkernelname/net/ipv4/tcp_cubic.c.
Moreover, delayed ACKs MAY result in such sequences. Remember, the 'traditional' TCP congestion control behavior is TCP Reno without delayed ACKs or SACK.
Note that even the current TCP 'reno' in Linux is not classic Reno but NewReno (with SACK support in the stack).
Check the selective ACK setting with the sysctl command (net.ipv4.tcp_sack); delayed ACKs are a per-socket behavior (see the TCP_QUICKACK option) rather than a global sysctl.
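For the per-socket view, a hedged example of checking which congestion control algorithm a socket uses and suppressing delayed ACKs from user space:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* Which congestion control module governs this socket ("cubic" by
         * default on kernel 3.5, "reno" only if configured explicitly). */
        char cc[16];
        socklen_t len = sizeof cc;
        memset(cc, 0, sizeof cc);
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, &len) == 0)
            printf("congestion control: %s\n", cc);

        /* Ask the stack to ACK immediately: this suppresses delayed ACKs
         * on the receiving side for a while; it is not a permanent setting. */
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof one);

        close(fd);
        return 0;
    }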