Determining the time to receive an object using TCP

The question I am trying to figure out is:
In this problem we consider the delay introduced by the TCP slow-start
phase. Consider a client and a Web server directly connected by one
link of rate R. Suppose the client wants to retrieve an object whose
size is exactly equal to 15S, where S is the maximum segment size
(MSS). Denote the round-trip time between client and server as RTT
(assumed to be constant). Ignoring protocol headers, determine the
time to retrieve the object (including TCP connection establishment)
when
1. 4S/R > S/R + RTT > 2S/R
2. 8S/R > S/R + RTT > 4S/R
3. S/R > RTT
I have the solution already (it's a problem from a textbook), but I do not understand how they got to the answer.
1. RTT + RTT + S/R + RTT + S/R + RTT + 12S/R = 4·RTT + 14·S/R
2. RTT + RTT + S/R + RTT + S/R + RTT + S/R + RTT + 8S/R = 5·RTT + 11·S/R
3. RTT + RTT + S/R + RTT + 14S/R = 3·RTT + 15·S/R
and here is the image that goes with the answer:
What kind of makes sense to me: each scenario is one where the RTT is more or less than the time it takes to transmit a certain number of segments. So for the first one, the RTT is somewhere between S/R and 3S/R seconds. From there I don't understand how slow start is operating. I thought it just increases the window size for every acknowledged packet. But, for example in the solution to #1, only two packets appear to be sent and ACKed, and yet the window size jumps to 12S? What am I missing here?

Yes, the answer is correct.
Slow start doubles the number of MSS each round, so starting from 1, then 2, then 4, then 8...
To understand the figure, think of it this way: each time one MSS is acknowledged, 2 MSS are sent.
In your example: when the first green MSS is acknowledged, 2 blue MSS are sent, and when the second MSS is acknowledged, 2 more blue MSS are sent.
When the number of MSS increases, you won't be waiting a full RTT, because while the acknowledgments are travelling back, other MSS are being sent at the same time.

The trick is to understand that if 3S/R > RTT > S/R, then while the server is busy sending 4 packets (the blue ones), it will receive an ACK before it has finished sending the 4th one. So the rest of the time is 1 RTT for (the previous ACK and) the packets to travel, plus n · S/R, where n is the number of packets sent one after another without waiting.
Now, before the server gets to sending 4 packets, there is 1 RTT for connection establishment, 1 RTT + S/R until the client has sent the 1st ACK, and 1 RTT + S/R until the client has sent the 2nd ACK. When the server receives the 2nd ACK (and the 3rd in very short order), it sends the rest of the packets (the blue and purple ones) together, because it will not be waiting for an ACK, since RTT < 3S/R. You can do a similar analysis for the other parts.
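To make the bookkeeping concrete, here is a minimal Python sketch of this model (my own illustration, not part of the textbook solution). It assumes an initial window of 1 MSS that doubles every round, folds connection establishment and the request into the leading 2·RTT, and adds a stall whenever a window is fully transmitted before the ACK of its first segment returns. The values of S, R and RTT at the bottom are made up just to land in the three cases.

```python
def retrieval_time(S, R, RTT, num_segments=15):
    """Slow-start latency: 2*RTT of setup plus total transmission time,
    plus a stall whenever a window drains before its first ACK returns."""
    total = 2 * RTT + num_segments * S / R
    sent = 0
    cwnd = 1                              # window size in segments; doubles every round
    while sent < num_segments:
        sent += cwnd
        if sent < num_segments:           # more rounds follow, so a stall is possible
            total += max(0.0, S / R + RTT - cwnd * S / R)
        cwnd *= 2
    return total

# Made-up numbers with S/R = 1, so the three cases become RTT = 2, 5 and 0.5.
S, R = 1000.0, 1000.0
for RTT in (2.0, 5.0, 0.5):
    print(f"RTT = {RTT}: {retrieval_time(S, R, RTT)}")
```

Running it reproduces the three closed-form answers above: 4·RTT + 14·S/R, 5·RTT + 11·S/R and 3·RTT + 15·S/R.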

Related

Maximum throughput meaning

Host A is sending data to host B over a full-duplex link. A and B are using the sliding window protocol for flow control. The send and receive window sizes are 5 packets each. Data packets (sent only from A to B) are all 1000 bytes long and the transmission time for such a packet is 50 µs. Acknowledgment packets (sent only from B to A) are very small and require negligible transmission time. The propagation delay over the link is 200 µs. What is the maximum achievable throughput in this communication?
This question was asked in GATE. My question: I have calculated it, but what is the meaning of the word 'maximum'? The calculation was just for throughput. How would one calculate minimum throughput?
I think maximum means assuming no packet loss and therefore no retries. Also, no additional transmission time above the 50 µs. Basically, given the above transmission time and propagation delay, how many bytes can be sent and acknowledged per second?
My intuition is to figure out how long it takes to send 5 packets to fill up the window with the propagation delay added. Then add the time for the acknowledgement for the first packet to arrive at the sender. That's your basic window send and acknowledgement time because as soon as the acknowledgement arrives the window will slide forward by one packet.
Since the window is 5 packets and each packet is 1,000 bytes, the maximum throughput should be 5,000 bytes divided by the time you calculated for the above cycle.
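For what it's worth, here is a small Python sketch of that cycle (numbers plugged in from the question, and assuming the window can slide as soon as the ACK of the first packet returns):

```python
PACKET_BYTES = 1000      # data packet size
T_TRANS = 50e-6          # transmission time per data packet, in seconds
T_PROP = 200e-6          # one-way propagation delay, in seconds
WINDOW = 5               # window size in packets

# The window can be reused when the first ACK arrives (T_TRANS + 2*T_PROP);
# if the sender drains the whole window before that, it stalls and waits.
cycle = max(WINDOW * T_TRANS, T_TRANS + 2 * T_PROP)
throughput = WINDOW * PACKET_BYTES / cycle
print(f"cycle = {cycle * 1e6:.0f} us, max throughput = {throughput / 1e6:.2f} MB/s")
```

With the question's numbers this gives a 450 µs cycle and roughly 11.1 MB/s.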

Why is there a TCP congestion window inflation during fast recovery?

The TCP fast recovery algorithm is described as follows (from TCP/IP Illustrated, Vol. 1). What I can't understand is: in step 1, why is there a cwnd inflation by three times the segment size?
1. When the third duplicate ACK is received, set ssthresh to one-half the current congestion window, cwnd. Retransmit the missing segment. Set cwnd to ssthresh plus 3 times the segment size.
2. Each time another duplicate ACK arrives, increment cwnd by the segment size and transmit a packet (if allowed by the new value of cwnd).
3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh. This should be the ACK of the retransmission from step 1, one round-trip time after the retransmission. Additionally, this ACK should acknowledge all the intermediate segments sent between the lost packet and the receipt of the first duplicate ACK. This step is congestion avoidance, since we're slowing down to one-half the rate we were at when the packet was lost.
From [RFC 2001][1]
When the third duplicate ACK in a row is received, set ssthresh to one-half the current congestion window, cwnd, but no less than two segments. Retransmit the missing segment. Set cwnd to ssthresh plus 3 times the segment size. This inflates the congestion window by the number of segments that have left the network and which the other end has cached.
So, when you receive 3 duplicate ACKs in a row, you set ssthresh to half of cwnd and perform a fast retransmit; from then on you are trying not to just sit idle while waiting for the next new ACK (1 RTT at best). Once you enter fast recovery, you send new data with
cwnd = ssthresh + (number of duplicate ACKs received) × segment size
until either you receive the ACK you were waiting for or the timer for that ACK expires.
Basically, that "+3" accounts for the 3 duplicate ACKs that made you enter fast recovery in the first place: each duplicate ACK means one more segment has left the network and is sitting in the receiver's buffer, so inflating cwnd lets you put that many new segments into the network in their place.
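As a toy illustration (my own sketch, with cwnd and ssthresh counted in segments and an arbitrary starting cwnd of 10), here is the cwnd bookkeeping from those RFC 2001 steps in Python:

```python
def fast_recovery_trace(cwnd, extra_dup_acks):
    """Trace cwnd from the third duplicate ACK until new data is ACKed."""
    events = []
    ssthresh = max(cwnd // 2, 2)   # halve cwnd, but no less than 2 segments
    cwnd = ssthresh + 3            # inflate by the 3 dup ACKs already received
    events.append(("3rd dup ACK, fast retransmit", cwnd, ssthresh))
    for i in range(extra_dup_acks):
        cwnd += 1                  # each further dup ACK: one more segment left the network
        events.append((f"dup ACK #{i + 4}", cwnd, ssthresh))
    cwnd = ssthresh                # ACK of new data: deflate back to ssthresh
    events.append(("ACK of new data", cwnd, ssthresh))
    return events

for event, cwnd, ssthresh in fast_recovery_trace(cwnd=10, extra_dup_acks=4):
    print(f"{event:30s} cwnd={cwnd:2d}  ssthresh={ssthresh}")
```

The "+3" shows up as the jump from ssthresh (5) to 8 right after the fast retransmit.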
[1]: https://www.rfc-editor.org/rfc/rfc2001

How to estimate the total time to complete the request in UDP and TCP (Distributed Systems)

I have stumbled upon a question where I can't figure out how the answer was arrived at. I will post the question and answer below.
Consider a distributed system that has the following characteristics:
* Latency per Packet (Local or remote, incurred on both send and receive): 5 ms.
* Connection setup time (TCP only): 5 ms.
* Data transfer rate: 10 Mbps.
* MTU: 1000 bytes.
* Server request processing time: 2 ms
Assume that the network is lightly loaded. A client sends a 200-byte request message to
a service, which produces a response containing 5000 bytes. Estimate the total time to
complete the request in each of the following cases, with the performance assumptions listed
below:
1) Using connectionless (datagram) communication (for example, UDP);
Answer : UDP: 5 + 2000/10000 + 2 + 5(5 + 10000/10000) = 37.2 milliseconds
We were not given any formula, so I have trouble figuring out what the numbers in the above calculation actually mean.
2000/10000 - I think 10000 has to be 10 Mbps × 1000, I just don't know what 2000 means.
5(5 + 10000/10000) - I know that this has to be multiplied by 5 because the MTU is 1000 bytes, but I just don't know what the numbers mean.
Thank you, looking forward to your ideas.
For 2000/10000, I guess that 2000 means the request message size in bits. Strictly, the request message size should be 1600 bits, since 200 bytes = 200 × 8 bits; I guess the answer rounds 1600 up to 2000 for simplicity.
For 5(5 + 10000/10000): first, MTU is short for Maximum Transmission Unit, the largest packet size that can be communicated in the network. The response message is 5000 bytes while the MTU is 1000 bytes, so the response is divided into 5 packets of 1000 bytes each.
Since this is connectionless communication, there is no pipelining; there is only one packet on the link at a time. Thus, for each packet, the time to send it back is 5 + 10000/10000 (strictly, it should be 8000/10000, since the MTU is 1000 × 8 bits; again, I guess it is rounded to 10000 for simplicity). So sending back all 5 packets takes 5(5 + 10000/10000).
Here is how I calculate it for UDP and TCP:
Total transmission time (UDP) = transmission time for request message packet
                              + server request processing time
                              + transmission time for response message
Total transmission time (TCP) = connection setup time
                              + transmission time for request message packet
                              + server request processing time
                              + transmission time for response message
Note: this might be specific to the type of parameters given in the question. This is just one iteration of the answer.
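For what it's worth, here is a small Python check of both totals, using the same rounded figures as the model answer (2000 bits for the 200-byte request, 10000 bits per MTU-sized packet, and the rate expressed in bits per millisecond); the TCP figure simply adds the connection setup time on top of the UDP total, per the formula above:

```python
LATENCY_MS = 5              # per-packet latency, each direction
SETUP_MS = 5                # TCP connection setup time
PROCESSING_MS = 2           # server request processing time
RATE_BITS_PER_MS = 10_000   # 10 Mbps = 10,000 bits per millisecond

REQUEST_BITS = 2_000        # 200-byte request, rounded up as in the answer
PACKET_BITS = 10_000        # 1000-byte MTU packet, rounded up as in the answer
RESPONSE_PACKETS = 5        # 5000-byte response split into 5 MTU-sized packets

request_time = LATENCY_MS + REQUEST_BITS / RATE_BITS_PER_MS
response_time = RESPONSE_PACKETS * (LATENCY_MS + PACKET_BITS / RATE_BITS_PER_MS)

udp_total = request_time + PROCESSING_MS + response_time
tcp_total = SETUP_MS + udp_total

print(f"UDP: {udp_total:.1f} ms")   # 37.2 ms
print(f"TCP: {tcp_total:.1f} ms")   # 42.2 ms
```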

Understanding TCP Slow Start

I was trying to get my head around TCP congestion control and came across what is called the slow-start phase, where TCP starts by sending just 1 MSS and then keeps adding 1 MSS to the congestion window on receipt of each ACK. This much is clear. But after this, almost all books/articles that I refer to go on to say that this results in doubling the cwnd every RTT, showing an image something like the one below, which is where I got confused.
The first segment is clear: TCP sends it and receives the ACK after one RTT, and then doubles the cwnd, which is now 2. Now it transmits two segments; the ACK for the first one comes after an RTT, making the cwnd 3, but the ACK for the second segment comes after that, making the cwnd 4 (i.e. doubling it). So I am not able to understand how the cwnd doubles every RTT, since as per my understanding, in this example, the cwnd doubled on the first RTT, got incremented by one on the second RTT, and doubled again at some other time (RTT + transmission time of the first segment, I believe). Is this understanding correct? Please explain.
After the two segments' ACKs had been received by the sender, the cwnd was increased by 2, not 1. Note that the second ACK in round 2 arrived right after the first ACK in round 2; that's why they are considered part of the same round and together cost 1 RTT.
I could not agree with you more. I think there is no explicit doubling mechanism, and those descriptions give the wrong impression of slow start.
When the sender receives an ACK, the cwnd increases by 1 MSS. So if you send n segments and receive all the ACKs, the cwnd becomes (n + n) MSS.
During slow start, all the ACKs for a round usually come back within about 1 RTT, because congestion is unlikely on a lightly loaded network. So it looks like the cwnd doubles per RTT, but the real mechanism is additive per ACK, not multiplicative per RTT.
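A tiny per-ACK simulation (assuming no losses and no delayed ACKs) shows why the additive per-ACK rule looks like doubling per RTT:

```python
cwnd = 1                    # congestion window, in MSS
for rtt_round in range(1, 5):
    acks = cwnd             # every segment sent this round gets ACKed within the round
    cwnd += acks            # slow start: +1 MSS per ACK received
    print(f"after round {rtt_round}: cwnd = {cwnd} MSS")
# after round 1: cwnd = 2
# after round 2: cwnd = 4
# after round 3: cwnd = 8
# after round 4: cwnd = 16
```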

TCP packet sequence number

This question is on a test review and I'm not really sure of the answer.
TCP packets are being sent from a client to a server. The MSS is equal to 1460 bytes, and each TCP packet is sent at maximum capacity. How many TCP packets can be sent before the sequence number field in the TCP header wraps around?
How much time in seconds will this take on a 1 Mbit/s link?
How much time in seconds will this take on a 1Gbit/s link?
Is there some sort of formula used to figure this out?
Thanks!
Each TCP segment contains 1460 bytes, and the sequence number in the TCP header is 4 bytes = 32 bits, so 2^32 bytes need to be sent (the sequence number counts bytes, not bits) for the sequence number field to wrap around. That is 2^32 / 1460 ≈ 2,941,758 full-sized segments.
In order to calculate the delay you need to consider:
Transmission time - time it takes to push the packet's bits onto the link.
Propagation time - time for a signal to reach its destination.
Processing delay - time routers take to process the packet's header.
Queuing delay - time the packet spends in routing queues.
In your question, the link rates are 1 Mbit/s and 1 Gbit/s, and I assume the other delays are 0; so the time it will take to send 2^32 bytes = 8 · 2^32 bits is:
1 Mbit/s link:
8*2^32 / 10^6 = 34359 seconds
1Gbit/s link:
8*2^32 / 10^9 = 34 seconds
Hope this helps.
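Here is the same arithmetic as a short Python sketch (transmission delay only, the other delays assumed to be zero):

```python
MSS = 1460                   # bytes per full-sized segment
SEQ_SPACE = 2 ** 32          # the 32-bit sequence number counts bytes

packets = SEQ_SPACE // MSS   # about 2,941,758 full segments before wrap-around
bits = SEQ_SPACE * 8         # total bits that must be transmitted

for rate_bps in (1e6, 1e9):  # 1 Mbit/s and 1 Gbit/s
    print(f"{packets} packets, {bits / rate_bps:.0f} s at {rate_bps / 1e6:.0f} Mbit/s")
```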
