This question is on a test review and I'm not really sure of the answer.
TCP packets are being sent from a client to a server. The MSS is equal to 1460 bytes, and each TCP packet is sent at maximum capacity. How many TCP packets can be sent before the sequence number field in the TCP header wraps around?
How much time in seconds will this take on a 1 Mbit/s link?
How much time in seconds will this take on a 1 Gbit/s link?
Is there some sort of formula used to figure this out?
Thanks!
Each TCP segment contains 1460 bytes, and the sequence number in the TCP header is 4 bytes = 32 bits, so 2^32 bytes need to be sent (because the sequence number counts bytes, not bits) for the sequence number field to wrap around.
In order to calculate the delay you need to consider:
Transmission time - time it takes to push the packet's bits onto the link.
Propagation time - time for a signal to reach its destination.
Processing delay - time routers take to process the packet's header.
Queuing delay - time the packet spends in routing queues.
In your question the link rate is 1 Mbit/s or 1 Gbit/s, and I assume the other delays are 0; so the time it will take to send 2^32 bytes = 8*2^32 bits is, on a:
1 Mbit/s link:
8*2^32 / 10^6 ≈ 34360 seconds
1 Gbit/s link:
8*2^32 / 10^9 ≈ 34.4 seconds
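For completeness, here is a quick Python check of the arithmetic above, including the packet count the first sub-question asks for:

```
# Sequence numbers count bytes, so the field wraps after 2^32 bytes.
mss = 1460                      # bytes of payload per full-sized segment
seq_space = 2 ** 32             # bytes before the 32-bit sequence number wraps

segments = seq_space // mss     # full-sized segments before wrap-around
bits = seq_space * 8

print(segments)                      # 2941758 packets
print(bits / 1e6, "s at 1 Mbit/s")   # ~34359.7 seconds
print(bits / 1e9, "s at 1 Gbit/s")   # ~34.4 seconds
```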
Hope this helps you.
Host A is sending data to host B over a full-duplex link. A and B are using the sliding window protocol for flow control. The send and receive window sizes are 5 packets each. Data packets (sent only from A to B) are all 1000 bytes long, and the transmission time for such a packet is 50 µs. Acknowledgment packets (sent only from B to A) are very small and require negligible transmission time. The propagation delay over the link is 200 µs. What is the maximum achievable throughput in this communication?
This question was asked in GATE. I have calculated it, but what is the meaning of the word 'maximum'? The calculation was just for throughput. How would one calculate the minimum throughput?
I think maximum means assuming no packet loss and therefore no retries. Also, no additional transmission time above the stated 50 µs. Basically, given the above transmission time and propagation delay, how many bytes can be sent and acknowledged per second?
My intuition is to figure out how long it takes to send 5 packets to fill up the window with the propagation delay added. Then add the time for the acknowledgement for the first packet to arrive at the sender. That's your basic window send and acknowledgement time because as soon as the acknowledgement arrives the window will slide forward by one packet.
Since the window is 5 packets and each packet is 1,000 bytes then the maximum throughput should be 5,000 bytes / the time you calculated for the above cycle.
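To make the cycle explicit, here is a small Python sketch of that reasoning, taking the question's figures as a 50 µs transmission time and a 200 µs propagation delay:

```
# Sketch of the window/ACK cycle described above (units converted to seconds).
window = 5            # packets in the send window
pkt_size = 1000       # bytes per data packet
t_trans = 50e-6       # transmission time per data packet
t_prop = 200e-6       # one-way propagation delay

# The first ACK returns one transmission time plus two propagation delays
# after sending starts; the 5-packet burst (250 us) fits inside that window,
# so at most `window` packets are delivered per cycle.
cycle = t_trans + 2 * t_prop              # 450 us
throughput = window * pkt_size / cycle    # bytes per second

print(f"cycle = {cycle * 1e6:.0f} us")
print(f"max throughput = {throughput / 1e6:.2f} MB/s")   # ~11.11 MB/s
```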
The TCP fast recovery algorithm is described as follows (from TCP/IP Illustrated, Vol. 1). What I can't understand is: in step 1, why is there a cwnd inflation by three times the segment size?
1. When the third duplicate ACK is received, set ssthresh to one-half the current congestion window, cwnd. Retransmit the missing segment. Set cwnd to ssthresh plus 3 times the segment size.
2. Each time another duplicate ACK arrives, increment cwnd by the segment size and transmit a packet (if allowed by the new value of cwnd).
3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh. This should be the ACK of the retransmission from step 1, one round-trip time after the retransmission. Additionally, this ACK should acknowledge all the intermediate segments sent between the lost packet and the receipt of the first duplicate ACK. This step is congestion avoidance, since we're slowing down to one-half the rate we were at when the packet was lost.
From [RFC 2001][1]:
When the third duplicate ACK in a row is received, set ssthresh to one-half the current congestion window, cwnd, but no less than two segments. Retransmit the missing segment. Set cwnd to ssthresh plus 3 times the segment size. This inflates the congestion window by the number of segments that have left the network and which the other end has cached.
So, when you receive 3 duplicate ACKs in a row, you cut cwnd in half and perform a fast retransmit; from then on you try not to just idle while waiting for the next new ACK (which takes at best 1 RTT). Once you enter fast recovery, you send new data with
cwnd = ssthresh + (# of duplicate ACKs received) * segment size
until either you receive the ACK you were waiting for or the timer for that ACK expires.
Basically, that "+3" accounts for the 3 duplicate ACKs that made you enter fast recovery in the first place, so that you transmit a number of new bytes equal to the lost bytes plus the ones that left the network and were cached by the receiver.
[1]: https://www.rfc-editor.org/rfc/rfc2001
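Here is a minimal Python sketch of that cwnd bookkeeping, following the RFC 2001 wording quoted above; the starting cwnd and MSS values are just illustrative:

```
# Fast retransmit / fast recovery cwnd bookkeeping (values illustrative).
MSS = 1460
cwnd = 10 * MSS                      # congestion window before the loss

# Step 1: third duplicate ACK in a row.
ssthresh = max(cwnd // 2, 2 * MSS)   # half the window, but no less than 2 segments
cwnd = ssthresh + 3 * MSS            # "+3": three segments have already left the network
print("after 3rd dup ACK:", ssthresh, cwnd)

# Step 2: each further duplicate ACK means one more segment has left the network.
for _ in range(2):                   # e.g. two more duplicate ACKs arrive
    cwnd += MSS
print("after 5 dup ACKs: ", ssthresh, cwnd)

# Step 3: the ACK for new data arrives; deflate back to ssthresh and
# continue in congestion avoidance at half the pre-loss rate.
cwnd = ssthresh
print("after new ACK:    ", ssthresh, cwnd)
```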
I have stumbled upon a question and I really can't figure out how the answer came about. I will post the question and answer below.
Consider a distributed system that has the following characteristics:
* Latency per Packet (Local or remote, incurred on both send and receive): 5 ms.
* Connection setup time (TCP only): 5 ms.
* Data transfer rate: 10 Mbps.
* MTU: 1000 bytes.
* Server request processing time: 2 ms
Assume that the network is lightly loaded. A client sends a 200-byte request message to
a service, which produces a response containing 5000 bytes. Estimate the total time to
complete the request in each of the following cases, with the performance assumptions listed
below:
1) Using connectionless (datagram) communication (for example, UDP);
Answer: UDP: 5 + 2000/10000 + 2 + 5(5 + 10000/10000) = 37.2 milliseconds
We were not given any formula, so I have trouble figuring out what the numbers in the above calculation actually mean.
2000/10000 - I think 10000 has to be 10 Mbps * 1000; I just don't know what 2000 means.
5(5 + 10000/10000) - I know that this has to be multiplied by 5 because the MTU is 1000 bytes, but I just don't know what the numbers mean.
Thank you, looking forward to your ideas.
For 2000/10000, I guess that 2000 means the request message size in bits. Strictly, the request message size should be 1600 bits, since 200 bytes = 200*8 bits; I guess the answer approximates 1600 to 2000 for simplicity.
For 5(5 + 10000/10000): first, MTU is short for Maximum Transmission Unit, the largest packet size that can be communicated on the network. The response message is 5000 bytes while the MTU is 1000 bytes, so the response is divided into 5 packets of 1000 bytes each.
Since this is connectionless communication, there is no pipelining; there is only one packet on the link at a time. Thus, for each packet, the time to send it back is 5 + 10000/10000 (strictly, it should be 8000/10000, since the MTU is 1000*8 bits; again, I guess it is approximated to 10000 for simplicity). So sending back all 5 packets takes 5(5 + 10000/10000) in total.
Here is how I calculate for UDP and TCP.
Total transmission time (UDP) = transmission time for request message packet
                              + server request processing time
                              + transmission time for response message packets

Total transmission time (TCP) = connection setup time
                              + transmission time for request message packet
                              + server request processing time
                              + transmission time for response message packets
Note: this might be specific to the type of parameters given in the question. This is just one iteration of the answer.
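If it helps, here is the same arithmetic in Python, using the rounded figures from the model answer (200 bytes taken as 2000 bits, 1000 bytes taken as 10000 bits); the TCP figure is simply what the breakdown above gives:

```
# Request/response time estimate with the rounded figures used above.
latency = 5            # ms per packet, incurred on both send and receive
setup = 5              # ms, TCP connection setup
rate = 10_000          # bits per ms (10 Mbps)
processing = 2         # ms, server request processing time
req_bits = 2_000       # 200-byte request, rounded up from 1600 bits
pkt_bits = 10_000      # one 1000-byte (MTU-sized) packet, rounded up from 8000 bits
n_pkts = 5000 // 1000  # 5000-byte response -> 5 MTU-sized packets

udp = latency + req_bits / rate + processing + n_pkts * (latency + pkt_bits / rate)
tcp = setup + udp      # same steps plus the connection setup time
print(udp, tcp)        # 37.2 ms (UDP), 42.2 ms (TCP)
```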
I have been learning Wireshark lately. While inspecting TCP segments, I saw a strange situation, at least for me: there were mismatching SEQ and ACK numbers. Then I realized that the difference between two ACKs was the same as one and a half times the packet size. However, as far as I know, ACKs only grow by the full packet size. So what happened here?
SEQ      ACK
1        1
2897     8689
5793     13033  <--
8689     14481
11585
14481
Well, there's not much to go by here, but I'm going to guess that you are capturing packets on the machine sending the large packets, i.e., the packets with 2896 bytes of TCP payload. As such, you are seeing the packets before they are IP-fragmented and actually transmitted out on the wire.
But you can't just send 2896 bytes of data onto the wire; Ethernet links typically impose a 1500 byte MTU (Maximum Transmission Unit), and when you account for IP and TCP header overhead, you usually end up with 1460 bytes of available payload or MSS (Maximum Segment Size). In your case it looks like you're only getting 1448 bytes of available MSS, most likely due to the addition of one or more IP and/or TCP header options.
In any case, the 2896 bytes of payload are going to be fragmented over 2 IP fragments, each containing 1448 bytes of TCP payload. I'm pretty sure that what you're seeing is an ACK from the receiver after having received 1 full segment plus 1 IP fragment from the next segment.
The previous ACK number was 8689, and 8689 + 2896 = 11585. Now add 1/2 of the data segment (2896 / 2 = 1448) and you get 11585 + 1448 = 13033. That's the ACK number you're seeing. Now add the other 1/2 and you get 13033 + 1448 = 14481, which is the ACK number of the next packet.
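Or, as a quick check of that arithmetic in Python:

```
# The receiver's ACK arithmetic, step by step.
prev_ack = 8689
segment = 2896           # bytes of TCP payload per large segment
fragment = segment // 2  # each IP fragment carries 1448 bytes

print(prev_ack + segment)             # 11585: one full segment beyond the previous ACK
print(prev_ack + segment + fragment)  # 13033: plus the first fragment of the next segment
print(prev_ack + 2 * segment)         # 14481: plus the second fragment as well
```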
I hope that makes sense?
For an in-depth look at the drawbacks of local packet captures, I direct you to a well-written blog by Jasper Bongertz titled "The drawbacks of local packet captures".
Since I do not have the reputation to comment on Christopher Maynard's answer, I will comment here.
As Christopher correctly stated, the MTU (Maximum Transmission Unit) in standard Ethernet networks is 1500 bytes. However, crafting 1460-byte TCP segments (1500 bytes - 20 bytes IP header - 20 bytes TCP header, ignoring possible TCP options) imposes a high load on the network stack, since every segment must be handled separately.
As an example, if I want to transmit 1 MB (equal to 1024*1024 bytes = 1048576 bytes) of data, it results in 1048576/1460 ≈ 719 segments, in contrast to 1048576/65525 (the theoretical maximum TCP segment size) ≈ 16 segments. Since the overhead in the kernel is mostly independent of the segment/packet size, small segments require much more processing.
To counteract this fact, TCP Segmentation Offloading (TSO) was developed. TSO allows the Kernel to craft TCP segments with maximum size and the NIC (Network Interface Card) uses hardware acceleration to split these segments into the separate TCP segments of 1460Bytes. Therefore, no IP fragmentation is required.
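A quick Python check of that segment-count arithmetic (65525 is the segment size used in the answer above):

```
import math

data = 1024 * 1024    # 1 MB of application data, in bytes
mss = 1460            # wire MSS on standard Ethernet
big = 65525           # near-maximum segment size used in the answer above

print(math.ceil(data / mss))   # 719 segments the stack must handle without TSO
print(data / big)              # ~16 large segments if the kernel can hand
                               # TSO-sized segments to the NIC
```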
Suppose I have a 4 Mbit/s network and I want to calculate the data throughput, that is, the maximum transfer rate minus the overhead from the Ethernet/IP/TCP headers.
Reading on the web, I found out that the MSS (maximum segment size) of a TCP segment is 576 - 20 - 20, the last two being the TCP and IP header overhead, resulting in 93% data, meaning I will only be using 93% of my 4 Mbit/s link to transfer data. Now, where's the link-layer overhead? Shouldn't it be added as well? If I'm not wrong, an Ethernet header is around 46 bytes, so the final sum would be 576 - 20 - 20 - 46 = 490, resulting in an 85% data throughput. Am I doing something wrong?
Just work bottom-up. Regular Ethernet frames (no jumbo frames, no VLAN tagging) are 1542 bytes in total and can carry a payload of 1500 bytes. An IPv4 header without options is 20 bytes and a TCP header without options is also 20 bytes, so you end up with 1460 bytes of possible payload in a 1542-byte link-layer frame. Your efficiency is therefore 1460/1542 ≈ 0.947, resulting in a maximum throughput of about 3.79 Mbit/s.
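In Python, using the same frame sizes as above:

```
# Efficiency and maximum TCP throughput on a 4 Mbit/s link.
link_rate = 4_000_000     # bits per second
frame_on_wire = 1542      # total bytes per full-sized Ethernet frame (as above)
tcp_payload = 1460        # 1500 B IP payload - 20 B IPv4 header - 20 B TCP header

efficiency = tcp_payload / frame_on_wire
throughput = efficiency * link_rate
print(f"efficiency ~ {efficiency:.3f}")                     # ~0.947
print(f"max throughput ~ {throughput / 1e6:.2f} Mbit/s")    # ~3.79 Mbit/s
```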
Notice, however, that this will usually be lower. This is the theoretical maximum rate for a continuous stream that you can get on a full-duplex link, after the TCP session is established and when you're the only user of that link. Also note that as soon as you send at a slightly higher rate for some time, your link will get congested; you will see drops and your actual TCP throughput might fall significantly because of slow start.
If the link is wireless (802.11), the calculation becomes a lot more complex because of RTS/CTS mechanisms, but the rate is roughly halved for a single active user, and that's without accounting for loss, which is unrealistic.
In general, the protocol can impact network throughput by much more than just the packet overhead. You mention that you want to measure throughput on an Ethernet/IP/TCP network, but the packet overhead of those protocols is NOT the only thing to consider. TCP is a connection-oriented protocol and uses ACKs to signal whether a packet has been received or not. user1777914 missed the mark about ACKs but was on to something: they do not take up any more SPACE, but they can DELAY the transmission of packets. As latency increases, the overall network throughput can decrease depending on how often the application or host OS expects a response.
W. Richard Stevens has written an AMAZING book on TCP/IP. Here is an excerpt that explains theoretical TCP performance, what impacts it, and how it is calculated.
There is also the Nagle algorithm, which helps with latency but, if disabled, can slow down throughput.