Estimating TCP and UDP delay between two nodes - networking

Suppose we have 2 nodes, A and B, directly connected over the Internet (we can ignore the underlying network, e.g. routers, ISPs, etc.).
We know RTT between nodes (80ms)
We know packet loss (0.1)
We know jitter (1ms)
We know the bandwidth: A = 100/10 Mbps, B = 50/5 Mbps (first value is download, second is upload)
A sends a 1 GB file to B using the TCP protocol (with a 64 KB segment size).
How much time do they need to exchange the file?
How much time does it take to do the same thing using the UDP protocol?
EDIT:
I guess the main difference in the calculation between UDP and TCP is that in TCP we need to wait for every packet to be acknowledged before sending the next one. Or, in other words, we have to add one RTT to the delay calculation for every packet. Moreover, packet loss is not considered at all in UDP. I am not sure of what I'm saying in this edit, so let me know if I'm wrong.
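For what it's worth, here is a rough back-of-the-envelope sketch in Python of the calculation the EDIT describes: stop-and-wait TCP (one 64 KB segment per RTT) versus UDP blindly streaming at the bottleneck rate. Treating A's 10 Mbps upload as the bottleneck and resending the lost 10% of segments exactly once are my own simplifications, not part of the question.

    # Back-of-the-envelope sketch, under the simplifications stated above.
    FILE_BYTES = 1 * 10**9          # 1 GB file
    SEGMENT    = 64 * 1024          # 64 KB TCP segment
    RTT        = 0.080              # 80 ms round-trip time
    BOTTLENECK = 10 * 10**6 / 8     # assume A's 10 Mbit/s upload, in bytes/s
    LOSS       = 0.1                # packet loss rate (ignored for UDP below)

    segments    = -(-FILE_BYTES // SEGMENT)           # ceiling division
    per_segment = SEGMENT / BOTTLENECK + RTT          # send time + wait for ACK
    tcp_time    = segments * per_segment * (1 + LOSS) # resend the lost 10% once
    udp_time    = FILE_BYTES / BOTTLENECK             # blind streaming, no waiting

    print(f"segments           : {segments}")         # 15,259 segments
    print(f"TCP (stop-and-wait): ~{tcp_time:.0f} s")  # roughly 2,200 s
    print(f"UDP (streaming)    : ~{udp_time:.0f} s")  # roughly 800 s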

Related

udp vs tcp packet dropping

If I send two packets via the net, one a UDP packet and the other a TCP packet, which packet is more likely to reach its destination? I have been told that the TCP protocol is safer, but that is because of its "fail-safe" mechanism. But does it also mean that UDP packets are more likely to be lost along the way?
I think it's related to the specific router implementation, because on one hand, if a UDP packet disappears, both sides probably know it might happen and can afford to lose a packet or two; on the other hand, if a TCP packet disappears, then by its "fail-safe" mechanism another one will be sent and the problem is solved, and a TCP packet is much heavier.
I would like to have a more solid answer to that question because I find this subject quite interesting.
If you are making a decision on which protocol to use for your application, you really need to look into both in more detail. Below is just an overview.
TCP is a stream protocol that provides several mechanisms to guarantee delivery of data, in order. It will control the rate at which data is sent (it starts transmitting slowly, then increases the speed until it reaches a rate that is sustainable by the peer). It will resend any data that was not received on the other side. For that, you pay a price (for example the slow start, the need to acknowledge all data received, etc.).
UDP, on the other hand, is a "data chunk" (datagram) protocol and provides none of those checks of integrity / rate / order. It "compensates" by being (potentially) faster: you pump out data as fast as you can, and the other side receives whatever it is able to catch, at full network speed in the extreme case. There is no guarantee of delivery or of the order in which data arrives at the other side; each datagram is received whole or not at all.
Any decision one usually makes has nothing to do with the possibility of data being lost or not, but with how critical losing any of it is. Video streaming is often done via UDP since missing the occasional datagram is less critical than keeping a smooth image. File transmission cannot afford any data loss or reordering of data chunks, so TCP is the natural choice.
Apart from that question, remember that the network protocol is only half your problem. The other half is coming up with your application protocol to interpret the bytes you are receiving...
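To make the stream-versus-datagram distinction concrete, here is a minimal Python sketch of what the two look like at the socket level. The host and port are placeholders, and a real program would also need a listening peer and error handling.

    import socket

    HOST, PORT = "127.0.0.1", 9000   # placeholder endpoint, for illustration only

    # TCP: connect first, then write into a reliable, ordered byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect((HOST, PORT))
    tcp.sendall(b"part1" + b"part2")   # arrives completely and in order,
    tcp.close()                        # or the connection errors out

    # UDP: no connection; each sendto() is an independent datagram that may be
    # lost, duplicated or reordered, but is delivered whole or not at all.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"datagram 1", (HOST, PORT))
    udp.sendto(b"datagram 2", (HOST, PORT))
    udp.close()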

Proper way to calculate Link Throughput

I have read some articles online and I have got a pretty good idea about TCP and UDP in general. However, there are still a few points that are not completely clear to me.
What is the proper way to calculate throughput?
(Can't we just divide the total number of bytes received by the total time taken?)
What is that key feature in TCP that makes it have much, much higher throughput than UDP?
UPDATE:
I understand that TCP uses a window, which is nothing but the number of segments that can be sent before waiting for acknowledgements. But my doubt is that in UDP segments are sent continuously without even bothering about acknowledgements, so there is no extra overhead in UDP. Then why is the throughput of TCP much, much higher than that of UDP?
Lastly,
Is this true ?
TCP throughput = (TCP Window Size / RTT) = BDP / RTT = (Link Speed in Bytes/sec * RTT)/RTT = Link Speed in Bytes/sec
If so, then TCP throughput is always equal to the known link speed. And since the RTTs cancel each other out, TCP throughput does not even depend on the RTT.
I have seen in some network analysis tools, like iperf, PassMark PerformanceTest, etc., that the TCP/UDP throughput changes with block size.
How is throughput dependent on block size?
Is block size equal to the TCP window or the UDP datagram size?
What is the proper way to calculate throughput?
There are multiple ways, depending on what exactly you want to measure. They all boil down to dividing some number of bits (or bytes) by some duration, as you mention; what varies is which bits you are counting or (more rarely) which moments of time you are considering for measuring the duration.
The factors you need to take into account are:
At which layer in the network stack are you measuring throughput?
If you measure at the application layer, all that matters is what useful data you transmit to the other endpoint. For example, if you are transferring a file of 6 kB, the amount of data you count when measuring throughput is 6 kB (that is 6,000 bytes, not bits, and note the multiplier of 1000, not 1024; these conventions are common in networking).
This is usually called goodput and it may be different from what is actually sent at the transport layer (as in TCP or UDP), for two reasons:
1. Overhead due to headers
Each layer in the network stack adds a header to the data, which introduces some overhead due to its transmission time. Moreover, the transport layer breaks your data into segments; this is because the network layer (as in IPv4 or IPv6) has a maximum packet size called the MTU, typically 1,500 B in Ethernet networks. This value includes the network layer header (e.g. the IPv4 header, which is variable in length but usually 20 B long) and the transport layer header (the TCP header is also variable in length: 20 B without options, more with them; assume 40 B here). This leads to a maximum segment size MSS (the number of data bytes, without headers, in one segment) of 1500 - 40 - 20 = 1440 bytes.
Thus if we want to send 6 kB of application-layer data, we must break it into 5 segments: 4 of 1440 bytes each and one of 240 bytes. At the network layer we end up sending 5 packets: 4 of 1500 bytes each and one of 300 bytes, for a total of 6.3 kB.
Here I have not considered the fact that the link layer (as in Ethernet) adds its own header and also a trailer, which increases the overhead further. For Ethernet this is 14 bytes for the Ethernet header, optionally 4 bytes for a VLAN tag, a CRC of 4 bytes and an inter-frame gap of 12 bytes, for a total of about 34 bytes per packet.
If you consider a fixed-rate link, say of 10 Mb/s, depending on what you measure you will get a different throughput. Normally you want one of these:
The goodput, i.e. application layer throughput, if what you want to measure is application performance. For this example, you divide 6 kB by the transfer duration.
The link-layer throughput, if what you want to measure is network performance. For this example, you divide 6 kB + TCP overhead + IP overhead + Ethernet overhead = 6,300 B + 5 * 34 B = 6,470 B by the transfer duration.
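As a small illustration of the arithmetic above (using the same assumed header sizes: 20 B IPv4, 40 B TCP and about 34 B of Ethernet framing per packet), a quick Python sketch:

    APP_BYTES = 6000                # 6 kB of application data
    MSS       = 1500 - 40 - 20      # 1440 B of payload per segment
    IP_TCP    = 40 + 20             # transport + network headers per packet
    ETH       = 34                  # Ethernet header + VLAN tag + CRC + gap

    full, rest = divmod(APP_BYTES, MSS)      # 4 full segments, 240 B left over
    packets = full + (1 if rest else 0)      # 5 packets in total

    network_bytes = APP_BYTES + packets * IP_TCP   # 6,300 B at the IP layer
    link_bytes    = network_bytes + packets * ETH  # 6,470 B on the Ethernet link

    duration = 0.010                # assumed transfer time of 10 ms (made up)
    print("goodput              :", APP_BYTES * 8 / duration / 1e6, "Mbit/s")
    print("link-layer throughput:", link_bytes * 8 / duration / 1e6, "Mbit/s")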
2. Retransmission overheads
The Internet is a best-effort network, meaning that the packets will be delivered if possible, but may also be dropped. Packet drops are corrected by the transport layer, in case of TCP; for UDP, there is no such mechanism, which means that either the application does not care if some parts of the data do not get delivered, or the application implements retransmission itself on top of UDP.
Retransmissions reduce goodput for two reasons:
a. Some data needs to be sent again, which takes time. This introduces a delay which is inversely proportional to the rate of the slowest link in the network between the sender and the receiver (a.k.a the bottleneck link).
b. Detecting that some data was not delivered needs feedback from the receiver to the sender. Due to propagation delays (sometimes called latency; caused by the finite speed of light in the cable), feedback can only be received by the sender with some latency, which slows down the transmission even more. In most practical cases, this is the most significant contribution to the extra delay caused by the retransmission.
Clearly, if you use UDP instead of TCP and you do not care about packet loss, you will of course get better performance. But for many applications, data loss cannot be tolerated, so such a measurement is meaningless.
There are some applications that do use UDP for transferring data. One is BitTorrent, which may use either TCP or a protocol they designed called uTP, which emulates TCP on top of UDP, but aims at being more efficient with many parallel connections. Another transport protocol implemented over UDP is QUIC, which also emulates TCP and offers multiplexing multiple parallel transfers over a single connection, and forward error correction to reduce retransmissions.
I will discuss forward error correction a little since it is related to your question about throughput. A naive way of implementing it is to send every packet twice; if one gets lost, the other still has a chance of being received. This greatly reduces the need for retransmissions, but also halves your goodput since you send redundant data (note that the network or link layer throughput remains the same!). In some cases this is fine, especially if the latency is very large, such as on intercontinental or satellite links. Moreover, some mathematical methods exist where you don't have to send a full copy of the data; for instance, for every n packets you send, you send one additional redundant packet which is the XOR (or some other arithmetic operation) of them. If the redundant one gets lost, it doesn't matter; if one of the n packets gets lost, you can reconstruct it from the redundant one and the other n-1. You can thus tune the overhead introduced by forward error correction to whatever amount of bandwidth you can spare.
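Here is a toy Python sketch of the XOR scheme just described, purely for illustration: one parity packet per group of equal-length data packets, and reconstruction of a single lost packet from the survivors.

    from functools import reduce

    def xor_parity(packets):
        """Byte-wise XOR of a list of equal-length packets."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

    group  = [bytes([i]) * 8 for i in range(1, 5)]   # 4 data packets, 8 bytes each
    parity = xor_parity(group)                       # the extra redundant packet

    lost      = group[2]                 # pretend packet #3 was dropped in transit
    survivors = group[:2] + group[3:]    # what the receiver actually got
    rebuilt   = xor_parity(survivors + [parity])

    assert rebuilt == lost               # the missing packet is recovered exactly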
How you are measuring the transfer time
Is the transfer completed when the sender has finished sending the last bit over the wire, or does it also include the time it takes for the last bit to travel to the receiver? Additionally, does it include the time it takes to get a confirmation from the receiver stating that all data has been received successfully and no retransmission is needed?
It really depends on what you want to measure. Note that for large transfers, one extra round-trip-time is insignificant in most cases (unless you are communicating, for instance, with a probe on Mars).
What is that key feature in TCP that makes it have much much higher throughput than UDP?
This is not true, although it is a common misconception.
In addition to retransmitting data when needed, TCP will also adjust its sending rate so that it will not cause packet drops by congesting the network. The adjustment algorithm has been perfected over decades, and usually converges quickly to the maximum rate supported by the network (actually, the bottleneck link). For this reason it is usually difficult to beat TCP in throughput.
With UDP, there is no rate limiting at the sender. UDP lets the application send as much as it wants. But if you try to send more than the network can handle, some of the data will be dropped, lowering your throughput, and also making the admin of the network you are congesting very angry. This means that sending UDP traffic at high rates is impractical (unless the goal is to DoS a network).
Some media applications are using UDP but rate-limiting the transfer at the sender at a very small rate. This is typically used in VoIP applications or Internet Radio, where you require very little throughput but low latency. I suppose this is one of the reasons for the misconception that UDP is slower than TCP; that is not the case, UDP can be as fast as the network allows.
As I said before, there are protocols such as uTP or QUIC, implemented over UDP, which achieve performance similar to TCP.
Is this true ?
TCP throughput = (TCP Window Size / RTT)
Without packet loss (and retransmissions), this is correct.
TCP throughput = BDP / RTT = (Link Speed in Bytes/sec * RTT)/RTT = Link Speed in Bytes/sec
This is correct only if the window size is configured to the optimal value. BDP/RTT is the optimal (maximum possible) transfer rate in the network. Most modern operating systems should be able to auto-configure it optimally.
How is throughput dependent on block size? Is block size equal to the TCP window or the UDP datagram size?
I don't see any block size in the iperf documentation.
If you are referring to the TCP window size: if it is smaller than the BDP, then your throughput will be suboptimal (because you waste time waiting for ACKs instead of sending more data; I can explain further if needed). If it is equal to or larger than the BDP, then you achieve optimal throughput.
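A small numeric sketch of that window-versus-BDP relationship, with made-up numbers (a 100 Mbit/s link and a 50 ms RTT):

    LINK_BPS = 100 * 10**6 / 8      # link speed in bytes per second
    RTT      = 0.050                # round-trip time in seconds

    bdp = LINK_BPS * RTT            # bytes that must be in flight to fill the pipe

    def tcp_throughput(window_bytes):
        # At most one window per RTT, but never more than the link can carry.
        return min(window_bytes / RTT, LINK_BPS)

    print("BDP          :", int(bdp), "bytes")                              # 625000
    print("64 KB window :", tcp_throughput(64 * 1024) * 8 / 1e6, "Mbit/s")  # ~10.5
    print("window >= BDP:", tcp_throughput(bdp) * 8 / 1e6, "Mbit/s")        # 100.0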
It depends on how you define "throughput". It is usually one of the following:
Number of bytes (or bits) sent in a fixed period of time;
Number of bytes (or bits) sent and received on the receiver end in a fixed period of time;
You can apply these definitions at every layer when people talk about throughput. At the application layer, the 2nd definition means the bytes have really been received by the receiving end of the application; some people refer to this as "goodput". At the transport layer, say TCP, the 2nd definition means that the corresponding TCP ACKs have been received. To me, most people are only interested in the bytes that are really received at the receiver end, so the 2nd definition is usually what people mean by "throughput".
Now that we have a clear definition of throughput (the 2nd definition), we can discuss how to measure it correctly.
Usually, people either use TCP or UDP to measure the network throughput.
TCP: People usually measure TCP throughput at the sender end only. For packets successfully received at the receiver end, ACKs are sent back, so the sender itself knows how many bytes were sent and received at the receiver end. Dividing this number by the measuring time gives the throughput.
But there are two things to watch out for during a TCP throughput measurement:
Is the sender side always busy during the measurement? I.e., during the measurement period, the sender should always have packets to send. This is important for a correct throughput measurement: e.g., if I set my measuring time to 60 seconds but my file finishes transmitting in 40 seconds, then for 20 seconds the network is actually idle and I will under-estimate the throughput.
The TCP rate is regulated by its congestion window size, slow-start duration, and sender (and receiver) window sizes. Sub-optimal configuration of these parameters will result in an under-estimated TCP throughput. Although most modern TCP implementations should have quite good defaults for all of these, it is hard for a tester to be 100% sure that all of these configurations are optimal.
Because of these limitations/risks of TCP for network throughput estimation, quite a number of researchers use UDP to measure network throughput.
UDP: As UDP sends no ACKs back once packets are successfully received, one has to measure the throughput at the receiver end. Or, if the receiver end is not easily accessible, one can compare the logs on both the sender and receiver sides to determine the throughput. This inconvenience is mitigated by some throughput-measuring tools; for example, iperf embeds sequence numbers in its customized payload so that it can detect any loss, and a receiver report is sent back to the sender to show the throughput.
As UDP by its nature just sends whatever it has to the network without waiting for feedback, its throughput (remember the 2nd definition), once measured, will be the actual capacity (or bandwidth) of the network.
So, usually, the throughput measured by UDP should be higher than that from TCP although the difference should be small (~5%-10%).
The biggest drawback of UDP throughput measurement is that, when using UDP, one should also make sure that the sender-side buffer is kept full (otherwise the throughput is under-estimated, as with TCP). This step is a little tricky. In iperf, one can specify the sending rate with the -b option. Increasing the -b value over several rounds of testing converges on the measured throughput. For example, on my gigabit Ethernet, I first use -b 100k in the test; the throughput measured is 100 Kbps. Then I perform the following iterations to converge on the maximum throughput, which is the capacity of my Ethernet:
-b 1m --> throughput: 1Mbps
-b 10m --> throughput: 10Mbps
-b 100m --> throughput: 100Mbps
-b 200m --> throughput: 170Mbps
-b 180m --> throughput: 175Mbps (this should be quite close to the actual capacity)
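If you want to script that iteration instead of running it by hand, something along these lines works: it just invokes iperf in UDP client mode against an already-running server (started with "iperf -s -u") and lets you read the receiver reports it prints. The server address is a placeholder.

    import subprocess

    SERVER = "192.0.2.10"   # placeholder address of the iperf UDP server

    # -c = client mode, -u = UDP, -b = offered rate, -t 10 = run for 10 seconds
    for rate in ["1m", "10m", "100m", "200m", "180m"]:
        print(f"--- offered rate {rate} ---")
        subprocess.run(["iperf", "-c", SERVER, "-u", "-b", rate, "-t", "10"],
                       check=True)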

When packet drop occurs in a link with VoIP and TCP working concurrently?

Let's assume TCP Reno version
I have this situation: a VoIP (UDP) stream and a TCP session on the same host.
Let's say at t=10s TCP opens the session with the TCP receiver (another host); they exchange the max window during the 3-way handshake and then start the stream with the slow-start approach.
At t=25s, a VoIP stream starts. Since it's a UDP stream, the aim is to saturate the receiver. Not having any congestion control, it should be bursting packets as much as it can.
Since there is this concurrency in the same channel and we are assuming that in the topology of the network no router goes down etc (so no anomalies), my question is:
Is there any way to get packet loss for the VoIP stream?
I was thinking that since VoIP is sensitive to jitter, and the slow-start approach of TCP is not really slow, packet loss could occur because the router queues add delay variation when they are "flooded" by the early TCP packets.
Is there any other reason?
A couple of comments first:
VoIP will not usually 'saturate' the receiver (or the network) - it will simply send as many packets as it needs for the particular codec you are using. In other words it won't just keep growing until it fills the network.
VoIP systems are sensitive to jitter as you note. Packet loss is actually related to this as a VoIP system will generally consider a packet lost if it arrives outside the jitter buffer window. So even though the packet may not in fact be lost, and only delayed, if it arrives outside the jitter buffer window it is effectively lost as far as the VoIP system is concerned.
Answering your specific question: yes, other traffic can create delayed packets which may appear lost to the VoIP receiver. It is worth noting that on a link where UDP and TCP share the bandwidth, TCP is better 'behaved' than UDP in that it will try to limit itself to avoid congestion. UDP does not, and hence may actually get more than its fair share of the bandwidth compared to the TCP traffic.
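As a toy illustration of that "late counts as lost" point, here is a small Python sketch with a fixed playout budget per packet. All numbers are invented, and real jitter buffers are adaptive, so this is only meant to show the idea.

    JITTER_BUFFER_MS = 60            # playout delay budget per packet (invented)

    # One-way delay seen by each VoIP packet, in ms; the spike mimics a router
    # queue filling up with a competing TCP burst.
    delays_ms = [30, 32, 35, 95, 110, 40, 33]

    late = sum(1 for d in delays_ms if d > JITTER_BUFFER_MS)
    print(f"{late} of {len(delays_ms)} packets effectively lost to the jitter buffer")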

UDP Packet size and packet losses

I've been writing a program that uses a stop and wait protocol on top of UDP to send packets over LAN and also over WAN. I've recently been testing my program and have noticed that the packet loss rate is higher for larger packets (approaching 64k bytes). Intuitively this makes sense but what are the actual reasons for this?
UDP packets greater than the MTU size of the network that carries them will be automatically split up into multiple packets, and then reassembled by the recipient. If any of those multiple sub-packets gets dropped, then the receiver will drop the rest of them as well.
So for example if you send a 63k UDP packet, and it goes over Ethernet, it will get broken up into 47+ smaller "fragment" packets (because Ethernet's MTU is 1500 bytes, but some of those are used for UDP headers, etc, so the amount of user-data-space available in a UDP packet is smaller than that). The receiver will only "see" that UDP packet if all 47+ of those fragment-packets make it through okay. If just one of those fragment-packets gets dropped, the whole operation fails.
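A quick sketch of why this hurts so much: the datagram is delivered only if every fragment is delivered, so the loss probability compounds with the fragment count. The 1% per-fragment loss rate below is an arbitrary example, and the exact fragment count depends on how you account for headers.

    import math

    MTU          = 1500
    IP_HEADER    = 20
    DATAGRAM     = 63 * 1024            # the 63 kB UDP datagram from above
    PER_FRAGMENT = MTU - IP_HEADER      # ~1480 B of the datagram per fragment
    FRAG_LOSS    = 0.01                 # chance any single fragment is dropped

    fragments   = math.ceil(DATAGRAM / PER_FRAGMENT)
    p_delivered = (1 - FRAG_LOSS) ** fragments

    print(f"fragments needed     : {fragments}")        # 44 with this accounting
    print(f"P(datagram delivered): {p_delivered:.1%}")  # ~64%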
Well, data networks are far from reliable; packets get dropped all the time. Overloaded routers, full buffers and corrupt packets are some of the reasons. Since UDP has no flow control capabilities, it can't slow down if for example the receiving end is overloaded.
As Jeremy explained, the bigger the payload, the more packets it is going to be split into, and therefore a bigger chance of losing some of them.
UDP is used in cases where a dropped packet here and there won't affect anything, or in cases where you need something to get there in time or not at all (VoIP, streaming video, etc.).
It's all about IP fragmentation and reassembly. A packet larger than the MTU will be fragmented and has to be reassembled at the final host; there is also a chance that the fragments get fragmented again along the path, which adds further delay. Sometimes, if a network element is configured for layer-4 filtering, it reassembles the packet (without being the final host), applies its rules, then fragments it again and forwards it. That is why applications that need performance always try to send data with size <= (MTU - ETHHDR - IPHDR).

What does LAN/traffic congestion mean?

While talking about UDP I saw/heard congestion come up a few times. What does that mean?
Congestion is when you are trying to send too much data over a link with limited bandwidth; the link cannot forward the data as fast as it comes in, so additional packets are dropped.
When congestion occurs, you can see these effects:
Delay due to the queue at one end of the connection being too big, so it takes time for your packet to be transmitted.
Packet loss when new packets are simply dropped, forcing connection resets (and often causing more congestion).
Lower quality of service: protocols like TCP will cut back on their transmission rate, so your throughput will be lowered.
Blocking: certain networks have protocol priorities, so your UDP packets may be dropped in favor of allowing TCP traffic through.
It's like a traffic jam: imagine, right after a sports game, a parking lot full of cars trying to empty out into a small side street.
It means that network-connected devices are attempting to send more data across the network than it can handle, e.g. 20 Mbps of data across a 10 Mbps link.
In context of UDP, it's your main source of lost datagrams under ordinary circumstances.
Most LANs use some sort of collision detection/avoidance system. Congestion typically means that the amount of data being transmitted on the medium is causing enough collisions to deteriorate the quality of service defined for that medium.
You may want to read up on CSMA/CD on Wikipedia.
As UDP packets can often be broadcast, congestion can occur more often.
For instance, Ethernet is a broadcast protocol. Once a message is sent, every node receives it but ignores it if the packet is not addressed to it. What happens when two nodes send a packet at the same time? It causes a collision and data loss.
So both nodes have to resend the message. To avoid more collisions, nodes are designed to wait a random number of milliseconds; otherwise they would keep sending messages simultaneously and the packets would collide forever.
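For illustration, classic (CSMA/CD) Ethernet resolves this with binary exponential backoff: after the k-th collision a station waits a random number of slot times between 0 and 2^k - 1. A small Python sketch, with the slot time of 10 Mbit/s Ethernet:

    import random

    SLOT_TIME_US = 51.2                       # slot time on 10 Mbit/s Ethernet

    def backoff_after_collision(k: int) -> float:
        """Randomized wait in microseconds after the k-th collision."""
        k = min(k, 10)                        # the standard caps the window at 2^10
        return random.randint(0, 2**k - 1) * SLOT_TIME_US

    for attempt in range(1, 5):
        print(f"collision #{attempt}: wait {backoff_after_collision(attempt):.1f} us")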
