TCP vs UDP throughput

Suppose we have a TCP and a UDP connection over the same link of capacity C. TCP has a transfer rate of C, whereas UDP has 8C as its transfer rate. Which will be more efficient?

Theoretically, if nothing happens to any of the packets along the way, UDP would be faster. UDP does not require every packet to be acknowledged the way TCP does (ACK flag), and it needs no handshake and no connection tear-down. In an ideal network where no packets get dropped, UDP would be the faster choice.
The problem is that, in the real world, UDP loses packets. You would end up slower, because you would have to implement TCP-like packet control on top of UDP yourself. UDP does not acknowledge the receipt of packets, and it does not knock on the door first to see if anybody is home (TCP SYN). UDP packets have a simpler structure than TCP packets, but sacrifice reliability for their smaller size. http://www.diffen.com/difference/TCP_vs_UDP describes the differences.
So, for your example: with a link that can carry C packets/s, TCP sending at C packets/s and UDP sending at 8*C packets/s, UDP would be much faster in the ideal case.
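To make the difference concrete, here is a minimal Python sketch (the address is a placeholder and nothing in it comes from the question): UDP simply sends a datagram, while TCP must first complete a handshake and has every segment acknowledged.

```python
import socket

# Placeholder destination; the TCP connect only succeeds if a peer is listening.
DEST = ("127.0.0.1", 5000)
DATA = b"x" * 1200

# UDP: no connection setup, no transport-level ACKs, just send.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(DATA, DEST)            # fire and forget
udp.close()

# TCP: connect() performs the SYN / SYN-ACK / ACK handshake, every segment
# sent afterwards is acknowledged, and close() tears the connection down.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2.0)
try:
    tcp.connect(DEST)             # 3-way handshake happens here
    tcp.sendall(DATA)
except OSError as exc:
    print(f"TCP needs a listening peer first: {exc}")
finally:
    tcp.close()
```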

Related

Can TCP meltdown happen for TCP over QUIC?

It is common knowledge that transferring TCP packets inside a tunnel carried over a TCP connection can create a devastating effect called TCP meltdown and degrade tunnel quality greatly. I am wondering whether a similar effect can happen if we try to transfer TCP data over a QUIC connection. Even though QUIC runs over UDP packets, it needs something similar to windowing to keep track of received packets in order to provide a connection-oriented protocol. So I'm not sure whether a similar effect will happen or not.
Any idea?
QUIC indeed uses congestion control similar to TCP's, see https://www.rfc-editor.org/rfc/rfc9002.html#name-congestion-control. So when tunnelling a TCP connection over a QUIC stream, I would say the same "meltdown" problems can occur (a QUIC stream has the same properties as a TCP connection: a reliable, ordered stream of bytes, so the stream will stall if QUIC packets are lost).
However, a QUIC extension is being defined for sending datagrams, https://datatracker.ietf.org/doc/html/draft-ietf-quic-datagram. That might provide a better way of transporting TCP packets, as these datagrams will never be retransmitted at the QUIC level. However, it would require the TCP packets to fit into a DATAGRAM frame.
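Purely to illustrate the timing interaction behind the meltdown (all numbers below are made-up assumptions, not measurements), the problem appears whenever the outer reliable layer, whether TCP or a QUIC stream, takes longer to recover a lost packet than the inner TCP's retransmission timeout:

```python
# Toy numbers only: why stacking two reliable layers causes duplicate work.
inner_rto = 0.200              # inner TCP retransmission timeout (s), assumed
outer_loss_recovery = 0.350    # time the tunnel (TCP or a QUIC stream) needs
                               # to retransmit one lost packet (s), assumed

if outer_loss_recovery > inner_rto:
    # The tunnel will deliver the data eventually, but the inner TCP has
    # already given up waiting and retransmitted it as well.
    print("inner TCP times out first: duplicate retransmissions pile up")
else:
    print("tunnel recovers before the inner RTO fires: no meltdown")
```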

In a congested LAN, does UDP send faster than TCP?

I have a real-time application (C++ using websockets) that has to communicate through a congested LAN. Because it's real-time, delays can't be tolerated. Will UDP perform better than TCP in this case?
I cannot tolerate packet loss, but can address it through retries if using UDP.
In a congested network UDP will send its packets faster than TCP. This is because TCP actively tries to avoid overloading the network using a mechanism called congestion control. UDP has no such mechanism; its send speed is limited only by the resources of the sender.
If your first priority is just to send the packets, then UDP is the way to go. However, you probably also want them to arrive at the other end, which is a separate problem.
Sending UDP packets into a congested network at a high rate will only cause it to become more congested, leading to long delays and packet loss.
The problem here is neither TCP nor UDP - but the congested network. If the road is congested, it doesn't matter whether you're driving a car or a bus, you'll be late either way.
So, it doesn't matter all that much which protocol you choose. To send something quickly over a congested network you need a solution at the network level, possibly some QoS mechanism to prioritize some packets over others. QoS can give you the network equivalent of bus lanes that allow buses to quickly pass congested roads at the expense of other traffic.
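If you do go the UDP route and handle loss yourself, the retry logic you mention could look roughly like this stop-and-wait sketch (Python; the address, timeout, and one-byte ACK convention are all assumptions for illustration, not part of your application):

```python
import socket

SERVER = ("192.0.2.10", 9999)   # hypothetical receiver address
TIMEOUT_S = 0.05                # resend if no ACK arrives within 50 ms
MAX_RETRIES = 5

def send_reliably(payload: bytes) -> bool:
    """Send one datagram and wait for an application-level ACK, retrying on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT_S)
        for _attempt in range(MAX_RETRIES):
            sock.sendto(payload, SERVER)
            try:
                reply, _addr = sock.recvfrom(1500)
                if reply == b"ACK":
                    return True
            except socket.timeout:
                continue            # no ACK in time: resend
    return False                    # give up after MAX_RETRIES attempts

if __name__ == "__main__":
    print("delivered" if send_reliably(b"frame-0001") else "lost after retries")
```

Note that once you add acknowledgements and retries you are rebuilding part of TCP, and the retries themselves add load to the already congested LAN.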

When does packet drop occur on a link with VoIP and TCP working concurrently?

Let's assume the TCP Reno variant.
I have this situation: a VoIP (UDP) stream and a TCP session on the same host.
Let's say that at t=10s TCP opens the session with the TCP receiver (another host); they exchange the maximum window during the 3-way handshake and then start the transfer with the slow-start approach.
At t=25s, a VoIP stream starts. Since it's a UDP stream, the aim is to saturate the receiver. Not having any congestion control, it should burst packets as fast as it can.
Since both share the same channel, and we assume that no router in the network topology goes down etc. (so no anomalies), my question is:
Is there any way packet loss can occur for the VoIP stream?
I was thinking that, since VoIP is sensitive to jitter and the slow-start approach of TCP is not really slow, packet loss could occur because the router queues add delay variation when they are "flooded" by the early TCP packets.
Is there any other reason?
A couple of comments first:
VoIP will not usually 'saturate' the receiver (or the network) - it will simply send as many packets as it needs for the particular codec you are using. In other words it won't just keep growing until it fills the network.
VoIP systems are sensitive to jitter as you note. Packet loss is actually related to this as a VoIP system will generally consider a packet lost if it arrives outside the jitter buffer window. So even though the packet may not in fact be lost, and only delayed, if it arrives outside the jitter buffer window it is effectively lost as far as the VoIP system is concerned.
Answering your specific question: yes, other traffic can create delayed packets which may appear lost to the VoIP receiver. It is worth noting that on a link where UDP and TCP share the bandwidth, TCP is better 'behaved' than UDP in that it will try to limit itself to avoid congestion. UDP does not, and hence may actually get more than its fair share of the bandwidth compared to the TCP traffic.
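A small sketch of the "late is as good as lost" rule (Python; the 60 ms jitter buffer and 20 ms packet interval are assumed values, not taken from the question):

```python
JITTER_BUFFER_S = 0.060      # playout delay added by the receiver (assumed)
PACKET_INTERVAL_S = 0.020    # e.g. one voice packet every 20 ms (assumed)

def effectively_lost(seq: int, arrival_s: float, first_arrival_s: float) -> bool:
    """A packet arriving after its scheduled playout time is discarded by
    the jitter buffer, so the codec treats it as lost even if it did arrive."""
    playout_deadline = first_arrival_s + JITTER_BUFFER_S + seq * PACKET_INTERVAL_S
    return arrival_s > playout_deadline

# Packet 5 delayed by a burst of TCP slow-start traffic in the same queue:
print(effectively_lost(5, arrival_s=0.150, first_arrival_s=0.0))  # False, plays out
print(effectively_lost(5, arrival_s=0.175, first_arrival_s=0.0))  # True, "lost"
```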

Iptables filtering performance: TCP and UDP

I am writing to ask about iptables performance for TCP and UDP filtering. I was testing it with a large number of iptables rules.
With 10,000 mixed TCP and UDP rules in the FORWARD chain, I get a TCP throughput of 35.5 Mbit/s and a UDP throughput of 25.2 Mbit/s.
I am confused about why the TCP throughput is higher than the UDP throughput. I thought TCP would be slower because of the ACK packets. I have already tested this with a Cisco ACL, and there UDP is faster.
Topology: PC ---- FW ----- PC
Firewall overhead is most significant with respect to packets, not bytes. So if the average UDP packets were smaller than the average TCP packets, then the CPU will be maxed out at a smaller number of bits-per-second with UDP than with TCP.
Conversely, if the UDP packets are large enough to cause fragmentation and the firewall is configured to reassemble fragments before inspecting them, then the reassembly will cause substantial overhead which will reduce bits-per-second throughput.
There may also be other factors specific to the firewall implementation and configuration, but I believe those two would be first-order.
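Back-of-the-envelope arithmetic for the per-packet argument (the 20,000 packets/s figure is an assumption for illustration, not a measurement of your firewall; only the relationship matters):

```python
PPS_LIMIT = 20_000                 # packets/s the firewall CPU can filter (assumed)

def throughput_mbit_s(avg_packet_bytes: int) -> float:
    """Bits/s achievable when the bottleneck is packets/s, not bytes/s."""
    return PPS_LIMIT * avg_packet_bytes * 8 / 1_000_000

print(throughput_mbit_s(1400))     # large TCP segments:  224.0 Mbit/s
print(throughput_mbit_s(200))      # small UDP datagrams:  32.0 Mbit/s
```

The same per-packet cost caps the small-packet stream at a much lower bit rate, which matches the shape of your result.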

Throughput of TCP and UDP

If we have a connection of some Mbit/s between source and destination with a known latency, and the source has two processes sending data via TCP and UDP respectively, which of the two processes will have the higher throughput, and how do I calculate it?
I am not a computer science student and I don't know networks.
TCP and UDP both use the IP layer and will both have the same network available to them. Depending on the protocol you use, you could get more throughput via UDP. This would require you to write a protocol to transfer data that is more aggressive than TCP, or to discard data without having to resend it.
If you did write a protocol more aggressive than TCP it would likely be banned by anyone managing a network that came into contact with it since it will degrade TCP sessions on that network.
If you could afford to discard data that gets lost along the way, then you wouldn't waste bandwidth resending lost packets as TCP does, and UDP would be the more natural choice. But since you care about bandwidth, I'm guessing that's not the case?
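If you want rough numbers, a common back-of-the-envelope bound is that a single TCP connection cannot exceed its window size divided by the round-trip time, while UDP is limited only by the link (the values below are assumptions, not taken from your setup):

```python
LINK_MBIT_S = 100            # assumed link capacity
RTT_S = 0.050                # assumed latency (50 ms)
TCP_WINDOW_BYTES = 65_535    # classic receive window without window scaling

tcp_cap = TCP_WINDOW_BYTES * 8 / RTT_S / 1_000_000   # about 10.5 Mbit/s here
print(f"single TCP connection capped at about {min(tcp_cap, LINK_MBIT_S):.1f} Mbit/s")
print(f"UDP can send at up to {LINK_MBIT_S} Mbit/s, minus whatever gets dropped")
```

With window scaling, which modern stacks enable by default, TCP can fill the link too, so in practice the answer depends on window size, loss rate, and latency rather than on the protocol name alone.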
