How does HTTP/3 handle packet loss?

One of the key differences between HTTP/2 and HTTP/3 is the switch from TCP to UDP.
As I understand it, TCP ensures data integrity by checking that no packets have been lost; any lost packets are requested again so that all data is correctly received.
For UDP, there is no such validation. If packets are lost then so be it.
That being said, if I make a request on HTTP/3 and a packet is lost, is there a mechanism to ensure I get all my data, or will there be a risk that my response will be missing data packets?

If packets are lost then so be it.
No. With UDP it is not "so be it"; it is up to the protocol on top of UDP to either care about packet loss, duplication and reordering, or not to care. For example, with RTP (real-time audio in VoIP etc.) some packet loss is fine, since late-arriving packets are of no use anyway (audio must be low latency), and reordering and duplication are handled in RTP with protocol-inherent sequence numbers.
For HTTP/3, on the other hand, data loss is not acceptable. HTTP/3 is built on top of QUIC, which is built on top of UDP. Packet loss is handled within QUIC (see QUIC Loss Detection and Congestion Control, RFC 9002). Thus HTTP/3 is essentially built on top of a reliable transport (QUIC), just as HTTP/1 and HTTP/2 are built on top of a reliable transport layer (TCP).
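To make the recovery idea concrete, here is a minimal, purely illustrative Python sketch of packet-number based loss recovery, roughly the mechanism RFC 9002 describes. It is not real QUIC code, and the function names and payloads are invented for the example: the sender numbers every packet, removes whatever the receiver acknowledges, and resends the data of unacknowledged packets under a new packet number, so the HTTP/3 layer above always sees a complete stream.

```python
# Illustrative only -- NOT real QUIC code. It mimics the idea from RFC 9002:
# number every packet, track which numbers were acknowledged, and resend the
# data of unacknowledged packets under a new packet number (QUIC never reuses
# a packet number for a retransmission).

sent = {}        # packet number -> payload still waiting for an ACK
next_pn = 0      # next packet number to use

def send_packet(payload, network):
    global next_pn
    sent[next_pn] = payload
    network.append((next_pn, payload))   # "puts the packet on the wire"
    next_pn += 1

def on_ack(acked_numbers):
    for pn in acked_numbers:
        sent.pop(pn, None)               # acknowledged data needs no resend

def retransmit_missing(network):
    for pn, payload in list(sent.items()):
        sent.pop(pn)                     # old packet number is declared lost
        send_packet(payload, network)    # same data, fresh packet number

network = []
send_packet(b"GET /index.html", network)
send_packet(b"rest of the request", network)
on_ack([0])                 # packet 1 was never acknowledged -> assume lost
retransmit_missing(network) # its payload goes out again as packet 2
print(sent)                 # {2: b'rest of the request'} still awaiting an ACK
```

In real QUIC the loss decision is driven by acknowledgement ranges plus packet and time thresholds, but the net effect for HTTP/3 is the same: lost data is detected and repaired below the HTTP layer.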

Related

Can TCP meltdown happen for TCP over Quic?

It is common knowledge that transferring TCP packets inside a tunnel over a TCP connection can create a devastating effect called TCP meltdown and degrade tunnel quality greatly. I am wondering whether a similar effect may happen if we try to transfer TCP data over a QUIC connection. Even though QUIC runs over UDP packets, it needs something similar to windowing to keep track of received packets in order to provide a connection-oriented protocol. So I'm not sure whether a similar effect will happen or not.
Any idea?
QUIC indeed uses congestion control similar to TCP's, see https://www.rfc-editor.org/rfc/rfc9002.html#name-congestion-control. So when tunnelling a TCP connection over a QUIC stream, I would say the same "meltdown" problems can occur (a QUIC stream has the same properties as a TCP connection: a reliable, ordered stream of bytes, so the stream will stall if QUIC packets are lost).
However, a QUIC extension is being defined for sending datagrams, https://datatracker.ietf.org/doc/html/draft-ietf-quic-datagram. That might provide a better way to transport TCP packets, as these datagrams will never be retransmitted at the QUIC level. However, it would require TCP packets to fit into a DATAGRAM frame.
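To illustrate the concern, here is a rough, purely conceptual Python sketch (not real QUIC or TCP code; the class names are invented): with a reliable tunnel both layers keep their own copy of unacknowledged data and their own recovery logic, which is the duplication behind the meltdown effect, whereas a datagram-style tunnel leaves loss recovery to the inner TCP alone.

```python
# Conceptual sketch only: why tunnelling TCP over a *reliable* transport
# stacks recovery logic. Both the inner TCP sender and a reliable tunnel keep
# their own unacknowledged-data buffers, so a single loss can trigger
# retransmissions at two layers. A datagram-style tunnel never retransmits,
# leaving loss recovery to the inner TCP alone.

class ReliableTunnel:
    """Models a QUIC stream: buffers data and retransmits until acknowledged."""
    def __init__(self):
        self.unacked = []
    def send(self, segment):
        self.unacked.append(segment)          # the tunnel will resend this itself
        print(f"tunnel buffered {segment!r} for its own retransmission")

class DatagramTunnel:
    """Models a QUIC DATAGRAM frame: fire and forget, never retransmitted."""
    def send(self, segment):
        print(f"tunnel sent {segment!r}; if it is lost, it stays lost")

class InnerTcpSender:
    """The tunnelled TCP stack keeps its own retransmission state as well."""
    def __init__(self, tunnel):
        self.tunnel = tunnel
        self.unacked = []
    def send(self, segment):
        self.unacked.append(segment)          # inner TCP will also resend this
        self.tunnel.send(segment)

InnerTcpSender(ReliableTunnel()).send(b"TCP segment 1")   # two layers hold the data
InnerTcpSender(DatagramTunnel()).send(b"TCP segment 1")   # only the inner TCP holds it
```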

UDP TCP management of server congestion

Do the TCP and UDP protocols have a way to manage their saturation?
When I write saturation, I mean network congestion: what happens if the buffer of the server is full and the client sends a UDP/TCP datagram to the server?
Do these protocols have a way to handle this scenario, or would the data be lost?
This is a question about TCP/UDP basics. For this reason this answer is not going to be a complete TCP and UDP guide.
Network congestion at low-level protocols
In case of network congestion, the data sender will usually notice it because the data-sending APIs fail (e.g. the BSD functions send() and sendto()).
For example, I have personal experience with TCP/IP over GPRS, where network problems caused the data-sending APIs to fail. In that case it was up to the sender to preserve its data in order to send it as soon as possible.
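As a rough sketch of that "preserve the data and try again" pattern (my own illustration with a plain UDP socket; the destination address and the back-off are placeholders, not the answer's actual code):

```python
import socket, time

def send_with_backlog(sock, dest, payload, backlog, retry_delay=1.0):
    """Try to send one UDP datagram; on failure, keep it for a later attempt."""
    try:
        sock.sendto(payload, dest)
    except OSError:                    # e.g. ENOBUFS or network unreachable
        backlog.append(payload)        # preserve the data instead of losing it
        time.sleep(retry_delay)        # crude back-off before the next attempt

# Hypothetical destination; the caller drains `backlog` once sends succeed again.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
backlog = []
send_with_backlog(sock, ("192.0.2.10", 5000), b"sensor reading 42", backlog)
```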
Congestion at receiver's side
That's what the asker actually had in mind.
Let's start with UDP. Really short answer: by its own design, data sent to a congested server will be lost. From Wikipedia, EN:
[...]It has no handshaking dialogues, and thus exposes the user's
program to any unreliability of the underlying network; there is no
guarantee of delivery, ordering, or duplicate protection[...]
Finally TCP. It has been designed to provide what is missing in UDP. From Wikipedia, EN:
TCP provides reliable, ordered, and error-checked delivery of a stream
of octets (bytes) between applications running on hosts communicating
via an IP network
How are these features achieved? I cannot provide a full TCP tutorial in this answer, but I can list three of TCP's fundamental traits for achieving reliability:
Sequence numbers. Packets are numbered (each packet, or more precisely each byte, has its own sequence number).
Retransmissions. The receiver sends an acknowledgement (ACK) for each packet (or, more precisely, for each byte range) it receives. If an ACK does not arrive, the sender understands that the packet has not been received and retransmits it (the number of retransmissions allowed varies between implementations and user settings).
Sliding window. To describe it in a simple way: each peer continually informs the remote peer of its window, the number of bytes it is able to receive. When congestion occurs, a peer can shrink the window so that the sender slows down until the congestion ends. A small sketch of the window idea follows this list.
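Here is a tiny, purely illustrative sketch of the advertised-window idea; it is not the real TCP algorithm, and the class and numbers are invented. The sender may only keep as many unacknowledged bytes in flight as the receiver's last advertised window allows.

```python
# Illustrative only: a sender throttled by the receiver's advertised window.
class WindowedSender:
    def __init__(self, advertised_window):
        self.window = advertised_window   # bytes the receiver says it can take
        self.in_flight = 0                # bytes sent but not yet acknowledged

    def try_send(self, data):
        if self.in_flight + len(data) > self.window:
            return False                  # window full: wait instead of sending
        self.in_flight += len(data)       # "transmit" and track it
        return True

    def on_ack(self, nbytes, new_window):
        self.in_flight -= nbytes          # acknowledged data leaves the pipe
        self.window = new_window          # receiver may shrink it under congestion

s = WindowedSender(advertised_window=4096)
print(s.try_send(b"x" * 3000))   # True: fits in the window
print(s.try_send(b"x" * 3000))   # False: would exceed the 4096-byte window
s.on_ack(3000, new_window=1024)  # a congested receiver advertises a smaller window
print(s.try_send(b"x" * 3000))   # still False: the sender has slowed down
```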
To answer the OP's question: in case of server congestion on a TCP connection, the protocol provides retransmissions and dynamic throughput management that preserve any sent data for a reasonable amount of time.
I hope this simple description helps. It has probably raised even more questions; in that case I suggest going deeper at the real sources:
RFC 768 (UDP)
RFC 793 (TCP)

Does UDP provide a response message after receiving all packets?

UDP is unreliable on packet level.
But doesn't it provide one response message after receiving all (supposed) packets?
I understand that TCP provides an ACK for every packet, whereas UDP provides one response message for all packets. Am I correct?
UDP is unreliable on packet level.
The qualification is meaningless. UDP is unreliable, period.
But doesn't it provide one response message after receiving all (supposed) packets?
No.
I understand that TCP provide ack for all packets
No. It provides enough ACKs for the sender to know how far the receiver has got: ACKs are cumulative, and can additionally be selective (SACK).
whereas UDP provides one response message for all packets.
No.
Am I correct?
No.
UDP doesn't send any responses at all. The application must do that if required.
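For instance, here is a hedged sketch of what "the application must do that" can look like with plain sockets; the addresses, ports, chunk count and the APP-ACK message are all invented for the example. The receiver sends its own confirmation datagram, because UDP itself never will.

```python
import socket

def receiver(bind_addr=("127.0.0.1", 9999), expected_chunks=3):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    peer, got = None, 0
    while got < expected_chunks:          # may wait forever if a chunk is lost
        _, peer = sock.recvfrom(2048)
        got += 1
    sock.sendto(b"APP-ACK: all chunks received", peer)   # application-level reply

def sender(dest=("127.0.0.1", 9999)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for chunk in (b"part-1", b"part-2", b"part-3"):
        sock.sendto(chunk, dest)          # fire and forget: UDP itself never ACKs
    sock.settimeout(2.0)
    try:
        reply, _ = sock.recvfrom(2048)    # only the application's own reply arrives
        print(reply.decode())
    except socket.timeout:
        print("no reply: some datagram was probably lost")
```

Run receiver() in one process and sender() in another: if any datagram is lost, the receiver keeps waiting and the sender times out, which is exactly the kind of handling the application has to provide itself.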
A TCP connection provides reliable communication between hosts on the internet. To ensure reliability, TCP maintains state information and packet sequencing on both the client and the server side of the duplex connection. This state information is used to acknowledge received packets; there is a variety of techniques for synchronising and re-requesting lost packets, and the number of acknowledgements varies accordingly.
UDP is used where reliability is not a requirement, so no state information needs to be maintained, which makes for a more lightweight protocol. One example of its use is streaming applications. No acknowledgement is sent in UDP.

Advantages of UDP over TCP?

TCP has greater computational overhead to ensure reliable delivery of packets. But since modern networks are fast, is there any scenario in which the performance of UDP outweighs the reliability of TCP?
Is there any other particular advantage of UDP over TCP?
I can see two cases, where UDP would have an upper hand over TCP.
First, one of the attractive features of UDP is that, since it neither retransmits lost packets nor does any connection setup, sending data incurs less delay. This lower delay makes UDP an appealing choice for delay-sensitive applications like audio and video.
Second, multicast applications are built on top of UDP since they have to send point-to-multipoint. Using TCP for multicast would be hard, since the sender would have to keep track of retransmissions and the sending rate for multiple receivers.
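As a small illustration of the multicast point (a minimal sketch; the group address and port are placeholders): a single UDP send can reach every subscribed receiver, which TCP's one-to-one connections cannot do.

```python
import socket

MCAST_GRP, MCAST_PORT = "239.1.1.1", 5007        # placeholder group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local net
sock.sendto(b"one datagram, delivered to every group member", (MCAST_GRP, MCAST_PORT))
```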
It depends on your usage. If your application is time sensitive, like Voice over IP, then you don't care about missing packets. What you care about is the delay in that case.
You should have a look at this answer: What are examples of TCP and UDP in real life?
You could also look at the Wikipedia related section: http://en.wikipedia.org/wiki/User_Datagram_Protocol#Comparison_of_UDP_and_TCP
Applications that require a constant data flow or bulk data, and that value speed over reliability, use UDP rather than TCP.
UDP gives the application better control over what data is sent and when: the data is packaged into a UDP segment and immediately passed to the network layer, a no-frills segment delivery service.
There is no connection establishment, hence no setup delay (unlike TCP, which requires a handshake before the actual data transfer).
There is no connection state to maintain in the end systems (no send and receive buffers, congestion-control parameters, or sequence and acknowledgement numbers), so more active clients can be supported.
UDP has a small packet header overhead of only 8 bytes, whereas the TCP header is 20 bytes (see the header-size sketch after this list).
Facebook uses UDP instead of TCP/IP to connect to their Memcached servers.
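To make the 8-versus-20-byte point concrete, here is a tiny sketch that uses Python's struct module to compute the size of the fixed header fields laid out in RFC 768 (UDP) and RFC 793 (TCP); it counts only the mandatory fields, so TCP options would add more.

```python
import struct

# UDP header (RFC 768): source port, destination port, length, checksum
UDP_HEADER = "!HHHH"
# TCP header (RFC 793), options excluded: source port, destination port,
# sequence number, acknowledgement number, data offset/reserved and flags
# (2 bytes), window, checksum, urgent pointer
TCP_HEADER = "!HHIIBBHHH"

print(struct.calcsize(UDP_HEADER))   # 8
print(struct.calcsize(TCP_HEADER))   # 20
```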
There are a couple of differences between UDP and TCP.
First, TCP is connection-based whereas UDP is connectionless.
Connection-based: it makes sure that all messages arrive, and arrive in the correct order.
Connectionless: it does not guarantee order or completeness.
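As a small illustration of that difference with plain Python sockets (the server address is a placeholder and this is only a sketch): TCP needs connect() before any data can flow, while UDP simply addresses each datagram individually.

```python
import socket

SERVER = ("192.0.2.10", 8080)   # placeholder address

# TCP: connection-based -- a handshake happens inside connect(), after which
# the kernel guarantees ordered, complete delivery on this connection.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(SERVER)
tcp.sendall(b"hello over an established connection")
tcp.close()

# UDP: connectionless -- every datagram carries the destination itself,
# and nothing guarantees it arrives, arrives once, or arrives in order.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello, maybe", SERVER)
udp.close()
```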
Second, here is why UDP is faster than TCP:
UDP does not require ACK message back
UDP has no flow control
No duplication verification at the receiving end
Shorter header

Connectionless UDP vs connection-oriented TCP over an open connection

If a TCP connection is established between a client and a server, is sending data faster on this connection-oriented route than on a connectionless one, given that there is less header info in the packets? The TCP connection is opened once and bytes are sent down the open connection as and when required. Or would UDP still be the better choice via a connectionless route, where each packet contains the destination address?
Is sending packets via an established TCP connection (after all the handshaking has been done) faster than UDP?
I suggest you read a little bit more about this topic.
Just as a quick answer: TCP makes sure that all the packets are delivered, so if one is dropped for whatever reason, the sender keeps resending it until the receiver gets it. UDP, however, sends a packet and simply forgets about it, so you might lose some of the packets. As a result, UDP sends fewer packets over the network.
This is why UDP is used for video: losing a small amount of data is not a big deal, and even if the sender sent it again it would arrive too late for the receiver to use, so UDP is the better fit. In contrast, you don't want your online banking to run over UDP!
Edit: Remember, the speed of sending packets is almost the same for UDP and TCP and depends on the network. However, even after the TCP handshake is done, the receiver still has to send ACKs and the sender has to wait for them before sending the next batch of data, so TCP would still be a little slower.
In general, TCP is marginally slower even if it carried less header info, because the packets must arrive, and must arrive in order; with UDP neither is checked.
