Why am I unable to prioritize TCP traffic using ToS fields? - networking

I am trying to prioritize TCP traffic using the ToS field in the IP header.
I am saturating the (Ethernet) interface by sending 1 GB of data through iperf with the ToS field set to 0x10 (Minimize-Delay).
I then start another TCP client with default ToS (0).
Expectation :
My TCP client should not send data till iperf completes sending its data.
Result:
The data from my client is sent even though iperf is sending packets with higher priority.
I also tried to create the same scenario with two separate clients, assigning ToS values 0x10 and 0x08 to them using iptables.
I used :
iptables -A PREROUTING -t mangle -p tcp --sport 5000 -j TOS --set-tos Minimize-Delay
I am still not able to prioritize one client over the other, although I can see the packets marked with ToS in Wireshark.
I am using Ubuntu (14.04) with iptables version 1.4.21
Can someone kindly help me solve the issue?
Thanks
Varun

TL;DR
Simply setting ToS or DSCP markings on packets does nothing. You actually need to configure a device to do different things with different markings. If you want that to include queuing, you need to configure queues, and assign different markings to different queues.
A More Complete Explanation
What you want is QoS. QoS is a huge subject, but I will try to explain a few things. A guide to using QoS on Linux can be found in the Traffic Control HOWTO.
ToS, which has really been supplanted by DSCP (Differentiated Services Code Point), is simply marking packets to differentiate the various packets for different treatment at some point. The ToS field was part of the original IPv4 packet specification, but there is nothing in the standard that mandates a device must use or respect that field.
QoS involves differentiating (marking) packets, and then doing something based on the marking. What you do with the markings can be things like shaping, policing, queuing (including priority queuing).
A hardware interface will have a FIFO queue, and that is the default and only queue, regardless of packet markings. The hardware is completely unaware of packet headers or markings.
Actually using the markings is usually done in a layer-3 network device, e.g. a router. For instance, you can configure different software queues for a router interface, and you can assign packets with different markings to different queues. Queues are relatively small, not like real buffers. Priority queues will be served before regular queues. Queues don't exist until you define them, and packets are not assigned to different queues unless you have configured rules to do that. You could even assign BE (Best Effort, ToS 0) packets to a priority queue and EF (Expedited Forwarding) packets to a low-priority queue; the markings mean only what your configuration says they mean.
When a queue fills up, new packets destined for that queue will be dropped (called tail-drop). Tail-drop can be a problem for TCP because it can cause all TCP flows using a queue to become synchronized (global synchronization) where they back off and ramp up synchronously, alternately starving or flooding a queue. There are methods to try to prevent this, e.g. RED (Random Early Detection). RED will actually drop random packets in a queue. This is to force the various TCP flows using the queue to back off and ramp up on different schedules.
Many network switches will automatically assign BE (Best Effort, ToS 0) to anything coming into the switch, unless you have configured the switch to trust the markings on one or more interfaces. Routers will typically trust the markings, but they will not do anything with the markings unless you configure them to do so.
QoS will not work across the Internet, only within your network. You need a comprehensive set of QoS policies that are consistently implemented across your network. You may be able to negotiate, for a fee, for your ISP to respect some of your QoS markings and policies, but that only goes as far as your ISP. As your traffic leaves your network, or your ISP's network if you have an agreement with it, your QoS markings and policies will be completely ignored, and the packets will probably be reset to BE.
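To make that concrete, here is a minimal, hedged sketch of what "configuring queues and assigning markings to them" can look like on Linux with tc from iproute2. The interface name eth0 and the band assignments are placeholders, and this is only one of many possible setups; see the Traffic Control HOWTO for the real details.

# Sketch only: replace the default qdisc on the egress interface with a 3-band priority qdisc.
tc qdisc add dev eth0 root handle 1: prio

# Send packets whose ToS byte is 0x10 (Minimize-Delay) to the highest-priority band.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip tos 0x10 0xff flowid 1:1

# Everything else falls through to a lower-priority band.
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match u32 0 0 flowid 1:2

# Note: for traffic generated on the same box, an iptables TOS rule belongs in the mangle
# OUTPUT (or POSTROUTING) chain; PREROUTING only sees packets arriving on an interface,
# so it will not match packets sent by a local client.

Exact behaviour will also depend on your kernel's default qdisc and on where the bottleneck actually is, so treat this as a starting point rather than a recipe.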

Related

If TCP is connection oriented why do packets follow different paths?

As I understand it, when an internet application is designed, we should use either a connection-oriented service or a connectionless service, but not both.
The Internet's connection-oriented service is TCP and its connectionless service is UDP, and both reside in the transport layer of the Internet protocol stack.
The Internet's only network layer is IP, which is a connectionless service. So whatever application we design eventually uses IP to transmit its packets.
Connection-oriented services use the same path to transmit all the packets, while connectionless services do not.
Therefore my problem is this:
if a connection-oriented application has been designed, it should transmit its packets along the same path. But IP breaks that rule by using different routes. So how do TCP and IP work together in this sense? It totally confuses me.
You, my friend, are confusing the functionality of two different layers.
TCP is connection-oriented in the sense that there is a connection establishment between the two ends, where they may negotiate things such as the congestion-control mechanism.
The general purpose of a transport-layer protocol is to provide process-to-process delivery, meaning it knows nothing about routes; how your packets reach the end system is beyond its scope. It is only concerned with how data is exchanged between the two end PROCESSES.
IP, on the other hand, the network-layer protocol for the Internet, is concerned with data delivery between end systems, yet it is connectionless: it maintains no connection, so each packet is handled independently of the other packets.
Once a packet leaves your system, each router chooses the path it sees fit for EACH packet, and this path may change depending on availability/congestion.
How does that answer your question?
TCP will make sure packets reach the other process, it won't care HOW they got there.
IP, on the other hand, will not care whether they reach the other end at all; it will simply forward each packet along whatever route it sees as most fit for that particular packet.
Note:
Let's assume that IP were connection-oriented; would that mean packets would follow the same path?
Not necessarily. It depends on what the word 'connection' means at this layer: if it means negotiating certain options related to security, for instance, you may still have all your packets forwarded through different routes over the Internet.
EDIT:
Not to confuse you, though: for most connection-oriented services at the network layer and below, establishing the connection also establishes a virtual path that all 'packets' must follow. For further information, read about:
Virtual circuit and frame-relay networks
This link answers your question pretty well http://www.tcpipguide.com/free/t_ConnectionOrientedandConnectionlessProtocols-3.htm
Some people consider this (TCP) to be like a “simulation” of circuit-switching at higher network layers; this is perhaps a bit of a dubious analogy. Even though a TCP connection can be used to send data back and forth between devices, all that data is indeed still being sent as packets; there is no real circuit between the devices. This means that TCP must deal with all the potential pitfalls of packet-switched communication, such as the potential for data loss or receipt of data pieces in the incorrect order.
The TCP protocol deals with the problem of IP packets arriving out of order or being lost, to give you the feeling they arrive through a single FIFO channel. Yes, TCP is smart enough to do that, there's no need for a dedicated underlying channel.
The TCP protocol is implemented by the sending/receiving machines. Once the packets leave the sending machine, the routers they travel through know nothing about TCP; they just use IP to get the packets from the source to the destination. It is then the destination machine's job, using TCP, to make sure that all the packets arrive and that they arrive in the correct order. The internet itself doesn't know anything about TCP; it's just a layer (often software) that provides a connection on top of a connectionless medium (the internet).
So once a packet leaves the source, it can go along (mostly) any path as long as it gets to the destination, regardless of the higher-level protocol (such as TCP or UDP).
I mean, it's a bit more complicated than that, but as far as I can remember that's the general idea.
A few short points:
1) Connection-oriented means reserving resources (buffers, CPU, bandwidth, etc.). But where are those resources reserved? That "where" is the source of your confusion, so read on.
2) Connection-oriented at the transport layer means reserving resources at both end processes/ports. Since TCP is a transport-layer protocol, its responsibility is to reserve resources at the two end processes only, irrespective of what happens on the intermediate path.
3) Connection-oriented at the network layer means reserving resources in the network itself. On a packet's journey from source to destination, the network layer is present at every intermediate router (the transport layer is not). So if a network-layer protocol is connection-oriented, it has to reserve resources at all intermediate routers too, and all packets have to follow the same intermediate path. But IP is connectionless, so intermediate resources are not reserved, and different packets may follow different paths.
CONCLUSION: the intermediate path is decided by the network layer, so with IP the paths may differ (and IP may be carrying TCP). TCP is responsible for resource reservation at the two end processes, irrespective of the packets' intermediate path.
Routers work on three layers only (physical, data link and network), so a router makes its forwarding decision based only on network-layer information (the IP header); no TCP or UDP information is available at the router.

How much additional load does a multicast subscriber impose on a switch?

If you have a switch with at least one subscriber to a multicast address, how much additional load would each additional subscriber add?
Example:
You have a 10G switch (with IGMP) with 10 servers and no other activity.
When Server1 subscribes to a 1G multicast feed, the switch will have 1G of load.
What would the load be after Server2 and Server3 subscribed?
Obviously traffic to the switch would not increase, but what about the switch's internal load?
How would the answer be different without IGMP?
The whole idea of multicast is that it is efficient. The presence of one subscriber downstream causes the switch to send an IGMP join request of its own upstream and pass incoming multicasts downstream, without duplication. The addition of further downstream subscribers has no effect at all except to increment an internal subscriber count for that group. When that goes back to zero it sends an IGMP leave request of its own upstream.
I don't know what you mean by 'without IGMP'. There is no such thing as UDP multicast without IGMP. It is a contradiction in terms.
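If you want to watch this mechanism on a Linux host, the commands below are a small sketch (the interface name eth0 is just a placeholder): the first lists the multicast groups the host has currently joined, and the second shows the IGMP membership reports and leave messages on the wire as subscribers come and go.

# List the multicast group memberships on one interface.
ip maddr show dev eth0

# Watch IGMP joins/leaves as receivers subscribe and unsubscribe.
tcpdump -ni eth0 igmp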
Firstly, some background information for you.
The traditional definition of routers and switches are along the lines of:
Router: a device capable of routing a packet from one IP subnet to a different IP subnet
Switch: a device capable of switching a packet within the same IP subnet
However, this traditional definition no longer holds these days because we have switches that can route traffic from one IP subnet to another IP subnet and even perform complex operations such as QoS at wire speed.
Therefore it is often easier to redefine Routers and Switches as follows:
Router: a device that uses the CPU to route packets, often inspecting parts of packets that are higher up the OSI stack.
Switch: a device with ASIC(s) (a.k.a switching chips) that switches/routes traffic at full wire speed. What this means is that if the switch has 24 1Gbps ports, it will be able to switch 24Gbps bi-directional traffic without dropping any packets.
Now, to answer your question, it is important to determine whether the ASIC in your switch is capable of handling multicast traffic or not. If so, adding "load" really isn't an issue, as long as you ensure that each switch port is not congested (e.g. 2Gbps of traffic trying to egress out of 1Gbps port). If the ASIC in your switch is NOT capable of handling multicast traffic, it is highly likely that the switch will simply send all multicast traffic up to the CPU. Then it would be up to the software to determine where each packet goes. CPUs on switches are not powerful, because their primary role isn't to route/switch packets, but to manage the switch (e.g. configure the ASIC so that packets get switched properly). Therefore, if your switch is sending packets up to the CPU, the switch will struggle. You won't get anywhere near 1Gbps of multicast via the CPU.
Without IGMP, switches, by default, will flood out the traffic on all ports. Again, this is not a problem for the switch itself because it can handle this at wirespeed. It may cause problems for other parts of the network because traffic is needlessly being duplicated.
The reason for this long answer is that the phrase "10G switch" in your example is quite misleading, and it led me to believe that you may be thinking a powerful CPU sits at the center of the switch, capable of performing 10Gbps bi-directional switching. This is simply not the case, and talking about "load" on a switch therefore often makes little sense.
I hope this helps.

TCP vs UDP on video stream

I just came home from my exam in network-programming, and one of the question they asked us was "If you are going to stream video, would you use TCP or UDP? Give an explanation for both stored video and live video-streams". To this question they simply expected a short answer of TCP for stored video and UDP for live video, but I thought about this on my way home, and is it necessarily better to use UDP for streaming live video? I mean, if you have the bandwidth for it, and say you are streaming a soccer match, or concert for that matter, do you really need to use UDP?
Let's say that while you are streaming this concert or whatever using TCP, you start losing packets (something bad happened in some network between you and the sender), and for a whole minute you don't get any packets. The video stream will pause, and after the minute is gone packets start to get through again (IP found a new route for you). What would then happen is that TCP would retransmit the minute you lost and continue sending you the live stream. Assuming the bandwidth is higher than the bit rate of the stream, and the ping is not too high, then in a short amount of time the one minute you lost will act as a buffer for the stream; that way, if packet loss happens again, you won't notice.
Now, I can think of some applications where this wouldn't be a good idea, for instance video conferencing, where you need to always be at the end of the stream, because delay during a video chat is just horrible; but during a soccer match, or a concert for that matter, what does it matter if you are a single minute behind the stream? Plus, you are guaranteed to get all the data, and it would be better for later viewing when it comes in without any errors.
So this brings me to my question. Are there any drawbacks that I don't know of about using TCP for live-streaming? Or should it really be, that if you have the bandwidth for it you should go for TCP given that it is "nicer" to the network (flow-control)?
Drawbacks of using TCP for live video:
As you mentioned, TCP buffers the unacknowledged segments for every client. In some cases this is undesirable, such as TCP streaming for very popular live events: your list of simultaneous clients (and the buffering requirement) is large in this case. Pre-recorded video-casts typically don't have as much of a problem with this because viewers tend to stagger their replay activity.
TCP's delivery guarantees are a blocking function which isn't helpful in interactive conversations. Assume your network connection drops for 15 seconds. When we miss part of a conversation, we naturally ask the person to repeat (or the other party will proactively repeat if it seems like you missed something). UDP doesn't care if you missed part of a conversation for the last 15 seconds; it keeps working as if nothing happened. On the other hand, the app could be designed for TCP to replay the last 15 seconds (and the person on the other end may not want or know about that). Such a replay by TCP aggravates the problem, and makes it more difficult to stay in sync with other parties in the conversation. Comparing TCP and UDP’s behavior in the face of packet loss, one could say that it’s easier for UDP to stay in sync with the state of an interactive conversation.
IP multicast significantly reduces video bandwidth requirements for large audiences; multicast requires UDP (and is incompatible with TCP). Note - multicast is generally restricted to private networks. Please note that multicast over the internet is not common. I would also point out that operating multicast networks is more complicated than operating typical unicast networks.
FYI, please don't use the word "packages" when describing networks. Networks send "packets".
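If you want to see these trade-offs for yourself, iperf3 makes a rough comparison easy. This is only a sketch; the hostname and the 5M rate are placeholders, and the receiver is assumed to be running iperf3 -s.

# TCP test: iperf3 reports retransmissions; lost segments are resent and add latency.
iperf3 -c receiver.example.com -t 30

# UDP test at a fixed bitrate: iperf3 reports loss and jitter instead of retransmits,
# which is closer to how a live video stream experiences an imperfect network.
iperf3 -c receiver.example.com -u -b 5M -t 30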
but during a soccer-match, or a concert what does it matter if you are a single minute behind the stream?
To some soccer fans, quite a bit. It has been remarked that delays of even a few seconds in digital video streams, due to encoding (or whatever), can be very annoying when, during high-profile events such as World Cup matches, you can hear the cheers and groans from the guys next door (who are watching an undelayed analog program) before you get to see the game moves that caused them.
I think that to someone caring a lot about sports (and those are the biggest group of paying customers for digital TV, at least here in Germany), being a minute behind in a live video stream would be completely unacceptable (As in, they'd switch to your competitor where this doesn't happen).
Usually a video stream is somewhat fault-tolerant. So if some packets get lost (because some router along the way is overloaded, for example), it will still be able to display the content, but with reduced quality.
If your live stream used TCP/IP, it would be forced to wait for those dropped packets before it could continue processing newer data.
That's doubly bad:
old data gets re-transmitted (probably for a frame that was already displayed and is therefore worthless), and
new data can't arrive until after old data was re-transmitted
If your goal is to display as up-to-date information as possible (and for a live-stream you usually want to be up-to-date, even if your frames look a bit worse), then TCP will work against you.
For a recorded stream the situation is slightly different: you'll probably be buffering a lot more (possibly several minutes!) and would rather have data re-transmitted than have artifacts due to lost packets. In this case TCP is a good match (this could still be implemented over UDP, of course, but TCP doesn't have as many drawbacks as in the live-stream case).
There are some use cases suitable to UDP transport and others suitable to TCP transport.
The use case also dictates the encoding settings for the video. When broadcasting a soccer match the focus is on quality; for a video conference the focus is on latency.
When multicast is used to deliver video to your customers, UDP is used.
Multicast requires capable (and often expensive) networking hardware between the broadcasting server and the customer. In practice this means that if your company owns the network infrastructure, you can use UDP and multicast for live video streaming. Even then, quality of service is also implemented to mark video packets and prioritize them so that no packet loss happens.
Multicast simplifies the broadcasting software because the network hardware handles distributing packets to customers. Customers subscribe to multicast channels and the network reconfigures itself to route packets to the new subscriber. By default all channels are available to all customers and can be optimally routed.
This workflow makes authorization difficult: the network hardware does not differentiate subscribed users from other users. The solution is to encrypt the video content and enable decryption in the player software when the subscription is valid.
A unicast (TCP) workflow allows the server to check a client's credentials and only allow valid subscriptions, or even to allow only a certain number of simultaneous connections.
Multicast is not enabled over the internet.
For delivering video over the internet, TCP must be used. When UDP is used, developers end up re-implementing packet re-transmission; see, for example, the BitTorrent P2P live protocol.
"If you use TCP, the OS must buffer the unacknowledged segments for every client. This is undesirable, particularly in the case of live events".
This buffer must exist in some form; the same is true of the jitter buffer on the player side. It is called the "socket buffer", and the server software can detect when this buffer is full and discard the appropriate video frames for live streams. It is better to use the unicast/TCP method because the server software can implement proper frame-dropping logic. Randomly missing packets in the UDP case just create a bad user experience, like in this video: http://tinypic.com/r/2qn89xz/9
"IP multicast significantly reduces video bandwidth requirements for large audiences"
This is true for private networks; multicast is not enabled over the internet.
"Note that if TCP loses too many packets, the connection dies; thus, UDP gives you much more control for this application since UDP doesn't care about network transport layer drops."
UDP also doesn't care about dropping entire frames or groups of frames, so it does not give you any more control over the user experience.
"Usually a video stream is somewhat fault tolerant"
Encoded video is not fault-tolerant. When it is transmitted over an unreliable transport, forward error correction is added to the video container. A good example is the MPEG-TS container used in satellite video broadcasts, which carries several audio, video, EPG, etc. streams. This is necessary because a satellite link is not duplex communication, meaning the receiver can't request re-transmission of lost packets.
When you have duplex communication available, it is always better to re-transmit data only to the clients experiencing packet loss than to include the overhead of forward error correction in the stream sent to all clients.
In any case, lost packets are unacceptable; dropped frames are OK in exceptional cases when bandwidth is constrained.
The result of missing packets is visual artifacts.
Some decoders can break on streams missing packets in critical places.
I recommend looking at the new P2P live protocol, BitTorrent Live.
As for streaming, it's better to use UDP, first because it lowers the load on servers, but mostly because you can send packets with multicast, which is simpler than sending the stream to each connected client separately.
It depends. How critical is the content you are streaming? If critical use TCP. This may cause issues in bandwidth, video quality (you might have to use a lower quality to deal with latency), and latency. But if you need the content to guaranteed get there, use it.
Otherwise UDP should be fine if the stream is not critical and would be preferred because UDP tends to have less overhead.
One of the biggest problems with delivering live events on the Internet is scale, and TCP doesn't scale well. For example, when you are beaming a live football match (as opposed to an on-demand movie playback) the number of people watching can easily be 1000 times greater. In such a scenario, using TCP is a death sentence for the CDNs (content delivery networks).
There are a couple of main reasons why TCP doesn't scale well:
One of the largest tradeoffs of TCP is the variability of the throughput achievable between the sender and the receiver. When streaming video over the Internet, the video packets must traverse multiple routers, each connected by links of different speeds. The TCP algorithm starts with a small TCP window, then grows it until packet loss is detected; packet loss is considered a sign of congestion, and TCP responds by drastically reducing the window size to avoid it, which in turn reduces the effective throughput immediately. Now imagine a network path with 6-7 router hops to the client (a very normal scenario): if any of the intermediate routers loses a packet, TCP reduces the transmission rate for that connection. The traffic rate follows an hourglass-like shape, always going up and down between the intermediate routers, rendering the effective throughput much lower compared to best-effort UDP (you can watch this window behaviour with the ss sketch after this list).
As you may already know, TCP is an acknowledgement-based protocol. Let's say, for example, that a sender is 50ms away (i.e. the latency between the two points). This would mean the time it takes for a packet to be sent to the receiver and for the receiver to send an acknowledgement back is 100ms; thus the maximum possible throughput, compared with UDP-based transmission, is already halved.
TCP doesn't support multicasting or the newly emerging multicast standard AMT, which means the CDNs don't have the opportunity to reduce network traffic by replicating the packets when many clients are watching the same content. That by itself is a big enough reason for CDNs (like Akamai or Level3) not to go with TCP for live streams.
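As an aside, you can watch this congestion-window behaviour on a live Linux sender with ss from iproute2. This is just a sketch: the address 192.0.2.10 is a placeholder for the receiver, and the exact fields printed vary by kernel version.

# Show TCP internals (cwnd, rtt, retrans, pacing rate) for connections to the receiver.
ss -ti dst 192.0.2.10

# Re-run it during a transfer: after losses you can see cwnd collapse and ramp up again,
# which is the throughput variability described in point 1 above.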
While reading the TCP vs. UDP debate I noticed a logical flaw. A TCP packet loss causing a one-minute delay that's converted into a one-minute buffer can't be equated with UDP dropping a full minute while experiencing the same loss. A fairer comparison is as follows.
TCP experiences a packet loss. The video is stopped while TCP resends packets in an attempt to stream mathematically perfect packets. Video is delayed for one minute and picks up where it left off after the missing packet reaches its destination. We all wait, but we know we won't miss a single pixel.
UDP experiences a packet loss. For a second during the video stream a corner of the screen gets a little blurry. No one notices and the show goes on without looking for the lost packets.
Anything that streams gains the most benefit from UDP. The packet loss causing a one-minute delay for TCP would not cause a one-minute delay for UDP. Considering that most systems use multiple-resolution streams that go blocky when starved of packets, it makes even more sense to use UDP.
UDP FTW when streaming.
If the bandwidth is far higher than the bitrate, I would recommend TCP for unicast live video streaming.
Case 1: Consecutive packets are lost for a duration of several seconds. => live video will stop on the client side whatever the transport layer is (TCP or UDP). When the network recovers:
- if TCP is used, the client video player can choose to restart the stream at the first lost packet (a timeshift) OR to drop all late packets and restart the video stream with no timeshift.
- if UDP is used, there is no choice on the client side; the video restarts live without any timeshift.
=> TCP equal or better.
Case 2: some packets are randomly and frequently lost on the network (you can reproduce this with the netem sketch after this answer).
- if TCP is used, these packets will be immediately retransmitted and with a correct jitter buffer, there will be no impact on the video stream quality/latency.
- if UDP is used, video quality will be poor.
=> TCP much better
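If you want to reproduce Case 2 on a test box, netem can inject random loss on an interface so you can compare a TCP stream and a UDP stream over the same impaired link. This is a sketch only; eth0 and the 2% figure are placeholders, and it belongs on a lab interface, not a production one.

# Emulate 2% random packet loss on the egress interface.
tc qdisc add dev eth0 root netem loss 2%

# ...run a TCP stream and a UDP stream across it (e.g. with iperf3) and compare:
# TCP hides the loss behind retransmissions and extra latency, UDP shows it as lost packets.

# Remove the emulation afterwards.
tc qdisc del dev eth0 root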
Besides all the other reasons, UDP can use multicast. Supporting 1000s of TCP users all transmitting the same data wastes bandwidth.
However, there is another important reason for using TCP.
TCP can much more easily pass through firewalls and NATs. Depending on your NAT and operator, you may not even be able to receive a UDP stream due to problems with UDP hole punching.
For video streaming bandwidth is likely the constraint on the system. Using multicast you can greatly reduce the amount of upstream bandwidth used. With UDP you can easily multicast your packets to all connected terminals.
You could also use a reliable multicast protocol; one is called Pragmatic General Multicast (PGM). I don't know much about it, and I guess it isn't widely used.
All the 'use UDP' answers assume an open network and a 'stuff it in as fast as you can' approach. That is fine for old-style closed-garden dedicated audio/video networks, which are a vanishing breed.
In the actual world, your transmission will go through firewalls (which will drop multicast and sometimes UDP), and the network is shared with other, more important ($$$) apps, so you want abusers to be punished by TCP's window scaling.
Here's the thing: it is more a matter of content than a matter of time. The TCP protocol requires that a packet that was not delivered be checked, verified and redelivered. UDP has no such requirement. So if you send a file that consists of millions of packets using UDP, like a video, and some of the packets are missing on delivery, they will most likely go unmissed.

Difference between TCP and UDP?

What is the difference between TCP and UDP?
I know that TCP is used in the case of non-time critical applications, and UDP is used for games or applications that require fast transmission of data. I know that TCP is used for HTTP, HTTPs, FTP, SMTP, and Telnet. I know that UDP is used for DNS and DHCP.
But why? What characteristics of TCP and UDP make it useful for their respective use cases?
TCP is a connection-oriented stream over an IP network. It guarantees that all sent packets will reach the destination in the correct order. This implies the use of acknowledgement packets sent back to the sender, and automatic retransmission, causing additional delays and a generally less efficient transmission than UDP.
UDP is a connectionless protocol. Communication is datagram-oriented. Integrity is guaranteed only for the single datagram. Datagrams reach the destination and can arrive out of order, or may not arrive at all. It is more efficient than TCP because it does not use ACKs. It's generally used for real-time communication, where a small amount of packet loss is preferable to the overhead of a TCP connection.
In certain situations UDP is used because it allows broadcast packet transmission. This is sometimes fundamental, as in the DHCP protocol, because the client machine has not yet received an IP address (assigning one is the purpose of the DHCP negotiation) and there would be no way to establish a TCP stream without an IP address.
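You can actually watch that broadcast happen with tcpdump. This is a sketch (eth0 is a placeholder interface): a DHCP DISCOVER typically shows up as a UDP datagram from 0.0.0.0 port 68 to 255.255.255.255 port 67, sent before the client has any address at all, which is exactly why TCP cannot be used at that point.

# Capture DHCP traffic on the interface; renew a lease on a client to trigger it.
tcpdump -ni eth0 'udp and (port 67 or port 68)'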
From the Skullbox article:
TCP (Transmission Control Protocol) is the most commonly used protocol on the Internet.
The reason for this is because TCP offers error correction. When the TCP protocol is used there is a "guaranteed delivery." This is due largely in part to a method called "flow control." Flow control determines when data needs to be re-sent, and stops the flow of data until previous packets are successfully transferred. This works because if a packet of data is sent, a collision may occur. When this happens, the client re-requests the packet from the server until the whole packet is complete and is identical to its original.
UDP (User Datagram Protocol) is another commonly used protocol on the Internet. However, UDP is never used to send important data such as webpages, database information, etc; UDP is commonly used for streaming audio and video. Streaming media such as Windows Media audio files (.WMA), Real Player (.RM), and others use UDP because it offers speed! The reason UDP is faster than TCP is that there is no form of flow control or error correction. The data sent over the Internet is affected by collisions, and errors will be present. Remember that UDP is only concerned with speed. This is the main reason why streaming media is not high quality.
1) TCP is connection-oriented and reliable, whereas UDP is connectionless and unreliable.
2) TCP needs more processing at the network interface level, whereas UDP does not.
3) TCP uses a 3-way handshake, congestion control, flow control and other mechanisms to ensure reliable transmission.
4) UDP is mostly used in cases where packet delay is more serious than packet loss.
Think of TCP as a dedicated scheduled UPS/FedEx pickup/dropoff of packages between two locations, while UDP is the equivalent of throwing a postcard in a mailbox.
UPS/FedEx will do their damndest to make sure that the package you mail off gets there, and get it there on time. With the post card, you're lucky if it arrives at all, and it may arrive out of order or late (how many times have you gotten a postcard from someone AFTER they've gotten home from the vacation?)
TCP is as close to a guaranteed delivery protocol as you can get, while UDP is just "best effort".
Reasons UDP is used for DNS and DHCP:
DNS - TCP requires more resources from the server (which listens for connections) than it does from the client. In particular, when a TCP connection is closed, the server is required to remember the connection's details (holding them in memory) for a couple of minutes, in a state known as TIME-WAIT. This is a feature that defends against erroneously repeated packets from a preceding connection being interpreted as part of the current connection. Maintaining TIME-WAIT state uses up kernel memory on the server. DNS requests are small and arrive frequently from many different clients; this usage pattern concentrates the load on the server rather than the clients. It was believed that using UDP, which has no connections and no state to maintain on either client or server, would ameliorate this problem (the dig example after this answer shows the two transports side by side).
DHCP - DHCP is an extension of BOOTP. BOOTP is a protocol which client computers use to get configuration information from a server while the client is booting. In order to locate the server, a broadcast is sent asking for BOOTP (or DHCP) servers. Broadcasts can only be sent via a connectionless protocol such as UDP, so BOOTP requires at least one UDP packet for the server-locating broadcast. Furthermore, because BOOTP runs while the client... boots, which is a time when the client may not have its entire TCP/IP stack loaded and running, UDP may be the only protocol the client is ready to handle at that point. Finally, some DHCP/BOOTP clients have only UDP on board. For example, some IP thermostats only implement UDP. The reason is that they are built with such tiny processors and so little memory that they are unable to perform TCP -- yet they still need to get an IP address when they boot.
As others have mentioned, UDP is also useful for streaming media, especially audio. Conversations sound better under network lag if you simply drop the delayed packets. You can do that with UDP, but with TCP all you get during lag is a pause, followed by audio that will always be delayed by as much as it has already paused. For two-way phone-style conversations, this is unacceptable.
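A quick way to see the DNS point in practice is dig, which queries over UDP by default and can be forced onto TCP. Sketch only; example.com is a placeholder name.

# Default: one UDP request, one UDP reply.
dig example.com A

# Forced onto TCP: the same answer, but preceded by a connection handshake and teardown,
# i.e. the extra per-connection state the answer above describes.
dig +tcp example.com A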
One of the differences, in short:
UDP: send the message and don't look back to check whether it reached the destination; a connectionless protocol.
TCP: send the message and guarantee that it reaches the destination; a connection-oriented protocol.
TCP establishes a connection before the actual data transmission takes place; UDP does not. In this way, UDP can provide faster delivery. Applications like DNS and time-server access therefore use UDP.
Unlike UDP, TCP uses congestion control. It responds to network load: it slows down when network congestion is imminent. So applications like multimedia that prefer constant throughput might go for UDP.
Besides, UDP is unreliable; it doesn't react to packet loss. So loss-tolerant applications like multimedia transmission prefer UDP. TCP, however, is a reliable protocol, so applications that require reliability, such as web transfer, email and file download, prefer TCP.
Also, in today's internet UDP is not as welcome as TCP because of middleboxes. Some applications, like Skype, fall back to TCP when the UDP connection appears to be blocked.
I ran into this thread, so let me try to express it this way.
TCP
3-way handshake
Bob: Hey Amy, I'd like to tell you a secret
Amy: OK, go ahead, I'm ready
Bob: OK
Communication
Bob: 'I', this is the first letter
Amy: First letter received, please send me the second letter
Bob: ' ', this is the second letter
Amy: Second letter received, please send me the third letter
Bob: 'L', this is the third letter
After a while
Bob: 'L', this is the third letter
Amy: Third letter received, please send me the fourth letter
Bob: 'O', this is the fourth letter
Amy: ...
......
4-way handshake
Bob: My secret is exposed, now, you know my heart.
Amy: OK. I have nothing to say.
Bob: OK.
UDP
Bob: I LOVE U
Amy received: OVI L E
TCP is more reliable than UDP, even guaranteeing message order; that extra work is no doubt why UDP is the more lightweight and efficient of the two.
The Law of Leaky Abstractions
by Joel Spolsky
http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Short and simple differences between the TCP and UDP protocols:
1) TCP stands for Transmission Control Protocol, UDP for User Datagram Protocol.
2) TCP is a reliable protocol, whereas UDP is unreliable.
3) TCP is stream-oriented, whereas UDP is message-oriented.
4) TCP is slower than UDP.
This sentence is a UDP joke, but I'm not sure that you'll get it. The below conversation is a TCP/IP joke:
A: Do you want to hear a TCP/IP joke?
B: Yes, I want to hear a TCP/IP joke.
A: Ok, are you ready to hear a TCP/IP joke?
B: Yes, I'm ready to hear a TCP/IP joke.
A: Well, here is the TCP/IP joke.
A: Did you receive a TCP/IP joke?
B: Yes, I DID receive a TCP/IP joke.
TCP and UDP are transport-layer protocols (Layer 4 in the OSI model, the Open Systems Interconnection model). The main differences, along with their pros and cons, are as follows.
TCP
PROS:
Acknowledgment
Guaranteed Delivery
Connection based
Ordered packets
Congestion control
CONS:
Larger packets
More bandwidth
Slower
Stateful
Consumes memory
UDP
PROS:
Packets are smaller
Consume less bandwidth
Faster
Stateless
CONS:
No acknowledgment
No guaranteed delivery
Connectionless
No congestion control
No packet ordering
TLDR;
TCP - stream-oriented, requires a connection, reliable, slow
UDP - message-oriented, connectionless, unreliable, fast
Before we start, remember that all the disadvantages of something are a continuation of its advantages. There is only the right tool for a job, not a panacea. TCP and UDP have coexisted for decades, and for a reason.
TCP
It was designed to be extremely reliable and it does its job very well. It's so complex because it accomplishes a hard task: providing a reliable transport over the unreliable IP protocol.
Since all TCP's complex logic is encapsulated into the network stack, you are free from doing lots of laborious, error-prone low-level stuff in the application layer.
When you send data over TCP, you write a stream of bytes to the socket on the sender side, where it gets broken into packets, passed down the stack and sent over the wire. On the receiver side the packets get reassembled into a continuous stream of bytes.
Maintaining this nice abstraction has a cost in terms of complexity and performance. If the 1st packet of the byte stream is lost, the receiver will delay processing of subsequent packets even if they have already arrived (the so-called "head-of-line blocking").
In addition, in order to be reliable, TCP implements this:
TCP requires an established connection, which costs a round-trip for the three-packet handshake (the "infamous" 3-way handshake) before any data can flow
TCP has a feature called "slow start", where it gradually ramps up the transmission rate after establishing a connection, to allow the receiver to keep up with the data rate
Every sent packet has to be acknowledged or else a sender will stop sending more data
And on and on and on...
All this is exacerbated in slow unreliable wireless networks because TCP was designed for wired networks where delays are predictable and packet loss is not so common. In addition, like many people already mentioned, for some things TCP just doesn't work at all (DHCP). However, where relevant, TCP still does its work exceptionally well.
Using a mail analogy a TCP session is similar to telling a story to your secretary who breaks it into mails and sends over a crappy mail service to a publisher. On the other side another secretary assembles mails into a single piece of text. Some mails get lost, some get corrupted, so a very complex procedure is required for reliable delivery and your 10-page story can take a long time to reach your publisher.
UDP
UDP, on the other hand, is message-oriented, so a sender writes a message (packet) to the socket and it gets transmitted to the receiver as-is, without any splitting/assembling in the transport layer.
Compared to TCP, its specification is very straightforward. Essentially, all it does for you is adding a checksum to the packet so a receiver can detect its corruption. Everything else must be implemented by you, a software developer. Now read the voluminous TCP spec and try thinking of re-implementing even a small subset of it.
Some people went this way and got very decent results, to the point that HTTP/3 uses QUIC, a protocol based on UDP. However, this is more of an exception. Common applications of UDP are audio/video streaming and conferencing applications like Skype, Zoom or Google Hangouts, where losing packets is not so important compared to the delay introduced by TCP.
Simple Explanation by Analogy
TCP is like this.
Imagine you have a pen-pal on Mars (we communicated with written letters back in the good ol' days before the internet).
You need to send your pen pal the seven habits of highly effective people. So you decide to send it in seven separate letters:
Letter 1 - Be proactive
Letter 2 - Begin with the end in mind
...
Letter 7 - Sharpen the Saw
Requirements:
You want to make sure that your pen pal receives all your letters, in order, and that they arrive perfectly. If your pen pal receives letter 7 before letter 1, that's no good. If your pen pal receives all letters except letter 3, that is also no good.
Here's how we ensure that our requirements are met:
Confirmation Letter: So your pen pal sends a confirmation letter to say "I have received letter 1". That way you know that your pen pal has received it. If a letter does not arrive, or arrives out of order, then you have to stop, and go back and re-send that letter, and all subsequent letters.
Flow Control: Around the time of Xmas you know that your pen pal will be receiving a lot of mail, so you slow down because you don't want to overwhelm your pen pal. (Your pen pal sends you constant updates about the number of unread messages in their mailbox; if your pen pal says the inbox is about to explode because it is so full, then you slow down sending your letters, because your pen pal won't be able to read them.)
Perfect arrival: Sometimes, while your letter is in the mail, it can get torn, or a snail can eat half of it. How do you know that your whole letter has arrived in perfect condition? Well, your pen pal gives you a mechanism by which you can check whether they've got the full letter and that it was exactly the letter that you sent (e.g. via a word count) - a basic analogy for a checksum.

What are examples of TCP and UDP in real life?

I know the difference between the two on a technical level.
But in real life, can anyone provide examples (the more the better) of applications (uses) of TCP and UDP to demonstrate the difference?
UDP: Anything where you don't care too much if you get all data always
Tunneling/VPN (lost packets are ok - the tunneled protocol takes care of it)
Media streaming (lost frames are ok)
Games that don't care if you get every update
Local broadcast mechanisms (same application running on different machines "discovering" each other)
TCP: Almost anything where you have to get all transmitted data
Web
SSH, FTP, telnet
SMTP, sending mail
IMAP/POP, receiving mail
EDIT: I'm not going to bother explaining the differences, since you state that you already know and every other answer explains it anyway :)
UDP is mailing a letter at the post office.
TCP is mailing a letter with a return receipt at the post office, except that the postmaster will organize the letters in order of mailing and only deliver them in order.
Well, it was an attempt anyway.
TCP:
World Wide Web(HTTP)
E-mail (SMTP)
File Transfer Protocol (FTP)
Secure Shell (SSH)
UDP:
Domain Name System (DNS)
Streaming media applications such as movies
Online multiplayer games
Voice over IP (VoIP)
Trivial File Transfer Protocol (TFTP)
REAL-TIME APPLICATION FOR TCP:
Email:
Reason: if some packet (words/statements) is missing, we cannot understand the content. It has to be reliable.
REAL-TIME APPLICATION FOR UDP:
Video streaming:
Reason: if some packet (a frame in the sequence) is missing, we can still understand the content, because video is a collection of frames. For one second of video there are about 25 frames (images). Even if some frames are missing we can still follow the content, thanks to our imagination. That's why UDP is used for video streaming.
The classic standpoint is to consider TCP safe and UDP unreliable.
But when the TCP/IP protocols are used in safety-critical applications, TCP is not recommended, because it can stop on error for multiple reasons, whereas UDP lets the application software deal with errors, retransmission timers, etc.
Moreover, TCP has more processing overhead than UDP.
Currently, UDP is used in aircraft controls and flight instruments, in the ARINC 664 standard, also named AFDX (Avionics Full-Duplex Switched Ethernet). In ARINC 664, TCP is optional, but UDP is used with the RTOSes (real-time operating systems) designed for the ARINC 653 standard (high-reliability control software in civil aircraft).
For more information about real-time controls using IP and UDP in AFDX, you can read pages 27 to 50 of http://www.afdx.com/pdf/AFDX_Training_October_2010_Full.pdf
TCP
I will not send more data until I get an acknowledgment.
This process is slow.
It is used where reliability is required.
Examples: web, sending mail, receiving mail, etc.
UDP
Here I have no headache with acknowledgments.
This process is faster, but data can be lost.
Examples: video streaming, online games, etc.
TCP + UDP = SCTP (example: mobile and telephone signalling)
TCP guarantees (in-order) packet delivery. UDP doesn't.
TCP - used for traffic that you need all the data for, e.g. HTML, pictures, etc.
UDP - used for traffic that doesn't suffer much if a packet is dropped, i.e. video & voice streaming, some data channels of online games, etc.
TCP is a connection-oriented protocol: it establishes a path, or a virtual connection, all the way through switches, routers, proxies, etc., and then starts the communication. Various mechanisms, such as routing with Dijkstra's shortest-path algorithm, exist to establish the virtual end-to-end connection. So it finds itself used while browsing HTML and other pages, making payments, and in web applications in general.
UDP is a connectionless protocol - a datagram simply has a destination, and nodes pass it along as best they can. So packets arriving out of order, along various routes, etc., are common. So developers of instant messengers and similar software consider UDP an ideal solution.
In real life, if you want to throw data onto the net without worrying about the time it takes to arrive or the order of arrival, use UDP. If you want a solid path before you start throwing packets, and want the same order and latency for your data packets, use TCP - I would use UDP for torrents and TCP for PayPal!
TCP :
Transmission Control Protocol is a connection-oriented protocol, which means that it requires handshaking to set up end-to-end communications. Once a connection is set up, user data may be sent bi-directionally over the connection.
Reliable – Strictly only at transport layer, TCP manages message acknowledgment, retransmission and timeout. Multiple attempts to deliver the message are made. If it gets lost along the way, the server will re-request the lost part. In TCP, there's either no missing data, or, in case of multiple timeouts, the connection is dropped. (This reliability however does not cover application layer, at which a separate acknowledgement flow control is still necessary)
Ordered – If two messages are sent over a connection in sequence, the first message will reach the receiving application first. When data segments arrive in the wrong order, TCP buffers delay the out-of-order data until all data can be properly re-ordered and delivered to the application.
Heavyweight – TCP requires three packets to set up a socket connection, before any user data can be sent. TCP handles reliability and congestion control.
Streaming – Data is read as a byte stream, no distinguishing indications are transmitted to signal message (segment) boundaries.
Applications of TCP
World Wide Web, email, remote administration, and file transfer rely on TCP.
UDP :
User Datagram Protocol is a simpler message-based connectionless protocol. Connectionless protocols do not set up a dedicated end-to-end connection. Communication is achieved by transmitting information in one direction from source to destination without verifying the readiness or state of the receiver.
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
Not ordered – If two messages are sent to the same recipient, the order in which they arrive cannot be predicted.
Lightweight – There is no ordering of messages, no tracking connections, etc. It is a small transport layer designed on top of IP.
Datagrams – Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.
No congestion control – UDP itself does not avoid congestion. Congestion control measures must be implemented at the application level.
Broadcasts – being connectionless, UDP can broadcast - sent packets can be addressed to be receivable by all devices on the subnet.
Multicast – a multicast mode of operation is supported whereby a single datagram packet can be automatically routed without duplication to very large numbers of subscribers.
Applications of UDP
Numerous key Internet applications use UDP, including: the Domain Name System (DNS), where queries must be fast and only consist of a single request followed by a single reply packet, the Simple Network Management Protocol (SNMP), the Routing Information Protocol (RIP) and the Dynamic Host Configuration Protocol (DHCP).
Voice and video traffic is generally transmitted using UDP. Real-time video and audio streaming protocols are designed to handle occasional lost packets, so only slight degradation in quality occurs, rather than large delays if lost packets were retransmitted. Because both TCP and UDP run over the same network, many businesses are finding that a recent increase in UDP traffic from these real-time applications is hindering the performance of applications using TCP, such as point of sale, accounting, and database systems. When TCP detects packet loss, it will throttle back its data rate usage. Since both real-time and business applications are important to businesses, developing quality of service solutions is seen as crucial by some.
Some VPN systems such as OpenVPN may use UDP while implementing reliable connections and error checking at the application level.
TCP is appropriate when you have to move a decent amount of data (> ~1 kB), and you require all of it to be delivered. Almost all data that moves across the internet does so via TCP - HTTP, SMTP, BitTorrent, SSH, etc, all use TCP.
UDP is appropriate when you have small messages which you can afford to lose, and would like to send them as efficiently as possible. One reason you might be able to afford to lose them is because you can re-send them if they get lost. The main example on the internet is DNS - DNS consists of small queries saying things like "what is the IP number for stackoverflow.com?", and the responses are correspondingly small. Computers make a lot of these queries, so they should be made efficiently, but if they get lost en route, it's easy to time out and re-send them.
TCP guarantees packet delivery AND order. Order is almost as important as the delivery in the first place when reconstructing data for files such as executables, etc.
UDP does not guarantee delivery NOR order. Packets can arrive (or not!) in any order.
Common uses for TCP include file transfer where the integrity of the packets is paramount. Voice/video applications can afford to lose some data while still maintaining acceptable quality, and so usually use UDP.
One additional thought on some of the comments above that talks about ordered delivery.... It must be clarified that the destination computer may receive packets out of order on the wire, but the TCP at the destination is responsible for "rearranging out-of-order data" before passing it on to the upper layers of the stack. When you say TCP guarantees ordered packet delivery, what that means is it will deliver packets in correct order to the upper layers of the stack.
SCTP vs TCP vs UDP
Services/Features SCTP TCP UDP
Connection-oriented yes yes no
Full duplex yes yes yes
Reliable data transfer yes yes no
Partial-reliable data transfer optional no no
Ordered data delivery yes yes no
Unordered data delivery yes no yes
Flow control yes yes no
Congestion control yes yes no
ECN capable yes yes no
Selective ACKs yes optional no
Preservation of message boundaries yes no yes
Path MTU discovery yes yes no
Application PDU fragmentation yes yes no
Application PDU bundling yes yes no
Multistreaming yes no no
Multihoming yes no no
Protection against SYN flooding attacks yes no n/a
Allows half-closed connections no yes n/a
Reachability check yes yes no
Pseudo-header for checksum no (vtags) yes yes
Time wait state vtags 4-tuple n/a
Since TCP usage is covered pretty well by the other answers, I'll mention some interesting UDP use cases:
1) DHCP - the Dynamic Host Configuration Protocol, which is used to dynamically assign an IP address and some other network configuration to connecting devices. In simple words, this protocol lets you just connect the network cable (or Wi-Fi) and start using the internet, without any additional configuration. DHCP uses UDP. Since the settings-request message is broadcast by the host, and there is no way to establish a TCP connection with the DHCP server (you don't know its address), it is impossible to use TCP instead.
2) Traceroute - a well-known network diagnostic tool which lets you explore the path your datagram takes through the network to reach its destination (and how much time each hop takes). By default, it works by sending UDP datagrams with an unlikely destination port number (ranging from 33434 to 33534) towards the destination, with the TTL (time-to-live) field set to 1. When a router somewhere in the network gets such a datagram, it finds that the datagram has expired. The router then drops the datagram and sends the origin an ICMP (Internet Control Message Protocol) error message indicating that the datagram's TTL expired, a message that carries the router's name and IP address. Each time, the host sends datagrams with a higher and higher TTL, thus extending the part of the path it manages to cover and collecting ICMP messages from new routers. When the TTL eventually becomes large enough for a datagram to reach the destination, the destination host sends a 'destination port unreachable' ICMP message back to the origin host. This is how Traceroute knows that the destination was reached. Since TCP guarantees segment delivery, it would be at least inefficient to use it here; UDP allows a datagram to simply be dropped without any resend attempts (the resend is effectively implemented at a higher level, with the continuously increasing TTL described above).
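The behaviour described above corresponds to the classic Linux traceroute defaults; the flags below choose the probe type. A sketch only; example.com is a placeholder destination, and the ICMP/TCP modes usually need root.

# Default mode: UDP probes to ports 33434 and up, with TTL 1, 2, 3, ...
traceroute -n example.com

# ICMP echo probes instead of UDP.
traceroute -n -I example.com

# TCP SYN probes, handy when UDP and ICMP are filtered along the path.
traceroute -n -T -p 443 example.com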
TCP: will get there in meaningful order
UDP: god knows (maybe)
UDP is used a lot in games and other peer-to-peer setups because it's faster and, most of the time, you don't need the protocol itself to make sure everything gets to the destination in the original order (UDP does not guarantee packet delivery or delivery order).
Web traffic on the other hand is over TCP. (I'm not sure here but I think it has to do with the way the HTTP protocol is built)
Edited because I failed at UDP.
Real-life examples of both TCP and UDP:
TCP -> a phone call, SMS, or anything else addressed to one specific destination.
UDP -> an FM (or AM) radio broadcast, or Wi-Fi.
