Just curious what goes into the decision between using UDP and TCP when creating an online game

As the title suggests, I was wondering how one decides which protocol to implement. So far I understand that UDP can make for faster transmission of data, but with the order it's sent in neglected, and it doesn't monitor whether the data is even received. To my knowledge TCP is safer and is used when data has to be precise and the reception time doesn't have to be as swift. But I noticed that different online games use different protocols even though all of those games play quite fluently (which I'm assuming means fast data transfer). So I'm wondering: how can you tell which is used, and why is that protocol used?
Thanks

Warning: incoming oversimplification. Still, this should help you understand.
TCP is reliable. If you send data, it will either get there in one piece and in perfect order, or it won't get there. This reliability comes at a cost of more traffic overhead, because the receiver has to acknowledge its receipt to the sender, and the sender may send the same data multiple times to ensure correct delivery.
UDP is unreliable, but with no such overhead. The sender tosses packets at the receiver. Packets that do arrive are still guaranteed to arrive in one piece, but not all packets are guaranteed to arrive. UDP is useful when you can afford transmission loss and the overhead of TCP is too great to justify the reliability.
Examples of uses for UDP include real-time content streaming (video/audio) and continuous state updates (e.g., packets notifying the client of the state of various objects in your game universe). In general, these are adequate targets because data becomes irrelevant very quickly as it is replaced with new data. Better to keep throwing bits and chugging along than to worry about intact arrival of past data that may no longer matter.
On the other hand, something like authentication, a content updating system, or in-game chat would strongly benefit from a more reliable TCP connection, as latency is far less important than integrity.
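To make that split concrete, here's a minimal sketch in Python, assuming a hypothetical game server at 127.0.0.1 with UDP port 9000 for state updates and TCP port 9001 for chat (all names and ports are invented for illustration):

    import socket

    SERVER = "127.0.0.1"   # hypothetical game server
    STATE_PORT = 9000      # UDP: frequent, loss-tolerant position updates
    CHAT_PORT = 9001       # TCP: infrequent chat messages that must arrive

    # UDP socket: no connection setup; each send is an independent datagram.
    state_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    state_sock.sendto(b"POS 12.5 3.0 99.1", (SERVER, STATE_PORT))

    # TCP socket: connect once; the stream then guarantees order and delivery.
    chat_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    chat_sock.connect((SERVER, CHAT_PORT))
    chat_sock.sendall(b"gg, nice shot\n")
    chat_sock.close()

If the state datagram is lost, the next update replaces it anyway; if the chat message were lost, the player would notice, which is why it rides on TCP.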

The reliable way to tell which protocol is used is to use a network packet sniffer that records your network traffic: record some of the game-related traffic and look at the protocol name. One example of a free and simple network sniffer is Wireshark (formerly Ethereal).
The UDP vs TCP decision will probably (I say probably as I am not a professional game dev, just a hobby dev) boil down to the amount of traffic you expect to send/receive, the networking conditions, how well the program tolerates packet loss, and the expected number of concurrent users.
A turn-based strategy game, for example chess, in which each player need not send much more than their moves and occasional chat messages, would benefit from TCP. A first-person shooter with dozens of players in a 3D world might generate a lot of traffic over a long time but might not suffer if a few packets now and then are lost, making it an ideal candidate for UDP.
Then again, some games might even use a combination of TCP and UDP for tasks with different requirements in speed and reliability.

Don't think of it as "UDP is faster" and "TCP is slower", because that's just wrong. The big difference is that bad packets are just dropped in UDP. The receiver UDP implementation won't have to wait for the dropped packet to be retransmitted and received before delivering any subsequent ones to the application.
In TCP, even if your machine has the next packet after the dropped one in a buffer, it won't be able to deliver it to the application until the sender retransmits the one that was dropped.
TCP: deliver data to app in order, automatic retransmit
UDP: deliver data to app in any order, no retransmit
UDP mostly makes sense if the packets are self-contained. If you have to reassemble multiple UDP packets into one message, you should probably use TCP.
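Here's one way a receiver can exploit self-contained datagrams: a sketch assuming each packet carries a 32-bit sequence number followed by a complete state snapshot (the packet format and port are invented for illustration):

    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))   # hypothetical state-update port

    latest_seq = -1
    while True:
        packet, addr = sock.recvfrom(2048)          # one whole datagram or nothing
        (seq,) = struct.unpack_from("!I", packet)   # assumed leading sequence number
        if seq <= latest_seq:
            continue                                # stale or duplicate: just drop it
        latest_seq = seq
        state = packet[4:]                          # self-contained snapshot
        # ... apply `state` to the game world ...

Because every datagram stands alone, a lost or late packet costs nothing: the next one carries a full, fresher snapshot.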

Related

Different Applications of TCP and UDP

In one of my classes, we went through TCP and UDP. Largely, I understand the fundamental difference.
TCP uses a three-way handshake, congestion control, flow control, and other mechanisms to ensure reliable transmission.
UDP is mostly used in cases where packet delay is more serious than packet loss.
For the question outlined below, I believe that TCP makes the most sense for the voice chat, since the order of the data that makes up a conversation would be essential, and UDP for the network handlers that send player data, because speed is most important for playing a competitive online game that relies on reflexes.
Does this make sense? Or am I generalizing the problems too much?
Question:
TCP and UDP: the online game is a first-person shooter where real players fight each other with guns in 5-versus-5 matches. You are in charge of two features:
an implementation of real-time voice chat, and
the network handlers that send player data from the end users' clients to your dedicated, central servers.
Which protocols do you use for each and why?
With TCP the devices at the end points need to establish a connection through a "handshake" before any data is sent. TCP also uses flow control, sequence numbers, acknowledgements, and timers to ensure reliable data transfer. Congestion control is also used by TCP to adjust the transmission rate.
The implementation of the above mechanisms comes at a time cost.
UDP, on the other hand, does almost nothing except multiplexing/demultiplexing and simple error checking.
Real-time applications often need a minimum bitrate and can tolerate some data loss. In your example of real-time voice chat, it is more important for the users to hear each other without delay, even if a few milliseconds are inaudible. The network handlers that send player data to the server should use TCP, because the reliability of that data is vital.

TCP for real-time systems

I am new to networking and trying to get some basic concepts down. I would really appreciate it if someone could tell me
why using TCP in real-time systems is a bad idea?
What makes UDP preferable for real-time systems?
In short TCP is designed to achieve perfect transmission above all else. You will get exactly what has been sent, in the exact order it was sent, or you will get nothing at all.
The problem with this is that TCP will get hung up retransmitting data until it is received properly, but in a real-time system the data it's trying to retransmit is useless because it's already out of date; AND the data you actually want has to wait for the data you no longer want to clear the queue before it can be delivered.
This article explains it much more eloquently
As stated before, UDP is used over TCP for real-time services (RTS), mainly because of how simple a UDP packet is compared to TCP, as the latter puts more emphasis on error correction and reliability.
TCP packets are bigger than UDP packets and are transmitted much more carefully in order to maintain their integrity: the receiver acknowledges each and every TCP packet that is sent. That is great when sending sensitive data, but it becomes a bottleneck in a real-time service where the state must be kept as up to date as possible; typically 100-1000 KB/s is transmitted, and losing a few KB won't wreck your service when it's implemented with UDP.

udp vs tcp packet dropping

If I send two packets via the net, one a UDP packet and the other a TCP packet, which is more likely to reach its destination? I have been told that the TCP protocol is safer because of its "fail-safe" mechanism, but does that also mean that UDP packets are more likely to be lost along the way?
I think it's related to the specific router implementation, because on the one hand, if a UDP packet disappears, both sides probably know it might happen and can afford to lose a packet or two; on the other hand, if a TCP packet disappears, its "fail-safe" mechanism will send another and the problem is solved, but a TCP packet is also much heavier.
I would like a more solid answer to this question because I find the subject quite interesting.
If you are making a decision on which protocol to use for your application, you really need to look into both in more detail. Below is just an overview.
TCP is a stream protocol that provides several mechanisms to deliver data reliably: guaranteed delivery, in order. It controls the rate at which data is sent (it starts transmitting slowly, then ups the speed until it reaches a rate that is sustainable by the peer). It resends any data that was not received on the other side. For that, you pay a price (for example the slow start, and the need to acknowledge all data received).
UDP, on the other hand, is a "data chunk" (datagram) protocol and provides none of those checks of integrity / rate / order. It "compensates" by being (potentially) faster: you pump out data as fast as you can, and the other side receives whatever it is able to catch, at full network speed in the extreme case. There is no guarantee of delivery or of the order of data arriving at the other side; a datagram is received whole or not at all.
The decision one usually makes has nothing to do with the possibility of data being lost or not, but with the criticality of losing any of it. Video streaming is often done via UDP, since a smooth image matters more than the occasional missing datagram. File transmission cannot afford any data loss or reordering of data chunks, so TCP is the natural choice.
Apart from that, remember that the network protocol is only half your problem. The other half is coming up with your application protocol to interpret the bytes you are receiving...
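To make the application-protocol half concrete, here is a minimal sketch of one common scheme over TCP: prefix every message with its length, since TCP hands you a byte stream with no message boundaries. The 4-byte big-endian header is just one conventional choice, not anything mandated:

    import struct

    def send_message(sock, payload):
        # 4-byte big-endian length prefix is an assumed convention, then the payload.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock, n):
        # TCP may return fewer bytes than asked for; loop until we have n.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_message(sock):
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)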

TCP vs UDP on video stream

I just came home from my exam in network programming, and one of the questions they asked was: "If you are going to stream video, would you use TCP or UDP? Give an explanation for both stored video and live video streams." To this question they simply expected a short answer of TCP for stored video and UDP for live video, but I thought about this on my way home: is it necessarily better to use UDP for streaming live video? I mean, if you have the bandwidth for it, and say you are streaming a soccer match, or a concert for that matter, do you really need to use UDP?
Let's say that while you are streaming this concert using TCP you start losing packets (something bad happened in some network between you and the sender), and for a whole minute you don't get any packets. The video stream will pause, and after the minute is gone packets start to get through again (IP found a new route for you). What would then happen is that TCP retransmits the minute you lost and continues sending you the live stream. Assume the bandwidth is higher than the bit rate of the stream and the ping is not too high; then in a short amount of time, the one minute you lost will act as a buffer for the stream, and if packet loss happens again, you won't notice.
Now, I can think of some applications where this wouldn't be a good idea, like video conferencing, where you need to always be at the end of the stream, because delay during a video chat is just horrible. But during a soccer match, or a concert, what does it matter if you are a single minute behind the stream? Plus, you are guaranteed to get all the data, and it would be better to save it for later viewing when it comes in without any errors.
So this brings me to my question. Are there any drawbacks that I don't know of about using TCP for live-streaming? Or should it really be, that if you have the bandwidth for it you should go for TCP given that it is "nicer" to the network (flow-control)?
Drawbacks of using TCP for live video:
As you mentioned, TCP buffers the unacknowledged segments for every client. In some cases this is undesirable, such as TCP streaming for very popular live events: your list of simultaneous clients (and the buffering requirements) is large in this case. Pre-recorded video-casts typically don't have as much of a problem with this because viewers tend to stagger their replay activity.
TCP's delivery guarantees are a blocking function which isn't helpful in interactive conversations. Assume your network connection drops for 15 seconds. When we miss part of a conversation, we naturally ask the person to repeat (or the other party will proactively repeat if it seems like you missed something). UDP doesn't care if you missed part of a conversation for the last 15 seconds; it keeps working as if nothing happened. On the other hand, the app could be designed for TCP to replay the last 15 seconds (and the person on the other end may not want or know about that). Such a replay by TCP aggravates the problem, and makes it more difficult to stay in sync with other parties in the conversation. Comparing TCP and UDP’s behavior in the face of packet loss, one could say that it’s easier for UDP to stay in sync with the state of an interactive conversation.
IP multicast significantly reduces video bandwidth requirements for large audiences; multicast requires UDP (and is incompatible with TCP). Note that multicast is generally restricted to private networks and is not common over the internet. I would also point out that operating multicast networks is more complicated than operating typical unicast networks.
FYI, please don't use the word "packages" when describing networks. Networks send "packets".
but during a soccer-match, or a concert what does it matter if you are a single minute behind the stream?
To some soccer fans, quite a bit. It has been remarked that delays of even a few seconds in digital video streams, due to encoding (or whatever), can be very annoying when, during high-profile events such as World Cup matches, you can hear the cheers and groans from the guys next door (who are watching an undelayed analog program) before you get to see the game moves that caused them.
I think that to someone caring a lot about sports (and those are the biggest group of paying customers for digital TV, at least here in Germany), being a minute behind in a live video stream would be completely unacceptable (As in, they'd switch to your competitor where this doesn't happen).
Usually a video stream is somewhat fault tolerant. So if some packages get lost (due to some router along the way being overloaded, for example), then it will still be able to display the content, but with reduced quality.
If your live stream was using TCP/IP, then it would be forced to wait for those dropped packages before it could continue processing newer data.
That's doubly bad:
old data gets re-transmitted (probably for a frame that was already displayed and is therefore worthless), and
new data can't arrive until the old data has been re-transmitted
If your goal is to display as up-to-date information as possible (and for a live-stream you usually want to be up-to-date, even if your frames look a bit worse), then TCP will work against you.
For a recorded stream the situation is slightly different: you'll probably be buffering a lot more (possibly several minutes!) and would rather have data re-transmitted than have some artifacts due to lost packages. In this case TCP is a good match (this could still be implemented in UDP, of course, but TCP doesn't have as many drawbacks as in the live-stream case).
There are some use cases suitable to UDP transport and others suitable to TCP transport.
The use case also dictates the encoding settings for the video: when broadcasting a soccer match the focus is on quality, while for a video conference the focus is on latency.
When using multicast to deliver video to your customers, UDP is used.
Multicast requires capable networking hardware between the broadcasting server and the customer. In practice this means that if your company owns the network infrastructure, you can use UDP and multicast for live video streaming. Even then, quality-of-service marking is implemented to prioritize video packets so that no packet loss happens.
Multicast simplifies the broadcasting software because the network hardware handles distributing packets to customers. Customers subscribe to multicast channels and the network reconfigures itself to route packets to each new subscriber. By default all channels are available to all customers and can be optimally routed.
This workflow makes the authorization process difficult: the network hardware does not differentiate subscribed users from other users. The solution is to encrypt the video content and enable decryption in the player software when the subscription is valid.
A unicast (TCP) workflow allows the server to check the client's credentials and only allow valid subscriptions, or even to allow only a certain number of simultaneous connections.
Multicast is not enabled over the internet.
For delivering video over the internet, TCP must be used. When UDP is used, developers end up re-implementing packet re-transmission, e.g. the BitTorrent p2p live protocol.
"If you use TCP, the OS must buffer the unacknowledged segments for every client. This is undesirable, particularly in the case of live events".
This buffer must exist in some form; the same is true for the jitter buffer on the player side. It is called the "socket buffer", and the server software can know when this buffer is full and discard the right video frames for live streams. It is better to use the unicast/TCP method, because the server software can implement proper frame-dropping logic; random missing packets in the UDP case just create a bad user experience, like in this video: http://tinypic.com/r/2qn89xz/9
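A sketch of that frame-dropping idea, assuming the server knows which frames its codec can afford to lose (the droppable flag is hypothetical): poll the client socket for writability, and skip disposable frames when the send buffer is full instead of stalling the whole stream:

    import select
    import socket

    def send_frame(client, frame, droppable):
        # Zero-timeout select: is there room in this client's send buffer?
        # (`droppable` is a hypothetical flag from the encoder.)
        _, writable, _ = select.select([], [client], [], 0)
        if not writable and droppable:
            return False          # viewer is behind: skip this frame, stay live
        client.sendall(frame)     # key frames (and caught-up clients) go through
        return True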
"IP multicast significantly reduces video bandwidth requirements for large audiences"
This is true for private networks; multicast is not enabled over the internet.
"Note that if TCP loses too many packets, the connection dies; thus, UDP gives you much more control for this application since UDP doesn't care about network transport layer drops."
UDP also doesn't care about dropping entire frames or groups of frames, so it does not give any more control over the user experience.
"Usually a video stream is somewhat fault tolerant"
Encoded video is not fault tolerant. When it is transmitted over an unreliable transport, forward error correction is added to the video container. A good example is the MPEG-TS container used in satellite video broadcast, which carries several audio, video, EPG, etc. streams. This is necessary because a satellite link is not duplex communication, meaning the receiver can't request re-transmission of lost packets.
When duplex communication is available, it is always better to re-transmit data only to the clients experiencing packet loss than to include the overhead of forward error correction in the stream sent to all clients.
In any case, lost packets are unacceptable; dropped frames are OK in exceptional cases when bandwidth is constrained.
Missing packets result in visual artifacts.
Some decoders can break on streams missing packets in critical places.
I recommend looking at the new p2p live protocol BitTorrent Live.
As for streaming, it's better to use UDP, first because it lowers the load on servers, but mostly because you can send packets with multicast, which is simpler than sending them to each connected client.
It depends. How critical is the content you are streaming? If it's critical, use TCP. This may cause issues with bandwidth, video quality (you might have to use a lower quality to deal with latency), and latency itself. But if you need the content to be guaranteed to get there, use it.
Otherwise UDP should be fine if the stream is not critical, and it would be preferred because UDP tends to have less overhead.
One of the biggest problems with delivering live events on the Internet is scale, and TCP doesn't scale well. For example, when you are beaming a live football match (as opposed to on-demand movie playback), the number of people watching can easily be 1000 times larger. In such a scenario, using TCP is a death sentence for the CDNs (content delivery networks).
There are a couple of main reasons why TCP doesn't scale well:
One of the largest tradeoffs of TCP is the variability of the throughput achievable between the sender and the receiver. When streaming video over the Internet, the video packets must traverse multiple routers, each connected by links of different speeds. The TCP algorithm starts with a small window, then grows it until packet loss is detected; packet loss is considered a sign of congestion, and TCP responds by drastically reducing the window size to avoid it, which in turn reduces the effective throughput immediately. Now imagine a network with a TCP transmission using 6-7 router hops to the client (a very normal scenario): if any of the intermediate routers loses a packet, TCP on that link will reduce its transmission rate. In fact, the traffic flow between routers follows an hourglass kind of shape, always going up and down at one of the intermediate routers, rendering the effective throughput much lower compared to best-effort UDP.
As you may already know, TCP is an acknowledgement-based protocol. Let's say, for example, a sender is 50 ms away (i.e. the latency between the two points). The time it takes for a packet to be sent to the receiver and for the receiver to send back an acknowledgement is then 100 ms; compared to a UDP-based transmission that never waits, the maximum possible throughput is already halved whenever the sender is waiting on acknowledgements (a back-of-the-envelope calculation follows below).
TCP doesn't support multicasting, or the emerging multicast standard AMT. This means the CDNs don't have the opportunity to reduce network traffic by replicating packets when many clients are watching the same content. That by itself is a big enough reason for CDNs (like Akamai or Level 3) not to go with TCP for live streams.
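The latency point above can be made concrete with the classic window/RTT bound: an acknowledgement-clocked sender can have at most one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by RTT no matter how fast the link is. A quick back-of-the-envelope sketch, using the 50 ms example and an assumed plain 64 KiB window:

    window = 64 * 1024   # bytes of unacknowledged data allowed in flight (example value)
    rtt = 0.100          # seconds: 50 ms each way, as in the example above
    max_throughput = window / rtt                # 655,360 bytes/s
    print(max_throughput * 8 / 1e6, "Mbit/s")    # about 5.2 Mbit/s, on any link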
While reading the TCP vs UDP debate I noticed a logical flaw. A TCP packet loss causing a one-minute delay that's converted into a one-minute buffer can't be equated with UDP dropping a full minute while experiencing the same loss. A fairer comparison is as follows.
TCP experiences a packet loss. The video is stopped while TCP resends packets in an attempt to stream mathematically perfect packets. The video is delayed for one minute and picks up where it left off after the missing packet reaches its destination. We all wait, but we know we won't miss a single pixel.
UDP experiences a packet loss. For a second during the video stream, a corner of the screen gets a little blurry. No one notices, and the show goes on without looking for the lost packets.
Anything that streams gains the most benefit from UDP. The packet loss causing a one-minute delay to TCP would not cause a one-minute delay to UDP. Considering that most systems use multiple-resolution streams that just go blocky when starved for packets, it makes even more sense to use UDP.
UDP FTW when streaming.
If the bandwidth is far higher than the bitrate, I would recommend TCP for unicast live video streaming.
Case 1: Consecutive packets are lost for a duration of several seconds. => live video will stop on the client side whatever the transport layer is (TCP or UDP). When the network recovers:
- if TCP is used, the client video player can choose to restart the stream at the first packet lost (timeshift) OR to drop all late packets and restart the video stream with no timeshift.
- if UDP is used, there is no choice on the client side; the video restarts live without any timeshift.
=> TCP equal or better.
Case 2: some packets are randomly and often lost on the network.
- if TCP is used, these packets will be immediately retransmitted and with a correct jitter buffer, there will be no impact on the video stream quality/latency.
- if UDP is used, video quality will be poor.
=> TCP much better
Besides all the other reasons, UDP can use multicast. Supporting 1000s of TCP users all transmitting the same data wastes bandwidth.
However, there is another important reason for using TCP.
TCP can much more easily pass through firewalls and NATs. Depending on your NAT and operator, you may not even be able to receive a UDP stream due to problems with UDP hole punching.
For video streaming bandwidth is likely the constraint on the system. Using multicast you can greatly reduce the amount of upstream bandwidth used. With UDP you can easily multicast your packets to all connected terminals.
You could also use a reliable multicast protocol, one called Pragmatic General Multicast (PGM); I don't know anything about it, and I guess it isn't widespread in its use.
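For what it's worth, joining and sending to a multicast group is only a few lines of socket code. A sketch, using a made-up administratively-scoped group address and port:

    import socket
    import struct

    GROUP, PORT = "239.0.0.1", 5004   # hypothetical multicast group and port

    # Sender: one sendto() and the network replicates it to every subscriber.
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    send_sock.sendto(b"one video packet", (GROUP, PORT))

    # Receiver: join the group, then read datagrams like any UDP socket.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    recv_sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = recv_sock.recvfrom(2048)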
All the 'use UDP' answers assume an open network and a 'stuff it in as fast as you can' approach. That was good for old-style closed-garden dedicated audio/video networks, which are a vanishing sort.
In the actual world, your transmission will go through firewalls (which will drop multicast and sometimes UDP), and the network is shared with other, more important ($$$) apps, so you want to punish abusers with window scaling.
This is the thing: it is more a matter of content than a time issue. The TCP protocol requires that a packet that was not delivered be checked, verified, and redelivered. UDP has no such requirement. So if you send a file which contains millions of packets using UDP, like a video, and some of the packets are missing upon delivery, they will most likely go unmissed.

When is it appropriate to use UDP instead of TCP? [closed]

TCP guarantees packet delivery and thus can be considered "reliable", whereas UDP doesn't guarantee anything and packets can be lost. What would be the advantage of transmitting data using UDP in an application rather than over a TCP stream? In what kind of situations would UDP be the better choice, and why?
I'm assuming that UDP is faster since it doesn't have the overhead of creating and maintaining a stream, but wouldn't that be irrelevant if some data never reaches its destination?
This is one of my favorite questions. UDP is so misunderstood.
In situations where you really want to get a simple answer from another server quickly, UDP works best. In general, you want the answer to be contained in one response packet, and you are prepared to implement your own protocol for reliability or to resend. DNS is the perfect description of this use case. The costs of connection setup are way too high (yet DNS does support a TCP mode as well).
Another case is when you are delivering data that can be lost because newer data coming in will replace that previous data/state. Weather data, video streaming, a stock quotation service (not used for actual trading), or gaming data comes to mind.
Another case is when you are managing a tremendous amount of state and you want to avoid using TCP because the OS cannot handle that many sessions. This is a rare case today. In fact, there are now user-land TCP stacks that can be used so that the application writer may have finer grained control over the resources needed for that TCP state. Prior to 2003, UDP was really the only game in town.
One other case is for multicast traffic. UDP can be multicasted to multiple hosts whereas TCP cannot do this at all.
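The request/reply pattern described above is short enough to sketch: send one datagram, wait briefly for one answer, retry a couple of times, then give up. The server address and the assumption that the whole answer fits in one datagram are made up for illustration:

    import socket

    SERVER = ("192.0.2.1", 5300)   # hypothetical query service

    def query(payload, attempts=3, timeout=0.5):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for _ in range(attempts):
            sock.sendto(payload, SERVER)             # no handshake: just ask
            try:
                answer, addr = sock.recvfrom(1500)   # assumed to fit one MTU-sized reply
                return answer
            except socket.timeout:
                continue                             # lost somewhere: ask again
        return None                                  # caller decides what failure means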
If a TCP packet is lost, it will be resent. That is not handy for applications that rely on data being handled in a specific order in real time.
Examples include video streaming and especially VoIP (e.g. Skype). In those instances, however, a dropped packet is not such a big deal: our senses aren't perfect, so we may not even notice. That is why these types of applications use UDP instead of TCP.
The "unreliability" of UDP is a formalism. Transmission isn't absolutely guaranteed. As a practical matter, they almost always get through. They just aren't acknowledged and retried after a timeout.
The overhead in negotiating for a TCP socket and handshaking the TCP packets is huge. Really huge. There is no appreciable UDP overhead.
Most importantly, you can easily supplement UDP with some reliable-delivery handshaking that has less overhead than TCP. Read this: http://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
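A minimal sketch of that kind of supplement: stop-and-wait delivery over UDP, where the receiver is assumed to echo back a literal b"ACK" (that convention, like the timeout values, is invented for illustration; real RUDP is more elaborate):

    import socket

    def reliable_send(sock, data, dest, retries=3, timeout=0.2):
        # `sock` is a UDP socket; the b"ACK" reply is an assumed convention.
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(data, dest)
            try:
                reply, addr = sock.recvfrom(64)
                if reply == b"ACK":
                    return True      # delivered and acknowledged
            except socket.timeout:
                continue             # no ack in time: retransmit
        return False                 # give up; far cheaper than TCP's full machinery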
UDP is useful for broadcasting information in a publish-subscribe kind of application. IIRC, TIBCO makes heavy use of UDP for notification of state change.
Any other kind of one-way "significant event" or "logging" activity can be handled nicely with UDP packets. You want to send notification without constructing an entire socket. You don't expect any response from the various listeners.
System "heartbeat" or "I'm alive" messages are a good choice, also. Missing one isn't a crisis. Missing half a dozen (in a row) is.
I work on a product that supports both UDP (IP) and TCP/IP communication between client and server. It started out with IPX over 15 years ago with IP support added 13 years ago. We added TCP/IP support 3 or 4 years ago. Wild guess coming up: The UDP to TCP code ratio is probably about 80/20. The product is a database server, so reliability is critical. We have to handle all of the issues imposed by UDP (packet loss, packet doubling, packet order, etc.) already mentioned in other answers. There are rarely any problems, but they do sometimes occur and so must be handled. The benefit to supporting UDP is that we are able to customize it a bit to our own usage and tweak a bit more performance out of it.
Every network is going to be different, but the UDP communication protocol is generally a little bit faster for us. The skeptical reader will rightly question whether we implemented everything correctly. Plus, what can you expect from a guy with a 2 digit rep? Nonetheless, I just now ran a test out of curiosity. The test read 1 million records (select * from sometable). I set the number of records to return with each individual client request to be 1, 10, and then 100 (three test runs with each protocol). The server was only two hops away over a 100Mbit LAN. The numbers seemed to agree with what others have found in the past (UDP is about 5% faster in most situations). The total times in milliseconds were as follows for this particular test:
1 record
IP: 390,760 ms
TCP: 416,903 ms
10 records
IP: 91,707 ms
TCP: 95,662 ms
100 records
IP: 29,664 ms
TCP: 30,968 ms
The total data amount transmitted was about the same for both IP and TCP. We have extra overhead with the UDP communications because we have some of the same stuff that you get for "free" with TCP/IP (checksums, sequence numbers, etc.). For example, Wireshark showed that a request for the next set of records was 80 bytes with UDP and 84 bytes with TCP.
There are already many good answers here, but I would like to add one very important factor as well as a summary. UDP can achieve a much higher throughput with the correct tuning because it does not employ congestion control. Congestion control in TCP is very, very important: it controls the rate and throughput of the connection in order to minimize network congestion by trying to estimate the current capacity of the connection. Even when packets are sent over very reliable links, such as in the core network, routers have limited-size buffers. These buffers fill up to their capacity, packets are then dropped, and TCP notices the drop through the lack of a received acknowledgement, throttling the speed of the connection toward its estimate of the capacity. TCP also employs something called slow start, in which the throughput (actually the congestion window) is slowly increased until packets are dropped, then lowered and slowly increased again until packets are dropped, and so on. This causes the TCP throughput to fluctuate. You can see this clearly when you download a large file.
Because UDP does not use congestion control, it can be both faster and lower-delay: it does not try to fill buffers up to the dropping point, so UDP packets spend less time in buffers and arrive with less delay. And because TCP backs off under congestion while UDP does not, UDP can take capacity away from TCP flows that yield to it.
UDP is still vulnerable to congestion and packet drops though, so your application has to be prepared to handle these complications somehow, likely using retransmission or error correcting codes.
The result is that UDP can:
Achieve higher throughput than TCP as long as the network drop rate is within limits that the application can handle.
Deliver packets faster than TCP with less delay.
Set up connections faster, as there is no initial handshake to establish the connection.
Transmit multicast packets, whereas TCP has to use multiple connections.
Transmit fixed-size packets, whereas TCP transmits data in segments. If you transfer a UDP packet of 300 bytes, you will receive 300 bytes at the other end. With TCP, you may feed the sending socket 300 bytes, but the receiver may only read 100 bytes, and you have to figure out somehow that there are 200 more bytes on the way. This is important if your application transmits fixed-size messages rather than a stream of bytes (see the sketch after the summary below).
In summary, UDP can be used for every type of application that TCP can, as long as you also implement a proper retransmission mechanism. UDP can be very fast, has less delay, is not affected by congestion on a per-connection basis, transmits fixed-size datagrams, and can be used for multicasting.
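The fixed-size-message point is easy to demonstrate: one recvfrom() on a UDP socket returns exactly one datagram, while one recv() on a TCP socket returns whatever bytes happen to be available, so the reader must loop (the same reassembly loop as in the framing sketch earlier; the port here is hypothetical):

    import socket

    # UDP preserves message boundaries: a 300-byte send arrives as 300 bytes,
    # or not at all.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", 9200))   # hypothetical port
    message, sender = udp.recvfrom(2048)

    # TCP does not: a single 300-byte sendall() may be read as 100 + 200,
    # so the reader loops until the full message has been collected.
    def read_exactly(conn, size):
        data = b""
        while len(data) < size:
            chunk = conn.recv(size - len(data))
            if not chunk:
                raise ConnectionError("stream closed before the full message arrived")
            data += chunk
        return data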
UDP is a connection-less protocol and is used in protocols like SNMP and DNS, where data packets arriving out of order are acceptable and immediate transmission of the data packet matters.
It is used in SNMP because network management must often be done when the network is under stress, i.e. when reliable, congestion-controlled data transfer is difficult to achieve.
It is used in DNS since it does not involve connection establishment, thereby avoiding connection establishment delays.
cheers
UDP does have less overhead and is good for doing things like streaming real time data like audio or video, or in any case where it is ok if data is lost.
One of the best answers I know of for this question comes from user zAy0LfpBZLC8mAC on Hacker News. This answer is so good I'm just going to quote it as-is.
TCP has head-of-queue blocking, as it guarantees complete and in-order delivery, so when a packet gets lost in transit, it has to wait for a retransmit of the missing packet, whereas UDP delivers packets to the application as they arrive, including duplicates and without any guarantee that a packet arrives at all or in which order they arrive (it really is essentially IP with port numbers and an (optional) payload checksum added). But that is fine for telephony, for example, where it usually simply doesn't matter when a few milliseconds of audio are missing, but delay is very annoying, so you don't bother with retransmits; you just drop any duplicates, sort reordered packets into the right order for a few hundred milliseconds of jitter buffer, and if packets don't show up in time or at all, they are simply skipped, possibly interpolated where supported by the codec.
Also, a major part of TCP is flow control, to make sure you get as much throughput as possible, but without overloading the network (which is kinda redundant, as an overloaded network will drop your packets, which means you'd have to do retransmits, which hurts throughput). UDP doesn't have any of that, which makes sense for applications like telephony: telephony with a given codec needs a certain amount of bandwidth, you can not "slow it down", and additional bandwidth also doesn't make the call go faster.
In addition to realtime/low-latency applications, UDP makes sense for really small transactions, such as DNS lookups, simply because it doesn't have the TCP connection establishment and teardown overhead, both in terms of latency and in terms of bandwidth use. If your request is smaller than a typical MTU and the response probably is, too, you can be done in one round trip, with no need to keep any state at the server; and flow control and ordering and all that probably isn't particularly useful for such uses either.
And then, you can use UDP to build your own TCP replacements, of course, but it's probably not a good idea without some deep understanding of network dynamics; modern TCP algorithms are pretty sophisticated.
Also, I guess it should be mentioned that there is more than UDP and TCP, such as SCTP and DCCP. The only problem currently is that the (IPv4) internet is full of NAT gateways which make it impossible to use protocols other than UDP and TCP in end-user applications.
Video streaming is a perfect example of using UDP.
UDP has lower overhead and, as stated already, is good for streaming things like video and audio where it is better to just lose a packet than to try to resend and catch up.
There are no absolute guarantees on TCP delivery; you are simply supposed to be told if the socket disconnected, or basically if the data is not going to arrive. Otherwise, it gets there when it gets there.
A big thing that people forget is that UDP is packet-based while TCP is byte-stream-based. There is no guarantee that the "TCP packet" you sent is the packet that shows up on the other end; it can be dissected into as many packets as the routers and stacks desire. So your software has the additional overhead of parsing bytes back into usable chunks of data, which can take a fair amount of work. UDP can be out of order, so you have to number your packets or use some other mechanism to re-order them if you care to do so. But if you get a UDP packet, it arrives with all the same bytes in the same order as it left, no changes. So the term "UDP packet" makes sense, but "TCP packet" doesn't necessarily. TCP has its own retry and ordering mechanism, hidden from your application; you can re-invent that with UDP to tailor it to your needs.
UDP is far easier to write code for on both ends, basically because you do not have to make and maintain the point-to-point connections. My question is typically: where are the situations where you would want the TCP overhead? And if you take shortcuts, like assuming a received TCP "packet" is the complete packet that was sent, are you better off? (You are likely to throw away two packets if you bother to check the length/content.)
Network communication for video games is almost always done over UDP.
Speed is of utmost importance and it doesn't really matter if updates are missed since each update contains the complete current state of what the player can see.
The key question was related to "what kind of situations would UDP be the better choice [over tcp]"
There are many great answers above but what is lacking is any formal, objective assessment of the impact of transport uncertainty upon TCP performance.
With the massive growth of mobile applications, and the "occasionally connected" or "occasionally disconnected" paradigms that go with them, there are certainly situations where the overhead of TCP's attempts to maintain a connection when connections are hard to come by leads to a strong case for UDP and its "message oriented" nature.
Now, I don't have the math/research/numbers on this, but I have produced apps that worked more reliably using ACK/NAK and message numbering over UDP than could be achieved with TCP when connectivity was generally poor, and poor old TCP just spent its time and my client's money trying to connect. You get this in regional and rural areas of many western countries...
In some cases, which others have highlighted, guaranteed arrival of packets isn't important, and hence using UDP is fine. There are other cases where UDP is preferable to TCP.
One unique case where you would want to use UDP instead of TCP is where you are tunneling TCP over another protocol (e.g. tunnels, virtual networks, etc.). If you tunnel TCP over TCP, the congestion controls of each will interfere with each other. Hence one generally prefers to tunnel TCP over UDP (or some other stateless protocol). See TechRepublic article: Understanding TCP Over TCP: Effects of TCP Tunneling on End-to-End Throughput and Latency.
UDP can be used when an app cares more about "real-time" data than exact data replication. For example, VOIP can use UDP, and the app will worry about re-ordering packets, but in the end VOIP doesn't need every single packet; more importantly, it needs a continuous flow of many of them. Maybe you hear a "glitch" in the voice quality, but the main purpose is that you get the message, not that it is recreated perfectly on the other side. UDP is also used in situations where the expense of creating a connection and syncing with TCP outweighs the payload. DNS queries are a perfect example: one packet out, one packet back, per query. If using TCP this would be much more intensive. If you don't get the DNS response back, you just retry.
UDP when speed is necessary and the accuracy of the packets is not; TCP when you need accuracy.
UDP is often harder in that you must write your program in such a way that it is not dependent on the accuracy of the packets.
It's not always clear cut. However, if you need guaranteed delivery of packets with no loss and in the right sequence then TCP is probably what you want.
On the other hand, UDP is appropriate for transmitting short packets of information where the sequence of the information is less important or where the data can fit into a single packet.
It's also appropriate when you want to broadcast the same information to many users.
Other times, it's appropriate when you are sending sequenced data but if some of it goes missing you're not too concerned (e.g. a VOIP application).
Some protocols are more complex because what's needed are some (but not all) of the features of TCP, but more than what UDP provides. That's where the application layer has to implement the additional functionality. In those cases, UDP is also appropriate (e.g. Internet radio, where order is important but not every packet needs to get through).
Examples of where it is/could be used
1) A time server broadcasting the correct time to a bunch of machines on a LAN.
2) VOIP protocols
3) DNS lookups
4) Requesting LAN services e.g. where are you?
5) Internet radio
6) and many others...
On unix you can type grep udp /etc/services to get a list of the UDP protocols implemented today... there are hundreds.
Look at section 22.4 of Stevens' Unix Network Programming, "When to Use UDP Instead of TCP".
Also, see this other SO answer about the misconception that UDP is always faster than TCP.
What Stevens says can be summed up as follows:
Use UDP for broadcast and multicast since that is your only option (use multicast for any new apps)
You can use UDP for simple request / reply apps, but you'll need to build in your own acks, timeouts and retransmissions
Don't use UDP for bulk data transfer.
We know that UDP is a connection-less protocol, so it is
suitable for processes that require simple request-response communication,
suitable for processes with internal flow and error control,
suitable for broadcasting and multicasting.
Specific examples:
used in SNMP
used for some route updating protocols such as RIP
Comparing TCP with UDP, connection-less protocols like UDP assure speed, but not reliability of packet transmission.
For example, video games typically don't need a reliable network, but speed is most important, and using UDP for games has the advantage of reducing network delay.
You want to use UDP over TCP in the cases where losing some of the data along the way will not completely ruin the data being transmitted. A lot of its uses are in real-time applications, such as gaming (i.e., FPS, where you don't always have to know where every player is at any given time, and if you lose a few packets along the way, new data will correctly tell you where the players are anyway), and real-time video streaming (one corrupt frame isn't going to ruin the viewing experience).
We have a web service that has thousands of WinForms clients on as many PCs. The PCs have no connection to the DB backend; all access is via the web service. So we decided to develop a central logging server that listens on a UDP port, and all the clients send an XML error-log packet (using the log4net UDP appender) that gets dumped to a DB table upon receipt. Since we don't really care if a few error logs are missed, and with thousands of clients it is fast, the dedicated logging service avoids loading the main web service.
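The listener side of such a logging server is only a few lines; a sketch with a made-up port, where the XML handling is left as a stub:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 8999))   # hypothetical logging port

    while True:
        packet, client = sock.recvfrom(64 * 1024)   # one log event per datagram
        xml_event = packet.decode("utf-8", errors="replace")
        # ... parse xml_event and insert it into the DB table ...

A dropped datagram here just means one missing log line, which is exactly the trade-off described above.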
I'm a bit reluctant to suggest UDP when TCP could possibly work. The problem is that if TCP isn't working for some reason, because the connection is too laggy or congested, changing the application to use UDP is unlikely to help. A bad connection is bad for UDP too. TCP already does a very good job of minimizing congestion.
The only case I can think of where UDP is required is for broadcast protocols. In cases where an application involves two known hosts, UDP will likely offer only marginal performance benefits at a substantially increased cost in code complexity.
Only use UDP if you really know what you are doing. UDP is called for in extremely rare cases today, but the number of (even very experienced) experts who would try to stick it everywhere seems out of proportion. Perhaps they enjoy implementing the error-handling and connection-maintenance code themselves.
TCP should be expected to be much faster with modern network interface cards due to what's known as checksum offload. Surprisingly, at fast connection speeds (such as 1 Gbps) computing a checksum would be a big load on the CPU, so it is offloaded to NIC hardware that recognizes TCP packets, and the NIC may not offer the same service for UDP.
UDP is perfect for VoIP, where a data packet has to be sent regardless of its reliability...
Video chatting is an example of UDP (you can check this with a Wireshark network capture during any video chat).
Also, protocols like DNS and SNMP run over UDP by default.
UDP has very little overhead, while TCP has lots of overhead.
