When I'm learning about various technologies, I often try to think of how applications I use regularly implement such things. I've played a few MMOs along with some FPSs. I did some looking around and happened upon this thread:
http://www.gamedev.net/topic/319003-mmorpg-and-the-ol-udp-vs-tcp
From what I've seen, UDP shines when some packet loss is permissible: there's less overhead involved and updates arrive more quickly. After looking around for a bit and reading various articles and threads, I've come to see that character positioning is often done over UDP, and that games like FPSs tend to use UDP because of all the rapid changes that are occurring.
I've seen it pointed out multiple times now that issues can occur when using UDP and TCP simultaneously. What might some of these problems be? Are they issues that would mostly trip up novice programmers? It seems to me that it would be ideal to use a combination of UDP and TCP, gaining the advantages of each. However, if using the two together adds significant complexity to the code to handle the problems it causes, it may not be worth it in certain situations.
Many games use UDP and TCP together. Since every game has to deliver each player's actions to everyone, that has to happen one way or the other. What you pick then depends on the kind of game you want to make. In an RTS, TCP would surely be wiser, since you cannot afford to lose information about your opponent's movements. In an RPG, it is not that critical to keep track of everything every second of the game.
Bottom line: if data absolutely has to arrive at the client (position updates, upgrades, and so on), you have to send it via TCP, or implement your own reliable protocol on top of UDP. I have built quite a few network stacks for games, and I have to say, what you use depends on the use case and what you are trying to accomplish. I mostly ran a heartbeat over UDP, so the remote server/client knows that I am still there. The problem with UDP is that packets get lost and are not resent. If a packet drops, it is lost forever; you have to take that into account. If you send information over UDP, it has to be information that is allowed to be lost. Everything else goes via TCP.
And so the madness began. If you want to get the most out of both, you will have to adapt them. TCP can be slow at times: if packets get fragmented or do not arrive in order, you have to wait until the OS has reassembled them. In some cases it can be advisable to build your own reliable protocol on top of UDP; that gives you complete control over your traffic. Most firewalls do not drop UDP outright, but as with TCP, any traffic that is not explicitly allowed (open ports, packet redirects, and so on) gets dropped. I would suggest you read the TCP, UDP, and UDP-Lite articles on Wikipedia and then decide which one to use for what. AFAIK Battle.net uses a combination of the two.
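To illustrate the heartbeat idea above, here is a minimal Java sketch; the address, port, and one-second interval are all assumptions for the example, not anything a particular game actually uses.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpHeartbeat {
    public static void main(String[] args) throws Exception {
        InetAddress server = InetAddress.getByName("127.0.0.1"); // assumed server address
        int port = 9999;                                         // assumed heartbeat port
        byte[] payload = "HEARTBEAT".getBytes(StandardCharsets.US_ASCII);

        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                // Fire-and-forget: if one datagram is lost, the next one a
                // second later makes up for it. That is exactly the kind of
                // data that is allowed to be lost.
                socket.send(new DatagramPacket(payload, payload.length, server, port));
                Thread.sleep(1000);
            }
        }
    }
}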
Many services use UDP and TCP together, but doing so without adding congestion control to your UDP implementation can cause major problems for TCP. Long story short, UDP can and often will clog the routers at each endpoint, causing TCP's congestion control to go haywire and significantly limiting the throughput of the TCP connection. The UDP-induced congestion can also significantly increase packet loss for TCP, limiting TCP's throughput even more, since those packets will need to be retransmitted. Using them together isn't a bad idea and is even becoming somewhat common, but you'll want to keep this in mind.
The first possible issue I can think of is that, because UDP doesn't have the overhead inherent in the "transmission control" that TCP does, UDP has higher data bandwidth and lower latency. So, it is possible for a UDP datagram that was sent after a TCP message to be available on the remote computer's input buffers before the TCP message is received in full.
This may cause problems if you use TCP to control or monitor UDP transmission; for instance, you might tell the server via TCP that you'll be sending some datagrams via UDP on port X. If the remote computer isn't already listening on port X, it may not receive some of these datagrams, because they arrived before it was told to listen; if it is listening, but not expecting traffic from you, it may discard them because they showed up before it was told to expect them. This may have an adverse effect on your program's flow or your user's experience.
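One way around that race, sketched below with made-up names, port, and protocol message: bind the UDP socket before announcing it over the TCP control channel, so the OS is already buffering datagrams for that port by the time the peer starts sending.

import java.io.PrintWriter;
import java.net.DatagramSocket;
import java.net.Socket;

public class AnnounceAfterBind {
    public static void main(String[] args) throws Exception {
        // 1) Bind the UDP socket FIRST (port 0 lets the OS pick a free one)...
        try (DatagramSocket udp = new DatagramSocket(0);
             // 2) ...and only THEN tell the peer, over TCP, where to send datagrams.
             Socket control = new Socket("127.0.0.1", 8080); // assumed control channel
             PrintWriter out = new PrintWriter(control.getOutputStream(), true)) {
            out.println("UDP_PORT " + udp.getLocalPort()); // made-up protocol message
            // From the moment of bind, datagrams arriving on udp.getLocalPort()
            // are buffered by the OS even before we ever call receive().
        }
    }
}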
I think that if you need to transfer game data reliably, TCP is the only solution. Imagine that you send a command (e.g. "gained x item" :P) to the server and the packet never reaches its destination (UDP makes no guarantees).
Also imagine the scenario where two or more UDP packets reach their destination in the wrong order.
But if you integrate any VoIP or video-call capabilities into your game, you can use UDP for those.
What purpose does UDP serve if it delivers packets without any ordering (and given the fact that packets may get lost on the way or be sent to another network)?
UDP has many very useful use cases.
Just a few off the top of my head:
1/ Your payloads are small (they fit in a single "packet") and you want to go fast. That's why DNS uses UDP when the data size does not exceed 512 bytes (99% of cases?):
https://en.wikipedia.org/wiki/Domain_Name_System#Protocol_transport
And you do hundreds of DNS requests every day. How many TCP 3-way handshakes and connection tear-downs are saved by this? How many petabytes of network load saved on "the internet"? I'd say that's quite useful!
2/ You do not know who you are talking to, or even whether someone is listening or willing to reply. In other words, you cannot, or do not want to, establish an actual connection the way TCP would. There may not even be a TCP service listening for you. For example, the SSDP protocol from UPnP uses UDP to discover devices/services:
https://en.wikipedia.org/wiki/Simple_Service_Discovery_Protocol
With UDP, though, you can send your data "into the wild" even if nobody is listening. Which leads me to point 3...
3/ You want to talk to multiple hosts, or even "everyone". That's multicasting and broadcasting, and it's very easy to do with UDP (see the sketch at the end of this answer). The SSDP mentioned above is an example of such a case. Trying to do multicast or broadcast with TCP, on the other hand, gets very tricky from the start: you have to subscribe to a multicast group and so on. A multicast daemon may help (ex: https://github.com/troglobit/smcroute), but it's really far more difficult with TCP than with UDP.
4/ Your data is real-time: if the receiver misses it, there's no point asking "please send it again, I did not get it and/or not in the correct order". That's too late, sorry. The receiver had better forget it, move on, and try to catch up. A typical use case here is live audio/video (telephone conversations, real-time video streaming). There's no point in the receiver trying to fetch old, expired data again and again, as TCP does for missed segments; you only accumulate network-data debt that way. Better to forget it and move on to the new real-time data that keeps coming in. You cannot "pause" real-time incoming data, at least not if you want actual real-time, rather than the pseudo real-time you get in your web browser.
And I'm sure other posters will find many use-cases for UDP.
So UDP is very, VERY useful. You use it daily without noticing it. The networking world would be a pitiful place without it. You would really miss it. The name "TCP/IP" should really be "TCP-UDP/IP".
This was my advocacy for the unfairly despised but Oh-so-useful UDP. :-)
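As promised in point 3, here is a small sketch of how easy multicast is with UDP in Java. The group address is an arbitrary pick from the administratively scoped range, and the port is made up too; this is not what SSDP actually uses.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

public class MulticastHello {
    static final String GROUP = "239.0.0.42"; // arbitrary administratively scoped group
    static final int PORT = 4242;             // made-up port

    // Sender: a single send() reaches every host that joined the group.
    public static void send(String message) throws Exception {
        byte[] data = message.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(GROUP), PORT));
        }
    }

    // Receiver: join the group, then block until one datagram arrives.
    public static String receive() throws Exception {
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(InetAddress.getByName(GROUP));
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            return new String(packet.getData(), 0, packet.getLength(),
                    StandardCharsets.UTF_8);
        }
    }
}

Doing the equivalent over TCP would require one connection per receiver; here the network fans the datagram out for you.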
Typically, you use UDP in applications where speed is more critical than reliability. For example, it may be better to use UDP in an application sending data from a fast acquisition source where it is acceptable to lose some data points. You can also use UDP to broadcast/multicast to any/many machine(s) listening to the server.
Applications may want finer control over the performance characteristics or the reliability of their communications. For this, the operating system exposes IP’s “best-effort datagrams” to the application for it to do whatever it wants with them.
To do this, the OS provides UDP — the “user” datagram protocol. It’s just like IP, in that the service is best-effort datagrams, but instead of delivering those datagrams “to a computer,” there’s an added layer of addressing that says which application is interested in them (and like TCP, UDP does this with a port number).
Applications can run whatever they want on top of UDP — anything that runs on best-effort datagrams. There are lots of protocols you can run on top of that abstraction.
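A minimal sketch of that port-number addressing (port 5005 is an arbitrary choice): whichever application binds the port receives every best-effort datagram the OS accepts for it, with no connection involved.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class BestEffortReceiver {
    public static void main(String[] args) throws Exception {
        // Binding port 5005 is the "added layer of addressing": the OS now
        // delivers any datagram sent to this machine's UDP port 5005 to this
        // application, best-effort, in whatever order they happen to arrive.
        try (DatagramSocket socket = new DatagramSocket(5005)) {
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks until a datagram arrives
                System.out.printf("%d bytes from %s:%d%n",
                        packet.getLength(),
                        packet.getAddress().getHostAddress(),
                        packet.getPort());
            }
        }
    }
}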
In general:
TCP is for high-reliability data transmissions
UDP is for low-overhead transmissions
Okay, so I am programming for my networking course and I have to implement a project in Java using UDP. We are implementing an HTTP server and client along with a 'gremlin' function that corrupts packets with a specified probability. The HTTP server has to break a large file up into multiple segments at the application layer to be sent to the client over UDP. The client must reassemble the received segments at the application layer. What I am wondering, however, is: if UDP is by definition unreliable, why am I having to simulate unreliability here?
My first thought is that it's simply because my instructor figures that in our case the client and the server will be run on the same machine, and the file will be transferred from one process to another 100% reliably even over UDP, since it is between two processes on the same computer.
This led me first to question whether or not UDP could ever actually lose a packet, corrupt a packet, or deliver a packet out of order if the server and client were guaranteed to be two processes on the same physical machine, guaranteed to be routed strictly over localhost only such that it won't ever go out over the network.
I would also like to know, in general, for a given packet, the rough probability that UDP will drop, corrupt, or deliver a packet out of order while being used to facilitate communication over the open internet between two hosts that are fairly geographically distant from one another (say, something comparable to the route between the average broadband user in the US and one of Google's CDNs). I'm mostly just trying to get a general idea of the conditions experienced when communicating over UDP: does it drop / corrupt / misorder something on the order of 25% of packets, or is it more like 0.001%?
Much appreciation to anyone who can shed some light on any of these questions for me.
Packet loss happens for multiple reasons. Primarily it is caused by errors on individual links and network congestion.
Packet loss due to errors on the link is very low, when links are working properly. Less than 0.01% is not unusual.
Packet loss due to congestion obviously depends on how busy the link is. If there is spare capacity along the entire path, this number will be 0%. But as the network gets busy, this number will increase. When flow control is done properly, this number will not get very high. A couple of lost packets is usually enough that somebody will reduce their transmission speed enough to stop packets getting lost due to congestion.
If packet loss ever reaches 1%, something is wrong. That something could be a bug in how your congestion-control algorithm responds to packet loss. If it keeps sending packets at the same rate while the network is congested and losing packets, the loss can be pushed much higher; 99% packet loss is possible if software is misbehaving. But this depends on the types of links involved. Gigabit Ethernet uses backpressure to control the flow, so if the path from source to destination is a single Gigabit Ethernet segment, the sending application may simply be slowed down and never see actual packet loss.
For testing behaviour of software in case of packet loss, I would suggest two different simulations.
For each packet, drop it with a probability of 10% and transmit it with a probability of 90%.
Transmit up to 100 packets per second or up to 100KB per second, and drop the rest if the application would send more.
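For the first simulation, the 'gremlin' can be as small as the sketch below; how it plugs into your server's send path is up to you, and the 10% figure is just the probability suggested above.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Random;

public class Gremlin {
    private final Random random = new Random();
    private final double dropProbability; // e.g. 0.10 for 10%

    public Gremlin(double dropProbability) {
        this.dropProbability = dropProbability;
    }

    // Drop the packet with the configured probability, otherwise send it.
    public void send(DatagramSocket socket, DatagramPacket packet) throws Exception {
        if (random.nextDouble() < dropProbability) {
            return; // simulate loss: silently discard the datagram
        }
        socket.send(packet);
    }
}

Routing every socket.send(...) call through gremlin.send(...) then gives you repeatable, tunable loss without touching the rest of the server.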
if UDP is by definition unreliable, why am I having to simulate unreliability here?
It is very useful to have a controlled mechanism to simulate worst case scenarios and how both your client and server can respond to them. The instructor will likely want you to demonstrate how robust the system can be.
You are also talking about payload validity here and not just packet loss.
This led me to question whether UDP could lose a packet, corrupt a packet, or deliver it out of order if the server and client were two processes on the same machine and traffic never had to go out over the actual network.
It is obviously less likely over the loopback adapter, but this is not impossible.
I found a few forum posts on the topic here and here.
I am also wondering what the chances of actually losing a packet, having it corrupted, or having them delivered out of order in reality would usually be over the internet between two geographically distant hosts.
This question would probably need to be narrowed down a bit. There are several factors both application level (packet size and frequency) as well as limitations/traffic of routers and switches along the path.
I couldn't find any hard numbers on this but it seems to be fairly low... like sub 5%.
You may be interested in The Internet Traffic Report and possibly pages such as this.
I was spamming UDP packets over Wi-Fi to some Nanoleaf panels and my packet loss was roughly 1/7000.
I think it depends on a ton of factors.
Some games today use a network system that transmits messages over UDP, and ensures that the messages are reliable and ordered.
For example, RakNet is a popular game network engine. It uses only UDP for its connections, and has a whole system to ensure that packets can be reliable and ordered if you so choose.
My basic question is, what's up with that? Isn't TCP the same thing as ordered, reliable UDP? What makes it so much slower that people have to basically reinvent the wheel?
General/Specialisation
TCP is a general-purpose reliable system.
UDP + whatever is a special-purpose reliable system.
Specialized things are usually better than general-purpose things at the thing they are specialized for.
Stream / Message
TCP is stream-based
UDP is message-based
Sending discrete gaming information usually maps better to a message-based paradigm. Sending it through a stream is possible, but horribly ineffective. If you want to reliably send a huge amount of data (file transfer), TCP is quite effective. That's why BitTorrent uses UDP for control messages and TCP for data transfer.
We switched from reliable to unreliable in League of Legends about a year ago, because of several advantages which have since proven to be true:
1) Old information becomes irrelevant. If I send a health packet and it doesn't arrive... I don't want to have to wait for that same health packet to be resent when I know it's changed.
2) Order is sometimes not necessary. If I'm sending different messages to different systems, it may not be necessary to get those messages in order. I don't force the client to wait for in-order messages.
3) Unreliable doesn't get backed up with messages, i.e. waiting for acknowledgements, which means you can resolve loss spikes much more quickly.
4) You can control resends more efficiently when necessary, such as repacking something that didn't send into another packet. (TCP does repack, but you can do it more efficiently with knowledge of how your program works.)
5) Flow control of messages, such as throwing away messages that are less relevant when the network suddenly spikes. The network system can choose not to resend less relevant messages when you have a loss spike. With TCP you'd still have a queue of messages trying to resend, which may be lower priority.
6) Smaller packet header... don't really need to say much about that.
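As a rough sketch of point 1 (and partly point 5), a receiver can stamp every state update with a per-entity sequence number and silently drop anything stale. The class and field names here are invented for illustration; this is not actual League of Legends code, and it assumes a single receive thread.

import java.util.HashMap;
import java.util.Map;

public class LatestStateOnly {
    private final Map<Integer, Long> latestSeq = new HashMap<>();

    // Returns true if the update should be applied, false if it is stale.
    public boolean accept(int entityId, long sequence) {
        Long previous = latestSeq.get(entityId);
        if (previous != null && sequence <= previous) {
            return false; // an older (or duplicate) health packet: ignore it
        }
        latestSeq.put(entityId, sequence);
        return true;
    }
}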
There's much more of a difference between UDP and TCP than just reliability and sequencing:
At the heart of the matter is the fact that UDP is connectionless while TCP is connected. This simple difference leads to a host of other differences that I'm not going to be able to reasonably summarize here. You can read the analysis below for much more detail.
TCP - UDP Comparative Analysis
The answer, in my opinion, is two words: "congestion control".
TCP goes to great lengths to manage the bandwidth of the path: to use as much of it as possible, while ensuring there is space for other applications. This is a very hard task, and inherently it is not possible to use 100% of the bandwidth 100% of the time.
With UDP, on the other hand, one can make their own protocol that sends packets onto the wire as fast as they want. This makes the protocol very unfriendly to other applications, but can gain more "performance" short-term. On the other hand, if the conditions are right, this kind of protocol will quite probably contribute to congestion collapse.
TCP is a stream-oriented protocol, whereas UDP is a message-oriented protocol. Hence TCP does more than just reliability and ordering. See this post for more details. Basically, the RakNet developers added the reliability and ordering while still keeping it as a message-oriented protocol, and so the result was more lightweight than TCP (which has to do more).
This little article is old, but it's still pretty true when it comes to games. It explains the two protocols, and the havoc these folks went through trying to develop a multiplayer internet game, "X-Wing vs. TIE Fighter":
Lessons Learned (The Internet Sucks)
There is one caveat to this, though. I run/develop a multiplayer game, and I've used both. UDP was much better for my app, but a lot of people couldn't play with UDP: routers and such blocked the connections. So I changed to the "reliable" TCP. Well... reliable? I don't think so. You send a packet, no errors; you send another and it crashes (exception) in the middle of the packet. Now which packets made it? So you end up writing a reliable protocol ON TOP OF TCP, to simulate UDP, but continuously establishing a new connection when it crashes. Talk about inefficient.
UDP + stop-and-wait ARQ = good
UDP + sliding window protocol = better
TCP + sliding window protocol with reconnection? = worthless bulkware. (IMHO)
The other side effect is multi-threaded applications. TCP works well for a chat-room type thing, since each room can be its own thread. A room can hold 60-100 people and it runs fine, as the room thread contains the sockets for each participant.
UDP, on the other hand, is best served (IMO) by one thread, but when you get a packet, you have to parse it to figure out who it came from (via info sent or RemoteEndPoint), then pass that data to the chat-room thread in a thread-safe manner.
Actually, you have to do the same with TCP, but only on connect.
Last point. Remember that TCP will just error out and kill the connection at any time, but you can reconnect in about 0.5 seconds and send the same information. Most bizarre thing I've ever worked with.
UDP has lower reliability; you can give it more reliability by making it send a message and wait for a response, and if no response comes, resend the message.
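That is stop-and-wait ARQ, as mentioned a few answers up. A minimal sender sketch, with a made-up 500 ms timeout and no retry limit:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class StopAndWaitSender {
    // Send one datagram and wait for any reply; resend on timeout.
    // The peer address/port and the retry policy are illustrative.
    public static void send(byte[] data, InetAddress peer, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(500); // wait at most 500 ms for an ACK
            DatagramPacket out = new DatagramPacket(data, data.length, peer, port);
            byte[] ackBuf = new byte[16];
            DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);
            while (true) {
                socket.send(out);
                try {
                    socket.receive(ack); // got the ACK: done
                    return;
                } catch (SocketTimeoutException e) {
                    // no response came back in time: loop and resend the message
                }
            }
        }
    }
}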
Why does the UDP protocol not support streams like TCP protocol does?
Others have given perfectly good answers, but I think an analogy may help.
Imagine that UDP is a bit like a mailman: putting letters through people's doors, but never checking that they actually see the letter, or that the person even exists. This is cheap, and works well for junk mail sent to many people (read: broadcast packets).
TCP is more like a messenger who will knock on your door and deliver a message in person. If you're not in, he'll come back and try a bit later. Eventually, you'll get the message - or the messenger will know that he hasn't delivered it. You can also send a message back to the sender via the messenger.
Don't try to read too much into my analogy - it's not like there's actually a single "messenger" in TCP - but it might help in terms of thinking about things. In particular, imagine you were sending a whole series of letters to someone, which they had to read in order - that can't work (reliably) with a mailman, as you might come down to find 10 letters sitting on your doormat, with no idea what order to read them in. The mailman might have dropped a few on the way, as well - and no-one will know. That's not a suitable basis for a reliable stream.
(On the other hand, it's a fine model for newspaper distribution. If you happen to miss a few, that's not a problem - you'll still get the later ones, which will be more interesting to you by then. That's why some streaming media solutions use UDP, despite it not giving a reliable actual stream.)
Well ...
UDP is a datagram protocol. It is intended to let applications send individual datagrams, without a high-level protocol controlling what goes on.
A "stream" is a high-level concept, and thus not possible for UDP to support.
Many applications end up re-implementing parts of TCP on UDP, to get the particular mixture of latency, order independence, error handling, and bandwidth that they need.
The reason is simple: it doesn't because that wasn't one of the design goals for the protocol.
Making UDP work as a stream would mean adding almost every feature that TCP has. You'd end up with two identical protocols. What'd be the point?
UDP was meant to be the minimal lightweight protocol. It was meant to complement TCP.
If you define a 'stream' as a 5-tuple of the IP (& L4) header, then UDP does support streams. I guess what you mean to ask is whether UDP supports connections (where each stream, i.e. 5-tuple, is treated as a logical connection) with more-or-less guaranteed delivery of the packets belonging to that stream. With UDP, although the packets are part of a stream (5-tuple), there is no effort to ensure reliable delivery; it relies on the underlying network layer to deliver properly, and if that fails, so be it.
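In socket-API terms, that 5-tuple view corresponds to a "connected" UDP socket; the sketch below uses an example peer address and shows that connecting merely fixes the peer, without adding any reliability.

import java.net.DatagramSocket;
import java.net.InetAddress;

public class ConnectedUdp {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        // connect() pins down the remote half of the 5-tuple: from now on the
        // OS discards datagrams arriving from any other source, and send()
        // needs no destination. There is still no reliability, just a fixed peer.
        socket.connect(InetAddress.getByName("192.0.2.7"), 6000); // example peer
        socket.close();
    }
}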
Jon Skeet's analogy makes perfect sense.
I wanted to know why UDP is used in RTP rather than TCP. Major VoIP tools use only UDP, as I found when I hacked on some of the VoIP OSS.
As DJ pointed out, TCP is about getting a reliable data stream, and will slow down transmission, and re-transmit corrupted packets, in order to achieve that.
UDP does not care about reliability of the communication, and will not slow down or re-transmit data.
If your application needs a reliable data stream, for example, to retrieve a file from a webserver, you choose TCP.
If your application doesn't care about corrupted or lost packets, and you don't need to incur the additional overhead to provide the additional reliability, you can choose UDP instead.
VOIP is not significantly improved by reliable packet transmission, and in fact, in some cases things in TCP like retransmission and exponential backoff can actually hurt VOIP quality. Therefore, UDP was a better choice.
A lot of good answers have been given, but I'd like to point one thing out explicitly:
Basically, a complete data stream is a nice thing to have for real-time audio/video, but it's not strictly necessary (as others have pointed out):
The important fact is that some data that arrives too late is worthless. What good is the missing data for a frame that should have been displayed a second ago?
If you were to use TCP (which also guarantees the correct order of all data), then you wouldn't be able to get to the more up-to-date data until the old one is transmitted correctly. This is doubly bad: you have to wait for the re-transmission of the old data and the new data (which is now delayed) will probably be just as worthless.
So RTP does some kind of best-effort transmission in that it tries to transfer all available data in time, but doesn't attempt to re-transmit data that was lost/corrupted during the transfer (*). It just goes on with life and hopes that the more important current data gets there correctly.
(*) actually I don't know the specifics of RTP. Maybe it does try to re-transmit, but if it does then it won't be as aggressive as TCP is (which won't ever accept any lost data).
The others are correct; however, they don't really tell you the REAL reason why. Saua kind of hints at it, but here's a more complete answer.
Audio and Video is real-time. If you are listening to a radio, or watching TV, and the signal is interrupted, it doesn't pick up where you left off.. you're just "observing" the signal as it streams, and if you can't observe it at any given time, you lose it.
The reason is simple: delay. VoIP tries very hard to minimize the amount of delay from the time someone speaks into one end until you get it on your end, and your response back. Otherwise, as errors occurred, the amount of delay between when the person spoke and when the signal was received would continuously grow until it became useless.
Remember, each delay from a retransmission has to be replayed, and that causes further data to be delayed, then another error causes an even greater delay. The only workable solution is to simply drop any data that can't be displayed in real-time.
A 1-second delay from retransmission would mean it is now 1 second from the time I said something until you heard it. A second 1-second delay now means it's 2 seconds from the time I say something until you hear it. This is cumulative, because data is played back at the same rate at which it is spoken, and so on...
RTP could be connection oriented, but then it would have to drop (or skip) data to keep up with retransmission errors anyways, so why bother with the extra overhead?
Technically RTP packets can be interleaved over a TCP connection. There are lots of great answers given here. Two additional minor points:
RFC 4588 describes how one could use retransmission with RTP data. Most clients that receive RTP streams employ a buffer to account for jitter in the network that is typically 1-5 seconds long and which means there is time available for a retransmit to receive the desired data.
RTP traffic can be interleaved over a TCP connection. In practice when this is done, the difference between Interleaved RTP (i.e. over TCP) and RTP sent over UDP is how these two perform over a lossy network with insufficient bandwidth available for the user. The Interleaved TCP stream will end up being jerky as the player continually waits in a buffering state for packets to arrive. Depending on the player it may jump ahead to catch up. With an RTP connection you will get artifacts (smearing/tearing) in the video.
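A toy sketch of that jitter buffer: real ones are time-based (the 1-5 seconds mentioned above) and adaptive, whereas this one just holds a fixed backlog ordered by sequence number so that late or retransmitted packets can still slot into place before playout.

import java.util.TreeMap;

public class JitterBuffer {
    private static final int DEPTH = 50; // arbitrary backlog size for the sketch
    private final TreeMap<Integer, byte[]> pending = new TreeMap<>();

    // Network side: store each packet keyed by its RTP sequence number.
    public synchronized void put(int sequence, byte[] payload) {
        pending.put(sequence, payload);
    }

    // Playout side: emit the lowest-numbered packet once enough are queued.
    public synchronized byte[] poll() {
        if (pending.size() < DEPTH) {
            return null; // still filling: give stragglers time to arrive
        }
        return pending.pollFirstEntry().getValue();
    }
}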
UDP is often used for various types of realtime traffic that doesn't need strict ordering to be useful. This is because TCP enforces an ordering before passing data to an application (by default, you can get around this by setting the URG pointer, but no one seems to ever do this) and that can be highly undesirable in an environment where you'd rather get current realtime data than get old data reliably.
RTP is fairly insensitive to packet loss, so it doesn't require the reliability of TCP.
UDP has less overhead for headers so that one packet can carry more data, so the network bandwidth is utilized more efficiently.
UDP also provides fast data transmission.
So UDP is the obvious choice in cases such as this.
Besides all the other nice and correct answers, this article gives a good understanding of the differences between TCP and UDP.
The Real-time Transport Protocol is a network protocol used to deliver streaming audio and video media over the internet, thereby enabling Voice over Internet Protocol (VoIP).
RTP is generally used with a signaling protocol, such as SIP, which sets up connections across the network. RTP applications can use the Transmission Control Protocol (TCP), but most use the User Datagram protocol (UDP) instead because UDP allows for faster delivery of data.
UDP is used wherever data is sent that does not need to arrive at the target intact, or where no stable connection is needed.
TCP is used if data needs to be received exactly, bit for bit, with no loss of bits.
For video and sound streaming, a few bits lost on the way do not noticeably affect the result: a few failing pixels in a frame of a stream are nothing a user will mention. On DVDs the tolerated lost-bit rate is higher.
just a remark:
Each packet sent in an RTP stream is given a number one higher than its predecessor. This allows the destination to determine whether any packets are missing.
If a packet is missing, the best action for the destination to take is to approximate the missing value by interpolation.
Retransmission is not a practical option, since the retransmitted packet would be too late to be useful.
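A sketch of that remark, pretending each packet carries one audio sample (a simplification) and filling a sequence-number gap by linear interpolation between the neighbouring samples:

import java.util.function.DoubleConsumer;

public class InterpolatingReceiver {
    private int lastSeq = -1;
    private double lastValue;

    // Called for each arriving packet; play receives samples in playout order.
    public void onPacket(int seq, double value, DoubleConsumer play) {
        if (lastSeq >= 0) {
            int missing = seq - lastSeq - 1; // gap in the sequence numbers
            for (int i = 1; i <= missing; i++) {
                // approximate each lost sample on the straight line between
                // the last value heard and the current one
                play.accept(lastValue + (value - lastValue) * i / (missing + 1));
            }
        }
        play.accept(value);
        lastSeq = seq;
        lastValue = value;
    }
}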
I'd like to add quickly to what Matt H said in response to Stobor's answer. Matt H mentioned that RTP over UDP packets can be checksummed so that if they are corrupted, they will get resent. This is actually an optional feature on most PBXs. In Asterisk, for example, you can enable/disable checksums on your RTP over UDP traffic in the rtp.conf configuration file with the following line:
rtpchecksums=yes ; or no if you prefer
Cheers!