I want multiple IoT devices (say 50) communicating directly and asynchronously with a server via TCP. Assume all of them send a heartbeat pulse every 30 seconds and may drop off and reconnect at variable times.
Can anyone advise me on the best way to make sure no data is dropped or blocked when multiple devices are communicating simultaneously?
TCP by itself ensures that no data is lost during the communication between a client and a server. It does that through the use of sequence numbers and ACK messages.
Technically, before the actual data transfer happens, a TCP connection is created between the client (which can be an IoT device, or any other device) and the server. Then, the data is split into multiple packets and sent over the network through that connection. All TCP-related mechanisms such as flow control, error detection, and congestion control take place once the data starts to flow.
The Wikipedia page for TCP is a pretty good start if you want to learn more about how it works.
Apart from that, as long as your server has enough capacity to support the flow of requests coming from the devices, then everything should work (at least in theory).
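As an illustration, here is a minimal sketch of such a server using Python's asyncio, where each device gets its own task so a slow or dropped connection never blocks the others. The port, the newline-delimited framing, and the `process()` handler are all assumptions for the example, not anything from the question:

```python
import asyncio

HEARTBEAT_TIMEOUT = 90  # assumed: three missed 30-second heartbeats

async def handle_device(reader, writer):
    peer = writer.get_extra_info("peername")
    try:
        while True:
            # Assume newline-delimited messages; give up if heartbeats stop.
            line = await asyncio.wait_for(reader.readline(), HEARTBEAT_TIMEOUT)
            if not line:              # device closed the connection
                break
            process(peer, line)       # hypothetical application handler
    except asyncio.TimeoutError:
        print(f"{peer} missed its heartbeats, dropping connection")
    finally:
        writer.close()
        await writer.wait_closed()

def process(peer, line):
    print(f"{peer}: {line!r}")

async def main():
    server = await asyncio.start_server(handle_device, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Devices that drop off simply reconnect and get a fresh task; 50 concurrent connections is trivial at this scale.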
I don't think you are asking the right question. There is no way to make sure that no data is dropped or blocked. Networks do not always work (the "work" in "network" is only there to convince you otherwise).
The right question is: how do I make my distributed system as available and reliable as possible? The answer involves viewing interruption and congestion as part of normal operation, and building your software accordingly.
There is a timeless USENIX/ACM paper from the late 70s/early 80s that invigorated the notion that end-to-end protocols are much more effective than over-featured middle-to-middle protocols, and that most middle-to-middle guarantees amount to best effort. If you rely upon those guarantees, you are bound to fail. Sorry, I cannot find the reference right now, but it is widely cited (most likely "End-to-End Arguments in System Design" by Saltzer, Reed, and Clark).
I am building a chat application, where each keystroke the user presses is sent to the server. At the server, an NLP-based recommendation engine generates recommendations based on the context of the typed message at that point in time.
For large-scale deployment, which connection type would be preferable: TCP or UDP? UDP is fast but unreliable, whereas TCP, being reliable, may be slow for real-time use. For example: a user types the words "Hey, lets watch" and quickly clears the text box; a movie recommendation should not be generated after the text box is cleared.
If the server has a recommendation, delivery of that recommendation back to the client should be guaranteed.
The aim is to get real-time recommendation with low latency. Which type would be more preferable?
TCP and UDP behave almost identically if the amount of data sent at a time is less than the maximum payload of a single frame.
In that case UDP can be more "reliable" in terms of real-time behaviour, since it is more within your hands how the data is processed. The downside, of course, is that you have to take care of certain things yourself that TCP gives you for free.
With TCP, on the other hand, the TCP layer of the protocol stack can make a mess of your real-time requirements and you don't even have a chance to do anything about it. Ever thought about retransmits (adding roughly 200 ms of transmission time), the Nagle algorithm (small packets are delayed by up to 200 ms), or delayed TCP ACKs (which may cause retransmits on some stacks)? And there is a lot more in store for you if you have strict real-time requirements.
I'm working on a project which has a 20 ms timeframe and transmits a lot of data in that time using TCP. Even though we have a star architecture and real-time operating systems, it is hell to get this working reliably (though a lot of the effects are due to one of the Ethernet chips we use, the SMSC 91C111).
In conclusion, there is no "best way" to do things like this, since neither UDP nor TCP is a real-time protocol. But since it is fairly easy to switch between them, I recommend simply testing both and choosing the protocol which works best.
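To show how cheap the switch is, here is a rough sketch of a sender parameterized by socket type (the address is a placeholder, framing and reliability concerns are deliberately ignored, and a listener on the other end is assumed):

```python
import socket

def send_sample(payload: bytes, use_tcp: bool, addr=("192.0.2.1", 5000)):
    if use_tcp:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect(addr)
            s.sendall(payload)       # reliable, ordered byte stream
    else:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, addr)  # fire-and-forget datagram

# Run the same workload over both transports and measure which one
# meets your deadlines more consistently.
send_sample(b"sensor-frame", use_tcp=True)
send_sample(b"sensor-frame", use_tcp=False)
```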
I am working on a project which requires sensor information to be obtained from multiple embedded devices so that it may be used by a master machine. The master currently has classes which contain backing fields for each sensor. Data is continuously read on each sensor and a packet is then written and sent to the master to update that sensor's backing field. I have little experience with TCP/UDP so I am not sure which protocol would work better with this setup.
I am currently using TCP to transfer the data because I am worried about data from our rotary encoders being received out of order. Since my experience with this topic is limited, I am not sure whether this is a valid concern.
Does anyone with experience in this area know any reasons that I should prefer one approach over the other?
How much do you care about knowing that a packet was delivered?
How much do you care about knowing that a delivered packet was 100% correct?
How much do you care about the order of packet delivery?
How much do you care about whether the peer is currently connected?
If the answers were "I care a lot", you'd prefer to keep on using TCP, because it ensures all four points.
The counterpart is that UDP can be more lightweight and faster to handle if you are dealing with small packets.
Anyway, it's not so easy to choose one or the other. Just try both.
And read this brief explanation: http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/
I'm no expert but it seems this might be relevant:
Do you care about losing data?
If so, use TCP. Error recovery is automatic.
If not, use UDP. Lost packets are not re-sent, and ordering is not guaranteed either.
After studying the TCP/UDP difference all week, I just can't decide which to use. I have to send a large amount of constant sensor data, while at the same time sending important data that can't be lost. This made a perfect split for me to use both, but then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says using both causes packet/performance loss in the other. Is there any issue if I allow the user to choose which protocol to use (if I program both on the server side) instead of choosing myself? Are there any disadvantages to this?
The only other solution I came up with is to use UDP, and if there seems to be too great a loss of packets, switch to TCP (client-side).
I'd say go with TCP, unless you can't (because you have thousands of sensors, or the sensors have very low energy budgets, or whatever). If you need reliability and go with UDP, you'll have to roll your own reliability layer on top of it.
Try it out with TCP, and measure your performance. If it's OK, and you don't anticipate serious scaling issues, then just stay with TCP.
The article you link goes into detailed analysis of some corner cases. This probably does not apply in your situation. I would ignore it unless your own performance tests start showing problems. Start with the simplest setup (I'm guessing TCP for bulk data transfer and UDP for non-reliable sensor data), test, measure, find bottlenecks, refactor.
The OP says:
... sending important data that can't be lost.
Therefore TCP, by definition, is the right answer over UDP.
Remember: the "U" in UDP actually stands for "User" (User Datagram Protocol), but it might as well stand for "unreliable".
Re:
The only other solution I came up with is to use UDP, and if there seems to be too great a loss of packets, switch to TCP (client-side).
Bad idea: things will tend to break at exactly the times that you don't expect them to. Your job, as an engineer, is to plan for the failure cases and mitigate them in advance. UDP will lose packets. If your data can't be lost, then don't use UDP.
I would also go with just TCP. UDP has its uses, and high-importance sensor data isn't really what comes to mind. If you can stand to lose plenty of sensor data, go with UDP, but I suspect that isn't what you want at all.
UDP is a simpler protocol than TCP, and you can still simulate features of TCP on top of UDP. If you really have custom needs, UDP is easier to tweak.
However, I'd first just use both UDP and TCP, check their behavior in a real environment, and only then decide whether to reimplement TCP in terms of UDP in the exact way you need. Given proper abstraction, this should not be much work.
Maybe it would be enough for you to throttle your TCP usage so that it does not fill up the bandwidth?
If you can't lose data, and you use UDP, you are reinventing TCP, at least a significant fraction of it. Whatever you gain in performance you are prone to lose in protocol design errors, as it is hard to design a protocol.
Constant sensor data: UDP. Important data that can't be lost: TCP.
You can implement your own mechanism to confirm the delivery of UDP packets that can't be lost.
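For instance, a naive stop-and-wait scheme with sequence numbers might look like the sketch below. Everything here (wire format, timeout, retry count) is illustrative only; a production protocol needs far more care (sliding windows, congestion control, sequence wraparound):

```python
import socket
import struct

def send_reliable(sock, addr, seq, payload, retries=5, timeout=0.2):
    """Naive stop-and-wait: resend until the peer echoes our sequence number."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True          # peer confirmed this sequence number
        except socket.timeout:
            continue                 # ACK lost or late: retransmit
    return False                     # give up after `retries` attempts

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = send_reliable(sock, ("192.0.2.7", 6000), seq=1, payload=b"critical")
```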
I would say go with TCP. Also, if you're dealing with a lot of packet loss, the protocol of choice is your least concern. If the data is important, TCP. If the data is not important and can be supplemented later, UDP. If the data is mission-critical, TCP. UDP will be faster, but leave you with errors left and right from corrupt or missing packets. In the end, you'd be reinventing TCP to fix those problems.
For a general protocol message exchange which can tolerate some packet loss, how much more efficient is UDP than TCP?
People say that the major thing TCP gives you is reliability. But that's not really true. The most important thing TCP gives you is congestion control: you can run 100 TCP connections across a DSL link all going at max speed, and all 100 connections will be productive, because they all "sense" the available bandwidth. Try that with 100 different UDP applications, all pushing packets as fast as they can go, and see how well things work out for you.
On a larger scale, this TCP behavior is what keeps the Internet from locking up into "congestion collapse".
Things that tend to push applications towards UDP:
Group delivery semantics: it's possible to do reliable delivery to a group of people much more efficiently than TCP's point-to-point acknowledgement (see the multicast sketch after this list).
Out-of-order delivery: in lots of applications, as long as you get all the data, you don't care what order it arrives in; you can reduce app-level latency by accepting an out-of-order block.
Unfriendliness: on a LAN party, you may not care if your web browser functions nicely as long as you're blitting updates to the network as fast as you possibly can.
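To make the group-delivery point concrete: UDP supports IP multicast, where a single send reaches every receiver that has joined the group, something TCP has no equivalent for. A minimal sketch (the group address and port are arbitrary examples; sender and receivers run on different hosts):

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007   # arbitrary example multicast group

# --- On each receiving host: join the group, then read as usual. ---
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
print(rx.recvfrom(1024))          # blocks until a group datagram arrives

# --- On the sending host: one sendto() reaches all joined receivers. ---
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.sendto(b"update for everyone", (GROUP, PORT))
```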
But even if you care about performance, you probably don't want to go with UDP:
You're on the hook for reliability now, and a lot of the things you might do to implement reliability can end up being slower than what TCP already does.
Now you're network-unfriendly, which can cause problems in shared environments.
Most importantly, firewalls will block you.
You can potentially overcome some TCP performance and latency issues by "trunking" multiple TCP connections together; iSCSI does this to get around congestion control on local area networks, but you can also do it to create a low-latency "urgent" message channel (TCP's "URGENT" behavior is totally broken).
In some applications TCP is faster (better throughput) than UDP.
This is the case when doing lots of small writes relative to the MTU size. For example, I read about an experiment in which a stream of 300-byte packets was sent over Ethernet (1500-byte MTU), and TCP was 50% faster than UDP.
The reason is that TCP tries to buffer the data and fill a full network segment, thus making more efficient use of the available bandwidth.
UDP, on the other hand, puts the packet on the wire immediately, congesting the network with lots of small packets.
You probably shouldn't use UDP unless you have a very specific reason for doing so, especially since you can give TCP the same sort of latency as UDP by disabling the Nagle algorithm (for example, if you're transmitting real-time sensor data and you're not worried about congesting the network with lots of small packets).
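For reference, disabling Nagle is a one-line socket option; the TCP_NODELAY flag exists in essentially every socket API. The endpoint below is a placeholder:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
sock.connect(("localhost", 8080))          # placeholder endpoint
sock.sendall(b"small real-time update")    # written immediately, not coalesced
```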
UDP is faster than TCP for the simple reason that its lack of an acknowledgment packet (ACK) permits a continuous packet stream, whereas TCP acknowledges sets of packets, paced using the TCP window size and round-trip time (RTT).
For more information, I recommend the simple, but very comprehensible Skullbox explanation (TCP vs. UDP)
with loss tolerant
Do you mean "with loss tolerance" ?
Basically, UDP is not "loss tolerant". You can send 100 packets to someone, and they might only get 95 of those packets, and some might be in the wrong order.
For things like video streaming and multiplayer gaming, where it is better to miss a packet than to delay all the other packets behind it, this is the obvious choice.
For most other things, though, a missing or reordered packet is critical. You'd have to write some extra code to run on top of UDP to retry when things get missed and to enforce correct ordering. That would add a small bit of overhead in certain places.
Thankfully, some very very smart people have done this, and they called it TCP.
Think of it this way: If a packet goes missing, would you rather just get the next packet as quickly as possible and continue (use UDP), or do you actually need that missing data (use TCP). The overhead won't matter unless you're in a really edge-case scenario.
When speaking of "what is faster" - there are at least two very different aspects: throughput and latency.
If we are speaking about throughput, TCP's flow control (as mentioned in other answers) is extremely important, and doing anything comparable over UDP, while certainly possible, would be a Big Headache(tm). As a result, using UDP when you need throughput rarely qualifies as a good idea (unless you want an unfair advantage over TCP).
However, if we are speaking about latencies, the whole thing is completely different. While in the absence of packet loss TCP and UDP behave very similarly (any differences being marginal), after a packet is lost the whole pattern changes drastically.
After any packet loss, TCP will wait for a retransmit for at least 200 ms (1 second per Section 2.4 of RFC 6298, but practical modern implementations tend to reduce it to 200 ms). Moreover, with TCP, even those packets which did reach the destination host will not be delivered to your app until the missing packet is received (i.e., the whole communication is delayed by ~200 ms). BTW, this effect, known as Head-of-Line Blocking, is inherent to all reliable ordered streams, whether TCP or reliable+ordered UDP. To make things even worse, if the retransmitted packet is also lost, we'll be speaking about a delay of ~600 ms (due to so-called exponential backoff: the 1st retransmit is 200 ms, and the second one is 200*2=400 ms). If our channel has 1% packet loss (which is not bad by today's standards), and we have a game with 20 updates per second, such 600 ms delays will occur on average every 8 minutes. And as 600 ms is more than enough to get you killed in a fast-paced game, well, it is pretty bad for gameplay. These effects are exactly why gamedevs often prefer UDP over TCP.
However, when using UDP to reduce latencies, it is important to realize that merely "using UDP" is not sufficient to get a substantial latency improvement; it is all about HOW you're using UDP. In particular, while RUDP libraries usually avoid that "exponential backoff" and use shorter retransmit times, if they are used as a "reliable ordered" stream they still suffer from Head-of-Line Blocking (so in case of a double packet loss, instead of that 600 ms we'll get about 1.5*2*RTT; for a pretty good 80 ms RTT, that is a ~250 ms delay, which is an improvement, but it is still possible to do better). On the other hand, if using techniques discussed in http://gafferongames.com/networked-physics/snapshot-compression/ and/or http://ithare.com/udp-from-mog-perspective/#low-latency-compression , it IS possible to eliminate Head-of-Line Blocking entirely (so for a double packet loss in a game with 20 updates/second, the delay will be 100 ms regardless of RTT).
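A minimal sketch of the "no Head-of-Line Blocking" idea from those links: tag each UDP update with a sequence number, send full snapshots rather than deltas, and have the receiver keep only the newest one, so a lost packet delays nothing. The wire format and port here are illustrative, and sequence wraparound is ignored:

```python
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 7777))        # arbitrary example port

latest_seq = -1
latest_state = None

while True:
    packet, _ = sock.recvfrom(2048)
    seq = struct.unpack("!I", packet[:4])[0]
    if seq > latest_seq:            # newer snapshot: adopt it
        latest_seq = seq
        latest_state = packet[4:]   # a full snapshot, not a delta
    # Older or duplicated packets are silently dropped; nothing ever
    # waits for a retransmission, so one loss costs one update interval.
```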
And as a side note - if you happen to have access only to TCP but no UDP (such as in browser, or if your client is behind one of 6-9% of ugly firewalls blocking UDP) - there seems to be a way to implement UDP-over-TCP without incurring too much latencies, see here: http://ithare.com/almost-zero-additional-latency-udp-over-tcp/ (make sure to read comments too(!)).
Which protocol performs better (in terms of throughput), UDP or TCP, really depends on the network characteristics and the network traffic. Robert S. Barnes, for example, points out a scenario where TCP performs better (small-sized writes).
Now consider a scenario in which the network is congested and carries both TCP and UDP traffic. Senders using TCP will sense the congestion and cut down on their sending rates. UDP, however, has no congestion-avoidance or congestion-control mechanisms, and senders using UDP will continue to pump in data at the same rate. Gradually, the TCP senders reduce their sending rates to the bare minimum, and if the UDP senders have enough data to send over the network, they hog the majority of the available bandwidth. So in such a case, UDP senders will have greater throughput, as they get the bigger slice of the network bandwidth.
In fact, this is an active research topic: how to improve TCP throughput in the presence of UDP traffic. One way I know of for TCP applications to improve throughput is to open multiple TCP connections. That way, even though each TCP connection's throughput may be limited, the sum of the throughput of all TCP connections may be greater than the throughput of an application using UDP.
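A sketch of that last idea, spreading one transfer across several TCP connections (illustrative only; a real implementation also has to tag chunks so the receiver can reassemble them in order):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

ADDR = ("192.0.2.20", 7000)   # placeholder server
N_CONNS = 4

def send_chunk(chunk: bytes):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(ADDR)
        s.sendall(chunk)      # each connection gets its own congestion window

data = b"x" * 4_000_000
chunks = [data[i::N_CONNS] for i in range(N_CONNS)]  # crude striping

with ThreadPoolExecutor(N_CONNS) as pool:
    pool.map(send_chunk, chunks)
```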
Each TCP connection requires an initial handshake before data is transmitted, and the TCP header carries a lot of overhead for various signals and delivery detection. For a message exchange, UDP will probably suffice if a small chance of failure is acceptable. If receipt must be verified, TCP is your best option.
I will try to make things clear with an analogy. TCP and UDP are two cars being driven on the road, and the traffic signs and obstacles are errors. TCP cares about the traffic signs and respects everything around it, driving slowly because something might happen to the car. UDP just drives off at full speed, with no respect for street signs: a mad driver. UDP has no error recovery; if there's an obstacle, it just collides with it and continues. TCP, on the other hand, makes sure that all packets are sent and received perfectly, with no errors, so the car passes the obstacles without colliding. I hope this is a good example of why UDP is preferred in gaming, where speed matters, while TCP is preferred for downloads, where the downloaded files must not be corrupted.
UDP is slightly quicker in my experience, but not by much. The choice shouldn't be made on performance but on the message content and compression techniques.
If it's a protocol with message exchange, I'd suggest that the very slight performance hit you take with TCP is more than worth it. You're given a connection between two endpoints that will give you everything you need. Don't try to manufacture your own reliable two-way protocol on top of UDP unless you're really, really confident in what you're undertaking.
There has been some work done to allow the programmer to have the benefits of both worlds.
SCTP
It is an independent transport-layer protocol, but it can be used as a library providing an additional layer over UDP. The basic unit of communication is a message (mapped to one or more UDP packets). There is congestion control built in. The protocol has knobs and twiddles to switch on
in-order delivery of messages
automatic retransmission of lost messages, with user-defined parameters
if any of this is needed for your particular application.
One issue with this is that connection establishment is a complicated (and therefore slow) process.
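For what it's worth, on systems with kernel SCTP support (e.g., Linux with the sctp module loaded), a one-to-one style SCTP socket can be created directly from Python. The address below is a placeholder, and socket.IPPROTO_SCTP simply won't exist on platforms without SCTP:

```python
import socket

# One-to-one style SCTP socket; requires OS-level SCTP support.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("192.0.2.10", 9999))       # placeholder address
sock.sendall(b"message framed by SCTP")  # SCTP preserves message boundaries
sock.close()
```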
Other similar stuff
https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
One more similar thing, originally a proprietary experiment from Google:
https://en.wikipedia.org/wiki/QUIC
This one also tries to improve on TCP's three-way handshake and changes the congestion control to better deal with fast lines.
Update 2022: QUIC and HTTP/3
QUIC (mentioned above) has been standardized through RFCs and has even become the basis of HTTP/3 since the original answer was written. There are various libraries, such as lucas-clemente/quic-go, microsoft/msquic, google/quiche, or mozilla/neqo (web browsers need to implement this).
These libraries expose reliable TCP-like streams to the programmer on top of the UDP transport. RFC 9221 (An Unreliable Datagram Extension to QUIC) adds support for working with individual unreliable datagrams.
Keep in mind that TCP usually keeps multiple messages in flight on the wire. If you want to implement this in UDP, you'll have quite a lot of work ahead if you want to do it reliably. Your solution is going to be either less reliable, slower, or an incredible amount of work. There are valid applications of UDP, but if you're asking this question, yours probably is not one of them.
If you need to quickly blast a message across the net between two IPs that haven't even talked yet, then UDP is going to arrive at least 3 times faster, usually 5 times faster.
It is meaningless to talk about TCP or UDP without taking the network conditions into account.
If the network between the two points is of very high quality, UDP is absolutely faster than TCP; but in other cases, such as a GPRS network, TCP may be faster and more reliable than UDP.
The network setup is crucial for any measurement. It makes a huge difference whether you are communicating via sockets on your local machine or with the other end of the world.
Three things I want to add to the discussion:
You can find here a very good article about TCP vs. UDP in the context of game development.
Additionally, iperf (jperf enhances iperf with a GUI) is a very nice tool for answering your question yourself, by measuring.
I implemented a benchmark in Python (see this SO question). Averaged over 10^6 iterations, the difference for sending 8 bytes is about 1-2 microseconds in favor of UDP.
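For anyone who wants to reproduce that kind of measurement, here is a minimal loopback round-trip sketch (not the original benchmark; the port is arbitrary and echo servers for both protocols are assumed to be listening):

```python
import socket
import time

N = 100_000
ADDR = ("127.0.0.1", 9999)   # assumes an echo server is listening here

def bench(sock_type):
    s = socket.socket(socket.AF_INET, sock_type)
    s.connect(ADDR)          # connect() also works for UDP: it fixes the peer
    start = time.perf_counter()
    for _ in range(N):
        s.send(b"12345678")  # 8-byte payload, as in the answer above
        s.recv(64)           # wait for the echo before the next send
    elapsed = time.perf_counter() - start
    s.close()
    return elapsed / N * 1e6  # microseconds per round trip

print("TCP:", bench(socket.SOCK_STREAM))
print("UDP:", bench(socket.SOCK_DGRAM))
```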