I have a security scan finding directing me to disable TCP timestamps. I understand the reasons for the recommendation: the timestamp can be used to calculate server uptime, which can be helpful to an attacker (good explanation under heading "TCP Timestamps" at http://www.silby.com/eurobsdcon05/eurobsdcon_silbersack.pdf).
However, it's my understanding that TCP timestamps are intended to enhance TCP performance. Naturally, in the cost/benefit analysis, performance degradation is a big, possibly too big, cost. I'm having a hard time understanding how much, if any, performance cost there is likely to be. Any nodes in the hivemind care to assist?
The answer is most succinctly expressed in RFC 1323 - Round-Trip Measurement... The introduction to the RFC also provides some relevant historical context...
Introduction
The introduction of fiber optics is resulting in ever-higher
transmission speeds, and the fastest paths are moving out of the
domain for which TCP was originally engineered. This memo defines a
set of modest extensions to TCP to extend the domain of its
application to match this increasing network capability. It is based
upon and obsoletes RFC-1072 [Jacobson88b] and RFC-1185 [Jacobson90b].
(3) Round-Trip Measurement
TCP implements reliable data delivery by retransmitting
segments that are not acknowledged within some retransmission
timeout (RTO) interval. Accurate dynamic determination of an
appropriate RTO is essential to TCP performance. RTO is
determined by estimating the mean and variance of the
measured round-trip time (RTT), i.e., the time interval
between sending a segment and receiving an acknowledgment for
it [Jacobson88a].
Section 4 introduces a new TCP option, "Timestamps", and then
defines a mechanism using this option that allows nearly
every segment, including retransmissions, to be timed at
negligible computational cost. We use the mnemonic RTTM
(Round Trip Time Measurement) for this mechanism, to
distinguish it from other uses of the Timestamps option.
The specific performance penalty you incur by disabling timestamps will depend on your server operating system and how you do it (for examples, see this PSC doc on performance tuning). Some OSes require that you enable or disable all RFC 1323 options at once... others allow you to selectively enable RFC 1323 options.
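As a concrete, Linux-only illustration (a hedged sketch, not applicable to every OS): Linux exposes the RFC 1323 features as independent sysctls, so timestamps can be turned off while leaving window scaling alone.

from pathlib import Path

# Linux keeps the RFC 1323 options as separate sysctls; tcp_timestamps and
# tcp_window_scaling can be toggled independently. Writing requires root.
ts = Path("/proc/sys/net/ipv4/tcp_timestamps")
print("tcp_timestamps =", ts.read_text().strip())   # "1" means enabled
# ts.write_text("0")   # disable timestamps; window scaling is unaffected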
If your data transfer is somehow throttled by your virtual server (maybe you only bought the cheap vhost plan), then perhaps you couldn't possibly use higher performance anyway... perhaps it's worth turning them off to try. If you do, be sure to benchmark your before and after performance from several different locations, if possible.
The TCP timestamp, when enabled, will allow an attacker to guess the uptime of a target system (for example, with nmap -v -O). Knowing how long a system has been up makes it possible to determine whether security patches that require a reboot have been applied or not.
To Daniel and anyone else wanting clarification:
http://www.forensicswiki.org/wiki/TCP_timestamps
"TCP timestamps are used to provide protection against wrapped sequence numbers. It is possible to calculate system uptime (and boot time) by analyzing TCP timestamps (see below).
These calculated uptimes (and boot times) can help in detecting hidden network-enabled operating systems (see TrueCrypt), linking spoofed IP and MAC addresses together, linking IP addresses with Ad-Hoc wireless APs, etc."
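For anyone wondering how that calculation works, here is a rough sketch of the idea with made-up numbers (my own illustration): observe the peer's TSval at two known local times, estimate its tick rate, and extrapolate back to when the counter started. Note that newer stacks randomize the timestamp offset per connection, which defeats this.

# Hypothetical numbers: estimate the peer's boot time from two observations of
# its TCP timestamp value (TSval). Assumes the counter started near zero at
# boot, which many (especially older) stacks do.
def estimate_boot_time(t1, tsval1, t2, tsval2):
    """t1/t2: local observation times (seconds); tsval1/tsval2: peer TSvals."""
    ticks_per_second = (tsval2 - tsval1) / (t2 - t1)   # commonly 100, 250, or 1000
    uptime_seconds = tsval2 / ticks_per_second
    return t2 - uptime_seconds                         # approximate boot time

# Two captures taken 60 s apart, during which the counter advanced 60,000 ticks:
boot = estimate_boot_time(1_700_000_000, 123_456_000, 1_700_000_060, 123_516_000)
print(boot)   # the implied uptime here is ~1.4 days before the second capture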
It's flagged as a low-risk vulnerability in PCI compliance scans.
I got asked a similar question on this topic, today. My take is as follows:
An unpatched system is the vulnerability, not whether attacker(s) can easily find it. The solution, therefore, is to patch your systems regularly. Disabling TCP timestamps won't do anything to make your systems less vulnerable - it's simply security through obscurity, which is no security at all.
Turning the question on its head, consider scripting a solution that uses TCP timestamps to identify hosts on your network that have the longest uptimes. These will typically be your most vulnerable systems. Use this information to prioritise patching, to ensure that your network remains protected.
Don't forget that information like uptime can also be useful to your system administrators. :)
I wouldn't do it.
Without timestamps, the TCP Protection Against Wrapped Sequence numbers (PAWS) mechanism won't work. PAWS uses the timestamp option to determine that a sudden, seemingly random sequence number change is a wrap of the 32-bit sequence space rather than an insane packet from another flow.
If you don't have this, your TCP sessions will burp every once in a while, depending on how fast they use up the sequence number space.
From RFC 1185:
Network     Bandwidth    Bytes/sec    Time to wrap sequence space
ARPANET     56 kbps      7 KB/s       3*10**5 s (~3.6 days)
DS1         1.5 Mbps     190 KB/s     10**4 s (~3 hours)
Ethernet    10 Mbps      1.25 MB/s    1700 s (~30 mins)
DS3         45 Mbps      5.6 MB/s     380 s
FDDI        100 Mbps     12.5 MB/s    170 s
Gigabit     1 Gbps       125 MB/s     17 s
Take 45Mbps (well within 802.11n speeds), then we would have a burp every ~380 seconds. Not horrible, but annoying.
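For reference, here is the arithmetic behind those numbers (my own sketch): the RFC's wrap times work out to 2^31 bytes divided by the byte rate, because sequence comparison is modular and ambiguity sets in after half the 32-bit space.

# Reproduce the wrap-time column above: 2**31 bytes (half the 32-bit sequence
# space, since sequence comparison is done modulo 2**32) divided by throughput.
HALF_SEQ_SPACE = 2 ** 31   # bytes

def wrap_time_seconds(bits_per_second):
    return HALF_SEQ_SPACE / (bits_per_second / 8)

for name, bps in [("DS3 (45 Mbps)", 45e6), ("Gigabit (1 Gbps)", 1e9)]:
    print(f"{name}: ~{wrap_time_seconds(bps):.0f} s")   # ~382 s and ~17 s, matching the table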
Why would the security people want you to disable timestamps? What possible threat could a timestamp represent? I bet the NTP crew would be unhappy with this ;^)
Hmmmm, I read something about using TCP timestamps to guess the clock frequency of the sender? Maybe this is what they are scared of? I don't know ;^)
Timestamps are less important to RTT estimation than you would think. I happen to like them because they are useful in determining RTT at the receiver or a middlebox. However, according to the canon of TCP, only the sender needs such forbidden knowledge ;^)
The sender does not need timestamps to calculate the RTT. t1 = timestamp when I sent the packet, t2 = timestamp when I received the ACK. RTT = t2 - t1. Do a little smoothing on that and you are good to go!
...Daniel
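A minimal sketch of the "little smoothing" Daniel mentions - this is just the standard SRTT/RTTVAR estimator from RFC 6298, with made-up sample values:

ALPHA, BETA = 1 / 8, 1 / 4   # gains from RFC 6298
MIN_RTO = 1.0                # RFC 6298 lower bound, in seconds

class RttEstimator:
    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def update(self, sample):
        """Feed one RTT sample (t2 - t1, in seconds); return the new RTO."""
        if self.srtt is None:                 # first measurement
            self.srtt = sample
            self.rttvar = sample / 2
        else:                                 # subsequent measurements
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - sample)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * sample
        return max(MIN_RTO, self.srtt + 4 * self.rttvar)

est = RttEstimator()
for rtt in (0.080, 0.085, 0.120, 0.082):      # hypothetical samples
    print(f"sample={rtt:.3f}s  ->  RTO={est.update(rtt):.3f}s")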
Related
Packets traveling over the wire are protected by checksums at several layers: Ethernet and IPv4 have checksums covering their headers, and TCP's checksum even covers the entire segment.
I know it is not impossible for a packet that is corrupted, from the standpoint of the application layer, to slip through without being discarded by Ethernet/IP/TCP, because there is a chance that the checksums still come out correct; it's just that the probability is low.
I am designing a custom binary protocol for an IM application. My question is do I need to add a checksum to ensure the integrity of my application data? Is a checksum really needed in practice?
There's actual research on this subject. It's old, but very relevant to the question at hand.
The paper, from 2000, is called "When the CRC and TCP checksum disagree" by Jonathan Stone and Craig Partridge. It investigates packet and frame errors, and looks at how often the TCP checksum is wrong while the Ethernet CRC is fine. You can find the PDF here. Here are the important bits.
From the abstract:
Traces of Internet packets from the past two years show that between 1
packet in 1,100 and 1 packet in 32,000 fails the TCP checksum, even on
links where link-level CRCs should catch all but 1 in 4 billion
errors.
From the conclusion (with some of my highlighting):
In practice, the checksum is being asked to detect an error every
few thousand packets. After eliminating those errors that the checksum
always catches, the data suggests that, on average, between one packet
in 10 billion and one packet in a few millions will have an error that
goes undetected. The exact range depends on the type of data
transferred and the path being traversed. While these odds seem large,
they do not encourage complacency. In every trace, one or two 'bad
apple' hosts or paths are responsible for a huge proportion of the
errors. For applications which stumble across one of the `bad-apple'
hosts, the expected time until a corrupted data is accepted could be
as low as a few minutes. When compared to undetected error rates for
local I/O (e.g., disk drives), these rates are disturbing. Our
conclusion is that vital applications should strongly consider
augmenting the TCP checksum with an application sum.
I don't know of any newer research into that question (enlighten me if you know otherwise!), so the Internet could have become more reliable since then, and the numbers in the paper might be irrelevant.
However, and this is important, 17 years have passed, and the amount of Internet traffic simply exploded since that paper was written. At 1Gbps, which is not an uncommon connection speed nowadays, you're sending about 81K full TCP segments, with 1460 bytes of data, per second (or a lot more if the packets are smaller). That's a million big packets every 12.5 seconds, a billion in about 3.5 hours (or again, a lot more if the packets are small).
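To combine those two numbers - the paper's undetected-error range and today's line rates - a quick back-of-the-envelope sketch (my own arithmetic, reusing the quoted per-packet rates):

# At full 1 Gbps with 1460-byte payloads, how often would an undetected error
# slip through at the per-packet rates quoted from Stone & Partridge?
SEGMENT_PAYLOAD = 1460        # bytes of data per full-size TCP segment
LINK_BPS = 1e9

segments_per_second = (LINK_BPS / 8) / SEGMENT_PAYLOAD    # ~85,600 per second

for label, rate in [("1 in a few million (say 1 in 3M)", 1 / 3e6),
                    ("1 in 10 billion", 1e-10)]:
    seconds = 1 / (segments_per_second * rate)
    print(f"{label}: one undetected error every ~{seconds:,.0f} seconds")
# roughly every half minute at the bad end, every ~32 hours at the good end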
So to answer your question - that depends.
For transferring large files or other data, I'd definitely add additional checks if the data itself isn't protected in any way. For messaging, which pushes very little data into the network, you'll probably be fine with TCP's checksum, with maybe some sanity checks on the input you're getting to make sure that it's in the correct format, and various parameters and fields make sense.
I would not bother with a checksum because of packets getting corrupted in the network.
However, since you are working on a protocol that will presumably be used on the open Internet, you will need to prepare for the rare case of an unrelated application sending UDP packets to, or making TCP connections to, your listening ports, as well as for port scans and hackers / script kiddies knocking on your gates.
So you should make your protocol such that it is easy to discard this kind of traffic. Using a checksum in every transmission would imho be one sensible way of doing that.
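For example, a hedged sketch of that idea (hypothetical framing, not any existing IM protocol): prefix each message with a magic number and a CRC32 so stray traffic hitting the port is cheap to discard.

import struct
import zlib
from typing import Optional

MAGIC = 0x494D5031   # arbitrary tag for this hypothetical protocol ("IMP1")

def encode(payload: bytes) -> bytes:
    return struct.pack("!II", MAGIC, zlib.crc32(payload)) + payload

def decode(frame: bytes) -> Optional[bytes]:
    """Return the payload, or None if the frame isn't ours or is corrupted."""
    if len(frame) < 8:
        return None
    magic, crc = struct.unpack("!II", frame[:8])
    payload = frame[8:]
    if magic != MAGIC or zlib.crc32(payload) != crc:
        return None
    return payload

assert decode(encode(b"hello")) == b"hello"
assert decode(b"random scanner noise") is None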
I'm in a situation where, logically, UDP would be the perfect choice (I need to be able to broadcast to hundreds of clients). This is in a very small and controlled environment (the whole network spans a few square meters, all devices are local, and the network is heavily over-provisioned with gigabit Ethernet and switches everywhere).
Can I simply "ignore" all of the added reliability that needs to be layered on UDP (checking that messages arrived, resending them, etc.), since that mostly applies where packet loss is expected (the Internet), or is it really advisable to treat UDP as "may not arrive" even in such conditions?
I'm not asking for theorycrafting; I'm really wondering whether anyone can tell me from experience if I'm actually likely to have UDP packets go missing in such an environment, or whether it's going to be a really rare event, since sending things and assuming they worked is obviously much simpler than handling all possible errors.
This is a matter of stochastics. Even in small local networks, packet losses will occur. Maybe they have an absolute probability of 1e-10 in a normal usage scenario. Maybe more, maybe less.
So, now comes real-world experience: network controllers and operating systems have a tough life when used in high-throughput scenarios, and it's even worse for switches. So, if you're near the capacity of your network infrastructure or your computational power, losses become far more likely.
So, in the end it's just a question on how high up in the networking stack you want to deal with errors: If you don't want to risk your application failing in 1 in 1e6 cases, you will need to add some flow/data integrity control; which really isn't that hard. If you can live with the fact that the average program has to be restarted every once in a while, well, that's error correction on user level...
Generally, I'd encourage you to not take risks. CPU power is just too cheap, and bandwidth, too, in most cases. Try ZeroMQ, which has broadcast communication models, and will ensure data integrity (and resend stuff if necessary), is available for practically all relevant languages, and runs on all relevant OSes, and is (at least from my perspective) easier to use than raw UDP sockets.
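A minimal pyzmq sketch of the broadcast pattern being suggested (port, hostname, and message are arbitrary; note that plain PUB/SUB is fire-and-forget, so any retransmission would have to come from one of ZeroMQ's higher-level reliability patterns layered on top):

import zmq

def publisher():
    """Run on the broadcasting host."""
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")                         # arbitrary port
    while True:
        pub.send_string("status all-clear")          # fan-out to all subscribers

def subscriber(host="broadcast-host"):               # hypothetical hostname
    """Run on each of the hundreds of clients."""
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "")         # subscribe to everything
    while True:
        print(sub.recv_string())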
Suppose there is a network which gives a lot of timeout errors when packets are transmitted over it. Timeouts can happen either because the network itself is inherently lossy (say, poor hardware) or because the network is highly congested, so that network devices are dropping packets along the way, leading to timeouts. What additional statistics about the traffic being transmitted (like missing-packet errors, etc.) might help us find out whether the timeouts are happening due to poor hardware or due to too much network load?
Please note that we have access only to one node in the network (from which we are transmitting packets) and as such, we cannot get to know the load being put by other nodes on the network. Similarly, we don't really have any information about the hardware being used in the network. Statistics is all that we have.
A network node only has hardware information about its local collision domain, which on a standard network will be the cable that links the host to the switch.
All the TCP stack will know about lost packets is that it is not receiving acknowledgements and so needs to resend; there is no mechanism for devices (e.g., switches and routers) between a source and destination to tell the source that there is a problem.
Without access to any other nodes, the only way to ascertain whether your problem is load-based is to run a test that sends consistent traffic over the network for a long period. If the packet retry count per second/minute/hour stays the same, that suggests a hardware issue; if the losses only occur during peak traffic periods, the issue could be load-related. Of course, misconfigured hardware may only show problems during high-traffic periods, which brings things back to the main problem: you need access to network stats from beyond your single node.
In practice, nearly all loss on terrestrial network paths is due to either congestion or firewalls. Loss due to bit-errors is extremely rare. Even on wireless networks, forward error correction handles most bit/media/transmission errors. Congestion can be caused by a lot of different factors: any given network path will involve dozens of devices and if any one of them becomes overloaded for even a moment, packets will be dropped.
The only way to tell the difference between congestion induced packet loss and media errors is that media errors will occur independent of load. In other words, the loss rate will be the same whether you are sending a lot of data or only a little data.
To test that, you will need some control, or at least knowledge, of the load on the path. Since you don't have control and the only knowledge you have is from source-node observation, the best you can do is to take test samples (using ping is the easiest) around the clock and throughout the week, recording loss rates and latencies. These should give you an idea of when the path is relatively idle. If loss rates remain significant even when the path is (probably) idle, then there might be a media-loss issue. But again, that is extremely rare.
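A rough sketch of that around-the-clock sampling (my own construction; target host, interval, and output file are arbitrary): ping the far end periodically and log loss and average latency for later correlation with time of day.

import csv
import re
import subprocess
import time
from datetime import datetime, timezone

TARGET = "far-end.example.net"   # hypothetical far end of the path under test

def sample_once():
    """One 10-probe ping run; returns (loss_percent, avg_rtt_ms) or (None, None)."""
    out = subprocess.run(["ping", "-c", "10", "-q", TARGET],
                         capture_output=True, text=True).stdout
    loss = re.search(r"([\d.]+)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)      # iputils min/avg/max/mdev line
    return (float(loss.group(1)) if loss else None,
            float(rtt.group(1)) if rtt else None)

with open("path_stats.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:                                      # run for days; analyse later
        loss, rtt = sample_once()
        writer.writerow([datetime.now(timezone.utc).isoformat(), loss, rtt])
        f.flush()
        time.sleep(60)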
For background, I have written a few articles on the subject:
Loss, Latency, and Speed, discussing what statistics you can observe about a path and what they mean.
Common Network Performance Problems, discussing the most common components in a network path and how they affect performance (congestion).
Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them.
Of course, this map may become stale over time as the network topology changes - but lets ignore those complexities for now and assume the network is relatively static.
Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host.
What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency?
In a related question - is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.
Added: After considering the comment below, the second portion of the question is probably better off on serverfault.
To the best of my knowledge, asymmetric latencies -- especially "last mile" asymmetries -- cannot be automatically determined, because any network time synchronization protocol is equally affected by the same asymmetry, so you don't have a point of reference from which to evaluate the asymmetry.
If each endpoint had, for example, its own GPS clock, then you'd have a reference point to work from.
In Fast Measurement of LogP Parameters for Message Passing Platforms, the authors note that latency measurement requires clock synchronization external to the system being measured. (Boldface emphasis mine, italics in original text.)
Asymmetric latency can only be measured by sending a message with a timestamp ts, and letting the receiver derive the latency from tr - ts, where tr is the receive time. This requires clock synchronization between sender and receiver. Without external clock synchronization (like using GPS receivers or specialized software like the network time protocol, NTP), clocks can only be synchronized up to a granularity of the roundtrip time between two hosts [10], which is useless for measuring network latency.
No network-based algorithm (such as NTP) will eliminate last-mile link issues, though, since every input to the algorithm will itself be uniformly subject to the performance characteristics of the last-mile link and is therefore not "external" in the sense given above. (I'm confident it's possible to construct a proof, but I don't have time to construct one right now.)
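To make the tr - ts idea from the quote concrete, a minimal sketch (my own, with an arbitrary port; the two functions run on different hosts). It is valid only under the assumption of externally synchronized clocks; without that, the result silently absorbs the clock offset.

import socket
import struct
import time

PORT = 9999   # arbitrary

def send_probe(dest_host):
    ts = time.time()                                  # sender wall-clock (ts)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(struct.pack("!d", ts), (dest_host, PORT))

def receive_probe():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    data, _ = sock.recvfrom(64)
    tr = time.time()                                  # receiver wall-clock (tr)
    (ts,) = struct.unpack("!d", data)
    return tr - ts     # one-way delay, meaningful only with synchronized clocks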
There is a project called One-Way Ping (OWAMP) specifically to solve this issue. Activity can be seen in the LKML for adding high resolution timestamps to incoming packets (SO_TIMESTAMP, SO_TIMESTAMPNS, etc) to assist in the calculation of this statistic.
http://www.internet2.edu/performance/owamp/
There's even a Java version:
http://www.av.it.pt/jowamp/
Note that packet timestamping really needs hardware support, and many present-generation NICs only offer millisecond resolution, which may be out of sync with the host clock. There are MSDN articles in the DDK about synchronizing host and NIC clocks that demonstrate the potential problems. Nanosecond timestamps from the TSC are problematic due to differences between cores and may require the Nehalem architecture to work properly at the required resolutions.
http://msdn.microsoft.com/en-us/library/ff552492(v=VS.85).aspx
You can measure asymmetric latency on a link by sending different-sized packets to a port that returns a fixed-size packet - for example, sending UDP packets to a port that replies with an ICMP error message. The ICMP error message is always the same size, but you can adjust the size of the UDP packet you're sending.
see http://www.cs.columbia.edu/techreports/cucs-009-99.pdf
In absence of a synchronized clock, the asymmetry cannot be measured as proven in the 2011 paper "Fundamental limits on synchronizing clocks over networks".
https://www.researchgate.net/publication/224183858_Fundamental_Limits_on_Synchronizing_Clocks_Over_Networks
The sping tool is a new development in this space, which uses clock synchronization against nearby NTP servers, or an even more accurate source in the form of a GNSS box, to estimate asymmetric latencies.
The approach is covered in more detail in this blog post.
For general protocol message exchange that can tolerate some packet loss, how much more efficient is UDP than TCP?
People say that the major thing TCP gives you is reliability. But that's not really true. The most important thing TCP gives you is congestion control: you can run 100 TCP connections across a DSL link all going at max speed, and all 100 connections will be productive, because they all "sense" the available bandwidth. Try that with 100 different UDP applications, all pushing packets as fast as they can go, and see how well things work out for you.
On a larger scale, this TCP behavior is what keeps the Internet from locking up into "congestion collapse".
Things that tend to push applications towards UDP:
Group delivery semantics: it's possible to do reliable delivery to a group of people much more efficiently than TCP's point-to-point acknowledgement.
Out-of-order delivery: in lots of applications, as long as you get all the data, you don't care what order it arrives in; you can reduce app-level latency by accepting an out-of-order block.
Unfriendliness: at a LAN party, you may not care if your web browser functions nicely as long as you're blitting updates to the network as fast as you possibly can.
But even if you care about performance, you probably don't want to go with UDP:
You're on the hook for reliability now, and a lot of the things you might do to implement reliability can end up being slower than what TCP already does.
Now you're network-unfriendly, which can cause problems in shared environments.
Most importantly, firewalls will block you.
You can potentially overcome some TCP performance and latency issues by "trunking" multiple TCP connections together; iSCSI does this to get around congestion control on local area networks, but you can also do it to create a low-latency "urgent" message channel (TCP's "URGENT" behavior is totally broken).
In some applications TCP is faster (better throughput) than UDP.
This is the case when doing lots of small writes relative to the MTU size. For example, I read an experiment in which a stream of 300 byte packets was being sent over Ethernet (1500 byte MTU) and TCP was 50% faster than UDP.
The reason is that TCP will try to buffer the data and fill a full network segment, thus making more efficient use of the available bandwidth.
UDP on the other hand puts the packet on the wire immediately thus congesting the network with lots of small packets.
You probably shouldn't use UDP unless you have a very specific reason for doing so, especially since you can give TCP the same sort of latency as UDP by disabling the Nagle algorithm (for example, if you're transmitting real-time sensor data and you're not worried about congesting the network with lots of small packets).
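For reference, disabling Nagle is a one-line socket option; a minimal sketch (hypothetical host and port):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle
sock.connect(("sensor-gateway.example", 5000))               # hypothetical endpoint
sock.sendall(b"\x01\x02\x03\x04")     # small update goes out immediately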
UDP is faster than TCP, and the simple reason is its lack of an acknowledgement packet (ACK), which permits a continuous packet stream, whereas TCP acknowledges sets of packets, calculated from the TCP window size and round-trip time (RTT).
For more information, I recommend the simple, but very comprehensible Skullbox explanation (TCP vs. UDP)
with loss tolerant
Do you mean "with loss tolerance" ?
Basically, UDP is not "loss tolerant". You can send 100 packets to someone, and they might only get 95 of those packets, and some might be in the wrong order.
For things like video streaming and multiplayer gaming, where it is better to miss a packet than to delay all the other packets behind it, UDP is the obvious choice.
For most other things though, a missing or 'rearranged' packet is critical. You'd have to write some extra code to run on top of UDP to retry if things got missed, and enforce correct order. This would add a small bit of overhead in certain places.
Thankfully, some very very smart people have done this, and they called it TCP.
Think of it this way: If a packet goes missing, would you rather just get the next packet as quickly as possible and continue (use UDP), or do you actually need that missing data (use TCP). The overhead won't matter unless you're in a really edge-case scenario.
When speaking of "what is faster" - there are at least two very different aspects: throughput and latency.
If speaking about throughput - TCP's flow control (as mentioned in other answers) is extremely important, and doing anything comparable over UDP, while certainly possible, would be a Big Headache(tm). As a result, using UDP when you need throughput rarely qualifies as a good idea (unless you want to get an unfair advantage over TCP).
However, if speaking about latencies - the whole thing is completely different. While in the absence of packet loss TCP and UDP behave extremely similar (any differences, if any, being marginal) - after the packet is lost, the whole pattern changes drastically.
After any packet loss, TCP will wait for retransmit for at least 200ms (1sec per paragraph 2.4 of RFC6298, but practical modern implementations tend to reduce it to 200ms). Moreover, with TCP, even those packets which did reach destination host - will not be delivered to your app until the missing packet is received (i.e., the whole communication is delayed by ~200ms) - BTW, this effect, known as Head-of-Line Blocking, is inherent to all reliable ordered streams, whether TCP or reliable+ordered UDP. To make things even worse - if the retransmitted packet is also lost, then we'll be speaking about delay of ~600ms (due to so-called exponential backoff, 1st retransmit is 200ms, and second one is 200*2=400ms). If our channel has 1% packet loss (which is not bad by today's standards), and we have a game with 20 updates per second - such 600ms delays will occur on average every 8 minutes. And as 600ms is more than enough to get you killed in a fast-paced game - well, it is pretty bad for gameplay. These effects are exactly why gamedevs often prefer UDP over TCP.
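A quick check of the "every ~8 minutes" arithmetic above (my own sketch): a ~600 ms stall requires both the original packet and its retransmission to be lost.

loss_rate = 0.01              # 1% packet loss
updates_per_second = 20       # game tick rate from the example above

p_double_loss = loss_rate ** 2                         # both copies lost: 1e-4
stalls_per_second = updates_per_second * p_double_loss
print(1 / stalls_per_second / 60)                      # ~8.3 minutes between stalls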
However, when using UDP to reduce latencies - it is important to realize that merely "using UDP" is not sufficient to get substantial latency improvement, it is all about HOW you're using UDP. In particular, while RUDP libraries usually avoid that "exponential backoff" and use shorter retransmit times - if they are used as a "reliable ordered" stream, they still have to suffer from Head-of-Line Blocking (so in case of a double packet loss, instead of that 600ms we'll get about 1.5*2*RTT - or for a pretty good 80ms RTT, it is a ~250ms delay, which is an improvement, but it is still possible to do better). On the other hand, if using techniques discussed in http://gafferongames.com/networked-physics/snapshot-compression/ and/or http://ithare.com/udp-from-mog-perspective/#low-latency-compression , it IS possible to eliminate Head-of-Line blocking entirely (so for a double-packet loss for a game with 20 updates/second, the delay will be 100ms regardless of RTT).
And as a side note - if you happen to have access only to TCP but no UDP (such as in browser, or if your client is behind one of 6-9% of ugly firewalls blocking UDP) - there seems to be a way to implement UDP-over-TCP without incurring too much latencies, see here: http://ithare.com/almost-zero-additional-latency-udp-over-tcp/ (make sure to read comments too(!)).
Which protocol performs better (in terms of throughput) - UDP or TCP - really depends on the network characteristics and the network traffic. Robert S. Barnes, for example, points out a scenario where TCP performs better (small-sized writes). Now, consider a scenario in which the network is congested and has both TCP and UDP traffic. Senders in the network that are using TCP, will sense the 'congestion' and cut down on their sending rates. However, UDP doesn't have any congestion avoidance or congestion control mechanisms, and senders using UDP would continue to pump in data at the same rate. Gradually, TCP senders would reduce their sending rates to bare minimum and if UDP senders have enough data to be sent over the network, they would hog up the majority of bandwidth available. So, in such a case, UDP senders will have greater throughput, as they get the bigger pie of the network bandwidth. In fact, this is an active research topic - How to improve TCP throughput in presence of UDP traffic. One way, that I know of, using which TCP applications can improve throughput is by opening multiple TCP connections. That way, even though, each TCP connection's throughput might be limited, the sum total of the throughput of all TCP connections may be greater than the throughput for an application using UDP.
Each TCP connection requires an initial handshake before data is transmitted. Also, the TCP header contains a lot of overhead intended for different signals and message delivery detection. For a message exchange, UDP will probably suffice if a small chance of failure is acceptable. If receipt must be verified, TCP is your best option.
Let me make things clear with an analogy. TCP and UDP are two cars being driven on the road; suppose that traffic signs and obstacles are errors. TCP cares about the traffic signs and respects everything around it, driving slowly because something might happen to the car. UDP just drives off at full speed, with no respect for street signs - a mad driver. UDP doesn't have error recovery: if there's an obstacle, it will just collide with it and then continue. TCP, on the other hand, makes sure that all packets are sent and received perfectly, with no errors, so the car passes the obstacles without colliding. I hope this is a good example for understanding why UDP is preferred in gaming: gaming needs speed. TCP is preferred for downloads, where otherwise the downloaded files may be corrupted.
UDP is slightly quicker in my experience, but not by much. The choice shouldn't be made on performance but on the message content and compression techniques.
If it's a protocol with message exchange, I'd suggest that the very slight performance hit you take with TCP is more than worth it. You're given a connection between two end points that will give you everything you need. Don't try and manufacture your own reliable two-way protocol on top of UDP unless you're really, really confident in what you're undertaking.
There has been some work done to allow the programmer to have the benefits of both worlds.
SCTP
It is an independent transport-layer protocol, but it can also be used as a library providing an additional layer over UDP. The basic unit of communication is a message (mapped to one or more UDP packets). There is congestion control built in. The protocol has knobs and twiddles to switch on
in-order delivery of messages
automatic retransmission of lost messages, with user defined parameters
if any of this is needed for your particular application.
One issue with this is that connection establishment is a complicated (and therefore slow) process.
Other similar stuff
https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
One more similar proprietary experimental thing
https://en.wikipedia.org/wiki/QUIC
This also tries to improve on TCP's three-way handshake and changes the congestion control to better deal with fast lines.
Update 2022: QUIC and HTTP/3
QUIC (mentioned above) has been standardized through RFCs and has even become the basis of HTTP/3 since the original answer was written. There are various libraries such as lucas-clemente/quic-go, microsoft/msquic, google/quiche, or mozilla/neqo (web browsers need to implement this).
These libraries expose to the programmer reliable TCP-like streams on top of the UDP transport. RFC 9221 (An Unreliable Datagram Extension to QUIC) adds support for individual unreliable datagrams.
Keep in mind that TCP usually keeps multiple messages on the wire at once. If you want to implement this over UDP, you'll have quite a lot of work if you want to do it reliably. Your solution is either going to be less reliable, slower, or an incredible amount of work. There are valid applications of UDP, but if you're asking this question, yours probably is not one of them.
If you need to quickly blast a message across the net between two IPs that haven't even talked yet, then UDP is going to arrive at least 3 times faster, usually 5 times faster.
It is meaningless to talk about TCP or UDP without taking the network condition into account.
If the network between the two points has very high quality, UDP is absolutely faster than TCP; but in other cases, such as a GPRS network, TCP may be faster and more reliable than UDP.
The network setup is crucial for any measurements. It makes a huge difference, if you are communicating via sockets on your local machine or with the other end of the world.
Three things I want to add to the discussion:
You can find here a very good article about TCP vs. UDP in the context of game development.
Additionally, iperf (jperf enhances iperf with a GUI) is a very nice tool for answering your question yourself by measuring.
I implemented a benchmark in Python (see this SO question). Averaged over 10^6 iterations, the difference for sending 8 bytes is about 1-2 microseconds in favor of UDP.
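For anyone who wants to reproduce that kind of measurement, a rough loopback sketch (my own construction, not the author's script), timing round-trips of an 8-byte payload over UDP:

import socket
import threading
import time

HOST, PORT, N = "127.0.0.1", 9000, 10_000

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    for _ in range(N):
        data, addr = srv.recvfrom(64)
        srv.sendto(data, addr)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.1)                                  # give the server time to bind

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.perf_counter()
for _ in range(N):
    cli.sendto(b"8bytes!!", (HOST, PORT))        # 8-byte payload
    cli.recvfrom(64)
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e6:.1f} microseconds per round-trip (UDP, loopback)")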