I'm developing a client/server system using two GPRS modems (Siemens TC65). The application sends relatively small frames (128 bytes each) from the client to the server and vice versa. Initially I used a UDP connection (using UDPDatagramConnection), but then I decided to change to a TCP connection (using SocketConnection and ServerSocketConnection) and compared the delay between the two.
I did 40 tests, about 4 seconds apart, and measured the round-trip time using the exact same application (only the connection method changed) at the same time of day so that traffic conditions would be similar. The results surprised me:
I was expecting UDP to be faster, but TCP turned out to be on average twice as fast as UDP. I'm having trouble explaining this. I read threads like this one:
UDP vs TCP, how much faster is it?
and they helped, but I'm not sure Nagle's algorithm has anything to do with it, since I waited for each frame to arrive before sending the next one.
I would appreciate any tips that could explain these results.
Also, does running the connection over GPRS have any influence here?
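For reference, the measurement was the usual echo pattern: send a 128-byte frame, block until it comes back, time the round trip, repeat. The actual test ran on the TC65's Java ME stack (SocketConnection / UDPDatagramConnection); the sketch below only illustrates the same loop with plain BSD sockets in C++, so the address, port, and echo server are assumptions, not the real code:

```cpp
// Illustrative RTT measurement loop (TCP variant), not the actual TC65/Java ME code.
// Assumes an echo server at an illustrative address/port that sends each frame back.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                      // illustrative port
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);  // illustrative address
    if (connect(sock, (sockaddr*)&server, sizeof(server)) != 0) return 1;

    char frame[128];
    std::memset(frame, 'x', sizeof(frame));

    for (int i = 0; i < 40; ++i) {                      // 40 tests, as in the question
        auto start = std::chrono::steady_clock::now();
        send(sock, frame, sizeof(frame), 0);

        // Wait for the full 128-byte echo before stopping the clock,
        // i.e. each frame round-trips completely before the next is sent.
        size_t got = 0;
        while (got < sizeof(frame)) {
            ssize_t n = recv(sock, frame + got, sizeof(frame) - got, 0);
            if (n <= 0) return 1;
            got += static_cast<size_t>(n);
        }

        auto rtt = std::chrono::duration_cast<std::chrono::milliseconds>(
                       std::chrono::steady_clock::now() - start).count();
        std::printf("test %d: RTT = %lld ms\n", i + 1, static_cast<long long>(rtt));
        sleep(4);                                       // ~4 seconds between tests
    }
    close(sock);
    return 0;
}
```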
I've been programming a library for both TCP and UDP networking and thought about using packets. Currently I've implemented a packet class which can be used like the C++ standard library's stream classes (it has << and >> operators for writing and reading data). I plan on sending the packets like so:
bytes 1-8 - uint64_t as the size of the packet.
bytes 9-size - contents of the packet.
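For illustration, serializing one such frame could look roughly like this (a minimal sketch assuming the size field counts the whole frame and a fixed little-endian byte order; the function name is made up, not part of the library):

```cpp
// Minimal sketch of the framing described above: an 8-byte size prefix
// followed by the packet contents. frame() is a hypothetical helper.
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> frame(const std::vector<uint8_t>& contents) {
    // The size counts the whole frame: 8-byte header plus contents.
    uint64_t size = 8 + contents.size();

    std::vector<uint8_t> out(size);
    // Fixed byte order (little-endian here) so both ends agree on the header.
    for (int i = 0; i < 8; ++i)
        out[i] = static_cast<uint8_t>(size >> (8 * i));
    if (!contents.empty())
        std::memcpy(out.data() + 8, contents.data(), contents.size());
    return out;
}
```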
But there's a problem. What if a malicious client sends a size measured in terabytes and random garble as the filler? The server's memory is filled with the random garble and it will freeze/crash.
Is it a good idea to let the server decide the maximum allowed size of the received packet?
Or should I discard packets and implement transferring data as streams (where reading/writing would be entirely decided by the user of the library)?
(PS: I'm not a native English speaker, so forgive my possibly hideous usage of the language.)
Yes, set a maximum allowed size on the server side. Set it so that the server won't freeze/crash, but not smaller. Predictable behaviour should be the highest goal.
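As a sketch of what that looks like on the receiving side: validate the size field before allocating anything. The limit, the use of std::istream in place of your packet/stream class, and the byte order are illustrative assumptions, not a prescribed API:

```cpp
// Sketch of a size-capped frame read, assuming the 8-byte little-endian
// size prefix described in the question. MAX_FRAME_SIZE is a server-side
// policy value; pick whatever fits your application.
#include <cstdint>
#include <istream>
#include <stdexcept>
#include <vector>

constexpr uint64_t MAX_FRAME_SIZE = 64 * 1024;

std::vector<uint8_t> readFrame(std::istream& in) {
    uint8_t header[8];
    if (!in.read(reinterpret_cast<char*>(header), sizeof(header)))
        throw std::runtime_error("connection closed mid-header");

    // Reassemble the little-endian 64-bit size prefix.
    uint64_t size = 0;
    for (int i = 0; i < 8; ++i)
        size |= static_cast<uint64_t>(header[i]) << (8 * i);

    // Reject nonsense before allocating anything: too small to be a valid
    // frame, or larger than the server is willing to buffer.
    if (size < 8 || size > MAX_FRAME_SIZE)
        throw std::runtime_error("invalid frame size, dropping connection");

    std::vector<uint8_t> contents(size - 8);
    if (!contents.empty() &&
        !in.read(reinterpret_cast<char*>(contents.data()), contents.size()))
        throw std::runtime_error("connection closed mid-frame");
    return contents;
}
```

A malicious "terabytes" size field is then rejected after reading only the 8-byte header, so the server never tries to buffer it.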
(Approximately) how many more bits of data must be transferred over the network during an encrypted connection compared to an unencrypted connection?
IIUC, once the TLS handshake has completed, the number of bits transferred is equal to those transferred during an unencrypted connection. Is this accurate?
As a follow-up, is transferring a large file over HTTPS significantly slower than transferring that file over HTTP, given fast processors and the same (ideal) network conditions?
I've gotten this question a few times, so I decided to write up a small explanation of the overhead with some sample numbers based on common case. You can read it on my blog at http://netsekure.org/2010/03/tls-overhead/.
Summary from blog post:
The total overhead to establish a new TLS session comes to about 6.5k bytes on average.
The total overhead to resume an existing TLS session comes to about 330 bytes on average.
The total overhead of the encrypted data is about 40 bytes.
The short answer is: Your Mileage May Vary (YMMV) - it all depends on your traffic pattern. There are a number of factors to take into account:
The additional handshakes and certificates will add 4-6 KB to the TCP stream; this also means more Ethernet frames going across the wire.
Clients should download the certificate revocation list (some tools, like cURL, skip this step). The CRL may be cached by the browser; however, it usually doesn't have a long cache lifetime associated with it. Verisign sets theirs to expire after four minutes. In my testing, I saw Safari on Windows download the same 91 KB file twice.
TLS Session resumption can avoid the public key-exchange part of the handshake, as well as the certificate verification.
HTTP keep-alives keep the socket open, just as with plain HTTP, but the savings are greater when the socket is TLS.
SSL compression support is starting to show up server side, but AFAIK, most browsers aren't implementing this yet. Additionally, if you are already compressing at the http layer, not much will be gained here. Potentially large gains could be had if the client is sending large amounts of text to the server, which ordinarily isn't compressed at the http layer.
In 2020, TLS 1.2 and 1.3 are more typical, with AES-GCM being a stream-like (AEAD) cipher mode with lower overhead.
See https://tools.ietf.org/id/draft-mattsson-uta-tls-overhead-01.xml#rfc.section.3.
Per packet, the overhead for AES-GCM is 29 bytes. The TCP MSS may be as large as 1460 bytes (https://blog.apnic.net/2014/12/15/ip-mtu-and-tcp-mss-missmatch-an-evil-for-network-performance/). So for a large download (where the maximum MSS is used), the overhead ratio is 29/1431, which is about 2.03%.
(Handshake overhead is separate, since it is a one-off cost.)
An order of magnitude. See this. This is not too significant if the information being protected is worth securing. And remember that processor speeds only go up, so performance will keep getting better.
I'm writing a real-time app using a Flex/Flash client and my own server running on Linux.
I'd like to be able to send data from the Flex client in real time (in response to user actions). I've tried the following methods:
flash.net.NetConnection.call()
flash.net.sendToURL()
flash.net.Socket.write() followed by flash.net.Socket.flush()
In each case these calls always wait for the server to send an ACK before they can send data again. In other words, if you do:
var nc:NetConnection;
// Setup code left out
nc.call("foo", someData);
// Some more code left out
nc.call("foo", moreData);
The second nc.call() above won't send data to the server until the ACK for the first call has been received. I'd like to be able to send data immediately, without waiting for that ACK.
If the round-trip time to the server is long (e.g. 300ms) I can only send data to the server 3 times a second. Ideally I'd like to be able to send data up to 30 times per second, but this is only possible with an RTT of around 30ms at the moment.
It doesn't matter if the server itself gets the data 300ms late - I realise I can't beat the speed of light.
Is there any way to get the Flash Player to send data without waiting for an ACK? In other environments this is done by setting the TCP_NODELAY flag on the socket but it seems I don't have that level of control in Flash/Flex.
Update: I think I may have stumbled on a workaround. The Flash Player seems to ask the host browser for a separate TCP connection for each NetConnection object, subject to the browser's per-host connection limit (e.g. 2 for IE). That limit can be worked around by using sub-domains (haven't tried this yet), so hopefully it should be possible to get closer to real-time behaviour by using a pool of NetConnections.
Thanks.
Alternatively, you might have a look at something like Hemlock instead:
http://hemlock-kills.com/
The sockets have the Nagle algorithm turned on. Nagle holds back small writes while previously sent data is still unacknowledged so they can be coalesced with subsequent writes, which means fewer packets go out across the network (combined with delayed ACKs on the other end, this is where delays on the order of 200ms come from). For most modern applications and network engineers this is totally inappropriate: they want to set TCP_NODELAY and control exactly when transmission happens, and they are quite capable of bunching their bytes up in a buffer before writing them to the socket. The likely reason is that someone at Adobe once wanted to restrict this option to push people towards the RTMP protocol and their commercial/expensive LCDS product (I think you can set the client's no-delay option from the server side of an RTMP connection). Ahem, Adobe: please add TCP_NODELAY as soon as possible; the current behaviour only harms the Flash ecosystem and does not increase profits!
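For comparison, in environments that do expose the raw socket (plain C/C++ with BSD sockets here), disabling Nagle is a one-line socket option; this is exactly the control the Flash Player does not give you:

```cpp
// Disabling Nagle's algorithm on a connected TCP socket (BSD sockets / POSIX).
#include <netinet/in.h>
#include <netinet/tcp.h>   // TCP_NODELAY
#include <sys/socket.h>

bool disableNagle(int sock) {
    int flag = 1;
    // With TCP_NODELAY set, small writes go out immediately instead of
    // being held back for coalescing with later writes.
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == 0;
}
```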
I wanted to know why RTP uses UDP rather than TCP. The major VoIP tools use only UDP, as I found when digging through some open-source VoIP software.
As DJ pointed out, TCP is about getting a reliable data stream, and will slow down transmission, and re-transmit corrupted packets, in order to achieve that.
UDP does not care about reliability of the communication, and will not slow down or re-transmit data.
If your application needs a reliable data stream, for example, to retrieve a file from a webserver, you choose TCP.
If your application doesn't care about corrupted or lost packets, and you don't need to incur the additional overhead to provide the additional reliability, you can choose UDP instead.
VOIP is not significantly improved by reliable packet transmission, and in fact, in some cases things in TCP like retransmission and exponential backoff can actually hurt VOIP quality. Therefore, UDP was a better choice.
A lot of good answers have been given, but I'd like to point one thing out explicitly:
Basically, a complete data stream is a nice thing to have for real-time audio/video, but it's not strictly necessary (as others have pointed out):
The important fact is that some data that arrives too late is worthless. What good is the missing data for a frame that should have been displayed a second ago?
If you were to use TCP (which also guarantees the correct order of all data), then you wouldn't be able to get to the more up-to-date data until the old one is transmitted correctly. This is doubly bad: you have to wait for the re-transmission of the old data and the new data (which is now delayed) will probably be just as worthless.
So RTP does some kind of best-effort transmission in that it tries to transfer all available data in time, but doesn't attempt to re-transmit data that was lost/corrupted during the transfer (*). It just goes on with life and hopes that the more important current data gets there correctly.
(*) actually I don't know the specifics of RTP. Maybe it does try to re-transmit, but if it does then it won't be as aggressive as TCP is (which won't ever accept any lost data).
The others are correct; however, they don't really tell you the REAL reason why. Saua kind of hints at it, but here's a more complete answer.
Audio and video are real-time. If you are listening to the radio or watching TV and the signal is interrupted, it doesn't pick up where you left off; you're just "observing" the signal as it streams, and if you can't observe it at any given moment, you lose it.
The reason is simple: delay. VoIP tries very hard to minimize the delay from the moment someone speaks at one end until you hear it at your end, and for your response back. Otherwise, as errors occurred, the delay between when the person spoke and when the signal was received would keep growing until the conversation became useless.
Remember, data delayed by a retransmission still has to be played back, which pushes all the following data later, and then another error causes an even greater delay. The only workable solution is simply to drop any data that can't be played in real time.
A 1-second delay from a retransmission means it is now 1 second from the time I say something until you hear it. A second 1-second delay makes it 2 seconds from the time I say something until you hear it. This is cumulative because data is played back at the same rate at which it is spoken, and so on...
RTP could be connection oriented, but then it would have to drop (or skip) data to keep up with retransmission errors anyways, so why bother with the extra overhead?
Technically RTP packets can be interleaved over a TCP connection. There are lots of great answers given here. Two additional minor points:
RFC 4588 describes how retransmission can be used with RTP data. Most clients that receive RTP streams employ a buffer to account for network jitter, typically 1-5 seconds long, which means there is time for a retransmitted packet to arrive and still be useful.
RTP traffic can be interleaved over a TCP connection. In practice, the difference between interleaved RTP (i.e. over TCP) and RTP sent over UDP shows up on a lossy network with insufficient bandwidth for the user. The interleaved TCP stream ends up being jerky as the player continually waits in a buffering state for packets to arrive; depending on the player, it may jump ahead to catch up. With RTP over UDP you instead get artifacts (smearing/tearing) in the video.
UDP is often used for various types of realtime traffic that doesn't need strict ordering to be useful. This is because TCP enforces an ordering before passing data to an application (by default, you can get around this by setting the URG pointer, but no one seems to ever do this) and that can be highly undesirable in an environment where you'd rather get current realtime data than get old data reliably.
RTP is fairly insensitive to packet loss, so it doesn't require the reliability of TCP.
UDP has less header overhead (an 8-byte header versus at least 20 bytes for TCP), so a single packet can carry more payload and network bandwidth is used more efficiently.
UDP also provides faster data transmission.
So UDP is the obvious choice in cases such as this.
Besides all the other nice and correct answers, this article gives a good understanding of the differences between TCP and UDP.
The Real-time Transport Protocol (RTP) is a network protocol used to deliver streaming audio and video over the internet, and it underpins Voice over IP (VoIP).
RTP is generally used with a signaling protocol, such as SIP, which sets up connections across the network. RTP applications can use the Transmission Control Protocol (TCP), but most use the User Datagram Protocol (UDP) instead because UDP allows for faster delivery of data.
UDP is used wherever data is sent that does not need to arrive at the target completely intact, or where no stable connection is needed.
TCP is used if data needs to be received exactly, bit for bit, with no loss.
For video and audio streaming, a few bits lost along the way do not affect the result in any way worth mentioning: a few broken pixels in one frame of a stream are nothing the user notices. (On DVDs, the tolerated bit-loss rate is even higher.)
Just a remark:
Each packet sent in an RTP stream is given a sequence number one higher than its predecessor's. This allows the destination to determine whether any packets are missing.
If a packet is missing, the best action the destination can take is to approximate the missing value by interpolation.
Retransmission is not a practical option, since the retransmitted packet would arrive too late to be useful.
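To make that concrete, here is a hedged sketch of how a receiver can use those sequence numbers to spot gaps. It follows RTP's 16-bit sequence number size; the class and its behaviour on a gap (just reporting it) are illustrative, not any particular stack's API:

```cpp
// Sketch: detecting missing RTP packets from the 16-bit sequence number.
// On a gap, a real receiver would conceal the loss (e.g. interpolate audio)
// rather than request a retransmission, which would arrive too late.
#include <cstdint>
#include <cstdio>

class SequenceTracker {
public:
    void onPacket(uint16_t seq) {
        if (started_) {
            // Difference in modulo-2^16 arithmetic handles wrap-around
            // (65535 -> 0 is a step of 1, not -65535).
            uint16_t step = static_cast<uint16_t>(seq - lastSeq_);
            if (step == 0 || step > 0x8000) {
                std::printf("duplicate or reordered packet %u\n",
                            static_cast<unsigned>(seq));
            } else if (step > 1) {
                std::printf("%u packet(s) missing before %u - conceal/interpolate\n",
                            static_cast<unsigned>(step - 1),
                            static_cast<unsigned>(seq));
            }
        }
        started_ = true;
        lastSeq_ = seq;
    }

private:
    bool started_ = false;
    uint16_t lastSeq_ = 0;
};

int main() {
    SequenceTracker t;
    // Sequence wraps from 65535 to 0; packets 1 and 2 are lost.
    for (int seq : {65533, 65534, 65535, 0, 3})
        t.onPacket(static_cast<uint16_t>(seq));
    return 0;
}
```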
I'd like to quickly add to what Matt H said in response to Stobor's answer. Matt H mentioned that RTP-over-UDP packets can be checksummed so that corrupted packets can be detected (and discarded). This is actually an optional feature on most PBXs. In Asterisk, for example, you can enable/disable checksums on your RTP-over-UDP traffic in the rtp.conf configuration file with the following line:
rtpchecksums=yes ; or no if you prefer
Cheers!