In which situations does a TCP connection need to wait for an ACK?

As far as I know, the only reason to wait for an ACK is that the transmit window has been exhausted, or maybe slow start. But then this fragment of a Wireshark dump over a pre-existing TCP socket doesn't make sense to me:
Here, between packets 38 and 40, the server (45.55.162.253) waits a full RTT before it continues sending. I changed the RTT through netem to be sure that the delay is always equal to the RTT, and as you can see, there is no application data flowing from client to server that the server might need in order "to continue working". But there is a very conspicuous ACK packet from the client (packet 39) without any payload. The advertised window is much larger than [SEQ/ACK analysis]/[Bytes in flight], which is 1230.
My question is: is there something in TCP that triggers this wait for an ACK by the server between packets 38 and 40?

TCP limits its transmission rate according to two separate mechanisms:
Flow Control, which is there to make sure that the sender doesn't overwhelm the other party with data. This is where the receive window comes in. Since the receive windows advertised by the client in your screenshot are large, this isn't what pauses the transfer in your case.
Congestion Control, which tries to make sure that the network isn't overwhelmed. Slow Start, which you've mentioned, is part of this mechanism in some implementations of TCP, specifically TCP Tahoe and TCP Reno, which are the variants most commonly taught in networking courses although rarely used in practice.
Since we know that flow control is not what's pausing the connection, we can assume that the culprit is the congestion control algorithm. To figure out the exact cause, however, you'd need to dive into the implementation details of the TCP variant your OS uses. For Windows, it seems to be something called Compound TCP. With recent Linux kernels, it's TCP CUBIC, described in this whitepaper.
The important thing to note, however, is that both mechanisms operate during the entire lifetime of the connection, not just at its start. It seems that your sender paused after sending its biggest packet so far (at least among the ones shown in the screenshot), so it is possible that this packet consumed the remaining free congestion window; although the flow control window was still large, the sender was bound by the former.
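To make the interplay between the two windows concrete, here is a minimal Java sketch of how a sender's usable window is commonly modeled; the cwnd, rwnd, and bytes-in-flight values are illustrative assumptions (loosely based on the capture), not numbers read from a real kernel:

    // Minimal sketch: the sender may only transmit what BOTH windows allow.
    public class UsableWindow {
        public static void main(String[] args) {
            long cwnd = 10 * 1460;      // assumed congestion window (10 segments of 1460 bytes)
            long rwnd = 64 * 1024;      // assumed advertised receive window
            long bytesInFlight = 1230;  // sent but not yet acknowledged

            // The sender may put at most min(cwnd, rwnd) - bytesInFlight more bytes
            // on the wire; if that is zero (or negative) it must wait for an ACK.
            long usable = Math.min(cwnd, rwnd) - bytesInFlight;
            System.out.println("usable window = " + usable + " bytes");
        }
    }

If the congestion window had shrunk to roughly the bytes in flight, the sender would stall exactly as between packets 38 and 40, even though the advertised receive window is large.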

Related

How to deal with many incoming UDP packets when the server has only 1 UDP socket?

When a server has only 1 UDP socket, and many clients are sending UDP packets to it, what would be the best approach to handle all of the incoming packets?
I think this can also be a problem with TCP packets, since there's a limited thread count, which cannot cover all client TCP socket receive events.
But things are better in this situation because there's 1 TCP socket per client, and even if the network buffer is full, packet receiving is blocked until the queue has space (let me know if I'm wrong).
UDP packets, however, are discarded when the buffer is full, and there's only 1 socket, so the chances of that happening are higher.
How can I solve this problem? I've searched for a while, but I couldn't get a clear answer. Should I implement my own queueing system? Or just maximize the network buffer size?
There is no way to guarantee you won't drop UDP messages. No matter what you do, if the rate of packets being sent is too large, you will drop some, either on the receiving host or somewhere in the network.
Some things that can help include:
Implementing an internal queue for messages in your Java app, and handing them over to a thread pool to process (a minimal sketch follows below).
Increasing the kernel's message buffering.
But neither of these can deal with the case where the average message arrival rate is higher than the receiver's ability to process them or than the network capacity. This will inevitably lead to lost messages (requests).
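As a rough illustration of the two suggestions above, here is a minimal Java sketch of a single UDP socket feeding a bounded internal queue and a thread pool; the port number, buffer sizes, and pool size are arbitrary assumptions:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.Arrays;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // One thread drains the socket as fast as possible and hands copies of the
    // datagrams to a bounded thread pool. If the pool's queue is full the packet
    // is dropped in the application, which mirrors what the kernel would do
    // anyway when its own buffer overflows.
    public class UdpWorkerPool {
        public static void main(String[] args) throws Exception {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    4, 4, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<>(10_000),          // internal queue
                    new ThreadPoolExecutor.DiscardPolicy());   // drop when overloaded

            try (DatagramSocket socket = new DatagramSocket(9000)) {
                socket.setReceiveBufferSize(4 * 1024 * 1024);  // ask for a bigger kernel buffer
                byte[] buf = new byte[65_535];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);                    // blocks until a datagram arrives
                    byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
                    pool.execute(() -> process(data));         // hand off to the pool
                }
            }
        }

        static void process(byte[] data) {
            // application-specific handling goes here
        }
    }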
I've searched for a while, but I couldn't get a clear answer.
That is because there isn't one! Some problems are fundamentally unsolvable. For others, the best answer depends on factors that are too hard to measure or predict.
(If you want certainty ... don't use networking!)
In the TCP case, what you should do is use a (long-lived) socket for each client. Depending on the number of sockets you need to support, you could either:
Dedicate a server-side thread to each socket (and client).
Use java.nio.channels.Selector and a thread pool (a sketch follows below).
You will still get problems if the rate of requests exceeds your server's ability to process them. However, the TCP connections will ensure that requests are not lost, and that the clients get some "back pressure".
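For the second option, a minimal sketch of the Selector-plus-thread-pool approach might look like the following; the port, pool size, and buffer size are arbitrary, and error handling is omitted:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // One thread multiplexes all client sockets; completed reads are handed
    // to a small worker pool so the I/O thread is never blocked by processing.
    public class SelectorServer {
        public static void main(String[] args) throws IOException {
            ExecutorService workers = Executors.newFixedThreadPool(4);
            Selector selector = Selector.open();

            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();                        // block until something is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        if (client == null) continue;
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(4096);
                        int n = client.read(buf);
                        if (n == -1) { client.close(); continue; }
                        buf.flip();
                        workers.execute(() -> handle(buf)); // process off the I/O thread
                    }
                }
            }
        }

        static void handle(ByteBuffer request) {
            // application-specific request handling goes here
        }
    }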

TCP vs UDP - Issues that arise from using both

When I'm learning about various technologies, I often try to think of how applications I use regularly implement such things. I've played a few MMOs along with some FPSs. I did some looking around and happened upon this thread:
http://www.gamedev.net/topic/319003-mmorpg-and-the-ol-udp-vs-tcp
I have been seeing that UDP shines when some loss of packets is permissible. There's less overhead involved and updates are done more quickly. After looking around for a bit and reading various articles and threads, I've come to see that character positioning will often be done with UDP. Games like FPSs will often use UDP because of all the rapid changes that are occurring.
I've seen multiple times now where someone pointed out issues that can occur when using UDP and TCP simultaneously. What might some of these problems be? Are these issues that would mostly be encountered by novice programmers? It seems to me that it would be ideal to use a combination of UDP and TCP, gaining the advantages of each. However, if using the two together adds a significant amount of complexity to the code to deal with problems caused, it may not be worth it in certain situations.
Many games use UDP and TCP together. Since every game has to deliver a player's actions to everyone else, it has to be done one way or the other, and which way depends on what kind of game you want to make. In an RTS, TCP would surely be much wiser, since you cannot afford to lose information about your opponent's movements. In an RPG, it is not that important to keep track of everything every second of the game.
Bottom line: if data has to arrive at the client in any case (position updates, upgrades, and so on), you have to send it via TCP, or you implement your own reliable protocol on top of UDP. I have built quite a few network stacks for games, and what you use depends on the use case and what you are trying to accomplish. I mostly did a heartbeat over UDP, so the remote server/client knows that I am still there. The problem with UDP is that packets get lost and are not resent. If a packet drops, it is lost forever; you have to take that into account. If you send information over UDP, it has to be information that is allowed to be lost. Everything else goes via TCP.
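As an illustration of the heartbeat idea, here is a minimal Java sketch; the host name, port, interval, and message format are invented for the example. Losing an individual heartbeat is harmless, which is exactly why UDP is acceptable here:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Send a tiny datagram once per second so the peer knows we are still alive.
    public class UdpHeartbeat {
        public static void main(String[] args) throws Exception {
            InetAddress peer = InetAddress.getByName("game.example.com"); // placeholder host
            int port = 7777;                                              // placeholder port
            byte[] payload = "HEARTBEAT".getBytes(StandardCharsets.UTF_8);

            try (DatagramSocket socket = new DatagramSocket()) {
                while (true) {
                    socket.send(new DatagramPacket(payload, payload.length, peer, port));
                    Thread.sleep(1000);   // one heartbeat per second
                }
            }
        }
    }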
And so the madness began. If you want to get the most out of both, you will have to adapt them. TCP can be slow at times: if packets get fragmented or arrive out of order, you have to wait until the OS has reassembled them. In some cases it can be advisable to build your own reliable protocol on top of UDP, which gives you complete control over your traffic. Most firewalls do not drop UDP specifically, but as with TCP, any traffic that is not explicitly allowed (opened ports, packet redirects, and so on) gets dropped. I would suggest you read the TCP, UDP, and UDP-Lite articles on Wikipedia and then decide which one you want to use for what. AFAIK Battle.net uses a combination of the two.
Many services use UDP and TCP together, but doing so without adding congestion control to your UDP implementation can cause major problems for TCP. Long story short, UDP can and often will clog the routers at each endpoint, making TCP's congestion control go haywire and significantly limiting the throughput of the TCP connection. The UDP-induced congestion can also cause a significant increase in packet loss for TCP, limiting TCP's throughput even more, since those packets will need to be retransmitted. Using them together isn't a bad idea and is even becoming somewhat common, but you'll want to keep this in mind.
The first possible issue I can think of is that, because UDP doesn't have the overhead inherent in the "transmission control" that TCP does, UDP gets higher effective bandwidth and lower latency. So it is possible for a UDP datagram that was sent after a TCP message to be available in the remote computer's input buffers before the TCP message has been received in full.
This may cause problems if you use TCP to control or monitor UDP transmission; for instance, you might tell the server via TCP that you'll be sending some datagrams via UDP on port X. If the remote computer isn't already listening on port X, it may not receive some of these datagrams, because they arrived before it was told to listen; if it is listening, but not expecting traffic from you, it may discard them because they showed up before it was told to expect them. This may have an adverse effect on your program's flow or your user's experience.
I think that if you need to transfer game data reliably, TCP is the only solution. Imagine that you send a command (e.g. "gained x item" :P) to the server and that packet never reaches its destination (UDP makes no guarantees).
Also imagine the scenario where two or more UDP packets reach their destination in the wrong order.
But if you integrate any VoIP or video-call capabilities into your game, you can use UDP for those.

What happens when a TCP/UDP server is publishing faster than a client is consuming?

I am trying to get a handle on what happens when a server publishes (over tcp, udp, etc.) faster than a client can consume the data.
Within a program I understand that if a queue sits between the producer and the consumer, it will start to get larger. If there is no queue, then the producer simply won't be able to produce anything new, until the consumer can consume (I know there may be many more variations).
I am not clear on what happens when data leaves the server (which may be a different process, machine or data center) and is sent to the client. If the client simply can't respond to the incoming data fast enough, assuming the server and the consumer are very loosely coupled, what happens to the in-flight data?
Where can I read to get details on this topic? Do I just have to read the low level details of TCP/UDP?
Thanks
With TCP there's a TCP window which is used for flow control. TCP only allows a certain amount of data to remain unacknowledged at a time. If a server is producing data faster than a client is consuming it, then the amount of unacknowledged data will increase until the TCP window is 'full'; at that point the sending TCP stack will wait and will not send any more data until the client acknowledges some of the pending data.
With UDP there's no such flow control system; it's unreliable after all. The UDP stacks on both client and server are allowed to drop datagrams if they feel like it, as are all routers between them. If you send more datagrams than the link can deliver to the client or if the link delivers more datagrams than your client code can receive then some of them will get thrown away. The server and client code will likely never know unless you have built some form of reliable protocol over basic UDP. Though actually you may find that datagrams are NOT thrown away by the network stack and that the NIC drivers simply chew up all available non-paged pool and eventually crash the system (see this blog posting for more details).
Back with TCP, how your server code deals with the TCP Window becoming full depends on whether you are using blocking I/O, non-blocking I/O or async I/O.
If you are using blocking I/O then your send calls will block and your server will slow down; effectively your server is now in lock step with your client. It can't send more data until the client has received the pending data.
If the server is using non blocking I/O then you'll likely get an error return that tells you that the call would have blocked; you can do other things but your server will need to resend the data at a later date...
If you're using async I/O then things may be more complex. With async I/O using I/O Completion Ports on Windows, for example, you won't notice anything different at all. Your overlapped sends will still be accepted just fine, but you might notice that they take longer to complete. The overlapped sends are being queued on your server machine, using memory for your overlapped buffers and probably using up 'non-paged pool' as well. If you keep issuing overlapped sends then you run the risk of exhausting non-paged pool memory or using a potentially unbounded amount of memory as I/O buffers. Therefore, with async I/O and servers that COULD generate data faster than their clients can consume it, you should write your own flow control code that you drive using the completions from your writes. I have written about this problem on my blog here and here, and my server framework provides code which deals with it automatically for you.
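For the non-blocking case mentioned above, here is a minimal Java NIO sketch (not Windows overlapped I/O) of what "the call would have blocked" looks like in practice: write() returns 0 when the socket's send buffer is full, so the pending bytes have to be kept and retried once the selector reports the socket writable again:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;

    // When the peer (or the TCP window) is not draining data fast enough, a
    // non-blocking write makes no progress; the sender must keep the leftover
    // bytes and wait for OP_WRITE instead of spinning.
    public class NonBlockingSend {
        // Returns true if everything was written, false if data remains pending.
        static boolean trySend(SocketChannel channel, ByteBuffer pending, SelectionKey key)
                throws IOException {
            while (pending.hasRemaining()) {
                int written = channel.write(pending);
                if (written == 0) {
                    // Send buffer full: ask the selector to tell us when the
                    // socket becomes writable again, then back off.
                    key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
                    return false;
                }
            }
            // All data written; stop watching for writability.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
            return true;
        }
    }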
As far as the data 'in flight' is concerned the TCP stacks in both peers will ensure that the data arrives as expected (i.e. in order and with nothing missing), they'll do this by resending data as and when required.
TCP has a feature called flow control.
As part of the TCP protocol, the client tells the server how much more data can be sent without filling up the buffer. If the buffer fills up, the client tells the server that it can't send more data yet. Once the buffer is emptied out a bit, the client tells the server it can start sending data again. (This also applies to when the client is sending data to the server).
UDP on the other hand is completely different. UDP itself does not do anything like this and will start dropping data if it is coming in faster than the process can handle it. It would be up to the application to add logic to the application protocol if it can't lose data (i.e. if it requires a 'reliable' data stream).
If you really want to understand TCP, you pretty much need to read an implementation in conjunction with the RFC; real TCP implementations are not exactly as specified. For example, Linux has a 'memory pressure' concept which protects against running out of the kernel's (rather small) pool of DMA memory, and also prevents one socket running any others out of buffer space.
The server can't be faster than the client for a long time. After it has been faster than the client for a while, the system where it is hosted will block it when it writes on the socket (writes can block on a full buffer just as reads can block on an empty buffer).
With TCP, this cannot happen.
In the case of UDP, packets will be lost.
The TCP Wikipedia article shows the TCP header format, which is where the window size and acknowledgment sequence number are kept. The rest of the fields and the description there should give a good overview of how transmission throttling works. RFC 793 specifies the basic operations; pages 41 and 42 detail the flow control.

Why is UDP + a software reliable ordering system faster than TCP?

Some games today use a network system that transmits messages over UDP, and ensures that the messages are reliable and ordered.
For example, RakNet is a popular game network engine. It uses only UDP for its connections, and has a whole system to ensure that packets can be reliable and ordered if you so choose.
My basic question is, what's up with that? Isn't TCP the same thing as ordered, reliable UDP? What makes it so much slower that people have to basically reinvent the wheel?
General/Specialisation
TCP is a general-purpose reliable system.
UDP + whatever is a special-purpose reliable system.
Specialized things are usually better than general-purpose things at the job they are specialized for.
Stream / Message
TCP is stream-based
UDP is message-based
Discrete gaming information usually maps better onto a message-based paradigm. Sending it through a stream is possible but horribly inefficient. If you want to reliably send a huge amount of data (file transfer), TCP is quite effective. That's why BitTorrent uses UDP for control messages and TCP for sending data.
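A small sketch of the stream-versus-message point: over TCP you have to invent your own framing to recover discrete messages from the byte stream, whereas each UDP datagram already is one message. The 4-byte length prefix below is an assumed convention for illustration, not a standard:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Length-prefix framing: the only way to carve discrete messages out of a
    // TCP byte stream is to add your own boundaries.
    public class LengthPrefixFraming {
        static void writeMessage(DataOutputStream out, byte[] message) throws IOException {
            out.writeInt(message.length);   // frame header: message length
            out.write(message);             // frame body
            out.flush();
        }

        static byte[] readMessage(DataInputStream in) throws IOException {
            int length = in.readInt();      // read the header first...
            byte[] message = new byte[length];
            in.readFully(message);          // ...then exactly that many body bytes
            return message;
        }
    }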
We switched from reliable to unreliable in "League of Legends" about a year ago because of several advantages, which have since proven to be true:
1) Old information becomes irrelevant. If I send a health packet and it doesn't arrive... I don't want to have to wait for that same health packet to be resent when I know it's changed.
2) Order is sometimes not necessary. If I'm sending different messages to different systems, it may not be necessary to get those messages in order. I don't force the client to wait for in-order messages.
3) Unreliable transport doesn't get backed up with messages, i.e. waiting for acknowledgements, which means you can resolve loss spikes much more quickly.
4) You can control resends, when necessary, more efficiently, such as repacking something that didn't send into another packet. (TCP does repack, but you can do it more efficiently with knowledge of how your program works.)
5) Flow control of messages, such as throwing away messages that are less relevant when the network suddenly spikes. The network system can choose not to resend less relevant messages when you have a loss spike. With TCP you'd still have a queue of messages trying to resend which may be lower priority.
6) Smaller packet header... don't really need to say much about that (a small header sketch follows below).
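To make point 6 concrete, a reliable-ordered layer on top of UDP only needs a few header bytes of its own; the layout below is hypothetical and purely for illustration:

    import java.nio.ByteBuffer;

    // Hypothetical 7-byte game-message header on top of UDP's 8-byte header,
    // versus TCP's 20+ byte header. Fields are invented for illustration.
    public class GameMessageHeader {
        static ByteBuffer encode(int sequence, byte channel, boolean needsAck, byte[] payload) {
            ByteBuffer buf = ByteBuffer.allocate(7 + payload.length);
            buf.putInt(sequence);                    // per-channel sequence number for ordering
            buf.put(channel);                        // logical channel (point 2: per-system ordering)
            buf.put((byte) (needsAck ? 1 : 0));      // only "reliable" messages are ever resent
            buf.put((byte) 0);                       // reserved / flags
            buf.put(payload);
            buf.flip();
            return buf;
        }
    }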
There's much more of a difference between UDP and TCP than just reliability and sequencing:
At the heart of the matter is the fact that UDP is connectionless while TCP is connected. This simple difference leads to a host of other differences that I'm not going to be able to reasonably summarize here. You can read the analysis below for much more detail.
TCP - UDP Comparative Analysis
The answer, in my opinion, is two words: "congestion control".
TCP goes to great lengths to manage the bandwidth of the path: to use most of it, but to ensure that there is space for other applications. This is a very hard task, and inherently it is not possible to use 100% of the bandwidth 100% of the time.
With UDP, on the other hand, one can make their own protocol that sends packets onto the wire as fast as they want. This makes the protocol very unfriendly to other applications, but can gain more "performance" short-term. On the other hand, given the right conditions, this kind of protocol is quite likely to contribute to congestion collapse.
TCP is a stream-oriented protocol, whereas UDP is a message-oriented protocol. Hence TCP does more than just reliability and ordering. See this post for more details. Basically, the RakNet developers added the reliability and ordering while still keeping it as a message-oriented protocol, and so the result was more lightweight than TCP (which has to do more).
This little article is old, but it's still pretty true when it comes to games. It explains the two protocols, and the havoc these folks went through trying to develop a multiplayer internet game, "X-Wing vs TIE Fighter":
Lessons Learned (The Internet Sucks)
There is one caveat to this, though. I run/develop a multiplayer game, and I've used both. UDP was much better for my app, but a lot of people couldn't play with UDP; routers and such blocked the connections. So I changed to the "reliable" TCP. Well... reliable? I don't think so. You send a packet with no errors, you send another, and it crashes (exception) in the middle of the packet. Now which packets made it? So you end up writing a reliable protocol ON TOP OF TCP, to simulate UDP, but continuously establishing a new connection when it crashes. Talk about inefficient.
UDP + stop-and-wait ARQ = good
UDP + Sliding Window Protocol = better
TCP + Sliding Window Protocol with reconnection? = Worthless bulkware. (IMHO)
The other side effect is multi-threaded applications. TCP works well for a chat-room type of thing, since each room can be its own thread. A room can hold 60-100 people and it runs fine, as the room thread contains the sockets for each participant.
UDP on the other hand is best served (IMO) by one thread, but when you get a packet, you have to parse it to figure out who it came from (via the info sent or the RemoteEndPoint), then pass that data to the chat-room thread in a thread-safe manner.
Actually, you have to do the same with TCP, but only on connect.
Last point. Remember that TCP will just error out and kill the connection at any time, but you can reconnect in about 0.5 seconds and send the same information. Most bizarre thing I've ever worked with.
UDP has lower reliability on its own; you can add reliability on top of it by sending a message and waiting for a response, and resending the message if no response comes.
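The simplest form of that is stop-and-wait: send, wait for a reply, and resend on timeout. A minimal Java sketch follows; the timeout, retry count, and the assumption that any reply counts as an acknowledgement are all illustrative:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    // Send a datagram and wait briefly for any reply; resend a few times if
    // nothing comes back. Real protocols (sliding windows, RakNet, etc.) are far
    // more sophisticated, but this is the core of "resend until acknowledged".
    public class StopAndWaitSender {
        static boolean sendReliably(byte[] data, InetAddress peer, int port) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(500);                         // 500 ms per attempt
                byte[] ackBuf = new byte[16];
                for (int attempt = 0; attempt < 5; attempt++) {
                    socket.send(new DatagramPacket(data, data.length, peer, port));
                    try {
                        socket.receive(new DatagramPacket(ackBuf, ackBuf.length));
                        return true;                              // got an acknowledgement
                    } catch (SocketTimeoutException e) {
                        // no reply in time: fall through and resend
                    }
                }
                return false;                                     // gave up after 5 attempts
            }
        }
    }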

Why Does RTP use UDP instead of TCP?

I wanted to know why UDP is used in RTP rather than TCP. The major VoIP tools use only UDP, as I found while hacking on some of the open-source VoIP software.
As DJ pointed out, TCP is about getting a reliable data stream, and will slow down transmission, and re-transmit corrupted packets, in order to achieve that.
UDP does not care about reliability of the communication, and will not slow down or re-transmit data.
If your application needs a reliable data stream, for example, to retrieve a file from a webserver, you choose TCP.
If your application doesn't care about corrupted or lost packets, and you don't need to incur the additional overhead to provide the additional reliability, you can choose UDP instead.
VOIP is not significantly improved by reliable packet transmission, and in fact, in some cases things in TCP like retransmission and exponential backoff can actually hurt VOIP quality. Therefore, UDP was a better choice.
A lot of good answers have been given, but I'd like to point one thing out explicitly:
Basically, a complete data stream is a nice thing to have for real-time audio/video, but it's not strictly necessary (as others have pointed out):
The important fact is that some data that arrives too late is worthless. What good is the missing data for a frame that should have been displayed a second ago?
If you were to use TCP (which also guarantees the correct order of all data), then you wouldn't be able to get to the more up-to-date data until the old one is transmitted correctly. This is doubly bad: you have to wait for the re-transmission of the old data and the new data (which is now delayed) will probably be just as worthless.
So RTP does some kind of best-effort transmission in that it tries to transfer all available data in time, but doesn't attempt to re-transmit data that was lost/corrupted during the transfer (*). It just goes on with life and hopes that the more important current data gets there correctly.
(*) actually I don't know the specifics of RTP. Maybe it does try to re-transmit, but if it does then it won't be as aggressive as TCP is (which won't ever accept any lost data).
The others are correct; however, they don't really tell you the REAL reason why. Saua kind of hints at it, but here's a more complete answer.
Audio and video are real-time. If you are listening to a radio, or watching TV, and the signal is interrupted, it doesn't pick up where you left off... you're just "observing" the signal as it streams, and if you can't observe it at any given time, you lose it.
The reason is simple: delay. VoIP tries very hard to minimize the delay from the time someone speaks into one end until you hear it on your end, and of your response back. Otherwise, as errors occurred, the delay between when the person spoke and when the signal was received would grow continuously until the conversation became useless.
Remember, each delay from a retransmission has to be replayed, and that causes further data to be delayed, then another error causes an even greater delay. The only workable solution is to simply drop any data that can't be displayed in real-time.
A 1 second delay from retransmission would mean it would now be 1 second from the time I said something until you heard it. A second 1 second delay now means it's 2 seconds from the time i say something until you hear it. This is cumulative because data is played back at the same rate at which it is spoken, and so on...
RTP could be connection oriented, but then it would have to drop (or skip) data to keep up with retransmission errors anyways, so why bother with the extra overhead?
Technically RTP packets can be interleaved over a TCP connection. There are lots of great answers given here. Two additional minor points:
RFC 4588 describes how one could use retransmission with RTP data. Most clients that receive RTP streams employ a buffer, typically 1-5 seconds long, to account for jitter in the network, which means there is time available for a retransmission to deliver the desired data.
RTP traffic can be interleaved over a TCP connection. In practice, when this is done, the difference between interleaved RTP (i.e. over TCP) and RTP sent over UDP is how the two perform over a lossy network with insufficient bandwidth available for the user. The interleaved TCP stream will end up being jerky as the player continually waits in a buffering state for packets to arrive; depending on the player, it may jump ahead to catch up. With RTP over UDP you will instead get artifacts (smearing/tearing) in the video.
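A very simplified sketch of such a jitter buffer: packets are kept ordered by sequence number and only handed to the decoder after a fixed delay, which is what gives a late packet or a retransmission (per RFC 4588) time to arrive. The delay value and data types are assumptions:

    import java.util.TreeMap;

    // Store packets by sequence number and release them only after they have
    // aged past a fixed buffering delay.
    public class JitterBuffer {
        private static final long DELAY_MS = 2000;                  // assumed 2-second buffer
        private final TreeMap<Integer, byte[]> bySequence = new TreeMap<>();
        private final TreeMap<Integer, Long> arrivalTime = new TreeMap<>();

        synchronized void add(int sequence, byte[] payload) {
            bySequence.put(sequence, payload);
            arrivalTime.put(sequence, System.currentTimeMillis());
        }

        // Returns the next packet once it has aged past the delay, else null.
        synchronized byte[] poll() {
            if (bySequence.isEmpty()) return null;
            int seq = bySequence.firstKey();
            if (System.currentTimeMillis() - arrivalTime.get(seq) < DELAY_MS) return null;
            arrivalTime.remove(seq);
            return bySequence.remove(seq);
        }
    }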
UDP is often used for various types of realtime traffic that doesn't need strict ordering to be useful. This is because TCP enforces an ordering before passing data to an application (by default, you can get around this by setting the URG pointer, but no one seems to ever do this) and that can be highly undesirable in an environment where you'd rather get current realtime data than get old data reliably.
RTP is fairly insensitive to packet loss, so it doesn't require the reliability of TCP.
UDP has less overhead for headers so that one packet can carry more data, so the network bandwidth is utilized more efficiently.
UDP also provides fast data transmission.
So UDP is the obvious choice in cases such as this.
Besides all the other nice and correct answers, this article gives a good understanding of the differences between TCP and UDP.
The Real-time Transport Protocol is a network protocol used to deliver streaming audio and video over the internet, enabling applications such as Voice over IP (VoIP).
RTP is generally used with a signaling protocol, such as SIP, which sets up connections across the network. RTP applications can use the Transmission Control Protocol (TCP), but most use the User Datagram protocol (UDP) instead because UDP allows for faster delivery of data.
UDP is used wherever data is sent that does not need to arrive at the target exactly, or where no stable connection is needed.
TCP is used if data needs to be received exactly, bit for bit, with no loss.
For video and sound streaming, some bits lost on the way do not affect the result in any way worth mentioning: a few failed pixels in one frame of a stream, nothing that a user notices (on DVDs the lost-bit rate is even higher).
Just a remark: each packet sent in an RTP stream is given a number one higher than its predecessor. This allows the destination to determine whether any packets are missing.
If a packet is missing, the best action for the destination to take is to approximate the missing value by interpolation.
Retransmission is not a practical option, since the retransmitted packet would be too late to be useful.
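A minimal sketch of that remark: detect the gap in sequence numbers and fill the missing values by straight-line interpolation. Sequence-number wraparound and real concealment algorithms are ignored; the names and types are illustrative:

    // Detect missing packets from the sequence numbers and approximate their
    // values by interpolating between the last good sample and the new one.
    public class SequenceGapConcealment {
        private int lastSeq = -1;
        private double lastSample;

        // Returns interpolated samples for any packets lost before this one.
        double[] onPacket(int seq, double sample) {
            double[] filled = new double[0];
            if (lastSeq >= 0 && seq > lastSeq + 1) {
                int missing = seq - lastSeq - 1;
                filled = new double[missing];
                for (int i = 0; i < missing; i++) {
                    // straight line between the last good sample and the new one
                    filled[i] = lastSample + (sample - lastSample) * (i + 1) / (missing + 1);
                }
            }
            lastSeq = seq;
            lastSample = sample;
            return filled;
        }
    }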
I'd like to quickly add to what Matt H said in response to Stobor's answer. Matt H mentioned that RTP-over-UDP packets can be checksummed so that if they are corrupted, they will get resent. This is actually an optional feature on most PBXs. In Asterisk, for example, you can enable or disable checksums on your RTP-over-UDP traffic in the rtp.conf configuration file with the following line:
rtpchecksums=yes ; or no if you prefer
Cheers!
