What are the retransmission rules for TCP? - networking

The Wikipedia article on TCP indicates that the IP packets transporting TCP segments can sometimes get lost, and that TCP "requests retransmission of lost data".
What exactly are the rules for requesting retransmission of lost data? At what time frequency are the retransmission requests performed? Is there an upper bound on the number? Is there a way for the client to tell the server to forget about a whole TCP segment when part of it went missing along with the IP packet that carried it?

What exactly are the rules for requesting retransmission of lost data?
The receiver does not request retransmission. The sender waits for an ACK of the byte range it sent, and if the ACK does not arrive within a certain interval, it resends the packets.
This is ARQ (Automatic Repeat reQuest). There are several ways in which this is implemented.
Stop-and-wait ARQ
Go-Back-N ARQ
Selective Repeat ARQ
These are detailed in RFC 3366.
At what time frequency are the retransmission requests performed?
The retransmission times and the number of attempts aren't mandated by the standard. They are implemented differently by different operating systems, but the overall methodology is fixed. (One way to fingerprint operating systems, perhaps?)
The timeouts are measured in multiples of the RTT (Round-Trip Time). But the timeout isn't needed very often thanks to fast retransmit, which kicks in when 3 duplicate ACKs are received.
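For a concrete sense of where that timeout comes from, RFC 6298 describes how a sender derives its retransmission timeout (RTO) from smoothed RTT measurements. Here is a minimal Python sketch of that calculation; it follows the RFC's formulas, but the class name, sample values, and overall structure are illustrative only, not any particular OS's implementation:

# Sketch of the RFC 6298 retransmission timeout (RTO) calculation.
ALPHA = 1 / 8      # smoothing factor for SRTT (per RFC 6298)
BETA = 1 / 4       # smoothing factor for RTTVAR
K = 4
MIN_RTO = 1.0      # RFC 6298 recommends a 1-second floor

class RtoEstimator:
    def __init__(self):
        self.srtt = None     # smoothed round-trip time
        self.rttvar = None   # round-trip time variance
        self.rto = 1.0       # initial RTO before any measurement

    def on_rtt_sample(self, r):
        # Update the estimator with one RTT measurement (in seconds).
        if self.srtt is None:                 # first measurement
            self.srtt = r
            self.rttvar = r / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - r)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * r
        self.rto = max(MIN_RTO, self.srtt + K * self.rttvar)

est = RtoEstimator()
for sample in (0.120, 0.110, 0.300, 0.115):   # hypothetical RTT samples
    est.on_rtt_sample(sample)
print(round(est.rto, 3))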
Is there an upper bound on the number?
Yes there is. After a certain number of retries, the host is considered to be "down" and the sender gives up and tears down the TCP connection.
Is there functionality for the client to indicate to the server to forget about the whole TCP segment for which part went missing when the IP packet went missing?
The whole point is reliable communication. If you wanted the client to forget about some part, you wouldn't be using TCP in the first place. (UDP perhaps?)

There's no fixed time for retransmission. Simple implementations estimate the RTT (round-trip time) and, if no ACK for the sent data has been received within twice that time, they re-send.
They then double the wait-time and re-send once more if again there is no reply. Rinse. Repeat.
More sophisticated systems make better estimates of how long it should take for the ACK as well as guesses about exactly which data has been lost.
The bottom-line is that there is no hard-and-fast rule about exactly when to retransmit. It's up to the implementation. All retransmissions are triggered solely by the sender based on lack of response from the receiver.
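To make that doubling concrete, here is a rough sketch of a retransmit-with-backoff loop. It is only an illustration of the idea; the function names, retry cap, and timeout handling are invented for the example and are not what any real TCP stack looks like internally:

def send_with_backoff(send, wait_for_ack, initial_timeout, max_retries=8):
    # Send a segment and keep retransmitting with a doubling timeout.
    # `send` and `wait_for_ack` stand in for whatever transmit and
    # ACK-checking mechanism the implementation actually uses.
    timeout = initial_timeout
    for attempt in range(max_retries):
        send()
        if wait_for_ack(timeout):    # ACK arrived in time: we're done
            return True
        timeout *= 2                 # no ACK: back off and try again
    return False                     # give up; peer considered unreachable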
TCP never drops data so no, there is no way to indicate a server should forget about some segment.


How long can a delayed packet persist?

I read a little about PAWS (Protection Against Wrapped Sequence numbers). It's very interesting; I didn't know such complicated mechanisms were implemented to guarantee TCP's reliability. Without PAWS, at high data rates a delayed old packet could be received and mistaken for a new packet.
I hadn't thought much about this before, but now I've started to wonder how long a packet can stay in the network (especially a UDP packet, if the type of packet matters). A packet can be delayed and temporarily sit in the network before it's delivered, but it can only stay there for a short period of time, right?
In other words, how much time does it take to wait for a (UDP) packet before concluding that it won't come?
If there is an answer, how is it determined? How do you estimate it (for writing programs that deal with packet timeouts)?
A simplified example: a server received 2 UDP packets, each containing an integer to indicate its order. It got No. 1 and No. 3, so it knows No. 2 is either delayed or lost. After a period of time, No. 2 still hasn't come, so it concludes the packet is lost; the packet doesn't exist anymore (and so won't cause any trouble for new packets in the future, similar to the problem PAWS aims to solve). But how long should the server wait before concluding that No. 2 doesn't exist anymore?
See RFC 791 #3.2:
Time to Live
The time to live is set by the sender to the maximum time the
datagram is allowed to be in the internet system. If the datagram
is in the internet system longer than the time to live, then the
datagram must be destroyed.
This field must be decreased at each point that the internet header
is processed to reflect the time spent processing the datagram.
Even if no local information is available on the time actually
spent, the field must be decremented by 1. The time is measured in
units of seconds (i.e. the value 1 means one second). Thus, the
maximum time to live is 255 seconds or 4.25 minutes. Since every
module that processes a datagram must decrease the TTL by at least
one even if it process the datagram in less than a second, the TTL
must be thought of only as an upper bound on the time a datagram may
exist. The intention is to cause undeliverable datagrams to be
discarded, and to bound the maximum datagram lifetime.
UDP is a fire-and-forget, best-effort protocol. There is no expectation by the receiving host that the UDP packet is coming. Upper layers can use their own guarantees or expectations, but UDP has none.
UDP doesn't wait on packets the way TCP does.
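That said, the TTL is visible to applications: most socket APIs let you set or read it. A minimal Python sketch, where the TTL value and the destination address are arbitrary examples:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Limit how many hops (and, per RFC 791, seconds) the datagram may live.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 16)
sock.sendto(b"hello", ("192.0.2.1", 9999))   # 192.0.2.1 is a documentation-only address
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))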

Implementation standard of retransmission scheduling in a communication protocol

So I am implementing a communication protocol over UDP, which obviously means that I will be managing all error handling and retransmission myself. My original plan was to have a timestamp as part of the header in my packets and keep all non-acknowledged packets in a retransmission queue. Then I would maintain a round-trip delay time and a round-trip timeout (how long to wait for an acknowledgement before retransmitting the packet) by measuring the time at which a packet was acknowledged and comparing it with the timestamp of that packet in the retransmission queue.
Someone then told me that having timestamps as part of packet headers is normally something that is used in streaming protocols, since timing is of big importance in those cases (timing is of no importance in my case). However, I did a little research, and it seems like UDT uses timestamps in this way although UDT is not a streaming protocol. Actually, what I am trying to implement is kinda similar to UDT.
I guess I could easily remove the timestamp field of the header, and just keep track of the sending times of the packets in another queue running parallel to my retransmission queue.
I'm just curious to hear what you guys think?
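For what it's worth, a minimal sketch of the second approach (keeping the send times on the sender instead of in the packet header) could look like this; every name here is invented for illustration, and the fixed RTO is a placeholder for whatever RTT-based estimate you end up using:

import time

class RetransmitQueue:
    # Tracks unacknowledged sequence numbers and when they were last sent.
    def __init__(self, rto=0.5):
        self.pending = {}    # seq -> (payload, last_send_time)
        self.rto = rto       # current retransmission timeout, in seconds

    def on_send(self, seq, payload):
        self.pending[seq] = (payload, time.monotonic())

    def on_ack(self, seq):
        entry = self.pending.pop(seq, None)
        if entry is None:
            return None
        # Sample the RTT from the locally recorded send time; feed this into
        # whatever smoothing you use to adjust self.rto.
        return time.monotonic() - entry[1]

    def due_for_retransmit(self):
        now = time.monotonic()
        return [seq for seq, (_, sent) in self.pending.items()
                if now - sent >= self.rto]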

TCP/IP 3-way handshake

Why is data not transferred during the 3rd part of TCP 3-way handshake?
e.g.
(A to B) SYN
(B to A) ACK+SYN
(A to B) ACK ... why can't data be transferred along with this ACK?
I've always believed it was to keep the session establishment phase separate from the data transfer phase so that no real data is transferred until both ends of the session have agreed on the sequence numbers and session options, especially since packets arriving may be from a totally different, previous, session that just happens to have the same endpoints.
However, on further investigation, I'm not entirely certain that transmitting data with the handshake packets is disallowed. The section on TCP connection establishment in my Internetworking with TCP/IP1 book contains the following snippet:
Because of the protocol design, it is possible to send data along with the initial sequence numbers in the handshake segments. In such cases, the TCP software must hold the data until the handshake completes. Once a connection has been established, the TCP software can release data being held and deliver it to a waiting application program quickly.
Since it's certainly possible to construct a TCP packet with SYN (or ACK) and data, this may well be allowed. I've never seen it happen in the wild but, then again, I've never seen a hairy-eared dwarf lemur in the wild either, though I'm assured they exist.
It may be that it's the sockets software that prevents data going out before the session is fully established but TCP appears to consider it valid. It appears you can send data with a SYN-ACK packet (phase 2 of the connection establishment) since you have the other end's sequence number and options. Likewise, sending data with the phase 3 ACK packet appears to be possible as well.
The reason the TCP software holds on to the data until the handshake is fully complete is probably due to the reason mentioned above - only once both ends have agreed on the sequence numbers can you be sure that the data is not from a previous session.
1 Internetworking with TCP/IP Volume 1 Principles, Protocols and Architecture, 3rd edition, Douglas E. Comer, ISBN 0-13-216987-8.
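As an aside, modern Linux kernels do let an application hand data to the stack along with the SYN via TCP Fast Open (RFC 7413). A rough client-side sketch, assuming a Linux system with Fast Open enabled and a Python build that exposes MSG_FASTOPEN (neither assumption is universal, and the address is just an example):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fastopen = getattr(socket, "MSG_FASTOPEN", None)
if fastopen is not None:
    # sendto() with MSG_FASTOPEN initiates the handshake and queues the
    # payload to ride along with the SYN (when a Fast Open cookie is cached).
    sock.sendto(b"GET / HTTP/1.0\r\n\r\n", fastopen, ("198.51.100.10", 80))
else:
    sock.connect(("198.51.100.10", 80))    # fall back to a normal handshake
    sock.sendall(b"GET / HTTP/1.0\r\n\r\n")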

What factors other than latency and bandwidth affect network speed?

I have noticed that viewing images or websites hosted on US servers (I'm in Europe) is considerably slower. The main reason would be the latency because of the distance.
But if 1 packet takes n milliseconds to be received, can't this be alleviated by sending more packets simultaneously?
Does this actually happen or are the packets sent one by one? And if yes, what determines how many packets can be sent simultaneously (something to do with the cable, I guess)?
But if 1 packet takes n milliseconds to be received, can't this be alleviated by sending more packets simultaneously?
Not in a boundless way, by TCP/IP standards, because there are algorithms that determine how much can be in flight and not yet acknowledged, to avoid overloading the whole network.
Does this actually happen or are the packets sent one by one?
TCP can and does keep up to a certain amount of packets and data "in flight".
And if yes, what determines how many packets can be sent simultaneously (something to do with the cable, I guess)?
What cable? The same standards apply whether you're on cabled, wireless, or mixed sequences of connections (remember that your packet goes through many routers on its way to the destination, and the sequence of routers can change from packet to packet).
You can start your study of TCP with, e.g., Wikipedia. Your specific questions deal with congestion control algorithms and standards; Wikipedia will give you pointers to all the relevant algorithms and RFCs, but the whole picture won't do you much good if you try to start studying at that spot without a lot of other understanding of TCP (e.g., its flow control concepts).
Wikipedia and similar encyclopedia/tutorial sites can only give you a summary of the summary, while RFCs are not written to be readable or understandable by non-experts. If you care about TCP, I'd recommend starting your study with Stevens' immortal trilogy of books (though there are many other valid ones, Stevens' are by far my personal favorites).
The issue is parallelism.
Latency does not directly affect your pipe's throughput. For instance, a dump truck across the country has terrible latency, but wonderful throughput if you stuff it full of 2TB tapes.
The problem is that your web browser can't start asking for things until it knows what to ask for. So, when you load a web page with ten images, you have to wait until the img tags arrive before you can send the request for them. So everything is perceptibly slower, not because your connection is saturated but because there's down time between making one request and the next.
A prefetcher helps alleviate this issue.
As far as "multiple packets at a time" are concerned, a single TCP connection will have many packets in transit at once, as specified by the window scaling algorithm the ends are using. But that only helps with one connection at a time...
TCP uses what's called a sliding window. Basically the amount of buffer space, X, the receiver has to re-assemble out of order packets. The sender can send X bytes past the last acknowledged byte, sequence number N, say. This way you can fill the pipe between sender and receiver with X unacknowledged bytes under the assumption that the packets will likely get there and if not the receiver will let you know by not acknowledging the missing packets. On each response packet the receiver sends a cumulative acknowledgment, saying "I've got all the bytes up to byte X." This lets it ack multiple packets at once.
Imagine a client sending 3 packets, X, Y, and Z, starting at sequence number N. Due to routing, Y arrives first, then Z, and then X. Y and Z will be buffered at the destination's stack, and when X arrives the receiver will ack N + (the combined length of X, Y, and Z). This bumps the start of the sliding window, allowing the client to send additional packets.
It's possible with selective acknowledgement to ack portions of the sliding window and ask the sender to retransmit just the lost portions. In the classic scheme, if Y was lost the sender would have to resend Y and Z; selective acknowledgement means the sender can just resend Y. Take a look at the Wikipedia page.
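A toy illustration of cumulative acknowledgement over a reorder buffer, written in terms of packet numbers for simplicity (real TCP acknowledges byte offsets, and all names here are made up):

def deliver(payload):
    pass   # hand in-order data to the application

class Receiver:
    # Buffers out-of-order packets and returns a cumulative ACK.
    def __init__(self):
        self.expected = 0    # next in-order packet number we need
        self.buffer = {}     # out-of-order packets held for reassembly

    def on_packet(self, seq, payload):
        self.buffer[seq] = payload
        # Slide forward past every contiguous packet we now hold.
        while self.expected in self.buffer:
            deliver(self.buffer.pop(self.expected))
            self.expected += 1
        return self.expected   # cumulative ACK: everything below this has arrived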
Regarding speed, one thing that may slow you down is DNS. That adds an additional round-trip, if the IP isn't cached, before you can even request the image in question. If it's not a common site this may be the case.
TCP/IP Illustrated, Volume 1, by W. Richard Stevens is tremendous if you want to learn more. The title sounds funny but the packet diagrams and annotated arrows from one host to the other really make this stuff easier to understand. It's one of those books that you can learn from and then end up keeping as a reference. It's one of my 3 go-to books on networking projects.
The TCP protocol will try to send more and more packets at a time, up to a certain amount, until it starts noticing that packets are being dropped (they die in router/switch land, for example when a congested router's queue overflows or their Time To Live expires), and then it throttles back. This is how it determines the size of the window (and thus the bandwidth) it can use. If the sending host notices a lot of dropped packets from its end, the receiving host is just going to see it as a slow connection. It could very well be blasting you with data; you're just not seeing it.
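That ramp-up-until-loss behaviour is TCP congestion control. As a cartoon of the additive-increase / multiplicative-decrease idea only (real stacks use considerably more elaborate algorithms such as slow start, CUBIC, and so on):

def adjust_cwnd(cwnd, loss_detected):
    # Toy AIMD congestion-window update, in units of segments.
    if loss_detected:
        return max(1, cwnd // 2)   # multiplicative decrease on loss
    return cwnd + 1                # additive increase per round trip

cwnd = 1
for loss in [False, False, False, True, False]:
    cwnd = adjust_cwnd(cwnd, loss)
    print(cwnd)   # prints 2, 3, 4, 2, 3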
I guess parallel packet transmission is also possible (yes, there can be a limit on the number of packets sent at a time). You can find more information about packet transmission under the topics of message switching, packet switching, circuit switching, and virtual circuit packet switching.

Why Does RTP use UDP instead of TCP?

I wanted to know why RTP uses UDP rather than TCP. The major VoIP tools use only UDP, as I found when digging through some open-source VoIP software.
As DJ pointed out, TCP is about getting a reliable data stream, and will slow down transmission, and re-transmit corrupted packets, in order to achieve that.
UDP does not care about reliability of the communication, and will not slow down or re-transmit data.
If your application needs a reliable data stream, for example, to retrieve a file from a webserver, you choose TCP.
If your application doesn't care about corrupted or lost packets, and you don't need to incur the additional overhead to provide the additional reliability, you can choose UDP instead.
VOIP is not significantly improved by reliable packet transmission, and in fact, in some cases things in TCP like retransmission and exponential backoff can actually hurt VOIP quality. Therefore, UDP was a better choice.
A lot of good answers have been given, but I'd like to point one thing out explicitly:
Basically a complete data stream is a nice thing to have for real-time audio/video, but it's not strictly necessary (as others have pointed out):
The important fact is that some data that arrives too late is worthless. What good is the missing data for a frame that should have been displayed a second ago?
If you were to use TCP (which also guarantees the correct order of all data), then you wouldn't be able to get to the more up-to-date data until the old one is transmitted correctly. This is doubly bad: you have to wait for the re-transmission of the old data and the new data (which is now delayed) will probably be just as worthless.
So RTP does some kind of best-effort transmission in that it tries to transfer all available data in time, but doesn't attempt to re-transmit data that was lost/corrupted during the transfer (*). It just goes on with life and hopes that the more important current data gets there correctly.
(*) actually I don't know the specifics of RTP. Maybe it does try to re-transmit, but if it does then it won't be as aggressive as TCP is (which won't ever accept any lost data).
The others are correct, however they don't really tell you the REAL reason why. Saua kind of hints at it, but here's a more complete answer.
Audio and video are real-time. If you are listening to the radio, or watching TV, and the signal is interrupted, it doesn't pick up where you left off; you're just "observing" the signal as it streams, and if you can't observe it at any given time, you lose it.
The reason is simple: delay. VoIP tries very hard to minimize the amount of delay from the time someone speaks into one end until you get it on your end, and your response back. Otherwise, as errors occurred, the amount of delay between when the person spoke and when the signal was received would continuously grow until it became useless.
Remember, each delay from a retransmission has to be replayed, and that causes further data to be delayed, then another error causes an even greater delay. The only workable solution is to simply drop any data that can't be displayed in real-time.
A 1-second delay from retransmission would mean it is now 1 second from the time I said something until you heard it. A second 1-second delay now means it's 2 seconds from the time I say something until you hear it. This is cumulative because data is played back at the same rate at which it is spoken, and so on...
RTP could be connection oriented, but then it would have to drop (or skip) data to keep up with retransmission errors anyways, so why bother with the extra overhead?
Technically RTP packets can be interleaved over a TCP connection. There are lots of great answers given here. Two additional minor points:
RFC 4588 describes how one could use retransmission with RTP data. Most clients that receive RTP streams employ a buffer to account for jitter in the network that is typically 1-5 seconds long and which means there is time available for a retransmit to receive the desired data.
RTP traffic can be interleaved over a TCP connection. In practice when this is done, the difference between Interleaved RTP (i.e. over TCP) and RTP sent over UDP is how these two perform over a lossy network with insufficient bandwidth available for the user. The Interleaved TCP stream will end up being jerky as the player continually waits in a buffering state for packets to arrive. Depending on the player it may jump ahead to catch up. With an RTP connection you will get artifacts (smearing/tearing) in the video.
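To illustrate the buffering mentioned above, here is a toy jitter buffer that holds incoming packets for a fixed delay so late or reordered ones can still slot in before playout. The 100 ms figure and all the names are made up for the example:

import heapq
import time

class JitterBuffer:
    # Holds RTP-style packets briefly so late/reordered ones can be re-sequenced.
    def __init__(self, delay=0.1):     # e.g. 100 ms of buffering
        self.delay = delay
        self.heap = []                 # (sequence_number, arrival_time, payload)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, time.monotonic(), payload))

    def pop_ready(self):
        # Return packets whose buffering delay has elapsed, in sequence order.
        out = []
        now = time.monotonic()
        while self.heap and now - self.heap[0][1] >= self.delay:
            seq, _, payload = heapq.heappop(self.heap)
            out.append((seq, payload))
        return out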
UDP is often used for various types of real-time traffic that doesn't need strict ordering to be useful. This is because TCP enforces an ordering before passing data to an application (the URG pointer is sometimes cited as a way around this, but hardly anyone ever uses it and it doesn't really bypass in-order delivery), and that can be highly undesirable in an environment where you'd rather get current real-time data than get old data reliably.
RTP is fairly insensitive to packet loss, so it doesn't require the reliability of TCP.
UDP has less overhead for headers so that one packet can carry more data, so the network bandwidth is utilized more efficiently.
UDP also provides faster data transmission.
So UDP is the obvious choice in cases such as this.
Besides all the other nice and correct answers, this article gives a good understanding of the differences between TCP and UDP.
The Real-time Transport Protocol is a network protocol used to deliver streaming audio and video media over the internet, thereby enabling Voice over IP (VoIP).
RTP is generally used with a signaling protocol, such as SIP, which sets up connections across the network. RTP applications can use the Transmission Control Protocol (TCP), but most use the User Datagram Protocol (UDP) instead because UDP allows for faster delivery of data.
UDP is used wherever data is sent that does not need to arrive at the target exactly, or where no stable connection is needed.
TCP is used if data needs to be received exactly, bit for bit, with no loss of bits.
For video and sound streaming, a few bits lost on the way do not affect the result in any way worth mentioning; a few failed pixels in one picture of a stream are nothing that affects a user. (On DVDs the tolerated lost bit rate is higher.)
Just a remark:
Each packet sent in an RTP stream is given a number one higher than its predecessor. This allows the destination to determine if any packets are missing.
If a packet is missing, the best action for the destination to take is to approximate the missing value by interpolation.
Retransmission is not a practical option, since the retransmitted packet would be too late to be useful.
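A quick sketch of that gap-detection-plus-interpolation idea using RTP-style sequence numbers (linear interpolation of a single sample value is just one possible concealment strategy, and the function is purely illustrative):

def fill_gaps(packets):
    # packets: list of (sequence_number, sample_value) sorted by sequence.
    # Detect missing sequence numbers and approximate the missing values
    # by linear interpolation between the neighbouring packets.
    result = []
    for (seq_a, val_a), (seq_b, val_b) in zip(packets, packets[1:]):
        result.append((seq_a, val_a))
        missing = seq_b - seq_a - 1
        for i in range(1, missing + 1):
            estimate = val_a + (val_b - val_a) * i / (missing + 1)
            result.append((seq_a + i, estimate))
    result.append(packets[-1])
    return result

print(fill_gaps([(1, 0.0), (3, 1.0)]))   # packet 2 is missing -> (2, 0.5)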
I'd like to add quickly to what Matt H said in response to Stobor's answer. Matt H mentioned that RTP over UDP packets can be checksummed so that corrupted packets can be detected. This is actually an optional feature on most PBXs. In Asterisk, for example, you can enable / disable checksums on your RTP over UDP traffic in the rtp.conf configuration file with the following line:
rtpchecksums=yes ; or no if you prefer
Cheers!
