TCP/IP 3-way handshake - networking

Why is data not transferred during the 3rd part of TCP 3-way handshake?
e.g.
(A to B) SYN
(B to A) ACK+SYN
(A to B) ACK ... why can't data be transferred along with this ACK?

I've always believed it was to keep the session establishment phase separate from the data transfer phase so that no real data is transferred until both ends of the session have agreed on the sequence numbers and session options, especially since packets arriving may be from a totally different, previous, session that just happens to have the same endpoints.
However, on further investigation, I'm not entirely certain that transmitting data with the handshake packets is disallowed. The section on TCP connection establishment in my Internetworking with TCP/IP book [1] contains the following snippet:
Because of the protocol design, it is possible to send data along with the initial sequence numbers in the handshake segments. In such cases, the TCP software must hold the data until the handshake completes. Once a connection has been established, the TCP software can release data being held and deliver it to a waiting application program quickly.
Since it's certainly possible to construct a TCP packet with SYN (or ACK) and data, this may well be allowed. I've never seen it happen in the wild but, then again, I've never seen a hairy-eared dwarf lemur in the wild either, though I'm assured they exist.
It may be that it's the sockets software that prevents data going out before the session is fully established but TCP appears to consider it valid. It appears you can send data with a SYN-ACK packet (phase 2 of the connection establishment) since you have the other end's sequence number and options. Likewise, sending data with the phase 3 ACK packet appears to be possible as well.
The reason the TCP software holds on to the data until the handshake is fully complete is probably due to the reason mentioned above - only once both ends have agreed on the sequence numbers can you be sure that the data is not from a previous session.
[1] Internetworking with TCP/IP, Volume 1: Principles, Protocols and Architecture, 3rd edition, Douglas E. Comer, ISBN 0-13-216987-8.
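For what it's worth, packet-crafting tools show that the segment format itself has no problem with a SYN that carries a payload. Here is a minimal sketch using scapy (assuming it is installed and you have raw-socket privileges); the destination address, port, and payload are placeholders, and whether the receiving stack accepts, holds, or ignores the data is entirely up to that stack:

# Sketch only: craft a SYN segment that also carries data, purely to show
# that the TCP segment format permits it. Requires scapy and root privileges.
# 198.51.100.10 and port 8080 are placeholder values.
from scapy.all import IP, TCP, Raw, send

syn_with_data = (
    IP(dst="198.51.100.10")
    / TCP(dport=8080, flags="S", seq=1000)
    / Raw(load=b"early data")   # payload riding along with the SYN
)
send(syn_with_data)             # what the peer does with the data is its own business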

Related

What exactly is kept alive in a keep-alive TCP connection?

Please bear with the naivety of this question, as I have close to no knowledge of networking. Please help me understand this more clearly.
Suppose there is a river, and people on both banks need to travel back and forth between them. A bridge acts as the connection between the two ends; as long as the bridge is intact, the connection is said to be alive and travel is possible. I want to know what it means to keep a TCP connection alive, and what exactly is kept alive. In the case of the river, it was the bridge that was kept alive.
Contrary to a bridge, a TCP connection is not a physical thing but only a logical association between two ends. Data gets delivered between the ends hop by hop through several intermediate systems. Single packets might get lost on the way, or the other end or some intermediate system might crash so that connectivity is lost completely.
As long as the ends exchange data regularly, such conditions can be detected quickly. If one end sends data, the other has to acknowledge it; if the acknowledgement is missing, the packet gets retransmitted. If the data is still not acknowledged after several retransmissions, the connection is considered broken.
But the ends might not exchange data continuously. If the connection is idle (i.e. no data exchange), a breakage will not be detected. TCP keepalive works around this problem by regularly sending packets on idle connections and expecting acknowledgments. These packets carry no data payload, since there is no data to transmit.
At both end-points, some data (state variables) must be associated with each connection in order to run the TCP protocol. For example, the sender needs to remember sequence numbers and keep copies of all packets that have been sent but not yet acknowledged. The receiver needs to track sequence numbers too, store copies of packets that arrive out of order, and reconstruct the original stream from the received packets. These state data structures are created (i.e., memory is allocated) when the connection is established and destroyed (memory is freed) when the connection is terminated (e.g., after exchanging FINs). This state is associated with each open TCP socket. It is good practice not to keep connections open forever without exchanging data (for example, if one of the communication partners has crashed, it won't be able to perform a proper connection termination), so if the connection is idle for a long period (how long exactly, I don't know) it is reasonable for the socket to be closed. This concept is known as "soft state", which basically means that each piece of state (memory) has an inactivity expiration time after which it is deleted. If the socket is closed, then a new connection has to be established when new data needs to be sent. Yes, that involves sending packets, but the overhead is only the payload-less TCP packets of one RTT before the first data is sent.
In theory, a TCP connection exists only at the end-points of the connection. In practice, however, there are also many kinds of so-called "middleboxes", a general name for network devices that are neither routers nor switches. These middleboxes sometimes also need to maintain state associated with each TCP connection, so keepalives also refresh the state on these middleboxes.
But in both cases, what these keepalives basically say is: reset the inactivity timer associated with the state for this connection.
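To make that concrete, here is a minimal sketch of switching keepalives on for a socket, assuming Python on Linux (the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT names are Linux-specific, and the endpoint is a placeholder):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # turn keepalive probes on

# Linux-specific tuning: start probing after 60 s of idle time, probe every 10 s,
# and declare the connection dead after 5 unanswered probes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

sock.connect(("example.com", 80))    # placeholder endpoint
# If the connection now sits idle, the stack sends empty keepalive segments on its own;
# each answered probe also refreshes any middlebox state along the path, while a run of
# unanswered probes marks the connection as broken.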

Why do TCP selective ACKs not prevent HOL blocking in HTTP/2?

The HTTP/3 spec states that
because the parallel nature of HTTP/2's multiplexing is not visible to TCP's loss recovery mechanisms, a lost or reordered packet causes all active transactions to experience a stall regardless of whether that transaction was directly impacted by the lost packet
While I understand this in the context of cumulative ACKs, I had assumed that selective ACKs would prevent a stall as they allow
the receiver to acknowledge discontinuous blocks of packets which were received correctly
But clearly this isn't the case as per the quote from the HTTP/3 spec above. So, my question then is why does head-of-line blocking persist even with discontinuous acknowledgements?
Even with selective ACKs it is still necessary to get the missing data before forwarding the data stream to the application. Applications expect a continuous data stream from TCP, and there is no mechanism for handing them a temporary hole that gets filled later. All that selective ACKs allow is to communicate that data which has already been received does not need to be resent, and that only the outstanding data (the hole in the stream) needs to be retransmitted.
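A toy model of the receiving side may make this clearer: segments behind a hole can be buffered (and SACKed), but nothing past the hole can be handed to the application until the hole is filled, no matter which HTTP/2 stream the buffered bytes belong to. This is a simplified sketch, not real TCP code:

# Toy model of TCP's in-order delivery: out-of-order segments are buffered,
# but delivery to the application stalls at the first missing byte.
class InOrderReceiver:
    def __init__(self):
        self.next_seq = 0      # next byte offset the application expects
        self.buffered = {}     # seq -> payload received out of order (the "SACKed" data)
        self.delivered = b""   # what has actually been handed to the application

    def on_segment(self, seq, data):
        self.buffered[seq] = data
        # Hand over as much contiguous data as possible.
        while self.next_seq in self.buffered:
            chunk = self.buffered.pop(self.next_seq)
            self.delivered += chunk
            self.next_seq += len(chunk)

rx = InOrderReceiver()
rx.on_segment(0, b"AAAA")   # delivered immediately
rx.on_segment(8, b"CCCC")   # buffered: bytes 4-7 are missing (the hole)
print(rx.delivered)         # b'AAAA' - the later bytes are stuck, even though they arrived
rx.on_segment(4, b"BBBB")   # the retransmission fills the hole
print(rx.delivered)         # b'AAAABBBBCCCC'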

Why does TCP termination need a 4-way handshake?

When a connection is set up, there is:
Client ------SYN-----> Server
Client <---ACK/SYN---- Server ----①
Client ------ACK-----> Server
and when termination comes, there is:
Client ------FIN-----> Server
Client <-----ACK------ Server ----②
Client <-----FIN------ Server ----③
Client ------ACK-----> Server
My question is: why can't ② and ③ be set in the same packet like ①, where the ACK and SYN are set in one packet?
After googling a lot, I realized that the four-way teardown is actually two pairs of two-way handshakes.
If termination were a REAL four-way action, ② and ③ could indeed be sent in the same packet.
But it is a two-phase process: the first phase (i.e. the first two-way handshake) is:
Client ------FIN-----> Server
Client <-----ACK------ Server
At this moment the client is in the FIN_WAIT_2 state, waiting for a FIN from the server. TCP is a bidirectional, full-duplex protocol: one direction has now been shut down and no more data will be sent on it, but receiving still works, so the client has to wait for the other "half" to be terminated.
When the FIN from the server is sent to the client, the client responds with an ACK to terminate the connection.
Concluding note: ② and ③ cannot in general be merged into one packet, because they belong to different states. But if the server has no more data (or no data at all) to send when it receives the FIN from the client, it is OK to merge ② and ③ into one packet (see the socket-level sketch after the references below).
References:
http://www.tcpipguide.com/free/t_TCPConnectionTermination-2.htm
http://www.tcpipguide.com/free/t_TCPConnectionEstablishmentProcessTheThreeWayHandsh-3.htm
http://www.tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm
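At socket level, the half-close described above is what shutdown() is for. A minimal sketch with Python's standard socket module (the endpoint is a placeholder; error handling omitted):

import socket

sock = socket.create_connection(("example.com", 8080))   # placeholder endpoint
sock.sendall(b"last piece of request data")

# Half-close: this sends our FIN (FIN_WAIT_1); once the peer's stack ACKs it (②)
# we sit in FIN_WAIT_2, still able to receive.
sock.shutdown(socket.SHUT_WR)

# The other direction is still open, so the server may keep sending until it is done.
while True:
    chunk = sock.recv(4096)
    if not chunk:            # an empty read means the server's FIN (③) has arrived
        break
    print("still receiving:", chunk)

sock.close()                 # release the socket; the final ACK was sent by the stack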
I think ② and ③ can of course be merged technically, but doing so is not flexible, because the two are not one atomic action.
The first FIN (client to server) means one thing and one thing only: I am closing my direction of the communication.
The first ACK (server to client) is a response to that FIN: OK, I know your direction is closed, but as for my (server-side) direction, I'm not sure yet. Maybe my receive buffer has not been fully processed yet. The time I need is uncertain.
Thus ② and ③ are not appropriate to merge. Only the server has the right to decide when its direction of the connection can be closed.
From Wikipedia - "It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (merely combines 2 steps into one) and host A replies with an ACK.[14]"
Source:
Wikipedia - https://en.wikipedia.org/wiki/Transmission_Control_Protocol
[14] - Tanenbaum, Andrew S. (2003-03-17). Computer Networks (Fourth ed.). Prentice Hall. ISBN 0-13-066102-3.
Based on this document, we can see the detailed process of the four-way handshake as follows.
The ACK (marked as ②) is sent by the TCP stack automatically, while the next FIN (marked as ③) is controlled at the application level by calling the close-socket API; the application decides when to terminate its side of the connection. So in the common case these two packets are not merged into one.
In contrast, the ACK/SYN (marked as ①) is sent by the TCP stack automatically. In the connection-establishment stage the process is straightforward, so the TCP stack handles it by default.
Looked at from the angle of coding, a four-way close is more reasonable than a three-way one, although both are possible in practice.
When a connection is to be terminated by one side, the peer may be in any of several states: it may be operating normally, it may be in the middle of transmitting or receiving, or it may suddenly have become disconnected before the termination was even initiated.
The termination procedure should take at least these three cases into consideration, since they all happen with high probability in real life.
With that in mind, the reason becomes easier to see. If the peer is offline, it is quite easy for the client to infer the peer's state by examining the packets captured during the procedure, since the first ACK is never received from the peer. But if the two messages were combined into one, it would be much harder for the client to know why the peer does not respond, because not only could an offline peer explain the missing packet, but so could various exceptions during processing on the server side. Not to mention that the client would have to wait a long time until the timeout. With the extra separate message, both problems become easier to handle from the client side.
The reason this reads like coding is that the information carried in each message is much like a log line in code.
In the three-way handshake (connection setup): the server must acknowledge (ACK) the client's SYN, and it must also send its own SYN containing the initial sequence number for the data the server will send on the connection.
That is why the server sends its SYN and the ACK of the client's SYN in a single segment.
In connection termination: it takes four segments to terminate a connection, since a FIN and an ACK are required in each direction.
② means that the received FIN (first segment) is acknowledged (ACK) by TCP.
③ means that sometime later the application that received the end-of-file closes its socket, which causes its TCP to send a FIN.
The last segment then means that the TCP on the system receiving this final FIN acknowledges (ACK) it.
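A sketch of the server side of that sequence with Python sockets (placeholder port; error handling omitted): the stack ACKs the client's FIN on its own, and the second FIN only goes out when the application finally closes its socket.

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))    # placeholder port
srv.listen(1)
conn, addr = srv.accept()

while True:
    data = conn.recv(4096)
    if not data:               # end-of-file: the client's FIN arrived and the
        break                  # stack has already ACKed it (②) for us
    conn.sendall(data)         # our half of the connection can still send

conn.close()                   # only now does our TCP send its own FIN (③)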

Checksums on TCP

Is TCP not responsible for making sure that a stream is sent intact over the wire by doing whatever may become necessary as losses etc. occur during a transfer?
Does it not do a proper job of it?
Why do higher application-layer protocols and their applications still perform checksums?
While TCP does contain its own checksum, it is only a 16-bit checksum and it is certainly possible for a multi-bit transmission error to slip by the TCP checksum mechanism. This is quite rare, but it is still possible and I have in fact seen it happen (once or twice in a couple of decades).
A robust protocol will want to use a higher-level hash function to assure integrity of transmitted data. Having said that, not many applications that transmit a small amount of data go to this trouble. Bulk transfer applications (such as a package manager or auto-update mechanism) will usually use a cryptographic hash function to increase the assurance of data integrity.
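For example, a bulk-transfer application might verify a download against a published digest; here's a minimal sketch with Python's hashlib, where the filename and expected digest are placeholders:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in chunks so large downloads don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"  # placeholder digest
if sha256_of("package.tar.gz") != EXPECTED:    # placeholder filename
    raise ValueError("download corrupted despite TCP's checksum; fetch it again")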
TCP ensures that TCP packets are delivered reliably, using checksums to trap errors introduced during transmission, and retransmitting lost or damaged packets as required. When a packet is transmitted it is retained in a retransmission queue until the peer host acknowledges receipt; if no acknowledgement is received within a certain timeout period then the packet is retransmitted. But the host won't keep retransmitting a packet forever - if a packet repeatedly fails then TCP eventually gives up and closes the connection.
Higher-level protocols assume that TCP works reliably (a fair assumption) and use their own checksums or whatever to check that the higher-level data stream arrived safely. I've written lots of buggy sockets applications that screwed up their own higher-level buffers and mangled the application data stream!
In any production-grade TCP/IP stack with a robust application I think you can be confident that the problem is that your connection is dropping out. Or you might have a buggy application, but I doubt that your fetch/wget is buggy.

Why Does RTP use UDP instead of TCP?

I wanted to know why UDP is used in RTP rather than TCP. Major VoIP tools use only UDP, as I found when digging through some VoIP open-source software.
As DJ pointed out, TCP is about getting a reliable data stream, and will slow down transmission, and re-transmit corrupted packets, in order to achieve that.
UDP does not care about reliability of the communication, and will not slow down or re-transmit data.
If your application needs a reliable data stream, for example, to retrieve a file from a webserver, you choose TCP.
If your application doesn't care about corrupted or lost packets, and you don't need to incur the additional overhead to provide the additional reliability, you can choose UDP instead.
VOIP is not significantly improved by reliable packet transmission, and in fact, in some cases things in TCP like retransmission and exponential backoff can actually hurt VOIP quality. Therefore, UDP was a better choice.
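The difference shows up directly in the socket API. A minimal sketch of UDP's fire-and-forget behaviour with Python's socket module (the loopback address and port are placeholders):

import socket

# Receiver: reads whatever datagrams happen to arrive, in whatever order.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 5004))    # placeholder port

# Sender: no handshake, no acknowledgements, no retransmission.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    tx.sendto(f"frame {i}".encode(), ("127.0.0.1", 5004))

data, addr = rx.recvfrom(2048)  # delivery is best-effort: on a real network a
print(data)                     # datagram may be lost or reordered and nobody resends it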
A lot of good answers have been given, but I'd like to point one thing out explicitly:
Basically, a complete data stream is a nice thing to have for real-time audio/video, but it's not strictly necessary (as others have pointed out):
The important fact is that some data that arrives too late is worthless. What good is the missing data for a frame that should have been displayed a second ago?
If you were to use TCP (which also guarantees the correct order of all data), then you wouldn't be able to get to the more up-to-date data until the old one is transmitted correctly. This is doubly bad: you have to wait for the re-transmission of the old data and the new data (which is now delayed) will probably be just as worthless.
So RTP does some kind of best-effort transmission in that it tries to transfer all available data in time, but doesn't attempt to re-transmit data that was lost/corrupted during the transfer (*). It just goes on with life and hopes that the more important current data gets there correctly.
(*) actually I don't know the specifics of RTP. Maybe it does try to re-transmit, but if it does then it won't be as aggressive as TCP is (which won't ever accept any lost data).
The others are correct, however they don't really tell you the REAL reason why. Saua kind of hints at it, but here's a more complete answer.
Audio and video are real-time. If you are listening to the radio, or watching TV, and the signal is interrupted, it doesn't pick up where you left off; you're just "observing" the signal as it streams, and if you can't observe it at any given moment, you lose it.
The reason is simple: delay. VoIP tries very hard to minimize the delay from the time someone speaks at one end until you hear it at your end, and your response back. Otherwise, as errors occurred, the delay between when a person spoke and when the signal was received would grow continuously until it became useless.
Remember, each delay from a retransmission has to be replayed, and that causes further data to be delayed, then another error causes an even greater delay. The only workable solution is to simply drop any data that can't be displayed in real-time.
A 1-second delay from retransmission would mean it is now 1 second from the time I said something until you heard it. A second 1-second delay means it is now 2 seconds from the time I say something until you hear it. This is cumulative, because data is played back at the same rate at which it is spoken, and so on...
RTP could be connection oriented, but then it would have to drop (or skip) data to keep up with retransmission errors anyways, so why bother with the extra overhead?
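That policy of "drop it rather than wait for it" is easy to sketch. The 150 ms playout budget below is an arbitrary illustration, not anything RTP mandates:

def play_or_drop(capture_time, now, payload, budget=0.150):
    # Play a media packet only if it can still be rendered on time;
    # waiting for a retransmission would make every later packet late as well.
    age = now - capture_time        # seconds since the packet was captured
    if age > budget:
        return None                 # too late to be useful: drop it, don't stall
    return payload                  # otherwise hand it to the decoder

print(play_or_drop(capture_time=0.0, now=0.120, payload=b"frame"))  # on time: played
print(play_or_drop(capture_time=0.0, now=0.400, payload=b"frame"))  # late: dropped (None)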
Technically RTP packets can be interleaved over a TCP connection. There are lots of great answers given here. Two additional minor points:
RFC 4588 describes how one could use retransmission with RTP data. Most clients that receive RTP streams employ a buffer to account for jitter in the network that is typically 1-5 seconds long and which means there is time available for a retransmit to receive the desired data.
RTP traffic can be interleaved over a TCP connection. In practice when this is done, the difference between Interleaved RTP (i.e. over TCP) and RTP sent over UDP is how these two perform over a lossy network with insufficient bandwidth available for the user. The Interleaved TCP stream will end up being jerky as the player continually waits in a buffering state for packets to arrive. Depending on the player it may jump ahead to catch up. With an RTP connection you will get artifacts (smearing/tearing) in the video.
UDP is often used for various types of realtime traffic that doesn't need strict ordering to be useful. This is because TCP enforces an ordering before passing data to an application (by default, you can get around this by setting the URG pointer, but no one seems to ever do this) and that can be highly undesirable in an environment where you'd rather get current realtime data than get old data reliably.
RTP is fairly insensitive to packet loss, so it doesn't require the reliability of TCP.
UDP has less overhead for headers so that one packet can carry more data, so the network bandwidth is utilized more efficiently.
UDP provides fast data transmission also.
So UDP is the obvious choice in cases such as this.
Besides all the other nice and correct answers, this article gives a good understanding of the differences between TCP and UDP.
The Real-time Transport Protocol is a network protocol used to deliver streaming audio and video media over the internet, thereby enabling the Voice Over Internet Protocol (VoIP).
RTP is generally used with a signaling protocol, such as SIP, which sets up connections across the network. RTP applications can use the Transmission Control Protocol (TCP), but most use the User Datagram protocol (UDP) instead because UDP allows for faster delivery of data.
UDP is used wherever data is sent that does not need to arrive at the target exactly, or where no stable connection is needed.
TCP is used if data needs to be received exactly, bit for bit, with no loss.
For video and audio streaming, a few bits lost on the way do not affect the result in any way worth mentioning: a few failed pixels in one picture of a stream are nothing a user notices. (On DVDs the lost-bit rate is even higher.)
just a remark:
Each packet sent in an RTP stream is given a sequence number one higher than its predecessor. This allows the destination to determine whether any packets are missing.
If a packet is missing, the best action for the destination to take is to approximate the missing value by interpolation.
Retransmission is not a practical option, since the retransmitted packet would arrive too late to be useful.
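A toy sketch of that idea, with made-up sample values: spot the gap from the sequence numbers and fill the missing value by linear interpolation instead of asking for a retransmit:

# Made-up (sequence number, sample value) pairs; packet 3 never arrived.
received = {1: 10, 2: 14, 4: 22}

def fill_gaps(samples):
    # Approximate each missing sample by linear interpolation between its neighbours.
    filled = dict(samples)
    for seq in range(min(samples), max(samples) + 1):
        if seq not in filled:
            prev_seq = max(s for s in samples if s < seq)
            next_seq = min(s for s in samples if s > seq)
            frac = (seq - prev_seq) / (next_seq - prev_seq)
            filled[seq] = samples[prev_seq] + frac * (samples[next_seq] - samples[prev_seq])
    return filled

print(fill_gaps(received))   # {1: 10, 2: 14, 4: 22, 3: 18.0} - packet 3 approximated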
I'd like to add quickly to what Matt H said in response to Stobor's answer. Matt H mentioned that RTP over UDP packets can be checksum'ed so that if they are corrupted, they will get resent. This is actually an optional feature on most PBXs. In Asterisk, for example, you can enable / disable checksums on your RTP over UDP traffic in the rtp.conf configuration file with the following line:
rtpchecksums=yes ; or no if you prefer
Cheers!
