In TCP/IP, we have MSS and MTU when sending and receiving packets.
MTU is an IP-layer concept, determined by the underlying link hardware: it is the maximum amount of data an IP packet can carry in a single transmission.
MSS is a TCP-layer concept, limited by the MTU: the TCP data stream is split into segments of at most MSS bytes.
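For example, assuming a typical Ethernet MTU of 1500 bytes and no IP or TCP options, the MSS works out to 1500 - 20 (IP header) - 20 (TCP header) = 1460 bytes.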
Our protocol lies on top of TCP, and each such protocol defines its own packet format. One example is MySQL, which allows packets of up to 2^24-1 bytes, roughly 16 MB. When a protocol packet larger than the MSS is handed to TCP, it is split into multiple segments according to the MSS.
Assume that a client needs to send DATA1 and DATA2 to the server. DATA2 is bigger than the MSS, so it is split into DATA2_1 and DATA2_2. Because the packets are handled by the IP layer, the order in which they arrive at the server might not match the order in which the client sent them.
So I think the packets might arrive in any of the following orders:
DATA1 + DATA2_1, DATA2_2
DATA1, DATA2_1, DATA2_2
DATA1, DATA2_2, DATA2_1
In the first case, the server receives DATA1 and DATA2_1 in one TCP packet, and then another packet containing DATA2_2 arrives.
In the second case, the server receives DATA1, DATA2_1 and DATA2_2 in three packets.
In the third case, the server first receives DATA2_2 and then DATA2_1.
My question:
Is the third case possible?
If yes, doesn't that contradict TCP being a stream protocol, since a stream should be ordered? And even if it does not break the stream guarantee, how should this scenario be handled?
If no, how does TCP put the disordered packets back into their original order?
It is possible to receive that sequence over the network; however, the TCP implementation will hide that detail from your application and only feed the data to you in stream order. (In fact, since fragmentation happens at the IP layer, the data won't even be shown to the TCP layer until the second fragment has arrived as well.)
The fact that received packets sometimes have to be held in a buffer like this is why you will see people referring to UDP as better for low-latency applications: with UDP you can receive datagrams out of order, and it's up to your application to figure out how to deal with that possibility.
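To make "stream order" concrete, here is a minimal sketch (in Python, using a hypothetical 4-byte big-endian length prefix rather than any real protocol's framing, such as MySQL's) of how an application on top of TCP typically reads one protocol packet out of the ordered byte stream:

    import socket

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        """Read exactly n bytes from the TCP stream, regardless of how
        the data was segmented or fragmented on the wire."""
        buf = bytearray()
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf.extend(chunk)
        return bytes(buf)

    def recv_protocol_packet(sock: socket.socket) -> bytes:
        # Hypothetical framing: 4-byte big-endian length prefix, then payload.
        length = int.from_bytes(recv_exact(sock, 4), "big")
        return recv_exact(sock, length)

The application never sees segment or fragment boundaries; it just keeps calling recv() until it has a complete protocol packet.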
Is the third case possible?
Yes, of course.
If yes, doesn't that contradict TCP being a stream protocol ...
No it doesn't.
Your cases concern arrival of IP packets into a host. TCP being a stream protocol is about delivery of data into an application.
The packet fragments get reassembled in the correct order by the IP layer, and the packets get reassembled into segments in the correct order by TCP, and the now correctly ordered data stream is delivered to the application.
Related
My firewall textbook says: "UDP breaks a message into numbered segments so that it can be transmitted."
My understanding was that UDP had no sequence numbers or other numbering scheme, and that data was broken into packets and sent out with no ordered reconstruction on the other end, at least at this level. Am I missing something?
The book is just wrong here. The relevant section says:
User Datagram Protocol (UDP)—This protocol is similar to TCP in that it handles the addressing of a message. UDP breaks a message into numbered segments so that it can be transmitted. It then reassembles the message when it reaches the destination computer.
UDP does not include any mechanism to segment or reassemble messages; each message is sent as a single UDP datagram. If you look at the UDP "packet" (technically datagram) structure on page 108, there's no segment number or anything like that.
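For reference, here is a small sketch of the UDP header layout from RFC 768 (a toy helper, not production code): just four 16-bit fields, with no sequence or segment number anywhere.

    import struct

    # UDP header (RFC 768): source port, destination port, length, checksum.
    # Each field is 16 bits; there is no room for any kind of segment number.
    def build_udp_header(src_port: int, dst_port: int, length: int, checksum: int = 0) -> bytes:
        return struct.pack("!HHHH", src_port, dst_port, length, checksum)

    header = build_udp_header(12345, 53, 8 + 33)  # length covers header + payload
    assert len(header) == 8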
Mind you, segmentation can happen at other layers, either above or below UDP:
IP packets can be fragmented if they're too big for a network link's MTU (maximum transfer unit). This can happen to IP packets that contain UDP, TCP, or whatever. This is actually relevant for firewalls because creative fragmentation can sometimes be used to bypass packet filtering rules.
Some protocols that run on top of UDP also use something like numbered segments. For example, TFTP (trivial file transfer protocol) breaks files into "blocks", and transmits a block number in the header for each block. (And the receiver responds acknowledging the block number it's received -- it's like a drastically simplified version of TCP.) But this is part of the TFTP protocol, not part of UDP.
QUIC is another example of a protocol that runs over UDP and supports segmentation (and multiple connections, and...), and each packet contains a packet number. But again it's part of the QUIC protocol, not UDP.
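As a rough illustration of the TFTP/QUIC idea (a toy scheme, not the real TFTP wire format), an application can add its own block numbering on top of UDP, for example:

    import socket
    import struct

    # Toy, TFTP-inspired sketch: each datagram carries a 2-byte block number so
    # the receiver itself can detect loss or reordering, since UDP cannot.
    def send_blocks(sock: socket.socket, addr, data: bytes, block_size: int = 512) -> None:
        for i in range(0, len(data), block_size):
            block_no = i // block_size + 1
            sock.sendto(struct.pack("!H", block_no) + data[i:i + block_size], addr)

    def parse_block(datagram: bytes) -> tuple[int, bytes]:
        block_no, = struct.unpack("!H", datagram[:2])
        return block_no, datagram[2:]

Again, the numbering lives entirely in the application's payload; UDP itself neither adds nor understands it.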
I've been doing some work with C# Networking using UDP. I'm getting on fine but need the answer to a couple of fundamental questions I'm having problems testing:
Currently I'm sending data in ~16000-byte datagrams, which according to Wireshark are getting split into several 1500-byte packets (because of the max packet size limit) and then reassembled at the other end.
Am I right in understanding that the datagram will be received complete at the other end or not at all, i.e. it's an all-or-nothing thing? There is no chance of ending up with a fragmented datagram due to packet loss?
Therefore, I only need to ACK per datagram, rather than ensuring my datagrams are < 1500 bytes and ACK each one?
I've looked in a lot of places but there seems to be a lot of confusion about the difference between datagrams and the underlying packets...
Thanks for your help!
There is no chance of ending up with a fragmented datagram due to packet loss?
I believe that's true: fragmentation and fragment reassembly are handled by the protocol layer below UDP, i.e. the "IP" layer, which will report an error if it fails to reassemble the fragments into a datagram (for example, search for "fragment" in RFC 792).
http://www.pcvr.nl/tcpip/udp_user.htm#11_5 says,
"The IP layer at the destination performs the reassembly. The goal is to make fragmentation and reassembly transparent to the transport layer (TCP and UDP), which it is, except for possible performance degradation."
As you may know, the 16-bit UDP length field means you can send at most 65535 bytes in one datagram. In practice the payload can theoretically be at most 65535 - (sizeof(IP header) + sizeof(UDP header)) = 65535 - (20 + 8) = 65507 bytes.
But this does not mean that all applications using UDP will send that much data; for example, classic DNS over UDP limits packets to 512 bytes. Since there are no ACK packets from the server, large datagrams are one reason packets may get lost in the network (transmission problems and loss). Secondly, intermediate nodes may encapsulate datagrams inside another protocol, as IPSEC and others do.
With UDP there are no ACK packets, so in your case, if the underlying application uses UDP, you should not see any ACKs. Also, some servers limit the maximum UDP datagram size depending on the application, so for a client-to-server transfer you should see the same number of bytes, e.g. 512 bytes, going out and coming back in Wireshark. Usually the source makes the request and the destination sends X-byte UDP datagrams back.
These links may be good for your questions:
Wireshark UDP analysis
RFC 1122 (states that 576 is the minimum maximum reassembly buffer size)
Am I right in understanding that the datagram will be received complete at the other end or not at all, i.e. it's an all-or-nothing thing? There is no chance of ending up with a fragmented datagram due to packet loss?
That is correct.
Therefore, I only need to ACK per datagram, rather than ensuring my datagrams are < 1500 bytes and ACK each one?
I don't understand this question. You need to ACK each datagram regardless of its size, and you should also keep them < 1500 bytes so they don't get fragmented. Otherwise you may never manage to deliver a particular datagram at all, if it repeatedly gets fragmented and one of its fragments repeatedly gets lost.
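For reference, assuming a typical 1500-byte Ethernet MTU, a 20-byte IP header with no options, and the 8-byte UDP header, the largest datagram that avoids fragmentation carries 1500 - 20 - 8 = 1472 bytes of payload.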
Using the TCP/IP protocol, given a connection between a client and a server, are the packets sent by the client to the server always received in the same order they were sent?
For example, if the client sends 3 packets of data, A, B and C, will the server always receive A first followed by B and C or is it possible for the server to receive C first, followed by A and B?
At IP level, packets may arrive in any order (if they arrive). At TCP level, the data stream is guaranteed to be ordered in the same manner on both ends.
That means yes, the server will always receive A then B then C. As long as you are using TCP.
When using TCP, data is received by the destination application in the same order as it is sent by the source application.
See the following for more details:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Data_transfer
TCP is a transmission protocol, and it transmits data by sending the data out in IP packets over the underlying IP network. TCP is responsible for ensuring the correct transmission of the data, which includes ordering the arriving packets, re-requesting missing ones and discarding duplicates.
TCP as such does not expose any notion of "packet" to the user; the fact that the data is chunked into IP packets is a detail of the "over IP" implementation. A different implementation, e.g. TCP-over-bicycle-courier, might employ an entirely different scheme.
It cannot happen that you receive data in a different order on the application side over a TCP socket.
It may happen that packets are received in a different order by the networking layer of the OS, but TCP requires that the upper layers get the data in order. It is the OS's role to ask again for missing pieces, reassemble them, and so on. So you need not worry.
UDP, on the other hand, offers no such guarantee.
The server (that is, the physical NIC of the machine) might receive them in any order. Your OS might again receive them in any order - that will mostly (but not always) be the order of physical reception. Your application is guaranteed to receive them in the correct order; that's a property of TCP.
In general, packets will be received in the same order they are transmitted. But the network may drop or reorder packets. For example, packets may take different routes and arrive out of order. Packets may be lost or even duplicated on the network. The TCP implementation is responsible for retransmitting packets that are lost, acknowledging packets that are received, ignoring duplicated packets, all with the objective of accurately reconstructing the transmitted byte stream at the receiver.
At the application level, you send a stream of bytes and receive a stream of bytes. TCP does whatever is needed to ensure the received stream of bytes is identical to the sent stream of bytes, regardless of what happens to the packets on the network.
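A minimal sketch of that byte-stream guarantee in Python (loopback, ephemeral port): the sender writes A, B and C in three separate calls, and the receiver always reads them back in that order, even though recv() boundaries need not match the send() boundaries.

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    for chunk in (b"AAAA", b"BBBB", b"CCCC"):
        cli.sendall(chunk)

    conn, _ = srv.accept()
    received = b""
    while len(received) < 12:
        received += conn.recv(4096)
    print(received)  # always b"AAAABBBBCCCC", however the bytes were packetized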
So I'm having trouble finding a source that describes whether the TCP packet is the payload of the IP datagram or vice versa. I imagine the TCP packet must be the payload, because presumably a router can divide the IP datagram, thereby splitting up the TCP packet, and then the final router would have to reassemble them. Am I right?
If by "payload" you're referring to the data that comes after an IP header, then TCP is the "payload" of an IP packet when receiving data, since it's an upper level protocol.
The proper term for networking is actually encapsulation though.
It basically works by adding successive layers of protocol as the information travels down from the application to the wire. After transmission, the packets are reassembled and error-checked, the headers are stripped off, and what you are referring to as the "payload" becomes the next chunk of information to be checked. Once all of the outer protocol layers are stripped off, the server/client has the information that directly corresponds to what the application sent.
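Here's a toy illustration of that layering (the header contents are placeholders, not the real TCP/IP formats): each layer prepends its own header on the way down and strips it off again on the way up.

    # Toy encapsulation sketch; the header bytes are made-up placeholders.
    def wrap(payload: bytes, header: bytes) -> bytes:
        return header + payload

    app_data = b"GET / HTTP/1.1\r\n\r\n"
    tcp_segment = wrap(app_data, b"[TCP hdr]")   # TCP segment carries the app data
    ip_packet = wrap(tcp_segment, b"[IP hdr]")   # IP packet carries the TCP segment
    frame = wrap(ip_packet, b"[ETH hdr]")        # Ethernet frame carries the IP packet

    # Receiving side peels the layers back off, outermost first.
    assert frame.startswith(b"[ETH hdr]") and frame.endswith(app_data)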
TCP/IP is a pair of important protocols. TCP is connection-oriented, while IP is a connectionless protocol. IP provides a logical address, which serves as the packet's address: a packet carries the address of its destination. TCP works with this logical address, helps the packets reach their destinations, and provides acknowledgement when a packet has reached its destination.
Why is the IP called a connectionless protocol? If so, what is the connection-oriented protocol then?
Thanks.
Update - 1 - 20:21 2010/12/26
I think that to better answer my question, it would help to explain what "connection" actually means, both physically and logically.
Update - 2 - 9:59 AM 2/1/2013
Based on all the answers below, I have come to feel that the 'connection' mentioned here should be considered a set of actions/arrangements/disciplines. Thus it's more an abstract concept than a concrete object.
Update - 3 - 11:35 AM 6/18/2015
Here's a more physical explanation:
The IP protocol is connectionless in that all packets in an IP network are routed independently and may not necessarily travel the same route, whereas in a virtual-circuit network, which is connection-oriented, all packets go through the same route. That single route is what 'virtual circuit' means.
With a connection, because there is only one route, all data packets arrive in the same order as they were sent.
Without a connection, there is no guarantee that all data packets will arrive in the same order as they were sent.
Update - 4 - 9:55 AM 2016/1/20/Wed
One of the characteristics of connection-oriented protocols is that packet order is preserved. TCP uses sequence numbers to achieve that, but IP has no such facility. Thus TCP is connection-oriented while IP is connectionless.
The basic idea is pretty simple: with IP (on its own -- no TCP, UDP, etc.) you're just sending a packet of data. You simply send some data onto the net with a destination address, but that's it. By itself, IP gives:
no assurance that it'll be delivered
no way to find out if it was
nothing to let the destination know to expect a packet
nor much of anything else
All it does is specify a minimal packet format so you can get some data from one point to another (e.g., routers know the packet format, so they can look at the destination and send the packet on its next hop).
TCP is connection oriented. Establishing a connection means that at the beginning of a TCP conversation, it does a "three way handshake" so (in particular) the destination knows that a connection with the source has been established. It keeps track of that address internally, so it can/will/does expect more packets from it, and be able to send replies to (for example) acknowledge each packet it receives. The source and destination also cooperate to serial number all the packets for the acknowledgment scheme, so each end knows whether packets it sent were received at the other end. This doesn't involve much physically, but logically it involves allocating some memory on both ends. That includes memory for metadata like the next packet serial number to use, as well as payload data for possible re-transmission until the other side acknowledges receipt of that packet.
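From an application's point of view, that handshake and bookkeeping are performed by the OS; a minimal Python sketch (loopback, ephemeral port) shows where it happens:

    import socket

    # The three-way handshake (SYN, SYN-ACK, ACK) is carried out by the kernel;
    # the application only sees connect() succeed on one side and accept()
    # return on the other.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())   # SYN sent, SYN-ACK awaited, ACK sent

    conn, peer = server.accept()           # returns once the handshake has completed
    print("connected to", peer)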
TCP/IP means "TCP over IP".
TCP
--
IP
TCP provides the "connection-oriented" logic, ordering and control
IP provides getting packets from A to B however it can: "connectionless"
Notes:
UDP is connectionless but at the same level as TCP
Other protocols such as ICMP (used by ping) can run over IP but have nothing to do with TCP
Edit:
"connection-oriented" mean established end to end connection. For example, you pick up the telephone, call someone = you have a connection.
"connection-less" means "send it, see what happens". For example, sending a letter via snail mail.a
So IP gets your packets from A to B, maybe, in any order, not always eventually. TCP sorts them out, acknowledges them, requests a resends and provides the "connection"
Connectionless means that no effort is made to set up a dedicated end-to-end connection, while connection-oriented means that when devices communicate, they perform handshaking to set up an end-to-end connection.
IP is an example of a connectionless protocol. With this kind of protocol you usually send information in one direction, from source to destination, without checking whether the destination is still there or whether it is prepared to receive the information. Connectionless protocols (like IP and UDP) are used, for example, for video conferencing, where you don't care if some packets are lost, whereas you have to use a connection-oriented protocol (like TCP) when you send a file, because you want to ensure that all the packets arrive successfully (in practice we use FTP to transfer files).
Edit:
In telecommunication and computing in general, a connection is the successful completion of necessary arrangements so that two or more parties (for example, people or programs) can communicate at a long distance. In this usage, the term has a strong physical (hardware) connotation, although logical (software) elements are usually involved as well.
The physical connection is layer 1 of the OSI model, and is the medium through which the data is transferred, i.e., cables.
The logical connection is layer 3 of the OSI model, and is the network portion. Using the Internetwork Protocol (IP), each host is assigned a 32-bit IP address, e.g. 192.168.1.1.
TCP is the connection part of TCP/IP. IP's the addressing.
Or, as an analogy, IP is the address written on the envelope, TCP is the postal system which uses the address as part of the work of getting the envelope from point A to point B.
When two hosts want to communicate using a connection-oriented protocol, one of them must first initiate a connection and the other must accept it. Logically, a connection is made between a port on one host and a port on the other host. Software on one host must perform a connect socket operation, and the other must perform an accept socket operation. Physically, the initiating host sends a SYN packet, which contains all four connection-identifying numbers (source IP, source port, destination IP, destination port). The other host receives it and sends a SYN-ACK, the initiator sends an ACK, and the connection is established. Once the connection is established, data can be transferred in both directions.
On the other hand, a connectionless protocol means that we don't need to establish a connection to send data: the very first packet sent from one host to another can already contain a data payload. Of course, for upper-layer protocols such as UDP, the recipient must be ready first, e.g. it must have a UDP socket bound and waiting to receive.
The connectionless IP is the foundation for TCP in the layer above.
In TCP, at a minimum two round-trip times are required to send just one packet of data: a->b for SYN, b->a for SYN-ACK, a->b for ACK with DATA, b->a for ACK. For flow-rate control, Nagle's algorithm is applied here.
In UDP, only half a round-trip time is required: a->b with DATA. But be prepared that some packets may be silently lost and that no flow control is performed; packets could be sent at a rate greater than the receiving system can handle.
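A minimal sketch of that difference in Python (the port 50007 and the presence of a listening server are assumptions for illustration): the UDP sender's very first packet carries data, while the TCP sender must complete the handshake before any data flows.

    import socket

    # UDP: no setup, no delivery guarantee; data goes out in the first packet.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("127.0.0.1", 50007))

    # TCP: connect() performs the three-way handshake first (this assumes a
    # server is actually listening on 127.0.0.1:50007), then data can be sent.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("127.0.0.1", 50007))
    tcp.sendall(b"hello")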
To my knowledge, every layer makes a fool of the one above it. TCP gets an HTTP message from the application layer and breaks it into packets; let's call them data packets. IP takes these packets one by one from TCP and throws them towards the destination; it also collects incoming packets and delivers them to TCP. Now, TCP, after sending a packet, waits for an acknowledgement packet from the other side. When it arrives, TCP tells the layer above: hey, I have established a connection and now we can communicate! The whole communication process goes on between the TCP layers on both sides, which send and receive different kinds of packets with each other (data packets, acknowledgement packets, synchronization packets, and so on). TCP uses other tricks (all involving packet sending) to ensure the actual data packets are delivered and reassembled in the same order in which they were broken up. After reassembling them, it passes them to the application layer above. That fool thinks it has received an HTTP message over an established connection, but in reality only packets are being transferred.
I just came across this question today. It was bouncing around in my head all day and didn't make any sense. IP doesn't handle transport. Why would anyone even think of IP as connectionless or connection-oriented? It is technically connectionless because it offers no reliability and no guaranteed delivery. But so is my toaster. My toaster offers no guaranteed delivery, so why not call a toaster connectionless too?
In the end, I found out it's just some stupid title that someone somewhere attached to IP and it stuck, and now everyone calls IP connectionless and has no good reason for it.
Calling IP connectionless implies there is another layer 3 protocol that is connection oriented, but as far as I know, there isn't and it is just plain stupid to specify that IP is connectionless. MAC is connectionless. LLC is connectionless. But that is useless, technically correct info.