When a packet goes out, libpcap timestamps the packet, but where does the timestamp of the packet reside, i.e. does it reside in the packet data?
If the same packet is then received on the other side, will the timestamp applied at the transmitting side be overwritten by libpcap at the receiving side?
libpcap does not write a timestamp into outgoing packets. On the transmit side, timestamping can be done as part of some network protocol. For example, with TCP one can use the Timestamp option (RFC 1323). If the TCP timestamp option is enabled, the outgoing packets will most likely be timestamped by the network stack.
On the receive side, libpcap receives the packet from the OS and will rely on the kernel to give it a valid timestamp. The kernel will get the timestamp from either the network interface driver or the networking stack.
The receive timestamp is not part of the packet data and hence won't overwrite the sender's timestamp, which (as in the TCP case) is part of the received packet itself.
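To make the "where does it reside" part concrete: in a libpcap savefile the receive timestamp sits in the per-packet record header written by the capture mechanism, not anywhere in the captured packet bytes. A minimal Python sketch that prints those timestamps, assuming the classic pcap file format (the file name is only a placeholder):

    import struct

    # Read per-packet capture timestamps from a classic libpcap savefile.
    # The timestamp lives in the 16-byte record header that the capture
    # mechanism writes, not in the packet bytes themselves.
    def dump_capture_timestamps(path="capture.pcap"):
        with open(path, "rb") as f:
            global_header = f.read(24)            # magic, version, snaplen, linktype, ...
            magic = struct.unpack("<I", global_header[:4])[0]
            endian = "<" if magic in (0xA1B2C3D4, 0xA1B23C4D) else ">"
            while True:
                rec = f.read(16)                  # per-record (per-packet) header
                if len(rec) < 16:
                    break
                ts_sec, ts_frac, incl_len, orig_len = struct.unpack(endian + "IIII", rec)
                pkt = f.read(incl_len)            # raw packet bytes: no capture timestamp in here
                print("captured at %d.%06d, %d bytes on the wire" % (ts_sec, ts_frac, orig_len))

    dump_capture_timestamps()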
Hope that answers your question.
In TCP/IP, we have MSS and MTU when sending and receiving packets.
MTU is an IP layer concept, determined by the underlying link hardware. It is the maximum amount of data that one IP packet can carry in a single transmission.
MSS is a TCP layer concept, limited by the MTU: the TCP data stream will be split into segments of at most MSS bytes.
Our protocol sits on top of TCP, and each such protocol defines its own packet. One example is MySQL, which defines its packet size as up to 2^24-1 bytes, i.e. around 16 MB. When a protocol packet that large is handed to TCP, it will be split up according to the MSS.
Assume that a client needs to send DATA1 and DATA2 to the server. DATA2 is bigger than the MSS, so DATA2 will be split into DATA2_1 and DATA2_2. Since the packets are handled by the IP layer, the order in which they arrive at the server might not be the order in which the client sent them.
So I think the packets might arrive in any of the following sequences:
DATA1+DATA2_1, DATA2_2
DATA1, DATA2_1, DATA2_2
DATA1, DATA2_2, DATA2_1
In the first case, the server receives DATA1 and DATA2_1 in one TCP packet, and then another packet containing DATA2_2 arrives.
In the second case, the server receives DATA1, DATA2_1 and DATA2_2 in three packets.
In the third case, the server first receives DATA2_2 and then DATA2_1.
My question:
Is the third case possible?
If yes, doesn't that contradict the fact that TCP is a stream protocol, and that a stream should be ordered? And even if it does not break the stream rule, how is this scenario handled?
If no, how does TCP put the out-of-order packets back into their original order?
It is possible to receive that sequence over the network; however, the TCP implementation will hide that detail from your application and only feed the data to you in stream order. (In fact, since fragmentation happens at the IP layer, the packet won't even be shown to the TCP layer until the second fragment has also arrived.)
The fact that, in cases like this, received packets have to be held in a buffer before they can be delivered is why you will see people refer to UDP as better for low-latency applications: with UDP you can receive datagrams out of order, and it's up to your application to figure out how to deal with that possibility.
Is the third case possible?
Yes, of course.
If so, it disobeys that TCP is a stream protocol ...
No it doesn't.
Your cases concern arrival of IP packets into a host. TCP being a stream protocol is about delivery of data into an application.
The IP layer reassembles the packet fragments in the correct order, TCP puts the segments back into the correct order, and the now correctly ordered data stream is delivered to the application.
I am preparing for my university exam, and one of the questions last year was "how to make UDP multicast reliable?" (like TCP: retransmission of lost packets).
I thought about something like this:
The server sends the multicast using UDP
Every client sends an acknowledgement of the packets it received (using TCP)
If the server realizes that not everyone received the packets, it resends them via multicast, or via unicast to the particular clients
The problem is that there might be one client that usually loses packets and forces the server to resend.
Is this a good approach?
Every client sends an acknowledgement of the packets it received (using TCP)
Sending an ACK for each packet, and using TCP to do so, is not scalable to a large number of receivers. Using a NACK based scheme is more efficient.
Each packet sent from the server should have a sequence number associated with it. As clients receive them, they keep track of which sequence numbers were missed. If packets are missed, a NACK message can then be sent back to the server via UDP. This NACK can be formatted as either a list of sequence numbers or a bitmap of received / not received sequence numbers.
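A rough sketch of that receiver-side bookkeeping, in Python; the packet layout (a 4-byte sequence number in front of the payload) and the NACK destination address are assumptions for illustration, not taken from any particular protocol:

    import socket
    import struct

    SERVER_NACK_ADDR = ("192.0.2.1", 5001)   # placeholder unicast address for NACKs

    def receive_loop(sock):
        expected = 0
        missing = set()
        while True:
            data, _ = sock.recvfrom(65535)
            seq = struct.unpack("!I", data[:4])[0]
            if seq > expected:
                missing.update(range(expected, seq))   # everything in between was missed
            missing.discard(seq)
            expected = max(expected, seq + 1)
            if missing:
                send_nack(sorted(missing))

    def send_nack(missing_seqs):
        # NACK formatted as a plain list of 4-byte sequence numbers;
        # a bitmap relative to a base sequence number would be more compact.
        payload = struct.pack("!%dI" % len(missing_seqs), *missing_seqs)
        nack_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        nack_sock.sendto(payload, SERVER_NACK_ADDR)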
If the server realizes that not everyone received the packets, it resends them via multicast, or via unicast to the particular clients
When the server receives a NACK it should not immediately resend the missing packets but wait for some period of time, typically a multiple of the GRTT (Group Round Trip Time -- the largest round trip time among the receiver set). That gives it time to accumulate NACKs from other receivers. Then the server can multicast the missing packets so any clients missing them can receive them.
If this scheme is being used for file transfer as opposed to streaming data, the server can alternately send the file data in passes. The complete file is sent on the first pass, during which any NACKs that are received are accumulated and packets that need to be resent are marked. Then on subsequent passes, only retransmissions are sent. This has the advantage that clients with lower loss rates will have the opportunity to finish receiving the file while high loss receivers can continue to receive retransmissions.
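A corresponding server-side sketch, accumulating NACKs for a couple of GRTTs before multicasting the repairs; the group address, timings, and packet bookkeeping are illustrative assumptions:

    import socket
    import struct
    import time

    MCAST_GROUP = ("239.1.2.3", 5000)   # placeholder multicast group
    GRTT = 0.2                          # assumed group round-trip time, in seconds

    def repair_loop(nack_sock, mcast_sock, sent_packets):
        # sent_packets: dict mapping sequence number -> original datagram bytes
        pending = set()
        deadline = None
        nack_sock.settimeout(0.05)
        while True:
            try:
                data, _ = nack_sock.recvfrom(65535)
                seqs = struct.unpack("!%dI" % (len(data) // 4), data)
                pending.update(seqs)
                if deadline is None:
                    deadline = time.monotonic() + 2 * GRTT   # wait to collect more NACKs
            except socket.timeout:
                pass
            if deadline is not None and time.monotonic() >= deadline:
                for seq in sorted(pending):
                    mcast_sock.sendto(sent_packets[seq], MCAST_GROUP)  # one repair serves everyone
                pending.clear()
                deadline = None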
The problem is that there might be one client that usually loses packets and forces the server to resend.
For very high loss clients, the server can set a threshold for the maximum percentage of packets missed. If a client sends back NACKs in excess of that threshold one or more times (how many times is up to the server), the server can drop that client and either not accept its NACKs or send a message to that client informing it that it was dropped.
There are a number of protocols which implement these features:
UFTP - Encrypted UDP based FTP with multicast (disclosure: author)
NORM - NACK-Oriented Reliable Multicast
PGM - Pragmatic General Multicast
UDPCast
Relevant RFCs:
RFC 4654 - TCP-Friendly Multicast Congestion Control (TFMCC): Protocol Specification
RFC 5401 - Multicast Negative-Acknowledgment (NACK) Building Blocks
RFC 5740 - NACK-Oriented Reliable Multicast (NORM) Transport Protocol
RFC 3208 - PGM Reliable Transport Protocol Specification
To make UDP reliable, you have to handle a few things yourself (i.e., implement them in your application):
Connection handling: the connection between the sending and receiving processes can drop. Most reliable implementations send keep-alive messages to maintain the connection between the two ends.
Sequencing: messages need to be split into chunks and given sequence numbers before sending.
Acknowledgement: after each message is received, an ACK message needs to be sent to the sending process. These ACK messages can also be sent over UDP; they don't have to go over TCP. The receiving process might realise that it has lost a message. In that case, it stops delivering messages from the holdback queue (the queue that holds received messages, like a waiting room for messages) and requests a retransmission of the missing message (see the holdback-queue sketch after this list).
Flow control: Throttle the sending of data based on the abilities of the receiving process to deliver the data.
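A small Python sketch of the holdback queue mentioned under "Acknowledgement": out-of-order messages wait until the gap in front of them is filled, and a retransmission is requested for the missing sequence number. The request_retransmit callback is a placeholder for however the application asks the sender to resend:

    class HoldbackQueue:
        def __init__(self, request_retransmit):
            self.next_to_deliver = 0
            self.held = {}                    # seq -> message waiting to be delivered
            self.request_retransmit = request_retransmit

        def receive(self, seq, message):
            if seq < self.next_to_deliver or seq in self.held:
                return []                     # duplicate: drop it
            self.held[seq] = message
            delivered = []
            # deliver as long as the next expected message is present
            while self.next_to_deliver in self.held:
                delivered.append(self.held.pop(self.next_to_deliver))
                self.next_to_deliver += 1
            if self.held:                     # a gap remains: ask for the missing message
                self.request_retransmit(self.next_to_deliver)
            return delivered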
Usually there is a leader of a group of processes; each group normally has a leader and a view of the entire group membership. This is called virtual synchrony.
From what I have read about UDP, it has no error handling, no checking for things like the sequence of data sent/received, no checking for duplicate packets, no checking for corrupt packets, and obviously no guarantee that the packets sent are even received...
So with that in mind, why on earth is there actually an option to use checksums in UDP? Because surely if you want to make sure the data being sent is received in the correct order (and is not corrupt and so on) then you would use TCP...
UDP packets include a field for a 16-bit checksum (a ones' complement sum, not a CRC) which the receiving operating system will use to check for packet corruption. If the checksum is present and fails, then the packet will be silently discarded. It is up to the application to notice that the packet disappeared and take corrective action.
UDP checksums are enabled by default on all modern operating systems. It is possible to disable UDP checksums in IPv4, either at the socket or OS level. Doing so would reduce the CPU overhead of processing each packet at both the sender and receiver. This might be desirable if, for example, the application were calculating its own checksum separately. Without any checksum, there would be no guarantee that the bytes received are the same as the bytes sent.
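For reference, the UDP checksum (RFC 768) is the ones' complement of a 16-bit ones' complement sum taken over an IPv4 pseudo-header, the UDP header with the checksum field zeroed, and the payload. A small Python sketch (the addresses and ports are example values only):

    import struct

    def ones_complement_sum(data):
        if len(data) % 2:
            data += b"\x00"                               # pad to an even number of bytes
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total > 0xFFFF:
            total = (total & 0xFFFF) + (total >> 16)      # fold the carries back in
        return total

    def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload):
        udp_len = 8 + len(payload)
        pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, udp_len)   # protocol 17 = UDP
        header = struct.pack("!HHHH", src_port, dst_port, udp_len, 0)    # checksum field = 0
        csum = 0xFFFF ^ ones_complement_sum(pseudo + header + payload)
        return csum or 0xFFFF      # a computed 0 is sent as 0xFFFF, since 0 means "no checksum"

    print(hex(udp_checksum(bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]), 12345, 53, b"hello")))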
The task of UDP is to transport datagrams, which are "network data packets". For UDP, every data packet is a transmission of its own. If you send 3 packets, those are three independent transmissions for UDP. Whether the content of these 3 packets somehow belongs together or if these are three individual requests (think of DNS requests, where every request is sent as an own UDP packet), UDP doesn't know and doesn't care. All that UDP guarantees is that a packet is either transmitted as a whole or not at all; either the entire packet arrives or the entire packet is lost, you will never see "half of a packet" arriving. So if you just want to send a bunch of data packets, you use UDP.
The task of TCP, on the other hand, is to transport a stream of data. It's not about packets; it's about a stream of bytes somehow making it from one host to another. How this happens, e.g. how TCP breaks the data stream into chunks, sends those chunks over the network, and ensures that no data is lost and all data stays in order, is up to TCP. All that TCP guarantees is that the bytes will arrive correctly and in order at the other side, unless the TCP connection is lost, in which case the stream ends abruptly somewhere in the middle, but all data that arrived up to that point did arrive correctly and in the correct order. So even though TCP also works with packets, the transmission behaves like a stream that has no internal "data units". When sending 80 bytes over TCP, there may be one packet with 80 bytes, or 10 packets with 8 bytes each, or anything in between; you cannot know and you don't have to.
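A tiny sketch of that last point: one send() on a TCP socket does not map to one recv() on the other side, so the receiver loops until it has all the bytes it expects. The loopback setup and sizes here are arbitrary:

    import socket

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    cli.sendall(b"x" * 80)            # one 80-byte "unit" from the application's point of view
    received = b""
    while len(received) < 80:
        received += conn.recv(8)      # may arrive in any number of pieces of any size
    print("got all 80 bytes, in whatever chunks they were delivered")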
But just because you use UDP doesn't mean you don't care about data corruption in UDP packets. Keep in mind that corruption may not just affect your data, it may also affect the UDP header itself. If only a single bit flips, the UDP packet may end up with an incorrect destination port. So they added a checksum which ensures that neither the UDP header nor the data payload has been corrupted, but made it optional, so it's up to you whether you want to use it or not. If used, corrupt packets are dropped and thus behave like lost packets. If your code takes care of lost packets, it will automatically take care of corrupt packets, too.
With IPv6 though, the checksum was dropped from the IP header, which means that IP header corruptions are no longer detected. But this was seen as a small problem, as most layer 2 protocols have their own mechanism to detect corrupt data (e.g. Ethernet and WiFi already guarantee that data is not corrupted on its way through the network) and the checksums of UDP/TCP also cover some of the IP header fields, so even without layer 2 error checking, the recipient would notice if the IP addresses in the header have been corrupted along the way and drop the packet. As a consequence, the UDP checksum is no longer optional with IPv6.
How does TCP know which is the last packet of a large file (one that was segmented by TCP) in the scenario where the connection is kept established (like FTP, or sending an MP3 over Yahoo Messenger)?
I mean, how does it know which packet carries data from one.mp3 and which packet carries data from another.mp3?
Anyone ?
Thank you
There are at least 2 possible approaches.
Declare up front how much data you're going to send, e.g. a header that declares "sending a message that's 4008 bytes long" (see the length-prefix sketch after this list).
The second approach is to use a terminating sequence (nastier to process).
So the receiver:
Tries to read the declared amount or
Scans for the terminating sequence
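A minimal Python sketch of the length-prefix approach (the 4-byte prefix format is an assumption, not a standard):

    import socket
    import struct

    def send_message(sock, payload):
        # prepend a 4-byte big-endian length so the receiver knows where the message ends
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_message(sock):
        (length,) = struct.unpack("!I", recv_exact(sock, 4))   # read the declared size first
        return recv_exact(sock, length)                        # then exactly that many bytes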
TCP is a stream protocol, and fragmentation should be transparent to a TCP application. It operates on streams of data, never packets. A stream is reassembled in its intended order using the sequence numbers. The sequence of bytes sent by the application is encapsulated in TCP segments. The stream is recreated on the receiver side before data is delivered to the application.
The IP protocol can do fragmentation.
Each TCP segment goes to the IP layer and may be fragmented there. The segment is reassembled by collecting all of the fragments, and the offset field from the IP header is used to put each one in the right place.
Using the TCP/IP protocol, given a connection between a client and a server, are the packets sent by the client to the server always received in the same order they were sent?
For example, if the client sends 3 packets of data, A, B and C, will the server always receive A first followed by B and C or is it possible for the server to receive C first, followed by A and B?
At IP level, packets may arrive in any order (if they arrive). At TCP level, the data stream is guaranteed to be ordered in the same manner on both ends.
That means yes, the server will always receive A, then B, then C, as long as you are using TCP.
When using TCP, data is received by the destination application in the same order as it is sent by the source application.
See the following for more details:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Data_transfer
TCP is a transmission protocol, and it transmits data by sending the data out in IP packets over the underlying IP network. TCP is responsible for ensuring the correct transmission of the data, which includes ordering the arriving packets, re-requesting missing ones and discarding duplicates.
TCP as such does not expose any notion of "packet" to the user; the fact that the data is chunked into IP packets is a detail of the "over IP" implementation. A different implementation, e.g. TCP-over-bicycle-courier, might employ an entirely different scheme.
It cannot happen that you receive data in a different order on the application side over a TCP socket.
It may happen that packets are received in a different order by the networking layer of the OS, but TCP makes it a requirement that the upper levels get the data in order. It is the OS's role to ask again for missing fragments, reassemble them, and so on. So you need not worry.
UDP, on the other hand, offers no such guarantee.
The server (that is, the physical NIC of the machine) might receive them in any order. Your OS might again receive them in any order; that will mostly (but not always) be the order of physical reception. Your application is guaranteed to receive them in the correct order; that's a property of TCP.
In general, packets will be received in the same order they are transmitted. But the network may drop or reorder packets. For example, packets may take different routes and arrive out of order. Packets may be lost or even duplicated on the network. The TCP implementation is responsible for retransmitting packets that are lost, acknowledging packets that are received, ignoring duplicated packets, all with the objective of accurately reconstructing the transmitted byte stream at the receiver.
At the application level, you send a stream of bytes and receive a stream of bytes. TCP does whatever is needed to ensure the received stream of bytes is identical to the sent stream of bytes, regardless of what happens to the packets on the network.