How to identify the retransmitted TCP segment in Wireshark?

I am confused about how to identify retransmitted TCP segments among the captured segments in Wireshark. Is there any indication in Wireshark that a segment is a retransmission?

For Wireshark to identify a segment as a retransmitted one, it has to see both packets (the original and the retransmission) in the pcap file.
If, for example, you sniff at the receiving endpoint, you might only see the retransmitted instance of a certain packet (as sometimes, though not always, the retransmission happens because the packet never arrived at its destination). In that case Wireshark will only see one instance of the packet and won't know it was retransmitted.
If you do have both packets in the pcap file (e.g. in the example above, but sniffing at the source of the packet), Wireshark will identify it.
The way Wireshark marks TCP retransmissions varies between versions, but recent ones usually color them black by default (and in any case, you can always check the "expert info" field under TCP).
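
If you want to pull those segments out programmatically rather than by eye, Wireshark's dissection engine is also exposed through tshark, which the Python pyshark package wraps. A minimal sketch, assuming pyshark is installed and your capture is in a file called capture.pcap (a hypothetical name):

# Minimal sketch: list the segments that Wireshark's expert analysis flags
# as retransmissions. Assumes the pyshark package (a tshark wrapper) is
# installed and that "capture.pcap" is your capture file.
import pyshark

cap = pyshark.FileCapture(
    "capture.pcap",
    display_filter="tcp.analysis.retransmission",  # Wireshark's own heuristic
)
for pkt in cap:
    print(f"frame {pkt.number}: {pkt.ip.src} -> {pkt.ip.dst}, seq {pkt.tcp.seq}")
cap.close()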

Related

UDP numbered segments?

My firewall textbook says: "UDP breaks a message into numbered segments so that it can be transmitted."
My understanding was that UDP has no sequence numbers or any other numbering scheme, and that data is broken into packets and sent out with no ordered reconstruction on the other end, at least at this level. Am I missing something?
The book is just wrong here. The relevant section says:
User Datagram Protocol (UDP)—This protocol is similar to TCP in that it handles the addressing of a message. UDP breaks a message into numbered segments so that it can be transmitted. It then reassembles the message when it reaches the destination computer.
UDP does not include any mechanism to segment or reassemble messages; each message is sent as a single UDP datagram. If you look at the UDP "packet" (technically datagram) structure on page 108, there's no segment number or anything like that.
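
To see that for yourself, the entire UDP header is just four 16-bit fields. A minimal sketch that unpacks one (the hex bytes are a made-up example header):

# The whole UDP header: source port, destination port, length, checksum.
# There is no segment number or sequence number anywhere in it.
import struct

header = bytes.fromhex("30390035000d0000")  # made-up example header
sport, dport, length, checksum = struct.unpack("!HHHH", header)
print(sport, dport, length, checksum)  # 12345 53 13 0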
Mind you, segmentation can happen at other layers, either above or below UDP:
IP packets can be fragmented if they're too big for a network link's MTU (maximum transmission unit). This can happen to IP packets that contain UDP, TCP, or whatever. This is actually relevant for firewalls because creative fragmentation can sometimes be used to bypass packet filtering rules.
Some protocols that run on top of UDP also use something like numbered segments. For example, TFTP (the Trivial File Transfer Protocol) breaks files into "blocks" and transmits a block number in the header of each block, as sketched below. (The receiver responds by acknowledging the block number it has received; it's like a drastically simplified version of TCP.) But this is part of the TFTP protocol, not part of UDP.
QUIC is another example of a protocol that runs over UDP and supports segmentation (and multiple connections, and...), and each packet contains a packet number. But again it's part of the QUIC protocol, not UDP.
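
To make the TFTP example concrete, here is a minimal sketch of a TFTP DATA packet as defined in RFC 1350: a 2-byte opcode (3 = DATA), a 2-byte block number, then up to 512 bytes of file data, all carried in a single UDP datagram. The numbering lives entirely in the TFTP payload; UDP itself neither adds nor looks at it.

import struct

TFTP_OP_DATA = 3  # opcode for a DATA packet (RFC 1350)

def tftp_data_packet(block_number: int, data: bytes) -> bytes:
    # Two big-endian 16-bit fields (opcode, block number), then the data.
    assert len(data) <= 512, "TFTP sends at most 512 data bytes per block"
    return struct.pack("!HH", TFTP_OP_DATA, block_number) + data

payload = tftp_data_packet(1, b"first block of the file")
print(payload[:4].hex(), len(payload))  # 4-byte TFTP header, then the data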

How to know the size of HTTP or HTTPS packets, excluding retransmissions

This is about the HTTP(S) and TCP protocols.
I am learning HTTP using Wireshark, and I see lots of retransmitted packets at the TCP level. I want to calculate the total size of the data sent from the client end to the server end, excluding retransmissions.
How can I get the pure size of the data I sent to the server, excluding retransmissions?
Is there any flag for retransmission in the TCP protocol?
There's no "retransmission" flag it TCP. Finding retransmissions requires analyzing the sequence numbers of sent segments, which is something wireshark does - for instance, you can use the display filter tcp.analysis.retransmission to find TCP segments that wireshark considers to be retransmissions.
To find the amount of data sent in a TCP session, right-click any segment from the session and click Follow -> TCP Stream.
This generates a display filter such as tcp.stream eq 138 and shows you the entire content of the selected TCP session, including the total amount of data exchanged, regardless of retransmissions.
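
If you'd rather compute the number programmatically, you can exploit the fact that TCP sequence numbers count each stream byte exactly once, so retransmitted bytes don't inflate the difference between the first and last sequence number. A rough sketch using the scapy package (capture.pcap, CLIENT and SERVER are hypothetical placeholders; SYN/FIN sequence-number consumption and 32-bit wraparound are ignored):

from scapy.all import rdpcap, IP, TCP

CLIENT, SERVER = "10.0.0.5", "93.184.216.34"  # hypothetical endpoints

spans = []
for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt and pkt[IP].src == CLIENT and pkt[IP].dst == SERVER:
        spans.append((pkt[TCP].seq, len(pkt[TCP].payload)))

if spans:
    first_seq = min(seq for seq, _ in spans)
    last_end = max(seq + plen for seq, plen in spans)
    # Each distinct stream byte occupies exactly one sequence number, so
    # this difference does not count retransmitted copies twice.
    print("client -> server bytes, excluding retransmissions:", last_end - first_seq)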

Packet order in TCP packet fragmentation

In TCP/IP, we have MSS and MTU when sending and receiving packets.
The MTU is an IP-layer concept, determined by the underlying hardware. It is the maximum data size that an IP-layer packet can carry in one transmission.
The MSS is a TCP-layer concept, limited by the MTU: the TCP data stream will be split into segments of at most MSS bytes.
Our protocol runs on top of TCP, and each such protocol defines its own packets. One example is MySQL, which allows packets of up to 2^24 - 1 bytes, i.e. around 16 MB. When a protocol packet that large reaches TCP, it will be split up according to the MSS.
Assume that a client needs to send DATA1 and DATA2 to a server. DATA2 is bigger than the MSS, so DATA2 will be split into DATA2_1 and DATA2_2. Since the packets are handled by the IP layer, the order in which they arrive at the server might not match the order in which the client sent them.
So I think the sequence in which the packets arrive might be any of the following:
1. DATA1 + DATA2_1, DATA2_2
2. DATA1, DATA2_1, DATA2_2
3. DATA1, DATA2_2, DATA2_1
In the first case, the server receives DATA1 and DATA2_1 in one TCP packet, and then another packet containing DATA2_2 arrives.
In the second case, the server receives DATA1, DATA2_1 and DATA2_2 in three packets.
In the third case, the server first receives DATA2_2 and then DATA2_1.
My question:
Is the third case possible?
If yes, doesn't that contradict TCP being a stream protocol, given that a stream should be ordered? And even if it does not break the stream abstraction, how is this scenario handled?
If no, how does TCP put the out-of-order packets back into their original order?
It is possible to receive that sequence over the network; however, the TCP implementation will hide that detail from your application and only feed the data to you in stream order. (In fact, since fragmentation happens at the IP layer, the packet won't even be shown to the TCP layer until the second fragment has also arrived.)
The fact that received packets sometimes have to be held in a buffer even after they arrive, as in cases like this, is why you will see people refer to UDP as better for low-latency applications: with UDP you can receive datagrams out of order, and it's up to your application to figure out how to deal with that possibility.
Is the third case possible?
Yes, of course.
If so, it disobeys that TCP is a stream protocol ...
No it doesn't.
Your cases concern the arrival of IP packets at a host. TCP being a stream protocol is about the delivery of data to an application.
The packet fragments get reassembled in the correct order by the IP layer, the segments get put back into the correct order by TCP, and the now correctly ordered data stream is delivered to the application.
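
A toy sketch of the reordering idea on the TCP side: segments that arrive ahead of the next expected sequence number wait in a buffer, and data is released to the application only in order. This is an illustration only, reusing the made-up segment names from the question, not how a real TCP stack is implemented (no ACKs, windows, or SACK here):

class Reassembler:
    def __init__(self, initial_seq: int):
        self.expected = initial_seq  # next stream byte the application should see
        self.pending = {}            # seq -> payload, for segments that arrived early

    def on_segment(self, seq: int, payload: bytes) -> bytes:
        """Accept one segment; return whatever is now deliverable in order."""
        self.pending[seq] = payload
        delivered = b""
        while self.expected in self.pending:  # drain while there is no gap
            chunk = self.pending.pop(self.expected)
            delivered += chunk
            self.expected += len(chunk)
        return delivered

r = Reassembler(initial_seq=0)
print(r.on_segment(7, b"DATA2_2"))  # b'' -- held back, gap at seq 0
print(r.on_segment(0, b"DATA2_1"))  # b'DATA2_1DATA2_2' -- gap filled, both delivered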

UDP - Optional Checksum

From what I have read about UDP, it has no error handling, no checking for things like the sequence of data sent/received, no checking for duplicate packets, no checking for corrupt packets, and obviously no guarantee that the packets sent are even received...
So with that in mind, why on earth is there an option to use checksums in UDP? Surely if you want to make sure the data being sent is received in the correct order (and not corrupt, and so on) then you would use TCP...
UDP packets include a field for a 16-bit checksum (a ones' complement sum, not a CRC) which the receiving operating system will use to check for packet corruption. If the checksum is present and fails, the packet will be silently discarded. It is up to the application to notice that the packet disappeared and take corrective action.
UDP checksums are enabled by default on all modern operating systems. It is possible to disable UDP checksums in IPv4, either at the socket or OS level. Doing so would reduce the CPU overhead of processing each packet at both the sender and receiver. This might be desirable if, for example, the application were calculating its own checksum separately. Without any checksum, there would be no guarantee that the bytes received are the same as the bytes sent.
The task of UDP is to transport datagrams, which are "network data packets". For UDP, every data packet is a transmission of its own. If you send 3 packets, those are three independent transmissions for UDP. Whether the content of these 3 packets somehow belongs together, or whether they are three individual requests (think of DNS requests, where every request is sent as its own UDP packet), UDP doesn't know and doesn't care. All that UDP guarantees is that a packet is either transmitted as a whole or not at all; either the entire packet arrives or the entire packet is lost, you will never see "half of a packet" arriving. So if you just want to send a bunch of data packets, you use UDP.
The task of TCP, on the other hand, is to transport a stream of data. It's not about packets. It's about a stream of bytes somehow making it from one host to another. How this happens, e.g. how TCP breaks the data stream into chunks, sends these chunks over the network, and ensures that no data is lost and all data is in order, is up to TCP. All that TCP guarantees is that the bytes will arrive correctly and in order at the other side, unless the TCP connection is lost, in which case the stream ends abruptly somewhere in the middle, but all data that arrived up to that point did arrive correctly and in the correct order. So despite TCP also working with packets, the transmission behaves like a stream that has no internal "data units". When sending 80 bytes over TCP, there may be one packet with 80 bytes, or 10 packets with 8 bytes each, or anything in between; you cannot know, and you don't need to.
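
A small illustration of that last point with Python's standard socket API: because TCP is a stream, a single recv() may return any number of the bytes that have arrived so far, so a receiver that needs an exact amount has to loop (recv_exactly is a made-up helper name, and the 80-byte figure is just the example from above):

import socket

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a connected TCP socket."""
    chunks, remaining = [], n
    while remaining > 0:
        chunk = sock.recv(remaining)  # may return fewer bytes than requested
        if not chunk:                 # peer closed the connection mid-stream
            raise ConnectionError(f"stream ended {remaining} bytes early")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Usage (hypothetical): data = recv_exactly(conn, 80)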
But just because you use UDP doesn't mean you don't care about data corruption in UDP packets. Keep in mind that corruption may not just affect your data, it may also affect the UDP header itself. If only a single bit flips, a UDP packet may end up with an incorrect destination port. So they added a checksum which ensures that neither the UDP header nor the data payload has been corrupted, but made it optional, so it's up to you whether you want to use it or not. If used, corrupt packets are dropped and thus behave like lost packets. If your code takes care of lost packets, it will automatically take care of corrupt packets, too.
With IPv6 though, the checksum was dropped from the IP header, which means that IP header corruptions are no longer detected. But this was seen as a small problem, as most layer 2 protocols have their own mechanism to detect corrupt data (e.g. Ethernet and WiFi already guarantee that data is not corrupted on its way through the network) and the checksums of UDP/TCP also cover some of the IP header fields, so even without layer 2 error checking, the recipient would notice if the IP addresses in the header have been corrupted along the way and drop the packet. As a consequence, the UDP checksum is no longer optional with IPv6.
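
For the curious, here is a minimal sketch of the IPv4 UDP checksum computation as described in RFC 768: a 16-bit ones' complement sum over a pseudo-header (which includes both IP addresses), the UDP header, and the payload. Corrupting an address changes the sum, which is why the checksum catches the header corruption discussed above:

import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total > 0xFFFF:  # fold the carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: str, dst_ip: str, sport: int, dport: int, payload: bytes) -> int:
    udp_len = 8 + len(payload)
    # Pseudo-header: source IP, destination IP, zero byte, protocol (17), UDP length.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, udp_len))
    header = struct.pack("!HHHH", sport, dport, udp_len, 0)  # checksum field zeroed
    return ~ones_complement_sum(pseudo + header + payload) & 0xFFFF

ok = udp_checksum("192.0.2.1", "192.0.2.2", 1234, 53, b"hello")
bad = udp_checksum("192.0.2.9", "192.0.2.2", 1234, 53, b"hello")
print(hex(ok), hex(bad))  # different: the checksum covers the source IP address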

Detecting retransmitted packet with libpcap

I'm filtering packets with libpcap with a filter like "tcp src localhost". It captures all the packets whose source is localhost (my host).
When localhost doesn't receive a TCP acknowledgment for an already sent packet, localhost will send the packet again.
Not all the packets captured by libpcap will arrive at their destination, and I need to identify when a packet is such a "forwarded" (resent) packet. Is there any way with libpcap to identify a forwarded packet?
From my understanding, you're looking for TCP retransmissions. These can be found with display filters in Wireshark after capturing. These two should help you:
Retransmitted packets can be found through the display filter tcp.analysis.retransmission (more such filters).
When the receiver gets an out-of-order packet (which usually indicates a lost packet), it sends an ACK for the missing sequence number. This is a duplicate ACK, and these can be found using tcp.analysis.duplicate_ack (details).
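
libpcap itself just hands you raw packets, so outside of Wireshark you have to do the bookkeeping yourself: remember which sequence numbers each flow has already sent data for, and flag repeats. A rough sketch of that idea, written in Python with scapy for brevity (the same logic would live in a C libpcap callback; capture.pcap is a hypothetical file name, and real analyzers use finer heuristics):

from scapy.all import rdpcap, IP, TCP

seen = set()
for i, pkt in enumerate(rdpcap("capture.pcap"), start=1):
    if IP in pkt and TCP in pkt and len(pkt[TCP].payload) > 0:
        flow = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        key = (flow, pkt[TCP].seq)
        if key in seen:
            print(f"packet {i}: probable retransmission, seq={pkt[TCP].seq}")
        seen.add(key)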
