Why are TCP messages on my PC coming in frames of 590 bytes - networking

I am analyzing Wireshark log files captured while making a request to a web page with Firefox through a proxy server.
Here are the details of the connection establishment:
I noted the "maximum segment size" under the Options branch in the TCP segment details of the [SYN] message from my PC to the proxy server - it says 1460 bytes.
Similarly, the maximum segment size of the [SYN, ACK] message from the proxy server to my PC is 1460 bytes.
After the TCP connection is established, shouldn't each of the TCP frames sent from the proxy server to my PC be 1460 bytes? I am puzzled as to why they are 590 bytes. Please advise how the 590-byte size is being set.

A plausible explanation is that 590 turns out to be the Path MTU for the particular connection.
In other words, while the client (one of the end nodes of the connection) accepts packets with a payload of up to 1460 bytes, some node(s) on the way accept smaller packets. For efficiency, Path MTU Discovery allows the originator of a packet to size it so that it fits the smallest MTU encountered on the path, and hence avoids fragmentation.
BTW:
1460 is a very common MTU (well, MSS), because it corresponds to 1500, Ethernet v2's maximum, minus 20 + 20 = 40 bytes for the IP and TCP header overhead.
See the following Wikipedia entry for an overview of MTU (Maximum Transmission Unit) and a basic description of the Path MTU Discovery method (basically, setting the DF, i.e. do-not-fragment, flag and relying on ICMP "Destination Unreachable (Datagram Too Big)" messages to detect that some node on the way couldn't handle the packet, then retrying with smaller sizes until one goes through).
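If you'd rather check this from code than from a capture, the kernel will tell you the segment size it actually settled on. Below is a minimal sketch, assuming Python on Linux ("example.com" stands in for whatever server you're testing against): after connect(), TCP_MAXSEG reports the effective MSS for that connection, which reflects any path-MTU clamping along the route.

    import socket

    # Minimal sketch (Linux): read the effective MSS for one connection.
    # "example.com" is a placeholder destination.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))
    mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
    print(f"effective MSS for this connection: {mss} bytes")
    s.close()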
Also, I suggest inspecting the packets when the connection is to a different host, maybe a peer on the very same network segment, without going through the proxy mentioned. Chances are you will then start seeing 1460-byte frames.

Related

TCP ACK of packets in wireshark

I've noticed in Wireshark that I'm able to send 4096 bytes of data to an HTTP webserver (from uploading a file); however, the server only seems to be acknowledging the data 1460 bytes at a time. Why is this the case?
The size of TCP segments is restricted to the MSS (Maximum Segment Size), which is basically the MTU (Maximum Transmission Unit) less the bytes comprising the IP and TCP overhead. On a typical Ethernet link, the MTU is 1500 bytes and basic IP and TCP headers comprise 20 bytes each, so the MSS is 1460 (1500 - 20 - 20).
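For what it's worth, that arithmetic spelled out as a couple of lines of Python:

    # MSS = MTU minus basic IP and TCP headers (no options assumed).
    ETHERNET_MTU = 1500
    IP_HEADER = 20
    TCP_HEADER = 20
    print(ETHERNET_MTU - IP_HEADER - TCP_HEADER)  # 1460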
If you're seeing packets indicated with a length field of 4096 bytes, then it almost certainly means that you're capturing on the transmitting host and Wireshark is being handed the large packet before it's segmented into 1460-byte chunks. If you were to capture at the receiving side, you would see the individual 1460-byte segments arriving and not a single, large 4096-byte packet.
For further reading, I would encourage you to read Jasper Bongertz's blog titled, "The drawbacks of local packet captures".
TCP by default uses path MTU discovery:
When the system sends a packet to the network, it sets the don't-fragment (DF) flag in the IP header.
When an IP router (or your local machine) sees a DF packet that would have to be fragmented to fit the MTU of the next-hop link, it sends back an ICMP "Fragmentation Needed" message that carries the new MTU.
When the system receives the Fragmentation Needed ICMP, it adjusts the MSS and sends the data again.
This procedure is performed to reduce the overall load on the network and to increase the probability that each packet is delivered.
This is why you see 1460-byte packets.
Regarding your question: the server only seems to be acknowledging data 1460 bytes at a time. Why is this the case?
TCP keeps track of a window that defines how many bytes of data can be sent without acknowledgment. Its purpose is to provide flow control (the sender can't send more data than the receiver can process) and congestion control (the sender can't send so much data that it overloads the network). The window is set by the receiving side and may grow during the connection as TCP estimates the real channel bandwidth. So you may see one ACK that acknowledges several packets.
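The window the receiver can advertise is bounded by its socket receive buffer, which you can inspect and (within system limits) enlarge. A small Python sketch, assuming Linux defaults:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The window TCP advertises is bounded by the socket receive buffer.
    print("default receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
    # Note: Linux doubles the requested value to account for bookkeeping overhead.
    print("after setsockopt:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()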

How does MTU retransmission work in case of UDP

As we all know, UDP does not support retransmission, along with some other things.
We are also aware of the MTU, which basically works in the following way: when one of the network devices on the path between the source and destination points does not support a packet of some size, it just drops it.
In the case of TCP, that's not a problem: it already knows the MSS after the handshake, which is always less than the MTU (am I right?), so there's no possibility of sending a packet larger than the MTU.
However, I wonder how this works in the case of UDP. As I already said, there's no retransmission in this protocol and there's no such thing as an MSS. So what happens when a packet is dropped for exceeding the MTU?
Or does it just work because of the MTU's nature (it actually belongs to the IP layer, not to transport-layer protocols like UDP or TCP)? Does the IP layer reconstruct the dropped packet in smaller units and send it again?
First of all, you must distinguish between the local MTU, which is just the MTU of the local link, and the path MTU (PMTU), which is the smallest MTU of any link on the path to the destination. Consider the following topology:
    1500       1480       1500
A -------- B -------- C -------- D
then A's local MTU is 1500, but the PMTU is just 1480.
When router B receives a packet of size 1500 which it needs to forward, and the DF bit is set, it sends an ICMP packet back to the sender with the next hop's MTU, 1480 in this case. The sender can then reduce the packet size.
In TCP, this is done transparently by the network stack. In UDP, the application needs to deal with it. There are three ways to do that:
always send packets that are small enough; 1024 is always safe over IPv6, and 512 is usually (but not always) safe over IPv4;
use a connected UDP socket, and react to an EMSGSIZE error by reducing the packet size; or
use any kind of UDP socket, request the PMTU ancillary data, and use the data provided.
Technique (3) is the most efficient. For IPv6, it is described in Section 11.3 of RFC 3542.
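Here is a rough Python sketch of technique (2) on Linux for IPv4 (the destination address is a placeholder): with strict PMTU discovery enabled, an oversized send fails with EMSGSIZE, after which IP_MTU reports the kernel's current path-MTU estimate.

    import errno
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Set DF on outgoing packets and refuse to fragment locally.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
    s.connect(("198.51.100.7", 9))  # placeholder destination

    try:
        s.send(b"x" * 4000)  # deliberately larger than a typical 1500-byte MTU
    except OSError as e:
        if e.errno == errno.EMSGSIZE:
            # IP_MTU is only meaningful on a connected socket.
            pmtu = s.getsockopt(socket.IPPROTO_IP, socket.IP_MTU)
            print(f"path MTU estimate: {pmtu} bytes; shrink datagrams to fit")
        else:
            raise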

Discrepancy between MSS sent by client and MSS received by host

When the client initiates the connection with the SYN bit set, Wireshark (and TCPDump) show the MSS as being 1460. However, when the same packet is delivered to the host, Wireshark (and TCPDump) show the MSS as being 1416.
Can anybody please explain why there's a discrepancy of 44 bytes?
A screenshot (not reproduced here) showed the MSS received by the host. Sorry, but I don't have one showing the client's initial SYN with the 1460 MSS.
During actual data transfer, the 1416 is used as an MSS (1404 for payload and 12 for options such as the TSVal)
My original thought was that it has something to do with Path MTU Discovery, and that some space is being reserved for any additional headers that may be added on while the packet is making its way from the sender to the destination. Am I correct in thinking so? If so, is there a way to find a breakdown of how these bytes are being used?
After consulting the university's network admin, we concluded that a lower MSS was being imposed by the network for load reasons.
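For what it's worth, the numbers in the question line up as follows (attributing the 44 bytes to reserved header space is the question's own guess, not something the capture proves):

    client_mss = 1460   # MSS in the client's original SYN
    host_mss = 1416     # MSS in the same SYN as seen by the host
    print(client_mss - host_mss)   # 44 bytes shaved off in transit

    tcp_options = 12    # e.g. the timestamp option (TSval/TSecr), padded
    print(host_mss - tcp_options)  # 1404 bytes of payload per segment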

TCP file Transfer window size

I'm trying to reverse engineer an application, and I need help understanding how the TCP window size works. My MTU is 1460.
My application transfers a file using TCP from point A to B. I know the following:
The file is split into segments of size 8K
Each segment is compressed
Then each segment is sent to point B over TCP. These segments can be 148 bytes for a text file and around 6000 bytes for a PDF.
For a text file, am I supposed to see the 148-byte segments attached to one another to form one large TCP stream, which is then split up according to the window size?
Any help is appreciated.
The receiver application should see the data in the same way the sender application sent it. TCP uses byte streaming, so it collects all the bytes in order and delivers them to the application. The MTU is largely internal to TCP and takes no account of application-layer message boundaries. If TCP has enough data to send in its send buffer (each TCP socket has its own send buffer, btw), then it packages its next segment up to the MTU size and sends it; to be more precise, it deducts the TCP and IP headers from the MTU size.
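A self-contained loopback sketch of this byte-streaming behaviour in Python: several small writes on the sending side can arrive merged together, so message boundaries are the application's job, not TCP's.

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    for _ in range(10):
        cli.sendall(b"x" * 148)  # ten 148-byte application "segments"
    cli.close()

    conn, _ = srv.accept()
    data = b""
    while chunk := conn.recv(65536):
        data += chunk
    print(len(data))  # 1480: one byte stream, no segment boundaries preserved
    conn.close()
    srv.close()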

UDP Networking Fundamentals

I've been doing some work with C# Networking using UDP. I'm getting on fine but need the answer to a couple of fundamental questions I'm having problems testing:
Currently I'm sending data in ~16000-byte datagrams, which according to Wireshark are getting split into several 1500-byte packets (because of max packet size limits) and then reassembled at the other end.
Am I right in understanding the datagram will be received complete at the other end OR not at all, i.e. it's an all-or-nothing thing? There is no chance of ending up with a fragmented datagram due to packet loss?
Therefore, I only need to ACK per datagram, rather than ensuring my datagrams are < 1500 bytes and ACKing each one?
I've looked in a lot of places but there seems to be a lot of confusion about the difference between datagrams and the underlying packets...
Thanks for your help!
There is no chance of ending up with a fragmented datagram due to packet loss?
I believe that's true: fragmentation and fragment reassembly are handled by the protocol layer below UDP, i.e. by the IP layer, which will report an error if it fails to reassemble the packet fragments into a datagram (for example, search for "fragment" in RFC 792).
http://www.pcvr.nl/tcpip/udp_user.htm#11_5 says,
"The IP layer at the destination performs the reassembly. The goal is to make fragmentation and reassembly transparent to the transport layer (TCP and UDP), which it is, except for possible performance degradation."
As you may know, the 16-bit UDP length field means you can send a total of 65535 bytes. The payload, however, can theoretically be at most 65535 - (sizeof(IP header) + sizeof(UDP header)) = 65535 - (20 + 8) = 65507 bytes.
But this does not mean that all applications using UDP will send this amount of data; for example, classic DNS limits packets to 512 bytes. One reason is that you don't get any ACK packets from the server, so large datagrams are more exposed to transmission problems and loss in the network. Secondly, intermediate nodes may encapsulate datagrams inside another protocol; IPsec, for example, does that.
With UDP there are no ACK packets, so if the underlying application uses UDP you should not see any ACKs in your capture. Also, some servers limit the maximum UDP packet size depending on the application, so for a transfer from client to server you should see the same number of bytes, e.g. 512 bytes, going and coming back in Wireshark. Mostly, the source makes the request and the destination sends X-byte UDP datagrams back.
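The 65507-byte ceiling is easy to confirm: one byte over it is rejected at send time. A sketch, assuming a Unix-like stack:

    import socket

    MAX_UDP_PAYLOAD = 65535 - 20 - 8  # 65507: IPv4 total length minus IP+UDP headers

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.sendto(b"z" * (MAX_UDP_PAYLOAD + 1), ("127.0.0.1", 9))
    except OSError as e:
        print("rejected:", e)  # typically EMSGSIZE, "Message too long"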
These links may be good for your questions:
Wireshark UDP analysis
RFC 1122 (states that 576 is the minimum maximum reassembly buffer size)
Am I right in understanding the datagram will be received complete at the other end OR not at all, i.e. it's an all-or-nothing thing? There is no chance of ending up with a fragmented datagram due to packet loss?
That is correct.
Therefore, I only need to ACK per datagram, rather than ensuring my datagrams are < 1500 bytes and ACK each one?
I don't understand this question. You need to ACK each datagram regardless of its size, and you should make them < 1500 bytes so they won't get fragmented. Otherwise you may never be able to deliver a particular datagram at all, if it repeatedly gets fragmented and a fragment repeatedly gets lost.
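For illustration, a minimal stop-and-wait sketch of ACK-per-datagram in Python; the 4-byte sequence-number framing, timeout, and retry count are all made up for the example, not taken from any particular protocol:

    import socket

    def send_reliably(sock, dest, payloads, timeout=0.5, retries=10):
        """Send each payload as one datagram and wait for a matching 4-byte ACK."""
        sock.settimeout(timeout)
        for seq, payload in enumerate(payloads):
            frame = seq.to_bytes(4, "big") + payload  # keep payload well under 1500
            for _ in range(retries):
                sock.sendto(frame, dest)
                try:
                    ack, _ = sock.recvfrom(4)
                    if int.from_bytes(ack, "big") == seq:
                        break  # acknowledged; move on to the next datagram
                except socket.timeout:
                    continue  # datagram or ACK lost; retransmit
            else:
                raise OSError(f"datagram {seq} was never acknowledged")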
