I have a mismatch between the server's processing time and the client's receive time. I have already turned the Nagle algorithm off, and the ping RTT looks fine. Is there anything else that would cause TCP to delay sending packets, such as the TCP buffer? If it is the TCP buffer, how does it make TCP delay sending?
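One thing worth checking, as a hedged sketch (assumes Linux and Python 3; the peer address and port below are placeholders): confirm Nagle really is off on the socket and look at the send-buffer size. The send buffer itself does not hold back a segment that is allowed to go out; data only queues there while the congestion window or the receiver's advertised window stops the kernel from transmitting, so a "full buffer" delay usually points at windowing rather than the buffer itself.

    import socket

    # Sketch: inspect the send-side knobs that commonly affect latency.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # Nagle off
    s.connect(("192.0.2.50", 8000))  # placeholder peer address/port

    # Data sits in the send buffer when the congestion window or the
    # receiver's advertised window keeps the kernel from sending yet.
    print("SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))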
I'm troubleshooting an MTU/MSS issue that is causing fragmentation over a PPPoE service. Below is a packet dump of a TCP 3-way handshake from a different service (one that is working as expected) that relates to my question.
My understanding of how PMTUD works is this: with the Don't Fragment (DF) bit set to 1 in the IP header, a router along the path that would need to fragment the packet instead sends an ICMP message back to the host, which adjusts its segment size accordingly. However, my understanding is that this only happens when fragmentation would be required (packets larger than the path MTU). This suggests that PMTUD works during the data exchange phase, NOT while the TCP 3-way handshake is being negotiated (since those are small packets, 78 bytes in this case).
In the above packet capture the SYN packet advertises MSS=1460 (which is too large, due to the 8-byte overhead of PPPoE), and the SYN/ACK response from the server sends back the correct MSS=1452. What mechanism does TCP use to determine the MSS during this exchange?
Maybe the server hasn't computed the MSS during this three-way handshake at all. For instance, if the system administrator has observed a lot of fragmentation, they may have set the MSS for the whole system to 1452 (with the command ip tcp adjust-mss 1452), so during the three-way handshake the server simply advertises its configured default MSS. Could that apply to your case?
What you're probably seeing here is the result of what's known as MSS clamping, where the network to which the server is attached modifies the MSS in the outgoing SYN/ACK packets to signal to the sender that it should use a lower MSS. This is commonly done on networks that perform some form of tunnelling, such as PPPoE on ADSL.
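If you want to confirm which MSS the stack actually ended up with after that SYN/SYN-ACK exchange (clamped or not), one hedged way is to read TCP_MAXSEG on a connected socket. A minimal sketch, assuming Linux and Python 3, with a placeholder server address and port:

    import socket

    # After the handshake, TCP_MAXSEG reports the effective MSS for this
    # connection, i.e. the value negotiated (or clamped) in the SYN/SYN-ACK.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("192.0.2.60", 443))  # placeholder server address/port
    print("effective MSS:", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG))
    s.close()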
I am trying to find a way to send exactly one TCP packet and verify on the Rx side, using tcpdump, that the same packet (and no other) has been received. I am new to the networking world, so any help or explanation would be much appreciated.
These tools are meant for performance measurement, not packet crafting. They always establish a full TCP connection for their measurements. Since even a TCP connection with no data transfer consists of 6 packets (the initial handshake to establish the connection plus the handshake to close it), you will not be able to send a single TCP packet using these tools.
Just a thought: configure the Rx side NOT to accept a TCP connection from the Tx side, then attempt a connection from the Tx side. You should see a (single) SYN packet on the Rx side, to which it won't respond. [Unfortunately, the Tx side will then retry the SYN packet a number of times.]
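If a packet-crafting library is an option, a hedged alternative is to emit the single SYN yourself so the Tx kernel never retries it. A minimal sketch with Scapy (assumes Python 3 with scapy installed and root privileges; the address and ports are placeholders), after which a tcpdump filter on that port on the Rx host should show exactly one packet:

    from scapy.all import IP, TCP, send

    # Craft and send exactly one TCP SYN; nothing will retransmit it.
    pkt = IP(dst="192.0.2.10") / TCP(sport=40000, dport=9999, flags="S")
    send(pkt, verbose=False)  # one packet leaves the Tx host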
I am facing a problem related to TCP retransmissions.
My sender starts sending data to the receiver (which drops off the network after the connection is opened). After sending 3 packets, it retransmits the first packet 3 times (following the retransmission timeouts) and then starts sending the next packets.
Then it retransmits the first packet again. I am not able to understand this behavior, and I want to know whether there is some way I can disable it and force TCP to retransmit the first packet and then close the connection if no ACK is received.
Thanks.
No, there isn't. TCP is a streaming protocol, not a datagram protocol.
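That said, if the goal is just to give up sooner, Linux lets you bound how long TCP keeps retrying unacknowledged data before it aborts the connection. You still cannot pick which segment gets retransmitted, but a sketch along these lines (assumes Linux and Python 3.6+; the address and port are placeholders) caps the retry period:

    import socket

    # TCP_USER_TIMEOUT (Linux): abort the connection if transmitted data
    # stays unacknowledged for longer than this many milliseconds.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 5000)
    s.connect(("192.0.2.20", 5000))  # placeholder receiver address/port
    s.sendall(b"payload")  # if no ACK arrives within ~5 s, the socket errors out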
If a TCP connection is established between a client and a server, is sending data over this connection-oriented route faster than over a connectionless one, given that there is less header info in the packets? So a TCP connection is opened and bytes are sent down the open connection as and when required. Or would UDP still be the better choice via a connectionless route, where each packet contains the destination address?
Is sending packets via an established TCP connection (after all handshaking has been done) a way to be faster than UDP?
I suggest you read a little bit more about this topic.
Just as a quick answer: TCP makes sure that all the packets are delivered, so if one is dropped for whatever reason, the sender keeps retransmitting it until the receiver gets it. UDP, however, sends a packet and just forgets about it, so you might lose some of the packets. As a result, UDP sends fewer packets over the network.
This is why UDP is used for video: losing a small amount of data is not a big deal, and even if the sender retransmitted it, it would be too late for the receiver to use, so UDP is the better fit. In contrast, you don't want your online banking to run over UDP!
Edit: Remember, the raw speed of sending packets is almost the same for UDP and TCP and depends on the network! However, even after the TCP handshake is done, the receiver still needs to send ACKs, and the sender has to wait for an ACK before sending the next batch of data, so TCP will still be a little slower.
In general, TCP is marginally slower, whatever the difference in header overhead, because the packets must arrive in order and, in fact, must arrive at all. With UDP, neither is checked.
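To make the difference concrete, here is a minimal sketch (assumes Python 3; addresses and ports are placeholders): UDP names the destination on every datagram and gets no delivery guarantee, while TCP pays for one handshake up front and then sends bytes down the established, acknowledged stream.

    import socket

    # UDP: connectionless - every datagram carries the destination address,
    # and nothing guarantees it arrives, or arrives in order.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"frame-1", ("192.0.2.30", 6000))  # placeholder address/port

    # TCP: one handshake up front, then bytes flow on the established stream;
    # the stack handles ordering, ACKs and retransmission behind the scenes.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.30", 6001))  # placeholder address/port
    tcp.sendall(b"frame-1")
    tcp.close()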
When a TCP application exits, it will send a FIN packet.
Consider a TCP client connected to an always-listening server (the server never exits).
If the TCP client exits abruptly after a few packets have been exchanged, will it always send a FIN packet to the server?
Thx!
Under normal operation, a FIN will be sent, yes.
Here are a few cases where a FIN is not going to be sent (a sketch contrasting a graceful and an abortive close follows the list):
Someone yanks out the network cable of the client.
The client gets nuked.
The FIN packets are dropped on the way.
The OS or the kernel crashes hard.
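As an illustrative sketch of the normal case versus an abortive close (assumes Python 3 on Linux; address and port are placeholders): a plain close(), or simply the process exiting, makes the kernel send a FIN, while enabling SO_LINGER with a zero timeout turns the same close() into a RST with no FIN at all.

    import socket
    import struct

    # Normal exit path: close() (or process exit) makes the kernel send a FIN.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("192.0.2.40", 7000))  # placeholder server address/port
    s.close()  # graceful close -> FIN

    # Abortive close: SO_LINGER with l_onoff=1, l_linger=0 sends a RST instead.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("192.0.2.40", 7000))
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    s.close()  # abortive close -> RST, no FIN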