TCP Dup ACK after reconnection - sequence number problem? - networking

Currently I'm debugging the network traffic between two devices on the same network. The network architecture is quite simple:
Device 1 <-> Switch 1 <-> Switch 2 <-> Device 2
To verify the software on the devices, I check different scenarios.
One of them is the correct reconnection after I unplug the network cable between Switch 1 and Switch 2 and plug it in again after a few seconds.
I uploaded a Wireshark capture to my OneDrive: Wireshark capture
Packets 1-9 are correct communication.
Between Packet 9 and Packet 10 the cable is unplugged.
In Packet 10, Device 1 sends data to Device 2 but never receives an ACK for it.
In Packets 11/12 Device 2 sends Keep-Alive messages.
Between Packet 12 and 13 the cable is plugged in again.
Packet 13 ACKs the last Keep-Alive message, which seems fine.
From now on, it gets weird. I only see TCP Dup ACK messages.
I assume the TCP stacks are confused by the difference in sequence numbers: Device 1 thinks its own sequence number is 49, while Device 2 thinks it is 37.
Device 1 does not support Fast Retransmission.
Can someone explain what is happening here? I'm struggling to understand where the problem is.
Is the problem in Device 1, where the TCP stack thinks it is at sequence number 49 while the packet has not yet been acknowledged, or is it in Device 2?
I really appreciate your help.
Kindly,
Philipp

Related

TCP avoid delayed acknowledge

I have an application on a microcontroller with a small TCP/IP stack. This application waits for a connection on a specific TCP port. As soon as the TCP connection is established, the microcontroller needs to send 8 KB of data. Because the TX buffer of the TCP socket is only 1 KB in size, I need to send the data in 8 segments. The TX buffer size can't be changed!
But now I have the problem that after every segment a delay of 200 ms occurs. I know this is caused by delayed ACK (which is 200 ms on Windows). But in my use case this means the whole transfer includes 1400 ms of delay, which is just wasted time.
Is there any way to force the PC to acknowledge the data immediately (maybe a bit in the TCP header)? I can't change anything on the PC side.
Or should I instead send two 512-byte segments instead of one 1 KB segment? Would this work around the issue? I have read that the PC will acknowledge the data as soon as a second segment arrives.
What is the right way to solve such a use case?
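For what it's worth, here is a minimal sketch of the two-segment idea from the last paragraph, assuming a BSD-style socket API and a stack that supports TCP_NODELAY (both assumptions; small embedded stacks vary). Disabling Nagle matters here: with Nagle enabled, the second small write would be held back until the first one is ACKed, which would make the delay worse rather than better.

// Sketch: send each 1 KB burst as two back-to-back segments so the peer's
// "ACK every second segment" rule fires instead of its 200 ms delayed-ACK timer.
// Assumptions: BSD-style sockets; TCP_NODELAY supported by the stack.
#include <stddef.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int send_in_two_segments(int sock, const unsigned char *data, size_t len)
{
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));  // disable Nagle

    while (len > 0) {
        size_t chunk = len < 1024 ? len : 1024;  // one TX-buffer fill; 8 KB divides evenly
        size_t half = chunk / 2;
        if (send(sock, data, half, 0) < 0)
            return -1;
        if (send(sock, data + half, chunk - half, 0) < 0)
            return -1;
        data += chunk;
        len -= chunk;
    }
    return 0;
}

Whether this actually avoids the 200 ms pauses depends on the receiving stack; capturing the transfer with Wireshark is the way to confirm it.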

CentOS does not receive packets as well as Ubuntu - using sockets

I have CentOS on my server, and the server has two NICs.
I want to capture any packets on NIC 1 and forward them to NIC 2. I use raw sockets in my C++ application to capture and forward the packets, and I use iperf to test the application. I connected each NIC to a different computer (Computer A and Computer B) and send UDP packets from Computer A to B. The server is supposed to capture packets from A and forward them to B, and vice versa.
It works well when I ping Computer B from A, but something goes wrong when I generate more traffic with iperf. iperf generates 100K UDP packets per second (22-byte payload) and sends them from A to B, but Computer B does not receive all of the packets in the same time frame! For example, if iperf on A sends 100K packets in the first second, it takes Computer B about 20 seconds to receive them all! It seems like some caching mechanism on the server holds the received packets! Let me just show you what happens:
[OS: CentOS]
[00:00] Computer A starts sending...
[00:01] Computer A has sent 100,000 packets
[00:01] Finished
[00:00] Computer B starts listening...
[00:01] Computer B receives 300 packets
[00:02] Computer B receives 250 packets
[00:03] Computer B receives 200 packets
[00:04] Computer B receives 190 packets
[00:05] Computer B receives 180 packets
[00:06] Computer B receives 170 packets
[00:07] Computer B receives 180 packets
[00:08] Computer B receives 20 packets
[00:09] Computer B receives 3 packets
[00:10] (this continues until all 100K packets have been received; it takes a long time)
At the 4th second I unplugged the network cable from Computer A to make sure it was no longer sending any packets; after that, Computer B was still receiving packets! It seems like something holds the traffic at the server and releases it slowly. I tried turning the firewall off, but nothing changed.
I changed the server OS to Ubuntu to check whether there was a problem in my code, but it works well on Ubuntu. After that I tried changing the CentOS socket buffer sizes, but it didn't help. Here are the important parts of my code:
How I set up a socket:
int packet_socket = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
if (packet_socket == -1) {
    printf("Can't create AF_PACKET socket\n");
    return -1;
}
// Use TPACKET_V3 because it can read/poll on a per-block basis instead of per packet
int version = TPACKET_V3;
int setsockopt_packet_version = setsockopt(packet_socket, SOL_PACKET, PACKET_VERSION, &version, sizeof(version));
if (setsockopt_packet_version < 0) {
    printf("Can't set packet v3 version\n");
    return -1;
}
// Set interface in promiscuous mode
struct ifreq ifopts;
strncpy(ifopts.ifr_name, interface_name, IFNAMSIZ - 1);
ioctl(packet_socket, SIOCGIFFLAGS, &ifopts);  // read the current flags
ifopts.ifr_flags |= IFF_PROMISC;
ioctl(packet_socket, SIOCSIFFLAGS, &ifopts);  // write them back (SIOCSIFFLAGS, not SIOCGIFFLAGS)
// Bind socket to a specific interface
struct sockaddr_ll sockll;
bzero((char *)&sockll, sizeof(sockll));
sockll.sll_family = AF_PACKET;
sockll.sll_protocol = htons(ETH_P_ALL);
sockll.sll_ifindex = get_interface_index(interface_name, packet_socket);
bind(packet_socket, (struct sockaddr *)&sockll, sizeof(sockll));
And here I receive packets:
u_char *packet = new u_char[1600];
*length = recv(packet_socket, packet, 1600, 0);
And here I send packets:
int res = write(packet_socket, packet, PacketSize);
I'm so confused and I don't know what is going on with CentOS. Can you please help me understand what is happening?
Is CentOS a good choice for this job?
Thank you.
Try disabling SELinux on CentOS:
setenforce 0
then try again.
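A separate note on the code in the question: setting PACKET_VERSION to TPACKET_V3 only pays off if you also map a PACKET_RX_RING and consume it block by block; a plain recv() still delivers one packet per call, so the per-block comment in the setup code isn't actually being exercised. A rough sketch of the ring path, where the block/frame sizes and the block_index bookkeeping are illustrative assumptions:

// Needs <linux/if_packet.h>, <sys/mman.h>, <poll.h>, <string.h>.
// Run after the PACKET_VERSION setsockopt above; sizes are not tuned values.
struct tpacket_req3 req;
memset(&req, 0, sizeof(req));
req.tp_block_size = 1 << 20;  // 1 MiB per block
req.tp_frame_size = 2048;     // must fit one frame plus its headers
req.tp_block_nr = 64;
req.tp_frame_nr = (req.tp_block_size / req.tp_frame_size) * req.tp_block_nr;
req.tp_retire_blk_tov = 60;   // hand a partially filled block to user space after 60 ms
if (setsockopt(packet_socket, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0)
    return -1;
u_char *ring = (u_char *)mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
                              PROT_READ | PROT_WRITE, MAP_SHARED, packet_socket, 0);

unsigned block_index = 0;     // next block to consume; wraps modulo tp_block_nr
struct pollfd pfd = { packet_socket, POLLIN, 0 };
poll(&pfd, 1, -1);            // sleep until the kernel hands us a block
struct tpacket_block_desc *bd =
    (struct tpacket_block_desc *)(ring + block_index * req.tp_block_size);
if (bd->hdr.bh1.block_status & TP_STATUS_USER) {
    struct tpacket3_hdr *hdr =
        (struct tpacket3_hdr *)((u_char *)bd + bd->hdr.bh1.offset_to_first_pkt);
    for (unsigned i = 0; i < bd->hdr.bh1.num_pkts; ++i) {
        u_char *frame = (u_char *)hdr + hdr->tp_mac;  // start of the Ethernet frame
        // ... forward hdr->tp_snaplen bytes starting at frame ...
        hdr = (struct tpacket3_hdr *)((u_char *)hdr + hdr->tp_next_offset);
    }
    bd->hdr.bh1.block_status = TP_STATUS_KERNEL;      // give the block back to the kernel
    block_index = (block_index + 1) % req.tp_block_nr;
}

This doesn't explain the CentOS-versus-Ubuntu difference by itself, but it removes one per-packet bottleneck from the forwarding path.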

Systematic TCP retransmission

I have 5 identical devices connected to the switch. The IPs are static. Using Wireshark I see systematic TCP retransmissions within 1 or 2 µs on ports 1 and 5 (only). After swapping port 1 with port 2, for example, the retransmissions stay the same. So it is probably not due to lost data. What can the issue be?

Winsock TCP Packets sent but not reaching host

When the server sends 4 or more 25-byte packets to the client, only the first 2 are processed by the client. I am using event select on the client and send on the server. There are no errors, but only the first 2 packets are displayed. Thanks in advance.
Without looking at your code I can only think of one issue that you might be overlooking:
maybe you are missing the point that TCP is a stream-based protocol. If you send data by calling the send function 10 times on the client, it does not follow that you have to call the receive function 10 times on the receiving side. All the data may be retrieved in 1 receive, or 5, or 8, or 12. In other words, don't look at it in terms of packets. You have to do framing yourself to identify the messages.
When you send 4 packets of 25 bytes each, the total is 100 bytes of data.
On the receiving side you may get 2 reads of 50 bytes, and you have to identify your messages yourself by using length prefixes or start/end markers, etc.
You could also get a single read of 100 bytes, or 10 reads of 10 bytes; keep that in mind. A sketch of one common approach follows.
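To make the framing idea concrete, here is a minimal sketch of length-prefix framing on the receiving side. The 2-byte big-endian length header and the handle_message() callback are assumptions for illustration, not part of the original code:

// Length-prefix framing sketch. The sender is assumed to prepend a 2-byte
// big-endian length to each message (an illustrative convention, not Winsock's).
#include <vector>
#include <cstddef>
#include <sys/socket.h>  // <winsock2.h> on Windows, where the socket type is SOCKET

void handle_message(const unsigned char *msg, std::size_t len);  // hypothetical handler

void read_messages(int sock)
{
    std::vector<unsigned char> buf;  // bytes received but not yet framed
    unsigned char tmp[512];
    for (;;) {
        int n = recv(sock, (char *)tmp, sizeof(tmp), 0);
        if (n <= 0)
            break;                   // connection closed or error
        buf.insert(buf.end(), tmp, tmp + n);
        // Extract as many complete messages as the buffer currently holds.
        while (buf.size() >= 2) {
            std::size_t len = ((std::size_t)buf[0] << 8) | buf[1];
            if (buf.size() < 2 + len)
                break;               // message not complete yet
            handle_message(buf.data() + 2, len);
            buf.erase(buf.begin(), buf.begin() + 2 + len);
        }
    }
}

The key point is that recv() boundaries carry no meaning; only the framing you add does. However the 25-byte sends are coalesced or split on the wire, the loop above recovers them one message at a time.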

TCP protocol: Host goes temporarily unavailable

Say our client is sending packets at a constant rate. Now, if the server goes down temporarily, there can be two possible situations
(We are using the TCP protocol)
1) The packet won't be delivered to the server. Consequently, the other packets in line have to wait for the server to respond, and the communication is carried on from there.
2) The packet won't be delivered and will be retried, but the other packets won't be affected by it.
Say packets A, B and C are to be transferred, and the server goes down temporarily while I am sending packet A. Will packets B and C be sent at the times they were initially scheduled, or will they be sent only once A has been received by the server?
TCP is a stream-oriented protocol. This means that if, on a single TCP connection, you send A followed by B, then the receiver will never see B until after it has seen A.
If you send A and B over separate TCP connections, then it is possible for B to arrive before A.
When you say "goes down temporarily", what do you mean? I can see two different scenarios.
Scenario 1: The connection between Server and Client is interrupted.
Packet A is sent on its way. Unfortunately, as it is winding its way through the cables, one cable breaks and A is lost. Meanwhile, depending on the exact state of the TCP windowing algorithm, packets B and C may or may not be sent (that depends on the window size, the sizes of A/B/C, and the amount of as-yet unacknowledged bytes outstanding). I guess that means both your "1" and "2" may be right?
If B and/or C have been sent, there will be no ACK covering them until A has been resent. Once A has arrived, the server will ACK up to the end of the last segment received in sequence (so C, if that is the case).
Scenario 2: The server goes down
If this happens, all TCP state will be lost and connections will have to be re-established after the server has finished rebooting.
