UDP packet lost - networking

Two servers communicate via UDP. When a large packet is sent, it is not received at the other end.
I added the following registry entries
But this has not solved the problem. Below is the Wireshark info regarding the lost packet. The size of the lost packet is 2706 bytes.
How can I make sure that the packets are fragmented? Should I enable or disable fragmentation? I was not able to find a registry entry that sets an MTU for UDP.
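For reference, whether an oversized datagram gets fragmented or dropped is controlled by the sender's DF bit, which can be set per socket rather than via the registry. A minimal sketch for a Linux sender that clears DF via IP_MTU_DISCOVER so the kernel fragments large datagrams (Windows exposes a similar IP_DONTFRAGMENT socket option; the destination address and port here are made up):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    /* IP_PMTUDISC_DONT clears the DF bit, so the kernel may fragment
     * datagrams larger than the path MTU instead of dropping them. */
    int pmtu = IP_PMTUDISC_DONT;
    if (setsockopt(s, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof(pmtu)) < 0)
        perror("setsockopt(IP_MTU_DISCOVER)");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                      /* hypothetical port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* example address   */

    char buf[2706] = {0};   /* the problematic datagram size from the capture */
    if (sendto(s, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(s);
    return 0;
}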

Related

TCP checksum error for fragmented packets

I'm working on a server/client socket application that uses the Linux TUN interface.
The server reads packets directly from its TUN interface and passes them to clients, and clients write received packets directly to their TUN interface.
<Server_TUN---><---Server---><---Clients---><---Client_TUN--->
Sometimes the packets from Server_TUN need to be fragmented at the IP layer before being transmitted to a client.
So at the server I read a packet from TUN, fragment it at the IP layer, and send the fragments to the clients via the socket.
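For context, the arithmetic involved is small: each fragment carries a copy of the IP header, the MF (more fragments) flag on all but the last fragment, and its offset in 8-byte units. A rough sketch, assuming the input packet is itself unfragmented and the MTU comfortably exceeds the header size (fragment_packet and emit are made-up names; fields follow struct iphdr from <linux/ip.h>):

#include <arpa/inet.h>
#include <linux/ip.h>
#include <stddef.h>
#include <string.h>

void fragment_packet(const unsigned char *pkt, size_t len, size_t mtu,
                     void (*emit)(const unsigned char *, size_t)) {
    const struct iphdr *ip = (const struct iphdr *)pkt;
    size_t hlen = (size_t)ip->ihl * 4;     /* IP header length in bytes */
    size_t payload = len - hlen;
    size_t step = ((mtu - hlen) / 8) * 8;  /* per-fragment payload, 8-byte multiple */

    for (size_t off = 0; off < payload; off += step) {
        size_t chunk = payload - off < step ? payload - off : step;
        unsigned char frag[1500];          /* assumes mtu <= 1500 */
        struct iphdr *fh = (struct iphdr *)frag;

        memcpy(frag, pkt, hlen);           /* reuse the original header */
        memcpy(frag + hlen, pkt + hlen + off, chunk);

        /* Offset field counts 8-byte units; MF (0x2000) on all but the last. */
        unsigned short mf = (off + chunk < payload) ? 0x2000 : 0;
        fh->frag_off = htons((unsigned short)(mf | (off / 8)));
        fh->tot_len  = htons((unsigned short)(hlen + chunk));
        fh->check    = 0;                  /* must recompute the IP header
                                              checksum here (RFC 1071)   */
        emit(frag, hlen + chunk);
    }
}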
Once the fragmentation logic was implemented, the solution stopped working correctly.
Running Wireshark on Client_TUN, I noticed that every incoming fragmented packet gets a TCP checksum error.
In the given screenshot, frame number 154 is claimed to be reassembled in 155,
but the TCP checksum is claimed to be incorrect!
On the server side I keep the TCP data intact; for the given example (although Wireshark shows them in the reverse order), I split the packet into fragments of 1452 bytes (including the IP header) and 30 bytes (including the IP header).
I also checked the TCP checksum value at the server, and it is exactly 0x935e. While I did not think that checksum offloading matters for incoming packets, I checked offloading at the client and it was off:
$ sudo ethtool -k tun0 | grep ": on"
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: on
generic-segmentation-offload: on
generic-receive-offload: on
tx-vlan-offload: on
tx-vlan-stag-hw-insert: on
Despite that, since the solution is not working at all now, I don't think this is caused by an offloading effect.
Do you have any idea why the TCP checksum could be incorrect for fragmented packets?
Fortunately, I found the issue. It was my mistake: some TCP data was missing when I was copying buffers. I was only tracking the indexes and lengths, but because the data had changed, the checksum computed to a different value on the client side.
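For anyone debugging a similar issue: the TCP checksum is the plain Internet checksum of RFC 1071, computed over a pseudo-header, the TCP header, and the payload, so even one lost or altered payload byte changes the result. A minimal sketch of the core sum (inet_checksum is a made-up helper name):

#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum: one's-complement sum of 16-bit words.
 * For TCP, 'data' must cover the pseudo-header (source and destination
 * IPs, protocol, TCP length), the TCP header with its checksum field
 * zeroed, and the payload. */
uint16_t inet_checksum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    while (len > 1) {                   /* sum 16-bit big-endian words */
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len == 1)                       /* pad an odd final byte */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                   /* fold carries back into 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}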

UDP - Optional Checksum

From what I have read about UDP, it has no error handling, no checking for things like the sequence of data sent/received, no checking for duplicate packets, no checking for corrupt packets, and obviously no guarantee that the packets sent are even received...
So with that in mind, why on earth is there actually an option to use checksums in UDP? Because surely if you want to make sure the data being sent is received in the correct order (and not corrupt and so on), then you would use TCP...
UDP packets include a field for a 16-bit checksum (a ones'-complement sum, not actually a CRC) which the receiving operating system will use to check for packet corruption. If the checksum is present and fails, then the packet will be silently discarded. It is up to the application to notice that the packet disappeared and take corrective action.
UDP checksums are enabled by default on all modern operating systems. It is possible to disable UDP checksums in IPv4, either at the socket or OS level. Doing so would reduce the CPU overhead of processing each packet at both the sender and receiver. This might be desirable if, for example, the application were calculating its own checksum separately. Without any checksum, there would be no guarantee that the bytes received are the same as the bytes sent.
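On Linux, the socket-level switch is the SO_NO_CHECK option, which disables checksum generation for outgoing IPv4 UDP datagrams; a minimal sketch, assuming a Linux-only build (receivers simply skip verification when the checksum field arrives as zero):

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_NO_CHECK
#define SO_NO_CHECK 11               /* from <asm-generic/socket.h> */
#endif

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);   /* IPv4 UDP socket */
    if (s < 0) { perror("socket"); return 1; }

    /* Tell the kernel not to generate UDP checksums on transmit. */
    int one = 1;
    if (setsockopt(s, SOL_SOCKET, SO_NO_CHECK, &one, sizeof(one)) < 0)
        perror("setsockopt(SO_NO_CHECK)");
    return 0;
}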
The task of UDP is to transport datagrams, which are "network data packets". For UDP, every data packet is a transmission of its own. If you send 3 packets, those are three independent transmissions for UDP. Whether the content of these 3 packets somehow belongs together or whether these are three individual requests (think of DNS requests, where every request is sent as its own UDP packet), UDP doesn't know and doesn't care. All that UDP guarantees is that a packet is either transmitted as a whole or not at all; either the entire packet arrives or the entire packet is lost, you will never see "half of a packet" arriving. So if you just want to send a bunch of data packets, you use UDP.
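In code this means one receive call per datagram; a minimal sketch of the receiving side, assuming an already-bound IPv4 UDP socket fd (recv_one_datagram is a made-up helper name):

#include <sys/socket.h>
#include <sys/types.h>

/* One recvfrom() call returns exactly one datagram, never part of one.
 * If 'cap' is smaller than the datagram, the excess is truncated and
 * discarded, not delivered by the next call. */
ssize_t recv_one_datagram(int fd, char *buf, size_t cap) {
    return recvfrom(fd, buf, cap, 0, NULL, NULL);
}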
The task of TCP, on the other hand, is to transport a stream of data. It's not about packets. It's about a stream of bytes somehow making it from one host to another. How this happens, e.g. how TCP breaks the data stream into chunks, sends these chunks over the network, and ensures that no data is lost and all data is in order, is up to TCP. All that TCP guarantees is that the bytes will arrive correctly and in order at the other side, unless the TCP connection is lost, in which case the stream ends abruptly somewhere in the middle, but all data that arrived up to that point did arrive correctly and in the correct order. So despite TCP also working with packets, the transmission behaves like a stream that has no internal "data units". When sending 80 bytes over TCP, there may be one packet with 80 bytes or 10 packets with 8 bytes each or anything in between; you cannot know and you don't have to.
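This stream behaviour is why TCP code always reads in a loop: a single recv() may return any prefix of what was sent. A minimal sketch of reading exactly n bytes (recv_exact is a made-up helper name):

#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly 'n' bytes from a connected TCP socket; returns 0 on
 * success, -1 on error or if the peer closed the stream early. */
int recv_exact(int fd, void *buf, size_t n) {
    char *p = (char *)buf;
    while (n > 0) {
        ssize_t r = recv(fd, p, n, 0);
        if (r <= 0)                  /* 0 = closed, <0 = error */
            return -1;
        p += r;
        n -= (size_t)r;
    }
    return 0;
}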
But just because you use UDP doesn't mean you don't care about data corruption in UDP packets. Keep in mind that corruption may not just affect your data; it may also affect the UDP header itself. If only a single bit flips, the UDP packet may end up with an incorrect destination port. So a checksum was added which ensures that neither the UDP header nor the data payload has been corrupted, but it was made optional, so it's up to you whether you want to use it or not. If used, corrupt packets are dropped and thus behave like lost packets. If your code takes care of lost packets, it will automatically take care of corrupt packets, too.
With IPv6 though, the checksum was dropped from the IP header, which means that IP header corruptions are no longer detected. But this was seen as a small problem, as most layer 2 protocols have their own mechanism to detect corrupt data (e.g. Ethernet and WiFi already guarantee that data is not corrupted on its way through the network) and the checksums of UDP/TCP also cover some of the IP header fields, so even without layer 2 error checking, the recipient would notice if the IP addresses in the header have been corrupted along the way and drop the packet. As a consequence, the UDP checksum is no longer optional with IPv6.

Connectionless UDP vs connection-oriented TCP over an open connection

If a TCP connection is established between a client and a server, is sending data faster over this connection-oriented route than over a connectionless one, given there is less header info in the packets? So a TCP connection is opened and bytes are sent down the open connection as and when required. Or would UDP still be a better choice via a connectionless route where each packet contains the destination address?
Is sending packets via an established TCP connection (after all the handshaking has been done) faster than UDP?
I suggest you read a little bit more about this topic.
Just as a quick answer: TCP makes sure that all the packets are delivered, so if one is dropped for whatever reason, the sender keeps resending it until the receiver gets it. UDP, however, sends a packet and just forgets about it, so you might lose some of the packets. As a result, UDP sends fewer packets over the network.
This is why UDP is used for video: first, losing a small amount of data is not a big deal, and besides, even if the sender resent it, it would arrive too late for the receiver to use, so UDP is better there. In contrast, you don't want your online banking to run over UDP!
Edit: Remember, the raw speed of sending packets is almost the same for UDP and TCP, and depends on the network! However, even after the TCP handshake is done, the receiver still needs to send ACKs, and the sender has to wait for an ACK before sending a new batch of data, so TCP will still be a little slower.
In general, TCP is marginally slower, even granting the premise of less header info, because the packets must arrive in order and, in fact, must arrive at all. With UDP, there is no checking of either.

How can we avoid missing packets in UDP (Flex)?

I'm trying to send large files over UDP from Adobe AIR to CPP. While transferring large files, some packets go missing. How can I recover the missing packets' data? First of all, I connect the client (AIR) to the server (CPP) using TCP. After the connection is established, I start the file transfer. I am planning to find out which data is missing via TCP and then resend the missing packets using TCP. Can anybody tell me how I can work out which packets are missing during the transfer? Thank you.
Can you please clarify what is happening? You say you are sending files over UDP yet connecting to the server with TCP; the two protocols are mutually exclusive over a single connection.
UDP does not provide any mechanism for detecting packet loss (that's what TCP is for) so by default you won't be able to work out whether packets have been lost.
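That said, if you do stay with UDP for the bulk data, the usual workaround is an application-level sequence number in every datagram, with the receiver tracking gaps and requesting retransmission over the TCP control connection. A minimal sketch of the receiving side (the chunk_hdr layout and fixed-size table are made up for illustration):

#include <arpa/inet.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical per-datagram header prepended by the sender. */
struct chunk_hdr {
    uint32_t seq;                    /* chunk number, network byte order */
};

#define MAX_CHUNKS 65536             /* illustrative fixed-size transfer */
static uint8_t seen[MAX_CHUNKS];     /* 1 = chunk arrived                */

/* Called for every datagram read from the UDP socket. */
void on_datagram(const uint8_t *pkt, size_t len) {
    struct chunk_hdr h;
    if (len < sizeof h)
        return;                      /* runt datagram, ignore */
    memcpy(&h, pkt, sizeof h);
    uint32_t seq = ntohl(h.seq);
    if (seq < MAX_CHUNKS)
        seen[seq] = 1;               /* payload follows at pkt + sizeof h */
}

/* Once the sender announces end-of-file over TCP, every index i in
 * [0, total_chunks) with seen[i] == 0 is a lost packet to re-request. */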
You should use TCP for sending files as it manages sending/resending packets for you.
As stated in the Air ServerSocket documentation (http://help.adobe.com/en_US/air/reference/html/flash/net/ServerSocket.html):
All packets [sent over TCP] are guaranteed to arrive (within reason) — any lost packets are retransmitted. In general, the TCP protocol manages the available network bandwidth better than the UDP protocol. Most AIR applications that require socket communications should use the ServerSocket and Socket classes [TCP] rather than the DatagramSocket class [UDP].
See this page for more information on the Air networking classes:
http://help.adobe.com/en_US/air/html/dev/WSb2ba3b1aad8a27b0-181c51321220efd9d1c-8000.html#WS5b3ccc516d4fbf351e63e3d118a9b90204-7cfb
My guess: TCP is slower because it resends when a packet gets lost; that's probably why. But on the other hand, checking which packets got lost and resending them yourself over UDP will also take longer...
I would go for TCP instead of UDP.
Like Sly says, UDP seems the wrong tool to use here.

How to Reduce TCP delays caused by ARP flushes for MODBUS TCP

We have an application which periodically sends TCP messages at a defined rate (using MODBUS TCP). If a message is not received within a set period, an alarm is raised. However, every once in a while there appears to be a delay in messages being received. Investigation has shown that this is associated with the ARP cache being refreshed, causing a resend of the TCP message.
The IP stack provider has told us that this is the expected behaviour for TCP. The questions are:
Is this expected behaviour for an IP stack? If not, how do other stacks work around the period when IP/MAC address translation is not available?
If this is the expected behaviour, how can we reduce the delay in TCP messages during this period? (Permanent ARP entries have been tried, but are not the best solution.)
In my last job I worked with a company building routers and switches. Our implementation would queue packets waiting for ARP replies and send them when the ARP reply was received. Therefore, no TCP retransmit required.
Retransmission in TCP occurs when an ACK is not received within a given time. If the ARP reply takes a long time, or is itself lost, you might be getting a retransmission even though the device waiting for the ARP reply is queuing the packet.
It would appear from your question that the period of the TCP messages is shorter than the ARP cache timeout. This implies that reusing an ARP entry does not keep it refreshed on your stack, although that behaviour, where available, would help in your situation.
A packet trace of the situation as it occurs could be helpful: are you actually losing the first packet? How long does the ARP reply take?
To stop the ARP cache entry from timing out, you might try something that will refresh it, such as another ARP request for the same address or a gratuitous ARP.
I found a specification for MODBUS TCP, but it didn't help. Can you post some details of your network: media, devices, speeds?
Your description suggests that the peer ARP entries expire between TCP segments and cause some subsequent segments to fail due to the lack of a current MAC destination.
If you have the MODBUS devices on a separate subnet, then perhaps the destination router will be kind enough to queue the segment until it receives a valid MAC. If you cannot use a separate subnet, you could try to force the session to have keep-alives activated - this would cause a periodic empty message to be sent that would keep the ARP timers resetting. If the overhead of the keep-alive is too high and you completely control the application in your system, you could try to force zero-length messages through to the peer.
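As a sketch of the keep-alive suggestion on Linux, where the default keep-alive timers (two hours of idle time) would be far too slow for this purpose, the per-socket TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT options can shorten them; the values below are illustrative only:

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT */
#include <stdio.h>
#include <sys/socket.h>

/* Enable fast keep-alives on an already-connected MODBUS TCP socket so
 * that traffic to the peer never pauses long enough for its ARP entry
 * to go stale. */
int enable_fast_keepalive(int fd) {
    int on = 1, idle = 5, intvl = 5, cnt = 3;    /* illustrative values */
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
        perror("SO_KEEPALIVE");
        return -1;
    }
    /* Probe after 5 s of idle, every 5 s, give up after 3 failures. */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
    return 0;
}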
