How to detect packet loss in UDP? - networking

How can I detect packet loss in UDP?
For example, in a chat application, if a sentence is not received, how can I detect this and warn the client?

Thanks to safeer I found this, and I think it's correct:
If the higher-level protocol that's running on top of UDP (RTP, TFTP, etc.) has some sort of identification number or sequence number or block number that is unique per packet and that changes in a predictable way, then you could display that number in a custom column and manually look for missing numbers, but there is no way in the UDP protocol itself to identify missing packets.
It may be possible to somehow mix UDP and TCP.
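For example, here is a minimal Python sketch of the sequence-number idea for a chat-style application: a 4-byte sequence number is prepended to each datagram, and the receiver warns whenever a number is skipped. The port and message format are placeholders, not anything standard.

```python
import socket
import struct

CHAT_ADDR = ("127.0.0.1", 9999)   # placeholder address

def send_messages(messages) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, text in enumerate(messages):
        # 4-byte big-endian sequence number, then the UTF-8 payload
        sock.sendto(struct.pack("!I", seq) + text.encode("utf-8"), CHAT_ADDR)

def receive_loop() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(CHAT_ADDR)
    expected = 0
    while True:
        data, _ = sock.recvfrom(65535)
        seq = struct.unpack("!I", data[:4])[0]
        if seq > expected:
            # messages expected..seq-1 never arrived: warn the client here
            print(f"lost {seq - expected} message(s) before #{seq}")
        if seq >= expected:
            expected = seq + 1
        print(f"#{seq}: {data[4:].decode('utf-8', errors='replace')}")
```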

Related

Reassembly of segments

I am working on an application that intercepts various kinds of traffic. Recently I have been receiving out-of-order segments. This traffic is over TCP. The SIP header spans multiple segments. I am trying to understand what protocol to follow to reassemble packets that arrive out of order, so that I can display them in my application. To clarify, the data is segmented by TCP. By receiving out of order, I mean:
* the first half of the SIP INVITE header is received later, and the second half earlier;
* the TCP seq and ack numbers show that the segment received later was expected to arrive first.
I would greatly appreciate any leads towards established protocols to implement this.
I suspect you may need to look deeper into your architecture, as TCP is designed to deliver data in order.
One thing in particular to check is whether you are using multiple TCP connections in some way, perhaps to boost bandwidth. That could allow out-of-order delivery if different packets take different TCP connections, but within a single TCP connection delivery should still be in order.
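If you really are capturing segments off the wire (rather than reading them from a socket), one common approach is a reorder buffer keyed by the TCP sequence number. Below is a minimal Python sketch of that idea, assuming your capture layer already hands you (sequence number, payload) pairs; 32-bit sequence wrap-around and overlapping retransmissions are ignored for brevity, and the class name is made up.

```python
from dataclasses import dataclass, field

@dataclass
class StreamReassembler:
    next_seq: int                                  # sequence number expected next
    pending: dict = field(default_factory=dict)    # out-of-order segments keyed by seq
    data: bytearray = field(default_factory=bytearray)

    def add_segment(self, seq: int, payload: bytes) -> None:
        if seq != self.next_seq:
            # hold the segment until the gap before it has been filled
            self.pending[seq] = payload
            return
        self.data.extend(payload)
        self.next_seq += len(payload)
        # flush any buffered segments that are now contiguous
        while self.next_seq in self.pending:
            chunk = self.pending.pop(self.next_seq)
            self.data.extend(chunk)
            self.next_seq += len(chunk)
```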

How to know whether a TCP message has ended or not?

TCP is a stream-based protocol and the data it carries has varying length. So in the application, how can I know whether a TCP message has ended or not?
In the transfom layer, the TCP packet header doesn't have a length field and its length is varying, so how can the TCP layer know where the end is?
You design a protocol that runs on top of TCP that includes a length field (or a message terminator). You can look at existing protocols layered on top of TCP (such as DNS, HTTP, IRC, SMTP, SMB, and so on) to see how they do this.
To avoid pain, it helps to have a thorough understanding of several different protocols layered on top of TCP before attempting to design your own. There are a lot of subtle details you can easily get wrong.
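For instance, here is a minimal Python sketch of length-prefixed framing, where every application message is sent as a 4-byte big-endian length followed by that many payload bytes; the function names are only illustrative, not part of any standard API.

```python
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length, then the payload itself
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP is a byte stream, so recv() may return less than asked for
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```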
If you look at this post, it gives a good answer.
Depending on what you are communicating with, or how you are communicating, there will need to be some sort of character sequence that you look for to know that an individual message or transmission is done, if you plan to leave the socket open.
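And here is an equally small sketch of the delimiter approach, assuming newline-terminated messages in the style of IRC or SMTP; the buffering matters because TCP may split or merge messages arbitrarily.

```python
import socket

def read_lines(sock: socket.socket):
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:          # peer closed the connection
            break
        buf += chunk
        # a single recv() may contain zero, one, or several complete lines
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            yield line.rstrip(b"\r")
```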
In the transfom layer
I assume you mean 'transport layer'?
The TCP packet header doesn't have a length field and its length is varying
The IP header has a length field. Another one in the TCP header would be redundant.
how can the TCP layer know where the end is?
From the IP header length word, less the IP and TCP header sizes.
When a peer in a TCP connection receives a FIN or RST it knows that the other side has stopped sending. At the API level you can call shutdown to give that signal. The other side will then get a zero-length read and knows that nothing more will be coming.
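A small Python sketch of that shutdown / zero-length-read handshake, with illustrative function names:

```python
import socket

def send_all_and_finish(sock: socket.socket, data: bytes) -> None:
    sock.sendall(data)
    sock.shutdown(socket.SHUT_WR)   # sends a FIN: "no more data from this side"

def read_until_eof(sock: socket.socket) -> bytes:
    received = bytearray()
    while True:
        chunk = sock.recv(4096)
        if not chunk:               # zero-length read: the peer has finished sending
            return bytes(received)
        received.extend(chunk)
```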

Are there any disadvantages of using a part of the datagram in UDP to receive packets sequentially?

The UDP protocol does not guarantee that packets are received sequentially, but you could just use part of the datagram for a sequence number.
Is the above UDP solution equivalent to the ordering guarantee that TCP provides?
Basically, I've been reading everywhere that UDP does not provide sequential receiving, but this seems like such an obvious fix that I was wondering whether it is truly adequate.
The only 'disadvantage' is that you lose a few bytes of data space.
However, by itself, it isn't a solution. You have to add ACK messages into your protocol so that the sender knows what you have and haven't received; you have to buffer sent datagrams at the sender until they are ACK'd in case you have to retransmit them; and you have to either buffer out-of-sequence datagrams or throw them away so you can reconstruct the sequence correctly. Having come this far, it would also be sensible for the sender to implement some form of flow control or pacing if it notices a lot of retransmission being required.
This is a good way towards implementing TCP. Most people give up at this point and use TCP.
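To make that concrete, here is a rough Python sketch of just the sender side of such a scheme; the 4-byte sequence-number and ACK formats, the timeout, and the function names are all invented for illustration, and a real implementation would keep looping until everything is acknowledged.

```python
import socket
import struct
import time

RETRANSMIT_AFTER = 0.5   # seconds without an ACK before resending (arbitrary)

def reliable_send(sock: socket.socket, peer, payloads):
    sock.setblocking(False)
    unacked = {}                                   # seq -> (last send time, datagram)
    for seq, payload in enumerate(payloads):
        datagram = struct.pack("!I", seq) + payload
        sock.sendto(datagram, peer)
        unacked[seq] = (time.monotonic(), datagram)

        # drain any ACKs that have arrived (assumed to be a bare 4-byte seq number)
        try:
            while True:
                ack, _ = sock.recvfrom(4)
                unacked.pop(struct.unpack("!I", ack)[0], None)
        except BlockingIOError:
            pass

        # retransmit anything that has been outstanding too long
        now = time.monotonic()
        for s, (sent_at, dgram) in list(unacked.items()):
            if now - sent_at > RETRANSMIT_AFTER:
                sock.sendto(dgram, peer)
                unacked[s] = (now, dgram)
    # a real implementation would continue until unacked is empty
```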
Using UDP in that fashion means the application has to handle packet reconstruction and sequencing itself. That creates overhead in the application layer of the network. TCP is probably more efficient at handling that in the transport layer.
As well, UDP does not provide a mechanism for resending lost packets. When your application notices that the sequence numbers skipped one, there is some ambiguity: is the packet lost, or merely delayed? Your application would need to be able to detect that, and be able to request that the packet be sent again via a packet number reference.
In other words, there is a reason for the overhead of TCP when in-order guaranteed delivery is required.
https://en.wikipedia.org/wiki/User_Datagram_Protocol
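As a rough illustration of that gap-detection step on the receiver side, here is a Python sketch; the 4-byte "please resend" request is an invented convention, not part of UDP.

```python
import socket
import struct

def receive_with_resend_requests(sock: socket.socket):
    expected = 0
    while True:
        data, sender = sock.recvfrom(65535)
        seq = struct.unpack("!I", data[:4])[0]
        if seq > expected:
            # could be lost or merely delayed; ask the sender for the gap anyway
            for missing in range(expected, seq):
                sock.sendto(struct.pack("!I", missing), sender)
        if seq >= expected:
            expected = seq + 1
        yield seq, data[4:]
```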
It sounds like you want a form of partial reliability, in between TCP and UDP.
One option is to use SCTP-over-UDP (SCTP has portable userspace and kernel implementations). SCTP lets you choose in-order delivery for unreliable, UDP-like streams, and also supports partially-reliable streams (with a limit on time or number of retransmits).
Of course you could implement the missing features from TCP on top of UDP, but that would defeat the purpose of UDP. The point is that the TCP implementation in your network stack performs all the necessary operations for you (including packet reassembly and handling packet loss).
If you need TCP then you should use it. UDP is designed for traffic where you don't care if some packets get lost (like VoIP, game servers, etc.).

I'm confused about Wi-Fi terminology

I am trying to simulate a Wi-Fi video transmission, and for that I created a connection using a socket between two devices. However, I then started to doubt whether this is required or whether I was supposed to create a UDP connection.
I think I'm just confused about the terms here. I've Googled and found out that Wi-Fi can carry TCP or UDP, so my question would be: would a Wi-Fi transmission over TCP be as reliable for a simulation as one over UDP?
I'd suggest you read Difference between TCP and UDP?
For streaming such as video transmission you would generally want to use UDP. If a packet cannot reach the server in time, it is better to discard it than to pause the whole transmission in order to wait for one tiny missing packet that just contains the other person blinking.
But obviously it's up to you and how you implement your software.
You may need to read up a bit on the TCP/IP protocol. TCP and UDP are just types of packets/datagrams. The main difference is that TCP packets include extra protocol information, whereas UDP packets are simpler, with just a destination, the data itself, and a checksum.
The upshot is that the sender of a UDP packet has no way of knowing whether or not the packet was received at the other end. Often this doesn't matter - because it may be handled in other ways by higher layers in the software, or can be simply lost and ignored without any negative consequences. So UDP can be a more efficient use of the bandwidth, in some scenarios - because there is less protocol information being exchanged, and therefore more actual data. Plus TCP is more complicated because you have to handle the protocol stuff.
So when you create your system, you have a choice - either TCP or UDP packets, depending on what you are trying to achieve and how you want to go about it. But both packet types are really all part of the "tcp/ip" protocol stack, and have similarities.
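If it helps make the choice concrete, here is a minimal Python sketch of sending "video frames" as fire-and-forget UDP datagrams for a simulation; the address and frame contents are placeholders, not a real video pipeline.

```python
import socket

VIDEO_ADDR = ("127.0.0.1", 5004)   # placeholder address for the simulation

def send_frames(n_frames: int = 100) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(n_frames):
        # fire and forget: no connection, no delivery guarantee
        sock.sendto(b"frame-%d" % i, VIDEO_ADDR)

def receive_frames() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(VIDEO_ADDR)
    while True:
        frame, _ = sock.recvfrom(65535)
        print("got", frame)   # frames may be missing or reordered; that's the trade-off
```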

Can UDP retransmit lost data?

I know the protocol doesn't support this but is it common for clients that require some level of reliability to build into their application a method for requesting retransmission of a packet if found to be corrupt?
It is common for clients to implement reliability on top of UDP when they need reliability (or sometimes just some reliability) but not the other things that TCP offers, such as strict in-order delivery, and when they also want low latency (or multicast, where it works).
In general, you will only want to use reliable UDP if there are urgent reasons (very low latency and high speed needed, e.g. for a twitch game). In every "normal" case, simply using TCP will serve you equally well or better.
Also note that it is easy to implement your own stack on top of UDP that performs worse than TCP.
See enet for an example of a library that implements reliability (and some other features) on top of UDP (Raknet or HawkNL would be other examples).
You might want to look at the answers to this question: What do you use when you need reliable UDP?
Of course. You can build a reliable protocol (like TCP) on top of UDP.
Example
Imagine you are building a fileserver:
* read the file using blocks of 1024 bytes
* construct a UDP packet with a payload of: 4 bytes for the "position" in the file, 4 bytes for the "length" of the contents of the packet, followed by the data itself.
The receiver now receives UDP packets. If it gets the following packets:
* 0-1024: DATA
* 1024-2048: DATA
* 3072-4096: DATA
it realises a packet went missing and asks the sending application to resend the part between 2048 and 3072.
This is a very basic example to show that your application code needs to deal with packet construction and payload interpretation. Don't use it as-is; it does not take edge cases (the last packet, checksums for corrupted packets, ...) into account.
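A small Python sketch of that packet layout (position, length, then the data), with the same caveats as above:

```python
import struct

BLOCK_SIZE = 1024
HEADER = struct.Struct("!II")   # 4-byte position, 4-byte length (big-endian)

def make_packets(path: str):
    # yield one UDP-sized payload per 1024-byte block of the file
    with open(path, "rb") as f:
        position = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield HEADER.pack(position, len(block)) + block
            position += len(block)

def parse_packet(packet: bytes):
    position, length = HEADER.unpack_from(packet)
    return position, packet[HEADER.size:HEADER.size + length]
```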
Short answer: No.
Long answer: UDP doesn't care about packet loss. If a UDP packet arrives and has a bad checksum, it is simply dropped. Neither the sender nor the recipient is informed about this; the packet is just dropped, and that's all that happens. Also, UDP packets are not numbered, so if you send four packets and only three arrive at the recipient, the recipient cannot know that there used to be four and one is missing. Last but not least, packets are not acknowledged, so the sender will never know whether a packet it sent out ever made it to the recipient or not.
Contrary to this, TCP breaks data into segments, each segment is numbered, so the recipient can know if a segment is missing. Also all segments must be acknowledged to the sender. If the sender receives no acknowledgment for a segment after a certain period of time, it will assume the segment was lost and send it again.
Of course, you can add your own frame header on top of every data fragment you send over UDP; that way your application code can number the sent fragments and you can implement an acknowledge-and-resend strategy in your code. But the question is: will this really be better than what TCP is already offering for free? Even if it were equally good, save yourself the time and just use TCP. A lot of people have thought they could do better than TCP, and usually they realize in the end that they cannot. TCP has its weaknesses, but in general it is pretty good at what it does.
