QUdpSocket problem - Qt

I am trying to send data using the UDP protocol. Is it possible to detect when UDP doesn't deliver the data?
Thanks a lot.
I have a service that runs on each client. Every second, each client sends its IP address and port number to the server. The server listens for these messages, and if a client stops sending them, it concludes that the client is no longer connected. I have implemented this, but I can't work out how to detect when a client has stopped sending. Do you have any suggestions?

You can check the result of writeDatagram. The Qt documentation says:
Sends the datagram at data of size size to the host address address at port port. Returns the number of bytes sent on success; otherwise returns -1.
Then just check the return value to make sure the number of bytes sent was what you expected.
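For example, a minimal sketch in Qt (the server address and port below are placeholders, not values from the question). Note that a non-negative return value only means the local stack accepted the datagram; it says nothing about delivery to the peer:

    #include <QUdpSocket>
    #include <QHostAddress>
    #include <QByteArray>
    #include <QDebug>

    // Send one heartbeat datagram and check writeDatagram's return value.
    bool sendHeartbeat(QUdpSocket &socket, const QByteArray &payload)
    {
        const QHostAddress server(QStringLiteral("192.168.1.10")); // placeholder address
        const quint16 port = 45454;                                // placeholder port

        const qint64 sent = socket.writeDatagram(payload, server, port);
        if (sent == -1) {
            // The local stack rejected the datagram (bad address, socket
            // error, ...); this is the only failure you can see here.
            qWarning() << "writeDatagram failed:" << socket.errorString();
            return false;
        }
        return sent == payload.size();
    }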

Of course it's possible, but it might be hard.
I would recommend:
Verify that you don't get errors from your calls to send data (perhaps you're specifying a bad address, or the socket is in a bad state).
Try sending less often; perhaps your packets are being dropped by your local network stack.
Make sure you really listen properly at the receiving end; perhaps the packets arrive but you fail to read them correctly (see the sketch below).
Consider firewall/NAT issues, as usual with UDP. Protocol-wise, never include connection information as application data in packets: NAT devices rewrite the addresses in the packet headers but cannot see, let alone fix, addresses embedded in the payload.
The next step might be digging into the local network stack for some feedback, or sniffing the network to see whether the packets at least make it part of the way.
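On the receiving side, the server part of the heartbeat scheme described in the question could look something like this minimal sketch, assuming the server also uses Qt (the sweep interval and the three-second timeout are assumptions):

    #include <QObject>
    #include <QUdpSocket>
    #include <QHostAddress>
    #include <QTimer>
    #include <QHash>
    #include <QDateTime>
    #include <QDebug>

    // Track when each client was last heard from; a client that has been
    // silent for longer than the timeout is considered disconnected.
    class HeartbeatServer : public QObject
    {
    public:
        explicit HeartbeatServer(quint16 port, QObject *parent = nullptr)
            : QObject(parent)
        {
            m_socket.bind(QHostAddress::Any, port);
            connect(&m_socket, &QUdpSocket::readyRead,
                    this, &HeartbeatServer::readPending);
            connect(&m_timer, &QTimer::timeout,
                    this, &HeartbeatServer::checkTimeouts);
            m_timer.start(1000); // sweep once per second
        }

    private:
        void readPending()
        {
            while (m_socket.hasPendingDatagrams()) {
                QByteArray buf;
                buf.resize(int(m_socket.pendingDatagramSize()));
                QHostAddress sender;
                quint16 senderPort = 0;
                m_socket.readDatagram(buf.data(), buf.size(), &sender, &senderPort);
                const QString key =
                    QStringLiteral("%1:%2").arg(sender.toString()).arg(senderPort);
                m_lastSeen[key] = QDateTime::currentMSecsSinceEpoch();
            }
        }

        void checkTimeouts()
        {
            const qint64 now = QDateTime::currentMSecsSinceEpoch();
            for (auto it = m_lastSeen.begin(); it != m_lastSeen.end(); ) {
                if (now - it.value() > 3000) { // assumed: three missed heartbeats
                    qDebug() << "client disconnected:" << it.key();
                    it = m_lastSeen.erase(it);
                } else {
                    ++it;
                }
            }
        }

        QUdpSocket m_socket;
        QTimer m_timer;
        QHash<QString, qint64> m_lastSeen;
    };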

Related

Determining if a TCP connection can stay up for 30 minutes without anything being received from its peer

The description of the question goes like this:
Someone recorded all the IP packets of a TCP connection between a client and a server for 30 minutes. In the record, he didn't find any packet that was ACK-only. How is this possible?
What is claimed to be a possible solution: for the entire recording, the server sent data to the client, which the client processed, but the client didn't send any data back to the server.
I am having trouble understanding how this can be possible.
From what I see, since the client didn't send any data to the server, and there weren't any ACK-only packets in the record, the server never received an ACK from the client. Logically, I would think that since no ACK is received, the server would keep retransmitting. And since the server gets nothing from the client for 30 minutes, which seems like a long time to me, it would conclude that the connection is broken and drop it (maybe even send an ACK-only packet itself, but I am not sure about that).
Moreover, from what I know, when using keepalive, the sender gets an ACK-only packet from its peer.
Can anyone help me understand this?
Help would be appreciated
Perhaps more details would be helpful here. What kind of server/client? What protocol is being used and for what purpose?
Is the connection running as expected and this is just strange traffic you are trying to understand, or is the connection timing out?
Some devices or software can be set to a "No ACK" state, which means that no ACKs are sent and none are expected.
One reason for this is usually bandwidth. ACKs do consume bandwidth, and there are cases where bandwidth is at such a premium that losing packets is preferable to spending bandwidth on ACKs. That type of traffic would probably be better served by a UDP protocol, but that is a completely different topic.
Another reason is that you don't care if packets are lost. Again, this would be better off as UDP instead of TCP, but someone working within strange parameters may be bending the rules about what type of traffic to advertise in order to get around some issue.
Hopefully this is helpful, but if it does not apply, please add more details about the connection so that we can better understand what may be happening.
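As an aside on the keepalive behavior mentioned in the question: keepalive has to be enabled per socket. A minimal sketch with POSIX sockets, assuming Linux (the TCP_KEEP* options are Linux-specific, and the interval values are arbitrary assumptions):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    // Enable TCP keepalive on an already-connected socket. With keepalive
    // on, the kernel periodically probes an idle peer, and the peer
    // answers those probes with ACK-only segments.
    bool enableKeepalive(int fd)
    {
        int on = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) != 0)
            return false;

        int idle = 60;     // assumed: start probing after 60 s idle
        int interval = 10; // assumed: probe every 10 s after that
        int count = 5;     // assumed: declare the peer dead after 5 silent probes
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
        return true;
    }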

How can I know if the packet I send is received when I'm using UDP?

I know UDP is not a two-way communication between client & server. However, if I send a packet from a server to clients, how can I know that my packet reaches its destination?
The only way to be sure that the data you've sent has been received (and processed) by the receiving application is to have that application explicitly acknowledge the data in a reply.
Note that, contrary to a comment on your question, it will not help to use TCP instead of UDP to get a more reliable transport: while your OS kernel will do its best to deliver data over TCP (instead of just trying once, as with UDP), it can neither guarantee delivery (since connectivity might break) nor does it care whether the receiving application has actually read, let alone processed, the data. The only way to know that the application has successfully read the data is to have the application itself say so.
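A minimal sketch of such an application-level acknowledgement over UDP, using Qt for consistency with the first question (the "ACK" reply format, retry count, and timeout are assumptions; a real protocol would also carry sequence numbers to pair replies with requests and to weed out duplicates):

    #include <QUdpSocket>
    #include <QHostAddress>
    #include <QByteArray>

    // Send a datagram and wait for the peer application to reply with an
    // acknowledgement. Returns true only if an ACK arrived in time.
    bool sendWithAck(QUdpSocket &socket, const QByteArray &payload,
                     const QHostAddress &peer, quint16 port,
                     int retries = 3, int timeoutMs = 500)
    {
        for (int attempt = 0; attempt < retries; ++attempt) {
            socket.writeDatagram(payload, peer, port);

            if (socket.waitForReadyRead(timeoutMs)) {
                QByteArray reply;
                reply.resize(int(socket.pendingDatagramSize()));
                socket.readDatagram(reply.data(), reply.size());
                if (reply == QByteArrayLiteral("ACK"))
                    return true;
            }
            // Timed out: either the datagram or the ACK was lost; retry.
        }
        return false;
    }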

reading tcp packets out of order

Web games are forced to use TCP.
But under real-time constraints, TCP's head-of-line blocking behavior is absurd when you don't care about old packets.
While I'm aware that there's definitely nothing we can do on the client side, I'm wondering if there is a solution on the server side.
Indeed, on the server you get packets in order and wait miserably if packet t+42 has been lost, even though packets t+43 and t+44 may already be sitting in your receive buffer.
Since we are talking about local data, technically it should be possible to retrieve it.
So does anyone have an idea on how to perform that feat?
How to save this precious data from these pesky kernel space daemons?
TCP guarantees that the data arrives in order and retransmits lost packets (see the TCP man page).
Given this, there is only one way to achieve the results you want within your stated constraints, and that is to hack the TCP protocol at the server side (assuming you cannot control the client WebSocket behavior). The simplest approach, in relative terms, would be to open a raw socket, implement your own minimal TCP handshake (SYN-ACK when the client SYNs), and then read from and write to the socket while managing your own TCP headers. Your custom implementation would need to keep track of received sequence numbers and acknowledge all of those you want the client to forget about.
You might be able to reduce effort by making this program a proxy to your original.
Example of TCP raw socket here.
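For a flavor of the starting point, here is a heavily simplified sketch that opens a raw socket and prints TCP sequence numbers, assuming Linux (raw sockets require root, and everything hard is left out: the handshake, checksums, acknowledgements, retransmission):

    #include <cstdio>
    #include <cstdint>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    // Read raw TCP segments and print their sequence numbers: the
    // information a custom reassembler would use to hand data to the
    // application out of order instead of waiting for a lost segment.
    int main()
    {
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP); // requires root
        if (fd < 0) { perror("socket"); return 1; }

        unsigned char buf[65536];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n <= 0) break;

            const iphdr *ip = reinterpret_cast<const iphdr *>(buf);
            const tcphdr *tcp =
                reinterpret_cast<const tcphdr *>(buf + ip->ihl * 4);

            printf("seq=%u ack=%u len=%zd\n",
                   ntohl(tcp->seq), ntohl(tcp->ack_seq), n);
        }
        close(fd);
        return 0;
    }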

TCP write error but not really

I have been testing a program which has simple communication between two machines over a 1Gbps line. While running TCP communications over the line I occasionally receive write errors on the client side (due to a timeout) when the network is totally flooded (running at or close to 100% usage). This generally happens when I am running multiple instances of the same program going to different ports.
My question is, is it possible to get a write error but still receive the message on the server side. It appears that is what is happening, and I am not quite sure why. Could it be that the ACK coming back to the client is what is timing out?
Yes, that is possible. TCP does not guarantee that data you sent successfully was received, nor that data you sent unsuccessfully was not received. This problem is unsolvable; it is known as the Two Generals' Problem. There is always a way to lose messages/packets such that the sender comes to the wrong conclusion. TCP guarantees that the receiver receives the same stream of bytes that the sender sent, but possibly cut off at an arbitrary point.
This unreliability has performance reasons, too. TCP data is buffered on both hosts as well as on the network, and acknowledgements are delayed.
You have to live with this. If you make your scenario more concrete I can suggest some strategies of dealing with this.
send puts data into the TCP send buffer.
If the send buffer does not have enough space, send will block until the data is completely or partly copied into the send buffer, or until the configured timeout expires.
Read and write timeouts are normal; you should check for and handle them, typically by restarting the read/write operation after a timeout. You should also watch for read/write errors other than timeouts.
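A minimal sketch of that restart-after-timeout pattern with POSIX sockets, assuming the socket has a send timeout set via SO_SNDTIMEO so that send can fail with EAGAIN/EWOULDBLOCK:

    #include <cerrno>
    #include <cstddef>
    #include <sys/socket.h>
    #include <sys/types.h>

    // Send an entire buffer, retrying partial writes and timeouts.
    // Returns true once every byte has been handed to the send buffer,
    // which, per the discussion above, still does not prove the peer
    // application received it.
    bool sendAll(int fd, const char *data, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, data + sent, len - sent, 0);
            if (n > 0) {
                sent += static_cast<size_t>(n); // partial write: keep going
            } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                continue; // timeout: restart the send, as suggested above
            } else if (errno == EINTR) {
                continue; // interrupted by a signal: retry
            } else {
                return false; // real error (connection reset, etc.)
            }
        }
        return true;
    }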

loss of a DNS request

UDP packets are fire-and-forget: once a packet is gone, you never know whether it was received, so delivery is not guaranteed. I have learned that DNS mostly uses UDP packets to send requests. How is the loss of a DNS request sent over UDP handled?
If no response packet is received within a certain amount of time, the request is re-sent. Dan Bernstein suggests that most clients will retry up to four times.
There are two ways to decide that a UDP datagram has been lost. Neither is completely reliable.
The most common is a timeout. You send a message and wait for a response. If you don't get a response after some amount of time, you assume that either the message or the response were lost. At that point you can either try again, or give up. It is also possible that the message or response is just taking a really long time to get through the network, so you must account for duplicates. Note that all packet-switched communication, including TCP, works this way. TCP just hides the details for you.
Another method is to look for ICMP messages telling you that a packet has been dropped. For example, ICMP_UNREACH_PORT, ICMP_UNREACH_HOST, or ICMP_UNREACH_HOST_PROHIB. But not only are these messages rarely sent and subject to loss themselves, but you can sometimes receive them even when a message did get through successfully. At best, if you get an ICMP message you can think of it as a suggestion of what might have happened.
Most DNS implementations use a short timeout because duplication is no big deal. After a few repeats to one DNS server, it will try another one (assuming multiple servers are available). Most implementations will also cache information about which servers are responding and which are not.
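A minimal sketch of that timeout-and-failover pattern, again with Qt's UDP socket (the retry count, timeout, and lack of reply validation are assumptions, loosely modeled on the resolver behavior described above):

    #include <QUdpSocket>
    #include <QHostAddress>
    #include <QByteArray>
    #include <QList>

    // Query a list of servers in turn with a short timeout, the way a
    // stub resolver deals with lost UDP datagrams: on timeout, retry,
    // then move on to the next configured server.
    QByteArray queryWithFailover(const QByteArray &request,
                                 const QList<QHostAddress> &servers,
                                 quint16 port = 53,
                                 int retriesPerServer = 2,
                                 int timeoutMs = 1000)
    {
        QUdpSocket socket;
        for (const QHostAddress &server : servers) {
            for (int attempt = 0; attempt < retriesPerServer; ++attempt) {
                socket.writeDatagram(request, server, port);
                if (socket.waitForReadyRead(timeoutMs)) {
                    QByteArray reply;
                    reply.resize(int(socket.pendingDatagramSize()));
                    socket.readDatagram(reply.data(), reply.size());
                    return reply; // caller must still validate the reply
                }
                // Timed out: the request or the response was lost (or is
                // just slow); retry, then fall through to the next server.
            }
        }
        return QByteArray(); // every server timed out
    }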
