For a project of mine I am trying to manually trigger retransmissions for certain packets in a TCP flow.
So far I have found NetFilterQueue to be a good way to do this, but now I am stuck.
I have a working version that handles every packet, so I can modify/accept/drop it.
So this part is working.
At the moment, when I find a packet that I want retransmitted, I set a mark and use NF_REPEAT to reinject the packet into the queue.
I thought that by doing so I would send the packet out and push a copy of it through the queue again, where it would be NF_ACCEPT'ed because of its mark, effectively retransmitting that packet.
But it turns out this does not work as I hoped it would...
So my question is:
Is there a way I could use NFQueue to accomplish my retransmissions?
Or is there another way to do such a thing?
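For concreteness, here is a minimal sketch of the mark + NF_REPEAT setup described above, using the Python netfilterqueue bindings (the queue number, mark value, iptables rules, and the should_retransmit predicate are all assumptions; nothing above names a binding). One thing worth noting: NF_REPEAT only reinjects the same packet at the start of the hook, it does not emit a copy on the wire first, which may be why no duplicate ever shows up.

```python
# Sketch of the mark + NF_REPEAT setup (assumed rules, hypothetical predicate).
# Assumed iptables rules: accept already-marked packets before they reach the
# queue rule, queue everything else on this flow:
#   iptables -A OUTPUT -p tcp --dport 5000 -m mark --mark 1 -j ACCEPT
#   iptables -A OUTPUT -p tcp --dport 5000 -j NFQUEUE --queue-num 0
from netfilterqueue import NetfilterQueue

def should_retransmit(raw_packet):
    # Hypothetical predicate: inspect the raw IP/TCP bytes here.
    return False

def handle(pkt):
    if should_retransmit(pkt.get_payload()):
        pkt.set_mark(1)   # so the ACCEPT rule matches when it comes around again
        pkt.repeat()      # verdict NF_REPEAT: reinject at the start of the hook
    else:
        pkt.accept()

nfqueue = NetfilterQueue()
nfqueue.bind(0, handle)   # queue-num 0, matching the assumed rule above
try:
    nfqueue.run()
finally:
    nfqueue.unbind()
```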
The description of the question goes like this:
Someone recorded all the IP packets of a TCP connection between a client and a server for 30 minutes. In the record, he didn't find any packet that was ACK-only. How is this possible?
The claimed possible solution: for the whole recording time, the server sent data to the client, which the client processed, but the client never sent any data back to the server.
I am having trouble understanding how this can be possible.
From what I see, since the client didn't send any data to the server, and there weren't any ACK-only packets in the record, the server never got an ACK from the client. Logically, I would think that with no ACKs arriving, the server would keep retransmitting. Also, since the server hears nothing from the client for 30 minutes, which seems like a long time to me, it should conclude that the connection is broken and close it (maybe even sending an ACK-only packet itself, but I am not sure about that).
Moreover, from what I know, when using keepalive the sender gets an ACK-only packet from its peer.
Can anyone help me understand this?
Help would be appreciated.
Perhaps more details would be helpful here. What kind of server/client? What protocol is being used and for what purpose?
Is the connection running as expected, and this is just strange traffic you are trying to understand, or is the connection timing out?
Some devices or software can be set to a "No ACK" state, which means that no ACKs are sent and none are expected.
One reason for this is usually bandwidth. ACKs do consume bandwidth, and there are cases where bandwidth is at such a premium that losing packets is preferable to spending bandwidth on ACKs. That type of traffic would probably do better over UDP, but that is a completely different topic.
Another reason is that you don't care if packets are lost. Again, this would be better off as UDP instead of TCP, but someone working within strange parameters may be bending the rules about what type of traffic to advertise in order to get around some issue.
Hopefully this is helpful, but if it does not apply, then please put in more details about the connection so that we can better understand what may be happening.
Web games are forced to use TCP.
But with real-time constraints, TCP's head-of-line blocking behavior is absurd when you don't care about old packets.
While I'm aware that there's definitely nothing that we can do on the client side, I'm wondering if there is a solution on the server side.
Indeed, on the server you get packets in order and wait miserably if packet t+42 has been lost, even though packets t+43 and t+44 may already be sitting nicely in your receive buffer.
Since we are talking about data that is already on the machine, technically it should be possible to retrieve it.
So does anyone have an idea on how to perform that feat?
How to save this precious data from these pesky kernel space daemons?
TCP guarantees that the data arrives in order and retransmits lost packets (see the TCP man page).
Given this, there is only one way to achieve the results you want under your stated constraints, and that is to hack the TCP protocol on the server side (assuming you cannot control the client WebSocket behavior). The simplest approach, relatively speaking, would be to open a raw socket, implement your own minimal TCP handshake (SYN-ACK when the client SYNs), then read and write from the socket while managing your own TCP headers. Your custom implementation would need to keep track of received sequence numbers and acknowledge all of those you want the client to forget about.
You might be able to reduce the effort by making this program a proxy in front of your original server.
Example of TCP raw socket here.
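For illustration only, here is a rough sketch of that idea using scapy (my choice of tool; the linked example may use plain raw sockets instead). It completes the handshake, then ACKs whatever arrives immediately, even past gaps, so the client never retransmits stale segments. The port is an assumption, and the kernel must be stopped from RST-ing connections it knows nothing about:

```python
# Rough sketch: user-space TCP that acknowledges everything immediately.
# The kernel has no socket on this port, so suppress its RST replies first:
#   iptables -A OUTPUT -p tcp --sport 8000 --tcp-flags RST RST -j DROP
from scapy.all import IP, TCP, send, sniff

PORT = 8000    # assumed game port
ISN = 1000     # fixed initial sequence number; fine for a sketch

def process(data):
    pass       # placeholder: hand the payload to the game logic

def reply(pkt):
    ip, tcp = pkt[IP], pkt[TCP]
    if tcp.flags & 0x02:                        # SYN: complete the handshake
        send(IP(src=ip.dst, dst=ip.src) /
             TCP(sport=PORT, dport=tcp.sport, flags="SA",
                 seq=ISN, ack=tcp.seq + 1), verbose=False)
    elif len(tcp.payload):                      # data: ACK it unconditionally,
        size = len(bytes(tcp.payload))          # even if earlier segments were
        send(IP(src=ip.dst, dst=ip.src) /       # lost (a real version would
             TCP(sport=PORT, dport=tcp.sport,   # track the highest seq seen)
                 flags="A", seq=ISN + 1, ack=tcp.seq + size), verbose=False)
        process(bytes(tcp.payload))

sniff(filter=f"tcp and dst port {PORT}", prn=reply)
```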
Say I am capturing data from TCP using the recv function in C++.
I might sound stupid, but I would like to know: would I get any speedup if I captured the packets with a simple sniffer (maybe using pcap) and processed them there?
Thanks
No, it probably won't speed up anything. I'd rather expect it to be even slower and more memory-consuming (overhead, overhead, overhead...).
Additionally, it won't work at all.
No payload will be exchanged if there isn't a real client that establishes a proper connection with the peer.
If there is a connection and you're relying only on the sniffer without properly receiving the payload in the client, the whole transfer will stop after some amount of data (because the receive buffer is full, and the sender won't send any more until there is space again).
That means you must call recv, which makes sniffing useless in the first place.
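The stall described above is ordinary TCP flow control, and it is easy to demonstrate. In this minimal sketch (loopback address, port, chunk size, and timeout are arbitrary), the receiver accepts the connection but never calls recv, and the sender's writes eventually block:

```python
# Demonstrates the stall: the receiver never recv()s, so the sender's send()
# blocks once the peer's receive buffer and the local send buffer are full.
import socket, threading, time

HOST, PORT = "127.0.0.1", 9999    # arbitrary loopback endpoint

srv = socket.socket()
srv.bind((HOST, PORT))
srv.listen(1)

def silent_receiver():
    conn, _ = srv.accept()
    time.sleep(60)                # hold the connection open, never recv()

threading.Thread(target=silent_receiver, daemon=True).start()

s = socket.create_connection((HOST, PORT))
s.settimeout(5)                   # give up once send() has blocked for 5 s
sent = 0
try:
    while True:
        sent += s.send(b"x" * 4096)
except socket.timeout:
    print(f"sender stalled after {sent} bytes")  # window closed, buffers full
```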
I have a server for a very competitive game which involves money. For the game to be fair, every client must have the same ping. I obviously can't make everyone have a short ping, so the only solution is to fix it high: 200ms, for example, is acceptable.
The problem is: how do I force a ping of 200ms? For that to work, I'd have to know how much to delay sending the packets, and for that, I'd have to know the client's ping. If the ping is 60ms, I could just add a 140ms delay to the data I send. The problem is that I can only learn the ping by asking for it, and a client can lie, telling me his ping is higher than it really is and making me send the packets earlier.
How to solve that problem?
You can't fix the ping; it can change all the time. If your client starts, e.g., a torrent download during your ping discovery, it will significantly affect the result.
But maybe you don't have to delay sending packets. Maybe it's enough to delay receiving or analyzing them? That might be easier: you just have to know when you sent the corresponding packet.
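For what it's worth, a minimal sketch of that delay-on-receive idea (the 200ms budget, the request-id bookkeeping, and the blocking sleep are all assumptions for illustration):

```python
# Sketch: equalize every client's effective round trip by delaying the
# *processing* of a reply until a fixed budget has elapsed since we sent
# the packet it answers. Assumes replies can be matched to requests by id.
import time

TARGET_RTT = 0.200                # assumed fairness budget: 200 ms
send_times = {}                   # request id -> monotonic send timestamp

def on_send(request_id):
    send_times[request_id] = time.monotonic()

def on_receive(request_id, payload, process):
    elapsed = time.monotonic() - send_times.pop(request_id)
    time.sleep(max(0.0, TARGET_RTT - elapsed))  # a real server would schedule
    process(payload)              # this instead of blocking a thread
```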
There is also another option: many online shooters use a feature called 'unlagged'. The client sends the coordinates of his shot, and the server, based on the current ping (let's say 80ms), calculates whether the target would have been hit 80ms ago. If so, the target is considered hit. This of course introduces some artifacts, like the victim getting shot through a wall.
By 'ping' I assume you mean 'network latency', which is variable and entirely outside your control. Your question doesn't make sense.
I have an application that consists of two phases: a queuing phase and a chatting phase.
The chatting uses UDP (it's a Flash app).
So before the user enters the queue phase I want to check if UDP traffic is possible.
I could do this either in the ASP.NET app (which wraps the Flash app) or in the Flash app itself.
I'm not sure how to do this in either of them.
My initial thought is to connect via UDP to some tiny web service on a server, but is there an easier way of doing it?
It's not the computer I'm worried about, it's the router that I want to check.
Unfortunately, the only way to know for sure whether a UDP datagram can be routed from one point to another is to try it and see what happens. Send a test datagram to the other side and have that side send back a response. If you don't get a response within a second or two, try again. Repeat a couple of times. If you still get nothing back, then you probably don't have connectivity at that moment.
Testing to a different IP address, or even a different port, won't really help: you might have connectivity to one location but not another.
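A minimal sketch of that probe (the echo host, port, and retry counts are assumptions, and a cooperating echo service has to exist on the far side, ideally at the exact address and port the real traffic will use):

```python
# Probe: send a datagram, wait briefly for an echo, retry a couple of times.
import socket

ECHO_HOST, ECHO_PORT = "echo.example.com", 5004   # hypothetical echo service

def udp_reachable(retries=3, timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    for _ in range(retries):
        try:
            s.sendto(b"probe", (ECHO_HOST, ECHO_PORT))
            s.recvfrom(1500)
            return True           # got an echo back: UDP works right now
        except socket.timeout:
            continue              # lost in either direction; try again
    return False                  # no response: probably no connectivity

print("UDP ok" if udp_reachable() else "UDP blocked (at the moment)")
```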
Also remember all the caveats about UDP:
Anything you send could disappear at any time, so verify receipt and be prepared to repeat
Payloads larger than 1400 bytes are much more likely to disappear (see "IP fragmentation")
If you must send more than a few packets, then you must control your data rate: too fast and packets will be dropped, and the definition of "too fast" will constantly change.
Making UDP work is a lot of work, so consider whether you really need it.