I have a server for a very competitive game which involves money. In order for the game to be fair, every client must have the same ping. I can't, obviously, make everyone have a short ping. So the only solution is to fix it high: for example, 200ms is acceptable.
The problem is, how do I force the ping to be 200ms? For that to work, I'd have to know how much to delay the packets I send - and, for that, I'd have to know the client's ping. So, if the ping is 60ms, I could just add a 140ms delay before sending the data. The problem is: I can only learn the ping by asking for it, and a client can lie, telling me his ping is higher than it really is and making me send the packets earlier.
How to solve that problem?
You can't fix the ping; it can change all the time. If your client starts, e.g., a torrent download during your ping measurement, it will significantly affect the result.
But maybe you don't have to delay sending packets. Maybe it's enough to delay receiving or processing them? That might be easier: you just have to know when you sent the corresponding packet.
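To make that concrete, here's a rough Python sketch of delaying processing rather than sending. The 200ms figure, the function names, and the idea of stamping each outgoing update with its send time are all just illustrative assumptions, not a finished design:

    import heapq
    import itertools
    import time

    FIXED_DELAY = 0.200   # hypothetical target: apply every input 200 ms after the
                          # server update it responds to was sent
    _seq = itertools.count()
    pending = []          # min-heap of (apply_at, seq, client_id, action)

    def on_client_input(client_id, action, update_sent_at):
        # Rather than trusting a client-reported ping, delay *processing* so the
        # total send -> act round trip is normalized to FIXED_DELAY for everyone.
        heapq.heappush(pending, (update_sent_at + FIXED_DELAY, next(_seq), client_id, action))

    def process_due_inputs(apply):
        # Call this from the server's main loop; it applies every input whose time has come.
        now = time.monotonic()
        while pending and pending[0][0] <= now:
            _, _, client_id, action = heapq.heappop(pending)
            apply(client_id, action)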
There is also another option: many online shooters use a feature called 'unlagged' (lag compensation). The client sends the coordinates of his shot, and the server, based on the current ping (let's say 80ms), calculates whether the target would have been hit 80ms ago. If so, the target is considered hit. But this of course introduces some artifacts, like the victim getting shot after having already moved behind a wall.
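A hypothetical sketch of that rewind idea, again in Python, with made-up names, 2D positions, and a one-second history window:

    import time

    HISTORY_SECONDS = 1.0
    position_history = {}  # player_id -> list of (timestamp, (x, y)), oldest first

    def record_position(player_id, pos):
        now = time.monotonic()
        hist = position_history.setdefault(player_id, [])
        hist.append((now, pos))
        while hist and hist[0][0] < now - HISTORY_SECONDS:
            hist.pop(0)  # keep only the last second of history

    def validate_hit(shooter_ping, target_id, shot_pos, hit_radius=1.0):
        # Was the target within hit_radius of shot_pos, shooter_ping seconds ago?
        when = time.monotonic() - shooter_ping
        hist = position_history.get(target_id, [])
        if not hist:
            return False
        ts, (x, y) = min(hist, key=lambda entry: abs(entry[0] - when))  # closest sample to the rewound time
        dx, dy = x - shot_pos[0], y - shot_pos[1]
        return dx * dx + dy * dy <= hit_radius * hit_radius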
By 'ping' I assume you mean 'network latency', which is variable and entirely outside your control. Your question doesn't make sense.
The description of the question goes like this:
Someone recorded all the IP packets of a TCP connection between a client and a server for 30 minutes. In the record, he didn't find any packet that was ACK-only. How is this possible?
What is claimed to be a possible solution: For all the record time, the server sent data to the client, which the client processed, but he didn't send any data back to the server.
I am having trouble understanding how this can be possible.
From what I see, since the client didn't send any data to the server and there weren't any ACK-only packets in the record, the server didn't get any ACK from the client. Logically, I would think that since no ACK is received by the server, it will keep retransmitting. But also, since the server doesn't get anything from the client for 30 minutes, which seems like a long time to me, it will conclude that the connection is broken and close it (maybe even send an ACK-only packet, but I am not sure about that).
Moreover, from what I know, when using keepalive, the sender gets an ACK-only packet from its peer.
Can anyone help me understand this?
Help would be appreciated
Perhaps more details would be helpful here. What kind of server/client? What protocol is being used and for what purpose?
Is the connection running as expected and this is just viewed as strange traffic you are trying to understand or is the connection timing out?
Some devices or pieces of software can be set to a "No ACK" mode, which means that no ACKs are sent and none are expected.
One reason for this is usually bandwidth. ACKs do consume bandwidth, and there are cases where bandwidth is at such a premium that losing packets is preferable to spending bandwidth on ACKs. That type of traffic would probably do better with a UDP protocol, but that is a completely different topic.
Another reason is that you don't care if packets are lost. Again, this would be better off as UDP instead of TCP, but someone working within strange parameters may be bending the rules about what type of traffic they advertise in order to get around some issue.
Hopefully this is helpful, but if it does not apply, then please put in more details about the connection so that we can better understand what may be happening.
While reading the assignment questions in "Data Communication and Networking" by Behrouz Forouzan, I came across one asking whether using UDP for file transfer has any adverse effects, keeping the possibility of a process crash in mind.
The given solution says: process A asks server X for the contents of a file and crashes soon after sending the request. Another process, B, then comes up on the same port on the same machine (giving it the same socket address) and asks the same server for another file, but that request is lost. The server therefore knows nothing about A crashing or about B's request being lost, so it sends the contents of the file A asked for to B.
Why doesn't this problem occur in a video-on-demand service like YouTube or the like?
One of the closest answers I got is this, but it doesn't seem to address my problem:
When is it appropriate to use UDP instead of TCP?
UPDATE: For people who would like to have a read of the question given in the book, I found an online version of the required part, please have a look at the 8th question of the PDF:
http://ceng334.cankaya.edu.tr/uploads/files/file/network%20sample.pdf
In theory the problem could happen, but in real life? Not a chance.
Let's say a user wants to stream a video from YouTube with a browser.
Browser must crash - realistically does not happen too often.
New browser instance takes the exact same source UDP port - virtually never happens.
The user decides to look at a different video - makes no sense.
While all this happens, server side does not time out - I don't think so.
This is like arguing that TCP should be used because a packet might get dropped on the wire when two computers are connected back to back with one meter Ethernet cable.
UDP packets are fire-and-forget: once a packet is gone, you will never know whether it was received or not, so delivery is not guaranteed. I have learned that DNS mostly uses UDP to send requests. How is the loss of a DNS request sent over UDP handled?
If no response packet is received within a certain amount of time, the request is re-sent. Dan Bernstein suggests that most clients will retry up to four times.
There are two ways to decide that a UDP datagram has been lost. Neither is completely reliable.
The most common is a timeout. You send a message and wait for a response. If you don't get a response after some amount of time, you assume that either the message or the response were lost. At that point you can either try again, or give up. It is also possible that the message or response is just taking a really long time to get through the network, so you must account for duplicates. Note that all packet-switched communication, including TCP, works this way. TCP just hides the details for you.
Another method is to look for ICMP messages telling you that a packet has been dropped. For example, ICMP_UNREACH_PORT, ICMP_UNREACH_HOST, or ICMP_UNREACH_HOST_PROHIB. Not only are these messages rarely sent and subject to loss themselves, but you can sometimes receive them even when a message did get through successfully. At best, if you get an ICMP message you can think of it as a suggestion of what might have happened.
Most DNS implementations use a short timeout because duplication is no big deal. After a few repeats to one DNS server, it will try another one (assuming multiple servers are available). Most implementations will also cache information about which servers are responding and which are not.
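As a concrete illustration of that timeout-and-retry approach, here is a rough Python sketch. The server addresses (documentation-range placeholders), timeout, and retry count are made-up values, and response parsing is omitted entirely:

    import random
    import socket
    import struct

    def build_query(name):
        # Minimal DNS query for an A record: 12-byte header plus one question.
        tid = random.randint(0, 0xFFFF)
        header = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", 1, 1)          # QTYPE=A, QCLASS=IN

    def resolve(name, servers=("192.0.2.1", "192.0.2.2"), timeout=2.0, retries=4):
        query = build_query(name)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for attempt in range(retries):
                server = servers[attempt % len(servers)]  # rotate to another server on failure
                sock.sendto(query, (server, 53))
                try:
                    data, _ = sock.recvfrom(512)
                    return data                           # raw response; parsing omitted
                except socket.timeout:
                    continue                              # query or reply lost (or slow): retry
        finally:
            sock.close()
        return None                                       # gave up after `retries` attempts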
I have this application that consists of two phases: a queuing phase and a chatting phase.
The chatting uses UDP (a flash-app).
So before the user enters the queue phase I want to check if UDP traffic is possible.
I could do this both in the ASP.NET app (that wraps the flash-app) or in the flash-app.
I'm not sure on how to do this in either of them.
My initial thought is to connect via UDP to some tiny web service on a server, but is there an easier way of doing it?
It's not the computer I'm worried about, it's the router that I want to check.
Unfortunately, the only way to know for sure whether a UDP datagram can be routed from one point to another is to try it and see what happens. Send a test datagram to the other side and have that side send back a response. If you don't get a response within a second or two, try again. Repeat a couple of times. If you still get nothing back, then you probably don't have connectivity at that moment.
Testing to a different IP address, or even a different port, won't really help: you might have connectivity to one location but not another.
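A minimal Python sketch of such a test might look like the following; the probe payload, port, timeout, and retry count are arbitrary, and it assumes you control a matching echo responder on the server side:

    import socket

    def udp_reachable(host, port, attempts=3, timeout=2.0):
        # Send a test datagram and wait for any reply; retry a few times.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for _ in range(attempts):
                sock.sendto(b"udp-test", (host, port))
                try:
                    sock.recvfrom(1024)
                    return True          # got something back, so UDP works right now
                except socket.timeout:
                    continue             # probe or reply lost: try again
        finally:
            sock.close()
        return False

    def echo_responder(port):
        # The matching server-side responder: reflect whatever arrives.
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(("", port))
        while True:
            data, addr = srv.recvfrom(1024)
            srv.sendto(data, addr)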
Also remember all the caveats about UDP:
Anything you send could disappear at any time, so verify receipt and be prepared to repeat
Payloads larger than 1400 bytes are much more likely to disappear (see "IP fragmentation")
If you must send more than a few packets, then you must control your data rate: too fast and packets will be dropped, and the definition of "too fast" will constantly change.
Making UDP work is a lot of work, so consider if you really need it.
How long can I expect a client/server TCP connection to last in the wild?
I want it to stay permanently connected, but things happen, so the client will have to reconnect. At what point do I say that there's a problem in the code rather than there's a problem with some external equipment?
I agree with Zan Lynx. There's no guarantee, but you can keep a connection alive almost indefinitely by sending data over it, assuming there are no connectivity or bandwidth issues.
Generally I've gone for the application-level keep-alive approach, although that's usually been because it was in the client spec, so I've had to do it. But just send some short piece of data every minute or two, to which you expect some sort of acknowledgement.
Whether you count one failure to acknowledge as the connection having failed is up to you. Generally that's what I have done in the past, although in one case I had to wait for three failed responses in a row before dropping the connection, because the app at the other end was extremely flaky about responding to "are you there?" requests.
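A rough sketch of that kind of application-level keep-alive loop, in Python; the probe bytes, interval, and failure threshold are all made up:

    import socket
    import time

    def keepalive_loop(sock, interval=60.0, max_failures=3):
        # Send a short probe every `interval` seconds and expect some reply back;
        # declare the connection dead after `max_failures` consecutive misses.
        failures = 0
        sock.settimeout(5.0)
        while failures < max_failures:
            try:
                sock.sendall(b"PING\n")       # the short piece of data
                reply = sock.recv(16)         # the acknowledgement we expect
                failures = 0 if reply else failures + 1  # empty read: peer closed the socket
            except (socket.timeout, OSError):
                failures += 1                 # no ack in time: count it as a miss
            time.sleep(interval)
        return False                          # caller should now reconnect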
If the connection fails, which at some point it probably will, even with machines on the same network, then just try to reestablish it. If that fails a set number of times, then you have a problem. If your connection persistently fails after it's been connected for a while, then again, you have a problem. In both cases it's most likely some network issue rather than your code, or maybe a problem with the TCP/IP stack on your machine (it has been known: I encountered issues with this on an old version of QNX--it'd just randomly fall over). Having said that, you might have a software problem, and the only way to know for sure is often to attach a debugger or to get some logging in there. E.g. if you can always connect successfully, but after a time you stop getting ACKs, even after reconnecting, then maybe your server is deadlocking or getting stuck in a loop or something.
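A minimal sketch of that reconnect policy (the host, port, attempt count, and delay are placeholders):

    import socket
    import time

    def connect_with_retries(host, port, max_attempts=5, delay=5.0):
        # Try to reestablish the connection a set number of times before
        # concluding that there is a real problem.
        for attempt in range(1, max_attempts + 1):
            try:
                return socket.create_connection((host, port), timeout=10.0)
            except OSError:
                if attempt == max_attempts:
                    raise                # persistent failure: investigate, don't just retry
                time.sleep(delay)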
What's really useful is to set up a series of long-running tests under a variety of load conditions, from just sending the keep-alive "are you there?"/ack requests and responses to absolutely battering the server. This will generally give you more confidence about your software components, and it can be really useful in shaking out some really weird problems which won't necessarily cause a problem with your connection, although they might result in problems with the transactions taking place. For example, I was once writing a telecoms application server that provided services such as number translation, and we'd just leave it running for days at a time. The thing was that when Saturday came round, for the whole day, it would reject every call request that came in, which amounted to millions of calls, and we had no idea why. It turned out to be a single typo in some date conversion code that only caused a problem on Saturdays.
Hope that helps.
I think the most important idea here is theory vs. practice.
The original theory was that the connections had no lifetimes. If you had a connection, it stayed open forever, even if there was no traffic, until an event caused it to close.
The new theory is that most OS releases have turned on the keep-alive timer. This means that connections will last forever, as long as the system on the other end responds to an occasional TCP-level exchange.
In reality, many connections will be terminated after time, with a variety of criteria and situations.
One really good example: the remote client is using DHCP, the lease expires, and the IP address changes.
Another example is firewalls, which seem to be increasingly intelligent, can identify keep-alive traffic vs. real data, and close connections based on various higher-level criteria, especially idle time.
How you want to implement reconnect logic depends a lot on your architecture, the working environment, and your performance goals.
It shouldn't really matter, you should design your code to automatically reconnect if that is the desired behavior.
There really is no way to tell. There is nothing inherent to TCP that would cause the connection to just drop after a certain amount of time. Someone on a reliable connection could have years of uptime, while someone on a different connection could have to reconnect every 5 minutes. There is no way to tell or even guess.
You will need some data going over the connection periodically to keep it alive - many OS's or firewalls will drop an inactive connection.
Pick a value. One drop every hour is probably fine. Ten unexpected connection drops in 5 minutes probably indicates a problem.
TCP connections will generally last about two hours without any traffic. Either end can send keep-alive packets, which are, I think, just an ACK on the last received packet. This can usually be set per socket or by default on every TCP connection.
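In Python, for instance, the per-socket setting looks roughly like this; the idle/interval/count values are arbitrary, and the TCP_KEEP* options are platform-specific (hence the hasattr guard):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # enable keep-alive probes

    # Linux-specific knobs; other platforms expose different (or no) controls.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)  # idle seconds before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before the connection drops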
An application-level keep-alive is also possible. For a telnet-style protocol like FTP, SMTP, POP or IMAP, that means something like sending a carriage return and newline and getting back a command prompt.