Periodic network latency peaks on mobile devices

I was receiving real-time UDP data packets from the server roughly every 200ms, but from time to time packets would arrive with a delay of up to a couple of seconds.
I set my client-side UDP pings to a 200ms interval, and it made my network a lot more responsive, but that is all learned from experimenting.
I don't have a solid understanding of why this worked.
I would like to understand how power conservation algorithms work on mobile devices because I think that is why this happened.
After some investigation, the best resource I have found so far is
http://www.crittercism.com/2014/03/200ms-the-magical-number-for-faster-response-times/
but it offers few details on the topic and no references for further reading.

There are probably two ways to look at this question:
Why is '200ms' so special?
Users can tell when something takes longer than about 200-300ms. This might include not just the transmission time, but also the time the mobile device spends processing that data. Practically speaking, this is an important balancing point: assuming new data is continuously available and that the network could send an update at any time, it might be a waste of resources to update any faster than about every 200ms.
https://medium.com/appdiff/magic-numbers-for-app-performance-45c7e3bc9e46
Why does adjusting ping frequency seem to have an effect?
Many energy conservation algorithms do two things:
Send/request updates only when necessary, but...
Periodically let the network know you're still around
This defines another balance point, since seconds may pass between 'necessary' updates, and the network might check that you're still there before that point. By adjusting the ping, you might be indicating that "I'm still here, and want updates!" every 200ms.
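To make that concrete, here is a minimal sketch of the kind of client-side keepalive loop described above; the server address and payload are placeholders, and the 200ms interval is simply the value the question found by experimentation:

```python
import socket
import time

# Hypothetical server endpoint; replace with your own.
SERVER_ADDR = ("server.example.com", 9000)
PING_INTERVAL = 0.2  # 200ms, the experimentally found sweet spot

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)

while True:
    # Tell the network (and any radio power management in between)
    # "I'm still here, and I want updates!"
    sock.sendto(b"ping", SERVER_ADDR)
    try:
        data, _ = sock.recvfrom(2048)  # drain any pending real-time update
        # ... handle the update ...
    except BlockingIOError:
        pass  # nothing waiting right now
    time.sleep(PING_INTERVAL)
```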
https://en.wikipedia.org/wiki/Nagle%27s_algorithm
This might not explain the effect you're seeing, but these would be some factors to consider.

Related

Compensating for jitter

I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say whether this is an application you are developing yourself or one you are simply using; you will obviously have more control over the former, so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 milliseconds before users perceive an issue.
Jitter is also important: in simple terms, it is the variation in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20ms, while the delay between packet 2 and packet 3 may be 30ms. Having a jitter buffer of 40ms means your application will wait up to 40ms between packets, so it will not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually ignored, hence there is a relationship between jitter and the effective packet loss value for your connection. Packet loss also typically impacts users' perception of VoIP quality; different codecs have different tolerances, but a common target might be to keep it below 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers can be either static or dynamic (adaptive); in either case, the bigger they get, the greater the chance they introduce delay into the call, and you are back to the delay issue above. A typical jitter buffer might be between 20 and 50ms, either set statically or adapting automatically based on network conditions.
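For illustration, here is a deliberately simplified sketch of a static jitter buffer along the lines described above; the class shape and the 40ms default come from the example figures, not from any particular VoIP stack:

```python
import heapq

class JitterBuffer:
    """Static jitter buffer sketch with a fixed playout delay.

    Packets carry a sender timestamp; each is scheduled for playout at
    (stream origin + timestamp + fixed delay). A packet arriving after its
    slot has passed is dropped, which is how jitter turns into effective
    packet loss.
    """
    def __init__(self, delay_ms=40):
        self.delay = delay_ms / 1000.0
        self.base = None    # local clock origin of the stream
        self.heap = []      # (playout_time, timestamp, payload)
        self.dropped = 0

    def push(self, timestamp, payload, now):
        if self.base is None:
            self.base = now - timestamp       # anchor on the first packet
        playout = self.base + timestamp + self.delay
        if playout <= now:                    # more than 40ms behind schedule
            self.dropped += 1
            return False
        heapq.heappush(self.heap, (playout, timestamp, payload))
        return True

    def pop_due(self, now):
        """Return payloads whose playout time has arrived, in order."""
        due = []
        while self.heap and self.heap[0][0] <= now:
            due.append(heapq.heappop(self.heap)[2])
        return due
```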
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common online connection speed tests, as many have a specific VoIP test that will give you an idea of whether your local connection is good enough for VoIP (although bear in mind that these tests only reflect conditions at the exact time you run them).

Strategy for hit detection over a net connection, like Quake or other FPS games

I'm learning about the various networking technologies, specifically the protocols UDP and TCP.
I've read numerous times that games like Quake use UDP because, "it doesn't matter if you miss a position update packet for a missile or the like, because the next packet will put the missile where it needs to be."
This thought process is all well and good during the flight path of an object, but it breaks down when the missile reaches its target. If one computer receives the message that the missile reached its intended target, but that packet gets dropped on a different computer, that would cause some trouble.
Clearly that type of thing doesn't really happen in games like Quake, so what strategy are they using to make sure that everyone is in sync with instantaneous type events, such as a collision?
You've identified two distinct kinds of information:
updates that can be safely missed, because the information they carry will be provided in the next update;
updates that can't be missed, because the information they carry is not part of the next regular update.
You're right - and what the games typically do is to separate out those two kinds of messages within their protocol, and require acknowledgements and retransmissions for the second type, but not for the first type. (If the underlying IP protocol is UDP, then these acknowledgements / retransmissions need to be provided at a higher layer).
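As a rough sketch of that split, here is a tiny reliability layer over UDP: snapshots are fire-and-forget, while critical events are resent until acknowledged. The framing bytes, names, and resend interval are all invented for illustration, not any particular engine's protocol:

```python
import time

RESEND_INTERVAL = 0.1  # seconds between retransmissions (assumed value)

class ReliableChannel:
    def __init__(self, send_fn):
        self.send_fn = send_fn   # e.g. lambda b: sock.sendto(b, addr)
        self.next_id = 0
        self.pending = {}        # event_id -> (payload, last_sent_time)

    def send_unreliable(self, payload):
        # Type 1: safe to miss; the next update supersedes it.
        self.send_fn(b"U" + payload)

    def send_reliable(self, payload):
        # Type 2: must arrive; queue it for (re)transmission until acked.
        self.pending[self.next_id] = (payload, 0.0)
        self.next_id += 1

    def on_ack(self, event_id):
        self.pending.pop(event_id, None)   # peer confirmed receipt

    def tick(self):
        # Call regularly: retransmit anything still unacknowledged.
        now = time.monotonic()
        for event_id, (payload, last) in list(self.pending.items()):
            if now - last >= RESEND_INTERVAL:
                self.send_fn(b"R" + event_id.to_bytes(4, "big") + payload)
                self.pending[event_id] = (payload, now)
```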
When you say that "clearly doesn't happen", you clearly haven't played games on a lossy connection. A popular trick amongst the console crowd is to put a switch on the receive line of your ethernet connection so you can make your console temporarily stop receiving packets, so everybody is nice and still for you to shoot them all.
The reason that could happen is that the console that did the shooting decides whether it was a hit or not, and relays that information to the opponent. That way, out-of-sync or laggy hit data can still be decided deterministically. Even if the remote end didn't think the shot was a hit, it should be close enough that it doesn't look horribly wrong. This works reasonably, except for the exploit I mentioned above; if you assume your players are not cheating, the approach holds up quite well.
I'm no expert, but there seem to be two approaches you can take: let the client decide whether it's a hit (which allows for cheating), or let the server decide.
With the former, if you shoot a bullet, and it looks like a hit, it will count as a hit. There may be a bit of a delay before everyone else receives this data though (i.e., you may hit someone, but they'll still be able to play for half a second, and then drop dead).
With the latter, as long as the server receives the information that you shot a bullet, it can use whatever positions it currently has to determine whether there was a hit, then send that data back to you. This means neither you nor the victim will know whether you hit until that data comes back.
I guess to "smooth" it out, you let the client decide for itself, and then if the server chimes in and says "no, that didn't happen", it corrects. That could mean players popping back to life, but it would probably make more sense to set their health to 0 until you get a definitive answer, so you don't have weird graphical glitches going on.
As for ensuring the server/client has received the event... I guess there are two more approaches: either have the server/client respond "Yes, I received the event", or forget about events altogether and think of everything in terms of state. There is no "hit" event; there is just HP before and after. Sooner or later, the client will receive the most up-to-date state.
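A minimal sketch of that state-based idea, assuming sequence-numbered snapshots and a made-up HP field; lost packets simply don't matter, because the newest snapshot always wins:

```python
latest_seq = -1
player_hp = 100

def on_snapshot(seq, hp):
    """Apply a state snapshot; stale or duplicate packets are ignored."""
    global latest_seq, player_hp
    if seq <= latest_seq:
        return            # an old packet arriving late: safe to drop
    latest_seq = seq
    player_hp = hp        # no "hit" event, just HP before and after

# Even if snapshots 10-14 are lost, snapshot 15 carries the current truth.
on_snapshot(9, 80)
on_snapshot(15, 0)
```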

How to synchronize media playback over an unreliable network?

I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it even easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream into packets and timestamp each packet with "play later at time T", where T is, for example, 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network to all computers in the chorus. The computers will all play the sound at the same time because their application clocks are synced.
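A minimal sketch of that timestamping scheme, assuming the clocks are already synchronized; play_audio() is a stand-in for whatever audio API you use:

```python
import time

BUFFER_AHEAD = 0.1  # 50-100ms as suggested above; larger on glitchy networks

def play_audio(chunk):
    print("playing", len(chunk), "samples")  # placeholder for a real audio call

def make_packet(chunk):
    # Sender: stamp each chunk with a playout deadline on the shared clock.
    return (time.time() + BUFFER_AHEAD, chunk)

def on_receive(play_at, chunk):
    # Receiver: hold the chunk until its deadline (clocks assumed synced).
    delay = play_at - time.time()
    if delay > 0:
        time.sleep(delay)
        play_audio(chunk)
    # else: the packet arrived too late; drop it rather than play out of sync
```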
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
synchronize the machines with manual beat-style intervention
synchronize the machines with a network-clock sync protocol
for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
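The bookkeeping behind step 3 is simple arithmetic; a sketch with made-up numbers:

```python
# Offsets measured by the two sync methods (illustrative values only).
manual_offset  = 0.1837  # seconds, from the beat-style sync (step 1)
network_offset = 0.1512  # seconds, from the network-clock sync (step 2)

# Step 3: the difference is this machine's OS/driver/soundcard latency.
audio_latency = manual_offset - network_offset   # ~0.0325s, stored per machine

# Later, when the network changes, only the network sync (step 2) is redone:
new_network_offset = 0.1609
effective_offset = new_network_offset + audio_latency  # no manual resync needed
```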
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
The synchronization is relative to the listener's position relative to each speaker. I don't think the reliability of the network has as much to do with this synchronization as the content of the audio stream does. To synchronize, you need to find the distance between each speaker and the listener, then take the difference between each of those values and the value for the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1ms. This ensures that the audio streams reach the listener at the same time. This all assumes an open area, as any objects in proximity will generate reflections of the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
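That per-speaker delay calculation, sketched with made-up distances:

```python
# Sound covers roughly 1.1 feet per millisecond, so each closer speaker is
# held back until every stream reaches the listener together.
SPEED_OF_SOUND_FT_PER_MS = 1.1

distances_ft = {"left": 6.6, "right": 11.0}   # speaker -> listener distance
farthest = max(distances_ft.values())

delays_ms = {
    name: (farthest - d) / SPEED_OF_SOUND_FT_PER_MS
    for name, d in distances_ft.items()
}
# "left" is 4.4ft closer, so it is held back ~4ms; "right" plays immediately.
print(delays_ms)
```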

Why can't I get equal upload and download speeds on a symmetrical channel?

I'm assigned to a project where my code is supposed to perform uploads and downloads of some files on the same FTP or HTTP server simultaneously. The speed is measured and some conclusions are being made out of this.
Now, the problem is that on high-speed connections we're getting pretty much the expected results in terms of throughput, but on slow connections (think an ideal CDMA 1xRTT link) either download or upload wins at the expense of the opposite direction. I have a "higher body" who is convinced that a CDMA 1xRTT connection is symmetric, and thus that we should be able to transfer data at equivalent speeds (~100 kbps in each direction) on this link.
My measurements show that without heavy tweaking of the code in terms of buffer sizes and data-link throttling, it's not possible to get the same speeds under the aforementioned conditions. I tried both my multithreaded code and a simple batch file that automates Windows' ftp.exe to perform the transfers; same result.
So, the question is: is it really possible to transfer data over a slow symmetrical link at equivalent speeds? Is the "higher body" right in their expectations? If yes, do you have any suggestions on what I should do with my code to achieve such throughput?
PS.
I completely re-wrote the question, so it would be obvious it belongs to this site.
CDMA 1x consists of up to 15 channels of 9.6kbps traffic. This results in a total throughput of 144kbps.
Two channels are used for command and control signals (talking to base stations, associating/disassociating, SMS traffic, ring signals, etc).
That leaves you with up to 124.8kbps.
--> Each channel is one way. <--
They are dynamically switched and allocated depending on the need.
Generally you'll get more download than upload because that's the typical cell phone modem usage. But you'll never get more than 120kbps total aggregate bandwidth.
In practice, due to the overhead of 1xRTT encoding, error correction, resends, etc., you'll typically see between 60kbps and 90kbps even if you have all the channels possible.
This means that you can probably only get 30kbps-60kbps of upload and download simultaneously.
Further, due to switching the channels dynamically (and the fact that the base station controls this more than your modem - they need to manage base station channels carefully to keep channels free for voice calls) you'll lose time when it switches channels - it's not an instantaneous process.
So - 1xRTT can, in theory, give you 124kbps one way, but due to overhead, switching times, base station capacity, or the phone company simply limiting such connections for other reasons, you can't depend on a symmetrical link.
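The channel arithmetic from this answer, spelled out (all figures come from the text above):

```python
CHANNEL_KBPS = 9.6
TOTAL_CHANNELS = 15
CONTROL_CHANNELS = 2

raw_total = TOTAL_CHANNELS * CHANNEL_KBPS                    # 144.0 kbps
usable = (TOTAL_CHANNELS - CONTROL_CHANNELS) * CHANNEL_KBPS  # 124.8 kbps

# Channels are one-way and dynamically allocated, so simultaneous upload and
# download split one shared pool rather than each getting the full amount.
practical_total_kbps = (60, 90)   # after encoding/error-correction overhead
# An even split gives ~30-45 kbps each way, in line with the answer's
# 30-60 kbps estimate per direction.
```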
NOTE:
This will vary to some degree based on the provider and the modem. For instance, some modems have 16 channels, and some providers support 16 channels. In some cases those modems and providers work well together and can provide a full 144kbps aggregate raw bandwidth to the application, with only one dedicated channel (which has to work pretty hard) to deal with control, switching, and other issues. Even then, though, with the overhead of the modem communications, then the overhead of PPP, then the overhead of IP, then the overhead of TCP, you're still looking at maybe 100-120kbps total bandwidth, both up and down.
Lastly, no provider yet supports transparent transfer of IP traffic. In other words, if your modem is moving, it will switch to a new base station, but you'll completely drop the PPP session and have to restart it, along with all the TCP sessions and such. You typically won't get the same IP address, so your TCP sessions will not recover gracefully.
The "fun" aspect to this twist is that this can happen even if you aren't moving. If one base station gets loaded down, you may be transferred to another base station if you are close enough - there are other things that may make your modem transfer even without you moving. So make sure you take this into account, since you seem to be keen on maintaining a full duplex, symmetric channel open. It's tough to write stuff that will recover gracefully, nevermind anticipate it and do it quickly. You would do well to work very closely with a modem manufacturer (such as Kyocera) on this - otherwise you won't get the documentation on how to control the modem chipset at the low level that you need.
-Adam
I think the whole drama about equal high speeds in both directions is because my higher body thinks we have 144 kbps on the uplink AND 144 kbps on the DOWNLINK (i.e., TWO pipes), whereas in reality we have 144 kbps in ONE pipe, which switches direction as I transfer files.
Please comment on whether I'm right or wrong.

Dealing with Latency in Networked Games

I'm thinking about making a networked game. I'm a little new to this, and have already run into a lot of issues trying to put together a good plan for dead reckoning and network latency, so I'd love to see some good literature on the topic. I'll describe the methods I've considered.
Originally, I just sent the player's input to the server, simulated the game there, and broadcast changes in the game state to all players. This made cheating difficult, but under high latency things were a little difficult to control, since you don't see the results of your own actions immediately.
This GamaSutra article has a solution that saves bandwidth and makes local input appear smooth by simulating on the client as well, but it seems to throw cheat-proofing out the window. Also, I'm not sure what to do when players start manipulating the environment, pushing rocks and the like. These previously neutral objects would temporarily become objects the client needs to send PDUs about, or perhaps multiple players do at once. Whose PDUs would win? When would the objects stop being doubly tracked by each player (to compare with the dead reckoned version)? Heaven forbid two players engage in a sumo match (e.g. start pushing each other).
This gamedev.net post argues that the Gamasutra solution is inadequate and describes a different method, but that method doesn't really fix my collaborative boulder-pushing example either. Most other things I've found are specific to shooters. I'd love to see something more geared toward games that play like SNES Zelda, but with a little more physics/momentum involved.
Note: I'm not asking about physics simulation here -- other libraries have that covered. Just strategies for making games smooth and reactive despite network latency.
Check out how Valve does it in the Source Engine: http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
If it's for a first person shooter you'll probably have to delve into some of the topics they mention such as: prediction, compensation, and interpolation.
I find this network physics blog post by Glenn Fiedler, and even more so the response/discussion below it, awesome. It is quite lengthy, but worthwhile.
In summary
In a modern game physics simulation (e.g. vehicles or rigid-body dynamics), the server cannot keep up with re-running the simulation every time client input arrives. Therefore, the server orders all clients to run latency+jitter (in time) ahead of the server, so that all incoming packets arrive just in time, right before the server needs them.
He also gives an outline of how to handle the type of ownership you are asking about. The slides he showed at GDC are awesome!
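As a rough sketch of that scheduling rule (the RTT/jitter inputs and the safety margin here are assumptions, not Fiedler's exact formula):

```python
def client_target_time(server_time, rtt, jitter, safety=0.005):
    """Simulation time the client should run at so inputs arrive JIT."""
    one_way = rtt / 2.0            # crude one-way latency estimate
    return server_time + one_way + jitter + safety

# With a 60ms RTT and 10ms jitter, the client runs ~45ms ahead of the
# server, so a packet sent "now" lands just before the server needs it.
t = client_target_time(server_time=10.000, rtt=0.060, jitter=0.010)
```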
On cheating
Mr Fiedler himself (and others) state that this algorithm suffers from not being very cheat-proof. This is not true: this algorithm is no easier or harder to exploit than traditional client/server prediction (see the article on traditional client/server prediction in CD Sanchez' answer).
To be absolutely clear: the server is not easier to cheat simply because it receives network physical positioning just in time (rather than x milliseconds late as in traditional prediction). The clients are not affected at all, since they all receive the positional information of their opponents with the exact same latency as in traditional prediction.
No matter which algorithm you pick, you may want to add cheat protection if you're releasing a major title. If you are, I suggest adding encryption against stooge bots (for instance an XOR stream cipher where the "keystream is generated by a pseudo-random number generator") and simple sanity checks against cracks. Some developers also implement algorithms to check that the binaries are intact (to reduce the risk of cracking) or to ensure that the user isn't running a debugger (to reduce the risk of a crack being developed), but those are more debatable.
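A minimal sketch of such an XOR stream cipher; note that Python's random.Random is not cryptographically strong, which is in line with the answer's modest goal of deterring stooge bots rather than serious attackers:

```python
import random

def xor_stream(data: bytes, seed: int) -> bytes:
    # Shared secret seed; a real protocol would also mix a per-packet
    # nonce into the seed so the keystream is never reused.
    prng = random.Random(seed)
    keystream = (prng.randrange(256) for _ in data)
    return bytes(b ^ k for b, k in zip(data, keystream))

packet = b"player_pos:12.5,3.0"
wire = xor_stream(packet, seed=0xC0FFEE)          # encrypt
assert xor_stream(wire, seed=0xC0FFEE) == packet  # XOR twice == decrypt
```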
If you're just making a smaller indie game that may only ever be played by a few thousand players, don't bother implementing any anti-cheat algorithms until 1) you need them, or 2) the user base grows.
We have implemented a multiplayer snake game based on an authoritative server and remote players that make predictions. Every 150ms (in most cases), the server sends back a message containing all the consolidated movements sent by each remote player. If remote client movements arrive late at the server, it discards them. The client will then replay its last movement.
Check out Networking education topics at the XNA Creator's Club website. It delves into topics such as network architecture (peer to peer or client/server), Network Prediction, and a few other things (in the context of XNA of course). This may help you find the answers you're looking for.
http://creators.xna.com/education/catalog/?contenttype=0&devarea=19&sort=1
You could try imposing latency on all your clients, based on the average latency in the area. That way the clients can work around the latency issues, and the game will feel similar for most players.
I'm of course not suggesting that you force a 500ms delay on everyone, but people with 50ms can be fine with 150ms (an extra 100ms added) if it makes the gameplay appear smoother.
In a nutshell; if you have 3 players:
John: 30ms
Paul: 150ms
Amy: 80ms
After calculations, instead of sending the data back to the clients all at the same time, you account for their latency and start sending to Paul and Amy before John, for example.
But this approach is not viable in extreme latency situations where dialup connections or wireless users could really mess it up for everybody. But it's an idea.
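A sketch of that delay-equalized broadcast, using the latencies from the example above; send_to() is a stand-in transport call:

```python
import time

latencies_ms = {"John": 30, "Paul": 150, "Amy": 80}

def send_to(name, state):
    print("sent to", name, "at", time.monotonic())  # placeholder transport

def broadcast(state):
    worst = max(latencies_ms.values())
    start = time.monotonic()
    # Highest-latency player first; each later send waits for its offset,
    # so every update arrives ~worst-case latency after `start`.
    for name, lat in sorted(latencies_ms.items(), key=lambda kv: -kv[1]):
        offset = (worst - lat) / 1000.0
        wait = start + offset - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        send_to(name, state)
```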

Resources