Unity Multiplayer player synchronization - networking

I am working on a Unity mobile game that is essentially a multiplayer version of Temple Run. Because the game is meant for mobile, there is fluctuating latency, generally in the range of 200ms - 500ms.
Since the path is predetermined and the actions the user can perform are limited (jump, slide, use powerup, etc.), the remote player keeps running along the path until it receives the updated state from its local player.
This strategy generally works pretty well, since I only need to send a limited amount of data over the network, but there is a specific case in which I am having issues.
In the game, players die at specific positions (obstacles).
I want remote players to die at the same obstacle/position as the local player, but due to the latency in sending the message, the remote player has already crossed the obstacle by the time it receives the death message.
Is there a way to sync the players on deaths?
One of the things I tried was moving the remote player back to the local player's death position, but not only does it look awkward visually, it can also raise other syncing issues.
Is there a better way to do this?

One approach I'd recommend is to make one player act as the server (not a real server). The server-player does all the computation, like moving, jumping, creating scenes, etc. The server-player then sends all the data needed to sync with the client-player. The client-player receives the data and applies it to its game state. The client-player also sends its actions (left/right/jump/slide) to the server-player. This way both players share the same game state, including position and deaths. You also need to deal with latency by adding some prediction.
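As a sketch of what the two message types in this arrangement might carry (the struct and field names are illustrative, not part of any particular Unity networking API):

```csharp
// Sent by the server-player each tick (or whenever something changes):
// the authoritative state the client-player replays locally.
public struct HostStateMessage
{
    public float DistanceAlongPath;   // where the runner is on the predetermined path
    public float Speed;
    public bool IsDead;
    public int DeathObstacleId;       // which obstacle killed the runner, if any
}

// Sent by the client-player: only the action, never the resulting state.
public enum RunnerAction { MoveLeft, MoveRight, Jump, Slide, UsePowerup }

public struct ClientActionMessage
{
    public RunnerAction Action;
    public float SentAtTime;          // lets the host rewind or predict to compensate for latency
}
```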

So the solution I implemented was to spawn all the remote players far enough behind that there is time for them to receive the information that the local player died on a specific obstacle. At the end there is a straight path where I simply sync the players again, so that the result is displayed correctly.
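As a rough sketch of that idea (the numbers below are assumptions, not values from the actual game): the remote copy is spawned far enough behind that a death message sent at an obstacle arrives before the remote copy reaches that obstacle.

```csharp
public static class SpawnOffset
{
    // Distance the remote copy should trail the local player so that a death
    // message sent at an obstacle arrives before the remote copy reaches it.
    public static float Compute(float runSpeed, float worstCaseLatency, float safetyMargin)
    {
        return runSpeed * (worstCaseLatency + safetyMargin);
    }
}

// Example with assumed values: 8 units/s run speed, 500 ms latency, 250 ms margin
// => spawn remote players 6 units behind:
// float offset = SpawnOffset.Compute(8f, 0.5f, 0.25f);
```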

Related

How to synchronize the ball in a network pong game?

I'm developing a multiplayer network pong game, my first game ever. The current state is: I'm running the physics engine with the same configuration on the server and the clients. My own paddle's movement is predicted and merely confirmed by the authoritative server. If a difference is detected between them, I correct the position on the client by interpolation. The opponent's paddle is rendered 100ms to 200ms in the past by interpolation, because the server broadcasts snapshots to each client every 100ms.
So far this works very well, but now I have to simulate the ball, and I have a problem understanding the procedure.
I've read Valve's (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball to their bullets, but their advantage is that the bullets are not visible. Since I have to display the ball, and I see my paddle in the present, the opponent in the past, and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even though the paddle is a few pixels away (because of a delayed server position).
So far I have no synced clock across the instances. I send a client step index with each update to the server. Once the server has done its job, it sends the snapshot, together with the last step index of each client, back to the clients. I then look up the stored position at the returned step index and compare the two. Do I need a common clock to sync the ball?
EDIT:
I've tried to sync a common clock for the server and all clients using a timestamp, but I think it's better to use my own step counter instead of a timestamp (so I don't need to calculate with the ping and so on, and a timestamp will never be exact). The physics run 60 times per second, and I now use that to keep everything synchronized. Is that a good way?
When each client calculates the ball itself, the angle after a bounce can differ because of the differing paddle positions (the opponent is 200ms in the past). When the server sends its ball position, velocity, and angle (it knows the position of each paddle and is authoritative), the ball could be in a very different place because of the differing bounce angles (the clients receive the server data 100ms later). How is it possible to interpolate away such a big difference?
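For reference, a minimal sketch of the snapshot interpolation described above for the opponent's paddle; the class and the 100ms render delay are illustrative (matching the snapshot rate mentioned), not code from the actual project:

```csharp
using System.Collections.Generic;

// Renders a remote entity a fixed interval in the past by interpolating
// between the two buffered server snapshots that straddle the render time.
public class SnapshotInterpolator
{
    struct Snapshot { public double Time; public float Y; }

    readonly List<Snapshot> buffer = new List<Snapshot>();
    const double RenderDelay = 0.1; // seconds behind the server (one snapshot interval)

    public void AddSnapshot(double serverTime, float paddleY)
    {
        buffer.Add(new Snapshot { Time = serverTime, Y = paddleY });
    }

    public float Sample(double currentServerTime)
    {
        double renderTime = currentServerTime - RenderDelay;
        for (int i = 0; i < buffer.Count - 1; i++)
        {
            if (buffer[i].Time <= renderTime && renderTime <= buffer[i + 1].Time)
            {
                double t = (renderTime - buffer[i].Time) / (buffer[i + 1].Time - buffer[i].Time);
                return (float)(buffer[i].Y + (buffer[i + 1].Y - buffer[i].Y) * t);
            }
        }
        // Not enough data around renderTime yet: fall back to the latest value.
        return buffer.Count > 0 ? buffer[buffer.Count - 1].Y : 0f;
    }
}
```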
As you are developing your first game, I think you should try the simplest, brute-force method first. Then you will see your first exciting result, gain confidence, and try the better methods.
Like the Source Engine approach: process the gameplay on one side and send every object's state to the other side every 1/30 of a second. This is a brute-force method, but it works in a LAN environment.
Then you will run into the problems that occur in a WAN environment, where latency is more than 1/30 of a second.
I am not sure this actually works, but my thinking is:
Assume that the ball's movement only changes when a player hits it.
Send the ball position P, its velocity, and player A's position to B only when the player hits the ball.
At B, receive it but process it as if time L has already passed (L = the latency between A and B * 2). However, the rendered ball should keep its previous movement until it reaches the ball position P.
Values that are never affected by the other player can be masked this way, even time values. :)
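A rough sketch of that idea, under the answer's assumptions (the ball only changes course on a hit, and L is known); the event type and field names are made up for illustration:

```csharp
// Sent by player A only at the moment the paddle hits the ball.
public struct BallHitEvent
{
    public float PosX, PosY;       // ball position P at the moment of the hit
    public float VelX, VelY;       // ball velocity after the hit
    public float LatencySeconds;   // L = estimated latency between A and B * 2
}

public class BallReceiver
{
    public float X, Y, VelX, VelY;

    // On B: apply the hit as if time L had already passed, so the simulated
    // ball is where A's ball is "now" rather than where it was when A hit it.
    public void OnBallHit(BallHitEvent e)
    {
        X = e.PosX + e.VelX * e.LatencySeconds;
        Y = e.PosY + e.VelY * e.LatencySeconds;
        VelX = e.VelX;
        VelY = e.VelY;
        // The *rendered* ball can keep its old trajectory and blend toward
        // this corrected state over a few frames, as described above.
    }
}
```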

Zigbee mesh networking

I'm making an application for a running competition on a fixed track. I'm investigating which systems could be used, and I thought of using a stick containing a GPS/DGPS module and a ZigBee-enabled chip to communicate the location to a server.
I've researched the subject (on the internet), but I was wondering if anyone has practical advice or experience with using a ZigBee mesh/star topology in a dynamic environment which could apply to this use case. I'm also very interested in using a mesh topology where the data is transmitted (hopping) through the different ZigBee modules to the server.
Runners hold a stick, run around the track, and then pass the stick on to the next team member.
I am not entirely clear about your goal, but I'd like to say a few things.
First, using GPS/DGPS to measure which team reaches the finish line is inaccurate. Raw GPS data has poor accuracy (varying by roughly 1 - 10 meters), and the sampling rate of a GPS module is low (say, once a second?). How would you determine exactly which team reaches the finish line first?
Second, using a mobile ZigBee chip to communicate in real time is hard. I assume your stick carries a ZigBee end device. When it is moving (which in your case is pretty fast), it must dynamically find and associate with new parent routers, which takes time and, depending on the wireless environment, might involve several retries. So you can imagine a packet only being successfully delivered to the other end after 100ms or even a second. This might not be a problem if your stick records the exact time at which a team reaches the finish line; since you have a GPS module in the stick, there is no problem getting very accurate time.
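A small sketch of that last point; the FinishRecorder class and the radioSend delegate are hypothetical stand-ins for whatever your hardware stack provides. The crossing is timestamped locally with GPS time and queued, so mesh delivery latency no longer affects the recorded result:

```csharp
using System;
using System.Collections.Generic;

// Timestamp finish-line crossings locally with GPS time, then deliver them
// whenever the ZigBee link happens to be available; latency no longer matters.
public class FinishRecorder
{
    public struct FinishEvent { public int TeamId; public DateTime GpsTimeUtc; }

    readonly Queue<FinishEvent> pending = new Queue<FinishEvent>();

    public void OnFinishLineCrossed(int teamId, DateTime gpsTimeUtc)
    {
        pending.Enqueue(new FinishEvent { TeamId = teamId, GpsTimeUtc = gpsTimeUtc });
    }

    // Call whenever the radio reports it can transmit (placeholder delegate).
    public void Flush(Func<FinishEvent, bool> radioSend)
    {
        while (pending.Count > 0 && radioSend(pending.Peek()))
            pending.Dequeue();
    }
}
```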

How to sync physics in a multiplayer game?

I'm trying to find the best method to do this, considering a turn-by-turn cross-platform game on mobile (3G bandwidth) with projectiles and falling blocks.
I wonder if one device (the current player's turn = server role) can run the physics and send some "keyframe" data (position and orientation of blocks) to the other device, which just interpolates from its current state to the received "keyframes".
With this method I'm quite worried about the huge amount of data needed to guarantee the same visuals on the other player's device.
Another method would be to send the physics inputs (force, acceleration, ...) and run the physics on the other device too, but I'm afraid the results would never match.
My current implementation works like this:
The server manages the physics simulation.
On any major collision of any object, the object's absolute position, rotation, AND velocity/acceleration/forces are sent to each client.
The client sets each object to that position along with its velocity and applies the necessary forces.
The client calculates the latency and advances the physics system by that amount to compensate for the lag time.
This, for me, works quite well. I have the physics system running over dozens of sub-systems (maps).
Some key things about my implementation:
Completely ignore any object that isn't flagged as "necessary", for instance dirt and dust particles that respond to player movement, or grass and water as they respond to player movement. Basically non-essential stuff.
All of this is sent through UDP by the way. This would be horrendous on TCP.
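A stripped-down, Unity-flavoured sketch of steps 3 and 4 above (the answer doesn't name an engine, so the Rigidbody/Physics.Simulate calls here are an assumption); it assumes automatic physics stepping is disabled so the simulation can be advanced manually:

```csharp
using UnityEngine;

// Applying an authoritative snapshot on the client and fast-forwarding the
// physics by the estimated one-way latency (steps 3-4 above).
public class SnapshotApplier : MonoBehaviour
{
    public void Apply(Rigidbody body, Vector3 position, Quaternion rotation,
                      Vector3 velocity, float estimatedLatencySeconds)
    {
        // Step 3: snap the object to the authoritative state.
        body.position = position;
        body.rotation = rotation;
        body.velocity = velocity;

        // Step 4: advance the simulation to cover the time the packet spent in flight.
        float step = Time.fixedDeltaTime;
        for (float t = 0f; t < estimatedLatencySeconds; t += step)
            Physics.Simulate(step);
    }
}
```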
You will want to send absolute positions and rotations.
You're right that if you send just forces, it won't work. It's possible to make that approach work, but it's much harder than just sending positions. You need both devices to do their calculations in exactly the same way, so before each frame you need to wait for the input from the other device, you need to use the same time step, scripts need to either run in the same order or be commutative, and you can only use CPU instructions guaranteed to give the same result on both machines.
That last requirement is what makes it particularly problematic, because it means you can't rely on floating-point numbers (floats/singles, or doubles). You have to use integers, or roll your own number format, so you can't take advantage of many existing tools.
Many games use a client-server model with client-side prediction. If your game is turn-based, you might be able to get away with not using client-side prediction; instead, you could have the client lag behind by some amount of time, so that you can be fairly sure the server's input will already have arrived when you go to render. Client-side prediction is only important if the client can make changes that the server cares about (such as moving).
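A minimal sketch of that "lag behind" idea: server states are buffered and the client only renders the state from a fixed delay in the past, so the needed update has almost certainly arrived. The 200ms delay and the generic types are assumptions:

```csharp
using System.Collections.Generic;

// The client renders the server's state a fixed delay in the past, so the
// relevant update has almost certainly arrived by the time it is needed.
public class DelayedStateBuffer<T>
{
    readonly Queue<(double time, T state)> buffer = new Queue<(double, T)>();
    const double RenderDelay = 0.2; // seconds; tune to your worst expected latency

    public void Push(double serverTime, T state) => buffer.Enqueue((serverTime, state));

    public bool TryGetRenderState(double currentServerTime, out T state)
    {
        state = default;
        bool found = false;
        // Pop everything older than the render time; the newest of those is rendered.
        while (buffer.Count > 0 && buffer.Peek().time <= currentServerTime - RenderDelay)
        {
            state = buffer.Dequeue().state;
            found = true;
        }
        return found;
    }
}
```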

Smooth MultiPlayer movement

I am developing a multiplayer role-playing game. (No, it's not an MMORPG. ;)
My current setup is like this:
The client tells the server "I want to move forward"/"I want to move backwards"; the server then updates your entity and informs all clients in the area about the change. The server also updates each entity every 20ms and sends updates every 100ms to the clients; these updates contain position, velocity, rotation, etc.
So far so good; however, I have nothing in place for smoothing the movement between packets on the client side, and I must say I cannot get it working. I have been reading up on prediction, interpolation, and dead reckoning, but it's all a big mess to me.
So right now I am just doing something like "Position = Packet.Position", which causes very stuttery movement.
So, what I want help with is: how do I get smoother movement? I have been looking at the XNA Prediction Sample, but I could not get it right.
Thanks //F
Read Valve's description of their multiplayer protocol. It should be instructive, and it gives a very clear example of how to do prediction/interpolation.
I'd suggest the idea from another question (see the accepted answer)
There, the client calculates its position itself as if it were not a network game. The client regularly sends its current position to the server, and if the client cheats or can't continue moving in the chosen direction, the server simply sends the client its correct position.
The same algorithm was used in Ultima Online (at least when I was playing it 10 years ago).
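A rough sketch of that scheme from the server's side; the class, the max-speed rule, and the field names are placeholders for whatever movement rules your game actually has:

```csharp
// Server-side check for client-reported positions: accept them when plausible,
// otherwise push the authoritative position back to that client.
public class MovementValidator
{
    const float MaxSpeed = 5f;        // assumed movement rule: units per second

    public struct Vec2 { public float X, Y; }

    // Returns null if the reported position is accepted, otherwise the corrected position.
    public Vec2? Validate(Vec2 lastAccepted, Vec2 reported, float secondsSinceLastReport)
    {
        float dx = reported.X - lastAccepted.X;
        float dy = reported.Y - lastAccepted.Y;
        float distance = (float)System.Math.Sqrt(dx * dx + dy * dy);

        if (distance <= MaxSpeed * secondsSinceLastReport)
            return null;              // plausible: accept the client's position

        return lastAccepted;          // implausible: send the correct position back
    }
}
```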
I solved it by running a ghost entity alongside my main one.
The ghost also gets updated every frame, but whenever a packet comes in, its values are set to the values from the packet.
I then gradually move the real entity toward where the ghost is.
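A minimal Unity-style sketch of the ghost approach, assuming a hypothetical OnPacket callback from your networking layer; the smoothing constant is just a value to tune:

```csharp
using UnityEngine;

// The invisible "ghost" is snapped to every incoming packet and simulated
// forward each frame; the visible entity is eased toward the ghost so
// corrections never appear as a hard snap.
public class GhostSmoothedEntity : MonoBehaviour
{
    Vector3 ghostPosition;
    Vector3 ghostVelocity;
    [SerializeField] float smoothing = 10f;   // higher = snappier, lower = smoother

    // Hypothetical callback invoked when a server update arrives.
    public void OnPacket(Vector3 position, Vector3 velocity)
    {
        ghostPosition = position;
        ghostVelocity = velocity;
    }

    void Update()
    {
        // Keep the ghost moving between packets with its last known velocity.
        ghostPosition += ghostVelocity * Time.deltaTime;

        // Ease the rendered entity toward the ghost.
        transform.position = Vector3.Lerp(transform.position, ghostPosition, smoothing * Time.deltaTime);
    }
}
```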

How to synchronize media playback over an unreliable network?

I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it even easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream up into packets, and timestamp each packet with "play later at time T", where T is for example 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network, to all computers in the chorus. The computers will all play the sound at the same time because the application clock is synced.
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
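A bare-bones sketch of the "play later at time T" idea, assuming the clocks have already been synchronized by whatever protocol you choose; playChunk and sharedUtcNow are placeholders for your audio output and synced clock:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Each packet carries an absolute "play at" time on the shared clock.
// Every computer buffers packets and starts each one at its scheduled time,
// so they all play in step regardless of when the packet actually arrived.
public class ScheduledPlayer
{
    public struct AudioPacket { public DateTime PlayAtUtc; public byte[] Samples; }

    readonly Queue<AudioPacket> queue = new Queue<AudioPacket>();

    public void Enqueue(AudioPacket packet) => queue.Enqueue(packet);

    public void Run(Action<byte[]> playChunk, Func<DateTime> sharedUtcNow)
    {
        while (queue.Count > 0)
        {
            AudioPacket next = queue.Dequeue();
            TimeSpan wait = next.PlayAtUtc - sharedUtcNow();
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);          // coarse; a real player would use the audio clock
            playChunk(next.Samples);
        }
    }
}
```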
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
synchronize the machines with manual beat-style intervention
synchronize the machines with a network-clock sync protocol
for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
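In sketch form (the class and field names are illustrative, with all offsets measured relative to one reference machine): the hardware latency learned once from the manual calibration is simply re-applied after every automatic resync.

```csharp
// Offsets are "how far this machine's playback is ahead of the reference",
// measured two ways; the difference isolates the OS/driver/soundcard latency.
public class MachineCalibration
{
    public double HardwareLatencySeconds { get; private set; }

    // Run once per machine (steps 1-3 above).
    public void CalibrateOnce(double manualBeatOffset, double networkClockOffset)
    {
        HardwareLatencySeconds = manualBeatOffset - networkClockOffset;
    }

    // Run after any network change: only the network sync is repeated.
    public double CurrentPlaybackOffset(double freshNetworkClockOffset)
    {
        return freshNetworkClockOffset + HardwareLatencySeconds;
    }
}
```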
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
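A toy sketch of that firefly rule for a single machine: its phase advances steadily, it beeps when the phase wraps, and hearing a neighbour's beep nudges the phase forward so the beeps converge. The period and nudge factor are assumptions:

```csharp
// One node of a firefly-style synchronizer: beep when the phase wraps around,
// and jump the phase slightly forward whenever a neighbour's beep is heard.
public class FireflyNode
{
    double phase;                      // 0..1, wraps once per period
    const double Period = 1.0;         // seconds between beeps
    const double Nudge = 0.05;         // how strongly a heard beep pulls us forward

    // Call every frame/tick with the elapsed time. Returns true when we should beep.
    public bool Advance(double deltaSeconds)
    {
        phase += deltaSeconds / Period;
        if (phase >= 1.0) { phase -= 1.0; return true; }
        return false;
    }

    // Call when the microphone detects a neighbour's beep.
    public void OnNeighbourBeepHeard()
    {
        phase = System.Math.Min(1.0, phase + Nudge * phase);
    }
}
```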
The synchronization is relative to the position of the listener with respect to each speaker. I don't think the reliability of the network has as much to do with this synchronization as the content of the audio stream does. To synchronize, you need to find the distance between each speaker and the listener, then take the difference between each of those distances and the distance to the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1ms. This ensures that the audio streams reach the listener at the same time. This all assumes an open area, as any objects near your setup will reflect the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
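For what it's worth, the per-speaker delay described above is just the path-length difference divided by the speed of sound (roughly 1,100 feet per second, hence 1ms per 1.1 feet); a tiny sketch:

```csharp
using System.Linq;

// Delay each speaker by the time sound needs to cover the gap between it and
// the farthest speaker, so all streams reach the listener together.
public static class SpeakerDelays
{
    const double SpeedOfSoundFeetPerSecond = 1100.0; // ~1 ms per 1.1 feet

    // distancesToListenerFeet[i] = distance from speaker i to the listener.
    public static double[] ComputeDelaysMilliseconds(double[] distancesToListenerFeet)
    {
        double farthest = distancesToListenerFeet.Max();
        return distancesToListenerFeet
            .Select(d => (farthest - d) / SpeedOfSoundFeetPerSecond * 1000.0)
            .ToArray();
    }
}
```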
