I am developing a multiplayer roleplaying game. (No, it's not an MMORPG. ;)
My current setup is like this.
The client tells the server "I want to move forward"/"I want to move backwards"; the server then updates your entity and informs all clients in the area about the change. The server also updates each entity every 20ms and sends updates every 100ms to the clients; these updates contain position, velocity, rotation, etc.
So far so good; however, I have nothing in place for smoothing the movement between packets on the client side, and I must say, I cannot get it working. I have been reading up on prediction, interpolation, and dead reckoning, but it's all a big mess for me.
So right now I am just doing something like "Position = Packet.Position", which causes very stuttery movement.
So, what I want help with is: how do I get smoother movement? I have been looking at the XNA Prediction Sample, but I could not get it right.
Thanks //F
Read Valve's description of their multiplayer protocol. It should be instructive, and it gives a very clear example of how to do prediction/interpolation.
I'd suggest the idea from another question (see the accepted answer).
Here the client calculates its position itself, as if it were not a networked game, and regularly sends its current position to the server. If the client cheats or can't continue moving in the chosen direction, the server just sends the client its correct position.
The same algorithm was used in Ultima Online (at least when I was playing it 10 years ago).
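A minimal sketch of that server-side check, with hypothetical names (kMaxSpeed, OnClientPositionReport) that are not from the linked answer:

    // Sketch of the server-side validation described above: accept the client's
    // reported position unless it implies impossible movement, in which case the
    // caller sends back a correction packet with the last valid position.
    #include <cmath>

    struct Vec2 { float x, y; };

    struct PlayerState {
        Vec2   lastValidPos;
        double lastUpdateTime;   // seconds
    };

    const float kMaxSpeed = 6.0f;  // world units per second (assumed game constant)

    // Returns true if the reported position was accepted; otherwise the caller
    // should send the player a "correct position" packet containing lastValidPos.
    bool OnClientPositionReport(PlayerState& p, Vec2 reported, double now)
    {
        double dt   = now - p.lastUpdateTime;
        float  dx   = reported.x - p.lastValidPos.x;
        float  dy   = reported.y - p.lastValidPos.y;
        float  dist = std::sqrt(dx * dx + dy * dy);

        if (dist <= kMaxSpeed * dt /* plus some tolerance for jitter */) {
            p.lastValidPos   = reported;   // plausible move: accept it
            p.lastUpdateTime = now;
            return true;
        }
        // Implausible move (cheating or a blocked path): keep the old position
        // and let the caller push lastValidPos back to the client.
        p.lastUpdateTime = now;
        return false;
    }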
I solved it by running a ghost entity alongside my main one.
The ghost also gets updated every frame, but whenever a packet comes in, its values are set to the values from the packet.
I then gradually tweak the real entity toward where the ghost is.
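Roughly, the per-frame step looks like this (a minimal sketch; Vec3, Lerp, and the 10%-per-frame factor are illustrative placeholders, not code from any particular engine):

    // Sketch of the ghost-entity smoothing described above. The ghost snaps to
    // whatever the last packet said; the visible entity eases toward the ghost.
    struct Vec3 { float x, y, z; };

    Vec3 Lerp(const Vec3& a, const Vec3& b, float t)
    {
        return { a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
    }

    struct Entity {
        Vec3 position;
        Vec3 velocity;
    };

    Entity ghost;    // driven by the simulation + snapped to each incoming packet
    Entity visible;  // what is actually rendered

    void OnPacket(const Vec3& packetPos, const Vec3& packetVel)
    {
        ghost.position = packetPos;   // hard snap: only the ghost stutters
        ghost.velocity = packetVel;
    }

    void Update(float dt)
    {
        // Keep the ghost moving between packets using its last known velocity.
        ghost.position.x += ghost.velocity.x * dt;
        ghost.position.y += ghost.velocity.y * dt;
        ghost.position.z += ghost.velocity.z * dt;

        // Ease the visible entity toward the ghost; 10% per frame is an
        // arbitrary smoothing factor you would tune for your game.
        visible.position = Lerp(visible.position, ghost.position, 0.1f);
    }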
Related
I would like to build from scratch a small (extremely small and simple) MMORPG game with click movement inside a given grid (something like an online Pokemon game). When I click on a cell, the player moves to reach the pointed area (using a pathfinding algorithm such as A*). But I have no idea how to build the network protocol for the movement system, and googling the subject doesn't seem to help me a lot. I need to make a choice:
- either I send a message to the server each time I move from one cell to another (which isn't efficient at all, because on a 100ms-latency network I can only send 10 movements/sec, and if I ask the server to acknowledge each movement it drops to 5 moves/sec);
- or I send a message each time the player wants to change direction (clicks on another cell), and the client periodically sends its position to the server, which checks whether it is coherent with the destination.
The second solution looks a lot better than the first one, and I think real game systems must implement that strategy. Am I wrong?
Also, how can we build systems based on that solution that don't let a modified (hacked) client send fake positions and teleport anywhere it wants? Do real systems implement something like an "incoherent sent position" check that tries to detect and fix those problems?
Or is there something simpler to implement? Thank you very much.
In a realtime game I would create a pathing mechanic. I would use a formula such as: speed * time = distance.
Now, if you run the same logic on the client and the server, you only need to send the movement at the beginning, and the server can answer with a finish message when the distance has been traveled. (Compare at the end to prevent cheating; the server always wins.)
Now, if the client cancels the movement to go somewhere else, you send a cancellation of the pathing and the time you traveled. The same goes when changing the path (in this case you send a new path starting at a new time and place).
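A minimal sketch of that shared logic, assuming made-up structure and function names (the point is that client and server run the same speed * time = distance computation):

    // Sketch of the shared movement logic described above: both client and server
    // run this, so only the start of a move (and any cancellation) goes over the wire.
    struct Move {
        float  startX, startY;
        float  dirX, dirY;      // normalized direction toward the destination
        float  speed;           // units per second
        double startTime;       // agreed-on clock
        float  totalDistance;   // distance to the destination
    };

    // Position along the path at time `now`; both sides compute the same value.
    void PositionAt(const Move& m, double now, float& outX, float& outY)
    {
        float travelled = static_cast<float>((now - m.startTime) * m.speed);
        if (travelled > m.totalDistance)
            travelled = m.totalDistance;    // arrived: the server can send "finished"
        outX = m.startX + m.dirX * travelled;
        outY = m.startY + m.dirY * travelled;
    }

    bool IsFinished(const Move& m, double now)
    {
        return (now - m.startTime) * m.speed >= m.totalDistance;
    }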
Thanks to ScarletMerlin's answers I think there's a simple solution to my problem.
When the player wants to move from one position (for instance 0,0) to another (for instance 10,10), it sends a "MOVE 10 10" message to the server.
Then only the server will apply the pathfinding algorithm and, after each step, send the position to the client (for instance, with a CURRENTLY_ON X Y message). The client will then update the position of the player.
It's a kind of automatic synchronisation from the server to the client. It also solves the problem of the first solution (one move and one ACK per movement) thanks to pipelining. For instance, if the network has a 1000ms delay and a move takes 50ms, we will receive messages at times 1000, 1050, 1100, 1150, ... So we will only feel the lag when the player changes direction.
The user will also feel some lag when changing from one direction to another, but if we assume the delay is quite low and symmetric (which doesn't seem to be too strong an assumption), it won't be very noticeable (as the direction change will often arrive before the CURRENTLY_ON message is sent, and the server can interrupt its current action (going straight on) to handle the new one).
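A rough sketch of that server loop, with hypothetical names (SendToClient, the 50ms step) chosen just for illustration:

    // Sketch of the server-driven movement described above: the server walks the
    // A* path one cell per step and streams CURRENTLY_ON messages to the client.
    #include <cstdio>
    #include <deque>
    #include <string>

    struct Cell { int x, y; };

    struct PlayerPath {
        std::deque<Cell> remaining;   // output of the pathfinding algorithm (A*)
        double nextStepTime = 0.0;    // when the next cell is reached
    };

    const double kStepSeconds = 0.05;   // 50 ms per cell, as in the example above

    // Stand-in for the real network send.
    void SendToClient(int playerId, const std::string& message)
    {
        std::printf("to player %d: %s\n", playerId, message.c_str());
    }

    void TickMovement(int playerId, PlayerPath& path, double now)
    {
        // A "MOVE 10 10" request replaces `remaining` with a fresh A* result,
        // so a direction change simply interrupts this loop.
        while (!path.remaining.empty() && now >= path.nextStepTime) {
            Cell c = path.remaining.front();
            path.remaining.pop_front();
            SendToClient(playerId,
                         "CURRENTLY_ON " + std::to_string(c.x) + " " + std::to_string(c.y));
            path.nextStepTime = now + kStepSeconds;
        }
    }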
So I've been messing around with Project Tango, and noticed that if I turn on a motion tracking app and leave the device on a table (blocking all the cameras), the motion tracking goes off in crazy directions and makes incredibly wrong predictions about where I'm going (I'm not even moving, but the device thinks I'm going 10 meters to the right). I'm wondering if there is some exception that can be thrown, or some warning or API call I can use, to stop this from happening.
If you block all the cameras, there are no features the camera can capture.
So motion tracking may end up in one of two states:
1. Not moving,
2. Drifting to Hawaii.
Either may happen.
If you did block the fisheye camera, yes, this is expected.
For the API, there is a way to handle it.
Please check the lifecycle section of the motion tracking concept documentation.
For example, for C/C++:
https://developers.google.com/project-tango/apis/c/c-motion-tracking
If the API reports pose_data as TANGO_POSE_INVALID, the motion tracking system can be reinitialized in two ways. If config_enable_auto_recovery was set to true, the system will immediately enter the TANGO_POSE_INITIALIZING state. It will use the last valid pose as the starting point after recovery. If config_enable_auto_recovery was set to false, the system will essentially pause and always return poses as TANGO_POSE_INVALID until TangoService_resetMotionTracking() is called. Unlike auto recovery, this will also reset the starting point after recovery back to the origin.
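As a hedged sketch of the manual-recovery path (this assumes the C API's pose callback and the status_code field described in the page linked above; double-check the exact names against your SDK version):

    // Sketch of manual recovery when config_enable_auto_recovery is false.
    // Assumes the Tango C API callback shape from the docs linked above; verify
    // the header and field names against your SDK version.
    #include <tango_client_api.h>

    static void onPoseAvailable(void* /*context*/, const TangoPoseData* pose)
    {
        if (pose->status_code == TANGO_POSE_INVALID) {
            // Tracking was lost (e.g. the fisheye camera is covered). With
            // auto-recovery disabled, poses stay invalid until we reset, which
            // also moves the starting point back to the origin.
            TangoService_resetMotionTracking();
            return;
        }

        if (pose->status_code == TANGO_POSE_VALID) {
            // Use pose->translation / pose->orientation as usual.
        }
        // TANGO_POSE_INITIALIZING: wait for the system to finish (re)initializing.
    }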
You can also add "Handling Adverse Situations with the UX Framework" to your app.
Check the link:
https://developers.google.com/project-tango/ux/ux-framework-exceptions
The last solution is to write a function that handles drifting by measuring the velocity from pose_data and calling TangoService_resetMotionTracking(), and so on.
I run a filter on the intake that tries not to let obviously ridiculous pose changes through, and I don't trust any reported point whose texel is white, nor any pose where the entire texture is anywhere near black.
I am working on a Unity mobile game, which is like a multiplayer version of Temple Run. Because this game is meant for mobile, there is fluctuating latency, generally in the range of 200ms - 500ms.
Since the path is predetermined and the actions the user can perform are limited (jump, slide, use powerup, etc.), the remote player keeps running on the path until it receives the updated state from its local player.
This strategy generally works pretty well, since I only need to send a limited amount of data over the network, but there is a specific case in which I am having issues.
In the game, players die at specific positions (obstacles).
I want remote players to die on the same obstacle/position as the local player, but due to the latency in sending the message, the remote player has already crossed the obstacle by the time it receives the death message.
Is there a way to sync the players on deaths?
One of the things I tried was moving the remote player back to the local player's death position, but not only does it look awkward visually, it can also raise other syncing issues.
Is there some other better way to do this ?
One way I can recommend is to make one player act as the server (not a real server). The server-player will do all the computation, like moving, jumping, creating scenes, etc. Then the server-player will send all the data to sync with the client-player. The client-player gets the data and processes the game state. The client-player can also send its actions (left, right, jump, slide) to the server-player. This way both players will have the same game state, like positions and deaths. You also need to deal with latency by adding some prediction.
So the solution I implemented was to spawn all the remote players far enough behind that they have time to receive the information that the local player died on a specific obstacle. And at the end there is a straight path where I just sync the players again, so that the result is displayed correctly.
I'm using the open-source Torque 3D game engine for an aviation simulator project.
I need to generate a single image from several IG (image generator) PCs. Each IG display has its own view camera with a certain angle offset and gets the current position from the server via LAN.
I've already set up the multi-IG system.
The network connection is robust (latency under 1 ms).
The frame rate is good as well - about 70 FPS on each IG.
However, while moving, the whole picture looks broken because some IGs update their views faster than others.
I'm looking for a solution that will make the IGs update simultaneously - maybe some kind of precise time-synchronization algorithm that makes different PCs connected via LAN act as one.
I had a much simpler problem, but my approach might help you.
You've got to run clocks on all your machines with, say, a 15 millisecond tick. Each image needs to be generated correctly for a specific tick and marked with its tick ID time. The display machine can check its own clock, determine the specific tick number (time) for which it should display, grab the images for that specific time, and display them.
(To have the right mindset to think about this, imagine your network is really bad and think about one IG delivering 1000 images ahead of the current display tick while another is 5 ticks behind. Write for this sort of system and the results will look really good on the one you have.)
Ideally you want your display running a bit behind the IGs so you always have a full set of images for the current tick. I had a client-server setup and slowed the display (client) timer down if it came close to missing updates and sped it up if it was getting too far behind. You have to synchronize all your IG machines, so it might be better to have the master clock on the display and have it send messages to speed up any IG machine that's getting behind. (You may not have the variable network delays I had, but it's best to plan for them.)
The key is that each image must be made at a particular time, that the display include only images for the time being displayed, and that the composite images appear right when they should (every 15 milliseconds, on the millisecond). Also, do not depend on your network or even your machines to do anything in a timely manner. Use feedback to keep everything synched.
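To make that concrete, here is a rough sketch of the display-side assembly under those assumptions (the 15 ms tick, the number of IGs, and all names are just the example values from above, not anything from Torque):

    // Sketch of the display-side assembly described above: every image is tagged
    // with the tick it was rendered for, and the display only composites a frame
    // once it holds all the images for the tick it is supposed to show.
    #include <cstdint>
    #include <map>
    #include <vector>

    const int64_t kTickMs     = 15;  // tick length from the description above
    const int     kNumIGs     = 4;   // however many image generators you run
    const int64_t kDelayTicks = 2;   // display runs a bit behind the IGs

    struct Image { /* pixel data omitted */ };

    // images[tick] collects the images received for that tick, one per IG.
    std::map<int64_t, std::vector<Image>> images;

    void OnImageReceived(int64_t tickId, const Image& img)
    {
        images[tickId].push_back(img);
    }

    // Called from the display loop with the current wall-clock time in ms.
    void PresentIfReady(int64_t nowMs)
    {
        int64_t displayTick = nowMs / kTickMs - kDelayTicks;
        auto it = images.find(displayTick);
        if (it != images.end() && it->second.size() == static_cast<size_t>(kNumIGs)) {
            // Composite and present it->second here, then drop this tick and
            // everything older than it.
            images.erase(images.begin(), ++it);
        }
    }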
Addition On Feedback:
Say the last image for the frame at time T arrives 5ms after time T by the display computer's time (real time). If you display the frame for time T at T plus 10 ms, no one will notice the lag and you'll have plenty of time to assemble the images. Using a constant (10 ms) delay might work for you, especially if you make it big enough. It may be the way to go if you always run with the exact same network.
But you are depending on all your IG machines being precisely synchronized for real time, taking no more than a certain amount of time to produce their image, and delivering their image to the display machine all in predictable lengths of time.
What I'd suggest is to have your display machine determine the delay based on the time stamps on the images it receives. It would want to increase the delay if it isn't getting the images it needs in time, and decrease it if all the IGs are running several images ahead of what the display needs. (You might want to ignore the occasional really late image. You have to decide which is more annoying: images that are out of date, a display that is running noticeably behind time, or a display that noticeably speeds up and slows down.)
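A small sketch of that feedback, with arbitrary constants you would have to tune:

    // Sketch of the timestamp-based delay adjustment described above: nudge the
    // display delay up when images arrive too late, and back down when every IG
    // is comfortably ahead. All constants here are illustrative, not tuned values.
    struct DelayController {
        double delayMs    = 30.0;     // how far behind "now" the display runs
        double minDelayMs = 15.0;
        double maxDelayMs = 200.0;

        // latenessMs = (arrival time) - (time the image's tick was displayed).
        // Negative means the image arrived with room to spare.
        void OnImageArrival(double latenessMs)
        {
            if (latenessMs > 0.0)
                delayMs += 1.0;               // we nearly (or actually) missed it
            else if (latenessMs < -2.0 * delayMs)
                delayMs -= 0.25;              // everything is early: creep back down

            if (delayMs < minDelayMs) delayMs = minDelayMs;
            if (delayMs > maxDelayMs) delayMs = maxDelayMs;
        }
    };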
In my original answer I was suggesting some kind of feedback from the display to keep the IG machines running on time, but that may be overkill: your computer's clocks are probably good enough for that.
Very generally, when any two processes have to coordinate over time, it's best if they talk to each other to stay in step (feedback) rather than each stick to a carefully timed schedule.
I'm trying to find the best method to do this, considering a turn-by-turn cross-platform game on mobile (3G bandwidth) with projectiles and falling blocks.
I wonder if one device (the current player's turn = server role) can run the physics and send some "keyframe" data (position and orientation of the blocks) to the other device, which would just interpolate from its current state to the received keyframes.
With this method I'm quite worried about the huge amount of data needed to guarantee the same visuals on the other player's device.
Another method would be to send the physics data (forces, accelerations, ...) and run the physics on the other device too, but I'm afraid the results would never be the same.
My current implementation works like this:
The server manages the physics simulation.
On any major collision of any object, the object's absolute position, rotation, AND velocity/acceleration/forces are sent to each client.
The client places each object at that position with its velocity and applies the necessary forces.
The client calculates the latency and advances the physics system by that amount to account for the lag time.
This, for me, works quite well. I have the physics system running over dozens of sub-systems (maps).
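As a sketch (names are placeholders, not my actual code; the key part is applying the snapshot and then stepping the local simulation forward by the measured latency):

    // Sketch of the client-side step described above: apply the server's absolute
    // state for an object, then advance the local physics by the estimated latency
    // so the object ends up where the server's simulation has it "now".
    struct Vec3 { float x, y, z; };

    struct ObjectState {       // what the server sends on a major collision
        int  objectId;
        Vec3 position;
        Vec3 rotation;         // Euler angles, for illustration
        Vec3 velocity;
        Vec3 force;
    };

    struct PhysicsWorld {
        // Engine-specific; stubbed out so the sketch stands alone.
        void SetObjectState(int, const Vec3&, const Vec3&, const Vec3&, const Vec3&) {}
        void Step(float) {}    // advance the whole simulation by dt seconds
    };

    const float kFixedStep = 1.0f / 60.0f;

    void OnCollisionPacket(PhysicsWorld& world, const ObjectState& s, float latencySeconds)
    {
        world.SetObjectState(s.objectId, s.position, s.rotation, s.velocity, s.force);

        // Re-simulate the time the packet spent in flight, in fixed steps.
        for (float t = 0.0f; t < latencySeconds; t += kFixedStep)
            world.Step(kFixedStep);
    }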
Some key things about my implementation:
Completely ignore any object that isn't flagged as "necessary". For instance, dirt and dust particles, grass, or water that merely responds to player movement. Basically non-essential stuff.
All of this is sent through UDP by the way. This would be horrendous on TCP.
You will want to send absolute positions and rotations.
You're right that if you send just forces, it won't work. It's possible to make this work, but it's much harder than just sending positions. You need both devices to do their calculations the same way, so before each frame you need to wait for the input from the other device, you need to use the same time step, scripts need to either run in the same order or be commutative, and you can only use CPU instructions guaranteed to give the same result on both machines.
That last one is what makes it particularly problematic, because it means you can't use floating-point numbers (floats/singles, or doubles). You have to use integers, or roll your own number format, so you can't take advantage of many existing tools.
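For illustration, a minimal 16.16 fixed-point sketch of what "roll your own number format" can mean; this is not from any particular library:

    // Minimal 16.16 fixed-point sketch: all arithmetic is integer, so both devices
    // get bit-identical results regardless of FPU behaviour or compiler settings.
    #include <cstdint>

    struct Fixed {
        int32_t raw;                        // value * 65536
        static Fixed fromInt(int32_t v)     { return { v * 65536 }; }
        static Fixed fromRaw(int32_t r)     { return { r }; }
        int32_t toInt() const               { return raw / 65536; }
    };

    inline Fixed operator+(Fixed a, Fixed b) { return Fixed::fromRaw(a.raw + b.raw); }
    inline Fixed operator-(Fixed a, Fixed b) { return Fixed::fromRaw(a.raw - b.raw); }
    inline Fixed operator*(Fixed a, Fixed b)
    {
        // Widen to 64 bits so the intermediate product doesn't overflow.
        return Fixed::fromRaw(static_cast<int32_t>(
            (static_cast<int64_t>(a.raw) * b.raw) >> 16));
    }

    // Example: advance a position deterministically by velocity * dt.
    inline Fixed Advance(Fixed pos, Fixed vel, Fixed dt) { return pos + vel * dt; }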
Many games use a client-server model with client-side prediction. If your game is turn-based, you might be able to get away with not using client-side prediction. Instead, you could have the client lag behind by some amount of time, so that you can be fairly sure that the server's input will already be there when you go to render. Client-side prediction is only important if the client can make changes that the server cares about (such as moving).
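A sketch of that "lag behind" approach, often called an interpolation buffer (the 100 ms delay and the names are illustrative):

    // Sketch of rendering slightly in the past, as described above: buffer the
    // server snapshots and interpolate between the two that straddle (now - delay).
    #include <deque>

    struct Snapshot {
        double time;   // server timestamp of this state
        float  x, y;   // whatever state you interpolate (position here)
    };

    const double kInterpDelay = 0.1;   // render 100 ms behind the newest data

    std::deque<Snapshot> history;      // pushed in timestamp order as packets arrive

    bool SampleState(double now, float& outX, float& outY)
    {
        double renderTime = now - kInterpDelay;

        for (size_t i = 1; i < history.size(); ++i) {
            const Snapshot& a = history[i - 1];
            const Snapshot& b = history[i];
            if (a.time <= renderTime && renderTime <= b.time) {
                float t = static_cast<float>((renderTime - a.time) / (b.time - a.time));
                outX = a.x + (b.x - a.x) * t;
                outY = a.y + (b.y - a.y) * t;
                return true;
            }
        }
        return false;   // not enough data yet; keep showing the last known state
    }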