Time synchronization of views generated by different instances of the game engine - networking

I'm using the open-source Torque 3D game engine for an aviation simulator project.
I need to generate a single image from several IG (image generator) PCs. Each IG display has its own view camera with a certain angle offset and gets the current position from the server via LAN.
I've already set up the multi-IG system.
The network connection is robust (latency under 1 ms).
The frame rate is good as well, about 70 FPS on each IG.
However, while moving, the whole picture looks broken because some IGs update their views faster than others.
I'm looking for a solution that will make the IGs update simultaneously, perhaps some kind of precise time-synchronization algorithm that makes different PCs connected over a LAN act as one.

I had a much simpler problem, but my approach might help you.
You've got to run clocks on all your machines with, say, a 15 millisecond tick. Each image needs to be generated correctly for a specific tick and marked with its tick ID time. The display machine can check its own clock, determine the specific tick number (time) for which it should display, grab the images for that specific time, and display them.
(To have the right mindset to think about this, imagine your network is really bad and think about one IG delivering 1000 images ahead of the current display tick while another is 5 ticks behind. Write for this sort of system and the results will look really good on the one you have.)
Ideally you want your display running a bit behind the IGs so you always have a full set of images for the current tick. I had a client-server setup and slowed the display (client) timer down if it came close to missing updates and sped it up if it was getting too far behind. You have to synchronize all your IG machines, so it might be better to have the master clock on the display and have it send messages to speed up any IG machine that's getting behind. (You may not have the variable network delays I had, but it's best to plan for them.)
The key is that each image must be made at a particular time, that the display include only images for the time being displayed, and that the composite images appear right when they should (every 15 milliseconds, on the millisecond). Also, do not depend on your network or even your machines to do anything in a timely manner. Use feedback to keep everything synched.
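A minimal sketch of that idea, assuming a simple tick-stamped image record and a display loop that only composites a tick once every IG has delivered its image (the names and types are illustrative, not Torque 3D API):

    #include <cstddef>
    #include <cstdint>
    #include <iterator>
    #include <map>
    #include <vector>

    struct TickImage {
        uint64_t tick;   // tick ID the IG rendered for
        int      igId;   // which image generator produced it
        // ... pixel data omitted
    };

    // Images received from the network, grouped by tick ID.
    std::map<uint64_t, std::vector<TickImage>> pending;

    const std::size_t kNumIGs = 4;   // illustrative
    const uint64_t    kTickMs = 15;  // tick length from the description above

    // Called once per display refresh with the display machine's clock (ms).
    void displayFrame(uint64_t nowMs, uint64_t displayDelayMs) {
        // Display a tick slightly in the past so all images have had time to arrive.
        uint64_t displayTick = (nowMs - displayDelayMs) / kTickMs;
        auto it = pending.find(displayTick);
        if (it != pending.end() && it->second.size() == kNumIGs) {
            // composite(it->second); present();   // all views are for the same tick
            pending.erase(pending.begin(), std::next(it));  // drop this tick and older ones
        }
        // else: hold the previous frame; the feedback described below widens the delay
    }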
Addition On Feedback:
Say the last image for the frame at time T arrives 5ms after time T by the display computer's time (real time). If you display the frame for time T at T plus 10 ms, no one will notice the lag and you'll have plenty of time to assemble the images. Using a constant (10 ms) delay might work for you, especially if you make it big enough. It may be the way to go if you always run with the exact same network.
But you are depending on all your IG machines being precisely synchronized for real time, taking no more than a certain amount of time to produce their image, and delivering their image to the display machine all in predictable lengths of time.
What I'd suggest is have your display machine determine the delay based on the time stamps on the images it receives. It would want to increase the delay if it isn't getting the images it needs in time, and decrease it if all the IGs are running several images ahead of what the display needs. (You might want to ignore the occasional really late image. You have to decide which is more annoying: images that are out of date, a display that is running noticeably behind time, or a display that noticeably speeds up and slows down.)
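A rough sketch of that feedback rule (the thresholds are made-up numbers you'd tune for your network):

    #include <algorithm>
    #include <cstdint>

    // How far (ms) the display runs behind "real" tick time. Tuned by feedback.
    int64_t displayDelayMs = 10;

    // Call whenever a tick's worth of images is (or fails to be) assembled.
    void updateDelay(int64_t latestImageLatenessMs, bool missedTick) {
        if (missedTick || latestImageLatenessMs > displayDelayMs - 2) {
            displayDelayMs += 2;   // images arriving too close to the deadline
        } else if (latestImageLatenessMs < displayDelayMs - 20) {
            displayDelayMs -= 1;   // everything comfortably early; tighten up
        }
        displayDelayMs = std::clamp<int64_t>(displayDelayMs, 5, 200);
    }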
In my original answer I was suggesting some kind of feedback from the display to keep the IG machines running on time, but that may be overkill: your computers' clocks are probably good enough for that.
Very generally, when any two processes have to coordinate over time, it's best if they talk to each other to stay in step (feedback) rather than each stick to a carefully timed schedule.

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally, seek commands are executed on a filter graph, get called on the renderers in the graph, and are passed upstream by filters until a filter that can handle the seek does the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect any graph state changes to be caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately and distinguish samples that arrived before and after the seek operation. It would also need to set new sample times on its output samples so that the output samples had consistent sample presentation times. Any other issues?
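For concreteness, the upstream seek I have in mind is roughly this (C++, error handling trimmed, purely illustrative):

    #include <dshow.h>

    // pInputPin is one of our filter's input pins, already connected upstream.
    HRESULT SeekUpstream(IPin *pInputPin, REFERENCE_TIME start, REFERENCE_TIME stop)
    {
        IPin *pUpstreamPin = nullptr;
        HRESULT hr = pInputPin->ConnectedTo(&pUpstreamPin);
        if (FAILED(hr)) return hr;

        IMediaSeeking *pSeek = nullptr;
        hr = pUpstreamPin->QueryInterface(IID_IMediaSeeking, (void**)&pSeek);
        pUpstreamPin->Release();
        if (FAILED(hr)) return hr;   // upstream pin does not expose IMediaSeeking

        // This is what triggers BeginFlush/EndFlush/NewSegment on our input pin.
        hr = pSeek->SetPositions(&start, AM_SEEKING_AbsolutePositioning,
                                 &stop,  AM_SEEKING_AbsolutePositioning);
        pSeek->Release();
        return hr;
    }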
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is available from the BBC White Papers 129 and 138 available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (sections of the graph running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph that houses both the video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers, which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to queue to the exact media frame or audio time. Not as easy to do as you might expect, especially with compressed formats such as mpeg2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely to get a NewSegment call with the correct media times.
Your sub graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need two instances of the sub-graph for that file; and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: making lots of SetPosition calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are implemented by the filter, so you need to get them correct, without any missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
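The per-sample work for that rebasing is roughly the following (illustrative; how you compute the stream offset for the current segment is up to your filter):

    #include <dshow.h>

    // Rebase an output sample so presentation times stay monotonic across seeks.
    // streamOffset is whatever your filter uses to map upstream time to the
    // downstream (timeline) time for the current segment.
    HRESULT StampOutputSample(IMediaSample *pSample,
                              REFERENCE_TIME upstreamStart,
                              REFERENCE_TIME upstreamStop,
                              REFERENCE_TIME streamOffset)
    {
        REFERENCE_TIME tStart = upstreamStart + streamOffset;
        REFERENCE_TIME tStop  = upstreamStop  + streamOffset;
        return pSample->SetTime(&tStart, &tStop);
        // pSample->SetMediaTime(...) is optional, as noted above.
    }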
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

Zigbee mesh networking

I'm making an application for a running competition on a fixed track. I'm investigating what systems could be used and thought of using a stick containing a GPS/DGPS module and a ZigBee-enabled chip to communicate the location to a server.
I've researched the subject (on the internet) but I was wondering if anyone has some practical advice/experience with using a ZigBee mesh/star topology in a dynamic environment which could apply to this use case. I'm also very interested in using a mesh topology where the data is transmitted (hopping) through the different ZigBee modules to the server.
Runners hold a stick, run around the track, and then pass the stick on to the next team member.
I am not particularly clear about your goal. But I'd like to say a few things.
First, using GPS/DGPS to measure which team reaches the finish line is inaccurate. Raw GPS data is horribly inaccurate (varying by roughly 1-10 meters), and the sampling rate of a GPS module is low (say, once a second). How do you determine exactly which team reaches the finish line first?
Second, using a mobile ZigBee chip to communicate in real time is hard. I assume your stick has a ZigBee end device. When it is moving (which in your case is pretty fast), it must dynamically find and associate with new parent routers, which takes time and, depending on the wireless environment, might involve several retries. So you can imagine that a packet is only successfully delivered to the other end after 100 ms or even a second. This might not be a problem if your stick records the exact time when a team reaches the finish line. Since you have a GPS module in the stick, there is no problem getting a very accurate time.
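If you go that route, the stick firmware only needs to stamp the crossing event with GPS time and deliver it whenever the radio manages to get through. A sketch of the idea (the GPS and radio calls are hypothetical placeholders, not a real ZigBee stack API):

    #include <cstdint>
    #include <queue>

    struct FinishEvent {
        uint32_t teamId;
        uint64_t gpsTimeMs;   // stamped at the moment the line is crossed
    };

    std::queue<FinishEvent> outbox;

    // Hypothetical hooks into your GPS module and ZigBee stack.
    uint64_t gpsTimeNowMs();
    bool     zigbeeSend(const FinishEvent &ev);   // false if no route yet

    void onFinishLineCrossed(uint32_t teamId) {
        outbox.push({teamId, gpsTimeNowMs()});    // accuracy comes from the
    }                                             // timestamp, not delivery latency

    void radioPoll() {                            // call periodically
        while (!outbox.empty() && zigbeeSend(outbox.front()))
            outbox.pop();
    }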

How to sync physics in a multiplayer game?

I'm trying to find the best method to do this for a turn-by-turn cross-platform game on mobile (3G bandwidth) with projectiles and falling blocks.
I wonder if one device (the current player's turn = server role) can run the physics and send some "key frame" data (position, orientation of blocks) to the other device, which would just interpolate from the current state to the "keyframes" received.
With this method I'm quite worried about the huge amount of data needed to guarantee the same visuals on the other player's device.
Another method would be to send the physics data (force, acceleration, ...) and run the physics on the other device too, but I'm afraid the two simulations would never produce the same result.
My current implementation works like this:
Server manages physics simulation
On any major collision of any object, the object's absolute position, rotation, AND velocity/acceleration/forces are sent to each client.
Client sets each object at the position along with their velocity and applies the necessary forces.
Client calculates latency and advances its physics simulation by that amount to compensate for the lag.
This, for me, works quite well (a minimal sketch of the flow is shown after the notes below). I have the physics system running over dozens of sub-systems (maps).
Some key things about my implementation:
Completely ignore any object that isn't flagged as "necessary". For instance, dirt and dust particles, or grass and water, as they respond to player movement. Basically non-essential stuff.
All of this is sent through UDP by the way. This would be horrendous on TCP.
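Putting those steps together, a minimal sketch of the snapshot message and the client-side handling (types and the commented-out function names are illustrative placeholders, not any particular engine's API):

    #include <cstdint>

    struct Vec3 { float x, y, z; };

    // Sent over UDP by the server on any major collision of a "necessary" object.
    struct ObjectSnapshot {
        uint32_t objectId;
        Vec3     position;
        Vec3     rotation;       // or a quaternion in a real engine
        Vec3     velocity;
        Vec3     force;
        double   serverTimeSec;  // when the snapshot was taken
    };

    // Client side: snap to the authoritative state, then advance the local
    // simulation by the estimated latency so both sides line up in time.
    void applySnapshot(const ObjectSnapshot &s, double clientTimeSec) {
        // setObjectState(s.objectId, s.position, s.rotation, s.velocity, s.force);
        double lagSec = clientTimeSec - s.serverTimeSec;   // assumes synced clocks
        if (lagSec > 0.0) {
            // stepPhysicsForObject(s.objectId, lagSec);
        }
    }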
You will want to send absolute positions and rotations.
You're right, that if you send just forces, it won't work. It's possible to make this work, but it's much harder than just sending positions. You need both devices to do their calculations the same way, so before each frame, you need to wait for the input from the other device, you need to use the same time step, scripts need to either run in the same order or be commutative, and you can only use CPU instructions guaranteed to give the same result on both machines.
That last one is what makes it particularly problematic, because it means you can't use floating-point numbers (floats/singles, or doubles). You have to use integers or roll your own number format, so you can't take advantage of many existing tools.
Many games use a client-server model with client-side prediction. If your game is turn-based, you might be able to get away with not using client-side prediction. Instead, you could have the client lag behind by some amount of time, so that you can be fairly sure that the server's input will already be there when you go to render. Client-side prediction is only important if the client can make changes that the server cares about (such as moving).
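To make the "roll your own number format" point above concrete, here is a tiny fixed-point type of the kind deterministic lockstep games use (16.16 format; minimal sketch with no overflow handling):

    #include <cstdint>

    // 16.16 fixed point: integer math only, so every machine gets
    // bit-identical results regardless of CPU or compiler flags.
    struct Fixed {
        int32_t raw;   // value * 65536
        static Fixed fromInt(int32_t v) { return {v * 65536}; }
        Fixed operator+(Fixed o) const  { return {raw + o.raw}; }
        Fixed operator-(Fixed o) const  { return {raw - o.raw}; }
        Fixed operator*(Fixed o) const  { return {(int32_t)(((int64_t)raw * o.raw) / 65536)}; }
        Fixed operator/(Fixed o) const  { return {(int32_t)(((int64_t)raw * 65536) / o.raw)}; }
    };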

How to split movie and play parts to look as a whole?

I'm writing software which demonstrates a video-on-demand service. One of the features is something similar to IIS Smooth Streaming: I want to adjust quality to the bandwidth of the client. My idea is to split a single movie into many, let's say 2-second, parts in different qualities and then send them to the client and play them. The point is that, for example, the first part can be in very high quality and the second in really poor quality (if the bandwidth seems to be poor). The question is: do you know any software that allows me to cut movies precisely? For example, ffmpeg splits movies in a way that makes the join visible and really annoying (seconds are the measure of precision). I use Qt + Phonon as a player, if it matters. Or maybe you know a better way to provide such a feature without splitting the movie into parts?
Are you sure ffmpeg's precision is in seconds? Here's an excerpt from the man page:
-t duration
Restrict the transcoded/captured video sequence to the duration specified in seconds. "hh:mm:ss[.xxx]" syntax is also supported.
-ss position
Seek to given time position in seconds. "hh:mm:ss[.xxx]" syntax is also supported.
-itsoffset offset
Set the input time offset in seconds. "[-]hh:mm:ss[.xxx]" syntax is also supported. This option affects all the input files that follow it. The offset is added to the timestamps of the input files. Specifying a positive offset means that the corresponding streams are delayed by 'offset' seconds.
Looks like it supports up to millisecond precision, and since most video is not 1000+ frames per second, this should be more than enough precision to accurately seek through any video stream.
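For example (file names are placeholders; note that if you instead stream-copy with -vcodec copy, cuts snap to the nearest keyframe, so re-encoding like this is what gives you the sub-second accuracy), a 2-second piece starting at 10.500 s could be extracted with:

    ffmpeg -i input.mp4 -ss 00:00:10.500 -t 00:00:02.000 part05.mp4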
Are you sure this is a good idea? Checking the bandwidth and switching out clips every two seconds seems like it will only allow you to buffer two seconds into the future at any given point, and unless the client has some Godly connection, it will appear extremely jumpy.
And what about playback, if the user replays the video? Would it recalculate the quality as it replays, or do you build the video file while streaming?
I am not experienced in the field of streaming video, but it seems what I see most often is that the provider has several different quality versions of their video (from extremely low to HD), and they test the user's bandwidth and then stream at an appropriate quality.
(I apologize if I misunderstood the question.)

How to synchronize media playback over an unreliable network?

I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it even easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream up into packets, and timestamp each packet with "play later at time T", where T is for example 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network, to all computers in the chorus. The computers will all play the sound at the same time because the application clock is synced.
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
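A bare-bones sketch of the "stamp each packet with a play-at time" idea, assuming the machines' clocks have already been synchronized by something like NTP (the soundcard write is a placeholder):

    #include <chrono>
    #include <cstdint>
    #include <thread>
    #include <utility>
    #include <vector>

    struct AudioPacket {
        int64_t playAtMicros;          // shared, synchronized wall-clock time
        std::vector<int16_t> samples;
    };

    int64_t nowMicros() {
        using namespace std::chrono;
        return duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
    }

    // Sender: stamp ~100 ms into the future, then multicast to every player.
    AudioPacket stamp(std::vector<int16_t> samples) {
        return {nowMicros() + 100000, std::move(samples)};
    }

    // Receiver: wait until the shared clock reaches the stamp, then play.
    void playWhenDue(const AudioPacket &p) {
        int64_t wait = p.playAtMicros - nowMicros();
        if (wait > 0)
            std::this_thread::sleep_for(std::chrono::microseconds(wait));
        // writeToSoundcard(p.samples);   // placeholder
    }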
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
synchronize the machines with manual beat-style intervention
synchronize the machines with a network-clock sync protocol
for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
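The bookkeeping in step 3 is just a per-machine constant; something like this sketch, with the two measured offsets as inputs:

    #include <cstdint>

    // Offsets of this machine versus the reference, in microseconds.
    // manualOffset comes from the beep-and-keypress calibration (it includes the
    // OS/driver/soundcard path); networkOffset comes from the clock-sync protocol.
    int64_t audioChainLatencyMicros(int64_t manualOffset, int64_t networkOffset) {
        return manualOffset - networkOffset;   // what the sound path itself adds
    }

    // After any later network-only resync, schedule audio at:
    //   playAt = sharedTime + bufferDelay - audioChainLatencyMicros(...)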
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
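A sketch of the firefly rule itself, one oscillator per machine (the listening and beeping hooks are placeholders):

    #include <cstdint>

    // Phase advances from 0 to PERIOD_MS; when it wraps, we beep.
    // Hearing a neighbour's beep nudges our phase forward, so we reach the
    // wrap point (and therefore beep) slightly earlier next time, and over
    // many cycles every machine converges on beeping in unison.
    const int64_t PERIOD_MS = 1000;
    const int64_t NUDGE_MS  = 20;

    int64_t phaseMs = 0;

    void tick(int64_t elapsedMs, bool heardNeighbourBeep) {
        phaseMs += elapsedMs;
        if (heardNeighbourBeep)
            phaseMs += NUDGE_MS;
        if (phaseMs >= PERIOD_MS) {
            phaseMs -= PERIOD_MS;
            // beep();   // placeholder: emit through the soundcard
        }
    }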
The synchronization is relative to the position of the listener with respect to each speaker. I don't think the reliability of the network would have as much to do with this synchronization as the content of the audio stream. In order to synchronize, you need to find the distance between each speaker and the listener. Find the difference between each of those values and the value for the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1 ms. This will ensure that the audio stream reaches the listener at the same time. This all assumes an open area, as any objects near your setup will generate reflections of the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
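The arithmetic for those per-speaker delays is only a couple of lines (a sketch using the ~1.1 ft-per-ms figure above):

    #include <algorithm>
    #include <vector>

    // distancesFt[i] = distance from the listener to speaker i.
    // Returns the delay, in milliseconds, to apply to each speaker so that
    // all streams arrive at the listener simultaneously.
    std::vector<double> speakerDelaysMs(const std::vector<double> &distancesFt) {
        double farthest = *std::max_element(distancesFt.begin(), distancesFt.end());
        std::vector<double> delays;
        for (double d : distancesFt)
            delays.push_back((farthest - d) / 1.1);   // ~1.1 ft of travel per ms
        return delays;
    }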
