Motion tracking goes way off - motion-detection

So I've been messing around with Project Tango, and noticed that if I turn on a motion tracking app and leave the device on a table (blocking all cameras), the motion tracking goes off in crazy directions and makes incredibly wrong predictions about where I'm going (I'm not even moving, but the device thinks I'm going 10 meters to the right). I'm wondering if there is some exception that can be thrown, some warning, or some API call I can make to stop this from happening.

If you block all the cameras, there are no features the camera can capture,
so motion tracking may end up in one of two states:
1. No movement reported,
2. Drifting off to Hawaii.
Either one can happen.
If you did block the fisheye camera, then yes, this is expected.
There is a way to handle it through the API.
Please check the lifecycle section of the motion tracking concept documentation.
For example, for C/C++:
https://developers.google.com/project-tango/apis/c/c-motion-tracking
If the API reports pose_data as TANGO_POSE_INVALID, the motion tracking system can be reinitialized in two ways. If config_enable_auto_recovery was set to true, the system immediately enters the TANGO_POSE_INITIALIZING state and uses the last valid pose as the starting point after recovery. If config_enable_auto_recovery was set to false, the system essentially pauses and always returns poses as TANGO_POSE_INVALID until TangoService_resetMotionTracking() is called. Unlike auto recovery, this also resets the starting point after recovery back to the origin.
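For illustration, here is a small sketch of choosing between those two behaviours when building the config; the surrounding setup (obtaining the TangoConfig via TangoService_getConfig) is assumed, not shown:

```cpp
#include <tango_client_api.h>

// Sketch only: pick the recovery behaviour while building the Tango config.
// Assumes `config` was obtained from TangoService_getConfig() during setup.
void configureRecovery(TangoConfig config, bool autoRecover) {
  // true  -> on TANGO_POSE_INVALID the system re-enters TANGO_POSE_INITIALIZING
  //          on its own, resuming from the last valid pose.
  // false -> poses stay TANGO_POSE_INVALID until you call
  //          TangoService_resetMotionTracking(), which restarts at the origin.
  TangoConfig_setBool(config, "config_enable_auto_recovery", autoRecover);
}
```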
You can also add Handling Adverse Situations with the UX Framework to your app;
check the link:
https://developers.google.com/project-tango/ux/ux-framework-exceptions
The last option is to handle the drifting yourself: measure the velocity implied by successive pose_data samples and call TangoService_resetMotionTracking() when it becomes implausible, and so on.
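A minimal sketch of that last option against the Tango C API; the 5 m/s threshold, the global state, and the callback wiring (registered via TangoService_connectOnPoseAvailable) are illustrative assumptions, not recommended values:

```cpp
#include <cmath>
#include <tango_client_api.h>

// Hypothetical threshold: any apparent speed above this is treated as drift.
static const double kMaxPlausibleSpeedMps = 5.0;

static TangoPoseData prev_pose = {};
static bool has_prev_pose = false;

// Pose callback (assumed registered via TangoService_connectOnPoseAvailable).
void onPoseAvailable(void* /*context*/, const TangoPoseData* pose) {
  if (pose->status_code != TANGO_POSE_VALID) {
    // Invalid pose: either wait for auto recovery or reset manually.
    TangoService_resetMotionTracking();
    has_prev_pose = false;
    return;
  }
  if (has_prev_pose) {
    double dt = pose->timestamp - prev_pose.timestamp;
    if (dt > 0.0) {
      double dx = pose->translation[0] - prev_pose.translation[0];
      double dy = pose->translation[1] - prev_pose.translation[1];
      double dz = pose->translation[2] - prev_pose.translation[2];
      double speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
      if (speed > kMaxPlausibleSpeedMps) {
        // The device "teleported" between samples: treat it as drift.
        TangoService_resetMotionTracking();
        has_prev_pose = false;
        return;
      }
    }
  }
  prev_pose = *pose;
  has_prev_pose = true;
}
```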

I run a filter on the incoming data that tries not to let obviously ridiculous pose changes through, and I don't trust any reported point whose texel is white, nor any pose where the entire texture is within shouting distance of black.

HERE SDK PositionSimulator - Issue, or surprising behaviour, if the signal is temporarily lost (in the file)

I'm using a GPX file which contains one point with the network source; the other points come from the GPS satellite source. When playing the file and reaching this 'network point', the OnPositionChanged listener is not triggered for 20 seconds (it should consider that the signal is still there), so I lose the next points and my app considers that the signal is lost for 20 seconds.
This behaviour comes up when playing the file without navigation, and it also comes up when navigating, but it doesn't occur when navigation has been launched and then stopped. In that case, the positions of the points after the network point are received normally, without the 20-second delay.
HERE Developer Support, could you please investigate?
Please try the new 3.14 version; it fixes some delays in PositioningManager. One suggestion: if possible, use only LocationMethod.GPS during navigation and LocationMethod.GPS_NETWORK in all other cases. This suggestion applies to PositionSimulator too.

Vulkan: trouble understanding cycling of framebuffers

In Vulkan,
A semaphore(A) and a fence(X) can be passed to vkAcquireNextImageKHR. That semaphore(A) is subsequently passed to vkQueueSubmit, to wait until the image is released by the Presentation Engine (PE). A fence(Y) can also be passed to vkQueueSubmit. Client code can check when the submission has completed by checking fence(Y).
When fence(Y) signals, this means the PE can display the image.
My question:
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR? To me, it doesn't seem that it would be by checking fence(X), because that is for client code to know when the image can be written to by vkQueueSubmit, isn't it? After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done. Or, can the same fence(X) be used to query the image availability after the call to vkQueuePresentKHR?
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
The reason this is causing trouble for me is that in an asynchronous, 60 fps, triple-buffered app (throwaway learning code), things get out of whack like this:
1. Send an initial framebuffer to the PE. This framebuffer is now unavailable for 16 milliseconds.
2. Within the 16 ms, acquire a second image/framebuffer, submit commands, but don't present.
3. Do the same as #2, for a third image. We submit it before the 16 ms are up.
4. The 16 ms have gone by, so we vkQueuePresentKHR the second image.
5. Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired three images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR?
You usually do not need to know.
Either you need to acquire a new VkImage, or you don't. Whether the PE has finished or not does not even enter into that decision.
The only reason to want to know is if you want to measure presentation times. There's a special extension for that: VK_GOOGLE_display_timing.
After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done.
Well, you can reuse the fence. But the implementation stopped using it as soon as it was signaled and won't change its state to anything anymore, if that's what you are asking (so you are free to vkDestroy it or do other things with it).
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
Hopefully I cover it below, but I am not precisely sure what the problem here is. I don't know how to eat soup without a spoon either. Simply use a spoon, I mean vkAcquireNextImageKHR.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired 3 images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How is it any different than image #1 and #2?
Yes, you may have already acquired all the images the swapchain has to offer, or the PE is "not ready" to give away an image even if it has two.
In the first case, the spec advises against calling vkAcquireNextImageKHR with a timeout of UINT64_MAX. It is a simple matter of counting the successful vkAcquireNextImageKHR calls versus the vkQueuePresentKHR calls. One way is to simply do one vkAcquireNextImageKHR and then one vkQueuePresentKHR.
In the second case you can simply call vkAcquireNextImageKHR and you will eventually get the image.
In order to use a swapchain image, you need to acquire it. After that, the actual availability of the image for rendering purposes is signaled by the semaphore (A) or the fence (X). You can either use the semaphore (A) during the submission as a wait semaphore, or wait on the CPU for the fence (X) and submit after that. For performance reasons the semaphore is the preferred way.
Now when you present an image, you give it back to the Presentation Engine. From that point on you cannot use that image for any purpose; there is no way to check when it is available again so you can render into it. If you want to render into a swapchain image again, you need to acquire another image, and during that operation you once again provide a semaphore or a fence (probably different from those provided when you previously acquired a swapchain image). There is no other way to check when an image is available again than by calling the vkAcquireNextImageKHR() function.
And when you want to implement triple buffering, you should just select the appropriate presentation mode (mailbox mode is the closest match). You shouldn't wait for a specific time before you present an image; just present it when you are done rendering into it. Your synchronization should be based entirely on the acquire and present commands and the semaphores or fences provided during those operations and during submission. The appropriate present mode does the rest. A detailed explanation of the different present modes is available in Intel's tutorial.
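To make that concrete, here is a minimal per-frame sketch using the usual "frames in flight" pattern; the recordAndSubmit helper, the extern handles, and the choice of two frames in flight are assumptions for illustration, not part of the Vulkan API:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Assumed to be created during initialization (hypothetical globals).
extern VkDevice device;
extern VkSwapchainKHR swapchain;
extern VkQueue presentQueue;
extern VkSemaphore imageAvailable[];  // one per frame in flight
extern VkSemaphore renderFinished[];  // one per frame in flight
extern VkFence inFlight[];            // one per frame in flight, created signaled
// Stand-in for your own command-buffer recording + vkQueueSubmit call.
void recordAndSubmit(uint32_t imageIndex, VkSemaphore wait,
                     VkSemaphore signal, VkFence fence);

constexpr uint32_t kFramesInFlight = 2;  // fewer than the swapchain image count
uint32_t frame = 0;

void drawFrame() {
  // Wait for the GPU work submitted kFramesInFlight frames ago, then recycle
  // that frame's sync objects.
  vkWaitForFences(device, 1, &inFlight[frame], VK_TRUE, UINT64_MAX);
  vkResetFences(device, 1, &inFlight[frame]);

  // Ask the PE for an image. The semaphore signals when the PE has actually
  // released it; this acquire is the only "is it free again?" mechanism.
  uint32_t imageIndex;
  vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                        imageAvailable[frame], VK_NULL_HANDLE, &imageIndex);

  // Submit rendering: wait on imageAvailable, signal renderFinished and the
  // in-flight fence.
  recordAndSubmit(imageIndex, imageAvailable[frame], renderFinished[frame],
                  inFlight[frame]);

  // Hand the image back to the PE. No explicit timing: the present mode
  // (mailbox/FIFO) paces the frames.
  VkPresentInfoKHR presentInfo{};
  presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
  presentInfo.waitSemaphoreCount = 1;
  presentInfo.pWaitSemaphores = &renderFinished[frame];
  presentInfo.swapchainCount = 1;
  presentInfo.pSwapchains = &swapchain;
  presentInfo.pImageIndices = &imageIndex;
  vkQueuePresentKHR(presentQueue, &presentInfo);

  frame = (frame + 1) % kFramesInFlight;
}
```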

IMediaControl::Run followed by IMediaControl::Stop followed by IMediaControl::Run doesn't switch on certain onboard cameras

I have a DirectShow webcam application. I make use of Sample Grabber to get the buffer callbacks and IVideoWindow to control the display co-ordinates for the Preview. I have Preview and Capture Streams which I run as below.
g_pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, cam, g_pGrabberF, pNullRenderer2);
g_pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, cam, NULL, NULL);
On certain onboard cameras, IMediaControl::Run followed by IMediaControl::Stop followed by IMediaControl::Run doesn't switch the camera back on.
External USB cameras work properly here. How can I diagnose this further? Any pointers would help.
Maybe it's specific to a hardware issue in the unit.
Do a quick test by adding a sleep of 1 second between the calls.
If that helps, then you need to find a way to know whether the unit is in an idle state or not.
There are two important parts of the question which you did not provide:
Filter graph topologies
HRESULTs of the method calls
A problem you might be having is that one of the filters in the topology does not handle state transitions well and fails somewhere between states. Presumably your second Run finds it still trying to complete the Stop. You might get an HRESULT there which indicates the issue (better for you), or the filter might fail silently.
The filter graph itself is an unlikely source of the bug. Chances are high that it does everything flawlessly; however, since it internally distributes the calls between the filters, one of the filters is letting you down.
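As a first diagnostic step (a sketch, not a definitive fix), it can help to log the HRESULT of every call and to wait on IMediaControl::GetState until the graph actually reports State_Stopped before calling Run again; pControl stands for your graph's IMediaControl pointer:

```cpp
#include <dshow.h>
#include <cstdio>

// Stop the graph, wait for the transition to complete, then run it again,
// logging every HRESULT so a failing filter shows up in the output.
bool RestartGraph(IMediaControl* pControl)
{
    HRESULT hr = pControl->Stop();
    printf("Stop returned 0x%08lx\n", static_cast<unsigned long>(hr));
    if (FAILED(hr)) return false;

    // GetState may return VFW_S_STATE_INTERMEDIATE while filters are still
    // transitioning; that window is exactly where a second Run can fail.
    OAFilterState state = State_Running;
    hr = pControl->GetState(5000 /* ms */, &state);
    printf("GetState returned 0x%08lx, state=%ld\n",
           static_cast<unsigned long>(hr), static_cast<long>(state));
    if (state != State_Stopped) return false;

    hr = pControl->Run();
    printf("Run returned 0x%08lx\n", static_cast<unsigned long>(hr));
    return SUCCEEDED(hr);
}
```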

How to sync physics in a multiplayer game?

I'm trying to find the best method to do this, for a turn-by-turn cross-platform game on mobile (3G bandwidth) with projectiles and falling blocks.
I wonder if one device (the current player's turn = server role) can run the physics and send some "keyframe" data (position, orientation of blocks) to the other device, which just interpolates from its current state to the keyframes received.
With this method I'm quite afraid of the huge amount of data needed to guarantee the same visuals on the other player's device.
Another method would be to send the physics inputs (forces, accelerations, ...) and run the physics on the other device too, but I'm afraid the results would never match.
My current implementation works like this:
Server manages physics simulation
On any major collision of any object, the object's absolute position, rotation, AND velocity/acceleration/forces are sent to each client.
Client sets each object at the position along with their velocity and applies the necessary forces.
Client calculates the latency and advances the physics simulation by that amount to compensate for the lag.
This, for me, works quite well. I have the physics system running over dozens of sub-systems (maps).
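Here is a rough sketch of the client side of those steps; the RigidBodyState struct, the setBodyState/stepPhysics helpers and the fixed 60 Hz step are illustrative assumptions, not a real engine API:

```cpp
#include <cstdint>

// Hypothetical snapshot sent by the server on a major collision.
struct RigidBodyState {
    uint32_t objectId;
    float position[3];
    float rotation[4];        // quaternion
    float linearVelocity[3];
    float angularVelocity[3];
};

// Assumed to exist in the client's engine.
void setBodyState(const RigidBodyState& s);  // snap the body to the snapshot
void stepPhysics(float dtSeconds);           // advance the local simulation

// Apply a server snapshot, then fast-forward the local simulation by the
// estimated one-way latency so the client ends up where the server is "now".
void applySnapshot(const RigidBodyState* states, int count, float latencySeconds)
{
    for (int i = 0; i < count; ++i)
        setBodyState(states[i]);

    const float kFixedStep = 1.0f / 60.0f;   // assumed fixed time step
    float remaining = latencySeconds;
    while (remaining > 0.0f) {
        stepPhysics(kFixedStep < remaining ? kFixedStep : remaining);
        remaining -= kFixedStep;
    }
}
```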
Some key things about my implementation:
Completely ignore any object that isn't flagged as "necessary". For instance, dirt and dust particles that respond to player movement, or grass and water as they respond to player movement. Basically, non-essential stuff.
All of this is sent through UDP by the way. This would be horrendous on TCP.
You will want to send absolute positions and rotations.
You're right that if you send just forces, it won't work. It's possible to make this work, but it's much harder than just sending positions. You need both devices to do their calculations the same way: before each frame you need to wait for the input from the other device, you need to use the same time step, scripts need to either run in the same order or be commutative, and you can only use CPU instructions guaranteed to give the same result on both machines.
That last one is what makes it particularly problematic, because it means you can't use floating-point numbers (floats/singles, or doubles). You have to use integers, or roll your own number format, so you can't take advantage of many existing tools.
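As a taste of what "roll your own number format" means, here is a minimal 16.16 fixed-point type; the split is an arbitrary choice for illustration:

```cpp
#include <cstdint>

// Minimal 16.16 fixed-point value: same bit pattern, and therefore the same
// arithmetic results, on every machine, unlike floats across CPUs/compilers.
struct Fixed {
    int32_t raw;  // value * 65536

    static Fixed fromInt(int32_t v) { return Fixed{v * 65536}; }
    float toFloat() const { return raw / 65536.0f; }  // for display only

    Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
    Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
    Fixed operator*(Fixed o) const {
        return Fixed{static_cast<int32_t>(
            (static_cast<int64_t>(raw) * o.raw) >> 16)};
    }
    Fixed operator/(Fixed o) const {
        return Fixed{static_cast<int32_t>(
            (static_cast<int64_t>(raw) << 16) / o.raw)};
    }
};
```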
Many games use a client-server model with client-side prediction. If your game is turn-based, you might be able to get away with not using client-side prediction. Instead, you could have the client lag behind by some amount of time, so that you can be fairly sure that the server's input will already be there when you go to render. Client-side prediction is only important if the client can make changes that the server cares about (such as moving).

Smooth MultiPlayer movement

I am developing a multiplayer role-playing game. (No, it's not an MMORPG. ;)
My current setup is like this:
The client tells the server "I want to move forward"/"I want to move backwards"; the server then updates your entity and informs all clients in the area about the change. The server is also updating each entity every 20 ms and sending updates every 100 ms to the clients; these updates contain position, velocity, rotation, etc.
So far so good. However, I have nothing in place for smoothing the movement between packets on the client side, and I must say, I cannot get it working. I have been reading up on prediction, interpolation and dead reckoning, but it's all a big mess to me.
So right now I am just doing something like "Position = Packet.Position", which causes very stuttery movement.
So, what I want help with is: how do I get smoother movement? I have been looking at the XNA Prediction Sample, but I could not get it right.
Thanks //F
Read Valve's description of their multiplayer protocol. It should be instructive, and gives a very clear example on how you do the prediction/interpolation.
I'd suggest the idea from another question (see the accepted answer).
There, the client calculates its position itself, as if it were not a network game. The client regularly sends its current position to the server, and if the client cheats or can't continue moving in the chosen direction, the server just sends the client its correct position.
The same algorithm was used in Ultima Online (at least when I was playing it 10 years ago).
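A compressed sketch of that scheme, with made-up message structs and an isWalkable() check standing in for whatever validation the server actually performs:

```cpp
struct Vec2 { float x, y; };
struct PositionReport { int playerId; Vec2 pos; };

constexpr int kMaxPlayers = 16;        // arbitrary

// Assumed to exist elsewhere on the server.
bool isWalkable(const Vec2& p);
void sendCorrection(int playerId, const Vec2& p);

Vec2 lastValid[kMaxPlayers];           // last accepted position per player

// Server-side handler: trust the client's reported position unless it is
// impossible, in which case snap the client back to the last valid one.
void onPositionReport(const PositionReport& r) {
    if (isWalkable(r.pos)) {           // plus any speed/teleport checks you want
        lastValid[r.playerId] = r.pos; // accept the client's word
    } else {
        sendCorrection(r.playerId, lastValid[r.playerId]);
    }
}
```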
I solved it by running a ghost entity alongside my main one.
The ghost gets updated every frame as well, but whenever a packet comes in, its values are set to the values from the packet.
I then gradually tweak the real entity to where the ghost is.
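A minimal sketch of that ghost-entity smoothing, assuming a simple Entity with position and velocity; the smoothing rate of 10.0f is an arbitrary illustrative choice:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

struct Entity {
    Vec2 position{0.0f, 0.0f};
    Vec2 velocity{0.0f, 0.0f};
};

Entity ghost;  // snapped to every incoming packet, then simulated normally
Entity real;   // what actually gets rendered, pulled toward the ghost

// Called whenever a server update arrives.
void onPacket(Vec2 position, Vec2 velocity) {
    ghost.position = position;   // hard snap: the ghost is allowed to teleport
    ghost.velocity = velocity;
}

// Called every frame.
void update(float dt) {
    // Both entities run the same simple local simulation (dead reckoning).
    ghost.position.x += ghost.velocity.x * dt;
    ghost.position.y += ghost.velocity.y * dt;
    real.velocity = ghost.velocity;

    // Pull the rendered entity a fraction of the way toward the ghost.
    // Higher rate = snappier, lower = floatier.
    float t = 1.0f - std::exp(-10.0f * dt);
    real.position.x += (ghost.position.x - real.position.x) * t;
    real.position.y += (ghost.position.y - real.position.y) * t;
}
```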

Resources