So I've been messing around with Project Tango, and noticed that if I turn on a motion tracking app and leave the device on a table (blocking all cameras), the motion tracking goes off in crazy directions and makes incredibly wrong predictions about where I'm going (I'm not even moving, but the device thinks I'm going 10 meters to the right). I'm wondering if there is some exception that can be thrown, or some warning or API call I can use, to stop this from happening.
If you block all the cameras, there are no features the camera can capture, so motion tracking will end up in one of two states:
1. Not moving,
2. Drifting to Hawaii.
Either one may happen.
If you did block the fisheye camera, then yes, this is expected.
As for the API, there is a way to handle it. Please check the life cycle of the motion tracking concept.
For example, for C/C++:
https://developers.google.com/project-tango/apis/c/c-motion-tracking
If the API reports pose_data as TANGO_POSE_INVALID, the motion tracking system can be reinitialized in two ways. If config_enable_auto_recovery was set to true, the system will immediately enter the TANGO_POSE_INITIALIZING state and will use the last valid pose as the starting point after recovery. If config_enable_auto_recovery was set to false, the system will essentially pause and always return poses as TANGO_POSE_INVALID until TangoService_resetMotionTracking() is called. Unlike auto recovery, this will also reset the starting point after recovery back to the origin.
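For instance, a minimal sketch of that flow with the C API (based on the page above; error handling omitted):

void configureTango() {
    // Enable motion tracking, with auto recovery after invalid poses.
    TangoConfig config = TangoService_getConfig(TANGO_CONFIG_DEFAULT);
    TangoConfig_setBool(config, "config_enable_motion_tracking", true);
    TangoConfig_setBool(config, "config_enable_auto_recovery", true);
}

// Pose callback: with auto recovery disabled, nothing valid arrives
// again after an invalid pose until we reset explicitly.
void onPoseAvailable(void* context, const TangoPoseData* pose) {
    if (pose->status_code == TANGO_POSE_INVALID) {
        TangoService_resetMotionTracking();
    }
}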
You can also add handling of adverse situations with the UX Framework to your app.
Check the link:
https://developers.google.com/project-tango/ux/ux-framework-exceptions
The last solution is to write your own function that handles drifting, e.g. by measuring the velocity of pose_data and calling TangoService_resetMotionTracking(), and so on.
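For example, a rough sketch of such a velocity check (the 5 m/s threshold is an arbitrary "obviously wrong" limit I picked, not something from the docs):

#include <math.h>

static TangoPoseData prev_pose;
static int has_prev = 0;

// Reset motion tracking if the pose jumps implausibly fast.
void checkForDrift(const TangoPoseData* pose) {
    if (has_prev) {
        double dt = pose->timestamp - prev_pose.timestamp;
        double dx = pose->translation[0] - prev_pose.translation[0];
        double dy = pose->translation[1] - prev_pose.translation[1];
        double dz = pose->translation[2] - prev_pose.translation[2];
        if (dt > 0 && sqrt(dx * dx + dy * dy + dz * dz) / dt > 5.0) {
            TangoService_resetMotionTracking();
            has_prev = 0;
            return;
        }
    }
    prev_pose = *pose;
    has_prev = 1;
}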
I run a filter on the intake that tries not to let obviously ridiculous pose changes through, and I don't trust any reported point whose texel is white, nor any pose where the entire texture is within shouting distance of black.
I have captured the IR signal (I believe RC5) of an HVAC remote control, like this one....
(using a Saleae logic analyzer)
This gave me a sequence of pulses of different widths that I can make the Arduino reproduce so the HVAC recognizes the request. An example is:
unsigned int power_ON[180] = {2888,3918,1911,1049,907,1992,903,989,1936,1023,907,1049,903,989,903,1049,903,1049,907,1992,1851,1992,1915,1049,928,963,928,1023,903,1049,907,1049,928,963,928,1023,903,1053,928,1023,928,963,928,1023,928,1027,928,1023,928,963,928,1023,907,1049,928,1023,928,1906,1941,959,2940,3866,1962,997,932,1967,929,963,1962,997,933,1019,959,933,933,1023,954,997,928,1971,1902,1941,1941,1019,958,933,958,997,954,997,933,1019,959,933,959,997,954,997,928,1023,958,933,958,997,954,997,933,1019,958,933,958,997,954,997,933,1019,958,1881,1962,937,2940,3862,1966,993,958,1941,933,959,1966,993,958,997,954,937,954,997,933,1023,954,1941,1880,1966,1962,997,954,937,928,1023,933,1023,954,997,928,963,928,1023,933,1023,929,1023,928,963,929,1023,928,1027,928,1023,928,963,928,1023,928,1027,928,1023,928,1910,1911,989,3832};
Could anyone guide me through the steps to decode the message, or to understand the different pulse widths?
I guess there must be certain defined pulse widths, each meaning something different?
My initial thought is that I need to:
1) Decode the raw data by converting the pulses to digital 1s and 0s (see the rough sketch below).
2) Identify each section of the code from the digital data. I think the whole configuration is sent on every key press, so identify the sections of the code that state the temperature, fan speed, HVAC mode, clock, etc.
3) Be able to put together a full IR code based on the wanted setup, instead of just saving the whole code and reproducing it.
Any hint or guideline on how to do this?
Am I on the right track?
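For step 1, this is roughly what I have in mind on the Arduino side (a sketch only; the width buckets are eyeballed from the capture above and may well be wrong):

// Bucket the raw widths: ~900-1050 us reads as "short", ~1850-2000 us
// as "long", anything bigger as a header/sync pulse. All guesses.
int classify(unsigned int width) {
    if (width < 1500) return 0;   // short pulse
    if (width < 2500) return 1;   // long pulse
    return -1;                    // header / sync
}

void printDecoded(unsigned int* raw, int len) {
    for (int i = 0; i < len; i++) {
        int c = classify(raw[i]);
        if (c < 0) Serial.print(" | ");  // mark frame boundaries
        else Serial.print(c);
    }
    Serial.println();
}

// e.g. in setup(): Serial.begin(9600); printDecoded(power_ON, 180);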
Edit:
I have tried analysing one and the same mode to figure out which pulses change, but I can't figure it out, as the number of pulses varies. Here you can see cooling mode with maximum fan speed and a changing temperature setting.
Here is the Excel file for anyone really into helping:
http://www.filedropper.com/analysiscoolingmodefanspeedmaximum
and the end of the message
So I put your pulse widths (?) into a diagram: http://i.imgur.com/C9k64qB.jpg
Without knowing more about what this actually represents, this does not really help, I guess.
What buttons did you press while recording this? How did you record this?
I would try to visualize all the data you can get. Record all the buttons and put what you get into diagrams. Then stare at them and maybe you will find some logic hidden in there.
Also, open the remote, look at what ICs are inside, and look up their datasheets. Maybe you will find the protocol there and won't have to do any reverse engineering at all.
Keep us updated!
I have a DirectShow webcam application. I make use of the Sample Grabber to get buffer callbacks and IVideoWindow to control the display coordinates of the preview. I have preview and capture streams, which I run as below.
g_pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, cam, g_pGrabberF, pNullRenderer2);
g_pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, cam, NULL, NULL);
On certain on-board cameras, IMediaControl::Run followed by IMediaControl::Stop followed by IMediaControl::Run doesn't switch the camera back on.
External USB cameras work properly here. How can I diagnose this further? Any pointers, please help.
Maybe it's specific to a certain hardware issue in the unit.
Do a quick test by adding a sleep of 1 second between the calls.
If that helps, then you need to find a way to know whether the unit's state is idle or not.
There are two important parts of the question which you did not provide:
Filter graph topologies
HRESULTs of the method calls
A problem you might be having is that one of the filters in the topology does not handle state transitions well and fails somewhere between states. Presumably your second Run reaches it while it is still trying to complete the Stop. You might get an HRESULT there which indicates the issue (better for you), or the filter fails silently.
The filter graph itself is an unlikely source of the bug. Chances are high that it does everything flawlessly; however, since it internally distributes the calls between the filters, one of the filters is letting you down.
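For example, something along these lines (a sketch; g_pControl stands in for your IMediaControl pointer, and the 5 second timeout is arbitrary):

// Check every HRESULT, and wait for each state transition to actually
// complete before issuing the next call.
HRESULT hr = g_pControl->Stop();
// log hr here if FAILED(hr)

OAFilterState state;
hr = g_pControl->GetState(5000, &state);   // wait up to 5 s for Stopped
if (hr == VFW_S_STATE_INTERMEDIATE) {
    // still transitioning - this is exactly the suspicious case to log
}

hr = g_pControl->Run();
// log hr here if FAILED(hr)
hr = g_pControl->GetState(5000, &state);   // wait for Running to complete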
I am developing a multiplayer role-playing game. (No, it's not an MMORPG. ;)
My current setup is like this.
The client tells the server "I want to move forward"/"I want to move backwards"; the server then updates your entity and informs all clients in the area about the change. The server is also updating each entity every 20 ms and sending updates every 100 ms to the clients; these updates contain position, velocity, rotation, etc.
So far so good. However, I have nothing in place for smoothing the movement between the packets on the client side, and I must say, I cannot get it working. I have been reading up on prediction, interpolation and dead reckoning, but it's all a big mess for me.
So right now I am just doing something like "Position = Packet.Position", which causes very stuttery movement.
So, what I want help with is: how do I get smoother movement? I have been looking at the XNA Prediction Sample, but I could not get it right.
Thanks //F
Read Valve's description of their multiplayer protocol. It should be instructive, and it gives a very clear example of how to do the prediction/interpolation.
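The core trick there, as I understand it, is to render entities slightly in the past and blend between the two buffered snapshots around that render time. A rough one-axis sketch (the 100 ms delay just matches your send rate; Snapshot and the buffer are made-up names):

#include <deque>

// One float axis for brevity; a real entity would interpolate each
// component of position/rotation the same way.
struct Snapshot { double time; float position; };

float interpolate(const std::deque<Snapshot>& buf, double now) {
    double renderTime = now - 0.100;           // render 100 ms in the past
    for (size_t i = 1; i < buf.size(); ++i) {
        if (buf[i].time >= renderTime) {       // found the bracketing pair
            const Snapshot& a = buf[i - 1];
            const Snapshot& b = buf[i];
            double t = (renderTime - a.time) / (b.time - a.time);
            return (float)(a.position + (b.position - a.position) * t);
        }
    }
    return buf.empty() ? 0.0f : buf.back().position;  // nothing newer: hold
}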
I'd suggest the idea from another question (see the accepted answer).
There, the client calculates its position itself, as if it were not a network game, and regularly sends its current position to the server. If the client cheats or can't continue moving in the chosen direction, the server just sends the client its correct position.
The same algorithm was used in Ultima Online (at least when I was playing it 10 years ago).
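A minimal sketch of that server-side check (the names and the plausibility rule are mine, not from that answer):

#include <cmath>

struct Vec3 { float x, y, z; };
struct Player { Vec3 position; float maxSpeed; };

void sendCorrection(Player& p, const Vec3& correct);  // network layer stub

// Accept the client's reported position unless it moved further than it
// possibly could have since the last update; otherwise correct it.
void onClientPosition(Player& p, const Vec3& reported, float dt) {
    float dx = reported.x - p.position.x;
    float dy = reported.y - p.position.y;
    float dz = reported.z - p.position.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist <= p.maxSpeed * dt * 1.1f) {   // 10% slack for jitter (a guess)
        p.position = reported;              // plausible: trust the client
    } else {
        sendCorrection(p, p.position);      // cheating or blocked: snap back
    }
}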
I solved it by running a ghost entity alongside my main one.
The ghost also gets updated every frame, but whenever a packet comes in, its values are set to the values from the packet.
I then gradually tweak the real entity toward where the ghost is.
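In rough C++-style code it boils down to something like this (names are made up and the 0.1 smoothing factor is just what felt good):

struct Vec3 { float x, y, z; };
struct Entity { Vec3 position; Vec3 velocity; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// On every received packet: snap the ghost to the server's values.
void onPacket(Entity& ghost, const Vec3& pos, const Vec3& vel) {
    ghost.position = pos;
    ghost.velocity = vel;
}

// Every frame: let the ghost simulate forward, then ease the visible
// entity toward it instead of snapping.
void onFrame(Entity& real, Entity& ghost, float dt) {
    ghost.position.x += ghost.velocity.x * dt;
    ghost.position.y += ghost.velocity.y * dt;
    ghost.position.z += ghost.velocity.z * dt;
    real.position = lerp(real.position, ghost.position, 0.1f);
}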