Microsoft Kinect SDK zoom/scroll gestures

We are writing software to enable mouse control via Kinect on Windows. Basic mouse functions are working well.
Now we want to add things like zoom and pan.
These functions are not applicable to every application, so it may be that this is only possible on an application-by-application basis, or perhaps there is a generic way.
In basic terms, we would like to use our Kinect software to pan and zoom in Google Maps/Earth, and in any app that has a vertical/horizontal scrollbar.
We would also like to implement rotation using a two-handed gesture to define the rotation axis. Again, this does not apply to all software.
Any ideas?
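For illustration, one generic route (not Kinect-specific) is to synthesize ordinary mouse-wheel input at the OS level, so that any application with a scrollbar responds without per-application code. A minimal Java sketch of that idea using java.awt.Robot; the gesture deltas are assumed to come from your own Kinect pipeline:

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;

// Hypothetical bridge: turn gesture deltas from a Kinect pipeline into
// standard OS input events, so any app with a scrollbar responds.
public class GestureScroller {
    private final Robot robot;

    public GestureScroller() throws AWTException {
        this.robot = new Robot();
    }

    // Vertical pan: positive notches scroll down, negative scroll up.
    public void pan(int wheelNotches) {
        robot.mouseWheel(wheelNotches);
    }

    // Zoom: many apps (Google Maps, browsers) treat Ctrl+wheel as zoom.
    public void zoom(int wheelNotches) {
        robot.keyPress(KeyEvent.VK_CONTROL);
        robot.mouseWheel(wheelNotches);
        robot.keyRelease(KeyEvent.VK_CONTROL);
    }
}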

Related

Swap gestures in HERE SDK

I'm trying to remove the default pan behaviour, which is easily done with:
mapView.getGestures().disableDefaultAction(GestureType.PAN);
and use the TWO_FINGER_PAN gesture instead, but I can't quite find a solution for this other than coding the entire animation by myself. Is there an easier way? Maybe some source code I couldn't find?
To summarize I want the TWO_FINGER_PAN gesture to do the exact same thing the PAN gesture would do.
When you disable a gesture, you have to handle the gesture on your side. There is no source code exposed for this as part of the HERE SDK. But you may want to look for native Android gesture handling examples.
The Gestures example app that is shipped with the HERE SDK provides at least a starting point for custom gesture animations with the GestureAnimator class, but for a pan gesture you would usually not need any animation: a finger movement simply translates into a change of the MapView's target coordinates.
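A minimal sketch of that idea, assuming your SDK version exposes a two-finger pan listener along with viewToGeoCoordinates and camera lookAt calls; listener and method names vary between 4.x releases, so treat this as pseudocode against the Gestures API rather than verified code:

// Sketch only: listener and camera method names are assumptions and may
// differ in your HERE SDK 4.x release. The idea is to shift the camera
// target by the geo-distance corresponding to the finger translation.
mapView.getGestures().disableDefaultAction(GestureType.PAN);
mapView.getGestures().setTwoFingerPanListener((state, origin, translation) -> {
    // Where the gesture origin and its translated position fall on the map.
    GeoCoordinates from = mapView.viewToGeoCoordinates(origin);
    GeoCoordinates to = mapView.viewToGeoCoordinates(
            new Point2D(origin.x + translation.x, origin.y + translation.y));
    if (from == null || to == null) return;

    // Move the camera target opposite to the finger movement.
    GeoCoordinates target = mapView.getCamera().getState().targetCoordinates;
    mapView.getCamera().lookAt(new GeoCoordinates(
            target.latitude - (to.latitude - from.latitude),
            target.longitude - (to.longitude - from.longitude)));
});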
Please confirm the HERE SDK version you are using. To explain why this function is not available: the pan gesture is designed for a single finger, including kinetic panning, and a two-finger pan behaves differently, so the best user experience is still with normal panning.
Two-finger gestures are also used for tilting and rotating the map, so panning with two fingers could be confusing for users.
Detailed information about the default behaviour in 4.x is explained at the link below:
https://developer.here.com/documentation/android-sdk-navigate/4.7.6.0/dev_guide/topics/gestures.html

Replace WASD keys navigation with VR tracked controllers in A-Frame

I've developed an A-Frame scene at a different location from where I will be able to use a headset (either Oculus or HTC).
Is the tracked controller functionality built into A-Frame 0.7.0?
Is there code I need to add to detect these controllers and replace the desktop WASD navigation with the tracked controllers? I don't need any hands to be visible; I just need to achieve up/down/left/right movement in space.
Thanks
Don McCurdy's aframe-extras includes a component called universal-controls that I highly recommend. Specifically, there is a gamepad-controls component that may do exactly what you're looking for right out of the box.
If not, universal-controls supports extending the main component with "custom" controllers. The ability to do so is lightly documented on the repository page, but it's pretty straightforward. I'm working on one for the GearVR controller that responds to pressing the GearVR trackpad to achieve movement. I still need to work on getting backward movement, but you can find my work so far on GitHub.
Once you've developed your own custom controller (or decided to use mine, or whatever), you attach it to your scene's camera, like this:
<a-entity
  id="scene-camera"
  camera="userHeight: 1.6"
  position="24 1.6 14"
  universal-controls="movementControls: universal-gearvr, keyboard;"
  universal-gearvr-controls>
</a-entity>
Things to note from the above: rather than the default setting (which will attempt to load all movement control schemes that are available), I'm telling the universal-controls component to use my custom component by giving its name in the movementControls parameter. Notice that I leave 'controls' off the name, though; that's because universal-controls adds it back later. With that said, I also attach my custom component to the camera, which must be done so that universal-controls can find and use it.
A quick note on enabling backward movement, if that's something you're interested in: I've already done it by hacking around with the original WASD movement script. You can take a look at what I did if you'd like to see that.

ZXing.Net.Mobile: customising the scannable area in Xamarin.Forms

There is an overlay feature available in the library. The overlay feature is nice, but functionality-wise it is just there to mask the UI.
Is there a way to customise the scannable area, i.e. reduce the area of the screen that is scanned? If this can't be done, there is no point in having the overlay, right? The overlay does not really do anything.
In this Android app, which uses the core ZXing library developed in Java by the ZXing team (https://play.google.com/store/apps/details?id=com.google.zxing.client.android), you can see that if the barcode/QR code lies outside the scanning area, it is not processed. That is what I am looking for. Is this possible?
According to this issue on GitHub, github.com/Redth/ZXing.Net.Mobile/issues/87, the author of the library says:
Unfortunately there's not a great short fix that I can think of for this scenario. Yes I could only check a certain region, but what would the region be set to? It makes sense to use the non-gray areas if you're using the default overlay, but if you use a custom overlay, you might not want the same region checked.
Just need some time to implement it (on ALL platforms - which is what takes so much effort).
It has been more than a year since this issue was raised on GitHub, so I don't think it will be implemented anytime soon.
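For reference, the region restriction in the native Android app mentioned in the question comes from the core Java ZXing library, which crops the camera frame to the framing rectangle before decoding. If you drop down to the core library yourself, the equivalent looks roughly like this (the frame dimensions and crop rectangle below are placeholders):

import com.google.zxing.BinaryBitmap;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.PlanarYUVLuminanceSource;
import com.google.zxing.Result;
import com.google.zxing.common.HybridBinarizer;

// Decode only a central region of a YUV camera frame; anything outside
// the crop rectangle is never seen by the decoder.
public class RegionDecoder {
    public Result decodeCenter(byte[] yuvFrame, int width, int height) throws Exception {
        int cropSize = Math.min(width, height) / 2;   // placeholder region
        int left = (width - cropSize) / 2;
        int top = (height - cropSize) / 2;

        PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(
                yuvFrame, width, height, left, top, cropSize, cropSize, false);
        return new MultiFormatReader().decode(
                new BinaryBitmap(new HybridBinarizer(source)));
    }
}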

pointerHover is not being called?

In the Eclipse simulator environment, I don't seem to be receiving any pointerHover calls on my components. Is there something I need to do to enable them?
[edit/response]
No one will find it acceptable to deliver desktop applications that don't support mouse movement input.
Likewise, there are many new input devices on the horizon which will require specialized support, but which will all expect mouse simulation as the baseline. For example, gesture capture by Kinect, Leap Motion, or VR headsets will need to feed standard events to applications that have not been specifically rewritten to use them.
We don't support those events since there is no pointer hover on the device. We might support it in the future for desktop builds, but even then the simulator will not send those events.
We might support this for the new Apple 3D Touch API, although I think that might be too simplistic.
This API was originally introduced for the BlackBerry 5 device, which had a "click screen" that allowed hovering and clicking.
You should use pointerDrag events, which are more representative of all devices.
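A minimal sketch of that approach, overriding the drag callback on a custom Codename One component; the coordinate readout in paint is just a placeholder for whatever your hover logic displayed:

import com.codename1.ui.Component;
import com.codename1.ui.Graphics;

// Instead of pointerHover (not delivered on most devices or in the
// simulator), react to drag events, which fire consistently everywhere.
public class HoverAwareComponent extends Component {
    private int lastX = -1, lastY = -1;

    @Override
    public void pointerDragged(int x, int y) {
        super.pointerDragged(x, y);
        lastX = x;
        lastY = y;
        repaint(); // placeholder: update whatever the hover logic showed
    }

    @Override
    public void paint(Graphics g) {
        super.paint(g);
        if (lastX >= 0) {
            g.drawString("(" + lastX + ", " + lastY + ")", getX(), getY());
        }
    }
}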

Fast (HW-accelerated) drawing to foreign window (probably using Direct3D)

I am working on a project where the task is to paint bitmaps (currently HBITMAP/BitBlt/AlphaBlend) into the non-client areas of all visible windows (i.e. windows which do not belong to my application). It must be done very fast: the bitmap is updated when a window is moved, resized, etc. Also, some kind of blur algorithm must be applied to this bitmap. It should work on Win7 and Win8 (so XP is not required).
I have managed to get this working with GDI. I obtain the window DC via GetWindowDC and its bounds via GetWindowRect, AlphaBlend the bitmap into a buffer (CreateCompatibleDC/CreateCompatibleBitmap), and then BitBlt the buffer into the window DC. This works perfectly... except it is not as fast as I want. And if I apply the blur algorithm to the bitmap, everything is slow as hell.
So my question is how to improve the speed. I'm thinking about hardware-accelerated drawing.
a) I tried GDI-compatible Direct2D (using ID2D1DCRenderTarget + BindDC), but it is much slower than pure GDI.
b) I am thinking about Direct3D. The problem is that I don't know how to use it for this. If I call D3D10CreateDeviceAndSwapChain with the swap chain's OutputWindow set to the HWND of my application, it returns S_OK, but when I set OutputWindow to the HWND of any foreign window, the call fails. So I am not sure how to render into foreign windows.
c) How do I properly apply a blur to part of an image? I found many algorithms, but all of them run on the CPU. How can I do this on the GPU?
Thanks in advance for any ideas on how to solve my problem.
Have you thought about using DirectComposition (DComp)? For why it may be appropriate, take a look at this: http://msdn.microsoft.com/en-us/library/windows/desktop/hh449195%28v=vs.85%29.aspx
For a brief summary of what DComp is (from MSDN):
Microsoft DirectComposition is a Windows component that enables high-performance bitmap composition with transforms, effects, and animations. Application developers can use the DirectComposition API to create visually engaging user interfaces that feature rich and fluid animated transitions from one visual to another.
DirectComposition enables rich and fluid transitions by achieving a high framerate, using graphics hardware, and operating independently of the UI thread. DirectComposition can accept bitmap content drawn by different rendering libraries, including Microsoft DirectX bitmaps, and bitmaps rendered to a window (HWND bitmaps). Also, DirectComposition supports a variety of transformations, such as 2D affine transforms and 3D perspective transforms, as well as basic effects such as clipping and opacity.
DirectComposition is designed to simplify the process of composing visuals and creating animated transitions. If your application already contains rendering code or already uses the recommended DirectX API, you only need to do a minimal amount of work to use DirectComposition effectively.
