HoloLens: simulating gestures

I'm running a server on the HoloLens, and I created a PC app that works as a client.
Via the client I can drag, rescale, rotate, etc. holograms created in Unity.
The holograms have functions that run when the user makes an air-tap gesture on them.
I want to be able to trigger those functions from my PC app.
The easiest way would be to just send a message from the app telling the HoloLens to do a certain thing, but...
I want to run those functions specifically as if the air-tap gesture had happened.
To do that, I need to simulate an air-tap gesture.
I want to send the coordinates from the PC app to the HoloLens, and I want my scene to behave as if an actual air tap had been performed, without executing that gesture in real life.
A similar thing can be done on the PC version of Windows 10, as described for example here:
https://superuser.com/questions/159618/simulating-mouse-clicks-at-specific-screen-coordinates.
So here is my question: is it possible to simulate gestures? I would really appreciate any info you can share with me.
Thanks.

If you are using the standard MRTK Gaze stuff to handle your gestures, you could define a new IInputSource.
The example for adding Gamepad input could be a good starting point - instead of triggering an air tap when a gamepad button is pressed, trigger it in response to remote calls from your PC app.
The benefit of this is that it's in keeping with the existing input system - your code which acts on input doesn't need to know that it came from a gamepad, a hand, or your desktop app.
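The remote-call half of this can be tiny. Here is a minimal sketch of the PC-side sender in Python, assuming a plain TCP channel; the port number and JSON message format are invented for illustration, and the Unity-side input source would parse each line and raise the tap at the given position:

```python
import json
import socket

def send_tap(host, x, y, z, port=9050):
    """Send a simulated air-tap request to the HoloLens app.

    The port and message format here are made up for this sketch;
    the receiving input source defines whatever protocol you like.
    """
    message = json.dumps({"type": "airtap", "position": [x, y, z]}) + "\n"
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(message.encode("utf-8"))
```

A newline-delimited JSON message keeps the Unity-side parsing trivial (read a line, deserialize, raise the input event at the decoded position).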

Related

Automate Blackberry 10 simulator actions

I'm using the VMWare Player and the Blackberry 10 simulator image; I need to do some unit/integration tests automatically. I know I can use the VIX api to spin up a new Simulator and load the Blackberry image.
What I would love to be able to do is send 'key presses', launch specific apps, and perhaps send gestures. On android there's monkeyrunner and other similar apps. However I haven't found much with respect to BB10, I know it's new but I can't be the only one with this request.
Also, how powerful is the telnet option? I can telnet into an emulator and change directory into the apps dir, but I can't list its contents, SUDO, or run anything.
*****UPDATE*******
I've made some progress on this, but not much. It seems you can use the Windows API to send mouse_event messages to the VMware emulator; it's not 100% reliable, but it works well enough to open apps. The big hole I have right now is detecting state after the action/swipe/touch is executed, i.e. "did the swipe I just executed work? Are we in the right app?". It would be hugely beneficial to query the device's process list, but the 'devuser' account given in the telnet example can't really do anything.
This gist has the basics for how to touch and swipe the screen based on my experiences.
https://gist.github.com/edgiardina/6188074
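For reference, the mouse_event approach described above can be sketched in a few lines of Python with ctypes. This is only an illustration (mouse_event is officially deprecated in favour of SendInput, but it is the simpler call); the key detail is the 0..65535 coordinate scaling that MOUSEEVENTF_ABSOLUTE expects:

```python
import ctypes
import sys

# Windows mouse_event flags (values from winuser.h)
MOUSEEVENTF_MOVE = 0x0001
MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004
MOUSEEVENTF_ABSOLUTE = 0x8000

def to_absolute(x, y, screen_w, screen_h):
    """Map pixel coordinates to the 0..65535 range that mouse_event
    expects when MOUSEEVENTF_ABSOLUTE is set."""
    return x * 65535 // screen_w, y * 65535 // screen_h

def click_at(x, y):
    """Move the cursor to (x, y) in screen pixels and left-click.

    Only meaningful on Windows; the VMware Player window must be
    focused and the coordinates must land inside it.
    """
    if sys.platform != "win32":
        raise RuntimeError("mouse_event is a Windows API")
    user32 = ctypes.windll.user32
    ax, ay = to_absolute(x, y,
                         user32.GetSystemMetrics(0),   # SM_CXSCREEN
                         user32.GetSystemMetrics(1))   # SM_CYSCREEN
    user32.mouse_event(MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE, ax, ay, 0, 0)
    user32.mouse_event(MOUSEEVENTF_LEFTDOWN, ax, ay, 0, 0)
    user32.mouse_event(MOUSEEVENTF_LEFTUP, ax, ay, 0, 0)
```

A swipe is the same idea with a LEFTDOWN, a series of absolute MOVEs, and a LEFTUP, with small sleeps between the moves.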
As you are on Windows, have you tried AutoHotkey (freeware) on the host machine that runs the VMware Player? This software can send any key/mouse-move/click combination and has several ways of analysing the VMware Player window output and reacting to it.
If, as in your example, you want to check whether a certain app has started and is visible, you can start it manually once and take a screenshot of a small part of the app's interface. Then you write a script that sends whatever mouse moves and key presses are needed to start the app, have the script pause a while, and then use the ImageSearch command to look for that image on screen.
I don't know much about any of this but telnet.
When you telnet in, you're assigned a shell which, if it is a restricted shell, will prevent you from doing exactly the things you mentioned. Are you able to change the default shell options for the devuser?
In a restricted shell you can typically change directory, but you can't create files, list files anywhere but the home directory, see the rest of the filesystem, redirect output, set environment variables, etc.
Which shell does it give you?
Can you telnet as a different user? Create a new user with better privileges?
Dru
Sorry, this should be a comment.

IBM Worklight - How to use Worklight in a background process

I have an application that should run continuously, even after the user presses the Back button.
It has to send some data via POST to my database on a remote server every half hour. This should happen even after the user has pressed the Back button, and the app should only stop when the phone is switched off.
I have set up the HTTP adapter to send the POST data, and an HTML file that calls the adapter procedures on launch, but I don't know how to make this run in the background, or what code will keep sending the POST data every half hour.
It sounds like you are running on Android.
The expected behavior in Android is that pressing the Back button QUITS the application.
Moving to the background is performed by pressing the Home button.
So, I don't think you should do this when pressing the Back button... or maybe you should at least present the user the choice of either quitting or moving to the background (you can override the Back button by using WL.App.overrideBackButton).
That said, I am not familiar with a programmatic method to move to the background instead of quitting (maybe there is an existing Cordova plug-in or another way that does this).
The other way to accomplish this is by using an Android Background Service, however, Worklight does not have support in its Native API to do this.
As for what happens when the Worklight application is in the background, I have never tried it myself, so I cannot be certain at all, but try using WL.Client.setHeartBeatInterval to keep alive the connection to the Worklight Server and write some logic that sends the adapter requests. See if that would work for you...
I have found this:
How to create plugin in phonegap to run the application in background?
Maybe you'll be able to massage it into your Worklight project (in case the heartbeat approach above does not work).
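Whichever way the app is kept alive, the half-hourly POST itself is just scheduling logic. Here is the shape of it sketched in Python for illustration (in a Worklight app the same idea would be a JavaScript timer invoking the adapter procedure; the URL and payload below are placeholders):

```python
import json
import threading
import urllib.request

POST_INTERVAL_SECONDS = 30 * 60  # every half hour

def post_usage(url, payload):
    """POST one JSON payload to the server and return the HTTP status."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

def schedule_posts(url, get_payload, interval=POST_INTERVAL_SECONDS):
    """Arm a timer that POSTs and then re-arms itself, so the upload
    repeats until the process exits."""
    def tick():
        post_usage(url, get_payload())
        schedule_posts(url, get_payload, interval)
    timer = threading.Timer(interval, tick)
    timer.daemon = True
    timer.start()
    return timer
```

The re-arming timer is the important part: a plain sleep loop would block, while a timer leaves the rest of the app responsive between uploads.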
More information on running in the background:
Android:
http://developer.android.com/training/basics/activity-lifecycle/index.html
http://developer.android.com/reference/android/app/Activity.html
iOS:
http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/ManagingYourApplicationsFlow/ManagingYourApplicationsFlow.html
Override the Back button using WL.App.overrideBackButton, then remove the exit call from that handler and leave it empty (if you don't want the app to exit when Back is pressed).

Controlling Spotify through Processing/Arduino

I am making a tangible controller for Spotify (like the one from Jordi Parra, http://vimeo.com/21387481#at=0) using an Arduino microcontroller.
I have a Processing sketch running which does all the calculations with the data from the Arduino. I want this Processing sketch to be able to control different options in Spotify like: Next, Previous, Play/Pause, Volume Up/Down, Shuffle.
Right now I use an extra Arduino Leonardo which simulates key presses while AutoHotKey listens to those and sends them to Spotify. It does not work very well and I only have limited options.
I would love to get rid of that extra Arduino while getting more control.
I am working on a Windows thing so Apple script won't work (for me).
Is there a possibility to control the Spotify app from Processing? Or is it possible to use the library to create a new Spotify app in Processing?
Many thanks in advance!
Paul
Disclaimer: I work at Spotify
Right now there is no cross-platform way to control the Spotify application. On Linux, Spotify will respond to dbus commands, which means that a bit of hacking could send play/pause/next/previous. I have heard that it is also possible to control Spotify on Mac OSX via applescript, but I'm not 100% certain about this. A quick google search for "control spotify mac os x applescript" produced some interesting results, though I'm not sure how current or relevant any of them are. As for Windows, I'm not sure if/how one would control the application at all.
Otherwise, your best bet would be libspotify, for which you would need to write a Processing library to communicate with it. Based on a bit of quick research, it seems that Processing libraries are written in Java, which means you'd either need to use a wrapper such as jlibspotify or hand-roll your own JNI wrapper for libspotify.
I'm not sure how current jlibspotify is, given that they are wrapping a rather old version of the library. If you do any libspotify hacking it is better done in C/C++ with a minimal JNI wrapper, but all of this may be way more work than you are intending for this project.
Why not utilize Spotify's keyboard integration?
The Arduino Leonardo supports USB HID mode.
So, send the keyboard keys for Next, Previous, Play/Pause, Volume Up/Down, Shuffle.
Most everything has a single bound global key; I believe only Shuffle does not. You could create a global hotkey in your OS and bind it to the app's shuffle control key.
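If the host machine running the Processing sketch is on Windows, it could also inject those keys itself, which would remove the need for the extra Leonardo. A minimal sketch in Python using the Win32 keybd_event call (a Processing/Java version would need JNA or a small native helper); the virtual-key codes are the standard media keys from winuser.h:

```python
import ctypes
import sys

# Windows virtual-key codes for the global media keys (from winuser.h)
MEDIA_KEYS = {
    "volume_down": 0xAE,  # VK_VOLUME_DOWN
    "volume_up": 0xAF,    # VK_VOLUME_UP
    "next": 0xB0,         # VK_MEDIA_NEXT_TRACK
    "previous": 0xB1,     # VK_MEDIA_PREV_TRACK
    "play_pause": 0xB3,   # VK_MEDIA_PLAY_PAUSE
}

KEYEVENTF_KEYUP = 0x0002

def press_media_key(name):
    """Tap one media key; Spotify on Windows responds to these globally."""
    if sys.platform != "win32":
        raise RuntimeError("keybd_event is a Windows API")
    vk = MEDIA_KEYS[name]
    user32 = ctypes.windll.user32
    user32.keybd_event(vk, 0, 0, 0)                # key down
    user32.keybd_event(vk, 0, KEYEVENTF_KEYUP, 0)  # key up
```

There is no media key for shuffle, which is why that one still needs an app-specific hotkey as described above.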
If you are looking for status feedback on the state of each button, this of course won't help you.
Good luck.

Adobe AIR: check/alert/prompt when the user issues a computer shut-down command

Is this possible to achieve using Adobe AIR, or do you have any other suggestion? I have an app which records the user's time, and to guard against the user forgetting to stop it, I think a similar solution would work.
Thanks!
You can bind a listener to the application's close event. When the OS receives a shut-down command, it will try to terminate running processes, so your app's close event will fire. I haven't tested this, but hopefully it will work. At that moment you can save the user's session data. That's what other software does when you shut down, hence the little delay...

Connecting to a Ghost User in Flex RTMFP

I have a simple Flex RTMFP P2P video app in the same mold as the Adobe Cirrus VideoPhone sample application. A problem I've encountered while developing this app (the same problem occurs in the sample) is when you try to connect to a ghost Stratus instance, i.e. you try to call someone whose Stratus ID is in the database but who is no longer on the page. Here's an example of what I mean:
Let's say you go to the Adobe Stratus sample and connect as Dan. Then you open a new tab, go to the sample again, and connect as Fred. If from this point you (as Fred) call Dan, everything works fine. But if you close the tab in which you connected as Dan, and then from the Fred tab try to connect to Dan, the program just hangs.
I would have thought there would be a NetStream event triggered when you try to connect to a Stratus instance that is no longer online, but I can't find anything besides NetStream.Connect.Rejected, which doesn't seem to be fired.
Any help is much appreciated!
Did you try NetStream.Connect.Failed?
You should also add an event listener to the incoming stream and watch for NetStream.Connect.Closed to see if the peer disconnects during the conversation. If it fires at any time, remove that peer from your db.
