Automate BlackBerry 10 simulator actions

I'm using VMware Player and the BlackBerry 10 simulator image; I need to run some unit/integration tests automatically. I know I can use the VIX API to spin up a new simulator and load the BlackBerry image.
What I would love to be able to do is send key presses, launch specific apps, and perhaps send gestures. On Android there's monkeyrunner and other similar tools, but I haven't found much for BB10. I know it's new, but I can't be the only one with this request.
Also, how powerful is the telnet option? I can telnet into an emulator and change directory into the apps dir, but I can't list its contents, use sudo, or run anything.
*****UPDATE*******
I've made some progress on this, but not much. It seems you can use the Windows API to send mouse_event messages to the VMware emulator; it's not 100% reliable, but it works well enough to open apps. The big hole I have right now is detecting state after the action/swipe/touch is executed, i.e. "did the swipe I just executed work? Are we in the right app?". It would be hugely beneficial to query the device's process list, but the 'devuser' account given in the telnet example can't really do anything.
This gist has the basics for how to touch and swipe the screen based on my experiences.
https://gist.github.com/edgiardina/6188074
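The touch-and-swipe idea from the update can be sketched in Python with ctypes. This is a minimal sketch, not the gist's exact code; it assumes the VMware Player window is in the foreground and that the simulator occupies a known region of the screen. The `MOUSEEVENTF_*` flag values are the documented Win32 constants.

```python
import ctypes
import sys

def to_absolute(x, y, screen_w, screen_h):
    """Map pixel coordinates to the 0..65535 "absolute" range that the
    Windows mouse APIs expect, regardless of actual screen resolution."""
    return (int(x * 65535 / (screen_w - 1)),
            int(y * 65535 / (screen_h - 1)))

if sys.platform == "win32":
    MOUSEEVENTF_MOVE = 0x0001
    MOUSEEVENTF_LEFTDOWN = 0x0002
    MOUSEEVENTF_LEFTUP = 0x0004
    MOUSEEVENTF_ABSOLUTE = 0x8000

    def tap(x, y, screen_w, screen_h):
        """Click (i.e. touch) at pixel (x, y) via the legacy mouse_event API."""
        ax, ay = to_absolute(x, y, screen_w, screen_h)
        ctypes.windll.user32.mouse_event(
            MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE, ax, ay, 0, 0)
        ctypes.windll.user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
        ctypes.windll.user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)
```

A swipe would follow the same pattern: press down, issue a series of absolute moves between the start and end points, then release.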

As you are on Windows, have you tried AutoHotkey (freeware) on the host machine that runs VMware Player? This software can send any key/mouse-move/click combination and has several ways of analysing the VMware Player window output and reacting to it.
If, in your example, you want to check whether a certain app has started and is visible, you can start it manually once and take a screenshot of a small part of the app's interface. Then you write a script that sends whatever mouse moves and key presses are needed to start the app, have the script pause a while, and then perform the ImageSearch command to look for that image on screen.
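The same start-the-app-then-poll-for-a-known-image pattern can be sketched in Python. The polling helper below is standard library; the screen search itself is shown as a comment using the third-party pyautogui package (an assumption on my part; AutoHotkey's ImageSearch plays the same role).

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout`
    seconds elapse; return the truthy value, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    return None

# With pyautogui installed, the "did the app reach the expected screen?"
# check might look like this ("app_toolbar.png" is a hypothetical
# screenshot of a small, distinctive part of the app's UI):
#
#   import pyautogui
#   found = wait_for(lambda: pyautogui.locateOnScreen("app_toolbar.png"))
#   if found is None:
#       print("App did not reach the expected screen")
```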

I don't know much about any of this but telnet.
When you telnet in, you're assigned a shell. If that shell is a restricted shell, it will prevent you from doing exactly the things you mentioned. Are you able to change the default shell options for the devuser?
Typically in a restricted shell you can change directories, but you cannot create files, list files anywhere but the home directory, see the rest of the filesystem, redirect output, set environment variables, etc.
Which shell does it give you?
Can you telnet as a different user? Create a new user with better privileges?

Related

Hololens Simulate gestures

I'm running a server on the HoloLens, and I created a PC app that works as a client.
Via the client I can drag, rescale, rotate, etc. the holograms created in Unity.
The holograms have some functions that start when the user makes an air tap gesture on them.
I want to be able to start those functions from my PC app.
The easiest way to do it is just to send a message from the app telling the HoloLens to do a certain thing, but...
I want to run those functions specifically as if the air tap gesture had happened.
To do so I need to simulate an air tap gesture.
I want to send the coordinates from the PC app to the HoloLens, and I want my scene to behave as if an actual air tap gesture had been executed, without executing that gesture in real life.
A similar thing can be done on the PC version of Windows 10, as described, for example, here:
https://superuser.com/questions/159618/simulating-mouse-clicks-at-specific-screen-coordinates
So here is my question to you:
Is it possible to simulate gestures? I would really appreciate any info you can share with me.
Thanks.
If you are using the standard MRTK Gaze stuff to handle your gestures, you could define a new IInputSource.
The example for adding Gamepad input could be a good starting point - instead of triggering an air tap when a gamepad button is pressed, trigger it in response to remote calls from your PC app.
The benefit of this is that it's in keeping with the existing input system - your code which acts on input doesn't need to know that it came from a gamepad, a hand, or your desktop app.
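The HoloLens side of this would be C# (a custom MRTK IInputSource), but the PC-to-HoloLens wire protocol that carries the simulated tap can be sketched in Python. Everything here is hypothetical: the two-float UDP payload, the port 9050, and the `hololens.local` hostname are all placeholders for whatever protocol you define.

```python
import socket
import struct

# Hypothetical wire format for a "remote air tap": one UDP datagram
# carrying two 32-bit floats (normalized x, y) in network byte order.
# The receiving IInputSource would unpack the same layout and raise an
# input event as if an air tap occurred at those coordinates.
TAP_FORMAT = "!2f"

def encode_tap(x, y):
    """Pack a tap position into a datagram payload."""
    return struct.pack(TAP_FORMAT, x, y)

def decode_tap(payload):
    """Unpack a datagram payload back into (x, y)."""
    return struct.unpack(TAP_FORMAT, payload)

def send_tap(x, y, host="hololens.local", port=9050):
    """Fire-and-forget a simulated tap to the HoloLens listener."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(encode_tap(x, y), (host, port))
```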

How do I run command scripts and keystrokes in the background via a button click?

Some employees in our shipping department have been manually entering and editing open orders' shipments in some terribly managed and typo-riddled Excel spreadsheets.
I'm now in the process of creating an Access database that imports the openorder.csv from our ancient UNIX system. Two command scripts (Telnet & FTP) requiring passwords communicate with UNIX to get current data, but the employees are nowhere near technologically literate enough to handle the command prompt screens.
How would I go about having this run in the background with a pop-up box telling the user to wait until the background process has finished?
*Note: I was originally going to have two buttons, one for each script, and have the user type in the passwords, but Telnet closes immediately after opening while FTP works fine; without both, this approach is useless.
I know this is definitely not the most efficient way to do this, but my skills are still lower intermediate, and I can't convince the higher-ups to invest in a server, so I have to do everything with band-aids.
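If a helper script outside Access is acceptable (Access can launch one via Shell), the FTP half can run non-interactively with Python's standard ftplib, so no command window or typed password is needed. The host, credentials, and file paths below are placeholders for the real UNIX box's details; the Telnet half would need a similar non-interactive treatment.

```python
import ftplib

def fetch_openorder(host, user, password,
                    remote_path="openorder.csv",
                    local_path="openorder.csv"):
    """Download the open-orders export without any interactive prompt.

    Raises an ftplib error on failure, so the caller can show the user
    a friendly message instead of a stuck terminal window.
    """
    with ftplib.FTP(host, user, password, timeout=30) as ftp:
        with open(local_path, "wb") as fh:
            # RETR streams the remote file; fh.write receives each block.
            ftp.retrbinary("RETR " + remote_path, fh.write)
    return local_path
```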

How to keep the browser open after test execution when using selenium web driver

My script searches for different strings in different tabs of a browser. Is there a way to keep the browser open after test execution is over so that the results can be checked later? Currently the browser closes automatically after 5 minutes even though I am not calling driver.quit().
Selenium: 2.33, Win 7, FF and Chrome
I don't know if you can leave the browser open, but may I suggest using Selenium's TakesScreenshot functionality to save the state of the browser whenever you want. That could help you debug, or verify that the page looks as expected.
If this doesn't help, could you explain exactly why you want to keep the browser open?
I was able to find a fix, at least for my problem. I am using the RemoteWebDriver class, and there is an option to provide timeout and browserTimeout parameters on the command line when the hub is started. Setting the value to 0 means the hub will wait indefinitely, which is exactly what I wanted for this scenario.
while True:
    pass
This puts your code into a loop; the browser stays responsive, and you ultimately kill the Python program with Ctrl+C. I just did this for a script that logs on to an application with encrypted credentials stored locally, which keeps plain-text user IDs and passwords out of the scripting code.
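A gentler variant of the busy-wait loop above, sketched here as a helper: blocking on input() uses no CPU while waiting, and a finally clause still runs cleanup (such as driver.quit) on Ctrl+C. The `driver` object is assumed to be a Selenium WebDriver created earlier in your script.

```python
def hold_open(cleanup=None, prompt="Press Enter to close the browser... "):
    """Block until the operator presses Enter (or interrupts), then
    run the cleanup callable, if one was given."""
    try:
        input(prompt)
    except (KeyboardInterrupt, EOFError):
        pass  # treat Ctrl+C / closed stdin the same as pressing Enter
    finally:
        if cleanup is not None:
            cleanup()

# Usage at the end of the test script (assuming `driver` exists):
#   hold_open(cleanup=driver.quit)
```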

Controlling Spotify through Processing/Arduino

I am making a tangible controller for Spotify (like the one from Jordi Parra, http://vimeo.com/21387481#at=0) using an Arduino microcontroller.
I have a Processing sketch running which does all the calculations with the data from the Arduino. I want this Processing sketch to be able to control different options in Spotify like: Next, Previous, Play/Pause, Volume Up/Down, Shuffle.
Right now I use an extra Arduino Leonardo which simulates key presses while AutoHotKey listens to those and sends them to Spotify. It does not work very well and I only have limited options.
I would love to get rid of that extra Arduino while getting more control.
I am working on Windows, so AppleScript won't work (for me).
Is there a possibility to control the Spotify app from Processing? Or is it possible to use the Spotify library to create a new Spotify app in Processing?
Many thanks in advance!
Paul
Disclaimer: I work at Spotify
Right now there is no cross-platform way to control the Spotify application. On Linux, Spotify will respond to D-Bus commands, which means that a bit of hacking could send play/pause/next/previous. I have heard that it is also possible to control Spotify on Mac OS X via AppleScript, but I'm not 100% certain about this. A quick Google search for "control spotify mac os x applescript" produced some interesting results, though I'm not sure how current or relevant any of them are. As for Windows, I'm not sure if/how one would control the application at all.
Otherwise, your best bet would be libspotify, for which you would need to write a Processing library to communicate with it. Based on a bit of quick research, it seems that Processing libraries are written in Java, which means you'd either need to use a wrapper such as jlibspotify or hand-roll your own JNI wrapper for libspotify.
I'm not sure how current jlibspotify is, given that they are wrapping a rather old version of the library. If you do any libspotify hacking it is better done in C/C++ with a minimal JNI wrapper, but all of this may be way more work than you are intending for this project.
Why not utilize Spotify's keyboard integration?
The Arduino Leonardo supports USB HID mode.
So, send the keyboard keys for Next, Previous, Play/Pause, and Volume Up/Down.
Almost every action has a single globally bound media key; I believe only Shuffle does not. You could create a global hotkey in your OS and bind it to the app's shuffle control key.
If you are looking for status feedback on the state of each button, this of course won't help you.
Good luck.
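On Windows, those global media keys can also be injected in software, which would remove the extra Leonardo entirely. The virtual-key codes below are the documented Windows constants; this is a Python sketch of the idea (a Processing sketch, being Java, would need JNA or an external helper to make the same calls). Note there is no media key for Shuffle, matching the caveat above.

```python
import ctypes
import sys

# Documented Windows virtual-key codes for the global media keys that
# Spotify, like most players, responds to without needing focus.
MEDIA_KEYS = {
    "play_pause":  0xB3,  # VK_MEDIA_PLAY_PAUSE
    "next":        0xB0,  # VK_MEDIA_NEXT_TRACK
    "previous":    0xB1,  # VK_MEDIA_PREV_TRACK
    "volume_up":   0xAF,  # VK_VOLUME_UP
    "volume_down": 0xAE,  # VK_VOLUME_DOWN
}

KEYEVENTF_KEYUP = 0x0002

def press_media_key(name):
    """Tap a media key globally (Windows only)."""
    if sys.platform != "win32":
        raise OSError("keybd_event is a Windows API")
    vk = MEDIA_KEYS[name]
    ctypes.windll.user32.keybd_event(vk, 0, 0, 0)               # key down
    ctypes.windll.user32.keybd_event(vk, 0, KEYEVENTF_KEYUP, 0) # key up
```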

Unix write command to message another user

I have some handheld scanners (AML M7220, running Unix) that I would like to send messages to. After using the write command to send a message to a scanner held by someone on the warehouse floor, the message stays on the screen. Does anyone know a way to either clear or refresh the screen after the message session has ended? I have tried to email AML and call AML tech support, but they don't give a crap about responding to emails, and the phone number isn't even valid. AML needs to update their website!!
There are several commands related to messaging another user on the same system:
1. mesg (controls whether your terminal accepts such messages)
2. talk
3. write
4. wall
write is a fairly low-level utility. It has no notion of the contents of the screen at the end of the remote terminal, and anything else that happens to be running on that terminal has no awareness that its screen has just been polluted by a write session. Actually, UNIX ttys in general have no notion of what's displayed on them (they are more or less just character streams).
You cannot clear the screen at the end of a write session because the only way you could possibly do it would be to pass a terminal-clearing terminal emulation sequence through the write session and (modern) write won't let you do that (it will escape it).
You cannot repaint the remote terminal's screen if some kind of full screen application happens to be running because there is no way to ask that application to do that.
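One possible workaround, sketched in Python: if you can open the remote terminal's device node directly (you need write permission on it, e.g. as root), you can send the clear-screen escape sequence yourself, bypassing write(1)'s escaping. The "/dev/pts/3" path is a placeholder, and this assumes the scanner's terminal emulation honors ANSI/VT100 sequences, which the AML device may or may not do.

```python
# ANSI/VT100: ESC[2J erases the display, ESC[H homes the cursor.
CLEAR_SCREEN = "\x1b[2J\x1b[H"

def clear_remote_tty(tty_path="/dev/pts/3"):
    """Write the clear-screen sequence straight to a terminal device.
    Requires write permission on the device node."""
    with open(tty_path, "w") as tty:
        tty.write(CLEAR_SCREEN)
        tty.flush()
```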
