I am making a tangible controller for Spotify (like the one from Jordi Parra, http://vimeo.com/21387481#at=0) using an Arduino microcontroller.
I have a Processing sketch running which does all the calculations with the data from the Arduino. I want this Processing sketch to be able to control different options in Spotify like: Next, Previous, Play/Pause, Volume Up/Down, Shuffle.
Right now I use an extra Arduino Leonardo that simulates key presses, while AutoHotkey listens for those and forwards them to Spotify. It does not work very well, and I only have limited options.
I would love to get rid of that extra Arduino while getting more control.
I am working on Windows, so AppleScript won't work (for me).
Is there any way to control the Spotify app from Processing? Or is it possible to use the Spotify library to create a new Spotify app in Processing?
Many thanks in advance!
Paul
Disclaimer: I work at Spotify
Right now there is no cross-platform way to control the Spotify application. On Linux, Spotify will respond to D-Bus commands, which means that a bit of hacking could send play/pause/next/previous. I have heard that it is also possible to control Spotify on Mac OS X via AppleScript, but I'm not 100% certain about this; a quick Google search for "control spotify mac os x applescript" produced some interesting results, though I'm not sure how current or relevant any of them are. As for Windows, I'm not sure if or how one would control the application at all.
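For the Linux case, that bit of hacking can be quite small, assuming the Linux client exposes the standard MPRIS interface over D-Bus (it did last time I checked) and the dbus-send tool is installed on the host. A minimal Java/Processing-side sketch that shells out to dbus-send:

    import java.io.IOException;

    public class SpotifyDbus {
        // Send a player command ("PlayPause", "Next", "Previous") to Spotify's
        // MPRIS D-Bus interface by shelling out to the dbus-send tool.
        static void send(String command) throws IOException {
            new ProcessBuilder(
                "dbus-send", "--print-reply",
                "--dest=org.mpris.MediaPlayer2.spotify",
                "/org/mpris/MediaPlayer2",
                "org.mpris.MediaPlayer2.Player." + command
            ).inheritIO().start();
        }

        public static void main(String[] args) throws IOException {
            send("PlayPause");
        }
    }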
Otherwise, your best bet would be libspotify, for which you would need to write a Processing library to communicate with it. Based on a bit of quick research, Processing libraries are written in Java, which means you'd either need to use a wrapper such as jlibspotify or hand-roll your own JNI wrapper for libspotify.
I'm not sure how current jlibspotify is, given that it wraps a rather old version of the library. If you do any libspotify hacking, it is better done in C/C++ with a minimal JNI wrapper, but all of this may be far more work than you intend for this project.
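To give a feel for the hand-rolled JNI route: the Java side is little more than a class of native method declarations mirroring the libspotify calls you need, with the real work done in a C shim compiled against libspotify. A rough sketch; the wrapper class and shim library name are hypothetical, though the libspotify functions named in the comments are real:

    public class SpotifyNative {
        static {
            // Load the hypothetical C shim (libspotifyjni.so / spotifyjni.dll)
            // that forwards these calls to libspotify.
            System.loadLibrary("spotifyjni");
        }

        // Each native method would be backed by a C function calling the
        // corresponding libspotify API (sp_session_create, sp_session_login,
        // sp_session_player_play, ...).
        public native boolean createSession(byte[] applicationKey);
        public native void login(String username, String password);
        public native void playerPlay(boolean play);
    }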
Why not utilize Spotify's keyboard integration?
The Arduino Leonardo supports USB HID mode.
So send the keyboard keys for Next, Previous, Play/Pause, Volume Up/Down, and Shuffle.
Almost everything has a single bound global key; I believe only Shuffle does not. You could create a global hotkey in your OS and bind it to the app's shuffle control key.
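And since Processing sketches are plain Java underneath, you can also synthesize ordinary key presses locally with java.awt.Robot instead of routing them through a second board. One caveat: AWT's KeyEvent table has no constants for the dedicated media keys, so this sketch assumes you have bound regular keys (or OS-level hotkeys) to Spotify's controls; the Space binding below is a placeholder:

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.KeyEvent;

    public class KeySender {
        private final Robot robot;

        public KeySender() throws AWTException {
            robot = new Robot();
        }

        // Tap a single key: press, then release.
        public void tap(int keyCode) {
            robot.keyPress(keyCode);
            robot.keyRelease(keyCode);
        }

        public static void main(String[] args) throws AWTException {
            // Placeholder: assumes Space is mapped to Play/Pause.
            new KeySender().tap(KeyEvent.VK_SPACE);
        }
    }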
If you are looking for status feedback on the state of each button, this of course won't help you.
Good luck.
I have been assigned the task of automating regression testing for a VXML-based IVR hosted in the cloud.
This is a DTMF-based IVR: the IVR plays an audio prompt and then waits for caller input. I am not sure how to automate this part.
How do I automate DTMF digits collection?
I have seen a few suggestions that I need to play back audio files that represent the telephone keypad input (DTMF), but that doesn't seem optimal. Is there a way I can specify the input in a text file and have the IVR read it?
I have found a few suggestions online, but they would require paid tools. I have to find a solution that is free, meaning I am only allowed to use tools that are freely available on the Internet.
I would be grateful for any suggestions on how to get this done.
There are a couple of commercial solutions, but since you indicate you need free, I'll skip those.
You can blindly treat it as a web application and test the navigation between pages. This won't allow you to test the call flow, but you can test some of the back end logic that drives page generation.
You can write another IVR application to call your current application. Without speech recognition, it is hard to confirm that the call flow is correct, but a call that ends unexpectedly would fail. If you can change the existing application, you may be able to swap out the voice recordings for tones and use those to keep the test case and call flow in sync.
You could use one of the open-source VoiceXML engines and modify it to drive the call flow. You might have dependencies in your infrastructure that require a real call versus a simulation. I have been able to get JVoiceXML to process a voice application in a simulation/test-case manner.
In summary, you're going to need to be creative if the requirement is no external cost, just your time.
The DTMF tone wave files can be made dynamic by using a script. Say you want to enter the DOB 22111984: write an ECMAScript function that plays the corresponding wave files, the same as playing any dynamic audio file. This assumes you are using another (outbound) IVR that plays the tones back to the inbound IVR, e.g.:
<script> <![CDATA[
  // Return the URI of a dynamically generated VXML page that plays
  // one wave file per digit of the input, e.g. for 22111984:
  // 2.wav 2.wav 1.wav 1.wav 1.wav 9.wav 8.wav 4.wav
  function sayDTMF(n)
  {
    // "playdtmf" is a hypothetical server-side page that builds that VXML
    return 'playdtmf?digits=' + n;
  }
]]> </script>
<goto expr="sayDTMF(DOB)"/>
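If you would rather not keep a library of prerecorded digit files at all, the tones themselves are simple to synthesize: each DTMF digit is the sum of two fixed sine frequencies (e.g. '2' is 697 Hz + 1336 Hz). A self-contained Java sketch using the standard javax.sound.sampled API; the output file name and the tone/gap lengths are arbitrary choices:

    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import java.io.ByteArrayInputStream;
    import java.io.File;

    public class DtmfWav {
        static final float RATE = 8000f;

        // Standard DTMF (low, high) frequency pair for a keypad character.
        static double[] freqs(char d) {
            String[] rows = {"123A", "456B", "789C", "*0#D"};
            double[] low = {697, 770, 852, 941};
            double[] high = {1209, 1336, 1477, 1633};
            for (int r = 0; r < 4; r++) {
                int c = rows[r].indexOf(d);
                if (c >= 0) return new double[]{low[r], high[c]};
            }
            throw new IllegalArgumentException("not a DTMF digit: " + d);
        }

        public static void main(String[] args) throws Exception {
            String digits = "22111984";              // the DOB from the example
            int tone = (int) (RATE * 0.2);           // 200 ms tone per digit
            int gap = (int) (RATE * 0.1);            // 100 ms silence between digits
            byte[] pcm = new byte[digits.length() * (tone + gap) * 2];
            int i = 0;
            for (char d : digits.toCharArray()) {
                double[] f = freqs(d);
                for (int n = 0; n < tone; n++) {
                    double t = n / RATE;
                    short v = (short) (12000 * (Math.sin(2 * Math.PI * f[0] * t)
                                              + Math.sin(2 * Math.PI * f[1] * t)));
                    pcm[i++] = (byte) v;             // 16-bit little-endian PCM
                    pcm[i++] = (byte) (v >> 8);
                }
                i += gap * 2;                        // the gap stays silent
            }
            AudioFormat fmt = new AudioFormat(RATE, 16, 1, true, false);
            AudioInputStream ais = new AudioInputStream(
                    new ByteArrayInputStream(pcm), fmt, pcm.length / 2);
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("dob.wav"));
        }
    }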
I'm using the VMWare Player and the BlackBerry 10 simulator image; I need to do some unit/integration tests automatically. I know I can use the VIX API to spin up a new simulator and load the BlackBerry image.
What I would love to be able to do is send key presses, launch specific apps, and perhaps send gestures. On Android there's monkeyrunner and other similar tools; however, I haven't found much with respect to BB10. I know it's new, but I can't be the only one with this request.
Also, how powerful is the telnet option? I can telnet into an emulator and change directory into the apps dir, but I can't list its contents, sudo, or run anything.
UPDATE:
I've made some progress on this, but not much. It seems that you can use the Windows API to send mouse_event messages to the VMWare emulator; it's not 100% reliable, but it works well enough to open apps. The big hole I have right now is detecting state after the action/swipe/touch is executed, i.e. "did the swipe I just executed work? Are we in the right app?". It would be hugely beneficial to query the device's process list, but the 'devuser' account given in the telnet example can't really do anything.
This gist has the basics of how to touch and swipe the screen, based on my experiences:
https://gist.github.com/edgiardina/6188074
As you are on Windows, have you tried AutoHotkey (freeware) on the host machine that runs the VMWare Player? This software can send any key/mouse-move/click combination and has several ways of analysing the VMWare Player window output and reacting to it.
If, as in your example, you want to check whether a certain app has started and is visible, you can start it manually once and take a screenshot of a small part of the app's interface. Then you write a script that sends whatever mouse moves and keystrokes are needed to start the app, have the script pause a while, and then perform the ImageSearch command to look for this image on screen.
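If you would rather drive the same check from Java than from an AutoHotkey script, java.awt.Robot can do both halves: synthesize input and grab a screen region to compare against your reference screenshot. A minimal sketch; the region coordinates and the app.png reference file are placeholders:

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class ScreenCheck {
        // True if the given screen region matches the reference image pixel
        // for pixel; a tolerance-based comparison would be more robust.
        static boolean regionMatches(Robot robot, Rectangle region, File reference)
                throws Exception {
            BufferedImage actual = robot.createScreenCapture(region);
            BufferedImage expected = ImageIO.read(reference);
            if (expected.getWidth() != actual.getWidth()
                    || expected.getHeight() != actual.getHeight()) return false;
            for (int y = 0; y < actual.getHeight(); y++)
                for (int x = 0; x < actual.getWidth(); x++)
                    if (actual.getRGB(x, y) != expected.getRGB(x, y)) return false;
            return true;
        }

        public static void main(String[] args) throws Exception {
            // Placeholder region where the app's interface should appear.
            Rectangle region = new Rectangle(100, 100, 64, 64);
            System.out.println(regionMatches(new Robot(), region, new File("app.png")));
        }
    }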
I don't know much about any of this except telnet.
When you telnet in, you're assigned a shell which, if it is a restricted shell, will prevent you from doing exactly the things you mentioned. Are you able to change the default shell options for the devuser?
You can change directories, but not create files, list files anywhere but the home directory, see any of the filesystem, redirect output, set environment variables, etc.
Which shell does it give you?
Can you telnet in as a different user, or create a new user with better privileges?
Dru
Sorry, this should be a comment.
What's the proper way to capture biometric information (pressure, speed, ...) when signing with a stylus on a canvas in a JSP web page?
Alright, since no one else has attempted to answer this question, I shall elaborate on my comment, and hopefully it will serve as an answer for others as well.
First, Java Server Pages (JSP) is a server-side language. It is meant to run on the web-server and not on the user's browser. The same goes for other server-side languages like PHP and ASP.
So a server-side language is not able to directly interact with devices (keyboards, scanners, cameras, etc.). Only when the data is submitted by the browser or client program does the server receive it for processing.
For a device to receive input, there are two key pieces of software needed:
The device driver, which must be installed on the user's machine.
The application program, which captures inputs and does any processing.
If either one is missing, the device cannot function. And then there's another issue: depending on the device, there is various feedback from the driver/API that should go back to the application that reads it. For example, if a fingerprint scan was not very successful for some reason, the scanner should tell this to the user. So again, there's a need for interactivity between the device and the user's application.
Thus, using any server-side language is out of the question for such applications.
Now, in order to make this possible, you may use a client-side program. Here are some options.
A native application in VB, C/C++, Pascal, or another language. If this is an option, the user must install this application on their computer.
A browser-based program. This can be a program created using Java (not JavaScript or JSP) or an ActiveX component. ActiveX is largely OS/browser dependent. And the truth is that even Java is not truly platform independent when it comes to different operating systems; there are some technical differences you'll need to look into. But for the most part, for interactivity and high-level operations, Java is more platform independent than the others. (On a personal note, Java is my worst language; I try not to use it anywhere anymore. But that's a different story.)
In both options above, every client machine must have its own proprietary drivers and often some sort of API for browser integration.
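To make the browser-based option a bit more concrete: in a Java applet (or any AWT/Swing component) you can already derive position and speed by timestamping stylus/mouse motion events; true pressure, however, is not exposed by AWT, so it needs a tablet API on the client (JPen is one option, though I haven't used it myself). A minimal sketch of the position/speed part:

    import java.awt.event.MouseEvent;
    import java.awt.event.MouseMotionAdapter;
    import javax.swing.JFrame;
    import javax.swing.JPanel;

    public class SignatureCapture extends JPanel {
        private long lastTime;
        private int lastX, lastY;

        public SignatureCapture() {
            addMouseMotionListener(new MouseMotionAdapter() {
                @Override
                public void mouseDragged(MouseEvent e) {
                    long now = System.nanoTime();
                    if (lastTime != 0) {
                        double dt = (now - lastTime) / 1e9;               // seconds
                        double dx = e.getX() - lastX, dy = e.getY() - lastY;
                        double speed = Math.sqrt(dx * dx + dy * dy) / dt; // px/s
                        // Buffer (x, y, t, speed) samples here, then POST the
                        // whole series to the JSP on the server when signing ends.
                        System.out.printf("%d %d %.0f px/s%n", e.getX(), e.getY(), speed);
                    }
                    lastTime = now;
                    lastX = e.getX();
                    lastY = e.getY();
                }
            });
        }

        public static void main(String[] args) {
            JFrame f = new JFrame("Sign here");
            f.add(new SignatureCapture());
            f.setSize(400, 200);
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
        }
    }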
A year or so ago, I had to program a Bio-Mini fingerprint scanner using VB. It was all sweet in the beginning. Then, due to the restrictions on networkability and concurrent usage, the drivers/SDK could not take the load and things started going wrong. By the way, the drivers/SDK were meant for MS Access. Knowing that the DB was the problem, I started to port this to MySQL, and it was a severe climb from there: I had to do a near-rewrite of the SDK's capture and comparison logic using arrays in VB. And to make things worse, the device was changed and things went wrong again, even though the new device was from the same manufacturer.
So keep in mind that even a simple change like that can cause a problem.
I am planning on creating my own POS system using POS for .NET.
So far I have never created a POS system or used POS for .NET. I tried to find some tutorials but wasn't successful. Does anybody know a good website or book? I am also wondering if there is a way to emulate the POS devices (like a barcode scanner...).
Using Windows Embedded 8 Industry as your base system and POS for .NET, you will be able to achieve what you are trying to do.
As mentioned by @CResults, most barcode readers are plug and play, so unless they need COM ports mapped you will not have to do anything to them. They scan, decode, and then send the values like a keyboard.
Receipt printers are the issue. You can either use the Windows driver to print to the printer (highly buggy), or communicate directly with the printer through a COM port, which works with every receipt printer; you will need to ask the manufacturer for the SDK. This way you are talking directly to the printer rather than relying on a mediator that could crash or freeze.
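To sketch what communicating directly looks like: most receipt printers speak some variant of the ESC/POS byte protocol, so once the manufacturer's documentation confirms the commands, printing is just writing bytes to the serial port. The example below assumes the jSerialComm library and a printer on COM3; the command bytes shown are the common ESC/POS ones, so check them against your printer's SDK:

    import com.fazecast.jSerialComm.SerialPort;
    import java.io.OutputStream;

    public class ReceiptPrinter {
        public static void main(String[] args) throws Exception {
            SerialPort port = SerialPort.getCommPort("COM3");  // printer's COM port
            port.setComPortParameters(9600, 8, SerialPort.ONE_STOP_BIT,
                                      SerialPort.NO_PARITY);
            if (!port.openPort()) throw new IllegalStateException("cannot open port");
            try (OutputStream out = port.getOutputStream()) {
                out.write(new byte[]{0x1B, 0x40});             // ESC @ : initialize
                out.write("Total: $4.20\n\n\n".getBytes("US-ASCII"));
                out.write(new byte[]{0x1D, 0x56, 0x00});       // GS V 0 : full cut
                out.flush();
            } finally {
                port.closePort();
            }
        }
    }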
But for the time you are going to spend building your project, there are plenty of already-created systems. It would be worthwhile to use one that has been developed on for years rather than reinvent the wheel.
Check out Bepoz http://bepoz.com/ They have been around for a while and have a very solid application.
Disclaimer: Employee of Bepoz
I believe the SDK comes with plenty of sample code and documentation.
There's also a community forum:
http://social.msdn.microsoft.com/Forums/en-US/posfordotnet/threads
To answer part of your question: most barcode scanners appear to the OS as keyboards, sending their output (the decoded value of the barcode) to your app as a string of characters.
So to emulate one, all you need to do is capture the keyboard.
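Here is roughly what that capture looks like in Java/Swing. Scanners in keyboard-wedge mode type the decoded characters very quickly and usually finish with an Enter suffix (the terminator is configurable on most models, so treat it as an assumption):

    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class ScannerListener {
        public static void main(String[] args) {
            JTextField field = new JTextField(20);
            StringBuilder barcode = new StringBuilder();
            field.addKeyListener(new KeyAdapter() {
                @Override
                public void keyTyped(KeyEvent e) {
                    if (e.getKeyChar() == '\n') {              // assumed Enter suffix
                        System.out.println("Scanned: " + barcode);
                        barcode.setLength(0);
                    } else {
                        barcode.append(e.getKeyChar());
                    }
                }
            });
            JFrame f = new JFrame("Scan into this field");
            f.add(field);
            f.pack();
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
        }
    }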
I want to make a web interface that prints out a form that needs to be filled in.
The problem is that I want to print the form only when a call is answered by a softphone.
What event should I listen for?
I'm using Asterisk 1.6 and I get all the events in XML using AsterClick.
Thanks,
Sebastian
AsterClick is an excellent choice, as it is the only event-based Asterisk AMI/(XML)/JavaScript interface on the planet that propagates those events to JavaScript as they occur; all the others use polling.
As to your question...
During your AsterClick development, when your JavaScript inherits from the wSocket class, you should implement the method(s) your_wSocket.wSocketsReceiveString(String) and/or your_wSocket.wSocketsReceiveXML(XMLdocument).
These wSocket methods should be documented on the AsterClick site (http://asterclick.drclue.net), with more help available in the forums (http://forums.drclue.net/viewforum.php?f=13).
These functions can be used to monitor the XML command and event flow in real time, and they represent all the data the Asterisk AMI provides. I tend to route this information to an auto-scrolling widget with a handy [clear] button nearby.
By pressing that [clear] button and then interacting with your phones while examining the event flow, you should be able to pick out the events and related data for any automation sequence.
As to jQuery, I know that there are jQuery/AsterClick projects running out there, including a real-time HUD system that manages calls, conferencing, parking, queues, etc.
The AsterClick client SDK is also squarely aimed at HTML5 and runs on an ever-increasing range of devices as the required HTML5 features are implemented.
You can also use the AsterClick "WBEA" tool to deploy your HTML5 AsterClick applications as desktop executables for Windows and Linux.
Anyway, I hope some of that helps.
You should capture events from the AMI. You can use Adhearsion for this (Adhearsion can also be connected to a Rails app):
https://github.com/adhearsion/adhearsion
or PAMI if you prefer PHP:
https://github.com/marcelog/PAMI
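If you end up talking to the AMI directly instead of through a library, it is just a line-oriented TCP protocol (port 5038 by default): log in, then read key/value event blocks separated by blank lines. On Asterisk 1.6 an answered channel typically shows up as a Newstate event with ChannelStateDesc: Up. A bare-bones Java sketch; the host and credentials are placeholders for whatever is in your manager.conf:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class AmiWatcher {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket("pbx.example.com", 5038);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream())) {
                // Log in with a manager account from manager.conf.
                out.print("Action: Login\r\nUsername: test\r\nSecret: secret\r\n\r\n");
                out.flush();

                String event = "";
                boolean up = false;
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.isEmpty()) {                     // end of one event block
                        if (event.equals("Newstate") && up)
                            System.out.println("Call answered - trigger the form");
                        event = "";
                        up = false;
                    } else if (line.startsWith("Event: ")) {
                        event = line.substring(7).trim();
                    } else if (line.startsWith("ChannelStateDesc: Up")) {
                        up = true;
                    }
                }
            }
        }
    }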