I would like to accomplish 2 tasks in AIR:
Determine whether a defined program is running (for example firefox.exe)
If it's running, then get the current dimensions of its window - I want to make a screenshot of the window, so I'd need other parameters too, I guess: Is it minimized? Is it behind some other window?
Is this possible to accomplish in AIR? I'm using the latest version (2.6)
No, it's not possible out of the box. What you're going to need to do is write a native application in something like C# or C++ and then interface with that application using the NativeProcess API. Here are two video tutorials for something close to what you want to do, which should have you well under way:
http://gotoandlearn.com/play.php?id=125
http://gotoandlearn.com/play.php?id=126
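For the native-side helper on Windows, a minimal C++ sketch might look like the following. The window class name "MozillaWindowClass" is my assumption about Firefox (verify it with a tool such as Spy++); the helper just prints the window state to stdout, which the AIR app can read through NativeProcess:
// native_helper.cpp - sketch of the native side (Windows). Compile to an
// executable that the AIR app launches via NativeProcess and reads via stdout.
// "MozillaWindowClass" is an assumed window class for Firefox.
#include <windows.h>
#include <cstdio>

int main() {
    HWND hwnd = FindWindowA("MozillaWindowClass", NULL);
    if (hwnd == NULL) {
        std::printf("not-running\n");
        return 1;
    }
    RECT r;
    GetWindowRect(hwnd, &r);  // screen coordinates of the window
    std::printf("left=%ld top=%ld right=%ld bottom=%ld minimized=%d foreground=%d\n",
                r.left, r.top, r.right, r.bottom,
                IsIconic(hwnd) ? 1 : 0,                     // minimized?
                GetForegroundWindow() == hwnd ? 1 : 0);     // in front of other windows?
    return 0;
}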
I am trying to show progress in the taskbar of the Plasma desktop using the KDE Frameworks. In short, I want to do the same thing as Dolphin when it copies files:
I'm kinda stuck, because I don't even know where to get started. The only thing I found that could be useful is KStatusBarJobTracker, but I don't know how to use it. I could not find any tutorials or examples of how to do this.
So, after digging around, and thanks to the help of #leinir, I was able to find out the following:
Since Plasma 5.6, KDE supports the Unity DBus Launcher API, which can be used, for example, to show progress.
I found a post on AskUbuntu that explains how to use the API with Qt.
The real problem is: this only works if you have a valid desktop file in one of the standard locations! You need to pass the file as a parameter of the DBus message to make it work.
Based on this information, I figured out how to use it and created a GitHub repository that supports cross-platform taskbar progress and uses this API for the Linux implementation.
However, here is how to do it anyway. It should work for KDE Plasma and the Unity desktop, maybe more (I haven't tried any others):
Create a .desktop file for your application. For testing purposes, this can be a "dummy" file that could look like this:
[Desktop Entry]
Type=Application
Version=1.1
Name=MyApp
Exec=<path_to>/MyApp
Copy that file to ~/.local/share/applications/ (or wherever user-specific desktop files go on your system).
In your code, all you need to do is execute the following to update the taskbar state:
auto message = QDBusMessage::createSignal(QStringLiteral("/com/example/MyApp"),
                                          QStringLiteral("com.canonical.Unity.LauncherEntry"),
                                          QStringLiteral("Update"));

// You don't always have to specify all parameters, just the ones you want to update.
QVariantMap properties;
properties.insert(QStringLiteral("progress-visible"), true);  // enable the progress indicator
properties.insert(QStringLiteral("progress"), 0.5);           // set the progress value (from 0.0 to 1.0)
properties.insert(QStringLiteral("count-visible"), true);     // display the "counter badge"
properties.insert(QStringLiteral("count"), 42);               // set the counter value

message << QStringLiteral("application://myapp.desktop")  // assuming you named the desktop file "myapp.desktop"
        << properties;
QDBusConnection::sessionBus().send(message);
Compile and run your application. You don't have to start it via the desktop file; at least I did not need to. If you want to be sure your application is "connected" to that desktop file, just set a custom icon for the file (an example entry is shown below). Your application should show that icon in the taskbar.
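For example, you could extend the dummy desktop file from above with an icon entry (the path here is just illustrative):
Icon=<path_to>/myapp-icon.png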
And that's basically it. Note: the system remembers the last state when restarting the application, so you should reset all those parameters once when starting the application.
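A minimal sketch of such a reset, reusing the same signal as above (same assumed object path and desktop file name as in the earlier snippet):
auto resetMessage = QDBusMessage::createSignal(QStringLiteral("/com/example/MyApp"),
                                               QStringLiteral("com.canonical.Unity.LauncherEntry"),
                                               QStringLiteral("Update"));

QVariantMap reset;
reset.insert(QStringLiteral("progress-visible"), false);  // hide the progress indicator
reset.insert(QStringLiteral("count-visible"), false);     // hide the counter badge

resetMessage << QStringLiteral("application://myapp.desktop") << reset;
QDBusConnection::sessionBus().send(resetMessage);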
Right, so as it turns out you are right: there is not currently a tutorial for this. This ReviewBoard request, however, shows how it was implemented in KDevelop, and it should be possible for you to work it out from that :) https://git.reviewboard.kde.org/r/127050/
PS: the fact that there is no tutorial yet might be a nice opportunity for you to hop in and help out by writing a small, self-contained tutorial for it... something I'm sure would be very much welcomed :)
How do you create/launch an integration test in Qt? For example, I want to test my application receiving events and check signals with QSignalSpy, but it looks like there is no option to execute your application and run tests after that.
Update:
I'm familiar with QTest and using it - I'm actually looking for how to launch a custom application, not a basic one with the QTEST_MAIN macro.
Have a look at QTest. It is designed for unit testing, but you can use it to launch your application and perform mouse clicks and key presses on objects. It does present a bit of a headache whenever you get a modal window, but there are workarounds on SO.
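A minimal sketch of the custom-main approach: instead of QTEST_MAIN, you write your own main(), construct your real application objects first, and only then hand control to QTest::qExec. The QPushButton here is just a stand-in for a widget of the real application:
// integration_test.cpp - sketch: hand-written main() instead of QTEST_MAIN,
// so the real application can be set up before the tests run.
#include <QApplication>
#include <QPushButton>
#include <QSignalSpy>
#include <QtTest>

class IntegrationTest : public QObject {
    Q_OBJECT
private slots:
    void clickEmitsSignal() {
        QPushButton button;  // stand-in for a widget of your real application
        QSignalSpy spy(&button, &QPushButton::clicked);
        QTest::mouseClick(&button, Qt::LeftButton);  // synthesize a user click
        QCOMPARE(spy.count(), 1);                    // the signal was emitted once
    }
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);  // construct and set up your real application here
    IntegrationTest test;
    return QTest::qExec(&test, argc, argv);
}

#include "integration_test.moc"  // needed because Q_OBJECT lives in a .cpp file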
I'm using Game Maker for Mac by YoYo Games, and I was wondering if it's possible to open another app from a Game Maker game by pressing a button. Sorry if this doesn't seem very clear; tell me if it isn't.
Thanks!
I do not know about the Mac version, but in the Windows version it can be done using the following functions:
execute_program(prog,arg,wait)
execute_shell(prog,arg)
execute_program, execute_shell and execute_file are obsolete. Don't use them; they will be removed.
Why?
These functions can no longer be used, due to changes in the underlying runner that GameMaker:Studio uses to generate the device-specific packages, which make it impossible to generate objects, images or code "on the fly" as was done before.
How to fix this? I'd make a DLL for the Windows version, and an equivalent shared library (I guess .dylib or .so?) for the Mac version, that can run external programs.
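For the Windows side, a sketch of what such a DLL might look like. GameMaker:Studio extensions expect exported functions whose arguments and return values are doubles or strings; run_program is an illustrative name I made up:
// gm_launcher.cpp - sketch of a Windows DLL a GameMaker:Studio extension could call.
#include <windows.h>

extern "C" __declspec(dllexport) double run_program(const char *path) {
    // ShellExecuteA returns a value greater than 32 on success.
    HINSTANCE result = ShellExecuteA(NULL, "open", path, NULL, NULL, SW_SHOWNORMAL);
    return (reinterpret_cast<INT_PTR>(result) > 32) ? 1.0 : 0.0;  // 1 = launched, 0 = failed
}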
My scenario is the following: I am using a PyQt widget to display a solid color fullscreen on a second display and observe this display with a camera that is continuously capturing images. I do some processing with the images, and this is the data I am interested in. This works great when used interactively with ipython and matplotlib using the qt4agg backend, like so:
% ipython -pylab
# ... import PatternDisplay, starting camera
pd = PatternDisplay(); pd.show(); pd.showColor(r=255,g=255,b=255)
imshow(cam.current_image)
I need similar behavior in a console script now, though: it should display the PatternDisplay widget, capture an image, then change the color on the PatternDisplay and take a new image, and so on.
The problem is that the PatternDisplay is never updated/redrawn in my script, likely because PyQt never gets a chance to run its event queue. I had no luck trying to move the linear worker part of my script into a QThread, because I could no longer communicate with the PatternDisplay widget from another thread. I tried to replicate the implementation of ipython/matplotlib, but I didn't fully understand it; it is quite complicated - it avoids running the QApplication main loop via monkey patching and somehow moves Qt into its own thread. It then periodically checks, using a QTimer, whether a new command was entered by the user.
Isn't there an easy way to achieve what I want to do? I will gladly provide more information if needed. Thanks for any help!
What you need is easier than IPython's job - IPython makes the Qt application and the command line interactive at the same time.
I think the way to do it in Qt is to use a timer which fires at regular intervals, and connect the signal to the 'slot' representing your function that gets the new image and puts it in the widget. So you're pulling it in from the event loop, rather than trying to push it.
I've not used Qt much, so I can't give specifics, but the more I think about it, the more I think that's the right way to do it.
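A minimal sketch of that timer-driven pattern, shown here in C++ Qt (the same structure maps one-to-one onto PyQt); the QWidget and the lambda body are placeholders for the question's PatternDisplay and camera code:
// Sketch: pull work from the event loop with a QTimer instead of pushing it
// from a linear script, so the widget keeps getting repainted between steps.
#include <QApplication>
#include <QTimer>
#include <QWidget>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    QWidget display;  // stand-in for the question's PatternDisplay widget
    display.show();

    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&display]() {
        // 1. grab and process the current camera image here
        // 2. change the color shown on the widget, then request a repaint:
        display.update();
    });
    timer.start(100);  // fire every 100 ms; tune to the camera's frame rate

    return app.exec();  // the event loop redraws the widget between timeouts
}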
I solved the same problem (i.e. an interactive ipython console in the terminal, and the GUI thread running independently) in the following way with ipython 0.10 (code here)
1. Construct QApplication object, but don't enter its event loop explicitly
2. Run the embedded IPython instance
3. Run the UI code you need by instantiating your window and calling show() on it (like here with yade.qt.Controller(), which I aliased to F12). (I did not find a way to ask the embedded shell to run a command at the start of the session, as if the user had typed it.)
(You can also show() your window first, then run the embedded ipython. It will provide the event loop for Qt.)
(BTW, I also tried running Qt4 from a background thread (using both the Python threads module and Qt4.QThreads), but it stubbornly refuses to run that way. Don't bother going down that path.)
The disadvantage is that the UI will be blocked while ipython is busy. I hope to find something better for 0.11, which should have much better threading facilities (I asked on ipython-users about how to unblock the UI).
HTH, v.
I've been working on a Flex component and I'd like to write some automated tests for it. The trouble is, the UI testing tools I've looked at (FlexMonkey and Selenium Flex API) don't simulate "enough":
Most of the bugs which have come up so far relate to the way Flex deals with dragging and dropping, which these libraries can't simulate accurately enough. For example, I need to test a case where there is a "drop" event which occurs in the bottom half of a component – neither FlexMonkey nor Selenium Flex API can do that (they may simulate a mouse event, but they won't include coordinates).
So, is there any "good" way to automate that sort of testing?
Edit: After much research, it looks like the only piece of software that can do this is iMacros, which is Windows-only, and the interface is... lacking. So I'm going to write my own. Basically, it will put an HTTP interface on java.awt.Robot so that code (in any language) can simulate mouse/keyboard events. If you're interested, PM me and I'll keep you updated.
Edit 2: I have published the first version of the framework I wrote, Blunderbuss, over at BitBucket: http://bitbucket.org/wolever/blunderbuss/ . You'll need Jython to run it (http://www.jython.org/), but after that the flex-client example should work.
Videos of Blunderbuss live over at Vimeo:
Automating Flex testing with Blunderbuss
Blunderbuss test suite running
At the moment this remains a proof of concept, as I haven't had the cycles to clean it up and make it more usable… But maybe enough people bothering me would give me that time :)
I've used Eggplant to test Flash and AIR apps without having to add any hooks into the code. It's a great tool but it's quite expensive. It simulates a real user by VNC-ing into a system and uses image recognition - among other things - to interact with the app.
I am definitely interested in your custom Java class, and, though I am not the best at Java (yet...), I would be willing to help out if you're thinking of making this collaborative.
As to Flash MouseEvents: unfortunately, there really isn't an accurate way to simulate the drag/drop experience in Flash. MouseEvents, when generated by the mouse, are handled very differently from regular events, and while you could simulate actions by passing events into the handling functions, or by making the dispatcher fire a new DragEvent( DragEvent.DRAG_DROP..., it will not be the same as having the user interact with it. And for some functionality (like gaining access to the clipboard), nothing inside Flash will accomplish your goals.
To be honest, you're probably headed in the right direction -- using something which is not written in Flash to drive faked mouse events is probably your best bet.
I've never had to use it in Flex, but I recently stumbled across some info on automation packages in the MS Surface SDK... after looking into it, those classes automate user behavior and can be used for testing, i.e. move a fake mouse to this point, perform this action. Since you're using Flex, look at the mx.automation packages and classes. My guess (and hope) is that you'd be able to achieve what you want using these classes.
You could also try AutoHotkey - it is similarly a macro-scripting program, but it has proven to be very efficient, and you can write scripts and set it up very easily.