While trying to add D&D support to a GNOME Shell extension that I'm writing, I ran into a bit of trouble. I can create drop targets on any open window, but that's all I've managed to pull off.
I can't differentiate between the windows. I tried to use global.get_stage().get_actor_at_pos(Clutter.PickMode.ALL, x, y).get_parent().get_parent().get_meta_window().get_wm_class(), but half the time it gives me the wrong window and every now and again it just returns null. Also I'm not sure how to drop the information into the target.
All I'm trying to do is drop a file URI into a browser window or the file into a file manager.
Is this even possible in a GNOME Shell extension, and how would I pull it off? Any advice would be welcome!
Here is the shell code currently available for DND between windows: https://github.com/GNOME/gnome-shell/blob/master/js/ui/xdndHandler.js You can do practically nothing with it.
In Mutter there is more than one code path to handle a drag and drop in a window, because there is one implementation for X11 windows and another implementation for Wayland windows.
To be honest, I don't know whether there is a way to do it on Wayland, or what it would look like.
I can tell you that in GNOME Shell (Mutter, to be specific) there is no full implementation of this ability on X11. At most you can know whether a drag and drop from a window onto the shell is in progress and the position of the dragged actor, but the shell doesn't provide any API to create a drag and drop from the shell to a particular window.
The drag and drop that you can fully use is only the internal one, from and to the shell itself (Clutter actors only), not an external one between different windows.
In X11, the drag and drop process happens between windows only. One window provides the dragged object and the information associated with that object. Another window (which could be the same one) accepts the drop of the object, taking into account the information the first window provided.
Since there is no way in the shell to set the required information on the target window, since your GUI lives inside one big top-level window (the window that represents the shell's global stage: https://github.com/GNOME/mutter/blob/6c18bae83cd27a7397a1ed0c1c0c81b282f1b44e/src/compositor/meta-dnd.c#L152), and since you don't have access to this big internal window, in the end you cannot do anything to interact directly with other windows.
Here (https://github.com/swayfreeda/blender-2.77a/tree/5969d704f44952ea8cbecba2ba4150fb4a48e6de/extern/xdnd) you can find a full implementation of drag and drop on X11. You would need to adapt the code to the Mutter workflow and then add it to Mutter. After that you would have the support, but you would still need to create the corresponding procedures to invoke the functionality, provide the information and receive useful events between the shell and the window, so that you could finally control it from GJS. And this would only cover X11, not Wayland; I suppose you would need to do something similar if you want support on Wayland.
Good luck.
Related
I am trying to show progress in the taskbar of the Plasma desktop using KDE Frameworks. In short, I want to do the same thing Dolphin does when it copies files:
I'm kinda stuck, because I don't even know where to get started. The only thing I found that could be useful is KStatusBarJobTracker, but I don't know how to use it. I could not find any tutorials or examples of how to do this.
So, after digging around, and thanks to the help of #leinir, I was able to find out the following:
Since Plasma 5.6, KDE supports the Unity DBus Launcher API, which can be used, for example, to show progress.
I found a post on AskUbuntu that explains how to use the API with Qt.
The real problem is: this only works if you have a valid desktop file in one of the standard locations! You need to pass the file as a parameter of the DBus message to make it work.
Based on this information, I figured out how to use it and created a GitHub repository that supports cross-platform taskbar progress and uses this API for the Linux implementation.
However, here is how to do it anyway. It should work for KDE Plasma and the Unity desktop, maybe more (I haven't tried any others):
Create a .desktop file for your application. For testing purposes, this can be a "dummy" file that could look like this:
[Desktop Entry]
Type=Application
Version=1.1
Name=MyApp
Exec=<path_to>/MyApp
Copy that file to ~/.local/share/applications/ (or wherever user-specific desktop files go on your system).
In your code, all you need to do is execute the following to update the taskbar state:
auto message = QDBusMessage::createSignal(QStringLiteral("/com/example/MyApp"),
                                          QStringLiteral("com.canonical.Unity.LauncherEntry"),
                                          QStringLiteral("Update"));

// You don't always have to specify all parameters, just the ones you want to update.
QVariantMap properties;
properties.insert(QStringLiteral("progress-visible"), true);  // enable the progress display
properties.insert(QStringLiteral("progress"), 0.5);           // set the progress value (from 0.0 to 1.0)
properties.insert(QStringLiteral("count-visible"), true);     // display the "counter badge"
properties.insert(QStringLiteral("count"), 42);               // set the counter value

message << QStringLiteral("application://myapp.desktop")      // assuming you named the desktop file "myapp.desktop"
        << properties;
QDBusConnection::sessionBus().send(message);
Compile and run your application. You don't have to start it via the desktop file; at least I did not need to. If you want to be sure your application is "connected" to that desktop file, just set a custom icon for the file. Your application should then show that icon in the taskbar.
And that's basically it. Note: the system remembers the last state when the application is restarted, so you should reset all those parameters once when starting the application (by sending the same Update signal with the default values).
Right, so as it turns out you are correct: there is currently no tutorial for this. This ReviewBoard request, however, shows how it was implemented in KDevelop, and you should be able to work it out from that :) https://git.reviewboard.kde.org/r/127050/
PS: the fact that there is no tutorial right now might be a nice opportunity for you to hop in and help out by writing a small, self-contained tutorial for it... something I'm sure would be very much welcomed :)
In my app, I have a project feature, but it needs a chain of dialogs to work.
At startup, the user must either open an existing project or create a new one, and when creating a new project, the user must specify a folder.
So there is a first dialog for the choice between a new or an existing project, and another one opens to select a folder in the case of a new project.
Right now, I call the exec_() method on the first one and do everything inside it (creating the second dialog, using it, etc.). The direct consequence: it is messy, as it relies on side effects.
So the question is: is it possible to cleanly chain dialogs in Qt?
Take a look at the QWizard class:
A wizard (also called an assistant on Mac OS X) is a special type of
input dialog that consists of a sequence of pages. A wizard's purpose
is to guide the user through a process step by step. Wizards are
useful for complex or infrequent tasks that users may find difficult
to learn.
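For illustration, here is a minimal sketch of how the two steps from the question could become wizard pages (PyQt5 is assumed here, since the question uses exec_(); the class and field names are made up):

import sys
from PyQt5.QtWidgets import (QApplication, QDialog, QFileDialog, QLineEdit,
                             QPushButton, QRadioButton, QVBoxLayout, QWizard,
                             QWizardPage)

class ChoicePage(QWizardPage):
    def __init__(self):
        super().__init__()
        self.setTitle("Open or create a project")
        self.new_project = QRadioButton("Create a new project")
        self.existing_project = QRadioButton("Open an existing project")
        self.new_project.setChecked(True)
        layout = QVBoxLayout(self)
        layout.addWidget(self.new_project)
        layout.addWidget(self.existing_project)
        self.registerField("newProject", self.new_project)  # expose the choice

    def nextId(self):
        # Only go to the folder page when a new project was requested.
        return 1 if self.new_project.isChecked() else -1

class FolderPage(QWizardPage):
    def __init__(self):
        super().__init__()
        self.setTitle("Choose a folder for the new project")
        self.folder = QLineEdit()
        browse = QPushButton("Browse...")
        browse.clicked.connect(self.pick_folder)
        layout = QVBoxLayout(self)
        layout.addWidget(self.folder)
        layout.addWidget(browse)
        self.registerField("projectFolder*", self.folder)  # '*' = mandatory

    def pick_folder(self):
        path = QFileDialog.getExistingDirectory(self, "Project folder")
        if path:
            self.folder.setText(path)

app = QApplication(sys.argv)
wizard = QWizard()
wizard.setWindowTitle("New project")
wizard.setPage(0, ChoicePage())
wizard.setPage(1, FolderPage())
if wizard.exec_() == QDialog.Accepted:
    print(wizard.field("newProject"), wizard.field("projectFolder"))

After exec_() returns, wizard.field() gives you the user's choices, so the caller never has to reach into the individual pages.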
Sounds to me like "state machine" is your friend.
http://www.drdobbs.com/cpp/state-machine-design-in-c/184401236
https://en.wikipedia.org/wiki/Automata-based_programming
In your case, you'd start your state machine from an initial state that runs dialog 1.
You run dialog 1, and when it returns from exec(), determine the new state and update your machine accordingly.
Then run the appropriate dialog for the new state. And so on, until you get to a state that is the end of your dialog chain.
This allows you to have the flexibility in your dialog chain where the next dialog is conditional to what the user selects in the previous dialog, while keeping the logic of the states outside of the dialogs and in a central location.
It's basically a switch statement in a while loop, but a very useful one for managing non-linear / conditional flow in your program.
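For illustration, a rough sketch of such a loop (PyQt5 is assumed here, since the question uses exec_(); QMessageBox and QFileDialog stand in for the questioner's actual dialogs):

import sys
from enum import Enum, auto
from PyQt5.QtWidgets import QApplication, QFileDialog, QMessageBox

class State(Enum):
    CHOOSE = auto()        # new project or existing one?
    PICK_FOLDER = auto()   # new project: ask for a folder
    DONE = auto()

def run_project_flow():
    state, folder = State.CHOOSE, None
    while state != State.DONE:
        if state == State.CHOOSE:
            # QMessageBox stands in for the questioner's first dialog.
            answer = QMessageBox.question(None, "Project", "Create a new project?",
                                          QMessageBox.Yes | QMessageBox.No | QMessageBox.Cancel)
            if answer == QMessageBox.Cancel:
                return None                                   # user abandoned the flow
            state = State.PICK_FOLDER if answer == QMessageBox.Yes else State.DONE
        elif state == State.PICK_FOLDER:
            folder = QFileDialog.getExistingDirectory(None, "Project folder")
            state = State.DONE if folder else State.CHOOSE    # empty result: go back a step
    return folder

app = QApplication(sys.argv)
print(run_project_flow())

Each dialog only reports what the user picked; which dialog comes next is decided in one place, which is exactly the point of the state machine.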
Hope this helps.
When opening HP QuickTest Professional, we get the Add-in Manager window to select the add-ins we want.
Does selecting an add-in there only affect how objects are identified in the window we try to automate, or are the available commands (Click, DoubleClick, etc.) also arranged according to that selection?
e.g. with only the WEB add-in selected in the manager, I can try to scroll the browser pane up:
window("abc").Scrollup
but with only the Java add-in selected in the manager, I am not able to find that Scrollup.
Do the commands depend on which add-in we selected in the Add-in Manager, or are they generic QTP commands?
It is not really clear what you are asking.
Click is a test object's method.
The object repository (OR) contains test objects, or at least info that allows QTP to create (instantiate) test objects when you reference them using TestobjectClass ("TOName").
During recording, QTP creates playback statements (calls to test object methods) that reference test objects in the OR, and creates those test objects there.
What kind (class) of test object it creates is indeed determined by the currently active add-ins.
For example, if you record a Java application but deactivate the Java add-in, you won't see Java objects in your OR after recording.
That means .Click calls might still be recorded, but for lower-level objects, like Window.
The layout in the OR is just the parent/child relationship (in a simplified way, since the OR usually has fewer hierarchy levels than the GUI -- a listbox in a groupbox in a groupbox in a tab in a frame in a dialog is stored as a listbox in a dialog).
I'm implementing a drag and drop interface with Qt across X11 and Windows. The interface handles events such that it is not illegal for a user to drop a dragged object on an area which can't handle drops.
In this case, Qt::IgnoreAction should therefore not be treated as an incorrect potential action. To communicate this fact to the user I need a way to stop Qt::ForbiddenCursor from displaying if the current Qt::DropAction is Qt::IgnoreAction.
There are three ways I can see to achieve this (in order of preference):
To override the QCursor used for a drag with Qt::IgnoreAction to something other than Qt::ForbiddenCursor.
To override the bitmap used for Qt::ForbiddenCursor. This is pretty dirty but would be an acceptable solution as long as I don't have to delve into OS-specific configuration.
To override the call made by Qt when a drag leaves a valid drop area (I assume that Qt does the equivalent of QDropEvent::setDropAction(Qt::IgnoreAction) in this case).
Could anyone suggest ways to achieve any of the above?
Note: I have also attempted to use QApplication::setOverrideCursor() just before calling QDrag::exec(). This doesn't seem to have any effect.
Check whether the QDragEnterEvent comes to the application itself (install an event filter on the QApplication object). If it does, simply accept it and the cursor will appear normal.
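For illustration, a sketch of that idea (written with PyQt5 for brevity; the C++ version installs the same kind of filter on the QApplication instance):

from PyQt5.QtCore import QEvent, QObject
from PyQt5.QtWidgets import QApplication

class DragEnterFilter(QObject):
    # Accept drag-enter events that are delivered to the application object
    # itself, so the drag does not fall back to the 'forbidden' cursor.
    def eventFilter(self, obj, event):
        if obj is QApplication.instance() and event.type() == QEvent.DragEnter:
            event.accept()
            return True
        return super().eventFilter(obj, event)

# Installation, e.g. right after creating the application:
# app = QApplication(sys.argv)
# drag_filter = DragEnterFilter(app)   # parent it to the app so it stays alive
# app.installEventFilter(drag_filter)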
My scenario here is the following: I am using a pyqt widget to display a solid color fullscreen on a second display and observe this display with a camera that is continuously capturing images. I do some processing with the images and this is the data I am interested in. This works great when used interactively with ipython and matplotlib using the qt4agg backend like so
% ipython -pylab
# ... import PatternDisplay, starting camera
pd = PatternDisplay(); pd.show(); pd.showColor(r=255,g=255,b=255)
imshow(cam.current_image)
I now need a similar behavior in a console script though: it should display the PatternDisplay widget, capture an image, then change the color on the PatternDisplay, take a new image, and so on.
The problem is that the PatternDisplay is never updated/redrawn in my script, likely because PyQt never gets a chance to run its event queue. I had no luck trying to move the linear worker part of my script into a QThread, because then I can no longer communicate with the PatternDisplay widget from the other thread. I tried to replicate the implementation of ipython/matplotlib, but I didn't fully understand it; it is quite complicated: it avoids running the QApplication main loop via monkey patching and somehow moves Qt into its own thread. It then checks periodically, using a QTimer, whether a new command was entered by the user.
Isn't there an easy way to achieve what I want to do? I am gladly providing more information if needed. Thanks for any help!
What you need is easier than IPython's job - IPython makes the Qt application and the command line interactive at the same time.
I think the way to do it in Qt is to use a timer which fires at regular intervals, and connect the signal to the 'slot' representing your function that gets the new image and puts it in the widget. So you're pulling it in from the event loop, rather than trying to push it.
I've not used Qt much, so I can't give specifics, but the more I think about it, the more I think that's the right way to do it.
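For illustration, a minimal sketch of that timer-driven approach (PyQt5 is shown here; PatternDisplay, showColor() and cam.current_image are taken from the question, while process_image() is a placeholder for the questioner's processing step):

import sys
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)

pd = PatternDisplay()                      # the widget from the question
pd.show()

colors = [(255, 255, 255), (255, 0, 0), (0, 255, 0)]   # patterns to cycle through
step = 0

def next_step():
    global step
    if step > 0:
        # The previous color has had time to be painted by now,
        # so grab and process the frame that shows it.
        process_image(cam.current_image)   # placeholder for the real processing
    if step >= len(colors):
        app.quit()                         # all patterns done: leave the event loop
        return
    r, g, b = colors[step]
    pd.showColor(r=r, g=g, b=b)            # show the next pattern
    step += 1

timer = QTimer()
timer.timeout.connect(next_step)           # the Qt event loop calls next_step...
timer.start(200)                           # ...every 200 ms and repaints in between
sys.exit(app.exec_())

Because the captures happen from inside the event loop, Qt gets a chance to repaint the PatternDisplay between steps, which is exactly what the plain sequential script was missing.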
I solved the same problem (i.e. interactive ipython console in terminal, and GUI thread running independently) in the following way with ipython 0.10 (code here)
1. Construct the QApplication object, but don't enter its event loop explicitly.
2. Run the embedded IPython instance.
3. Run the UI code you need by instantiating your window and calling show() on it (like here with yade.qt.Controller(), which I aliased to F12). (I did not find a way to ask the embedded shell to run a command at the start of the session, as if the user had typed it.)
(You can also show() your window first, then run the embedded IPython; it will provide the event loop for Qt.)
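For illustration, a minimal sketch of that second ordering with a present-day IPython (the 0.10 API used above was different, so treat this as an approximation; whether the prompt also services the Qt event loop depends on enabling the Qt input hook, for example with %gui qt):

import sys
from PyQt5.QtWidgets import QApplication, QLabel
from IPython import embed

app = QApplication(sys.argv)           # 1. construct QApplication, but don't call exec_()

window = QLabel("GUI placeholder")     # any widget will do for the demonstration
window.show()                          # 2. show the window first...

# 3. ...then drop into an interactive shell in the terminal.
# Running '%gui qt' at this prompt asks IPython to pump the Qt event loop
# while it waits for input, so the window keeps repainting.
embed()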
(BTW, I also tried running Qt4 from a background thread (using both Python threads and Qt4's QThread), but it stubbornly refuses to run that way. Don't bother going down that path.)
The disadvantage is that the UI will be blocked while IPython is busy. I hope to find something better for 0.11, which should have much better threading facilities (I asked on ipython-users about how to unblock the UI).
HTH, v.