I have a button widget I'd like to fade out (self.button1)
def button_slot(self):
    # The effect must be installed on the widget before it can be animated.
    fade_effect = QtGui.QGraphicsOpacityEffect()
    self.button1.setGraphicsEffect(fade_effect)
    hideAnimation = QtCore.QPropertyAnimation(fade_effect, "opacity")
    hideAnimation.setDuration(5000)
    hideAnimation.setStartValue(1.0)
    hideAnimation.setEndValue(0.0)
    hideAnimation.start(QtCore.QPropertyAnimation.DeleteWhenStopped)
    # Keep a reference so the animation isn't garbage-collected mid-run.
    self.hideAnimation = hideAnimation
The code is in PyQt, but it maps directly onto the original Qt API.
For some reason, when I try the code separately in a test file, it works well.
However, when I try to integrate it into my code, it seems like the fade-out animation is running in the background but is not reflected in the GUI itself:
- The button is stuck in the "clicked" state.
- If I minimize and restore the window, the button's opacity is right where it's supposed to be (for example, with a 5000 ms fade from 1.0 to 0.0, restoring the window after 2500 ms shows 0.5 opacity).
- The button is clickable even though it looks "stuck".
Why could this be happening? How can I force the GUI to update itself at every event iteration?
The only possible explanation I have is that you're blocking the event loop somewhere else in your code. The animation will definitely run, as your test case shows, but it's driven from the event loop. If your code blocks -- if there's any place in your code where you wait for things, sleep, etc. -- then that's your problem.
GUI code in Qt and many other frameworks must be written in run-to-completion fashion. Every slot and event handler must execute as quickly as it can, and then return. When you add a breakpoint in a slot, and look at the stack trace when the code stops, you'll see that QEventLoop::exec() is somewhere there. Ultimately, all GUI code is called from the event loop.
Try reducing your code piecewise until the problem vanishes; that's how you'll find the blocking part. Qt unfortunately provides many methods named waitXxx(), and they tend to be used without understanding that they block the event loop. A blocked event loop means the application does not respond to user interaction; eventually the OS will detect this and show a spinning beachball (OS X), a spinning circle (Vista/Win7), or a message about a stuck application -- all of which mean the application's main event loop is blocked.
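To make the failure mode concrete, here is a minimal sketch (function names are illustrative, and it assumes the blocker is something like a sleep) of the difference between a blocking slot and an event-loop-friendly one:

import time
from PyQt4 import QtCore  # the QtCore module path is the same in PyQt5

def blocking_slot():
    # Anti-pattern: nothing repaints and no animation advances
    # until this slot returns, two seconds from now.
    time.sleep(2)
    finish_work()

def non_blocking_slot():
    # Event-loop friendly: return immediately and let a timer,
    # fired from the event loop, run the continuation later.
    QtCore.QTimer.singleShot(2000, finish_work)

def finish_work():
    print("continued without ever blocking the GUI")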
Related
I have a few animations being fired from a mouseenter event. They work just fine the first time around, but when I attempt to fire the animation again it gets to the beginAnimation function in the aframe-animation-component, which means it should be emitting the animationbegin event -- but at that point nothing happens, and the complete function in the config object never runs the way it does when the animation runs fine the first time. I'm trying to figure out which parts I should look at to debug this, but I'm having a hard time. Thanks! Also, I'm using the newest version off the kframe repo.
I have a normal Flex app that allows you to enter a code. When you enter this code, it pops the code into a queue. I then have a process in the background that reads the queue, takes each code, sends it to a web service, and responds with a result. Based on the result, the screen is then updated. The problem is, I want the user to be able to keep entering codes without the annoying clock showing up and stopping them from doing anything until the process is done.
So...
Is there a way to run a background process on Flex?
How do you do it, what's it called?
Close the native window but let the Application run in the background
NativeApplication.nativeApplication.autoExit = false;
NativeWindow(this.stage.nativeWindow).close();
Something seems to have changed in Qt 5: you can't get a drop or move event if you don't move at least one pixel from the point where the cursor was when QDrag::exec() was called. Try putting a breakpoint in the dropEvent of the Draggable Icons example, then click a boat and release it without moving the mouse. That generates an "ignore" without any drop signal.
(This is on Kubuntu 13.10 with Qt 5.1.)
When teaching how to start a drag operation, the documentation suggests you might use manhattanLength() to determine whether the mouse has moved enough to really qualify as "the user intending to start a drag". But you don't have to use that; you can start up a QDrag on the click itself.
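For reference, the documented drag-start pattern looks roughly like this in PyQt terms (a sketch; the widget choice and payload are illustrative):

from PyQt4 import QtCore, QtGui

class DragSource(QtGui.QLabel):
    # Starts a drag only once the cursor has traveled startDragDistance().
    def mousePressEvent(self, event):
        if event.button() == QtCore.Qt.LeftButton:
            self._press_pos = event.pos()

    def mouseMoveEvent(self, event):
        if not (event.buttons() & QtCore.Qt.LeftButton):
            return
        moved = (event.pos() - self._press_pos).manhattanLength()
        if moved < QtGui.QApplication.startDragDistance():
            return  # not far enough to count as an intended drag
        drag = QtGui.QDrag(self)
        mime = QtCore.QMimeData()
        mime.setText(self.text())  # illustrative payload
        drag.setMimeData(mime)
        drag.exec_()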
Anyone know of a workaround to have that same kind of choice on the drop side, or is that choice gone completely? :-/
Why I care: I've long had frustrations trying to get tight control over mouse behavior in GUI apps, Qt included. There seems to be no trustworthy state-transition diagram of the invariants you can draw. It's a house of cards you can disprove very easily with simple tests like:
// Sanity check: enter and leave "should" strictly alternate...
virtual void enterEvent(QEvent * event) {
    Q_ASSERT(!_entered);  // fires: consecutive enterEvents do happen
    _entered = true;
}
virtual void leaveEvent(QEvent * event) {
    Q_ASSERT(_entered);   // fires: consecutive leaveEvents do happen
    _entered = false;
}
This breaks all kinds of ways, and how it breaks depends on the platform. (For the moment I'll talk about Kubuntu 13.10 with Qt 5.1.) If you press the mouse button and drag out of the widget, you'll receive a leaveEvent when you cross the boundary...and then another leaveEvent when the button is released. If you leave the window and activate another app in a window on screen and then click inside the widget to reactivate the Qt app, you'll get two consecutive enterEvents.
Repeat this pattern for every mouse event, and try to get a solid hold on the invariants... good luck! Nailing these down into a bulletproof app that "knows" its state and doesn't fall apart (especially in the face of wild clicking and alt-Tabbing) is a bit of a lost cause.
This isn't good if your program does allocations and heavy processing, and doesn't want to do a lot of sweeping under the rug (e.g. "Oh, I was doing some processing in response to being entered... but I just got entered again without a leave. Hm, I guess that happens! Throw the current calculations away and start again...").
In the past what I've done is to handle all my mouse operations (even simple clicking) with drag & drop. Getting the OS drag & drop facility involved in the operation tended to produce a more robust experience. I can only presume this is because the testers actually had to consider things like task switching with alt-Tab, etc. and not cause multiple drop operations or just forget that an operation had been started.
But the "baked in at a level deeper than the framework" aspect actually makes this one-pixel-move requirement impossible to change. I tried to hack around it by setting a timer event, then faking a QMouseEvent to bump the cursor to a new position once the drag was in effect. However, I surmise that the drag and drop is hooked in at the platform level, and doesn't consult the ordinary Qt event queue: src/plugins/platforms/xcb/qxcbdrag.cpp
The issue has--as of 1-May-2014--been acknowledged as a bug by the Qt team:
https://bugreports.qt-project.org/browse/QTBUG-34331
It seems that my bountying it here finally brought it to their attention, though it did not generate any SO answers I could accept to finalize the issue. So I'm writing and accepting my own. Good work, me. (?) Sorry for not having a better answer. :-/
There is another unfortunate side effect of the Qt 5 change, pointed out by Dmitry Mordvinov:
Same problem here. Additionally, app events are not handled till the first mouse event after the drag starts, and this is a really nasty bug. For example, all app animations are suspended during that moment, or the application hangs when you try to drag with a touch monitor.
@dvvrd had to work around it, but felt the workaround was too ugly to share. So it seems that if you're affected by the problem, the right thing to do is to go weigh in and add your voice on the issue tracker, to perhaps raise the priority of a solution.
(Or even better: patch it and submit the patch. 'tis open source, after all...)
I am trying to implement a scenario where two Qt windows are placed side by side and are kind of sticky to each other: dragging one of them also drags the other. Even when alt-Tabbing, they should behave like a single window.
Any help or pointer will be extremely helpful.
-Soumya
What you describe sounds like a good fit for a "docking" scenario. You're probably most familiar with docking from toolbars, where you can either float a toolbar on its own or stick it to any edge of an app's window. But Qt has a more generalized mechanism:
http://doc.qt.io/qt-5/qtwidgets-mainwindows-dockwidgets-example.html
http://doc.qt.io/qt-5/qdockwidget.html
It won't be a case where multiple top level windows are moved around in sync with their own title bars and such. The top-level windows will be merged into a single containing window when they need to get "sticky". But IMO this is more elegant for almost any situation, and provides the properties you seem to be seeking.
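A minimal sketch of the dock-widget approach in PyQt (the content widgets here are just placeholders):

from PyQt4 import QtCore, QtGui

app = QtGui.QApplication([])
main = QtGui.QMainWindow()
main.setCentralWidget(QtGui.QTextEdit())

dock = QtGui.QDockWidget("Sticky panel", main)
dock.setWidget(QtGui.QListWidget())  # any content widget goes here
main.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
# The user can now float the panel off or re-stick it to any edge.

main.show()
app.exec_()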
Install an event filter on the tracked window with QObject::installEventFilter() and filter on QEvent::Move.
You can then change the position of the tracking window whenever your filter is called with that event type.
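A minimal sketch of that filter (the class name and offset are my own):

from PyQt4 import QtCore

class MoveFollower(QtCore.QObject):
    # Keeps `follower` at a fixed offset from the widget it filters.
    def __init__(self, follower, offset):
        super(MoveFollower, self).__init__(follower)
        self._follower = follower
        self._offset = offset

    def eventFilter(self, watched, event):
        if event.type() == QtCore.QEvent.Move:
            # `watched` is the tracked top-level widget being moved.
            self._follower.move(watched.pos() + self._offset)
        return False  # let the event propagate normally

# Usage:
# tracked.installEventFilter(MoveFollower(tracking, QtCore.QPoint(400, 0)))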
I found a way to keep two windows anchored: when the user moves a window, the other follows, keeping its relative position to the moved one.
It is a bit of a hack, because it assumes that QEvent::NonClientAreaMouseButtonPress is sent when the user left-clicks the title bar and holds it pressed while moving the window, and that QEvent::NonClientAreaMouseButtonRelease is sent when the button is released at the end.
The idea is to use the QWidget::moveEvent event handler of each window to update the geometry of the other, using QWidget::setGeometry.
But the documentation states that:
Calling setGeometry() inside resizeEvent() or moveEvent() can lead to infinite recursion.
So I needed to prevent the moveEvent handler of the windows which was not moved directly by the user, to update the geometry of the other.
I achieved this via QObject::installEventFilter, intercepting the above-mentioned events.
When the user clicks on the title bar of WindowOne to start a move operation, WindowOne::eventFilter catches its QEvent::NonClientAreaMouseButtonPress and sets the public attribute WindowTwo::skipevent_two to true.
While the user is moving WindowOne, WindowTwo::moveEvent is called by the setGeometry operation performed on WindowTwo from WindowOne::moveEvent.
WindowTwo::moveEvent checks WindowTwo::skipevent_two, and if it is true, returns without performing a setGeometry operation on WindowOne which would cause infinite recursion.
As soon as the user releases the left mouse button, ending the window move operation, WindowOne::eventFilter catches QEvent::NonClientAreaMouseButtonRelease and sets back the public attribute WindowTwo::skipevent_two to false.
The same actions are performed if the user clicks the title bar of WindowTwo, this time causing WindowOne::skipevent_one to be set to true, which prevents WindowOne::moveEvent from performing any setGeometry operation on WindowTwo.
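Condensed into a sketch (a symmetric Python version with illustrative names; move() stands in for the setGeometry() call, plus a position check as an extra recursion guard):

from PyQt4 import QtCore, QtGui

class AnchoredWindow(QtGui.QWidget):
    # One of a pair of windows that drag each other around.
    def __init__(self, offset):
        super(AnchoredWindow, self).__init__()
        self.partner = None      # set after both windows exist
        self.skip_move = False   # True while the partner is driving us
        self._offset = offset    # partner position = ours + offset
        self.installEventFilter(self)

    def eventFilter(self, watched, event):
        # A title-bar press/release brackets a user-driven move.
        if event.type() == QtCore.QEvent.NonClientAreaMouseButtonPress:
            self.partner.skip_move = True
        elif event.type() == QtCore.QEvent.NonClientAreaMouseButtonRelease:
            self.partner.skip_move = False
        return False

    def moveEvent(self, event):
        if self.skip_move or self.partner is None:
            return  # being dragged along by the partner; don't recurse
        target = self.pos() + self._offset
        if self.partner.pos() != target:
            self.partner.move(target)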
I believe this solution is far from clean and usable. Some problems:
- I am not sure when and why QEvent::NonClientAreaMouseButtonPress and QEvent::NonClientAreaMouseButtonRelease are dispatched, apart from the case considered above.
- When/if one window is resized without user interaction, or without the proper mouse clicks from the user, everything will probably go down the infinite-recursion path.
- There is no guarantee that those mouse events will be dispatched the same way in the future.
- Free space for more...
Proof of concept:
https://github.com/Shub77/DockedWindows
Is there a way to force a JavaFX app to repaint itself before proceeding? Similar to a Swing panel's paint(Graphics g) method (I might be getting the keywords wrong there).
Consider the following example: you write a TicTacToe app along with the AI required for a computer player. You would like the ability to show two computer players duke it out. Maybe you put in a two-second pause between computer turns to give it a lifelike effect. When you hit your "Go" button, there's a long pause of unresponsiveness (the time it takes for the nine turns to go by, with faked pauses for the computer to 'decide'), and then suddenly the app's visuals are updated with the completed game's state.
It seems like JavaFX repaints only once processing on the app's thread is finished? I'm not completely sure here.
Thanks!
You are right. JavaFX is event-driven and single-threaded, which means that repainting and event handling cannot happen simultaneously. Long-running tasks should be executed on a separate thread so they do not block the rendering of the UI. When the task is finished, it can sync back to the FX thread by scheduling its follow-up there (Platform.runLater() in current JavaFX; the old JavaFX Script API called this FX.deferAction()), which simply executes the code on the main thread.
This won't be the most helpful answer, as I have toyed around with JavaFX for all of half a day, but wouldn't you use Timelines, KeyFrames, and binding to accomplish your repaints instead of calling them explicitly as you have described?
See this tutorial for an example.
JavaFX's model separates you from the painting of the "stuff" on the screen. This is very powerful, but it is a change from what you might be familiar with.
whaley is correct that the appropriate way of doing this in JavaFX is to build a Timeline in which a move is made every X seconds and is drawn at that KeyFrame.
If you have a question about how to do this, try it and make a new question with some code.