Looking through the documentation, it seems that the new advanced gestures API doesn't determine the direction of a swipe beyond the basic { left, right, up, down }.
I need the start point of the swipe and the direction.
Is there any way to retrieve this other than coding my own advanced gesture library from scratch on top of the basic gestures?
And if this is my only option, could anyone point me to some open source code that does this?
Got it! The documentation is here, under 'Creating Custom Gesture Recognizers' at the bottom.
Basically the six gestures Apple provides all derive from UIGestureRecognizer, and you can make your own gesture recogniser in the same way.
Then, inside your view's init, you hook up your recogniser, and just the act of hooking it up automatically reroutes incoming touch events.
Actually, the default behaviour is to make your recogniser an observer of these events, which means your view gets them as it used to; in addition, if your recogniser spots a gesture, it triggers the myCustomEventHandler method inside your view (you passed its selector when you hooked up the recogniser).
But sometimes you want to prevent the original touch events from reaching the view, and you can arrange that inside your recogniser, so it's a bit misleading to think of it as an 'observer'.
There is one other scenario, where one gesture needs to eat another. For example, you can't just report a single tap if your view is also primed to receive double taps: you have to wait for the double-tap recogniser to report failure, and if it succeeds, you need to fail the single tap -- obviously you don't want to report both!
Related
I'm looking for the most concise way to deal with focus in an application which renders a map in a canvas component. You can pan the map location using arrow keys or ASWD keys. So far, I've been giving the canvas focus at startup and handling key pressed events via canvas.setOnKeyPressed().
This works fine, but I've always known that a problem was on the horizon when other components enter the picture. Once you interact with another component, it gains focus, and you're unable to scroll around the canvas map. I can prevent this from happening with some components, like Hyperlinks or Buttons (I don't need tab-navigation), with something like this:
sidePanel.getChildren().forEach(node -> node.setFocusTraversable(false));
But, when we get to things like TextArea or TextField, those do need to hold focus while they're being edited. And I'll need some way to return focus back (or at least unfocus those components) without being an annoyance to the user. (I don't want to have to click the canvas for it to regain focus.)
The options I see for returning focus back to the canvas after the user is done with those fields seem to be:
1. Add a key handler (e.g. an ESC or ENTER keypress) on EACH of these components which returns focus to the canvas. Not very concise, and a bit of a pain; it also feels fragile: if I miss a component and the canvas loses focus, it fails, and I need a 100% reliable solution.

2. Extend each of these components and add similar code to return focus to the canvas. Even nastier, and it requires using custom components in Scene Builder, which is not ideal.

3. Add a global event handler on the Scene and forward events to the controller which owns the canvas. I believe an event filter would accomplish this, but on the other hand, if the user is simply using arrow keys to move around a TextArea, I wouldn't want the canvas map to move! To solve that, perhaps the global event handler could ignore ASWD and arrow keypresses while focus is on certain types of components. Is this worth trying, or am I overlooking a problem it would create?
Are there any other simple options out there that I've missed - and what would you suggest as the best option here? I'd like an automatic solution that doesn't require remembering to add some workaround code every time a UI component is added.
I'm implementing a basic shape drawing tool using a custom subclass of Qt's QGraphicsScene and several QGraphicsItem subclasses. Now there are several situations where I don't want any "global" actions to be executed:
For example, while the user is dragging items around, they should not be allowed to create a new file or undo the last action (by pressing Ctrl-Z, for example), since this would lead to several problems that would have to be handled separately. (If the user is currently drawing an edge between two nodes, what should happen when Ctrl-Z is pressed and the last recorded action is the creation of the first node?)
I noticed that several commercial applications like Microsoft Word and Adobe Photoshop just seem to ignore any usual keyboard shortcuts while being in such an "intermediate" state. Furthermore, when dragging items out of the viewport, these tools display a "forbidden" cursor and do not allow any mouse press events to reach the outer window (like a right click on the toolbar, for example).
How should I implement this in my case, when using QGraphicsScene? I already tried to add the following override:
void MyGraphicsScene::keyPressEvent(QKeyEvent* keyEvent)
{
    keyEvent->accept(); // swallow the key so it is not propagated any further
}
But any pressed keys were still delivered to the main window. In addition, I'm not sure whether filtering just keyboard events is safe enough, since there might be other input events that could trigger forbidden actions.
Is there any generic approach to this problem that I could use in my software?
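For reference, the most generic thing I can imagine is an application-wide event filter along these lines. This is only a sketch, and the interactionInProgress flag is hypothetical (something would have to set it while a drag or edge-drawing operation is active); I don't know whether it is safe or complete:

#include <QApplication>
#include <QEvent>
#include <QObject>

// Sketch: swallow all keyboard input application-wide while an
// interaction is in progress, so no QAction shortcut can fire.
class InteractionGuard : public QObject
{
public:
    explicit InteractionGuard(QObject *parent = nullptr) : QObject(parent) {}

    bool interactionInProgress = false; // hypothetical flag, set by the scene

protected:
    bool eventFilter(QObject *watched, QEvent *event) override
    {
        if (interactionInProgress) {
            switch (event->type()) {
            case QEvent::ShortcutOverride: // claim the key before shortcuts match
            case QEvent::KeyPress:
            case QEvent::KeyRelease:
                event->accept();
                return true; // stop delivery to the scene, views, and main window
            default:
                break;
            }
        }
        return QObject::eventFilter(watched, event);
    }
};

// usage: auto *guard = new InteractionGuard(qApp);
//        qApp->installEventFilter(guard);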
From what I can see, QApplication::mouseButtons() may return no buttons even when a button is held down. This happens when you have clicked the edge of a window to resize it. That is consistent with the docs, because mouseButtons() reflects the state accumulated from the flow of QEvent::MouseButtonPress events and the like. However, I just need to know whether the button is held down. Does anyone know if this is possible through the Qt API?
I think it's not possible. Mouse events outside an application's window are not passed to its event handlers; dragging a window's border is one such case, and it's processed by the window system. Another example is clicking on other windows: usually an application doesn't know what the user does with other windows. You would need to install a system-wide event listener or use native API features (e.g. GetAsyncKeyState on Windows) to determine that. Such behavior is unusual and possibly dangerous; in most cases it's not useful, and it seems that Qt doesn't have this ability.
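For illustration, a hedged sketch of the native route on Windows might look like this (GetAsyncKeyState and VK_LBUTTON are Win32 API, not Qt):

#include <QtGlobal>

#ifdef Q_OS_WIN
#include <windows.h>

// Poll the asynchronous button state directly, instead of relying on
// mouse events that Qt has itself received and tracked.
bool isLeftMouseButtonPhysicallyDown()
{
    // The high-order bit is set while the button is currently held down.
    return (GetAsyncKeyState(VK_LBUTTON) & 0x8000) != 0;
}
#endif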
I'd like to create a semi-transparent information window that doesn't get in the way of the user's other activities. Any clicks on the window should just pass through as if the window wasn't there.
How would you recommend implementing such behavior? Is there an easy way to do it, or do I have to resort to a clumsy workaround? I'm thinking of hiding the window, re-executing the click, then making the window visible again, but that would still break drag-and-drop gestures.
Take a look at the Qt::WidgetAttribute enum value Qt::WA_TransparentForMouseEvents:
When enabled, this attribute disables the delivery of mouse events to the widget and its children. Mouse events are delivered to other widgets as if the widget and its children were not present in the widget hierarchy; mouse clicks and other events effectively "pass through" them. This attribute is disabled by default.
I did a little more research into "mouse event transparency" (I didn't know the exact terminology) and I found this.
I don't think there is a general and easy approach to your problem. You will probably have to dig into the native API. Once events reach an application they are not forwarded to other applications on their own.
What do you guys think? Am I doomed to work with the native APIs of each OS?
I am trying to implement a scenario where two Qt windows are placed side by side and stick to each other: dragging one of them drags the other along, and even when alt-tabbing they should behave like a single window.
Any help or pointer will be extremely helpful.
-Soumya
What you describe sounds like a good fit for a "docking" scenario. You're probably most familiar with docking from toolbars, where you can either float a toolbar on its own or stick it to any edge of an app's window. But Qt has a more generalized mechanism:
http://doc.qt.io/qt-5/qtwidgets-mainwindows-dockwidgets-example.html
http://doc.qt.io/qt-5/qdockwidget.html
It won't be a case where multiple top-level windows are moved around in sync, each with its own title bar and such; the top-level windows are merged into a single containing window when they need to get "sticky". But IMO this is more elegant for almost any situation, and it provides the properties you seem to be seeking.
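A minimal sketch of the dock-widget setup (the widget contents here are placeholders):

#include <QApplication>
#include <QDockWidget>
#include <QMainWindow>
#include <QTextEdit>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QMainWindow window;
    window.setCentralWidget(new QTextEdit); // placeholder central widget

    // Each dock widget can be floated on its own or docked to an edge.
    auto *tools = new QDockWidget("Tools");
    tools->setWidget(new QTextEdit);
    window.addDockWidget(Qt::LeftDockWidgetArea, tools);

    auto *properties = new QDockWidget("Properties");
    properties->setWidget(new QTextEdit);
    window.addDockWidget(Qt::RightDockWidgetArea, properties);

    window.show();
    return app.exec();
}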
Install an event filter on the tracked window with QObject::installEventFilter() and filter on QEvent::Move.
You can then change the position of the tracking window whenever your filter is called with that event type.
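A minimal sketch of that idea, assuming a fixed offset between the two windows (window-frame geometry is ignored here, which a real implementation would have to account for):

#include <QApplication>
#include <QEvent>
#include <QMoveEvent>
#include <QWidget>

// Keeps a second window at a fixed offset from the tracked one.
class FollowFilter : public QObject
{
public:
    FollowFilter(QWidget *follower, const QPoint &offset, QObject *parent = nullptr)
        : QObject(parent), m_follower(follower), m_offset(offset) {}

protected:
    bool eventFilter(QObject *watched, QEvent *event) override
    {
        if (event->type() == QEvent::Move) {
            auto *moveEvent = static_cast<QMoveEvent *>(event);
            m_follower->move(moveEvent->pos() + m_offset); // reposition follower
        }
        return QObject::eventFilter(watched, event); // never consume the event
    }

private:
    QWidget *m_follower;
    QPoint m_offset;
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget tracked, follower;
    tracked.installEventFilter(new FollowFilter(&follower, QPoint(420, 0), &tracked));
    tracked.show();
    follower.show();
    return app.exec();
}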
I found a way to keep two windows anchored: when the user moves a window, the other follows, keeping its relative position to the moved one.
It is a bit of a hack, because it assumes that QEvent::NonClientAreaMouseButtonPress is sent when the user left-clicks the title bar, holds the button while moving the window, and releases it at the end, so that QEvent::NonClientAreaMouseButtonRelease is sent.
The idea is to use the QWidget::moveEvent event handler of each window to update the geometry of the other, using QWidget::setGeometry.
But the documentation states that:
Calling setGeometry() inside resizeEvent() or moveEvent() can lead to infinite recursion.
So I needed to prevent the moveEvent handler of the window that was not moved directly by the user from updating the geometry of the other.
I achieved this via QObject::installEventFilter, intercepting the aforementioned events.
When the user clicks on the title bar of WindowOne to start a move operation, WindowOne::eventFilter catches its QEvent::NonClientAreaMouseButtonPress and sets the public attribute WindowTwo::skipevent_two to true.
While the user is moving WindowOne, WindowTwo::moveEvent is called as a result of the setGeometry operation performed on WindowTwo from WindowOne::moveEvent.
WindowTwo::moveEvent checks WindowTwo::skipevent_two and, if it is true, returns without performing a setGeometry operation on WindowOne, which would cause infinite recursion.
As soon as the user releases the left mouse button, ending the window move operation, WindowOne::eventFilter catches QEvent::NonClientAreaMouseButtonRelease and sets the public attribute WindowTwo::skipevent_two back to false.
The same actions are performed if the user clicks the title bar of WindowTwo, this time causing WindowOne::skipevent_one to be set to true and preventing WindowOne::moveEvent from performing any setGeometry operation on WindowTwo.
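A condensed sketch of the scheme, folded into a single symmetric class for brevity (names here are illustrative and differ from the proof of concept linked below):

#include <QApplication>
#include <QEvent>
#include <QMoveEvent>
#include <QWidget>

class StickyWindow : public QWidget
{
public:
    StickyWindow *partner = nullptr; // the other anchored window
    QPoint offset;                   // partner's position relative to ours
    bool skipMove = false;           // set while the partner is being dragged

protected:
    bool eventFilter(QObject *watched, QEvent *event) override
    {
        // Installed on this window itself: while our title bar is held,
        // the partner must not echo geometry changes back to us.
        if (partner) {
            if (event->type() == QEvent::NonClientAreaMouseButtonPress)
                partner->skipMove = true;
            else if (event->type() == QEvent::NonClientAreaMouseButtonRelease)
                partner->skipMove = false;
        }
        return QWidget::eventFilter(watched, event);
    }

    void moveEvent(QMoveEvent *event) override
    {
        QWidget::moveEvent(event);
        if (skipMove || !partner)
            return;                  // recursion guard: we were moved by the partner
        partner->move(pos() + offset); // drag the partner along
    }
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    StickyWindow a, b;
    a.partner = &b; b.partner = &a;
    a.move(100, 100); b.move(520, 100);
    a.offset = b.pos() - a.pos();
    b.offset = a.pos() - b.pos();
    a.installEventFilter(&a);
    b.installEventFilter(&b);
    a.show(); b.show();
    return app.exec();
}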
I believe this solution is far from being clean and usable. Some problems:
I am not sure when and why QEvent::NonClientAreaMouseButtonPress and QEvent::NonClientAreaMouseButtonRelease are dispatched, apart from the case considered above.
When/if one window is resized or moved without user interaction, or without the proper mouse clicks from the user, everything will probably end in infinite recursion.
There is no guarantee that those mouse events will be dispatched the same way in the future.
Free space for more...
Proof of concept:
https://github.com/Shub77/DockedWindows