Xlib - Ignore Modifier Keys in XGrab* - mask

Holla,
I'm currently hacking out some changes to TinyWM - one I'd like to implement is a click-to-focus policy.
I've figured out that I need to call XGrabButton on the child as it is created in a MapNotify event, but I cannot figure out what modifier mask to use so that all modifiers are ignored (meaning the focus click should happen no matter which modifier keys are active).
I've hit a brick wall, as even AnyModifier seems not to work when I have no modifier keys pressed (and even then, it is rather picky).
Here's the relevant chunk of code:
void eMapNotify(Display *dpy, XEvent *ev){
    // Ignore windows we don't care about
    if (!ev->xmap.override_redirect) XSetWindowBorderWidth(dpy, ev->xmap.window, 3);
    // Allows us to hook into this window's clicks to let us focus it
    XGrabButton(dpy, 1, WHAT_MASK_DO_I_PUT_HERE, ev->xmap.window,
                True, ButtonPressMask, GrabModeAsync, GrabModeAsync,
                None, None);
}
Any ideas?
EDIT:
I've found out that in fact the event handler does record the click, but the click is not forwarded to the child window, which is precisely the behavior I want.
UPDATE
I've currently implemented the focusing functionality with the following code, which tracks the pointer and focuses whichever window it is over. On my machine, it's not nearly as expensive as it may look:
Window dump, child;
int rx, ry, cx, cy;
unsigned int mask;
// Get the pointer's location
XQueryPointer(dpy, root, &dump, &child, &rx, &ry, &cx, &cy, &mask);
// Focuses the pointer's current window
XSetInputFocus(dpy, dump, RevertToNone, CurrentTime);

The mask would be 0 for no modifiers.
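If you instead want the grab to fire regardless of lock-style modifiers, one common workaround (a rough dwm-style sketch, assuming Num Lock is mapped to Mod2, which is typical but not guaranteed) is to issue the grab once for every combination of the modifiers you want ignored:

/* Sketch: grab button 1 once per ignorable modifier combination so the
   click-to-focus grab fires whether or not Caps Lock / Num Lock are active.
   Query the modifier map if you need to know the real Num Lock mask. */
unsigned int ignored[] = { 0, LockMask, Mod2Mask, LockMask | Mod2Mask };
for (size_t i = 0; i < sizeof(ignored) / sizeof(ignored[0]); ++i)
    XGrabButton(dpy, Button1, ignored[i], ev->xmap.window,
                True, ButtonPressMask, GrabModeAsync, GrabModeAsync,
                None, None);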

Related

QML InputHandler stop propagation of event

I have two Rectangles, each with a TapHandler. Rectangle A is the parent of Rectangle B.
How can I configure A and B so that when B is clicked, the EventPoint does not propagate to the onSingleTapped handler of A?
The EventPoint docs suggest to set its accepted property to true:
Setting accepted to true prevents the event from being propagated to Items below the PointerHandler's Item.
However, at the same time the docs state that accepted is a read-only property, which does not make much sense (I guess the documentation is out-of-date or simply wrong).
Test code:
import QtQuick

Rectangle {
    id: a
    width: 200
    height: 200
    color: "yellow"
    TapHandler {
        onSingleTapped: console.log("A tapped")
    }
    Rectangle {
        id: b
        color: "blue"
        width: 100
        height: 100
        TapHandler {
            onSingleTapped: function(eventPoint) {
                // Stop propagation.
                eventPoint.accepted = true
                console.log("B tapped")
            }
        }
    }
}
UPDATE: Setting the gesturePolicy of B to TapHandler.ReleaseWithinBounds prevents A from receiving the event. Not sure if this is really the best solution.
For Handlers, the entire event is delivered to each handler; therefore Handlers accept individual points, not the whole event. In general, accepting all points implies accepting the entire event, but it may be that one handler accepts some points while another accepts other points. Delivery is not “done” until all the points are accepted.
It looks like setting grabPermissions without a gesturePolicy does not do what's expected, i.e. grab the event and prevent propagation to other items.
Changing Rectangle b's (a's child) TapHandler to have gesturePolicy: TapHandler.ReleaseWithinBounds or TapHandler.WithinBounds seems the right way to accept; in other words, this way it accepts the point, which means the event will not propagate to the TapHandler of the parent Rectangle!
Rectangle {
    id: b
    z: 2
    color: "blue"
    width: 100
    height: 100
    TapHandler {
        gesturePolicy: TapHandler.ReleaseWithinBounds // or TapHandler.WithinBounds
        grabPermissions: PointerHandler.CanTakeOverFromAnything |
                         PointerHandler.CanTakeOverFromHandlersOfSameType |
                         PointerHandler.CanTakeOverFromItems |
                         PointerHandler.ApprovesTakeOverByAnything |
                         PointerHandler.ApprovesCancellation
        onSingleTapped: function(eventPoint) {
            // Stop propagation.
            eventPoint.accepted = true // this is just a confirmation!
            console.log("B tapped")
        }
    }
}
Further context, from the narkive interest group:
Qt makes a difference between active and passive touch point / pointer grabs. grabPermissions only affect active grabbers. TapHandler is passive with the default gesturePolicy, and active otherwise. That's why you need to change the gesturePolicy in a TapHandler to see any grabPermissions in effect, even the default ones.
Other input handlers don't have this same quirk, but have others. While each handler is simpler than a MouseArea or MultiPointTouchArea as Qt intended, interactions between layered handlers became much more complicated than between layered Area instances. So, for complex input, I'm using an Area instead. Which one depends on whether I'm handling hover or multitouch.
Active and passive grabs: https://doc.qt.io/qt-6/qtquickhandlers-index.html#pointer-grab
TapHandler behavior: https://doc.qt.io/qt-6/qml-qtquick-taphandler.html#details
Yes, as the grabPermissions docs say:
This property specifies the permissions when this handler's logic decides to take over the exclusive grab, or when it is asked to approve grab takeover or cancellation by another handler.
"Exclusive" means it's the conventional kind of grab that only one Item or Handler can hold at any given moment. Whereas passive grabs are made for stealth, to deal with the fact that all gestures are ambiguous at the time when the user presses initially: several handlers can take passive grabs to register interest in that point, to monitor mouse or touchpoint movements independently, and then perhaps one of them can decide later to transition to taking the exclusive grab when some condition is met, such that the handler believes the user is initiating a gesture that is relevant to that handler. At the time of the transition, grabPermissions control the negotiation that will occur: which one gets to take over the exclusive grab.
But TapHandler's default gesturePolicy is DragThreshold, and in that case it can detect a tap using only a passive grab, which makes it work independently of other items or handlers. This is meant to be useful for augmenting behavior of other components without interfering with their built-in behavior; but it's not so useful if you want to ensure that only one thing can detect a tap or click. If you want TapHandler to participate in exclusive-grab negotiations (stealing the grab from other items or handlers, and allowing or disallowing the grab to be stolen by another), then you need to set gesturePolicy first, to make it use an exclusive grab. Then, if the TapHandler takes the exclusive grab on press, and the grab is stolen by something else, it will emit canceled rather than tapped.
Sorry it turned out a bit unintuitive: the intention was that gesturePolicy is a designer-friendly property; you just specify the behavior you want and don't worry about grabs. But in practice, it seems to me that we often end up changing gesturePolicy specifically to make TapHandler take an exclusive grab, to not be so stealthy, and to participate in the negotiations with other components.
If you need to troubleshoot grab-transition scenarios at runtime (which in practice is the first thing to think about whenever mouse or touch behavior is not what you expected), there are several grabbing-related logging categories: qt.pointer.grab, qt.quick.handler.grab and qt.quick.pointer.grab. (There are some logging differences between Qt 5 and 6, though.) What I do on Linux is keep a big ~/.config/QtProject/qtlogging.ini file with all the logging categories I've ever cared about, in alphabetical order, mostly commented out (lines beginning with a # symbol), but those related to event delivery and grabs are often uncommented, since I spend a lot of time trying to fix Qt bugs related to that.
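The same categories can also be enabled programmatically if editing qtlogging.ini isn't convenient; a rough C++ sketch, called early in main():

#include <QLoggingCategory>

// Sketch: enable the grab-related logging categories from code instead of
// ~/.config/QtProject/qtlogging.ini.
QLoggingCategory::setFilterRules(QStringLiteral(
    "qt.pointer.grab.debug=true\n"
    "qt.quick.handler.grab.debug=true\n"
    "qt.quick.pointer.grab.debug=true"));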
As for setting accepted to true or false on an individual event: that's an old MouseArea pattern, not carried forward into the way that Pointer Handlers are used. There are several problems with it:
QPointerEvent and QEventPoint are stack-allocated Q_GADGET types, not Q_OBJECT. That means they are passed from C++ to QML by value. So setting the accepted property would have no effect, even if we made it a writeable property: you'd be modifying a copy, and the event delivery logic would not see it. When MouseArea lets you do that, it has to populate a special QMouseEvent (QObject) on the fly so that it can be emitted by pointer, just so that you can set that property, and Qt code can see that you set it. (At least since 5.8 those wrapper objects get reused rather than being allocated on-the-fly.)
QML is supposed to be a declarative language. It's not declarative (and not FP) to require you to write a snippet of JS that imperatively sets a property for the sake of its side effect on delivery logic.
QML is supposed to be a designer-friendly language. Newbies should not need to understand what it means to accept an event. (ok perhaps that's really not achieved... nice goal though?)
Accepting a whole event is sensible for mouse but not for touch: if some fingers touch one item and other fingers touch another, you could execute multiple gestures simultaneously (for example you could drag multiple sliders with multiple fingers, if the slider component uses DragHandlers). If accepting the event implies that one item is grabbing all fingers at the same time, that's incompatible with providing the complete event to each item. Because of this, we have to split up QTouchEvents during delivery to Items, which is complex and bad for efficiency. We'd like to be able to stop doing that some day. For now, only Handlers get to see the complete events (all touchpoints, even those that are outside the Item's bounds). This allows things like the margin property to work.
Accepting an event does two things: you are asking to stop propagation, and you are also implicitly asking for an exclusive grab, of the entire event, as you saw it (after the touch-splitting). Basically you are saying "the buck stops here", which is a bit arrogant. (How can one component know for sure, at the time of press, that it's the sole component in the entire application that could possibly care about that gesture?) This is legacy logic that we have to maintain because of all the legacy Items that do event handling that way (MouseArea, Qt Quick Controls, stuff in other Qt modules, lots of third-party components). In the future it's probably better to separate grabbing from control of propagation. This is part of why passive grabs exist too: handlers must be able to act cooperatively, to deal with ambiguity, so they generally should avoid stopping propagation, to allow other handlers to also see the same events. Only when a handler is sure that a gesture has really started should it attempt to take the exclusive grab.

QTextEdit or QTextBrowser performance issue

I have a very large QString.
I need to display it as output.
I tried both QTextEdit and QTextBrowser, and all the methods of setting text like setText, append, setPlainText... The performance is really poor. The most annoying thing is that setting the text on the user interface blocks the main thread, so the program becomes unresponsive during the process.
Is there any better way to display visual text result?
At least if the document is rich text, every time you append to the document, it is apparently reparsed.
This is a LOT more performant: if you want each append to actually show quickly and separately (instead of waiting until they've all been appended before anything is shown), you need to access the internal QTextDocument:
void scrollLogToBottom(QTextEdit *editWidget);

void fastAppend(QString message, QTextEdit *editWidget)
{
    const bool atBottom = editWidget->verticalScrollBar()->value() == editWidget->verticalScrollBar()->maximum();
    QTextDocument *doc = editWidget->document();
    QTextCursor cursor(doc);
    cursor.movePosition(QTextCursor::End);
    cursor.beginEditBlock();
    cursor.insertBlock();
    cursor.insertHtml(message);
    cursor.endEditBlock();

    // Scroll the scroll area to the bottom if it was at the bottom when we
    // started (we don't want to force scrolling to the bottom if the user is
    // looking at a higher position).
    if (atBottom) {
        scrollLogToBottom(editWidget);
    }
}

void scrollLogToBottom(QTextEdit *editWidget)
{
    QScrollBar *bar = editWidget->verticalScrollBar();
    bar->setValue(bar->maximum());
}
The scrolling to bottom is optional, but in logging use it's a reasonable default for UI behaviour.
This actually seems like a kind of trap in Qt. I would like to know why there isn't a fastAppend method directly in QTextEdit. Or are there caveats to this solution?
(My company actually paid KDAB for this advice, but this seems so silly that I thought this should be more common knowledge.)
Original answer here.
Just had the same problem, and the resolution is very simple! Instead of creating a document, adding it immediately to the QTextBrowser/QTextEdit, and then using a cursor/setText to modify it, just postpone setting the document on the widget until AFTER you have set the text and formatting... a complete life changer :)
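A minimal sketch of that ordering, assuming editWidget already exists and hugeText is the large string to show:

#include <QTextDocument>
#include <QTextCursor>
#include <QTextEdit>

// Sketch: fill the document first, attach it to the widget last.
auto *doc = new QTextDocument(editWidget);   // parented so it is cleaned up with the widget
QTextCursor cursor(doc);
cursor.insertText(hugeText);                 // or insertHtml() for rich text
editWidget->setDocument(doc);                // only now does the widget have to lay anything out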
The best way would be to load the text partially, as a background operation in a thread that periodically emits a signal to update the GUI, or better: just use a QTimer.
Load the first N lines, then start a QTimer that reads more lines and appends the text to the widget. After reaching EOF, just kill that timer.
I believe that example can be helpful.
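A rough sketch of that timer-driven approach (the helper name, chunk size, and use of append() are just illustrative choices):

#include <QTextEdit>
#include <QTimer>
#include <QStringList>

// Appends 'lines' to 'edit' in small batches, letting the event loop run
// between batches so the GUI stays responsive.
void appendInChunks(QTextEdit *edit, const QStringList &lines, int chunkSize = 200)
{
    auto *timer = new QTimer(edit);
    QObject::connect(timer, &QTimer::timeout, edit,
                     [edit, lines, chunkSize, timer, index = 0]() mutable {
        const int end = qMin(index + chunkSize, int(lines.size()));
        for (int i = index; i < end; ++i)
            edit->append(lines.at(i));
        index = end;
        if (index >= lines.size()) {  // "EOF" reached: stop and dispose of the timer
            timer->stop();
            timer->deleteLater();
        }
    });
    timer->start(0);  // one batch per pass through the event loop
}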

Qt: change QGraphicsItem receiver during mouse move

I am currently trying to implement a Bezier pen tool. The course of events looks like this:
click on point (QGraphicsItem), start moving while clicked
in QGraphicsScene mouseMoveEvent, prevent moves of point (with a boolean flag) until when distance from point.pos() to event.scenePos() reaches a threshold. When this happens unselect and mouseRelease point, add a node (QGraphicsItem) – select it and give it mousePress state (plus unset the boolean flag)
the user can move node after that, then release mouse.
(The node is a child item of the point.)
I tried to do this inside the scene’s mouseMoveEvent (I have a conditional branch to know when to do this):
point.setSelected(False)
point.ungrabMouse()
node.setPos(event.scenePos()-point.pos()) # positioning relative to point since it’s a childItem()
node.grabMouse()
event.accept()
But after doing this, it turned out that the node was only getting mouseMoveEvents after I released the mouse… (I print them in the console; the node itself did not move.)
So I figured, maybe the scene needs to eat a mouseReleaseEvent before sort of "releasing focus". I found an article that is tangential to the subject here.
So then instead of using ungrabMouse()/grabMouse(), I tried this:
mouseRelease = QEvent(QEvent.MouseButtonRelease)
self.sendEvent(point, mouseRelease)
node.setPos(event.scenePos()-point.pos()) # positioning relative to point since it’s a childItem()
mousePress = QEvent(QEvent.MouseButtonPress)
self.sendEvent(node, mousePress)
Now when I reach the distance threshold, I can see that only point gets selected (good); however, as I move further, both point and node are selected and moving… I would expect that since I have unselected and released the (parent) point, it would not keep moving.
The article I linked to does do something different but it says "It turns out, we have to simulate a mouse release event to clear Qt’s internal state." which might be relevant to the current situation however I do not know what extra steps might need to be taken in order to “clear Qt’s internal state”… so I’m hoping a QGraphics aficionado can weigh in and help me out figuring this.
Thanks for having a look here.
A combination of sending mouse events and grabbing the mouse manually works… it has to be ungrabbed manually on mouseRelease, though.

Reliably showing a "please wait" dialog while doing a lengthy blocking operation in the main Qt event loop

I've got a Qt app that needs to call an expensive non-Qt function (e.g. to unzip a ~200MB zip file), and since I'm calling that function from the main/GUI thread, the Qt GUI freezes up until the operation completes (i.e. sometimes for 5-10 seconds).
I know that one way to avoid that problem would be to call the expensive function from a separate thread, but since there isn't much the user can do until the unzip completes anyway, that seems like overkill. I can't add processEvents() calls into the expensive function itself, since that function is part of a non-Qt-aware codebase and I don't want to add a Qt dependency to it.
The only thing I want to change, then, is to have a little "Please wait" type message appear during the time that the GUI is blocked, so that the user doesn't think that his mouse click was ignored.
I currently do that like this:
BusySplashWidget * splash = new BusySplashWidget("Please wait…", this);
splash->show();
qApp->processEvents(); // make sure that the splash is actually visible at this point?
ReadGiantZipFile(); // this can take a long time to return
delete splash;
This works 95% of the time, but occasionally the splash widget doesn't appear, or it appears only as a grey rectangle and the "Please wait" text is not visible.
My question is, is there some other call besides qApp->processEvents() that I should also make to guarantee that the splash widget becomes fully visible before the lengthy operation commences? (I suppose I could call qApp->processEvents() over and over again for 100 ms, or something, to convince Qt that I'm really serious about this, but I'd like to avoid voodoo-based programming if possible ;))
In case it matters, here is how I implemented my BusySplashWidget constructor:
BusySplashWidget::BusySplashWidget(const QString & t, QWidget * parent) : QSplashScreen(parent)
{
    const int margin = 5;
    QFontMetrics fm = fontMetrics();
    QRect r(0, 0, margin+fm.width(t)+margin, margin+fm.ascent()+fm.descent()+1+margin);
    QPixmap pm(r.width(), r.height());
    pm.fill(Qt::white);
    // these braces ensure that ~QPainter() executes before setPixmap()
    {
        QPainter p(&pm);
        p.setPen(Qt::black);
        p.drawText(r, Qt::AlignCenter, t);
        p.drawRect(QRect(0, 0, r.width()-1, r.height()-1));
    }
    setPixmap(pm);
}
Moving the work to another thread is the correct way to go, but for simple operations there's a much less complicated way to accomplish this without the pain of managing threads.
BusySplashWidget splash("Please wait…", this);
QFutureWatcher<void> watcher;
connect(&watcher, SIGNAL(finished()), &splash, SLOT(quit()));
QFuture<void> future = QtConcurrent::run(ReadGiantZipFile);
watcher.setFuture(future);
splash.exec(); // use exec() instead of show() to open the dialog modally
See the documentation about the QtConcurrent framework for more information.

QDoubleSpinBox: Stop emitting intermediate values

I am subclassing QDoubleSpinBox to add some features (like incrementing the value based on the location of the cursor in the lineedit) and to change some features I find annoying. One of the latter is that intermediate values are emitted: e.g. if you want to enter the value 323 it will emit 3 then 32 then finally 323. I'd like to set it to only emit on entry (i.e. only actually change value on entry).
Anyway, I can't figure out how to capture these intermediate edits. I overrode setValue to see if I could stop it there somehow, but it apparently isn't called (or at least my override isn't). I'm not sure how the value is actually getting set while editing in line edit.
More generally, the logic of this box escapes me. Is there some documentation that explains e.g. "if you type a digit into the lineedit then this series of routines is called... while if you hit the up arrow, this series of routines is called?"
In case it matters, I'm using PyQt5
EDIT: Here is another case in which having access to this is important. Say I want to implement an undo/redo structure for the box. The only way I can think of to get access to the changed values is to connect to the valueChanged signal. But if I'm subclassing the box, it seems a little convoluted to listen for a signal rather than just watch the value change 'internally'. Or am I missing something here?
You could use the following signal:
void QAbstractSpinBox::editingFinished() [signal]
This signal is emitted when editing is finished. This happens when the spinbox loses focus and when Enter is pressed.
based on the documentation of QAbstractSpinBox:
http://qt-project.org/doc/qt-5.1/qtwidgets/qabstractspinbox.html#editingFinished
There is nothing that combines the arrow based changes and the editingFinished changes.
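A minimal C++ sketch of reacting only when editing finishes (spinBox standing in for your spin box instance):

#include <QDoubleSpinBox>
#include <QDebug>

// Sketch: fires when focus is lost or Return/Enter is pressed, not per digit.
QObject::connect(spinBox, &QAbstractSpinBox::editingFinished, spinBox, [spinBox]() {
    qDebug() << "final value:" << spinBox->value();
});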
My use case was to let the user enter the value without getting the signal on each new digit, while still making ↑, ↓, Page Up, Page Down keys and arrow buttons work as usual, emitting the signal on each activation.
QAbstractSpinBox::editingFinished signal doesn't provide this functionality: it's only ever emitted when focus is lost or Return/Enter is pressed.
What does work exactly as I need is the keyboardTracking property of QAbstractSpinBox. If it's true (the default), the spinbox emits valueChanged on each digit typed. If you set it to false, it behaves exactly as I described in the beginning of this answer.
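A minimal C++ sketch of that setup, using a plain QDoubleSpinBox (the property exists on every QAbstractSpinBox subclass):

#include <QDoubleSpinBox>
#include <QDebug>

// Sketch: with keyboard tracking off, typed digits no longer emit
// valueChanged(); arrow keys, Page Up/Down and the buttons still do.
auto *box = new QDoubleSpinBox;
box->setKeyboardTracking(false);
QObject::connect(box, QOverload<double>::of(&QDoubleSpinBox::valueChanged),
                 box, [](double v) { qDebug() << "value:" << v; });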

Resources