I want to handle the table view didSelectRowAt delegate method in a Siri Shortcuts custom Intent UI.
I also want to handle a UIButton action in a Siri Shortcuts custom Intent UI.
Is either of these possible?
Please share a solution (if possible).
#SiriShortcuts #iOS12 #CustomIntentUI
You can't handle interactions in a custom Intent UI because the intent view controller does not receive touch events. This is from the official documentation; check the section on requirements and limitations:
You can update your view controllers as needed using timers or other programmatic means. Your view controllers also receive the normal callbacks when they are loaded, shown, and hidden. However, your view controllers do not receive touch events or any other events while they are onscreen, and you cannot add gesture recognizers to them. Therefore, never create an interface with controls or views that require user interactions.
I remember that SWT/JFace has a representation for actions that is independent of graphical user interface elements (such as a button or menu item). This way, when the interface changes, the underlying action model remains the same. For example, I may create an action "Find" that is detached from any particular menu or button.
I am not finding a corresponding notion in JavaFX 8. Does it have one?
Here's a page explaining SWT/JFace's Action. Note that it is different from an event, and therefore different from JavaFX's ActionEvent.
There is no equivalent functionality in the core JavaFX libraries, though ControlsFX provides a similar API.
I've got a general question on Qt design.
Say I created custom classes implementing QAbstractTableModel and subclassing QTableView, and re-implemented event handlers in the view, such as mousePressEvent, mouseReleaseEvent, etc.
Still, Qt's view manages to perform some of its default functionality: it still responds to mouse clicks and movements on cells by changing the selection, thus somehow firing the selection model's built-in signals, though I didn't ask it to. It still resizes columns if I drag on cell borders, etc. What mechanism triggers those built-in default slots, and how can I disable parts of it, e.g. the default behavior of the selection model, or the default resizing?
For the sake of comparison, GTK+ has the concept of a default per-class signal handler: a function connected to its signal by default and called before or after your custom per-class or per-object signal handlers, depending on parameters you set. You can disable it from your custom slot if you want to, and thoroughly control the behavior of, e.g., resizing or selection.
Is Qt opaque here, providing customization only via its interface functions? My question is particularly related to PyQt. Please ask for clarification if I'm too vague.
The event handlers that you refer to are used to listen to events, not to filter them. You can't override any events in them, since they don't have a return value: there's no way to inform subsequent event processing that you don't wish it to run.
To filter events, you must reimplement the event method, and invoke the base class's implementation on events that you do not wish to filter.
In Qt, event handling is done per-object, and you can install external objects as event filters on any other object. An object receives the events in its event method. The QObject class implements this method and invokes the timerEvent method. The QWidget class reimplements this method and invokes the widget-specific xxxEvent methods. And so on. All of those classes still process some events internally. Those are the per-class handlers that you speak of.
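For instance, a minimal sketch of that filtering applied to the table view from the question (the class name is made up; note that for scroll-area views such as QTableView, mouse events on the cells arrive through viewportEvent() rather than event()):

    #include <QEvent>
    #include <QTableView>

    class QuietTableView : public QTableView
    {
    public:
        using QTableView::QTableView;

    protected:
        bool viewportEvent(QEvent *e) override
        {
            // Swallow mouse presses: returning true without calling the
            // base class stops the default per-class handling (here, the
            // selection change on click) for this event.
            if (e->type() == QEvent::MouseButtonPress)
                return true;
            // Everything else keeps the default behavior.
            return QTableView::viewportEvent(e);
        }
    };

For selection and resizing specifically, though, the intended route is Qt's interface functions, e.g. setSelectionMode(QAbstractItemView::NoSelection) on the view and setSectionResizeMode(QHeaderView::Fixed) on its header (Qt 5 names); the same calls are available from PyQt.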
I need help understanding how to use QEvents in Qt; this is driving me crazy.
I am writing an application using custom events, but as the QApplication::postEvent function shows, it's necessary to specify the target object.
As I understand it, it's possible to post events to Qt's event loop with

    QApplication::postEvent(obj_target, new QEvent(myEventType));

meaning that I'm trying to catch my event in obj_target and do some stuff.
But I need to post events without specifying a target object, as QMouseEvent or QKeyEvent do.
I mean, when clicking in a QMainWindow with a lot of buttons, how is it that I can click any button and that button is pressed?
What is the target object when the click event is posted?
Is it possible to register objects to "listen" for a specific event?
I'm really confused. Is it possible to post an event without specifying a target object?
Thank you very much in advance.
There is no trivial way to post events "globally", as Dan has said. All dispatching of native events is done by private Qt implementation code.
The important distinction is:
There are native messages/events, delivered by the operating system, usually received by a window-specific event loop.
There are QEvents.
Internally, Qt keeps track of the top-level widgets (windows, really), so when it receives an event from the OS, it knows which window it should go to; it can match it using the platform window ID, for example.
QEvent delivery makes no sense without a receiving object, since sending an event to an object really only means that QObject::event(QEvent*) method is called on that object. It's impossible to call this method without having an object instance!
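To illustrate, a minimal sketch of a custom event delivered to a known receiver (the event class and payload are made up for this example):

    #include <QCoreApplication>
    #include <QDebug>
    #include <QEvent>
    #include <QObject>

    // A custom event type; registerEventType() hands out an unused id.
    class MyEvent : public QEvent
    {
    public:
        static const QEvent::Type TypeId;
        explicit MyEvent(int payload) : QEvent(TypeId), payload(payload) {}
        int payload;
    };
    const QEvent::Type MyEvent::TypeId =
        static_cast<QEvent::Type>(QEvent::registerEventType());

    class Receiver : public QObject
    {
    protected:
        bool event(QEvent *e) override
        {
            if (e->type() == MyEvent::TypeId) {
                qDebug() << "payload:" << static_cast<MyEvent *>(e)->payload;
                return true;   // handled
            }
            return QObject::event(e);
        }
    };

    // Usage: the event loop takes ownership and deletes the event
    // after delivery.
    //   QCoreApplication::postEvent(&receiver, new MyEvent(42));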
If you want to synthesize a global key press or mouse click event, then you have to figure out what object the event goes to. Namely:
Identify what top-level window (widget) the event should go to. You can enumerate top-level widgets via qApp->topLevelWidgets().
Identify the child widget the event should go to. If it's a keyboard event, sending it to the currently focused widget via qApp->focusWidget() is sufficient. For a mouse event, you need to enumerate the child widgets to find the deepest one in the tree that contains the mouse coordinates.
Send the correct QEvent subclass to the widget you've just identified. Events delivered to top-level widgets will be routed to the correct child widget.
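As a sketch of these steps for the keyboard case (the function name is made up), you could post a synthetic key press to the focused widget:

    #include <QApplication>
    #include <QKeyEvent>
    #include <QWidget>

    void postSyntheticKeyPress()
    {
        QWidget *target = qApp->focusWidget();   // step 2 for key events
        if (!target)
            return;                              // nothing has focus

        // Step 3: post a matching press/release pair; the event loop
        // takes ownership and delivers them asynchronously.
        QApplication::postEvent(target,
            new QKeyEvent(QEvent::KeyPress, Qt::Key_A, Qt::NoModifier, "a"));
        QApplication::postEvent(target,
            new QKeyEvent(QEvent::KeyRelease, Qt::Key_A, Qt::NoModifier, "a"));
    }

For the mouse case, qApp->widgetAt(globalPos) can shortcut the child-widget search, but the enter/leave caveat that follows still applies.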
When sending mouse events, you also need to synthesize relevant enter and leave events, or you risk leaving the widgets in an invalid state. The application.cpp source file should give you some ideas there.
This doesn't give you access to native graphical items, such as menus on OS X.
Please tell us exactly what you're trying to do. Why do you want to post a broadcast event? Who receives it? Since, I presume, your own QObject-derived classes will receive those broadcasts, it's easy enough to use the signal-slot mechanism: simply connect(...) the receiver classes to some global broadcaster QObject's signals.
For this purpose, I have a specific singleton class, which I call GuiSignalHub. It groups all the application-wide signals.
Objects that want to trigger an application-level action (such as opening context help) just connect their signal to the corresponding GuiSignalHub signal. Receivers just connect the GuiSignalHub to their slots.
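A minimal sketch of such a hub, assuming plain signals and slots (the signal and method names are illustrative, not an established API):

    #include <QObject>
    #include <QString>

    class GuiSignalHub : public QObject
    {
        Q_OBJECT
    public:
        static GuiSignalHub *instance()
        {
            static GuiSignalHub hub;   // created on first use, lives as long as the app
            return &hub;
        }

        // Senders call this, or connect their own signal to it.
        void requestContextHelp(const QString &topic)
        {
            emit contextHelpRequested(topic);
        }

    signals:
        // Application-wide signals live here instead of on individual widgets.
        void contextHelpRequested(const QString &topic);

    private:
        GuiSignalHub() = default;
    };

    // A receiver connects the hub to its slot (helpViewer and
    // HelpViewer::showTopic are hypothetical):
    //   QObject::connect(GuiSignalHub::instance(),
    //                    &GuiSignalHub::contextHelpRequested,
    //                    helpViewer, &HelpViewer::showTopic);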
I want to use a custom input device for multitouch input in a Qt application. I plan to create QTouchEvents based on my raw input data. I also want to generate custom gestures.
As far as I understand, I have to subclass QGestureRecognizer, create a QGesture for the widget I want to control and implement recognize() to filter my QTouchEvents and trigger the gesture when appropriate.
Now I have two questions:
Is this the correct way to do things?
How do I make sure that QTouchEvents still reach my widget (e.g. for dragging) when I already consume them in my QGestureRecognizer? Or should all interaction with my widget be in the form of gestures?
My progress on this matter so far, should anyone find themselves in a similar situation:
1. It may be the right way, but it doesn't work for me. Even after registering my recognizer with the application, it does not receive any QTouchEvents. I therefore installed my recognizer as an event filter on the target widget.
2. At least when using an event filter, you can pass the event on to the original receiver. See http://doc.qt.io/qt-5/qobject.html#eventFilter.
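For illustration, a sketch of that event-filter approach (the class name is made up); returning false lets the touch events continue on to the widget:

    #include <QEvent>
    #include <QObject>

    class TouchInspector : public QObject
    {
    public:
        using QObject::QObject;

        // Install with: targetWidget->installEventFilter(inspector);
    protected:
        bool eventFilter(QObject *watched, QEvent *event) override
        {
            switch (event->type()) {
            case QEvent::TouchBegin:
            case QEvent::TouchUpdate:
            case QEvent::TouchEnd:
                // Inspect the touch points for gesture recognition here,
                // then return false so the original receiver still gets
                // the event (e.g. for dragging).
                return false;
            default:
                return QObject::eventFilter(watched, event);
            }
        }
    };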
I'm getting started on a new WPF project and want to use MVVM Light. My initial thought was to have four buttons as navigation at the top of the window, and then a ContentControl where new views are injected when chosen via the navigation buttons.
That is, a main window and some subviews that get injected. The main VM should have commands wired up to the navigation buttons. When executed, a command should send a navigation message to change the view, and this is where I get confused. Who should handle the navigation messages sent and change the view in the ContentControl? I guess the ViewModelLocator is only for instantiating the view models.
What is missing to glue this together?
Best regards
See this answer for a dialog strategy; it should be applicable to your problem. However, one word of caution when using messages in the above scenario ... call Unregister(), especially on your views.