I watched a tutorial video explaining the Chain of Responsibility design pattern, and I think I understand how it works, but I'm not sure when I would actually use it. What are some common uses of the Chain of Responsibility?
From the GoF:
Known Uses
Several class libraries use the Chain of Responsibility pattern to handle user events. They use different names for the Handler class, but the idea is the same: when the user clicks the mouse or presses a key, an event gets generated and passed along the chain. MacApp [App89] and ET++ [WGM88] call it "EventHandler," Symantec's TCL library [Sym93b] calls it "Bureaucrat," and NeXT's AppKit [Add94] uses the name "Responder."
The Unidraw framework for graphical editors defines Command objects that encapsulate requests to Component and ComponentView objects [VL90]. Commands are requests in the sense that a component or component view may interpret a command to perform an operation. This corresponds to the "requests as objects" approach described in Implementation. Components and component views may be structured hierarchically. A component or a component view may forward command interpretation to its parent, which may in turn forward it to its parent, and so on, thereby forming a chain of responsibility.
ET++ uses Chain of Responsibility to handle graphical update. A graphical object calls the InvalidateRect operation whenever it must update a part of its appearance. A graphical object can't handle InvalidateRect by itself, because it doesn't know enough about its context. For example, a graphical object can be enclosed in objects like Scrollers or Zoomers that transform its coordinate system. That means the object might be scrolled or zoomed so that it's partially out of view. Therefore the default implementation of InvalidateRect forwards the request to the enclosing container object. The last object in the forwarding chain is a Window instance. By the time Window receives the request, the invalidation rectangle is guaranteed to be transformed properly. The Window handles InvalidateRect by notifying the window system interface and requesting an update.
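To make the event-handler use concrete, here is a minimal sketch (not from the GoF text; the class names are illustrative) of such a chain, where each handler either consumes an event or forwards it to its parent:

    #include <iostream>
    #include <string>

    // Base handler: by default, forward the event up the chain.
    class Handler {
    public:
        explicit Handler(Handler *parent = nullptr) : parent_(parent) {}
        virtual ~Handler() = default;
        virtual void handleEvent(const std::string &event) {
            if (parent_)
                parent_->handleEvent(event);      // pass along the chain
            else
                std::cout << "unhandled: " << event << "\n";
        }
    private:
        Handler *parent_;
    };

    // A concrete handler that only knows how to handle clicks.
    class Button : public Handler {
    public:
        using Handler::Handler;
        void handleEvent(const std::string &event) override {
            if (event == "click")
                std::cout << "Button handles click\n";
            else
                Handler::handleEvent(event);      // forward everything else
        }
    };

    // The last link in the chain, like the Window in the ET++ example.
    class Window : public Handler {
    public:
        using Handler::Handler;
        void handleEvent(const std::string &event) override {
            std::cout << "Window handles " << event << "\n";
        }
    };

    int main() {
        Window window;
        Button button(&window);
        button.handleEvent("click");     // consumed by the Button
        button.handleEvent("keypress");  // forwarded up to the Window
    }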
I'd like to know the proper way to implement my own cold source (publisher) using the Mutiny library.
Let's say there is a huge file parser that should return lines as Multi<String> items at the Subscriber's consumption rate.
New lines should be read only after the previous ones were processed (to keep memory usage low), while buffering a couple of hundred items to avoid consumer idling.
I know about the Multi.createFrom().emitter() factory method, but with it I can't see a convenient way to implement backpressure.
Does Mutiny have an idiomatic way to create cold sources that produce the next items only when requested by the downstream, or am I in this case supposed to implement my own Publisher using the Java Reactive Streams API and then wrap it in a Multi?
You can use Multi.createFrom().generator(...).
The function is called for every request, but you can pass a "state" to remember where you are, typically an Iterator.
This is the opposite of the emitter approach (which does not check for requests but has a backpressure strategy attached to it).
If you need more fine-grained back-pressure support, you would need to implement a Publisher.
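For illustration, here is a minimal sketch of the question's file scenario built on generator(), using an open BufferedReader as the state; the path handling and the simplistic error handling are my assumptions, not part of the answer above:

    import io.smallrye.mutiny.Multi;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.function.Supplier;

    public class FileLineSource {

        // Emits exactly one line per downstream request, so lines are read
        // from disk only as fast as the subscriber consumes them.
        public static Multi<String> lines(Path path) {
            Supplier<BufferedReader> open = () -> {
                try {
                    return Files.newBufferedReader(path);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            };
            return Multi.createFrom().generator(open, (reader, emitter) -> {
                try {
                    String line = reader.readLine();
                    if (line == null) {
                        reader.close();
                        emitter.complete();     // end of file
                    } else {
                        emitter.emit(line);     // one item per request
                    }
                } catch (IOException e) {
                    emitter.fail(e);
                }
                return reader;                  // carry the reader as the state
            });
        }
    }

The couple-of-hundred-item buffer from the question would then live on the consumer side: the subscriber decides how many items to request at a time.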
I am using a complex state engine system built with Qt 5.4 (using custom state engine classes).
Part of that code is logging of events, transitions, etc. It is very important for me to log all events the engine/state objects are receiving so I can completely track what is happening in the state engines.
For most event types logging is easy. However, I have failed to log queued connections (i.e. meta-call events). QMetaCallEvent is private, so there is not much I can do. Still, it is hard to believe that such an integral part of Qt cannot be inspected properly.
Is there some way I missed that allows logging queued connections (including signal name, slot name, sender name, receiver name, and arguments if possible)?
Install an event filter and intercept events with ev->type() == QEvent::MetaCall. All the members are visible in the debugger.
Need access to private headers? Use QT += core-private in your .pro file.
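A minimal sketch of that suggestion (the class and object names are mine):

    #include <QCoreApplication>
    #include <QEvent>
    #include <QDebug>

    // Logs every queued slot invocation (QEvent::MetaCall) delivered to
    // the objects this filter is installed on. Without the private headers
    // we can only record that a meta call arrived, not its contents.
    class MetaCallLogger : public QObject {
    public:
        explicit MetaCallLogger(QObject *parent = nullptr) : QObject(parent) {}

    protected:
        bool eventFilter(QObject *watched, QEvent *event) override {
            if (event->type() == QEvent::MetaCall)
                qDebug() << "queued meta call delivered to" << watched;
            return QObject::eventFilter(watched, event); // never swallow it
        }
    };

    // Usage:
    //   auto *logger = new MetaCallLogger(receiver);
    //   receiver->installEventFilter(logger);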
It's hard to believe that nobody reads documentation.
There is no official API that allows doing what I intend.
Inspecting QMetaCallEvent objects (using private framework headers) is a bad idea. First, they are private (and may break your code at any time); second, the QMetaCallEvent sender() pointer may be invalid if the sender was deleted immediately, and I could not find a clean way to inspect events in such cases.
The approach I am using now is totally different. Instead of inspecting the arriving event objects, I use a modified variant of QSignalSpy that can do more than the original class and helps log the signal emissions using secondary connections.
In my situation this seems feasible even if it is pretty complicated and not a universal solution. At least no private headers are involved.
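The gist of that approach, reduced to a hedged sketch (the Sender class and its signal are illustrative names, and this uses a plain connect() rather than the modified QSignalSpy):

    #include <QObject>
    #include <QDebug>

    // An object whose emissions we want to log; illustrative only.
    class Sender : public QObject {
        Q_OBJECT
    signals:
        void valueChanged(int newValue);
    };

    // Attach a secondary, purely observational connection to the same
    // signal; the real receivers stay untouched and no events are inspected.
    inline void attachSignalLogger(Sender *sender) {
        QObject::connect(sender, &Sender::valueChanged, sender,
                         [sender](int value) {
                             qDebug() << sender->objectName()
                                      << "emitted valueChanged(" << value << ")";
                         });
    }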
I'll try to keep the case as general as possible here: I'm writing a C++/CX application for Windows Phone 8.1 that manages a state, which is changed in reaction to input coming, in turn, from different sources (e.g. the app UI or the network). I want to use an approach with a program loop that will, for each source, wait for input from it and then modify the state accordingly. The problem I'm having is that I could not find a good way to mirror the behavior of the await mechanism in C++/CX. Tasks seem to be the way to handle asynchronous data processing in C++/CX, but as far as I understand they are used for waiting for the results of a well-defined operation, whereas I need to wait for an asynchronous event to happen and then act appropriately depending on the type of the event.
Is there an appropriate language construct, or a way to utilize tasks, to be used in this case?
Should I make use of basic multi-threading mechanisms, like semaphores, instead?
Alternatively, should I abandon this approach and handle state changes with events, securing the state from being otherwise modified?
Thanks in advance.
PySide and PyQt employ the Qt signal-slot mechanism, with which we can connect any/multiple signals to any/multiple slots as long as the transmitted data types match.
The signalling object has some knowledge of the receiving slots, e.g. it knows their number via the receivers() method, and a signal can detach from the receiving slots via its disconnect() method.
My problem relates to the opposite direction: does a slot know to which signals it is connected? Can a slot disconnect from those signals?
UPDATE: So why am I asking this: I have an object that performs some calculation. The calculation is defined by a user-editable Python expression. The expression is parsed and the necessary data sources are identified from it... The calculation object (acting as a slot) then connects to these data sources (which act as signals), and once a data source produces/updates a value, this fact is signalled to the slot and the expression is reevaluated. And when the expression is changed by the user, it needs to be parsed again, disconnected from the existing signals (i.e. data sources), and connected to the new data sources. You can imagine it as something like a formula in Excel that is connected to other cells.
I know there are a few ways to work around this, e.g. keeping track of connections manually (well, this is extra work) or deleting the expression object and creating a new one every time it is changed (this seems not good enough, because the user might want to trace back the calculation data sources, and that would not help). But I was curious whether this can be solved purely using the simple signal-slot mechanism. In other words, I am not interested in any workarounds... I know of them and will use them if signals and slots cannot help here. :)
The approach you propose forces a very close relationship between the concrete data widgets and the calculation engine. You mingle UI with the calculations. This makes it much harder than it needs to be.
What you could try instead is the model-view approach. The model would be a simple implementation of QAbstractTableModel. The view would be the individual data-entry widgets mapped to the model's cells using QDataWidgetMapper. The calculation engine would access only the model, completely unaware of how the model is modified by the widgets. This makes life easier.
The calculation object can connect to the model's single dataChanged signal and it will be notified of changes to any of the variables. You can easily pass both the value and the variable name by having two columns in the table.
The implementation of the model can be very simple, you can have a list of strings for the variable names in the first column, and a list of variants for the second column. The model must correctly emit the dataChanged signal whenever setData is called, of course.
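A hedged sketch of such a model in PySide (the class layout and names are mine; PyQt needs only different imports, and recalculate below is a hypothetical callback):

    from PySide.QtCore import QAbstractTableModel, QModelIndex, Qt

    class VariableModel(QAbstractTableModel):
        """Two columns: variable name (0) and its current value (1)."""

        def __init__(self, names, parent=None):
            super(VariableModel, self).__init__(parent)
            self._names = list(names)
            self._values = [None] * len(self._names)

        def rowCount(self, parent=QModelIndex()):
            return len(self._names)

        def columnCount(self, parent=QModelIndex()):
            return 2

        def data(self, index, role=Qt.DisplayRole):
            if role in (Qt.DisplayRole, Qt.EditRole):
                if index.column() == 0:
                    return self._names[index.row()]
                return self._values[index.row()]
            return None

        def setData(self, index, value, role=Qt.EditRole):
            if role == Qt.EditRole and index.column() == 1:
                self._values[index.row()] = value
                self.dataChanged.emit(index, index)  # single notification point
                return True
            return False

        def flags(self, index):
            flags = super(VariableModel, self).flags(index)
            if index.column() == 1:
                flags |= Qt.ItemIsEditable
            return flags

    # The calculation engine needs exactly one connection, no matter how
    # many variables or widgets exist:
    #   model = VariableModel(["x", "y", "z"])
    #   model.dataChanged.connect(lambda *args: recalculate(model))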
It is hard for me to understand the difference between signals and events in Qt. Could someone explain?
An event is a message encapsulated in a class (QEvent) which is processed in an event loop and dispatched to a recipient that can either accept the message or pass it along to others to process. Events are usually created in response to external system events, such as mouse clicks.
Signals and Slots are a convenient way for QObjects to communicate with one another and are more similar to callback functions. In most circumstances, when a "signal" is emitted, any slot function connected to it is called directly. The exception is when signals and slots cross thread boundaries. In this case, the signal will essentially be converted into an event.
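A small sketch of that direct-call behaviour (Counter is an illustrative class, close to the classic Qt documentation example):

    #include <QObject>

    class Counter : public QObject {
        Q_OBJECT
    public:
        int value() const { return m_value; }

    public slots:
        void setValue(int value) {
            if (value != m_value) {
                m_value = value;
                emit valueChanged(value);   // with a direct connection, the
            }                               // connected slots run right here
        }

    signals:
        void valueChanged(int newValue);

    private:
        int m_value = 0;
    };

    // Usage:
    //   Counter a, b;
    //   QObject::connect(&a, &Counter::valueChanged, &b, &Counter::setValue);
    //   a.setValue(12);  // b.setValue(12) has already run when this returns
    //   // If a and b lived in different threads, the same emit would instead
    //   // post an event to b's event loop (a queued connection).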
Events are something that happened to or within an object. In general, you would treat them within the object's own class code.
Signals are emitted by an object. The object is basically notifying other objects that something happened. Other objects might do something as a result, or not, but it is not the emitter's job to deal with that.
My impression of the difference is as follows:
Say you have a server device, running an infinite loop, listening to some external client Events and reacting to them by executing some code.
(It can be a CPU listening to interrupts from devices, or client-side JavaScript browser code listening for user clicks, or server-side website code listening for users requesting web pages or data.)
Or it can be your Qt application, running its main loop.
I'll be explaining with the assumption that you're running Qt on Linux with an X-server used for drawing.
I can distinguish 2 main differences, although the second one is somewhat disputable:
Events represent your hardware and are a small finite set. Signals represent your Widgets-layer logic and can be arbitrarily complex and numerous.
Events are low-level messages coming to you from the client. The set of Events is a strictly limited set (~20 different Event types), determined by the hardware (e.g. mouse click/double-click/press/release, mouse move, keyboard key pressed/released/held, etc.) and specified in the protocol of interaction (e.g. the X protocol) between the application and the user.
For example, at the time the X protocol was created there were no multitouch gestures; there were only a mouse and a keyboard, so the X protocol won't understand your gestures and send them to the application; it will just interpret them as mouse clicks. That is why extensions to the X protocol are introduced over time.
X events know nothing about widgets; widgets exist only in Qt. X events know only about X windows, which are the very basic rectangles that your widgets consist of. Your Qt events are just a thin wrapper around X events/Windows events/Mac events, providing a compatibility layer between the different operating systems' native events for the convenience of Widget-level logic authors.
Widget-level logic deals with Signals, because they carry the Widget-level meaning of your actions. Moreover, one Signal can be fired due to different Events, e.g. either a mouse click on the "Save" menu button or a keyboard shortcut such as Ctrl+S.
Abstractly speaking (this is not exactly about Qt!), Events are asynchronous by nature, while Signals (or hooks in other terms) are synchronous.
Say you have a function foo() that can either fire a Signal or emit an Event.
If it fires a Signal, the Signal's handlers are executed in the same thread of code as the function that caused it, right after the function.
On the other hand, if it emits an Event, the Event is sent to the main loop, and it depends on the main loop when it delivers that Event to the receiving side and what happens next.
Thus 2 consecutive Events may even get delivered in reverse order, while 2 consecutively fired Signals remain consecutive.
Terminology is not strict, though. "Signals" in Unix, as a means of interprocess communication, would better be called Events, because they are asynchronous: you raise a signal in one process and never know when the receiving process will get control and execute the signal handler.
P.S. Please forgive me if some of my examples are not absolutely correct to the letter. They are still good in spirit.
An event is passed directly to an event handler method of a class. These handlers are available for you to override in your subclasses, so you can choose how to handle the event differently. Events also pass up the chain from child to parent until someone handles them or they fall off the end.
Signals, on the other hand, are openly emitted, and any other entity can opt to connect and listen to them. When a queued connection is involved (for example, across threads), they pass through the event loop and are processed from a queue; within the same thread they are, by default, handled directly as an immediate call.
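For illustration, a hedged sketch combining both halves of that description (the widget class and the chosen key are mine):

    #include <QWidget>
    #include <QKeyEvent>

    class EscapeAwareWidget : public QWidget {
        Q_OBJECT
    signals:
        void escapePressed();                    // anyone may connect to this

    protected:
        // Event side: override the handler the event dispatcher calls.
        void keyPressEvent(QKeyEvent *event) override {
            if (event->key() == Qt::Key_Escape) {
                emit escapePressed();            // signal side: just announce it
                event->accept();                 // stop the event here
            } else {
                QWidget::keyPressEvent(event);   // unhandled: bubbles up to parent
            }
        }
    };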