Decoupling a QStateMachine from the UI

I made this post on the Qt Forum as well, but I haven't gotten any follow-up, so I'm hoping to continue the discussion here. Here are the relevant parts of my post and the response.
My Post:
I have the following design: A main window which has a widget, and
inside that widget different widgets of type "Page" can be loaded. The
first page, let's call it the startup page, allows the user to type in
the serial number of the product. The database is then queried to get
some info about the product itself, and then a concrete child class
representing the type of "Product" is created. All child product
classes inherit from an abstract class, Product.
Basically, when the user selects the product by entering a serial
number, a product factory is called to create the concrete object and
then check if it's physically connected to the system via a bus. This
created product will then be passed to and used by other pages in
somewhat of a wizard-like fashion.
Response:
The biggest problem I see [...] is that the ui
is kinda driving your logic. Typically that's a bad idea and down the
line leads to problems that are hard to foresee in the planning stage.
The way I would approach it is that your app is a state machine.
There's a user input stage, a product lookup stage (which may fail, from
what you describe), a product view/editing stage, etc. Transitions
between those stages would dictate a ui change, for example switching
from product id input to product lookup would switch ui page from
input form to a wait indicator, and finishing the query would move
to either a success or a fail state, which would show the product page
or an error page accordingly. The product object would then be governed by
the state machine state that handles it. Lookup would produce an
object and pass it to the view/edit state and that state would free it
when transitioning to another state. Your logic would be better
encapsulated this way and independent of the ui, which you may want to
change someday or make optional, e.g. driving the state machine in a
silent mode, from app params, or something else. The ui would just send a
signal that would change the machine state e.g. entering a number
would transition the machine from input to lookup, which in turn could
be connected to page change.
The issue I'm having here is that I'm not seeing how this actually decouples my application logic from my UI. The QStateMachine would be configured to transition between states based on button clicks, which inherently couples it to my UI. If I wanted some kind of "headless" mode, I'd basically have to reconfigure my entire state machine to attach to signals from different objects. I suppose I could try to abstract this by having a builder-type class that configures all the state transitions based on what mode I'm in.
Maybe this indicates I would need some sort of intermediate object which is used whether or not I have a UI, and the state machine connects to signals from that object instead. My different widgets would then contain a reference to this same object and utilize it.
Basically, I am asking how I can decouple my state machine from the UI when it seems it's coupled by design, since what's driving the state machine is the UI itself.

The QStateMachine class and its related classes are not coupled “by design” to the UI. They could be used to implement state machines in any layer of the app.
The response you received just advocates for a better separation of concerns.
Like you already said (about the “headless” mode), you can pack your business logic into a component that drives such a state machine to handle the flow of your process. You can of course also have other state machines, in the UI, to handle model updates, user interaction and page navigation.
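A minimal sketch of that separation, assuming Qt/C++ (all class, signal and slot names below are hypothetical, not taken from your code): the QStateMachine lives inside a plain QObject "workflow" class, and every transition is wired to that object's own signals. A widget's clicked() handler, an app-params "silent mode" driver, or a unit test can all call the same slot, so the machine's configuration never mentions the UI.

#include <QObject>
#include <QStateMachine>
#include <QState>
#include <QString>

class ProductWorkflow : public QObject
{
    Q_OBJECT
public:
    explicit ProductWorkflow(QObject *parent = nullptr) : QObject(parent)
    {
        auto *input  = new QState(&m_machine);
        auto *lookup = new QState(&m_machine);
        auto *view   = new QState(&m_machine);

        // Transitions fire on the workflow's own signals, never on widget signals.
        input->addTransition(this, &ProductWorkflow::serialSubmitted, lookup);
        lookup->addTransition(this, &ProductWorkflow::lookupSucceeded, view);
        lookup->addTransition(this, &ProductWorkflow::lookupFailed, input);

        m_machine.setInitialState(input);
        m_machine.start();  // the machine runs once the event loop is up
    }

public slots:
    // Single entry point shared by every front end (button, CLI, test).
    void submitSerial(const QString &serial)
    {
        Q_UNUSED(serial)
        emit serialSubmitted();
        // ... call the product factory / database lookup here, then emit
        // lookupSucceeded() or lookupFailed() depending on the outcome.
    }

signals:
    void serialSubmitted();
    void lookupSucceeded();
    void lookupFailed();

private:
    QStateMachine m_machine;
};

The UI then merely connects a button to submitSerial() and page changes to the states' entered() signals; a headless mode calls submitSerial() directly with the same effect.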

Related

Something in ngrx (redux pattern) that I still don't get for large applications

I've been building data-driven applications for about 18 years, and for the past two I've been successfully using Angular for my large forms/CRUD based apps. You know, the classic SQL Server DB with hundreds of tables with millions of records. So far, so good.
Now I'm porting/re-engineering a desktop app with about 50 forms, all complex, all fully functional, "smart". My approach for the last couple of years was to simply work tightly with the backend REST API to retrieve, insert or update data as needed, and everything works fine.
Then I stumbled across ngrx and I understand exactly how it works, what it does and why it is good for a "reactive" app.
My problem is the following: in the usual lifecycle of the kind of systems I mentioned, you always have to deal with fresh data and always have to tell everything to the server. Almost no data in such apps can be safely "stored" locally, since transactional systems rely on centralized data interactions. There's no such thing as "hey, let's keep this employee's sales here for later use".
So why would it be so important to manage a local 'store' when most of my data is volatile? I understand why it would be useful for global app data like the user profile or general UI-related state, but for the core data itself? I don't get it. You query for data, plug that data into the form, it gets processed by the user and sent back to the server. That data is no longer needed, and if you do need it, you ask for it again, as it could have changed its state since the last time you interacted with it.
I do not understand the great lengths I have to go to to maintain a local store, and all the boilerplate, if that state is so volatile.
They say change detection does not scale, but I've built some really large web apps with a simple "http service" pattern and it works just fine, because most of the component tree is destroyed anyway as you go somewhere else in the app, and any previous subscriptions become useless. Even with large, bulky, quirky forms, the inner workings of a form are never that big of a problem as to require external "aid" from a store. The way I see it, the "state" of a form is a concern of that form in that moment alone. Is it to keep the component tree in sync? I never had problems with that before... even for complicated trees with lots of shared data, master-detail is kind of a flat pattern in the end if all the data is there.
For other components, such as grids, charts, reports, etc., the same thing applies. They get the data they need and then "poof", gone.
So now you see my mindset. I AM trying to change it to something better. What am I missing about the redux pattern?
I have a bit of experience here! It's all subjective, so what I've done may not suit you. My system is a complex one that sounds like it's on a similar scale to yours. I battled at first with the same issues of "why build complex logic on the front end and the back end" and "why bother keeping stuff in state".
A redux/NGRX approach works for me because there are multiple ways data can be changed - perhaps it's a single user using the front end, perhaps it's another user making a change and I want to respond to that change straight away to avoid concurrency issues down the track. Perhaps there are multiple parts within my front end that can manipulate the same data.
At the back end, I use a CQRS pattern instead of a traditional REST API. Typically, one might suggest re-implementing the commands/queries to "reduce" changes to the state; however, I opted for a different approach. I don't just want to send a big object graph back to the server and have it blindly insert, and I don't want to re-implement logic on the client and the server.
My basic "use case" life cycle looks a bit like:
Load a list of data (limited size, not all attributes).
User selects item from list
Client requests "full" object/view/dto from server
Client stores response in object entity state
User starts modifying data
These changes are stored as "in progress" changes in a different part of the state. The system now responds to the data in the "in progress" part.
If another change comes in from the server, it doesn't overwrite the "in progress" data, but it does replace what is in the object entity state.
If required, UI shows that the underlying data has changed / is different to what user has entered / whatever.
User clicks the "perform action" button, or otherwise triggers a command to be sent to the server
Server performs the command; any errors are returned, or success
Server notifies the client that the change was successful; the client clears the "in progress" information
Server notifies the client that entity X has been updated; the client re-requests entity X and puts it into the object entity state. This notification is sent to all connected clients, so they can all behave appropriately.
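The split between the object entity state and the "in progress" edits is the crux of that lifecycle. Here is a rough, framework-neutral sketch of the idea (written in C++ only for concreteness; it is not ngrx, and the type and field names are made up):

#include <map>
#include <optional>
#include <string>

// Hypothetical entity; 'version' stands in for whatever concurrency token you use.
struct Customer { std::string name; int version = 0; };

struct Store {
    std::map<int, Customer> entities;    // last known server state, keyed by id
    std::map<int, Customer> inProgress;  // local, unsaved edits, keyed by id

    // Server push: refresh the snapshot without touching pending edits.
    void applyServerUpdate(int id, const Customer &fresh) { entities[id] = fresh; }

    // What the UI binds to: a pending edit wins over the snapshot.
    std::optional<Customer> view(int id) const {
        if (auto it = inProgress.find(id); it != inProgress.end()) return it->second;
        if (auto it = entities.find(id); it != entities.end()) return it->second;
        return std::nullopt;
    }

    // Command succeeded on the server: drop the local edit; the refreshed
    // entity arrives later via applyServerUpdate() when the server notifies us.
    void commandSucceeded(int id) { inProgress.erase(id); }

    // Lets the UI flag that the underlying data changed under an open edit.
    bool underlyingChanged(int id) const {
        auto e = entities.find(id);
        auto p = inProgress.find(id);
        return e != entities.end() && p != inProgress.end()
            && e->second.version != p->second.version;
    }
};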

Best approach to wait until all service calls have returned values in Flex PureMVC

I am writing an Adobe AIR application using PureMVC.
Imagine that I have a page-based application view (using ViewStack), and the user navigates through these pages in some way (like clicking a button or whatever).
Now, for example, I have an Account Information page which, when instantiated or shown again, needs to load data from a WebService (for example email, account balance and username), and when the data is returned I want to show it on my Account Information page in the proper labels.
The problem is that when I execute these three web calls, each of them will return a different result event at a different time. I am wondering what the best way is to get the information that ALL of the service calls have returned results, so I know that I can finally show all the results at once (and maybe play some loading screen before this happens).
I really don't know much about PureMVC, but the as3commons-async library is great for managing async calls and should work just fine in any framework setup:
http://as3commons.org/as3-commons-async/
In your case, you could create 3 classes implementing IOperation or IAsyncCommand (depending on whether you plan to execute the operations immediately or deferred) encapsulating your RPCs.
After that is done, you simply create a new CompositeCommand and add the operations to its queue.
When all are done, the CompositeCommand will fire an OperationEvent.COMPLETE.
BTW, the library even includes some pre-implemented common Flex operations, such as HTTPRequest, when you download the as3commons-async-flex package as well.
I would do it in this way:
Create a proxy for each of three information entities (EMailProxy, BalanceProxy, UsernameProxy);
Create a delegate class which handles the interaction with your WebService (something like "public class WSConnector implements IResponder{...}"), which is used by the proxies to call the actual WS methods;
Create a proxy which coordinates all the three results (CoordProxy);
Choose a mediator which will coordinate all the three calls (for example it could be done by your ApplicationMediator);
Create notification constants for all proxy results (GET_EMAIL_RESULT, GET_BALANCE_RESULT, GET_USERNAME_RESULT, COORD_RESULT);
Let the ApplicationMediator get all 4 notifications;
It is important that you not only wait for all three results but also be ready for errors and their interpretation. That is why a simple counter could be too weak (see the sketch after this answer).
The overall workflow could look like this:
The user initiates the process;
Some mediator gets an event from your GUI-component and sends a notification like DO_TRIPLECALL;
The ApplicationMediator catches this notification, resets the state of the CoordProxy and calls all 3 methods from your proxies (getEMail, getBalance, getUsername).
The responses are coming asynchronously. Each proxy gets its response from the delegate, changes its own data object and sends an appropriate notification.
The ApplicationMediator catches those notifications and changes the state of the CoordProxy. When all three responses are there (maybe not all successful), the CoordProxy sends a notification with the overall result.
I know it is not the best approach to do such an interaction through mediators. The initial idea was to use commands for all "business logic" decisions. But it can be too tedious to create all that bureaucracy.
I hope it can help you. I would be glad to know your solution and discuss it here.
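To make the "a simple counter could be too weak" point concrete, here is a rough sketch of the state the CoordProxy needs to track (in C++ rather than ActionScript, with made-up names): one outcome per expected call, so the aggregate notification can distinguish "all succeeded" from "all answered, but some faulted".

#include <functional>
#include <map>
#include <string>

enum class Outcome { Pending, Success, Fault };

class CallCoordinator {
public:
    using Done = std::function<void(const std::map<std::string, Outcome> &)>;

    explicit CallCoordinator(Done done) : m_done(std::move(done)) {}

    // Register each call before starting it, e.g. expect("getEMail").
    void expect(const std::string &call) { m_state[call] = Outcome::Pending; }

    // Called from each proxy's result or fault handler.
    void report(const std::string &call, bool ok) {
        m_state[call] = ok ? Outcome::Success : Outcome::Fault;
        for (const auto &kv : m_state)
            if (kv.second == Outcome::Pending)
                return;             // still waiting for at least one response
        m_done(m_state);            // every call has answered or faulted
    }

private:
    std::map<std::string, Outcome> m_state;
    Done m_done;
};

The done callback plays the role of the COORD_RESULT notification; since the per-call outcomes are preserved, the receiver can decide whether to show the account page, a partial-failure message, or a retry prompt.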

UITableViewCells and NSURLConnection

I have a custom UITableViewCell, and when the user clicks a button I make a request to a server and update the cell. I do this with an NSURLConnection, and it all works fine (this is all done inside the cell class); once it returns, it fires a delegate method and the table view controller handles this. However, when I create the cell in the table view, I use the dequeue method and reuse my cells. So if a cell has fired an asynchronous NSURLConnection, and the cell gets reused whilst this is still going on, will this in turn erase the current connection? I just want to make sure that if the cell is reused, the actual memory that was assigned to the cell is still there so the connection can fulfill its duty.
You can customize the behavior of a UITableViewCell by subclassing it and overriding the -prepareForReuse method. In this case, I would recommend destroying the connection when the cell is dequeued. If the connection should still keep going, you'll need to remove the reference to it (set it to nil) and handle its delegate methods elsewhere.
It's never a good idea to keep, in a cell, a reference to a connection or to any data that you want to display, no matter how much effort you put in afterward to work around the problems that arise. Your approach will never work reliably.
In your case, if the user quickly scrolls the table view up and down, your app will start and possibly cancel dozens of connections and yet never finish loading anything. That will be an awful user experience, and it may crash the app.
Better to design your app with MVC in mind: the cell is just a means to display your model data, nothing else. It's the View in this architectural design.
For that purpose, the Table View Delegate needs to retrieve the Model's properties which shall be displayed for a certain row and set up the cell. The model encapsulates the network connection. The Controller takes the role of managing and updating change notifications and processing user input.
A couple of Apple samples provide much more details about this topic, and there is a nice introduction about MVC, please go figure! ;)
http://developer.apple.com/library/ios/#documentation/general/conceptual/devpedia-cocoacore/MVC.html
The "Your Second iOS App: Storyboards" also has a step by step explanation to create "Data Controller Classes". Quite useful!
Now, when using an NSURLConnection which updates your model, it might become a bit more complex. You are dealing with "lazily initializing models". That is, they may provide some "placeholder" data when the controller accesses a property whose "real" data is not yet available. The model, however, starts a network request to load it. When it is eventually loaded, the model must somehow notify the Table View Controller. The tricky part here is not to create synchronization issues between the model and the table view. The model's properties must be updated on the main thread, and while this happens, it must be guaranteed that the table view will not access the model's properties. There are a few samples which demonstrate techniques to accomplish this.

How to use QSettings?

If you have a lot of small classes (that are often created and destroyed), and they all depend on the settings, how would you do that?
It would be nice not to have to connect each one to some kind of "settings changed" signal, and even if I did, every object would be notified of updates, even those whose settings didn't change.
When faced with that myself, I've found it's better to control saving/loading of the settings from a central place. Do you really need to save/load the settings on a regular basis, or can you have a master object (likely with a list of the sub-objects) control when saves actually need to be done? Or, worst case, as the objects are created and destroyed, have them update an in-memory settings map in the parent collection, which saves when it thinks it should, rather than on child-object destruction.
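A small sketch of that idea, assuming Qt/C++ (the class and the organization/application names are hypothetical): children read and write an in-memory map, and only the parent decides when everything is flushed to QSettings.

#include <QSettings>
#include <QString>
#include <QVariant>
#include <QVariantMap>

class SettingsHub {
public:
    // Children update values in memory; nothing touches the disk here.
    void set(const QString &key, const QVariant &value) { m_pending[key] = value; }
    QVariant get(const QString &key, const QVariant &fallback = QVariant()) const {
        return m_pending.value(key, fallback);
    }

    // Called at the parent's discretion (app exit, a timer, an explicit
    // "Save" action) rather than on every child-object destruction.
    void flush() const {
        QSettings s("MyOrg", "MyApp");  // hypothetical organization/app names
        for (auto it = m_pending.cbegin(); it != m_pending.cend(); ++it)
            s.setValue(it.key(), it.value());
    }

private:
    QVariantMap m_pending;
};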
One way of implementing it is given below.
If you wish, the central location for the settings can be a singleton derived from QAbstractItemModel, and you can easily connect the dataChanged(...) signal to various objects as necessary, to receive notifications about changed settings. Those objects can decide whether an applicable setting was changed. By judicious use of helper and shim classes, you can make it very easy to connect your "small classes" with notifications. This would address two issues inherent in the model-driven approach to settings:
The possibly large number of subscribers, all receiving notifications about the settings they usually don't care about (the filtering issue).
The extra code needed to connect the subscriber with the item model, and the duplication of information as to what indices of the model are relevant (the selection issue).
Both filtering and selection can be dealt with by a shim class that receives all of the dataChanged notifications, and for each useful index maintains a list of subscribers. Only the slots of the "interested" objects would then be invoked. This class would maintain the list of subscriber-slot pairs on its own, without offering any signals for others to connect to. It'd use invokeMethod or a similar mechanism to invoke the slots.
The selection issue can be dealt with by observing that the subscriber classes will, upon initialization, query the model for initial values of all of the settings that affect their operation - that they are "interested" in. All you need is a temporary proxy model that you create for the duration of the initialization of the subscriber. The proxy model takes the QObject* instance of the caller, and records all of the model indices that were queried (passing them onto the singleton settings model). When the proxy model is finally destroyed at the return from the initialization of the subscriber class, it feeds the information about the model indices for this QObject to the singleton. A single additional call is needed to let the singleton know about the slot to call, but that's just like calling connect().
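A bare-bones sketch of such a shim, assuming Qt/C++ (the class name and the std::function callback are my choices, and real code would also need unsubscription): it receives every dataChanged() and forwards the new value only to subscribers registered for an index inside the changed range.

#include <QAbstractItemModel>
#include <QMultiHash>
#include <QObject>
#include <QPersistentModelIndex>
#include <QPointer>
#include <QVariant>
#include <functional>

class SettingsShim : public QObject
{
    Q_OBJECT
public:
    explicit SettingsShim(QAbstractItemModel *settings, QObject *parent = nullptr)
        : QObject(parent), m_settings(settings)
    {
        connect(settings, &QAbstractItemModel::dataChanged,
                this, &SettingsShim::onDataChanged);
    }

    // A subscriber registers only for the indices it cares about.
    void subscribe(const QModelIndex &index, QObject *receiver,
                   std::function<void(const QVariant &)> callback)
    {
        m_subs.insert(QPersistentModelIndex(index),
                      Sub{QPointer<QObject>(receiver), std::move(callback)});
    }

private slots:
    void onDataChanged(const QModelIndex &topLeft, const QModelIndex &bottomRight)
    {
        for (auto it = m_subs.cbegin(); it != m_subs.cend(); ++it) {
            const QModelIndex idx = it.key();
            if (!it.value().receiver)       // subscriber has been deleted
                continue;
            // Invoke the callback only if idx lies in the changed range.
            if (idx.parent() == topLeft.parent()
                && idx.row() >= topLeft.row() && idx.row() <= bottomRight.row()
                && idx.column() >= topLeft.column()
                && idx.column() <= bottomRight.column())
                it.value().callback(m_settings->data(idx));
        }
    }

private:
    struct Sub { QPointer<QObject> receiver; std::function<void(const QVariant &)> callback; };
    QAbstractItemModel *m_settings;
    QMultiHash<QPersistentModelIndex, Sub> m_subs;
};

The temporary proxy model described above would then boil down to recording the indices queried during a subscriber's initialization and issuing the corresponding subscribe() calls.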
What I typically do is to create a class handling all my settings, with one instance per thread. To make it quicker to write this class, I implemented a couple of macros that let me define it like this:
L_DECLARE_SETTINGS(LSettingsTest, new QSettings("settings.ini", QSettings::IniFormat))
L_DEFINE_VALUE(QString, string1, QString("string1"))
L_DEFINE_VALUE(QSize, size, QSize(100, 100))
L_DEFINE_VALUE(double, temperature, -1)
L_DEFINE_VALUE(QByteArray, image, QByteArray())
L_END_CLASS
This class can be instantiated once per thread and then used without manually writing anything to the file. A particular instance of this class, returned from LSettingsTest::notifier(), emits signals when each property changes. If exposed to QML, each property works as a regular Qt property and can be assigned, read and used in bindings.
I wrote more info here. This is the GitHub repo where I placed the macros.

DDD along with collections

Suppose I have these domain classes:
public class Country
{
string name;
IList<Region> regions;
}
public class Region
{
string name;
IList<City> cities;
}
etc.
And I want to model this in a GUI in the form of a tree.
public class Node<T>
{
T domainObject;
ObservableCollection<Node<T>> childNodes;
}
public class CountryNode : Node<Country>
{}
etc.
How can I automatically retrieve Region list changes for Country, City list changes for Region etc.?
One solution is to implement INotifyPropertyChanged on the domain classes and change IList<> to ObservableCollection<>, but that feels kinda wrong, because why should my domain model have the responsibility of notifying about changes?
Another solution is to put that responsibility upon the GUI/presentation layer: if some action led to adding a Region to a Country, the presentation layer should add the new region to both CountryNode.ChildNodes and the domain Country.Regions.
Any thoughts about this?
Rolling INotifyPropertyChanged into a solution is part of implementing eventing in your model. By nature, eventing itself is not out of line with the DDD mantra. In fact, it is one of the things that Evans has implied was missing from his earlier material. I don't remember exactly where, but he does mention it in this video: "What I've learned about DDD since the book".
In and of itself, implementing an eventing model is actually legitimate in the domain, because it introduces a decoupling between the domain and the other code in your system. All you are doing is saying that your domain has the capability of notifying interested parties of the fact that something has changed. It is the responsibility of the subscriber, then, to act upon it. I think that where the confusion lies is that you are only using an INotifyPropertyChanged implementation to relay back to an event sink. That event sink will then notify any subscribers, via a registered callback, that "something" has happened.
However, with that being said, your situation is one of the "fringe" scenarios that eventing does not apply all that well to. You are looking to see if you need to repopulate a UI when the list itself changes. At work, the current solution that we have uses an ObservableCollection. While it does work, I am not a fan of it. On the flip side, publishing the fact that one or more items in the list have changed is also problematic. For example, how would you measure the entropy of the list to best determine that it has changed?
Because of this, I would actually consider this not to be a concern of the domain. I do not think that what you are describing would be a requirement of the domain owner(s), but rather an artifact of the application architecture. If you were to query the services in the domain after the point that the change is made, they would return correctly, and the application would still be what is out of step. There is nothing actually wrong inside the world of the domain at this moment in time.
So, there are a few ways that I could see this being done. The most inefficient way would be to continually poll for changes directly against the model. You could also consider having some sort of marker that indicates that the list is dirty, although not using the domain model to do so. Once again, not that clean of a solution. You can apply those principles outside of the domain, however, to come up with a working solution.
If you have some sort of a shared caching mechanism, i.e. a distributed cache, you could implement a JIT-caching/eviction approach where inserts and updates would invalidate the cache (i.e. evict the cached items) and a subsequent request would load them back in. Then, you could actually put a marker into the cache itself that would indicate when that item or items were rebuilt. For example, if you have a cache item that contains a list of IDs for regions, you could store the DateTime at which it was JIT-ed along with it. The application could then keep track of which JIT-ed version it has, and only reload when it sees that the version has changed. You still have to poll, but it eliminates putting that responsibility into the domain itself, and if you are just polling for a smaller bit of data, it is better than rebuilding the entire thing every time.
Also, before overarchitecting a full-blown solution, address the concern with the domain owner. It may be perfectly acceptable to him/her/them that you simply have a "Refresh" button or menu item somewhere. It is about compromise as well, and I am fairly sure that most domain owners would prefer core functionality over fixing certain types of issues.
About eventing: the most valuable material I've seen comes from Udi Dahan and Greg Young.
One nice implementation of eventing which I'm eager to try out can be found here.
But start by understanding whether that's really necessary, as Joseph suggests.
