What is the best way to represent related models in Qt?

I have a Qt application built on top of a library that manages an SQL database. I have several models (QAbstractTableModel and QAbstractListModel subclasses) that represent different tables in the database, and I have a few associated views.
The models typically hold a cache of data from the database and a pointer to a database object they can use to refresh their caches as needed. I have implemented insertRows, removeRows, and setData so that they attempt the database operation in a transaction and, only if it succeeds, update the cache inside a beginInsertRows()/endInsertRows() pair (or the removal equivalents), so that the cache never changes outside the begin/end pairs and everything stays in sync.
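In code, the pattern looks roughly like this (a minimal PySide6-style sketch; the Database object and its methods are hypothetical stand-ins for my database layer):

    # Minimal sketch of the transaction-then-cache pattern (PySide6).
    # The Database object and its methods are hypothetical stand-ins.
    from PySide6.QtCore import QAbstractTableModel, QModelIndex, Qt

    class OrdersModel(QAbstractTableModel):
        def __init__(self, db, parent=None):
            super().__init__(parent)
            self._db = db                      # hypothetical database layer
            self._cache = db.fetch_orders()    # local cache of table rows

        def rowCount(self, parent=QModelIndex()):
            return len(self._cache)

        def columnCount(self, parent=QModelIndex()):
            return 1

        def data(self, index, role=Qt.ItemDataRole.DisplayRole):
            if role == Qt.ItemDataRole.DisplayRole:
                return str(self._cache[index.row()])
            return None

        def insertRows(self, row, count, parent=QModelIndex()):
            new_rows = [self._db.default_order() for _ in range(count)]
            try:
                with self._db.transaction():   # attempt the DB operation first
                    for r in new_rows:
                        self._db.insert_order(r)
            except Exception:
                return False                   # DB failed; cache stays untouched
            # Only on success do we touch the cache, inside the begin/end pair.
            self.beginInsertRows(parent, row, row + count - 1)
            self._cache[row:row] = new_rows
            self.endInsertRows()
            return True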
So far, so good, and everything is working fine. If all changes to the database came from the GUI, I'd think this was a nice solution.
The problem is that some database operations involve changing tables that are represented by more than one model. From the application's point of view, these look like table updates originating in the database layer. I could have the database layer call up into the model layer whenever it wants to make a change (which is what I'm doing now), but I don't like the way that exposes the lower-level database layer to Qt models. It also has the problem that the models need to make their changes inside a transaction so they know whether the operations succeed or fail, and going through the model layer would prevent the database layer from enclosing multiple updates in a single transaction.
Alternatively, I could make the caches in the models the "real" data. I could then change the underlying database and inform the models after the fact, at which point they would update their caches. I think this works fine for small tables, but since it would mean caching entire tables, I don't think it works for large ones.
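To make that concrete, here is a rough sketch of the notify-after-commit idea, kept free of Qt types so the database layer stays decoupled (all names are hypothetical):

    # Rough sketch: the database layer publishes plain callbacks after a
    # committed change, so it never has to know about Qt model classes.
    class Database:
        def __init__(self):
            self._listeners = []               # callables taking a table name

        def subscribe(self, callback):
            self._listeners.append(callback)

        def commit_change(self, table, operation):
            operation()                        # run the SQL in a transaction (elided)
            for notify in self._listeners:
                notify(table)                  # models refresh their caches here

    # A model subscribes and reloads when its table is touched, e.g.:
    # db.subscribe(lambda table: model.reload() if table == "orders" else None)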
My thought is that this is a common enough problem that there may be some kind of known best practice for it. Is that the case?
Thanks.

Related

Best practice for managing / controlling object state with 2 way databinding using Polymer

Let's try this explanation again...
I'm new to Polymer (and getting back into web dev after a relatively long absence), and I'm wondering what the recommended approach might be to more closely manage object state while employing two-way data binding. I am currently consuming REST API (JSON) objects. My question is whether Polymer keeps a copy of the original object before initiating updates to the bound object's properties/attributes, so that one could easily undo the changes.
While letting two-way data binding work its magic is often desired, there are cases where I'd like to prevent or delay changes to the object/DOM until the user approves them (say, via the paper-dialog component). I suppose one could make a temporary copy of the object and bind fields to that version, and then only persist the changes back to the source object upon user approval.
In any case, I'd be interested to hear thoughts and see an example or two of recommended approaches (especially if I am off-track with my ideas!)
"I suppose one could make a temporary copy of the object and bind fields to that version, and then only persist the changes back to the source object upon user approval."
This.
Consider that view-models are essentially different from pure data-models (sometimes called business-data). Frequently, the differences are irrelevant and one can use them interchangeably. However, be aware of scenarios where the view-model is distinct (uncommitted user edits are a good example).
The notion of a field editor that requires approval from the user is purely UI/View oriented. Whatever data is managed in that modality is purely in the domain of the view, and fetches/commits to the business-data should be discrete.
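Polymer itself is JavaScript, but the commit-on-approval pattern is language-agnostic; here is a minimal Python sketch of the idea (all names are hypothetical):

    # Minimal sketch of edit-a-copy, commit-on-approval. The dialog binds to
    # `draft`; the source object changes only when the user approves.
    import copy

    class EditSession:
        def __init__(self, source):
            self.source = source
            self.draft = copy.deepcopy(source)   # what the fields bind to

        def commit(self):                        # user approved the dialog
            self.source.update(self.draft)

        def cancel(self):                        # user cancelled: reset draft
            self.draft = copy.deepcopy(self.source)

    session = EditSession({"name": "Ada", "email": "ada@example.com"})
    session.draft["email"] = "new@example.com"   # two-way binding writes here
    session.cancel()                             # source stays untouched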

Loading a Meteor client app with fake fire-and-forget data

I'm trying to figure out a good way to create tutorials for using Meteor apps. Visually, I've figured out a good approach, and packed this into a smart package:
https://github.com/mizzao/meteor-tutorials.
However, there is a second piece that turns out to be rather hard to figure out.
In many cases, a tutorial app needs to be loaded with fake data, to demonstrate the interface to the user without requiring the app to be populated with real data that may be hard to generate. (For example, see https://www.planapple.com/trip/demo/349/ which is a demo for PlanApple.) In Meteor, since the content of an app is basically defined by the contents of some collections, I see two ways to do this:
1. Maintain two sets of collections, one for the tutorial and one for the actual app. Use the first set during the tutorial and the second when the user is actually using the app.
2. Use one set of collections, filled with fake data during the tutorial via one subscription and with real data during actual use via a different subscription.
The first approach is clearly bad; it means the app cannot be written agnostic of whether it is being used as a tutorial, and it adds a lot of messy if/else reactive logic to the presentation that is otherwise unnecessary. Moreover, it would be very hard to maintain if the app has more than a few collections.
The second approach seems to be the more Meteor-esque way to do things. What we basically want is for a server publication to fill all the client collections with some fake data, and then allow the data to be manipulated in whatever way on the client side without the changes propagating to the server; the client basically gets a copy of the server's tutorial data and then makes only local changes to it which are then discarded. This boils down to two things:
1. Sending fake data down from the server to the client via a custom subscription into the same named collections as the regular app. This is definitely possible, as I've written in https://stackoverflow.com/a/18880927/586086.
2. Ignoring (on the server) any inserts, updates, and deletes issued by the client after the initial load of data, while allowing them to happen locally. This is also possible if one creates null (unnamed) collections, as in http://docs.meteor.com/#meteor_collection.
The problem is that although it's possible to do each of the two steps above separately, I want to do both: I want the data loaded into the same named collections the client would use with real data, to avoid the complicated control logic of having two sets of collections, but I also want changes to stay local-only and not propagate back over the subscription during the tutorial.
Anyone have ideas about how to do this?
A related question about whether the second part is possible: How does a Meteor database mutator know if it's being called from a Meteor.method vs. normal code?
EDIT: It seems that what we'd basically want to do in the tutorial is insert directly against the local Meteor collection, as in https://stackoverflow.com/a/19523301/586086. However, is there a way to turn this behavior on generally during the tutorial for all relevant mutators, instead of having to specify it explicitly?
I ended up implementing this myself with the partitioner package, which allows connected clients to be divided up into different slices each containing different data.
Basically, the idea is to put the user(s) into one partition while they are in the tutorial, and then into another partition when they are using the app for real. It works great with the tutorials package as well. This gives up having changes be client-local, but storing the tutorial data doesn't add much overhead and turned out to be useful in my case anyway.
An example of an app that does this is https://github.com/mizzao/CrowdMapper.
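The actual implementation is JavaScript, but the partition idea itself is easy to state; a language-agnostic Python sketch (all names are hypothetical):

    # Sketch of the partitioner idea: which documents a user sees is decided
    # entirely by the partition they are currently assigned to.
    TUTORIAL, LIVE = "tutorial", "live"

    partitions = {}                    # user_id -> partition name
    documents = {TUTORIAL: ["fake doc"], LIVE: ["real doc"]}

    def publish(user_id):
        """Stand-in for a Meteor publication: return only the slice of data
        belonging to the user's current partition."""
        return documents[partitions.get(user_id, LIVE)]

    partitions["alice"] = TUTORIAL     # entering the tutorial
    assert publish("alice") == ["fake doc"]
    partitions["alice"] = LIVE         # graduating to the real app
    assert publish("alice") == ["real doc"]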

Need some clarifications on the Model side of MVC

I guess I would need some really good explanation on some Model related concepts.
In general, does the model, as described by frameworks like Robotlegs, play the role of an application-state holder or a domain-state holder? I originally thought that models were entirely domain-based, e.g. UserModel or LocationModel, playing the same role that DAO classes play on the server. The more source code I look at, though, the more I see things like UserAccountModel, ShoppingCartModel, etc., full of properties and methods related to the state of the client application, not the domain state.
I see that people do not bother to add complex relationships to the VO classes, e.g. if a User has a lot of photos, the photos collection is obviously omitted from the UserVO class. Instead, a bunch of PhotoVO objects are loaded from the server whenever necessary, based on a service call with the user ID. Is that some sort of rule of thumb: keeping VOs as "bare" as possible in general? Doesn't that increase the number of calls that must be made to the server to fetch all the data? Moreover, doesn't it fragment the domain model in general? (An entity User class on the server will always have a photos property.)
With so many calls to the server, it is normal to fetch some objects that might already be in client storage. Does it make sense to build a client-side cache and check whether an object about to be fetched is already there, or does the overhead of fetching it again pay for itself by yielding a fully synced object from the server? After all, every object held in a client-side cache must be looked after whenever a change occurs. I personally think the overhead of re-fetching an object that has already been retrieved is not that big. Better to have fresh, synced data, I'd say.
I do not believe your question is answerable, because so many of the answers are "it depends." It depends on the application you're building and the needs of the UI.
I don't really understand your distinction between "Domain State" and "Application State." However, I believe that any "Value Object" style classes implemented in the UI should focus on holding the state of specific views. It is extremely rare that a single view has a one-to-one relationship with database tables. As such, my UI data objects may not be identical to server-side data objects, although it is very common that I will map UI objects to server-side objects using AMF. But that doesn't mean every object in the UI is implemented server-side, or that every server object is implemented in the UI.
"I see that people do not bother to add complex relationships to the VO classes"
I'm not sure where you see that; I will often do exactly this. However, it depends on what the view is supposed to display. If the view is not displaying a lot of photos related to the user, then I won't make a remote call to retrieve the user information with all their photos.
"With so many calls to the server, it is normal to fetch some objects that might already be in client storage"
It depends. In the apps I write, calls to the server are made as needed, and attempts are made to limit them as appropriate. If I have already fetched data and have it cached on the client, then I am going to try to use that cache instead of retrieving the data again.
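As a rough illustration of that fetch-through cache (a minimal Python sketch; the TTL policy is an assumption, not a recommendation):

    # Minimal fetch-through cache: reuse a cached object while it is fresh,
    # otherwise re-fetch from the server. The TTL value is arbitrary.
    import time

    class ClientCache:
        def __init__(self, fetch, ttl_seconds=30):
            self._fetch = fetch            # function(key) -> object from server
            self._ttl = ttl_seconds
            self._store = {}               # key -> (object, fetched_at)

        def get(self, key):
            hit = self._store.get(key)
            if hit and time.time() - hit[1] < self._ttl:
                return hit[0]              # fresh enough; skip the server call
            obj = self._fetch(key)         # miss or stale: go to the server
            self._store[key] = (obj, time.time())
            return obj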
I'll restate my original assessment: I think the answers to most of your questions depend on the situation and on the app. You seem to start from overly broad generalizations about how things are done, but I do not believe they are universal truths. Developers fight about application architecture issues all the time.

EF4 ASP.NET - Managing Entity Edits between HTTP Posts and Rollback

I am struggling with the following use-case:
User amends an existing order. The order is complex: lots of related 'entities' (addresses, post options, suppliers, makes, models, various items, etc.), edited across multiple HTTP POSTs.
User wants to discard the changes.
--
I have an order entity, and as the user edits it I am making various changes to the entity associations, e.g. changing order.address, order.items.add(item), and so on.
In a single post this is fine, but across posts I don't know how best to store state. If I store the entities, then I cannot save the changes, as they span different data contexts. I have read that it is bad practice to store the data context in session state, i.e. a long-lived context. I can't save changes after each edit/post because then I cannot roll back(?). I really would like to work with the entities during the editing process rather than doing one big save at the end (taking UI settings and applying them in one chunk).
This must be a pretty common problem - it's driving me mad. Any help really appreciated.
Cheers!
We have a similar problem where we are building a complex business object through a multi-page wizard.
Instead of creating a partially complete business object at each step of the wizard, we create a dedicated wizard object that looks pretty similar to the business object, and populate that through the wizard. At each step, the wizard object is saved into the database. At the end, the user can accept it and it is converted to a real business object, becoming visible to everyone else; or they can bin it, and no-one else ever knows it existed.
If this kind of approach is not suitable, I suspect you're looking at some kind of difference tracking, at either the entity or database level. Neither is simple to implement, work with, or manage in a system. The former would mean calculating and storing n changes to the entities and developing an algorithm to undo them; the latter depends on your RDBMS, but might involve versioned rows or similar.
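For what it's worth, the shape of the wizard-object approach, sketched in Python (the persistence calls are hypothetical stand-ins for whatever your data layer provides):

    # Sketch of the wizard-draft pattern: each step persists an incomplete
    # draft; only an accepted draft becomes a real, visible business object.
    class OrderWizardDraft:
        def __init__(self, draft_id, store):
            self.id = draft_id
            self.store = store             # hypothetical persistence layer
            self.fields = {}               # partially filled-in order data

        def save_step(self, **values):
            self.fields.update(values)
            self.store.save_draft(self.id, self.fields)   # survives each post

        def accept(self):
            order = self.store.create_order(self.fields)  # becomes "real" here
            self.store.delete_draft(self.id)
            return order

        def discard(self):
            self.store.delete_draft(self.id)   # no-one ever knows it existed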
Yes, this is pretty common for us. In most scenarios we use the MVC approach. Even without actual ASP.NET MVC projects, we use similar ViewModels with our Views/Pages/Scenarios etc., where there is no direct single-entity mapping to the business layer (in other words, Business.Entities). This is quite similar to DTOs.
It is always easy to use disconnected EF. We retrieve data and discard the context, then transform the entities into ViewModels/DTOs if necessary. When you need to persist changes, all you have to do is create a new context, find the latest entity instance, and apply the changes.
The Views/Pages/Controllers will manage these ViewModels/DTOs. Tracking changed and deleted content can be done by introducing a HistoryList<T> (you can extend List<T> to implement this), as sketched below.
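The real thing would be C#, but a minimal Python sketch conveys the idea (the class and its bookkeeping are illustrative, not an existing library):

    # Sketch of a HistoryList: a list that remembers what was added and
    # removed, so a later sync step can replay the changes against a new
    # context.
    class HistoryList(list):
        def __init__(self, items=()):
            super().__init__(items)
            self.added, self.removed = [], []

        def append(self, item):
            super().append(item)
            self.added.append(item)

        def remove(self, item):
            super().remove(item)
            self.removed.append(item)

    items = HistoryList(["a", "b"])
    items.append("c")
    items.remove("a")
    assert items.added == ["c"] and items.removed == ["a"]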
Once done, using a Controller/Workflow/Component you can observe the ViewModel/DTO and apply the necessary changes to your entities, using a new context to retrieve and persist them.
It involves a bit of coding, and I would say it's not a perfect solution, since it has its own pros and cons.
/KP

(LINQ-To-SQL) Creating classes first, database table second, how to auto-connect the two?

I'm creating a data model first using the LINQ-To-SQL graphical designer by using right-click->Add->Class. My idea is that I'll set up everything first using test repositories, design the entire website, then as a final step, create a database using the LINQ-To-SQL classes as a model for the database tables and relationships. My reasoning is that it's easy to edit the classes, but hard to modify DB tables (especially if there's already data in them), so by doing the database part last, it becomes much easier to design the structure.
My question is, is there an automatic way to link the two once I have the DB tables created? I know you can manually fill out the class properties for the LINQ-To-SQL entities but this is pretty cumbersome if you have a lot of tables to deal with. The other option is to delete your manually-created classes and drag the tables from the database into the designer to auto-generate the classes, but I'm not sure if this is the best way of doing it.
LINQ to SQL is intended to be a relatively thin ORM layer over a database. While you can of course just add properties to a data context and use them as a sort of mock, you are correct: it isn't really easy to work with.
Instead of relying solely on Linq to Sql generated classes to give you freedom from the database implementation, you may want to look into the repository design pattern. It allows you to have a smooth separation between your database, domain model, and your middle tier; I have used it on two projects now, and have been able to (for the most part) build everything top-down, leaving the actual database for last. Below is a link to a good tutorial on the pattern (better than I could scribble down here).
https://web.archive.org/web/20110503184234/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/10/08/the-repository-pattern.aspx
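In the spirit of that pattern, a minimal Python sketch of coding against a repository interface so a fake can stand in for the database (names are illustrative, not LINQ to SQL API):

    # Sketch of the repository pattern: the app codes against an interface,
    # so an in-memory fake can stand in until the real database exists.
    from abc import ABC, abstractmethod

    class ProductRepository(ABC):
        @abstractmethod
        def get(self, product_id): ...

        @abstractmethod
        def add(self, product): ...

    class InMemoryProductRepository(ProductRepository):
        """Test repository used while the schema is still in flux."""
        def __init__(self):
            self._items = {}

        def get(self, product_id):
            return self._items[product_id]

        def add(self, product):
            self._items[product["id"]] = product

    # Later, a database-backed class implements the same interface and is
    # swapped in without touching the UI code.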
Depending on your database permissions, you may call your DataContext's DeleteDatabase() and CreateDatabase() methods as an ungraceful way of resyncing your classes and tables. This is not much of an option once you have actual data in the database, but it does work while you are in the development stages.
Take a look at my add-in (which you can download from http://www.huagati.com/dbmltools/ ; free 45-day trial licenses are also available from the same site).
It can generate SQL-DDL diff scripts containing only the statements needed to bring your database in line with the portions that have changed in the L2S model (e.g. adding missing columns, tables, and FKs), instead of the out-of-the-box L2S behavior of recreating the entire database from scratch.
It also supports syncing the other way: updating the model from the database.
