DDD along with collections

Suppose I have domain classes:

public class Country
{
    string name;
    IList<Region> regions;
}

public class Region
{
    string name;
    IList<City> cities;
}
etc.
And I want to model this in a GUI in the form of a tree:

public class Node<T>
{
    T domainObject;
    ObservableCollection<Node<T>> childNodes;
}

public class CountryNode : Node<Country>
{}
etc.
How can I automatically retrieve Region list changes for Country, City list changes for Region etc.?
One solution is to implement INotifyPropertyChanged on the domain classes and change IList<> to ObservableCollection<>, but that feels kinda wrong: why should my domain model have the responsibility to notify changes?
Another solution is to put that responsibility on the GUI/presentation layer: if some action led to adding a Region to a Country, the presentation layer should add the new region to both the CountryNode.ChildNodes and the domain Country.Regions.
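A minimal sketch of that second option, assuming the fields above are exposed as properties and that a RegionNode wraps its Region (CountryPresenter and RegionNode are invented for the example):

// Hypothetical presenter: every edit flows through here, so the domain
// list and the UI tree are always updated together.
public class CountryPresenter
{
    private readonly Country country;
    private readonly CountryNode countryNode;

    public CountryPresenter(Country country, CountryNode countryNode)
    {
        this.country = country;
        this.countryNode = countryNode;
    }

    public void AddRegion(Region region)
    {
        // update the domain model...
        country.Regions.Add(region);
        // ...and mirror the change in the tree
        countryNode.ChildNodes.Add(new RegionNode(region));
    }
}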
Any thoughts about this?

Rolling INotifyPropertyChanged into a solution is part of implementing eventing into your model. By nature, eventing itself is not out of line with the DDD mantra. In fact, it is one of the things that Evans has implied was missing from his earlier material. I don't remember exactly where, but he mentions it in this video: What I've learned about DDD since the book.
In and of itself, implementing an eventing model is actually legitimate in the domain, because it introduces a decoupling between the domain and the other code in your system. All you are doing is saying that your domain has the capability of notifying interested parties that something has changed. It is the responsibility of the subscriber, then, to act upon it. I think that where the confusion lies is that you would only be using an INotifyPropertyChanged implementation to relay back to an event sink. That event sink will then notify any subscribers, via a registered callback, that "something" has happened.
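As a rough sketch of what such an event sink could look like (DomainEvents and RegionAdded are invented names, not tied to any particular framework):

using System;
using System.Collections.Generic;
using System.Linq;

// A simple static event sink: the domain raises facts, subscribers
// register callbacks, and the domain never knows who is listening.
public static class DomainEvents
{
    private static readonly List<Delegate> Handlers = new List<Delegate>();

    public static void Register<T>(Action<T> handler)
    {
        Handlers.Add(handler);
    }

    public static void Raise<T>(T domainEvent)
    {
        foreach (var handler in Handlers.OfType<Action<T>>())
            handler(domainEvent);
    }
}

// The fact that a region was added, expressed as an event.
public class RegionAdded
{
    public Country Country;
    public Region Region;
}

The presentation layer would then call DomainEvents.Register<RegionAdded>(...) and add the tree node inside the callback, so the domain never knows anything about nodes.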
However, with that being said, your situation is one of the "fringe" scenarios that eventing does not apply all that well to. You are looking to see if you need to repopulate a UI when the list itself changes. At work, the current solution that we have uses an ObservableCollection. While it does work, I am not a fan of it. On the flip side, publishing the fact that one or more items in the list have changed is also problematic. For example, how would you measure the entropy of the list to determine that it has changed?
Because of this, I would actually consider this to not be a concern of the domain. I do not think that what you are describing would be a requirement of the domain owner(s), but rather an artifact of the application architecture. If you were to query the services in the domain after the point that the change is made, they would return correctly, and it is the application that would still be out of step. There is nothing actually wrong inside the world of the domain at this moment in time.
So, there are a few ways that I could see this being done. The most inefficient way would be to continually poll for changes directly against the model. You could also consider having some sort of marker that indicates that the list is dirty, although not using the domain model to do so. Once again, not that clean of a solution. You can apply those principles outside of the domain, however, to come up with a working solution.
If you have some sort of a shared caching mechanism, i.e. a distributed cache, you could implement a JIT-caching/eviction approach where inserts and updates would invalidate the cache (i.e. evict the cached items) and a subsequent request would load them back in. Then, you could actually put a marker into the cache itself that would indicate something identifiable as to when the item(s) was/were rebuilt. For example, if you have a cache item that contains a list of IDs for regions, you could store the DateTime that it was JIT-ed along with it. The application could then keep track of which JIT-ed version it has, and only reload when it sees that the version has changed. You still have to poll, but it eliminates putting that responsibility into the domain itself, and if you are just polling for a smaller bit of data, it is better than rebuilding the entire thing every time.
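A tiny sketch of that version marker (both classes are invented for the example):

using System;
using System.Collections.Generic;

// A hypothetical cache entry: the region IDs plus the time they were rebuilt.
public class CachedRegionList
{
    public List<int> RegionIds;
    public DateTime BuiltAt;   // the "JIT-ed" version marker
}

// The application remembers which version it last rendered and polls cheaply.
public class RegionListWatcher
{
    private DateTime lastSeenVersion;

    public bool NeedsReload(CachedRegionList cached)
    {
        if (cached.BuiltAt <= lastSeenVersion)
            return false;                  // nothing changed since last render

        lastSeenVersion = cached.BuiltAt;  // remember the new version
        return true;                       // reload just this list
    }
}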
Also, before overarchitecting a full-blown solution, address the concern with the domain owner. It may be perfectly acceptable to him/her/them that you simply have a "Refresh" button or menu item somewhere. It is about compromise as well, and I am fairly sure that most domain owners would prefer core functionality over certain types of issues.

About eventing - the most valuable material I've seen comes from Udi Dahan and Greg Young.
One nice implementation of eventing which I'm eager to try out can be found here.
But start with understanding if that's really necessary as Joseph suggests.

Related

How to handle Complex Object in the ngrx?

Hello,
I'm working on a project in which the main part of the data has a complex structure, as you can see in the above picture.
Now, the object in reality is much more complex than that, but what I showed serves the purpose.
Because in the DB they are linked together through table relationships, the first time the website is launched, after login, a list of projects will come in together with some small details of technologies and dataObjects.
I created separate action and effect files, but everything is handled by a single reducer. What I mean is that at the start the list of projects is saved in one state, and then any other action, like creating a project, technology or data object, or editing and deleting, has to operate over that same "projects-state".
For example, besides technologyAPIs there will be another 3-4 technologies, and inside each technology object there will be another list of objects.
The issue here is that the reducer file is getting bigger and bigger as it handles all the kinds of actions that operate over specific data from the state. It is important that the chain of objects stays together.
My question is: is this a bad approach? Can it be handled in a different way? I know I can create a reducer for each entity (project, technology, data app), but then I would lose the relationship between them, where one belongs to the other.
Thank you so much for your future response.
I've only been doing reactive/NgRx for a few months now, but from my understanding, that's definitely a bad approach. It should still work, but it may be a hassle to debug/maintain.
NgRx seems to promote 'normalising' data, just like the usual relational database concept.
You could break down your project-state into smaller states, with relational keys in them.
Example

Projects state (not to be confused with project-state):

interface Project {
  id: number;
  projectDescription: string;
  createdBy: string;
  techAPIs: number[]; // each entry is the id of a TechApi
}

TechApi state:

interface TechApi {
  id: number;
  otherInfo: any;
}

And then when you need to access the TechApi state for a project, you retrieve it and filter/map it in a selector.
This is a somewhat general example, in case my explanation is not understandable.

UITableViewCells and NSURLConnection

I have a custom UITableViewCell, and when the user clicks a button I make a request to a server and update the cell. I do this with an NSURLConnection and it all works fine (this is all done inside the cell class), and once it returns it fires a delegate method and the tableview controller handles this. However, when I create the cell in the tableview, I use the dequeue method and reuse my cells. So if a cell has fired an asynchronous NSURLConnection, and the cell gets reused whilst this is still going on, will that in turn erase the current connection? I just want to make sure that if the cell is reused, the memory that was assigned to the cell is still there so the connection can fulfil its duty.
You can customize the behavior of a UITableViewCell by subclassing it and overriding the -prepareForReuse method. In this case, I would recommend destroying the connection when the cell is dequeued. If the connection should still keep going, you'll need to remove the reference to it (set it to nil) and handle its delegate methods elsewhere.
It's never a good idea to keep a reference to a connection, or to any data that you want to display, in a cell, no matter how much effort you put in afterward to work around the arising problems. Your approach will never work reliably.
In your case, if the user quickly scrolls the table view up and down, your app will start and possibly cancel dozens of connections and yet never finish loading anything. That will be an awful user experience, and may crash the app.
Better to design your app with MVC in mind: the cell is just a means to display your model data, nothing else. It's the View in this architectural design.
For that purpose, the Table View Delegate needs to retrieve the Model's properties which shall be displayed for a certain row and set up the cell. The Model encapsulates the network connection. The Controller takes the role of managing and updating change notifications and processing user input.
A couple of Apple samples provide much more detail about this topic, and there is a nice introduction about MVC, please go figure! ;)
http://developer.apple.com/library/ios/#documentation/general/conceptual/devpedia-cocoacore/MVC.html
The "Your Second iOS App: Storyboards" also has a step by step explanation to create "Data Controller Classes". Quite useful!
Now, when using an NSURLConnection to update your model, it might become a bit more complex. You are dealing with "lazily initializing models". That is, they may provide some "placeholder" data when the controller accesses a property whose "real" data is not yet available. The model, however, starts a network request to load it. When it is eventually loaded, the model must somehow notify the Table View Controller. The tricky part here is not to mess up synchronization between model and table view: the model's properties must be updated on the main thread, and while this happens, it must be guaranteed that the table view will not access the model's properties. There are a few samples which demonstrate techniques to accomplish this.

How to realize persistence of a complex graph with an Object Database?

I have several graphs. The breadth and depth of each graph can vary and will undergo changes and alterations during runtime. See example graph.
There is a root node to get a hold of the whole graph (i.e. tree). A node can have several children, and each child serves a special purpose. Furthermore, a node can access all its direct children in order to retrieve certain information. On the other hand, a child node may not be aware of its own parent node, nor of other siblings. Nothing spectacular so far.
Storing each graph and updating it with an object database (in this case DB4O) looks pretty straightforward. I could have used a relational database to accomplish data persistence (including database triggers, etc.) but I wanted to realize it with an object database instead.
There is one peculiar thing with my graphs. See another example graph.
To properly perform calculations, some nodes require information from other nodes. These other nodes may be siblings, children/grandchildren, or related in some other way. In this case a specific node knows the other relevant nodes as well (and thus can get the required information directly from them). For the sake of simplicity the first image didn't show all potential connections.
If one node has a change of state (e.g. triggered by an internal timer or by some other node), it will inform other nodes (interested observers, see also the observer pattern) about the change. Each informed node will then take appropriate actions to update its own state (and in turn inform other observers as needed). A root node will not know about every change that occurs, since only the involved nodes will know that something has changed. If such a chain of events is triggered by the root node, then of course it's not much of an issue.
The aim is to assure data persistence with an object database. Data in memory should be in sync with data stored in the database. What adds to the complexity is the fact that the graphs don't consist of simple (and stupid) data nodes; lots of functionality is integrated in each node (i.e. events that trigger state changes throughout a graph).
I have several rough ideas on how to cope with the presented issue (e.g. (1) stronger separation of data and functionality, (2) stronger integration of the database, or (3) setting an arbitrary time interval to update data and accepting that data may be out of sync for a period of time). I'm looking for some more input and options concerning such a key issue (which will definitely leave significant footprints on a concrete implementation).
(edited)
There is another aspect I forgot to mention. A graph should not reside in memory all the time. Graphs that are not needed will only be present in the database and thus put in a state of suspension. This is another issue which needs consideration: while in suspension, the update mechanisms will probably be put to sleep as well, and this is not intended.
In the case of db4o check out "transparent activation" to automatically load objects on demand as you traverse the graph (this way the graph doesn't have to be all in memory) and check out "transparent persistence" to allow each node to persist itself after a state change.
http://www.gamlor.info/wordpress/2009/12/db4o-transparent-persistence/
Moreover, you can use db4o "callbacks" to trigger custom behavior during db4o operations.
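For illustration, a rough sketch of wiring this up with the db4o .NET API (the names below are from memory of db4o 8.x, so treat them as an assumption to verify against your db4o version):

using Db4objects.Db4o;
using Db4objects.Db4o.Config;
using Db4objects.Db4o.TA;

public static class GraphDatabase
{
    public static IObjectContainer Open(string fileName)
    {
        IEmbeddedConfiguration config = Db4oEmbedded.NewConfiguration();
        // Transparent persistence implies transparent activation: nodes are
        // loaded lazily as the graph is traversed, and stored automatically
        // when their state changes.
        config.Common.Add(new TransparentPersistenceSupport());
        return Db4oEmbedded.OpenFile(config, fileName);
    }
}

Each node class then implements IActivatable (or lets db4o's build-time enhancer inject it) so that db4o can intercept field reads and writes.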
HTH
German
What's the exact question? Here are a few comments:
As @German already mentioned: for complex object graphs you probably want to use transparent persistence.
Also as @German mentioned: callbacks can help you do additional stuff when objects are read/written etc. on the database.
On the observer pattern: are you on .NET or Java? Usually you don't want to store the observers in the database, since the observers are usually parts of your business logic, GUI etc. On .NET, events are automatically not stored. On Java, make sure that you mark the field holding the observer references as transient.
In case you actually want to store observers, for example because they are just other elements in your object graph: on .NET you cannot store delegates/closures, so you need to introduce an interface for calling the observer. On Java, anonymous inner classes are often used as listeners; while db4o can store those, I would NOT recommend it, because an anonymous inner class gets a generated name which can change, and then db4o will not find that class later if you've changed your code.
That's it. Ask more detailed questions if you want to know more.

Keeping track after the back button

I want to write a web app order system using the REST methodology for the first time. I understand the concept of the "message id" when things get posted to a page, but this scenario comes up: once a user posts to the web app, you can keep track of their state with an id attached to the URI, but what happens if they hit the back button of the browser to the entry point of the app, where they didn't have any id? They then lose their state in the transaction.
I know you can always give them a cookie, but you can't do that if they have cookies turned off and, worst case thinking here, they also have JavaScript turned off.
Now, I understand the answer may be "Yes, that's what will happen", that's the end of the story, and I can live with that but, being new to this, is there something I'm missing?
REST doesn't really have state server-side; you simply point to resources. User sessions aren't tracked; instead, cookies are used to track application state. However, if you find that you really do need session state, then you are going to have to break REST and track it on the server.
A few things to consider:
How many of your users have cookies disabled anyway? How many even know how to do that?
Is it really likely that your users will have JS turned off? If so, how will you accomplish PUT (edit) and DELETE (delete) without AJAX?
EDIT: Since you do not want to force cookies and JavaScript, then you cannot have a truly RESTful system. But you can somewhat fake it. You are going to need to track a user server-side. You could do this with a session object, as found in your language/framework of choice or by adding a field to the database for whatever you want to know. Of course, when the user hits the back button, they will likely be going to a cached page. If that's not OK, then you will need to modify the headers to disallow caching. Basically, it gets more complicated if you don't use cookies, but not unmanageable.
What about the missing PUT and DELETE HTTP methods? You can fake those with POSTs and a hidden parameter specifying whether or not you are making something new, editing something, or deleting a record. The GET shouldn't really change.
The answer is that your application (in a REST scenario) simply doesn't keep track of what happens. All state is managed by the client, and state transitions are effected through URI navigation. The "State Transfer" part of REST refers to client navigation to new URIs which are new states.
A URI accessed with GET is effectively a read-only operation as per both the HTTP spec and the REST methodology. That means if the client "backs up" to some previous URI, "the worst" that happens is another GET is made and more data is loaded.
Let's say the client does this (using highly simplified pseudo-HTTP)...
GET //site.com/product/123
This retrieves information (or maybe a page) about product ID 123, which presumably includes a reference to a URI which can be used to POST that item into the user's shopping cart. So the user decides to buy the item. Again, it's oversimplified but:
POST //site.com/shoppingcart/
{productid = 123}
The return from this might be a representation of the shopping cart, or a reference to the added item (which could be used on the shoppingcart URI with DELETE to remove the item again), or a variety of other things (such as deeper XML describing the cart contents with other URIs pointing to the cart items and back to the original products). It's all up to you.
But the "state" is defined by whatever the client is doing. It isn't tracked on the server at all, although you will certainly keep a copy of his shopping cart contents in your database for some period of time. (I once returned to a website two years later and my shopping cart items were still there...) But it's up to him to keep track of the ID. To your server app it's just another record, presumably with some kind of expiration.
In this way you don't need cookies, and javascript is entirely dependent on the client implementation. It's difficult to build a decent REST client without script -- you could probably build something with XSLT and only return XML from the server, but that's probably more pain than anyone needs.
The starting point is to really understand REST, then design the system, then build it. It definitely doesn't lend itself to building it on the fly like most other systems do (right or wrong).
This is an excellent article that gives you a fairly "pure" look at REST without getting too abstract and without bogging you down with code:
http://www.infoq.com/articles/subbu-allamaraju-rest
It is true that the "S" in REST stands for "state" and the "T" for "transfer". But the state is kept on the client, not on the server. The client always has all the information necessary to decide for itself in which direction it wants to change the state.
The way you describe it, your system is not RESTful.

Ways to store an object across multiple postbacks

For the sake of argument assume that I have a webform that allows a user to edit order details. User can perform the following functions:
Change shipping/payment details (all simple text/dropdowns)
Add/Remove/Edit products in the order - this is done with a grid
Add/Remove attachments
Products and attachments are stored in separate DB tables with foreign key to the order.
Entity Framework (4.0) is used as ORM.
I want to allow the users to make whatever changes they want to the order and only when they hit 'Save' do I want to commit the changes to the database. This is not a problem with textboxes/checkboxes etc. as I can just rely on ViewState to get the required information. However the grid is presenting a much larger problem for me as I can't figure out a nice and easy way to persist the changes the user made without committing the changes to the database. Storing the Order object tree in Session/ViewState is not really an option I'd like to go with as the objects could get very large.
So the question is - how can I go about preserving the changes the user made until ready to 'Save'.
Quick note - I have searched SO to try to find a solution, however all I found were suggestions to use Session and/or ViewState - both of which I would rather not use due to potential size of my object trees
If you have control over the schema of the database and the other applications that utilize order data, you could add a flag or status column to the orders table that differentiates between temporary and finalized orders. Then, you can simply store your intermediate changes to the database. There are other benefits as well; for example, a user that had a browser crash could return to the application and be able to resume the order process.
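A small sketch of that flag column, assuming an Entity Framework-style context (OrdersContext, OrderWorkflow and the Status property are all invented here):

using System.Linq;

// Drafts and finalized orders share one table, distinguished by a status.
public enum OrderStatus
{
    Draft = 0,      // still being edited; safe to clean up after expiry
    Finalized = 1   // the user hit Save
}

public static class OrderWorkflow
{
    // Intermediate postbacks save the draft rows as normal;
    // "Save" just flips the flag in a single commit.
    public static void Finalize(OrdersContext db, int orderId)
    {
        var order = db.Orders.Single(o => o.Id == orderId);
        order.Status = OrderStatus.Finalized;
        db.SaveChanges();
    }
}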
I think sticking to the database for storing data is the only reliable way to persist data, even temporary data. Using session state, control state, cookies, temporary files, etc., can introduce a lot of things that can go wrong, especially if your application resides in a web farm.
If using the Session is not your preferred solution, which is probably wise, the best possible solution would be to create your own temporary database tables (or as others have mentioned, add a temporary flag to your existing database tables) and persist the data there, storing a single identifier in the Session (or in a cookie) for later retrieval.
First, you may want to segregate your specific state management implementation into its own class so that you don't have to replicate it throughout your systems.
Second, you may want to consider a hybrid approach - use session state (or cache) for a short time to avoid unnecessary trips to a DB or other external store. After some amount of inactivity, write the cached state out to disk or DB. The simplest way to do this is to serialize your objects to text (using either built-in serialization or a library like protocol buffers). This helps you avoid creating redundant or duplicate data structures to capture the in-progress data relationally. If you don't need to query the content of this data, it's a reasonable approach.
As an aside, in the database world, the problem you describe is called a long running transaction. You essentially want to avoid making changes to the data until you reach a user-defined commit point. There are techniques you can use in the database layer, like hypothetical views and instead-of triggers to encapsulate the behavior that you aren't actually committing the change. The data is in the DB (in the real tables), but is only visible to the user operating on it. This is probably a more complicated implementation than you may be willing to undertake, and requires intrusive changes to your persistence layer and data model - but allows the application to be ignorant of the issue.
Have you considered storing the information in a JavaScript object and then sending that information to your server once the user hits save?
Use domain events to capture the user's actions and then replay those actions over a snapshot of the order model (effectively the current state of the order before the user started changing it).
Store each change as a series of events e.g. UserChangedShippingAddress, UserAlteredLineItem, UserDeletedLineItem, UserAddedLineItem.
These events can be saved after each postback and only need a link to the related order. Rebuilding the current state of the order is then as simple as replaying the events over the currently stored order objects.
When the user clicks save, you can replay the events and persist the updated order model to the database.
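A bare-bones sketch of that replay loop (all names here - PendingOrderChanges, AddLineItem, the event classes - are invented to illustrate the idea, not taken from the answer):

using System;
using System.Collections.Generic;

// Tiny events describing the user's edits; each one needs only
// a link back to the related order, so they are cheap to persist.
[Serializable]
public abstract class OrderEvent
{
    public int OrderId;
}

[Serializable]
public class UserAddedLineItem : OrderEvent
{
    public int ProductId;
    public int Quantity;
}

// Events recorded on each postback are replayed over the snapshot that
// was loaded from the database; nothing is committed until Save.
public class PendingOrderChanges
{
    private readonly List<OrderEvent> events = new List<OrderEvent>();

    public void Record(OrderEvent e)
    {
        events.Add(e);
    }

    public Order Replay(Order snapshot)
    {
        foreach (var e in events)
        {
            var added = e as UserAddedLineItem;
            if (added != null)
                snapshot.AddLineItem(added.ProductId, added.Quantity);
            // ... handle UserDeletedLineItem, UserChangedShippingAddress, etc.
        }
        return snapshot;
    }
}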
You are using the database - no session or viewstate is required - therefore you can significantly reduce page weight and server memory load, at the expense of some page performance (if you choose to rebuild the model on each postback).
Maintenance is incredibly simple: due to the ease with which you can implement domain events, automated testing is easily used to ensure the system behaves as you expect it to (while also documenting your intentions for other developers).
Because you are leveraging the database, the solution scales well across multiple web servers.
Using this approach does not require any alterations to your existing domain model, therefore the impact on existing code is minimal. The biggest downside is getting your head around the concept of domain events and how they are used and abused =)
This is effectively the same approach as described by Freddy Rios, with a little more detail about how, and some nice keywords for you to search with =)
http://jasondentler.com/blog/2009/11/simple-domain-events/ and http://www.udidahan.com/2009/06/14/domain-events-salvation/ are some good background reading about domain events. You may also want to read up on event sourcing, as this is essentially what you would be doing (snapshot object, record events, replay events, snapshot object again).
How about serializing your domain object (the contents of your grid/shopping cart) to JSON and storing it in a hidden variable? ScottGu has a nice article on how to serialize objects to JSON. It scales across a server farm, and I guess it would not add much payload to your page. Maybe you can write your own JSON serializer to do a "compact serialization" (you would not need product name, SKU id, etc.; maybe you can just "serialize" productID and quantity).
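A quick sketch of that idea using the stock JavaScriptSerializer (CartPage, CartLine and the CartJson hidden field are invented; CartJson is assumed to be an <asp:HiddenField> declared in the page markup):

using System.Collections.Generic;
using System.Web.Script.Serialization;

public partial class CartPage : System.Web.UI.Page
{
    // A compact shape: only what is needed to rebuild a line later.
    public class CartLine
    {
        public int ProductId;
        public int Quantity;
    }

    // Round-trip the cart through the hidden field instead of session.
    private void SaveCart(List<CartLine> lines)
    {
        CartJson.Value = new JavaScriptSerializer().Serialize(lines);
    }

    private List<CartLine> LoadCart()
    {
        return new JavaScriptSerializer()
            .Deserialize<List<CartLine>>(CartJson.Value);
    }
}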
Have you considered using a User Profile? .Net comes with SqlProfileProvider right out of the box. This would allow you to, for each user, grab their profile and save the temporary data as a variable off in the profile. Unfortunately, I think this does require your "Order" to be serializable, but I believe all of the options except Session thus far would require the same.
The advantage of this is it would persist through crashes, sessions, server down time, etc and it's fairly easy to set up. Here's a site that runs through an example. Once you set it up, you may also find it useful for storing other user information such as preferences, favorites, watched items, etc.
You should be able to create a temp file and serialize the object to that, then save only the temp file name to the viewstate. Once they successfully save the record back to the database, you can remove the temp file.
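For example, a minimal sketch inside the page's code-behind, assuming the Order graph is marked [Serializable] (EditOrderPage and the method names are invented):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public partial class EditOrderPage : System.Web.UI.Page
{
    // Write the draft to a uniquely named temp file; only the path goes
    // into viewstate, which stays tiny compared to the object tree.
    private void SaveDraft(Order order)
    {
        string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".order");
        using (var stream = File.Create(path))
        {
            new BinaryFormatter().Serialize(stream, order);
        }
        ViewState["DraftOrderFile"] = path;
    }

    // Later postbacks read the draft back from the same file.
    private Order LoadDraft()
    {
        using (var stream = File.OpenRead((string)ViewState["DraftOrderFile"]))
        {
            return (Order)new BinaryFormatter().Deserialize(stream);
        }
    }
}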
Single server: serialize to the filesystem. This also allows you to let the user resume later.
Multiple server: serialize it but store the serialized value in the db.
This is something that's for that specific user, so when you persist it to the db you don't really need all the relational stuff for it.
Alternatively, if the set of data is very large and the amount of changes is usually small, you can store the history of changes done by the user instead. With this you can also show the change history and support undo.
Two approaches. The first is to create a complex AJAX application that stores everything on the client and only submits the entire package of changes to the server. I did this once a few years ago with moderate success. The application is not something I would want to maintain, though. You have a hard time syncing your client code with your server code, and passing fields that are added/deleted/changed is nightmarish.
The second approach is to store changes in the database in a temp table or "pending" mode. The advantage is that your code is more maintainable. The disadvantage is that you have to have a way to clean up abandoned changes due to session timeouts, power failures and other crashes. I would take this approach for any new development. You can have separate tables for "pending" and "committed" changes, which opens up a whole new level of features you can add. What if? What changed? etc.
I would go for viewstate, regardless of what you've said before. If you only store the stuff you need, like { id: XX, numberOfProducts: 3 }, and ditch every item that is not selected by the user at this point, the viewstate size will hardly be an issue as long as you aren't storing the whole object tree.
When storing attachments, put them in a temporary storage location and reference the filename in your viewstate. You can have a scheduled task that cleans the temp folder of every file that was last saved over a day ago, or something similar.
This is basically the approach we use for storing information when users are adding floorplan information and attachments in our backend.
Are the end-users internal or external clients? If your clients are internal users, it may be worthwhile to look at an alternate set of technologies. Instead of webforms, consider using a platform like Silverlight and implementing a rich GUI there.
You could then store complex business objects within the applet, provide persistent "in progress" edit tracking across multiple sessions via offline storage, and easily integrate with back-end services that provide saving/processing of the finalised order. All whilst maintaining access via the web (albeit closing out most *nix clients).
Alternatives include Adobe Flex or AJAX, depending on resources and needs.
How large do you consider large? If you are talking session-state (so it doesn't go back and forth to the actual user, like view-state), then session-state is often a pretty good option. Everything except the in-process state provider uses serialization, but you can influence how it is serialized. For example, I would tend to create a local model that represents just the state I care about (plus any id/rowversion information) for that operation, rather than the full domain entities, which may have extra overhead.
To reduce the serialization overhead further, I would consider using something like protobuf-net; this can be used as the implementation of ISerializable, allowing very light-weight serialized objects (generally much smaller than BinaryFormatter, XmlSerializer, etc.) that are cheap to reconstruct at page requests.
When the page is finally saved, I would update my domain entities from the local model and submit the changes.
For info, to use a protobuf-net attributed object with the state serializers (typically BinaryFormatter), you can use:
using System;
using System.Runtime.Serialization;
using ProtoBuf;

// a simple, session-state friendly light-weight UI model object
[Serializable, ProtoContract]
public class MyType : ISerializable
{
    [ProtoMember(1)]
    public int Id { get; set; }
    [ProtoMember(2)]
    public string Name { get; set; }
    [ProtoMember(3)]
    public double Value { get; set; }
    // etc

    public MyType() { } // default constructor

    protected MyType(SerializationInfo info, StreamingContext context)
    {
        Serializer.Merge(info, this);
    }

    void ISerializable.GetObjectData(
        SerializationInfo info, StreamingContext context)
    {
        Serializer.Serialize(info, this);
    }
}
