Validating a Contact has a Unique Email in Axon

I am curious to understand the best-practice approach when using the Axon Framework to validate that an email field is unique across the set of emails belonging to Contact Aggregates.
Example setup
ContactCreateCommand {
    identifier = '123'
    name = 'ABC'
    email = 'info@abc.com'
}

ContactAggregate {
    ContactAggregate(ContactCreateCommand cmd) {
        // 1. cannot validate email uniqueness here
        AggregateLifecycle.apply(
            new ContactCreatedEvent(/* fields ... */)
        );
    }
}
From my understanding of how this might be implemented, I have identified a number of possible ways to handle this, but perhaps there are more.
1. Do nothing in the Aggregate
This approach requires the invoker of the command to query for Contacts by email before sending the command, leaving a window of a few milliseconds in which eventual consistency allows a duplicate to slip through.
Drawbacks:
Any "invoker" of the command is then required to perform this validation check itself, since it's not possible to do the check inside the Aggregate using an Axon Query Handler.
Duplication can still occur, so all projections built from these events need to handle duplicates somehow.
2. Validate in a separate persistence layer
This approach introduces a new persistence layer that the aggregate can use to validate uniqueness.
Inside the ContactAggregate command handler for ContactCreateCommand we can then issue a query against this persistence layer (e.g. a table in Postgres with a unique index on it) and validate the email against this database, which contains the full set.
Drawbacks:
Introduces an external persistence layer (external to the microservice) to guarantee uniqueness across Contacts
Scaling should be considered in the persistence layer; hitting it from a highly scaled aggregate could prove a bottleneck
3. Use a Saga and Singleton Aggregate
This approach enhances the previous setup by introducing an Aggregate that can only ever have one instance (e.g. the target identifier is always the same). This way we create a 'Singleton Aggregate' whose sole responsibility is to encapsulate the Set of all Contact email addresses.
ContactEmailValidateCommand {
    identifier = 'SINGLETON_ID_1'
    email = 'info@abc.com'
    customerIdentifier = '123'
}
UniqueContactEmailAggregate {
    @AggregateIdentifier
    private String identifier;
    Set<String> emails = new HashSet<>();

    on(ContactEmailValidateCommand cmd) {
        if (emails.contains(cmd.email)) {
            AggregateLifecycle.apply(
                new ContactEmailInvalidatedEvent(/* fields ... */)
            );
        } else {
            // an @EventSourcingHandler would add cmd.email to the set
            AggregateLifecycle.apply(
                new ContactEmailValidatedEvent(/* fields ... */)
            );
        }
    }
}
After this check, we can then react appropriately to the ContactEmailInvalidatedEvent or ContactEmailValidatedEvent, which might invalidate the contact afterwards.
The benefit of this approach is that it keeps the persistence local to the Aggregate, which could give better scaling (as more nodes are added, more aggregates with locally managed Sets exist).
Drawbacks
Quite a lot of boilerplate to replace a simple "create unique index"
This approach lets an 'invalid' Contact pollute the Event Store forever
The 'Singleton Aggregate' is complex to implement in a way that guarantees it is a true singleton (perhaps there is a simpler or better way)
The 'invoker' of the CreateContactCommand must check the outcome of the Saga
What do others do to solve this? I feel option 2 is perhaps the simplest approach, but are there other options?

What you are essentially looking for is Set Based Validation (I think this blog does a nice job explaining the concept and how to deal with it in Axon). In short: validating that some field is (or is not) contained in a set of data. When doing CQRS, this becomes a somewhat interesting concept to reason about, with several solutions out there (as you've already portrayed).
I think the best solution is summarized in your second option: use a dedicated persistence layer for the email addresses. You'd simply create a very concise model containing just the email addresses, which you would validate prior to issuing the ContactCreateCommand. Note that this persistence layer belongs to the Command Model, as it is used to perform business validation. You'd thus end up with a Command Model containing not only Aggregates but also Views. And as you've rightfully noted, this View needs to be optimized for its use case, of course. Maybe introducing a cache which is created on application start-up wouldn't be too bad.
To ensure this email-addresses view is as up to date as possible, it's smartest to ensure it is updated in the same transaction in which the ContactCreatedEvent (which contains a new email address, I assume) is published. You can do this by having a dedicated Event Handling Component for your "Email Addresses View" which is updated through a SubscribingEventProcessor (a SEP). This works because the SEP is invoked by the same thread that publishes the event (your aggregate).
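For illustration, a minimal sketch of such a component - the EmailRepository/EmailEntry view storage is an assumption, not Axon API:

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Assign this handler to a processing group configured as a SubscribingEventProcessor,
// e.g. configurer.eventProcessing().registerSubscribingEventProcessor("emailAddresses");
@ProcessingGroup("emailAddresses")
public class EmailAddressesViewUpdater {

    private final EmailRepository emailRepository; // hypothetical storage for the view

    public EmailAddressesViewUpdater(EmailRepository emailRepository) {
        this.emailRepository = emailRepository;
    }

    @EventHandler
    public void on(ContactCreatedEvent event) {
        // Runs in the thread (and thus transaction) that published the event,
        // so a unique-constraint violation here rolls back the contact creation too.
        emailRepository.save(new EmailEntry(event.getIdentifier(), event.getEmail()));
    }
}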
You have a couple of options when it comes to checking this model prior to sending the command. You could use a MessageDispatchInterceptor which only reacts to the ContactCreateCommand, for example. Or you introduce a Handler Enhancer dedicated to reacting to the ContactCreateCommand to perform this validation. Or you introduce another command like RequestContactCreationCommand which is targeted at a regular component; this component handles the command, validates against the model, and if approved dispatches a ContactCreateCommand.
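As a sketch of the interceptor route (the EmailRepository and DuplicateEmailException are hypothetical; the handle signature follows Axon 4):

import java.util.List;
import java.util.function.BiFunction;
import org.axonframework.commandhandling.CommandMessage;
import org.axonframework.messaging.MessageDispatchInterceptor;

public class ContactEmailUniquenessInterceptor
        implements MessageDispatchInterceptor<CommandMessage<?>> {

    private final EmailRepository emailRepository; // hypothetical email-addresses view

    public ContactEmailUniquenessInterceptor(EmailRepository emailRepository) {
        this.emailRepository = emailRepository;
    }

    @Override
    public BiFunction<Integer, CommandMessage<?>, CommandMessage<?>> handle(
            List<? extends CommandMessage<?>> messages) {
        return (index, command) -> {
            // Only react to the ContactCreateCommand; let everything else pass through.
            if (ContactCreateCommand.class.equals(command.getPayloadType())) {
                ContactCreateCommand payload = (ContactCreateCommand) command.getPayload();
                if (emailRepository.existsByEmail(payload.getEmail())) {
                    throw new DuplicateEmailException(payload.getEmail()); // hypothetical
                }
            }
            return command;
        };
    }
}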
That's my two cents on the situation, hope this helps @vcetinick!


In Domain Driven Design should you Add/Update Entities through the AggregateRoot?

If you have a Vendor with a list of Contacts, in DDD which is the better approach for adding a contact to a Vendor?
Here's some sample C# code using a CQRS command.
Given the following command, how should we implement adding a Contact to a Vendor
class AddVendorContactCommand
{
    string vendorId;
    string contactName;
}
Should we add a contact through the Vendor:
async Task AddVendorContactHandler(AddVendorContactCommand command)
{
    var vendor = await dbContext.Vendors.FindAsync(command.vendorId);
    vendor.AddContact(command.contactName);
    await dbContext.SaveChangesAsync();
    // doesn't require a DbSet for VendorContacts
}
Or should we reference the VendorContact and bypass the Vendor entirely.
async Task AddVendorContactHandler(AddVendorContactCommand command)
{
    // handler
    var newVendorContact = new VendorContact(command.vendorId, command.contactName);
    dbContext.VendorContacts.Add(newVendorContact);
    await dbContext.SaveChangesAsync();
    // requires a DbSet for VendorContacts
}
I feel like the better approach is to go through the Vendor, but that requires our AddVendorContactCommand handler to read from the database first, and CQRS generally suggests avoiding reads in command handlers. The second approach, using VendorContacts directly, will perform better than going through the Vendor.
Arguments for going through the Vendor are the following:
What if the Vendor doesn't exist
What if the Vendor isn't allowed any more contacts.
What if the Vendor is deleted, disabled or otherwise readonly
What's the correct DDD approach?
First, as a developer, I'm obligated to say there is no single correct approach to anything.
Now that is out of the way, given the information you have provided, I'm going to assume that the Vendor entity you described can (and in my opinion should) be the Aggregate Root. With that in mind, I would definitely go with the first option you described.
I think you have a misconception about CQRS Commands. It is perfectly fine to get data from the database inside commands. The thing you have to avoid is fetching the data from the query side, which could be a totally different database.
You are also correct that you won't need a DbSet<> for the VendorContact entity, and you should keep it that way on the Command side, as you want to protect the invariants inside your Vendor Aggregate Root.
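To make "protect the invariants" concrete, here is a sketch of what Vendor.AddContact could guard; the contact limit and the disabled flag are illustrative assumptions taken from the question's bullet points:

public class Vendor
{
    private const int MaxContacts = 50; // illustrative limit, not a real rule
    private readonly List<VendorContact> contacts = new List<VendorContact>();

    public string Id { get; private set; }
    public bool IsDisabled { get; private set; }

    public void AddContact(string contactName)
    {
        // The invariants live inside the aggregate root, not in the handler.
        if (IsDisabled)
            throw new InvalidOperationException("Cannot add contacts to a disabled vendor.");
        if (contacts.Count >= MaxContacts)
            throw new InvalidOperationException("Vendor is not allowed any more contacts.");

        contacts.Add(new VendorContact(Id, contactName));
    }
}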

Sf2 : using a service inside an entity

I know this has been asked over and over again, and I've read the topics, but they always focus on specific cases. I'm generally trying to understand why it's not best practice to use a service inside an entity.
Given a very simple service :
class Age
{
    private $date1;
    private $date2;
    private $format;

    const YM = "%y years and %m months";
    // const ...

    // some DateTime()->diff() methods: checking and formatting the input formats,
    // returning different period formats, etc.
}
and a simple entity :
class People
{
    private $firstname;
    private $lastname;
    private $birthday;
}
From a controller, I want to do:
$som1 = new People('Paul', 'Smith', '1970-01-01');
$som1->getAge();
Of course I can rewrite the getAge() function inside my entity, it's not long, but I'm very lazy, and as I've already written all the DateTime->diff() variants I need in the above service, I don't understand why I shouldn't use them...
NB: my question isn't about how to inject the container into my entity - I can understand why that doesn't make sense - but rather about what the best practice would be to avoid rewriting the same function in different entities.
Inheritance seems to be a bad "good idea", as I could also want getAge() inside a BlogArticle class, and I doubt that BlogArticle should inherit from the same base class as People...
Hope I was clear, but I'm not sure...
One major confusion for many coders is to think that doctrine entities "are" the model. That is a mistake.
(See the edit at the end of this post, incorporating ideas related to CQRS+ES.)
Injecting services into your doctrine entities is a symptom of "trying to do more things than storing data" in your entities. When you see that anti-pattern, most probably you are violating the "Single Responsibility" principle of SOLID programming.
http://en.wikipedia.org/wiki/Anti-pattern
http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29
http://en.wikipedia.org/wiki/Single_responsibility_principle
Symfony is not an MVC framework, it is a VC framework only; it lacks the M part. Doctrine entities (I'll call them entities from now on, see the clarification at the end) are a "data persistence layer", not a "model layer". Symfony has lots of things for views, web controllers, command controllers... but it has no help for domain modelling ( http://en.wikipedia.org/wiki/Domain_model ) - even the persistence layer is Doctrine, not Symfony.
Overcoming the problem in SF2
When you "need" services in a data-layer, trigger an antipattern alert. Storage should be only a "put here - get from there" system. Nothing else.
To overcome this problem, you should inject the services into a "logic layer" (Model) and separate it from "pure storage" (the data-persistence layer). Following the single responsibility principle, put the logics on one side and the getters and setters to MySQL on the other.
The solution is to create the missing Model layer, not present in Symfony2, and make it provide "the logic" of the domain objects, completely separated and decoupled from the data-persistence layer, which knows "how to store" the model into a MySQL database with doctrine, into a redis, or simply into a text file.
All those storage systems should be interchangeable and your Model should still expose the very same public methods with absolutely no change to the consumer.
Here's how you do it:
Step 1: Separate the model from the data-persistence
To do so, in your bundle, you can create another directory named Model at the bundle-root level (besides Tests, DependencyInjection and so on), as in this example of a game.
The name Model is not mandatory, Symfony does not say anything about it. You can choose whatever you want.
If your project is simple (say one bundle), you can create that directory inside the same bundle.
If your project is many bundles wide, you could consider
either putting the model split across the different bundles, or
-as in the example image- using a ModelBundle that contains all the "objects" the project needs (no interfaces, no controllers, no commands, just the logic of the game, and its tests). In the example you see a ModelBundle providing logical concepts like Board, Piece or Tile among many others, structured in directories for clarity.
Particularly for your question
In your example, you could have:
Entity/People.php
Model/People.php
Anything related to "storage" should go inside Entity/People.php - e.g. suppose you want to store the birthdate both in a date-time field and in three redundant fields (year, month, day) because of some tricky search or indexing needs; those are not domain-related (i.e. not related to the 'logics' of a person).
Anything related to the "logics" should go inside Model/People.php - e.g. how to calculate whether a person is over the age of majority right now, given a certain birthdate and the country they live in (which determines the minimum age). As you can see, this has nothing to do with persistence.
Step 2: Use factories
Then, you must remember that the consumers of the model should never ever create model objects using "new". They should use a factory instead, which will set up the model objects properly (and bind them to the proper data-storage layer). The only exception is in unit testing (we'll see it later). But apart from unit tests, grab this with fire in your brain and tattoo it with a laser on your retina: never do a 'new' in a controller or a command. Use the factories instead ;)
To do so, you create a service that acts as the "getter" of your model. You create the getter as a factory accessible through a service. See the image:
You can see a BoardManager.php there. It is the factory. It acts as the main getter for anything related to boards. In this case, the BoardManager has methods like the following:
public function createBoardFromScratch( $width, $height )
public function loadBoardFromJson( $document )
public function loadBoardFromTemplate( $boardTemplate )
public function cloneBoard( $referenceBoard )
Then, as you see in the image, in the services.yml you define that manager, and you inject the persistence layer into it. In this case, you inject the ObjectStorageManager into the BoardManager. The ObjectStorageManager is, for this example, able to store and load objects from a database or from a file; while the BoardManager is storage agnostic.
You can also see the ObjectStorageManager in the image, which in turn has @doctrine injected to be able to access MySQL.
Your managers are the only place where a new is allowed. Never in a controller or command.
Particularly for your question
In your example, you would have a PeopleManager in the model, able to get the people objects as you need.
Also in the Model, you should use proper singular/plural names, as this is decoupled from your data-persistence layer. It seems you are currently using People to represent a single Person - probably because you are (wrongly) matching the model class to the database table name.
So, involved model classes will be:
PeopleManager -> the factory
People -> A collection of persons.
Person -> A single person.
For example (pseudocode! using C++ notation to indicate the return type):
PeopleManager
{
    // Examples of getting single objects:
    Person getPersonById( $personId );  -> Loads it from somewhere (MySQL, Redis, Mongo, a file...)
    Person clonePerson( $referencePerson );  -> Maybe needed or not, depending on the nature of the problem your program solves.
    Person createPersonFromScratch( $name, $lastName, $birthDate );  -> Returns a properly initialized person.

    // Examples of getting collections of objects:
    People getPeopleByTown( $townId );  -> Returns a collection of people that live in the given town.
}
People extends ArrayObject
{
    // You could overload assignment to throw an exception if a non-person object is added,
    // so you can always rely on People containing only Person objects.
}
Person
{
private $firstname;
private $lastname;
private $birthday;
}
So, continuing with your example, when you do...
// **Never ever** do a new from a controller!!!
$som1 = new People('Paul', 'Smith', '1970-01-01');
$som1->getAge();
...you now can mutate to:
// Use factory services instead:
$peopleManager = $this->get( 'myproject.people.manager' );
$som1 = $peopleManager->createPersonFromScratch( 'Paul', 'Smith', '1970-01-01' );
$som1->getAge();
The PeopleManager will do the new for you.
At this point, your variable $som1 of type Person, as it was created by the factory, can be pre-populated with the necessary mechanics to store and save to the persistence layer.
The myproject.people.manager will be defined in your services.yml and will have access to doctrine either directly or via a `myproject.persistence.manager` layer or whatever.
Note: This injection of the persistence layer via the manager has several side effects that would sidetrack us from "how to make the model have access to services". See steps 4 and 5 for that.
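As a sketch, the services.yml wiring described above could look like this (the service ids, class names and namespaces are assumptions):

# services.yml (hypothetical ids and classes)
services:
    myproject.persistence.manager:
        class: MyProject\ModelBundle\Storage\ObjectStorageManager
        arguments: [ "@doctrine" ]

    myproject.people.manager:
        class: MyProject\ModelBundle\Model\PeopleManager
        arguments: [ "@myproject.persistence.manager" ]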
Step 3: Inject the services you need via the factory.
Now you can inject any services you need into the people.manager
Now, if your model object needs to access that service, you have 2 choices:
When the factory creates a model object (i.e. when PeopleManager creates a Person), inject the service via either the constructor or a setter.
Proxy the function in the PeopleManager and inject the PeopleManager through the constructor or a setter.
In the first example below, we provide the PeopleManager with the service to be consumed by the model. When the people manager is asked for a new model object, it injects the needed service in the new sentence, so the model object can access the external service directly.
// Example of injecting the low-level service.
class PeopleManager
{
    private $externalService = null;

    public function __construct( ServiceType $externalService )
    {
        $this->externalService = $externalService;
    }

    public function createPersonFromScratch()
    {
        return new Person( $this->externalService );
    }
}

class Person
{
    private $externalService = null;

    public function __construct( ServiceType $externalService )
    {
        $this->externalService = $externalService;
    }

    public function consumeTheService()
    {
        $this->externalService->nativeCall(); // Use the external API.
    }
}

// Using it.
$peopleManager = $this->get( 'myproject.people.manager' );
$person = $peopleManager->createPersonFromScratch();
$person->consumeTheService();
In the second example, we again provide the PeopleManager with the service to be consumed by the model. Nevertheless, when the people manager is asked for a new model object, it injects itself into the created object, so the model object accesses the external service via the manager, which hides the API; if the external service ever changes its API, the manager can do the proper conversions for all the consumers in the model.
// Second example: using the manager as a proxy.
class PeopleManager
{
    private $externalService = null;

    public function __construct( ServiceType $externalService )
    {
        $this->externalService = $externalService;
    }

    public function createPersonFromScratch()
    {
        return new Person( $this );
    }

    public function wrapperCall()
    {
        return $this->externalService->nativeCall();
    }
}

class Person
{
    private $peopleManager = null;

    public function __construct( PeopleManager $peopleManager )
    {
        $this->peopleManager = $peopleManager;
    }

    public function consumeTheService()
    {
        $this->peopleManager->wrapperCall(); // Use the manager to call the external API.
    }
}

// Using it.
$peopleManager = $this->get( 'myproject.people.manager' );
$person = $peopleManager->createPersonFromScratch();
$person->consumeTheService();
Step 4: Throw events for everything
At this point, you can use any service in any model. Seems all is done.
Nevertheless, when you implement it, you will find problems decoupling the model from the entity, if you want a truly SOLID pattern. This also applies to decoupling this model from other parts of the model.
The problem clearly arises at places like "when to do a flush()" or "when to decide if something must be saved now or left to be saved later" (especially in long-living PHP processes), as well as the problematic changes in case doctrine changes its API, and things like this.
But it is also true when you want to test a Person without testing its House, while the House must "monitor" whether the Person changes their name, in order to change the name on the mailbox. This is especially true for long-living processes.
The solution is to use the observer pattern ( http://en.wikipedia.org/wiki/Observer_pattern ), so your model objects throw events for nearly anything, and an observer decides whether to cache data to RAM, to fill data, or to store data to disk.
This strongly enhances the open/closed principle. You should never have to change your model if the thing you change is not domain-related. For example, adding a new way of storing to a new type of database should require zero edits to your model classes.
You can see an example of this in the following image. In it, I highlight a bundle named "TurnBasedBundle" that is like the core functionality for every game that is turn-based, despite if it has a board or not. You can see that the bundle only has Model and Tests.
Every game has a ruleset, players, and during the game, the players express the desires of what they want to do.
In the Game object, the instantiators will add the ruleset (poker? chess? tic-tac-toe?). Caution: what if the ruleset I want to load does not exist?
When initializing, someone (maybe the /start controller) will add players. Caution: what if the game is 2-players and I add three?
And during the game, the controller that receives the players' movements will add desires (for example, if playing chess, "the player wants to move the queen to this tile" - which may be valid, or not).
In the picture you can see those 3 actions under control thanks to the events.
You can observe that the bundle has only Model and Tests.
In the model, we define our 2 objects: Game, and the GameManager, to get instances of Game objects.
We also define Interfaces, like for example the GameObserver, so anyone willing to receive the Game events must be a GameObserver.
Then you can see that for any action that modifies the state of the model (for example adding a player), I have 2 events: PRE and POST. See how it works:
Someone calls the $game->addPlayer( $player ) method.
As soon as we enter the addPlayer() function, the PRE event is raised.
The observers can then catch this event to decide whether the player may be added or not.
All PRE events should come with a $cancel flag passed by reference. So if some observer decides this is a game for 2 players and you try to add a 3rd one, $cancel will be set to true.
Then you are back inside the addPlayer() function, and you can check whether someone wanted to cancel the operation.
Do the operation if allowed (i.e. mutate the $this state).
After the state has been changed, raise a POST event to indicate the observers that the operation has been completed.
In the picture you see three, but of course there are a lot more. As a rule of thumb, you will have roughly 2 events per setter, 2 events per method that can modify the state of the model, and 1 event for each "unavoidable" action. So if you have 10 methods in a class that operate on it, you can expect about 15 or 20 events.
You can easily see this in the typical simple text box of any graphic library of any operating system: typical events will be gotFocus, lostFocus, keyPress, keyDown, keyUp, mouseDown, mouseMove, etc...
Particularly, in your example
The Person will have something like preChangeAge, postChangeAge, preChangeName, postChangeName, preChangeLastName, postChangeLastName, in case you have setters for each of them.
For long-living actions like "person, walk for 10 seconds" you maybe have 3: preStartWalking, postStartWalking, postStopWalking (in case a stop after 10 seconds cannot be programmatically prevented).
If you want to simplify, you can have two single preChanged( $what, & $cancel ) and postChanged( $what ) events for everything.
If you never prevent your changes from happening, you can even have one single changed() event for each and every change to your model. Then your entity will just "copy" the model properties into the entity properties on every change. This is OK for simple classes and projects, or for structures you are not going to publish for third-party consumers, and it saves some coding. If the model class becomes a core class of your project, spending a bit of time adding the full events list will save you time in the future.
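As a sketch of the PRE/POST mechanics described in this step (the GameObserver interface and the event names are assumptions):

class Game
{
    private $players = array();
    private $observers = array(); // GameObserver instances

    public function attach( GameObserver $observer )
    {
        $this->observers[] = $observer;
    }

    public function addPlayer( $player )
    {
        // 1. Raise the PRE event, with a cancel flag passed by reference.
        $cancel = false;
        foreach( $this->observers as $observer )
        {
            $observer->preAddPlayer( $this, $player, $cancel );
        }

        // 2. Some observer vetoed the operation (e.g. the game is full).
        if( $cancel )
        {
            return false;
        }

        // 3. Mutate the internal state.
        $this->players[] = $player;

        // 4. Raise the POST event: the change already happened, no cancel here.
        foreach( $this->observers as $observer )
        {
            $observer->postAddPlayer( $this, $player );
        }

        return true;
    }
}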
Step 5: Catch the events from the data layer.
It is at this point that your data-layer bundle enters in action!!!
Make your data layer an observer of your model. When the model changes its internal state, make your Entity "copy" that state into the entity state.
In this case, the MVC acts as expected: The Controller, operates on the Model. The consequences of this are still hidden from the controller (as the controller should not have access to Doctrine). The model "broadcasts" the operation made, so anyone interested knows, which in turn triggers that the data-layer knows about the model change.
Particularly, in your project
The Model/Person object will have been created by the PeopleManager. When creating it, the PeopleManager - which is a service and can therefore have other services injected - can have the ObjectStorageManager subsystem handy. So the PeopleManager can get the Entity/People that you reference in your question and add that Entity/People as an observer of Model/Person.
In the Entity/People you mainly substitute all the setters with event catchers.
You read your code like this: When the Model/Person changes its LastName, the Entity/People will be notified and will copy the data into its internal structure.
Most probably, you are tempted to inject the entity inside the model, so instead of throwing an event, you call the setters of the Entity.
But with that approach you 'break' the open/closed principle: if at any given point you want to migrate to MongoDB, you need to "change" your "entities" into "documents" inside your model. With the observer pattern, this change occurs outside the model, which never knows the nature of the observer beyond the fact that it is a PersonObserver.
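A minimal sketch of what such an event catcher could look like in Entity/People (the PersonObserver interface, the event name, and getLastName() are assumptions):

class People implements PersonObserver // Entity/People, mapped by doctrine
{
    // ... mapped fields: $firstname, $lastname, $birthday ...

    public function postChangeLastName( Person $person )
    {
        // Just copy the model state into the entity state; no business logic here.
        $this->lastname = $person->getLastName();
    }
}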
Step 6: Unit test everything
Finally, you want to unit test your software. As the pattern I have explained overcomes the anti-pattern you discovered, you can (and should) unit-test the logics of your model independently of how it is stored.
Following this pattern helps you move towards the SOLID principles, so each "unit of code" is independent of the others. This allows you to create unit tests that test the "logics" of your Model without writing to the database, as you inject a fake data-storage layer as a test double.
Let me use the game example again. I show you in the image the Game test. Assume all games can last several days and the starting datetime is stored in the database. In the example we currently test only whether getStartDate() returns a DateTime object.
There are some arrows in it, that represent the flow.
In this example, from the two injecting strategies I told you, I choose the first one: To inject into the Game model object the services it needs (in this case a BoardManager, PieceManager and ObjectStorageManager) and not to inject the GameManager itself.
First, you invoke phpunit, which will look in the Tests directory, recursively in all subdirectories, finding classes named XxxTest. It will then invoke all the methods named testSomething().
But before calling each test method, it calls setUp().
In setUp() we create some test doubles to avoid "real access" to the database while still correctly testing the logics in our model. In this case, a double of my own data-layer manager, ObjectStorageManager.
It is assigned to a temporary variable for clarity...
...that is stored in the GameTest instance...
...for later use in the test itself.
The $sut (system under test) variable is then created with a new, not via a manager. Do you remember that I said tests were an exception? If you used the manager here (you still can), it would not be a unit test but an integration test, because it would test two classes: the manager and the game. In the new we fake all the dependencies the model has (like a board manager and a piece manager). I am hardcoding GameId = 1 here; this relates to data-persistence, see below.
We may then call the system under test (a simple Game model object) to test its internals.
I am hardcoding "Game id = 1" in the new. In this case we are only testing that the returned type is a DateTime object. But if we also wanted to test that the date it gets is the proper one, we could "tune" the ObjectStorageManager (data-persistence layer) mock to return whatever we want from the internal call; we could test, for example, that when I request the date from the data layer for game=1 the date is 1st Jun 2014 and for game=2 the date is 2nd Jun 2014. Then in testGetStartDate I would create 2 instances, with ids 1 and 2, and check the content of the result.
Particularly, in your project
You will have a Tests/Model/PersonTest unit test that plays with the logics of the person, and in case you need a person from the database, you fake it through the mock.
In case you want to test the storing of the person to the database, it is enough to unit-test that the event is thrown, no matter who listens to it. You can create a fake listener, attach it to the event, and when postChangeAge happens, set a flag and do nothing (no real database storage). Then you assert that the flag is set.
In short:
Do not confuse logics and data-persistence. Create a Model that has nothing to do with entities and put all the logics in it.
Never use new to get your models from any consumer. Use factory services instead. Pay special attention to avoiding new in controllers and commands. Exception: the unit test is the only consumer that may use a new.
Inject the services you need in the Model via the factory, which in turn receives it from the services.yml configuration file.
Throw events for everything. When I say everything, I mean everything. Just imagine you are observing the model: what would you like to know? Add an event for it.
Catch the events from controllers, views, commands and other parts of the model, but, especially, catch them in the data-storage layer, so you can "copy" the object to disk without being intrusive to the model.
Unit test your logics without depending on any real database. Attach the real database storage system in production and attach a dummy implementation for your tests.
It seems like a lot of work, but it is not; it is a matter of getting used to it. Just think about the "objects" you need, create them, and make the data layer a "monitor" of your objects. Then your objects are free to run, decoupled. Create the model from a factory, inject any needed service into the model, and leave the data alone.
Edit apr/2016 - Separating Domain from Persistence
All occurrences of the word entity in this answer refer to "doctrine entities", which is what causes the confusion for the majority of coders between the model layer and the persistence layer, which should always be kept distinct.
Doctrine is infrastructure, so doctrine is outside the model by definition.
Doctrine has entities. So, by definition, then doctrine entities are also outside the model.
Instead, the increasing popularity of the DDD building blocks makes it necessary to clarify my answer even more, as DDD uses the word Entity within the model too.
Domain entities (not Doctrine entities) are similar to what I refer in this answer to Domain objects.
In fact, there are many types of Domain objects:
Domain entities (different from the Doctrine entities).
Domain value objects (could be thought similar to basic types, with logic).
Domain events (also distinct from those Symfony events and also different from the Doctrine events).
Domain commands (different from those Symfony command line controller-like helpers).
Domain services (different from the Symfony framework services).
etc.
Therefore, take all my explanation as this: when I say "Entities are not model objects" just read "Doctrine entities are not Domain entities".
Edit jun/2019 - CQRS+ES analogy
The ancients already used persistent-history methods to record things (for example, placing marks on a stone to register transactions).
For about a decade, the CQRS+ES approach (Command Query Responsibility Segregation + Event Sourcing) has been growing in popularity in programming, bringing the idea that "history is immutable" to the programs we code, and today many coders think of separating the command side vs the query side. If you don't know what I'm talking about, no worries, just skip the next paragraphs.
The growing popularity of CQRS+ES in the last 3 or 4 years makes me add a comment here on how it relates to what I answered 5 years ago:
This answer was thought as 1 single model, not a write-model and a read-model. But I'm happy to see many overlapping ideas.
Think of the PRE events, I mention here, as the "commands and the write-model". Think of the POST events as the "Event Sourcing part going towards the read-model".
In CQRS you can easily find that "commands can be accepted or not" as a function of the internal state. Usually one implements this by throwing exceptions, but there are other alternatives, like answering whether the command was accepted or not.
For example, in a "Train" I can "set it to speed X". But if the train is on a rail that cannot go faster than 80 km/h, then setting it to 200 should be rejected.
This is ANALOGOUS to the cancel boolean passed by reference, where an entity can just "reject" something PRIOR to its state change.
The POST events, instead, do not carry the "cancel" flag and are thrown AFTER the state change happened. This is why you cannot cancel them: they talk about the "state change that actually occurred", and that cannot be cancelled - it already happened.
So...
In my answer of 2014, the "pre" events match the "command acceptance" of CQRS+ES systems (the command can be accepted or rejected), and the "post" events match the "domain events" of CQRS+ES systems (they just inform that the change actually happened; do whatever you want with that information).
You already mentioned a very good point: instances of class Person are not the only things that can have an age. BlogArticles can also age, along with many other types. If you're using PHP 5.4+ you can utilize traits to add little pieces of functionality instead of having service objects from the container (or you can combine the two).
Here is a quick mockup of what you could do to make it very flexible. This is the basic idea:
Have one age calculating trait (Aging)
Have a specific trait which can return the appropriate field ($birthdate, $createdDate, ...)
Use the trait inside your class
Generic
trait Aging {
public function getAge() {
return $this->calculate($this->start());
}
public function calculate($startDate) { ... }
}
For person
trait AgingPerson {
use Aging;
public function start() {
return $this->birthDate;
}
}
class Person {
use AgingPerson;
private $birthDate = '1999-01-01';
}
For blog article
// Use for articles, pages, news items, ...
trait AgingContent {
use Aging;
public function start() {
return $this->createdDate;
}
}
class BlogArticle {
use AgingContent;
private $createdDate = '2014-01-01';
}
Now you can ask any instance of the above classes for their age.
echo (new Person())->getAge();
echo (new BlogArticle())->getAge();
Finally
If you need type hinting, traits won't do you any favors. In that case you will need to provide an interface and let every class that uses the trait implement it (the actual implementation is the trait, but the interface enables type hinting).
interface Ageable {
public function getAge();
}
class Person implements Ageable { ... }
class BlogArticle implements Ageable { ... }
function doSomethingWithAgeable(Ageable $object) { ... }
This may seem like a lot of hassle, when in reality it's much easier to maintain and extend this way.
A big part of the problem is that there is no easy way to inject dependencies when the object is loaded from the database.
$person = $personRepository->find(1); // How to get the age service injected?
One solution might be to pass the age service as an argument.
$ageCalculator = $container->get('age_service');
$person = $personRepository->find(1);
$age = $person->calcAge($ageCalculator);
But really, you would probably be better off just adding the age stuff to your Person class. Easier to test and all that.
It sounds like you might have some output formatting going on? That sort of thing should probably be done in Twig; getAge() should really just return a number.
Likewise, your date of birth should really be a date object, not a string.
You are right, it's generally discouraged. However, there are several approaches to extending the functionality of an entity beyond the purpose of a data container. Of course, all of them can be considered (more or less) bad practice... but somehow you've got to get the job done, right?
You can indeed create an AbstractEntity super class, from which all other entities inherit. This AbstractEntity would contain helper methods that other entities may need.
You can work with custom Doctrine repositories, if you need an entity context to work with an entity manager and return “more special” results than what the common getters would give you. As you have access to the entity manager in a repository, you can perform all kinds of special queries.
You can write a service that is in charge of the entity/entities in question. Downside: you cannot ensure that other parts of your code (or other developers) know about this service. Advantage: there's no limit to what you can do, and it's all nicely encapsulated.
You can work with Lifecycle Events/Callbacks.
If you really need to inject services into entities, you could consider setting a static property on the entity and only setting it once in a controller or a dedicated service. Then you don't need to take care of it on each initialization of an object. This could be combined with the AbstractEntity approach.
As mentioned before, all of these have their advantages and disadvantages. Pick your poison.

Pass list of ids between forms

I have an ASP.NET c# project.
I have to pass a list of values (id numbers such as "23,4455,21,2,765,...") from one form to another. Since a QueryString is not possible because the list could be long, what is the best way to do it?
Thanks in advance.
Thanks for all your answers, you are helping a lot !!!
I decided to do this:
On the first form:
List<int> lRecipients = new List<int>();
.....
Session["Recipients"] = lRecipients;
On the final form:
List<int> lRecipients = (List<int>)Session["Recipients"];
Session.Remove("Recipients");
You could use the Session collection.
In the first page, use:
List<int> listOfInts = new List<int>();
...
Session["someKey"] = listOfInts;
And in the second page, retrieve it like this:
List<int> listOfInts = Session["someKey"] as List<int>;
If you're using ASP.NET WebForms you can put it into a session variable to pass stuff from page to page. You've got to be conscious of the potential performance issues of putting lots of stuff into session, mind.
Session["ListOfStuff"] = "15,25,44,...";
There are any number of ways to pass this data. Which you choose will depend on your environment.
Session state is useful, but is constrained by the number of concurrent users on the system and the amount of available memory on the server. Consider this when deciding whether or not to use Session state. If you do choose session state for this operation, be sure to remove the data when you're done processing the request.
You could use a hidden input field, with runat="server" applied to it. This will make its data available server-side, and it will only last for the duration of the request. The pros of this technique are that it's accessible to both the server code and the client-side JavaScript. However, it also means that the size of your request is increased, and it may take more work to get the data where you want it (and back out).
Depending on how much data is involved, you could implement a web service to serialize the data to a temporary storage medium (say, a database table) and get back a "request handle". Then you could pass the request handle on the query string to the next form, which would use the handle to fetch the data from your medium.
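As a sketch, assuming a hypothetical IdListStore that persists the list to a temporary table keyed by a GUID handle:

// First form: persist the ids and pass only a short handle on the query string.
List<int> ids = new List<int> { 23, 4455, 21, 2, 765 };
Guid handle = IdListStore.Save(ids); // hypothetical temporary storage
Response.Redirect("SecondForm.aspx?handle=" + handle);

// Second form: use the handle to fetch the ids back out, then clean up.
Guid handle = new Guid(Request.QueryString["handle"]);
List<int> ids = IdListStore.Load(handle);
IdListStore.Delete(handle);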
There are all kinds of different ways to deal with this scenario, but the best choice will depend on your environment, time to develop, and costs.
For ASP.NET MVC you can use ViewData.
ViewData["ID"] = "";

Flex-Cairngorm/Hibernate - Is EAGER fetching strategy pointless?

I will try to be as concise as possible. I'm using Flex/Hibernate technologies for my app. I also use the Cairngorm micro-architecture for Flex. Because I'm a beginner, I have probably misunderstood something about Cairngorm's ModelLocator purpose. I have the following problem...
Suppose that we have next data model:
USER 1 ----------------> M TOPIC 1 -------------> M COMMENT
A user can start many topics, and topics can have many comments, etc. It is a pretty simple model, just for example. In Hibernate, I use the EAGER fetching strategy for the unidirectional USER->TOPIC and TOPIC->COMMENT relations (this is no question about best practices etc., just an example of the problem).
My ModelLocator looks like this:
...
public class ModelLocator ....
{
//private instance, private constructor, getInstance() etc...
...
//app state
public var users:ArrayCollection;
public var selectedUser:UserVO;
public var selectedTopic:TopicVO;
}
Because I use eager fetching, I can 'walk' through the whole object graph on my Flex client without hitting the database. This is OK as long as I don't need to insert, update, or delete any of the domain instances. But when that comes, problems with synchronization arise.
For example, if I want to show details about some user from a UserListView: when the user (actor) selects that user in the list, I take the selected index in the UserList, get the element from the users ArrayCollection in the ModelLocator at that index, and show the details of the selected user.
When I want to insert a new User: OK, I save that user in the database, and in the IResponder result method I add that user to the ModelLocator.users ArrayCollection.
But when I want to add a new topic for some user, if I still want the convenience of EAGER fetching, I need to reload the user list again... and add the topic to the selected user... and if the user appears in some other location (indirectly), I need to insert the topic there too.
Update is even worse. In that case I even need to write some logic...
My question: is this a good way of using the ModelLocator in Cairngorm? It seems to me that, because of the above, EAGER fetching is somehow pointless: synchronization on the Flex client can become a big problem. Should I always hit the database in order to manipulate my domain model?
EDIT:
It seems that I didn't make myself clear enough. Excuse me for that.
OK, I also use Spring in the technology stack, and the DTO(DVO) pattern with the flex/spring (de)serializer, but I wanted to stay out of that because I'm trying to point out how you stay synchronized with the database state in your Flex app. I won't even mention the multi-user scenario and the polling/pushing topic, which is maybe my solution, because I use a standard request-response mechanism. I didn't provide concrete code because this seems like a conceptual problem to me, and I use standard Cairngorm terms to explain the pseudo-names I use for class names, variable names, etc.
I'll try to 'simplify' again: you have a Flex client for administration of the above-mentioned domain (CRUD for each of the domain classes); you have ListOfUsersView (shows a list of users with basic info about them), UserDetailsView (shows user details and a list of the user's topics with a delete option for each topic), InsertNewUserTopicView (a form to insert a new topic), etc.
Each view which displays some info is synchronized with ModelLocator state variables, for example:
ListOfUsersView ------binded to------> users:ArrayCollection in ModelLocator
UserDetailsView ------binded to------> selectedUser:UserVO in ModelLocator
etc.
View state transition look like this:
ListOfUsersView----detailsClick---->UserDetailsView---insertTopic--->InsertTopicView
So when I click the "Details" button in ListOfUsersView, my logic gets the index of the selected row in ListOfUsers, then takes the UserVO object from users:ArrayCollection in the ModelLocator at that index, then sets that UserVO object as selectedUser:UserVO in the ModelLocator, and then changes the view state to UserDetailsView (which shows the user details and selectedUser.topics), which is synchronized with selectedUser:UserVO in the ModelLocator.
Now I click the "Insert new topic" button on UserDetailsView, which brings up the InsertTopicView form. I enter some data, click "Save topic" (after a successful save, UserDetailsView is shown again), and the problem arises.
Because of my EAGER-ly fetched objects, I didn't hit the database in the mentioned transitions, and because of that there are two places I need to be concerned about when inserting a new topic for the selected user: one is the instance of the selectedUser object in users:ArrayCollection (because my logic selects users from that collection and shows them in UserDetailsView), and the second is selectedUser:UserVO (in order to sync the UserDetailsView which is shown after the successful save operation).
So, again, my question arises... Should I hit the database on every transition? Should I reload users:ArrayCollection and selectedUser:UserVO after the save in order to synchronize the database state with the Flex client? Should I take the saved topic and, on the client side, without hitting the database, programmatically update every place that needs it, or...?
It seems to me that EAGER-ly fetched objects with their associations are not a good idea. Am I wrong?
Or, to 'simplify' :) again: what would you do in the mentioned scenario? You need to handle the click on the "Save topic" button, and now what...?
Again, I am really trying to explain this as plainly as possible because I'm confused by it. So please forgive me for my long post.
From my point of view the point isn't the fetching mode itself but the client/server interaction. From my previous experience with it, I've found some disadvantages of using pure domain objects (especially with eager fetching) for client/server interaction:
You have to pass all the child collections, perhaps without any necessity to use them on the client side. In your case it is very likely you'll display topics and comments not for all the users you get from the server. The most likely situation is that you need to display the user list, then display topics for one selected user, and then comments for one selected topic. But in the current implementation you receive all the topics and comments even when they are not needed for display. It is quite possible you'll receive your whole DB in a single query.
Another problem is that it can be very insecure to send all the user data with all fields (emails, addresses, passwords, credit card numbers, etc.).
I think there can be other reasons not to use pure domain objects especially with eager fetching.
I suggest you introduce a Mapper (or Assembler) layer to convert your domain objects to Data Transfer Objects, aka DTOs. Every query to your service layer will then receive data from your DAO or Active Record and convert it to the corresponding DTO using the corresponding Mapper. This way you can get the user list without private data and query additional user details with a separate query.
On the client side you can use these DTOs directly or convert them into client domain objects. You can do this in your Cairngorm responders.
This way you can avoid a lot of your client side problems which you described.
For the Mapper layer you can use the Dozer library or create your own lightweight mappers.
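A minimal sketch of such a hand-written mapper on the (Java/Hibernate) server side - the DTO classes and field names are assumptions:

public class UserMapper {

    // Only the fields the user list actually displays; no topics, no private data.
    public SimpleUserRepresentationDTO toSimpleRepresentation(User user) {
        SimpleUserRepresentationDTO dto = new SimpleUserRepresentationDTO();
        dto.setId(user.getId());
        dto.setFirstName(user.getFirstName());
        dto.setLastName(user.getLastName());
        return dto;
    }

    // A richer view, queried separately only when the details screen needs it.
    public UserDetailsDTO toDetails(User user) {
        UserDetailsDTO dto = new UserDetailsDTO();
        dto.setId(user.getId());
        dto.setFirstName(user.getFirstName());
        dto.setLastName(user.getLastName());
        dto.setEmail(user.getEmail());
        return dto;
    }
}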
Hope this helps!
EDIT
As for the details: I'd prefer to get the user list with only the necessary displayable fields, like first name and last name (to display in the list) - say, a list of SimpleUserRepresentationDTO.
Then, if the user requests user details for editing, you request a UserDetailsDTO for that user and fill your model's selectedUser fields with it. The same goes for topics.
The only problem is displaying the list of users after the user details have been edited. You can:
Request the whole list again. The advantage is that you can display changes performed by other users. But if the list is too long, it can be very ineffective to query all users each time, even if they are SimpleUserRepresentationDTOs with minimal data.
When you get success from the server on saving the user details, find the corresponding user in the model's user list and replace the changed details there.
To tell you the truth, there's no good way of using Cairngorm. It's a crap framework.
I'm not too sure exactly what you mean by eager fetching (or what exactly your problem is), but whatever it is, it's still a request/response kind of deal, and it shouldn't be a problem per se unless you're doing something wrong; in which case, I can't see your code.
As for frameworks, I recommend you look at RobotLegs or Parsley.
Look at the "dpHibernate" project. It implements "lazy loading" on the Flex client.

NHibernate compromising domain objects

I'm writing an ASP.NET MVC application using NHibernate as my ORM. I'm struggling a bit with the design though, and would like some input.
So, my question is where do I put my business/validation logic (e.g., email address requires #, password >= 8 characters, etc...)?
So, which makes the most sense:
Put it on the domain objects themselves, probably in the property setters?
Introduce a service layer above my domain layer and have validators for each domain object in there?
Maintain two sets of domain objects. One dumb set for NHibernate, and another smart set for the business logic (and some sort of adapting layer in between them).
I guess my main concern is with putting all the validation on the domain objects used by NHibernate. It seems inefficient to have unnecessary validation checks run every time I pull objects out of the database. To be clear, I think this is a real concern, since this application will be very demanding (think millions of rows in some tables).
Update:
I removed a line with incorrect information regarding NHibernate.
To clear up a couple of misconceptions:
a) NHib does not require you to map onto properties. Using access strategies you can easily map onto fields. You can also define your own custom strategy if you prefer to use something other than properties or fields.
b) If you do map onto properties, getters and setters do not need to be public. They can be protected or even private.
Having said that, I totally agree that domain object validation makes no sense when you are retrieving an entity from the database. As a result of this, I would go with services that validate data when the user attempts to update an entity.
My current project is exactly the same as yours: MVC for the front end and NHibernate for persistence. Currently, my validation is in a service layer (your option 2). But while I'm coding I have the feeling that my code is not as clean as I'd wish. For example:
public class EntityService
{
public void SaveEntity(Entity entity)
{
if( entity.Property1 == something )
{
throw new InvalidDataException();
}
if( entity.Property2 == somethingElse )
{
throw new InvalidDataException();
}
...
}
}
This makes me feel that the EntityService is a "God Class": it knows way too much about the Entity class, and I don't like it. To me, it feels much better to let the Entity classes worry about themselves. But I also understand your concern about the NHibernate performance issue. So my suggestion is to implement the validation logic in the setters and use field access for the NHibernate mapping.
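A sketch of that suggestion using NHibernate's mapping-by-code (the same is achievable in XML mappings with access="field"); the entity and the rule are illustrative:

using System.IO;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Entity
{
    private string email; // backing field NHibernate will hydrate directly

    public virtual string Email
    {
        get { return email; }
        set
        {
            // Validation runs only when application code assigns the property...
            if (!value.Contains("@"))
                throw new InvalidDataException("Email address must contain '@'.");
            email = value;
        }
    }
}

public class EntityMap : ClassMapping<Entity>
{
    public EntityMap()
    {
        // ...because NHibernate bypasses the setter and writes the field directly,
        // so no validation cost is paid when loading millions of rows.
        Property(x => x.Email, m => m.Access(Accessor.Field));
    }
}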
