In Domain Driven Design should you Add/Update Entities through the AggregateRoot? - .net-core

If you have a Vendor with a list of Contacts, in DDD which is the better approach for adding a contact to a Vendor?
Here's some sample C# code using a CQRS command.
Given the following command, how should we implement adding a Contact to a Vendor?
class AddVendorContactCommand
{
    public string VendorId { get; set; }
    public string ContactName { get; set; }
}
Should we add a contact through the Vendor:
async Task AddVendorContactHandler(AddVendorContactCommand command)
{
    var vendor = await dbContext.Vendors.FindAsync(command.VendorId);
    vendor.AddContact(command.ContactName);
    await dbContext.SaveChangesAsync();
    // doesn't require a DbSet for VendorContacts???
}
Or should we reference the VendorContact and bypass the Vendor entirely.
async Task AddVendorContactHandler(AddVendorContactCommand command)
{
    // handler
    var newVendorContact = new VendorContact(command.VendorId, command.ContactName);
    dbContext.VendorContacts.Add(newVendorContact);
    await dbContext.SaveChangesAsync();
    // requires a DbSet for VendorContacts
}
I feel like the better approach is to go through the Vendor, but that requires our AddVendorContactHandler to read from the database first, and CQRS guidance generally suggests avoiding reads on the command side. The second approach, using VendorContacts directly, will perform better than going through the Vendor.
Arguments for going through the Vendor are the following:
What if the Vendor doesn't exist?
What if the Vendor isn't allowed any more contacts?
What if the Vendor is deleted, disabled or otherwise read-only?
What's the correct DDD approach?

First, as a developer, I'm obligated to say there is no single correct approach to anything.
Now that that is out of the way: given the information you have provided, I'm going to assume that the Vendor entity you described can (and in my opinion should) be the Aggregate Root. With that in mind, I would definitely go with the first option you described.
I think you have a misconception about CQRS Commands. It is perfectly fine to read from the database inside command handlers. The thing you have to avoid is fetching data from the query side, which could be a totally different database.
You are also correct that you won't need a DbSet<> for the VendorContact entity, and you should keep it that way on the command side, as you want to protect the invariants inside your Vendor Aggregate Root.
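To make that concrete, here is a minimal sketch of what going through the aggregate can look like. The MaxContacts limit, the IsDisabled flag and the exception messages are illustrative assumptions, not rules taken from your question:

public class Vendor
{
    private const int MaxContacts = 10; // assumed business rule, purely for illustration
    private readonly List<VendorContact> _contacts = new();

    public string Id { get; private set; }
    public bool IsDisabled { get; private set; }
    public IReadOnlyCollection<VendorContact> Contacts => _contacts.AsReadOnly();

    public void AddContact(string contactName)
    {
        // The aggregate root is the single place these rules are enforced;
        // this is exactly what the bypassing approach gives up.
        if (IsDisabled)
            throw new InvalidOperationException("Vendor is disabled or read-only.");
        if (_contacts.Count >= MaxContacts)
            throw new InvalidOperationException("Vendor cannot take more contacts.");

        _contacts.Add(new VendorContact(Id, contactName));
    }
}

Because EF Core persists _contacts through the Vendor navigation property, saving the Vendor also saves the new contact, which is why no separate DbSet is needed.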

Related

Validating a Contact has a Unique Email in Axon

I am curious to understand what the best-practice approach is when using the Axon Framework to validate that an email field is unique across the Set of emails for the Contact Aggregate.
Example setup
ContactCreateCommand {
    identifier = '123'
    name = 'ABC'
    email = 'info@abc.com'
}
ContactAggregate {
    ContactAggregate(ContactCreateCommand cmd) {
        // 1. cannot validate email
        AggregateLifecycle.apply(
            new ContactCreatedEvent(/* fields... */));
    }
}
From my understanding of how this might be implemented, I have identified a number of possible ways to handle this, but perhaps there are more.
1. Do nothing in the Aggregate
This approach requires that the invoker (of the command) does a query to find Contacts by email prior to sending the command, leaving a window of some milliseconds where eventual consistency allows for duplication.
Drawbacks:
Any "invoker" of the command would then be required to perform this validation check, as it's not possible to do this check inside the Aggregate using an Axon Query Handler.
Duplication can occur, so all projections based on these events need to handle this duplication somehow.
2. Validate in a separate persistence layer
This approach introduces a new persistence layer that would be used to validate uniqueness from inside the aggregate.
Inside the ContactAggregate command handler for ContactCreateCommand we can then issue a query against this persistence layer (e.g. a table in Postgres with a unique index on it) and validate the email against this database, which contains the full set of emails.
Drawbacks:
Introduces an external persistence layer (external to the microservice) to guarantee uniqueness across Contacts
Scaling should be considered in the persistence layer; hitting it from a highly scaled aggregate could prove a bottleneck
3. Use a Saga and Singleton Aggregate
This approach enhances the previous setup by introducing an Aggregate that can have at most one instance (e.g. the target identifier is always the same). This way we create a 'Singleton Aggregate' whose sole responsibility is to encapsulate the Set of all Contact email addresses.
ContactEmailValidateCommand {
    identifier = 'SINGLETON_ID_1'
    email = 'info@abc.com'
    customerIdentifier = '123'
}
UniqueContactEmailAggregate {
    @AggregateIdentifier
    private String identifier;
    Set<String> emails = new HashSet<>();

    on(ContactEmailValidateCommand cmd) {
        if (emails.contains(cmd.email)) {
            AggregateLifecycle.apply(
                new ContactEmailInvalidatedEvent(/* fields... */));
        } else {
            AggregateLifecycle.apply(
                new ContactEmailValidatedEvent(/* fields... */));
        }
    }
}
After this check, we could then react appropriately to the ContactEmailInvalidatedEvent or ContactEmailValidatedEvent, which might invalidate the contact afterwards.
The benefit of this approach is that it keeps the persistence local to the Aggregate, which could give better scaling (as more nodes are added, more aggregates with locally managed Sets exist).
Drawbacks
Quite a lot of boilerplate to replace "create unique index"
This approach allows an 'invalid' Contact to pollute the Event Store forever
The 'Singleton Aggregate' is complex to keep a true singleton (perhaps there is a simpler or better way)
The 'invoker' of the CreateContactCommand must check the outcome of the Saga
What do others do to solve this? I feel option 2 is perhaps the simplest approach, but are there other options?
What you are essentially looking for is Set-Based Validation (I think this blog does a nice job explaining the concept and how to deal with it in Axon). In short: validating that some field is (or is not) contained in a set of data. When doing CQRS, this becomes a somewhat interesting concept to reason about, with several solutions out there (as you've already portrayed).
I think the best solution is summarized under your second option: use a dedicated persistence layer for the email addresses. You'd simply create a very concise model containing just the email addresses, which you would validate prior to issuing the ContactCreateCommand. Note that this persistence layer belongs to the Command Model, as it is used to perform business validation. You'd thus introduce a setup where you not only have Aggregates in your Command Model, but also Views. And as you've rightfully noted, this View needs to be optimized for its use case, of course. Maybe introducing a cache which is populated on application start-up wouldn't be too bad.
To ensure this email-addresses view is as up to date as possible, it's smartest to update it in the same transaction in which the ContactCreatedEvent (which contains the new email address, I assume) is published. You can do this by having a dedicated Event Handling Component for your "Email Addresses View" which is updated through a SubscribingEventProcessor (a SEP). This works because the SEP is invoked by the same thread that publishes the event (your aggregate).
You have a couple of options when it comes to querying this model prior to sending the command. You could use a MessageDispatchInterceptor which only reacts to the ContactCreateCommand, for example. Or you introduce a Handler Enhancer dedicated to reacting to the ContactCreateCommand to perform this validation. Or you introduce another command, like RequestContactCreationCommand, which is targeted at a regular component. This component would handle the command, validate the model and, if approved, dispatch a ContactCreateCommand.
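Axon itself is Java, but the pattern is framework-agnostic. As a rough sketch in C# (matching the first question in this thread; EmailView, Contacts, EmailViewEntry and DuplicateEmailException are assumed names for illustration, not Axon API), the command-side check plus the same-transaction view update could look like:

// Sketch only: all entity and exception names here are illustrative assumptions.
async Task Handle(CreateContactCommand command)
{
    // 1. Business validation against the small, command-side email view.
    bool taken = await dbContext.EmailView
        .AnyAsync(e => e.Email == command.Email);
    if (taken)
        throw new DuplicateEmailException(command.Email);

    // 2. Create the contact and update the view in the same transaction,
    //    so the view can never lag behind the change it validates.
    dbContext.Contacts.Add(new Contact(command.Identifier, command.Name, command.Email));
    dbContext.EmailView.Add(new EmailViewEntry(command.Email));
    await dbContext.SaveChangesAsync();
}

A unique index on the email-view column acts as a backstop for the race between the check in step 1 and the commit in step 2.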
That's my two cents on the situation, hope this helps @vcetinick!

Symfony2 - Doctrine2 store changeset for later (or alternative solution to approve changes)

I have several entities, each with its own form type. Instead of saving the entity straight away on save, I want to store a copy of the proposed changes in the DB.
We'd then send a message to the user who can approve the change; they would review the original and the changed field(s) and approve or reject. If approved, the entity would be properly flushed.
To solve the issue I was thinking about:
1) doing a persist
2) getting the changesets (both the one related to "normal" fields and the one related to collections)
3) storing them in the DB
4) Performing $em->refresh() to discard changes.
Later, what I need is to get the changeset(s) back, ask the (other) user to approve them, and flush.
Is this doable? What I'm especially concerned about is that the entity manager that generated the first changeset is not the same one we are going to use to perform the flush; I basically need to "load" a changeset.
Any ideas on how to solve the issue (this way, or another way ;))?
Another solution (working only for "normal" fields, not reference ones that point from other entities to the current one, like a many-to-many) would be to clone the current entity, store it, and then once approved copy the field(s) from the clone to the original. But since that does not work for all fields, if the first solution is not feasible we'd limit the feature to "normal" fields.
Thank you!
SN
Well, you could just treat the modifications as entities themselves, so that every change is stored in the database, and then all the changes that were approved are executed against the entity.
So, for example, if you have some Books stored in the database and you want to make sure that all modifications made to them are approved, just add a model that contains the changeset to be processed, and a handler that applies those changes:
<?php

class UpdateBookCommand
{
    // If you'll store these commands in a database, perhaps this field
    // would be a relation, or you could just store the ID
    public $bookId;
    public $newTitle;
    public $newAuthor;

    // Perhaps this field should be somehow protected from unauthorized changes
    public $isApproved;
}

class UpdateBookHandler
{
    private $bookRepository;
    private $em;

    public function __construct($bookRepository, $em)
    {
        $this->bookRepository = $bookRepository;
        $this->em = $em;
    }

    public function handle(UpdateBookCommand $command)
    {
        if (!$command->isApproved) {
            throw new NotAuthorizedException();
        }

        $book = $this->bookRepository->find($command->bookId);
        $book->setTitle($command->newTitle);
        $book->setAuthor($command->newAuthor);

        $this->em->persist($book);
        $this->em->flush();
    }
}
Next, in your controller you would just have to make sure that the commands are stored somehow (in a database or maybe even in a message queue), and that the handler gets called when the changesets should be applied.
P.S. Perhaps I could have explained this a bit better, but mostly the inspiration for this solution comes from the CQRS pattern that's explained quite well by Martin Fowler. However, I guess in your case a full-blown CQRS implementation is unnecessary and a simpler solution should work.

Working with 2 databases and just one entity manager in Symfony2

I need to keep an archive database to an application in Symfony2.
In it I'll keep all records older than 90 days. I was thinking that I could use just one entity manager (because both databases are identical).
First of all, I'm not sure if this is the best approach/solution.
And, besides that, I don't know how to implement this idea (I've only found examples using 2 entity managers for 2 databases).
I'm sorry if this is a dumb question, but I've been looking for some solution for it for 2 days now.
We use a distinct EM for saving history; it works fine.
The code looks like this.
Somewhere in your config...
yourpath\app\config\parameters.yml
parameters:
    database_driver: pdo_mysql
    database_host: site1.ru
    database_port: 3346
    database_name: db1
    database_user: roof
    database_password: jump

    database_history_driver: pdo_mysql
    database_history_host: site2.ru
    database_history_port: 10001
    database_history_name: history
    database_history_user: sea
    database_history_password: deep
etc...
Somewhere in your history bundle...
/**
 * We make history!
 **/
class historyController extends Controller
{
    public function showAction($historyId)
    {
        // get secondary manager
        $emHistory = $this->getDoctrine()->getManager('history');

        // get default manager
        $em = $this->getDoctrine()->getManager('default');
    }
}
Somewhere in services of history bundle
class HistoryBundleUtils
{
    protected $em;

    public function __construct($arguments)
    {
        // get secondary manager
        $this->em = $arguments['entityManager']->getManager('history');
        # etc...
    }
}
This isn't possible: each Entity Manager can only use one DB connection; the docs seem quite clear about it.
So I think you'll be stuck with using two EMs. Each will be configured with a duplicate set of your mappings. The detail of how you use them is up to you though:
You could just manually choose one or the other as required in your app
You could abstract it away into a class of your own which holds both EMs and then, when you run queries, decides where to get the data from (and possibly how to combine data from both EMs)
If the only activity which really needs both EMs is the archive process itself, that's an obvious thing to hide away in a class nicely
I suppose it also depends on what the point of the archive DB is. If it's an architectural requirement, such as needing to live on a different server, then you're stuck as above. On the other hand, if you really just want old data not to show up in day-to-day queries (without specifically asking for it), then it might be better to implement some kind of "archived" flag and a Doctrine Extension which magically hides archived items until you ask for them, very similar to SoftDeleteable.
I don't know if it is good practice, but I have been using one EM for two databases successfully in Symfony2. The project I was working on required access to two databases. There are some limitations, however. First, the database_user and _password need to be the same for both databases. You can access both databases, but you can only create (with console doctrine:database:create) and write the tables of (console doctrine:schema:update) the one defined in parameters.yml.
You can read, write, update, delete on both databases, but you need to specify the database name of your second database in the model, like:
@ORM\Table(name="my_other_database.my_table")
Basically, you can use one EM for two Databases, if one database already exists and you only need to access it.

Access Session from EntityRepository

I'm using Symfony2 with Doctrine2. I want to achieve the following:
$place = $this->getDoctrine()->getRepository('TETestBundle:Place')->find($id);
That place will carry the place info (common data + texts) in the user's language (from the session). As I am going to do this hundreds of times, I want the locale passed behind the scenes, not as a second parameter. So an English user will view the place info in English and a Spanish user in Spanish.
One possibility is to access the locale of the app from an EntityRepository. I know it's done with services and DI but I can't figure it out!
// PlaceRepository
class PlaceRepository extends EntityRepository
{
    public function find($id)
    {
        // get locale somehow
        $locale = $this->get('session')->getLocale();

        // do a query with the locale in session
        return $this->_em->createQuery(...);
    }
}
How would you do it? Could you explain with a bit of detail the steps and new classes I have to create & extend? I plan on releasing this Translation Bundle once it's ready :)
Thanks!
I don't believe that Doctrine is a good approach for accessing session data. There's just too much overhead in the ORM to just pull session data.
Check out the Symfony 2 Cookbook for configuration of PDO-backed sessions.
Rather than setting up a service, I'd consider an approach that uses a Doctrine event listener. Just before each lookup, the listener would pick the correct locale from somewhere (session, config, or any other place you like in the future) and inject it into the query, and, like magic, your model doesn't have to know those details. It keeps your model's scope clean.
You don't want your model or Repository crossing over into the sessions directly. What if you decide in the future that you want a command-line tool with that Repository? With all that session cruft in there, you'll have a mess.
Doctrine event listeners are magically delicious. They take some experimentation, but they wind up being a very configurable, out-of-the-way solution to this kind of query manipulation.
UPDATE: It looks like what you'd benefit from most is the Doctrine Translatable Extension. It has done all the work for you in terms of registering listeners, providing hooks for how to pass in the appropriate locale (from wherever you're keeping it), and so on. I've used the Gedmo extensions myself (though not this particular one), and have found them all to be of high quality.

What's the RESTful way of attaching one resource to another?

This is one of the few times I couldn't find my question already asked here, so I'm trying to describe my problem and hoping to get some help and ideas!
Let's say...
I want to design a RESTful API for a domain model that might have entities/resources like the following:
class Product
{
    String id;
    String name;
    Price price;
    Set<Tag> tags;
}

class Price
{
    String id;
    String currency;
    float amount;
}

class Tag
{
    String id;
    String name;
}
The API might look like:
GET /products
GET /products/<product-id>
PUT /prices/<price-id>?currency=EUR&amount=12.34
PATCH /products/<product-id>?name=updateOnlyName
When it comes to updating references:
PATCH /products/<product-id>?price=<price-id>
PATCH /products/<product-id>?price=
may set the Product's Price reference to another existing Price, or delete this reference.
But how can I add a reference to an existing Tag to a Product?
If I wanted to store that reference in a relational database, I would need a join table 'products_tags' for the many-to-many relationship, which brings us to a clear solution:
POST /product_tags [product: <product-id>, tag: <tag-id>]
But a document-based NoSQL database (like MongoDB) could store this as a one-to-many-relationship for each Product, so I don't need to model a 'new resource' that has to be created to save a relationship.
But
POST /products/<product-id>/tags/ [name: ...]
creates a new Tag (in a Product),
PUT /products/<product-id>/tags/<tag-id>?name=
creates a new Tag with <tag-id> or replaces an existing Tag with the same id (in a Product),
PATCH /products/<product-id>?tags=<tag-id>
sets the Tag-list and doesn't add a new Tag, and
PATCH /products/<product-id>/tags/<tag-id>?name=...
sets a certain attribute of a Tag.
So I might want to say something like this:
ATTACH /products/<product-id>?tags=<tag-id>
ATTACH /products/<product-id>/tags?tag=<tag-id>
So the point is:
I don't want to create a new resource,
I don't want to set the attribute of a resource, but
I want to ADD a resource to another resource's attribute, which is a set. ^^
Since everything is about resources, one could say:
I want to ATTACH a resource to another.
My question: which method is the right one, and what should the URL look like?
Your REST API is an application state driver, not meant to be a reflection of your entity relationships.
As such, there's no 'if this were the case in the DB' in REST. That said, you have pretty good URIs.
You talk about IDs. What is a tag? Isn't a tag a simple string? Why does it have an id? Why isn't its id its name string?
Why not have PUT /products/<product-id>/tags/<tag-name>?
PUT is idempotent, so you are basically asserting the existence of a tag for the product referred to by product-id. If you send this request multiple times, you'd get 201 Created the first time and 200 OK the next time.
If you are building a simple system with a single concurrent user, running on a single web server with no concurrency in requests, you may stop reading now.
If someone in between goes and deletes that tag, your next PUT request would re-create the tag. Is this what you want?
With optimistic concurrency control, you would pass along the ETag a of the document every time, and return 409 Conflict if the server has a newer version b and the diff a..b cannot be reconciled. In the case of tags you are just using PUT and DELETE verbs, so you wouldn't have to diff or look at reconciliation.
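As a minimal sketch of that optimistic flow in C# (matching the first question in this thread; the URL, tag name and retry handling are illustrative assumptions):

using System.Net;
using System.Net.Http;

// Sketch only: read the resource, remember its ETag, then make the write
// conditional on that exact version with If-Match.
using var client = new HttpClient();

var get = await client.GetAsync("https://api.example.com/products/42");
var etag = get.Headers.ETag; // the version (a) we read

var put = new HttpRequestMessage(HttpMethod.Put,
    "https://api.example.com/products/42/tags/cheap");
if (etag != null)
    put.Headers.IfMatch.Add(etag); // only apply against version a

var response = await client.SendAsync(put);
if (response.StatusCode == HttpStatusCode.Conflict)
{
    // Someone produced a newer version (b) in the meantime:
    // re-fetch, reconcile, and retry if the change still makes sense.
}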
If you are building a moderately advanced concurrent system with first-writer-wins semantics, running on a single server, you can stop reading now.
That said, I don't think you have considered your transactional boundaries. What are you modifying? A resource? No, you are modifying value objects of the product resource: its tags. So, according to your model of resources, you should be using PATCH. Do you care about concurrency? Well, then you have much more to think about with regards to PATCH:
How do you represent the diff of a hierarchical JSON object?
How do you know which PATCH requests conflict in a semantic way? I.e. we may not care about DELETEs on Tags, but two other properties might interact semantically.
The RFC for HTTP PATCH says this:
With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version. The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH. PATCH is neither safe nor idempotent as defined by [RFC2616], Section 9.1.
I'm probably going to stop putting strange ideas in your head now. Comment if you want me to continue down this path a bit longer ;). Suffice to say that there are many more considerations that can be done.
