DDD: How to check an invariant across multiple aggregates - .net-core

I know that keeping large collections inside an aggregate impacts performance.
In my use case I have a STORE which can have multiple PRODUCTs, and each product can have CUSTOMIZATIONs (no more than 10-20 per product).
I thought of creating a single Store aggregate and updating products and customizations through it, but since the product collection can be large, that would hurt performance. So I have two aggregates: STORE (to create the store) and PRODUCT (with a storeId, handling all product operations). With this approach I am not able to check whether a product already exists.
What I am doing now is fetching all products by StoreId in my handler and checking for duplicates there, which is not the right way, because that check belongs in my domain model.
Does anyone have a better idea to solve this?
Below are my domain models.
public class Store : Entity<Guid>, IAggregateRoot
{
private Store()
{
}
private Store(string name, Address address) : base(System.Guid.NewGuid())
{
this.Name = name;
this.Address = address;
}
private Store(string name, Address address, ContactInfo contact) : this(name, address)
{
this.Contact = contact;
}
public string Name { get; private set; }
public Address Address { get; private set; }
public ContactInfo Contact { get; private set; }
}
public class Product : Entity<Guid>, IAggregateRoot
{
private Product()
{
}
private Product(Guid storeId, ProductInfo productInfo) : base(Guid.NewGuid())
{
this.ProductInfo = productInfo;
this.StoreId = storeId;
this.Customizations = new List<Customization>();
}
private Product(Guid storeId, ProductInfo productInfo, IEnumerable<Customization> customizations) : this(storeId, productInfo)
{
this.Customizations = customizations;
}
public ProductInfo ProductInfo { get; private set; }
private List<Customization> _customizations;
public IEnumerable<Customization> Customizations
{
get
{
return _customizations.AsReadOnly();
}
private set
{
_customizations = value?.ToList() ?? new List<Customization>();
}
}
public Guid StoreId { get; private set; }
public static Product Create(Guid storeId, ProductInfo productInfo)
{
return new Product(storeId, productInfo);
}
public void UpdateInfo(ProductInfo productInfo)
{
this.ProductInfo = productInfo;
}
public void AddCustomization(Customization customization)
{
this._customizations.Add(customization);
}
public void RemoveCustomization(Customization customization)
{
this._customizations.Remove(customization);
}
}

Well, as Jonatan Dragon correctly mentioned, and as you found in an article, you can of course use a domain service here. But taking that approach for this kind of problem risks falling into the anemic domain model trap as development continues; it is the most common way a domain layer loses its technical quality. In general, this temptation appears whenever a problem has to be solved through collaboration between objects. So whenever it is possible to avoid a domain service, it is worth looking for an answer that doesn't need one. In your case the problem can be solved without a domain service by accepting a few trade-offs around non-functional concerns (like performance) while keeping the models rich and clean.
Let's consider two common assumptions about aggregate design, to identify where we are willing to accept trade-offs in solving this problem:
1. When designing aggregates, only one aggregate's state should change within a single transactional use case. (Greg Young)
2. When designing aggregates, the only thing that should be shared between aggregates is their IDs. (Eric Evans)
It seems these two assumptions box us into solving this kind of problem only with domain services, so let's look at them more closely.
Many DDD practitioners and mentors, Nick Tune among them, treat the default transaction scope of a use case as the entire bounded context rather than a single aggregate. So assumption 1 is one place where we have some freedom to trade off.
As for assumption 2, the philosophy behind it is to share only the part of an aggregate that is invariant and never changes during the aggregate's lifespan, so that transactions don't end up locking many aggregates at once. If there is a piece of shared state that changes within a single transaction scope and has no other way of being modified, then technically there is no problem in sharing it.
Combining these two observations leads to the following answer to this problem:
Let the Store aggregate decide whether a Product aggregate gets created. In OOP terms, make the Store aggregate the factory for the Product aggregate.
public class Store : Entity<Guid>, IAggregateRoot
{
private Store()
{
}
private Store(string name, Address address) : base(System.Guid.NewGuid())
{
this.Name = name;
this.Address = address;
}
private Store(string name, Address address, ContactInfo contact) : this(name, address)
{
this.Contact = contact;
}
public Product CreateProduct(Guid storeId, ProductInfo productInfo)
{
if(ProductInfos.Contains(productInfo))
{
throw new ProductExistsException(productInfo);
}
this.ProductInfos.Add(productInfo);
return new Product(storeId, productInfo);
}
public string Name { get; private set; }
public Address Address { get; private set; }
public ContactInfo Contact { get; private set; }
public List<ProductInfo> ProductInfos {get; private set;} = new();
}
In this solution I treated ProductInfo as a value object, so checking for duplicates comes down to comparing values for equality. To ensure the Product aggregate cannot be constructed independently, you can make its constructor internal. Aggregate models are usually placed in a single assembly, and ORMs can use non-public constructors, so this causes no problem.
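For illustration, here is a minimal sketch of ProductInfo as a value object with structural equality. The Name and Sku members are assumptions; use whatever actually identifies a product in your domain.
// On C# 9+ a record gives value-based equality out of the box, which is exactly
// what the ProductInfos.Contains(...) check in CreateProduct relies on.
public sealed record ProductInfo(string Name, string Sku);

// On older language versions, override Equals/GetHashCode instead:
// public sealed class ProductInfo
// {
//     public string Name { get; }
//     public string Sku { get; }
//     public ProductInfo(string name, string sku) { Name = name; Sku = sku; }
//     public override bool Equals(object obj) =>
//         obj is ProductInfo other && Name == other.Name && Sku == other.Sku;
//     public override int GetHashCode() => HashCode.Combine(Name, Sku);
// }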
There are some points to note in this answer:
1. The Store aggregate must not use the internal parts of ProductInfo. That way ProductInfo can change freely, since its ownership belongs to the Product aggregate.
2. Because ProductInfo is a value object, storing and loading the Store aggregate is not a heavy operation, and with value-conversion techniques in ORMs the ProductInfos collection can even be reduced to a single column (a rough sketch follows at the end of this answer).
3. The Store and Product aggregates are only coupled in the product-creation use case. In every other use case they operate independently.
So with this approach you keep small, separate aggregates in 99% of use cases, and the duplicate check lives in the domain model as an invariant.
PS: This is the core idea for solving the problem. You can combine it with other patterns and techniques, such as explicit state or row versions, if required.
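On point 2, if you happen to be using EF Core, the single-column persistence could be sketched with a value conversion along these lines. The JSON serialization and the OnModelCreating wiring are assumptions, not part of the original design.
// Rough sketch only: persist the ProductInfos collection as a single JSON column.
// Uses Microsoft.EntityFrameworkCore and System.Text.Json. In a real model you
// would also register a ValueComparer so change tracking notices edits to the list.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Store>()
        .Property(s => s.ProductInfos)
        .HasConversion(
            infos => JsonSerializer.Serialize(infos, (JsonSerializerOptions)null),
            json => JsonSerializer.Deserialize<List<ProductInfo>>(json, (JsonSerializerOptions)null)
                    ?? new List<ProductInfo>());
}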

First, make sure it really impacts performance. If you really need two aggregates, you can use a Domain Service to solve your problem. Check this article by Kamil Grzybek, section "BC scope validation implementation".
public interface IProductUniquenessChecker
{
bool IsUnique(Product product);
}
// Product constructor: set the product's state first, then let the domain service validate it
public Product(Guid storeId, ProductInfo productInfo, IProductUniquenessChecker productUniquenessChecker)
{
this.StoreId = storeId;
this.ProductInfo = productInfo;
if (!productUniquenessChecker.IsUnique(this))
{
throw new BusinessRuleValidationException("Product already exists.");
}
...
}
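If you go this route, the checker itself lives in the infrastructure layer. Here is a minimal sketch of what an implementation could look like, assuming a repository abstraction; IProductRepository and GetByStoreId are hypothetical names, not something from the article.
// Hypothetical repository abstraction, used only for this sketch.
public interface IProductRepository
{
    IEnumerable<Product> GetByStoreId(Guid storeId);
}

// Infrastructure-side implementation of the domain service (uses System.Linq).
public class ProductUniquenessChecker : IProductUniquenessChecker
{
    private readonly IProductRepository _products;

    public ProductUniquenessChecker(IProductRepository products)
    {
        _products = products;
    }

    public bool IsUnique(Product product)
    {
        // A product counts as a duplicate when another product in the same
        // store already carries an equal ProductInfo value object.
        return !_products
            .GetByStoreId(product.StoreId)
            .Any(existing => existing.ProductInfo.Equals(product.ProductInfo));
    }
}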

Related

Using Backlink feature of realm-dotnet in Xamarin.Forms App

My current employer is developing a mobile app using Xamarin.Forms and ASP.NET MVC on the backend. I suggested using Realm in the mobile app. My manager wants to see a POC (proof of concept) app using Realm with the backlink feature before allowing it to be used in the app. I am working on the POC on GitHub. The documentation is very limited and the GitHub repo of realm-dotnet doesn't have good samples.
I completed the project, but I am unable to implement the backlink. The sample app I have developed allows the user to create assignees (employees) on the first page. The user can delete or edit the employees using a context menu. When the user clicks on an employee name, the app navigates to the ToDoListPage of that particular employee, where the user can create ToDoItems. On this ToDoList page I want to show only the ToDoItems that were assigned to that employee.
The models were as follows:
public class Assignee : RealmObject
{
public Assignee()
{
ToDoItems = Enumerable.Empty<ToDoItem>().AsQueryable();
}
[PrimaryKey]
public string Id { get; set; } = Guid.NewGuid().ToString();
public string Name { get; set; }
public string Role { get; set; }
[Backlink(nameof(ToDoItem.Employee))]
public IQueryable<ToDoItem> ToDoItems { get; }
}
public class ToDoItem : RealmObject
{
[PrimaryKey]
public string Id { get; set; } = Guid.NewGuid().ToString();
public string Name { get; set; }
public string Description { get; set; }
public bool Done { get; set; }
public Assignee Employee { get; set; }
}
I am adding the employee to each ToDoItem:
Item.Employee = Employee;
_realm.Add(Item);
Now I want to access the ToDoItems for the Employee:
Items = _realm.All<Assignee>().Where(x => x.Id == EmployeeId).FirstOrDefault().ToDoItems;
But this does not work. I would be grateful if someone could help me out, preferably by writing the code in my sample app or posting the correct code in a reply.
Thank you
Firstly, Realm .NET doesn't currently support traversing properties in query predicates (x.Employee.Id). Because of this, when I start the app and try to go to the ToDoListPage, it crashes with the exception:
The left-hand side of the Equal operator must be a direct access to a persisted property in Realm
Realm supports object comparison, so we can fix this like so:
var employee = _realm.Find<Assignee>(EmployeeId);
Items = _realm.All<ToDoItem>().Where(x => x.Employee == employee);
Secondly, everything else in your code seemed fine, so I dug a bit deeper to see why it isn't working. The issue is that when we try to get all items with the code above, the EmployeeId parameter is null. Since EmployeeId is populated only after the load logic has been triggered, the data shouldn't be loaded in the constructor at all, so you can remove that code.
Finally, since you won't be loading the data in the constructor but in the SetValues method instead, the UI needs to know when the data has been updated and what exactly to redraw. So you need to mark the collection as Reactive too:
[Reactive]
public IEnumerable<ToDoItem> Items { get; set; }
Then, you need to change the SetValues method to use object comparison, instead of traversing:
async Task SetValues()
{
Employee = _realm.Find<Assignee>(EmployeeId);
Title = Employee.Name;
Items = _realm.All<ToDoItem>().Where(x => x.Employee == Employee);
}
To sum up: you don't need to try to load the data in the constructor, since you don't know when EmployeeId will be set. You are already tracking when the property changes, and inside the SetValues command you simply need to change the expression predicate to use object comparison instead of traversing.

EF 4.1 - Add items to collection property that is virtual

I'm using EF 4.1 code first. Given the following class snippet:
public class Doctor
{
public virtual ICollection<Hospital> Hospitals { get; set; }
}
Note: I have this in the database context:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
this.Configuration.LazyLoadingEnabled = false;
}
I wanted to make sure that lazy loading is not involved here.
The issue I have is that, without the virtual keyword on the Hospitals property, when I retrieve a doctor that does have a hospital associated with him, the collection is empty.
By including the virtual keyword, the hospitals collection does contain 1 item, which is what I expect.
The problem is that, when I want to create a brand new doctor and associate him with a hospital immediately, I get a Null reference exception, since the Hospitals property has not been initialised yet.
Can someone point out what I'm doing wrong here? How can I add items to the Hospitals upon creating a new doctor.
Cheers.
Jas.
Your code is what you usually see in most examples, but to make this work the following is much better:
public class Doctor
{
private ICollection<Hospital> _hospitals;
public virtual ICollection<Hospital> Hospitals
{
get { return _hospitals ?? (_hospitals = new HashSet<Hospital>()); }
set { _hospitals = value; }
}
}
If you don't use the virtual keyword, EF will not create a proxy and so will not initialize the collection for you. At the same time, if you create a brand new Doctor via its constructor, you must handle the initialization yourself.
I think this can help you.
public class Doctor
{
public Doctor()
{
Hospitals = new List<Hospital>();
}
public virtual ICollection<Hospital> Hospitals { get; set; }
}
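With either of the two approaches above, a brand new Doctor already has a usable collection, so the original scenario works without a null reference exception. A rough usage sketch; the context type and its Doctors set are assumptions:
var doctor = new Doctor();
doctor.Hospitals.Add(new Hospital());      // no NullReferenceException: the collection is initialized

using (var context = new MyDbContext())    // MyDbContext is a hypothetical DbContext with a DbSet<Doctor> named Doctors
{
    context.Doctors.Add(doctor);
    context.SaveChanges();
}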

What's Automapper for?

What’s Automapper for?
How will it help me with my domain and controller layers (asp.net mvc)?
Maybe an example will help here...
Let's say you have a nicely-normalized database schema like this:
Orders (OrderID, CustomerID, OrderDate)
Customers (CustomerID, Name)
OrderDetails (OrderDetID, OrderID, ProductID, Qty)
Products (ProductID, ProductName, UnitPrice)
And let's say you're using a nice O/R mapper that hands you back a well-organized domain model:
OrderDetail
+--ID
+--Order
|--+--Date
|--+--Customer
|-----+--ID
|-----+--Name
+--Product
|--+--ID
|--+--Name
|--+--UnitPrice
+--Qty
Now you're given a requirement to display everything that's been ordered in the last month. You want to bind this to a flat grid, so you dutifully write a flat class to bind:
public class OrderDetailDto
{
public int ID { get; set; }
public DateTime OrderDate { get; set; }
public int OrderCustomerID { get; set; }
public string OrderCustomerName { get; set; }
public int ProductID { get; set; }
public string ProductName { get; set; }
public Decimal ProductUnitPrice { get; set; }
public int Qty { get; set; }
public Decimal TotalPrice
{
get { return ProductUnitPrice * Qty; }
}
}
That was pretty painless so far, but what now? How do we turn a bunch of OrderDetails into a bunch of OrderDetailDtos for data binding?
You might put a constructor on OrderDetailDto that takes an OrderDetail and write a big mess of mapping code. Or you might have a static conversion class somewhere. Or you could use AutoMapper and write this instead:
Mapper.CreateMap<OrderDetail, OrderDetailDto>();
OrderDetailDto[] items =
Mapper.Map<OrderDetail[], OrderDetailDto[]>(orderDetails);
GridView1.DataSource = items;
There. We've just taken what would otherwise have been a disgusting mess of pointless mapping code and reduced it to three lines (really just two for the actual mapping).
Does that help explain the purpose?
If you have an object of one type and you want to populate the properties of an object of another type using properties from the first type, you have two choices:
Manually write code to do such a mapping.
Use a tool that will automatically handle this for you.
AutoMapper is an example of 2.
The most common use is flattening domain models into data transfer objects (or, more generally, mapping across layer boundaries). What's very nice about AutoMapper is that for common scenarios you don't have to do any configuring (convention over configuration); a sketch of that convention-based flattening follows below.
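To make the flattening convention concrete, here is a rough sketch using the OrderDetail/OrderDetailDto shapes from the answer above. It uses the newer profile/instance API, so the exact setup depends on your AutoMapper version; someOrderDetail is just a placeholder for any loaded OrderDetail.
// AutoMapper flattens by naming convention: OrderDetailDto.OrderCustomerName is
// filled from OrderDetail.Order.Customer.Name, OrderDate from Order.Date,
// ProductUnitPrice from Product.UnitPrice, and so on - no per-property
// configuration is needed for these.
public class OrderDetailProfile : Profile
{
    public OrderDetailProfile()
    {
        CreateMap<OrderDetail, OrderDetailDto>();
    }
}

// Typical wiring with the instance-based API:
var config = new MapperConfiguration(cfg => cfg.AddProfile<OrderDetailProfile>());
var mapper = config.CreateMapper();
OrderDetailDto dto = mapper.Map<OrderDetailDto>(someOrderDetail);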
Map objects between layers. Good example: Here

Slim version of Large Object/Class

I have a product class which contains 12 public fields:
ProductId
ShortTitle
LongTitle
Description
Price
Length
Width
Depth
Material
Img
Colors
Pattern
The number of fields may grow with attributes for more specific product types. The description may contain a large amount of data.
I want to create a slim version of this product class that only contains the data needed. I'm only displaying 4 of the 12 fields when listing products on a category page. It seems like a waste to retrieve all of the data when most of it isn't being used.
I created a parent class of ProductListing that contains the 4 fields I need for the category page
ProductId
ShortTitle
Price
Img
Then I created a Product class that inherits from ProductListing and contains all the product data. It seems backwards, as "ProductListing" is not a kind of "Product", but I only started reading about inheritance a few months ago, so it's still a little new to me.
Is there a better way to get a slim object so I'm not pulling data I don't need?
Is the solution I have in place fine how it is?
I personally do not favor inheritance for these kinds of problems because it can become confusing over time. Specifically, I try to avoid having two concrete classes in my inheritance hierarchy where one inherits from the other and both can be instantiated and used.
How about creating a ProductCoreDetail class that has the essential fields you need and aggregating it inside the Product class? You can still expose the core fields on Product by declaring them as public properties that proxy to the nested ProductCoreDetail instance.
The benefit of this model is that any shared implementation code can be placed in ProductCoreDetail. Also, you can define an additional interface, IProductCoreDetail, that both Product and ProductCoreDetail implement, so you can pass either instance to methods that only care about the core information. I would also never expose the aggregated instance publicly to consumers of Product.
Here's a code example:
// interface that allows functional polymorphism
public interface IProductCoreDetail
{
int ProductId { get; set; }
string ShortTitle { get; set; }
decimal Price { get; set; }
string Img { get; set; }
}
// class used for lightweight operations
public class ProductCoreDetail : IProductCoreDetail
{
// these would be implemented here..
public int ProductId { get; set; }
public string ShortTitle { get; set; }
public decimal Price { get; set; }
public string Img { get; set; }
// any additional methods needed...
}
public class Product : IProductCoreDetail
{
// the core detail is aggregated and never exposed directly to consumers
private readonly ProductCoreDetail m_CoreDetail = new ProductCoreDetail();
// core properties proxy reads and writes to the nested instance
public int ProductId { get { return m_CoreDetail.ProductId; } set { m_CoreDetail.ProductId = value; } }
public string ShortTitle { get { return m_CoreDetail.ShortTitle; } set { m_CoreDetail.ShortTitle = value; } }
public decimal Price { get { return m_CoreDetail.Price; } set { m_CoreDetail.Price = value; } }
public string Img { get { return m_CoreDetail.Img; } set { m_CoreDetail.Img = value; } }
// other fields...
public string LongTitle { get; set; }
public string Description { get; set; }
public int Length { get; set; }
public int Width { get; set; }
public int Depth { get; set; }
public int Material { get; set; }
public int Colors { get; set; }
public int Pattern { get; set; }
}
I agree with LBushkin that inheritance is the wrong approach here. Inheritance suggests that TypeB is a TypeA; in your case the relationship is not quite that. I used to create classes that were subsets of a large entity for things like search results, list box items, etc. But now, with the anonymous type support introduced in C# 3.0 (.NET 3.5) and LINQ projections, I rarely need to do that anymore.
// C#
var results = from p in products
select new {
p.ProductId,
p.ShortTitle,
p.Price,
p.Img };
// VB.NET
Dim results = From p in products _
Select p.ProductId, p.ShortTitle, p.Price, p.Img
This creates an unnamed type "on-the-fly" that contains only the fields you specified. It is immutable, so the fields cannot be changed via this "mini" class, but it supports equality comparisons and hashing.
But when I do need to create a named type, I typically just create a separate class that has no relationship to the main class other than a lazy-loaded reference to the "full" version of the object.
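A rough sketch of that last idea: a separate named type that holds only the listing fields plus a lazy-loaded reference back to the full Product. The type name and the loader delegate are assumptions for illustration.
public class ProductListItem
{
    private readonly Lazy<Product> _full;

    public ProductListItem(int productId, string shortTitle, decimal price, string img,
                           Func<int, Product> loadFullProduct)
    {
        ProductId = productId;
        ShortTitle = shortTitle;
        Price = price;
        Img = img;
        // The full Product is only fetched if somebody actually asks for it.
        _full = new Lazy<Product>(() => loadFullProduct(productId));
    }

    public int ProductId { get; private set; }
    public string ShortTitle { get; private set; }
    public decimal Price { get; private set; }
    public string Img { get; private set; }

    // Touching this property triggers loading the complete Product.
    public Product FullProduct { get { return _full.Value; } }
}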
I wouldn't use a separate class or inheritance.
For your slim version, why not just retrieve only the data you need and leave the other fields empty? You might have two queries: one that fills all the fields and another that fills only the slim fields. If you need to differentiate between the two, that's easy if one of the non-slim fields is NOT NULL in your DB: just check that field for null in the object. A rough sketch follows below.
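Here is a sketch of that idea. It assumes Product is a plain class you populate yourself; many ORMs won't let you project directly onto a mapped entity type, in which case you would project into a separate type or fill it by hand. db.Products and someProduct are placeholders.
// Slim query: only the four listing fields are populated, everything else
// stays at its default value.
var slimProducts = from p in db.Products
                   select new Product
                   {
                       ProductId = p.ProductId,
                       ShortTitle = p.ShortTitle,
                       Price = p.Price,
                       Img = p.Img
                   };

// Telling a slim instance apart from a fully-loaded one, assuming Description
// is NOT NULL in the database and therefore always set by the full query:
bool isSlim = someProduct.Description == null;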

Populating Models using LINQ

I'm trying to figure out a clear way of populating my model classes from LINQ to SQL generated objects. My goal is to keep my Models and LinqModels separate. Say I have the following models:
public class Person {
public List<Account> Accounts {get; set;}
}
public class Account {
public List<Purchase> Purchases {get; set;}
}
public class Purchase {
public String Whatever {get; set;}
}
Now, I also have nearly identical data models generated by LINQ to SQL. So if I want to populate a Person object I'm going to add a getter method within the DataContext partial class:
public Person GetPersonByID(int personID) {
....
}
Throughout the rest of the application, we populate this Person object and its child properties like this:
public Person GetPersonByID(int personID) {
Person res = (
from p in Persons
where p.ID == personID
select new Person() {
Accounts = (
from a in p.Accounts
select new Account() {
Purchases = (
from m in a.Purchases
select new Purchase() {
Whatever = m.Whatever
}
).ToList()
}
).ToList()
}
).SingleOrDefault();
return res;
}
So for each child property we need to extend the query. What I would really prefer is if I could do something more like this:
public Person GetPersonByID(int personID) {
return new Person( this.Persons.SingleOrDefault( p => p.ID == personID ) );
}
....
public class Person {
public Person(DataModels.Person p) {
Accounts = (from a in p.Accounts select new Account(a)).ToList();
}
}
public class Account {
public Account(DataModels.Account a) {
Purchases = (from r in a.Purchases select new Purchase(r)).ToList();
}
}
public class Purchase {
public Purchase(DataModels.Purchase r) {
Whatever = r.Whatever;
}
}
This is much more manageable, but the initial GetPersonByID call does not return the data I need to populate these child objects. Is there any way around this?
Or is there a better alternative to populating model objects using LINQ to SQL?
(Sorry if my code examples are not quite right.)
What you are looking for is called "persistence ignorance".
It is a valued property of a design, specifically in the DDD (Domain Driven Design) circles.
You can achieve it for example using LinqToSQL's XML mapping capabilities, where you do not generate data classes and indicate to LTS how to map your domain classes (Account, Purchase etc) to the database directly.
Because of LINQ to SQL's limited mapping options (specifically the lack of value objects, the limitations on inheritance mapping, and the lack of many-to-many relationship support), it may or may not work in your case.
Another option, if you keep both your LTS-generated classes and your domain classes as above, is to look into AutoMapper, which can help get some of the repetitive mapping work out of the way.
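For the AutoMapper route, a rough sketch of what it could look like with the classes above. This uses the old static Mapper API to match the era of the question, and assumes the DataModels member names line up with your domain classes.
// One-time configuration, e.g. at application startup. Because the LINQ to SQL
// classes and the domain classes have matching member names, no per-member
// configuration is needed; nested collections (Accounts, Purchases) are mapped
// through their own CreateMap calls.
Mapper.CreateMap<DataModels.Person, Person>();
Mapper.CreateMap<DataModels.Account, Account>();
Mapper.CreateMap<DataModels.Purchase, Purchase>();

// The getter then collapses to a lookup plus a single Map call. Note that the
// child collections are only filled from data LINQ to SQL has loaded (or will
// lazy-load while AutoMapper walks them).
public Person GetPersonByID(int personID)
{
    var dataPerson = this.Persons.SingleOrDefault(p => p.ID == personID);
    return dataPerson == null ? null : Mapper.Map<DataModels.Person, Person>(dataPerson);
}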
