I'm trying to figure out a clear way of populating my model classes from LINQ to SQL generated objects. My goal is to keep my Models and LinqModels separate. Say I have the following models:
public class Person {
public List<Account> Accounts {get; set;}
}
public class Account {
public List<Purchase> Purchases {get; set;}
}
public class Purchase {
public String Whatever {get; set;}
}
Now, I also have nearly identical data models generated by LINQ to SQL. So if I want to populate a Person object I'm going to add a getter method within the DataContext partial class:
public Person GetPersonByID(int personID) {
....
}
Populating this Person object and its child properties for the rest of the application currently looks like this:
public Person GetPersonByID(int personID) {
    Person res = (
        from p in Persons
        where p.ID == personID
        select new Person() {
            Accounts = (
                from a in p.Accounts
                select new Account() {
                    Purchases = (
                        from m in a.Purchases
                        select new Purchase() {
                            Whatever = m.Whatever
                        }
                    ).ToList()
                }
            ).ToList()
        }
    ).SingleOrDefault();
    return res;
}
So for each child property we need to extend the query. What I would really prefer is if I could do something more like this:
public Person GetPersonByID(int personID) {
    return new Person( this.Persons.SingleOrDefault( p => p.ID == personID ) );
}
....
public class Person {
    public Person(DataModels.Person p) {
        Accounts = (from a in p.Accounts select new Account(a)).ToList();
    }
}
public class Account {
    public Account(DataModels.Account a) {
        Purchases = (from r in a.Purchases select new Purchase(r)).ToList();
    }
}
public class Purchase {
    public Purchase(DataModels.Purchase r) {
        Whatever = r.Whatever;
    }
}
This is much more manageable, but the initial GetPersonByID call does not return the data I need to populate these child objects. Is there any way around this?
Or is there a better alternative to populating model objects using LINQ to SQL?
(Sorry if my code examples are not quite right.)
What you are looking for is called "persistence ignorance".
It is a valued property of a design, particularly in DDD (Domain-Driven Design) circles.
You can achieve it, for example, using LINQ to SQL's XML mapping capabilities, where you do not generate data classes at all and instead tell LTS how to map your domain classes (Account, Purchase, etc.) to the database directly.
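For the XML-mapping route, the wiring looks roughly like this. This is only a minimal sketch: the mapping file name, connection string and the ID property on Person are assumptions, and the relevant namespaces are System.Data.Linq and System.Data.Linq.Mapping.
// Load a hand-written mapping file instead of using generated data classes.
var mapping = XmlMappingSource.FromUrl("DomainMapping.xml");

using (var db = new DataContext(connectionString, mapping))
{
    // The POCO Person/Account/Purchase classes are queried directly;
    // no LTS-generated classes are involved.
    Person person = db.GetTable<Person>().SingleOrDefault(p => p.ID == personID);
}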
Because of LinqToSQL's limited capabilities in terms of mapping options (specifically the lack of value objects, the limitation on the inheritance mapping and the lack of Many-to-Many relationship support), it might or might not work in your case.
Another option, if you keep your LTS-generated classes and your domain classes as above, is to look into AutoMapper, which can help get some of the repetitive mapping work out of the way.
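For the AutoMapper route, a minimal sketch could look like the following. It assumes the MapperConfiguration-style API (older versions use the static Mapper.CreateMap calls) and relies on the LTS class property names matching the domain class property names.
// Sketch only: maps the LINQ to SQL classes onto the domain models; the
// nested Accounts/Purchases collections are mapped because the names match.
private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<DataModels.Person, Person>();
    cfg.CreateMap<DataModels.Account, Account>();
    cfg.CreateMap<DataModels.Purchase, Purchase>();
}).CreateMapper();

public Person GetPersonByID(int personID)
{
    // Note: touching the child collections during mapping triggers deferred
    // loads unless they are eager-loaded (e.g. via DataLoadOptions.LoadWith).
    var data = this.Persons.SingleOrDefault(p => p.ID == personID);
    return data == null ? null : Mapper.Map<Person>(data);
}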
I know that keeping large collections in Aggregates impacts performance.
In my use case I have a STORE which can have multiple PRODUCTs, and each product can have CUSTOMIZATIONs (not more than 10-20 customizations per product).
I thought of creating only one STORE aggregate and updating products and customizations through it, but since the product collection can be large, that would impact performance. So I have two aggregates, STORE (to create a store) and PRODUCT (with storeId, and all product operations), but with this approach I am not able to check whether a product already exists.
What I am doing now is getting all products by StoreId in my handler and checking for duplicates there, which is not the right way, as that check should belong to my domain model.
Does anyone have a better idea to solve this?
Below are my domain models.
public class Store : Entity<Guid>, IAggregateRoot
{
private Store()
{
this.Products = new List<Product>();
}
private Store(string name, Address address) : base(System.Guid.NewGuid())
{
this.Name = name;
this.Address = address;
}
private Store(string name, Address address, ContactInfo contact) : this(name, address)
{
this.Contact = contact;
}
public string Name { get; private set; }
public Address Address { get; private set; }
public ContactInfo Contact { get; private set; }
}
public class Product : Entity<Guid>, IAggregateRoot
{
private Product()
{
}
private Product(Guid storeId, ProductInfo productInfo) : base(Guid.NewGuid())
{
this.ProductInfo = productInfo;
this.StoreId = storeId;
this.Customizations = new List<Customization>();
}
private Product(Guid storeId, ProductInfo productInfo, IEnumerable<Customization> customizations) : this(storeId, productInfo)
{
this.Customizations = customizations;
}
public ProductInfo ProductInfo { get; private set; }
private List<Customization> _customizations;
public IEnumerable<Customization> Customizations
{
get
{
return _customizations.AsReadOnly();
}
private set
{
_customizations = value != null ? new List<Customization>(value) : new List<Customization>();
}
}
public Guid StoreId { get; private set; }
public static Product Create(Guid storeId, ProductInfo productInfo)
{
return new Product(storeId, productInfo);
}
public void UpdateInfo(ProductInfo productInfo)
{
this.ProductInfo = productInfo;
}
public void AddCustomization(Customization customization)
{
this._customizations.Add(customization);
}
public void RemoveCustomization(Customization customization)
{
this._customizations.Remove(customization);
}
}
As Jonatan Dragon correctly mentioned, and as you found in an article, you can of course use domain services here. But taking that approach for this kind of problem carries the risk of sliding into the anemic domain model pitfalls as development continues; it is one of the most common causes of losing technical excellence in the domain layer. In general, this happens whenever a problem that should be solved through object collaboration gets pushed out into services. So whenever it is possible to avoid domain services, it is better to look for answers that do not rely on that pattern. In your case the problem can be solved without domain services by accepting some trade-offs around non-functional concerns (like performance) while keeping the models rich and clean!
Let's start from two common rules for designing aggregates, to identify where we have room for the trade-offs we are willing to accept:
1- In designing aggregates, only one aggregate's state should change within one transactional use case. (Greg Young)
2- In designing aggregates, the only things that can be shared among aggregates are their IDs. (Eric Evans)
It seems these two rules are what box us into solving this kind of problem only with domain services, so let's look at them more closely.
For rule 1: many DDD practitioners and mentors, Nick Tune among them, treat the default transaction scope as the entire BC in a use case rather than a single aggregate. So this is one place where we have some degrees of freedom for trade-offs.
For rule 2, the philosophy behind it is to share only the part of an aggregate that is invariant and never modified during the aggregate's lifespan, so that not too many aggregates get locked during a transaction in one use case. If there is a case where a shared piece of state does change within one transaction scope and cannot sensibly be modified separately, then technically there is no problem in sharing it.
By combining these two observations we arrive at the following answer:
You can let the Store aggregate decide about creating a Product aggregate. In OOP terms, you make the Store aggregate the factory for the Product aggregate.
public class Store : Entity<Guid>, IAggregateRoot
{
    private Store()
    {
        // parameterless constructor for the ORM
    }

    private Store(string name, Address address) : base(System.Guid.NewGuid())
    {
        this.Name = name;
        this.Address = address;
    }

    private Store(string name, Address address, ContactInfo contact) : this(name, address)
    {
        this.Contact = contact;
    }

    public Product CreateProduct(Guid storeId, ProductInfo productInfo)
    {
        if (ProductInfos.Contains(productInfo))
        {
            throw new ProductExistsException(productInfo);
        }

        this.ProductInfos.Add(productInfo);
        return new Product(storeId, productInfo);
    }

    public string Name { get; private set; }
    public Address Address { get; private set; }
    public ContactInfo Contact { get; private set; }
    public List<ProductInfo> ProductInfos { get; private set; } = new();
}
In this solution I treat ProductInfo as a value object, so checking for duplicates can be done simply through equality. To ensure the Product aggregate cannot be constructed independently, you can make its constructor internal. Aggregate models usually live in one assembly, and ORMs can use non-public constructors too, so this causes no problem.
There are some points to notice in this answer:
1- The Store aggregate must not use the internal parts of ProductInfo. With this approach ProductInfo can change freely, as its ownership belongs to the Product aggregate.
2- As ProductInfo is a value object, storing and loading the Store aggregate is not a heavy operation, and with ORM conversion techniques it can be reduced to storing and loading a single field for the ProductInfos collection (see the sketch at the end of this answer).
3- The Store and Product aggregates are only coupled for the product-creation use case. They operate separately in all other use cases.
So with this approach you keep small, separate aggregates in 99% of use cases while still enforcing the duplicate check as a domain model invariant.
PS: This is the core idea of how to solve the problem. You can combine it with other patterns and techniques, like the Explicit State pattern and row versions, if required.
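As a concrete illustration of the conversion mentioned in point 2, here is a rough sketch assuming EF Core and Newtonsoft.Json; the ProductInfos collection is persisted as a single serialized column on the Store table. (A ValueComparer would also be needed for proper change tracking; it is omitted here for brevity.)
// Sketch only: store the whole ProductInfos collection in one column.
modelBuilder.Entity<Store>()
    .Property(s => s.ProductInfos)
    .HasConversion(
        infos => JsonConvert.SerializeObject(infos),
        json => JsonConvert.DeserializeObject<List<ProductInfo>>(json));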
First, make sure it really impacts performance. If you really need two aggregates, you can use a Domain Service to solve your problem. Check this article by Kamil Grzybek, section "BC scope validation implementation".
public interface IProductUniquenessChecker
{
bool IsUnique(Product product);
}
// Product constructor
public Product(Guid storeId, ProductInfo productInfo, IProductUniquenessChecker productUniquenessChecker)
{
if (!productUniquenessChecker.IsUnique(this))
{
throw new BusinessRuleValidationException("Product already exists.");
}
...
}
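For completeness, one possible implementation of the checker, sketched against an assumed EF-style DbContext with a Products set. The context name and the ProductInfo.Name comparison are placeholders; substitute whatever actually defines uniqueness in your model.
public class ProductUniquenessChecker : IProductUniquenessChecker
{
    private readonly StoreDbContext _dbContext; // hypothetical DbContext

    public ProductUniquenessChecker(StoreDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public bool IsUnique(Product product)
    {
        // Unique if no other product in the same store carries the same info.
        return !_dbContext.Products.Any(p =>
            p.StoreId == product.StoreId &&
            p.ProductInfo.Name == product.ProductInfo.Name);
    }
}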
I'm using an AJAX query to call a function that references a model.
So far it returns details to a user table, which all works fine. The problem is, I have a one to many relationship to another table.
Person.PersonId goes to a joining table, where each PersonId can be linked to potentially multiple ColourIds, and each ColourId links to the Colour table.
So three tables are involved: Person, FavouriteColour and Colour.
I want to include a join in my original query but I'm having difficulty. The query:
TechTestEntities testTechObj = new TechTestEntities();
var Result = from p in testTechObj.People
join fp in testTechObj.FavouriteColours on p.PersonId equals fp.PersonId
join c in testTechObj.Colours on fp.ColourId equals c.ColourId
select p;
When I run this I get the error that 'The entity type FavouriteColours is not part of the model for the current context.'
I have also added FavouriteColours to the model like so:
public virtual DbSet<FavouriteColours> FavouriteColours { get; set; }
All the tables should be included in the ADO model, so I'm not sure what the problem is and how to retrieve the colour names through a join.
Edit:
Model code
namespace techTest4
{
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using techTest4.Models;
public partial class TechTestEntities : DbContext
{
public TechTestEntities()
: base("name=TechTestEntities")
{
}
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
throw new UnintentionalCodeFirstException();
}
public virtual DbSet<Colour> Colours { get; set; }
public virtual DbSet<Person> People { get; set; }
//public virtual DbSet<FavouriteColours> FavouriteColours { get; set; }
}
}
I had to guess what your classes look like, but take a look at this code:
( https://dotnetfiddle.net/TVqzse )
This snippet is the most interesting for you:
var favoriteColours = people.SelectMany(p => p.FavouriteColours);
foreach(var favoriteColour in favoriteColours) {
System.Console.WriteLine(favoriteColour.Color.ColorName);
}
This uses LINQ to extract the favourite colours of all people, and you should be able to do exactly the same in Entity Framework.
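Once the FavouriteColours set is actually part of the context, the join version from the question can project the colour names directly. Roughly like this (ColourName is an assumed column on the Colour table):
var personColours = from p in testTechObj.People
                    join fc in testTechObj.FavouriteColours on p.PersonId equals fc.PersonId
                    join c in testTechObj.Colours on fc.ColourId equals c.ColourId
                    select new { p.PersonId, c.ColourName };

foreach (var pc in personColours)
{
    Console.WriteLine("{0}: {1}", pc.PersonId, pc.ColourName);
}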
After many hours, countless failures, I decided to change my Entity Model to include a link table in the model for each many-to-many relationship. This worked for me because RIA Services doesn't support many-to-many relationships.
Regardless, I'm able to build, but do not have any idea how to manage these relationships within the application itself. Should I create methods on the Domain Service, that are hidden from the client and used to perform CRUD operations on the link table objects?
An example would be greatly appreciated, thanks in advance.
I guess you already know about http://m2m4ria.codeplex.com/ , which adds many-to-many support to WCF RIA Services. However, if you want to manage it yourself, you are better off sending the link entities to the client and treating them like any other entities. You will not have entity A with a collection of B entities and entity B with a collection of A entities, but rather:
public class A
{
int Id {get; set;}
ICollection<A_To_B> B_Entities {get; private set;}
}
public class A_To_B
{
int Id {get; set;}
A EntityA {get; set;}
int id_A {get; set;}
B EntityB {get; set;}
int id_B {get; set;}
}
public class B
{
int Id {get; set;}
ICollection<A_To_B> A_Entities {get; private set;}
}
In your domain service, add methods to correctly expose all of these entities, and don't forget to properly decorate them (each relationship is a straight 1:m).
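A rough sketch of the shape such a domain service could take, assuming a LINQ to Entities backed WCF RIA Services setup; the context name, set names and method names are placeholders, and RIA Services recognises the insert/delete operations from the method signatures:
[EnableClientAccess]
public class MyDomainService : LinqToEntitiesDomainService<MyEntities>
{
    public IQueryable<A> GetAEntities() { return ObjectContext.A_Set; }
    public IQueryable<B> GetBEntities() { return ObjectContext.B_Set; }

    // The link entity is exposed like any other entity, so the client can
    // create and delete bridge rows itself. Remember to decorate the
    // associations (e.g. [Include] on the metadata class) so related
    // entities travel with the results.
    public IQueryable<A_To_B> GetLinks() { return ObjectContext.A_To_B_Set; }

    public void InsertLink(A_To_B link)
    {
        ObjectContext.A_To_B_Set.AddObject(link);
    }

    public void DeleteLink(A_To_B link)
    {
        ObjectContext.A_To_B_Set.Attach(link);
        ObjectContext.A_To_B_Set.DeleteObject(link);
    }
}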
This is indeed a nuisance.
I haven't tried m2m4ria and do it manually on the client, i.e. I expose the bridge table in the domain service. Sometimes that turns out to be a good idea anyway, if the bridge table is later promoted to carry more data.
To ease the pain of managing the bridge table on the client, I've written some helpers you might want to consider using yourself.
public interface ILinkEntity<LinkEntity, SourceEntity, TargetEntity>
    where LinkEntity : Entity, ILinkEntity<LinkEntity, SourceEntity, TargetEntity>
    where SourceEntity : Entity, ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity>
    where TargetEntity : Entity
{
    SourceEntity Source { get; set; }
    TargetEntity Target { get; set; }
}

public interface ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity>
    where SourceEntity : Entity, ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity>
    where LinkEntity : Entity, ILinkEntity<LinkEntity, SourceEntity, TargetEntity>
    where TargetEntity : Entity
{
    EntityCollection<LinkEntity> Links { get; }
    ObservableCollection<TargetEntity> Targets { get; set; }
}

public static class ManyToManyHelper
{
    public static void UpdateLinks<SourceEntity, LinkEntity, TargetEntity>(
        this ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity> source,
        EntitySet<LinkEntity> set)
        where SourceEntity : Entity, ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity>
        where LinkEntity : Entity, ILinkEntity<LinkEntity, SourceEntity, TargetEntity>, new()
        where TargetEntity : Entity
    {
        if (!(source is SourceEntity)) throw new Exception("Expected source to be a SourceEntity.");

        // Add a link for every target that does not have one yet.
        var toAdd = (
            from target in source.Targets
            where source.Links.FirstOrDefault(le => le.Target.Equals(target)) == null
            select target
        ).ToArray();
        foreach (var target in toAdd) source.Links.Add(new LinkEntity() { Source = source as SourceEntity, Target = target });

        // Remove links whose target is no longer in the Targets collection.
        var toRemove = (
            from link in source.Links
            where source.Targets.FirstOrDefault(te => te.Equals(link.Target)) == null
            select link
        ).ToArray();
        foreach (var link in toRemove)
        {
            source.Links.Remove(link);
            // This can happen when the entities had not yet been added to the context.
            set.Remove(link);
        }
    }

    public static void UpdateTargets<SourceEntity, LinkEntity, TargetEntity>(
        this ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity> source)
        where SourceEntity : Entity, ILinkedSourceEntity<SourceEntity, LinkEntity, TargetEntity>
        where LinkEntity : Entity, ILinkEntity<LinkEntity, SourceEntity, TargetEntity>, new()
        where TargetEntity : Entity
    {
        if (source.Targets == null)
        {
            source.Targets = new ObservableCollection<TargetEntity>();
        }
        else
        {
            source.Targets.Clear();
        }
        foreach (var link in source.Links) source.Targets.Add(link.Target);
    }
}
I have this in a file called ManyToManyUtils, and it should live somewhere your domain entities can reference it (so typically in the domain client project).
I then augment the respective auto-generated domain entities to support those interfaces, e.g. like this:
public partial class Question : ILinkedSourceEntity<Question, QuestionCategory, Category>
{
    EntityCollection<QuestionCategory> ILinkedSourceEntity<Question, QuestionCategory, Category>.Links
    {
        get { return QuestionCategories; }
    }

    public ObservableCollection<Category> Categories { get; set; }

    ObservableCollection<Category> ILinkedSourceEntity<Question, QuestionCategory, Category>.Targets
    {
        get { return Categories; }
        set { Categories = value; }
    }
}

public partial class QuestionCategory : ILinkEntity<QuestionCategory, Question, Category>
{
    Question ILinkEntity<QuestionCategory, Question, Category>.Source { get { return Question; } set { Question = value; } }
    Category ILinkEntity<QuestionCategory, Question, Category>.Target { get { return Category; } set { Category = value; } }
}

public partial class Category
{
}
So in this example each Question can be in many categories. Category as a domain entity need not be modified.
I usually augment domain entity classes with properties frequently anyway, so I often already have those partial classes.
Now I can bind views against those new collection properties. However, I still need to call the helper update methods to sync the bridge table with those helper collection properties.
So after each load or refresh from the domain services you have to call:
myQuestion.UpdateTargets();
And after each edit by the user (e.g. from a SelectionChanged handler in the view, or - if you are happy with the consequences - just before you call SaveChanges), call:
myQuestion.UpdateLinks(myContext.QuestionCategories);
That way, the nastiness is factored out as much as possible.
I'm using EF 4.1 code first. Given the following class snippet:
public class Doctor
{
public virtual ICollection<Hospital> Hospitals { get; set; }
}
Note: I have this in the database context:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
this.Configuration.LazyLoadingEnabled = false;
}
I wanted to make sure that lazy loading is not involved here.
The issue I have is that, without the virtual keyword on the Hospitals property, when I retrieve a doctor that does have a hospital associated with him, the collection is empty.
By including the virtual keyword, the hospitals collection does contain 1 item, which is what I expect.
The problem is that, when I want to create a brand new doctor and associate him with a hospital immediately, I get a Null reference exception, since the Hospitals property has not been initialised yet.
Can someone point out what I'm doing wrong here? How can I add items to the Hospitals collection when creating a new doctor?
Cheers.
Jas.
Your code is what you usually see in most examples, but to make this work, this version is much better:
public class Doctor
{
    private ICollection<Hospital> _hospitals;

    public virtual ICollection<Hospital> Hospitals
    {
        get { return _hospitals ?? (_hospitals = new HashSet<Hospital>()); }
        set { _hospitals = value; }
    }
}
If you don't use the virtual keyword, EF will not initialize the collection for you. At the same time, if you create a brand new Doctor via its constructor, you must handle the initialization yourself.
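With the collection created lazily like that, associating a hospital right after constructing a doctor just works. A quick example (the Hospital initializer and the context names are assumptions):
var doctor = new Doctor();
doctor.Hospitals.Add(new Hospital { Name = "General" }); // no NullReferenceException

context.Doctors.Add(doctor);
context.SaveChanges();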
I think this can help you.
public class Doctor
{
    public Doctor()
    {
        // ICollection<T> is an interface and cannot be instantiated directly,
        // so use a concrete collection type here.
        Hospitals = new List<Hospital>();
    }

    public virtual ICollection<Hospital> Hospitals { get; set; }
}
I have a product class which contains 12 public fields.
ProductId
ShortTitle
LongTitle
Description
Price
Length
Width
Depth
Material
Img
Colors
Pattern
The number of fields may grow with attributes for more specific product types. The description may contain a large amount of data.
I want to create a slim version of this product class that only contains the data needed. I'm only displaying 4 of the 12 fields when listing products on a category page. It seems like a waste to retrieve all of the data when most of it isn't being used.
I created a parent class of ProductListing that contains the 4 fields I need for the category page:
ProductId
ShortTitle
Price
Img
Then I created a Product class that inherits from ProductListing and contains all the product data. It seems backwards, as a "ProductListing" is not a kind of "Product", but I just started reading about inheritance a few months ago so it's still a little new to me.
Is there a better way to get a slim object so I'm not pulling data I don't need?
Is the solution I have in place fine how it is?
I personally do not favor inheritance for these kinds of problems because it can become confusing over time. Specifically, I try to avoid having two concrete classes in my inheritance hierarchy where one inherits from the other and both can be instantiated and used.
How about creating a ProductCoreDetail class that has the essential fields you need and aggregating it inside the Product class? You can still expose the core members on Product by declaring them there and proxying them to the nested ProductCoreDetail instance.
The benefit of this model is that any shared implementation code can be placed in ProductCoreDetail. Also, you can choose to define an additional interface IProductCoreDetail that both Product and ProductCoreDetail implement, so that you can pass either instance to methods that only care about the core information. I would also never expose the aggregated instance publicly to consumers of Product.
Here's a code example:
// interface that allows functional polymorphism
public interface IProductCoreDetail
{
    int ProductId { get; set; }
    string ShortTitle { get; set; }
    decimal Price { get; set; }
    string Img { get; set; }
}

// class used for lightweight operations
public class ProductCoreDetail : IProductCoreDetail
{
    // these would be implemented here..
    public int ProductId { get; set; }
    public string ShortTitle { get; set; }
    public decimal Price { get; set; }
    public string Img { get; set; }
    // any additional methods needed...
}

public class Product : IProductCoreDetail
{
    private readonly ProductCoreDetail m_CoreDetail = new ProductCoreDetail();

    // the core fields are proxied to the nested instance
    public int ProductId { get { return m_CoreDetail.ProductId; } set { m_CoreDetail.ProductId = value; } }
    public string ShortTitle { get { return m_CoreDetail.ShortTitle; } set { m_CoreDetail.ShortTitle = value; } }
    public decimal Price { get { return m_CoreDetail.Price; } set { m_CoreDetail.Price = value; } }
    public string Img { get { return m_CoreDetail.Img; } set { m_CoreDetail.Img = value; } }

    // other fields...
    public string LongTitle { get; set; }
    public string Description { get; set; }
    public int Length { get; set; }
    public int Width { get; set; }
    public int Depth { get; set; }
    public int Material { get; set; }
    public int Colors { get; set; }
    public int Pattern { get; set; }
}
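As a small usage illustration of the shared interface (this helper is hypothetical, not part of the design above), a method that only needs the core fields can accept either type:
// Works for both Product and ProductCoreDetail instances.
public static string FormatListing(IProductCoreDetail item)
{
    return string.Format("{0}: {1} ({2:C})", item.ProductId, item.ShortTitle, item.Price);
}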
I agree with LBushkin that inheritance is the wrong approach here. Inheritance suggests that TypeB is a TypeA. In your case, the relationship is not quite the same. I used to create classes that were subsets of a large entity for things like search results, list box items, etc. But now with C# 3.0's anonymous type support and LINQ projections, I rarely need to do that anymore.
// C#
var results = from p in products
select new {
p.ProductId,
p.ShortTitle,
p.Price,
p.Img };
// VB.NET
Dim results = From p in products _
Select p.ProductId, p.ShortTitle, p.Price, p.Img
This creates an unnamed type "on-the-fly" that contains only the fields you specified. It is immutable, so the fields cannot be changed via this "mini" class, but it supports equality comparisons.
But when I do need to create a named type, I typically just create a separate class that has no relationship to the main class other than a lazy-loaded reference to the "full" version of the object.
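When a named type is needed, the projection looks almost the same; ProductListing here stands in for a hypothetical DTO holding just the four listing properties:
var listings = (from p in products
                select new ProductListing
                {
                    ProductId = p.ProductId,
                    ShortTitle = p.ShortTitle,
                    Price = p.Price,
                    Img = p.Img
                }).ToList();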
I wouldn't use a separate class or inheritance.
For your slim version, why not just retrieve only the data you need, and leave the other fields empty? You might have two queries: one that fills all the fields, and another that only fills the slim fields. If you need to differentiate between the two, that's easy if one of the non-slim fields is NOT NULL in your DB; just check for null in the object.
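As a sketch of the two-query idea, the slim version could be as simple as selecting only the listing columns and leaving everything else at its default. Plain ADO.NET (System.Data.SqlClient) is shown here; table, column and property names are assumptions based on the question.
var slimProducts = new List<Product>();

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT ProductId, ShortTitle, Price, Img FROM Products WHERE CategoryId = @categoryId", conn))
{
    cmd.Parameters.AddWithValue("@categoryId", categoryId);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            slimProducts.Add(new Product
            {
                ProductId = reader.GetInt32(0),
                ShortTitle = reader.GetString(1),
                Price = reader.GetDecimal(2),
                Img = reader.GetString(3)
                // Description stays null; since it is NOT NULL in the DB,
                // a null Description marks this as the "slim" version.
            });
        }
    }
}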