I am a bit confused about the Domain layer in the Domain Driven Design architecture.
I understand that the four layers are Presentation, Application, Domain and Infrastructure, where Infrastructure contains the data repositories.
I also understand that the Domain layer is responsible for the business rules. In the now dated anemic model, the domain objects didn't have any behavior. In DDD, we are supposed to move the behavior and business rules from services into the Domain.
Before, I would inject repositories into the service layer. So my question is: is it okay to inject repositories into the Domain objects so they can enforce the business rules?
No, it is not okay to inject repositories into domain objects :)
What is acceptable, but only if there really is no other way, is to pass repositories or other domain objects such as services into an aggregate root (AR) method so it can perform some function via double dispatch:
public void ApplyDiscount(IDiscountService service)
{
_discount = service.Discount(customerType);
}
As I have mentioned in other posts, I tend to think of an AR as I would a physical calculator. There is input, via the keypad, and there is output, via the screen. While the calculator is doing its voodoo it doesn't interact with anything else and doesn't ask for additional information. That being said, there are probably going to be exceptions as in the example above but I guess the discount could be determined by the operation script (service layer):
public class RegisterOrderTask
{
private IDiscountService _discountService;
private IOrderRepository _orderRepository;
public RegisterOrderTask(IDiscountService discountService, IOrderRepository orderRepository)
{
_discountService = discountService;
_orderRepository = orderRepository;
}
public void Execute(OrderDetails details)
{
_orderRepository
.Add(details.CreateOrder()
.SetDiscount(_discountService.Discount(details.CustomerType)));
}
}
These are just some made up ideas but may get you thinking about your scenario :)
My application is using SQL Server 2012, EF6, MVC and Web API.
It's also using a repository and assorted files such as:
DatabaseFactory.cs
Disposable.cs
IDatabaseFactory.cs
IRepository.cs
IUnitOfWork.cs
RepositoryBase.cs
UnitOfWork.cs
We already use a service layer between our controllers and the repository for some complicated business logic. We have no plans EVER to change to a different database, and it has been pointed out to me that the recent thinking is that EF6 already is a repository, so why build another repository on top of it, and why have all of the files listed above?
I am starting to think this is a sensible approach.
Does anyone know of any examples out there that implement EF6 without a repository but with a service layer? My search on the web has revealed many complex code examples that seem overcomplicated for no reason at all.
My problem is also, when using a service layer, where do I put:
context = new EFDbContext()
In the controller, the service layer, or both? I read that I can do this with dependency injection. I already use Unity as an IoC container, but I don't know how I can do this.
Entity Framework IS already a Unit of Work pattern implementation as well as a generic repository implementation (DbContext is the UoW and DbSet is the Generic Repository). And I agree that it's way overkill in most apps to engineer another UoW or Generic Repository on top of them (besides, GenericRepository is considered to be an anti-pattern by some).
A service layer can act as a concrete repository, which has the benefit of encapsulating data logic that is specific to your business needs. If you use this approach, there is little need to build a repository on top of it (unless you want to be able to change your backend service technology, say from WCF to Web API or whatever).
I would put all your data access in your service layer. Don't do data access in your controller. That's leaking your data layer into your UI layer, and that's just poor design. It violates many of the core SOLID concepts.
But you do NOT need an additional UnitOfWork, or other layers beyond that in most cases, unless your apps are very complex and intended to work in multiple environments...
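For example, a service method can work against the EF context directly and treat SaveChanges as the unit-of-work commit. A minimal sketch (the entity, context, and interface names here are just placeholders, not something from your project):
public class OrderService : IOrderService
{
    private readonly EFDbContext _context;

    public OrderService(EFDbContext context)
    {
        _context = context;
    }

    public Order GetOrder(int id)
    {
        // DbSet<Order> already behaves like a repository
        return _context.Orders.Find(id);
    }

    public void PlaceOrder(Order order)
    {
        _context.Orders.Add(order);
        // SaveChanges commits the pending changes as a single unit of work
        _context.SaveChanges();
    }
}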
Setting up Unity for ASP.NET MVC and WebAPI is quite easy if you install and add the Unity.Mvc* and Unity.WebAPI* NuGet packages to your project. (The * is a version number, like 3 or 4 or 5. Look for the appropriate versions for your project. Here, for example, are the links to the Unity.Mvc 5 package and to the Unity.WebAPI 5 package.)
The usage of these packages is explained in this blog post.
The building blocks are roughly like so:
You build a unity container and register all your dependencies there, especially the EF context:
private static IUnityContainer BuildContainer()
{
var container = new UnityContainer();
container.RegisterType<MyContext>(new HierarchicalLifetimeManager());
container.RegisterType<IOrderService, OrderService>();
container.RegisterType<ICustomerService, CustomerService>();
container.RegisterType<IEmailMessenger, EmailMessenger>();
// etc., etc.
return container;
}
MyContext is your derived DbContext class. Registering the context with the HierarchicalLifetimeManager is very important because it will ensure that a new context per web request will be instantiated and disposed by the container at the end of each request.
If you don't have interfaces for your services but just concrete classes you can remove the lines above that register the interfaces. If a service needs to be injected into a controller Unity will just create an instance of your concrete service class.
Once you have built the container you can register it as dependency resolver for MVC and WebAPI in Application_Start in global.asax:
protected void Application_Start()
{
var container = ...BuildContainer();
// MVC
DependencyResolver.SetResolver(
new Unity.MvcX.UnityDependencyResolver(container));
// WebAPI
GlobalConfiguration.Configuration.DependencyResolver =
new Unity.WebApiX.UnityDependencyResolver(container);
}
Once the DependencyResolvers are set the framework is able to instantiate controllers that take parameters in their constructor if the parameters can be resolved with the registered types. For example, you can create a CustomerController now that gets a CustomerService and an EmailMessenger injected:
public class CustomerController : Controller
{
private readonly ICustomerService _customerService;
private readonly IEmailMessenger _emailMessenger;
public CustomerController(
ICustomerService customerService,
IEmailMessenger emailMessenger)
{
_customerService = customerService;
_emailMessenger = emailMessenger;
}
// now you can interact with _customerService and _emailMessenger
// in your controller actions
}
The same applies to derived ApiControllers for WebAPI.
The services can take a dependency on the context instance to interact with Entity Framework, like so:
public class CustomerService // : ICustomerService
{
private readonly MyContext _myContext;
public CustomerService(MyContext myContext)
{
_myContext = myContext;
}
// now you can interact with _myContext in your service methods
}
When the MVC/WebAPI framework instantiates a controller it will inject the registered service instances and resolve their own dependencies as well, i.e. inject the registered context into the service constructor. All services you will inject into the controllers will receive the same context instance during a single request.
With this setup you usually don't need a context = new MyContext() nor a context.Dispose() as the IOC container will manage the context lifetime.
If you aren't using a repository, then I assume you have some place to write the logic/processing that your service operation uses. I would create a new instance of the context in that logic/process class method and use its methods directly. Finally, dispose of it right after use, ideally within a using block.
The processing method would eventually transform the returned/processed data into a data/message contract which the service returns to the controller.
Keep the data logic completely separate from the controller. Also keep the view model separate from the data contract.
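Roughly, a processing method in that style might look like this (just a sketch; CustomerDto and the Customers set are made-up names, EFDbContext is the context from your question):
public class CustomerProcessor
{
    public CustomerDto GetCustomer(int id)
    {
        // create the context only for the duration of this operation
        using (var context = new EFDbContext())
        {
            var customer = context.Customers.Find(id);
            if (customer == null)
                return null;

            // map the entity to a data contract before it leaves the service layer
            return new CustomerDto
            {
                Id = customer.Id,
                Name = customer.Name
            };
        }
    }
}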
If you move ahead with this architecture, you are going to be tightly coupling the Entity Framework with either your service or your controller. The repository abstraction gives you a couple things:
1) You are able to easily swap out data access technologies in the future
2) You are able to mock out your data store, allowing you to easily unit test your data access code
You are wondering where to put your EF context. One of the benefits of using the Entity Framework is that all operations on it are enrolled into a transaction. You need to ensure that any data access code uses the same context to enjoy this benefit.
The design pattern that solves that problem is the Unit of Work pattern, which by the looks of things, you are already using. I strongly recommend continuing to use it. Otherwise, you will need to initialize your context in your controller, pass it to your service, which will need to pass it to any other service it interacts with.
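A rough sketch of what the unit of work gives you; the names mirror the files you listed, but the member shapes here are guesses, not necessarily what your IUnitOfWork actually contains:
public interface IUnitOfWork : IDisposable
{
    IRepository<Order> Orders { get; }
    IRepository<Customer> Customers { get; }
    void Commit();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly EFDbContext _context = new EFDbContext();
    private IRepository<Order> _orders;
    private IRepository<Customer> _customers;

    // both repositories share the same context, so their changes
    // are saved together when Commit() is called
    public IRepository<Order> Orders
    {
        get { return _orders ?? (_orders = new RepositoryBase<Order>(_context)); }
    }

    public IRepository<Customer> Customers
    {
        get { return _customers ?? (_customers = new RepositoryBase<Customer>(_context)); }
    }

    public void Commit()
    {
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}
A controller or service asks the unit of work for its repositories, makes several calls, and then calls Commit() once, so everything runs against the same context and is saved as one transaction.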
Looking at the objects you have listed, it appears to be a conscientious attempt to build this app with enterprise architectural best practices. While abstractions do introduce complexity, there is no doubting the benefits they provide.
I'm really confused. I learned from the book "Apress Pro ASP.NET MVC 4" that the best pattern for MVC 4 is Dependency Injection: put the model/database code in another project (Domain), create interfaces and implementations of those interfaces, and then connect them to the controller with Ninject.
All the connections to the DB happen only in the data-layer project; the only models in the web project are view models.
The Controller
public class ProductController : Controller
{
private IProductRepository repository;
public ProductController(IProductRepository productRepository)
{
this.repository = productRepository;
}
....
}
and Ninject
ninjectKernel.Bind<IProductRepository>().To<EFProductRepository>();
On the other hand, in my last job (webmaster), the company used another pattern for its MVC projects (I'm using this pattern right now).
The projects are built with only one solution and use static classes to handle the data layer.
I don't like Dependency Injection; it's too complicated, and pressing F12 takes you to the interface instead of the concrete class.
Some questions:
Which pattern is better for performance (a fast website)?
Is it good to use "public Db db = new Db();" in the controller, instead of using it only in the domain layer (solution)?
What are the advantages of using Dependency Injection? Is it bad to use my pattern?
What are the advantages of splitting the project into two solutions for the data layer?
example:
public class LanguageController : AdminController
{
public Db db = new Db();
protected override void Dispose(bool disposing)
{
db.Dispose();
base.Dispose(disposing);
}
//
// GET: /Admin/Language/
public ActionResult Index()
{
return View(db.Languages.ToList());
}
[HttpPost, ActionName("Delete")]
[ValidateAntiForgeryToken]
public ActionResult DeleteConfirmed(short id)
{
Language language = db.Languages.Find(id);
db.Languages.Remove(language);
db.SaveChanges();
return RedirectToAction("Index");
}
...
}
Which pattern is better for performance (a fast website)?
Impossible to answer. You could have non-performant code in either of these approaches. Don't try to prematurely optimize for performance, optimize for clean and supportable code and address performance bottlenecks that are actually observed in real scenarios.
Is it good to use "public Db db = new Db();" in the controller, instead of using it only in the domain layer (solution)?
It's a question of separating concerns and isolating dependencies. If your controller internally instantiates a connection to a database then that controller can only ever be used in the context of that database. This will make unit testing the controller very difficult. It also means that replacing the database means modifying the controller, which shouldn't need to be modified in that case.
Dependency injection frameworks are simply a way of addressing the Dependency Inversion Principle. The idea is that if Object A (the controller) needs an instance of Object B (the database object) then it should require that the instance be supplied to it, rather than internally instantiate it. The immediate benefit here is that Object B can just be an interface or abstract class which could have many different implementations. Object A shouldn't care which implementation is given to it, as long as it satisfies the same observable behavior.
By inverting the dependency (whether or not you use a dependency injection framework), you remove the dependency on the database from the controller. The controller just handles client-initiated requests. Something else handles the database dependency. This makes these two separate objects more portable and re-usable.
What are the advantages of using Dependency Injection? Is it bad to use my pattern?
See above. Dependency injection is a way to achieve inversion of dependencies, which is the core goal in this case. Note that there are a few different ways to achieve this. Some frameworks prefer constructor injection, some support property/setter injection, etc. Personally I tend to go with the service locator pattern, which has the added benefit of abstracting away the dependency on the dependency injector itself.
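A bare-bones service locator looks something like this (illustrative only; most IoC containers, Ninject included, can sit behind such a wrapper):
public static class ServiceLocator
{
    private static Func<Type, object> _resolver;

    // called once at application start-up with a delegate into the real container
    public static void SetResolver(Func<Type, object> resolver)
    {
        _resolver = resolver;
    }

    public static T Resolve<T>()
    {
        return (T)_resolver(typeof(T));
    }
}
Consuming code then calls ServiceLocator.Resolve<IProductRepository>() instead of referencing the container directly, so swapping containers only touches the SetResolver call.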
It's only "bad" to use your approach if you run into any problems when using it. There are good patterns to address various concerns, but if your project doesn't legitimately have those concerns then using those patterns would be over-engineering and would actually hurt the supportability of the code. So, as with anything, "it depends."
What are the advantages of splitting the project into two solutions for the data layer?
Two solutions? Or two projects in the same solution? (Resulting in two assemblies.) The advantage is that you can re-use one without having a dependency on the other. For example, in some of the code you posted there is an allusion to the repository pattern. If you have an assembly which serves only the purpose of implementing repositories to the back-end data then any application can use those repositories. If instead it's all implemented in the MVC application directly then no other application can use it, it's tightly coupled to the MVC application.
If you will never need to re-use that functionality, then this isn't the end of the world. If you would like to re-use that functionality, separating it into a portable assembly which internally isolates its dependencies would allow for that.
The context:
(Note: in the following I am using "project" to refer to a collection of software deliverables, intended for a single customer or a specific market. I am not referring to "project" as it is used in Visual Studio to refer to a configuration that builds a single EXE or DLL, within a solution.)
We have a sizable system that consists of three layers:
A layer containing code that is shared across projects
A layer containing code that is shared across different applications within a project
A layer containing code that is specific to a particular application or website within a project.
The first two layers are built into DLL assemblies. The top layer is an assortment of EXEs and/or .aspx web applications.
IIRC, we have four different projects that use this pattern. All four share layer 1 (though often in slightly different versions, as managed by the VCS). Each of them has its own layer 2. Each of them has its own set of deliverables, which can range from a website, or a website and a background service, to our largest and most complex (and the bread-and-butter of our business), which consists of something like five independent web applications, 20+ console applications/background services, three or four independent web services, half a dozen desktop GUI apps, etc.
It's been our intent to push as much code into levels 1 and 2 as possible, to avoid duplicating logic in the top layers. We've pretty much accomplished that.
Each of layers 1 and 2 produce three deliverables, a DLL containing the code that is not web-related, a DLL containing the code that is web-related, and a DLL containing unit tests.
The problem:
The lower levels were written to make extensive use of singletons.
The non-web DLL in layer 1 contains classes to handle INI files, logging, a custom-built object-relational mapper (which handles database connections), etc. All of these used singletons.
And when we started building things on the web, all of those singletons became a problem. Different users would hit the website, log in, and start doing different things. They'd do something that generated a query, which would result in a call into the singleton ORM to get a new database connection, which would access the singleton configuration object to get the connection string, and then the connection would be asked to perform a query. And in the query the connection would access the singleton logger to log the SQL statement that was generated, and the logger would access the singleton configuration object to get the current username, so as to include it in the log, and if someone else had logged in in the meantime that singleton configuration object would have a different current user. It was a mess.
So what we did, when we started writing web applications using this code base, was to create a factory class that was itself a singleton. Every one of the other singletons had a public static instance() method that had been calling a private constructor. Instead, the public static instance() method now obtained a reference to the singleton factory object, then called a method on it to get a reference to the single instance of the class in question.
In other words, instead of having a dozen classes that each maintained its own private static reference, we now had a single class that maintained a single static reference, and the object that it maintained a reference to contained a dozen references to the other, formerly singleton classes.
Now we had only one singleton to deal with. And in its public static instance() method, we added some web-specific logic. If we had an HttpContext and that context had an instance of the factory in its session, we'd return the instance from the session. If we had an HttpContext, and it didn't have a factory in its session, we'd construct a new factory, store it in the session, and then return it. If we had no HttpContext, we'd just construct a new factory and return it.
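Roughly, the logic in that instance() method looks like this (simplified):
public class SingletonFactory
{
    private const string SessionKey = "SingletonFactory";

    public static SingletonFactory Instance()
    {
        var context = HttpContext.Current;
        if (context != null && context.Session != null)
        {
            // web request with a session: one factory per user session
            var factory = context.Session[SessionKey] as SingletonFactory;
            if (factory == null)
            {
                factory = new SingletonFactory();
                context.Session[SessionKey] = factory;
            }
            return factory;
        }

        // no HttpContext (console app, background service, unit test): a fresh factory
        return new SingletonFactory();
    }

    // ...references to the formerly-singleton config, logger, ORM, etc.
}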
The code for this was placed in classes we derived from Page, WebControl, and MasterPage, and then we used our classes in our higher-level code.
This worked fine, for .aspx web applications, where users logged in and maintained session. It worked fine for .asmx web services running within those web applications. But it has real limits.
In particular, it won't work in situations where there is no session. We're feeling pressure to provide websites that serve a larger user base - that might have tens or hundreds of thousands of users hitting them dynamically. Up to now our users have been pretty typical desktop business users. They log into our websites, and stay in them much of the day, using our web apps as an alternative to a desktop app. A given customer might have as many as six users who might use our websites, and while we have a thousand or more customers, combined they don't make for all that heavy a load. But our current architecture will not scale to that.
We're also running into situations where ASP.NET MVC would be a better fit for building the web UI than .aspx web forms. And we're exploring building mobile apps that would communicate with stand-alone WCF web services. And while in both of these it looks like it's possible to run them in an environment that has a session, it looks like that would limit their flexibility and performance fairly severely.
So, we're really looking at ways to eliminate these singletons.
What I'd really like:
I'm trying to envision a series of refactors, that would eventually lead to a better-structured, more flexible architecture. I could easily see the advantages of an IoC framework, in our situation.
But here's the thing - from what I've seen of IoC frameworks, they need their dependencies provided to them externally via constructor parameters. My logger class, for example, needs an instance of my config class, from which to obtain the current user. Currently, it is using the public static instance() method on the config class to obtain it. To use an IoC framework, I'd need to pass it in as a constructor parameter.
In other words, from where I sit, the first, and unavoidable task, is to change every class that uses any of these singletons so as to take the singleton factory as a constructor parameter. And that's a huge amount of work.
As an example, I just spent the afternoon doing exactly that, in the level 1 libraries, to see just how much work it is. I ended up changing over 1300 lines of code. The level 2 libraries will be worse.
So, are there any alternatives?
Typically, you should try to wrap the contextual information in its own class and provide a static accessor to refer to the current instance. For example, consider HttpContext: it's available everywhere in a web application via HttpContext.Current.
You should try to devise something similar so that, instead of returning a singleton instance, you return the instance from the current context. That way, you don't need to change the consumer code that refers to these static methods (e.g. Logger.Instance()).
I generally roll up information such as the logger, current user, configuration, and security permissions into an application context (this can be more than one class if the need arises). The AppContext.Current static property returns the current context. The implementation goes something like this:
public interface IContextStorage
{
// Gets the stored context
AppContext Get();
// Stores the context, context can be null
void Set(AppContext context);
}
public class AppContext
{
private static IContextStorage _storageProvider, _defaultStorageProvider;
public static AppContext Current
{
get
{
var value = _storageProvider.Get();
// If context is not available in storage then lookup
// using default provider for worker (threadpool) threads.
if (null == value && _storageProvider != _defaultStorageProvider
&& Thread.CurrentThread.IsThreadPoolThread)
{
value = _defaultStorageProvider.Get();
}
return value;
}
}
...
}
IContextStorage implementations are application specific. The static variable _storageProvider gets injected at application start-up, while _defaultStorageProvider is a simple implementation that looks into the current call context.
AppContext creation happens in multiple stages: for example, global information such as configuration is read and cached at application start-up, while user-specific information such as identity and security is gathered at the authentication stage. Once all the info is available, the actual instance is created and stored in the app-specific storage location. For example, a desktop application will use a singleton instance, while a web application can store the instance in session state. For a web application, you may have logic at the start of each request to ensure that the context is initialized.
For scalable web applications, you can have a storage provider that stores the context instance in a cache and rebuilds it if it is not present there.
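For illustration, a session-backed storage provider could look like this (a sketch; a cache-backed provider for the scalable case would have the same shape, plus the rebuild logic):
public class WebContextStorage : IContextStorage
{
    private const string Key = "AppContext";

    public AppContext Get()
    {
        var http = HttpContext.Current;
        return (http != null && http.Session != null)
            ? http.Session[Key] as AppContext
            : null;
    }

    public void Set(AppContext context)
    {
        HttpContext.Current.Session[Key] = context;
    }
}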
I'd recommend starting by implementing the "Poor Man's DI" pattern. This is where you define two constructors in your classes: one that accepts its dependencies as parameters (for IoC), and a default constructor that news them up (or calls a singleton).
This way you can introduce IoC incrementally, and still have everything else work using the default constructors. Eventually when you have IoC being used in most places you can start to remove the default constructors (and the singletons).
public class Foo {
    private readonly ILogger _logger;
    private readonly IConfig _config;

    public Foo(ILogger log, IConfig config) {
        _logger = log;
        _config = config;
    }

    public Foo() : this(Logger.Instance(), Config.Instance()) {}
}
I have an ASP.NET web site which uses domain-driven design and a repository for its database operations.
I want to know the pros and cons of a singleton repository, a static repository, and a simple repository class which is newed up on every access.
Furthermore, if anyone can compare them and guide me on which one to use, I would appreciate it.
Static and singleton aren't good solutions for the Repository pattern. What if your application needs to use two or more repository implementations in the future?
IMO the best solution is to use a Dependency Injection container and inject your IRepository interface into the classes that need it.
I suggest you read a good book about domain-driven design and a good book about dependency injection (like Mark Seemann's Dependency Injection in .NET).
With singletons and static classes your application won't be scalable. Say you have two singleton repositories:
class Repository<TEntity> {
static Repository<TEntity> Instance { get { ... /*using sql server*/ } }
}
class Repository2<TEntity> {
static Repository2<TEntity> Instance { get { ... /*using WCF or XML or any else */ } }
}
The services using them must have a static reference to one of them or both:
class OrderService {
public void Save(Order order) { Repository<Order>.Instance.Insert(order); }
}
How can you save your order using Repository2, if the repository is statically referenced?
A better solution is using DI:
interface IRepository<TEntity> { ... }
class SqlRepository<TEntity> : IRepository<TEntity> { ....}
class OrderService {
    private readonly IRepository<Order> _repo;
    public OrderService(IRepository<Order> repo) { _repo = repo; }
    public void Save(Order order) { _repo.Insert(order); }
}
Don't use static or singleton repositories, because:
It affects testability: you cannot mock them when unit testing.
It affects extensibility: you cannot make more than one concrete implementation, and you cannot replace behavior without recompiling.
It affects scalability in terms of lifecycle management; instead, depend on a dependency injection framework to inject the dependency and manage its lifecycle.
It affects maintainability: it forces a dependency on a concrete implementation instead of an abstraction.
Bottom line: DON'T use static or singleton repositories.
Instead, create repository interfaces in your domain model project, implement those interfaces in a concrete data access project, and use a dependency injection framework.
Two SOLID reasons not to have a singleton repository:
Consumers of the repository will be coupled to the repository implementation. This negatively affects extensibility and testability. It is a DIP violation: depend on abstractions, not on concretions.
The repository implementation will have to violate SRP, because it will most likely end up managing the ORM session, the database connection, and potentially transactions. It would also have to make thread-safety guarantees, because it can potentially be used from multiple threads. Instead, the database connection (ORM session) should be injected into the repository implementation so that consuming code can arrange multiple repository calls into a transaction.
A possible solution to both problems is Constructor Injection.
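A sketch of what that looks like (IDbSession here is a placeholder for your ORM's session or context type, and the interface is made up for the example):
public interface IOrderRepository
{
    void Add(Order order);
}

public class OrderRepository : IOrderRepository
{
    // the session/connection is owned and disposed by the caller,
    // not by the repository itself
    private readonly IDbSession _session;

    public OrderRepository(IDbSession session)
    {
        _session = session;
    }

    public void Add(Order order)
    {
        _session.Save(order);
    }
}
Because the consuming code creates the session and passes it in, it can share one session across several repositories and wrap their calls in a single transaction.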
I personally respectfully disagree with previous answers.
I have developed multiple websites (one with 7 million page views per month) and never had a single problem with my static repositories.
My static repository implementation is pretty simple and only contains object providers as properties. A single repository can contain as many providers as you need.
Then, the providers are responsible for managing database connections and transactions. Using TransactionScope, the consumer can manage the transactions or leave that to the providers.
Every provider is developed using the singleton pattern.
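The shape of it is roughly this (simplified, with made-up provider names):
public static class MyRepository
{
    // each provider is exposed through an interface and created once
    public static IMyProvider MyProvider { get; private set; }
    public static IOrderProvider OrderProvider { get; private set; }

    static MyRepository()
    {
        MyProvider = new SqlMyProvider();
        OrderProvider = new SqlOrderProvider();
    }
}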
This way, I can get my objects by simply calling this :
var myObj = MyRepository.MyProvider.GetMyObject(id);
At any time, there is only one repository and one provider of each type in every web pool of my application. Depending on how many concurrent users you have on your site, you could set up more than one web pool (but most of the time one is enough).
I don't see where the consumers of my repository/providers are coupled to my repository. In fact, the implementations of my providers are totally abstracted away from them. Of course, all providers returned by my repository are interfaces, and I could easily change the implementation of them at any time and push my new DLL to the web server. If I want to create a completely new provider with the same interface, I only have to change it in ONE place: my repository.
This way, there is no need to add dependency injection or to create your own ControllerFactory (for MVC projects).
And you still have the advantage of clean code in your controllers. You also save a lot of repository creation and destruction every time a page is requested (which normally uses reflection in your ControllerFactory).
If you are looking for a scalable solution (if you really need one, which most of the time you don't), my way of developing repositories should never be a problem compared to dependency injection.
We have many instances in our application where we would like to be able to access things like the currently logged-in user id in our business domain and data access layer. On login we push this information to the session, so all of our front-end code has access to it fairly easily, of course. However, we are having huge issues getting at the data in lower layers of our application. We just can't seem to find a way to store a value in the business domain that has global scope just for the user (static classes and properties are of course shared by the application domain, which means all users share just one copy of the object). We have considered passing the session in to our business classes, but then our domain is very tightly coupled to our web application. We want to keep open the prospect of a winforms version of the application going forward.
I find it hard to believe we are the first people to have this sort of issue. How are you handling this problem in your applications?
I don't think having your business classes rely on a global object is that great of an idea, and would avoid it if possible. You should be injecting the necessary information into them - this makes them much more testable and scalable.
So rather than passing a Session object directly to them, you should wrap up the information access methods that you need into a repository class. Your business layer can use the repository class as a data source (call GetUser() on it, for example), and the repository for your web app can use session to retrieve the requested information (return _session.User.Identity).
When porting it to winforms, simply implement the repository interface with a new winform-centric class (i.e. GetUser() returns the windows version of the user principal).
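A sketch of that arrangement (the names are made up for the example):
public interface IUserRepository
{
    string GetUser();
}

// web implementation: pulls the identity from the current request
public class WebUserRepository : IUserRepository
{
    public string GetUser()
    {
        return HttpContext.Current.User.Identity.Name;
    }
}

// winforms implementation: uses the Windows identity instead
public class WinFormsUserRepository : IUserRepository
{
    public string GetUser()
    {
        return WindowsIdentity.GetCurrent().Name;
    }
}

// business code depends only on the interface
public class InvoiceService
{
    private readonly IUserRepository _users;

    public InvoiceService(IUserRepository users)
    {
        _users = users;
    }
}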
In theory, people will tell you it's bad practice.
In practice, we just needed the data from the session level available in the business layers all the time. :-(
We ended up having different storage engines united under a small interface.
public interface ISessionStorage
{
    SomeSessionData Data { get; set; }
    // ... and most of the other data we need stored at "session" level
}
// and a singleton-style static reference to access it
public static ISessionStorage SessionStorage;
this interface is available from almost anywhere in our code.
Then we have both a session-based implementation and a static one:
public class WebSessionStorage : ISessionStorage
{
    public SomeSessionData Data
    {
        get { return HttpContext.Current.Session["somekey"] as SomeSessionData; }
        set { HttpContext.Current.Session["somekey"] = value; }
    }
}

public class WebFormsSessionStorage : ISessionStorage
{
    private static SomeSessionData _SomeSessionData; // this was before automatic get;set;

    public SomeSessionData Data
    {
        get { return _SomeSessionData; }
        set { _SomeSessionData = value; }
    }
}
When the application starts, the website will do
Framework.Storage.SessionStorage = new WebSessionStorage();
in Global.asax, and the FormsApp will do
Framework.Storage.SessionStorage = new WebFormsSessionStorage();
I agree with Womp completely - inject the data down from your front-end into your lower tiers.
If you want to do a half-way cheat (but not too much of a cheat), what you can do is create a very small assembly with just a couple POCO classes to store all of this information you want to share across all of your tiers (currently logged-in username, time logged in, etc.) and just pass this object from your front-end into your biz/data tiers. Now if you do this, you MUST avoid the temptation to turn this POCO assembly into a general utility assembly - it MUST stay small or you WILL have problems in the future (trust me or learn the hard way or ask somebody else to elaborate on this one). However, if you have this POCO assembly, injecting this data through the various tiers becomes very easy and since it's POCO, it serializes very well and works nicely with web services, WCF, etc.
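For instance, the whole shared assembly might be little more than this (illustrative only; the class and property names are made up):
// the entire shared assembly: a couple of small POCO classes, nothing else
public class UserContext
{
    public string UserName { get; set; }
    public DateTime LoginTime { get; set; }
    public string[] Roles { get; set; }
}
The front end fills it in at login (from the session, or from the winforms login) and simply passes it down as an ordinary parameter to the business and data tiers that need it.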