I have a solution structured like so:
Models assembly
Data assembly - defines repository interfaces and a base repository class
ORM assembly - implements the repository interfaces & inherits the base repository class
Business assembly - has a reference to the data assembly, and dynamically pulls in the ORM object via MEF (no explicit reference to the ORM assembly)
UI assembly(s)
In this fashion, I can easily swap out the ORM, if we decide to go with something else.
I'm curious if it's possible to have similar functionality with Unity. I want to decouple my business logic from the underlying ORM. From what I've read, Unity mainly works at compile time while MEF works at runtime. That being said, is it possible to decouple with Unity in such a way that my business layer has no reference to the ORM, but only to the interfaces from the data assembly that the ORM implements? How can Unity know what implements the interface without having a reference to the implementing assembly?
Currently, with MEF, no assembly has a reference to the ORM (other than when the business layer dynamically pulls it in at runtime to discover parts and fill the interface with an object). I would prefer to continue working along these lines and would like to know if I can do that with Unity.
To do the same with Unity, you typically have the ORM interfaces and their implementation in separate assemblies.
For example, in the Project.Orm.Interface assembly you would define the interfaces that any ORM must implement; the rest of your solution would have references to Project.Orm.Interface. This way, no part of your application has references to any concrete ORM implementation.
The Project.Orm.ConcreteImplementation assembly would also reference Project.Orm.Interface and register the concrete types in the container using the interface types they implement (much like the dependent code resolves the types by asking for the interfaces they implement).
In the context of Prism, there would be a dynamically discovered IModule that loads Project.Orm.ConcreteImplementation and registers the types in the container at module initialization time.
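For illustration, a minimal sketch of such a module (OrmModule, IRepository, and OrmRepository are hypothetical names standing in for your own types):

    using Microsoft.Practices.Prism.Modularity;
    using Microsoft.Practices.Unity;

    // Lives in Project.Orm.ConcreteImplementation and is discovered by
    // Prism at runtime, so nothing else references this assembly.
    public class OrmModule : IModule
    {
        private readonly IUnityContainer _container;

        public OrmModule(IUnityContainer container)
        {
            _container = container;
        }

        public void Initialize()
        {
            // Register the concrete type against the interface defined
            // in Project.Orm.Interface.
            _container.RegisterType<IRepository, OrmRepository>();
        }
    }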
I am trying to figure out a good way to architect my solution. I know that I am going to be using the following technologies: ASP.NET WebForms and Entity Framework 4.1. My EF model is based on an existing database, and I'm planning to use the EF DbContext generator to build my context and entities. This is the point where things get a little tricky for me.
I want to have proper separation of concerns, providing for better testability and allowing me to separate my business logic from my DAL. I have three projects in my solution currently: Web, Core, and Data. I would like the dependencies to be Web -> Core <- Data, with no dependency between Web and Data at all. This requires my entities to actually live in Core, rather than Data (where my edmx is). Currently, my thought is to move the Entities.tt file to Core and change its inputFile to point to my edmx in Data, generating my entities in Core. But I'm unsure what to do with the context. It's heavily dependent on EF, so I don't want to simply move it into Core. I thought about interfacing it, creating my own IEntities.Context.tt and dropping that in Core. My concern is the loss of functionality if my interface doesn't expose DbSets and a DbContext.
Two thoughts I've been having on this: 1) put a reference to System.Data.Entity in Core, or 2) don't use DbSet, replace it with ICollection (or some such generic container), and wrap DbContext as just an object in my interface.
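For option 1, a rough sketch of what I have in mind (hypothetical names; EF 4.1 initializes IDbSet<T> properties on a DbContext just as it does DbSet<T> ones):

    // In Core (references System.Data.Entity, but not the edmx or context):
    using System.Data.Entity;

    public interface IEntitiesContext
    {
        IDbSet<Customer> Customers { get; }
        int SaveChanges();
    }

    // In Data: the concrete context implements the Core interface.
    public class EntitiesContext : DbContext, IEntitiesContext
    {
        public IDbSet<Customer> Customers { get; set; }
    }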
Any insight would be very appreciated. Thank you.
There are lots of different patterns you could use, but two come to mind immediately:
1) Add a business / service layer - this will abstract between your data layer and your presentation layer. This is the approach I take most often - using AutoMapper and Dependency Injection (I like Ninject) to make the monkey work easier. Your business layer would expose either its own version of your database objects (not recommended) or objects which relate to your business model (a more robust approach).
2) Use the Inversion of Control pattern - very popular at the moment, though I'm yet to give it a bash in a real-life scenario. Apparently very good for TDD / mocking, etc. It basically means that your data layer has a dependency on your business layer instead of the other way around (sketched below).
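A minimal sketch of that inversion, with hypothetical names (the business layer owns the abstraction, and the data layer implements it):

    // In the business (Core) assembly:
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    // In the data assembly, which references Core rather than
    // the other way around:
    public class EfCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id)
        {
            // The EF-specific query would go here.
            throw new System.NotImplementedException();
        }
    }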
FYI - My "Core" or "Common" assemblies know nothing about my business or data layers - they merely provide platform agnostic helpers and common classes - if I want to create common MVC functionality, for example, I'll create a Company.MVC.Core assembly instead.
If your solution is completely greenfield, then I like to use a code-first approach in Entity Framework (forgive the shameless plug, but I've put a tutorial about this on my blog: http://www.terric.co.uk/code-first-entity-framework-and-sql-migrations/). I like the control it gives me that I can't seem to get when I generate a .edmx.
Moving on to structure, I usually separate the layers of my project into separate assemblies, Domain (with Data) and WebUI, structured with the following folders (namespaces):

Domain (business layer and data layer assembly)
- Data (contains my EF data context and an interface to the context)
- Entities (contains my POCO objects for the context)

WebUI (presentation layer assembly)
- Infrastructure (contains my dependency injection initialiser)
I never DI my entities and instead use the concretes in my presentation layer; however, I'll always DI the context, as I may want / have to use ADO.NET (especially for legacy apps), where my Domain layer will still use ADO.NET to read / write my POCO entities. This way, when I eventually get scope to implement an ORM in my legacy app, I can simply DI the ORM version of my Domain.
As a footnote to this, if you were following the repository pattern you could always interface your repositories and DI them. Either way, your POCOs should be specific to the solution, and the underlying data structure rarely changes dramatically, hence I never DI them.
I'm starting a new personal Prism 4 project. The Reference Implementation currently uses Unity.
I'd like to know if I should use MEF instead, or just keep to Unity.
I know a few discussions have mentioned that these two are different, and they do overlap, but will I be missing out if I simply choose Unity all the way?
Also check out the documentation:
Key Decision: Choosing a Dependency Injection Container
The Prism Library provides two options for dependency injection containers: Unity or MEF. Prism is extensible, thereby allowing other containers to be used instead with a little bit of work. Both Unity and MEF provide the same basic functionality for dependency injection, even though they work very differently.
Some of the capabilities provided by both containers include the following:
They both register types with the container.
They both register instances with the container.
They both imperatively create instances of registered types.
They both inject instances of registered types into constructors.
They both inject instances of registered types into properties.
They both have declarative attributes for marking types and dependencies that need to be managed.
They both resolve dependencies in an object graph.
Unity provides several capabilities that MEF does not:
It resolves concrete types without registration.
It resolves open generics.
It uses interception to capture calls to objects and add additional functionality to the target object.
MEF provides several capabilities that Unity does not:
It discovers assemblies in a directory.
It uses XAP file download and assembly discovery.
It recomposes properties and collections as new types are discovered.
It automatically exports derived types.
It is deployed with the .NET Framework.
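To make the stylistic difference concrete, a minimal sketch of the same registration in each container (ILogger and FileLogger are hypothetical):

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using Microsoft.Practices.Unity;

    public interface ILogger
    {
        void Log(string message);
    }

    // MEF discovers this implementation declaratively via the attribute.
    [Export(typeof(ILogger))]
    public class FileLogger : ILogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // Unity: imperative registration, then resolution.
            var unity = new UnityContainer();
            unity.RegisterType<ILogger, FileLogger>();
            ILogger fromUnity = unity.Resolve<ILogger>();

            // MEF: attribute-driven discovery from an assembly catalog.
            var catalog = new AssemblyCatalog(typeof(Program).Assembly);
            var mef = new CompositionContainer(catalog);
            ILogger fromMef = mef.GetExportedValue<ILogger>();

            fromUnity.Log("resolved by Unity");
            fromMef.Log("resolved by MEF");
        }
    }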
I am currently doing the same investigation. Last week I attended the p&p symposium at Redmond, and I had the chance to chat with some of the p&p people about this.
MEF
+ Part of .NET, no need for extra libraries
+ Very powerful in extensibility and modularity scenarios
- More generic approach, less flexible for DI scenarios
- You need to decorate your code with attributes; your code is glued to MEF

Unity
+ Very flexible for DI scenarios
+ If you stick with ctor injection and avoid using named instances, then you don't need any attributes, and most of your system doesn't rely on Unity
- No out-of-the-box support for extensibility and modularity scenarios
- Need to deploy the third-party libraries
What I think is a good idea is to use MEF for extensibility (managing the modules of your app, localizing registrations) and to use Unity for DI.
To be clear, MEF implements Inversion of Control, but that doesn't make the two the same. The difference is that we use Unity when we have static dependencies, while MEF provides us with dynamic dependencies.
MEF also provides extensibility, by which we can introduce a port-type mechanism and specify the type of component that can interact via that port.
More can be understood from the MSDN Document.
I'm working on an ASP.Net website along with a supporting Class Library for my Business Logic, Data Access code, etc.
I'm EXTREMELY new and unfamiliar with the Unity Framework and Dependency Injection as a whole. However, I've managed to get it working by following the source code for the ASP.NET 3.5 Portal Starter Kit on codeplex. But herein lies the problem:
The Class Library is set up with Unity, and several of my classes have [Dependency] attributes on their properties (I'm exclusively using property setter injection). However, the Global.asax is telling Unity how to handle the injections... in the Class Library.
Is this best practice, or should the Class Library handle its own injections so that I can re-use the library with other websites, web apps, or applications? If that is indeed the case, where would the injection code go?
I'm not sure how clear the question is. Please let me know if I need to explain more.
Though I'm not familiar with Unity (I'm a StructureMap user), the final mappings should live in the consuming application. The DLL you are using can define those mappings, but you also want to be able to override them when needed. Say you need an instance of IFoo: you have one mapped in your Class Library, but you've added a new one that lives only in the website. Having the mappings defined in the site allows you to keep things loosely coupled - or else why are you using a DI container?
Personally, I try to code things to facilitate an IoC container, but I will never try to force an IoC container into a project.
My solution breakdown goes roughly:
(Each one of these is a project.)
Project.Domain
Project.Persistence.Implementation
Project.Services.Implementation
Project.DIInjectionRegistration
Project.ASPNetMVCFrontEnd (I use MVC, but it doesn't matter).
I try to maintain strict boundaries around project references. The frontend project cannot reference any *.Implementation projects directly. (The *.Implementation projects contain the actual implementations of the interfaces in Domain, in this case.) So the ASPNetMVCFrontEnd has references to Domain, to DIInjectionRegistration, and to my DI container.
In Project.DIInjectionRegistration I tie all the pieces together. This project has all the references to the implementations and to the DI framework; it contains the code that does the registering of components. Autofac lets me break down component registration easily (see the module sketch below), which is why I took this approach.
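For example, a registration module in that project might look something like this (a minimal sketch; the repository types are hypothetical):

    using Autofac;

    // Lives in Project.DIInjectionRegistration; only this project
    // references Project.Persistence.Implementation.
    public class PersistenceModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            // Map the domain interface to its persistence implementation.
            builder.RegisterType<SqlCustomerRepository>()
                   .As<ICustomerRepository>();
        }
    }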
In the example here, I don't have any references to the container in my implementation projects. There's nothing wrong with having them, though; if your implementation requires it, then go ahead.
I am creating a new ASP.NET website (not MVC), and I want the data access layer to have two versions: one using LINQ to SQL and another using the ADO.NET Entity Framework.
Note: Kigg does the same, but in an MVC website and in a more complex way than I want.
I learned that the best pattern to achieve my goal is the repository design pattern.
My question is: where in my code and layers (DAL, BAL, UI) will the switch happen? In other words, where will I change the code to swap LINQ to SQL for the ADO.NET Entity Framework, or vice versa?
I wrote:
IRepository repository;
Then in the class constructor:
repository = new MyRepositoryLinqToSql();
Can someone walk me through this part, architecture-wise?
If you want to create a pluggable architecture where you can swap out the repository layer on the fly, then you will need to put everything behind interfaces and use something like StructureMap to dynamically swap in what you need when you need it.
You would want to define a repository interface such as IAccountRepository, with methods such as GetAccountByID, SaveAccount, DeleteAccount, etc. Then you would create one implementation for LINQ to SQL (LSAccountRepository) and one for EF (EFAccountRepository), both implementing IAccountRepository.
Then, using StructureMap, you would have syntax like the following to load the appropriate repository based on the system you are loading in.
IAccountRepository _accountRepository = ObjectFactory.GetInstance<IAccountRepository>();
Then via your configuration you can specify the default implementation of IAccountRepository and swap this out to point to either implementation at any time.
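The wiring itself might look something like this (a minimal sketch against StructureMap's ObjectFactory API; the repository types are hypothetical):

    using StructureMap;

    public static class Bootstrapper
    {
        public static void Configure()
        {
            ObjectFactory.Initialize(x =>
            {
                // Point this at EFAccountRepository instead (or drive the
                // choice from configuration) to swap ORMs in one place.
                x.For<IAccountRepository>().Use<LSAccountRepository>();
            });
        }
    }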
If this is too complex, then a dependency injection pattern can be used, in that you can have a method with a parameter of IAccountRepository. This method would be called by a controller (MVP or MVC), and you can pass in the appropriate reference at that time. This way you do not directly instantiate the repository inside of the method that might need one repository vs. another.
Even if you do decide on a DI pattern, you can still use StructureMap. When you instantiate an object that has other dependencies (in the constructor) that StructureMap knows about, StructureMap will load up those objects as well. Then the caller of the object that has the method you need will be the only loose coupling needed, as StructureMap will dynamically take care of the dirty work for you.
This is a bit of a heavy question to answer, but you could add a constructor that takes an IRepository instance as a parameter and use that within the class itself. As for how to populate it, please do some research on Inversion of Control containers such as Spring and Windsor. These tools take configuration details about which specific implementations you want to use and then automatically pass these instances to the constructors and properties of classes.
For example, you can indicate which version of IRepository you want to use in your app.config file, and wherever this appears in a constructor, an instance of your chosen class will be passed in.
I think you're looking for a "Dependency Injection" solution.
Check this out:
Unity Dependency Injection Video using ASP.NET Webforms - Ninject and Autofac IoC Too!
Unity And Asp.Net WebForms
I want to isolate all my code from the IoC container library that I have chosen (Unity). To do so, I created an IContainer interface that exposes Register() and Resolve(). I created a class called UnityContainerAdapter that implements IContainer and that wraps the real container. So only the assembly where UnityContainerAdapter is defined knows about the Unity library.
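A minimal sketch of that adapter (hypothetical signatures; the real interface would likely expose more overloads):

    using Microsoft.Practices.Unity;

    // Defined in a shared assembly that has no Unity reference:
    public interface IContainer
    {
        void Register<TFrom, TTo>() where TTo : TFrom;
        T Resolve<T>();
    }

    // Defined in the single assembly that references Unity:
    public class UnityContainerAdapter : IContainer
    {
        private readonly IUnityContainer _inner = new UnityContainer();

        public void Register<TFrom, TTo>() where TTo : TFrom
        {
            _inner.RegisterType<TFrom, TTo>();
        }

        public T Resolve<T>()
        {
            return _inner.Resolve<T>();
        }
    }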
I have a leak in my isolation, though. Unity searches for attributes on a type's members to know where to inject the dependencies; most IoC libraries I have seen support something similar. The problem is that I want to use that feature, but I don't want my classes to have a dependency on the Unity-specific attribute.
Do you have any suggestions on how to resolve this issue?
Ideally, I would create my own [Dependency] attribute and use that one in my code. But I would need to tell the real container to search for my attribute instead of its own.
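Defining such an attribute is the easy part (a hypothetical sketch; the name InjectAttribute is mine, and the hard part - teaching the container to honour it - is what the answers below address):

    using System;

    // A container-agnostic marker for property injection; the container
    // must be configured to recognise it in place of its own attribute.
    [AttributeUsage(AttributeTargets.Property)]
    public sealed class InjectAttribute : Attribute
    {
    }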
Check out the Common Service Locator project:
The Common Service Locator library contains a shared interface for service location which application and framework developers can reference. The library provides an abstraction over IoC containers and service locators. Using the library allows an application to indirectly access the capabilities without relying on hard references. The hope is that using this library, third-party applications and frameworks can begin to leverage IoC/Service Location without tying themselves down to a specific implementation.
Edit: This doesn't appear to solve your desire to use attribute-based declaration of dependency injection. You can either choose not to use it, or find a way to abstract the attributes to multiple injection libraries (like you mentioned).
That is the basic problem with declarative interfaces -- they are tied to a particular implementation.
Personally, I stick to constructor injection so I don't run into this issue.
I found the answer: Unity uses an extension to configure what it calls "selector policies". To replace the attributes used by Unity, you just code your own version of the UnityDefaultStrategiesExtension class and register your own "selector policies" that use your own attributes.
See this post on the Unity codeplex site for details on how to do that.
I'm not sure it will be as easy if I switch to another IoC library, but this solves my problem for now.
Couldn't you just set up your configuration without the attributes, in XML? That makes it a bit more "unclear", I know; personally I use a combination of XML and attributes. But at least it "solves" your dependency-on-Unity issue.