New Prism Project - Use MEF or Unity?

I'm starting a new personal Prism 4 project. The Reference Implementation currently uses Unity.
I'd like to know if I should use MEF instead, or just keep to Unity.
I know a few discussions have mentioned that these two are different, and they do overlap, but will I be missing out if I simply choose Unity all the way?

Also check out the documentation:
Key Decision: Choosing a Dependency Injection Container
The Prism Library provides two options for dependency injection containers: Unity or MEF. Prism is extensible, thereby allowing other containers to be used instead with a little bit of work. Both Unity and MEF provide the same basic functionality for dependency injection, even though they work very differently.
Some of the capabilities provided by both containers include the following:
They both register types with the container.
They both register instances with the container.
They both imperatively create instances of registered types.
They both inject instances of registered types into constructors.
They both inject instances of registered types into properties.
They both have declarative attributes for marking types and dependencies that need to be managed.
They both resolve dependencies in an object graph.
Unity provides several capabilities that MEF does not:
It resolves concrete types without registration.
It resolves open generics.
It uses interception to capture calls to objects and add additional functionality to the target object.
MEF provides several capabilities that Unity does not:
It discovers assemblies in a directory.
It uses XAP file download and assembly discovery.
It recomposes properties and collections as new types are discovered.
It automatically exports derived types.
It is deployed with the .NET Framework.
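
As a rough illustration of the overlap, here is a minimal sketch of the same register-and-resolve step in each container, assuming Unity 2.x (Microsoft.Practices.Unity) and MEF from System.ComponentModel.Composition; the IMessageService names are invented for the example:

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using Microsoft.Practices.Unity;

public interface IMessageService { void Send(string text); }

public class UnityMessageService : IMessageService { public void Send(string text) { } }

[Export(typeof(IMessageService))]                     // MEF: declarative contract on the type
public class MefMessageService : IMessageService { public void Send(string text) { } }

public static class ContainerComparison
{
    public static void Main()
    {
        // Unity: imperative registration of a type mapping, then resolve.
        var unity = new UnityContainer();
        unity.RegisterType<IMessageService, UnityMessageService>();
        IMessageService fromUnity = unity.Resolve<IMessageService>();

        // MEF: discover exports through a catalog, then resolve by contract.
        var catalog = new AssemblyCatalog(typeof(MefMessageService).Assembly);
        var mef = new CompositionContainer(catalog);
        IMessageService fromMef = mef.GetExportedValue<IMessageService>();
    }
}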

I am currently doing the same investigation. Last week I attended the p&p symposium in Redmond, where I had the chance to chat with some of the p&p people about this.
MEF
+ Part of .NET, no need for extra libraries
+ Very powerful for extensibility and modularity scenarios
- More generic approach, less flexible for DI scenarios
- You need to decorate your code with attributes, so your code is glued to MEF
Unity
+ Very flexible for DI scenarios
+ If you stick with constructor injection and avoid named instances, you don't need any attributes, so most of your system doesn't rely on Unity (see the sketch below)
- No out-of-the-box support for extensibility and modularity scenarios
- Need to deploy the 3rd-party libraries
What I think is a good idea is to use MEF for extensibility (manage the modules of your app, localize registrations) and use Unity for DI.
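
To illustrate the constructor-injection point in the list above, here is a minimal sketch (the OrderProcessor and repository names are invented): the consuming class carries no Unity attributes at all, so only the composition root knows the container exists.

using Microsoft.Practices.Unity;

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

// No [Dependency] attributes anywhere; the class just states what it needs.
public class OrderProcessor
{
    private readonly IOrderRepository _repository;
    public OrderProcessor(IOrderRepository repository) { _repository = repository; }
}

public static class CompositionRoot
{
    public static OrderProcessor Build()
    {
        var container = new UnityContainer();
        container.RegisterType<IOrderRepository, SqlOrderRepository>();
        // Unity picks the greediest constructor and injects IOrderRepository automatically.
        return container.Resolve<OrderProcessor>();
    }
}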

To be clear, MEF implements Inversion of Control, but that is not all it is, so the two are not the same. The practical difference is that we use Unity when we have static dependencies, while MEF gives us dynamic dependencies.
MEF also provides extensibility: it introduces a port-like contract mechanism and lets you specify the type of component that can interact through that port.
More can be understood from: MSDN Document
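
As a rough sketch of that "port" idea (the ILogger and FileLogger names are invented for the example): the export contract acts as the port, and any component exported under it can be discovered and plugged in dynamically.

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface ILogger { void Write(string message); }   // the "port"

[Export(typeof(ILogger))]
public class FileLogger : ILogger { public void Write(string message) { } }

public class Host
{
    // Every component exported under the ILogger contract plugs in here,
    // including ones discovered in assemblies dropped into the directory.
    [ImportMany]
    public IEnumerable<ILogger> Loggers { get; set; }

    public void Compose()
    {
        var catalog = new DirectoryCatalog(".");
        new CompositionContainer(catalog).ComposeParts(this);
    }
}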

Related

Clean Architecture (DDD) Why are domain objects (DB Entitites) and DbContext in separate projects?

I understand the need for abstraction, separation of concerns, and unit tests; however, it seems to me that separating the entities and the context into two projects is slight overengineering.
I could really be missing something, but is this because you want to stay open to different ORMs?
Much thanks for the clarification.
The main reason I prefer to have Infrastructure in a separate project, rather than just a separate folder, from the domain model (Core project) is simple: enforcing my design via the compiler.
I have a design rule, which is basically the Dependency Inversion Principle. Don't depend on low level implementations (such as those found in Infrastructure), instead depend on abstractions (interfaces). Also, don't have your abstractions depend on details; have details depend on abstractions. The details of how and which infrastructure is being used for a given abstraction are in the Infrastructure service implementations.
Abstractions say what; implementations say how.
What: I need to send an email.
ISendEmail interface
How: I want to do it using the SMTP protocol
SmtpEmailSender class (implements ISendEmail)
How: I want to do it using a SendGrid API
SendGridEmailSender class (implements ISendEmail)
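A minimal sketch of that split (only ISendEmail, SmtpEmailSender, and SendGridEmailSender come from the list above; the method signature, namespaces, and internals are placeholders):

// Core project: the abstraction says what.
namespace MyApp.Core.Interfaces
{
    public interface ISendEmail
    {
        void Send(string to, string subject, string body);
    }
}

// Infrastructure project: the implementations say how.
// Infrastructure references Core; Core never references Infrastructure.
namespace MyApp.Infrastructure.Email
{
    using MyApp.Core.Interfaces;

    public class SmtpEmailSender : ISendEmail
    {
        public void Send(string to, string subject, string body)
        {
            // talk to an SMTP server here
        }
    }

    public class SendGridEmailSender : ISendEmail
    {
        public void Send(string to, string subject, string body)
        {
            // call the SendGrid API here
        }
    }
}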
So, in a single project, how would you ensure that the implementations depend on the interfaces, and not vice versa?
How would you ensure your domain classes didn't directly reference or use Infrastructure types?
I'm not aware of a way to do this.
But if you put them in separate projects, and you have the implementation details project depend on the abstractions-and-models project, you now have solved the problem. The compiler WILL NOT ALLOW the Core project to reference anything in the Infrastructure project, because it would create a circular dependency.
This constraint helps developers do the right thing and steers them toward the pit of success even if they don't completely grok how the Dependency Inversion Principle works or why it's important.
And I've never found 3 projects (Core/Infra/UI) to be overengineering for any non-demo app I've built for real work. It's only 3 projects.

.NET Core keyed dependency injection

Unity, Autofac, and probably quite a few other dependency injection packages all support "keyed dependency injection containers" that allow you to register multiple implementations of an interface and identify them uniquely via a key (be it a string, an int, an enum, or whatever else).
However, as far as I can see, .NET Core doesn't have such a feature, and if I were to implement anything like this I'd have to use a workaround or find some hacky solution. I am wondering: is there a particular reason this has not been introduced in .NET Core?
Unity example:
container.RegisterType<IService, ServiceImplementation1>("1");
container.RegisterType<IService, ServiceImplementation2>("2");
Autofac example:
builder.RegisterType<ServiceImplementation1>().Keyed<IService>("1");
builder.RegisterType<ServiceImplementation2>().Keyed<IService>("2");
...,is there a particular reason this has not been introduced in .NET Core?
Short answer: Yes
Reference: Default service container replacement
The built-in service container is designed to serve the needs of the framework and most consumer apps. We recommend using the built-in container unless you need a specific feature that the built-in container doesn't support, such as:
Property injection
Injection based on name (a.k.a keyed)
Child containers
Custom lifetime management
Func support for lazy initialization
Convention-based registration
note: emphasis mine
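
One common workaround with the built-in container, sketched below under the assumption that resolving by a string key is acceptable (the key-to-type mapping and the AddKeyedServices name are invented for the example), is to register every implementation and expose a factory delegate that picks one by key:

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IService { }
public class ServiceImplementation1 : IService { }
public class ServiceImplementation2 : IService { }

public static class KeyedRegistration
{
    public static IServiceCollection AddKeyedServices(this IServiceCollection services)
    {
        // Register the concrete implementations themselves.
        services.AddTransient<ServiceImplementation1>();
        services.AddTransient<ServiceImplementation2>();

        // A factory delegate stands in for the missing keyed registration.
        services.AddTransient<Func<string, IService>>(sp => key => key switch
        {
            "1" => sp.GetRequiredService<ServiceImplementation1>(),
            "2" => sp.GetRequiredService<ServiceImplementation2>(),
            _ => throw new ArgumentException($"Unknown key: {key}")
        });

        return services;
    }
}

// A consumer takes Func<string, IService> and asks for the key it needs:
//   public MyConsumer(Func<string, IService> factory) { _service = factory("2"); }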

Handling Dependency Injections - Where does the logic go?

I'm working on an ASP.Net website along with a supporting Class Library for my Business Logic, Data Access code, etc.
I'm EXTREMELY new and unfamiliar with the Unity Framework and Dependency Injection as a whole. However, I've managed to get it working by following the source code for the ASP.NET 3.5 Portal Starter Kit on codeplex. But herein lies the problem:
The Class Library is set up with Unity, and several of my classes have [Dependency] attributes on their properties (I'm exclusively using property setter injection for this). However, it's the Global.asax that is telling Unity how to handle the injections... in the Class Library.
Is this best practice, or should the Class Library handle its own injections so that I can re-use the library with other websites, web apps, or applications? If that is indeed the case, where would the injection code go?
I'm not sure how clear the question is. Please let me know if I need to explain more.
Though I'm not familiar with Unity (I'm a StructureMap user), the final mappings should live in the consuming application. You can have the DLL you are using define those mappings, but you also want to be able to override them when needed. Say you need an instance of IFoo: you have one mapped in your Class Library, but you've added a new one that lives only in the website. Having the mappings defined in the site lets you keep things loosely coupled; otherwise, why are you using a DI container?
Personally, I try to code things to facilitate an IoC container, but I will never try to force an IoC container into a project.
My solution breakdown goes roughly:
(Each of these is a separate project.)
Project.Domain
Project.Persistence.Implementation
Project.Services.Implementation
Project.DIInjectionRegistration
Project.ASPNetMVCFrontEnd (I use MVC, but it doesn't matter).
I try to maintain strict boundaries on project references. The actual front-end project cannot reference any *.Implementation projects directly. (The *.Implementation projects contain the actual implementations of the interfaces in Domain.) So the ASPNetMVCFrontEnd has references to the Domain, to the DIInjectionRegistration project, and to my DI container.
In Project.DIInjectionRegistration I tie all the pieces together. This project has all the references to the implementations and to the DI framework, and it contains the code that registers the components. Autofac lets me break component registration down easily, which is why I took this approach.
In this example I don't have any references to the container in my implementation projects. There's nothing wrong with having them, though; if your implementation requires it, go ahead.
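
As a rough sketch of that layout using Unity instead of Autofac (the type names are invented; only the registration project references both the implementations and the container library):

using Microsoft.Practices.Unity;

// Stand-ins for types that would live in Project.Domain and Project.Persistence.Implementation.
public interface IProductRepository { }
public class SqlProductRepository : IProductRepository { }

// Project.DIInjectionRegistration: ties the pieces together.
public static class ContainerBootstrapper
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();
        container.RegisterType<IProductRepository, SqlProductRepository>();
        return container;
    }
}

// Project.ASPNetMVCFrontEnd (e.g. Global.asax) consumes it and can still
// override a mapping with one that lives only in the website:
//   var container = ContainerBootstrapper.Build();
//   container.RegisterType<IProductRepository, WebOnlyProductRepository>();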

What are the benefits of Spring Actionscript considering Dynamic Proxies and Reflection is limited

What are the benefits of Spring Actionscript considering Dynamic Proxies are not possible in the current version of Actionscript and Reflection is quite limited.
So for example I could specify my object creation in an XML application context, but why would I do that when I can simply specify that in code, and hence take advantage of static type checking etc.
It is by no means my intent to belittle the work done on Spring Actionscript but more to find an application for it in my projects.
Besides XML configuration, Spring ActionScript also supports MXML configuration. The type of config (XML, MXML) depends on the use cases your application needs to support. For the reasons you mention, it makes perfect sense to configure most of the context in MXML, but I would encourage you to externalize the config of service endpoints in every case.
In a past project we opted for XML config since the configuration was generated at runtime when a user logged on to the application. Depending on the user credentials, different endpoints and various different settings were used. We could not have done this elegantly with static MXML configs.
Both config types have their strengths and weaknesses, and it's up to you to decide what type you want to use. I think we could even support a mixture of MXML and XML quite easily actually if that would make sense. As soon as we have Dynamic Proxies and class loading, XML config will make a lot more sense.
I would agree with Sean in the general sense that trying to force Flex inside of the Java box is generally a bad idea. As many similarities as there are, Flex is not Java.
That being said, there are plenty of reasons why you might want to have some of your configuration in an external XML file, not the least of which is in the use case of configuring your service destinations and endpoints, where you may have a need to be able to change the endpoint URI without having to recompile your application.
There are several projects available that are simply misguided ports of philosophies from other platforms. Whenever starting in on a new platform, I think the best thing to do is figure out how people are effectively developing and go from there.
I say all of that because I think all of the java-esque frameworks for flex/flash leave you worse off than you started. You do need dependency injection, but there are good as3/mxml-friendly frameworks for that (Mate, Swiz). There is absolutely no point in using xml when you can use mxml, which is strongly typed.

How to hide the real IoC container library?

I want to isolate all my code from the IoC container library that I have chosen (Unity). To do so, I created an IContainer interface that exposes Register() and Resolve(). I created a class called UnityContainerAdapter that implements IContainer and that wraps the real container. So only the assembly where UnityContainerAdapter is defined knows about the Unity library.
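A minimal sketch of what I mean, with simplified Register and Resolve signatures (the generic shape is just an example):

using Microsoft.Practices.Unity;

// The only abstraction the rest of the code base sees.
public interface IContainer
{
    void Register<TFrom, TTo>() where TTo : TFrom;
    T Resolve<T>();
}

// Lives in the single assembly that is allowed to reference Unity.
public class UnityContainerAdapter : IContainer
{
    private readonly IUnityContainer _inner = new UnityContainer();

    public void Register<TFrom, TTo>() where TTo : TFrom
    {
        _inner.RegisterType<TFrom, TTo>();
    }

    public T Resolve<T>()
    {
        return _inner.Resolve<T>();
    }
}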
I have a leak in my isolation, though. Unity searches for attributes on a type's members to know where to inject the dependencies. Most IoC libraries I have seen support that as well. The problem is that I want to use that feature, but I don't want my classes to depend on the Unity-specific attribute.
Do you have any suggestions on how to resolve this issue?
Ideally I would create my own [Dependency] attribute and use that one in my code. But I would need to tell the real container the search for my attribute instead of its own.
Check out the Common Service Locator project:
The Common Service Locator library contains a shared interface for service location which application and framework developers can reference. The library provides an abstraction over IoC containers and service locators. Using the library allows an application to indirectly access the capabilities without relying on hard references. The hope is that using this library, third-party applications and frameworks can begin to leverage IoC/Service Location without tying themselves down to a specific implementation.
Edit: This doesn't appear to solve your desire to use attribute-based declaration of dependency injection. You can either choose not to use it, or find a way to abstract the attributes to multiple injection libraries (like you mentioned).
That is the basic problem with declarative interfaces -- they are tied to a particular implementation.
Personally, I stick to constructor injection so I don't run into this issue.
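A small sketch of what that looks like (the names are invented): because the dependency arrives through the constructor, the class compiles against nothing but your own interfaces, and any container, or plain new-ing, can supply it.

public interface ICustomerRepository { }

// No [Dependency] or [Import] attribute needed; the constructor is the contract.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }
}

// Unity, MEF, or a hand-written factory can all satisfy this, e.g. with Unity:
//   container.RegisterType<ICustomerRepository, SqlCustomerRepository>();
//   var service = container.Resolve<CustomerService>();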
I found the answer: Unity uses an extension to configure what it calls "selector policies". To replace the attributes used by Unity, you code your own version of the UnityDefaultStrategiesExtension class and register your own selector policies that use your own attributes.
See this post on the Unity codeplex site for details on how to do that.
I'm not sure that it's going to be easy to do the same if I switch to another IoC library but that solves my problem for now.
Couldn't you just set up your configuration without the attributes, in XML? That makes it a bit more "unclear", I know; personally I use a combination of XML and attributes, but at least it "solves" your dependency on Unity.
