The basic concept of DI is that dependency objects should be passed in rather than created inside the dependent object.
The only reasons I could find for this are in this answer:
To hide the construction details of the dependency from the dependent, making the code less ugly.
To facilitate mocking in Unit Testing.
Are there any other reasons?
If not, can you please explain these reasons more to justify DI?
Let's look at a real-life example. Let's say you have a car. A car needs an engine.
class Car
{
private $engine;
public function __construct()
{
$this->engine = new V6Engine();
}
}
The car has a dependency on the engine. In this case, the car itself needs to construct a new engine!
Does it make sense?
Well.. NO!
Also, the car is coupled to the specific version of the engine.
This makes more sense: someone else provides the engine. It could be some engine supplier, an engine factory... It is not the car's job to create the engine!
class Car
{
private $engine;
public function __construct(Engine $engine)
{
$this->engine = $engine;
}
}
interface Engine
{
public function start();
}
class V6Engine implements Engine
{
public function start()
{
echo "vrooom, vrooom V6 cool noise"
}
}
Also, you can easily swap the engine; you are not coupled to a specific engine. The new engine only needs to be able to start.
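For illustration, a minimal sketch of how a caller might wire this up (SuperV8Engine is a hypothetical second implementation):
// The caller (a factory, a DI container, a test...) decides which engine the car gets.
$car = new Car(new V6Engine());

// Swapping implementations requires no change to Car,
// as long as the new engine implements the Engine interface.
$fasterCar = new Car(new SuperV8Engine()); // hypothetical Engine implementation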
Martin Fowler has written a very good article about the inversion of control and dependency injection.
https://martinfowler.com/articles/injection.html
Please read it - he explains DI much better than I can :)
Also, there is a very good video by Miško Hevery, "The Clean Code Talks - Don't Look For Things!". You will be much wiser after watching it :)
https://www.youtube.com/watch?v=RlfLCWKxHJ0
I would add that creating the object inside your service hides the scope of your service.
Requiring it as a hard dependency makes it explicit that your service needs an instance of such an object in order to work. By making it part of the contract, the dependency is no longer an implementation detail.
That also opens the door to flexibility: you may typehint against e.g. EngineInterface instead of a concrete implementation, meaning that you don't care which implementation is passed to your service but rely on the contract imposed by the interface (imagine a mailer that sends mail in production, but is a no-op in testing).
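A minimal sketch of that mailer idea (all of these names are hypothetical, not from any particular framework):
interface Mailer
{
    public function send($to, $body);
}

// Real implementation used in production.
class SmtpMailer implements Mailer
{
    public function send($to, $body)
    {
        // actually deliver the message...
    }
}

// No-op implementation used in tests.
class NullMailer implements Mailer
{
    public function send($to, $body)
    {
        // do nothing
    }
}

class SignupService
{
    private $mailer;

    // The service depends only on the contract, not on a concrete mailer.
    public function __construct(Mailer $mailer)
    {
        $this->mailer = $mailer;
    }
}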
Autofac provides the OnActivated() method, which makes it possible to run any action after a registered type is constructed.
Is it possible to use a similar method in MvvmCross? Do you have any ideas on how to provide the same functionality?
It usually pays to understand the fundamentals of Dependency Injection (DI) instead of relying on particular DI Container features. Ask yourself the question: If I didn't have a DI Container, then how would I solve my problem?
Ironically, it turns out that things are usually much simpler with Pure DI.
If you didn't have a DI Container, then how would you run an action after construction of an object?
The easiest solution is to provide a factory that creates and initialises the object. Assuming the same API and requirements as the Autofac documentation implies, you could do this:
public static Dependency2 CreateDependency2(ITestOutputHelper output, Dependency1 dependency)
{
var d2 = new Dependency2(output, dependency);
d2.Initialize();
return d2;
}
If you still must use another DI Container, most of them enable you to register a factory like the above against the type. I don't know how MvvmCross works, but I'd be surprised if this wasn't possible. If it isn't, you can implement an Adapter over your actual dependency. The Adapter would take care of running the action on the adapted object.
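A rough sketch of that Adapter idea, reusing Dependency2 and Initialize() from the example above (IDependency2 and DoWork() are assumptions standing in for whatever contract your consumers actually depend on):
// Hypothetical interface that consumers of Dependency2 depend on.
public interface IDependency2
{
    void DoWork();
}

// Adapter: wraps the real object and runs the action once, at construction.
public class InitializedDependency2 : IDependency2
{
    private readonly Dependency2 inner;

    public InitializedDependency2(Dependency2 inner)
    {
        this.inner = inner;
        this.inner.Initialize(); // the "OnActivated" action
    }

    public void DoWork() => inner.DoWork(); // forward the remaining members likewise
}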
FWIW, if an object isn't in a valid state before you've run some action on it, then encapsulation is broken. The fundamental characteristic of encapsulation is that objects protect their invariants so that they can never be in invalid states. If possible, consider a better API design.
Using https://insight.sensiolabs.com to scan / check my code, I get the following warning:
The Doctrine Entity Manager should not be passed as an argument.
Why is it such a bad practice to inject the Entity Manager in a service? What is a solution?
With respect to the comment that repositories cannot persist entities:
class MyRepository extends EntityRepository
{
public function persist($entity) { return $this->_em->persist($entity); }
public function flush() { return $this->_em->flush(); }
}
I like to make my repositories follow more or less a "standard" repository interface. So I do:
interface MyRepositoryInterface
{
function save($entity);
function commit();
}
class MyRepository extends EntityRepository implements MyRepositoryInterface
{
public function save($entity) { return $this->_em->persist($entity); }
public function commit() { return $this->_em->flush(); }
}
This allows me to define and inject non-doctrine repositories.
You might object to having to add these helper functions to every repository. But I find that a bit of copy/paste is worth it. Traits might help here as well.
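For example, a small trait along these lines (a sketch; the trait name is mine) lets each repository pull the helpers in without the copy/paste:
trait MutableRepository
{
    public function save($entity) { return $this->_em->persist($entity); }
    public function commit() { return $this->_em->flush(); }
}

class MyRepository extends EntityRepository implements MyRepositoryInterface
{
    use MutableRepository;
}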
The idea is to move away from the whole concept of an entity manager.
I am working on quite a large project currently and have recently started following the approach of repositories that can mutate data. I don't really understand the motivation behind injecting the EntityManager as a dependency, which is as bad as injecting the ServiceManager into any class. It is just bad design that people try to justify. Operations like persist, remove and flush can be abstracted into something like an AbstractMutableRepository that every other repository can inherit from. So far it has been doing quite well and makes the code more readable and easier to unit test!
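A minimal sketch of what such a base class might look like (my own sketch, not code from that project):
abstract class AbstractMutableRepository extends EntityRepository
{
    public function persist($entity)
    {
        $this->_em->persist($entity);
    }

    public function remove($entity)
    {
        $this->_em->remove($entity);
    }

    public function flush()
    {
        $this->_em->flush();
    }
}

class UserRepository extends AbstractMutableRepository
{
    // query methods specific to users...
}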
Show me at least one example of a service that has the EM injected where the unit test for it looks right. To unit test something that has the EM injected is more about testing the implementation than anything else. What happens is that you end up with so many mocks set up that you cannot really call it a decent unit test! It is a code-coverage hitter, nothing more!
I've seen a lot about UnitOfWork and the Repo pattern on the web but still don't have a clear understanding of why and when to use them -- it's somewhat confusing to me.
Considering I can make my repositories testable by using DI through the use of an IoC container, as suggested in the post What are best practices for managing DataContext, I'm considering passing in a context as a dependency on my repository constructor and then disposing of it like so:
public interface ICustomObjectContext : IDisposable {}
public interface IRepository<T> {} // Not sure if I need to reference IDisposable here
public interface IMyRepository : IRepository<MyRepository> {}
public class MyRepository : IMyRepository
{
private readonly ICustomObjectContext _customObjectContext;
public MyRepository(ICustomObjectContext customObjectContext)
{
_customObjectContext = customObjectContext;
}
public void Dispose()
{
if (_customObjectContext != null)
{
_customObjectContext.Dispose();
}
}
...
}
My current understanding of using UnitOfWork with the Repository pattern is that it exists to perform an operation across multiple repositories -- this behavior seems to contradict what @Ladislav Mrnka recommends for web applications:
For web applications use single context per request. For web services use single context per call. In WinForms or WPF application use single context per form or per presenter. There can be some special requirements which will not allow to use this approach but in most situation this is enough.
See the full answer here
If I understand him correctly, the DataContext should be short-lived and used on a per-request or per-presenter basis (I've seen this in other posts as well). In this case it would be appropriate for the repo to perform operations against the context, since the scope is limited to the component using it -- right?
My repos are registered in the IoC as transient, so I should get a new one with each request. If that's correct, then I should be getting a new context (with the code above) with each request as well and then disposing of it -- that said... why would I use the UnitOfWork pattern with the Repository pattern if I'm following the convention above?
As far as I understand the Unit of Work pattern doesn't necessarily cover multiple contexts. It just encapsulates a single operation or -- well -- unit of work, similar to a transaction.
Creating your context basically starts a Unit of Work; calling DbContext.SaveChanges() finishes it.
I'd even go so far as to say that in its current implementation Entity Framework's DbContext / ObjectContext resembles both the repository pattern and the unit of work pattern.
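In code, that unit-of-work boundary with plain EF looks something like this (a sketch; MyDbContext, Customers and Order are placeholders):
public void PlaceOrder(int customerId, Order order)
{
    using (var context = new MyDbContext()) // creating the context starts the unit of work
    {
        var customer = context.Customers.Find(customerId);
        customer.Orders.Add(order);

        context.SaveChanges(); // finishes it: all changes are committed together
    }
}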
I would use a simplified UoW if I wanted to push the context's SaveChanges away from the repositories when they share the same instance of the context across one web request.
I imagine you have something like a Save() method on your repositories that looks similar to _customObjectContext.SaveChanges(). Now let's assume you have two methods containing business logic and using repos to persist changes to the DB. For the sake of simplicity we'll call them MethodA and MethodB, both of them containing a fair amount of logic for performing some activities. MethodA is used separately in the system, but it is also called by MethodB for some reason. What happens is that MethodA saves changes on some repository, and since we are still in the same request, changes made in MethodB before it called MethodA will also be saved whether we want it or not. So in such a case we unintentionally break the transaction inside MethodB and make the code harder to understand.
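A sketch of the problem (IOrderRepository and Order are placeholders; Save() calls SaveChanges() on the shared context):
public class OrderService
{
    private readonly IOrderRepository _repo; // shares one context per request

    public OrderService(IOrderRepository repo) { _repo = repo; }

    public void MethodA()
    {
        _repo.Add(new Order());
        _repo.Save(); // -> context.SaveChanges()
    }

    public void MethodB()
    {
        _repo.Add(new Order()); // tracked, but not yet saved

        MethodA(); // its Save() also commits MethodB's pending order...

        // ...so MethodB's intended "transaction" was broken without us noticing.
    }
}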
I hope I described this clearly enough; it wasn't easy. Anyway, other than that I cannot see why a UoW would be helpful in your scenario. As Dennis Traub pointed out quite correctly, ObjectContext and DbContext are in fact implementations of a UoW, so you'd probably be reinventing the wheel by implementing it on your own.
The ObjectContext/DbContext is an implementation of the UnitOfWork pattern. It encapsulates several operations and makes sure they are submitted in one transaction to the database.
The only thing you are doing is wrapping it in your own class to make sure you're not depending on a specific implementation in the rest of your code.
In your case, the problem lies in the fact that your Context shouldn't be disposed of by your Repository. The Repository is not the one that instantiates the Context, so it shouldn't dispose of it either. The UnitOfWork that encapsulates multiple repositories is responsible for creating and disposing the Context and you will call a Save method on your UnitOfWork.
Code can look like this:
using (IUnitOfWork unitOfWork = new UnitOfWork())
{
PersonRepository personRepository = new PersonRepository(unitOfWork);
var person = personRepository.FindById(personId);
ProductRepository productRepository = new ProductRepository(unitOfWork);
var product = productRepository.FindById(productId);
person.CreateOrder(orderId, product);
unitOfWork.Save();
}
Can someone explain to me what the purpose of the Unity Application Block is? I tried looking through the documentation but it's all very abstract.
What are some practical uses for the Unity block?
Inversion of Control
A quick summation (a lot more reading is available on this topic, and I highly suggest reading more)...
Microsoft's Unity, from the Enterprise Patterns and Practices team, is an Inversion of Control container project, or IoC for short. Just like Castle Windsor, StructureMap, etc. This type of development is also referred to in layman's terms as loosely coupling your components.
IoC includes a pattern for Dependency Injection of your objects, in which you rely on an external component to wire up the dependencies within your objects.
For example, instead of accessing static managers (which are nearly impossible to unit test), you create an object that relies on an external dependency to act upon. Let's take a Post service in which you want to access the DB to get a Post.
public class PostService : IPostService
{
private IPostRepository _postRepo;
public PostService(IPostRepository postRepo)
{
_postRepo = postRepo;
}
public IList<Post> GetPosts()
{
return _postRepo.GetAllPosts().ToList();
}
}
This PostService object now has an external dependency on IPostRepository. Notice how no concretes and no static manager classes are used? Instead, you have a loose-coupling of a simple Interface - which gives you the power of wiring up all different kinds of concrete classes that implement IPostRepository.
Microsoft Unity's purpose is to wire up that IPostRepository for you, automatically. So you never have to worry about doing:
// you never have to do this with Unity
IPostRepository repo = new PostRepository();
IPostService service = new PostService(repo); // dependency injection
IList<Post> posts = service.GetPosts();
The above shows where you have to instantiate two concrete classes, PostRepository() and PostService(). That tightly couples your application to demanding/requiring those exact instances, and makes it very difficult to unit test.
Instead, you would use Unity in your end point (the controller in MVC, or the code-behind in ASPX pages):
IUnityContainer ioc = new UnityContainer();
IPostService postService = ioc.Resolve<IPostService>();
IList<Post> posts = postService.GetPosts();
Notice that there are no concretes used in this example (except UnityContainer and Post, obviously)! No concretes of the services, and no repository. That is loose coupling at its finest.
Here's the real kicker...
Unity (or any IoC container framework out there!) will inspect IPostService for any dependencies. It will see that it wants (depends on) an instance of IPostRepository. So, Unity will go into its object map and look for the first object that implements IPostRepository that was registered with the container, and return it (i.e. a SqlPostRepository instance). That is the real power behind IoC frameworks - the power to inspect services and wire up any of their dependencies automatically.
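The registration that makes this work looks roughly like this (a sketch; SqlPostRepository is a hypothetical concrete repository):
IUnityContainer ioc = new UnityContainer();

// Tell the container which concretes satisfy which interfaces.
ioc.RegisterType<IPostRepository, SqlPostRepository>();
ioc.RegisterType<IPostService, PostService>();

// Resolve<IPostService>() now builds a PostService and injects
// a SqlPostRepository into its constructor automatically.
IPostService postService = ioc.Resolve<IPostService>();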
I need to finish my blog post about the comparisons of Unity vs Castle vs StructureMap. I actually prefer Castle Windsor personally, due to its configuration-file options and extensibility points.
The Unity Application Block is used for dependency injection. I think the best simple definition for DI is from this question:
When you go and get things out of the refrigerator for yourself, you can cause problems. You might leave the door open, you might get something Mommy or Daddy doesn't want you to have. You might even be looking for something we don't even have or which has expired.
What you should be doing is stating a need, "I need something to drink with lunch," and then we will make sure you have something when you sit down to eat.
So, for example:
IUnityContainer container = new UnityContainer();
container.RegisterType<IDrink, Lemonade>(); // register which drink satisfies IDrink
Lunch lunch = container.Resolve<Lunch>();
Console.WriteLine(lunch.Drink);
This outputs "Lemonade" because we registered Lemonade for IDrink and defined Drink as a dependency of Lunch.
public class Lunch
{
    [Dependency]
    public IDrink Drink { get; set; }
}
As far as practicality goes, dependency injection is really great when you have dependencies on other objects and you don't want the developer to have to set them manually. It is that simple. It is also great for mocking. The most common example I can think of, and one I use, is mocking a data layer. Sometimes the database isn't ready, so instead of stopping development I have a fake layer that returns fake data. The data object is accessed via DI and can be configured to return objects which access either the fake layer or the real layer. In this example I am almost using the DI container as a configurable factory class. For more on Unity, MSDN has some great resources: http://msdn.microsoft.com/en-us/library/cc468366.aspx.
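A sketch of that configurable-factory idea (the type names and the config key are made up):
IUnityContainer container = new UnityContainer();

// Swap the data layer based on configuration: fake while the DB isn't ready,
// real once it is. Nothing else in the application has to change.
bool useFakeData = bool.Parse(ConfigurationManager.AppSettings["UseFakeData"]);

if (useFakeData)
    container.RegisterType<IDataLayer, FakeDataLayer>();
else
    container.RegisterType<IDataLayer, SqlDataLayer>();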
Other good DI frameworks include Spring.NET and Ninject.
What's the best practice for creating test persistence layers when doing an ASP.NET site (e.g. an ASP.NET MVC site)?
Many examples I've seen use Moq (or another mocking framework) in the unit test project, but I want to, like... mock out my persistence layer so that my website shows data and stuff, but it's not coming from a database. I want to do that last. All the mocking stuff I've seen only exists in unit tests.
What practices do people follow when they want to (stub?) fake out a persistence layer for quick and fast development? I use dependency injection to handle it and have some hard-coded results for my persistence layer (which is really manual and boring).
What are other people doing? Examples and links would be awesome :)
UPDATE
Just a little update: so far I'm getting a fair bit of mileage out of having a fake repository and a SQL repository, where each class implements an interface. Then, using DI (I'm using StructureMap), I can switch between my fake repository and the SQL repository. So far, it's working well :)
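In case it helps anyone, the wiring for that switch looks roughly like this in StructureMap (a sketch using its For/Use registration style; the repository names are placeholders):
var container = new Container(x =>
{
    // Flip this one registration to switch the whole site
    // between fake and SQL-backed data.
    x.For<ICatalogRepository>().Use<FakeCatalogRepository>();
    // x.For<ICatalogRepository>().Use<SqlCatalogRepository>();
});

var repo = container.GetInstance<ICatalogRepository>();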
(also scary to think that I asked this question nearly 11 months ago, from when I'm editing this, right now!)
Assuming you're using the Repository pattern from Rob Conery's MVC Store Front:
http://blog.wekeroad.com/mvc-storefront/mvc-storefront-part-1/
I followed Rob Conery's tutorial but ran into the same want as you. The best thing to do is move the mock repositories you've created into a separate project called Mocks; then you can swap them out pretty easily with the real ones when you instantiate your service. If you're feeling adventurous, you could create a factory that takes a value from the config file to instantiate either a mock or a real repository,
e.g.
public static ICatalogRepository GetCatalogRepository(bool useMock)
{
if(useMock)
return new FakeCatalogRepository();
else
return new SqlCatalogRepository();
}
or use a dependency injection framework :)
container.Resolve<ICatalogRepository>();
Good luck!
EDIT: In response to your comments, it sounds like you want to use a list and LINQ to emulate a DB's operations, e.g. GetProducts, StoreProduct. I've done this before. Here's an example:
public class Product
{
public int Identity { get; set; }
public string Name { get; set; }
public string Description { get; set; }
//etc
}
public class FakeCatalogRepository
{
private List<Product> _fakes;
public FakeCatalogRepository()
{
_fakes = new List<Product>();
//Set up some initial fake data
for(int i=0; i < 5; i++)
{
Product p = new Product
{
Identity = i,
Name = "product"+i,
Description = "description of product"+i
};
_fakes.Add(p);
}
}
public void StoreProduct(Product p)
{
//Emulate insert/update functionality
_fakes.Add(p);
}
public Product GetProductByIdentity(int id)
{
//Emulate "SELECT * FROM products WHERE id = 1234"
var aProduct = (from p in _fakes.AsQueryable()
where p.Identity == id
select p).SingleOrDefault();
return aProduct;
}
}
Does that make a bit more sense?
Boring or not, I think you're on the right track. I assume you're creating a fakeRepository that is a concrete implementation of your IRepository which in turn is injected into your service layer. This is nice because at some point in the future when you're happy with the shape of your entities and the behavior of your services, controllers, and views, you can then test drive your real Repositories that will use the database to persist those entities. Of course the nature of those tests will be integration tests, but just as important if not more so.
One thing that may be less boring for you when the time comes to create your real repositories: if you use NHibernate for your persistence, you will be able to let NHibernate generate your database after you create the NHibernate maps for your entities, assuming you don't have to use a legacy schema.
For instance, I have the following method that is called by my SetUpFixture to generate my db schema:
public class SchemaBuilder
{
public static void ExportSchema()
{
Configuration configuration = new Configuration();
configuration.Configure();
new SchemaExport(configuration).Create(true, true);
}
}
and my SetUpFixture is as follows:
[SetUpFixture]
public class SetUpFixture
{
[SetUp]
public void SetUp()
{
SchemaBuilder.ExportSchema();
DataLoader.LoadData();
}
}
where DataLoader is responsible for creating all of my seed data and test data using the real repository.
This probably doesn't answer your questions but I hope it serves to reassure you in your approach.
Greg
Although I'm not using Asp.Net or the MVC framework I do have the need to test services without hitting the database. Your question triggered the writing up of a short (ok, maybe not so short) summary of how I do it. Not claiming it's the best or anything, but it works for us. We access data through a repository and when required we plug in an in-memory repository as explained in the post.
http://blogs.microsoft.co.il/blogs/kim/archive/2008/11/14/testable-data-access-with-the-repository-pattern.aspx
I am using a completely in-memory database with SQLite and ActiveRecord. Basically, we delete and re-create the database before every integration test is run, so that the data is always in a known state. The contents of the database are inserted through code. So an example would be like this:
ActiveRecord.Initialize(lots of parameters)
ActiveRecord.DropSchema();
ActiveRecord.CreateSchema();
and then we just add lots of customers or whatever, DDD style:
customerRepository.Save(customer);
Another way to solve this could be using NDbUnit to maintain the state of the database.
I know this question is a bit old, but I've finally come up with an answer :)
Firstly, use RavenDb (Embedded). It's part of the RavenDb document database. It's a fully in-memory database and works perfectly with unit tests :) I've done it with MSTest, NUnit and xUnit.
Secondly, you can use NHibernate with SQLite if you don't want to use RavenDb. Ayende has a post about using this.
I've gone the route of creating tables and data during a setup method in a unit test class, running tests, then doing clean-up during the teardown. Yes, this method works, but if you end up using your unit tests for debugging purposes, invariably you will run the setup, debug something, then stop in the middle without doing the teardown. It's very brittle, and you will probably end up (in the long run) with bad data in your test database and/or unusable unit tests. I personally think it's best to mock the database layer using a mocking framework. I do understand that sometimes it's best to do logic in the database. For those cases you can use a tool like DBFit to write tests for your database layer.