Injecting the Doctrine Entity Manager in services - Bad practice? - symfony

Using https://insight.sensiolabs.com to scan / check my code, I get the following warning:
The Doctrine Entity Manager should not be passed as an argument.
Why is it such a bad practice to inject the Entity Manager in a service? What is a solution?

With respect to the comment that repositories cannot persist entities.
class MyRepository extends EntityRepository
{
    public function persist($entity) { return $this->_em->persist($entity); }
    public function flush() { return $this->_em->flush(); }
}
I like to make my repositories follow more or less a "standard" repository interface. So I do:
interface MyRepositoryInterface
{
    function save($entity);
    function commit();
}
class MyRepository extends EntityRepository implements MyRepositoryInterface
{
    public function save($entity) { return $this->_em->persist($entity); }
    public function commit() { return $this->_em->flush(); }
}
This allows me to define and inject non-doctrine repositories.
You might object to having to add these helper functions to every repository. But I find that a bit of copy/paste is worth it. Traits might help here as well.
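For example, a trait along these lines could hold the helpers (a minimal sketch, assuming the repositories extend Doctrine's EntityRepository so that the protected $_em property is available; the trait name is invented for this example):

trait PersistableRepositoryTrait
{
    // Relies on the $_em property inherited from Doctrine's EntityRepository.
    public function save($entity) { return $this->_em->persist($entity); }
    public function commit() { return $this->_em->flush(); }
}

class MyRepository extends EntityRepository implements MyRepositoryInterface
{
    use PersistableRepositoryTrait;
}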
The idea is to move away from the whole concept of an entity manager.

I am currently working on quite a large project and have recently started following the approach with repositories that can mutate data. I don't really understand the motivation for injecting the EntityManager as a dependency; it is as bad as injecting the ServiceManager into any class. It is just bad design that people try to justify. Operations like persist, remove and flush can be abstracted into something like an AbstractMutableRepository that every other repository inherits from. So far it has been working quite well and makes the code more readable and easier to unit test!
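Such a base class might look roughly like this (only a sketch; the AbstractMutableRepository name comes from the paragraph above, the rest is an assumption):

abstract class AbstractMutableRepository extends EntityRepository
{
    public function persist($entity): void
    {
        $this->_em->persist($entity);
    }

    public function remove($entity): void
    {
        $this->_em->remove($entity);
    }

    public function flush(): void
    {
        $this->_em->flush();
    }
}

// Every concrete repository inherits the mutating helpers.
class UserRepository extends AbstractMutableRepository
{
}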
Show me at least one example of a service that has the EM injected together with a unit test for it that looks correct. Unit testing something that has the EM injected is more about testing the implementation than anything else. What happens is that you end up with so many mocks that you cannot really call it a decent unit test! It boosts code coverage, nothing more!
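To illustrate the contrast, a service that depends on a narrow repository interface needs only a single mock in its test (a hedged PHPUnit sketch; the service, entity and test names are invented for the example):

class RegisterUserHandler
{
    public function __construct(private MyRepositoryInterface $repository) {}

    public function handle(User $user): void
    {
        $this->repository->save($user);
        $this->repository->commit();
    }
}

class RegisterUserHandlerTest extends \PHPUnit\Framework\TestCase
{
    public function testHandlePersistsTheUser(): void
    {
        $repository = $this->createMock(MyRepositoryInterface::class);
        $repository->expects($this->once())->method('save');
        $repository->expects($this->once())->method('commit');

        (new RegisterUserHandler($repository))->handle(new User());
    }
}

Mocking the whole EntityManager for the same test would mean stubbing getRepository(), persist(), flush() and whatever else the service happens to touch.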

Related

Differences between different methods of Symfony service collection

For those of you who are familiar with building the Symfony container, do you know what the differences (if any) are between
A tagged service collector using a compiler pass
A tagged service collector using the supported shortcut
A service locator, especially one that collects services by tag
Specifically, I am wondering whether these methods differ in when the collected services become available during the container build process. I am also wondering about the 'laziness' of each of them.
It can certainly be confusing when trying to understand the differences. Keep in mind that the latter two approaches are fairly new. The documentation has not quite caught up. You might actually consider making a new project and doing some experimenting.
Approach 1 is basically an "old school" style. You have:
class MyCollector {
    private $handlers = [];

    public function addHandler(MyHandler $handler) {
        $this->handlers[] = $handler;
    }
}

# compiler pass
$myCollectorDefinition->addMethodCall('addHandler', [new Reference($handlerServiceId)]);
So basically the container will instantiate MyCollector then explicitly call addHandler for each handler service. In doing so, the handler services will be instantiated unless you do some proxy stuff. So no lazy creation.
The second approach provides a somewhat similar capability but uses an iterable object instead of a plain php array:
class MyCollection {
    private $handlers;

    public function __construct(iterable $handlers) {
        $this->handlers = $handlers;
    }
}

# services.yaml
App\MyCollection:
    arguments:
        - !tagged_iterator my.handler
One nice thing about this approach is that the iterable actually ends up connecting to the container via closures and will only instantiate individual handlers when they are actually accessed. So lazy handler creation. Also, there are some variations on how you can specify the key.
I might point out that typically you auto-tag your individual handlers with:
# services.yaml
services:
    _instanceof:
        App\MyHandlerInterface:
            tags: ['my.handler']
So no compiler pass needed.
The third approach is basically the same as the second except that handler services can be accessed individually by an index. This is useful when you need one out of all the possible services. And of course the service selected is only created when you ask for it.
class MyCollection {
    private $locator;

    public function __construct(ServiceLocator $locator) {
        $this->locator = $locator;
    }

    public function doSomething($handlerKey) {
        /** @var MyHandlerInterface $handler */
        $handler = $this->locator->get($handlerKey);
    }
}

# services.yaml
App\MyCollection:
    arguments: [!tagged_locator { tag: 'app.handler', index_by: 'key' }]
I should point out that in all these cases, the code does not actually know the class of your handler service. Hence the var comment to keep the IDE happy.
There is another approach which I like in which you make your own ServiceLocator and then specify the type of object being located. No need for a var comment. Something like:
class MyHandlerLocator extends ServiceLocator
{
public function get($id) : MyHandlerInterface
{
return parent::get($id);
}
}
The only way I have been able to get this approach to work is with a compiler pass. The details are somewhat outside the scope of the question, but in exchange for a few lines of pass code you get a nice clean custom locator which can also pick up handlers from other bundles.
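A rough sketch of such a pass (the pass class name is an assumption, the tag name follows the earlier examples, and MyHandlerLocator is the custom locator from above):

namespace App\DependencyInjection\Compiler;

use App\MyHandlerLocator;
use Symfony\Component\DependencyInjection\Argument\ServiceClosureArgument;
use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

class MyHandlerLocatorPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container): void
    {
        $handlers = [];
        foreach ($container->findTaggedServiceIds('app.handler') as $id => $tags) {
            // Index by the tag's "key" attribute when present, otherwise by service id.
            $key = $tags[0]['key'] ?? $id;
            // Wrapping the reference in a ServiceClosureArgument keeps the handler lazy:
            // it is only instantiated when the locator's get() is called.
            $handlers[$key] = new ServiceClosureArgument(new Reference($id));
        }

        // MyHandlerLocator extends ServiceLocator, so it accepts the same factory map.
        $container->register(MyHandlerLocator::class, MyHandlerLocator::class)
            ->addArgument($handlers);
    }
}

The pass still has to be registered in your kernel's build() method (or in a bundle) like any other compiler pass.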

Why should object be passed instead of creating them in Dependency Injection?

The basic concept in DI is that dependency objects should be passed in instead of being created inside the dependent object.
The only reasons I could find for this are in this answer:
To hide the construction details of the dependency from the dependent object, making the code less ugly.
To facilitate mocking in Unit Testing.
Are there any other reasons?
If not, can you please explain these reasons more to justify DI?
Let's look at a real-life example. Let's say you have a car. A car needs an engine.
class Car
{
private $engine;
public function __construct()
{
$this->engine = new V6Engine();
}
}
The car has a dependency on the engine. In this case, the car itself needs to construct a new engine!
Does it make sense?
Well.. NO!
Also, the car is coupled to the specific version of the engine.
This makes more sense.
Someone else needs to provide the engine. It could be some engine supplier, an engine factory... It is not the car's job to create the engine!
class Car
{
private $engine;
public function __construct(Engine $engine)
{
$this->engine = $engine;
}
}
interface Engine
{
public function start();
}
class V6Engine implements Engine
{
public function start()
{
echo "vrooom, vrooom V6 cool noise"
}
}
Also, you can easily swap the engine; you are not coupled to a specific implementation. The new engine only needs to be able to start.
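In use it might look like this (a small usage sketch; ElectricEngine is a hypothetical second implementation of Engine):

$car = new Car(new V6Engine());

// Swapping implementations requires no change to Car itself.
$quietCar = new Car(new ElectricEngine());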
Martin Fowler has written a very good article about the inversion of control and dependency injection.
https://martinfowler.com/articles/injection.html
Please read it, because he explains DI much better than I can :)))
Also, there is a very good video by Miško Hevery, "The Clean Code Talks - Don't Look For Things!". You will be much cleverer after watching it :)
https://www.youtube.com/watch?v=RlfLCWKxHJ0
I would add that creating the object inside your service hides the scope of your service.
Requiring it as a hard dependency makes it explicit that your service needs an instance of such an object in order to work. By making it part of the contract, the dependency is no longer an implementation detail.
That also opens up flexibility: you may type-hint against e.g. an EngineInterface instead of a concrete implementation, meaning that you don't care which implementation is passed to your service but rely on the contract imposed by the interface (imagine a mailer that sends real mails in production, but is a no-op in testing).
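The mailer example from that last sentence might look like this (a sketch only; these are plain example classes, not Symfony's Mailer component):

interface MailerInterface
{
    public function send(string $to, string $message): void;
}

class SmtpMailer implements MailerInterface
{
    public function send(string $to, string $message): void
    {
        // Talk to a real SMTP server in production.
    }
}

class NullMailer implements MailerInterface
{
    public function send(string $to, string $message): void
    {
        // Deliberately do nothing; handy in tests.
    }
}

class RegistrationService
{
    public function __construct(private MailerInterface $mailer) {}

    public function register(string $email): void
    {
        // ...create the user...
        $this->mailer->send($email, 'Welcome!');
    }
}

The service only relies on the MailerInterface contract; which implementation it gets is decided by whoever wires it together.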

Doctrine EntityManagerDecorator

I've created a custom decorator for the EntityManager. Now when I call doctrine->getManager() I get my custom manager class, but inside the repository classes I still have the native EntityManager. How can I fix this? Or is there another way to set something inside repository classes from the container?
The decorator calls getRepository() on $wrapped (the EntityManager), and $wrapped then passes $this to the RepositoryFactory, so inside the factory $this == $wrapped == EntityManager.
My solution is:
// In my EntityManagerDecorator subclass:
public function getRepository($className)
{
    $repository = parent::getRepository($className);
    if ($repository instanceof MyAbstractRepository) {
        $repository->setDependency();
    }
    return $repository;
}
There are a couple of approaches:
Copy the static EntityManager::createRepository code to your entity manager class and adjust it accordingly. This is fragile, since any change to the EntityManager code might break your code; you have to keep track of Doctrine updates. However, it can be made to work.
A second approach is to define your repositories as services. You could then inject your entity manager into the repository. It's a bit of a hack, but it avoids cloning the createRepository code (a sketch follows after this list of approaches).
The third approach is the recommended approach. Don't decorate the entity manager. Think carefully about what you are trying to do. In most cases, Doctrine events or a custom base repository class can handle your needs. And it saves you from fooling around with the internals.
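To expand on the second approach above, a repository defined as a service can simply take its collaborators in the constructor; with autowiring and service decoration, the EntityManagerInterface argument resolves to the decorated manager. A minimal sketch with invented class names:

use Doctrine\ORM\EntityManagerInterface;

class MyEntityRepository
{
    public function __construct(
        private EntityManagerInterface $em, // the decorated manager, assuming it decorates the default one
        private MyDependency $dependency    // whatever you wanted to set from the container
    ) {
    }

    public function find(int $id): ?MyEntity
    {
        return $this->em->find(MyEntity::class, $id);
    }
}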
One option would be to override the entity manager service classes or parameters via a compiler pass.

Why would I want to use UnitOfWork with Repository Pattern?

I've seen a lot about UnitOfWork and the Repository Pattern on the web but still don't have a clear understanding of why and when to use them -- it's somewhat confusing to me.
Considering I can make my repositories testable by using DI through an IoC container, as suggested in the post What are best practices for managing DataContext, I'm considering passing a context as a dependency into my repository constructor and then disposing of it like so:
public interface ICustomObjectContext : IDisposable {}
public interface IRepository<T> {} // Not sure if I need to reference IDisposable here
public interface IMyRepository : IRepository<MyRepository> {}
public class MyRepository : IMyRepository
{
private readonly ICustomObjectContext _customObjectContext;
public MyRepository(ICustomObjectContext customObjectContext)
{
_customObjectContext = customObjectContext;
}
public void Dispose()
{
if (_customObjectContext != null)
{
_customObjectContext.Dispose();
}
}
...
}
My current understanding of using UnitOfWork with the Repository Pattern is to perform an operation across multiple repositories -- this behavior seems to contradict what @Ladislav Mrnka recommends for web applications:
For web applications use single context per request. For web services use single context per call. In WinForms or WPF application use single context per form or per presenter. There can be some special requirements which will not allow to use this approach but in most situation this is enough.
See the full answer here
If I understand him correctly, the DataContext should be short-lived and used on a per-request or per-presenter basis (I've seen this in other posts as well). In this case it would be appropriate for the repo to perform operations against the context, since the scope is limited to the component using it -- right?
My repos are registered in the IoC container as transient, so I should get a new one with each request. If that's correct, then I should be getting a new context (with the code above) with each request as well and then disposing of it -- that said... why would I use the UnitOfWork Pattern with the Repository Pattern if I'm following the convention above?
As far as I understand the Unit of Work pattern doesn't necessarily cover multiple contexts. It just encapsulates a single operation or -- well -- unit of work, similar to a transaction.
Creating your context basically starts a Unit of Work; calling DbContext.SaveChanges() finishes it.
I'd even go so far as to say that in its current implementation Entity Framework's DbContext / ObjectContext resembles both the repository pattern and the unit of work pattern.
I would use a simplified UoW if I wanted to move the context's SaveChanges away from the repositories when they share the same context instance across one web request.
I imagine you have something like a Save() method on your repositories that looks similar to _customObjectContext.SaveChanges(). Now let's assume you have two methods containing business logic and using repos to persist changes in the DB. For the sake of simplicity we'll call them MethodA and MethodB, both of them containing a fair amount of logic for performing some activities. MethodA is used separately in the system, but it is also called by MethodB for some reason. What happens is that MethodA saves changes on some repository, and since we are still in the same request, changes made in MethodB before it called MethodA will also be saved regardless of whether we want them to be or not. So in such a case we unintentionally break the transaction inside MethodB and make the code harder to understand.
I hope I described this clearly enough; it wasn't easy. Anyway, other than that I cannot see why UoW would be helpful in your scenario. As Dennis Traub quite correctly pointed out, ObjectContext and DbContext are in fact implementations of a UoW, so you'd probably be reinventing the wheel by implementing it on your own.
The ObjectContext/DbContext is an implementation of the UnitOfWork pattern. It encapsulates several operations and makes sure they are submitted in one transaction to the database.
The only thing you are doing is wrapping it in your own class to make sure you're not depending on a specific implementation in the rest of your code.
In your case, the problem lies in the fact that your Context shouldn't be disposed of by your Repository. The Repository is not the one that instantiates the Context, so it shouldn't dispose of it either. The UnitOfWork that encapsulates multiple repositories is responsible for creating and disposing the Context and you will call a Save method on your UnitOfWork.
Code can look like this:
using (IUnitOfWork unitOfWork = new UnitOfWork())
{
PersonRepository personRepository = new PersonRepository(unitOfWork);
var person = personRepository.FindById(personId);
ProductRepository productRepository = new ProductRepository(unitOfWork);
var product = productRepository.FindById(productId);
person.CreateOrder(orderId, product);
personRepository.Save();
}

Call private method in Flex, Actionscript

I need to test private methods in FlexUnit. Is there any way to do this via reflection using describeType, or does FlexUnit have some built-in facility? I dislike the artificial limitation that I cannot test private functions; it greatly reduces flexibility. For me, testing private functions is good design, so please do not advise me to refactor my code. I do not want to break encapsulation for the sake of unit testing.
I'm 99% certain this isn't possible and I'm intrigued to know why you would want to do this.
You should be unit testing the output of a given class, based on given inputs, regardless of what happens inside the class. You really want to allow someone to be able to change the implementation details so long as it doesn't change the expected outputs (defined by the unit test).
If you test private methods, any changes to the class are going to be tightly coupled to the unit tests. If someone wants to reshuffle the code to improve readability, or make some updates to improve performance, they will have to update the unit tests even though the class still functions as it was originally designed.
I'm sure there are edge cases where testing private methods might be beneficial but I'd expect in the majority of cases it's just not needed. You don't have to break the encapsulation, just test that your method calls give correct outputs... no matter what the code does internally.
Just create a public method called "unitTest" and call all your unit tests within that method. Throw an error when one of them fails and call it from your test framework:
try {
    myobject.unitTest();
} catch (e:Error) {
    // etc.
}
You cannot use describeType for that.
From the Livedocs - flash.utils package:
[...]
Note: describeType() only shows public properties and methods, and will not show
properties and methods that are private, package internal or in custom namespaces.
[...]
When the urge to test a private method is irresistible I just create a testable namespace for the method.
Declare a namespace in a file like this:
package be.xeno.namespaces
{
public namespace testable = "http://www.xeno.be/2015/testable";
}
Then you can use the testable as a custom access modifier for the method you want to test like this:
public class Thing1
{
use namespace testable;
public function Thing1()
{
}
testable function testMe() : void
{
}
}
You can then access that modifier by using the namespace in your tests:
public class Thing2
{
use namespace testable;
public function Thing2()
{
var otherThing : Thing1 = new Thing1();
otherThing.testMe();
}
}
Really though I think this is a hint that you should be splitting your functionality into a separate class.
