Does every class in Symfony2 need to be a service?

Intro moved to the bottom:
Right now I'm working on a small system that manages orders and products.
I'm trying to refactor chunks of code out of the controller and into business and service classes; here's what I'm trying to pull off:
/src/domain/bundle/Business/
/src/domain/bundle/Services/
So in Business I will have an Order class that does some calculations, and some of these calculations require data from the database (on the fly).
Here's exactly the problem I have:
The controller loads an array of Orders that need processing
The controller sends the orders that need processing to the OrderBusiness class
The OrderBusiness class needs to get the product(s) price from the database
Now I'm stuck.
What I'm attempting to do: I made a ProductsService class that returns the required product and price from the database, but how can I call this class from my OrderBusiness class without defining my OrderBusiness class as a service and injecting it with the ProductsService class?
Intro:
I'm quite sorry if my question seems a little general and ignorant.
I've been using Symfony2 for less than a year now, and there are some things I can't wrap my mind around, even after reading the documentation and a lot of the questions here.

You can do whatever you want but it's often better to stick to Symfony's default way:
Controller gets data from the request, validates it, calls other classes, and renders the response (usually using Twig or JsonResponse::create).
To get data from the database there are repositories. It's good when they return plain old PHP objects (POPO). They are usually managed with Doctrine magic.
To process objects (aggregate, filter, connect to external services, etc.) you can create services. You don't have to suffix them with Service (any class name is OK) or put them in a Services folder.
When you create many simple classes that follow the "single responsibility principle", it's convenient to connect them with dependency injection. Then your classes don't stick to each other too much and it's easy to swap one class for another without changing code in all files; it's also good for testing.
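For the concrete case in the question, here is a minimal sketch of that wiring (the class names come from the question; getPriceFor(), getProductId() and the service ids mentioned below are placeholders I made up, not anything Symfony prescribes):

class OrderBusiness
{
    private $productsService;

    // The container injects ProductsService here, so OrderBusiness never builds it itself.
    public function __construct(ProductsService $productsService)
    {
        $this->productsService = $productsService;
    }

    public function process(array $orders)
    {
        foreach ($orders as $order) {
            $price = $this->productsService->getPriceFor($order->getProductId());
            // ... do the calculations with $price ...
        }
    }
}

With a matching services.yml definition (e.g. an app.order_business service that receives app.products_service as an argument), the controller only asks the container for app.order_business and never touches ProductsService directly.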

Symfony: Dependency Injection
So no, Symfony does not require you to define every class as a service! The service layer is the DependencyInjection component.
You must register a new service in the container to be able to access it from other systems/services/business logic. The performance upside: the service object is created (new SomeService) only once, and the container caches it internally.
If you want a new service instance created each time, you can add scope: prototype to the service definition.
This system works very well when there are many references between services.
Your problem/solution:
According to best practices - there is no good way to do that!
As a workaround you could use a singleton or static services.
But I recommend using the dependency injection pattern for this problem and creating a class for each piece of logic. Reasons:
Single responsibility
Easy testing with PHPSpec/PHPUnit (because each class covers one piece of business logic).
You can inject common logic via the constructor instead of writing $this->dependencyService = new SomeService()!
If you don't want a service to be publicly accessible from the container, but another service references it, you can define the dependency service as private with public: false.
Dynamic services (created via a factory) for any condition (request, user, scope, etc...)
In my case:
As an example from another project of mine (Orders, Products, Variants...)
I have:
ProductRepository - loads products and variants.
PriceCalculator - calculates the price for a product (I load the price from a product property, but you can inject a PriceLoader service to load prices from other storages).
OrderProcessor - processes the order.
Controller:
class OrderProcessingController
{
    private $productRepository;
    private $orderProcessor;

    public function __construct($productRepository, $orderProcessor)
    {
        $this->productRepository = $productRepository;
        $this->orderProcessor = $orderProcessor;
    }

    public function processForProduct($productId)
    {
        $product = $this->productRepository->find($productId);

        if (!$product) {
            // Handle "product not found"
        }

        $this->orderProcessor->processForProduct($product);

        return new Response('some html');
    }
}
Here we only load the product, handle the "not found" case and call the processor. Very simple and easy to test.
Order processor:
class OrderProcessor
{
    private $priceCalculator;
    private $priceLoader;

    public function __construct($priceCalculator, $priceLoader)
    {
        $this->priceCalculator = $priceCalculator;
        $this->priceLoader = $priceLoader;
    }

    public function processForProduct($product)
    {
        $price = $this->priceLoader->loadForProduct($product);
        $price = $this->priceCalculator->calculateForProduct($price, $product);

        // Some processing that creates the $order from $product and $price

        return $order;
    }
}
In this class we load the price for the product, calculate it, create the order and call further processing if necessary. Very simple and easy to test.
P.S.
Right now I'm working on a small system, that manages orders and products.
Could you use a microframework instead? Silex, for example, or the Symfony microframework, and drop the dependency injection layer completely?

Now I'm stuck.. What I'm attempting to do, is I made a ProductsService
class, that returns the required product and price from the database,
Use repository class for business queries. Don't mix application layers. You can also define the repository as a service.
When a class represents a general interface or represents an algorithm (e.g. business logic), define it as a service.
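A minimal sketch of that wiring, assuming the Symfony 2.6+ factory syntax and illustrative ids/class names (none of this is required by Symfony):

# services.yml
services:
    app.repository.product:
        class: AppBundle\Repository\ProductRepository
        factory: ["@doctrine.orm.entity_manager", getRepository]
        arguments: ["AppBundle\\Entity\\Product"]

    app.business.order:
        class: AppBundle\Business\OrderBusiness
        arguments: ["@app.repository.product"]

OrderBusiness then receives the repository through its constructor and knows nothing about Doctrine beyond the repository's public methods.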

Related

Differences between different methods of Symfony service collection

For those of you who are familiar with the building of the Symfony container, do you know what the differences (if any) are between
A tagged service collector using a compiler pass
A tagged service collector using the supported shortcut
A service locator, especially one that collects services by tags
Specifically, I am wondering whether these methods differ in making these collected services available sooner or later in the container build process. I am also wondering about the ‘laziness’ of any of them.
It can certainly be confusing when trying to understand the differences. Keep in mind that the latter two approaches are fairly new. The documentation has not quite caught up. You might actually consider making a new project and doing some experimenting.
Approach 1 is basically an "old school" style. You have:
class MyCollector {
    private $handlers = [];

    public function addHandler(MyHandler $handler) {
        $this->handlers[] = $handler;
    }
}

# compiler pass
$myCollectorDefinition->addMethodCall('addHandler', [new Reference($handlerServiceId)]);
So basically the container will instantiate MyCollector then explicitly call addHandler for each handler service. In doing so, the handler services will be instantiated unless you do some proxy stuff. So no lazy creation.
The second approach provides a somewhat similar capability but uses an iterable object instead of a plain php array:
class MyCollection {
    public function __construct(iterable $handlers) { ... }
}

# services.yaml
App\MyCollection:
    arguments:
        - !tagged_iterator my.handler
One nice thing about this approach is that the iterable actually ends up connecting to the container via closures and will only instantiate individual handlers when they are actually accessed. So lazy handler creation. Also, there are some variations on how you can specify the key.
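As a hedged illustration of that laziness (supports() and handle() are made-up handler methods, not a Symfony API), the collection might walk the injected iterator like this:

public function handle($task) {
    // $this->handlers is the !tagged_iterator argument from the constructor.
    foreach ($this->handlers as $handler) { // each handler service is created only when the iterator yields it
        if ($handler->supports($task)) {
            return $handler->handle($task);
        }
    }

    throw new \RuntimeException('No handler supports the given task.');
}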
I might point out that typically you auto-tag your individual handlers with:
# services.yaml
services:
    _instanceof:
        App\MyHandlerInterface:
            tags: ['my.handler']
So no compiler pass needed.
The third approach is basically the same as the second except that handler services can be accessed individually by an index. This is useful when you need one out of all the possible services. And of course the service selected is only created when you ask for it.
class MyCollection {
    private $locator;

    public function __construct(ServiceLocator $locator) {
        $this->locator = $locator;
    }

    public function doSomething($handlerKey) {
        /** @var MyHandlerInterface $handler */
        $handler = $this->locator->get($handlerKey);
    }
}

# services.yaml
App\MyCollection:
    arguments: [!tagged_locator { tag: 'my.handler', index_by: 'key' }]
I should point out that in all these cases, the code does not actually know the class of your handler service. Hence the var comment to keep the IDE happy.
There is another approach which I like in which you make your own ServiceLocator and then specify the type of object being located. No need for a var comment. Something like:
class MyHandlerLocator extends ServiceLocator
{
    public function get($id) : MyHandlerInterface
    {
        return parent::get($id);
    }
}
The only way I have been able to get this approach to work is a compiler pass. I won't post the code here as it is somewhat outside the scope of the question. But in exchange for a few lines of pass code you get a nice clean custom locator which can also pick up handlers from other bundles.

Where should EntityManager::persist() and EntityManager::flush() be called

I'm developing a medium scale application using Symfony2 and Doctrine2. I'm trying to structure my code according to the SOLID principles as much as possible. Now here is the question:
For creating new entities I use Symfony Forms with proxy objects, i.e. I don't bind the form directly to my entity, but to some other class that will be passed to some service which takes the needed action based on the received data. In other words, the proxy class serves as a DTO for that service, which I will call the Handler.
Now, considering the Handler doesn't have a dependency on the EntityManager, where should I do the calls to EntityManager::persist() and EntityManager::flush()? I am usually comfortable with putting flush in the controller, but I'm not so sure about persist, since the controller shouldn't assume anything about what the Handler does, and maybe Handler::handle() (the method the form data is passed to) does more than just persist a new entity to the database. One idea is to create some interfaces that encapsulate flush and persist and pass them around, acting as wrappers around EntityManager::flush() and EntityManager::persist(), but I'm not so sure about that either, since EntityManager::flush() might create unwanted consequences. So maybe I should just create an interface around persist.
So my question is: where and how should I make the calls to persist and flush in order to get the most SOLID code? Or am I just overcomplicating things in my quest for best practices?
If you have a service that handles tasks on your entities, to me the right way is to inject the EntityManager into your service definition and do the persist and flush operations inside it.
Another way to proceed, if you want to keep that logic separate, is to create an EventSubscriber and raise a custom event from your "entity service" when you're ready to do the persist and flush operations.
My 2 cents:
About flush: as it hits the DB, doing it when needed in your controllers, like you already do, sounds good to me.
About persist: it should be called in your Handler when your entity is in a "ready to be flushed" state. A Persister interface with only a persist method as a dependency of your Handlers, and a DoctrinePersister implementation injected into them, looks OK.
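A minimal sketch of that Persister idea (the interface and implementation names come from the bullet above; the exact signatures are my assumption):

use Doctrine\ORM\EntityManagerInterface;

interface Persister
{
    public function persist($entity);
}

class DoctrinePersister implements Persister
{
    private $entityManager;

    public function __construct(EntityManagerInterface $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    public function persist($entity)
    {
        // Only schedules the entity for insertion; the flush stays in the controller.
        $this->entityManager->persist($entity);
    }
}

The Handler then type-hints Persister only, so it stays unaware of Doctrine.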
Another option here - you can implement a save() method in your entity repository class and do the persistence there. Inject your entity repository as a dependency into your Handler class.
If you don't want to couple your service and business logic to the EntityManager (good job), SOLID provides a perfect solution to separate it from your database logic.
//This class is responsible for business logic.
//It knows nothing about databases
abstract class CancelOrder
{
    //If you need something from the database in your business logic,
    //create a function that returns the object you want.
    //This gets implemented in the inherited class
    abstract protected function getOrderStatusCancelled();

    public function cancel($order)
    {
        $order->setOrderStatus($this->getOrderStatusCancelled());
        $order->setSubmittedTime(new DateTime());
        //and other business logic not involving database operations
    }
}

//This class is responsible for database logic. You can create a new class for any related CRUD operations.
class CancelOrderManager extends CancelOrder
{
    public function __construct($entityManager, $orderStatusRepository)...

    public function getOrderStatusCancelled()
    {
        return $this->orderStatusRepository->findByCode('cancelled');
    }

    public function cancel($order)
    {
        parent::cancel($order);
        $this->entityManager->flush();
    }
}
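Usage then stays trivial from the caller's side (the wiring shown with new is only for illustration; in Symfony the two dependencies would normally come from the service container):

// e.g. in a controller action or a command
$cancelOrderManager = new CancelOrderManager($entityManager, $orderStatusRepository);
$cancelOrderManager->cancel($order); // business logic runs first, then the flush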

Sf2 : using a service inside an entity

I know this has been asked over and over again, and I read the topics, but it's always focused on specific cases and I generally try to understand why it's not best practice to use a service inside an entity.
Given a very simple service:
class Age
{
    private $date1;
    private $date2;
    private $format;

    const ym = "%y years and %m month";
    const ...

    // some DateTime()->diff() methods, checking, formatting the entry formats, returning different period formats, etc.
}
and a simple entity:
class People
{
    private $firstname;
    private $lastname;
    private $birthday;
}
From a controller, I want to do:
$som1 = new People('Paul', 'Smith', '1970-01-01');
$som1->getAge();
Of course I can rewrite the getAge() function inside my entity, it's not long, but I'm very lazy, and as I've already written all the possible DateTime->diff() calls I need in the above service, I don't understand why I shouldn't use them...
NB: my question isn't about how to inject the container into my entity, I can understand why that doesn't make sense, but more about what would be the best practice to avoid rewriting the same function in different entities.
Inheritance seems to be a bad "good idea", as I could use getAge() inside a BlogArticle class, and I doubt that this BlogArticle class should inherit from the same class as a People class...
Hope I was clear, but not sure...
One major confusion for many coders is to think that doctrine entities "are" the model. That is a mistake.
See the edit of this post at the end, incorporating ideas related to CQRS+ES.
Injecting services into your Doctrine entities is a symptom of "trying to do more than storing data" in your entities. When you see that "anti-pattern", most probably you are violating the "Single Responsibility" principle in SOLID programming.
http://en.wikipedia.org/wiki/Anti-pattern
http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29
http://en.wikipedia.org/wiki/Single_responsibility_principle
Symfony is not an MVC framework, it is a VC framework only; it lacks the M part. Doctrine entities (I'll call them entities from now on, see the clarification at the end) are a "data persistence layer", not a "model layer". Symfony has lots of things for views, web controllers, command controllers... but has no help for domain modelling ( http://en.wikipedia.org/wiki/Domain_model ) - even the persistence layer is Doctrine, not Symfony.
Overcoming the problem in SF2
When you "need" services in a data-layer, trigger an antipattern alert. Storage should be only a "put here - get from there" system. Nothing else.
To overcome this problem, you should inject the services into a "logic layer" (Model) and separate it from "pure storage" (the data-persistence layer). Following the single responsibility principle, put the logic on one side, and the getters and setters to MySQL on the other.
The solution is to create the missing Model layer, not present in Symfony2, and make it provide "the logic" of the domain objects, completely separated and decoupled from the data-persistence layer, which knows "how to store" the model into a MySQL database with Doctrine, or into Redis, or simply into a text file.
All those storage systems should be interchangeable and your Model should still expose the very same public methods with absolutely no change to the consumer.
Here's how you do it:
Step 1: Separate the model from the data-persistence
To do so, in your bundle, you can create another directory named Model at the bundle-root level (besides Tests, DependencyInjection and so on), as in this example of a game.
The name Model is not mandatory, Symfony does not say anything about it. You can choose whatever you want.
If your project is simple (say one bundle), you can create that directory inside the same bundle.
If your project is many bundles wide, you could consider
either splitting the model across the different bundles, or
-as in the example image- using a ModelBundle that contains all the "objects" the project needs (no interfaces, no controllers, no commands, just the logic of the game, and its tests). In the example, you see a ModelBundle providing logical concepts like Board, Piece or Tile among many others, structured in directories for clarity.
Particularly for your question
In your example, you could have:
Entity/People.php
Model/People.php
Anything related to "store" should go inside Entity/People.php - Ex: suppose you want to store the birthdate both in a date-time field, as well as in three redundant fields: year, month, day, because of any tricky things related to search or indexing, that are not domain-related (ie not related withe lo 'logics' of a person).
Anything related to the "logics" should go inside Model/People.php - Ex: how to calculate if a person is over the majority of age just now, given a certain birthdate and the country he lives (which will determine the minumum age). As you can see, this has nothing to do on the persistence.
Step 2: Use factories
Then, you must remember that the consumers of the model should never ever create model objects using "new". They should use a factory instead, which will set up the model objects properly (bind them to the proper data-storage layer). The only exception is in unit-testing (we'll see it later). But apart from unit tests, grab this with fire in your brain, and tattoo it with a laser on your retina: never do a 'new' in a controller or a command. Use the factories instead ;)
To do so, you create a service that acts as the "getter" of your model. You create the getter as a factory accessible through a service. See the image:
You can see a BoardManager.php there. It is the factory. It acts as the main getter for anything related to boards. In this case, the BoardManager has methods like the following:
public function createBoardFromScratch( $width, $height )
public function loadBoardFromJson( $document )
public function loadBoardFromTemplate( $boardTemplate )
public function cloneBoard( $referenceBoard )
Then, as you see in the image, in the services.yml you define that manager, and you inject the persistence layer into it. In this case, you inject the ObjectStorageManager into the BoardManager. The ObjectStorageManager is, for this example, able to store and load objects from a database or from a file; while the BoardManager is storage agnostic.
You can also see the ObjectStorageManager in the image, which in turn has @doctrine injected so it is able to access MySQL.
Your managers are the only place where a new is allowed. Never in a controller or command.
Particularly for your question
In your example, you would have a PeopleManager in the model, able to get the people objects as you need.
Also, in the Model you should use the proper singular/plural names, as this is decoupled from your data-persistence layer. It seems you are currently using People to represent a single Person - this may be because you are currently (wrongly) matching the model to the database table name.
So, involved model classes will be:
PeopleManager -> the factory
People -> A collection of persons.
Person -> A single person.
For example (pseudocode! using C++ notation to indicate the return type):
PeopleManager
{
    // Examples of getting single objects:
    Person getPersonById( $personId );      -> Load it from somewhere (mysql, redis, mongo, file...)
    Person clonePerson( $referencePerson ); -> Maybe you need it or not, depending on the nature of the problem your program solves.
    Person createPersonFromScratch( $name, $lastName, $birthDate ); -> Returns a properly initialized person.

    // Examples of getting collections of objects:
    People getPeopleByTown( $townId ); -> Returns a collection of people that live in the given town.
}

People implements ArrayObject
{
    // You could overload assignment, so you can throw an exception if any non-person object is added;
    // that way you can always rely on People containing only Person objects.
}

Person
{
    private $firstname;
    private $lastname;
    private $birthday;
}
So, continuing with your example, when you do...
// **Never ever** do a new from a controller!!!
$som1 = new People('Paul', 'Smith', '1970-01-01');
$som1->getAge();
...you now can mutate to:
// Use factory services instead:
$peopleManager = $this->get( 'myproject.people.manager' );
$som1 = $peopleManager->createPersonFromScratch( 'Paul', 'Smith', '1970-01-01' );
$som1->getAge();
The PeopleManager will do the new for you.
At this point, your variable $som1 of type Person, as it was created by the factory, can be pre-populated with the necessary mechanics to store and save to the persistence layer.
The myproject.people.manager will be defined in your services.yml and will have access to the doctrine either directly, or via a myproject.persistence.manager layer or whatever.
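As a rough sketch (the service ids and class names are just the ones used in this answer, nothing Symfony prescribes), that definition could look like:

# services.yml
services:
    myproject.persistence.manager:
        class: MyProject\ModelBundle\Model\ObjectStorageManager
        arguments: ["@doctrine"]

    myproject.people.manager:
        class: MyProject\ModelBundle\Model\PeopleManager
        arguments: ["@myproject.persistence.manager"]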
Note: This injection of the persistence layer via the manager has several side effects that would sidetrack us from "how to make the model have access to services". See steps 4 and 5 for that.
Step 3: Inject the services you need via the factory.
Now you can inject any services you need into the people.manager
If your model object needs to access that service, you now have 2 choices:
When the factory creates a model object (i.e. when PeopleManager creates a Person), inject the service via either the constructor or a setter.
Proxy the function in the PeopleManager and inject the PeopleManager through the constructor or a setter.
In this first example, we provide the PeopleManager with the service to be consumed by the model. When the people manager is asked for a new model object, it injects the needed service in the new statement, so the model object can access the external service directly.
// Example of injecting the low-level service.
class PeopleManager
{
    private $externalService = null;

    public function __construct( ServiceType $externalService )
    {
        $this->externalService = $externalService;
    }

    public function createPersonFromScratch()
    {
        $externalService = $this->externalService;
        $p = new Person( $externalService );

        return $p;
    }
}

class Person
{
    private $externalService = null;

    public function __construct( ServiceType $externalService )
    {
        $this->externalService = $externalService;
    }

    public function consumeTheService()
    {
        $this->externalService->nativeCall(); // Use the external API.
    }
}

// Using it.
$peopleManager = $this->get( 'myproject.people.manager' );
$person = $peopleManager->createPersonFromScratch();
$person->consumeTheService();
In this second example, we also provide the PeopleManager with the service to be consumed by the model. Nevertheless, when the people manager is asked for a new model object, it injects itself into the object created, so the model object accesses the external service via the manager, which hides the API; if the external service ever changes its API, the manager can do the proper conversions for all the consumers in the model.
// Second example. Using the manager as a proxy.
class PeopleManager
{
    private $externalService = null;

    public function __construct( ServiceType $externalService )
    {
        $this->externalService = $externalService;
    }

    public function createPersonFromScratch()
    {
        $p = new Person( $this );

        return $p;
    }

    public function wrapperCall()
    {
        return $this->externalService->nativeCall();
    }
}

class Person
{
    private $peopleManager = null;

    public function __construct( PeopleManager $peopleManager )
    {
        $this->peopleManager = $peopleManager;
    }

    public function consumeTheService()
    {
        $this->peopleManager->wrapperCall(); // Use the manager to call the external API.
    }
}

// Using it.
$peopleManager = $this->get( 'myproject.people.manager' );
$person = $peopleManager->createPersonFromScratch();
$person->consumeTheService();
Step 4: Throw events for everything
At this point, you can use any service in any model. Seems all is done.
Nevertheless, when you implement it, you will find problems decoupling the model from the entity, if you want a truly SOLID pattern. This also applies to decoupling this model from other parts of the model.
The problem clearly arises at places like "when to do a flush()" or "when to decide if something must be saved or left to be saved later" (especially in long-living PHP processes), as well as problematic changes in case Doctrine changes its API, and things like that.
But it is also true when you want to test a Person without testing its House, yet the House must "monitor" if the Person changes its name so it can change the name on the mailbox. This is especially true for long-living processes.
The solution to this is to use the observer pattern ( http://en.wikipedia.org/wiki/Observer_pattern ) so your model objects throw events nearly for anything and an observer decides to cache data to RAM, to fill data or to store data to the disk.
This strongly enhances the open/closed principle. You should never have to change your model if the thing you change is not domain-related. For example, adding a new way of storing to a new type of database should require zero edits to your model classes.
You can see an example of this in the following image. In it, I highlight a bundle named "TurnBasedBundle" that holds the core functionality for every game that is turn-based, whether it has a board or not. You can see that the bundle only has Model and Tests.
Every game has a ruleset, players, and during the game, the players express the desires of what they want to do.
In the Game object, the instantiators will add the ruleset (poker? chess? tic-tac-toe?). Caution: what if the ruleset I want to load does not exist?
When initializing, someone (maybe the /start controller) will add players. Caution: what if the game is 2-players and I add three?
And during the game, the controller that receives the players' movements will add desires (for example, if playing chess, "the player wants to move the queen to this tile", which may be valid or not).
In the picture you can see those 3 actions under control thanks to the events.
You can observe that the bundle has only Model and Tests.
In the model, we define our 2 objects: Game, and the GameManager, to get instances of Game objects.
We also define Interfaces, like for example the GameObserver, so anyone willing to receive the Game events should be a GameObserver folk.
Then you can see that for any action that modifies the state of the model (for example adding a player), I have 2 events: PRE and POST. See how it works (a short code sketch follows this list):
Someone calls the $game->addPlayer( $player ) method.
As soon as we enter the addPlayer() function, the PRE event is raised.
The observers then can catch this event to decide if a player can be added or not.
All PRE events should come with a cancel passed by reference. So if someone decides this is a game for 2 players and you try to add a 3rd one, the $cancel will be set to true.
Then you are again inside the addPlayer function. You can check if someone wanted to cancel the operation.
Do the operation if allowed (i.e. mutate the $this->state).
After the state has been changed, raise a POST event to indicate the observers that the operation has been completed.
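A minimal sketch of that flow inside the Game model, assuming a hypothetical notifyObservers() helper that forwards the call to every registered GameObserver and accepts $cancel by reference (this is not literal code from any bundle):

public function addPlayer( Player $player )
{
    $cancel = false;
    $this->notifyObservers( 'preAddPlayer', $player, $cancel ); // PRE event: observers may veto.

    if ( $cancel ) {
        return; // e.g. the ruleset decided this is a 2-player game and rejected a 3rd player.
    }

    $this->players[] = $player; // Mutate the internal state.

    $this->notifyObservers( 'postAddPlayer', $player ); // POST event: purely informative.
}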
In the picture you see three, but of course there are a lot more. As a rule of thumb, you will have nearly 2 events per setter, 2 events per method that can modify the state of the model and 1 event for each "unavoidable" action. So if you have 10 methods on a class that operate on it, you can expect to have about 15 or 20 events.
You can easily see this in the typical simple text box of any graphics library of any operating system: typical events will be gotFocus, lostFocus, keyPress, keyDown, keyUp, mouseDown, mouseMove, etc...
Particularly, in your example
The Person will have something like preChangeAge, postChangeAge, preChangeName, postChangeName, preChangeLastName, postChangeLastName, in case you have setters for each of them.
For long-living actions like "person, do walk for 10 seconds" you may have 3: preStartWalking, postStartWalking, postStopWalking (in case a stop after 10 seconds cannot be programmatically prevented).
If you want to simplify, you can have two single preChanged( $what, & $cancel ) and postChanged( $what ) events for everything.
If you never prevent your changes to happen, you can even just have one single event changed() for all and any change to your model. Then your entity will just "copy" the model properties in the entity properties at every change. This is OK for simple classes and projects or for structures you are not going to publish for third-party consumers, and saves some coding. If the model class becomes a core class to your project, spending a bit of time adding all the events list will save you time in the future.
Step 5: Catch the events from the data layer.
It is at this point that your data-layer bundle enters in action!!!
Make your data layer an observer of your model. When the model changes its internal state, make your Entity "copy" that state into the entity's state.
In this case, the MVC acts as expected: the Controller operates on the Model. The consequences of this are still hidden from the controller (as the controller should not have access to Doctrine). The model "broadcasts" the operation made, so anyone interested knows about it, which in turn lets the data-layer know about the model change.
Particularly, in your project
The Model/Person object will have been created by the PeopleManager. When creating it, the PeopleManager, which is a service, and therefore can have other services injected, can have the ObjectStorageManager subsystem handy. So the PeopleManager can get the Entity/People that you reference in your question and add the Entity/People as an observer to Model/Person.
In Entity/People you mainly substitute all the setters with event catchers.
You read your code like this: When the Model/Person changes its LastName, the Entity/People will be notified and will copy the data into its internal structure.
Most probably, you are tempted to inject the entity inside the model, so instead of throwing an event, you call the setters of the Entity.
But with that approach you 'break' the open/closed principle. So if at any given point you want to migrate to MongoDB, you need to replace the "entities" with "documents" in your model. With the observer pattern, this change occurs outside the model, which never knows the nature of the observer beyond the fact that it is a PersonObserver.
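As an illustrative sketch (PersonObserver and its callback are assumed names, not part of Symfony or Doctrine), the Doctrine entity could catch the model's events like this:

// Entity/People.php - persistence side, acting as an observer of Model/Person.
class People implements PersonObserver
{
    private $lastname; // Doctrine-mapped field; the mapping is omitted here.

    public function onPersonChanged( Person $person, $what )
    {
        // Copy the changed model state into the persisted fields.
        if ( $what === 'lastName' ) {
            $this->lastname = $person->getLastName();
        }
    }
}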
Step 6: Unit test everything
Finally, you want to unit test your software. As the pattern I have explained overcomes the anti-pattern that you discovered, you can (and should) unit-test the logics of your model independently of how they are stored.
Following this pattern helps you move towards the SOLID principles, so each "unit of code" is independent of the others. This allows you to create unit tests that test the "logics" of your Model without writing to the database, as you inject a fake data-storage layer as a test double.
Let me use the game example again. I show you in the image the Game test. Assume all games can last several days and the starting datetime is stored in the database. In the example we currently test only whether getStartDate() returns a DateTime object.
There are some arrows in it, that represent the flow.
In this example, from the two injecting strategies I told you, I choose the first one: To inject into the Game model object the services it needs (in this case a BoardManager, PieceManager and ObjectStorageManager) and not to inject the GameManager itself.
First, you invoke phpunit, which will look for the Tests directory, recursively in all the directories, finding classes named XxxTest. Then it will invoke all the methods named testSomething().
But before calling each test method, it calls setUp().
In the setup we will create some test-doubles to avoid "real access" to the database when testing, while correctly testing the logics in our model. In this case a double of my own data layer manager, ObjectStorageManager.
It is assigned to a temporary variable for clarity...
...that is stored in the GameTest instance...
...for later use in the test itself.
The $sut (system under test) variable is then created with a new command, not via a manager. Do you remember that I said tests were an exception? If you use the manager (you still can), it is not a unit test here, it's an integration test, because it tests two classes: the manager and the game. In the new command we fake all the dependencies that the model has (like a board manager and a piece manager). I am hardcoding GameId = 1 here. This relates to data-persistence, see below.
We then may call the system under test (a simple Game model object) to test its internals.
I am hardcoding "Game id = 1" in the new. In this case we are only testing that the returned type is a DateTime object. But in case we want to test also that the date that it gets is the proper one, we can "tune" the ObjectStorageManager (data-persistance layer) mock to return whatever we want in the internal call, so we could test that for example when I request the date to the data-layer for game=1 the date is 1st-jun-2014 and for game=2 the date is 2nd-jun-2014. Then in the testGetStartDate I would create 2 new instances, with Ids 1 and 2 and check the content of the result.
Particularly, in your project
You will have a Test/Model/PersonTest unit test that will be able to play with the logics of the person, and in case it needs a person from the database, you will fake it through the mock.
In case you want to test the storing of the person to the database, it is enough to unit-test that the event is thrown, no matter who listens to it. You can create a fake listener, attach it to the event, and when postChangeAge happens set a flag and do nothing (no real database storage). Then you assert that the flag is set.
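A hedged PHPUnit sketch of that fake-listener idea (FakePersonObserver, attachObserver() and setBirthdate() are illustrative names, reusing the PersonObserver interface sketched in step 5):

class FakePersonObserver implements PersonObserver
{
    public $postChangeAgeFired = false;

    public function onPersonChanged( Person $person, $what )
    {
        if ( $what === 'age' ) {
            $this->postChangeAgeFired = true; // No real database storage here.
        }
    }
}

class PersonTest extends \PHPUnit_Framework_TestCase
{
    public function testChangingTheBirthdateNotifiesObservers()
    {
        $observer = new FakePersonObserver();
        $person   = new Person( 'Paul', 'Smith', '1970-01-01' ); // new is OK here: tests are the exception.
        $person->attachObserver( $observer );

        $person->setBirthdate( '1980-01-01' );

        $this->assertTrue( $observer->postChangeAgeFired );
    }
}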
In short:
Do not confuse logics and data-persistence. Create a Model that has nothing to do with entities, and put all the logics in it.
Never use new to get your models from any consumer. Use factory services instead. Pay special attention to avoiding new in controllers and commands. Exception: the unit test is the only consumer that may use new.
Inject the services you need in the Model via the factory, which in turn receives them from the services.yml configuration file.
Throw events for everything. When I say everything, I mean everything. Just imagine you observe the model. What would you like to know? Add an event for it.
Catch the events from controllers, views, commands and from other parts of the model, but, especially, catch them in the data-storage layer, so you can "copy" the object to the disk without being intrusive to the model.
Unit test your logics without depending on any real database. Attach the real database storage system in production and attach a dummy implementation for your tests.
It seems like a lot of work. But it is not. It is a matter of getting used to it. Just think about the "objects" you need, create them and make the data-layer a "monitor" of your objects. Then your objects are free to run, decoupled. Create the model from a factory, inject any needed services into the model, and leave the data alone.
Edit apr/2016 - Separating Domain from Persistance
All occurrences of the word entity in this answer refer to "Doctrine entities", which is what causes confusion for the majority of coders between the model layer and the persistence layer, which should always be different.
Doctrine is infrastructure, so doctrine is outside the model by definition.
Doctrine has entities. So, by definition, then doctrine entities are also outside the model.
Instead, the increasing popularity of the DDD building blocks makes it necessary to clarify my answer even more, as DDD uses the word Entity within the model too.
Domain entities (not Doctrine entities) are similar to what I refer in this answer to Domain objects.
In fact, there are many types of Domain objects:
Domain entities (different from the Doctrine entities).
Domain value objects (could be thought similar to basic types, with logic).
Domain events (also distinct from those Symfony events and also different from the Doctrine events).
Domain commands (different from those Symfony command line controller-like helpers).
Domain services (different from the Symfony framework services).
etc.
Therefore, take all my explanation as this: when I say "Entities are not model objects" just read "Doctrine entities are not Domain entities".
Edit jun/2019 - CQRS+ES analogy
The ancients already used persistent history methods to record things (for example, placing marks on a stone to register transactions).
For about a decade, the CQRS+ES approach (Command Query Responsibility Segregation + Event Sourcing) has been growing in popularity in programming, bringing the idea that "the history is immutable" to the programs we code, and today many coders think of separating the command side from the query side. If you don't know what I'm talking about, no worries, just skip the next paragraphs.
The growing popularity of CQRS+ES in the last 3 or 4 years makes me consider adding a comment here on how it relates to what I answered 5 years ago:
This answer was thought as 1 single model, not a write-model and a read-model. But I'm happy to see many overlapping ideas.
Think of the PRE events I mention here as the "commands and the write-model". Think of the POST events as the "Event Sourcing part going towards the read-model".
In CQRS you can easily find that "commands can be accepted or not" depending on the internal state. Usually one implements this by throwing exceptions, but there are other alternatives, like answering whether the command was accepted or not.
For example, in a "Train" I can "set it to X speed". But if the state is that the train is on a rail that cannot go faster than 80 km/h, then setting it to 200 should be rejected.
This is ANALOGOUS to the cancel boolean passed by reference where an entity could just "reject" something PRIOR to its state change.
Instead, the POST events do not carry the "cancel" flag and are thrown AFTER the state change happened. This is why you cannot cancel them: they talk about the "state change that actually occurred", and therefore it cannot be cancelled: it already happened.
So...
In my answer of 2014, the "pre" events match with the "Command acceptance" of the CQRS+ES systems (the command can be accepted or rejected), and the "post" events match the "Domain events" of the CQRS+ES systems (it just informs that the change actually already happened, do whatever you want with that information).
You already mentioned a very good point. Instances of class Person are not the only thing that can have an age. BlogArticles can also age along with many other types. If you're using PHP 5.4+ you can utilize traits to add little pieces of functionality instead of having service objects from the container (or maybe you can combine them).
Here is a quick mockup of what you could do to make it very flexible. This is the basic idea:
Have one age calculating trait (Aging)
Have a specific trait which can return the appropriate field ($birthdate, $createdDate, ...)
Use the trait inside your class
Generic
trait Aging {
    public function getAge() {
        return $this->calculate($this->start());
    }

    public function calculate($startDate) { ... }
}
For person
trait AgingPerson {
    use Aging;

    public function start() {
        return $this->birthDate;
    }
}

class Person {
    use AgingPerson;

    private $birthDate = '1999-01-01';
}
For blog article
// Use for articles, pages, news items, ...
trait AgingContent {
    use Aging;

    public function start() {
        return $this->createdDate;
    }
}

class BlogArticle {
    use AgingContent;

    private $createdDate = '2014-01-01';
}
Now you can ask any instance of the above classes for their age.
echo (new Person())->getAge();
echo (new BlogArticle())->getAge();
Finally
If you need type hinting traits won't do you any favors. In that case you will need to provide an interface and let every class that uses the trait implement it (the actual implementation is the trait but the interface enables type hinting).
interface Ageable {
    public function getAge();
}
class Person implements Ageable { ... }
class BlogArticle implements Ageable { ... }
function doSomethingWithAgeable(Ageable $object) { ... }
This may seem like a lot of hassle when in reality it's much easier to maintain and extend this way.
A big part is that there is no easy way to inject dependencies when using the database.
$person = $personRepository->find(1); // How to get the age service injected?
One solution might be to pass the age service as an argument.
$ageCalculator = $container('age_service');
$person = $personRepository->find(1);
$age = $person->calcAge($ageCalculator);
But really, you would probably be better off just adding the age stuff to your Person class. Easier to test and all that.
It sounds like you might have some output formatting going on? That sort of thing should probably be done in twig. getAge should really just return a number.
Likewise, your date of birth really should be a date object and not a string.
You are right, it's generally discouraged. However, there are several approaches how you can extend the functionality of an entity beyond the purpose of a data container. Of course, all of them can be considered (more or less) bad practice … but somehow you gotta do the job, right?
You can indeed create an AbstractEntity super class, from which all other entities inherit. This AbstractEntity would contain helper methods that other entities may need.
You can work with custom Doctrine repositories, if you need an entity context to work with an entity manager and return “more special” results than what the common getters would give you. As you have access to the entity manager in a repository, you can perform all kinds of special queries.
You can write a service that is in charge of the entity/entities in question. Downside: you cannot control that other parts of your code (or other developers) know of this service. Advantage: there's no limit to what you can do, and it's all nicely encapsulated.
You can work with Lifecycle Events/Callbacks.
If you really need to inject services into entities, you could consider setting a static property on the entity and only setting it once in a controller or a dedicated service. Then you don't need to take care of it on each initialization of an object. This could be combined with the AbstractEntity approach (see the sketch after the next paragraph).
As mentioned before, all of these have their advantages and disadvantages. Pick your poison.
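A minimal sketch of that static-property idea combined with the AbstractEntity approach (AgeCalculator and the method names are purely illustrative):

abstract class AbstractEntity
{
    /** @var AgeCalculator|null */
    protected static $ageCalculator;

    // Call this once, e.g. in a controller or a dedicated service:
    // AbstractEntity::setAgeCalculator($this->get('age_service'));
    public static function setAgeCalculator(AgeCalculator $ageCalculator)
    {
        static::$ageCalculator = $ageCalculator;
    }
}

class Person extends AbstractEntity
{
    private $birthDate;

    public function getAge()
    {
        return static::$ageCalculator->calculate($this->birthDate);
    }
}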

How to do optional cross-bundle associations in Symfony 2?

I'm working on a Symfony 2.3 Project that utilizes the Doctrine 2 ORM. As is to be expected functionality is split and grouped into mostly independent bundles to allow for code-reuse in other projects.
I have a UserBundle and a ContactInfoBundle. The contact info is split off because other entities could have contact information associated; however, it is not inconceivable that a system may be built where users do not require said contact information. As such, I'd very much prefer that these two do not share any hard links.
However, creating the association mapping from the User entity to the ContactInfo entity creates a hard dependency on the ContactInfoBundle; as soon as the bundle is disabled, Doctrine throws errors that ContactInfo is not within any of its registered namespaces.
My investigations have uncovered several strategies that are supposed to counter this, but none of them seem fully functional:
Doctrine 2's ResolveTargetEntityListener
This works, as long as the interface is actually replaced at runtime. Because the bundle dependency is supposed to be optional, it could very well be that there is NO concrete implementation available (i.e. contactInfoBundle is not loaded)
If there is no target entity, the entire configuration collapses onto itself because the placeholder object is not an entity (and is not within the /Entity namespace). One could theoretically link them to a Mock entity that doesn't really do anything, but this entity then gets its own table (and it gets queried), opening up a whole new can of worms.
Inverse the relation
For the ContactInfo it makes the most sense for User to be the owning side. Making ContactInfo the owning side successfully sidesteps the optional part of the dependency as long as only two bundles are involved. However, as soon as a third (also optional) bundle desires an (optional) link with ContactInfo, making ContactInfo the owning side creates a hard dependency from ContactInfo on the third bundle.
Making User the owning side being logical is specific to this situation; the issue, however, is universal wherever entity A contains B and C contains B.
Use single-table inheritance
As long as the optional bundles are the only one that interacts with the newly added association, giving each bundle their own User entity that extends UserBundle\Entities\User could work. However having multiple bundles that extend a single entity rapidly causes this to become a bit of a mess. You can never be completely sure what functions are available where, and having controllers somehow respond to bundles being on and/or off (as is supported by Symfony 2's DependencyInjection mechanics) becomes largely impossible.
Any ideas or insights in how to circumvent this problem are welcome. After a couple of days of running into brick walls I'm fresh out of ideas. One would expect Symfony to have some method of doing this, but the documentation only comes up with the ResolveTargetEntityListener, which is sub-optimal.
I have finally managed to rig up a solution to this problem which would be suited for my project. As an introduction, I should say that the bundles in my architecture are laid out "star-like". By that I mean that I have one core or base bundle which serves as the base dependency module and is present in all the projects. All other bundles can rely on it and only it. There are no direct dependencies between my other bundles. I'm quite certain that this proposed solution would work in this case because of the simplicity in the architecture. I should also say that I fear there could be debugging issues involved with this method, but it could be made so that it is easily switched on or off, depending on a configuration setting, for instance.
The basic idea is to rig up my own ResolveTargetEntityListener, which would skip relating the entities if the related entity is missing. This would allow execution to continue if a class bound to an interface is missing. There's probably no need to emphasize the implication of a typo in the configuration - the class won't be found and this can produce a hard-to-debug error. That's why I'd advise turning it off during the development phase and then turning it back on in production. This way, all the possible errors will be pointed out by Doctrine.
Implementation
The implementation consists of reusing the ResolveTargetEntityListener's code and putting some additional code inside the remapAssociation method. This is my final implementation:
<?php

namespace Name\MyBundle\Core;

use Doctrine\ORM\Event\LoadClassMetadataEventArgs;
use Doctrine\ORM\Mapping\ClassMetadata;

class ResolveTargetEntityListener
{
    /**
     * @var array
     */
    private $resolveTargetEntities = array();

    /**
     * Add a target-entity class name to resolve to a new class name.
     *
     * @param string $originalEntity
     * @param string $newEntity
     * @param array  $mapping
     * @return void
     */
    public function addResolveTargetEntity($originalEntity, $newEntity, array $mapping)
    {
        $mapping['targetEntity'] = ltrim($newEntity, "\\");
        $this->resolveTargetEntities[ltrim($originalEntity, "\\")] = $mapping;
    }

    /**
     * Process event and resolve new target entity names.
     *
     * @param LoadClassMetadataEventArgs $args
     * @return void
     */
    public function loadClassMetadata(LoadClassMetadataEventArgs $args)
    {
        $cm = $args->getClassMetadata();

        foreach ($cm->associationMappings as $mapping) {
            if (isset($this->resolveTargetEntities[$mapping['targetEntity']])) {
                $this->remapAssociation($cm, $mapping);
            }
        }
    }

    private function remapAssociation($classMetadata, $mapping)
    {
        $newMapping = $this->resolveTargetEntities[$mapping['targetEntity']];
        $newMapping = array_replace_recursive($mapping, $newMapping);
        $newMapping['fieldName'] = $mapping['fieldName'];

        unset($classMetadata->associationMappings[$mapping['fieldName']]);

        // Silently skip mapping the association if the related entity is missing
        if (class_exists($newMapping['targetEntity']) === false)
        {
            return;
        }

        switch ($mapping['type'])
        {
            case ClassMetadata::MANY_TO_MANY:
                $classMetadata->mapManyToMany($newMapping);
                break;

            case ClassMetadata::MANY_TO_ONE:
                $classMetadata->mapManyToOne($newMapping);
                break;

            case ClassMetadata::ONE_TO_MANY:
                $classMetadata->mapOneToMany($newMapping);
                break;

            case ClassMetadata::ONE_TO_ONE:
                $classMetadata->mapOneToOne($newMapping);
                break;
        }
    }
}
Note the silent return before the switch statement which is used to map the entity relations. If the related entity's class does not exist, the method just returns, rather than executing the faulty mapping and producing the error. This also has the implication of a missing field (if it's not a many-to-many relation). The foreign key in that case will simply be missing from the database, but as it exists in the entity class, all the code is still valid (you won't get a missing method error if you accidentally call the foreign key's getter or setter).
Putting it to use
To be able to use this code, you just have to change one parameter. You should put this updated parameter in a services file that will always be loaded, or some other similar place. The goal is to have it in a place that will always be used, no matter what bundles you are going to use. I've put it in my base bundle's services file:
doctrine.orm.listeners.resolve_target_entity.class: Name\MyBundle\Core\ResolveTargetEntityListener
This will redirect the original ResolveTargetEntityListener to your version. You should also clear and warm your cache after putting it in place, just in case.
Testing
I have done only a couple of simple tests which have proven that this approach might work as expected. I intend to use this method frequently in the next couple of weeks and will be following up on it if the need arises. I also hope to get some useful feedback from other people who decide to give it a go.
You could create loose dependencies between ContactInfo and any other entities by having an extra field in ContactInfo to differentiate entities (e.g. $entityName). Another required field would be $objectId to point to objects of specific entities. So in order to link User with ContactInfo, you don't need any actual relational mappings.
If you want to create a ContactInfo for a $user object, you need to manually instantiate it and simply call setEntityName(get_class($user)) and setObjectId($user->getId()). To retrieve a user's ContactInfo, or that of any object, you can create a generic function that accepts $object. It could simply return ...findBy(array('entityName' => get_class($object), 'objectId' => $object->getId()));
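A brief sketch of that approach (the setters and the generic finder shown here are assumed, following the description above):

// e.g. inside a controller or service, after the User has been persisted and flushed:
$contactInfo = new ContactInfo();
$contactInfo->setEntityName(get_class($user));
$contactInfo->setObjectId($user->getId());
$em->persist($contactInfo);
$em->flush();

// Generic retrieval for any object, given the ContactInfo repository:
function findContactInfoFor($object, $contactInfoRepository)
{
    return $contactInfoRepository->findBy(array(
        'entityName' => get_class($object),
        'objectId'   => $object->getId(),
    ));
}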
With this approach, you could still create a User form with ContactInfo (embed ContactInfo into User). Though after you process the form, you will need to persist User first and flush, and then persist ContactInfo. Of course this is only necessary for newly created User objects, just to get the user id. Put all persist/flush calls in a transaction if you're concerned about data integrity.

Why would I want to use UnitOfWork with Repository Pattern?

I've seen a lot about UnitOfWork and the Repository Pattern on the web but still don't have a clear understanding of why and when to use them -- it's somewhat confusing to me.
Considering I can make my repositories testable by using DI through an IoC container, as suggested in the post What are best practices for managing DataContext, I'm considering passing a context in as a dependency on my repository constructor and then disposing of it like so:
public interface ICustomObjectContext : IDisposable {}
public interface IRepository<T> {} // Not sure if I need to reference IDisposable here
public interface IMyRepository : IRepository<MyRepository> {}

public class MyRepository : IMyRepository
{
    private readonly ICustomObjectContext _customObjectContext;

    public MyRepository(ICustomObjectContext customObjectContext)
    {
        _customObjectContext = customObjectContext;
    }

    public void Dispose()
    {
        if (_customObjectContext != null)
        {
            _customObjectContext.Dispose();
        }
    }

    ...
}
My current understanding of using UnitOfWork with the Repository Pattern is to perform an operation across multiple repositories -- this behavior seems to contradict what @Ladislav Mrnka recommends for web applications:
For web applications use single context per request. For web services use single context per call. In WinForms or WPF application use single context per form or per presenter. There can be some special requirements which will not allow to use this approach but in most situation this is enough.
See the full answer here
If I understand him correctly, the DataContext should be short-lived and used on a per-request or per-presenter basis (I've seen this in other posts as well). In this case it would be appropriate for the repo to perform operations against the context, since the scope is limited to the component using it -- right?
My repos are registered in the IoC as transient, so I should get a new one with each request. If that's correct, then I should be getting a new context (with the code above) with each request as well and then disposing of it -- that said... why would I use the UnitOfWork Pattern with the Repository Pattern if I'm following the convention above?
As far as I understand the Unit of Work pattern doesn't necessarily cover multiple contexts. It just encapsulates a single operation or -- well -- unit of work, similar to a transaction.
Creating your context basically starts a Unit of Work; calling DbContext.SaveChanges() finishes it.
I'd even go so far as to say that in its current implementation Entity Framework's DbContext / ObjectContext resembles both the repository pattern and the unit of work pattern.
I would use a simplified UoW if I wanted to push the context's SaveChanges away from the repositories when they share the same instance of the context across one web request.
I imagine you have something like a Save() method on your repositories that looks similar to _customObjectContext.SaveChanges(). Now let's assume you have two methods containing business logic and using repos to persist changes in the DB. For the sake of simplicity we'll call them MethodA and MethodB, both of them containing a fair amount of logic for performing some activities. MethodA is used separately in the system, but it is also called by MethodB for some reason. What happens is that MethodA saves changes on some repository, and since we are still in the same request, changes made in MethodB before it called MethodA will also be saved, regardless of whether we want that or not. So in such a case we unintentionally break the transaction inside MethodB and make the code harder to understand.
I hope I described this clearly enough; it wasn't easy. Anyway, other than that, I cannot see why a UoW would be helpful in your scenario. As Dennis Traub quite correctly pointed out, ObjectContext and DbContext are in fact an implementation of a UoW, so you'd probably be reinventing the wheel by implementing it on your own.
The ObjectContext/DbContext is an implementation of the UnitOfWork pattern. It encapsulates several operations and makes sure they are submitted in one transaction to the database.
The only thing you are doing is wrapping it in your own class to make sure you're not depending on a specific implementation in the rest of your code.
In your case, the problem lies in the fact that your Context shouldn't be disposed of by your Repository. The Repository is not the one that instantiates the Context, so it shouldn't dispose of it either. The UnitOfWork that encapsulates multiple repositories is responsible for creating and disposing the Context and you will call a Save method on your UnitOfWork.
Code can look like this:
using (IUnitOfWork unitOfWork = new UnitOfWork())
{
PersonRepository personRepository = new PersonRepository(unitOfWork);
var person = personRepository.FindById(personId);
ProductRepository productRepository = new ProductRepository(unitOfWork);
var product= productRepository.FindById(productId);
p.CreateOrder(orderId, product);
personRepository.Save();
}
