Until now, our Doctrine entities handled cache busting themselves via lifecycle events. The APCu cache entry was deleted based on the entity's id in combination with a cache key constant:
/**
 * @ORM\PostPersist()
 */
public function postPersist(LifecycleEventArgs $event)
{
    apcu_delete(sprintf(self::CACHE_KEY, $event->getEntity()->getId()));
}
This was possible because of the procedural apcu_delete() call, but it will never allow us to upgrade to PSR-6, because PSR-6 uses a CacheItemPool that should be injected as a service.
As we are never going to inject the cache pool into the entities, my guess is that we would have to create an EventSubscriber or EventListener for more than half of the entities we have. This possible overhead frightens me a bit.
Will the subscriber / listener restructuring add a lot of overhead, and is that the right way to go? Should we add one global listener/subscriber for all entities (1..n) that handles all events or would it be better to add one listener/subscriber for every entity (n..m)?
Will the subscriber / listener restructuring add a lot of overhead, and is that the right way to go?
The postPersist method is also registered as a listener on Doctrine's EventManager: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/events.html
Based on this I would guess that there would be no significant overhead.
However, I wouldn't base my decision on this, but instead try it out. You can use the Symfony profiler to see whether any overhead exists and how big it is.
A really cool service I use for this kind of comparison is Blackfire.io.
Should we add one global listener/subscriber for all entities (1..n) that handles all events or would it be better to add one listener/subscriber for every entity (n..m)?
I would create one global listener for all entities. In case you need different caching logic for a particular entity, you can always handle it as a separate case.
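A minimal sketch of such a global subscriber, assuming a PSR-6 pool is injected and that each cacheable entity class defines its own CACHE_KEY constant (the class and constant names here are assumptions):

use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\Event\LifecycleEventArgs;
use Doctrine\ORM\Events;
use Psr\Cache\CacheItemPoolInterface;

class CacheInvalidationSubscriber implements EventSubscriber
{
    private $cachePool;

    public function __construct(CacheItemPoolInterface $cachePool)
    {
        $this->cachePool = $cachePool;
    }

    public function getSubscribedEvents()
    {
        return [Events::postPersist, Events::postUpdate, Events::postRemove];
    }

    public function postPersist(LifecycleEventArgs $args)
    {
        $this->invalidate($args->getEntity());
    }

    public function postUpdate(LifecycleEventArgs $args)
    {
        $this->invalidate($args->getEntity());
    }

    public function postRemove(LifecycleEventArgs $args)
    {
        $this->invalidate($args->getEntity());
    }

    private function invalidate($entity)
    {
        // Only act on entities that opted in via a CACHE_KEY constant.
        if (defined(get_class($entity) . '::CACHE_KEY')) {
            $key = constant(get_class($entity) . '::CACHE_KEY');
            $this->cachePool->deleteItem(sprintf($key, $entity->getId()));
        }
    }
}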
I'm struggling to find a way to call persist() and flush() after the final flush (I mainly want to do it in the postFlush event).
I collect the necessary entities in the onFlush event (with their change sets) and wait until all of the collected entities have been flushed, so that their (auto-incremented) ids are available.
At that point I have an array with all the needed entities, their change sets, and their ids set.
Then I want to create new entities (let's call them "traces") based on fields of the previously collected entities, and persist & flush these "traces" to the database.
But I'm really stuck here: I can't know the entities' ids in the onFlush event, and I can't persist & flush in postFlush, which is when the ids are finally set.
Currently the Doctrine documentation states the following:
postFlush is called at the end of EntityManager#flush(). EntityManager#flush() can NOT be called safely inside its listeners.
And if I dare do this, it ends up in a recursion and php fails with an error.
Which approach may I take here?
I believe you could check whether you are flushing a "traces" entity and only perform your "traces" creation when you aren't. That shouldn't loop.
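For illustration, a rough sketch of that guard in a postFlush listener (Trace and the way entities are collected in onFlush are assumptions):

use Doctrine\ORM\Event\PostFlushEventArgs;

class TraceListener
{
    /** @var array Entities and change sets collected during onFlush. */
    private $pending = [];

    /** @var bool Guard so our own flush doesn't re-trigger trace creation. */
    private $flushing = false;

    public function postFlush(PostFlushEventArgs $args)
    {
        if ($this->flushing || empty($this->pending)) {
            return;
        }

        $em = $args->getEntityManager();
        $this->flushing = true;

        foreach ($this->pending as $item) {
            // Ids are available here, because the original flush has completed.
            $em->persist(new Trace($item['entity']->getId(), $item['changeSet']));
        }
        $this->pending = [];

        $em->flush(); // fires postFlush again, but the guard stops the recursion
        $this->flushing = false;
    }
}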
Also, you might want to look at Symfony's EventDispatcher. You could dispatch your events manually, which might be cleaner.
More details on the "traces" would be helpful; from what I can imagine, it is some kind of changelog or history, so I might suggest EntityAuditBundle. It works pretty well with Doctrine and is not hard to set up; I am using it myself.
I'm running two daemons that query an external service almost every second, 24/7. After every loop, each of them inserts or updates things in the same local database, but they work on different objects of the same entities.
Since they run 24/7, after some tests I decided to clear the entity manager after every loop to avoid having a huge number of managed entities and a lot of memory usage.
So, in both of them, I run something like this after every loop:
$this->entityManager->flush();
// ...
$this->entityManager->clear(MyClass::class);
$this->entityManager->clear(MyOtherClass::class);
// ...
What I want to ask is: if DaemonA clears the entities and DaemonB hasn't flushed the persisted changes yet, what happens? When DaemonA flushes, does it affect in any way the entities in DaemonB? Could some objects get lost? Could some get duplicated? If so, what can I do to avoid this kind of things?
As I said, they work on different objects of the same entities, e.g. DaemonA works on MyOtherClass objects 1, 2, 3 and DaemonB on MyOtherClass objects 4, 5, 6.
Both daemons are Symfony commands constructed like this:
class DaemonA extends Command
{
    private $entityManager;

    public function __construct(EntityManagerInterface $entityManager)
    {
        $this->entityManager = $entityManager;
        parent::__construct();
    }

    // ...
}
That's a lot of questions, so let's go through them step by step.
Before we start, remember how Doctrine works internally: If an entity, or a set of entities, is requested through a query or a repository, Doctrine loads the entity data from the database, creates entities, populates them with data, tracks changes and syncs changes back to the database. Doctrine entities have states, usually they are in a managed state unless you detach them. When you clear the entity manager, entities become detached.
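For illustration, you can observe these states directly (Product::class here is just a stand-in entity):

$entity = $em->find(Product::class, 1); // the loaded entity is MANAGED
var_dump($em->contains($entity));       // true

$em->clear();                           // all managed entities become DETACHED
var_dump($em->contains($entity));       // false
// $entity still holds its data, but changes to it will no longer be flushed.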
Now, to answer your questions:
if DaemonA clears the entities and DaemonB hasn't flushed the persisted changes yet, what happens?
Clearing the entity manager only means that entities become detached and, if they aren’t referenced any more, are garbage-collected (I think). On a database level, this is irrelevant.
When DaemonA flushes, does it affect in any way the entities in DaemonB?
Yes, but not while DaemonB is running and doesn't reload the entities from the database. If DaemonA modifies entities and DaemonB then modifies the same entities without reloading them in between, DaemonB's modifications will be the ones that persist.
Could some get duplicated?
Only if you persist detached entities (they would have a new ID, though). However, persisting detached entities doesn’t make sense anyway.
If so, what can I do to avoid this kind of things?
Locks and transactions!
Every modification of your database that consists of more than one query must be wrapped in a transaction. This avoids inconsistencies if something goes wrong or a concurrent request modifies the data. On the PHP level, the transaction should in turn be wrapped in a try/catch block.
If you are modifying an entity, lock it. Doctrine supports different types of locking; pick the one that suits your scenario best.
The code in one of your daemons might look like this:
use Doctrine\DBAL\LockMode;

try {
    $em->beginTransaction();
    $entity = $em->find($entityClassName, $id);

    // Lock the entity against concurrent reads and writes.
    $em->lock($entity, LockMode::PESSIMISTIC_WRITE);

    // ... modify the entity ...

    $em->flush();
    $em->commit();
} catch (\Exception $e) {
    $em->rollback();
    throw $e;
}
Note that depending on your locking strategy and the general implementation of your system, the daemons may block each other by locking the database, up to the point where your system runs out of resources.
For example, pessimistic write locking is more secure (it makes sure that other processes don’t read the data until the modification is complete), but other processes will have to wait until the lock is released.
Be mindful and do test under heavy-load scenarios!
I'm trying to persist a History entity whenever a Message gets updated. I have too much going on behind the scenes to post all the code here and have it make sense, but I've basically tracked the issue down to the UnitOfWork::commit method. There, the UOW first loops through the entityInsertions and, finding nothing, continues on to the entityUpdates. During that loop the UOW's entityInsertions gets updated, but since the insertion loop has already run, the UOW doesn't pick up that it still needs to persist some entities. Is there any way to force the UOW to "restart" this process? If so, how? I'm using Doctrine 2.4.
Thanks for any help!
This might be the dirtiest solution ever, but what I ended up doing was basically the following...
Create an onFlush event subscriber
Inject the entire container into the subscriber (seeing as injecting only the entity manager will result in a circular reference error)
Loop through the UnitOfWork's scheduledEntityUpdates and scheduledEntityInserts (I wasn't interested in deletes)
Handle each scheduled update or insert which you are interested in (in my case, I marked each entity I was interested in with a LoggableInterface, just to know which entities are loggable)
Handle the relevant object with a handler chain (This was just my own algorithm, yours may not require this. This was set up to handle logging of different LoggableInterface objects in different ways)
Persist the entity (the actual history event) via the entity manager, and do the following:
$classMeta = $this->entityManager->getClassMetadata(get_class($historyEntity));
// Make the UnitOfWork aware of the new entity so it is written out in the current flush:
$this->entityManager->getUnitOfWork()->computeChangeSet($classMeta, $historyEntity);
Profit
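Putting those steps together, a condensed sketch of the subscriber (LoggableInterface and createHistoryEntity() stand in for my marker interface and handler chain):

use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\Event\OnFlushEventArgs;
use Doctrine\ORM\Events;

class HistorySubscriber implements EventSubscriber
{
    public function getSubscribedEvents()
    {
        return [Events::onFlush];
    }

    public function onFlush(OnFlushEventArgs $args)
    {
        $em  = $args->getEntityManager();
        $uow = $em->getUnitOfWork();

        $scheduled = array_merge(
            $uow->getScheduledEntityInsertions(),
            $uow->getScheduledEntityUpdates()
        );

        foreach ($scheduled as $entity) {
            if (!$entity instanceof LoggableInterface) {
                continue;
            }

            $history = $this->createHistoryEntity($entity, $uow->getEntityChangeSet($entity));
            $em->persist($history);

            // Tell the UnitOfWork about the new entity mid-flush:
            $classMeta = $em->getClassMetadata(get_class($history));
            $uow->computeChangeSet($classMeta, $history);
        }
    }

    private function createHistoryEntity($entity, array $changeSet)
    {
        // ... build and return the History entity from the change set ...
    }
}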
Hope this helps somebody!
I have a tree-like structure with a couple of entities: a process is composed of steps and a step may have sub-processes. Let's say I have 2 failure modes: abort and re-do. I have tree traversal logic implemented that cascades the fail signal up and down the tree. In the case of abort, all is well; abort cascades correctly up and down, notifying its parent and its children. In the case of re-do, the same happens, EXCEPT a new process is created to replace the one that failed. Because I'm using the DataMapper pattern, the new object can't save itself, nor is there a way to pass the new object to the EntityManager for persistence, given that entities have no knowledge of persistence or even services in general.
So, if I don't pass the EntityManager to the domain layer, how can I pick up on the creation of new objects before they go out of scope?
Would this be a good case for implementing AOP, such as with the JMSAopBundle? This is something I've read about, but haven't really found a valid use case for.
If I understand your problem correctly (your description seems to be written a bit in a hurry), I would do the following:
Mark your failed nodes and your new nodes with some kind of flag (e.g. a dirty flag)
Have your tree iterator count the number of failed and new nodes
Repeat the tree iteration / re-do process as often as needed, until no more failed or new nodes are left to handle, as sketched below
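A rough sketch of that loop (isDirty() and handleFailure() are hypothetical methods on your node objects):

// Repeat the traversal until a full pass finds no more dirty nodes.
do {
    $dirtyCount = 0;
    foreach ($treeIterator as $node) {
        if ($node->isDirty()) {
            $dirtyCount++;
            $node->handleFailure(); // abort / re-do logic; may mark further nodes dirty
        }
    }
} while ($dirtyCount > 0);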
I just found a contribution from Benjamin Eberlei, regarding business logic changes in the domain layer on a more abstract level: Doctrine and Domain Events
Brief quote and summary from the blog post:
The Domain Event Pattern allows to attach events to entities and dispatch them to event listeners only when the transaction of the entity was successfully executed. This has several benefits over traditional event dispatching approaches:
Puts focus on the behavior in the domain and what changes the domain triggers.
Promotes decoupling in a very simple way
No reference to the event dispatcher and all the listeners required except in the Doctrine UnitOfWork.
No need to use unexplicit Doctrine Lifecycle events that are triggered on all update operations.
Each method requiring action should:
Call a "raise" method with the event name and properties.
The "raise" method should create a new DomainEvent object and set it into an events array stored in the entity in memory.
An event listener should listen to Doctrine lifecycle events (e.g. postInsert), keeping entities in memory that (a) implement events, and (b) have events to process.
This event listener should dispatch a new (custom) event in the preFlush/postFlush callback containing the entity of interest and any relevant information.
A second event listener should listen for these custom events and trigger the logic necessary (e.g. onNewEntityAddedToTree)
I have not implemented this yet, but it sounds like it should accomplish exactly what I'm looking for in a more automated fashion than the method I actually implemented.
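For illustration only, a rough sketch of what the raise pattern could look like (DomainEvent and the method names are assumptions, not the actual code from the blog post):

class DomainEvent
{
    public $name;
    public $properties;

    public function __construct($name, array $properties)
    {
        $this->name = $name;
        $this->properties = $properties;
    }
}

trait RaisesDomainEvents
{
    /** @var DomainEvent[] Raised in memory, dispatched only after a successful flush. */
    private $domainEvents = [];

    protected function raise($eventName, array $properties)
    {
        $this->domainEvents[] = new DomainEvent($eventName, $properties);
    }

    public function popDomainEvents()
    {
        $events = $this->domainEvents;
        $this->domainEvents = [];
        return $events;
    }
}

class Process
{
    use RaisesDomainEvents;

    public function redo()
    {
        $replacement = new self();
        // ... copy the relevant state to the replacement ...
        $this->raise('ProcessReplaced', ['replacement' => $replacement]);
        return $replacement;
    }
}

A listener would then collect popDomainEvents() from the flushed entities in a lifecycle callback and dispatch them after the flush, as described in the steps above.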
I'm using Symfony2 with Doctrine2. I want to achieve the following:
$place = $this->getDoctrine()->getRepository('TETestBundle:Place')->find($id);
That place should contain the place's info (common data + texts) in the user's language (from the session). As I am going to do this hundreds of times, I want the locale to be handled behind the scenes, not passed as a second parameter. So an English user will view the place info in English and a Spanish user in Spanish.
One possibility is to access the locale of the app from an EntityRepository. I know it's done with services and DI but I can't figure it out!
// PlaceRepository
class PlaceRepository extends EntityRepository
{
    public function find($id)
    {
        // get locale somehow
        $locale = $this->get('session')->getLocale();

        // do a query with the locale in session
        return $this->_em->createQuery(...);
    }
}
How would you do it? Could you explain with a bit of detail the steps and new classes I have to create & extend? I plan on releasing this Translation Bundle once it's ready :)
Thanks!
I don't believe that Doctrine is a good approach for accessing session data. There's just too much overhead in the ORM to just pull session data.
Check out the Symfony 2 Cookbook for configuration of PDO-backed sessions.
Rather than setting up a service, I'd consider an approach that uses a Doctrine event listener. Just before each lookup, the listener picks out the correct locale from somewhere (session, config, or any other place you like in the future) and injects it into the query, and like magic, your model doesn't have to know those details. It keeps your model's scope clean.
You don't want your model or Repository crossing over into the sessions directly. What if you decide in the future that you want a command-line tool with that Repository? With all that session cruft in there, you'll have a mess.
Doctrine event listeners are magically delicious. They take some experimentation, but they wind up being a very configurable, out-of-the-way solution to this kind of query manipulation.
UPDATE: It looks like what you'd benefit from most is the Doctrine Translatable Extension. It has done all the work for you in terms of registering listeners, providing hooks for how to pass in the appropriate locale (from wherever you're keeping it), and so on. I've used the Gedmo extensions myself (though not this particular one), and have found them all to be of high quality.
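For reference, wiring the Gedmo translatable listener up manually could look roughly like this (how you obtain the entity manager and the session locale depends on your setup):

use Gedmo\Translatable\TranslatableListener;

$translatable = new TranslatableListener();
$translatable->setDefaultLocale('en');
// Pull the current locale from the session and hand it to the listener;
// subsequent queries will then hydrate translated fields in that locale.
$translatable->setTranslatableLocale($session->getLocale());

$em->getEventManager()->addEventSubscriber($translatable);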