Clear Propel cache (instance pool) - Symfony

I need to force a re-read of data from the DB within one PHP execution, using Propel. I already have a somewhat hacky solution: calling init%modelName% on the corresponding classes, but I want something better.
Is there a single call or service config option for that, like killing the whole instance pool?
About the service: we use Symfony2, and we only need to bypass the cache in one specific case, so we could even create a separate environment for that.

You can globally disable instance pooling by calling Propel::disableInstancePooling() (and Propel::enableInstancePooling() to turn it back on).
Otherwise, you can rely on the generated peer classes, which contain methods like clearInstancePool() and clearRelatedInstancePool().
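A minimal sketch of both options, assuming a Foo model (so FooPeer and FooQuery are the generated classes):
// Option 1: turn pooling off globally around the critical section.
Propel::disableInstancePooling();
$fresh = FooQuery::create()->findPk($id); // reads from the DB, not the pool
Propel::enableInstancePooling();
// Option 2: clear the pool for a single model via its generated peer class.
FooPeer::clearInstancePool();
$fresh = FooQuery::create()->findPk($id);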

I needed to update related objects and found out clear%modelName% should be called.
init%modelName% deletes all entries, so related entries could never be read. clear[Related]InstancePool didn't help.
$foo = FooQuery::create()->findOne();
// meanwhile somebody else updated the DB and Propel doesn't know about that:
mysql_query("INSERT INTO `foo_bars` (`foo_id`, `bar_id`) VALUES (".$foo->getId().", 1)");
// here we need some magic to force Propel to re-read the relation table:
$foo->clearFooBars();
// now the entries will be re-read:
$foo->getFooBars();

Related

Doctrine find() and querybuilder() return different result in PHPUnit test

With Doctrine and Symfony in my PHPUnit test method:
// Change username for user #1 (Sheriff Woody to Chuck Norris)
$form = $crawler->selectButton('Update')->form([
    'user[username]' => 'Chuck Norris',
]);
$client->submit($form);
// Find user #1
$user = $em->getRepository(User::class)->find(1);
dump($user); // Username = "Sheriff Woody"
$user = $em->createQueryBuilder()
    ->from(User::class, 'user')
    ->andWhere('user.id = :userId')
    ->setParameter('userId', 1)
    ->select('user')
    ->getQuery()
    ->getOneOrNullResult();
dump($user); // Username = "Chuck Norris"
Why do my two methods of fetching user #1 return different results?
diagnosis / explanation
I assume* you already created the User object you're editing via the crawler earlier in that function and checked that it exists. That makes it a managed entity.
It is in the nature of data not to sync itself magically with the database; some automatism must be in place, or some method must be executed, to sync it.
The find() method will always try to use the cache (unless it is explicitly turned off; see also the side note). The query builder won't if you explicitly call getResult() (or one of its varieties), since you explicitly want a query to be executed. Executing a different query might lead to the cache not being hit, producing the current result (it should update the first user object though ...). [updated, due to comment from Arno Hilke]
((( side note: Keeping objects in sync is hard. It's mainly about having consistency in the database, but all of ACID is wanted. Any process talking to the database should assume that it is only working with the state at the moment of its first query and that it is the only user of the database - unless additional constraints must be met and inconsistent reads can occur, in which case isolation levels should be raised (see also: transactions or, more precisely, isolation). So automatic syncing is usually not wanted, and Doctrine uses certain assumptions for performance gains (mainly: isolation/locking is optimistic). However, in your particular case, all of those things are of no actual concern... since you actually want a non-repeatable read. )))
(* otherwise, the behavior you're seeing would be really unexpected)
solution
One easy solution would be to actively and explicitly sync the data from the database, either by calling $em->refresh($user), or - before fetching the user again - by calling $em->clear(), which will detach all entities (clearing the cache, which might have a noticeable performance impact) and allow you to call find() again with the proper results being returned.
Please note that detaching entities means that any object previously returned from the entity manager should be discarded and fetched again (not via refresh).
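A minimal sketch of both options, reusing the $em and User from the question:
// Option 1: re-sync just this one entity from the database.
$em->refresh($user);
dump($user); // Username = "Chuck Norris"
// Option 2: detach everything, then fetch again.
$em->clear();
$user = $em->getRepository(User::class)->find(1); // hits the DB again
dump($user); // Username = "Chuck Norris"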
alternate solution 1 - everything is requests
instead of checking the database, you could make a different request to a page that displays the user's name, and check that it has changed.
alternate solution 2 - using only one entity manager
using only one entity manager (that is: sharing the entity manager / database in the unit test with the server handling the request) may be a reasonable solution, but it comes with its own set of problems: mainly, omitted commits and flushes may avoid detection.
alternate solution 3 - using multiple entity managers
using one entity manager to set up the test: since the server is using a new entity manager to perform its work, you should theoretically - to do this actually properly - create yet another entity manager to check the server's behavior.
comment: alternate solutions 1, 2 and 3 would work even with the highest isolation level; the initial solution probably wouldn't.

Register callback in Autofac and build container again in the callback

I have a dotnet core application.
My Startup.cs registers types/implementations in Autofac.
One of my registrations needs access to a service that was registered just before it.
var containerBuilder = new ContainerBuilder();
containerBuilder.RegisterSettingsReaders(); // this makes available an ISettingsReader<string> that I can use to read my appsettings.json
containerBuilder.RegisterMyInfrastructureService(options =>
{
    options.Username = "foo"; // this should come from appsettings
});
containerBuilder.Populate(services);
var applicationContainer = containerBuilder.Build();
The dilemma is that by the time I have to call .RegisterMyInfrastructureService, I need the ISettingsReader<string> that was registered just before, but the Autofac container hasn't been built yet.
I was reading about registering with callback to execute something after the autofac container has been built. So I could do something like this:
builder.RegisterBuildCallback(c =>
{
    var stringReader = c.Resolve<ISettingsReader<string>>();
    var usernameValue = stringReader.GetValue("Username");
    // now I have my username "foo", but I want to continue registering things! Like the following:
    containerBuilder.RegisterMyInfrastructureService(options =>
    {
        options.Username = usernameValue;
    });
    // now what? again build?
});
but the problem is that I want to use the resolved service not to do something like starting a service, but to continue registering things that require the settings I am now able to provide.
Can I simply call again builder.Build() at the end of my callback so that the container is simply rebuilt without any issue? This seems a bit strange because the builder was already built (that's why the callback was executed).
What's the best way to deal with this dilemma with autofac?
UPDATE 1: I read that things like builder.Update() are now obsolete because containers should be immutable. Which confirms my suspicion that building a container, adding more registrations and building it again is not good practice.
In other words, I can understand that a register build callback should not be used to register additional things. But then the question remains: how should I deal with these issues?
This discussion issue explains a lot, including ways to work around having to update the container. I'll summarize here, but there is a lot of information in that issue that doesn't make sense to try and replicate here.
Be familiar with all the ways you can register components and pass parameters. Don't forget about things like resolved parameters, modules that can dynamically put parameters in place, and so on.
Lambda registrations solve almost every one of these issues we've seen. If you need to register something that provides configuration and then, later, use that configuration as part of a different registration - lambdas will be huge.
Consider intermediate interfaces, like creating an IUsernameProvider that is backed by ISettingsReader<string>. The IUsernameProvider could be the lambda (resolve some settings, read a particular one, etc.) and the downstream components could take an IUsernameProvider directly (see the sketch below).
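A rough sketch of that combination, assuming the ISettingsReader<string> from the question; IUsernameProvider, UsernameProvider, MyInfrastructureService and IMyInfrastructureService are hypothetical names:
// hypothetical supporting types (e.g. in their own files):
public interface IUsernameProvider
{
    string Username { get; }
}
public class UsernameProvider : IUsernameProvider
{
    public UsernameProvider(string username) => Username = username;
    public string Username { get; }
}
// registration, e.g. in Startup:
var builder = new ContainerBuilder();
builder.RegisterSettingsReaders();
// Lambda registration: the settings reader is resolved lazily, at resolve
// time, when the container already exists - no build/rebuild needed.
builder.Register(c => new UsernameProvider(
        c.Resolve<ISettingsReader<string>>().GetValue("Username")))
    .As<IUsernameProvider>()
    .SingleInstance();
// Downstream components depend on IUsernameProvider instead of the raw value.
builder.Register(c => new MyInfrastructureService(
        c.Resolve<IUsernameProvider>().Username))
    .As<IMyInfrastructureService>();
var container = builder.Build();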
These sorts of questions are hard to answer because there are a lot of ways to work around having to build/rebuild/re-rebuild the container if you take advantage of things like lambdas and parameters - there's no "best practice" because it always depends on your app and your needs.
Me, personally, I will usually start with the lambda approach.

Symfony2, Doctrine2, Entity insert then update at once

A simple thing, but it doesn't work. At the bottom of the script we have:
$oMan = $this->getContainer()->get('doctrine')->getManager();
// add entry of calling
$oLastCall = new CronLastCall();
$oLastCall->setType('key');
$oMan->persist($oLastCall);
$oMan->flush();
We insert into the DB as soon as we create the entry, then do some work that can take a few minutes. Then we call this:
$oLastCall->setDateEnd(new \DateTime('now'));
$oMan->flush();
After this, we exit from the method/action. So by that logic (and the Doctrine2 manual I read), an entity that was created and persisted has become 'managed', and we can simply update it. (I call flush at the end to update this entity, but it is not updating.)
Where is the trouble?
As far as I can tell, there's nothing wrong with the code you've shown. But if the $oLastCall object becomes detached between the first and second code blocks, you have to re-attach (merge) it to the manager so that it detects the changes for the second flush.
Merging can be done in this way:
$oLastCallMerged = $oMan->merge($oLastCall);
$oLastCallMerged->setDateEnd(new \DateTime('now'));
$oMan->flush();
You can also check the state (MANAGED/NEW/DETACHED/REMOVED) of an object using this code:
$oMan->getUnitOfWork()->getEntityState($oLastCall);
If that doesn't help (i.e. detachment of the object isn't your problem), you need to give more info about the context this code runs in and any errors you get. Is this code part of a Console Command or a regular web-app Controller? Do you get any output or errors when running it in 'dev' environment? (Check .../app/logs/dev.log.) Does the $oLastCall object stay in memory while waiting for the stuff that takes some minutes, or do you reload it from somewhere?
Btw, objects don't magically get detached by themselves. They'll only be detached if you load them from a different source than the entity manager (for example, storing them in the session between requests) OR if you explicitly detach them by calling $oMan->detach($entity) or $oMan->clear().
Edit
You can also check if Doctrine detects the change by echoing the changeset using $oMan->getUnitOfWork()->getEntityChangeSet($oLastCall) before and after the change, e.g.:
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
$oLastCall->setDateEnd(new \DateTime('now'));
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
After flushing the manager, all objects will be removed internally from the manager. So if you want to update the object later, you need to call persist again before flushing.

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. That's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document, specifically this gets troublesome when I want to update it. For the real docs I run a RealDocs.update but when doing that on the bogus doc it fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement, and I'm not too fond of modifying the core code.
Right now I'm stuck with creating BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult, since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site) it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection. MyCollection.update() would thus become MyCollection._collection.update(). So, using a simple wrapper, one can pass the usual arguments to an update call to update the collection as usual (which will try to call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to only perform the update in the client collection. Something like this should do it:
DocsUpdateWrapper = function() {
  var args = Array.prototype.slice.call(arguments); // `arguments` is not a real array
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // update only the client-side (minimongo) collection
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // normal update: goes through the server and your allow/deny rules
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too.) (I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this is, imo, that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local or also living on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not (an improved DocsWrapper could check for that bool, so we could even omit passing the 'local' argument), so we know how to update them.
There are some people working on local storage in the browser:
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on it rather than a general one.
You can put the code from this.added into the transform process.
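A rough sketch of that idea, assuming an isLocal flag on the doc to mark bogus documents (the flag and the save helper are hypothetical names):
Docs = new Meteor.Collection('docs', {
  transform: function (doc) {
    // attach an update method that routes to the right place
    doc.save = function (modifier) {
      if (doc.isLocal) {
        Docs._collection.update(doc._id, modifier); // client-only update
      } else {
        Docs.update(doc._id, modifier); // goes through the server
      }
    };
    return doc;
  }
});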
You can also set up a local minimongo collection and insert in the callback:
FoundAgents = new Meteor.Collection(null, {transform: Agent.transformData})
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert(item)
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().

What is the strategy used by JobLockService to enforce synchronization?

What is the strategy used by JobLockService to enforce synchronization? Does it lock the whole repository, or is there another technique?
When I write code such as:
String lockString = jobLockService.getLock(QName.createQName(Prefix, LocalName, Resolver));
LockToken lockToken = new LockToken();
lockToken.set(lockString);
// something going on here, such as creating a node, updating or deleting
// another something processes here
jobLockService.releaseLock(lockString);
As you can see from that code, I use JobLockService. What happens once the lock is acquired? Does it lock the repository at all and prevent any other processes from accessing the repository?
I'm asking about the actual techniques used to enforce the synchronization.
Also, what about the LockToken here? What is the benefit of it?
Thanks in advance, your replies are very highly appreciated.
JobLockService does not put any lock on the actual content stored in the repository, let alone lock the whole repository. After you have successfully called JobLockService.getLock, any thread is free to update whichever node it wants to edit. It is your code that must ensure that blocks that have to execute with controlled concurrency first try to obtain the same lock.
That LockToken object you create seems to be of no use and can be dropped.
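A minimal sketch of that cooperative pattern, assuming the getLock(QName, long) overload of Alfresco's JobLockService (the lock name, class name and time-to-live are illustrative):
import org.alfresco.repo.lock.JobLockService;
import org.alfresco.repo.lock.LockAcquisitionException;
import org.alfresco.service.namespace.NamespaceService;
import org.alfresco.service.namespace.QName;

public class GuardedWork
{
    // every code path that must not run concurrently has to use this same QName
    private static final QName LOCK = QName.createQName(NamespaceService.SYSTEM_MODEL_1_0_URI, "guardedWork");
    private static final long TTL_MS = 30000L; // lock time-to-live

    private JobLockService jobLockService;

    public void run()
    {
        String token = null;
        try
        {
            // throws LockAcquisitionException if another holder already has the lock
            token = jobLockService.getLock(LOCK, TTL_MS);
            // ... critical section: create, update or delete nodes ...
        }
        catch (LockAcquisitionException e)
        {
            // another process holds the lock; skip this run or retry later
        }
        finally
        {
            if (token != null)
            {
                jobLockService.releaseLock(token, LOCK);
            }
        }
    }
}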
