How to stop Objectify from returning cached values? - objectify

I'm saving an object with Objectify like this:
Thing th = new Thing();
th.identifier = thingId;
th.name = thingName;
th.json = thingData;
ofy().save().entity(th);
pResponse.setStatus(200);
pResponse.getWriter().println("OK");
I verify using the GAE datastore browser that the value has been updated in the database. I'm running locally. Then I load all things like this:
Map<String, List<Thing>> responseJsonMap = new HashMap<String, List<Thing>>();
List<Thing> things = ofy().load().type(Thing.class).list();
responseJsonMap.put("things", things);
pResponse.setContentType("application/json");
try {
GSON.toJson(responseJsonMap, pResponse.getWriter());
} ...
What I get back is the data that existed before the save. I have tried turning caching off on the entity and calling ofy().clear(), but neither works. If I restart my server or wait long enough, the saved data comes through. I have also tried adding .now() after the save, but it shouldn't be necessary since I can verify in the datastore that the write has completed. I really would like to be able to load the data I just saved. What am I doing wrong?

What I was doing wrong was not reading the manual and missing a step.
Objectify requires a filter to clean up any thread-local transaction
contexts and pending asynchronous operations that remain at the end of
a request. Add this to your WEB-INF/web.xml:
<filter>
  <filter-name>ObjectifyFilter</filter-name>
  <filter-class>com.googlecode.objectify.ObjectifyFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>ObjectifyFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
In any case, the filter is a good thing to check if you're getting cached values that you really don't expect.

Aside from the ObjectifyFilter issue, what you are most likely seeing is the eventually consistent nature of queries in the datastore.
See this: https://developers.google.com/appengine/docs/java/datastore/structuring_for_strong_consistency
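If eventual consistency is the cause, the linked page suggests structuring the data so that the read can be an ancestor query, which is strongly consistent. Here is a minimal sketch, assuming Thing is given a @Parent key and a hypothetical ThingGroup entity acts as the entity-group root (neither is part of the original code):
Key<ThingGroup> parent = Key.create(ThingGroup.class, "default");
Thing th = new Thing();
th.parent = parent;            // assumed @Parent Key<ThingGroup> field on Thing
th.identifier = thingId;
th.name = thingName;
th.json = thingData;
ofy().save().entity(th).now(); // .now() blocks until the write is applied
// An ancestor query is strongly consistent, so it sees the entity saved above.
List<Thing> things = ofy().load().type(Thing.class).ancestor(parent).list();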

Related

Doctrine: atomic updates and exceptions in a loop

We are migrating a project from a more basic ORM to using Symfony+Doctrine. In the project we have a lot of cron jobs looking like this:
$rows = $someRepository->getRows();
foreach ($rows as $row) {
    try {
        $db->beginTransaction(); // simple begin transaction in db
        // do some handling of data
        // Maybe load some other entities and update those
        // ...
        $db->commit();
    } catch (Throwable $t) {
        // log error
        // clear entity cache
        $db->rollback(); // simple rollback in db
    }
}
When we did it this way, all changes within the try/catch were atomic, while at the same time it was possible to recover from an error and continue with the next $row.
In Symfony+Doctrine, I simply cannot figure out how to mimic this behaviour. Doctrine's recommendation for handling an exception is to close the EntityManager, but how do you recover?
The ORM wraps flush() in a transaction implicitly, so most of the time you can avoid the hassle of demarcating transactions on your own.
However, if you want clear demarcation you can still do it explicitly, in a similar manner to what you did so far.
More reading and examples here: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html
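For example, a minimal sketch of the explicit demarcation using the transactional() helper described in the linked docs ($em and the per-row handling are assumed here, not taken from the original code):
$em->transactional(function ($em) use ($row) {
    // do some handling of data, load/update other entities via $em;
    // flush() and the commit happen automatically when the closure returns,
    // and a rollback happens if it throws
});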
EDIT related to the comment below:
Instead of injecting the manager, you should inject the registry.
After that, on catch, you can check $em->isOpen() and call $registry->resetManager() if it returns false.
I suspect this will also reset the unit of work, so you might encounter detached entities. In that case you should call $em->merge($entity).
One thing to note here is that an exception is not considered normal in Doctrine, which is why they close the manager. You might think that this is overcomplicated - yes it is, because you are working against the philosophy here. Validate your data if you can. Read this section: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html#exception-handling
As for the why (this is not official, just based on my knowledge): the manager's internal unit of work is a stateful object. When an exception occurs during a transaction, that state remains in place but couldn't be persisted to the database. If they let this go, the EM would try to apply all the state changes again and would encounter the same exception again. So there is no point in leaving it open in the same state; a reset is needed.
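Putting those pieces together, a rough sketch of the cron loop with recovery might look like the following; $registry (the injected ManagerRegistry), $someRepository and the per-row handling are placeholders, not the original code:
$em = $registry->getManager();
$rows = $someRepository->getRows();
foreach ($rows as $row) {
    try {
        $em->beginTransaction(); // explicit demarcation, like the old ORM code
        // do some handling of data, load/update other entities
        $em->flush();
        $em->commit();
    } catch (\Throwable $t) {
        // log the error, roll back, and recover for the next row
        $em->rollback();
        if (!$em->isOpen()) {
            $registry->resetManager();     // get a fresh, open EntityManager
            $em = $registry->getManager(); // entities loaded earlier are now detached
        }
    }
}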

Slowdown issue in web project

I just need a suggestion in this case. There is a PIN code field in my project, in an ASP.NET environment. I have stored around 50,000 PIN codes in a SQL Server database. When I run the project on localhost it slows down, since I have a drop-down that gets its values from the database. I think it is because a huge amount of data is being rendered into the HTML; when I click View Source at run-time, I can see all the PIN codes inside it.
Moreover, I have done the same for the CITY and STATE drop-downs.
I would really appreciate any logic or technique to lessen this slowdown.
If you are using all the PIN codes on a single page, you have multiple options to optimize this slowdown. If the project is still in its initial phase, try a NoSQL database such as MongoDB; otherwise go for Solr or Redis, which give fast access to the data. If you cannot use these, you can optimize with eager loading and caching of the data.
If it is not all on a single page, break it into batches by paginating the PIN codes.
This is a common problem with any website that deals with a large amount of data. To be frank, there is no pure code-level solution for this; you need to pick one of the following approaches for faster retrieval.
Caching -
Use Redis or Memcached - in simple terms, on the first request the cache manager reads your data from SQL Server and stores it. For subsequent requests, the data is served from the cache.
Also, don't forget to make a provision to invalidate the data when new pin codes are added.
Edit: You can also use the object caching provided by the .NET Framework. Refer: object caching
The code will be something like:
DataTable pinCodeObject;
if (Cache["key_pincodes"] == null)
{
    // No object in the cache: read the data into a DataTable (or any object)
    // and add it to the cache with a sliding expiry time of 10 minutes
    pinCodeObject = GetPinCodesFromdatabase();
    Cache.Insert("key_pincodes", pinCodeObject, null,
                 Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(10));
}
else
{
    // PIN codes are cached: don't make a database call, read them from the cache
    pinCodeObject = (DataTable)Cache["key_pincodes"];
}
// bind pinCodeObject to your drop-down
NoSQL database -
MongoDB, XML or plain text files could be used to read the data. It will take much less time than a database hit.

Symfony2, Doctrine2, Entity insert then update at once

It's a simple thing, but it doesn't work. We have this at the bottom part of the script:
$oMan = $this->getContainer()->get('doctrine')->getManager();
// add entry of calling
$oLastCall = new CronLastCall();
$oLastCall->setType('key');
$oMan->persist($oLastCall);
$oMan->flush();
We insert it into the DB as soon as we create it, then do some work that can take a few minutes, and then call this:
$oLastCall->setDateEnd(new \DateTime('now'));
$oMan->flush();
After this we exit from the method/action. So, regarding the logic (and the Doctrine2 manual I read), an entity that has been created and persisted becomes 'managed', and we should be able to simply update it. I call flush at the end to update this entity, but it is not updated.
Where is the trouble?
As far as I can tell, there's nothing wrong with the code you've shown. But if the $oLastCall object becomes detached between the first and second code blocks, you have to re-attach (merge) it to the manager so that it detects the changes for the second flush.
Merging can be done in this way:
$oLastCallMerged = $oMan->merge($oLastCall);
$oLastCallMerged->setDateEnd(new \DateTime('now'));
$oMan->flush();
You can also check the state (MANAGED/NEW/DETACHED/REMOVED) of an object using this code:
$oMan->getUnitOfWork()->getEntityState($oLastCall);
If that doesn't help (i.e. detachment of the object isn't your problem), you need to give more info about the context this code runs in and any errors you get. Is this code part of a Console Command or a regular web-app Controller? Do you get any output or errors when running it in 'dev' environment? (Check .../app/logs/dev.log.) Does the $oLastCall object stay in memory while waiting for the stuff that takes some minutes, or do you reload it from somewhere?
Btw, objects don't magically get detached by themselves. They'll only be detached if you load them from a different source than the entity manager (for example storing them in the session between requests) OR if you explicitly detach them by calling $oMan->detach($entity) or $oMan->clear().
Edit
You can also check whether Doctrine detects the change by echoing out the changeset using $oMan->getUnitOfWork()->getEntityChangeSet($oLastCall) before and after the change, e.g.:
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
$oLastCall->setDateEnd(new \DateTime('now'));
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
After flushing the manager all objects will be removed internally from the manager. So if you want to update the object later you need to call persist again before flushing.

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. Now, that's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document, specifically this gets troublesome when I want to update it. For the real docs I run a RealDocs.update but when doing that on the bogus doc it fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement and I'm not too fond of modifying the core code.
Right now I'm stuck with creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the EventedMind site), it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection; MyCollection.update() would thus become MyCollection._collection.update(). So, using a simple wrapper, one can pass in the usual arguments to an update call to update the collection as usual (which will try to call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to only perform the update in the client collection. Something like this should do it:
DocsUpdateWrapper = function () {
  var args = Array.prototype.slice.call(arguments); // arguments is not a real array
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // update only the client-side collection, skipping the server call
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // normal update: goes through the server and your allow/deny rules
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this, in my opinion, is that we can use the exact same calls to retrieve documents, regardless of whether they are only local or also live on the server. By adding a simple boolean to each doc we can keep track of which documents are local-only and which are not, so we know how to update them. (An improved DocsWrapper could check for that boolean, so we could even omit passing the 'local' argument.)
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on it rather than a general one.
You can put the code from this.added into the transform process.
You can also set up a local minimongo collection and insert into it in the callback:
# FoundAgents = new Meteor.Collection(null, Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert item
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().

Clear propel cache (instance pool)

I need to force a re-read of data from the DB within one PHP execution, using Propel. I already have a somewhat hacky solution: calling init%modelName% on the corresponding classes, but I want something better.
Is there any single call or service config option for that? Something like killing the whole instance pool.
About the service: we use Symfony2 and only need to bypass the cache in one specific case, so we could even create a separate environment for that.
You can globally disable the instance pooling by calling: Propel::disableInstancePooling() (Propel::enableInstancePooling() is useful to enable the instance pooling).
Otherwise, you can rely on the Peer classes, which contain generated methods like clearInstancePool() and clearRelatedInstancePool().
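For example, a quick sketch of both options, assuming a Foo model (so Propel has generated a FooPeer class); the names here are illustrative only:
// Disable pooling globally so every query hits the database:
Propel::disableInstancePooling();
$foo = FooQuery::create()->findPk($fooId); // read comes from the DB, not the pool
Propel::enableInstancePooling();           // restore pooling afterwards
// Or clear the pool for a single model instead of disabling it globally:
FooPeer::clearInstancePool();
FooPeer::clearRelatedInstancePool();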
I needed to update related objects and found out that clear%modelName% should be called.
init%modelName% deletes all entries, so related entries can never be read, and clear[Related]InstancePool() doesn't help.
$foo = FooQuery::create()->findOne();
// meanwhile somebody else updated the DB and Propel doesn't know about that:
mysql_query("INSERT INTO `foo_bars` (`foo_id`, `bar_id`) VALUES (".$foo->getId().", 1)");
// here we need some magic to force Propel to re-read the relation table:
$foo->clearFooBars();
// now the entries will be re-read
$foo->getFooBars();
