DataNucleus/JDO InstanceLifecycleListener for makeTransient (or LOAD by query) - jdo

I have a problem that I need to solve using DataNucleus (JDO); maybe it's a limitation or something that was not covered by the JDO spec.
I need to catch when objects are loaded by a Query - there is NO InstanceLifecycleListener for this! (Queried objects should be treated as LOADED objects after all - they can be changed, detached, etc.)
Query query = pm.newQuery(...);
Collection col = (Collection) query.execute();
Another way to do it would be to catch when objects are made TRANSIENT. I found no way of doing that either!
pm.makeTransientAll(col,true);
Any ideas?
For detached Query elements, I use a DetachLifecycleListener registered on the PMF, which listens to the DETACH event. That's the only way I've found to make an InstanceLifecycleListener work with queries.
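For reference, registering it on the PMF looks roughly like this (a minimal sketch, not my exact code; "myUnit" and MyEntity are placeholders for the real persistence unit and persistent class):

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.listener.DetachLifecycleListener;
import javax.jdo.listener.InstanceLifecycleEvent;
import javax.jdo.listener.LoadLifecycleListener;

public class MyListener implements DetachLifecycleListener, LoadLifecycleListener {
    // postLoad fires for getObjectById(), and for queries only under the fetch-group conditions described below
    public void postLoad(InstanceLifecycleEvent event) {
        System.out.println("loaded: " + event.getPersistentInstance());
    }
    public void preDetach(InstanceLifecycleEvent event) { }
    // postDetach is the only hook that fires reliably for query results (once they are detached)
    public void postDetach(InstanceLifecycleEvent event) {
        System.out.println("detached: " + event.getPersistentInstance());
    }
    public static void main(String[] args) {
        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory("myUnit");
        pmf.addInstanceLifecycleListener(new MyListener(), new Class[]{ MyEntity.class });
    }
}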

Here's what I found out after a lot of debugging:
1) Queries DO NOT trigger any InstanceLifecycleListener (except when elements are DETACHED) if the DEFAULT fetch group (i.e. the metadata default-fetch-group) has been removed from the PersistenceManager prior to the query. In other words: one cannot use the pm.getFetchPlan().clearGroups() method, or one has to ADD the DEFAULT fetch group back before executing the query. So, if there is a need to define a programmatic fetch group, it has to take into account that the DEFAULT fetch group (that is, all the fields defined in the metadata) will (must!) be present for the InstanceLifecycleListener to work.
The workaround would be to define the DEFAULT fetch group with ONLY the key field (it's needed) and change the logic to work with that, which is exactly what I did. The result?
2) The getObjectById method STOPS triggering postLoad! After more debugging I found that for this to work, the DEFAULT fetch group MUST define at least one other field apart from the key!
There is a total lack of information about how fetch groups affect the way InstanceLifecycleListeners are triggered, and Apache JDO (at least) should revise the way this works (or at least the docs), so that JDO users don't go nuts over the implementation.
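To make point 1 concrete, this is the kind of fetch-plan handling that keeps postLoad firing for query results (just a sketch; "myGroup" and MyEntity are placeholder names):

pm.getFetchPlan().clearGroups();                 // doing only this silences the listeners for queries
pm.getFetchPlan().addGroup("myGroup");           // the programmatic/custom fetch group
pm.getFetchPlan().addGroup(FetchPlan.DEFAULT);   // the DEFAULT group must be back in place
Query query = pm.newQuery(MyEntity.class);
Collection col = (Collection) query.execute();   // postLoad fires again now that DEFAULT is present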

Related

How do you manage adding new attributes to existing objects when using Firebase?

I have an app using React + Redux and coupled with Firebase for the backend.
Often, I will want to add some new attributes to existing objects.
When doing so, existing objects won't get the attribute until they're modified with the new version of the app that handles those new attributes.
For example, let's say I have a /categories/ node, and in there I've got objects such as this:
{
name: "Medical"
}
Now let's say I want to add an icon field with a default of "
Is it possible to update all categories at once so that field always exists with the default value?
Or do you handle this in the client code?
Right now I'm always testing the values to see if they're here or not, but it doesn't seem like a very good way to go about it. I'd like to have one place to define defaults.
It seems like having classes for each object type would be interesting but I'm not sure how to go about this in Redux.
Do you just use the reducer to turn all categories into class instances when you fetch them for example? I'm worried this would be heavy performance wise.
Any write operation to the Firebase Database requires that you know the exact path to the node that you're writing.
There is no built-in operation to bulk update nodes with a path that is only partially known.
You can either keep your client-side code robust enough to handle the missing properties, or you can indeed run a migration script to add the new property to each relevant node. But since that script will have to know the exact path of each node to write, it will likely first have to read/query the database to determine those paths. Depending on the number of items to update, it could possibly use multi-location updates after that to update multiple nodes in one call. E.g.
firebase.database().ref("categories").update({
  "idOfMedicalCategory/icon": "newIconForMedical",
  "idOfCommercialCategory/icon": "newIconForCommercial",
  "idOfTechCategory/icon": "newIconForTech"
});
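A hedged sketch of such a migration script (same namespaced web SDK as above; the node name, the "icon" field and the "default-icon" value are just the example ones, not necessarily your schema):

firebase.database().ref("categories").once("value").then(function(snapshot) {
  var updates = {};
  snapshot.forEach(function(child) {
    // only touch categories that are still missing the new field
    if (!child.hasChild("icon")) {
      updates[child.key + "/icon"] = "default-icon";
    }
  });
  // a single multi-location update writes all collected paths in one call
  return firebase.database().ref("categories").update(updates);
});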

Plone: catalog_object method won't add my (AT) objects

I have a transmogrifier pipeline to insert objects into my Zope database (importing zexp files from a directory structure). This works - the objects are created - but they don't get added to the portal_catalog.
I added a section to add the objects to the catalog explicitly, inspired by plone.app.transmogrifier.reindexobject: I call portal_catalog.catalog_object(obj) for each item.
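Per item, that section boils down to something like this (a simplified sketch, not my actual blueprint code; "portal" and "imported_objects" are stand-ins):

from Products.CMFCore.utils import getToolByName

catalog = getToolByName(portal, 'portal_catalog')
for obj in imported_objects:
    path = '/'.join(obj.getPhysicalPath())
    catalog.catalog_object(obj, path)  # the uid argument is the physical path; idxs could restrict the indexes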
The objects exist, and getPhysicalPath yields the correct values, but the objects are not added. There is no error message or exception whatsoever.
I tried to specify the list of indexes (the idxs argument), but this didn't change anything. If not specified, all indexes should be filled anyway, right?
Since it looks like a transaction problem to me (no errors, but nothing stored in the catalog either), I tried transaction code (begin, savepoint, commit, and abort in case of exceptions), but it didn't help. When I call the catalog immediately after the catalog_object call (portal_catalog(path='/Plonesite/full/path/to/object')), nothing has happened, and an empty list is returned.
The catalog does contain objects, even objects of my custom data types (AT-based), but not even the Folder objects of my imports are indexed.
Without the objects in the catalog, my import is useless. What can I do?
Thank you!
Edit: Any hint about how to get my object trees in the catalog is appreciated! Even if it can't be integrated in my process. I need the contents cataloged ...
My custom content types are contained in the Plone Catalog Tool page selection field, but I don't know whether this is sufficient.
Edit 2:
Somehow my objects have been catalogued after all - the unrestrictedSearchResults method shows them! However, using this method everywhere can't be the desired solution, so I need to "un-restrict" the entries somehow.
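In a nutshell (path as in the example above):

# the normal, restricted search comes back empty
portal_catalog(path='/Plonesite/full/path/to/object')
# the unrestricted search shows the imported objects - see below for why
portal_catalog.unrestrictedSearchResults(path='/Plonesite/full/path/to/object')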
It turned out that I have a monkey:patch (xmlns:monkey="http://namespaces.plone.org/monkey") for the Products.CMFPlone.CatalogTool.CatalogTool.searchResults method; it filters the catalog on my additional field subportal unless a special value for it is given - even in the management view ... Unfortunately, I had no way to specify this special value in that view.
Thus, the solution was to weed out all wrong values (for subportals which don't exist in the other Zope tree) so that the default value takes effect.
Quite specific to my setup, I'm afraid ...

Adding a callback when reading from an object in Twig

Let's say I have a basic entity called Entity which is mapped to a database table. This entity has two properties: propertyA and propertyB.
One particularity of this entity is that, although we may store whatever we want in these properties, when the value of propertyB is used in a Twig template via entity.propertyB, we want to systematically truncate it to 100 characters.
Now, this is perfectly doable in several ways:
Truncate the value directly in the getPropertyB() method;
Register a Twig extension and create a dedicated filter;
Add a lifecycle callback on the entity to truncate the value before the object is actually created.
As this is strictly a display rule, and not a business rule on our entity, the second solution seems to be the best IMHO. However, it requires applying the filter every time the value of propertyB is used in a template; should an unaware developer come by, the value may not be truncated.
So my question is: is there a way to register some kind of callback, strictly restricted to the view model wrapping our entity, which would allow us to apply some filters on the fly to some of its properties?
Since you never need to access anything beyond 100 characters, you can truncate the property in its setter. This doesn't really pollute the Entity code, because this logic is inherent to it.
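A minimal sketch of that suggestion (assuming plain PHP accessors on the entity; mb_substr is used so multi-byte strings are cut safely):

class Entity
{
    private $propertyB;

    public function setPropertyB($value)
    {
        // nothing beyond 100 characters is ever displayed, so cut it off on write
        $this->propertyB = mb_substr($value, 0, 100);
    }

    public function getPropertyB()
    {
        return $this->propertyB;
    }
}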

Entity Framework table updates not working due to trigger calling CLR

I have a table with a trigger that points to an assembly:
CREATE TRIGGER [dbo].[triggername] ON [dbo].[tablename]
WITH EXECUTE AS CALLER
AFTER DELETE, UPDATE
NOT FOR REPLICATION
AS EXTERNAL NAME [Namofassembly].[blahblah].[blahblah]
We are also using code-first EF in .NET 4.
When I use delete, everything works fine but the trigger does not get called.
dataRepo.UsersPermanentAuditAssignments.Remove(isInsertFound);
When I use update I get a permissions error. This happens both when I try it through the object model and when I use dataRepo.Database.ExecuteSqlCommand(updateSql):
System.Data.SqlClient.SqlException: The context transaction which was active before entering user defined routine, trigger or aggregate "name" has been ended inside of it, which is not allowed. Change application logic to enforce strict transaction nesting.
Everything works fine when I run the queries via the sql management studio.
I am also not able to change this configuration, so while I don't care for this design, I have to work with it.
My questions are:
1> Why does the delete work but not get logged?
2> Do I need to add something extra to my repo configuration object to make this work? Do I need to wrap this in some kind of transaction or unit of work before I start, since there is a trigger?
I have figured out the causes of this issue.
It relates to having a composite primary key (station,user) and trying to update one of the values.
I could not update any column of the primary key, i.e. change the user assigned to a station.
The trigger failure masked the issue of not being able to update a value inside the key.
My experiments show the following for the composite-key/PK update:
Method                     History Trigger   Result
EF.SaveChanges             Enabled           Fail at trigger
EF.SaveChanges             Disabled          Fail at trigger
EF.ExecuteSQLCommand(sql)  Enabled           Fail at trigger
EF.ExecuteSQLCommand(sql)  Disabled          Works
Unfortunately, I don't have the ability to change to a surrogate key with a unique index, which would work. The trigger's CLR also prevents me from using Database.ExecuteSqlCommand(sql), which I believe is actually a problem with the CLR, which I have no ability to modify.
So my advice (that I can't take myself) is: if you run into this, use a surrogate key and a unique index instead of combining the two.
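For anyone who can take that advice, the change would look roughly like this (a hedged sketch; the constraint, index and column names are illustrative, not the real schema):

-- add a surrogate identity column and make it the primary key
ALTER TABLE [dbo].[tablename] ADD [Id] INT IDENTITY(1,1) NOT NULL;
ALTER TABLE [dbo].[tablename] DROP CONSTRAINT [PK_tablename];  -- the existing composite PK
ALTER TABLE [dbo].[tablename] ADD CONSTRAINT [PK_tablename] PRIMARY KEY ([Id]);
-- keep (station, user) unique so the old invariant still holds
CREATE UNIQUE INDEX [IX_tablename_station_user] ON [dbo].[tablename] ([station], [user]);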
If anyone knows any way to get EF to let you change a value inside a composite primary key, please comment.

Accessing an object's workflow state in collective.easytemplate

I would like to use collective.easytemplate to generate templated emails (for content rules). However, I am not sure whether it can output an object's workflow state. Does anybody know if this is possible and how it is done?
Thanks.
Yes, it is possible; one way is to use the portal_workflow tool, e.g. from parts/omelette/plone/app/contentrules/tests/test_action_workflow.py:
self.assertEquals('published',
self.portal.portal_workflow.getInfoFor(self.folder.d1, 'review_state'))
More generally, something like:
context.portal_workflow.getInfoFor(context, 'review_state')
in a page template should work. Or use the portal_catalog as Spanky suggests, e.g. if "obj" is a catalog "brain" (i.e. part of a result set from a catalog search), then:
obj.review_state
should work.
The portal_catalog also has an index of the workflow's Review State, so if you don't already have the object you're working on (e.g. context ≠ the object) you could use the catalog, look up the object and get the review state from the resulting "brains" object.
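For example, something along these lines (the UID query is just one way to find the brain; "target_uid" is a placeholder):

brains = context.portal_catalog(UID=target_uid)
if brains:
    state = brains[0].review_state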
Apparently there are also browser view methods available to you, and I notice that one of them is workflow_state. See:
http://plone.org/documentation/manual/theme-reference/page/otherinfo

Resources