Getting "cannot change a locked branch" exception in Guidewire PC - Gosu

I am trying to make a read-only entity writable by using:
Transaction.runWithNewBundle(\bundle -> {
    entity = bundle.add(entity)
})
but I am getting
java.lang.IllegalArgumentException: You cannot change a locked branch.
Please help me out with this.

I got it resolved. Guidewire provides a field on the entity called "Locked". If it is set to true, the entity cannot be modified even inside the Transaction.runWithNewBundle scope. Set it to false to resolve the exception.

That’s not really the best idea. You are trying to edit a branch (PolicyPeriod) that is essentially in a “thou shalt not change” state. If the branch is bound or quoted and you modify it, you are likely at least invalidating the quote. I’d highly recommend NOT modifying a locked branch. Instead, open it for edit first (if you can).

Simple! You cannot edit a branch that is locked (Quoted or Bound). Click on "Edit Policy Transaction" to bring the status back to Draft.

Related

Exporting AEM experience fragments to Adobe Target automatically every time a related Content Fragment is updated

I have this unique requirement where each time a particular Content Fragment is updated in AEM, all the Experience Fragments referencing that particular Content Fragment need to be automatically exported to Adobe Target.
I am thinking about using an SQL2 query to retrieve the XFs referencing a particular CF and then incorporating this into a workflow process. I am also wondering if I can leverage the AEM OOTB workflow process called "Export to Target" for this.
I am not really sure how to call this "Export to Target" process on each Experience Fragment that needs to be exported to Target, or whether this is possible at all.
I am wondering if anyone has ever come across this requirement and succeeded.
I would highly appreciate any tips or suggestions. Many thanks in advance.
Whenever a Content Fragment is created or updated, an OSGi event is fired. All events are logged under http://localhost:4502/system/console/events. You could write an EventListener or EventHandler, get the path from the event, get the Resource and adapt it to com.adobe.cq.dam.cfm.ContentFragment. The topic for these events is "com/day/cq/dam" (the constant DamEvent.EVENT_TOPIC).
From the adapted class or Resource you can get information about the model, and whether it's the model you want to process.
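A minimal sketch of such a handler, assuming OSGi R6 DS annotations (the class name is hypothetical, the event-type filtering is left as a comment to adapt, and getServiceResourceResolver requires a service-user mapping configured for the bundle):

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;
import com.adobe.cq.dam.cfm.ContentFragment;
import com.day.cq.dam.api.DamEvent;

@Component(service = EventHandler.class,
        property = EventConstants.EVENT_TOPIC + "=" + DamEvent.EVENT_TOPIC)
public class ContentFragmentUpdateListener implements EventHandler {

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Override
    public void handleEvent(Event event) {
        DamEvent damEvent = DamEvent.fromEvent(event);
        if (damEvent == null) {
            return; // not a DAM event
        }
        // Filter on damEvent.getType() here for the event types you care about.
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(null)) {
            Resource resource = resolver.getResource(damEvent.getAssetPath());
            ContentFragment fragment =
                    resource == null ? null : resource.adaptTo(ContentFragment.class);
            if (fragment != null) {
                // Check the fragment's model, then find the referencing XFs
                // and start the export workflow for each of them.
            }
        } catch (LoginException e) {
            // No service user configured for this bundle; log and bail out.
        }
    }
}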
To find all references, I would also create an Oak index and use an SQL2 query.
The query would be something like this:
select [jcr:path], [jcr:score], * from [nt:base] as a where contains(*, '"/content/dam/myReferencedModel"')
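As a sketch, running that query from code and collecting the paths could look like this (assuming a ResourceResolver named resolver is already in scope; the isdescendantnode restriction to /content/experience-fragments is an assumption to narrow the result set):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import javax.jcr.query.Query;
import org.apache.sling.api.resource.Resource;

String sql2 = "select * from [nt:base] as a "
        + "where isdescendantnode(a, '/content/experience-fragments') "
        + "and contains(*, '\"/content/dam/myReferencedModel\"')";
// Collect the paths of all Experience Fragments referencing the Content Fragment.
List<String> xfPaths = new ArrayList<String>();
Iterator<Resource> hits = resolver.findResources(sql2, Query.JCR_SQL2);
while (hits.hasNext()) {
    xfPaths.add(hits.next().getPath());
}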
Once you have all the referencing XFs, you can kick off any workflow via the WorkflowService:
@Reference
private WorkflowService workflowService;

// "session" is a javax.jcr.Session, e.g. adapted from a ResourceResolver
WorkflowSession wfSession = workflowService.getWorkflowSession(session);
WorkflowModel wfModel = wfSession.getModel("/var/workflow/models/mymodel");
WorkflowData wfData = wfSession.newWorkflowData("JCR_PATH", "/payload");
wfSession.startWorkflow(wfModel, wfData);

Entity Framework table updates not working due to trigger calling CLR

I have a table with a trigger that points to an assembly:
CREATE TRIGGER [dbo].[triggername] ON [dbo].[tablename]
WITH EXECUTE AS CALLER
AFTER DELETE, UPDATE
NOT FOR REPLICATION
AS EXTERNAL NAME [Namofassembly].[blahblah].[blahblah]
We are also using code-first EF in .NET 4.
When I delete, everything works fine but the trigger does not get called:
dataRepo.UsersPermanentAuditAssignments.Remove(isInsertFound);
When I update, I get a permissions error. This happens both when I go through the object model and when I use dataRepo.Database.ExecuteSqlCommand(updateSql):
System.Data.SqlClient.SqlException: The context transaction which was active before entering user defined routine, trigger or aggregate "name" has been ended inside of it, which is not allowed. Change application logic to enforce strict transaction nesting.
Everything works fine when I run the queries via SQL Server Management Studio.
I am not able to change this configuration, so while I don't care for this design, I have to work with it.
My questions are:
1) Why would the delete work but not get logged?
2) Do I need to add something extra to my repo configuration object that will allow this to work? Do I need to add some transaction handling, like a unit of work, before I start, since the table has a trigger?
I have figured out the causes of this issue.
It relates to having a composite primary key (station,user) and trying to update one of the values.
I could not update any column of the primary key, i.e. change the user assigned to a station.
The trigger failure masked the issue of not being able to update a value inside the key.
My experiments show the following for the composite-key/PK update:

Method                     History Trigger   Result
EF.SaveChanges             Enabled           Fail at trigger
EF.SaveChanges             Disabled          Fail at trigger
EF.ExecuteSQLCommand(sql)  Enabled           Fail at trigger
EF.ExecuteSQLCommand(sql)  Disabled          Works
Unfortunately, I don't have the ability to change to a surrogate key with a unique index, which would work. The CLR trigger also prevents me from using Database.ExecuteSqlCommand(sql), which I believe is actually a problem within the CLR itself, which I have no ability to modify.
So my advice (which I can't take myself) is: if you run into this, use a surrogate key and a unique index instead of combining the two columns into the primary key.
If anyone knows a way to get EF to let you change a value inside a composite primary key, please comment.

Problem persisting collection of interfaces in JDO/Datanucleus. "unable to assign an object of type.."

I am getting the error below while trying to persist an object that has a collection typed to an interface, which I want to hold a couple of different types of objects. It seems to happen almost randomly; sometimes after restarting it works OK (I might be doing something wrong, though).
class CommentList {
    @Persistent
    @Join
    ArrayList<IComment> comments = new ArrayList<IComment>();
}
somewhere else...
CommentList cl = new CommentList();
cl.addComment( new SimpleComment() );
cl.addComment( new SpecialComment() );
repo.persist( cl );
I can see the join table has been created in my DB along with ID fields for each of the Implementation classes of IComment.
SimpleComment and SpecialComment implement IComment. If I just add a SimpleComment it works fine. As soon as I start trying to add other types of objects I start to get the errors.
The error I'm getting:
java.lang.ClassCastException: Field "com.myapp.model.CommentList.comments" is a reference field (interface/Object) of type com.myapp.behaviours.IComment but DataNucleus is unable to assign an object of type "com.myapp.model.ShortComment" to this field. You can only assign this field to a type specified by the "implementation-classes" extension attribute.
at org.datanucleus.store.mapped.mapping.MultiMapping.setObject(MultiMapping.java:220)
at org.datanucleus.store.mapped.mapping.ReferenceMapping.setObject(ReferenceMapping.java:526)
at org.datanucleus.store.mapped.mapping.MultiMapping.setObject(MultiMapping.java:200)
at org.datanucleus.store.rdbms.scostore.BackingStoreHelper.populateElementInStatement(BackingStoreHelper.java:135)
at org.datanucleus.store.rdbms.scostore.RDBMSJoinListStoreSpecialization.internalAdd(RDBMSJoinListStoreSpecialization.java:443)
at org.datanucleus.store.mapped.scostore.JoinListStore.internalAdd(JoinListStore.java:233)
When it does save, if I restart the server and try to query for a list of the comments, I get null values returned.
I'm using a MySQL backend - if I switch to db4o it works fine.
Please let me know if any info would be useful.
If you have any idea where I might be going wrong or can provide some sample code for persisting collection of different objects implementing the same interface that would be appreciated.
Thanks for any help.
Tom
When I used interfaces I just enabled dynamicSchemaUpdates (a persistence property with a name like that) and FKs are added when needed. The log shows all the generated SQL, I think.
I fixed this by specifying
<extension vendor-name="datanucleus" key="implementation-classes" value="SimpleComment,SpecialComment"/>
for the comments field in my package.jdo.
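For completeness, the annotation-based equivalent of that fix could look like this (a sketch; the fully qualified class names in value are assumptions - use your real packages):

import java.util.ArrayList;
import javax.jdo.annotations.Extension;
import javax.jdo.annotations.Join;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;

@PersistenceCapable
public class CommentList {
    // "implementation-classes" tells DataNucleus which concrete types this
    // interface-typed collection may contain, so it can map each of them.
    @Persistent
    @Join
    @Extension(vendorName = "datanucleus", key = "implementation-classes",
            value = "com.myapp.model.SimpleComment,com.myapp.model.SpecialComment")
    private ArrayList<IComment> comments = new ArrayList<IComment>();

    public void addComment(IComment comment) {
        comments.add(comment);
    }
}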

Do not allow a user to delete a node, but allow deletion through Views Bulk Operations

I have the following scenario:
Editor Role should not be allowed to delete nodes; therefore the corresponding permission is de-selected on the permissions page.
However, Editor should be able to delete nodes from Views Bulk Operations. Using Rules, an action called "safe delete" is created that checks things like whether the node is not published, etc., before deleting the node.
The problem is that Views Bulk Operations respects node permissions: Editor will not be able to delete the node, as he has not been given that permission. Is there a way that Editor can temporarily become a higher-role user (a sort of sudo) while performing that action in VBO? Alternatively, is there a way to tell VBO to ignore node access for this action?
I'm sure this is a mainstream requirement but I can't seem to find a solution.
Solutions which do not involve programming will be preferred.
The simple, but not-so-clean, way is the route you already took, but with an additional small module to help it. The module:
- has a function my_module_can_delete($user) that returns TRUE if the user is allowed to delete, FALSE if the user is not;
- implements hook_form_alter() to remove the delete button from the node edit form when my_module_can_delete($user) returns FALSE;
- implements hook_form_alter() to disable the confirm form at /node/%nid/delete and add a message there telling the user he or she is not allowed to delete. This should be enough, since disabling this form means users cannot get past it; the Form API will take care of that.
However, you can make it more sturdy, to catch other deleting modules: implement hook_nodeapi() with $op == 'delete' to catch delete actions and halt them (by invoking drupal_goto(), or calling drupal_access_denied() to force an access-denied error). Only allow delete actions if the referer was the delete confirm form mentioned above, or, more securely, whitelist your VBO action and refuse all other referers. The referer can often be determined by inspecting the $node passed along to hook_nodeapi().
A much cleaner, IMHO, but probably more labour-intensive alternative would be to simply make sure your batches/actions are called on every delete action.
In a module, you could do this by avoiding all the VBO configuration and leaving the extra delete actions out of it.
Then write a module that implements hook_nodeapi() and calls all the cleanup actions from there. That way you can be sure that your delete actions are called on every delete of any node. Obviously you can add conditions in your hook_nodeapi() to only invoke your module in certain cases (node types, user roles, permissions and so on).
Well, it seems to me that you've got a setup where you don't want Editor Role users to delete things, really, except in certain extreme situations. Here's my suggestion:
1) Install the Flag module. Create a 'To Be Deleted' flag that can only be set by Editor Role people.
2) I haven't looked into it, but I'm sure there's a rule or trigger/action combo that will unpublish the node when the 'To Be Deleted' flag is assigned to it. This will remove the node from casual view.
3) Then either set up some cron-run activity (trigger/action or rule) to delete nodes with the 'To Be Deleted' flag set, or have another user with higher permissions come in occasionally and delete the flagged items.
This way you're not actually bypassing the permissions system, and yet things are still being removed from your site.
I got caught out by this for a while until I noticed the actions_permissions module: enable it, and on the Permissions page you can grant access to specific actions on a role-by-role basis.
I don't have a good no-coding solution, and I'm not sure I would call this solution "great", but one way might be to implement a simple module with a form_alter hook that removes the delete button from the node edit forms as they are built.
In general it seems like the role either has permission to delete nodes or not, and monkeying around like this is going to be less robust than you might like.

ServerControl randomly null

I have a master page with a server control in it. Randomly, the server control is inaccessible from code-behind. This doesn't happen on a specific action (e.g. a button click). Currently I have no clue what this could be. I don't think it's output caching, since that is not explicitly activated and the error happens far too seldom for that. But I'm going to explicitly disable caching in the master page with the next deployment.
Does anyone have an idea how to get more information on what's happening? Or has someone had a similar error?
The control is defined in markup. The accompanying code-behind is:
PGFMainNavi.HasAccessToFunction = HasAccessToNaviItem;
// HasAccessToNaviItem is a local function
The exception is:
System.NullReferenceException: Object reference not set to an instance of an object
Thanks.
I have had similar errors when my controls are cached - I always check whether they are null and whether they are the correct types.
I think your control is cached somewhere.
Use this code to check that it is not null:
if (PGFMainNavi != null)
{
    PGFMainNavi.HasAccessToFunction = HasAccessToNaviItem;
}
or find where you set the cache on this control and remove it.
Second solution:
Sometimes after an online update I get this error because the compiler failed to read all the involved files correctly - probably a user requested the page at the same moment I was copying the files, or something similar.
To avoid that, I always put app_offline.htm in place before making my updates.
