To my surprise, I didn't find standard update and delete operations on InventItemService. So, to fulfill our client's requirements, I ran the AIF update wizard and added these two operations. The process of doing that was quick and easy. Before doing that, I set the Update property of the AxdItem query to Yes. Later, while debugging the update operations, I figured out I had to modify the updateList() and update() methods on the AxdItem class accordingly to provide the method definitions:
public AifResult updateList(AifEntityKeyList            _entityKeyList,
                            AifDocumentXml              _xml,
                            AifEndpointActionPolicyInfo _actionPolicyInfo,
                            AifConstraintListCollection _constraintListCollection)
{
    //throw error(strFmt("#SYS94920"));
    return super(_entityKeyList, _xml, _actionPolicyInfo, _constraintListCollection);
}

AifResult update(AifEntityKey                _entityKey,
                 AifDocumentXml              _xml,
                 AifEndpointActionPolicyInfo _actionPolicyInfo,
                 AifConstraintList           _constraintList)
{
    //throw error(strFmt("#SYS94920"));
    return super(_entityKey, _xml, _actionPolicyInfo, _constraintList);
}
Now, while trying to update an existing item in AX, I am getting the following AIF exception:
Cannot edit a record in Item sales order settings (InventItemSalesSetup).
The operation cannot be completed, since the record was not selected for update. Remember TTSBEGIN/TTSCOMMIT as well as the FORUPDATE clause.
Then I changed the Update property of all the child data sources on the AxdItem query and re-ran the wizard. I ran CIL generation again and am now getting the following exception:
Cannot edit a record in Item sales order settings (InventItemSalesSetup).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
Any suggestions/ideas?
I have tried several things and spent too much time but failed.
It seems like you are trying to update a record that has already been deleted in the removeDefaultItemOrderSetup method of the AxdItem class.
This thread will give you a hint about what happens:
https://dynamicsuser.net/ax/f/developers/72116/aif-update-cannot-be-run-twice
Related
We are migrating a project from a more basic ORM to using Symfony+Doctrine. In the project we have a lot of cron jobs looking like this:
$rows = $someRepository->getRows();
foreach ($rows as $row) {
try {
$db->beginTransaction(); //simple begin transaction in db
//do some handling of data
// Maybe load some other entities and update those
// ...
$db->commit();
} catch (Throwable $t) {
//log error
//clear entity cache
$db->rollback(); //simple rollback in db
}
}
When we did it this way, all changes within the try/catch were atomic, while it was still possible to recover from an error and continue with the next $row.
In Symfony+Doctrine, I simply cannot figure out how to mimic this behaviour. Doctrine's recommendation for handling an exception is to close the EntityManager, but how do you recover?
The ORM does this implicitly on flush, so most of the time you can avoid the hassle of doing so on your own.
However, if you want clear demarcation you can still do it explicitly, in a similar manner to what you have done so far.
More reading and examples here: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html
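As a rough illustration, explicit demarcation per row could look something like this (assuming an EntityManager $em and the $rows from your example; this is a sketch, not a drop-in solution):
foreach ($rows as $row) {
    $em->getConnection()->beginTransaction();
    try {
        // ... handle $row, load and update other entities ...
        $em->flush();
        $em->getConnection()->commit();
    } catch (\Throwable $t) {
        // log the error and roll back; note the EntityManager may be closed
        // at this point, see the edit below about resetting it
        $em->getConnection()->rollBack();
    }
}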
EDIT related to the comment below:
Instead of injecting the manager, you should inject the registry.
After that, on catch, you can check whether $em->isOpen() and call $registry->resetManager() if it is not.
I suspect this will also reset the unit of work, so you might encounter detached entities. In that case you should do $em->merge();
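A hedged sketch of what that could look like inside the loop (assuming a Doctrine\Persistence\ManagerRegistry injected as $registry; the row handling itself is from your example):
foreach ($rows as $row) {
    $em = $registry->getManager();
    try {
        // ... handle $row, update entities ...
        $em->flush();
    } catch (\Throwable $t) {
        // log the error
        if (!$em->isOpen()) {
            // the next getManager() call hands back a fresh, open manager
            $registry->resetManager();
        }
    }
}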
One thing to note here is that an exception is not considered normal in Doctrine, so they close the manager because of that. You might think that this is overcomplicated - yes it is, because you are working against the philosophy here. Validate your data if you can. Read this section: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html#exception-handling
As for the why (this is not official, just based on my knowledge): the manager's internal unit of work is a stateful object. When an exception occurs during a transaction, that state remains the same but couldn't be persisted to the database. If they left the manager open, the EM would try to apply all the state changes again and would run into the same exception again. So there is no point in leaving it open in the same state; a reset is needed.
In a clustered Intershop environment, we see a lot of error messages. I suspect that the communication between the application servers is not reliable.
Caused by: com.intershop.beehive.orm.capi.common.ORMException:
Could not UPDATE object: com.intershop.beehive.bts.internal.orderprocess.basket.BasketPO
Is there a safe way for the local application server to load the latest instance?
BasketPO basket = null;
try
{
    BasketPOFactory factory = (BasketPOFactory) NamingMgr.getInstance().lookupFactory(BasketPOFactory.FACTORY_NAME);
    try (ORMObjectCollection<BasketPO> baskets = factory.getObjectsBySQLWhere("uuid=?", new Object[]{basketID}, CacheMode.NO_CACHING))
    {
        if (null != baskets && !baskets.isEmpty())
        {
            basket = baskets.stream().findFirst().get();
        }
    }
}
catch (Throwable t)
{
    Logger.error(this, t.getMessage(), t);
}
Does the ORMObject#refresh method help?
try
{
    if (null != basket)
    {
        basket.refresh();
    }
}
catch (Throwable t)
{
    Logger.error(this, t.getMessage(), t);
}
You experience that error because an optimistic lock "fails". To understand the problem better, I'll try to explain how optimistic locking works, in particular in the Intershop ORM layer.
There is a column named OCA in the PO tables (OCA == optimistic control attribute?). Imagine that two servers (or two different threads/transactions) try to update the same row in a table. For performance reasons there is no DB locking involved by default (e.g. by issuing select for update). Instead the first thread/server increments the OCA by one when it updates the row successfully within its transaction.
The second thread/server knows the value of the OCA from the time that it created its own state. It then tries to update the row by issuing a similar query:
UPDATE ... OCA = OCA + 1 ... WHERE UUID = <uuid> AND OCA = <old_oca>
Since the OCA is already incremented by the first thread/server this update fails (in reality - updates 0 rows) and the exception that you posted above is thrown when the ORM layer detects that no rows were updated.
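For illustration only, here is a generic JDBC sketch of that check, outside of the Intershop API (the table and column names are made up, not the real Intershop schema):
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Generic illustration of the optimistic-lock check described above;
// BASKETPO/TOTAL are placeholder names.
class OptimisticUpdateSketch {
    void updateBasket(Connection con, String uuid, long oldOca, BigDecimal newTotal) throws SQLException {
        String sql = "UPDATE BASKETPO SET TOTAL = ?, OCA = OCA + 1 WHERE UUID = ? AND OCA = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setBigDecimal(1, newTotal);
            ps.setString(2, uuid);
            ps.setLong(3, oldOca);
            if (ps.executeUpdate() == 0) {
                // the row was changed or deleted since this thread read it;
                // this is the point where the ORM layer raises its ORMException
                throw new SQLException("Optimistic lock failed for basket " + uuid);
            }
        }
    }
}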
Your problem is not the inter-server communication but rather the fact that either:
multiple servers/threads try to update the same object;
there are direct updates in the database that bypass the ORM layer (less likely);
To solve this you may:
Avoid that situation altogether (highly recommended by me :-) );
Use the ISH locking framework (very cumbersome IMHO);
Use pessimistic locking supported by the ISH ORM layer and Oracle (beware of potential performance issues, deadlocks, bugs);
Use Java locking - but since the servers run in different JVM-s this is rarely an option;
OFFTOPIC remarks: I'm not sure why you use getObjectsBySQLWhere when you know the primary key (uuid). As far as I remember, ORMObjectCollections should be closed if not iterated completely.
UPDATE: If the cluster is not configured correctly and the multicasts can't be received by the nodes, you won't be able to resolve the problems programmatically.
The "ORMObject.refresh()" marks the cached shared state as invalid. Next access to the object reloads the state from the database. This impacts the performance and increase the database server load.
BUT:
The "refresh()" method does not reload the PO instance state if it already assigned to the current transaction.
Would be best to investigate and fix the server communication issues.
Another possibility is that it isn't a communication problem (multicast between nodes in the cluster, I assume), but that there are simply two requests trying to update the basket at the same time - for example, two AJAX requests updating something on the basket.
I would avoid trying to "fix" the ORM; it would only cause more harm than good. Rather, investigate further and post back more information.
I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
On a couple of them in the BatchHistory. I have this.parmLoadFromSysLastValue(false); set, but I'm not sure how to prevent writing to the SysLastValue table.
Any idea what could be going on?
I get this exception a lot too, so I've made a habit of catching DuplicateKeyException in my service operation. When it is thrown, catch it and retry (for a default of 5 times).
The error occurs when a lot of processes run simultaneously, like you are doing now.
DuplicateKeyException can be caught inside a transaction, so you could improve this by putting a try/catch around the code that does the insert into the SysLastValue table, if you can find that code.
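A minimal sketch of that catch-and-retry pattern in X++ (illustrative only; the retried code and the retry count of 5 are placeholders, not the framework's actual insert):
int retryCount;

try
{
    // ... the code path that inserts into SysLastValue ...
}
catch (Exception::DuplicateKeyException)
{
    retryCount++;
    if (retryCount <= 5)
    {
        retry;
    }
    throw Exception::DuplicateKeyException;
}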
As far as I can see, these are the only two occurrences where a record is inserted into this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so you can add a try/catch with retry and see if that "fixes" it.
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted if it's not one of those two.
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work because it is only taken into account when the dialog is started, not when your operation is executed. When in batch, no SysLastValue record will be used to initialize your data contract, as you want it to use the exact parameters you supplied in your data contract.
It's because of the code calling SysOperationController.saveLast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
    .....
}
//Begin
else
{
    loadFromSysLastValue = false;
}
//End
A simple thing, but it doesn't work. We have this at the bottom part of a script:
$oMan = $this->getContainer()->get('doctrine')->getManager();
// add entry of calling
$oLastCall = new CronLastCall();
$oLastCall->setType('key');
$oMan->persist($oLastCall);
$oMan->flush();
We insert it into the DB once we create it, then do some stuff that can take a few minutes. Then we call this:
$oLastCall->setDateEnd(new \DateTime('now'));
$oMan->flush();
After this we exit from the method/action. So according to the logic (and the Doctrine2 manual I read), an entity that was created has already become 'managed' (we persisted it), and we can simply update it. (I call flush at the end to update this entity, but it is not updating.)
Where is the trouble?
As far as I can tell, there's nothing wrong with the code you've shown. But if the $oLastCall object becomes detached between the first and second code blocks, you have to re-attach (merge) it to the manager so that it detects the changes for the second flush.
Merging can be done in this way:
$oLastCallMerged = $oMan->merge($oLastCall);
$oLastCallMerged->setDateEnd(new \DateTime('now'));
$oMan->flush();
You can also check the state (MANAGED/NEW/DETACHED/REMOVED) of an object using this code:
$oMan->getUnitOfWork()->getEntityState($oLastCall);
If that doesn't help (i.e. detachment of the object isn't your problem), you need to give more info about the context this code runs in and any errors you get. Is this code part of a Console Command or a regular web-app Controller? Do you get any output or errors when running it in 'dev' environment? (Check .../app/logs/dev.log.) Does the $oLastCall object stay in memory while waiting for the stuff that takes some minutes, or do you reload it from somewhere?
Btw, objects don't magically get detached by themselves. They'll only be detached if you load them from a different source than the entity manager (for example storing them in the session between requests) OR if you explicitly detach them by calling $oMan->detach($entity) or $oMan->clear().
Edit
You can also check if Doctrine detects the change by echoing out the changeset using $oMan->getUnitOfWork()->getEntityChangeSet($oLastCall) before and after the change, e.g:
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
$oLastCall->setDateEnd(new \DateTime('now'));
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
After flushing the manager all objects will be removed internally from the manager. So if you want to update the object later you need to call persist again before flushing.
Each task has a reference to the goal it is assigned to. When I try to delete the tasks and then the goal, I get the error
"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries." on the line _goalRepository.Delete(goalId);
What am I doing wrong?
[HttpPost]
public void DeleteGoal(int goalId, bool deleteTasks)
{
    try
    {
        if (deleteTasks)
        {
            Goal goalWithTasks = _goalRepository.GetWithTasks(goalId);
            foreach (var task in goalWithTasks.Tasks)
            {
                _taskRepository.Delete(task.Id);
            }
            goalWithTasks.Tasks = null;
            _goalRepository.Update(goalWithTasks);
        }
        _goalRepository.Delete(goalId);
    }
    catch (Exception ex)
    {
        Exception deleteException = ex;
    }
}
Most likely the problem is because you're attempting to hold onto and reuse a context across page views. You should create a new context, do your work, and dispose of the context atomically. It's called the Unit Of Work pattern.
The main reason for this is that the context maintains some state information about the database rows it has seen, if that state information becomes stale or out of date then you get exceptions like this.
There are a lot of other reasons to use the Unit of Work pattern, I would suggest you do a web search and do a little reading as an educational exercise.
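As a rough sketch of the idea (the DbContext type AppDbContext and the Goals/Tasks set names are assumptions, not from your code), a self-contained unit of work for this delete could look like:
// one context per operation: create it, do the work, save, dispose
using (var db = new AppDbContext())
{
    var goal = db.Goals.Include("Tasks").Single(g => g.Id == goalId);
    foreach (var task in goal.Tasks.ToList())   // copy so we don't modify the collection we iterate
    {
        db.Tasks.Remove(task);
    }
    db.Goals.Remove(goal);
    db.SaveChanges();   // all deletes hit the database as one atomic unit
}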
This may have nothing to do with data access though. You are removing items from a list as you are iterating it, which would cause problems if you were using a normal List. Without knowing much about the internals of EF, my guess is that your delete calls to the repository are changing the same list that you are iterating.
Try iterating the list in one pass and record the Task ids you want to delete in a separate list. Then, when you have finished iterating, call delete on the Task list. For example:
var tasksToDelete = new List<int>();
foreach (var task in goalWithTasks.Tasks)
{
    tasksToDelete.Add(task.Id);
}
foreach (var id in tasksToDelete)
{
    _taskRepository.Delete(id);
}
This may not be the cause of your problem but it is good practice to never change the collection you are iterating.
I ran across this issue at work (I am an intern). I was getting this error when trying to delete a piece of Equipment that was referenced in other data tables.
I was deleting all references before attempting to delete the Equipment, BUT the reference deletion was happening in another Model which had its own database context, and the reference deletion was saved within that Model's context.
The Equipment Model's context, however, would not know about the changes that had just happened in the other Model's context, which is why, when I tried to delete the Equipment and then save the changes (e.g. db.SaveChanges()), the error occurred (the Equipment context still thought there were references to that equipment in other tables).
My solution for this was to re-allocate the context before attempting to delete the Equipment:
db = new DatabaseContext();
Now the newly allocated context has the latest snapshot of the database and is aware of all changes made. Deletion happens without issues.
Hope my experience helps.