Creating many batches (SysOperation Framework) very quickly for similar processes - "Cannot edit a record in LastValue (SysLastValue)"?

I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
These show up on a couple of the batches in the batch history. I have this.parmLoadFromSysLastValue(false); set, and I'm not sure how to prevent writing to the SysLastValue table.
Any idea what could be going on?

I get this exception a lot too, so I've made a habit of catching DuplicateKeyException in my service operations: when it is thrown, catch it and retry (5 times by default).
The error occurs when a lot of processes run simultaneously, like you are doing now.
DuplicateKeyException can be caught inside a transaction, so you could improve on this by putting a try/catch around the code that inserts into the SysLastValue table, if you can find that code.
As far as I can see, these are the only two occurrences where a record is inserted in this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so you can add a try/catch with retry and see if that "fixes" it.
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted if it's not one of those two.
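A minimal sketch of that catch-and-retry pattern, assuming you can wrap the code that performs the insert (the transaction and the insert below are placeholders, not the actual kernel code):

int attempts;

try
{
    ttsbegin;
    // ... the code that inserts into SysLastValue goes here ...
    ttscommit;
}
catch (Exception::DuplicateKeyException)
{
    // retry jumps back to the start of the try block; give up after 5 attempts
    attempts++;
    if (attempts <= 5)
    {
        retry;
    }
    throw Exception::Error;
}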
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work because it is only taken into account when the dialog is started, not when your operation is executed. When in batch, no SysLastValue will be used to initialize your data contract anyway, as you want it to use the exact parameters you supplied in your data contract.

It's because of code calling SysOperationController.saveLast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
    .....
}
//Begin
else
{
    loadFromSysLastValue = false;
}
//End

Related

Doctrine: atomic updates and exceptions in a loop

We are migrating a project from a more basic ORM to using Symfony+Doctrine. In the project we have a lot of cron jobs looking like this:
$rows = $someRepository->getRows();

foreach ($rows as $row) {
    try {
        $db->beginTransaction(); // simple begin transaction in db

        // do some handling of data
        // maybe load some other entities and update those
        // ...

        $db->commit();
    } catch (Throwable $t) {
        // log error
        // clear entity cache
        $db->rollback(); // simple rollback in db
    }
}
When we did it this way, all changes within the try/catch were atomic, while at the same time it was possible to recover from an error and continue with the next $row.
In Symfony+Doctrine, I simply cannot figure out how to mimic this behaviour. Doctrine's recommendation for handling an exception is to close the EntityManager, but then how do you recover?
The ORM does this implicitly on flush, so most of the time you can avoid the hassle of doing so on your own.
However, if you want clear demarcation, you can still do it explicitly, in a similar manner to what you have done so far.
More reading and examples here: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html
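For reference, the explicit demarcation from those docs looks roughly like this (a sketch with your data handling elided):

$em->getConnection()->beginTransaction();
try {
    // do some handling of data
    $em->flush();
    $em->getConnection()->commit();
} catch (\Throwable $t) {
    $em->getConnection()->rollBack();
    throw $t;
}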
EDIT related to the comment below:
Instead of injecting the manager, you should inject the registry.
Then, in the catch block, you can check $em->isOpen() and call $registry->resetManager() if it is not.
I suspect this will also reset the unit of work, so you might encounter detached entities. In that case you should call $em->merge().
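Put together, a hypothetical version of your cron loop (assuming $registry is the injected ManagerRegistry and $rows comes from the repository as before):

foreach ($rows as $row) {
    $em = $registry->getManager();
    try {
        $em->transactional(function ($em) use ($row) {
            // do some handling of data
            // maybe load some other entities and update those
        });
    } catch (\Throwable $t) {
        // log error; a failed flush rolls back and closes the manager
        if (!$em->isOpen()) {
            $registry->resetManager(); // fresh manager for the next $row
        }
    }
}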
One thing to note here is that an exception is not considered normal in Doctrine, which is why the manager is closed. You might think that this is overcomplicated - yes it is, because you are working against the philosophy here. Validate your data if you can. Read this section: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html#exception-handling
As for the why (this is not official, just based on my knowledge): the manager's internal unit of work is a stateful object. When an exception occurs during a transaction, that state remains the same but couldn't be persisted to the database. If they let this go, the EM would try to apply all the state changes again and would encounter the same exception again. So there is no point in leaving it open in the same state; a reset is needed.

Added an update operation to InventItemService AIF service and getting exceptions

To my surprise, I didn't find standard update and delete operations on InventItemService. So, in order to fulfill our client's requirements, I ran the AIF update wizard and added these two operations; the process was quick and easy. Before doing that, I set the Update property of the AxdItem query to Yes. Later, while debugging the update operations, I figured out I had to modify the updateList() and update() methods on the AxdItem class accordingly to provide method definitions.
public AifResult updateList(AifEntityKeyList _entityKeyList,
    AifDocumentXml _xml,
    AifEndpointActionPolicyInfo _actionPolicyInfo,
    AifConstraintListCollection _constraintListCollection)
{
    //throw error(strFmt("#SYS94920"));
    return super(_entityKeyList, _xml, _actionPolicyInfo, _constraintListCollection);
}

AifResult update(AifEntityKey _entityKey,
    AifDocumentXml _xml,
    AifEndpointActionPolicyInfo _actionPolicyInfo,
    AifConstraintList _constraintList)
{
    //throw error(strFmt("#SYS94920"));
    return super(_entityKey, _xml, _actionPolicyInfo, _constraintList);
}
Now, while trying to update an existing item in AX, I am getting the following AIF exception:
Cannot edit a record in Item sales order settings (InventItemSalesSetup).
The operation cannot be completed, since the record was not selected for update. Remember TTSBEGIN/TTSCOMMIT as well as the FORUPDATE clause.
Then I changed the Update property of all the child data sources on the AxdItem query and re-ran the wizard. After regenerating CIL, I'm now getting the following exception:
Cannot edit a record in Item sales order settings (InventItemSalesSetup).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
Any suggestions/ideas?
I have tried several things and spent a lot of time on this, but without success.
It seems like you are trying to update a record that has already been deleted in the removeDefaultItemOrderSetup method of the AxdItem class.
This thread gives a hint about what happens:
https://dynamicsuser.net/ax/f/developers/72116/aif-update-cannot-be-run-twice

How to handle exceptions in VB.NET?

I am creating a program in which a user can search for and add their desired order. The problem I'm facing is that when I throw the exception, the program does not surface it, so the user can't tell whether the entered ID is in the database or not. I will provide a code snippet from the program I'm working on.
Problems
1. Your code will not throw an error if the item_code does not exist in your database. It will simply never enter the while loop.
2. This is not a proper use of an exception. It is not an error if the record is not found. The proper way of checking whether the item_code exists is to check if the data reader has results.
3. You must properly defend yourself against SQL injection. By concatenating the SQL query you are opening yourself up to a whole host of problems. For example, if a user maliciously enters the following text, it will delete the entire Products table: ';DROP TABLE Products;--
4. You are not disposing of the OleDbConnection or the OleDbCommand objects correctly. If an exception occurs, your code will never call Dispose(). This can cause you to quickly run out of resources.
Solutions
1. You should check whether dataRead has any rows. If it does not, then you can alert the user via JavaScript, like so:

If dataRead.HasRows Then
    ' read data
Else
    ' alert the user
End If
Solution #1 addresses Problem #2 as well.
2. Use a parameterized query. The .NET Framework will prevent these kinds of attacks (SQL injection).

selectProductQuery = "SELECT * FROM Products WHERE item_code = @item_code"
...
newCmd.Parameters.AddWithValue("@item_code", txtItemCode.Text)
3. Wrap all objects that implement Dispose() in a Using block. This will guarantee everything is properly disposed of, whether an error is thrown or not.

Using newCon As New OleDbConnection(....)
    Using newCmd As New OleDb.OleDbCommand(...)
        ...
    End Using
End Using
To be perfectly honest, there is quite a bit "wrong" with your code, but this should get you headed in the right direction.
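Putting the pieces together, a minimal sketch (the connection string, control names, and the Products schema are assumptions taken from the question; note that OLE DB parameters are positional, so the SQL text uses the ? placeholder and the parameter name is ignored):

Imports System.Data.OleDb

...

Using newCon As New OleDbConnection(connectionString)
    Using newCmd As New OleDbCommand("SELECT * FROM Products WHERE item_code = ?", newCon)
        newCmd.Parameters.AddWithValue("@item_code", txtItemCode.Text)
        newCon.Open()
        Using dataRead As OleDbDataReader = newCmd.ExecuteReader()
            If dataRead.HasRows Then
                While dataRead.Read()
                    ' read the matching product rows here
                End While
            Else
                ' tell the user the item code was not found
                Response.Write("<script type='text/javascript'>alert('Item code not found.');</script>")
            End If
        End Using
    End Using
End Using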
The line:
Response.Write(<script>alert('The ...')</script>)
Needs to be (note the quotes):
Response.Write("<script type='text/javascript'>alert('The ...')</script>")
Same for the other one at the top, but I don't think that will fix your overall problem.
Instead, use javascript like this:
if(!alert('Whoops!')){window.location.reload();}
to pop up an alert box and then reload the page after they click on the button.

Dealing with concurrent requests in Meteor

I'm dealing with a problem where a user can update a document within a specified time limit, and if he doesn't, the server will.
The update involves incrementing a value and adding an object to an array of a document. I need to ensure that only one of the user/server updates the document. Not both.
To ensure this happens, some checks are run to see if the document has already been updated, but there are times where the user and server run at exactly the same time and both pass the checks and then the document is updated twice.
I've been trying many different ways of fixing this, but I haven't been able to. I tried implementing a lock similar to Peterson's algorithm (http://en.wikipedia.org/wiki/Peterson%27s_algorithm) to ensure that only one update will happen and the second will fail, but I haven't been successful. Any ideas?
You can achieve this by using a MongoDB update query that simultaneously checks if the value has been updated and updates it. Like this:
var post = Posts.findOne("ID");

// ... do some stuff with the post ...

Posts.update({_id: post._id, counter: post.counter},
             {$push: {items: newItem}, $inc: {counter: 1}});
As you can see, in one query we both check the counter and increment it - so if two of these queries run one right after another only one will actually update the document (since the counter won't match anymore).
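On the server, update also returns the number of documents it affected, so the losing side can detect that it lost the race (a sketch reusing the names from the snippet above):

var affected = Posts.update(
    {_id: post._id, counter: post.counter},
    {$push: {items: newItem}, $inc: {counter: 1}}
);

if (affected === 0) {
    // another writer already bumped the counter; skip or retry
}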

Flush separate Castle ActiveRecord Transaction, and refresh object in another Transaction

I've got all of my ASP.NET requests wrapped in a Session and a Transaction that gets commited only at the very end of the request.
At some point during execution of the request, I would like to insert an object and make it visible to other potential threads - i.e. split the insertion into a new transaction, commit that transaction, and move on.
The reason is that the request in question hits an API that then chain-hits another one of my pages (near-synchronously) to let me know that it processed. That second hit double-submits a transaction record, because the original request has not yet finished and therefore has not committed the transaction record.
So I've tried wrapping the insertion code in a new SessionScope, a TransactionScope(TransactionMode.New), a combination of both, flushing everything manually, etc.
However, when I call Refresh on the object, I'm still getting the old object state.
Here's some code sample for what I'm seeing:
Post outsidePost = Post.Find(id); // status of this post is Status.Old

using (TransactionScope transaction = new TransactionScope(TransactionMode.New))
{
    Post p = Post.Find(id);
    p.Status = Status.New; // new status set here
    p.Update();
    SessionScope.Current.Flush();
    transaction.Flush();
    transaction.VoteCommit();
}

outsidePost.Refresh();
// Refresh doesn't pick up the new status; it is still Status.Old
Any suggestions, ideas, and comments are appreciated!
I've had a similar problem before, related to isolation levels. The default isolation level was set to "Snapshot", and when running on SQL Server, this meant that the first transaction would not see anything that changed since it started. Maybe it's your isolation level?
If not, try creating a brand-new TransactionScope straight after disposing the one above, and see if you can read it back as New. If you can't, it's probably nothing to do with the outside transaction.
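A sketch of that read-back check, reusing the Post type and id from the question:

// after the inner scope above has been disposed, read the record back
// inside a brand-new transaction scope
using (TransactionScope verifyScope = new TransactionScope(TransactionMode.New))
{
    Post check = Post.Find(id);
    // if check.Status is Status.New here, the commit worked and the stale
    // value lives in the outer request's session, not in the database
}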
Hope that helps.
Marcus
