How does PeopleSoft App Engine program flow occur?

I am learning more about PeopleSoft Application Engine program flow. From what I've read in PeopleBooks, any actions within a step that specify a Do Select, Do When, or Do While perform a looping activity, where all subsequent actions within that step are looped through one row at a time.
I have seen some App Engine programs, including the one below, where a Do Select action occurs in a step, followed by a Call Section action that executes another section of the program. Does this mean that the loop still iterates over the called section one row at a time, just like any other actions would be repeated within the calling step?
My second question is specific to the App Engine program below. In the highlighted PeopleCode action at the bottom of the program, you can see it runs PeopleCode to check/compare data elements and then exit. My question is whether this code runs within the context of the looping action above, executing one row at a time, or whether it executes by looking at everything in the buffer at once. I would think it can only process row by row, as it needs to correctly exit/break from the step. Hopefully my question makes sense, but I'm happy to clarify if needed. Thanks!

Both of your assumptions are correct.
If you call another program section within a Do ..., then that call gets executed once for every row that is returned from the Do .... Within the context of the called section, the data in your state records and temp tables will be the same as it was when you hit the Call Section action.
When you execute a PeopleCode action, it executes with whatever data is in the state records and temp tables at that time.
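The flow described above can be sketched in plain code. This is a minimal, hypothetical Java sketch of the control flow only; `callSection` and `peopleCodeAction` are illustrative stand-ins, not PeopleTools APIs. It shows that both the Call Section action and the PeopleCode action run once per row returned by the Do Select:

```java
import java.util.List;

public class DoSelectFlow {
    // Hypothetical stand-ins for App Engine concepts, for illustration only.
    static int calledSectionRuns = 0;
    static int peopleCodeRuns = 0;

    static void callSection(String row) {
        // The called section sees state/temp-table data as of this row.
        calledSectionRuns++;
    }

    static boolean peopleCodeAction(String row) {
        // Runs with whatever is in the state record for the current row;
        // returning false stands in for exiting/breaking out of the step.
        peopleCodeRuns++;
        return !row.equals("STOP");
    }

    public static void main(String[] args) {
        List<String> doSelectRows = List.of("A", "B", "C"); // rows from the Do Select
        for (String row : doSelectRows) {
            callSection(row);              // executed once per row
            if (!peopleCodeAction(row)) {  // also executed once per row
                break;                     // exit the step from within the loop
            }
        }
        System.out.println(calledSectionRuns + " " + peopleCodeRuns);
    }
}
```

Running this prints `3 3`: with three rows, both the called section and the PeopleCode action execute three times, one row at a time.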

Two conflicting long lived process managers

Let's assume we have two long-lived process managers. Both sagas operate over 10 million items, for example. The first saga adds something to each item; the second saga removes it from each item. Since both process managers need a few minutes to complete their job, I run into trouble if I run them simultaneously.
Some of those items would hold the value while the rest would not. The result is close to random, actually, and depends on the order of the commands that affect each particular item. I wondered if redispatching the "Remove" command in case of failure would solve the problem; I mean, if you try to remove a non-existent value, you should wait for the first saga to add it. But while the process managers are working, someone else may dispatch a "Remove" or "Add" command. In such a case my approach would fail.
How can I solve this problem? :)
It seems that you would want the second saga to not run while the first saga is running (and presumably not run until whatever the first saga added is actually there). So the apparent solution would be to have a component (could be a microservice, or a record in a strongly consistent datastore like zookeeper/etcd/consul) that gives permission for the sagas to start executing. An example protocol might look like:
Saga sends a message to the component identifying the saga and conveying the intention to start
Component validates that no sagas might be running which would prevent this saga from running
Component responds with permission to start running
Subsequent saga attempts result in rejection until the running saga tells the component it's OK to run the other saga
Assuming that this component is reliably durable, the failure mode to worry about is that permission is granted but this component never processes the message that the saga finished (causes of this could include the permission message not getting delivered/processed or the saga crashing). No amount of acknowledgements or extra messages can solve this (it's basically the Two Generals' Problem).
A mitigation is to have this component (or something watching this component) alert if it seems that too much time has passed without saga completion. Whatever/whoever is responsible for ensuring liveness would then investigate to see if the saga is still running and if none is running, inform the component that it's OK to run the other saga. Note that this is not foolproof: it's quite possible for the decider in question to make what turns out to be the wrong decision.
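The protocol above can be sketched as a small coordinator. This is a minimal in-memory Java sketch of the idea; the names (`SagaCoordinator`, `requestStart`, `reportDone`) are illustrative, and a real deployment would back this with a durable, strongly consistent store plus the liveness watchdog just described:

```java
import java.util.Optional;

// Minimal sketch of the permission component described above.
public class SagaCoordinator {
    private Optional<String> runningSaga = Optional.empty();

    // Steps 1-3: a saga identifies itself and asks for permission to start.
    public synchronized boolean requestStart(String sagaId) {
        if (runningSaga.isPresent()) {
            return false; // step 4: reject while another saga is running
        }
        runningSaga = Optional.of(sagaId);
        return true;
    }

    // The running saga tells the component it is OK to run the other saga.
    public synchronized void reportDone(String sagaId) {
        if (runningSaga.filter(sagaId::equals).isPresent()) {
            runningSaga = Optional.empty();
        }
    }
}
```

Note the failure mode from the answer still applies: if `reportDone` is never delivered, the coordinator stays locked until whatever watches it intervenes.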
I feel like I need more context. Whilst you don't say it explicitly, is the problem that the second saga tries to remove values that haven't been added by the first?
If that was true, a simple solution would be to just use a third state.
What I mean by that is to define and declare item state more explicitly. You currently seem to have two states, with value and without value, but nothing to indicate whether an item is ready to be processed by the second saga because the first saga has already done its work on the item in question.
So all that needs to happen is that the second saga keeps looking for items where:
(with_value == true & ready_for_saga2 == true)
Call it ready_for_saga2 or "saga 1 processing complete", whatever seems more appropriate in your context.
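As a sketch, the selection the second saga performs could look like this in Java. The `Item` record and its field names are hypothetical, matching the flags suggested above:

```java
import java.util.List;

// Illustrative item with an explicit third state: readyForSaga2 marks
// that saga 1 has finished its work on the item. Names are hypothetical.
record Item(long id, boolean withValue, boolean readyForSaga2) {}

public class Saga2Selector {
    // Saga 2 only picks up items that saga 1 has fully processed.
    static List<Item> selectForSaga2(List<Item> items) {
        return items.stream()
                .filter(i -> i.withValue() && i.readyForSaga2())
                .toList();
    }
}
```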
I'd say that the solution would vary based on which actual problem we're trying to solve.
Say it's an inventory, where "Add" commands represent items added to the inventory and "Remove" commands represent items requested for delivery. Then the order of commands does not matter that much, because you could just process the request for delivery when new items are added to the inventory.
This would lead to an aggregate root with two collections: Items and PendingOrders.
One process manager adds new inventory to Items - if any orders are pending, it will complete these orders in the same transaction and remove both the item and the order from the collections.
If the other process manager adds an order (tries to remove an item), it will either do it right away, if there are any items left, or it will add the order to the pending orders to be processed when new items arrive (and maybe notify someone about the delay, while we're at it).
This way we end up with the same state regardless of the order of commands, but the actual real-world-problem has great influence on the model chosen.
If we have other real-world problems, we can make a model for those too.
Let's say you have two users who each start a process that bulk updates titles on inventory items. In this case you, and the users, have to decide how best to resolve this conflict: what will lead to the best real-world outcome.
If you want consistency across all the items (all or no items should be updated by a single bulk update), I would embed this knowledge in a new model. Let's call it UpdateTitlesProcesses. We have only one instance of this model in the system, and its state is shared between processes. This model is effectively a command queue: when a user initiates the bulk operation, it adds all the commands to the queue and starts processing each item one at a time.
When the second user initiates another title update, the business logic in our models will reject this, as there's already another update started. Or if the experts say that the last write should win, then we ditch the remaining commands from the first process and add the new ones (and similarly we should decide what should happen if a user issues a single title update, not bulk - should it be rejected, prioritized or put on hold?).
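A rough Java sketch of that single-instance model, showing the "reject while another bulk update is running" policy (all names are illustrative, and the actual title update is omitted):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Sketch of the single-instance UpdateTitlesProcesses model: a command
// queue that rejects a second bulk update while one is in progress.
public class UpdateTitlesProcesses {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean running = false;

    // A user initiates a bulk title update over the given items.
    public synchronized boolean startBulkUpdate(List<String> itemIds) {
        if (running) {
            return false; // business logic rejects a concurrent bulk update
        }
        queue.addAll(itemIds);
        running = true;
        return true;
    }

    // Process one queued command at a time.
    public synchronized void processNext() {
        queue.poll(); // apply the title update for this item (omitted)
        if (queue.isEmpty()) {
            running = false;
        }
    }
}
```

The "last write wins" variant the experts might prefer would instead clear the queue and enqueue the new commands in `startBulkUpdate`.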
So in short I'd say:
Make it clear which real-world problem we are solving, and thus which conflict resolution outcome is best (probably a trade-off, and often something that requires user interaction or notification).
Model this explicitly (where processes, actions and conflict handling are also part of the model).

Fire another command within the command handler in an axonframework application

Is this a good way to fire another command within the command handler in axonframework application?
For example, I want to provide a ROLLBACK function, where the underlying process reads the historical state of the aggregate at a given sequence number and then updates the aggregate according to that state. Imagine it as follows:
@CommandHandler
private void on(RollbackCommand command, MetaData metaData) {
    // Query a projection for the aggregate's state at the given sequence number
    ContractAggregate ca = queryGateWay.query(new QueryContractWithGivenSequenceNumber(...));
    // Dispatch a follow-up command to apply that historical state
    commandGateWay.sendCommandAndWait(new UpdateContractCommand(ca));
}
Will it work fine?
On "Dispatching commands from command handlers"
Command handlers can roughly exist in two areas in an Axon application:
Within the Aggregate
On a Command Handling Component
In both options it is possible to dispatch a command from within the command handler, but I would only advise such an operation from option 2.
The reasoning behind this is that when Axon handles a command from within an Aggregate, that exact Aggregate instance will be locked.
This is done to ensure no concurrent operations are performed on a given aggregate instance.
Knowing this, we can deduce that the subsequent command could also end up in an aggregate instance, which will be locked as well. Additionally, if the command being dispatched from within an aggregate command handler is targeted at the same aggregate instance, you'll effectively be blocking the system. Axon will throw a LockAcquisitionFailedException eventually, but nonetheless you'd have created something undesirable.
Thus, I'd only dispatch commands from within @CommandHandler annotated methods which reside on a Command Handling Component.
On "your use case"
I have some questions on your use case, as the blanks make me slightly concerned whether this is the best approach. Thus, let me ask you some follow up questions.
If I understand your question correctly, you want to introduce a command handler which queries the aggregate to be able to roll back its state with a single command?
Would you have a command which adjusts the entire state of the aggregate?
Or specific portions of the aggregate instance?
And I assume the query is targeted towards a dedicated Query Model representing the aggregate, thus not Axon's idea of the @Aggregate annotated class, right?

Oracle DB trigger fails, then succeeds with same input

In my project, there is a DB trigger that gets input from an application. The application populates a table, the trigger takes the table rows as inputs, and populates a different table as output. The trigger has been performing fine for years.
A few months ago, the trigger started failing for a huge number of inputs, giving a general exception. When the errored-out inputs are manually reprocessed, they get processed correctly. So I have now written a second trigger that searches for errored-out entries and updates their status to "not processed", and the original trigger then processes them correctly.
While that took care of the problem, I still cannot figure out why the errors happen in the first place. If it were a trigger problem, the issue could be reproduced with the same input, but it can't be. Any errored-out input, when processed again, passes with flying colours.
What could be the problem here? When does an Oracle DB trigger throw general exception with an input, but never a second time with the same input?

Is context.executeQueryAsync a transactional operation?

Let's say I update multiple items in a loop and then call executeQueryAsync() on the ClientContext class, and this call returns an error (the failed callback is invoked). Can I be sure that not a single item was updated of all those I wanted to update? Is there a chance that some of them will get updated and some of them will not? In other words, is this operation transactional? Thank you; I cannot find a single post about it.
I am asking about the CSOM model, not server-side solutions.
SharePoint handles its internal updates in a transactional manner, so updating a document will actually result in multiple calls to the DB, and if one of them fails, the other changes are rolled back so nothing is left half updated on a failure.
However, that is not made available to us as external developers. If you create an update that updates 9 items within your executeQueryAsync call and it fails on #7, the first 6 will not be rolled back. You are going to have to write code to handle the failures, and if rolling back is important, you will have to manually roll back the changes within your code.
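The manual rollback described above is essentially a compensation pattern. CSOM itself is JavaScript, but the pattern is language-neutral; here is a hypothetical Java sketch where `applyUpdate` and `undoUpdate` are stand-ins for your own update and rollback logic, not SharePoint APIs:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Consumer;

public class BatchWithCompensation {
    // Applies updates one by one; on failure, undoes the ones already applied.
    static boolean updateAll(List<String> items,
                             Consumer<String> applyUpdate,
                             Consumer<String> undoUpdate) {
        Deque<String> applied = new ArrayDeque<>();
        try {
            for (String item : items) {
                applyUpdate.accept(item);
                applied.push(item);
            }
            return true;
        } catch (RuntimeException failure) {
            // Roll back in reverse order; items after the failure were never touched.
            while (!applied.isEmpty()) {
                undoUpdate.accept(applied.pop());
            }
            return false;
        }
    }
}
```

With the 9-item example from the answer, a failure on item 7 would trigger `undoUpdate` for items 6 down to 1, while items 8 and 9 were never applied at all.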

To run batch jobs one after the other

I am submitting jobs to the batch process one after the other.
How do I ensure that the second batch job runs only when the first one is finished?
Right now both jobs execute simultaneously, which I don't want to happen.
There are two options: you can do this through code, or via manual setup. The manual method is fairly easy: just go to Basic > Inquiries > Batch Job, create a new batch job, and save it. Then click "View Tasks" and create a new task; this will be your first batch task. Choose your class, description, batch group, etc., then save. Click "Parameters" to set up the parameters.
After that, you can set up your dependent task. Make sure both your tasks have descriptions. Add your second batch task and save. Then, in the lower left corner, click on the task that you want to have a condition, add a row there, and set up your conditions so that the second task won't run until the first has completed.
Via X++ code, you would create a BatchHeader where you set up basically the same thing we just did manually. You use the .addDependency method to make one task dependent on the completion of the other. This walkthrough will get you started with a job to create the batch header, and you'll just have to play around to get the dependency working.
