Oracle BPEL: why use compensation when a catch can handle the exception?

I understand that in Oracle BPEL the purpose of compensation is business rollback. But a catch block can do nearly the same thing (except for rolling back in the reverse order of completion). I don't really understand why we still need compensation.

There are many scenarios in which manually coded compensation in error handlers would be pretty hard, if not impossible, to write and would contain lots of duplicated code.
Imagine you have the following process:
<flow>
  <sequence>
    <invoke name="I1"/>
    <invoke name="I2"/>
  </sequence>
  <sequence>
    <invoke name="I3"/>
    <invoke name="I4"/>
  </sequence>
</flow>
If you want compensation handling, you just add a compensation handler to every invoke and you are done.
If you went the error-handler route instead, you would need some way to check which activities have already been executed. Imagine I4 throws a fault. You know that I3 has completed and needs to be compensated. However, you do NOT know whether I1 or I2 has started, let alone completed. You would need to fiddle around with flags stored in variables that you set inside isolated activities, and so on; the error handlers for I2, I3, and I4 would each also need to contain the compensation logic for I1. Using compensation handlers is much cleaner and easier than trying to re-invent them :) The sketch below shows what that hand-rolled bookkeeping looks like.
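A rough sketch in Java-flavored pseudocode (the invoke/compensate helpers are hypothetical), covering only a single sequence; with the parallel branches above, the flags would additionally have to be shared across branches and the compensation logic duplicated in every error handler:

// Hand-rolled compensation: every step needs a "done" flag, and the catch
// block must undo completed work in reverse order of completion.
boolean i1Done = false, i2Done = false;
try {
    invokeI1(); i1Done = true;
    invokeI2(); i2Done = true;
} catch (Exception fault) {
    if (i2Done) compensateI2(); // reverse order of completion
    if (i1Done) compensateI1();
    throw fault;                // rethrow once compensation is done
}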

Separating Axon commands and their effects

I'm maintaining an event-sourced application that, I'm afraid, has gone far off the road.
In one case a command is received by an aggregate root, which publishes an event that is handled by an event handler that needs to do two things:
send a command (cmd1) to another aggregate root, which will publish an event that creates a number of sagas, each firing off some commands that are eventually handled by a number of aggregates
send a second command (cmd2) that will also lead to all sorts of command/event/command sequences.
In schematic form:
cmd0 -> AR0 -> evt0 -> evtHandler -> cmd1 -> AR1 -> evt1 -> saga stuff, more cmds and evts
                                  |-> cmd2 -> AR2 -> evt2 -> more saga stuff, cmds and evts
Everything happens on the same thread, and everything happens in one transaction, started at the first command handling.
Now the goal: all events, sagas, and aggregate calls originating from the first command (cmd1) should happen first, and only then all events, sagas, and aggregate calls originating from the second command (cmd2).
Here's the observation: cmd1 calls AR1, which publishes evt1, but after that cmd2 calls AR2, publishing evt2. All other events and commands originating from cmd1 are mingled with those originating from cmd2.
First I thought I could get away with it using the UnitOfWork, but even explicitly creating a separate unit of work for handling cmd1 didn't solve the problem. Looking at the implementation in AbstractEventBus, I see that the events are simply merged into the parent unit of work and thus end up mingled with the ones originating from cmd2.
Here's the question: is there a way to first dispatch cmd1 and wait until all effects originating from that command have been handled before dispatching cmd2, while still preserving the transactional atomicity that I currently have?
To be completely honest with you, Jan, the best situation would be one in which the components within your application don't rely too much on that order.
As it stands, you essentially have distinct message-handling components, which could in essence be different microservices, yet they are all tied together because the order is important.
Ideally, you'd set up your components to work on their own.
So: aggregates handle a command and publish the result, done.
Sagas react to events, regardless of where they come from, and respond with actions (e.g. dispatching commands).
Embracing eventual consistency would help here, as it drops the entire requirement of waiting for one process to complete.
From a theoretical stance, that would be my response.
From a more pragmatic corner, looking at your question, I'd like to point out that this sounds like a rabbit hole you are heading into. You don't only want cmd1's handling to be done; you want the event handling on all sagas to be resolved, including the commands coming out of those, correct? Who's to say how many sagas there are? Or how many of the commands those sagas dispatch need to be taken into account? These criteria will likely change over time, adding more and more stuff that needs to happen "in a single transaction".
Well, yes, there are ways to wait for processing by certain parts and to pull them all into a single transaction. But to be honest with you, I wouldn't recommend taking that route, as it will only make using such a message-based system more and more complex.
The crux is what "all effects" means. From the point of dispatching a command, you should only care whether that exact command is handled successfully, yes or no; that's where the concerns should end.
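As a minimal sketch of that boundary (assuming Axon 4's CommandGateway; cmd1 and cmd2 stand for your own command objects), you would block on the command handler itself and treat its exception as the NOK. Note that this only checks each command's own result; it does not order the downstream effects:

import org.axonframework.commandhandling.CommandExecutionException;

// Wait only for the command handler; downstream event handlers and sagas
// are deliberately outside this boundary.
try {
    commandGateway.sendAndWait(cmd1); // throws if the @CommandHandler rejects cmd1
    commandGateway.sendAndWait(cmd2);
} catch (CommandExecutionException nok) {
    // that exact command failed - report or compensate here,
    // but don't try to chase the saga side effects
}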
I know this does not give you a simple programmatic solution, as you need to adjust the design, but I think decoupling is the only right way to go here.
That's my two cents on the situation; hope this helps you further in any way, Jan.
Update: what to anticipate from each message type
In essence, the messages you use in an Axon application form a boundary, beyond which a component essentially has no clue what is going to handle its messages. The behaviour differs a little per message type, which might clarify what options you have:
Commands - Commands are consistently routed to a single handler, on a single instance. Furthermore, you can anticipate a response, in the form of an OK or a NOK. An OK means the handler returned void or the identifier of a created entity (like the aggregate itself). NOKs are typically the exceptions you throw from your command-handling methods, which signal that something went wrong or that the command simply couldn't be executed, and that the dispatching end should be told so.
Events - Events are broadcast to every component that has subscribed itself to the EventBus as being capable of handling a given event. Note that event handling is segregated in time from the actual point at which the event was published. This means there is no way for event handling to produce results that could (or should) be returned to the dispatcher of an event.
Queries - Query messages can be routed in several forms. A single component best suited to answer the query can handle it (a Point-to-Point query); you can dispatch a query to several handlers and aggregate the results (a Scatter-Gather query); or you can subscribe to query models with a Subscription query, which is essentially a point-to-point query followed by a Flux of updates (see the sketch below). Clearly, query dispatching means you receive a result from some component; you simply have freedom in the type of query you perform. If any assurance is required about the up-to-dateness of a query response, that should be part of the implementation of the query being sent and of how it is handled by the @QueryHandler annotated method.
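As a minimal sketch of the subscription-query flavour (assuming Axon 4's QueryGateway; FindOrderQuery, OrderView, orderId, and render are hypothetical names):

import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.SubscriptionQueryResult;

// Point-to-point initial result, followed by a Flux of updates to the model.
SubscriptionQueryResult<OrderView, OrderView> result =
        queryGateway.subscriptionQuery(
                new FindOrderQuery(orderId),
                ResponseTypes.instanceOf(OrderView.class),  // initial response type
                ResponseTypes.instanceOf(OrderView.class)); // update response type

result.initialResult().subscribe(view -> render(view));  // the point-to-point part
result.updates().subscribe(update -> render(update));    // subsequent updates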
Hope this provides some additional clarity about what each of the message types does in an Axon application!

What is the behaviour of an asynchronous transaction in an Infinispan cache?

I'm investigating asynchronous transactions in order to improve performance.
Could you please explain the behaviour of a transaction in a replicated asynchronous cache?
Suppose I have a transaction composed of operations in which every operation depends on the previous one (i.e. the order of execution of the operations is important).
For instance, consider a transaction T that reads data1, uses it to build data2, and then writes data2 to the cache:
TRANSACTION T {
  // operation 1
  data1 = get(key1);
  // operation 2
  data2 = elaborate(data1);
  // operation 3
  put(key2, data2);
}
In other words, I need entire transactions to be executed asynchronously, but the operations performed inside a transaction must remain synchronous.
Is that possible? If so, how do I have to configure Infinispan?
Many thanks,
Francesco Sclano
I guess you've wrapped those operations in tm.begin(); try { ... tm.commit(); } catch (...) { tm.rollback(); }.
There are a few other options, so I'll assume you use the default: optimistic transactions with a two-phase commit. I'd also recommend enabling the write skew check - this should be the default in Infinispan 7, but in prior versions you had to enable it explicitly, and some operations behave weirdly without it.
As for the get, it is always synchronous - you have to wait for the response.
Before commit, the put basically just does another get (in order to return the previous value) and records that the transaction should perform the write during commit.
Then, during commit, the PrepareCommand, which locks all updated entries, is sent asynchronously - therefore you can't know whether it succeeded or failed (usually a failure is due to a timeout, but it can also be due to the write skew check or a changed value in a conditional command).
In the second phase, the CommitCommand, which overwrites the entries and releases the locks, is sent synchronously after that - so you wait until the response is received. However, if you have <transaction useSynchronization="true" /> (the default), the commit succeeds even if this command fails somewhere.
In order to send the CommitCommand (or RollbackCommand) asynchronously, you need to configure <transaction syncCommitPhase="false" syncRollbackPhase="false" />.
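For reference, a programmatic equivalent might look roughly like this (a sketch against the Infinispan 6-era ConfigurationBuilder API; these method names changed in later versions):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.util.concurrent.IsolationLevel;

Configuration cfg = new ConfigurationBuilder()
        .clustering().cacheMode(CacheMode.REPL_ASYNC)        // replicated, asynchronous
        .transaction()
            .transactionMode(TransactionMode.TRANSACTIONAL)
            .lockingMode(LockingMode.OPTIMISTIC)             // the default discussed above
            .syncCommitPhase(false)                          // asynchronous CommitCommand
            .syncRollbackPhase(false)                        // asynchronous RollbackCommand
        .locking()
            .isolationLevel(IsolationLevel.REPEATABLE_READ)  // needed for the write skew check
            .writeSkewCheck(true)                            // recommended above
        .build();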
I hope I've interpreted the sources correctly - and don't ask me why it is done this way. Regrettably, I am not sure whether you can configure the semantics "commit or roll back this transaction reliably, but don't report the result and let me continue" out of the box.
EDIT: The commit in asynchronous mode should be one-phase, as you can't check the result anyway. Therefore it's possible that concurrent writes get reordered on different nodes, leaving the cluster inconsistent - but you are not left waiting for the transaction to complete.
Anyway, if you want to execute the whole block atomically and asynchronously, nothing is easier than wrapping the code in a Runnable and executing it on your own thread pool - against a synchronous cache.
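A minimal sketch of that last suggestion (elaborate is a hypothetical in-transaction computation, and cache is assumed to be a synchronous, transactional Cache instance):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.transaction.TransactionManager;

ExecutorService pool = Executors.newFixedThreadPool(4);
TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

pool.submit(() -> {
    try {
        tm.begin();
        Object data1 = cache.get(key1);   // synchronous read
        Object data2 = elaborate(data1);  // ordered after the read
        cache.put(key2, data2);           // synchronous write
        tm.commit();                      // the whole block commits atomically
    } catch (Exception e) {
        try { tm.rollback(); } catch (Exception ignored) { }
    }
});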

A MailboxProcessor that operates with LIFO logic

I am learning about F# agents (MailboxProcessor).
I am dealing with a rather unconventional problem.
I have one agent (dataSource) which is a source of streaming data. The data has to be processed by an array of agents (dataProcessor). We can consider a dataProcessor to be some sort of tracking device.
Data may flow in faster than a dataProcessor is able to process its input.
It is OK to have some delay. However, I have to ensure that the agent stays on top of its work and does not get buried under obsolete observations.
I am exploring ways to deal with this problem.
The first idea is to implement a stack (LIFO) in dataSource: dataSource would send over the latest observation available whenever a dataProcessor becomes available to receive and process data. This solution may work, but it may get complicated, as the dataProcessor may need to be blocked and re-activated and to communicate its status to dataSource, leading to a two-way communication problem. This may boil down to a blocking queue in the producer-consumer problem, but I am not sure.
The second idea is to have dataProcessor take care of message sorting. In this architecture, dataSource would simply post updates to dataProcessor's queue, and dataProcessor would use Scan to fetch the latest data available in its queue. This may be the way to go. However, I am not sure whether, in the current design of MailboxProcessor, it is possible to clear a queue of messages, deleting the older, obsolete ones. Furthermore, here it is written that:
Unfortunately, the TryScan function in the current version of F# is broken in two ways. Firstly, the whole point is to specify a timeout, but the implementation does not actually honor it. Specifically, irrelevant messages reset the timer. Secondly, as with the other Scan function, the message queue is examined under a lock that prevents any other threads from posting for the duration of the scan, which can be an arbitrarily long time. Consequently, the TryScan function itself tends to lock up concurrent systems and can even introduce deadlocks, because the caller's code is evaluated inside the lock (e.g. posting from the function argument to Scan or TryScan can deadlock the agent when the code under the lock blocks waiting to acquire the lock it is already under).
Having the latest observation bounced back may be a problem.
The author of this post, Jon Harrop, suggests that:
I managed to architect around it and the resulting architecture was actually better. In essence, I eagerly Receive all messages and filter using my own local queue.
This idea is surely worth exploring, but before starting to play around with code, I would welcome some input on how I could structure my solution.
Thank you.
It sounds like you might need a destructive-scan version of the mailbox processor. I implemented this with TPL Dataflow in a blog series that you might be interested in.
My blog is currently down for maintenance, but I can point you to the posts in markdown format:
Part1
Part2
Part3
You can also check out the code on github
I also wrote about the issues with Scan in my "lurking horror" post.
Hope that helps...
tl;dr I would try this: take the Mailbox implementation from FSharp.Actor or Zach Bray's blog post, replace the ConcurrentQueue with a ConcurrentStack (plus add some bounded-capacity logic), and use this modified agent as a dispatcher to pass messages from dataSource to an army of dataProcessors implemented as ordinary MBPs or Actors.
tl;dr2 If workers are a scarce and slow resource, and we need to process the message that is the latest at the moment a worker becomes ready, then it all boils down to an agent with a stack instead of a queue (with some bounded-capacity logic) plus a BlockingQueue of workers. The dispatcher dequeues a ready worker, pops a message from the stack, and sends the message to that worker. After the job is done, the worker enqueues itself back into the queue when it becomes ready (e.g. before let! msg = inbox.Receive()). The dispatcher's consumer thread then blocks until any worker is ready, while the producer thread keeps the bounded stack updated. (A bounded stack could be done with an array + offset + size inside a lock; the one below is overly complex.)
Details
MailboxProcessor is designed to have only one consumer. This is even commented on in the source code of the MBP here (search for the word 'DRAGONS' :) ).
If you post your data to an MBP, then only one thread can take it from the internal queue or stack.
In your particular use case I would use a ConcurrentStack directly, or better, one wrapped in a BlockingCollection:
It will allow many concurrent consumers
It is very fast and thread safe
BlockingCollection has a BoundedCapacity property that lets you limit the size of the collection. It throws on Add, but you can catch the exception or use TryAdd. If A is the main stack and B is a standby: TryAdd to A; on false, Add to B and swap the two with Interlocked.Exchange; then process the needed messages in A, clear it, and make it the new standby. (Use three stacks if processing A could take longer than B takes to fill up again.) This way you never block and never lose messages, but you can discard unneeded ones in a controlled way.
BlockingCollection has methods like AddToAny/TakeFromAny, which work on arrays of BlockingCollections. This could help, e.g.:
dataSource produces messages to a BlockingCollection with a ConcurrentStack implementation (BCCS)
another thread consumes messages from the BCCS and sends them to an array of processing BCCSs. You said that there is a lot of data, so you may sacrifice one thread to block and dispatch your messages indefinitely
each processing agent has its own BCCS, or is implemented as an Agent/Actor/MBP to which the dispatcher posts messages. In your case you need to send a message to only one processor agent, so you may keep the processing agents in a circular buffer to always dispatch a message to the least recently used processor.
Something like this (see the Java sketch after the diagram):

       (data stream produces 'T)
                  |
         [dispatcher's BCCS]
                  |
 (a dispatcher thread consumes 'T and pushes it to processors,
  managing the capacity of the BCCS and the LRU queue)
       |                                  |
[processor1's BCCS/Actor/MBP] ... [processorN's BCCS/Actor/MBP]
       |                                  |
   (process)                          (process)
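The dispatcher loop itself is language-agnostic; here is a compact sketch of it in Java, with LinkedBlockingDeque standing in for the bounded ConcurrentStack and Consumer<Object> for a ready worker:

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

class LatestFirstDispatcher {
    private final BlockingDeque<Object> stack = new LinkedBlockingDeque<>(1024); // bounded LIFO
    private final BlockingQueue<Consumer<Object>> readyWorkers = new LinkedBlockingQueue<>();

    // producer side: push the newest message, discarding the oldest when full
    void post(Object msg) {
        while (!stack.offerFirst(msg)) {
            stack.pollLast(); // controlled discard of stale observations
        }
    }

    // a worker calls this whenever it is ready for its next message
    void ready(Consumer<Object> worker) {
        readyWorkers.add(worker);
    }

    // dispatcher thread: pair the latest message with the next ready worker
    void dispatchLoop() throws InterruptedException {
        while (true) {
            Consumer<Object> worker = readyWorkers.take(); // blocks until a worker is free
            Object latest = stack.takeFirst();             // blocks until a message exists
            worker.accept(latest);
        }
    }
}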
Instead of a ConcurrentStack, you may want to read about the heap data structure. If you need your latest messages by some property of the messages, e.g. a timestamp, rather than by the order in which they arrive on the stack (e.g. if there can be delays in transit so that arrival order <> creation order), you can get the latest message by using a heap.
If you still need the Agent semantics/API, you could read several sources in addition to Dave's links and somehow adapt an implementation to multiple concurrent consumers:
An interesting article by Zach Bray on an efficient Actor implementation. There you would need to replace the line execute true (under the comment // Might want to schedule this call on another thread.) with something like async { execute true } |> Async.Start, because otherwise the producing thread becomes the consuming thread - not good for a single fast producer. However, for a dispatcher like the one described above, this is exactly what is needed.
The FSharp.Actor (aka Fakka) development branch and the F# MBP source code (first link above) could be very useful for implementation details. The FSharp.Actor library has been frozen for several months, but there is some activity in the dev branch.
Don't miss the discussion about Fakka on Google Groups in this context.
I have a somewhat similar use case, and for the last two days I have researched everything I could find on F# Agents/Actors. This answer is a kind of TODO list for myself to try these ideas, half of which were born while writing it.
The simplest solution is to greedily eat all messages in the inbox when one arrives and discard all but the most recent. This is easily done using TryReceive:
let rec readLatestLoop oldMsg =
    async {
        // TryReceive 0 returns immediately: Some msg if one is queued, None otherwise
        let! newMsg = inbox.TryReceive 0
        match newMsg with
        | None -> return oldMsg
        | Some newMsg -> return! readLatestLoop newMsg
    }

let readLatest () =
    async {
        let! msg = inbox.Receive()
        return! readLatestLoop msg
    }
When faced with the same problem, I architected a more sophisticated and efficient solution that I called cancellable streaming and described in an F# Journal article here. The idea is to start processing messages and then cancel that processing if it is superseded. This significantly improves concurrency when substantial processing is being done.

Starting multiple orchestrations from a parent orchestration and passing messages to them

I have a situation where a main orchestration is responsible for processing a convoy of messages. These messages belong to a set of customers; the orchestration reads the messages as they come in, and for each new customer ID it finds, it spins up a new orchestration that is responsible for processing the messages of that particular customer. I have to preserve the order of the messages as they come in, so each newly created orchestration should process the message it has and then wait for additional messages from the main orchestration.
I have tried different ways to tackle this but have not been able to implement it successfully.
I would like to hear your opinions on how this could be done.
Thanks.
It sounds like what you want is a set of nested convoys. While it might be possible to get that working, it's going to... well, hurt. In particular, my first worry would be maintenance: any changes to the process would be a pain in the neck to make, and, much worse, deployment would really, really suck.
Personally, I would really try to find an alternative way to implement this and avoid the convoys if possible, but that would depend a lot on your specific scenario.
A few questions, if you don't mind:
What are your ordering requirements? For example, do you only need ordered processing for each customer within a single incoming batch, or across batches? If the latter, could you make do without the master orchestration and just force a single convoyed instance per customer? Still not great, but it would likely simplify things a lot.
What are your failure requirements with respect to ordering? Should a failure completely stop processing? Save the message and keep going? What about retries?
Is ordering based purely on the arrival time of the message? Is there anything in the message that you could use to force ordering internally instead of relying purely on the arrival time?
What does the processing of the individual messages do? Is the ordering requirement only there to ensure that certain preconditions are met when a specific message is processed (for example, the messages represent some tree structure that requires parents to be processed before children)?
I don't think you need a master orchestration to start up the sub-orchestrations. I am assuming you are not talking about the master orchestration implementing a convoy pattern. If that's the case, here's what I might do.
There is a brief example here of how to implement a singleton orchestration. The example shows how to set up an orchestration that will only ever exist once: all the messages going to it are lined up in order of receipt and processed one at a time. Your case differs in that you want this done per customer ID, which is pretty simple: promote the customer ID in the inbound message and add it to the correlation type. Then there will only ever be one instance of the orchestration per customer.
The problem with singletons is this: you have to kill them at some point or they will live forever as dehydrated orchestrations. So you need to make them end. You can do this if there is a way for the last message for a given customer to signal the orchestration, through an attribute or such, that it's time to die. If that is not possible, then you need to set a timer: if no messages are received in x seconds, terminate the orchestration. This is all easy to do, but it can introduce zombies. Zombies occur when an orchestration is in the process of being shut down just as another message for that customer comes in. This can usually be mitigated by tweaking the wait time, but regardless, it will cause the occasional zombie.
A note from the field: we've done this, and it's really not a great long-term solution. We were receiving customer info updates and had to ensure ordered processing. We took this singleton approach, and it has been problematic because of both the zombie issue and the exception issue: if the singleton orchestration throws an exception, it blocks the processing of all future messages for that customer, so you must handle every single possible exception. The real solution would have been for the far-end system to check the timestamps on the update messages and discard any older than the last update. We wanted to go that way, but the receiving system didn't want to do the extra work.

Practical value for the concurrent-request-timeout parameter, or options for avoiding the concurrent-access-to-conversation exception

In the Seam Reference Guide, one can find this paragraph:
We can set a sensible default for the concurrent request timeout (in ms) in components.xml:
<core:manager concurrent-request-timeout="500" />
However, we found that 500 ms is not nearly enough time for most of the cases we have to deal with, especially given the severe restrictions Seam places on conversation access.
In our application we have a combination of page-scoped AJAX requests (triggered by various user actions), some globally scoped polling notification logic (part of the header, so included on every page), and regular links that invoke actions and/or navigate to other pages.
As a result, we get the dreaded concurrent-access-to-conversation exception far too often, even without any significant load on the site.
After researching the options for quite a while, we ended up bumping this value to several seconds (we're debating whether to raise it to 10 s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all AJAX requests would still leave us exposed to a user clicking a link right when one of our polling calls is in progress). And we'd much rather have users wait a second or two than get an error page just because they clicked a link at the wrong moment.
And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations while taking care of the needed locking ourselves, for instance :)? How do people solve this problem (AJAX requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while AJAX requests are in progress (as suggested by one blog post) is really not a viable option.
Any other suggestions?
TIA,
Andrei
We use 60000 or 120000 (1-2 minutes). The concurrent request timeout is designed to avoid deadlocks, but historically we have had far more problems with timeouts than with deadlocks. A better approach is to use a client-side queue (<a4j:ajaxQueue> if you're using RichFaces) to serialize requests and remove duplicates as much as possible, then set the timeout high enough to avoid any remaining problems.
There are many serious issues resulting from Seam's concurrent request timeouts:
The issue is that the last request gets the ConcurrentRequestTimeoutException. If the user double-clicks or reloads the page, only the last request matters - so why should they get an error?
Usually the ConcurrentRequestTimeoutException is suppressed, and only secondary NullPointerExceptions and @In injection failures are shown, making debugging difficult.
Seam 2.2.1 has a severe problem where transactions, ThreadLocals, and locks may leak after a timeout occurs, especially when used with <spring:spring-transaction/>. Look at SeamPhaseListener.afterRestoreView: there's no finally block to clean up after restoreConversation fails!
In my opinion there are many poor aspects to this design, so it's best to use a much higher timeout and try to avoid the issues.
This is what we have and it works fine for us:
<core:manager concurrent-request-timeout="5000"
              conversation-timeout="120000"
              conversation-id-parameter="cid"
              parent-conversation-id-parameter="pid" />
We also use a much higher value for the concurrent-request-timeout.
At least for duplicate events, you can use settings on the a4j components to filter and delay them: eventsQueue, requestDelay, and ignoreDupResponses="true".
(For the last point, see http://docs.jboss.org/seam/2.0.1.GA/reference/en/html/conversations.html.)
Can you analyse which types of request are taking a long time? Is there a particular type for which you could reduce the request time by doing the "work" asynchronously and getting the update back in your poll?
In my opinion, AJAX requests should always complete fairly quickly; then you can calculate a maximum concurrent-request timeout as (request time x maximum number of requests likely to be initiated).
