How to set an activity as idempotent in Oracle SOA Suite

I read a document from Oracle that explains what idempotent means in BPEL.
13.3.2 Partner Link Property
You can dynamically configure a partner link at runtime in BPEL. This is useful for scenarios in which the target service that BPEL wants to invoke is not known until runtime. The following Partner Link properties can be tuned for performance:
13.3.2.1 idempotent
An idempotent activity is an activity that can be retried (for example, an assign activity or an invoke activity). Oracle BPEL Server saves the instance after a nonidempotent activity. This property is applicable to both durable and transient processes.
Values:
This property has the following values:
False: Activity is dehydrated immediately after execution and recorded in the dehydration store. When idempotent is set to False, it provides better failover protection, but may impact performance if the BPEL process accesses the dehydration store frequently.
True (default): If Oracle BPEL Server fails, it performs the activity again after restarting. This is because the server does not dehydrate immediately after the invoke and no record exists that the activity executed. Some examples of where this property can be set to True are: read-only services (for example, CreditRatingService) or local EJB/WSIF invocations that share the instance's transaction.
But I wonder: is there any way to set an activity as idempotent or non-idempotent, at design time and at runtime?

The idempotent property can be set at the partnerLink operation level using the deployment descriptor property idempotent.
Refer to Section C.1, Introduction to Deployment Descriptor Properties, of the SOA developer's guide.
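For illustration, a minimal sketch of what that deployment descriptor property looks like in composite.xml (the component and partner link names here are assumptions, not from the question):

    <!-- snippet of composite.xml, not a complete file -->
    <component name="MyBPELProcess">
      <implementation.bpel src="MyBPELProcess.bpel"/>
      <!-- bpel.partnerLink.<partnerLinkName>.idempotent: false forces a
           dehydration point after invokes through this partner link -->
      <property name="bpel.partnerLink.CreditRatingService.idempotent">false</property>
    </component>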

How is the DefaultKafkaProducerFactory cache managed for transactions?

In the Spring Kafka documentation (https://docs.spring.io/spring-kafka/docs/2.3.3.RELEASE/reference/html/#transactions) it mentions:
Transactions are enabled by providing the DefaultKafkaProducerFactory with a transactionIdPrefix. In that case, instead of managing a single shared Producer, the factory maintains a cache of transactional producers. When the user calls close() on a producer, it is returned to the cache for reuse instead of actually being closed. The transactional.id property of each producer is transactionIdPrefix + n
How is this cache configured, e.g. the producer pool size?
Does it dynamically create a new producer when there are no available producers in the cache for a given transaction?
It depends on whether the transaction is producer-only, and on the producerPerConsumerPartition property, which is true by default (for consumer-initiated transactions).
That property exists to support EOSMode.ALPHA (or the fallback to ALPHA when BETA is used but the broker is older than 2.5).
See the Spring Kafka reference documentation for more information about exactly-once semantics.
When using producerPerConsumerPartition=false and for producer-only transactions, there is no limit to the cache size; new producers are created when the cache is empty, and returned to the cache when "closed".
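As a rough sketch of those settings (the broker address and prefix are placeholders, and a recent Spring Kafka 2.x where setProducerPerConsumerPartition(...) is available is assumed):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;

    public class TransactionalFactorySketch {

        public static DefaultKafkaProducerFactory<String, String> producerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

            DefaultKafkaProducerFactory<String, String> factory =
                    new DefaultKafkaProducerFactory<>(props);

            // Enables transactions; each cached producer gets transactional.id = "tx-" + n
            factory.setTransactionIdPrefix("tx-");

            // false => one unbounded cache: producers are created on demand when the
            // cache is empty and returned to it when close() is called
            factory.setProducerPerConsumerPartition(false);
            return factory;
        }
    }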

AxonIQ Event Handler resuming from offset

I am looking at the AxonIQ framework and have managed to get a test application up and running. But I have a question about how event handlers should be treated when using a store that has persistence in the read model.
From my (possibly naive) understanding, @EventHandler-annotated methods in my projection class get called from the beginning when first launched. This mechanism seems to assume that the projection uses some kind of volatile store (e.g. an in-memory SQL database like H2) which is re-created from scratch during application boot.
However, if the store were persistent, in something like Elasticsearch, I would want the @EventHandler to resume from its last persisted event instead of from the beginning.
Is there any way to control the behaviour of the @EventHandler in this way?
Axon has two types of Event Processors: Subscribing and Tracking.
The Subscribing mode (which was the default up to Axon 3) will handle events in the thread that delivers them. That means you're at "the mercy" of the delivery guarantees of whichever component delivers the events.
The Tracking mode (which is the default since Axon 4 when using an Event Store or otherwise a source that supports it) will have events handled in dedicated threads, managed by the Event Processor itself. That means events are handled asynchronously from the actual publication mechanism.
The Tracking Event Processor uses Tokens to keep track of progress. These Tokens are stored in a TokenStore and updated as the Processor correctly processes each incoming event (possibly in batches). You decide where those tokens are stored. If you update a relational database, we recommend storing the tokens in the same database, so that view changes and tokens are updated atomically.
If you don't specify any TokenStore, the default depends on your setup: on Spring Boot, Axon will attempt to detect a suitable TokenStore implementation for you; otherwise, it may very well just be an in-memory TokenStore, which causes Processors to re-initialize on every startup (and possibly start from the beginning).
To configure a TokenStore:
- On Spring (Boot), simply add a bean of type TokenStore with the implementation you want to use.
- When using Axon's Configuration API, use one of the registerTokenStore(...) methods on the EventProcessingConfigurer.
When the Tracking Processor starts, it will check the Token Store for previous progress, and continue from there automatically.
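For instance, a minimal sketch of the Spring Boot route, assuming Axon 4 with a JPA-backed read model (the wiring details vary per project):

    import javax.persistence.EntityManager;

    import org.axonframework.common.jpa.SimpleEntityManagerProvider;
    import org.axonframework.eventhandling.tokenstore.TokenStore;
    import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
    import org.axonframework.serialization.Serializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class TokenStoreConfig {

        // Axon's Spring Boot auto-configuration picks this bean up, so tokens
        // live in the same database as a JPA-based projection and survive restarts.
        @Bean
        public TokenStore tokenStore(EntityManager entityManager, Serializer serializer) {
            return JpaTokenStore.builder()
                    .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                    .serializer(serializer)
                    .build();
        }
    }

With the Configuration API, the equivalent would be a call like eventProcessingConfigurer.registerTokenStore(conf -> tokenStore).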

Datastore: Failed transactions and rollbacks: What happens if rollback is not called or fails?

What happens if a transaction fails, the application then crashes for other reasons, and the transaction is never rolled back?
Also, what happens when a rollback fails, and how should rollback failures be treated?
You don't have to worry about the impact of your app's crashes on transaction rollbacks (or any other stateful datastore operation).
The application just sends RPC requests for the operations. The actual execution of the operation steps and their sequence happens on the datastore backend side, not inside your application.
From Life of a Datastore Write:
We'll dive into a bit more detail in terms of what new data is placed in the datastore as part of write operations such as inserts, deletions, updates, and transactions. The focus is on the backend work that is common to all of the runtimes.
...
When we call put or makePersistent, several things happen behind the scenes before the call returns and sets the entity's key:
1. The my_todo object is converted into a protocol buffer.
2. The appserver makes an RPC call to the datastore server, sending the entity data in a protocol buffer.
3. If a key name is not provided, a unique ID is determined for this entity's key. The entity key is composed of app ID | ancestor keys | kind name | key name or ID.
4. The datastore server processes the request in two phases that are executed in order: commit, then apply. In each phase, the datastore server identifies the Bigtable tablet servers that should receive the data.
Now, depending on the client library you use, transaction rollback could be entirely automatic (in the ndb Python client library, for example) or could be your app's responsibility. But even if it is your app's responsibility, it's a best-effort attempt anyway. Crashing without requesting a rollback simply means that some potentially pending operations on the backend side will eventually time out instead of being actively ended. See also the related question GAE: How to rollback a transaction?
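To make the "best-effort" part concrete, here is the usual pattern with the GAE Java datastore API (the kind and property names are made up for illustration):

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.datastore.Transaction;

    public class TodoWriter {

        public void writeTodo() {
            DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
            Transaction txn = datastore.beginTransaction();
            try {
                Entity todo = new Entity("Todo"); // hypothetical kind
                todo.setProperty("text", "write the docs");
                datastore.put(txn, todo);
                txn.commit();
            } finally {
                // Best effort only: if the app crashes before reaching this line,
                // the pending transaction simply times out on the backend.
                if (txn.isActive()) {
                    txn.rollback();
                }
            }
        }
    }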

JMS Session and JPA transaction with XA

I'm using WebSphere 8.5 with EJB 3.1 and JMS Generic provider.
I need to write messages to a queue using a stateless session bean as a producer. The EJB is annotated with TransactionAttributeType.REQUIRED because I need to perform some DB inserts before I send messages to the queue, and then consume those messages, reading the records written by the producer.
The problem is that if I define a non-XA JDBC datasource, the producer writes the messages to the queue, but the server complains about a failed two-phase commit of a local resource (the datasource itself, I think) and doesn't call the onMessage method of the MDB. If I define an XA JDBC datasource, everything works.
My questions:
Is the JMS session required to be an XA resource by default? And why?
What happens if I configure my JMS connection factory to create a non-XA JMS session in a JTA transaction? Is that bad practice?
What happens if the consumer starts consuming messages while the producer is still finishing its database operations? Would the consumer see the changes to the database, since they are in the same transaction?
Thanks in advance, regards
Is the JMS session required to be an XA resource by default? And why?
You need both resources to be XA. This is a distributed transaction across two different resources: the database and the JMS queue. To participate in one and the same transaction, they both must be XA (there is an option to have one non-XA resource in the transaction, using last participant support, but I wouldn't recommend that).
If your resources are not XA, you may set the bean to NOT_SUPPORTED and handle the transactions yourself; that means managing two separate transactions, the first to the database and the second to the JMS queue. However, since the db transaction is committed first, you would have to code a compensation for it in case sending the message fails (as you cannot roll back anymore), to avoid a situation where the database state has changed but you didn't send the message.
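For the XA case, a sketch of the producer bean described in the question (the JNDI names and persistence unit are assumptions; the entity is whatever JPA class you persist):

    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class OrderProducerBean {

        @PersistenceContext(unitName = "ordersPU") // assumed persistence unit
        private EntityManager em;

        @Resource(name = "jms/MyXAConnectionFactory") // assumed; must be XA-capable
        private ConnectionFactory connectionFactory;

        @Resource(name = "jms/OrderQueue") // assumed queue
        private Queue queue;

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void createOrder(Object orderEntity) throws JMSException {
            em.persist(orderEntity); // enlists the XA datasource in the global transaction
            Connection conn = connectionFactory.createConnection();
            try {
                // Inside a JTA transaction the transacted/ack-mode arguments are
                // ignored; the XA session enlists in the same global transaction,
                // so the DB insert and the send commit (or roll back) together.
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("order created"));
            } finally {
                conn.close();
            }
        }
    }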
What happens if I configure my JMS connection factory to create a non-XA JMS session in a JTA transaction?
If another resource (e.g. the database) is part of that transaction, you will get an exception about two-phase commit support.
What happens if the consumer starts to consume messages while the producer is still finishing its operations on the database?
It's not clear to me what you are asking. If the producer first writes to the database and then writes to the queue in one XA transaction, both are committed at the same time, so the consumer cannot see the message first.
However, if you create two separate transactions (one for db access, a second for queue access), and you commit the queue first, the consumer could read the message. But in that case, the consumer will not be able to see the changes to the db while they are not committed.
Would the consumer see the changes to the database because they are in the same transaction?
The producer and consumer are not in the same transaction (the producer creates the message and commits; the consumer starts a separate transaction to read).
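And a sketch of the consuming side, to underline that the MDB runs in its own container-managed transaction (the destination name is assumed):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/OrderQueue") // assumed
    })
    public class OrderConsumerBean implements MessageListener {

        // Runs in a new transaction, separate from the producer's: the message
        // only becomes visible here once the producer's XA transaction committed,
        // so the records it references are already in the database.
        @Override
        public void onMessage(Message message) {
            // read the records written by the producer here
        }
    }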

WF4 entity status handling, entities batch processing

I have created a simple order manager workflow service (state machine) in WF4.
Order (EF entity) properties: Id, IsExport, NumOfProduct, ProductName, Status (waiting, approved, rejected).
State machine states:
1. OrderReceived (validation -> response activity)
2. Waiting (empty)
- Transitions:
update(update order activity) -> waiting state
approve(assign status field, update order and response activities) -> final state
3. Final state.
Correlation key: Order.Id
The implementation raised a few questions.
WF can manage one flow per order instance; the order flow and the order entity are in a one-to-one relation.
The first question is where and how I should implement the listing of entities according to a state filter (e.g. approved orders or waiting orders). The list should be accessible via a WCF service method.
What is the best practice for managing batch data processing? (E.g. approval of multiple orders; a "foreach" in the client is not the desired solution.)
The state of the order is represented both by the persisted state machine instance and by the entity's status field in the db.
What is the best practice for determining the state of an entity: listing the active persisted activity instances in the given state, or selecting the entities from the db (via an activity) according to a state filter parameter?
Any help would be appreciated.
Good questions!
Taking your first and third questions, there are several possible approaches to this. All require that you write a custom WCF service to enumerate the required orders. This would probably not be a WF service; it might be a REST or OData service. How would you implement the service?
You could do it entirely by querying your database through EF. This would have no dependency on WF at all, and is probably the easiest way. Your workflow would update the database record on each state change, and the service would only need to read that value.
You could rely on the tracking mechanism provided by WF, and the extensions that Ron Jacobs refers to in his answer to your question. The tracking infrastructure is described here on MSDN. It is possible to use the tracking object in memory to get the state of active workflows. However, this probably won't work well with IIS/WF services, which are automatically persisted and unloaded when dormant. You would be better off using the tracking facilities to write state records to a database. Your custom service would then just query this tracking database.
Unless you want comprehensive information about the state changes and updates that have occurred through your WF service, suggestion number one should suffice.
As for your second question, that is a little more complicated. Let's say you write a REST service that lists the orders awaiting approval. You write a Web page that displays those orders, and the user can check the orders he wants to approve. Now, the number of workflows that you need to update is the same as the number of orders he approves.
You could, as you mention, call the Web service multiple times, but for a large number of orders that would be an unnecessary overhead.
What's the alternative? You would need to write a custom service method on your non-WF service that takes an array of order ids. That service would have to call your WF service multiple times to update each one. Since the WF service is being called from another service on the same machine, you can use the .Net Named Pipe binding instead of one of the HTTP bindings so that the overhead is much less.
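An illustrative endpoint configuration for that (the service and contract names are made up):

    <system.serviceModel>
      <services>
        <service name="OrderManagement.OrderBatchService">
          <!-- net.pipe avoids the HTTP stack for same-machine service calls -->
          <endpoint address="net.pipe://localhost/OrderBatchService"
                    binding="netNamedPipeBinding"
                    contract="OrderManagement.IOrderBatchService" />
        </service>
      </services>
    </system.serviceModel>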
It's worth noting that Entity Framework doesn't support batched updates either. You'd need to write a stored procedure or custom SQL if you wanted the database update to be batched too.
Is all of this worth the effort? Probably! Using WCF and the named pipes binding is pretty standard with WF. You'll need to configure Windows Activation Service for named pipes. Also, if you're not already using AppFabric for Windows Server, have a look into it, because it adds some very good management tools for WF services.
I recently published some new samples to show how you can access the current state of the StateMachine and possible transitions. These might help you.
Windows Workflow Foundation (WF4) - Tracking State Machine Workflow Service
Windows Workflow Foundation (WF4) - Tracking State Machine
