We are capturing each newly committed state in the vault through the vaultTrack method on the Corda RPC proxy and recording it in our logs. Although this works properly, we think it might add some overhead to the network connection. So we decided to try using the ServiceHub in the CorDapp to capture the new event instead. Unfortunately, the event keeps firing repeatedly every time the flow is called (because of how observables work?). Maybe we did not set it up properly? Based on your experience and expertise, could you
Suggest what went wrong; and
The corresponding solutions?
More details here:
We are using the CorDapp's logs for a performance benchmark, so we are focusing only on the new-committed-state event. In the API endpoint where we started, we use vaultTrack over RPC to record each newly committed state event, roughly as in the example below:
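(Simplified illustration only - the service class and the logging calls are placeholders rather than our exact code.)
import net.corda.core.contracts.ContractState
import net.corda.core.messaging.CordaRPCOps
import org.slf4j.LoggerFactory

class StateLoggingService(private val proxy: CordaRPCOps) {
    private val logger = LoggerFactory.getLogger(StateLoggingService::class.java)

    fun startTracking() {
        // vaultTrack returns the current snapshot plus an observable of future updates.
        val (snapshot, updates) = proxy.vaultTrack(ContractState::class.java)
        snapshot.states.forEach { logger.info("Existing state: ${it.state.data}") }
        updates.subscribe { update ->
            update.produced.forEach { logger.info("Committed state: ${it.state.data}") }
        }
    }
}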
Although the API seems to work properly, we think it might add RPC connection overhead to the overall performance, since the observable fires every time a new state is committed. Please correct us if we're wrong. We therefore decided to log the events in the flow instead.
In the CorDapp, we use the VaultService from the ServiceHub to record each newly committed state event in the call function of the flow initiator, roughly as in the example below:
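(Simplified illustration only - the flow name and the logging call are placeholders rather than our exact code.)
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.StartableByRPC

@InitiatingFlow
@StartableByRPC
class RecordCommittedStatesFlow : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // A new subscription is registered here on every invocation of the flow.
        serviceHub.vaultService.updates.subscribe { update ->
            update.produced.forEach { logger.info("New committed state: ${it.state.data}") }
        }
        // ... build, sign and finalise the transaction as usual ...
    }
}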
We found that the log recording in the CorDapp, i.e. in the flow (from the serviceHub mentioned above), gains a duplicated log entry every time the flow is called. From our initial investigation, the problem is that "vaultService" gets subscribed to again every time the flow is initiated. We therefore switched back to the API endpoint method. Could you please advise us on the right way to capture this event in the CorDapp, so that we can log each newly committed state during our performance testing?
The approach of subscribing to a vault observable within a flow will not work. Once the flow ends, the subscription will not be terminated. Every time you run the flow, an additional subscriber will be added. This will degrade performance (although the RPC overhead is generally quite low as long as the states serialise quickly enough).
You should observe updates to the vault using an RPC client instead. Here is an example:
val client = CordaRPCClient(nodeAddress)
val proxy = client.start(rpcUserUsername, rpcUserPassword).proxy
// Track IOUState updates in the vault
val (snapshot, updates) = proxy.vaultTrack(IOUState::class.java)
// Log the existing IOUStates and listen for new ones.
snapshot.states.forEach { logState(it) }
updates.toBlocking().subscribe { update ->
    update.produced.forEach { logState(it) }
}
When you call start on the CordaRPCClient, you will connect to the node's Artemis message queue. This message queue will be used to stream updates from the vault back to the client over time.
In the example above, the vault updates are simply logged. You can change this behaviour as required (e.g. to call an API whenever an update is produced).
I'm designing a system with multiple bounded contexts (microservices). I will have two kinds of events.
Domain Events, which happen "in memory" within a single transaction (sync)
Integration Events, which are used between bounded contexts (async)
My problem is how to make sure that, once the transaction is committed (at which point I'm sure all Domain Events were processed successfully), the Integration Events are successful as well.
When my transaction is committed, I would normally dispatch the Integration Events (e.g. to the queue), but there is a possibility that this queue is down as well, so the just-committed transaction would have to be "reverted". How?
The only solution that comes to mind is to store the Integration Events in the same DB, within the same transaction, and then process those Integration Event records and push them to the queue - something like using the current DB as a "pre-queue" before pushing to the real queue (however, I've read that using the DB for this is an anti-pattern).
Is there any pattern (reliable approach) to ensure that committing the transaction and pushing the message to the queue happen as one atomic operation?
EDIT
After reading https://devblogs.microsoft.com/cesardelatorre/domain-events-vs-integration-events-in-domain-driven-design-and-microservices-architectures/, the author actually suggests the "pre-queue" approach in the same DB (he calls it "ready to publish the event").
Check out the transactional outbox pattern.
This pattern does create a pre-queue, but the nice part is that pushing messages from the pre-queue to the real queue is fully decoupled. Instead, you have a middleman, called a message relay, that reads your outbox records (for example by polling the table or tailing the transaction log) and pushes your events to the real queue. Since sending messages and your domain events are now fully decoupled, you can do all your domain events in a single transaction.
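A minimal sketch of the idea (the repositories, the transaction wrapper and the queue client are illustrative placeholders, not from any particular framework):
import java.util.UUID

data class OutboxMessage(val id: UUID, val type: String, val payload: String)

interface OrderRepository { fun save(order: Any) }
interface OutboxRepository {
    fun save(message: OutboxMessage)
    fun findUnpublished(): List<OutboxMessage>
    fun markPublished(id: UUID)
}
interface MessageQueueClient { fun send(type: String, payload: String) }
interface Transactions { fun <T> inTransaction(block: () -> T): T }

class OrderService(
    private val orders: OrderRepository,
    private val outbox: OutboxRepository,
    private val tx: Transactions
) {
    fun placeOrder(order: Any, orderJson: String) {
        // The domain change and the outbox record are written in the SAME database transaction.
        tx.inTransaction {
            orders.save(order)
            outbox.save(OutboxMessage(UUID.randomUUID(), "OrderPlaced", orderJson))
        }
    }
}

// A separate message relay (its own thread or process) moves outbox rows to the real queue.
class OutboxRelay(private val outbox: OutboxRepository, private val queue: MessageQueueClient) {
    fun publishPending() {
        outbox.findUnpublished().forEach { msg ->
            queue.send(msg.type, msg.payload) // may run more than once, so consumers must be idempotent
            outbox.markPublished(msg.id)
        }
    }
}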
And make sure that all your services are idempotent (same result despite duplicate calls). The transactional outbox pattern does guarantee that messages are published, but if the message relay fails just after publishing (before acknowledging), it will publish the same event again.
Idempotent services are also necessary in other scenarios, as the event bus (the real queue) can have the same issue: the event bus propagates an event, the service acknowledges it, then a network error occurs; since the event bus never receives the acknowledgement, the same event is sent again.
Hmm, actually idempotence alone could solve the whole issue. After the domain event computation completes (a single transaction), if publishing the message fails, the service can simply throw an error without rolling back. Since the event is not acknowledged, the event bus will send the same event again. Now, since the service is idempotent, the same database transaction will not happen twice; it will basically overwrite or, better, skip the work and move directly to publishing the message and acknowledging.
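A short sketch of that idempotent handling on the consumer side (the store and handler names are illustrative):
// Processed event IDs are recorded so a redelivered event is acknowledged without repeating the work.
interface ProcessedEventStore {
    fun alreadyProcessed(eventId: String): Boolean
    fun markProcessed(eventId: String)
}

class PaymentEventHandler(private val processed: ProcessedEventStore) {
    fun handle(eventId: String, payload: String) {
        if (processed.alreadyProcessed(eventId)) {
            return // duplicate delivery: skip straight to acknowledging
        }
        // ... apply the state change for this event ...
        processed.markProcessed(eventId)
    }
}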
I am looking at the AxonIQ framework and have managed to get a test application up and running. But I have a question about how EventHandlers should be treated when using a store that has persistence in the Read Model.
From my (possibly naive) understanding, @EventHandler-annotated methods in my Projection class get called from the first event onwards when the application is first launched. This mechanism seems to assume that the Projection uses some kind of volatile store (e.g. an in-memory SQL database like H2) which is re-created from scratch during application boot-up.
However, if the store were persistent, in something like Elasticsearch, I would want the @EventHandler to resume from the last persisted event instead of from the beginning.
Is there any way to control the behaviour of the @EventHandler in this way?
Axon has two types of Event Processors: Subscribing and Tracking.
The Subscribing mode (which was the default up to Axon 3) will handle events in the thread that delivers them. That means you're at "the mercy" of the delivery guarantees of whichever component delivers the events.
The Tracking mode (which is the default since Axon 4 when using an Event Store or otherwise a source that supports it) will have events handled in dedicated threads, managed by the Event Processor itself. That means events are handled asynchronously from the actual publication mechanism.
The Tracking Event Processor uses Tokens to keep track of progress. These Tokens are stored in a TokenStore and updated as the Processor correctly processes each incoming event (possibly in batches). You decide where those tokens are stored. If you update a relational database, we recommend storing the tokens in the same database, so that your read model changes and the tokens are updated atomically.
If you don't specify any TokenStore, what you get depends on whether you're on Spring Boot, in which case Axon will attempt to detect a suitable TokenStore implementation for you. Otherwise, it may very well just be an in-memory TokenStore, which causes Processors to re-initialize on every startup (and possibly start from the beginning).
To configure a TokenStore:
On Spring (Boot), simply add a bean of type TokenStore with the implementation you want to use (a sketch follows this list);
When using Axon's Configuration API, on the EventProcessingConfigurer, use one of the registerTokenStore(...) methods.
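For example, on Spring Boot a JPA-backed token store might be registered roughly like this (a sketch assuming Axon 4 with JPA on the classpath; the configuration class name is illustrative):
import org.axonframework.common.jpa.EntityManagerProvider
import org.axonframework.eventhandling.tokenstore.TokenStore
import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore
import org.axonframework.serialization.Serializer
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class TokenStoreConfig {

    // Stores tracking tokens in the same JPA-managed database as the projections.
    @Bean
    fun tokenStore(entityManagerProvider: EntityManagerProvider, serializer: Serializer): TokenStore =
        JpaTokenStore.builder()
            .entityManagerProvider(entityManagerProvider)
            .serializer(serializer)
            .build()
}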
When the Tracking Processor starts, it will check the Token Store for previous progress, and continue from there automatically.
In the scenario below, what would be the behaviour of Axon?
The Command Bus receives the command
It creates an event
However, the messaging infrastructure is down (say, Kafka)
Does Axon have a re-queueing capability for events, or any other alternative to handle this scenario?
If you're using Axon, you know it differentiates between Command, Event and Query messages. I'd suggest being specific in your question about which message type you want to retry.
However, I am going to assume it's about events, as you mention Kafka.
If this is the case, I'd highly recommend reading the reference guide on the matter, as it explains how you can decouple Kafka publication from the actual event storage in Axon.
Simply put, use a TrackingEventProcessor as the means to publish events to Kafka, as this ensures a dedicated thread is used for publication instead of the same thread that stores the event. Additionally, a TrackingEventProcessor can be replayed and thus "re-process" events.
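For illustration, a rough sketch of the idea (this is not the Axon Kafka extension's own API; the handler, topic name and producer wiring are placeholders): an event handler in its own processing group forwards events to Kafka, and that group is run as a tracking processor so publication happens on a dedicated thread and can be retried from the last token.
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
import org.axonframework.config.EventProcessingConfigurer
import org.axonframework.config.ProcessingGroup
import org.axonframework.eventhandling.EventHandler
import org.axonframework.eventhandling.PropagatingErrorHandler

// Forwards every event it sees to a Kafka topic.
@ProcessingGroup("kafka-publisher")
class KafkaForwardingHandler(private val producer: KafkaProducer<String, String>) {

    @EventHandler
    fun on(event: Any) {
        // If Kafka is down this throws; with the propagating error handler registered below,
        // the tracking processor keeps its token and retries rather than losing the event.
        producer.send(ProducerRecord("domain-events", event.toString())).get()
    }
}

// Register the handler and run its processing group as a tracking processor.
fun configureKafkaPublisher(configurer: EventProcessingConfigurer, producer: KafkaProducer<String, String>) {
    configurer.registerEventHandler { KafkaForwardingHandler(producer) }
    configurer.registerTrackingEventProcessor("kafka-publisher")
    // Propagate handler exceptions so the processor retries instead of skipping the event.
    configurer.registerListenerInvocationErrorHandler("kafka-publisher") { PropagatingErrorHandler.instance() }
}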
I am trying to implement the following use-case in Corda:
FlowA has been invoked on PartyA via startFlowDynamic. FlowA creates a partially signed transaction and invokes FlowB on PartyB via sendAndReceive. A human user shall now review and manually approve this transaction. Ideally, FlowB should suspend after receiving the transaction. I would like to be able to query for suspended instances of FlowB via RPC, and present those (or rather some representation of the transaction therein) to the user in my UI. Then, after the user gives their approval, I would like to resume FlowB via RPC, which would then sign the transaction and return it to FlowA on PartyA.
I noticed that I can inspect suspended flows to some degree via CordaRPCOps.stateMachineAndUpdates, and I read the tutorial on progress tracking, but it doesn't quite suffice for my case. I also read that interacting with people from flows is listed as a future feature, so I just wondered if there isn't already some way to accomplish this?
See the Negotiation CorDapp sample for an example of how this would work in practice.
Corda doesn't currently support suspending a flow for user interaction.
However, you can support this kind of workflow as follows. Suppose you're writing a CorDapp for loan applications. You could have an initial flow that agrees the creation of a loanApplication state between two parties. From there, the approver can inspect the loan application, and either kick off an approve flow that creates a transaction to transform the loanApplication into an approvedLoan state, or kick off a reject flow to consume the loanApplication state without issuing an approvedLoan state.
Equally, you could add a status field to the loan state, specifying whether the loan is approved or not. Initially, the loan state would have the field set to unapproved. Then the approver could kick off one of two flows to update the loan state, to either have an approved or a rejected status.
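For example, such a state might look roughly like this (a sketch; the class, field and party names are illustrative, not from an actual sample):
import net.corda.core.contracts.Amount
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party
import java.util.Currency

enum class LoanStatus { UNAPPROVED, APPROVED, REJECTED }

// The approve/reject flows would consume the UNAPPROVED state and reissue it with an updated status.
data class LoanApplicationState(
    val applicant: Party,
    val approver: Party,
    val amount: Amount<Currency>,
    val status: LoanStatus = LoanStatus.UNAPPROVED,
    override val participants: List<AbstractParty> = listOf(applicant, approver)
) : ContractState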
I'm not sure if this is a "recommended approach", but I implemented a Quasar-compatible AsyncListenableFuture in my flow as someone else had described.
I needed to suspend a flow and wait for the production of a state from another flow (in response to a user interaction). It seems to work, but I suspect it could be regarded as rather off-piste (?!).
Splitting the activities into atomic flows invoked directly by UI interaction is fine, but I needed a sort of "monitoring" flow to wait for an external (e.g. user) event before determining which sub-flow to initiate next, and this needed to happen automatically and from within a flow already invoked prior to the user interaction - the flow logic is then conditional on a state change which may arise from a user interaction or from an incoming transaction from another node. In my case, this high-level monitoring flow detects the consumption of a known state on the node, then invokes a subflow in response. The high-level flow waits on the AsyncListenableFuture as described in the answer referenced above. I created a composite VaultQuery on an attribute of the contract state types of interest (e.g. custom field X = Y), and converted the returned observable (from trackBy.future) to a Quasar-compatible AsyncListenableFuture. When the state is consumed by a transaction created by a flow triggered by the external action, the future completes and the automatic action (in my case the creation of another transaction with another party) is executed.
I'm only experimenting with / evaluating Corda, and I'm not sure how robust this approach would be in production, but it seems to work OK; hope this helps in some way.
Some form of higher-level workflow flow in Corda, which could wait on external events and conditionally invoke other flows depending on the external action, would be of real interest in my context.
I have a chat application using Firebase and Node.js where I keep track of presence by running a single worker thread on the server that monitors child_added and child_deleted events on the Firebase presence channel and updates our presence database tables accordingly.
My question is this - now that Firebase Queue exists (https://www.firebase.com/blog/2015-05-15-introducing-firebase-queue.html):
Can I use the queue to replace the worker thread that I have running on the server to monitor presence and child_added events? Looking at the current examples, it looks like I would create a reference to the queue on the client and then set onDisconnect and connect handlers to push into that queue from the client? However, I'd like to secure it a bit more and not rely on the client so much. I'd also like to have the queue process the event by archiving it to a third-party logging service - with credentials or details I wouldn't want to expose to the client.
Does this mean I would still need a server-side worker process - and if so, what benefit would Firebase Queue be in this use case?
Firebase Queue is not a hosted solution - you still need to run it on your own server.
The main advantage of using a queue over a single listener process is the ability to run multiple workers for the same tasks so there's not a single point of failure. Using the queue you'll know that the worker processes are synchronized such that only one worker will be processing a task at any one point in time, and if a worker dies during processing or takes too long, another worker will pick it up again once the task has timed out.
It sounds like you're trying to create some kind of audit trail for presence, but there's currently no way to report presence directly from the server - you'll need to rely on the client at some point. Your security rules can enforce that a write is a boolean to a specific location in your database, but they can't enforce that the client was in any particular presence state when writing it. Also note that there's no push or childByAutoId equivalent for onDisconnect handlers, so to push to a queue you'd have to do something like:
var ref = new Firebase(…);
var disconnectTask = {};
var pushId = ref.push().key(); // This just generates the ID and does no network traffic
disconnectTask[pushId] = { /* populate with task data here */ };
ref.onDisconnect().update(disconnectTask);
Note that the push ID will be generated client-side before the operation is sent to the server and so the task won't necessarily be in order when added to the queue.