I am currently working on a task where, once a transaction is done, we need to send out a notification to both the sender and the recipient users. This notification service is a non-Corda service; the sender and recipient should invoke this external service themselves. Our thought is that somewhere in the Corda code there is a place where we can know the transaction is done, i.e. the data is already stored in the ledger, and that this place is reached by both the sender and the recipient. But we haven't found this code yet. Can anyone provide some guidance here? Thanks a lot.
You want to use:
fun waitForLedgerCommit(hash: SecureHash): SignedTransaction
This takes as input the hash of a transaction, and suspends the flow until the transaction with this hash has been verified and sent to the vault for processing.
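For illustration, here is a minimal sketch of how this could be used from a flow. The flow name, the txId parameter and the notifyExternalService helper are hypothetical; the external HTTP call is something you would implement yourself:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.crypto.SecureHash
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC

// Hypothetical flow: both the sender and the recipient can run this after a
// transaction has been distributed, passing the transaction's hash.
@StartableByRPC
class NotifyOnCommitFlow(private val txId: SecureHash) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Suspends until the transaction with this hash has been verified
        // and recorded in this node's vault.
        val stx = waitForLedgerCommit(txId)
        // The transaction is now committed on this node, so it is safe to
        // invoke the external (non-Corda) notification service.
        notifyExternalService(stx.id.toString())
    }

    // Hypothetical helper: e.g. an HTTP POST to your notification endpoint.
    private fun notifyExternalService(committedTxId: String) {
        // ...
    }
}

Since both the sender's and the recipient's flows can call waitForLedgerCommit, the same code path works on either side.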
I am trying to update the existing message in Google Spaces, but it seems my webhook creates a new message every time. Any guidance will be appreciated.
If you look at the webhook I shared, it contains a threadKey. I successfully created a message, but I am unable to post a message into the thread: even though I am using it exactly as described by Google, for some reason my thread webhook is posting a new message and not inside the thread.
https://chat.googleapis.com/v1/spaces/AAAAxsij123/messages?&threadKey=g3kjfKDp123&key=AIzaSyDdI0hCZtE6vy123-WEfRq3CPzqKqgtHY&token=SE2dKe34qeSC6pvIc8NNCALCiUtdfo3FF5T_fWcFGT8%3D
It seems that you can only use a threadKey that you created with the same webhook, you cannot just grab the key from an existing thread to post in it. This is explained in the documentation here:
Each threadKey is unique to the app that sets it. If two different Chat apps or webhooks set the same threadKey, the messages do not thread. It is not possible to retrieve a threadKey from Chat API. The spaces.messages.thread.name field is the resource name of a thread in Chat API, not the threadKey.
This means that each webhook or Chat app has its own unique way to hash the string you used as threadKey and turn it into a "thread name", but as a result there's no (publicly available) way to do this process in reverse to get the threadKey from the "thread name". If you try to grab the name from an existing thread and add it as a key then it will just hash it again and turn it into a different one.
Essentially this means that, as the documentation explains, you have to create your own thread with an arbitrary string, then save that string to use it in the future. The arbitrary string can be anything so you can use something easy to remember. As shown in the docs:
https://chat.googleapis.com/v1/spaces/SPACE_ID/messages?threadKey=MY_KEY
Then just keep using MY_KEY in future messages and the webhook will post to the same thread. What you cannot do is grab an existing thread, look up its ID and then try to post to it. The thread has to have been created by the same webhook.
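As a concrete sketch (SPACE_ID, KEY, TOKEN and MY_KEY are placeholders for your own webhook values), posting twice with the same threadKey puts the second message into the thread created by the first:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun postToWebhook(text: String) {
    // SPACE_ID, KEY and TOKEN come from your incoming-webhook URL;
    // MY_KEY is an arbitrary string you choose once and then reuse.
    val url = "https://chat.googleapis.com/v1/spaces/SPACE_ID/messages" +
            "?key=KEY&token=TOKEN&threadKey=MY_KEY"
    val request = HttpRequest.newBuilder(URI.create(url))
        .header("Content-Type", "application/json; charset=UTF-8")
        .POST(HttpRequest.BodyPublishers.ofString("""{"text": "$text"}"""))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.statusCode())
}

fun main() {
    postToWebhook("First message: creates the thread")
    postToWebhook("Second message: lands in the same thread")
}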
I stumbled over "Handling duplicate messages using the Idempotent Consumer pattern":
Similar, but slightly different, is the Transactional Inbox pattern, which acknowledges the Kafka message receipt after an INSERT of the message into a messages table (no business transaction) has concluded successfully, and then uses background polling to detect new messages in this table and subsequently trigger the real business logic (i.e. the message listener).
Now I wonder whether there is some Spring magic that lets me just provide a special DataSource config to track all received messages and discard duplicate message deliveries?
Otherwise, the application itself would need to take care of acking the Kafka message receipt, message state changes and data cleanup of the event table, retries after failure, and probably a lot of other difficult things that I have not yet thought about.
The framework does not provide this out of the box (there is no general solution that would work for everyone), but you can implement it via a filter, to avoid putting this logic in your listener.
https://docs.spring.io/spring-kafka/docs/2.7.9/reference/html/#filtering-messages
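As a hedged sketch of such a filter (the processedStore is a stand-in for whatever persistent store you use to remember already-handled keys; a real implementation would back it with your database rather than an in-memory set):

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.springframework.kafka.listener.adapter.RecordFilterStrategy

// Discards records whose key has already been processed. Returning true
// tells Spring Kafka to drop the record before it reaches the listener.
class DedupFilterStrategy(
    private val processedStore: MutableSet<String>
) : RecordFilterStrategy<String, String> {
    override fun filter(consumerRecord: ConsumerRecord<String, String>): Boolean {
        // add() returns false if the key was already present, i.e. a duplicate.
        return !processedStore.add(consumerRecord.key())
    }
}

You would then register it on the listener container factory with factory.setRecordFilterStrategy(DedupFilterStrategy(store)).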
I am new to Rebus and am trying to get up to speed with some patterns we currently use in Azure Logic Apps. The current target implementation would use Azure Service Bus with Saga storage preferably in Cosmos DB (still investigating that sample implementation). Maybe even use Rebus Mongo DB with Cosmos DB using the Mongo DB API (not sure if that is possible though).
One major use case we have is an event/timeout pattern, and after doing some reading of samples/forums/Stack Overflow this is not uncommon. The tricky part is that our Sagas would behave more as a Finite State Machine vs. a Directed Acyclic Graph. This mainly happens because dates are externally changed and therefore timeouts for events change.
The Defer() method does not return a timeout identifier, which we assume is an implementation restriction (Azure Service Bus returns a long). Since we must ignore timeouts that had been scheduled for an event which has now shifted in time, we see a way of having those timeouts "ignored" (since they cannot be cancelled) as follows:
Use a Dictionary<string, Guid> in our own SagaData-derived base class, where the key is some derivative of the timeout message type, and the Guid is the identifier given to the timeout message when it was created. I don't believe this needs to be a concurrent dictionary but that is why I am here...
On receipt of the event message, remove the corresponding timeout message type key from the above dictionary;
On receipt of the timeout message:
Ignore it if its timeout message type key is not present, or if the Guid does not match the dictionary entry; else
Process. We could also remove the dictionary key at this point as well.
When event rescheduling occurs, simply add the timeout message type/Guid dictionary entry, or update the Guid with the new timeout message Guid.
Is this on the right track, or is there a more 'correct' way of handling defunct timeout (deferred) messages?
You are on the right track 🙂
I don't believe this needs to be a concurrent dictionary but that is why I am here...
Rebus lets your saga handler work on its own copy of the saga data (using optimistic concurrency), so you're free to model the saga data as if it were only being accessed by one handler at a time.
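To illustrate just the bookkeeping described above (this is a language-neutral sketch in Kotlin, not actual Rebus API; in practice this state would live in your SagaData-derived class and the methods would be called from your message handlers):

import java.util.UUID

// Maps a timeout message type key to the id stamped on the most recently
// scheduled timeout of that type.
class TimeoutBookkeeping {
    private val pendingTimeouts = mutableMapOf<String, UUID>()

    // When a timeout is (re)scheduled via Defer(): remember (or replace) its id.
    fun timeoutScheduled(typeKey: String, timeoutId: UUID) {
        pendingTimeouts[typeKey] = timeoutId
    }

    // When the business event arrives: any pending timeout is now defunct.
    fun eventReceived(typeKey: String) {
        pendingTimeouts.remove(typeKey)
    }

    // When a timeout message arrives: process it only if it is the latest
    // one we scheduled; otherwise it is stale and should be ignored.
    fun shouldProcessTimeout(typeKey: String, timeoutId: UUID): Boolean {
        if (pendingTimeouts[typeKey] != timeoutId) return false
        pendingTimeouts.remove(typeKey)
        return true
    }
}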
I'm designing a system with multiple bounded contexts (microservices). I will have two kinds of events.
Domain Events, which happen "in memory" within a single transaction (sync)
Integration Events, which are used between bounded contexts (async)
My problem is how to make sure that, once the transaction is committed (at which point I'm sure all Domain Events were processed successfully), the Integration Events succeed as well.
When my transaction is committed, normally I would dispatch the Integration Events (e.g. to the queue), but there is a possibility that this queue is down as well, so the just-committed transaction would have to be "reverted". How?
The only solution that comes to my mind is to store the Integration Events in the same DB, within the same transaction, and then process the Integration Event records and push them to the queue. This would be something like using the current DB as a pre-queue, before pushing to the real queue (however, I have read that using the DB for this is an anti-pattern).
Is there any pattern (reliable approach) to make sure that both the transaction commit and the message push to the queue happen as one atomic operation?
EDIT
After reading https://devblogs.microsoft.com/cesardelatorre/domain-events-vs-integration-events-in-domain-driven-design-and-microservices-architectures/ , the author actually suggests the approach of a "pre-queue" in the same DB (he calls it "ready to publish the event").
Check out the transactional outbox pattern.
This pattern does create a pre-queue. But the nice part is that pushing messages from the pre-queue to the real queue is fully decoupled. Instead, you have a middleman called a message relay that reads your transaction logs and pushes your events to the real queue. Since sending the message and your domain events are now fully decoupled, you can do all your domain events in a single transaction.
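A minimal sketch of the two halves, using plain JDBC (the table and column names, and the publishToQueue callback, are hypothetical; a real relay would also handle batching, ordering and retries):

import java.sql.Connection

// Half 1: write the business change and the outbox row in ONE local
// transaction, so either both are committed or neither is.
fun placeOrder(conn: Connection, orderId: String, payloadJson: String) {
    conn.autoCommit = false
    try {
        conn.prepareStatement("INSERT INTO orders (id) VALUES (?)").use {
            it.setString(1, orderId)
            it.executeUpdate()
        }
        conn.prepareStatement(
            "INSERT INTO outbox (event_id, payload, published) VALUES (?, ?, FALSE)"
        ).use {
            it.setString(1, orderId)
            it.setString(2, payloadJson)
            it.executeUpdate()
        }
        conn.commit()
    } catch (e: Exception) {
        conn.rollback()
        throw e
    }
}

// Half 2: the message relay. Here it simply polls the outbox table; a
// log-tailing relay would read the DB transaction log instead.
fun relayOnce(conn: Connection, publishToQueue: (String) -> Unit) {
    val pending = mutableListOf<Pair<String, String>>()
    conn.createStatement().use { st ->
        st.executeQuery(
            "SELECT event_id, payload FROM outbox WHERE published = FALSE"
        ).use { rs ->
            while (rs.next()) {
                pending.add(rs.getString("event_id") to rs.getString("payload"))
            }
        }
    }
    for ((eventId, payload) in pending) {
        // If we crash between publish and update, the event is published
        // twice on the next poll; hence consumers must be idempotent.
        publishToQueue(payload)
        conn.prepareStatement(
            "UPDATE outbox SET published = TRUE WHERE event_id = ?"
        ).use {
            it.setString(1, eventId)
            it.executeUpdate()
        }
    }
}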
And make sure that all your services are idempotent (same result despite duplicate calls). The transactional outbox pattern does guarantee that messages are published, but in the case where the message relay fails just after publishing (before acknowledging), it would publish the same event again.
Idempotent services are also necessary in other scenarios, as the event bus (the real queue) could have the same issue: the event bus propagates an event, the service acknowledges it, then a network error occurs; since the acknowledgement never reaches the event bus, the same event is sent again.
Hmm, actually idempotence alone could solve the whole issue. After the domain events computation completes (a single transaction), if publishing the message fails the service can simply throw an error without rolling back. Since the event is not acknowledged, the event bus will send the same event again. Now, since the service is idempotent, the same database transaction will not happen twice; it will basically overwrite, or better, (should) skip and move directly to message publishing and acknowledging.
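As a hedged sketch of that idempotent handler shape (processedEvents is a hypothetical store of already-handled event ids, and the two callbacks stand in for your business transaction and your publish/acknowledge step):

// On first delivery: run the business transaction, then publish and ack.
// On redelivery: skip the transaction and go straight to publish/ack.
fun handleEvent(
    eventId: String,
    processedEvents: MutableSet<String>,
    applyBusinessTx: () -> Unit,
    publishAndAck: () -> Unit
) {
    if (processedEvents.add(eventId)) {
        applyBusinessTx() // runs at most once per eventId
    }
    publishAndAck()
}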
We are capturing each new committed state in the vault through the vaultTrack method on the Corda RPC proxy, for use in our log recording. Although it's working properly, we think it might cause some overhead on the network connection. So we decided to try using the ServiceHub in the CorDapp to capture the new events instead. Unfortunately, the event keeps firing every time the flow is called (based on the observable concept?). Maybe we did not set it up properly? Based on your experience and expertise, could you
Suggest what went wrong; and
The corresponding solutions?
More details here:
We are using the CorDapp's logs for a performance benchmark; therefore, we are focusing only on new committed-state events. At the API endpoint where we started, we used vaultTrack over RPC to record each new committed-state event, as shown in the example below:
Although the API seems to work properly, we think it might consume RPC connection resources and hurt overall performance, since the observable is invoked every time a new state is committed. Please correct us if we're wrong. As such, we decided to log the events in the flow instead.
In the CorDapp, we used the VaultService from the ServiceHub to record each new committed-state event in the "call" function of the flow initiator, as shown in the example below:
We found that the log recording in the CorDapp, i.e. in the flow (from the serviceHub mentioned above), keeps producing duplicated log entries every time the flow is called. From our initial investigation, the problem is that "vaultService" gets subscribed to again every time the flow is initiated. Therefore, we switched back to the API endpoint method. Could you please advise us on the right way to capture the event of a newly committed state in the CorDapp, so we can log it during our performance testing?
The approach of subscribing to a vault observable within a flow will not work. Once the flow ends, the subscription will not be terminated; every time you run the flow, an additional subscriber will be added. This will degrade performance (whereas the overhead of the RPC approach below is generally quite low, as long as the states serialise quickly enough).
You should observe updates to the vault using an RPC client instead. Here is an example:
import net.corda.client.rpc.CordaRPCClient

// Connect to the node over RPC. `nodeAddress`, `rpcUserUsername` and
// `rpcUserPassword` are your node's connection details.
val client = CordaRPCClient(nodeAddress)
val proxy = client.start(rpcUserUsername, rpcUserPassword).proxy

// Track IOUState updates in the vault: a snapshot of the current states
// plus an observable of future updates.
val (snapshot, updates) = proxy.vaultTrack(IOUState::class.java)

// Log the existing IOUStates and listen for new ones.
snapshot.states.forEach { logState(it) }
updates.toBlocking().subscribe { update ->
    update.produced.forEach { logState(it) }
}
When you call start on the CordaRPCClient, you will connect to the node's Artemis message queue. This message queue will be used to stream updates from the vault back to the client over time.
In the example above, the vault updates are simply logged. You can change this behaviour as required (e.g. to call an API whenever an update is produced).
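For instance, a hedged variant of the subscription above, where each produced state triggers a call to a (hypothetical) postToNotificationService helper instead of a log line:

updates.toBlocking().subscribe { update ->
    update.produced.forEach { stateAndRef ->
        // Hypothetical helper: e.g. an HTTP POST to your external service.
        postToNotificationService(stateAndRef.state.data.toString())
    }
}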