How can I do a transaction rollback in Corda? Let's say I have a complex flow that includes two subflows. If the last one fails, I want to roll back the previous transaction. How can I do that in Corda? Or do I need to re-design my complex flow, or invalidate the previously created state myself? For example: I have a main flow, and in it I call a subflow that creates a new state (or updates some state). Now suppose the main flow fails for some reason. How do I roll back the transaction created by my earlier subflow?
Once a transaction has been notarised, it is final and cannot be rolled back. However, depending on how the transaction's contracts are written, it may be possible to consume the newly created state to create the old state again.
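If the contract allows it, this takes the form of a compensating transaction that spends the newly created state and re-issues a copy of the previous one. Below is a minimal sketch of such a flow; AssetState, AssetContract and its Revert command are hypothetical names, and whether this transition is permitted depends entirely on how your contract is written.

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.StateAndRef
import net.corda.core.flows.*
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder

// Hypothetical sketch: consume the unwanted state and re-issue a copy of the old one.
@InitiatingFlow
@StartableByRPC
class RevertAssetFlow(
    private val newStateRef: StateAndRef<AssetState>,  // the state we want to "undo"
    private val previousState: AssetState              // a copy of the state as it was before
) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val notary = newStateRef.state.notary
        val builder = TransactionBuilder(notary)
            .addInputState(newStateRef)                       // consume the new state
            .addOutputState(previousState, AssetContract.ID)  // re-create the old state
            .addCommand(AssetContract.Commands.Revert(), ourIdentity.owningKey)
        builder.verify(serviceHub)  // fails unless the contract permits this transition
        val stx = serviceHub.signInitialTransaction(builder)
        // Real code would collect counterparty signatures and pass their sessions to FinalityFlow.
        return subFlow(FinalityFlow(stx, emptyList<FlowSession>()))
    }
}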
Regarding your comment, the broadcast cannot "fail" in Corda unless one of the nodes permanently leaves the network. ACKs are used to ensure messages between nodes are always received.
I'm using 'reactor-kafka' with out-of-order commits and interval commits (manually acknowledging each message). I'm wondering what happens to messages that were polled before a rebalance while they are still being processed asynchronously on another thread (using publishOn(Schedulers.parallel()) after kafkaReceiver.receive()).
Will they be committed after the rebalance even though the partition may now be consumed by a new consumer? I want to avoid this situation, since it can lead to the same event being processed by two consumers at the same time, causing races and conflicts (which I need to avoid).
I'm okay with processing an event that was polled before the rebalance only if I don't acknowledge and commit it, because then the new consumer will process the message again after the rebalance (I'm working with an 'at least once' strategy, so that's fine).
How can I achieve this behaviour? Is checking whether the event's source partition still belongs to the assigned partitions before acknowledging a good option?
Or is there any way to force the acknowledge call to fail if the event comes from an old partition that is no longer assigned to the consumer?
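To illustrate the approach being asked about, here is a rough, unverified sketch that tracks the currently assigned partitions via assign/revoke listeners and only acknowledges records whose partition is still assigned. The topic, group id and process() function are made up, and the check is only best-effort: a rebalance can still happen between the check and the acknowledge.

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import reactor.core.scheduler.Schedulers
import reactor.kafka.receiver.KafkaReceiver
import reactor.kafka.receiver.ReceiverOptions
import java.util.concurrent.ConcurrentHashMap

// Partitions currently assigned to this consumer, maintained by the rebalance listeners.
val assigned: MutableSet<TopicPartition> = ConcurrentHashMap.newKeySet<TopicPartition>()

val options = ReceiverOptions.create<String, String>(mapOf(
        ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092",
        ConsumerConfig.GROUP_ID_CONFIG to "orders",
        ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to StringDeserializer::class.java,
        ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to StringDeserializer::class.java))
    .subscription(listOf("orders"))
    .addAssignListener { parts -> parts.forEach { assigned += it.topicPartition() } }
    .addRevokeListener { parts -> parts.forEach { assigned -= it.topicPartition() } }

fun run() {
    KafkaReceiver.create(options).receive()
        .publishOn(Schedulers.parallel())
        .doOnNext { record ->
            process(record.value())  // hypothetical business logic
            val partition = TopicPartition(record.topic(), record.partition())
            // Only acknowledge if this partition is still assigned to this consumer.
            if (partition in assigned) record.receiverOffset().acknowledge()
        }
        .subscribe()
}

fun process(value: String) { /* ... */ }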
I am new to Corda. My question is not about any particular implementation, but more of an architectural question.
What happens during back chain validation if one of the nodes involved permanently dies and fails to respond? How is that transaction validated?
I have seen this issue, but it only talks about how high transaction volume could slow down validation. Does validation come to a grinding halt if one of the nodes fails permanently?
As per the Corda webinar on Consensus, in the example about 5 minutes into the video, the back chain is Charlie -> Dan -> Alice -> Bob. In this case, if either Charlie or Dan is unavailable, the proposed transaction cannot be validated. The same webinar further says that this is not a problem in other blockchains such as Ethereum.
Applications that can foresee the need for a highly available record keeper can surely accommodate such a node during the design phase, as suggested by Adel Rustum.
However, a privacy-conscious application reluctant to leak information that is deployed globally, could suffer from many transaction-validation failures due to the vagaries of a wide-area network. Thoughts?
The short answer is, transaction verification will fail (if that node was the only node that had that transaction); and that's the point of using a DLT (or a blockchain). If you can't trace the history of a certain block of data all the way back to its genesis, then you can't verify how that block and its ancestors were created.
As for the issue that you referenced in your question: Corda Enterprise 4.4 introduced a new feature called bulk back-chain fetching, which lets you modify the way the transactions needed to verify a certain transaction are fetched. Previously it was depth first; now you can change that to breadth first and specify how many transactions you want to fetch in one call. More details in this video.
Back chain validation doesn't depend on the nodes that were party to the transactions in the past. The back chain is only validated by the nodes that are part of the current, ongoing transaction. The other nodes that took part in a past transaction in the evolution of the state in question don't need to be contacted (or stay online) while the back chain is validated.
Back chain validation only involves checking that every past transaction on a state used as input to the current transaction is valid. Validity is checked by re-running the contracts for those previous transactions. There is no need to reach out to the parties of a previous transaction.
However, you need to make sure that the parties involved in the current transaction are online and responding since you would need signatures from them to successfully complete the transaction.
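To illustrate: when a node verifies a transaction it received, it re-runs the contract code locally against the back chain it has already downloaded; the original signers of those historic transactions are never contacted at that point. A minimal sketch, assuming the dependency transactions are already resolvable from local storage:

import net.corda.core.node.ServiceHub
import net.corda.core.transactions.SignedTransaction

fun verifyLocally(stx: SignedTransaction, serviceHub: ServiceHub) {
    val ledgerTx = stx.toLedgerTransaction(serviceHub)  // resolve inputs from the locally stored back chain
    ledgerTx.verify()                                   // re-run each contract's verify() function
}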
Is it okay to make HTTP requests to a counter party's external service from within a responder flow?
My use case: a party invokes a "request-token" flow with an exchange node. The exchange node makes an HTTP request (in the responder flow) to move cash from that party's account to an exchange account in the external payment system. The event of the funds actually hitting the account, and hence the issuance of the tokens, would happen in another flow.
If it is not okay, what may be an alternative design to achieve the task?
It is not always a good idea to make an HTTP request that way, unless you think very carefully about what happens when the previous checkpoint is replayed; dedupe and idempotence are key considerations. You also need to consider what happens if the target is down, and that blocking calls may exhaust the thread pool upon which the fibers operate.
Flows are run on fibers, whereas CordaServices can spawn their own threads.
Threads can block on I/O; fibers can only do so for short periods, and there are no guarantees about freeing resources or ordering unless it is the DB. Also, threads can register observables.
The real challenge is restartability, and for that you need to test the hell out of your code with random kills.
You need to be aware that steps can be replayed in the event of a crash. This is true of any server-side work-based system that restarts work.
Effectively, you should (see the sketch after these steps):
Step 1) Execute an on-ledger Corda transaction to move one or more assets into a locked state (analogous to XA 'prepare').
Step 2) When that transaction has been successfully notarised, execute the off-ledger transaction with an idempotent call that succeeds or fails.
Step 3) Once you know whether it succeeded or failed, execute a second Corda transaction that either reverts the status of the asset or moves it to its intended final state.
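Here is a rough client-side sketch of that pattern, orchestrated over RPC. LockAssetFlow, SettleAssetFlow and RevertAssetFlow are hypothetical flows implementing steps 1 and 3, and callPaymentApiIdempotently stands in for the idempotent off-ledger call, keyed by the lock transaction id so that a replay cannot pay twice.

import net.corda.core.contracts.UniqueIdentifier
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.utilities.getOrThrow

fun settleWithOffLedgerPayment(rpc: CordaRPCOps, assetId: UniqueIdentifier) {
    // Step 1: lock the asset on ledger (analogous to XA 'prepare').
    val lockTx = rpc.startFlowDynamic(LockAssetFlow::class.java, assetId).returnValue.getOrThrow()

    // Step 2: perform the off-ledger leg with an idempotent call, keyed by the lock transaction id.
    val paid = callPaymentApiIdempotently(idempotencyKey = lockTx.id.toString())

    // Step 3: either complete the move or revert the lock, depending on the outcome.
    if (paid) rpc.startFlowDynamic(SettleAssetFlow::class.java, assetId).returnValue.getOrThrow()
    else rpc.startFlowDynamic(RevertAssetFlow::class.java, assetId).returnValue.getOrThrow()
}

// Hypothetical stand-in for the external payment system call.
fun callPaymentApiIdempotently(idempotencyKey: String): Boolean = TODO("call the payment API idempotently")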
I have a Corda private network with 5 nodes and 1 notary node, plus a web client with a UI and RESTful services.
There are quite a few states, and their attributes, that are managed by users through the UI. I need to understand how to handle timeouts and avoid duplicate updates or errors.
Scenario 1
The user is viewing a specific unconsumed current state of a feature.
The user performs an edit and updates the state.
Upon receiving the request, the RESTful component uses the CordaRPCClient to start the flow, setting a timeout value of e.g. 2 seconds.
The Corda flow runs the configured rules and has to collect signatures from all the participating nodes (4). Complete processing takes more than 2 seconds (some file processing, multiple state updates, etc.). I can raise the timeout for specific use cases, but this can still happen at any time, so I need to understand the recommended way of handling it.
As the time taken exceeds the timeout, the CordaRPCClient throws an exception. From the RESTful service's / user's point of view, the transaction has failed.
Behind the scenes, Corda is still processing, collecting signatures and updating nodes. From Corda's perspective everything looks fine, and the change set is committed to the ledger.
Question:
Is there a way to know that the submitted transaction is still in progress, so that the RESTful service can wait?
If the user submits again, we check whether the transaction hash is the latest one associated with the unconsumed state and reject the request if not (the hash was provided to the UI when querying).
What is the recommended way of handling this?
Scenario 2
The user is viewing a specific unconsumed current state of a feature.
The user performs an edit and updates the state.
Upon receiving the request, the RESTful component uses the CordaRPCClient to start the flow, setting a timeout value of e.g. 2 seconds.
The Corda flow runs the configured rules and has to collect signatures from all the participating nodes (4). One of the nodes is down or unreachable, so the flow hangs, waiting for the node to come back up.
The RESTful service / UI receives a timeout exception. The user refreshes the view, which returns the old data (since the new state has not been committed), makes the change again and resubmits. At the Corda layer the new transaction is again built against the latest unconsumed state (the tx hash comparison passes because the first change was never committed), so it proceeds and also hangs waiting for the node to come back up. It waits a long time; I waited for a minute and it did not stop trying.
Now the node comes back up and syncs with its peers. The notary throws an exception because there are two pending requests trying to form the next state in the chain, and the transaction fails.
Question:
Is there a way to know that the submitted transaction is still in progress, so that the RESTful service can wait?
What is the recommended way of handling this?
Is there a way to provide timeout values for node-to-node communication?
Do I need to keep monitoring whether the node is active and tailor the user experience accordingly?
I appreciate all the help and support on the above issue. Please let me know if any additional information is needed.
Timeouts
As of Corda 3.3, there is no way to set a timeout either on a Corda RPC request, a flow, or a message to another node. If another node is down when you try to contact it as part of a flow, the message will simply remain in your outbound message queue until it can be successfully delivered.
Checking flow progress
Each flow has a unique run ID. When you start a flow via RPC (e.g. using CordaRPCOps.startFlowDynamic), you get back a FlowHandle. The flow's unique run ID is then available via FlowHandle.id. Once you have this ID, you can check whether the flow is still in progress by checking whether it is still present in the list of current state machines (i.e. flows):
val flowInProgress = flowHandle.id in cordaRPCOps.stateMachinesSnapshot().map { it.id }
You can also monitor the state machine manager to wait until the flow completes, then get its result:
val flowUpdates = cordaRPCOps.stateMachinesFeed().updates
flowUpdates.subscribe {
    if (it.id == flowHandle.id && it is StateMachineUpdate.Removed) {
        val result = it.result.getOrThrow()
        // Handle result.
    }
}
Handling duplicate requests
The flow will throw an exception if you try and consume the same state twice, either when you query the vault to retrieve the state or when you try to notarise the transaction. I'd suggest letting the user start the flow again, then handling any double-spend errors appropriately and reflecting them on the front-end (e.g. via an error message and automatic refresh).
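For example, on the RPC client side a double-spend surfaces as a NotaryException whose error is a NotaryError.Conflict; a rough sketch (UpdateFeatureFlow is a hypothetical flow name):

import net.corda.core.flows.NotaryError
import net.corda.core.flows.NotaryException
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.utilities.getOrThrow

fun startUpdate(rpc: CordaRPCOps, newState: Any) {
    try {
        rpc.startFlowDynamic(UpdateFeatureFlow::class.java, newState).returnValue.getOrThrow()
    } catch (e: NotaryException) {
        if (e.error is NotaryError.Conflict) {
            // The input state was already consumed: refresh the front-end with the
            // latest unconsumed state and show an error message instead of retrying.
        } else {
            throw e
        }
    }
}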
I am building a system that processes orders. Each order follows a workflow, so an order can be, e.g., booked, accepted, payment approved, cancelled, and so on.
Every time the status of an order changes, I will post the change to SNS. To know whether an order's status has changed, I need to make a request to an external API and compare the result to the last known status.
The question is: What is the best place to store the last known order status?
1. An SQS queue. Every time I read a message from the queue, I check the status using the external API, delete the message, and insert another one with the new status.
2. A database (like DynamoDB) that tracks the order status.
You should not use the word "store" to describe keeping stateful facts in a queue. Stateful, factual information should be stored -- persisted -- in a database.
The queue messages should be treated as "hints" on what work needs to be done -- a request to consider the reasonableness of a proposed action, and if reasonable, perform the action.
What I mean by this is that when a queue consumer sees a message to create an order, it should check the database and create the order if it is not already present. Update an order? Check the database to see whether the order is in the correct status for the update to occur. (Cancelling an order that has already shipped would be an example of a mismatched state.)
Queues, by design, can't be as precise and atomic in their operation as a database can be. The Two Generals Problem is one of several scenarios that become an issue when dealing with queues (and indeed when designing a queue system) -- messages can be lost or delivered more than once.
What happens in a "queue is authoritative" scenario when a message is delivered (received from the queue) more than once? What happens if a message is lost? There's nothing wrong with using a queue, but I respectfully suggest that in this scenario the queue should not be treated as authoritative.
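As an illustration of the "check the database before acting" point, a conditional update in DynamoDB can perform the check and the update atomically, so a duplicate or stale message simply fails the condition. A rough sketch using the AWS SDK for Java v1; the table, key and attribute names are made up:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.AttributeValue
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException
import com.amazonaws.services.dynamodbv2.model.UpdateItemRequest

fun markShipped(orderId: String): Boolean {
    val dynamo = AmazonDynamoDBClientBuilder.defaultClient()
    val request = UpdateItemRequest()
        .withTableName("orders")
        .withKey(mapOf("orderId" to AttributeValue(orderId)))
        .withUpdateExpression("SET orderStatus = :next")
        // Only move to 'shipped' if the order is currently 'accepted'.
        .withConditionExpression("orderStatus = :expected")
        .withExpressionAttributeValues(mapOf(
            ":next" to AttributeValue("shipped"),
            ":expected" to AttributeValue("accepted")))
    return try {
        dynamo.updateItem(request)
        true
    } catch (e: ConditionalCheckFailedException) {
        false  // the order was not in the expected status; ignore or re-route the message
    }
}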
I would go with the database option instead of SQS:
1) SQS option:
You have one application that changes the status
It adds the status value to SQS
Another application then reads the messages, sends the notification and deletes the message
2) DynamoDB option:
Insert the updated status into DynamoDB
Configure a Lambda function to trigger on updates to that field
The Lambda function sends the notification
The database option looks cleaner. Additionally, you don't have to worry about maintaining a queue, and with SQS you can only read one message at a time unless you implement parallel readers. With a database, you can update multiple rows, each update triggers the Lambda, and you don't have to worry about it.
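A rough sketch of option 2, assuming DynamoDB Streams is enabled on the table (with new and old images) and wired to this Lambda; the topic ARN and attribute names are made up:

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent
import com.amazonaws.services.sns.AmazonSNSClientBuilder

class OrderStatusNotifier : RequestHandler<DynamodbEvent, Unit> {
    private val sns = AmazonSNSClientBuilder.defaultClient()

    override fun handleRequest(event: DynamodbEvent, context: Context) {
        for (record in event.records) {
            val newImage = record.dynamodb.newImage ?: continue
            val oldStatus = record.dynamodb.oldImage?.get("orderStatus")?.s
            val newStatus = newImage["orderStatus"]?.s ?: continue
            // Publish only when the status actually changed.
            if (newStatus != oldStatus) {
                val orderId = newImage["orderId"]?.s
                sns.publish("arn:aws:sns:us-east-1:123456789012:order-status",
                            "Order $orderId moved to $newStatus")
            }
        }
    }
}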
Hope that helps