How Do I Customise The Corda Hospital?

As you can see from this other question, a flow is sent to the hospital when a unique DB constraint is violated:
org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException: Unique index or primary key violation:
This is clearly never going to resolve itself, so I want the flow to fail instead of going to the hospital.
It is currently going to the hospital due to Corda's built-in rules.
Is it possible to modify these rules to prevent this exception from being sent to the hospital?

Unfortunately, according to the official documentation, this type of error goes to the flow hospital:
Database constraint violation (ConstraintViolationException): This scenario may occur due to natural contention between racing flows as Corda delegates handling using the database’s optimistic concurrency control. If this exception occurs, the flow will retry. After retrying a number of times, the errored flow is kept in for observation.
So you have 2 things that you can do:
Go to the database and modify the existing record that's colliding with the record the flow is trying to add.
Go to your node's terminal and kill the flow (a sketch of doing the same thing over RPC is shown below).
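If you prefer to do this programmatically rather than from the node shell, the flow can also be killed over RPC via CordaRPCOps.killFlow. Below is a minimal Kotlin sketch; the host, port, credentials and flow run id are placeholders you would replace with your own values.

    import net.corda.client.rpc.CordaRPCClient
    import net.corda.core.flows.StateMachineRunId
    import net.corda.core.utilities.NetworkHostAndPort
    import java.util.UUID

    fun main() {
        // Placeholder connection details -- replace with your node's RPC address and credentials.
        val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
        val connection = client.start("rpcUser", "rpcPassword")
        try {
            val ops = connection.proxy
            // Run id of the flow stuck in the hospital (visible via stateMachinesSnapshot()).
            val runId = StateMachineRunId(UUID.fromString("00000000-0000-0000-0000-000000000000"))
            val killed = ops.killFlow(runId)
            println("Flow killed: $killed")
        } finally {
            connection.notifyServerAndClose()
        }
    }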

Related

Cloud Datastore transaction terminated without explicit rollback defined

From the following document: https://cloud.google.com/datastore/docs/concepts/transactions
What would happen if a transaction fails with no explicit rollback defined? For example, if we're performing a put() operation on value arguments.
The document states that a transaction should be idempotent; what does this mean with respect to the put() operation? It is not clear how idempotency applies in this context.
How do we detect failure if, according to the documentation, errors from a commit are not reliable?
We are seeing symptoms where put() against a value argument sometimes partially saves the data. Note we do not have an explicit rollback defined.
As you may already know, Datastore transactions are guaranteed to be atomic, which means that it applies the all-or-nothing principle; either all operations succeed or they all fail. This ensures that the data in your database remains consistent over time.
Now, regardless of whether you execute put or any other operation in your transaction, your implementation of the code should always ensure that your transaction has either successfully committed or rolled back. This means that if you aren't fully sure whether the commit succeeded, you should explicitly issue a rollback.
However, there may be some exceptions where a commit might fail, and this doesn't necessarily mean that no data was written to your database. The documentation even points out that "you can receive errors in cases where transactions have been committed."
The simplest way to detect transaction failures is to add a try/catch block in your code for when an Exception (a failed transactional operation) or a DatastoreException (errors related to Datastore, such as a failed commit) is thrown. I believe that you may already have an answer in this Stackoverflow post about this particular question.
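As a minimal sketch of that pattern, assuming the Cloud Datastore Java client library (com.google.cloud.datastore) used from Kotlin, with a made-up kind and property:

    import com.google.cloud.datastore.DatastoreException
    import com.google.cloud.datastore.DatastoreOptions
    import com.google.cloud.datastore.Entity

    fun main() {
        val datastore = DatastoreOptions.getDefaultInstance().service
        val txn = datastore.newTransaction()
        try {
            // Hypothetical entity -- kind "Task" and property "done" are placeholders.
            val key = datastore.newKeyFactory().setKind("Task").newKey("sampleTask")
            val task = Entity.newBuilder(key).set("done", false).build()
            txn.put(task)
            txn.commit()
        } catch (e: DatastoreException) {
            // The commit (or another Datastore call) failed; note that a failed commit
            // does not always guarantee that nothing was written.
            println("Transaction failed: ${e.message}")
        } finally {
            // If we cannot be sure the commit went through, explicitly roll back.
            if (txn.isActive) {
                txn.rollback()
            }
        }
    }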
A good practice is to make your transactions idempotent whenever possible. In other words, if you're executing a transaction that includes a write operation put() to your database, and that operation fails and needs to be retried, the end result should ideally remain the same.
A real-world example: you're trying to transfer some money to your friend; the transaction consists of withdrawing 20 USD from your bank account and depositing this same amount into your friend's bank account. If the transaction were to fail and had to be retried, the transaction should still operate with the same amount of money (20 USD) as the final result.
Keep in mind that the Datastore API doesn't retry transactions by default, but you can add your own retry logic to your code, as per the documentation.
In summary, if a transaction is interrupted and your logic doesn't handle the failure accordingly, you may eventually see inconsistencies in the data of your database.

Back chain validation under node failures in R3 Corda

I am new to Corda. My question is not about any particular implementation, but more of an architectural question.
What happens during back chain validation if one of the nodes involved permanently dies and fails to respond? How is that transaction validated?
I have seen this issue, which only talks about how transaction volume could slow down validation. Does validation come to a grinding halt if one of the nodes fails permanently?
As per the Corda webinar on consensus, in the example about 5 minutes into the video, the back chain is Charlie -> Dan -> Alice -> Bob. In this chain, if either Charlie or Dan is unavailable, the proposed transaction cannot be validated. The same webinar further says that this is not a problem in other blockchains such as Ethereum.
Applications that can foresee the need for a highly available record keeper can surely accommodate such a node during the design phase, as suggested by Adel Rustum.
However, a privacy-conscious application that is deployed globally and reluctant to leak information could suffer many transaction-validation failures due to the vagaries of a wide-area network. Thoughts?
The short answer is: transaction verification will fail (if that node was the only node that had that transaction), and that's the point of using a DLT (or a blockchain). If you can't go back in the history of a certain block of data all the way to genesis, then you can't verify how that block and its ancestors were created.
As for the issue that you referenced in your question: Corda Enterprise 4.4 introduced a new feature called bulk back-chain fetching, which allows modifying the way the transactions needed to verify a certain transaction are fetched. Previously it was depth first; now you can change that to breadth first and specify how many transactions you want to fetch in one call. More details in this video.
The back chain validation doesn't depend on the nodes that were part of the transactions in the past. The validation of the back chain is only done by the nodes that are part of the current ongoing transaction. The other nodes that were part of a past transaction involving the evolution of the state in question don't need to be contacted (or stay online) while the back chain is validated.
Back chain validation only involves checking that all transactions which happened in the past on a particular state used as input in the current transaction are valid. The validity is checked by running the contracts again for those previous transactions. There is no real need to reach out to the parties of a previous transaction.
However, you need to make sure that the parties involved in the current transaction are online and responding since you would need signatures from them to successfully complete the transaction.
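To make that last point concrete, here is a rough Kotlin sketch of an initiator flow: only the counterparties to the current transaction are contacted for signatures and finality. MyState, MyContract and the command are made-up names, not anything prescribed by Corda.

    import co.paralleluniverse.fibers.Suspendable
    import net.corda.core.flows.*
    import net.corda.core.identity.Party
    import net.corda.core.transactions.SignedTransaction
    import net.corda.core.transactions.TransactionBuilder

    @InitiatingFlow
    @StartableByRPC
    class ProposeFlow(private val counterparty: Party) : FlowLogic<SignedTransaction>() {
        @Suspendable
        override fun call(): SignedTransaction {
            val notary = serviceHub.networkMapCache.notaryIdentities.first()
            // Hypothetical state and contract -- replace with your own types.
            val output = MyState(ourIdentity, counterparty)
            val builder = TransactionBuilder(notary)
                .addOutputState(output, MyContract.ID)
                .addCommand(MyContract.Commands.Propose(), ourIdentity.owningKey, counterparty.owningKey)

            val partiallySigned = serviceHub.signInitialTransaction(builder)

            // Only the parties to *this* transaction need to be online: they sign it and,
            // while doing so, resolve and verify the back chain received from us.
            val session = initiateFlow(counterparty)
            val fullySigned = subFlow(CollectSignaturesFlow(partiallySigned, listOf(session)))
            return subFlow(FinalityFlow(fullySigned, listOf(session)))
        }
    }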

Doctrine: How to prevent transaction from becoming 'rollback only' through caught exception?

Deleting an entity fails because of an exception within a postRemove event handler. Even if the exception is caught, the deletion fails because the transaction cannot be committed any more. How can I solve this?
The complete story:
I need to keep track of some deleted entities in a Symfony 3.4 based web service using Doctrine.
To do this I have created an EventSubscriber which handles the postRemove event to check whether the deleted entity needs to be logged. In that case the entity's UUID is stored in a DeleteLog table of the DB.
This works fine, but in rare cases persisting the DeleteLogEntry fails since there already exists a log entry for the given UUID, which needs to be unique.
The source of this problem is some 3rd party code I cannot change myself. As a temporary solution I tried to catch the UniqueConstraintViolationException. This does not solve the problem, since now I get a ConnectionException:
Transaction commit failed because the transaction has been marked for rollback only.
Is it possible to solve this dilemma?
Of course I could check whether a DeleteLogEntry with the given UUID exists before creating a new one. But since this problem occurs only in rare cases, the check would be negative most of the time. Running the check anyway would not be a catastrophic performance impact, but it simply does not seem to be the best solution.
Is there any way to catch the exception and keep the transaction from being marked as rollback only?
Nope, it's not possible to keep a transaction from being marked.
Doctrine starts a nested transaction for postRemove, and if it fails, no other transactions should be committed. Marking the transaction for rollback only (and even closing the entity manager) is expected behavior in such a scenario, because there is no other way for Doctrine to ensure consistency, as there is no support for real nested transactions.
If performance is not an issue, then checking for DeleteLogEntry is a good option.
Other possible workarounds:
store the ID somewhere (Redis, Memcache, a file, etc.) temporarily and update the DeleteLogEntry later, after the initial delete is committed
use a separate entity manager/connection to update the DeleteLogEntry
remove the unique constraint and use a background task to watch for duplicates

How can I access the vault in a smart contract?

How can I access the vault in a smart contract?
I want to do the following business validation in the smart contract:
- check whether the new data and attachment I have entered already exist in the vault
You cannot access the vault, or any other source of outside information, from within the contract. This is because contract execution must be deterministic. If a contract's view of the validity of a ledger update depended on the current contents of your vault, disagreements could arise between different nodes (or even within the same node at different points in time) on whether a given ledger update was valid. This would destroy the integrity of the ledger - there would be no consensus on which updates were valid.
In your case, it might be best to impose the additional constraints you want to impose within the flow. For example, within the flow you could check the contents of the proposed transaction against the contents of the vault, and sign or not sign the transaction accordingly.
It's important to keep in mind: just because a transaction is contractually valid does not mean you have to sign it!
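For example, the responder side can do the vault check inside SignTransactionFlow.checkTransaction and refuse to sign when a duplicate already exists. A minimal Kotlin sketch, where MyState, its externalId field and the initiating flow name are made-up placeholders:

    import co.paralleluniverse.fibers.Suspendable
    import net.corda.core.contracts.requireThat
    import net.corda.core.flows.*
    import net.corda.core.transactions.SignedTransaction

    @InitiatedBy(ProposeFlow::class) // placeholder initiator class
    class ProposeResponder(private val otherSession: FlowSession) : FlowLogic<SignedTransaction>() {
        @Suspendable
        override fun call(): SignedTransaction {
            val signFlow = object : SignTransactionFlow(otherSession) {
                override fun checkTransaction(stx: SignedTransaction) = requireThat {
                    // Hypothetical state type and field -- replace with your own.
                    val proposed = stx.tx.outputsOfType<MyState>().single()
                    val existing = serviceHub.vaultService.queryBy(MyState::class.java).states
                    "This data must not already exist in the vault" using
                        existing.none { it.state.data.externalId == proposed.externalId }
                }
            }
            val signedTx = subFlow(signFlow)
            return subFlow(ReceiveFinalityFlow(otherSession, expectedTxId = signedTx.id))
        }
    }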

Using SQS or DynamoDB to control order status

I am building a system that processes orders. Each order will follow a workflow, so an order can be, e.g., booked, accepted, payment approved, cancelled, and so on.
Every time the status of an order changes I will post this change to SNS. To know whether an order's status has changed I will need to make a request to an external API and compare the result to the last known status.
The question is: What is the best place to store the last known order status?
1. An SQS queue. So every time I read a message from the queue, I check the status using the external API, delete the message, and insert another one with the new status.
2. Use a database (like Dynamo DB) to control the order status.
You should not use the word "store" to describe something happening with stateful facts and a queue. Stateful, factual information should be stored -- persisted -- to a database.
The queue messages should be treated as "hints" on what work needs to be done -- a request to consider the reasonableness of a proposed action, and if reasonable, perform the action.
What I mean by this, is that when a queue consumer sees a message to create an order, it should check the database and create the order if not already present. Update an order? Check the database to see whether the order is in a correct status for the update to occur. (Canceling an order that has already shipped would be an example of a mismatched state).
Queues, by design, can't be as precise and atomic in their operation as a database should. The Two Generals Problem is one of several scenarios that becomes an issue in dealing with queues (and indeed with designing a queue system) -- messages can be lost or delivered more than once.
What happens in a "queue is authoritative" scenario when a message is delivered (received from the queue) more than once? What happens if a message is lost? There's nothing wrong with using a queue, but I respectfully suggest that in this scenario the queue should not be treated as authoritative.
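As a rough illustration of "check the database to see whether the order is in a correct status for the update to occur", a conditional update in DynamoDB can enforce a legal status transition atomically, so a duplicate or out-of-order message simply fails the condition. A Kotlin sketch using the AWS SDK for Java v2; the table and attribute names are made up:

    import software.amazon.awssdk.services.dynamodb.DynamoDbClient
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue
    import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException
    import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest

    fun markAccepted(dynamo: DynamoDbClient, orderId: String): Boolean {
        // Hypothetical table "Orders" with key "orderId" and attribute "orderStatus".
        val request = UpdateItemRequest.builder()
            .tableName("Orders")
            .key(mapOf("orderId" to AttributeValue.builder().s(orderId).build()))
            .updateExpression("SET orderStatus = :next")
            // Only apply the update if the order is currently in the expected status.
            .conditionExpression("orderStatus = :expected")
            .expressionAttributeValues(
                mapOf(
                    ":next" to AttributeValue.builder().s("ACCEPTED").build(),
                    ":expected" to AttributeValue.builder().s("BOOKED").build()
                )
            )
            .build()
        return try {
            dynamo.updateItem(request)
            true
        } catch (e: ConditionalCheckFailedException) {
            // The order was not in the expected status, e.g. a duplicate or out-of-order message.
            false
        }
    }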
I would go with the database option instead of SQS:
1) Option SQS:
You will have one application which will change the status
Add the status value into SQS
Now another application will check your messages, send the notification, and delete the message
2) Option DynamoDB:
Insert your updated status in DynamoDB
Configure a Lambda function on update of that field
The Lambda function will send the notification
The database option looks cleaner. Additionally, you don't have to worry about maintaining a queue, and with SQS you can only read one message from the queue at a time unless you implement a parallel reader. With a database, you can update multiple rows, each update will trigger the Lambda, and you don't have to worry about it.
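A rough Kotlin sketch of the Lambda side of option 2, assuming DynamoDB Streams is enabled on the table and the SNS topic ARN is supplied via an environment variable; the attribute names are made up:

    import com.amazonaws.services.lambda.runtime.Context
    import com.amazonaws.services.lambda.runtime.RequestHandler
    import com.amazonaws.services.lambda.runtime.events.DynamodbEvent
    import software.amazon.awssdk.services.sns.SnsClient
    import software.amazon.awssdk.services.sns.model.PublishRequest

    class OrderStatusHandler : RequestHandler<DynamodbEvent, Unit> {
        private val sns = SnsClient.create()
        private val topicArn = System.getenv("ORDER_STATUS_TOPIC_ARN")

        override fun handleRequest(event: DynamodbEvent, context: Context) {
            for (record in event.records) {
                // The new image holds the latest attributes after an insert or update.
                val newImage = record.dynamodb.newImage ?: continue
                val orderId = newImage["orderId"]?.s ?: continue
                val status = newImage["orderStatus"]?.s ?: continue
                sns.publish(
                    PublishRequest.builder()
                        .topicArn(topicArn)
                        .message("Order $orderId changed status to $status")
                        .build()
                )
            }
        }
    }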
Hope that helps
