I am just confused about when the responder flow gets executed in the flow class, and how that responder checks and signs the transaction.
There are two types of flow annotations:
InitiatingFlow, which is used to annotate flows that are started directly (either by other flows, by a service, or via RPC)
InitiatedBy, which is used to annotate flows that the node starts in response to messages from other flows. This annotation takes as its only argument the name of an InitiatingFlow class
When a node receives a message from a flow running on another node, it goes through the following process:
It checks whether it has a flow installed that is InitiatedBy the flow it has received a message from
If it does, it invokes that flow to commence communication with the InitiatingFlow
If it does not, it throws an exception
So a responder flow instance gets created every time the node receives a message from the initiating flow named in the responder's InitiatedBy annotation. This responder instance "stays alive" until it has finished communicating with that initiating flow instance.
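The lookup process above can be sketched outside Corda as a simple registry keyed by the initiating flow's class name. This is a hypothetical simulation to illustrate the steps, not Corda's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical simulation of the InitiatedBy lookup a node performs
// when a session message arrives from a flow on another node.
public class ResponderRegistry {
    // Maps an initiating flow's class name to a factory for its responder.
    private final Map<String, Supplier<String>> responders = new HashMap<>();

    // Equivalent of installing a CorDapp whose responder is @InitiatedBy(initiator).
    public void register(String initiatingFlowName, Supplier<String> responderFactory) {
        responders.put(initiatingFlowName, responderFactory);
    }

    // Step 1: check for a flow InitiatedBy the sender.
    // Step 2: if found, invoke it. Step 3: otherwise, throw.
    public String onSessionMessage(String initiatingFlowName) {
        Supplier<String> factory = responders.get(initiatingFlowName);
        if (factory == null) {
            throw new IllegalStateException(
                    "No responder registered for " + initiatingFlowName);
        }
        return factory.get(); // a fresh responder instance per session
    }
}
```

Note how a new responder instance is produced per incoming session, matching the "created every time the node receives a message" behaviour described above.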
Related
I need to store some information from Corda (such as LinearId, Transaction Hash etc) in an off-ledger database (not an extra table in the node database) for subsequent external processing and downstream actions.
The key is that the code has to run after a specific flow (not all flows) has completed and only on one side/node of the transaction.
The node trigger could write to the external database directly
Or the trigger could write the data to a JMS queue for an external engine to pick up and process
How can I trigger an action after a flow has completed?
One way you could do this is with a responder flow. It depends on your use case, but you could hold off on the return statement in the responder flow and run some additional code, or make an HTTP request, from the responder flow before returning.
Here's a code example (it returns the subflow's result directly, but you could capture that result, run some extra code after it, and return it later):
https://github.com/corda/samples-kotlin/blob/master/Advanced/obligation-cordapp/workflows/src/main/kotlin/net/corda/samples/obligation/flows/IOUSettleFlow.kt#L98-L113
More on responder flows: https://docs.corda.net/docs/corda-os/4.7/api-flows.html
Making an HTTP request from a flow: https://github.com/corda/samples-java/blob/master/Basic/flow-http-access/workflows/src/main/java/net/corda/samples/flowhttp/HttpCallFlow.java#L35-L39
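The "hold off on the return" idea can be sketched without any Corda dependencies as follows; `doProtocolWork` and `recordToExternalStore` are hypothetical stand-ins for the responder's real subflow work and for the off-ledger database write or HTTP call:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a responder finishes its protocol work (stubbed
// here as a string result), then runs extra side effects before returning.
public class DelayedReturnResponder {
    private final List<String> externalStore = new ArrayList<>();

    // Stand-in for the subflow/signing work a real responder would do.
    private String doProtocolWork(String txId) {
        return "signed:" + txId;
    }

    // Stand-in for writing to an off-ledger database or making an HTTP call.
    private void recordToExternalStore(String entry) {
        externalStore.add(entry);
    }

    // Instead of returning the subflow result immediately, hold it,
    // run the extra code, and only then return.
    public String call(String txId) {
        String result = doProtocolWork(txId);
        recordToExternalStore(result); // extra action after the flow's real work
        return result;
    }

    public List<String> storedEntries() {
        return externalStore;
    }
}
```

Because the extra action runs only inside this one responder, it fires after this specific flow completes and only on this node's side of the transaction, which matches the requirement above.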
How does a notary/node verify that a specific flow has been called when it receives the transaction?
Does this mean Corda can guarantee that the flow has not been modified from what was stated in the corresponding Cordapp?
In detail:
It's a DLT (Distributed Ledger Technology); so in a sense, you can't really trust anyone.
The notary doesn't receive flows, it receives transactions and makes sure that there is no double-spend (i.e. consumed inputs are not being consumed again).
Even if you gave a node your CorDapp, it can override the responder flow. See links below.
Wrong assumptions about responder flows: https://www.corda.net/blog/corda-flow-responder-wrong-assumptions/
Configuring responder flows: https://docs.corda.net/flow-overriding.html
Overriding flows from external CorDapps: https://dzone.com/articles/extending-and-overriding-flows-from-external-corda
When you send and receive data between an initiator and its responders, the received data (on both ends) is considered untrusted; you must unwrap it and validate it: https://docs.corda.net/api-flows.html#receive
So in short:
Your initiator must validate any received data from the responder(s).
Your responder must validate any received data from the initiator; plus, if you expect the initiator to be a certain entity, you must validate that the counterparty that sent you the flow session is who you expect it to be (e.g. checking that flowSession.counterparty has the name "O=Good Org, L=London, C=UK").
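Both checks can be sketched generically. `Untrusted` below is a hypothetical stand-in for the wrapper that a flow's receive returns (it only releases the payload through a validator), and the X.500 name is just an example value:

```java
import java.util.function.Function;

// Hypothetical stand-in for the untrusted wrapper returned by a flow receive.
public class Untrusted<T> {
    private final T payload;

    public Untrusted(T payload) {
        this.payload = payload;
    }

    // The payload is only released through a validator, forcing the caller
    // to inspect data from the counterparty before using it.
    public <R> R unwrap(Function<T, R> validator) {
        return validator.apply(payload);
    }

    // Example counterparty check: reject sessions from unexpected identities.
    public static void requireCounterparty(String actual, String expected) {
        if (!expected.equals(actual)) {
            throw new IllegalArgumentException("Unexpected counterparty: " + actual);
        }
    }
}
```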
Adel's answer covers the right ways to avoid trusting your counterparties at the application flow level, but there are also operational protections which can be used. Strong contracts can help prevent badly formed transactions, as Corda does not allow unknown contracts on a well-configured network.
The network parameters define which smart contract CorDapp jars are acceptable for validation. The most common form of contract constraint is the signature constraint, which means that any contract jar signed by the same developer key can be accepted. This prevents a malicious counterparty from forcing you to run weak validation: https://docs.corda.net/api-contract-constraints.html#signature-constraints
As of Corda 4, any unrecognized contract CorDapp jar will not be trusted unless the node operator explicitly tells Corda to trust the jar: https://docs.corda.net/cordapp-build-systems.html#cordapp-contract-attachments Once a signature is trusted, any future jars signed by that key will implicitly be trusted.
As per the Corda documentation and my understanding, contract verification is called when the transaction is built with the TransactionBuilder. For R&D I put a logger on the contract's verify function. I observed that contract verification is called at TransactionBuilder time, and also during CollectSignaturesFlow and FinalityFlow.
During CollectSignaturesFlow it is called 3 times, and during FinalityFlow also 3 times.
My current setup has 2 nodes and one notary in non-validating mode.
My question is: during CollectSignaturesFlow, is verify called on different nodes, and if so, does the notary call the verify function too? The same question applies to FinalityFlow.
CollectSignaturesFlow, called by the node gathering the signatures, calls verify. SignTransactionFlow, the responder flow called by the nodes adding their signatures, also calls verify before signing.
FinalityFlow calls verify. NotaryServiceFlow, the flow run by the notary in response to FinalityFlow, should call verify if the notary is validating (in fact, this is the definition of a validating notary). And finally, ReceiveTransactionFlow, the flow run by the transaction's participants in response to FinalityFlow, calls verify before storing the transaction.
There is a bank which creates a contract which is then accepted by the lender and the borrower. After signing the contract, the lender provides funds to the borrower. The bank then automatically creates an obligation state based on the data received by calling an external service.
And now:
1) In the API layer, I call the first flow, which creates one state.
2) In the API layer itself, on success of the first flow, I make an HTTP request to the external service and get the data.
3) Now I pass the HTTP response to the second flow for creating the other state.
Can you please let me know if there is any issue with this approach?
The requirement is that I want to trigger the first flow manually, but calling the external service and initiating the second flow should happen automatically.
I have referred to the link given below.
Making asynchronous HTTP calls from flows
You'll make calls to an external service during the running of flows.
The best place to get started would be looking at the CorDapp samples here. In particular, take a look at the Accessing External Data section
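The three-step orchestration described in the question can be sketched as a plain method that chains the stages; all the function names here are hypothetical stubs for your two flows and the external service:

```java
import java.util.function.UnaryOperator;

// Hypothetical sketch of the API-layer orchestration: run flow 1, then
// call the external service, then feed its response into flow 2.
public class TwoFlowOrchestrator {
    private final UnaryOperator<String> firstFlow;       // stub for starting flow 1
    private final UnaryOperator<String> externalService; // stub for the HTTP call
    private final UnaryOperator<String> secondFlow;      // stub for starting flow 2

    public TwoFlowOrchestrator(UnaryOperator<String> firstFlow,
                               UnaryOperator<String> externalService,
                               UnaryOperator<String> secondFlow) {
        this.firstFlow = firstFlow;
        this.externalService = externalService;
        this.secondFlow = secondFlow;
    }

    // Only this entry point is triggered manually; the rest chains
    // automatically, and an exception in any step stops the chain.
    public String run(String request) {
        String firstState = firstFlow.apply(request);           // step 1
        String serviceData = externalService.apply(firstState); // step 2, only on success
        return secondFlow.apply(serviceData);                   // step 3
    }
}
```

A design note: keeping the HTTP call in the API layer (as the question proposes) avoids the pitfalls of blocking a flow on external I/O, at the cost of the API layer having to handle a crash between step 1 and step 3.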
Can someone please help me understand the difference between the async scope and VM in Mule? I understand VM is like in-memory JMS, but if my requirement is to process something asynchronously, can I use either of them at will? If not, what are the differences?
For example, what can be the difference between the following:
Main flow1 calling another flow using async scope:
<flow name="mainflow1">
...
<async>
<flow-ref name="anotherflow" />
</async>
...
</flow>
Main flow2 calling another flow using VM:
<flow name="mainflow2">
...
<outbound-endpoint address="vm://anotherflow" exchange-pattern="one-way" />
...
</flow>
Assume the other flow writes a record to some database and is not of request-response type. And how about thread safety, are both completely thread-safe?
Matt's answer is correct. I might add a couple of additional benefits you get for the extra cost of going through a queue instead of just calling another flow:
Using a VM outbound endpoint (or indeed, any queue) offers you the ability to queue up messages under temporary high load, and control what happens when there are more messages than your target flow can consume (maximum outstanding messages, what to do when you reach that maximum, how long to wait for a publish to succeed before you error out, etc). This is the purpose of SEDA.
On the consumer end, using a queue instead of a flow reference allows you to use a transaction to be sure that you do not consume the message unless your intended processing succeeds.
VM queues offer a convenient place to test parts of your application in isolation. You can use the MuleClient in a functional test to send or receive messages from a VM queue to make sure your flows work properly.
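The buffering-under-load point can be sketched with a bounded in-memory queue, which is roughly the behaviour a VM queue gives you; the capacity and timeout values below are arbitrary examples, not Mule defaults:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of SEDA-style buffering: a bounded queue between
// a producer (the publishing flow) and a slower consumer (the target flow).
public class BoundedStage {
    private final BlockingQueue<String> queue;

    public BoundedStage(int maxOutstandingMessages) {
        this.queue = new ArrayBlockingQueue<>(maxOutstandingMessages);
    }

    // Publish with a timeout: returns false (an error, in Mule terms)
    // when the maximum number of outstanding messages is reached.
    public boolean publish(String message, long timeoutMillis) {
        try {
            return queue.offer(message, timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // The consumer drains messages at its own pace; null when empty.
    public String consume() {
        return queue.poll();
    }
}
```

An async scope, by contrast, hands the message straight to another thread with no such intermediary, so you get none of these knobs for maximum outstanding messages or publish timeouts.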
The main difference between the two is the context of the flow. When using a VM endpoint, Mule treats the message as if it were a completely new incoming message: it gets a completely new context with separate properties and variables. If you want to inherit the properties and variables from the calling flow, you can use a flow-ref to call a flow or subflow (see here for the differences between a flow and a subflow: http://www.mulesoft.org/documentation/display/current/Flows+and+Subflows). Since the VM endpoint creates a new context, there is more overhead in the call and it is less efficient, but with that extra overhead you get all of the infrastructure that comes with making a full Mule call.
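The context difference can be sketched as follows; `flowRefCall` hands the target flow the caller's own variables map, while `vmCall` hands it only the payload in a fresh, empty context. This is a simplified model for illustration, not Mule's actual internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Simplified model of the two call styles: flow-ref shares the caller's
// context; a VM endpoint delivers the payload into a brand-new context.
public class CallStyles {
    // A "flow" here is anything that receives a payload plus a variables map.
    public static String flowRefCall(String payload, Map<String, String> callerVars,
                                     BiFunction<String, Map<String, String>, String> flow) {
        return flow.apply(payload, callerVars); // same context is visible
    }

    public static String vmCall(String payload,
                                BiFunction<String, Map<String, String>, String> flow) {
        return flow.apply(payload, new HashMap<>()); // fresh, empty context
    }

    // Sample flow that reads a variable set by the caller (if visible).
    public static final BiFunction<String, Map<String, String>, String> SAMPLE_FLOW =
            (payload, vars) -> payload + ":" + vars.getOrDefault("user", "none");

    // Helper to build a caller context containing one variable.
    public static Map<String, String> varsWithUser(String name) {
        Map<String, String> vars = new HashMap<>();
        vars.put("user", name);
        return vars;
    }
}
```

Running the same flow both ways shows the flow-ref target seeing the caller's "user" variable while the VM-style target does not.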