Like the contract code, must the flow code be the same on all nodes?
Supposing different code is allowed, how does Corda handle that in terms of compatibility, versioning, etc.?
The flow code can be different on each node, as long as each part of the flow follows the required sequence of sends and receives.
For example, if the initiator does:
Send a String
Send an Integer
Receive a String
Then the responder must:
Receive a String
Receive an Integer
Send a String
If the sequence doesn't match, an exception will be thrown.
We are also implementing flow versioning in Corda V1. See https://docs.corda.net/head/versioning.html#flow-versioning.
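To make the sequencing concrete, here is a minimal sketch of a matching initiator/responder pair in Java (class names and payloads are made up; in practice the two classes would live in separate source files):

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.*;
import net.corda.core.identity.Party;

@InitiatingFlow
public class SendFlow extends FlowLogic<String> {
    private final Party counterparty;

    public SendFlow(Party counterparty) {
        this.counterparty = counterparty;
    }

    @Suspendable
    @Override
    public String call() throws FlowException {
        FlowSession session = initiateFlow(counterparty);
        session.send("hello");  // 1. send a String
        session.send(42);       // 2. send an Integer
        // 3. receive a String; unwrap() is where untrusted data gets validated
        return session.receive(String.class).unwrap(it -> it);
    }
}

@InitiatedBy(SendFlow.class)
public class SendFlowResponder extends FlowLogic<Void> {
    private final FlowSession session;

    public SendFlowResponder(FlowSession session) {
        this.session = session;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // Mirror image of the initiator: receive a String, receive an Integer, send a String.
        String greeting = session.receive(String.class).unwrap(it -> it);
        Integer number = session.receive(Integer.class).unwrap(it -> it);
        session.send("ack");
        return null;
    }
}

If the responder's sequence deviates from this mirror image, the flow fails at runtime.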
I am new to Rebus and am trying to get up to speed with some patterns we currently use in Azure Logic Apps. The current target implementation would use Azure Service Bus with Saga storage preferably in Cosmos DB (still investigating that sample implementation). Maybe even use Rebus Mongo DB with Cosmos DB using the Mongo DB API (not sure if that is possible though).
One major use case we have is an event/timeout pattern, and after doing some reading of samples/forums/Stack Overflow this is not uncommon. The tricky part is that our Sagas would behave more as a Finite State Machine vs. a Directed Acyclic Graph. This mainly happens because dates are externally changed and therefore timeouts for events change.
The Defer() method does not return a timeout identifier, which we assume is an implementation restriction (Azure Service Bus returns a long). Since we must ignore timeouts that had been scheduled for an event which has now shifted in time, we see a way of having those timeouts "ignored" (since they cannot be cancelled) as follows:
Use a Dictionary<string, Guid> in our own SagaData-derived base class, where the key is some derivative of the timeout message type, and the Guid is the identifier given to the timeout message when it was created. I don't believe this needs to be a concurrent dictionary but that is why I am here...
On receipt of the event message, remove the corresponding timeout message type key from the above dictionary;
On receipt of the timeout message:
Ignore it if its timeout message type key is not present or the Guid does not match the dictionary entry; else
Process it. We could also remove the dictionary key at this point.
When event rescheduling occurs, simply add the timeout message type/Guid dictionary entry, or update the Guid with the new timeout message Guid.
Is this on the right track, or is there a more 'correct' way of handling defunct timeout (deferred) messages?
You are on the right track 🙂
I don't believe this needs to be a concurrent dictionary but that is why I am here...
Rebus lets your saga handler work on its own copy of the saga data (using optimistic concurrency), so you're free to model the saga data as if it's only being accessed by one handler at a time.
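Rebus itself is a .NET library, so the sketch below is not Rebus API code; it is just a language-agnostic illustration (written here in Java, with hypothetical names) of the token-map bookkeeping described in the question:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical saga-data sketch. A plain map suffices because the saga
// framework hands each handler its own copy of the data (optimistic concurrency).
public class ReminderSagaData {
    // key: timeout message type, value: token of the most recently scheduled timeout
    private final Map<String, UUID> pendingTimeouts = new HashMap<>();

    // When a timeout is scheduled (or rescheduled), record the newest token;
    // any previously scheduled timeout for that type becomes defunct.
    public UUID scheduleTimeout(String timeoutType) {
        UUID token = UUID.randomUUID();
        pendingTimeouts.put(timeoutType, token);
        return token; // attach this token to the deferred message
    }

    // When the awaited event arrives, the pending timeout is no longer needed.
    public void onEventReceived(String timeoutType) {
        pendingTimeouts.remove(timeoutType);
    }

    // When a timeout message arrives, process it only if it carries the token
    // we recorded most recently; otherwise it is defunct and should be ignored.
    public boolean shouldProcessTimeout(String timeoutType, UUID token) {
        boolean current = token.equals(pendingTimeouts.get(timeoutType));
        if (current) {
            pendingTimeouts.remove(timeoutType); // optional cleanup
        }
        return current;
    }
}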
How does a notary/node verify that a specific flow has been called when it receives the transaction?
Does this mean Corda can guarantee that the flow has not been modified from what was stated in the corresponding Cordapp?
In detail:
It's a DLT (Distributed Ledger Technology), so in a sense you can't really trust anyone.
The notary doesn't receive flows, it receives transactions and makes sure that there is no double-spend (i.e. consumed inputs are not being consumed again).
Even if you gave a node your CorDapp, it can override the responder flow. See links below.
Wrong assumptions about responder flows: https://www.corda.net/blog/corda-flow-responder-wrong-assumptions/
Configuring responder flows: https://docs.corda.net/flow-overriding.html
Overriding flows from external CorDapps: https://dzone.com/articles/extending-and-overriding-flows-from-external-corda
When you send and receive data between an initiator and its responders, the received data (on both ends) is considered untrusted; you must unwrap it and validate it: https://docs.corda.net/api-flows.html#receive
So in short:
Your initiator must validate any received data from the responder(s).
Your responder must validate any received data from the initiator; plus, if you expect the initiator to be a certain entity, you must validate that the counterparty (that sent you the flow session) is who you expect it to be (e.g. flowSession.counterparty == "O=Good Org, L=London, C=GB").
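As a rough Java sketch of both checks (the expected identity, the payload check and the initiator class are all made-up examples):

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.*;
import net.corda.core.identity.CordaX500Name;

@InitiatedBy(SomeInitiatorFlow.class) // hypothetical initiator
public class ValidatingResponder extends FlowLogic<Void> {
    private final FlowSession session;

    public ValidatingResponder(FlowSession session) {
        this.session = session;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // 1. Check that the counterparty is who we expect it to be.
        CordaX500Name expected = CordaX500Name.parse("O=Good Org,L=London,C=GB");
        if (!session.getCounterparty().getName().equals(expected)) {
            throw new FlowException("Unexpected counterparty: " + session.getCounterparty());
        }
        // 2. Unwrap and validate the received (untrusted) payload.
        String payload = session.receive(String.class).unwrap(data -> {
            if (data.isEmpty()) throw new IllegalArgumentException("Empty payload");
            return data;
        });
        // ... use the validated payload ...
        return null;
    }
}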
Adel's answer covers the right ways to avoid trusting your counterparties at the application flow level, but there are also operational protections which can be used. Strong contracts can help prevent badly formed transactions, as Corda does not allow unknown contracts in a well-configured network.
The network parameters define which smart contract CorDapp JARs are acceptable for validation. The most common form of contract constraint is the signature constraint, which means that any contract JAR signed by the same developer key can be accepted. This prevents a malicious counterparty from forcing you to run weak validation: https://docs.corda.net/api-contract-constraints.html#signature-constraints
As of Corda 4, any unrecognized contract CorDapp JAR will not be trusted unless the node operator explicitly tells Corda to trust the JAR: https://docs.corda.net/cordapp-build-systems.html#cordapp-contract-attachments Once a signature is trusted, any future JARs signed by that key will implicitly be trusted.
Let's imagine I have a REST API with an endpoint /api/status. When this endpoint is accessed, the API sends a message to a message queue requesting the status of some other service.
Then in reply, the service sends a message with its status to a queue on which the REST API listens. So it's single message to request the status and single reply message.
My question is: Is there a design pattern for converting the asynchronous nature of this approach to a synchronous one in the API? In other words: Is there a pattern that the GetStatus(...) method in the pseudo code below can implement to synchronize the getting of the status with communication over multiple message queues or even pub/sub systems?
var statusRequestMsg = "get_status";
var statusResponseMsg = GetStatus(statusRequestMsg);
I know how to solve this in code, but I was curious whether there is a design pattern that introduces a common approach.
I googled a lot in search of that, but the only thing I found was a very technical explanation of an approach in this article:
A Communication Model to Integrate the Request-Response and the Publish-Subscribe Paradigms into Ubiquitous Systems
Please note that I understand this is not the perfect API design and that there are better ways to implement the example; I've created it to help illustrate my question. Also, I understand that some AMQP implementations (like RabbitMQ) provide a way to synchronize MQ communication into a request/response style.
Thanks in advance.
Microsoft calls this the Asynchronous Request-Reply pattern and uses a solution that polls over HTTP:
https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply
I imagine it should be possible to avoid polling by subscribing to updates for a key. For example, it's possible to subscribe to updates to a single key in Redis with keyspace notifications (The page mentions two caveats: that "all the events delivered during the time the client [is] disconnected are lost" and "events' notifications are not broadcasted to all nodes".)
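As a sketch, a subscriber for a single key using the Jedis client in Java might look like the following (this assumes Redis has keyspace notifications enabled, e.g. notify-keyspace-events "K$"; the key name is made up):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class KeyWatcher {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // Keyspace notifications for DB 0 arrive on channel "__keyspace@0__:<key>";
        // the message payload is the name of the event, e.g. "set".
        jedis.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String event) {
                if ("set".equals(event)) {
                    // The awaited value was written; fetch it on another connection.
                    System.out.println("Key updated on channel " + channel);
                }
            }
        }, "__keyspace@0__:status:some-correlation-id");
    }
}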
Have you considered something like this:
Request comes in
Create a correlation id
Send correlation id to other service as part of message sent via queue
Begin polling for that id in some data store (say Redis)
Time elapses...
Send correlation id back to originating service along with result of request in a message sent via queue
Worker reading queue sets value of correlation id in data store to result of asynchronous request
Polling discovers the result and returns it as the response to the request
Would that work?
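Here is a rough Java sketch of that polling loop, using Redis via the Jedis client (the queue send is a placeholder, since it depends on your broker; all names are made up):

import java.util.UUID;
import redis.clients.jedis.Jedis;

public class StatusEndpoint {
    // Placeholder for whatever client puts the request onto your queue.
    static void sendStatusRequest(String correlationId) { /* broker-specific */ }

    public static String getStatus(long timeoutMillis) throws InterruptedException {
        String correlationId = UUID.randomUUID().toString();
        sendStatusRequest(correlationId);

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                // The worker reading the reply queue is expected to have written
                // the result under this key.
                String result = jedis.get("status:" + correlationId);
                if (result != null) {
                    return result; // becomes the synchronous response to the caller
                }
                Thread.sleep(100); // polling interval
            }
        }
        throw new IllegalStateException("Timed out waiting for status reply");
    }
}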
Use case for my task: For now, the use case is that I am filling in details like First Name, Last Name, etc. in a form, and on click of the Submit button the data goes directly into Amazon SQS. There is then a listener defined which contains a button, and on that button click the data goes from the listener to an MSSQL database.
Scenario (currently): At the moment, all messages are being sent, even those belonging to different message attributes.
Requirement: I want to send only the messages belonging to a particular message attribute. For example: suppose Class A, Class B and Class C are three different message attributes, and Class A contains one message, Class B contains two messages and Class C contains four messages. What I want is that only the messages with the Class B attribute should be sent, not those of Class A and Class C.
I want to know: is it possible to send only selected/specific messages, on the basis of a message attribute, from Amazon SQS to the SQL database?
Any help will be greatly appreciated. Thanks in advance.
SQS messages cannot be selectively delivered based on message attributes. Standard practice is for each class of consumer to have its own queue, such that filtering would not be required, because the assumption is that all consumers of a given queue are able to process all messages in the queue.
If you do need a common source of messages to have its output filtered by message attributes, then the suggested solution is to publish messages to an SNS topic, then subscribe queues to that topic based on message attribute filters, which are supported in SNS. This configuration would allow, for example, Queue 1 to capture messages of types A, B, and C, while simultaneously allowing Queue 2 to capture messages of types C, D, and E. Note the overlap on type C. This class of messages would be delivered to both queues, in this configuration.
See Filtering Messages with Amazon SNS and Send Fanout Event Notifications.
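For illustration, setting such filter policies with the AWS SDK for Java might look like this (the subscription ARNs and the attribute name "class" are made up):

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

public class FilterPolicyExample {
    public static void main(String[] args) {
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
        // Queue 1's subscription receives messages whose "class" attribute is A, B or C.
        sns.setSubscriptionAttributes(
                "arn:aws:sns:us-east-1:123456789012:example-topic:subscription-1",
                "FilterPolicy",
                "{\"class\": [\"A\", \"B\", \"C\"]}");
        // Queue 2's subscription receives messages whose "class" attribute is C, D or E.
        sns.setSubscriptionAttributes(
                "arn:aws:sns:us-east-1:123456789012:example-topic:subscription-2",
                "FilterPolicy",
                "{\"class\": [\"C\", \"D\", \"E\"]}");
    }
}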
SNS Topics and SQS Queues can be many-to-many mapped, where multiple queues subscribe to multiple topics, in any desired configuration.
Note that when an SQS queue subscribes to an SNS topic, the default behavior is for only the payload to be sent to the queue. To receive the entire SNS wrapper, including SNS message attributes, enable "raw message delivery" on each subscription.
This leaves the question, then, of what the purpose of SQS message attributes must be, if not for service-side filtering. They can be metadata for the benefit of the recipient, giving information about the payload without the need to actually deserialize the payload, or they can provide information about the transfer encoding of the payload, such as whether or not the payload had gzip compression and base64 encoding applied by the sender, which the consumer needs to know in order to unpack the payload.
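For example, a consumer using the AWS SDK for Java could inspect a hypothetical "content-encoding" attribute before unpacking the body (the attribute name and queue URL are made up):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.MessageAttributeValue;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class AttributeAwareConsumer {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        ReceiveMessageRequest request =
                new ReceiveMessageRequest("https://sqs.us-east-1.amazonaws.com/123456789012/example-queue")
                        .withMessageAttributeNames("All"); // attributes are only returned if requested
        for (Message message : sqs.receiveMessage(request).getMessages()) {
            MessageAttributeValue encoding = message.getMessageAttributes().get("content-encoding");
            if (encoding != null && "gzip+base64".equals(encoding.getStringValue())) {
                // base64-decode and gunzip message.getBody() before deserializing it
            }
        }
    }
}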
I have an envelope message (EM). In this EM there are some elements which are promoted to the context (for routing), and there is one Any element (called Payload) holding the actual schema instance for further use (other orchestrations subscribe to that Payload instance).
This is a generic service (WCF, request-response) receiving the request message, returning a response message (having some elements of the request and with a new generated unique transaction ID) and a fault message (when applicable).
The Payload must be published on the MessageBox (direct binding) with some of context properties of the EM.
How can this be done most effectively?
Do you know about how to process envelope schemas using an xml disassembler component inside a receive pipeline? It is not clear from your question if you have tried this or not or if this is even the challenge you are facing.
If not then read here: http://msdn.microsoft.com/en-us/library/aa546772(v=BTS.20).aspx
Can I just confirm:
1. The WCF client sends a message matching the Envelope Schema.
2. You wish to debatch the Envelope Schema into one or more Payload messages contained inside it for the Payload processing orchestration.
3. Do you need to wait until all Payload messages are processed before you respond back to the WCF client with a success/fail response (i.e. the response is dependent on the completion of the Payload messages)?
If you don't need point 3, then your WCF orchestration can just send a 'yes' message back to the WCF client without worrying about what happens to your Payload.
The standard XMLReceive on your WCF Receive location should be able to debatch the message automagically as long as it recognises it as an Envelope schema, i.e. contains
<b:schemaInfo is_envelope="yes" xmlns:b="somexmlns"/>
<b:recordInfo body_xpath="xpathtoroot"/>
However, if you do need point 3, a problem I can see is that because you are using WCF request-response, the client is going to want a synchronous response back that depends on the Payload processing. It would be difficult to do this using the standard envelope debatching, as you would need to correlate the progress and results of your Payload processing back to your WCF orchestration. Instead, you might keep your outer (Envelope) schema as a non-envelope schema and use a custom receive pipeline in your WCF orchestration to split out the messages, then loop through each one and call your Payload processing orchestration.
http://mstecharchitect.blogspot.com/2008/12/debatching-biztalk-xml-messages.html