How to handle flow migration for a contract upgrade in Corda?

What's the best way to approach a contract upgrade of states in terms of flows?
Scenario: we have an existing BondStateV1, and the flows use the class type BondStateV1, i.e. queryBy<BondStateV1>.
Now we want to upgrade BondStateV1 to BondStateV2.
How do we change the flows?
Do we keep the old flows and deploy a new FlowCordappV2?
Or, after migrating BondStateV1 to BondStateV2, do we deprecate/delete all the old FlowCordapps, refactor them to handle V2, and redeploy?

State and contract upgrades happen independently of flows, by following the approach given here: https://docs.corda.net/upgrading-cordapps.html#contract-and-state-versioning.
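For reference, the explicit upgrade path described there means providing an UpgradedContract that knows how to turn a V1 state into a V2 state. A minimal sketch (the Bond classes and contract name below are placeholders, not taken from the question):

import net.corda.core.contracts.*
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

// Placeholder states for illustration; the real BondStateV1/BondStateV2 live in your contracts CorDapp.
data class BondStateV1(val faceValue: Int, override val participants: List<AbstractParty>) : ContractState
data class BondStateV2(val faceValue: Int, val rating: String?, override val participants: List<AbstractParty>) : ContractState

class BondContractV2 : UpgradedContract<BondStateV1, BondStateV2> {
    // Fully qualified name of the contract that governs the V1 states being upgraded.
    override val legacyContract: ContractClassName = "com.example.BondContractV1"

    // How an existing V1 state is converted when the explicit upgrade transaction is built.
    override fun upgrade(state: BondStateV1) = BondStateV2(state.faceValue, rating = null, participants = state.participants)

    override fun verify(tx: LedgerTransaction) {
        // Verification logic for V2 states (omitted in this sketch).
    }
}

The upgrade itself is then driven with ContractUpgradeFlow.Authorise and ContractUpgradeFlow.Initiate, as described in the linked documentation.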
But your flow will then need to handle the (potential) presence of both BondStateV1 and BondStateV2 states on the network. You can achieve this by following the instructions here: https://docs.corda.net/upgrading-cordapps.html#how-do-i-upgrade-my-flows.
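For example, a flow upgraded to cope with both versions might bump its version number and query for each state type while unconsumed V1 states remain on the ledger. A sketch, reusing the placeholder Bond classes from above (ListBondsFlow is a hypothetical name):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.ContractState
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.StartableByRPC
import net.corda.core.node.services.queryBy

@InitiatingFlow(version = 2) // bump the version when the flow's behaviour towards counterparties changes
@StartableByRPC
class ListBondsFlow : FlowLogic<List<ContractState>>() {
    @Suspendable
    override fun call(): List<ContractState> {
        // While both versions are live on the network, query for each type explicitly.
        val v1Bonds = serviceHub.vaultService.queryBy<BondStateV1>().states.map { it.state.data }
        val v2Bonds = serviceHub.vaultService.queryBy<BondStateV2>().states.map { it.state.data }
        return v1Bonds + v2Bonds
    }
}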

Related

R3 Corda: how to share historical facts with newly added nodes?

Been reading up on Corda (no actual use yet) and other DLTs to see if we could use it in a project. What I was wondering after reading all the Corda key concepts: what would be the way to share data with everyone, including nodes that are only added later?
I've been reading things like https://corda.net/blog/broadcasting-a-transaction-to-external-organisations/ and https://stackoverflow.com/a/53205303/1382108. But what if another node joins later?
As an example use case: say an organization wants to advertise goods it's selling to all nodes in a network, while price negotiations or actual sales can then happen in private. If a new node joins, what's the best/easiest way to make sure they are also aware of the advertised goods? With blockchain technologies I'd think they'd just replicate the chain that has these facts upon joining but how would it work in Corda? Any links or code samples would be greatly appreciated!
You can share transactions with other nodes that have not previously seen them, but this sort of functionality doesn't come out of the box and has to be implemented using flows by the CorDapp developer.
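As a minimal sketch of what such flows might look like, using Corda's built-in SendTransactionFlow / ReceiveTransactionFlow (the flow names are made up; StatesToRecord.ALL_VISIBLE makes the receiving node record output states it does not participate in):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.*
import net.corda.core.identity.Party
import net.corda.core.node.StatesToRecord
import net.corda.core.transactions.SignedTransaction

// Initiating side: push an already-finalised transaction to a node that has never seen it,
// for example one that only recently joined the network.
@InitiatingFlow
class BroadcastTransactionFlow(
    private val stx: SignedTransaction,
    private val recipient: Party
) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val session = initiateFlow(recipient)
        subFlow(SendTransactionFlow(session, stx))
    }
}

// Handler side: resolve, verify and record the transaction, including states we are not a participant of.
@InitiatedBy(BroadcastTransactionFlow::class)
class BroadcastTransactionFlowHandler(private val otherSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        subFlow(ReceiveTransactionFlow(otherSession, statesToRecord = StatesToRecord.ALL_VISIBLE))
    }
}

A newly joined node can be sent the relevant historical transactions this way; dependencies are resolved and verified automatically as part of ReceiveTransactionFlow.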
As the author of ONIXLabs, I've implemented much of this functionality generally to make it easier for CorDapp developers to consume. There are quite a few feature-rich APIs available on GitHub.
In order to publish a transaction, the ONIXLabs Corda Core API contains functions that extend FlowLogic<*> to provide generalised transaction publishing:
publishTransaction, called on the initiating side of the flow, specifies the transaction to be published and to whom.
publishTransactionHandler, called on the initiated-by/handler side of the flow, specifies the transaction to be recorded and who it is from.
As an example of how these APIs are consumed, take a look at the ONIXLabs Corda Identity Framework, where we have a mechanism for publishing accounts from one node to a collection of counterparties.
PublishAccountFlow consumes the publishTransaction function.
PublishAccountFlowHandler consumes the publishTransactionHandler function.

Can I expose partial fields of a corda state to another cordapp which is in the same network?

The scenario is that I have two CorDapps in the same network. Party B of CorDapp 2 is requesting data from Party A of CorDapp 1. So here Party A needs to hide a couple of fields of the state and expose the rest to Party B as the response.
Is it possible?
I have seen transaction tear-offs, but I am not sure whether they are applicable here.
Please help.
Maybe I'm misunderstanding, but it should really only be a single CorDapp that manages a particular set of states / use cases.
I think it's rare that there would be an instance where you'd want two different CorDapps to be interoperable. This is an issue that should be fixed at the design level.
Also remember that CorDapps should be given only to the nodes you want to actually participate in specific transactions.

Implementing a CorDapp emulating a bank

I have just started learning the workings of R3 Corda and want to create a small bank CorDapp in which there will be one bank that issues cash to users, who can then spend it or transfer it to other nodes.
I want to use the Cash state as well as the Cash contract, but I am not able to understand how I can use them.
Do I make my own state or contract, or directly create flows?
The Tokens SDK has all that functionality; you don't have to implement anything yourself. Just use the provided data types and flows. Below is a link to the most common tasks that you can achieve with the SDK:
https://github.com/corda/token-sdk/blob/master/docs/IWantTo.md
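For instance, issuing some tokens from the bank node over RPC could look roughly like this. This is only a sketch: it assumes the Tokens SDK 1.x DSL (of / issuedBy / heldBy) and the RPC-startable IssueTokens flow, and the token name, amounts, and helper function are placeholders; check the guide above for the exact signatures and packages.

import com.r3.corda.lib.tokens.contracts.types.TokenType
import com.r3.corda.lib.tokens.contracts.utilities.heldBy
import com.r3.corda.lib.tokens.contracts.utilities.issuedBy
import com.r3.corda.lib.tokens.contracts.utilities.of
import com.r3.corda.lib.tokens.workflows.flows.rpc.IssueTokens
import net.corda.core.identity.Party
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.utilities.getOrThrow

// Sketch: the bank issues 1,000 units of a simple "cash-like" token type to a user node.
fun issueCashFromBank(bankRpc: CordaRPCOps, user: Party) {
    val cash = TokenType("DemoCash", 2)                   // 2 fraction digits, like a fiat currency
    val bank = bankRpc.nodeInfo().legalIdentities.first()
    val tokens = 1_000 of cash issuedBy bank heldBy user  // builds a FungibleToken via the DSL
    bankRpc.startFlowDynamic(IssueTokens::class.java, listOf(tokens), emptyList<Party>())
        .returnValue.getOrThrow()
}

MoveFungibleTokens and RedeemFungibleTokens cover the "spend or transfer" part of the use case; the IWantTo.md guide linked above shows both.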
Take a look at the Corda finance module. It sort of implements the use case you are interested in: https://github.com/corda/corda/tree/release/os/4.5/finance.
Have a look at CashIssueFlow, which is used to issue the cash, and CashPaymentFlow, which is used to transfer the cash to another party.
Though the finance module will be replaced by the Tokens SDK in the future, it is a good reference for understanding how to implement such a use case in Corda.
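For instance, driven over RPC, issuing cash and then paying some of it away with the finance module flows looks roughly like this (a sketch; the amounts, the issuer reference, and the helper name are placeholders):

import net.corda.core.identity.Party
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.utilities.OpaqueBytes
import net.corda.core.utilities.getOrThrow
import net.corda.finance.DOLLARS
import net.corda.finance.flows.CashIssueFlow
import net.corda.finance.flows.CashPaymentFlow

// Sketch: the bank node self-issues Cash states, then pays part of them to a user node.
fun issueAndPay(bankRpc: CordaRPCOps, user: Party) {
    val notary = bankRpc.notaryIdentities().first()
    val issuerRef = OpaqueBytes.of(0)   // reference used to distinguish this issuance

    // Issue 1,000 USD of Cash into the bank's own vault.
    bankRpc.startFlowDynamic(CashIssueFlow::class.java, 1000.DOLLARS, issuerRef, notary)
        .returnValue.getOrThrow()

    // Pay 250 USD of that cash to the user; the change stays with the bank.
    bankRpc.startFlowDynamic(CashPaymentFlow::class.java, 250.DOLLARS, user)
        .returnValue.getOrThrow()
}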

Migrating from IBM MQ to javax.jms.* (Weblogic)

I've been looking for days at how one could migrate from using IBM WebSphere MQ to only using the QueueManager within WebLogic 10.3.x server. This would save the cost of licenses for IBM MQ. The closest I came was finding an external link which stated that IBM examples existed that did something similar (moving away from MQ to standard JMS libraries), but when I attempted to follow the link: http://www.developer.ibm.com/isv/tech/sampmq.html
it led to a dead page :\
More specifically, I am interested in:
What classes to use in my attempts to replace the following com.ibm.mq.* classes:
MQEnvironment
MQQueueManager
MQGetMessageOptions
MQPutMessageOptions
and other classes which don't have an obvious javax.jms.* alternative.
Some of the nuances & work-arounds I may encounter in this migration process.
The database we are forwarding the queue messages to is Oracle 11 Standard (with Advanced Queuing), if that changes anything; so basically we are looking to "cut out the middle-man", so to speak. Your learned responses will be highly appreciated!
You seem to use the MQI API for MQ, for which there is no drop-in replacement at hand. There is no other way than to actually rewrite your MQ application logic to use the JMS API.
A good way might be to first migrate to JMS while still using the same WebSphere MQ server, since that allows you to verify your results in a reliable way.
You ask what classes to replace, say, MQGetMessageOptions with. There is no single one-to-one replacement (there are even some aspects of MQI that JMS cannot easily replace). Most of the put and get options are available by setting parameters on sessions and messages in JMS. You really need to understand the JMS API before attempting this switch.
Then, when you have JMS working with WebSphere MQ, you can do as Beryllium suggests: swap the libraries to WebLogic, switch any reference to com.ibm.mq.jms.MQConnectionFactory, configure the new parameters, pray to any available god, and press run :)
I have completed an application which supported both JBossMQ and MQSeries/WebSphere MQ.
The MQSeries specific classes I required were
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.mq.jms.MQTopicConnectionFactory;
These were sufficient to create the javax.jms.QueueConnection/TopicConnection.
As for WebSphere MQ, I connected directly.
As for JBossMQ I looked up the factories using JNDI.
So on top of this there is only JMS.
So the first step is to rewrite your application so that only the initializing part uses WebSphere MQ-specific classes (the ones I have listed above).
Then replace the remaining MQ-specific part with a JNDI/directory lookup for a queue connection factory provided by your application server.
Finally, remove the MQ-specific parts from your source.
Here is a simple example which shows how to send a message.
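A minimal JNDI-based sketch of sending a message (shown here in Kotlin against the plain javax.jms API; the JNDI names are placeholders for whatever your WebLogic JMS module defines):

import javax.jms.Connection
import javax.jms.ConnectionFactory
import javax.jms.Queue
import javax.jms.Session
import javax.naming.InitialContext

fun sendMessage(text: String) {
    // Look up the provider-neutral JMS objects from the application server's JNDI tree.
    val ctx = InitialContext()
    val factory = ctx.lookup("jms/MyConnectionFactory") as ConnectionFactory
    val queue = ctx.lookup("jms/MyQueue") as Queue

    val connection: Connection = factory.createConnection()
    try {
        val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
        val producer = session.createProducer(queue)
        // Rough equivalents of MQPutMessageOptions would be configured here via
        // producer.setDeliveryMode / setPriority / setTimeToLive.
        producer.send(session.createTextMessage(text))
    } finally {
        connection.close()
    }
}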

Routing/filtering messages without orchestrations

A lot of our use cases for BizTalk involve simply mapping and routing HL7 2.x messages from one system to another. Implementing maps and associating them with send/receive ports is generally straightforward, but we also need to do some content-based filtering on the sending side.
For example, we may want to only send ADT A04 and ADT A08 messages to System X if the sending facility is any of 200 facilities (out of a possible 1000 facilities we have in our organization), while System Y needs ADT A04, A05, and A08 for a totally different set of facilities, and only for renal patients.
Because we're just routing messages and not really managing business processes here, utilizing orchestrations for the sole purpose of calling out to the business rule engine is a little overkill, especially considering that we'd probably need a separate orchestration for each ADT type because of how the schemas work. Is it possible to implement filter rules like this without using orchestrations? The filters functionality of send ports looks a little too rudimentary for what we need, but at the same time I'd rather not develop and manage orchestrations.
You might be able to do this with property schemas...
You need to create a property schema and include the properties (from the other schemas) that you want to use for routing. Once you deploy the schema, those properties will be available for use as a filter on the send port. Start from here; you should be able to find examples somewhere...
As others have suggested, you can use a custom pipeline component to call the Business Rules Engine.
And rather than trying to create your own, there is already an open-source one available called the BizTalk Business Rules Engine Pipeline Framework.
By calling BRE from the pipeline you can create complex rules which then set simple context properties on which you can route your messages.
Full disclosure: I've worked with the author of that framework when we were both at the same company.
