Get associated collections via RPC - Corda

We have two states stored in the Corda vault (policy and event). A policy can have many events associated with it.
We are attempting to get a joined result (as if we ran SQL with a JOIN statement) via the RPC client and we can't find a graceful way: either we make several VaultQuery calls or we use a direct JDBC connection to the underlying database and extract the required data. Neither way looks appealing, and we wonder if there is a good way to extract the data.
As we cannot use JPA/Hibernate annotations to link objects inside the CorDapp, we just have the policy_id stored in the event state.

For more complex queries, it is fine and even expected that the user will query the node's database directly using the JDBC connection.
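As an illustration of that JDBC route, here is a minimal client-side sketch that joins the two custom tables directly. It assumes the node exposes its H2 database over TCP and that the policy and event states are QueryableStates mapped to hypothetical tables policy_states and event_states; the connection URL, credentials, and table/column names are placeholders, not details from the question.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PolicyEventJoinQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder H2 URL and credentials; substitute your node's JDBC settings.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:12345/node", "sa", "")) {

            // Hypothetical table and column names produced by the states' ORM mappings.
            String sql = "SELECT p.policy_id, e.event_id "
                       + "FROM policy_states p "
                       + "JOIN event_states e ON e.policy_id = p.policy_id "
                       + "WHERE p.policy_id = ?";

            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "POL-001");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("policy_id") + " -> " + rs.getString("event_id"));
                    }
                }
            }
        }
    }
}

The same join could be issued from any SQL client; the point is that once the states implement QueryableState, their relational tables can be joined like ordinary tables.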

Related

What are the ways to access the Flink state from outside the Flink cluster?

I am new to Apache Flink and building a simple application where I am reading events from a Kinesis stream, say something like
TestEvent{
String id,
DateTime created_at,
Long amount
}
performing an aggregation (sum) on the field amount on the above stream, keyed by id. The transformation is equivalent to the SQL select sum(amount) from testevents group by id, where testevents are all the events received so far.
The aggregated result is stored in Flink state, and I want the result to be exposed via an API. Is there any way to do so?
PS: Can we store the Flink state in DynamoDB and create an API there? Or is there any other way to persist and expose the state to the outside world?
I'd recommend ignoring state for now and instead looking at sinks as the primary way for a streaming application to output results.
If you are already using Kinesis for input, you could also use Kinesis to output the results from Flink. You can then use the Kinesis adapter for DynamoDB that is provided by AWS, as further described in a related Stack Overflow post.
Coming back to your original question: you can query Flink's state and ship a REST API together with your streaming application, but that's a whole lot of work that is not needed to achieve your goal. You could also access checkpointed/savepointed state through the state API, but again that's quite a bit of manual work that can be saved by going the usual route outlined above.
Flink's documentation provides some use cases for queryable state: queryable_state
You can also use the State Processor API to read the state offline.
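To make the sink-based approach concrete, here is a minimal Flink sketch of the sum-by-id aggregation with the Kinesis source and sink stubbed out; the job name and the Tuple2 (id, amount) layout are assumptions for illustration, not the poster's actual TestEvent type.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AmountSumJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the Kinesis consumer: a few in-memory (id, amount) pairs.
        DataStream<Tuple2<String, Long>> events = env.fromElements(
                Tuple2.of("id-1", 10L),
                Tuple2.of("id-2", 5L),
                Tuple2.of("id-1", 7L));

        // Running sum of amount per id: select id, sum(amount) from testevents group by id
        DataStream<Tuple2<String, Long>> sums = events
                .keyBy(t -> t.f0)
                .sum(1);

        // In a real deployment, replace print() with a Kinesis (or other) sink
        // and consume the results downstream, e.g. via the DynamoDB adapter.
        sums.print();

        env.execute("sum-amount-by-id");
    }
}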

Corda persistence API

I am reading the documentation about Corda persistence https://docs.corda.net/api-persistence.html and several points are not clear to me.
Am I right that data is persisted in parallel with the vault storage, i.e. the vault storage is not changed and new tables are added to store the data as well?
When we use the cordaRPCClient.vaultQueryBy method, will it work out by itself which to use: the vault or the data persisted in the custom database tables?
How is the choice made when, for example, only part of the data is available in the tables? Is there any way to tell Corda explicitly that the persisted data should be used for the query?
Here are the answers to your queries:
Yes, you are correct: new tables are created in the vault corresponding to your QueryableState. All states that need to be persisted should implement the QueryableState interface.
Your states are stored in the normal binary format as well, thus cordaRPCClient.vaultQueryBy will always query the vault for the ContractState, not the PersistentState. You could, however, query the custom database tables using a JDBC session / JPA.
Which part of the state needs to be persisted is a call you make depending on your requirements. Persisted data can be queried using custom JDBC queries / JPA. The vaultQuery API always works on the ContractState.
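For the JDBC-session route mentioned above, a rough sketch of a flow that reads a custom table produced by a QueryableState might look like the following; the flow class and the table and column names (event_states, policy_id, event_id) are made up for illustration.

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.StartableByRPC;

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

@StartableByRPC
public class EventIdsForPolicyFlow extends FlowLogic<List<String>> {
    private final String policyId;

    public EventIdsForPolicyFlow(String policyId) {
        this.policyId = policyId;
    }

    @Suspendable
    @Override
    public List<String> call() throws FlowException {
        List<String> eventIds = new ArrayList<>();
        // Hypothetical table and column names from the event state's ORM mapping.
        String sql = "SELECT event_id FROM event_states WHERE policy_id = ?";
        try (PreparedStatement ps = getServiceHub().jdbcSession().prepareStatement(sql)) {
            ps.setString(1, policyId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    eventIds.add(rs.getString("event_id"));
                }
            }
        } catch (SQLException e) {
            throw new FlowException("Custom table query failed", e);
        }
        return eventIds;
    }
}

An RPC client could then start a flow like this and receive the filtered result, rather than issuing several separate vault queries.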

DB SQL for Vault in Corda 3

In the Corda open-source documentation I read the following:
The ORM mapping is specified using the Java Persistence API (JPA) as annotations and is converted to database table rows by the node automatically every time a state is recorded in the node’s local vault as part of a transaction.
Presently the node includes an instance of the H2 database but any database that supports JDBC is a candidate and the node will in the future support a range of database implementations via their JDBC drivers. Much of the node internal state is also persisted there.
Can I replace the H2 DB with a SQL one using JDBC?
As I understand it, FinalityFlow is used to record the transaction in the local vault using the H2 DB.
If I implement a custom flow to record to a SQL DB, do I have to avoid the FinalityFlow call?
Yes, it is possible to run a node with a SQL database other than H2. In fact, support for PostgreSQL and SQL Server has been contributed by the open-source community. See the set-up instructions here. However, be aware that the Corda continuous integration pipeline does not run unit tests or integration tests against these databases, so you use them at your own risk.
Note that in both cases, you configure the node to use the alternative database via the configuration file, and it stores all its data in this alternative database (transactions, states, identities, etc.). You are not expected to access the database directly in a flow to do this, and can rely upon the standard ServiceHub operations and standard flows like FinalityFlow.
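For illustration, a node.conf pointing a node at PostgreSQL looks roughly like the sketch below; the host, database name, and credentials are placeholders, and the exact properties supported depend on your Corda version, so check the database set-up docs.

dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://localhost:5432/corda_node"
    dataSource.user = "corda_user"
    dataSource.password = "corda_password"
}
database = {
    transactionIsolationLevel = READ_COMMITTED
}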

Polling Oracle records from BizTalk using parameters

I need to create a BizTalk 2010 application to poll information, and I have found this useful blog: Polling Oracle Database Using Stored Procedures, Functions, or Packaged Procedures and Functions.
My questions are two-fold:
The blog hard-codes the parameter for the procedure in the package in both the PolledDataAvailable and the PolledStatement. How do I pass actual parameters that are going to change? For example, I want the ability to poll all orders from a customer, not just the hard-coded customer 'ABC'. The ID of the customer will be defined at run time.
Without using extra receive ports, and relying just on the BizTalk monitor (referring back to the blog), how do I examine the results (i.e. view the records being polled) in the BizTalk monitor?
The parameter value may seem to be hard-coded in the call to the function in the query statement SELECT CCM_ADT_PKG.CCM_COUNT('A02') FROM DUAL, but the real parameter values are passed to the generated instance from the input schema, as described in the section "Modify XML content setting up the right parameters."
I don't know how your result message will be used or whether you already send it somewhere via a send port, but you can create a simple FILE send port to store your message instance.

Can a receive port be triggered for 2 different reasons?

I have a normal receive port using a WCF adapter for Oracle that uses a polling query. The problem is that the receive port not only needs to run when the polling query has a hit, but also once per day, regardless of the polling statement.
Is there a way to make this possible without recreating the entire process?
The cleanest way will be to use an additional receive location. So you will end up with one receive port that contains two receive locations, one for each query.
In the past I have done this with the WCF adapter when polling SQL Server. The use of two locations did, unfortunately, require duplicating the schema to account for the different namespaces. You will probably need two different (and essentially identical) schemas as well.
WCF-SQL polling locations require distinct InboundId values, while WCF Oracle polling (as you have noted in the comments) requires a different PollingId for each receive location.
The ESB Toolkit includes pipeline components to remove and add namespaces, if you need downstream applications to work with only a single schema on the messages coming from both locations and/or do not want to also duplicate a BizTalk map.
Change your polling statement so that it has an OR CURRENT_TIME() BETWEEN ....
That way it will trigger at the time you want.

Resources