I synced an IIB node with RIT and deployed a flow that has MQInput and FileOutput nodes. Only the MQInput node is recognized and synced by RIT, not the FileOutput node. How do I make RIT recognize and sync the FileOutput node so I can create test suites?
-J
Jane,
As far as I remember, RIT will only be able to recognize queues, because communication between Rational Integration Tester and IBM Integration Bus is performed using a WebSphere MQ queue manager.
https://www.ibm.com/support/knowledgecenter/SSBLQQ_8.7.0/com.ibm.rational.rit.integration.doc/topics/c_ritwmb_software_requirements.html
To solve your issue, you will have to add the FileOutput manually and create a listener for it.
I see the following statement in the CorDapp documentation:
Each Cordapp is installed at the level of the individual node
In a real deployment, when I create a Corda application, should I distribute the application to every network participant, so that the CorDapp is deployed on all nodes in the network?
If not, how will initiating flows be associated with their responding flows?
Yes, you'll distribute it to all appropriate nodes in the network. Once you've developed your CorDapp, you'd simply place the CorDapp JAR in the cordapps folder of each node (probably via a Jenkins job), and after a node restart, the node should be able to load the new workflows and states present in your CorDapp.
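As for how the initiating and responding sides find each other: the association is declared in the CorDapp code itself, which is why the same flow classes need to be installed on every participating node. A minimal Kotlin sketch with made-up flow names (not from the question):

```kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.InitiatedBy
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.StartableByRPC
import net.corda.core.identity.Party
import net.corda.core.utilities.unwrap

// Runs on the node that starts the flow.
@InitiatingFlow
@StartableByRPC
class PingFlow(private val counterparty: Party) : FlowLogic<String>() {
    @Suspendable
    override fun call(): String {
        val session = initiateFlow(counterparty) // opens a session with the counterparty
        return session.sendAndReceive<String>("ping").unwrap { it }
    }
}

// Runs automatically on the counterparty's node; the @InitiatedBy
// annotation is what associates it with PingFlow.
@InitiatedBy(PingFlow::class)
class PongFlow(private val otherSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        otherSession.receive<String>().unwrap { it }
        otherSession.send("pong")
    }
}
```

Because both classes ship in the same JAR, installing that JAR on each node is what lets Corda run PongFlow automatically whenever a peer starts PingFlow.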
We need to change our Corda Network infrastructure. Currently we are working with one network map, three notaries (RAFT) and four additional nodes.
We will replace our network map and one notary server (notaryCluster one) with new servers.
Our plan is to perform the following steps:
1. Stop all Nodes
2. Change all node.conf files as needed to point to the new network map and the new notary
3. Deploy Networkmap and Notary service in new servers from scratch (not reusing data from old notary and network map)
4. Start the new network map, the new notary servers, and the rest of the nodes (not the old network map and notary)
Is this process correct? Will existing transactions remain in the system, and will we be able to keep working with them?
Thanks!!
There are several things you need to consider:
- Stop all Nodes
You need to account for any flows that are currently in flight in order to perform a clean shutdown of each node.
Version 3.1 of Corda adds a "Draining mode" feature (a rough RPC sketch follows this answer), under which:
- Commands requesting the start of new flows through RPC will be rejected.
- Scheduled flows that come due will be ignored.
- Initial P2P session messages will not be processed, meaning peers will not be able to initiate new flows involving the node.
- Deploy Network map and Notary service in new servers from scratch (not reusing data from old notary and network map)
You would want to keep the data from the old notary; otherwise the notary would lose track of which states have been consumed, and the network would lose its guarantee against double spends.
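Here is the rough sketch mentioned above of enabling draining mode over RPC before the shutdown. The address and credentials are placeholders, and the same thing can be done from the node shell (something like `run setFlowsDrainingModeEnabled enabled: true`):

```kotlin
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort

fun main() {
    // Placeholder RPC address and credentials for the node being drained.
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    val connection = client.start("rpcUser", "rpcPassword")
    val proxy = connection.proxy

    // Reject new flows and let in-flight ones finish before shutting down.
    proxy.setFlowsDrainingModeEnabled(true)
    println("Draining mode enabled: ${proxy.isFlowsDrainingModeEnabled()}")

    connection.notifyServerAndClose()
}
```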
Background-
I am extremely new to Corda and to blockchain platforms in general. In the past few months I have had some experience working on a small project with Ethereum as the platform. The Ethereum blockchain was leveraged as a ledger to record transactions as proof of existence: for some action (success/failure), we recorded a corresponding transaction on the blockchain. Consider it a proof of concept demonstrating how to interact with nodes running on an Ethereum blockchain.
Infrastructure - Node.js web services, two Ethereum (PoA) nodes
Question-
I would now like to port this running example onto the Corda blockchain. How would I achieve this with the bare minimum of changes? That is, if I have a Corda network with two nodes running on my system, I want my web services to connect to one or both of the nodes and save the transaction (in its state). I understand that this is probably not what Corda is primarily meant for; consider this question my first step towards interacting with Corda from Node.js web services.
Any input is highly appreciated.
I recommend you go through the documentation first. Your transaction will be a state, and you need to build contracts and flows for a transaction to happen. Transactions happen via flows, which are initiated from the CRaSH shell or an RPC client. AFAIK, this client is available in Kotlin or Java, so you'll have to create a Java or Kotlin application to instantiate it. In that application you'll expose REST endpoints that communicate with the client and initiate your flows, and you can call those REST endpoints from your Node.js application. The contracts and flows themselves have to be created in the CorDapp. This is the bare minimum.
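To make the RPC part concrete, here is a minimal Kotlin sketch of connecting to a node and listing the network, with a comment showing where a flow call would go. The port, credentials, and the RecordActionFlow name are placeholders, not part of any real API beyond the Corda RPC classes shown:

```kotlin
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort

fun main() {
    // Placeholder RPC address and credentials for one of your two local nodes.
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    val connection = client.start("rpcUser", "rpcPassword")
    val proxy = connection.proxy

    // Sanity check: list the nodes this node can see on the network.
    proxy.networkMapSnapshot().forEach { println(it.legalIdentities.first().name) }

    // From a REST handler in this same JVM you would kick off a flow, e.g.
    //   proxy.startFlow(::RecordActionFlow, "action-succeeded", counterparty)
    // where RecordActionFlow is whatever flow your CorDapp defines for
    // marking an action on the ledger (a made-up name here).

    connection.notifyServerAndClose()
}
```

A framework like Spring Boot can wrap this proxy in REST endpoints that your Node.js services then call over HTTP.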
I just found that there is a library.
Take a look at this: https://gitlab.com/bluebank/braid
It can help you.
I recently started exploring Corda. I have installed Corda and the sample CorDapp (cordapp_example) on my local machine, ran the nodes, and tried to access the IOUs of one of the nodes (let's say PartyA) using the URL below, but it just shows an empty []. I also noticed this error:
netty.AMQPClient.operationComplete - Failed to connect to 20.198.218.65:10011 {}
Note that this IP address is not my local address.
http://localhost:10013/api/example/ious
[] is the standard output until an IOU has actually been issued onto the ledger. I think port 10013 corresponds to PartyC's webserver, so you'd have to create an IOU involving PartyC first.
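For reference, one way to get that first IOU onto the ledger without the web front end is over RPC. The flow class, party name, RPC port, and credentials below are assumptions based on the standard cordapp-example sample, so adjust them to match your copy:

```kotlin
import com.example.flow.ExampleFlow.Initiator // IOU-issuing flow assumed from cordapp-example
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.identity.CordaX500Name
import net.corda.core.messaging.startFlow
import net.corda.core.utilities.NetworkHostAndPort
import net.corda.core.utilities.getOrThrow

fun main() {
    // Assumed RPC port/credentials for PartyA - check your build.gradle deployNodes block.
    val connection = CordaRPCClient(NetworkHostAndPort("localhost", 10006)).start("user1", "test")
    val proxy = connection.proxy

    // Issue a 10-unit IOU with PartyC as the counterparty, so PartyC's vault has something in it.
    val partyC = proxy.wellKnownPartyFromX500Name(CordaX500Name("PartyC", "Paris", "FR"))
        ?: error("PartyC not found on the network map")
    val stx = proxy.startFlow(::Initiator, 10, partyC).returnValue.getOrThrow()
    println("Issued IOU in transaction ${stx.id}")

    connection.notifyServerAndClose()
}
```

Once a flow like this completes, http://localhost:10013/api/example/ious should return the new IOU instead of [].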
I have 3 nodes which I am using for a multi-node setup. I am thinking of following the structure below:
Controller: keystone, horizon, g-reg, g-api, n-api, n-crt, n-sch, n-cond, n-cauth, n-obj, n-novnc, n-xvnc, c-api, c-sch (this node will have mysql and rabbitmq as well)
Network: q-svc, q-agt, q-dhcp, q-l3, q-meta, quantum
Compute: n-cpu, c-vol
I have a few questions. 1. On the compute node, do I need to keep n-api? Also, what else is needed apart from n-api and c-vol? Is q-agt needed on compute? 2. Will I need c-api along with c-vol? Does the compute node need RabbitMQ installed?
Q1)
You generally don't want nova-api on the compute nodes. It's better on the controller.
nova-api makes use of system credentials hard-coded in its paste config file, and you don't want that paste file exposed on any node that a user might compromise with a hypervisor escape.
nova-compute and nova-volume are probably all you need. They do communicate with the scheduler over RabbitMQ, so make sure that's working =P
Q2)
You don't NEED Cinder to run an OpenStack cloud, though I see no reason not to include it.
I don't know what impact disabling Cinder has on the devstack stack.sh script; I've never done it.
As for RabbitMQ, see the answer above.