Shutdown Hook Corda integration testing - corda

I have the following Corda service. It starts up a Jetty server:
@CordaService
class IOUService(private val serviceHub: AppServiceHub) : SingletonSerializeAsToken() {
    init {
        val port = serviceHub.myInfo.addresses.first().port - 1002
        log.println("IOUService init was called...")
        log.println("Port: $port")
        val jettyServer = JettyServer()
        jettyServer.start(port)
    }
}
My problem is how to release the started Jetty port when running integration tests. Here are two example tests (basically the same test twice, to illustrate the problem):
@Test
fun `node test`() = withDriver {
    val (partyAHandle, partyBHandle) = startNodes(bankA, bankB)
    assertEquals(bankB.name, partyAHandle.resolveName(bankB.name))
    assertEquals(bankA.name, partyBHandle.resolveName(bankA.name))
}

@Test
fun `node test2`() = withDriver {
    val (partyAHandle, partyBHandle) = startNodes(bankA, bankB)
    assertEquals(bankB.name, partyAHandle.resolveName(bankB.name))
    assertEquals(bankA.name, partyBHandle.resolveName(bankA.name))
}
The first test starts up three nodes (a notary, BankA, and BankB) with the following details:
Notary:
Advertised P2P messaging addresses : localhost:10000
RPC connection address : localhost:10001
RPC admin connection address : localhost:10002
Jetty Port: 8998
BankA:
Advertised P2P messaging addresses : localhost:10004
RPC connection address : localhost:10005
RPC admin connection address : localhost:10006
Jetty Port: 9002
BankB:
Advertised P2P messaging addresses : localhost:10008
RPC connection address : localhost:10009
RPC admin connection address : localhost:10010
Jetty Port: 9006
Unfortunately the second test fails, since the Jetty ports are still bound:
[ERROR] 14:22:04,825 [driver-pool-thread-0] internal.Node.installCordaServices - Corda service com.example.flows.IOUService failed to instantiate. Reason was: Address already in use [errorCode=1pryyp4, moreInformationAt=https://errors.corda.net/OS/4.1/1pryyp4]
java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:9006
How can I register a shutdown hook during integration testing in order to shut down the Jetty servers?
The example code can be found here:
https://github.com/altfatterz/learning-corda

A proper lifecycle for Corda services is currently being looked at, so hopefully you will be able to do this in the future.
For now, there is no simple way to do this from within the node.
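One stopgap (a sketch only, not an official Corda hook) is to register a JVM shutdown hook in the service's init block so the embedded Jetty server is stopped when the node's JVM exits. It assumes your JettyServer wrapper exposes a stop() method, and it only helps when nodes run in their own processes (startNodesInProcess = false); with in-process driver nodes the test JVM never exits between tests, so the hook does not fire and the ports stay bound, which is exactly the failure shown above.

import net.corda.core.node.AppServiceHub
import net.corda.core.node.services.CordaService
import net.corda.core.serialization.SingletonSerializeAsToken

@CordaService
class IOUService(private val serviceHub: AppServiceHub) : SingletonSerializeAsToken() {
    init {
        val port = serviceHub.myInfo.addresses.first().port - 1002
        val jettyServer = JettyServer()
        jettyServer.start(port)
        // Stopgap: release the Jetty port when this node's JVM shuts down.
        // Assumes JettyServer exposes a stop() that delegates to Jetty's Server.stop().
        Runtime.getRuntime().addShutdownHook(Thread { jettyServer.stop() })
    }
}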

Related

ClientDeviceAuthorizer: Device isn't authorized to connect

I have a client device (thing-is-1) and a Greengrass core device (Corething2).
I have deployed the components: MQTT Moquette Broker, MQTT Bridge, Client Device Authenticator, and IP Detector.
My MQTT broker is listening on port 8883.
I have double-checked all my AWS IoT policies for the core: they include everything mentioned in the AWS documentation.
But when I try to connect my client to my core after discovery, it gives the following error on the client device:
Performing greengrass discovery...
awsiot.greengrass_discovery.DiscoverResponse(gg_groups=[awsiot.greengrass_discovery.GGGroup(gg_group_id='greengrassV2-coreDevice-Corething2', cores=[awsiot.greengrass_discovery.GGCore(thing_arn='arn:aws:iot:eu-west-1:...:thing/Corething2', connectivity=[awsiot.greengrass_discovery.ConnectivityInfo(id='', host_address='', metadata='', port=8883), awsiot.greengrass_discovery.ConnectivityInfo(id='', host_address='', metadata='', port=8883)])], certificate_authorities=['-----BEGIN CERTIFICATE-----\n..\n-----END CERTIFICATE-----\n'])])
Trying core arn:aws:iot:eu-west-1:..:thing/Corething2 at host port 8883
Connection failed with exception AWS_IO_SOCKET_TIMEOUT: socket operation timed out.
Trying core arn:aws:iot:eu-west-1:..:thing/Corething2 at host port 8883
Connection failed with exception AWS_ERROR_MQTT_UNEXPECTED_HANGUP: The connection was closed unexpectedly.
All connection attempts failed
Now if I go to my Core device and check the greengrass.log.. I see this:
2022-04-11T15:07:43.899Z [INFO] (nioEventLoopGroup-5-3) com.aws.greengrass.device.DeviceAuthClient: Creating new session. {}
2022-04-11T15:07:44.454Z [INFO] (nioEventLoopGroup-5-3) com.aws.greengrass.device.SessionManager: Created the session. {sessionId=d65a97e6-1919-4798-8c2d-bb9b44398856}
2022-04-11T15:07:44.473Z [INFO] (nioEventLoopGroup-5-3) io.moquette.broker.metrics.MQTTMessageLogger: C->B CONNECT . {}
2022-04-11T15:07:44.473Z [INFO] (nioEventLoopGroup-5-3) com.aws.greengrass.mqttbroker.ClientDeviceAuthorizer: Retrieved client session. {clientId=thing-is-1, sessionId=d65a97e6-1919-4798-8c2d-bb9b44398856}
2022-04-11T15:07:44.799Z [INFO] (nioEventLoopGroup-5-3) com.aws.greengrass.mqttbroker.ClientDeviceAuthorizer: Device isn't authorized to connect. {clientId=thing-is-1, sessionId=d65a97e6-1919-4798-8c2d-bb9b44398856}
2022-04-11T15:07:44.799Z [INFO] (nioEventLoopGroup-5-3) com.aws.greengrass.device.SessionManager: Closing the session. {sessionId=d65a97e6-1919-4798-8c2d-bb9b44398856}
2022-04-11T15:07:44.800Z [INFO] (nioEventLoopGroup-5-3) io.moquette.broker.MQTTConnection: Authenticator has rejected the MQTT credentials CId=thing-is-1, certificate chain=[[
[
Version: V3
Subject: CN=AWS IoT Certificate
bla bla bla
]]. {}
2022-04-11T15:07:44.800Z [INFO] (nioEventLoopGroup-5-3) io.moquette.broker.MQTTConnection: Client didn't supply any password and MQTT anonymous mode is disabled CId=thing-is-1. {}
2022-04-11T15:07:44.802Z [INFO] (nioEventLoopGroup-5-3) io.moquette.broker.metrics.MQTTMessageLogger: Channel Inactive. {}
2022-04-11T15:08:41.247Z [INFO] (pool-1-thread-4) com.aws.greengrass.detector.IpDetectorManager: Acquired host IP addresses. {IpAddresses=[/, /]}
What am I missing here? Is there a checklist I can refer to for ruling out possibilities?
One question about the certs: do I need to add my client's public certificate somewhere on the core? I didn't find that anywhere in the AWS docs.
Also, I see that the session is created, but then my authenticator rejects the client.
My Client Device Authenticator has a completely permissive configuration.
My thing-is-1 is associated with my core device, but the core device and the client device do not belong to the same thing group (I don't think that makes any difference).
ClientDeviceAuth Component config:
{
  "reset": [],
  "merge": {
    "reset": [],
    "merge": {
      "deviceGroups": {
        "formatVersion": "2021-03-05",
        "definitions": {
          "MyDeviceGroup": {
            "selectionRule": "thingName: thing-*",
            "policyName": "MyClientDevicePolicy"
          }
        },
        "policies": {
          "MyClientDevicePolicy": {
            "AllowConnection": {
              "statementDescription": "Allow client devices.",
              "operations": ["*"],
              "resources": ["*"]
            }
          }
        }
      }
    }
  }
}
I tried to be informative and concise at the same time. Let me know if I'm missing any info that might help give a better understanding of the issue, and I'll update the question accordingly.
Your client device auth configuration seems to have "merge" as a child of "merge". That isn't correct: the device groups and policies should be keys directly under the top-level "merge".
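For illustration, the corrected update should look roughly like this, with the inner "reset"/"merge" wrapper removed so that deviceGroups sits directly under the top-level merge. This is a sketch reusing the same group and policy names as above; double-check it against the client device auth component documentation.

{
  "reset": [],
  "merge": {
    "deviceGroups": {
      "formatVersion": "2021-03-05",
      "definitions": {
        "MyDeviceGroup": {
          "selectionRule": "thingName: thing-*",
          "policyName": "MyClientDevicePolicy"
        }
      },
      "policies": {
        "MyClientDevicePolicy": {
          "AllowConnection": {
            "statementDescription": "Allow client devices.",
            "operations": ["*"],
            "resources": ["*"]
          }
        }
      }
    }
  }
}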

net.corda.client.rpc.RPCException: Cannot connect to server(s). Tried with all available servers

OS: Windows 10 and Ubuntu 18.04
Corda: 4.4
I wanted to learn CordaRPCOps so I started by using the template’s Cordform deployNodes task that provides me with three running nodes.
First I used the following code running locally to connect to PartyA’s Corda node.
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
CordaRPCClient client = new CordaRPCClient(nodeAddress);
CordaRPCConnection connection = client.start("user1", "test");
CordaRPCOps cordaRPCOps = connection.getProxy();
This worked great.
Then I tried connecting from a different PC on the same network with the following change:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
This failed with the following error:
net.corda.client.rpc.RPCException: Cannot connect to server(s). Tried with all available servers.
 
Assuming this was network related I went back to the local PC and ran the same code:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
This also failed. So I decided to try the PC name instead of the IP address. This failed both locally and on the other PC.
If the rpcSettings in the node.conf file use “localhost”:
rpcSettings {
    address="localhost:10006"
    adminAddress="localhost:10046"
}
…you cannot connect to the node using anything but “localhost” or “127.0.0.1”
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
This also means you cannot connect to the node from across the network.
If you replace “localhost” with the IP address or the computer name:
rpcSettings {
    address="192.168.1.149:10006"
    adminAddress="localhost:10046"
}
… then you can reference the node by either IP or computer name locally, or from another PC on the network:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
You'll notice the change on the opening screen of the node shell when the nodes start and list their respective information.
--- Corda Open Source 4.4 (21e8c4f) -------------------------------------------------------------
Logs can be found in : F:\corda\Java\cordapp-template-java\build\nodes\PartyA\logs
! ATTENTION: This node is running in development mode! This is not safe for production deployment.
Jolokia: Agent started with URL http://127.0.0.1:7006/jolokia/
Advertised P2P messaging addresses : localhost:10005
RPC connection address : 192.168.1.149:10006
RPC admin connection address : localhost:10046

EUCA 4.4.5 VPCMIDO Instances Terminate at Launch

I have set up a small test cloud on three pieces of hardware. It works fine in EDGE mode, but when I try to configure it for VPCMIDO, new instances begin to launch but then time out and move to a terminated state. I can also see the instances' initial volume and config data appear in the NC and CC data directories. Below are my system layout and network.json.
HOST 1 : CLC/UFS/WALRUS/MIDO CLUSTER/MIDO GATEWAY/MIDOLMAN AGENT:
em1 (All Services including Mido Cluster): 10.0.0.21
em3 (Target VPCMIDO Adapter): 10.0.0.22
HOST 2 : CC/SC
em1 : 10.0.0.23
HOST 3 : NC/MIDOLMAN AGENT
em1 : 10.0.0.24
{
  "Mido": {
    "Gateways": [
      {
        "Ip": "10.0.0.22",
        "ExternalDevice": "em3",
        "ExternalCidr": "192.168.0.0/16",
        "ExternalIp": "192.168.0.2",
        "ExternalRouterIp": "192.168.0.1"
      }
    ]
  },
  "Mode": "VPCMIDO",
  "PublicIps": [
    "10.0.100.1-10.0.100.254"
  ]
}
I may be misunderstanding the intent of reserving an interface just for the mido gateway. All of my eucalyptus/zookeeper/cassandra/midonet configs use the 10.0.0.21 interface and seem to communicate fine. The midonet tunnel zone reports my CLC host and NC host successfully in the tunnel zone. The only part of my config that references the interface I intend to use for the midonet gateway is the network.json. No errors were returned at any time during my config so I think I may be missing something conceptual.
You may need to start eucanetd as described here:
https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/starting_euca_clc.html
The eucanetd component in VPCMIDO mode runs on the Cloud Controller and is responsible for controlling MidoNet.
When eucanetd is not running, instances will fail to start because the required network resources will not be created.
I configured a bridge on the NC, and instances were able to launch and I no longer got an error in my nc.log. The docs and the comments in eucalyptus.conf tell me I shouldn't need to do this in VPCMIDO networking mode: https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configuring_bridge.html
Despite all that, adding the bridge fixed the issue.

Running corda nodes in different machines

I have a problem in Corda with performing an IOU from Party A to Party B.
Below are the configuration details:
Three node.conf files [Party A, Party B, and Notary].
The application is hosted on AWS, so in the node config files I used the machines' IPs instead of "localhost". I used the same IP for the Notary and Party A, and a different one for Party B.
Network bootstrapping was successful; I moved the newly created node folders to their respective EC2 instances and started the nodes.
But when I perform the IOU from Party A to Party B, it does not work. Please suggest how to resolve the issue.
I see the following error in the node logs:
E 11:34:47+0000 [main] internal.Node.run - Exception during node startup {}
java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_161]
at sun.nio.ch.Net.bind(Unknown Source) ~[?:1.8.0_161]
at sun.nio.ch.Net.bind(Unknown Source) ~[?:1.8.0_161]
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source) ~[?:1.8.0_161]
at io.netty.channe
Reference: https://docs.corda.net/tutorial-cordapp.html#running-nodes-across-machines
I got nodes on different hosts to communicate in the following way.
First I deploy the node with a node.conf file which contains:
"p2pAddress" : "host:10012",
"rpcSettings" : {
"address" : "host:10014",
"adminAddress" : "host:10013"
}
Then, after the node is deployed, I change the rpcSettings host to localhost:
"rpcSettings" : {
"address" : "localhost:10014",
"adminAddress" : "localhost:10013"
}
This approach looks strange, but after this manipulation the nodes started to communicate.
It is related to the NodeInfo file, which is generated when the node is deployed and must contain the host for RPC; after that, RPC needs localhost for interaction. I think it might be a bug, but it works fine this way.
When using rpcSettings in Corda V3.1, the address and adminAddress need to use 0.0.0.0:
rpcSettings {
    address="0.0.0.0:10003"
    adminAddress="0.0.0.0:10103"
}
These endpoints are not advertised externally, so 0.0.0.0 is solely a local binding for Corda.
This should resolve the following exception when starting your CorDapp using a public IP or DNS name:
E 21:28:56+0000 [main] internal.Node.run - Exception during node
startup {} io.netty.channel.unix.Errors$NativeIoException: bind(..)
failed: Cannot assign requested address
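Putting the two answers together, a minimal node.conf sketch for a node that other machines can reach might look like the following; the hostname and ports are hypothetical placeholders. p2pAddress carries the externally advertised host and ends up in the NodeInfo, while the rpcSettings addresses are only local bind addresses.

p2pAddress="ec2-host.example.com:10005"    # advertised to other nodes via the NodeInfo
rpcSettings {
    address="0.0.0.0:10006"                # local bind only, not advertised externally
    adminAddress="0.0.0.0:10046"
}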

Passing hostname to netty

Background: I've got two machines with identical hostnames. I need to set up a local Spark cluster for testing. Setting up a master and a worker works fine, but trying to run an application with the driver causes problems: Netty doesn't seem to pick the correct host (regardless of what I put in there, it just picks the first host).
Identical hostnames:
$ dig +short corehost
192.168.0.100
192.168.0.101
Spark config (used by master and the local worker):
export SPARK_LOCAL_DIRS=/some/dir
export SPARK_LOCAL_IP=corehost    # tried various values like 192.168.0.x
export SPARK_MASTER_IP=corehost   # for local, master and the driver
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_DIR=/some/dir
Spark starts up and I can see the worker in the web UI.
When I run the Spark "job" below:
val conf = new SparkConf().setAppName("AaA")
  // tried 192.168.0.x and localhost
  .setMaster("spark://corehost:7077")
val sc = new SparkContext(conf)
I get this exception:
15/04/02 12:34:04 INFO SparkContext: Running Spark version 1.3.0
15/04/02 12:34:04 WARN Utils: Your hostname, corehost resolves to a loopback address: 127.0.0.1; using 192.168.0.100 instead (on interface en1)
15/04/02 12:34:04 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/04/02 12:34:05 ERROR NettyTransport: failed to bind to corehost.home/192.168.0.101:0, shutting down Netty transport
...
Exception in thread "main" java.net.BindException: Failed to bind to: corehost.home/192.168.0.101:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
Process finished with exit code 1
Not sure how to proceed... it's a whole jungle of IP addresses.
Not sure if this is a Netty issue either.
My experience with this exact problem is that it comes down to how things are set up locally. Try being more explicit in your Spark driver code: add SPARK_LOCAL_IP and the driver host IP to the config:
val conf = new SparkConf().setAppName("AaA")
  .setMaster("spark://localhost:7077")
  .set("spark.local.ip", "192.168.1.100")
  .set("spark.driver.host", "192.168.1.100")
This should tell Netty which of the two identical hosts to use.
