net.corda.client.rpc.RPCException: Cannot connect to server(s). Tried with all available servers - corda

OS: Windows 10 and Ubuntu 18.04
Corda: 4.4
I wanted to learn CordaRPCOps, so I started with the template's Cordform deployNodes task, which provides three running nodes.
First I used the following code running locally to connect to PartyA’s Corda node.
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
CordaRPCClient client = new CordaRPCClient(nodeAddress);
CordaRPCConnection connection = client.start("user1", "test");
CordaRPCOps cordaRPCOps = connection.getProxy();
This worked great.
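For reference, here is roughly what that snippet looks like as a complete, self-contained class; the nodeInfo() sanity check and the explicit close are additions for illustration, not part of the original code (user1/test are the same RPC credentials used above):
import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class RpcConnectExample {
    public static void main(String[] args) {
        // Same address and credentials as the snippet above.
        NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
        CordaRPCClient client = new CordaRPCClient(nodeAddress);
        CordaRPCConnection connection = client.start("user1", "test");
        try {
            CordaRPCOps cordaRPCOps = connection.getProxy();
            // Quick sanity check that the proxy works: print the node's legal identities.
            System.out.println(cordaRPCOps.nodeInfo().getLegalIdentities());
        } finally {
            // Tell the node we are done and release the RPC connection.
            connection.notifyServerAndClose();
        }
    }
}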
Then I tried connecting from a different PC on the same network with the following change:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
This failed with the following error:
net.corda.client.rpc.RPCException: Cannot connect to server(s). Tried with all available servers.
 
Assuming this was network-related, I went back to the local PC and ran the same code:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
This also failed. So I decided to try the PC name instead of the IP address. This failed both locally and on the other PC.

If the rpcSettings in the node.conf file use “localhost”:
rpcSettings {
    address="localhost:10006"
    adminAddress="localhost:10046"
}
…you cannot connect to the node using anything but “localhost” or “127.0.0.1”:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
This also means you cannot connect to the node from across the network.
If you replace “localhost” with the IP address or the computer name:
rpcSettings {
    address="192.168.1.149:10006"
    adminAddress="localhost:10046"
}
… then you can reference the node by either IP or computer name locally, or from another PC on the network:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
You’ll notice the change on the opening screen of the node shell when the nodes start and list their respective information:
--- Corda Open Source 4.4 (21e8c4f) -------------------------------------------------------------
Logs can be found in : F:\corda\Java\cordapp-template-java\build\nodes\PartyA\logs
! ATTENTION: This node is running in development mode! This is not safe for production deployment.
Jolokia: Agent started with URL http://127.0.0.1:7006/jolokia/
Advertised P2P messaging addresses : localhost:10005
RPC connection address : 192.168.1.149:10006
RPC admin connection address : localhost:10046
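If the remote connection still fails after updating rpcSettings, it's worth confirming that the RPC port is reachable from the other PC at all (a host firewall, for example, can still block it). A minimal, throwaway reachability check in plain Java might look like this; the address and timeout are just examples:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            // Try to open a TCP connection to the node's RPC port with a 3 second timeout.
            socket.connect(new InetSocketAddress("192.168.1.149", 10006), 3000);
            System.out.println("RPC port is reachable");
        } catch (IOException e) {
            System.out.println("RPC port is NOT reachable: " + e.getMessage());
        }
    }
}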

Related

Integrating oVirt and OpenStack, but got MTU interface error message

I want to manage oVirt VMs' networking with OpenStack Neutron (OVN).
My setup:
oVirt v4.4.10 (created one VM named vm1)
OpenStack Victoria (5.4.0)
My steps:
created network 'vlan-1' on the OpenStack dashboard network page.
created provider 'neutron' and selected the OpenStack networking type.
imported network 'vlan-1' from the 'neutron' provider.
created nic1 in vm1 and selected network 'vlan-1'.
ran vm1.
But vm1 failed to start. The engine.log shows:
2022-11-02 12:05:17,674+08 WARN [org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy] (EE-ManagedThreadFactory-engine-Thread-11256) [a0d16d0c-f5df-4992-808f-0095daff628a]
Host binding id for external network vlan-1 on host ovn-comp-nodes is null, using host id 192.168.3.215 to allocate vNIC nic1 instead. Please provide an after_get_caps hook for the plugin type OPEN_VSWITCH on host ovn-comp-nodes
2022-11-02 12:05:23,180+08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-21) [6f97b9be] EVENT_ID: VM_DOWN_ERROR(119), VM ovn-comp-node1 is down with error. Exit message:
Cannot get interface MTU on 'vlan-1': No such device.
I am reading oVirt's source code but still can't find the reason...

How to get the webport address of node in Corda?

How do I get the running port address of a node declared in the NodeDriver.kt file, as shown below:
startWebserver(startNode(
        providedName = CordaX500Name("Common-name", "Organization", "Locality", "CN"),
        rpcUsers = listOf(user)).getOrThrow())
Similarly, how do I get the port address declared in build.gradle (when running through the terminal)?
When using the node driver, you get the address of the node's webserver using:
val webserverHandle = startWebserver(partyAHandle).getOrThrow()
val webserverAddress = webserverHandle.listenAddress
When running the node via the terminal:
In Corda 3.1, you could retrieve the webserver address by parsing the node.conf file for the webAddress property.
In Corda 4+, the webAddress property is removed from the node.conf file; you'll have to pass the webserver address as a separate configuration property instead.
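For the Corda 3.1 case, a sketch of pulling webAddress out of node.conf with the Typesafe Config library (the same HOCON parser Corda uses for node.conf) might look like this; the file path is only an example and it assumes the property is still present:
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

import java.io.File;

public class WebAddressReader {
    public static void main(String[] args) {
        // Example path only; point this at the node's actual node.conf.
        Config config = ConfigFactory.parseFile(new File("build/nodes/PartyA/node.conf"));
        // In Corda 3.x the webserver's listen address is stored under the webAddress key.
        String webAddress = config.getString("webAddress");
        System.out.println("Webserver listening on " + webAddress);
    }
}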

rabbitMQ.Client in .NET System.IO.IOException: connection.start was never received, likely due to a network timeout

I am writing an AMQP 1.0 client (using RabbitMQ.Client in .NET) for a broker whose operator provided me with the following information:
amqps://brokerRemoteHostName:5671
certificate_openssl.p12
password for certificate as a string "mypassword"
queue name
I developed the following code in Visual Studio which is supposed to work (based on long searches on the web):
var cf = new ConnectionFactory();
cf.Uri = new Uri("amqps://brokerRemoteHostName:5671");
cf.Ssl.Enabled = true;
cf.Ssl.ServerName = "brokerRemoteHostName";
cf.Ssl.CertPath = @"C:\Users\mahmoud\Documents\certificate_openssl.p12";
cf.Ssl.CertPassphrase = "myPassword";
var connection = cf.CreateConnection();
However, the output shows an exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException:
None of the specified endpoints were reachable ---> System.IO.IOException:
connection.start was never received
likely due to a network timeout, as seen in the image.
Where line 50 corresponds to the line where we create the connection.
I appreciate your kind assistance on the error above.
If you're connecting to a Docker container, you need to publish port 5672 in addition to port 15672 when creating the container. For those using SSL, the port would be 5671 instead of 5672.
Example: docker run -d --hostname my-rabbit --name rabbitmq --net customnet -p customport:15672 -p 5672:5672 rabbitmq:3-management
You would connect from client by calling this: ConnectionFactory factory = new ConnectionFactory() { HostName = "localhost" };.
Feel free to pass in username and password if those were changed.
The official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq) starts the RabbitMQ broker on port 5672, but here the .NET RabbitMQ library expects to see the broker on port 5673, which differs from what the container actually exposes. The solution is just to remap 5672 to the expected port 5673:
docker run -d --hostname my-rabbit --name ds-rabbit -p 8080:15672 -p 5673:5672 rabbitmq:3-management

Passing hostname to netty

Background: I've got two machines with identical hostnames. I need to set up a local Spark cluster for testing. Setting up a master and a worker works fine, but trying to run an application with the driver causes problems: Netty doesn't seem to be picking the correct host (regardless of what I put in there, it just picks the first host).
Identical hostname:
$ dig +short corehost
192.168.0.100
192.168.0.101
Spark config (used by master and the local worker):
export SPARK_LOCAL_DIRS=/some/dir
export SPARK_LOCAL_IP=corehost      # tried various values, e.g. 192.168.0.x, for
export SPARK_MASTER_IP=corehost     # local, master and the driver
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_DIR=/some/dir
Spark starts up and I can see the worker in the web-ui.
When I run the spark "job" below:
val conf = new SparkConf().setAppName("AaA")
// tried 192.168.0.x and localhost
.setMaster("spark://corehost:7077")
val sc = new SparkContext(conf)
I get this exception:
15/04/02 12:34:04 INFO SparkContext: Running Spark version 1.3.0
15/04/02 12:34:04 WARN Utils: Your hostname, corehost resolves to a loopback address: 127.0.0.1; using 192.168.0.100 instead (on interface en1)
15/04/02 12:34:04 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/04/02 12:34:05 ERROR NettyTransport: failed to bind to corehost.home/192.168.0.101:0, shutting down Netty transport
...
Exception in thread "main" java.net.BindException: Failed to bind to: corehost.home/192.168.0.101:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
Process finished with exit code 1
Not sure how to proceed... it's a whole jungle of IP addresses.
Not sure if this is a netty issue either.
My experience with this identical problem is that it revolves around setting things up locally. Try being more explicit in your Spark driver code: add the SPARK_LOCAL_IP and the driver host IP to the config:
val conf = new SparkConf().setAppName("AaA")
.setMaster("spark://localhost:7077")
.set("spark.local.ip","192.168.1.100")
.set("spark.driver.host","192.168.1.100")
This should tell Netty which of the two identically named hosts to use.

failed to launch Openstack instance: 'authentication required' when trying to create port

I'm trying to deploy OpenStack Icehouse on Ubuntu Server 14.04 by following the official documentation. But after Keystone/Nova/Neutron/Glance were deployed, when I tried to launch a CirrOS instance with
nova boot --nic ... --image ... --flavor ...
it failed.
The Nova client log shows that:
The Neutron client (yes, it's Neutron; I guess they interact during booting) tried to connect to the Neutron server to create a port on the tenant's network.
But the Neutron client built the token-getting request to the Keystone server using {username: neutron, password: REDACTED} and used that token in the port-creation request to the Neutron server.
Finally, the Neutron server decided that this was an authentication problem.
I'm sure that I requested the instance creation using tenant 'demo''s info ($OS_TENANT_NAME, $OS_USERNAME, $OS_PASSWORD, $OS_AUTH_URL were properly set to 'demo''s values) by running
source demoopenrc.sh
with demo's credentials in that file.
Is there something wrong in the Neutron client's configuration or the booting process? I've pasted part of neutron.conf here:
The Keystone settings:
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutronpass
signing_dir = $state_path/keystone-signing
Since the Neutron client used the 'neutron' user's credentials to get the token, is there something wrong in this part?
The problem has been solved after nearly a month. For anyone still interested in this problem, please visit here
