Getting error while redeploying nodes - Corda

I am still using Corda 1.0. When I try to redeploy the nodes with existing data, I get the error below during start-up, but I am still able to access the nodes. If I clear the data and redeploy the nodes, I don't see this error message.
Logs can be found in :
C:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\kotlin-source\build\nodes\xxxxxxxx\logs
Database connection url is : jdbc:h2:tcp://xxxxxxxxx/node
E 18:38:46+0530 [main] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
Incoming connection address : xxxxxxxxxxxx
Listening on port : 10014
RPC service listening on port : 10015
Loaded CorDapps : corda-finance-1.0.0, kotlin-source-0.1, corda-core-1.0.0
Node for "xxxxxxxxxxx" started up and registered in 213.08 sec
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut
down the node.
Wed May 23 18:39:20 IST 2018>>> E 18:39:24+0530 [Thread-6 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$3@4a532271)] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]

This looks like Artemis failed to connect to the node, which means the node did not start properly.
Look at the log, and check whether a previously started Corda node is still running and occupying the node's ports.
If there are any legacy Corda nodes that have not been killed, run ps -ef | grep java to see whether any other Java processes are still alive. In particular, compare the port numbers and check whether they overlap.
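Before restarting, it is worth confirming that nothing is still bound to the ports from the log above (10014 and 10015). A minimal sketch, assuming the Windows machine implied by the log path, with the Unix equivalents alongside:
$ netstat -ano | findstr "10014 10015"   # Windows: shows the PID holding each port
$ taskkill /PID <pid> /F                 # Windows: kill the stale node process by PID
$ ps -ef | grep java                     # Linux/macOS: list surviving Java processes
$ lsof -i :10014 -i :10015               # Linux/macOS: show what owns the ports
If a leftover node still owns the P2P or RPC port, the new node's connections can fail in ways that show up as the handshake timeout above.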

Related

tunneling socket could not be established | yarn create next-app

When I look up this kind of problem, most cases turn out to involve a proxy, but I don't use one. I get this error in my Git Bash:
$ yarn create next-app firstapp
yarn create v1.22.19
[1/4] Resolving packages...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
error An unexpected error occurred: "https://registry.yarnpkg.com/create-next-app: tunneling socket could not be established, cause=getaddrinfo ENOTFOUND 8889".
info If you think this is a bug, please open a bug report with the information provided in "C:\\Users\\username\\AppData\\Local\\Yarn\\Data\\global\\yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/create for documentation about this command.
I'm following the guide on nextjs.org. I thought it was due to a bad connection, but when I tried again, I still got the same error.
My Node.js is at v14.17.5.
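The cause=getaddrinfo ENOTFOUND 8889 part is a hint: yarn is trying to open a tunnel through a proxy whose hostname resolved to the literal string 8889, i.e. a malformed proxy entry rather than your network itself. As a diagnostic sketch, check for stale proxy settings left behind in yarn, npm, or the environment:
$ yarn config list                 # look for proxy / https-proxy entries
$ yarn config delete proxy
$ yarn config delete https-proxy
$ npm config delete proxy
$ npm config delete https-proxy
$ echo $HTTP_PROXY $HTTPS_PROXY    # environment variables override the configs
If any of these shows a proxy you don't recognize, remove it and rerun yarn create next-app.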

Getting this error Status : Failure -Test failed: IO Error: The Network Adapter could not establish the connection

I am new to Oracle. I installed Oracle SQL Developer, but each time I try to connect, I get the error:
Status: Failure -Test failed: IO Error: The Network Adapter could not establish the connection
The Oracle TNS Listener service also shuts down on its own. Each time I start the service, it turns off again immediately.
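A listener that stops immediately after starting usually records the reason in its log. As a diagnostic sketch, assuming a default Windows Oracle installation (the exact log path varies by version and host name), check the listener state from a console and then read its trace log:
lsnrctl status                     # reports whether the listener is up and on which endpoints
lsnrctl start                      # starting it from a console shows the error directly
type %ORACLE_BASE%\diag\tnslsnr\<hostname>\listener\trace\listener.log
A port conflict or a bad HOST entry in listener.ora are common reasons for a listener dying immediately.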

Unknown Peer error when trying to run Corda on multiple nodes

I am trying to run Corda on multiple nodes. As per this thread - https://github.com/corda/corda/issues/39 - I modified the node.conf files and started all four nodes (BankA and Notary on machine A, BankB and BankOfCorda on machine B). Up to node startup, everything is fine.
When I run gradlew samples:trader-demo:runBuyer from machine A, I get the following error on NodeA (unknown peer - BankOfCorda) and the following on the Notary:
[ERROR] 2018-06-01T12:37:22,766 [Node thread] StateMachineManager - Unknown peer C=UK,L=London,OU=corda,O=R3,CN=BankOfCorda in SessionInit(initiatorSessionId=6217119355343956857, flowName=net.corda.flows.NotaryFlow$Client, firstPayload=SignRequest(tx=SignedTransaction(txBits=[14010…], sigs=[[…]], id=xxx)))
Am I doing something wrong?
This is because messages were still queued in your nodes' message queues when you stopped them. After changing the nodes' names and restarting the nodes, those queued messages can no longer find their intended recipients on the network.
You can fix this by deleting each node's artemis folder, as sketched below.
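As a sketch, assuming the standard deployNodes layout used in this thread (build/nodes/<NodeName>/artemis), stop the nodes first and then remove the queue folders:
$ cd build/nodes
$ rm -rf BankA/artemis Notary/artemis        # on machine A
$ rm -rf BankB/artemis BankOfCorda/artemis   # on machine B
The folder is recreated empty on the next startup, so the stale queued messages addressed to the old names are discarded.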

Spark & SparkR Configuration on EC2 - Java Timeout

I'm trying to get Spark and SparkR running on a small EC2 cluster using the provided scripts and directions. Whenever I ask for an operation that requires computation on an RDD (e.g., collect(), reduce()), I get the error logged below. The workers do appear to start up correctly -- if I only parallelize, I can see the workers running via the master's web UI.
The error I get is similar to the one in Intermittent Timeout Exception using Spark, and I've been through all of the solutions there (modifying the conf file for URLs, disabling the firewall, etc.) with no luck.
Here is the error log, thank you in advance for your help:
15/02/17 19:10:22 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/02/17 19:10:22 INFO spark.SecurityManager: Changing view acls to: root,-
15/02/17 19:10:22 INFO spark.SecurityManager: Changing modify acls to: root,-
15/02/17 19:10:22 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, -); users with modify permissions: Set(root, -)
15/02/17 19:10:23 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/02/17 19:10:23 INFO Remoting: Starting remoting
15/02/17 19:10:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@-.ec2.internal:60218]
15/02/17 19:10:23 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 60218.
15/02/17 19:10:53 ERROR security.UserGroupInformation: PriviledgedActionException as:- cause:java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1134)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:59)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:115)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:161)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
... 4 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:127)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:60)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:59)
... 7 more
This was ultimately resolved by a combination of:
- updates to SparkR, which resolved a number of serialization issues;
- recognizing that the spark-ec2 scripts require the control node and the master node to be the same machine; and
- replacing calls to parallelize() with distributing the data and then loading it via Hadoop (see the sketch below).
I am writing an intro to SparkR for R programmers that I hope will help people with things like this in the future.
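For the last point, a sketch of the workflow - assuming the spark-ec2 AMI layout, where an ephemeral HDFS lives under the root user's home directory, and using mydata.txt as a stand-in for whatever was previously passed to parallelize() - is to push the data into HDFS from the master and read it from there:
$ ~/ephemeral-hdfs/bin/hadoop fs -mkdir /data
$ ~/ephemeral-hdfs/bin/hadoop fs -put mydata.txt /data/mydata.txt
$ ~/ephemeral-hdfs/bin/hadoop fs -ls /data        # confirm the file landed
# then, in the SparkR shell of that era, replace parallelize(sc, localData) with:
#   lines <- textFile(sc, "hdfs:///data/mydata.txt")
This avoids serializing the whole dataset through the driver, which was one source of the timeouts.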

What can cause a connection to a port on the same node to be refused?

Got an EndpointWriter error:
14/10/30 23:12:29 ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker@node001:35249] -> [akka.tcp://sparkExecutor@node001:7088]: Error [Association failed with [akka.tcp://sparkExecutor@node001:7088]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@node001:7088]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: node001/10.69.144.56:7088
node001 and 10.69.144.56 both refer to the node itself. My understanding is that Akka was trying to connect to a local port and was refused. The executor port was fixed to '7087'.
Thanks for your help!
The usual reason for 'connection refused' is that nothing is listening on the port. If the executor is listening on port 7087 while Akka is trying to connect to port 7088, then there is probably nothing listening there. Check your code or configuration to see whether you wrote 7088 somewhere instead of 7087.
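As a quick check - a sketch assuming a Linux box, since the commands need to run on node001 itself - confirm what is actually listening on each port:
$ netstat -tlnp | grep -E ':708[78]'   # which process, if any, is bound to 7087/7088
$ nc -v node001 7087                   # should connect if the executor really is on 7087
$ nc -v node001 7088                   # 'Connection refused' here confirms nothing is bound
If 7087 is listening but 7088 is not, the mismatch is in whatever configuration told Akka to dial 7088.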
