Why is Stub Runner not booting up a WireMock server in classpath mode? - spring-cloud-contract

I'm trying to run some tests in my consumer application using Spring Cloud Contract's Stub Runner.
I have noticed that when the stubsMode property is set to LOCAL,
@AutoConfigureStubRunner(
    stubsMode = StubRunnerProperties.StubsMode.LOCAL,
    ids = "com.example:spring-cloud-contract-producer:+:stubs:8090")
my build is successful because an embedded WireMock instance boots up and listens on that port.
However, if I change the stubsMode property to CLASSPATH, my build fails because the test cannot establish a connection on that port.
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:8090/validate/prime-number": Connect to localhost:8090 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8090 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
According to the docs, this should only affect how the stubs are downloaded:
StubRunnerProperties.StubsMode.CLASSPATH (default value) - will pick stubs from the classpath
What am I doing wrong here? Thanks beforehand!

If you turn on the classpath mode then, as the name suggests, you need to have the dependencies on your classpath. The jar with the stubs needs to contain a predefined structure described in the docs. In general, in your case, it would contain a META-INF/com.example/spring-cloud-contract-producer/mappings folder with WireMock stubs inside it. If you don't have the producer stubs on your classpath then the classpath mode will not work.
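For illustration, here is a minimal consumer test sketch, assuming the producer's stubs jar (the artifact with the stubs classifier, coordinates taken from the question) has been added as a test-scoped dependency so that it actually sits on the test classpath; the class and test names are hypothetical:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
// CLASSPATH mode: Stub Runner still starts an embedded WireMock server on port 8090,
// but it looks for the stub mappings inside jars already on the test classpath
// instead of resolving them from a local or remote Maven repository.
@AutoConfigureStubRunner(
        stubsMode = StubRunnerProperties.StubsMode.CLASSPATH,
        ids = "com.example:spring-cloud-contract-producer:+:stubs:8090")
public class PrimeNumberClientTest {

    @Test
    public void shouldCallTheProducerStub() {
        // exercise the consumer code that calls http://localhost:8090/validate/prime-number,
        // e.g. via TestRestTemplate
    }
}

With the stubs jar on the classpath, this behaves just like the LOCAL run except that nothing is resolved from a repository; without that dependency there is simply nothing for Stub Runner to serve on port 8090.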

Related

How can CSR polling be restarted when joining a Corda Network?

When joining the UAT Corda Network and running the required initial registration, the Corda node was shut down before the CSR completed. https://uat.network.r3.com/pages/joining/joining.html
The CSR has been approved, but the Corda node does not have the correct certificates yet. When trying to start the node, it throws an exception about missing certificates.
[ERROR] 2019-07-26T16:47:51,099Z [main] internal.NodeStartupLogging.invoke - Exception during node startup: One or more keyStores (identity or TLS) or trustStore not found.
Please either copy your existing keys and certificates from another node, or if you don't have one yet, fill out the config file and run corda.jar initial-registration.
Read more at: https://docs.corda.net/permissioning.html [errorCode=16fn52g, moreInformationAt=https://errors.corda.net/ENT/4.1/16fn52g] {}
java.lang.IllegalArgumentException: One or more keyStores (identity or TLS) or trustStore not found. Please either copy your existing keys and certificates from another node, or if you don't have one yet, fill out the config file and run corda.jar initial-registration.
Read more at: https://docs.corda.net/permissioning.html
How can the CSR polling be completed?
Initial registration can be rerun and will resume polling based on the CSR id that is located in the certificates directory as certificate-request-id.txt. Rerun the same command used to start the CSR.
java -jar <CORDA JAR FILE> --initial-registration --network-root-truststore-password <TRUST STORE PASSWORD>

Getting error while redeploying nodes

I am still using Corda version 1.0. When I try to redeploy nodes with existing data, I get the error below during start-up, but I am still able to access the nodes. If I clear the data and redeploy the nodes, I don't face this error message.
Logs can be found in : C:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\kotlin-source\build\nodes\xxxxxxxx\logs
Database connection url is : jdbc:h2:tcp://xxxxxxxxx/node
E 18:38:46+0530 [main] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
Incoming connection address : xxxxxxxxxxxx
Listening on port : 10014
RPC service listening on port : 10015
Loaded CorDapps : corda-finance-1.0.0, kotlin-source-0.1, corda-core-1.0.0
Node for "xxxxxxxxxxx" started up and registered in 213.08 sec
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.
Wed May 23 18:39:20 IST 2018>>> E 18:39:24+0530 [Thread-6 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$3#4a532271)] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
This looks like Artemis failed to connect to the node, which means the node failed to start.
You should look at the log and check whether a previously started Corda node is still running and occupying the node's ports.
If there are any legacy Corda nodes that have not been killed, try ps -ef | grep java to see whether any other Java process is still alive. In particular, look at the port numbers and check whether they overlap.
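For illustration only (this is not part of Corda), a small Java sketch that checks whether the ports from the log above are already taken before the node is restarted; adjust the port numbers to whatever your node.conf uses:

import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    public static void main(String[] args) {
        // P2P and RPC ports taken from the log output above
        int[] portsToCheck = {10014, 10015};
        for (int port : portsToCheck) {
            // Try to bind the port: if the bind fails, some other process
            // (for example a leftover node JVM) is still holding it.
            try (ServerSocket socket = new ServerSocket(port)) {
                System.out.println("Port " + port + " is free");
            } catch (IOException e) {
                System.out.println("Port " + port + " is already in use: " + e.getMessage());
            }
        }
    }
}

If a port reports as in use, kill the old process before redeploying the node.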

Running Corda nodes on different machines

I have a problem in Corda with performing an IOU from Party A to Party B.
Below are the configuration details:
3 node.conf files [Party A, Party B, and Notary].
The application is hosted on AWS, so in the node config files I used the IPs of the machines instead of "localhost". I gave the same IP for the Notary & Party A, and a different one for Party B.
Network bootstrapping was successful; I moved the newly created node folders to the respective EC2 instances and started running the nodes.
But when I perform the IOU from Party A to Party B, it does not work. Please suggest how to resolve the issue.
I see the following error in the node logs:
E 11:34:47+0000 [main] internal.Node.run - Exception during node startup {}
java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_161]
at sun.nio.ch.Net.bind(Unknown Source) ~[?:1.8.0_161]
at sun.nio.ch.Net.bind(Unknown Source) ~[?:1.8.0_161]
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source) ~[?:1.8.0_161]
at io.netty.channe
Reference: https://docs.corda.net/tutorial-cordapp.html#running-nodes-across-machines
I got nodes communicating on different hosts in the following way.
First of all, I deploy the node with a node.conf file which contains:
"p2pAddress" : "host:10012",
"rpcSettings" : {
"address" : "host:10014",
"adminAddress" : "host:10013"
}
Then, after the node is deployed, I change the host in rpcSettings to localhost:
"rpcSettings" : {
"address" : "localhost:10014",
"adminAddress" : "localhost:10013"
}
This approach looks strange, but after this manipulation the nodes started to communicate.
This is related to the NodeInfo file, which is generated at node deploy time and should contain the host for RPC. After that, RPC needs localhost for interaction. I think it might be a bug, but it works fine this way.
When using rpcSettings in Corda V3.1, the address and adminAddress need to use 0.0.0.0.
rpcSettings {
address="0.0.0.0:10003"
adminAddress="0.0.0.0:10103"
}
These endpoints are not advertised externally, so the local IP is solely a binding for Corda.
This should resolve the following exception on starting your CorDapp when using a public IP or DNS:
E 21:28:56+0000 [main] internal.Node.run - Exception during node startup {} io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Cannot assign requested address

Passing hostname to netty

Background: I've got two machines with identical hostnames. I need to set up a local Spark cluster for testing; setting up a master and a worker works fine, but trying to run an application with the driver causes problems. Netty doesn't seem to be picking the correct host (regardless of what I put in there, it just picks the first host).
Identical hostnames:
$ dig +short corehost
192.168.0.100
192.168.0.101
Spark config (used by master and the local worker):
export SPARK_LOCAL_DIRS=/some/dir
export SPARK_LOCAL_IP=corehost   # I tried various values like 192.168.0.x for
export SPARK_MASTER_IP=corehost  # local, master, and the driver
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_DIR=/some/dir
Spark starts up and I can see the worker in the web-ui.
When I run the spark "job" below:
val conf = new SparkConf().setAppName("AaA")
// tried 192.168.0.x and localhost
.setMaster("spark://corehost:7077")
val sc = new SparkContext(conf)
I get this exception:
15/04/02 12:34:04 INFO SparkContext: Running Spark version 1.3.0
15/04/02 12:34:04 WARN Utils: Your hostname, corehost resolves to a loopback address: 127.0.0.1; using 192.168.0.100 instead (on interface en1)
15/04/02 12:34:04 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/04/02 12:34:05 ERROR NettyTransport: failed to bind to corehost.home/192.168.0.101:0, shutting down Netty transport
...
Exception in thread "main" java.net.BindException: Failed to bind to: corehost.home/192.168.0.101:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
Process finished with exit code 1
Not sure how to proceed... it's a whole jungle of IP addresses.
Not sure if this is a Netty issue either.
My experience with the identical problem is that it revolves around setting things up locally. Try being more explicit in your Spark driver code: add the SPARK_LOCAL_IP and driver host IP to the config:
val conf = new SparkConf().setAppName("AaA")
.setMaster("spark://localhost:7077")
.set("spark.local.ip","192.168.1.100")
.set("spark.driver.host","192.168.1.100")
This should tell Netty which of the two identical hosts to use.

What can cause a connection to a port on the same node to be refused?

Got an EndpointWriter error:
14/10/30 23:12:29 ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker#node001:35249] -> [akka.tcp://sparkExecutor#node001:7088]: Error [Association failed with [akka.tcp://sparkExecutor#node001:7088]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor#node001:7088]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: node001/10.69.144.56:7088
node001 and 10.69.144.56 both refer to the node itself. My understanding is that Akka was trying to connect to a local port but got rejected. The executor port was fixed to '7087'.
Thanks for your help!
The usual reason for a connection refused error is that there is nothing listening on the port. If the port the executor is listening on is 7087, Akka is trying to make a connection to port 7088, and there's probably nothing listening there. Check your code or configuration to see whether you wrote 7088 instead of 7087.
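As a sketch of where such a port would typically be pinned (assuming Spark 1.x, where the executor's Akka port could be set via the spark.executor.port property; the master URL below is hypothetical):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorPortExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("executor-port-check")
                .setMaster("spark://node001:7077")   // hypothetical master URL
                // Must match the port the executor actually listens on; a mismatch
                // (e.g. 7088 vs 7087) leads to exactly the "Connection refused" above.
                .set("spark.executor.port", "7087");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.stop();
    }
}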
