I have defined my Driver-based tests (class com.example.IntegrationTest) under the src/integrationTest/kotlin directory of my CorDapp project:
import net.corda.core.identity.CordaX500Name
import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.driver
import org.junit.Assert
import org.junit.Test

class IntegrationTest {
    private val nodeAName = CordaX500Name("NodeA", "", "GB")
    private val nodeBName = CordaX500Name("NodeB", "", "US")

    @Test
    fun `run driver test`() {
        driver(DriverParameters(isDebug = true, startNodesInProcess = true)) {
            // This starts both nodes simultaneously with startNode, which returns a future that completes when the node
            // has completed startup. The futures are then resolved with getOrThrow, which returns the NodeHandles.
            val (nodeAHandle, nodeBHandle) = listOf(
                    startNode(providedName = nodeAName),
                    startNode(providedName = nodeBName)
            ).map { it.getOrThrow() }
            // This test calls via the RPC proxy to find the party of the other node, verifying that the nodes have
            // started and can communicate. This is a very basic test; in practice, tests would start flows
            // and verify the states in the vault and other important metrics to ensure that your CorDapp is working
            // as intended.
            Assert.assertEquals(nodeAHandle.rpc.wellKnownPartyFromX500Name(nodeBName)!!.name, nodeBName)
            Assert.assertEquals(nodeBHandle.rpc.wellKnownPartyFromX500Name(nodeAName)!!.name, nodeAName)
        }
    }
}
If we execute the test using gradle integrationTest from the command line, how can we ensure that the integration tests were executed successfully?
When tried with the IntelliJ IDE, the JUnit test works as expected, with appropriate test reports/logs.
Gradle tasks are incremental: if nothing has changed since the last build, the test task is marked UP-TO-DATE and the tests are skipped. To ensure the integration tests are actually run, you need to use the clean argument:
./gradlew clean integrationTest
The output of this command doesn't always make it clear which tests have been run. You can make it display more information using the --info flag:
./gradlew clean integrationTest --info
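Alternatively, you can have Gradle print each test's outcome directly to the console. Here's a minimal sketch, assuming integrationTest is a standard Test task defined in your build.gradle against an integrationTest source set (adjust the names to match your project):
task integrationTest(type: Test) {
    testClassesDirs = sourceSets.integrationTest.output.classesDirs
    classpath = sourceSets.integrationTest.runtimeClasspath
    testLogging {
        // Log each test that passes, is skipped, or fails, with full stack traces on failure.
        events "passed", "skipped", "failed"
        exceptionFormat "full"
    }
}
With this in place, per-test results appear in the console without needing --info, and the HTML report under build/reports/tests/integrationTest still lists everything that ran.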
Corda open source on Linux. Node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this section of the docs: https://docs.corda.net/clientrpc.html#wire-security, which I've detailed for you below.
When you enable RPC SSL, you must run the following command once (you will be asked to supply two new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl = true
    ssl {
        keyStorePath = ${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword = password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now, if you have a webserver, then inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
Above, we added two extra attributes that you must now supply when starting the webserver. To do that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
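You can then start the webserver with a standard Gradle invocation of the runNodeServer task defined above:
./gradlew runNodeServer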
If you're planning to connect to the node with a standalone shell, you must do something similar; however, it didn't work for me, and I reported the following bug: https://github.com/corda/corda/issues/5955
In my IntelliJ project, I have two modules, which are CorDapps. I also have a run configuration for each:
Run Participant A CorDapp
Run Participant B CorDapp
Running either of these runs the CorDapp on an in-memory node:
package com.demo.cordapp.participant_a

import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.driver
import net.corda.testing.node.User

class Application {
    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            val parameters = DriverParameters(
                    isDebug = true,
                    waitForAllNodesToFinish = true,
                    extraCordappPackagesToScan = listOf("com.demo.shared.domain")
            )
            driver(parameters) {
                startNode(
                        providedName = PARTICIPANT_1_NAME,
                        rpcUsers = listOf(User("user1", "test", permissions = setOf("ALL")))
                ).getOrThrow()
            }
        }
    }
}
If I start Participant A's node first, it works fine, but I get an error for Participant B, and vice versa. The error is as follows:
Exception in thread "main" net.corda.testing.node.internal.ListenProcessDeathException: The process that was expected to listen on localhost:10000 has died with status: 2
My guess is that there is a port conflict, as both of them are trying to use the same P2P, RPC, and web ports?
DriverParameters has a portAllocation argument that determines how the ports are assigned to nodes.
It defaults to PortAllocation.Incremental(10000). For one of the nodes, you should set this to something else (e.g. PortAllocation.Incremental(20000)).
If you are running in debug mode, you also need to modify the debugPortAllocation.
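For example, here's a minimal sketch of Participant B's parameters (assuming a Corda test API where PortAllocation.Incremental is available; the starting ports 20000 and 25000 are arbitrary choices):
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.PortAllocation

// Give Participant B its own port ranges so they cannot collide with
// Participant A's driver, which keeps the defaults starting at 10000.
val parameters = DriverParameters(
        isDebug = true,
        waitForAllNodesToFinish = true,
        extraCordappPackagesToScan = listOf("com.demo.shared.domain"),
        portAllocation = PortAllocation.Incremental(20000),
        debugPortAllocation = PortAllocation.Incremental(25000)
)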
I just started using Corda. I know that in unit testing, I'm supposed to add the line
setCordappPackages("net.corda.finance")
But when I debug using the NodeDriver, I just receive this error:
net.corda.core.transactions.MissingContractAttachments: Cannot find contract attachments for [net.corda.finance.contracts.asset.Cash]
What's missing?
You set the packages the driver scans as follows:
driver(
        startNodesInProcess = true,
        extraCordappPackagesToScan = listOf("net.corda.examples.attachments"),
        isDebug = true
) {
    ...
}
In your case, the package list would be listOf("net.corda.finance").
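And if you're on a more recent Corda version where driver takes a DriverParameters object (as in the first question above), a minimal equivalent sketch for this case would be:
driver(DriverParameters(
        startNodesInProcess = true,
        extraCordappPackagesToScan = listOf("net.corda.finance"),
        isDebug = true
)) {
    // ...
}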
I have a library, built with Maven, that uses Spring 4.0.3.RELEASE and Togglz 2.2.0.Final. I'm trying to write a JUnit 4.11 test of my Spring class and running into the following error on the first test that gets executed:
testCreateItem_throwsItemServiceBusinessException(impl.ItemServiceImplTest) Time elapsed: 1.771 sec <<< ERROR!
java.util.ServiceConfigurationError: org.togglz.core.spi.LogProvider: Provider org.togglz.slf4j.Slf4jLogProvider not a subtype
Here is the relevant Java test snippet:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class, loader = AnnotationConfigContextLoader.class)
@PrepareForTest({ ItemServiceImpl.class })
public class ItemServiceImplTest {

    @Rule
    public TogglzRule togglzRule = TogglzRule.allDisabled(Features.class);

    @Rule
    public PowerMockRule powerMockRule = new PowerMockRule();

    @Test(expected = ItemServiceBusinessException.class)
    public void testCreateItem_throwsItemServiceBusinessException() throws Exception {
        PowerMockito.doReturn(mockMetricsData).when(serviceUnderTest, START_METRICS_METHOD_NAME, any(MetricsOperationName.class), any(RequestContext.class));
        when(mockDao.createItem(any(Item.class), any(RequestContext.class))).thenThrow(dataBusinessException);

        serviceUnderTest.createItem(item, context);

        verify(mockItemServiceValidator).validate(any(Item.class), any(RequestContext.class));
        PowerMockito.verifyPrivate(serviceUnderTest).invoke(START_METRICS_METHOD_NAME, any(MetricsOperationName.class), any(RequestContext.class));
        verify(mockDao).createItem(any(Item.class), any(RequestContext.class));
    }
}
Subsequent test calls get the following error:
java.lang.NoClassDefFoundError: Could not initialize class org.togglz.junit.TogglzRule
Here are some relevant dependencies I have:
org.mockito:mockito-all=org.mockito:mockito-all:jar:1.9.5:compile,
org.powermock:powermock-module-junit4=org.powermock:powermock-module-junit4:jar:1.5.6:test,
org.powermock:powermock-module-junit4-common=org.powermock:powermock-module-junit4-common:jar:1.5.6:test,
org.powermock:powermock-reflect=org.powermock:powermock-reflect:jar:1.5.6:test,
org.powermock:powermock-api-mockito=org.powermock:powermock-api-mockito:jar:1.5.6:test,
org.powermock:powermock-api-support=org.powermock:powermock-api-support:jar:1.5.6:test,
org.powermock:powermock-module-junit4-rule=org.powermock:powermock-module-junit4-rule:jar:1.5.6:test,
org.powermock:powermock-classloading-base=org.powermock:powermock-classloading-base:jar:1.5.6:test,
org.powermock:powermock-core=org.powermock:powermock-core:jar:1.5.6:test,
org.powermock:powermock-classloading-xstream=org.powermock:powermock-classloading-xstream:jar:1.5.6:test,
org.togglz:togglz-core=org.togglz:togglz-core:jar:2.2.0.Final:compile,
org.togglz:togglz-slf4j=org.togglz:togglz-slf4j:jar:2.2.0.Final:compile,
org.togglz:togglz-spring-core=org.togglz:togglz-spring-core:jar:2.2.0.Final:compile,
org.togglz:togglz-testing=org.togglz:togglz-testing:jar:2.2.0.Final:test,
org.togglz:togglz-junit=org.togglz:togglz-junit:jar:2.2.0.Final:test
And I have provided a LogProvider (org.togglz.slf4j.Slf4jLogProvider) via SPI, located at META-INF/services/org.togglz.core.spi.LogProvider.
This error is baffling, as Slf4jLogProvider should be assignable from LogProvider. Sorry for the verbosity, but I wanted to try and show a complete picture. The code in the class under test is making a call to see if a single feature is enabled inside the create method.
First of all: you don't need to configure the log provider in your application. Including togglz-slf4j on your application's classpath is sufficient, because this JAR contains the corresponding SPI file.
Could you please check whether there are multiple conflicting versions of the Togglz JAR files on your classpath? For example, using togglz-core-2.2.0.Final.jar together with togglz-slf4j-2.1.0.Final.jar could result in an error like this.
This can happen if you update Togglz and your IDE doesn't remove the old archives. Running a clean build and/or selecting "Update Maven Configuration" in Eclipse should fix this problem.
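One way to check for such duplicates is to inspect Maven's resolved dependency tree, filtered down to the Togglz artifacts (a standard maven-dependency-plugin invocation):
mvn dependency:tree -Dincludes=org.togglz
Each Togglz artifact should appear exactly once, and all of them should carry the same version (2.2.0.Final here).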
The following blog post describes how to fetch an artifact directly from Java using Ivy: http://developers-blog.org/blog/default/2010/11/08/Embed-Ivy-How-to-use-Ivy-with-Java
import java.io.File;

import org.apache.ivy.Ivy;
import org.apache.ivy.core.module.descriptor.DefaultDependencyDescriptor;
import org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor;
import org.apache.ivy.core.module.id.ModuleRevisionId;
import org.apache.ivy.core.report.ResolveReport;
import org.apache.ivy.core.resolve.ResolveOptions;
import org.apache.ivy.core.settings.IvySettings;
import org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter;
import org.apache.ivy.plugins.resolver.URLResolver;

public class IvyArtifactResolver {
    public File resolveArtifact(String groupId, String artifactId, String version) throws Exception {
        // Creates clean Ivy settings.
        IvySettings ivySettings = new IvySettings();
        // URL resolver for configuring the Maven repo.
        URLResolver resolver = new URLResolver();
        resolver.setM2compatible(true);
        resolver.setName("central");
        // You can specify the URL resolution pattern strategy.
        resolver.addArtifactPattern(
                "https://repo1.maven.org/maven2/"
                        + "[organisation]/[module]/[revision]/[artifact](-[revision]).[ext]");
        // Adds the Maven repo resolver and sets it as the default.
        ivySettings.addResolver(resolver);
        ivySettings.setDefaultResolver(resolver.getName());

        // Creates an Ivy instance with the settings.
        Ivy ivy = Ivy.newInstance(ivySettings);

        File ivyfile = File.createTempFile("ivy", ".xml");
        ivyfile.deleteOnExit();

        String[] dep = new String[]{groupId, artifactId, version};

        // A synthetic caller module that depends on the requested artifact.
        DefaultModuleDescriptor md = DefaultModuleDescriptor.newDefaultInstance(
                ModuleRevisionId.newInstance(dep[0], dep[1] + "-caller", "working"));
        DefaultDependencyDescriptor dd = new DefaultDependencyDescriptor(md,
                ModuleRevisionId.newInstance(dep[0], dep[1], dep[2]), false, false, true);
        md.addDependency(dd);

        // Writes the descriptor to an Ivy configuration file.
        XmlModuleDescriptorWriter.write(md, ivyfile);

        String[] confs = new String[]{"default"};
        ResolveOptions resolveOptions = new ResolveOptions().setConfs(confs);

        // Resolves, so you can get the jar library.
        ResolveReport report = ivy.resolve(ivyfile.toURI().toURL(), resolveOptions);
        File jarArtifactFile = report.getAllArtifactsReports()[0].getLocalFile();
        return jarArtifactFile;
    }
}
I'm wondering if sbt exposes this kind of interface, since it uses Ivy:
resolve :: ModuleId -> File
Scripts, REPL, and Dependencies
There's a document called Scripts, REPL, and Dependencies that you might be interested in. The script runner, for example, lets you write something like this:
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.0-1"
libraryDependencies ++= Seq(
"net.databinder" %% "dispatch-twitter" % "0.8.3",
"net.databinder" %% "dispatch-http" % "0.8.3"
)
*/
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
Driving sbt programmatically
You can also use any of sbt's subparts as a library and drive it yourself. Because of the plugin ecosystem, sbt is pretty good about maintaining binary compatibility among point releases. The key task that grabs jars is update, so def updateTask (Defaults.scala#L1113) could be a good place to start. If you are driving sbt from client code, however, wouldn't you end up re-implementing the sbt shell or pulling in all of sbt's dependencies? You might as well have a separate sbt shell window or an sbt script section.
Custom Resolvers
sbt ships with a variety of customizable resolvers, so the first place to check should be the Resolvers page:
sbt provides an interface to the repository types available in Ivy: file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using patterns to configure repositories.
Construct a repository definition using the factory in sbt.Resolver for the desired type. This factory creates a Repository object that can be further configured. The following table contains links to the Ivy documentation for the repository type and the API documentation for the factory and repository class. The SSH and SFTP repositories are configured identically except for the name of the factory. Use Resolver.ssh for SSH and Resolver.sftp for SFTP.
For example you can do:
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
RawRepository
But if you truly want a programmable resolver, there is RawRepository:
final class RawRepository(val resolver: DependencyResolver) extends Resolver
{
def name = resolver.getName
override def toString = "Raw(" + resolver.toString + ")"
}
This is a thin wrapper around org.apache.ivy.plugins.resolver.DependencyResolver, which you should be able to write by extending one of the resolvers Ivy provides. (I haven't tried this myself.)
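For instance, here's a minimal untested sketch for build.sbt, using Ivy's stock FileSystemResolver; the repository name and artifact pattern are illustrative:
resolvers += {
  // Configure any org.apache.ivy.plugins.resolver.DependencyResolver...
  val fsResolver = new org.apache.ivy.plugins.resolver.FileSystemResolver()
  fsResolver.setName("my-raw-repo")
  fsResolver.addArtifactPattern("/some/path/[organisation]/[module]/[revision]/[artifact].[ext]")
  // ...and hand it to sbt wrapped in a RawRepository.
  new RawRepository(fsResolver)
}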