Corda - Specifying an app name for a mock network

If I call flowSession.getCounterpartyFlowInfo() from a unit test using MockNetwork, it returns FlowInfo(flowVersion=1, appName=<unknown>)
Here is my current MockNetwork configuration:
network = MockNetwork(
    MockNetworkParameters(
        cordappsForAllNodes = listOf(
            TestCordapp.findCordapp("com.example.contract"),
            TestCordapp.findCordapp("com.example.workflow")
        ),
        networkParameters = testNetworkParameters(
            minimumPlatformVersion = 5
        )
    )
)
Is there a way to specify the appName of an application running in a mock network?

I don't think there is a configuration for that. The appName is derived from the jar file name by removing the '.jar' extension.
For a mock node, however, the CorDapp packages are scanned and the classes are loaded directly, so there is no jar file to take the name from.
Here is how it's derived:
val Class<out FlowLogic<*>>.appName: String
    get() {
        val jarFile = location.toPath()
        return if (jarFile.isRegularFile() && jarFile.toString().endsWith(".jar")) {
            jarFile.fileName.toString().removeSuffix(".jar")
        } else {
            "<unknown>"
        }
    }
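With a MockNetwork the flow classes are loaded from build output directories rather than from a packaged jar, so the check above falls through to "<unknown>". A quick way to confirm where a class was loaded from (MyFlow is a placeholder for one of your flow classes):

// Illustrative only: print the code source of the flow class. Under
// MockNetwork this is typically a build/classes/... directory, not a .jar,
// which is why appName resolves to "<unknown>".
val location = MyFlow::class.java.protectionDomain.codeSource.location
println(location)

So if you need a real appName, the implication is that the CorDapp has to be loaded from an actual jar (as on a real node or in driver-based tests), rather than from scanned classes.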

Related

I need support to add AWS variables in Terraform

I need to add the following variables to my Terraform code so that the user can input the details and it can create the desired resources in AWS. I don't know how to do that; your kind support will be highly appreciated.
resource "aws_instance" "ec2" {
  ami                    = "ami-0fe0b2cf0e1f25c8a"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key
}

variable "security_group_id" {
  type        = string
  description = "Enter the SG"
}

variable "key" {
  description = "Enter the Keypair"
}

variable "subnet_id" {
  type        = string
  description = "Enter the Subnet"
}
How can I add the following AWS variables: Size, AZ, HDD, Port? I have tried the code above, but due to a lack of knowledge I don't know how to add the required variables (e.g. there are resources + data sources).
resource "aws_instance" "ec2" {
ami = "ami-0fe0b2cf0e1f25c8a"
instance_type = "t2.micro"
vpc_security_group_ids = [var.security_group_id]
subnet_id = var.subnet_id
key_name = var.key
}
variable "security_group_id" {
type = string
description = "Enter the SG"
}
variable "key" {
description = "Enter the Kaypair"
}
variable "subnet_id" {
type = string
description = "Enter the Subnet"
}
@Malik, hopefully this will help you and give you a bit of an idea of how to initiate your config.
https://github.com/ishuar/terraform-eks/blob/main/examples/private_cluster/eks-ec2-private-jump-host.tf#L253
This only includes a basic configuration for EC2 instances.
If you are looking for a module, then it's better to look into: https://github.com/terraform-aws-modules/terraform-aws-ec2-instance
Unfortunately, there are multiple ways of defining the resources that you have to add to your EC2 instance, but the most standard and basic way would be:
resource "aws_instance" "ec2" {
count = var.instance_count
ami = "ami-0fe0b2cf0e1f25c8a"
instance_type = "t2.micro"
vpc_security_group_ids = [var.security_group_id]
subnet_id = var.subnet_id
key_name = var.key
tags = { Name = "${var.name}-${count.index + 1}" }
## For the root volume attached to Ec2 instance
root_block_device {
delete_on_termination = var.delete_on_termination
encrypted = var.encrypted
iops = var.iops
volume_size = var.volume_size
volume_type = var.volume_type
throughput = var.throughput
}
## For the additional EBS block device attached to Ec2 instance
ebs_block_device {
delete_on_termination = var.delete_on_termination
device_name = var.device_name
encrypted = var.encrypted
iops = var.iops
kms_key_id = var.kms_key_id
snapshot_id = var.snapshot_id
volume_size = var.volume_size
volume_type = var.volume_type
throughput = var.throughput
}
}
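Each var.* reference in the snippet above needs a matching variable declaration. A minimal sketch (names, types, and defaults here are illustrative, not from the original answer); roughly, Size maps to instance_type, AZ to availability_zone, HDD to the volume settings, and Port to security group rules:

# Illustrative variable declarations for the example above; adjust to taste.
variable "instance_count" {
  type        = number
  description = "Number of EC2 instances to create"
  default     = 1
}

variable "name" {
  type        = string
  description = "Name prefix for the instances"
}

variable "volume_size" {
  type        = number
  description = "Volume size in GiB (the 'HDD' input)"
  default     = 20
}

variable "volume_type" {
  type        = string
  description = "EBS volume type, e.g. gp3"
  default     = "gp3"
}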
Regarding ports, I assume you mean the ports to access; those can be controlled via security groups.
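For example, a minimal sketch of a security group that opens one inbound port (the port, CIDR, and vpc_id variable are placeholders):

# Hypothetical security group exposing a single inbound port.
resource "aws_security_group" "ec2_sg" {
  name   = "ec2-sg"
  vpc_id = var.vpc_id # assumes a vpc_id variable exists

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # restrict this in real use
  }
}

You would then pass aws_security_group.ec2_sg.id into vpc_security_group_ids instead of var.security_group_id.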
I hope this info helps; it is not a complete aws_instance module, but it should give you an idea.

SBT clone git dependencies to a custom path using a plugin

I'm creating an aggregate SBT project which depends on several other Git projects. I understand that I can refer to them as a dependency using RootProject(uri("...")) and SBT clones them into an SBT-managed path.
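For reference, the stock approach looks something like this (the repository URL is a placeholder), with sbt picking the clone location itself:

// Standard sbt usage: reference an external git project; sbt clones it
// into its own staging directory (an sbt-managed path).
lazy val upstream = RootProject(uri("https://github.com/example/upstream.git"))

lazy val root = (project in file("."))
  .dependsOn(upstream)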
However, I need to download these into a custom path. The idea is to create a workspace that automatically downloads the related Git projects that can be worked on as well.
I was able to create a plugin with a task that clones the git repos using the sbt-git plugin:
BundleResolver.scala
def resolve: Def.Initialize[Task[Seq[String]]] = Def.task {
  val log = streams.value.log
  log.info("starting bundle resolution")
  val bundles = WorkspacePlugin.autoImport.workspaceBundles.value
  val bundlePaths = bundles.map { x =>
    val bundleName = extractBundleName(x)
    val localPath = file(".").toPath.toAbsolutePath.getParent.resolveSibling(bundleName)
    log.info(s"Cloning bundle: $bundleName")
    val (resultCode, outStr, errStr) = runCommand(Seq("git", "clone", x, localPath.toString))
    resultCode match {
      case 0 =>
        log.info(outStr)
        log.info(s"cloned $bundleName to path $localPath")
      case _ =>
        log.error(s"failed to clone $bundleName")
        log.error(errStr)
    }
    localPath.toString
  }
  bundlePaths
}
WorkspacePlugin.scala
object WorkspacePlugin extends AutoPlugin {
  override def trigger = allRequirements
  override def requires: Plugins = JvmPlugin && GitPlugin

  object autoImport {
    // settings
    val workspaceBundles = settingKey[Seq[String]]("Dependency bundles for this Workspace")
    val stagingPath = settingKey[File]("Staging path")
    // tasks
    val workspaceClean = taskKey[Unit]("Remove existing Workspace dependencies")
    val workspaceImport = taskKey[Seq[String]]("Download the dependency bundles and setup builds")
  }
  import autoImport._

  override lazy val projectSettings = Seq(
    workspaceBundles := Seq(), // default to no dependencies
    stagingPath := Keys.target.value,
    workspaceClean := BundleResolver.clean.value,
    workspaceImport := BundleResolver.resolve.value,
  )

  override lazy val buildSettings = Seq()
  override lazy val globalSettings = Seq()
}
However, this will not add the cloned repos as subprojects to the main project. How can I achieve this?
UPDATE: I had an idea to extend the RootProject logic, so that I can create custom projects that accept a git URL, clone it into a custom path, and return a Project from it.
object WorkspaceProject {
  def apply(uri: URI): Project = {
    val bundleName = GitResolver.extractBundleName(uri.toString)
    val localPath = file(".").toPath.toAbsolutePath.getParent.resolveSibling(bundleName)
    // clone the project
    GitResolver.clone(uri, localPath)
    Project.apply(bundleName.replace('.', '-'), localPath.toFile)
  }
}
I declared this in a plugin project, but can't access it where I'm using it. Do you think it'll work? How can I access it in my target project?
Can't believe it was this simple.
In my plugin project, I created a new object to use in place of RootProject
object WorkspaceProject {
  def apply(uri: URI): RootProject = {
    val bundleName = GitResolver.extractBundleName(uri.toString)
    val localPath = file(".").toPath.toAbsolutePath.getParent.resolve(bundleName)
    if (!localPath.toFile.exists()) {
      // clone the project
      GitResolver.clone(uri, localPath)
    }
    RootProject(file(localPath.toString))
  }
}
Then use it like this:
build.sbt
lazy val depProject = WorkspaceProject(uri("your-git-repo.git"))

lazy val root = (project in file("."))
  .settings(
    name := "workspace_1",
  )
  .dependsOn(depProject)

Apache Mina SFTP: Mount Remote Sub-Directory instead of Filesystem Root

I would like to use Apache SSHD to create an SFTP server and use SftpFileSystemProvider to mount a remote directory.
I successfully created the virtual file system with SftpFileSystemProvider, following the documentation: https://github.com/apache/mina-sshd/blob/master/docs/sftp.md#using-sftpfilesystemprovider-to-create-an-sftpfilesystem.
However, I'm stuck when mounting the remote directory, even with the documentation: https://github.com/apache/mina-sshd/blob/master/docs/sftp.md#configuring-the-sftpfilesystemprovider. It keeps mounting the root directory instead of the target one.
I tried:
adding the target directory to the sftp URI (not working)
getting a new filesystem from a path (not working)
Here is a quick example.
object Main:
  class Events extends SftpEventListener

  class Auth extends PasswordAuthenticator {
    override def authenticate(username: String, password: String, session: ServerSession): Boolean = {
      true
    }
  }

  class FilesSystem extends VirtualFileSystemFactory {
    override def createFileSystem(session: SessionContext): FileSystem = {
      val uri = new URI("sftp://xxx:yyy@host/plop")
      // val uri = SftpFileSystemProvider.createFileSystemURI("host", 22, "xxx", "yyy")
      val fs = Try(FileSystems.newFileSystem(uri, Collections.emptyMap[String, Object](), new SftpFileSystemProvider().getClass().getClassLoader())) match {
        case Failure(exception) =>
          println("Failed to mount bucket")
          println(exception.getMessage)
          throw exception
        case Success(filesSystem) =>
          println("Bucket mounted")
          filesSystem
      }
      // fs.getPath("plop").getFileSystem
      fs
    }
  }

  private val fs = new FilesSystem()
  private val sftpSubSystem = new SftpSubsystemFactory.Builder().build()
  sftpSubSystem.addSftpEventListener(new Events())

  private val sshd: SshServer = SshServer.setUpDefaultServer()
  sshd.setPort(22)
  sshd.setHost("0.0.0.0")
  sshd.setSubsystemFactories(Collections.singletonList(sftpSubSystem))
  sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")))
  sshd.setShellFactory(new InteractiveProcessShellFactory())
  sshd.setCommandFactory(new ScpCommandFactory())
  sshd.setFileSystemFactory(fs)
  sshd.setPasswordAuthenticator(new Auth())
  sshd.setSessionHeartbeat(HeartbeatType.IGNORE, Duration.ofSeconds(30L))

  @main def m() = {
    sshd.start()
    while (sshd.isStarted) {}
  }
end Main
Am I missing something?
SSHD version 2.8.0, SFTP protocol version 3, Scala 3, Java 11.
I could be wrong, but I think that these two ...
sshd.setShellFactory(new InteractiveProcessShellFactory())
sshd.setCommandFactory(new ScpCommandFactory())
... are redundant, and this ...
private val sftpSubSystem = new SftpSubsystemFactory.Builder().build()
... needs to be made aware of the virtual file system.
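If the goal is to expose /plop as the root that SFTP clients see, one option to try is re-rooting the mounted file system with mina-sshd's RootedFileSystemProvider (org.apache.sshd.common.file.root); treat this as an untested sketch to verify against version 2.8.0:

// Sketch only: mount the remote file system as in the question, then
// present the sub-directory as "/" to the client via RootedFileSystemProvider.
class RootedFilesSystem extends VirtualFileSystemFactory {
  override def createFileSystem(session: SessionContext): FileSystem = {
    val remoteFs = FileSystems.newFileSystem(
      new URI("sftp://xxx:yyy@host/"),
      Collections.emptyMap[String, Object](),
      new SftpFileSystemProvider().getClass().getClassLoader()
    )
    // Re-root the mounted file system at /plop instead of /
    new RootedFileSystemProvider().newFileSystem(remoteFs.getPath("/plop"), Collections.emptyMap[String, Object]())
  }
}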

Kotlin reflection change instance and all members that use the instance

We are using reflection to enable our tests to be started in different environments.
A typical test would look like this:
class TestClass {
    val environment: Environment = generateEnvironment("jUnit")
    val path: String = environment.path
    //Do test stuff
}
We are using reflection like this:
class PostgresqlTest {
    val classList: List<KClass<*>> = listOf(TestClass::class)
    val postgresEnv = generateEnvironment("postgres")

    @TestFactory
    fun generateTests(): List<DynamicTest> = classList.flatMap { testClass ->
        val instance = testClass.createInstance()
        environmentProperty(testClass).setter.call(instance, postgresEnv)
        //<<generate the dynamic tests>>
    }

    fun environmentProperty(testClass: KClass<*>) =
        testClass.memberProperties.find {
            it.returnType.classifier == Environment::class
        } as KMutableProperty<*>
}
Now we have the issue that path != environment.path in PostgresqlTest, because path was initialized from the original environment when the instance was constructed, before the property was overwritten.
I know this can be solved in TestClass with lazy or get(), like this:
class TestClass {
    val environment: Environment = generateEnvironment("jUnit")
    val path: String by lazy { environment.path }
    // OR
    val path: String get() = environment.path
}
However, this seems like a potential pitfall for future developers, especially since the first code snippet works in TestClass and only fails for the tests where the environment is overwritten.
What is the cleanest way to ensure that path == environment.path when overwriting the property?
Ideally, if you're using a dependency injection framework (e.g. Dagger) you would want the test classes to just inject the Environment (which would allow referencing the environment path only after it's provided), for example:
class TestClass {
    @Inject lateinit var environment: Environment
    private lateinit var path: String

    @Before fun setup() {
        // do injection here
        path = environment.path
    }
}
Otherwise, I think interface delegation could be a good option here; it avoids reflection entirely. For instance, create an EnvironmentHost interface which surfaces an environment and a path property:
interface EnvironmentHost {
    var environment: Environment
    val path: String
}
Create an implementation here for test classes:
class TestEnvironmentHost : EnvironmentHost {
    override var environment: Environment = generateEnvironment("jUnit")
    override val path: String
        get() = environment.path
}
Test classes can now look like:
class TestClass : EnvironmentHost by TestEnvironmentHost() {
    @Test fun myTest() {
        val myPath = path
        val myEnvironment = environment
    }
}
And your test factory can be simplified to:
@TestFactory
fun generateTests(): List<DynamicTest> = classList.flatMap { testClass ->
    val instance = testClass.createInstance()
    // Assign an environment if the test is an EnvironmentHost. If not,
    // you could choose to treat that as a failure and require the test
    // class to be an EnvironmentHost.
    (instance as? EnvironmentHost)?.environment = postgresEnv
    ...
}
I ended up creating a new test task in Gradle for each environment:
task postgresqlIntegrationTest(type: Test, group: "Verification", description: "Runs integration tests on postgresql.") {
    dependsOn compileTestKotlin
    mustRunAfter test
    environment "env", "postgresql"
    useJUnitPlatform {
        filter {
            includeTestsMatching "*IT"
        }
    }
}
where my test class just loads the environment like this:
class TestClass {
    val environment: Environment = generateEnvironment(System.getenv("env") ?: "junit")
    //Do test stuff
}

list of objects (blocks for network)

With openstack_compute_instance_v2, Terraform can attach existing networks. I have 1 to n networks to attach; in my module:
...
variable "vm_network" {
  type = "list"
}

resource "openstack_compute_instance_v2" "singlevm" {
  name            = "${var.vm_name}"
  image_id        = "${var.vm_image}"
  key_pair        = "${var.vm_keypair}"
  security_groups = "${var.vm_sg}"
  flavor_name     = "${var.vm_size}"
  network         = "${var.vm_network}"
}
in my .tf file:
module "singlevm" {
  ...
  vm_network = {"name"="NETWORK1"}
  vm_network = {"name"="NETWORK2"}
}
Terraform returns an "expected object, got invalid" error.
What am I doing wrong here?
That's not how you specify a list in the .tf file that sources the module.
Instead, you should have something more like:
variable "vm_network" { default = [ "NETWORK1", "NETWORK2" ] }
module "singlevm" {
...
vm_network = "${var.vm_network}"
}
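As a side note (not from the original answer): on Terraform 0.12+ the 1-to-n case is usually expressed with a typed variable and a dynamic block rather than assigning a list directly; a rough sketch reusing the question's variable names:

# Sketch for Terraform 0.12+ syntax; variable names reuse the question's.
variable "vm_network" {
  type    = list(string)
  default = ["NETWORK1", "NETWORK2"]
}

resource "openstack_compute_instance_v2" "singlevm" {
  name        = var.vm_name
  image_id    = var.vm_image
  flavor_name = var.vm_size

  # One network block per entry in var.vm_network
  dynamic "network" {
    for_each = var.vm_network
    content {
      name = network.value
    }
  }
}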
