How to manually generate network parameters in Corda - X509Certificate

I have manually created and distributed the required certificates for Corda nodes. Now, for the nodes to start, they need (among other things) a network parameters file. The problem is that if I use the Corda network bootstrapper tool to generate the network parameters, the file is signed by a different issuer ("C=UK, L=London, OU=corda, O=R3, CN=Corda Node Root CA") than the issuer of my certificates. My question is: how can I manually create the network parameters so I can specify the correct issuer and avoid conflicts during node startup?

So I have figured out a way to create the network parameters:
import net.corda.core.crypto.CertificateAndKeyPair
import net.corda.core.identity.Party
import net.corda.core.node.NetworkParameters
import net.corda.core.node.NotaryInfo
import java.time.Instant

private fun getSignedNetworkParameters(): NetworkParameters {
    // Load the notary's identity from a keystore; this avoids having to start a flow
    // from a node just to retrieve the NotaryInfo. loadKeyStore and getCertificateAndKeyPair
    // are keystore helpers (in Corda 3.x they live in net.corda.nodeapi.internal.crypto).
    val notary = loadKeyStore("\\path-to-keystore\\keystore.jks", "keystore-password")
    val certificateAndKeyPair: CertificateAndKeyPair = notary.getCertificateAndKeyPair("entry-alias", "entry-password")
    val notaryParty = Party(certificateAndKeyPair.certificate)
    val notaryInfo = listOf(NotaryInfo(notaryParty, false))
    // Map each contract class name to the SHA-256 hash of its CorDapp contracts-and-states JAR.
    val whitelistedContractImplementations = mapOf(
        TestContract.TEST_CONTRACT_ID to listOf(getCheckSum(contractFile))
    )
    return NetworkParameters(minimumPlatformVersion = 3, notaries = notaryInfo,
        maxMessageSize = n, maxTransactionSize = n, // n is a placeholder; pick real limits
        modifiedTime = Instant.now(), epoch = 1,
        whitelistedContractImplementations = whitelistedContractImplementations)
}
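For completeness, here is a minimal sketch of the getCheckSum helper assumed above (hypothetical; it just computes the JAR's SHA-256, which is the attachment ID the node derives when the JAR is imported):
import net.corda.core.crypto.SecureHash
import net.corda.core.node.services.AttachmentId
import java.io.File

// Hypothetical helper: the SHA-256 of the contracts JAR is what belongs in the whitelist.
private fun getCheckSum(contractFile: File): AttachmentId =
    SecureHash.sha256(contractFile.readBytes())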

You could sign your certificates with the development certificate that is used by the network bootstrapper: https://github.com/corda/corda/tree/master/node-api/src/main/resources/certificates
If that doesn't work for you, you could try this experimental tool: https://github.com/corda/corda/blob/master/experimental/netparams/src/main/kotlin/net.corda.netparams/NetParams.kt . I can't promise that it works with Corda 3.3 though.
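If you do need to produce the signed network-parameters file yourself, a rough sketch follows. This relies on internal, unsupported Corda APIs (net.corda.core.internal.signWithCert), so treat it as an assumption rather than a stable recipe; it signs the parameters with your own CA entry and writes them where the node expects them:
import net.corda.core.internal.signWithCert
import net.corda.core.serialization.serialize
import java.nio.file.Files
import java.nio.file.Paths

// certificateAndKeyPair: the same CA entry loaded from your keystore above.
// Note: a Corda serialization environment must be initialised for serialize() to work.
val signedParams = getSignedNetworkParameters()
    .signWithCert(certificateAndKeyPair.keyPair.private, certificateAndKeyPair.certificate)
Files.write(Paths.get("node-base-dir", "network-parameters"), signedParams.serialize().bytes)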

Related

Azure Disk Encryption using terraform VM extension - forces replacement [Second run]

I created the following resource to encrypt all disks of a VM (volume type 'All'), and it worked fine at first:
resource "azurerm_virtual_machine_extension" "vm_encry_win" {
count = "${var.vm_encry_os_type == "Windows" ? 1 : 0}"
name = "${var.vm_encry_name}"
location = "${var.vm_encry_location}"
resource_group_name = "${var.vm_encry_rg_name}"
virtual_machine_name = "${var.vm_encry_vm_name}"
publisher = "${var.vm_encry_publisher}"
type = "${var.vm_encry_type}"
type_handler_version = "${var.vm_encry_type_handler_version == "" ? "2.2" : var.vm_encry_type_handler_version}"
auto_upgrade_minor_version = "${var.vm_encry_auto_upgrade_minor_version}"
tags = "${var.vm_encry_tags}"
settings = <<SETTINGS
{
"EncryptionOperation": "${var.vm_encry_operation}",
"KeyVaultURL": "${var.vm_encry_kv_vault_uri}",
"KeyVaultResourceId": "${var.vm_encry_kv_vault_id}",
"KeyEncryptionKeyURL": "${var.vm_encry_kv_key_url}",
"KekVaultResourceId": "${var.vm_encry_kv_vault_id}",
"KeyEncryptionAlgorithm": "${var.vm_encry_key_algorithm}",
"VolumeType": "${var.vm_encry_volume_type}"
}
SETTINGS
}
When I ran it the first time, ADE encryption was applied to both the OS and data disks.
However, when I re-run terraform plan or terraform apply, Terraform wants to replace all the data disks I have already created (the plan output shows a forced replacement).
I do not know how to solve this, and my already-created disks must not be replaced.
I looked into ignore_changes:
lifecycle {
  ignore_changes = [encryption_settings]
}
I am not sure where to add this, or whether it actually solves the problem.
Which resource block should I add it to?
Or is there another way?
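A minimal sketch of where such a block would go, assuming the data disks are declared as azurerm_managed_disk resources (the resource name here is hypothetical): ignore_changes belongs on the resource the plan wants to replace, not on the encryption extension itself.
resource "azurerm_managed_disk" "data_disk" {
  # ... existing disk arguments ...

  lifecycle {
    # Ignore the encryption settings that ADE writes back onto the disk,
    # so later plans no longer see a diff that forces replacement.
    ignore_changes = [encryption_settings]
  }
}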

Update existing Terraform compute instance when adding new "components"

I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance. It could be anything, but here I have tried with a file provisioner and remote execution.
However, when I add these arguments to my compute instance, the instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
Indeed, after adding the file provisioner to the resource and running terraform plan or terraform apply, nothing changes on my instance. I get an info message telling me:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
From the Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you have to destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way for Terraform to check the state of a local or a remote execution; there's no API call that can tell it what happened inside your custom code, bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the configuration of the instance, not really the content on the instance. Immutable content and recreating is the true path here: make a completely new instance. However, if Terraform is your hammer, there are ways.
If you taint the resource that you want to update, then the next time Terraform runs, the resource will be recreated and its provisioners re-executed. But heed what I said about hammers.
Alternatively, you could leverage your CM tool of choice (Chef/Ansible) to manage the content of your instance, or build the (immutable) images used by OpenStack with a tool like Packer and update those. I'd do the latter.
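As a concrete illustration (using the resource address from the question), tainting forces the instance, and therefore its provisioners, to be recreated on the next apply:
terraform taint openstack_compute_instance_v2.compute_instance
terraform apply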

Corda - running in-memory nodes on separate processes doesn't work

In my IntelliJ project, I have two modules, which are CorDapps. I also have a run configuration for each:
Run Participant A CorDapp
Run Participant B CorDapp
Running either of these runs the CorDapp on an in-memory node:
package com.demo.cordapp.participant_a

import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.driver
import net.corda.testing.node.User

class Application {
    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            val parameters = DriverParameters(
                isDebug = true,
                waitForAllNodesToFinish = true,
                extraCordappPackagesToScan = listOf("com.demo.shared.domain")
            )
            driver(parameters) {
                startNode(
                    providedName = PARTICIPANT_1_NAME,
                    rpcUsers = listOf(User("user1", "test", permissions = setOf("ALL")))
                ).getOrThrow()
            }
        }
    }
}
If I start Participant A's node first, it works fine, but I get an error for Participant B, and vice-versa. The error is as follows:
Exception in thread "main" net.corda.testing.node.internal.ListenProcessDeathException:
The process that was expected to listen on localhost:10000 has died with status: 2
My guess is that there is a port conflict, as both of them are trying to use the same P2P, RPC and web ports?
DriverParameters has a portAllocation argument that determines how the ports are assigned to nodes.
It defaults to PortAllocation.Incremental(10000). For one of the nodes, you should set this to something else (e.g. PortAllocation.Incremental(20000)).
If you are running in debug mode, you also need to modify the debugPortAllocation.
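For example, a sketch of Participant B's parameters under that assumption (argument names follow the Corda 3.x test driver):
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.PortAllocation

// Give the second driver process its own port ranges so the two in-memory
// networks don't race for localhost:10000.
val parameters = DriverParameters(
    isDebug = true,
    waitForAllNodesToFinish = true,
    extraCordappPackagesToScan = listOf("com.demo.shared.domain"),
    portAllocation = PortAllocation.Incremental(20000),
    debugPortAllocation = PortAllocation.Incremental(25000)
)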

How to create private subnet on OVH using Terraform?

How am I supposed to create a private network/subnet on OVH using Terraform?
There is a generic OpenStack resource (openstack_networking_subnet_v2) and an OVH-specific one (ovh_publiccloud_private_network_subnet) if you use the OVH provider.
I am asking because when I follow this guide, my private network interface does not get an IPv4 address assigned (the same problem was already described in this question: Private network creation with Terraform on OVH's OpenStack). I can see an IP address in the Horizon control panel, but when I SSH to the instance using its Ext-Net IPv4 address and run ifconfig, the private network interface has no IPv4 address assigned. The interface is UP, but no IPv4 is assigned. I am using the Terraform code from the guide, like this:
# Create a private subnetwork
resource "ovh_publiccloud_private_network_subnet" "private_subnet" {
  # Get the id of the ovh_publiccloud_private_network resource
  # named private_network
  network_id = "${ovh_publiccloud_private_network.private_network.id}"
  project_id = "${var.project_id}" # Or OS_TENANT_ID, your project's tenant id
  region     = "WAW1"              # Or the OS_REGION_NAME environment variable
  network    = "192.168.42.0/24"   # Global network
  start      = "192.168.42.2"      # First IP of the subnet
  end        = "192.168.42.200"    # Last IP of the subnet
  dhcp       = false               # Deactivate the DHCP service
  provider   = "ovh.ovh"           # Provider alias
}

resource "openstack_compute_instance_v2" "front" {
  # Number of times the instance will be created
  count           = "${length(var.front_private_ip)}"
  provider        = "openstack.ovh" # Provider alias
  name            = "front"         # Instance name
  flavor_name     = "s1-2"          # Flavor name
  image_name      = "CoreOS Stable" # Image name
  key_pair        = "${openstack_compute_keypair_v2.test_keypair.name}"
  security_groups = ["default"]     # Add to a security group

  network = [
    {
      name = "Ext-Net" # Public interface name
    },
    {
      # Private interface name
      name = "${ovh_publiccloud_private_network.private_network.name}"
      # Give an IP address depending on count.index
      fixed_ip_v4 = "192.168.42.4"
    },
  ]
}
So, as I said, the above example does not work for me (I have to manually assign the private IPv4 address on the interface, while I would like Terraform to do it for me). Then I discovered the terraform-ovh-publiccloud-network module on OVH's GitHub. I tried the simple example from that repo (copy-pasted from the README), and I can see that the second interface on the Bastion node successfully gets an IPv4 address from the private range. From the module's code I can also see that the openstack_networking_subnet_v2 resource is used instead of the OVH-specific ovh_publiccloud_private_network_subnet. Why, and what is the difference between them? Which one am I supposed to use when I write my own Terraform definition from scratch?
My goal is just to create a private network/subnet and a compute instance with two interfaces (connected to the public Ext-Net and to the private subnet I just created). Please provide a short working example for OVH if you have such experience, or let me know if I am missing something.
You can rent a /24 of public IPs from OVH for like $800. But you gotta do that first.
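For the subnet itself, a minimal sketch with the generic OpenStack resource (untested against OVH; resource names reuse the question's). Note that the question sets dhcp = false, and with DHCP off the guest never configures the address it was allocated, which would explain the interface being UP with no IPv4:
resource "openstack_networking_subnet_v2" "private_subnet" {
  network_id  = "${ovh_publiccloud_private_network.private_network.id}"
  cidr        = "192.168.42.0/24"
  ip_version  = 4
  enable_dhcp = true # With DHCP off, the interface comes up without an IPv4 address
  provider    = "openstack.ovh"
}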

Corda: I can't use binary streaming from the web UI to attach a PDF (file) to a Corda node

We are trying to upload a PDF as an attachment to a Corda transaction using binary streaming. In fact, we took our inspiration from here (https://github.com/corda/corda/blob/release-M13.0/core/src/main/kotlin/net/corda/core/Utils.kt); check out fun sizedInputStreamAndHash(). Are there any other suggestions for what we could try?
The following is a snippet of how we do the binary streaming in the API:
logger.debug("numOfClearBytes: $numOfClearBytes")
val baos = ByteArrayOutputStream()
ZipOutputStream(baos).use { zos ->
    // Zip numOfClearBytes zero bytes in fixed-size chunks, as in sizedInputStreamAndHash().
    val arraySize = Math.min(4096, numOfClearBytes)
    val bytes = ByteArray(arraySize)
    val n = (numOfClearBytes - 1) / arraySize + 1 // same as Math.ceil(numOfClearBytes / arraySize)
    zos.setLevel(Deflater.BEST_COMPRESSION)
    zos.putNextEntry(ZipEntry("z"))
    for (i in 0 until n) {
        zos.write(bytes, 0, arraySize)
    }
    zos.closeEntry()
}
val bytes = baos.toByteArray()
val inputAndHash = InputStreamAndHash(ByteArrayInputStream(bytes), bytes.sha256())
val attachmentId = services.uploadAttachment(inputAndHash.inputStream)
val flowHandle = services.startTrackedFlow(::Payer, exchangeAmount, otherParty, attachmentId)
To upload an attachment to the node over HTTP using the built-in webserver, you hit the /upload/attachment endpoint. This endpoint is provided by the node by default; you do not have to add it yourself. See https://github.com/corda/corda/blob/release-V1/webserver/src/main/kotlin/net/corda/webserver/servlets/DataUploadServlet.kt.
If you send an attachment to this endpoint, it will be uploaded to the node and the endpoint will return the hash of the file on the node.
You can then use either CordaRPCOps.openAttachment or ServiceHub.attachments.openAttachment to retrieve the attachment as an input stream using its hash, and process it as you see fit (either in a web endpoint or in a flow).
To see an example CorDapp that uses an HTTP endpoint to upload an attachment, see the Blacklist sample here: https://github.com/corda/samples (details in the README).
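As an illustration, an upload from the command line might look like this (the webserver port is node-specific; 10007 and the form field name here are just examples):
curl -F "myfile=@/path/to/document.pdf" http://localhost:10007/upload/attachment
The response body is the attachment's SHA-256 hash, which you can then pass to CordaRPCOps.openAttachment or reference from a flow.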
