Advertised P2P messaging addresses change when compatibilityZoneURL is set - corda

I'm trying to set up an environment on separate machines:
Server1: Node
Server2: Bridge
Server3: Float
When I execute the node registration, or run:
java -jar corda.jar --just-generate-node-info
The address in nodeInfo-XXX is generated correctly, pointing to the Server3 IP (float), which I've set as the p2pAddress in node.conf.
But when I set the compatibilityZoneURL parameter to point to a configured Cordite Network Map service and start the node, the nodeInfo-XXX and the "Advertised P2P messaging addresses" change to the Server1 IP, even though this IP doesn't appear anywhere in node.conf.
My node.conf:
myLegalName="O=Node Test,L=Sao Paulo,C=BR"
p2pAddress="float-server-IP-or-alias:10005"
rpcSettings {
useSsl = false
standAloneBroker = false
address="0.0.0.0:10031"
adminAddress="0.0.0.0:10061"
}
security {
authService {
dataSource {
type=INMEMORY
users=[
{
password=test
permissions=[
ALL
]
user=user1
}
]
}
}
}
useTestClock = false
enterpriseConfiguration = {
externalBridge = false
mutualExclusionConfiguration = {
on = true
updateInterval = 20000
waitInterval = 40000
}
}
devMode=false
compatibilityZoneURL : "http://10.102.32.106:8080/"
keyStorePassword = "cordacadevpass"
trustStorePassword = "trustpass"
Edit: I'm using Corda Enterprise v3.1

Are you able to try adding the following line to your node.conf:
detectPublicIp = false
From the docs:
This flag toggles the auto IP detection behaviour, it is enabled by default. On startup the node will attempt to discover its externally visible IP address first by looking for any public addresses on its network interfaces, and then by sending an IP discovery request to the network map service. Set to false to disable.
Let us know if this works.

Related

AWS EKS load balancer creation fails for EKS cluster

I have created an EKS cluster with two public subnets, and successfully created one worker node (instance type = t3.small) in one of the public subnets.
I am able to create an Nginx deployment and a NodePort service, query the deployment and other k8s objects, and access the web application through the node port (<node-public-ip>:<nodeport>).
I am trying to create load balancers (ALB and NLB), but both are failing.
The whole setup is driven by Terraform files. I need help identifying why the creation of both load balancer types is failing, and how I can fix this in my Terraform files.
Terraform file for network load balancer:
resource "kubernetes_service_v1" "nlb-nginx-service" {
metadata {
name = "nlb-nginx-service"
annotations = {
"service.beta.kubernetes.io/aws-load-balancer-type" = "external"
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type" = "ip"
"service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
ERROR I get when I describe the nlb service
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
terraform file for application load balancer:
resource "kubernetes_service_v1" "alb-nginx-service" {
metadata {
name = "alb-nginx-service"
annotations = {
"kubernetes.io/ingress.class" = "alb"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
ERROR I get when I describe the alb service
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 52s (x6 over 3m29s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 52s (x6 over 3m28s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Steps I took to investigate, which unfortunately didn't help:
- tried to create these services on the cluster one by one
- checked the service logs but didn't get a clue
It seems the load balancer controller is not able to find a public subnet to place the load balancer in, as the error says 'could not find any suitable subnets for creating the ELB', but I'm not aware of where to assign a public subnet for my load balancer.
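The AWS controllers discover subnets by tags rather than by an explicit assignment, so the usual fix for 'could not find any suitable subnets' is to tag the public subnets for ELB use. (Separately, the "external" load-balancer-type annotation on the NLB service is handled by the AWS Load Balancer Controller, which likely needs to be installed in the cluster; that would explain the NLB service sitting at "Ensuring load balancer" with no error.) A minimal sketch of the tags, where aws_vpc.main and var.cluster_name are illustrative names, not from the original config:
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id   # illustrative VPC resource name
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  map_public_ip_on_launch = true

  tags = {
    # Marks the subnet as usable for internet-facing load balancers
    "kubernetes.io/role/elb" = "1"
    # Ties the subnet to the cluster for (older) subnet auto-discovery;
    # var.cluster_name is an assumed variable holding your EKS cluster name
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}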

Terraform AWS - Creation of multiple AWS S2S VPNs

Requirement is to establish communication from multiple offices to AWS through Terraform. Below is the code so far, where the customer gateways and virtual private gateways will be created based on the variable cgw (let's assume 5 customer gateways):
resource "aws_customer_gateway" "cgw-main" {
for_each = toset(var.cgw)
ip_address = each.value
bgp_asn = 65000
type = "ipsec.1"
}
resource "aws_vpn_gateway" "vpn-gw-main" {
count = length(var.cgw)
vpc_id = aws_vpc.myvpc.id
}
resource "aws_vpn_connection" "vpn-main" {
customer_gateway_id= ??
vpn_gateway_id = ??
type="ipsec.1"
static_routes_only = true
}
How do I dynamically pick up each created customer gateway and map it to a created VPN gateway in aws_vpn_connection for 5 iterations?
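A minimal sketch of one way to wire this together, keeping var.cgw as the list of office IPs from the question: chain for_each from the customer gateways into the VPN connections. Note that a VPC supports only one attached virtual private gateway, so a single VGW (rather than count = length(var.cgw)) is shared by all five connections:
resource "aws_customer_gateway" "cgw-main" {
  for_each   = toset(var.cgw)   # set of office public IPs
  ip_address = each.value
  bgp_asn    = 65000
  type       = "ipsec.1"
}

# A VPC can only have one virtual private gateway attached at a time,
# so one shared VGW replaces count = length(var.cgw)
resource "aws_vpn_gateway" "vpn-gw-main" {
  vpc_id = aws_vpc.myvpc.id
}

resource "aws_vpn_connection" "vpn-main" {
  # One VPN connection per customer gateway, keyed by the same IPs
  for_each            = aws_customer_gateway.cgw-main
  customer_gateway_id = each.value.id
  vpn_gateway_id      = aws_vpn_gateway.vpn-gw-main.id
  type                = "ipsec.1"
  static_routes_only  = true
}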

GCP GKE cannot access Redis Memorystore

I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).
All of the resources related to the project are in a dedicated network.
network module:
resource "google_compute_network" "my_project" {
name = "my_project"
auto_create_subnetworks = true
}
output "my_project_network_self_link" {
value = google_compute_network.my_project_network.self_link
}
I use the network in the GKE cluster (network = "${var.network_link}"):
resource "google_container_cluster" "my_project" {
name = "my_project-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b"]
network = "${var.network_link}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
Node pools omitted.
and I set the network as authorized_network in the Memorystore configuration:
resource "google_redis_instance" "cache" {
name = "my_project-redis"
tier = "STANDARD_HA"
memory_size_gb = 1
authorized_network = "${var.network_link}"
# location_id = "us-central1-a"
redis_version = "REDIS_4_0"
display_name = "my_project Redis cache"
}
variable "network_link" {
description = "The link of the network instance is in"
type = string
}
I guess that the problem is related to the network, because this previously worked fine using the default network.
Currently the GKE nodes are in us-central1-a and us-central1-b (specified in the TF script) and the Memorystore instance is in us-central1-c. So the cluster and Redis are in the same VPC but in different sub-networks. Could this be the problem?
I had to add the following section to the cluster module in Terraform:
ip_allocation_policy {
  cluster_ipv4_cidr_block  = ""
  services_ipv4_cidr_block = ""
}
This seems to enable the VPC-native (alias IP) property of the cluster, which GKE clusters generally need in order to reach a Memorystore instance on a non-default network.
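For reference, a sketch of where that block sits inside the cluster resource from above; the empty CIDR strings let GKE pick the secondary ranges itself:
resource "google_container_cluster" "my_project" {
  name     = "my_project-cluster"
  location = "us-central1"
  network  = "${var.network_link}"

  # Makes the cluster VPC-native (alias IPs); empty strings let GKE
  # choose the pod and service secondary ranges automatically
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }

  # ... node pool and master_auth settings as above ...
}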

Corda - Failed to find a store at certificates\sslkeystore.jks

Corda open source on Linux. Node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this section of the docs: https://docs.corda.net/clientrpc.html#wire-security, which I've detailed for you below.
When you enable RPC SSL, you must run this command one time (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl = true
    ssl {
        keyStorePath = ${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword = password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
Above we added 2 extra attributes that you must now supply when starting the webserver; for that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
         '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
         '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
         '--config.rpc.ssl.truststorepassword=password'
}
If you're planning to connect to the node with the standalone shell, you must do something similar; but it didn't work for me, and I reported the following bug: https://github.com/corda/corda/issues/5955

Terraform Provisioner "local-exec" not working as expected | VPC Peering Connection Accept issue

I'm unable to get the peering auto-accepted through the workaround mentioned in the link ("Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?") via the provisioner option.
See my Terraform code below. Can someone help me out?
provider "aws" {
region = "us-east-1"
profile = "default"
}
provider "aws" {
region = "us-east-1"
profile = "peer"
alias = "peer"
}
data "aws_caller_identity" "peer" {
provider = "aws.peer"
}
resource "aws_vpc_peering_connection" "service-peer" {
vpc_id = "vpc-123a56789bc"
peer_vpc_id = "vpc-YYYYYY"
peer_owner_id = "012345678901"
peer_region = "us-east-1"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
Output I'm getting:
Error: Error applying plan:
1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: 1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: Unable to modify peering options. The VPC Peering Connection "pcx-08ebd316c82acacd9" is not active. Please set `auto_accept` attribute to `true`, or activate VPC Peering Connection manually.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure
Whereas I'm able to run the AWS CLI command successfully from a Linux shell, outside the Terraform template. Let me know if I'm missing something in the Terraform script.
Try moving your "local-exec" out into a null_resource and adding a depends_on link to your VPC peering connection:
resource "null_resource" "peering-provision" {
depends_on = ["aws_vpc_peering_connection.service-peer"]
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
As Koe said, it may be better to use the auto_accept option.
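For the cross-account case here, the AWS provider also has dedicated resources that avoid the provisioner entirely: aws_vpc_peering_connection_accepter to accept the connection from the peer account, and aws_vpc_peering_connection_options for the accepter-side DNS setting. A sketch using the "peer" provider alias from the question; note the accepter block would move out of the requester resource, since the requester account cannot modify accepter options before the connection is active (which is what the original error complained about):
resource "aws_vpc_peering_connection_accepter" "service-peer" {
  provider                  = aws.peer
  vpc_peering_connection_id = aws_vpc_peering_connection.service-peer.id
  auto_accept               = true
}

resource "aws_vpc_peering_connection_options" "service-peer-accepter" {
  provider                  = aws.peer
  vpc_peering_connection_id = aws_vpc_peering_connection_accepter.service-peer.id

  # Accepter-side options are set from the accepting account
  accepter {
    allow_remote_vpc_dns_resolution = true
  }
}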
