Not able to initialise Vault

I am trying to initialise Vault with the configuration files below.
vault.hcl
path "*"{
capabilities = [ "read", "list", "update","create" ]
}
vault.conf
backend "file" {
path = "/vault/vaultsecrets"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
tls_cert_file = "/vault/certs/host.pem"
tls_key_file = "/vault/certs/host.key"
tls_cipher_suites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA"
tls_prefer_server_cipher_suites = "true"
}
disable_mlock = "true"
and it gives me the following error:
{'errors': ['failed to initialize barrier: failed to persist keyring: mkdir /vault/vaultsecrets/core: permission denied']}
I feel this has something to do with file permissions, but I am not sure where to change them.
Note: this works fine with vault:1.0.1 but throws the above error when I migrate to vault:1.4.3.
Thanks in advance
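The error means the Vault process inside the container cannot create directories under /vault/vaultsecrets, so the ownership of that directory is the first thing to check. A minimal sketch, assuming /vault/vaultsecrets is bind-mounted from the host and the container runs Vault as its non-root vault user (the container name, UID/GID and host path below are placeholders):

# find out which UID/GID Vault runs as inside the container
docker exec <vault-container> id vault

# give that UID/GID ownership of the mounted directory on the host
sudo chown -R <uid>:<gid> /path/on/host/vaultsecrets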

Related

How to retrieve the EKS kubeconfig?

I have defined an aws_eks_cluster and aws_eks_node_group as follows:
resource "aws_eks_cluster" "example" {
count = var.create_eks_cluster ? 1 : 0
name = local.cluster_name
role_arn = aws_iam_role.example[count.index].arn
vpc_config {
subnet_ids = [
aws_subnet.main2.id,
aws_subnet.main3.id
]
security_group_ids = [
module.network.security_group_allow_all_from_client_ip,
module.network.security_group_main_id
]
endpoint_private_access = true
endpoint_public_access = false
}
# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.example-AmazonEKSVPCResourceController,
]
}
resource "aws_eks_node_group" "example" {
count = var.create_eks_cluster ? 1 : 0
cluster_name = aws_eks_cluster.example[count.index].name
node_group_name = random_uuid.deployment_uuid.result
node_role_arn = aws_iam_role.eks-node-group-example[count.index].arn
subnet_ids = [
aws_subnet.main2.id,
aws_subnet.main3.id
]
scaling_config {
desired_size = 1
max_size = 5
min_size = 1
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,
]
}
How can I retrieve the KubeConfig?
I have seen that the kubeconfig is available as an output on the eks module.
Do I need to replace aws_eks_cluster and aws_eks_node_group with the eks module?
The EKS module composes a kubeconfig based on a template.
You can include that template alongside your terraform code.
You will need to provide default values for all the variables in the templatefile function call and reference your own EKS resource name. It's fine to drop all the coalescelist functions too.
e.g.:
locals {
  kubeconfig = templatefile("templates/kubeconfig.tpl", {
    kubeconfig_name = local.kubeconfig_name
    # the [0] index is needed because aws_eks_cluster.example uses count
    endpoint                          = aws_eks_cluster.example[0].endpoint
    cluster_auth_base64               = aws_eks_cluster.example[0].certificate_authority[0].data
    aws_authenticator_command         = "aws-iam-authenticator"
    aws_authenticator_command_args    = ["token", "-i", aws_eks_cluster.example[0].name]
    aws_authenticator_additional_args = []
    aws_authenticator_env_variables   = {}
  })
}

output "kubeconfig" {
  value = local.kubeconfig
}

Issue while adding PostgreSQL as the database for one node in an Enterprise CorDapp

I was working on the trial version of Corda Enterprise Edition (cordapp-example-release-enterpise-v3). I tried to change one node's database from H2 to PostgreSQL using the code shown below:
node {
    dataSourceProperties = {
        dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
        dataSource.url = "jdbc:postgresql://localhost:5432/postgres"
        dataSource.user = test
        dataSource.password = test123
    }
    database = {
        transactionIsolationLevel = READ_COMMITTED
    }
    name "O=PartyC,L=Paris,C=FR"
    p2pPort 10013
    rpcSettings {
        address("localhost:10014")
        adminAddress("localhost:10054")
    }
    webPort 10015
    cordapps = ["$corda_release_group:corda-finance:$corda_release_version"]
    rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
}
While starting the build with gradlew clean build, I am getting an error like Could not set unknown property 'dataSourceProperties' for object of type net.corda.plugins.Node. Can somebody help me with this? Also, if I am using IntelliJ to run the code, how do I edit the NodeDriver.kt file?
The dataSourceProperties block belongs in node.conf, not in the deployNodes task, so this won't work; Cordform doesn't know anything about dataSourceProperties, which is why you see the error Could not set unknown property. You can use extraConfig to make this work. However, I'd recommend making these changes in node.conf and using the bootstrapper tool for bootstrapping. An example using extraConfig is below:
node {
    ....
    extraConfig = [
        dataSourceProperties : [
            'dataSourceClassName'   : "org.h2.jdbcx.JdbcDataSource",
            '"dataSource.url"'      : "jdbc:h2:tcp://localhost:9105/persistence;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000;WRITE_DELAY=100;AUTO_RECONNECT=TRUE;",
            '"dataSource.user"'     : "sa",
            '"dataSource.password"' : ""
        ],
        database : ["transactionIsolationLevel" : "READ_COMMITTED"]
    ]
}
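For the PostgreSQL setup from the question, the same pattern would look roughly like this (a sketch built from the question's own values; you still need the PostgreSQL JDBC driver available to the node, and the URL and credentials are taken from the question, not verified here):

node {
    ....
    extraConfig = [
        dataSourceProperties : [
            'dataSourceClassName'   : "org.postgresql.ds.PGSimpleDataSource",
            '"dataSource.url"'      : "jdbc:postgresql://localhost:5432/postgres",
            '"dataSource.user"'     : "test",
            '"dataSource.password"' : "test123"
        ],
        database : ["transactionIsolationLevel" : "READ_COMMITTED"]
    ]
}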

Corda: Trying to put the RPC Permissions on an external database

I'm trying to put the RPC permissions, along with the users and their passwords, on an external database. I've followed the documentation for Corda v3.3 (https://docs.corda.net/clientrpc.html#rpc-security-management).
It says that I need to create a "security" field for the node in question and fill out all the necessary information. I've done it, but as soon as I try to deploy the Node, it gives me this error:
"Could not set unknown property 'security' for object of type net.corda.plugins.Node."
The node's information looks like this in the build.gradle file:
node {
    name "O=myOrganisation,L=Lisbon,C=PT"
    p2pPort 10024
    rpcSettings {
        address("localhost:10025")
        adminAddress("localhost:10026")
    }
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "localhost:3306"
                    username = "*******"
                    password = "*******"
                    driverClassName = "com.mysql.jdbc.Driver"
                }
            }
        }
    }
    cordapps = [
        "$project.group:cordapp:$project.version"
    ]
}
You are confusing two syntaxes:
The syntax for configuring a node block inside a Cordform task such as deployNodes
The syntax for configuring a node directly via node.conf
The security settings are for inside node.conf. You have to create the node first, then modify the node's node.conf with these settings once it has been created.
Corda 4 will introduce an extraConfig option for use inside Cordform node blocks, as described here.
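As a rough sketch of what that looks like once moved, the node.conf block would be the following (the values are the question's own; note that jdbcUrl would need to be a full JDBC URL such as jdbc:mysql://host:port/database rather than just a host and port):

security {
    authService {
        dataSource {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection {
                jdbcUrl = "localhost:3306"
                username = "*******"
                password = "*******"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
    }
}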

Terraform auth not working when adding resources

I wanted to try out terraform on our OpenStack environment. I tried to set it up and it seems to work when only the following is defined:
provider "openstack" {
user_name = "test"
tenant_name = "test"
password = "testpassword"
auth_url = "https://test:5000/v3/"
region = "test"
}
I can run terraform plan without any problem; it says:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
When I try to add a resource:
resource "openstack_compute_instance_v2" "test" {
name = "test_server"
image_id = "test_id123"
flavor_id = "3"
key_pair = "test"
security_groups = ["default"]
network {
name = "Default Network"
}
}
and run terraform plan, I now get:
Error: Error running plan: 1 error(s) occurred:
provider.openstack: Authentication failed
The authentication is not actually working; something in your provider section is incorrect. Terraform does not verify the provider information when there is no resource using it.
I validated your findings and then took it a step further. I created two providers, one for AWS and one for OpenStack using your example. I then added a resource to create an AWS VPC. My AWS credentials were correct. When I ran terraform plan, it returned the action plan for building the VPC. It did not check the fake OpenStack credentials.
One other thing: once there is a resource for a provider, it always uses the credentials, even if there is nothing to do.
provider "aws" {
access_key = "<redacted>"
secret_key = "<redacted>"
region = "us-east-1"
}
provider "openstack" {
user_name = "test"
tenant_name = "test"
password = "testpassword"
auth_url = "https://test:5000/v3/"
region = "test"
}
/* Create VPC */
resource "aws_vpc" "default" {
cidr_block = "10.200.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags {
Name = "testing"
}
}
This produced the following output, verifying that the OpenStack provider wasn't checked:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  + aws_vpc.default
      id:                               <computed>
      arn:                              <computed>
      assign_generated_ipv6_cidr_block: "false"
      cidr_block:                       "10.200.0.0/16"
      default_network_acl_id:           <computed>
      default_route_table_id:           <computed>
      default_security_group_id:        <computed>
      dhcp_options_id:                  <computed>
      enable_classiclink:               <computed>
      enable_classiclink_dns_support:   <computed>
      enable_dns_hostnames:             "true"
      enable_dns_support:               "true"
provider "aws" {
      instance_tenancy:                 "default"
      ipv6_association_id:              <computed>
      ipv6_cidr_block:                  <computed>
      main_route_table_id:              <computed>
      tags.%:                           "1"
      tags.Name:                        "testing"

Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

AppSync BatchResolver AssumeRole Error

I’m trying to use the new DynamoDB BatchResolvers to write to two DynamoDB tables in an AppSync resolver (currently using a Lambda function to do this). However, I’m getting the following permission error when looking at the CloudWatch logs:
“User: arn:aws:sts::111111111111:assumed-role/appsync-datasource-ddb-xxxxxx-TABLE-ONE/APPSYNC_ASSUME_ROLE is not authorized to perform: dynamodb:BatchWriteItem on resource: arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException;
I’m using TABLE-ONE as my data source in my resolver.
I added "dynamodb:BatchWriteItem" and "dynamodb:BatchGetItem" to TABLE-ONE’s permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE/*",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO/*"
      ]
    }
  ]
}
I have another resolver that uses the BatchGetItem operation and was getting null values in my response; changing the table’s policy access level fixed the null values.
However, checking the box for BatchWriteItem doesn’t seem to solve the issue, nor does adding the permissions to the data source table’s policy.
I also tested my resolver with the test feature in AppSync; the evaluated request and response work as intended.
Where else could I set the permissions for a BatchWriteItem operation between two tables? It seems like it's invoking the user's assumed-role instead of the table's role - can I 'force' it to use the table's role?
It is using the role that you have configured for the table in the AppSync console. Note that that particular role should have appsync as a trusted entity.
Or, if you use the new-role tick box when creating the data source in the console, it will take care of this for you.
variable "dynamodb_table" {
description = "Name of DynamoDB table"
type = any
}
# value of var
dynamodb_table = {
"dyn-notification-inbox" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable
}
"dyn-notification-count" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable2
}
}
locals {
roles_arns = {
dynamodb = var.dynamodb_table
kms = var.kms_keys
}
}
data "aws_iam_policy_document" "invoke_dynamodb_document" {
statement {
actions = [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:UpdateItem",
"dynamodb:Query"
]
# dynamic dynamodb table
# for dynamodb table v.table.arn and v.table.arn/*
resources = flatten([
for k, v in local.roles_arns.dynamodb : [
v.table.arn,
"${v.table.arn}/*"
]
])
}
}
# make policy
resource "aws_iam_policy" "iam_invoke_dynamodb" {
name = "policy-${var.name}"
policy = data.aws_iam_policy_document.invoke_dynamodb_document.json
}
# attach role
resource "aws_iam_role_policy_attachment" "invoke_dynamodb" {
role = aws_iam_role.iam_appsync_role.name
policy_arn = aws_iam_policy.iam_invoke_dynamodb.arn
}
Result:
resources: [
  'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table',
  'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table/*'
]
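Note that aws_iam_role.iam_appsync_role referenced above also needs a trust policy that lets AppSync assume it (the "trusted entity" mentioned earlier). A minimal sketch, assuming the role is managed in the same Terraform code (the role name is a placeholder):

data "aws_iam_policy_document" "appsync_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["appsync.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "iam_appsync_role" {
  name               = "appsync-dynamodb-role"
  assume_role_policy = data.aws_iam_policy_document.appsync_assume_role.json
}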
