How to retrieve the EKS kubeconfig? - terraform-provider-aws

I have defined an aws_eks_cluster and aws_eks_node_group as follows:
resource "aws_eks_cluster" "example" {
count = var.create_eks_cluster ? 1 : 0
name = local.cluster_name
role_arn = aws_iam_role.example[count.index].arn
vpc_config {
subnet_ids = [
aws_subnet.main2.id,
aws_subnet.main3.id
]
security_group_ids = [
module.network.security_group_allow_all_from_client_ip,
module.network.security_group_main_id
]
endpoint_private_access = true
endpoint_public_access = false
}
# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.example-AmazonEKSVPCResourceController,
]
}
resource "aws_eks_node_group" "example" {
count = var.create_eks_cluster ? 1 : 0
cluster_name = aws_eks_cluster.example[count.index].name
node_group_name = random_uuid.deployment_uuid.result
node_role_arn = aws_iam_role.eks-node-group-example[count.index].arn
subnet_ids = [
aws_subnet.main2.id,
aws_subnet.main3.id
]
scaling_config {
desired_size = 1
max_size = 5
min_size = 1
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,
]
}
How can I retrieve the KubeConfig?
I have seen that the kubeconfig is available as an output on the eks module.
Do I need to replace aws_eks_cluster and aws_eks_node_group with the eks module?

The EKS module composes its kubeconfig output from a template file.
You can copy that template alongside your own Terraform code.
You will need to provide values for all the variables in the templatefile function call and reference your own aws_eks_cluster resource. It's fine to drop all the coalescelist functions too.
e.g.:
locals {
  kubeconfig = templatefile("templates/kubeconfig.tpl", {
    kubeconfig_name                   = local.kubeconfig_name
    # your cluster resource uses count, so reference it with an index
    endpoint                          = aws_eks_cluster.example[0].endpoint
    cluster_auth_base64               = aws_eks_cluster.example[0].certificate_authority[0].data
    aws_authenticator_command         = "aws-iam-authenticator"
    aws_authenticator_command_args    = ["token", "-i", aws_eks_cluster.example[0].name]
    aws_authenticator_additional_args = []
    aws_authenticator_env_variables   = {}
  })
}

output "kubeconfig" {
  value = local.kubeconfig
}

Related

Terraform with Localstack basic setup doesn't work

I want to try a basic LocalStack setup with Terraform.
My docker-compose file, from the LocalStack docs:
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
- "127.0.0.1:53:53" # DNS config (only required for Pro)
- "127.0.0.1:53:53/udp" # DNS config (only required for Pro)
- "127.0.0.1:443:443" # LocalStack HTTPS Gateway (only required for Pro)
environment:
- DEBUG=1
- LOCALSTACK_HOSTNAME=localhost
- PERSISTENCE=${PERSISTENCE-}
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-} # only required for Pro
- DOCKER_HOST=unix:///var/run/docker.sock
- SERVICES=s3
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
My Terraform file to create an S3 bucket:
# Public Cloud Configuration
provider "aws" {
  region     = "us-east-1"
  access_key = "test123"
  secret_key = "testabc"

  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true

  endpoints {
    s3 = "http://localhost:4566"
  }
}

# Create Bucket
resource "aws_s3_bucket" "my_bucket" {
  bucket = "bucket"
}
I get the following error when running terraform apply:
│ Error: creating Amazon S3 (Simple Storage) Bucket (bucket): RequestError: send request failed
│ caused by: Put "http://bucket.localhost:4566/": dial tcp: lookup bucket.localhost on 10.222.50.10:53: no such host
│
│ with aws_s3_bucket.my_bucket,
│ on main.tf line 15, in resource "aws_s3_bucket" "my_bucket":
│ 15: resource "aws_s3_bucket" "my_bucket" {
I'm able to create an S3 bucket manually in LocalStack with a command like
aws --endpoint-url http://localhost:4566 s3 mb s3://user-uploads
docker ps output
c9497bcff0e3 localstack/localstack "docker-entrypoint.sh" 23 minutes ago Up 18 minutes (healthy) 127.0.0.1:53->53/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:4510-4559->4510-4559/tcp, 127.0.0.1:4566->4566/tcp, 127.0.0.1:53->53/udp, 5678/tcp localstack_main
So why is LocalStack trying to resolve a different address like 192.168.178.1:53? Do I need to specify a different address somewhere? I checked a number of tutorials and the same setup works fine for everyone else.
The issue lies with your Terraform provider configuration: by default the AWS provider uses virtual-hosted-style S3 addressing (bucket.<endpoint>), which is why Terraform tries to resolve bucket.localhost. The following code inside main.tf works perfectly with tflocal (LocalStack's wrapper for the Terraform CLI), which configures the LocalStack endpoints for you:
resource "aws_s3_bucket" "my_bucket" {
bucket = "bucket"
}
tflocal init
tflocal apply
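If tflocal is not installed yet, it comes from LocalStack's terraform-local package (assuming a Python environment is available):

pip install terraform-local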
If you don't wish to use tflocal, you would need to have a configuration like this:
provider "aws" {
access_key = "mock_access_key"
secret_key = "mock_secret_key"
region = "us-east-1"
s3_force_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
s3 = "http://s3.localhost.localstack.cloud:4566"
}
}
resource "aws_s3_bucket" "test-bucket" {
bucket = "my-bucket"
}
I hope this helps.
I solved it by adding a property to the aws provider:
s3_use_path_style = true
Sample:
terraform {
  required_version = ">= 0.12"

  backend "local" {}
}

provider "aws" {
  region     = "localhost"
  access_key = "local"
  secret_key = "local"

  skip_region_validation      = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true // <<- this property here

  endpoints {
    dynamodb = "http://localhost:4566"
    s3       = "http://localhost:4566"
  }
}

// S3
resource "aws_s3_bucket" "my-bucket" {
  bucket = "teste"
}
I hope this helps you. (Note that s3_force_path_style from the earlier answer was renamed to s3_use_path_style in version 4 of the AWS provider, so which attribute you need depends on your provider version.)

Not able to initialise Vault

I am trying to initialise Vault with the configuration below.
vault.hcl
path "*"{
capabilities = [ "read", "list", "update","create" ]
}
vault.conf
backend "file" {
path = "/vault/vaultsecrets"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
tls_cert_file = "/vault/certs/host.pem"
tls_key_file = "/vault/certs/host.key"
tls_cipher_suites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA"
tls_prefer_server_cipher_suites = "true"
}
disable_mlock = "true"
and it gives me the following error:
{'errors': ['failed to initialize barrier: failed to persist keyring: mkdir /vault/vaultsecrets/core: permission denied']}
I feel this has something to do with file permissions, but I'm not sure where to change them.
Note: this works fine with vault:1.0.1 but throws the above error when I migrate to vault:1.4.3.
Thanks in advance
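Since the file backend has to create subdirectories under the mounted path, one thing worth checking is which user the Vault process runs as versus who owns the mount. A rough sketch, assuming the official vault image; the container name vault-server is made up, adjust to your setup:

# who does Vault run as, and who owns the storage path?
docker exec vault-server id
docker exec vault-server ls -ld /vault/vaultsecrets
# if the bind-mounted path is owned by root, hand it to the vault user
docker exec -u root vault-server chown -R vault:vault /vault/vaultsecrets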

Corda: Trying to put the RPC Permissions on an external database

I'm trying to put the RPC permissions, along with the users and their passwords, in an external database. I've followed the documentation for Corda v3.3 (https://docs.corda.net/clientrpc.html#rpc-security-management).
It says that I need to create a "security" field for the node in question and fill out all the necessary information. I've done it, but as soon as I try to deploy the Node, it gives me this error:
"Could not set unknown property 'security' for object of type net.corda.plugins.Node."
The node's information looks like this in the build.gradle file:
node {
    name "O=myOrganisation,L=Lisbon,C=PT"
    p2pPort 10024
    rpcSettings {
        address("localhost:10025")
        adminAddress("localhost:10026")
    }
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "localhost:3306"
                    username = "*******"
                    password = "*******"
                    driverClassName = "com.mysql.jdbc.Driver"
                }
            }
        }
    }
    cordapps = [
        "$project.group:cordapp:$project.version"
    ]
}
You are confusing two syntaxes:
The syntax for configuring a node block inside a Cordform task such as deployNodes
The syntax for configuring a node directly via node.conf
The security settings are for inside node.conf. You have to create the node first, then modify the node's node.conf with these settings once it has been created, for example:
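Moved into node.conf (which uses HOCON syntax), the same settings from your build.gradle would look roughly like this; the values are copied from the question, and note that jdbcUrl usually needs to be a full JDBC URL:

security = {
    authService = {
        dataSource = {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection = {
                jdbcUrl = "jdbc:mysql://localhost:3306/rpc_security" // assumed URL and schema name, adjust to your database
                username = "*******"
                password = "*******"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
    }
}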
Corda 4 will introduce an extraConfig option for use inside Cordform node blocks, as described here.

Terraform: how to support different providers

I have a set of Terraform files in a directory called myproject:
\myproject\ec2.tf
\myproject\provider.tf
\myproject\s3.tf
....
the provider.tf shows:
provider "aws" {
region = "us-west-1"
profile = "default"
}
So, if I run terraform apply in the myproject folder, a set of AWS resources is launched in us-west-1 under my account.
Now I want to introduce an AWS Glue resource, which is only available in a different region, us-west-2. How do I lay out the glue.tf file?
Currently I store it in a sub-directory under myproject and run terraform apply in that sub-directory, i.e.:
\myproject\glue\glue.tf
\myproject\glue\another_provider.tf
another_provider.tf is:
provider "aws" {
region = "us-west-2"
profile = "default"
}
Is this the only way to lay out resources launched in different regions? Is there a better way?
If there is no better way, then I need another backend file in the glue sub-folder as well; besides, some common variables in the myproject directory cannot be shared.
Update: I followed the link posted by Phuong Nguyen:
provider "aws" {
region = "us-west-1"
profile = "default"
}
provider "aws" {
alias = "oregon"
region = "us-west-2"
profile = "default"
}
resource "aws_glue_connection" "example" {
provider = "aws.oregon"
....
}
But I saw:
Error: aws_glue_connection.example: Provider doesn't support resource: aws_glue_connection
You can use provider aliases to define multiple providers, e.g.:
# this is default provider
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

# additional provider
provider "aws" {
  alias   = "west-2"
  region  = "us-west-2"
  profile = "default"
}
and then, in your glue.tf, you can refer to the aliased provider as:
resource "aws_glue_job" "example" {
provider = "aws.west-2"
# ...
}
More details at Multiple Provider Instances section: https://www.terraform.io/docs/configuration/providers.html
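Regarding the "Provider doesn't support resource: aws_glue_connection" error from the update above: that message generally means the installed AWS provider release predates that resource type, so pinning a newer provider version should make it available. A minimal sketch using the Terraform 0.13+ syntax; the version constraint is just an assumption, any reasonably recent release includes the Glue resources:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0" # assumed constraint; run `terraform init -upgrade` after changing it
    }
  }
}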
Read my comment ...
Which basically means that you should keep AWS profiles, regions and so on out of your Terraform code as much as possible and treat them as configuration, as follows:
terraform {
  required_version = "1.0.1"

  required_providers {
    aws = {
      version = ">= 3.56.0"
      source  = "hashicorp/aws"
    }
  }

  backend "s3" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
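For the provider block above to work, region and profile have to be declared as input variables somewhere in the root module; a minimal sketch (the descriptions are assumptions):

variable "region" {
  description = "AWS region to deploy into"
  type        = string
}

variable "profile" {
  description = "Named AWS CLI/SDK profile to use"
  type        = string
}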
Then use tfvars configuration files:
cat cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
profile = "prd-spe-rcr-web"
region = "eu-north-1"
bucket = "prd-bucket-spe"
foobar = "baz"
which you pass to the terraform plan and apply calls as follows:
terraform -chdir=$tf_code_path plan -var-file=<<line-one-^^^>>.tfvars
terraform -chdir=$tf_code_path apply -var-file=<<like-the-one-^^^>>.tfvars -auto-approve
As a rule of thumb you SHOULD always separate your code and configuration; the more mixed they are, the deeper you will get into trouble. This applies to any programming language or project. Now, some wise heads will argue that Terraform code is in itself configuration, but no, it is not: the Terraform code in your project is the declarative source code used to provision the infrastructure that your application source code runs on.

AppSync BatchResolver AssumeRole Error

I'm trying to use the new DynamoDB BatchResolvers to write to two DynamoDB tables in an AppSync resolver (currently I am using a Lambda function to do this). However, I'm getting the following permission error in the CloudWatch logs:
“User: arn:aws:sts::111111111111:assumed-role/appsync-datasource-ddb-xxxxxx-TABLE-ONE/APPSYNC_ASSUME_ROLE is not authorized to perform: dynamodb:BatchWriteItem on resource: arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException;
I’m using TABLE-ONE as my data source in my resolver.
I added "dynamodb:BatchWriteItem" and "dynamodb:BatchGetItem" to TABLE-ONE's permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE/*",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO/*"
      ]
    }
  ]
}
I have another resolver that uses the BatchGetItem operation and was getting null values in my response; changing the table's policy access level fixed the null values.
However, checking the box for BatchWriteItem doesn't seem to solve the issue, nor does adding the permissions to the data source table's policy.
I also tested the resolver with the test feature in AppSync, and the evaluated request and response work as intended.
Where else could I set the permissions for a BatchWriteItem operation between two tables? It seems like it's invoking the user's assumed role instead of the table's role - can I 'force' it to use the table's role?
It is using the role that you have configured for the table's data source in the AppSync console. Note that that particular role should have AppSync as a trusted entity.
Alternatively, if you use the new-role tick box when creating the data source in the console, it will take care of this for you.
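For reference, a minimal Terraform sketch of what "AppSync as a trusted entity" means for the role; the role name is made up, and the answer below attaches its policy to a role named iam_appsync_role like this one:

resource "aws_iam_role" "iam_appsync_role" {
  name = "appsync-dynamodb-role" # hypothetical name

  # Trust policy: let the AppSync service assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "appsync.amazonaws.com" }
    }]
  })
}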
variable "dynamodb_table" {
description = "Name of DynamoDB table"
type = any
}
# value of var
dynamodb_table = {
"dyn-notification-inbox" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable
}
"dyn-notification-count" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable2
}
}
locals {
roles_arns = {
dynamodb = var.dynamodb_table
kms = var.kms_keys
}
}
data "aws_iam_policy_document" "invoke_dynamodb_document" {
statement {
actions = [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:UpdateItem",
"dynamodb:Query"
]
# dynamic dynamodb table
# for dynamodb table v.table.arn and v.table.arn/*
resources = flatten([
for k, v in local.roles_arns.dynamodb : [
v.table.arn,
"${v.table.arn}/*"
]
])
}
}
# make policy
resource "aws_iam_policy" "iam_invoke_dynamodb" {
name = "policy-${var.name}"
policy = data.aws_iam_policy_document.invoke_dynamodb_document.json
}
# attach role
resource "aws_iam_role_policy_attachment" "invoke_dynamodb" {
role = aws_iam_role.iam_appsync_role.name
policy_arn = aws_iam_policy.iam_invoke_dynamodb.arn
}
Result:
resources: [
  'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table',
  'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table/*'
]
