I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance. It could be anything, but here I have tried with a file provisioner and remote execution.
Indeed, when I add these arguments to my compute instance, I notice that the instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
Indeed, after adding the file provisioner to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get an info message notifying me that:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply these changes to my compute instance?
Following Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
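For example, scoping the destroy to just this resource (the resource address below is taken from the question):
# Destroy only the instance, then recreate it; provisioners run at creation time
terraform destroy -target=openstack_compute_instance_v2.compute_instance
terraform apply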
There's no way that Terraform can check the state of a local or a remote execution; it's not as if there's an API call that can tell you what happened in your custom code, bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the config of the instance, not really for the content on the instance. Immutable content and recreating is the true path here: making a completely new instance. However, if it's your Hammer, there are ways.
If you taint the resource that you want to update, then the next time Terraform runs, the resource will be destroyed and recreated, and the provisioners re-executed. But heed what I said about Hammers.
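A minimal taint cycle for the same resource address (again taken from the question) would be:
# Mark the instance as tainted so the next apply destroys and recreates it,
# re-running the file and remote-exec provisioners
terraform taint openstack_compute_instance_v2.compute_instance
terraform apply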
Alternatively, you could leverage your CM tool of choice to manage the content of your instance (Chef/Ansible), or create the images used by OpenStack (i.e. immutable images) with a tool like Packer and update those. I'd do the latter.
Any idea how I can use Terraform to create an NFS file share?
I need to create the S3 bucket,
then
create an NFS file share on an existing storage gateway, using the bucket I created in step 1.
Any idea how to do this in Terraform?
You will need three Terraform resources:
aws_s3_bucket
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-nfs-bucket"
acl = "private"
tags = {
Author = "me"
Environment = "dev"
}
}
aws_storagegateway_gateway
resource "aws_storagegateway_gateway" "my-storagegateway" {
gateway_ip_address = "1.2.3.4"
gateway_name = "storage-gateway"
gateway_timezone = "GMT"
gateway_type = "FILE_S3"
}
aws_storagegateway_nfs_file_share
resource "aws_storagegateway_nfs_file_share" "my_bucket" {
client_list = ["0.0.0.0/0"]
gateway_arn = aws_storagegateway_gateway.my-storagegateway.arn
location_arn = aws_s3_bucket.my_bucket.arn
role_arn = aws_iam_role.my-role.arn
}
In the role_arn key you will also need the ARN of the AWS Identity and Access Management (IAM) role that the file gateway assumes when it accesses the underlying storage; a minimal sketch follows below.
aws_iam_role
Managing your file gateway
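A sketch of that role, assuming the local name my-role referenced above; the role name is hypothetical, the trust policy lets the Storage Gateway service assume the role, and the permissions policy granting access to the S3 bucket is omitted:
resource "aws_iam_role" "my-role" {
  # Hypothetical name; attach a separate policy for the S3 bucket permissions
  name = "storage-gateway-nfs-role"

  # Trust policy allowing the Storage Gateway service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = "sts:AssumeRole"
        Principal = {
          Service = "storagegateway.amazonaws.com"
        }
      }
    ]
  })
}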
I created the following resource to encrypt all disks of a VM (volume type 'All'), and it has worked fine so far:
resource "azurerm_virtual_machine_extension" "vm_encry_win" {
count = "${var.vm_encry_os_type == "Windows" ? 1 : 0}"
name = "${var.vm_encry_name}"
location = "${var.vm_encry_location}"
resource_group_name = "${var.vm_encry_rg_name}"
virtual_machine_name = "${var.vm_encry_vm_name}"
publisher = "${var.vm_encry_publisher}"
type = "${var.vm_encry_type}"
type_handler_version = "${var.vm_encry_type_handler_version == "" ? "2.2" : var.vm_encry_type_handler_version}"
auto_upgrade_minor_version = "${var.vm_encry_auto_upgrade_minor_version}"
tags = "${var.vm_encry_tags}"
settings = <<SETTINGS
{
"EncryptionOperation": "${var.vm_encry_operation}",
"KeyVaultURL": "${var.vm_encry_kv_vault_uri}",
"KeyVaultResourceId": "${var.vm_encry_kv_vault_id}",
"KeyEncryptionKeyURL": "${var.vm_encry_kv_key_url}",
"KekVaultResourceId": "${var.vm_encry_kv_vault_id}",
"KeyEncryptionAlgorithm": "${var.vm_encry_key_algorithm}",
"VolumeType": "${var.vm_encry_volume_type}"
}
SETTINGS
}
When I ran it the first time, ADE encryption was applied to both the OS and data disks.
However, when I re-run terraform plan or terraform apply, it wants to replace all the data disks I have already created, as the following screenshot illustrates.
I do not know how to solve this, and my already-created disks should not be replaced.
I have checked along the lines of ignore_changes:
lifecycle {
  ignore_changes = [encryption_settings]
}
I am not sure where to add this, or whether it actually solves the problem.
Which resource block should I add it to?
Or is there another way?
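As a sketch only (the disk resources are not shown in the question): a lifecycle block goes inside the resource whose attribute changes Terraform should ignore, so if the data disks are declared with azurerm_managed_disk it would look roughly like this:
resource "azurerm_managed_disk" "data_disk" {
  # ... existing disk arguments ...

  # Assumption: ignore the encryption settings that ADE writes back to the
  # disk, so later plans do not try to replace it.
  lifecycle {
    ignore_changes = [encryption_settings]
  }
}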
I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).
All of the resources related to the project are in a dedicated network.
network module:
resource "google_compute_network" "my_project" {
name = "my_project"
auto_create_subnetworks = true
}
output "my_project_network_self_link" {
value = google_compute_network.my_project_network.self_link
}
I use the network in the GKE cluster (network = "${var.network_link}"):
resource "google_container_cluster" "my_project" {
name = "my_project-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b"]
network = "${var.network_link}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
Node pools omitted.
and I set the network as authorized_network in the memorystore configuration:
resource "google_redis_instance" "cache" {
name = "my_project-redis"
tier = "STANDARD_HA"
memory_size_gb = 1
authorized_network = "${var.network_link}"
# location_id = "us-central1-a"
redis_version = "REDIS_4_0"
display_name = "my_project Redis cache"
}
variable "network_link" {
description = "The link of the network instance is in"
type = string
}
I guess the problem is related to the network, because this was working fine previously when using the default network.
Currently the GKE nodes are in us-central1-a and us-central1-b (specified in the Terraform script) and the Memorystore instance is in us-central1-c. So the cluster and Redis are in the same VPC but in different sub-networks. Could this be the problem?
I had to add the following section to the cluster module in terraform:
ip_allocation_policy {
  cluster_ipv4_cidr_block  = ""
  services_ipv4_cidr_block = ""
}
This seems to enable the VPC-native (alias IP) property of the cluster.
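In context, a minimal sketch of how the block sits inside the cluster resource from the question (empty CIDR blocks let GKE pick the secondary ranges automatically):
resource "google_container_cluster" "my_project" {
  name     = "my_project-cluster"
  location = "us-central1"
  network  = "${var.network_link}"

  # Per the note above, this enables VPC-native (alias IP) networking,
  # which is what allowed the cluster to reach the Memorystore instance.
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }

  remove_default_node_pool = true
  initial_node_count       = 1
}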
Corda open source on Linux. Node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps of this paragraph: https://docs.corda.net/clientrpc.html#wire-security, which I detailed for you below.
When you enable RPC SSL, you must run this command one time (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl=true
    ssl {
        keyStorePath=${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword=password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
Above we added two extra properties that you must now supply when starting the webserver; for that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
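Assuming the Gradle module is named clients and the Gradle wrapper is used, the server can then be started with:
./gradlew :clients:runNodeServer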
If you're planning to connect to the node with a standalone shell, you must do something similar, but it didn't work for me; I reported the following bug: https://github.com/corda/corda/issues/5955
I'm unable to get the auto-accept peering done through the workaround mentioned in the link (Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?) via the provisioner option.
See my Terraform code below. Can someone help me out?
provider "aws" {
region = "us-east-1"
profile = "default"
}
provider "aws" {
region = "us-east-1"
profile = "peer"
alias = "peer"
}
data "aws_caller_identity" "peer" {
provider = "aws.peer"
}
resource "aws_vpc_peering_connection" "service-peer" {
vpc_id = "vpc-123a56789bc"
peer_vpc_id = "vpc-YYYYYY"
peer_owner_id = "012345678901"
peer_region = "us-east-1"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
Output I'm getting:
Error: Error applying plan:
1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: 1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: Unable to modify peering options. The VPC Peering Connection "pcx-08ebd316c82acacd9" is not active. Please set `auto_accept` attribute to `true`, or activate VPC Peering Connection manually.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure
Whereas I'm able to run the AWS CLI command successfully via the Linux shell, outside of the Terraform template. Let me know if I'm missing something in the Terraform script.
Try moving your "local-exec" out into a null_resource and adding a depends_on link to your VPC peering connection:
resource "null_resource" "peering-provision" {
depends_on = ["aws_vpc_peering_connection.service-peer"]
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
As Koe said, it may be better to use the auto_accept option.
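As a sketch of that suggestion (not part of the original answers): because the accepter VPC is in another account, the usual Terraform-native way to auto-accept is the aws_vpc_peering_connection_accepter resource running under the peer provider alias already defined in the question, instead of the local-exec call:
# Accept the peering request from the peer account's side
resource "aws_vpc_peering_connection_accepter" "service_peer_accepter" {
  provider                  = "aws.peer"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.service-peer.id}"
  auto_accept               = true
}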