Terraform: how to run the provisioner on existing resources? - OpenStack

My question is similar to this GitHub issue:
https://github.com/hashicorp/terraform/issues/745
It is also related to another Stack Exchange post of mine:
Terraform stalls while trying to get IP addresses of multiple instances?
I am trying to bootstrap several servers, and there are several commands I need to run on my instances that require the IP addresses of all the other instances. However, I cannot access the variables that hold the IP addresses of my newly created instances until those instances exist. So when I try to run a provisioner "remote-exec" block like this:
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y curl",
"echo ${openstack_compute_instance_v2.consul.0.network.0.fixed_ip_v4}",
"echo ${openstack_compute_instance_v2.consul.1.network.1.fixed_ip_v4}",
"echo ${openstack_compute_instance_v2.consul.2.network.2.fixed_ip_v4}"
]
}
Nothing happens, because each instance is waiting for all the other instances to finish being created, so nothing gets created in the first place. I need a way for my resources to be created first, and then to run the commands in my provisioner "remote-exec" block once Terraform can access the IP addresses of all my instances.

The solution is to create a resource "null_resource" "nameYouWant" { } and then run your commands inside that. They will run after the initial resources are created:
resource "aws_instance" "consul" {
count = 3
ami = "ami-ce5a9fa3"
instance_type = "t2.micro"
key_name = "ansible_aws"
tags {
Name = "consul"
}
}
resource "null_resource" "configure-consul-ips" {
count = 3
connection {
user = "ubuntu"
private_key="${file("/home/ubuntu/.ssh/id_rsa")}"
agent = true
timeout = "3m"
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y curl",
"sudo echo '${join("\n", aws_instance.consul.*.private_ip)}' > /home/ubuntu/test.txt"
]
}
}
Also see the answer here:
Terraform stalls while trying to get IP addresses of multiple instances?
Thank you so much @ydaetskcor for the answer.
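Since the original question is about OpenStack, the same null_resource pattern translates almost directly. A minimal sketch, assuming the openstack_compute_instance_v2.consul instances from the question; the SSH user, key path, output file name, and the use of access_ip_v4 (rather than network.0.fixed_ip_v4) are illustrative assumptions:
resource "null_resource" "configure-consul-ips" {
  count = 3
  connection {
    user        = "ubuntu"
    private_key = "${file("/home/ubuntu/.ssh/id_rsa")}"
    # assumed: connect to the OpenStack instance matching this null_resource's index
    host        = "${element(openstack_compute_instance_v2.consul.*.access_ip_v4, count.index)}"
    timeout     = "3m"
  }
  provisioner "remote-exec" {
    # write the IPs of all three instances to a file on each node
    inline = [
      "echo '${join("\n", openstack_compute_instance_v2.consul.*.access_ip_v4)}' > /home/ubuntu/consul_ips.txt"
    ]
  }
}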

In addition to @alex-cohen's answer, here is another tip, from https://github.com/hashicorp/terraform/issues/8266#issuecomment-454377049.
If you want to trigger a local-exec call regardless of resource creation, use triggers:
resource "null_resource" "deployment" {
provisioner "local-exec" {
command = "echo ${PATH} > output.log"
}
triggers = {
always_run = timestamp()
}
}
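As a variant, the triggers map can also reference resource attributes, so the provisioner re-runs only when those values change rather than on every apply. A rough sketch reusing the aws_instance.consul instances from the answer above; the trigger key name and the local-exec command are placeholders:
resource "null_resource" "reconfigure_on_ip_change" {
  triggers = {
    # re-run the provisioner whenever the set of private IPs changes
    consul_ips = "${join(",", aws_instance.consul.*.private_ip)}"
  }
  provisioner "local-exec" {
    command = "echo 'consul IPs changed' >> output.log"
  }
}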

Related

How to verify existence of resources and components in JFrog Artifactory

We are trying to automate repository, group, and permission creation in JFrog Artifactory as part of an Azure DevOps pipeline. I decided to use the JFrog CLI and REST API calls as Azure DevOps tasks, but I am facing difficulties because most of the API operations and CLI commands are not idempotent.
Scenario: for application app-A1,
2 repos (local-A1 and virtual-A1),
2 groups (appA1-developers, appA1-contributors),
1 permission target (appA1-permission) that includes the two created repos and groups with their permissions.
Below is what I have tried so far.
For creating the groups:
jf rt group-create appA1-developers --url https://myrepo/artifactory --user jfuser --password jfpass
jf rt group-create appA1-contributors --url https://myrepo/artifactory --user jfuser --password jfpass
Create repos using the command below.
Repo creation and update template:
jfrog rt rc local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
Repo update command:
jfrog rt ru local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
For creating the permission target:
curl -u 'jfuser' -X PUT "https://myrepo/artifactory/api/v2/security/permissions/java-developers" -H "Content-type: application/json" -T permision.json
{
  "name": "appA1-developers",
  "repo": {
    "include-patterns": ["**"],
    "exclude-patterns": [""],
    "repositories": ["appA1-local"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "annotate"]
      }
    }
  },
  "build": {
    "include-patterns": ["testmaven/**"],
    "exclude-patterns": [""],
    "repositories": ["artifactory-build-info"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "write", "annotate", "delete"]
      }
    }
  }
}
But when I run all of the above tasks in Azure DevOps, I am not able to detect whether any of these resources already exist and branch on that.
I am looking for a way to first:
check if the specified group name already exists; if it does, skip group creation, otherwise create the groups;
check if the repos already exist;
if they do not exist, create them from the template;
if a repo exists and all its properties are the same, skip it;
if it exists but there are property changes, use the update command.
Similarly, the permission target should only be updated if there are changes; existing properties or settings should not be altered.

How to generate HTTPS Git credentials for AWS CodeCommit?

I use Terraform to create an IAM user.
How can I use Terraform to generate HTTPS Git credentials for AWS CodeCommit?
My code :
resource "aws_iam_user" "gitlab" {
name = "user-gitlab"
}
resource "aws_iam_policy_attachment" "gitlab" {
name = "iam-gitlab"
users = ["${aws_iam_user.gitlab.name}"]
policy_arn = "arn:aws:iam::aws:policy/AWSCodeCommitPowerUser"
}
Regards,
Use data.external to execute a CLI script:
#!/bin/bash
credentials=$(aws --profile dev iam list-service-specific-credentials \
  --user-name jenkins --service-name codecommit.amazonaws.com --query 'ServiceSpecificCredentials[0]')
if [[ $credentials == "null" ]]; then
  credentials=$(aws --profile dev iam create-service-specific-credential --user-name jenkins \
    --service-name codecommit.amazonaws.com --query ServiceSpecificCredential)
fi
echo "$credentials"
Then the Terraform:
data "external" "jenkins" {
  program = ["${path.root}/jenkins.sh"]
}
resource "aws_ssm_parameter" "jenkins_cc_id" {
  name  = "${local.jenkins}/codecommit_https_user"
  type  = "String" # type is required by aws_ssm_parameter
  value = "${lookup(data.external.jenkins.result, "ServiceUserName", "")}"
}
resource "aws_ssm_parameter" "jenkins_cc_p" {
  name  = "${local.jenkins}/codecommit_https_pass"
  type  = "SecureString" # type is required by aws_ssm_parameter
  value = "${lookup(data.external.jenkins.result, "ServicePassword", "")}"
}
Unfortunately, there appears to be no support for this API in Terraform. I recommend that you post a feature request in the AWS provider GitHub repo.
The feature has been implemented in 4.1.0 (https://github.com/hashicorp/terraform-provider-aws/blob/v4.1.0/CHANGELOG.md).
Take a look at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_specific_credential#service_specific_credential_id
resource "aws_iam_service_specific_credential" "example" {
service_name = "codecommit.amazonaws.com"
user_name = aws_iam_user.example.name
}
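If you go this route, you can store the generated credential in SSM parameters much like the data.external answer above. A minimal sketch wiring it to the aws_iam_user.gitlab user from the question; the SSM parameter names are illustrative, and service_user_name / service_password are the attribute names documented for this resource:
resource "aws_iam_service_specific_credential" "gitlab" {
  service_name = "codecommit.amazonaws.com"
  user_name    = aws_iam_user.gitlab.name
}
resource "aws_ssm_parameter" "codecommit_https_user" {
  name  = "/gitlab/codecommit_https_user" # illustrative parameter name
  type  = "String"
  value = aws_iam_service_specific_credential.gitlab.service_user_name
}
resource "aws_ssm_parameter" "codecommit_https_pass" {
  name  = "/gitlab/codecommit_https_pass" # illustrative parameter name
  type  = "SecureString"
  value = aws_iam_service_specific_credential.gitlab.service_password
}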

EMR cluster monitoring configuration: Ganglia + InfluxDb

I have an EMR cluster. It is set up by a Terraform script:
resource "aws_emr_cluster" "emr-test" {
name = "emr-test"
applications = [..., "Ganglia", ...]
...
}
I would like to integrate Ganglia with InfluxDB + Grafana. I found an example of the configuration: example.
That requires updating the gmetad.conf file on the master node. Is it possible to do that with the Terraform script? With an EMR step?
You can use the bootstrap_action attribute to list actions that should run before Hadoop is started on the cluster nodes. You can also apply filters to only run those actions on the master node:
resource "aws_emr_cluster" "emr-test" {
...
bootstrap_action {
path = "s3://your-bucket/update-gmetad.sh"
name = "update-gmetad-on-master-node"
args = ["instance.isMaster=true"]
}
}
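An alternative to checking the flag inside your own script is EMR's public run-if helper bootstrap action, which evaluates a condition from the instance metadata and only then executes the rest of its arguments. A hedged sketch based on the documented usage; the helper's S3 location can be region-specific, and the echo command is a placeholder for your actual gmetad.conf update:
resource "aws_emr_cluster" "emr-test" {
  ...
  bootstrap_action {
    name = "update-gmetad-on-master-node"
    path = "s3://elasticmapreduce/bootstrap-actions/run-if"
    # run-if executes the trailing command only when the condition matches,
    # here: only on the master node
    args = ["instance.isMaster=true", "echo running gmetad update on the master node"]
  }
}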

How to test an app created with Angular CLI ng serve from another device?

I have an app generated with Angular CLI from scratch. CLI version angular-cli: 1.0.0-beta.11-webpack.2
I am trying to test it from my smartphone but I get Connection refused.
So, I run ng serve on my laptop and try to access the app:
From laptop, using localhost: Works
From laptop, using IP: Connection refused
From smartphone, using IP: Connection refused
This used to work with the previous, SystemJS version of CLI. I checked that I don't have a firewall running.
How could I fix or debug this error?
I am using a Mac.
Adding the --host flag with the value 0.0.0.0 should allow you to access the web server from any device on your local network.
This should work:
ng serve --host 0.0.0.0
For an explanation:
https://github.com/angular/angular-cli/pull/1475#issuecomment-235986121
In package.json:
"start": "ng serve --host 0.0.0.0 --port 4200 --disable-host-check",
However, --disable-host-check is a security risk, and you will need
"@angular/cli": "^1.1.0-rc.2", as this flag appeared in version 1.1.
Following the advice on this page:
https://medium.com/webpack/webpack-dev-server-middleware-security-issues-1489d950874a, this worked for me:
ng serve --host 0.0.0.0 --public my-computer
Maybe this can be helpful (a slightly more automated version of @Captain Whippet's answer):
dev-server.js:
const os = require('os');
const { spawn } = require('child_process');

function getLocalIp(ipMatchArr) {
  const networkInterfaces = os.networkInterfaces();
  let matchingIps = Object.keys(networkInterfaces).reduce((arr, name) => {
    const matchingInterface = networkInterfaces[name].find(iface =>
      iface.family === 'IPv4' && ipMatchArr.find(match => iface.address.indexOf(match) > -1));
    if (matchingInterface) arr.push(matchingInterface.address);
    return arr;
  }, []);
  if (matchingIps.length) {
    return matchingIps[0];
  }
  else {
    throw(`Error. Unable to find ip to use as public host: ipMatches=['${ipMatchArr.join("', '")}']`);
  }
}

function launchDevServer(address) {
  const port = process.env.port || 4200;
  const publicHostname = address + ":" + port;
  console.log(`[[[ Access your NG LIVE DEV server on \x1b[33m ${publicHostname} \x1b[0m ]]]`);
  spawn(
    "ng serve"
    , [
      "--host 0.0.0.0"
      , `--public ${publicHostname}`
    ]
    , { stdio: 'inherit', shell: true }
  );
}

/* execute */
launchDevServer(getLocalIp(['192.168.1.', '192.168.0.']));
package.json:
"scripts": {
  "start": "node dev-server.js"
}
Then run "npm start".
You can then open your app on any device on your local network via the address printed in yellow.
@angular/cli: 1.3.2, node: 6.9.5.
Tested on Mac and Windows.
You have to find all the occurrences of localhost in the Angular CLI folder inside node_modules and replace them (one in particular, depending on your angular-cli version) with 0.0.0.0.
Then, in package.json, put ng serve --host 0.0.0.0.
In my case the file is commands/serve.js.

Terraform and docker networking

I have defined a Terraform recipe with the Docker provider like this:
provider "docker" {
host = "tcp://127.0.0.1:2375/"
}
# Create the network
resource "docker_network" "private_network" {
name = "${var.customer_name}_network"
}
resource "docker_container" "goagent" {
image = "${docker_image.goagent.latest}"
name = "${var.customer_name}_goagent"
command = [ "/bin/sh", "-c", "/usr/bin/supervisord" ]
network_mode = "bridge"
networks = [ "${docker_network.private_network.name}" ]
hostname = "${var.customer_name}_goagent"
}
resource "docker_image" "goagent" {
name = "local/goagent"
}
I would expect the container to be connected only to the network created on the fly (named after the variable customer_name).
But what I see is that the container also gets connected to the default bridge network (172.17.0.0/16), so it ends up connected to two networks.
Is there a way to configure the container in Terraform so that it is connected only to the network I specify in the networks list?
Apparently this is an unresolved bug as of 0.10.8
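For what it's worth, newer releases of the Docker provider (2.x and later) replace the plain networks list with networks_advanced blocks, which are the documented way to attach a container to specific networks. A hedged sketch against the resource names from the question; whether this avoids the extra attachment to the default bridge may depend on the provider version:
resource "docker_container" "goagent" {
  image    = "${docker_image.goagent.latest}"
  name     = "${var.customer_name}_goagent"
  command  = ["/bin/sh", "-c", "/usr/bin/supervisord"]
  hostname = "${var.customer_name}_goagent"
  # attach only to the dedicated network; network_mode = "bridge" is omitted
  networks_advanced {
    name = "${docker_network.private_network.name}"
  }
}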
