I use Terraform to create an IAM user.
How can I use Terraform to generate HTTPS Git credentials for AWS CodeCommit?
My code:
resource "aws_iam_user" "gitlab" {
name = "user-gitlab"
}
resource "aws_iam_policy_attachment" "gitlab" {
name = "iam-gitlab"
users = ["${aws_iam_user.gitlab.name}"]
policy_arn = "arn:aws:iam::aws:policy/AWSCodeCommitPowerUser"
}
Regards,
Use data.external to execute a CLI script:
#!/usr/bin/env bash
# Fetch the user's CodeCommit credential; create it if none exists, then print it.
credentials=$(aws --profile dev iam list-service-specific-credentials \
  --user-name jenkins --service-name codecommit.amazonaws.com \
  --query 'ServiceSpecificCredentials[0]')
if [[ $credentials == "null" ]]; then
  credentials=$(aws --profile dev iam create-service-specific-credential \
    --user-name jenkins --service-name codecommit.amazonaws.com \
    --query ServiceSpecificCredential)
fi
echo "$credentials"
Then the terraform:
data "external" "jenkins" {
program = ["${path.root}/jenkins.sh"]
}
resource "aws_ssm_parameter" "jenkins_cc_id" {
name = "${local.jenkins}/codecommit_https_user"
value = "${lookup(data.external.jenkins.result, "ServiceUserName", "")}"
}
resource "aws_ssm_parameter" "jenkins_cc_p" {
name = "${local.jenkins}/codecommit_https_pass"
value = "${lookup(data.external.jenkins.result, "ServicePassword", "")}"
}
Unfortunately, there appears to be no support for this API in Terraform. I recommend that you post a feature request in the AWS provider GitHub repo.
The feature has been implemented in AWS provider 4.1.0 (https://github.com/hashicorp/terraform-provider-aws/blob/v4.1.0/CHANGELOG.md).
Take a look at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_specific_credential#service_specific_credential_id
resource "aws_iam_service_specific_credential" "example" {
service_name = "codecommit.amazonaws.com"
user_name = aws_iam_user.example.name
}
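The resource also exports the generated credential pair, so it can be wired straight into SSM much like the script-based answer above. A minimal sketch, assuming the service_user_name and service_password attributes and a hypothetical /gitlab parameter prefix:

# Hedged sketch: persist the generated Git credentials in SSM Parameter Store
resource "aws_ssm_parameter" "codecommit_https_user" {
  name  = "/gitlab/codecommit_https_user" # hypothetical parameter name
  type  = "String"
  value = aws_iam_service_specific_credential.example.service_user_name
}

resource "aws_ssm_parameter" "codecommit_https_pass" {
  name  = "/gitlab/codecommit_https_pass" # hypothetical parameter name
  type  = "SecureString"
  value = aws_iam_service_specific_credential.example.service_password
}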
Related
We are trying to automate repo, group, and permission-target creation in JFrog Artifactory as part of an Azure DevOps pipeline, using the JFrog CLI and REST API calls as Azure DevOps tasks. But we are facing difficulties because most of the API operations and CLI commands are not idempotent.
Scenario: for application app-A1,
2 repos (local-A1 and virtual-A1),
2 groups (appA1-developers, appA1-contributors),
1 permission target (appA1-permission) that includes the 2 repos and 2 groups above with their permissions.
Below is what I have tried so far.
For creating the groups:
jf rt group-create appA1-developers --url https://myrepo/artifactory --user jfuser --password jfpass
jf rt group-create appA1-contributors --url https://myrepo/artifactory --user jfuser --password jfpass
Create repos using the commands below.
Repo creation and update template:
jfrog rt rc local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
Repo update command:
jfrog rt ru local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
For creating the permission target:
curl -u 'jfuser' -X PUT "https://myrepo/artifactory/api/v2/security/permissions/appA1-permission" -H "Content-Type: application/json" -T permission.json
{
  "name": "appA1-permission",
  "repo": {
    "include-patterns": ["**"],
    "exclude-patterns": [""],
    "repositories": ["local-A1"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "annotate"]
      }
    }
  },
  "build": {
    "include-patterns": ["testmaven/**"],
    "exclude-patterns": [""],
    "repositories": ["artifactory-build-info"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "write", "annotate", "delete"]
      }
    }
  }
}
But when I try all of the above tasks in Azure DevOps, I am not able to detect whether any of the resources above already exist and branch on that.
I am looking for a way to check first (see the sketch after this list):
If the specified group name already exists, skip group creation; otherwise create the groups.
Check if the repos already exist:
if a repo does not exist, create it from the template;
if it exists and all the properties are the same, skip it;
if it exists and there are property changes, use the update command.
Similarly, the permission target should only be updated if there are changes; existing properties or settings should not be altered.
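A hedged sketch of that check-then-create flow against the Artifactory REST API (the URL, user, and names are the ones from the question; a GET on a group or repo returns 404 when it does not exist):

#!/usr/bin/env bash
# Hedged sketch: create groups and repos only when they are missing
BASE="https://myrepo/artifactory"
AUTH="jfuser:jfpass"

exists() {
  # $1 is an API path such as api/security/groups/<name> or api/repositories/<key>;
  # Artifactory answers 200 when the object exists and 404 when it does not
  [[ $(curl -s -o /dev/null -w '%{http_code}' -u "$AUTH" "$BASE/$1") == "200" ]]
}

for group in appA1-developers appA1-contributors; do
  exists "api/security/groups/$group" || \
    jf rt group-create "$group" --url "$BASE" --user jfuser --password jfpass
done

if exists "api/repositories/local-A1"; then
  jfrog rt ru local-repo-template   # repo exists: apply the update template
else
  jfrog rt rc local-repo-template   # repo missing: create it from the template
fi

Detecting whether an existing repo's properties actually differ would still require diffing the GET response against the template, which neither the CLI nor the API does for you.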
I'm trying to connect to my Cognitive Services resource but I'm getting the following error:
(node:3246) UnhandledPromiseRejectionWarning: Error: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
I created the resource with kind CognitiveServices like this:
az cognitiveservices account create -n <name> -g <group> --kind CognitiveServices --sku S0 -l eastus --yes
Using kind CustomVision.Training didn't work either.
I have already looked at this answer, but it is not the same problem; I believe I am entering the correct credentials and endpoint.
I checked both the Azure Portal and the customvision.ai resource; I'm using the correct URL and key, but it is not working.
I even tried resetting the key, but that also had no effect.
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";
const { CognitiveServicesCredentials } = require("@azure/ms-rest-azure-js");

const cognitiveServiceCredentials = new CognitiveServicesCredentials("<MY_API_KEY>");
const client = new TrainingAPIClient(cognitiveServiceCredentials, "https://eastus.api.cognitive.microsoft.com");
const projects = client.getProjects();
I was also able to run it using the REST API, got HTTP 200.
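For reference, a hedged sketch of that REST call (assuming the v3.0 training path; the key travels in the Training-key header):

# Hedged sketch: list Custom Vision projects over REST
curl -s -H "Training-key: <MY_API_KEY>" \
  "https://eastus.api.cognitive.microsoft.com/customvision/v3.0/training/projects"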
You can clone this Microsoft Cognitive Services sample (UWP application) and check out the Computer Vision feature in the sample. You will have to set up App Settings in the app before you proceed.
You can follow the steps below to do that through Azure CLI commands from bash / Git Bash:
Create resource group
# Create resource group; replace the resource group name and location as required
az group create -n kiosk-cog-service-keys -l westus
Generate keys and echo the keys
Please note: jq needs to be installed to execute the commands below. If you do not want to use jq, you can just execute the az group deployment command and then search the outputs section of the returned JSON, where you will find the keys.
To get the keys with the default parameters, execute the following commands:
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json
If instead you want to modify the default parameters, you need to get the cognitive-keys-azure-deploy.json and cognitive-keys-azure-deploy.parameters.json files locally and execute the following commands:
# Change working directory to Kiosk
cd Kiosk
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys. You can modify the tiers associated with the generated keys by changing the parameter values.
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json
Sample output of the above commands:
{
  "bingAugosuggestKey1": {
    "type": "String",
    "value": "cb4******************************"
  },
  "bingSearchKey1": {
    "type": "String",
    "value": "88*********************************"
  },
  "compVisionEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/vision/v1.0"
  },
  "compVisionKey1": {
    "type": "String",
    "value": "fa5**************************************"
  },
  "faceEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/face/v1.0"
  },
  "faceKey1": {
    "type": "String",
    "value": "87f7****************************************"
  },
  "textAnalyticsEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
  },
  "textAnalyticsKey1": {
    "type": "String",
    "value": "ba3*************************************"
  }
}
Also note that you can follow similar steps to generate just the Computer Vision key and endpoint and use them in your application.
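Since the question is about Custom Vision specifically, a hedged alternative (the resource name below is illustrative) is to create a CustomVision.Training account directly and read its key and endpoint from the CLI:

# Hedged sketch: create a dedicated Custom Vision training resource
az cognitiveservices account create -n my-cv-training -g kiosk-cog-service-keys \
  --kind CustomVision.Training --sku S0 -l eastus --yes

# Read the key and endpoint to plug into the application
az cognitiveservices account keys list -n my-cv-training -g kiosk-cog-service-keys
az cognitiveservices account show -n my-cv-training -g kiosk-cog-service-keys \
  --query properties.endpoint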
The correct credentials object is this one:
import { ApiKeyCredentials } from "@azure/ms-rest-js";
Documentation updated, full discussion at #10362
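A minimal sketch of how it plugs into the code from the question (assuming the key is expected in the Training-key header, which is what the Custom Vision training endpoints read):

import { ApiKeyCredentials } from "@azure/ms-rest-js";
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";

// the training key is sent as the Training-key header on every request
const credentials = new ApiKeyCredentials({ inHeader: { "Training-key": "<MY_API_KEY>" } });
const client = new TrainingAPIClient(credentials, "https://eastus.api.cognitive.microsoft.com");

// getProjects() returns a promise; handling it avoids the
// UnhandledPromiseRejectionWarning from the question
client.getProjects()
  .then((projects) => console.log(projects.map((p) => p.name)))
  .catch((err) => console.error(err));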
I have a set of Terraform files in a directory called myproject:
\myproject\ec2.tf
\myproject\provider.tf
\myproject\s3.tf
....
provider.tf contains:
provider "aws" {
region = "us-west-1"
profile = "default"
}
So if I run terraform apply in the myproject folder, a set of AWS resources is launched in us-west-1 under my account.
Now I want to introduce an AWS Glue resource, which is only available in a different region, us-west-2. How do I lay out the glue.tf file?
Currently I store it in a subdirectory under myproject and run terraform apply in that subdirectory, i.e.
\myproject\glue\glue.tf
\myproject\glue\another_provider.tf
another_provider.tf is:
provider "aws" {
region = "us-west-2"
profile = "default"
}
Is this the only way to store a file that launches resources in a different region? Is there a better way?
If there is no better way, then I need another backend file in the glue subfolder as well; besides, some common variables in the myproject directory cannot be shared.
--------- update:
I followed the link posted by Phuong Nguyen:
provider "aws" {
region = "us-west-1"
profile = "default"
}
provider "aws" {
alias = "oregon"
region = "us-west-2"
profile = "default"
}
resource "aws_glue_connection" "example" {
provider = "aws.oregon"
....
}
But I saw:
Error: aws_glue_connection.example: Provider doesn't support resource: aws_glue_connection
You can use a provider alias to define multiple providers, e.g.:
# this is the default provider
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

# additional provider
provider "aws" {
  alias   = "west-2"
  region  = "us-west-2"
  profile = "default"
}
and then in your glue.tf, you can refer to the aliased provider:
resource "aws_glue_job" "example" {
provider = "aws.west-2"
# ...
}
More details in the Multiple Provider Instances section: https://www.terraform.io/docs/configuration/providers.html
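One hedged syntax note: on Terraform 0.12 and later the provider reference is an unquoted expression rather than a string, so the same resource would read:

resource "aws_glue_job" "example" {
  provider = aws.west-2 # unquoted provider reference on 0.12+
  # ...
}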
Read my comment ...
Which basically means that you should keep AWS profiles, regions, and the like out of your Terraform code as much as possible, and supply them as configuration instead:
terraform {
  required_version = "1.0.1"

  required_providers {
    aws = {
      version = ">= 3.56.0"
      source  = "hashicorp/aws"
    }
  }

  backend "s3" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
Then use tfvars configuration files:
cat cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
profile = "prd-spe-rcr-web"
region = "eu-north-1"
bucket = "prd-bucket-spe"
foobar = "baz"
which you pass in during the terraform plan and apply calls as follows:
terraform -chdir=$tf_code_path plan -var-file=<<like-the-one-^^^>>.tfvars
terraform -chdir=$tf_code_path apply -var-file=<<like-the-one-^^^>>.tfvars -auto-approve
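The empty backend "s3" {} block is filled the same way; a hedged sketch, assuming the file shown above doubles as the partial backend configuration at init time:

# supply the backend settings when initializing the working directory
terraform -chdir=$tf_code_path init \
  -backend-config=cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars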
As a rule of thumb, you SHOULD always separate your code and configuration; the more they are mixed, the deeper you will get into trouble. This applies to ANY programming language or project. Some wise heads will argue that Terraform code is itself configuration, but no, it is not: the Terraform code is the declarative source code used to provision the infrastructure that your application source code runs on.
I have an EMR cluster set up by a Terraform script:
resource "aws_emr_cluster" "emr-test" {
name = "emr-test"
applications = [..., "Ganglia", ...]
...
}
I would like to integrate Ganglia with InfluxDB + Grafana. I found an example of the configuration: example.
That requires updating the gmetad.conf file on the master node. Is it possible to do that with the Terraform script? An EMR step?
You can use the bootstrap_action attribute to list actions that should run before Hadoop starts on the cluster nodes. You can also filter so that an action only runs on the master node (see the run-if note after the example):
resource "aws_emr_cluster" "emr-test" {
...
bootstrap_action {
path = "s3://your-bucket/update-gmetad.sh"
name = "update-gmetad-on-master-node"
args = ["instance.isMaster=true"]
}
}
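A note of caution: a custom script receives instance.isMaster=true merely as an argument and has to interpret it itself. The stock run-if helper that Amazon publishes evaluates the condition for you; a hedged sketch (the echo command is illustrative):

bootstrap_action {
  path = "s3://elasticmapreduce/bootstrap-actions/run-if"
  name = "run-only-on-master"
  # first arg is the condition; the rest is the command executed when it holds
  args = ["instance.isMaster=true", "echo running on master node"]
}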
My question is similar to this GitHub issue:
https://github.com/hashicorp/terraform/issues/745
It is also related to another Stack Exchange post of mine:
Terraform stalls while trying to get IP addresses of multiple instances?
I am trying to bootstrap several servers, and there are several commands I need to run on my instances that require the IP addresses of all the other instances. However, I cannot access the variables that hold those IP addresses until the instances are created. So when I try to run a provisioner "remote-exec" block like this:
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y curl",
"echo ${openstack_compute_instance_v2.consul.0.network.0.fixed_ip_v4}",
"echo ${openstack_compute_instance_v2.consul.1.network.1.fixed_ip_v4}",
"echo ${openstack_compute_instance_v2.consul.2.network.2.fixed_ip_v4}"
]
}
Nothing happens, because each instance is waiting for all the other instances to finish being created, so nothing gets created in the first place. I need a way for my resources to be created first and my provisioner "remote-exec" commands to run afterwards, once Terraform can access the IP addresses of all my instances.
The solution is to create a resource "null_resource" "nameYouWant" { } and then run your commands inside that. They will run after the initial resources are created:
resource "aws_instance" "consul" {
count = 3
ami = "ami-ce5a9fa3"
instance_type = "t2.micro"
key_name = "ansible_aws"
tags {
Name = "consul"
}
}
resource "null_resource" "configure-consul-ips" {
count = 3
connection {
user = "ubuntu"
private_key="${file("/home/ubuntu/.ssh/id_rsa")}"
agent = true
timeout = "3m"
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y curl",
"sudo echo '${join("\n", aws_instance.consul.*.private_ip)}' > /home/ubuntu/test.txt"
]
}
}
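A hedged refinement on top of that answer: keying a triggers map on the instance IDs makes the provisioner re-run whenever the fleet is replaced, the same pattern the null_resource documentation uses:

resource "null_resource" "configure-consul-ips" {
  count = 3

  # re-provision when any instance in the cluster is replaced
  triggers = {
    cluster_instance_ids = "${join(",", aws_instance.consul.*.id)}"
  }

  # ... connection and provisioner blocks as above ...
}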
Also see the answer here:
Terraform stalls while trying to get IP addresses of multiple instances?
Thank you so much @ydaetskcor for the answer.
In addition to @alex-cohen's answer, another tip from https://github.com/hashicorp/terraform/issues/8266#issuecomment-454377049.
If you want a local-exec call to run on every apply, regardless of resource creation, use triggers:
resource "null_resource" "deployment" {
provisioner "local-exec" {
command = "echo ${PATH} > output.log"
}
triggers = {
always_run = timestamp()
}
}