Default Tags using ECS Service - terraform-provider-aws

Most of our ECS services are not in the "new" (long ARN) format that allows tags to be set.
We recently added default tags to our AWS provider, e.g.:
provider "aws" {
region = local.workspace["aws_region"]
profile = local.workspace["aws_profile"]
default_tags {
tags = {
Environment = local.workspace["releaseStage"]
Owner = "terraform"
Project = "infrastructure"
Account = local.workspace["releaseStage"]
}
}
}
However, if we run terraform apply, it barks at ECS service resources because they don't support tagging:
Error: error updating ECS Service (arn:aws:ecs:us-east-1:xxxx:service/myservice) tags: error tagging resource (arn:aws:ecs:us-east-1:xxxx:service/myservice): InvalidParameterException: Long arn format must be used for tagging operations
If I override the tags in the resource e.g.:
resource "aws_ecs_service" "myservice" {
name = "myservice"
...
tags = {
Environment = ""
Owner = ""
Project = ""
Account = ""
}
}
It works, but I never get a clean terraform plan as it always needs to evaluate the merged tags.
Is there a way to exclude certain resources from the default_tags tagging?

You should be able to use a lifecycle block with ignore_changes on tags, which will exclude any external changes. I'm not sure whether this will work, but you can try.
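As a rough sketch (assuming the rest of the service definition stays as you already have it), that would look something like:
resource "aws_ecs_service" "myservice" {
  name = "myservice"
  ...

  lifecycle {
    # Ignore any difference between the provider's merged default_tags
    # and the tags actually present on the service.
    ignore_changes = [tags]
  }
}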

Related

Terraform script destroying previously created ec2 before creating a new one

I am new to Terraform, and this is my first script that I'm trying out:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_instance" "example" {
ami = "ami-2757f631"
instance_type = "t2.micro"
}
I have the script above stored on my Windows desktop in C:\TerraformScripts\First.tf.
When I ran it the first time, the script executed and created a new instance for me. I then ran it a second time after only changing the resource name from example to example2, assuming it would create a new instance with the same configuration since I had changed the name in the resource block. But instead it destroyed the instance I had created on the first run and then recreated it. Why is this happening without my specifying destroy?
Apologies if I have missed something in the documentation, but I couldn't see it when I looked.
Thanks.
Terraform is a declarative language, which means that the script you write tells Terraform the state you want to get to (then Terraform works out how to get there). It's effectively like saying "I want you to make sure I have an aws_instance", rather than "I want you to create an aws_instance".
If I'm understanding correctly, you are probably aiming to do this:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_instance" "example" {
ami = "ami-2757f631"
instance_type = "t2.micro"
}
resource "aws_instance" "example2" {
ami = "ami-2757f631"
instance_type = "t2.micro"
}
If you run terraform apply now, you will have two EC2 instances regardless of how many were created by the script previously. That's because, under the hood, Terraform tracks the resources it has previously created for that script in a state file, compares them to the current script, and then works out what actions to take to make them line up.
Alternatively, you could use the count parameter to get multiple copies of the same resource:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_instance" "example" {
count = 2
ami = "ami-2757f631"
instance_type = "t2.micro"
}
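If you go the count route, each copy is addressed by its index; as a small usage sketch (the output name here is just for illustration), you could reference the created instances like this:
output "instance_ids" {
  value = aws_instance.example[*].id
}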

Corda: Trying to put the RPC Permissions on an external database

I'm trying to put the RPC Permissions, along with the users and their password on an external database. I've followed the documentation for Corda v. 3.3 (https://docs.corda.net/clientrpc.html#rpc-security-management).
It says that I need to create a "security" field for the node in question and fill out all the necessary information. I've done it, but as soon as I try to deploy the Node, it gives me this error:
"Could not set unknown property 'security' for object of type net.corda.plugins.Node."
The node's information looks like this in the build.gradle document:
node {
    name "O=myOrganisation,L=Lisbon,C=PT"
    p2pPort 10024
    rpcSettings {
        address("localhost:10025")
        adminAddress("localhost:10026")
    }
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "localhost:3306"
                    username = "*******"
                    password = "*******"
                    driverClassName = "com.mysql.jdbc.Driver"
                }
            }
        }
    }
    cordapps = [
        "$project.group:cordapp:$project.version"
    ]
}
You are confusing two syntaxes:
- The syntax for configuring a node block inside a Cordform task such as deployNodes
- The syntax for configuring a node directly via node.conf
The security settings are for inside node.conf. You have to create the node first, then modify the node's node.conf with these settings once it has been created.
Corda 4 will introduce an extraConfig option for use inside Cordform node blocks, as described here.
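For illustration only, the same settings moved into the node's node.conf would look roughly like this (a sketch based on the structure in your question and the linked docs, not a verified configuration):
security {
    authService {
        dataSource {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection {
                jdbcUrl = "localhost:3306"
                username = "*******"
                password = "*******"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
    }
}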

Terraform: how to support different providers

I have a set of terraform codes in a directory called myproject:
\myproject\ec2.tf
\myproject\provider.tf
\myproject\s3.tf
....
the provider.tf shows:
provider "aws" {
region = "us-west-1"
profile = "default"
}
So, if I run terraform apply in the myproject folder, a set of AWS resources is launched in us-west-1 under my account.
Now I want to introduce an AWS Glue resource, which is only available in a different region, us-west-2. How do I lay out the glue.tf file?
Currently I store it in a sub-directory under myproject and run terraform apply in that sub-directory, i.e.:
\myproject\glue\glue.tf
\myproject\glue\another_provider.tf
another_provider.tf is:
provider "aws" {
region = "us-west-2"
profile = "default"
}
Is this the only way to store a file launching resources in different regions? Is there a better way?
If there is no better way, then I need another backend file in the glue sub-folder as well; besides, some common variables in the myproject directory cannot be shared.
--------- Update:
I followed the link posted by Phuong Nguyen:
provider "aws" {
region = "us-west-1"
profile = "default"
}
provider "aws" {
alias = "oregon"
region = "us-west-2"
profile = "default"
}
resource "aws_glue_connection" "example" {
provider = "aws.oregon"
....
}
But I saw:
Error: aws_glue_connection.example: Provider doesn't support resource: aws_glue_connection
You can use a provider alias to define multiple providers, e.g.:
# this is default provider
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

# additional provider
provider "aws" {
  alias   = "west-2"
  region  = "us-west-2"
  profile = "default"
}
and then in your glue.tf, you can refer to the aliased provider as:
resource "aws_glue_job" "example" {
provider = "aws.west-2"
# ...
}
More details at Multiple Provider Instances section: https://www.terraform.io/docs/configuration/providers.html
Read my comment, which basically means that you should keep AWS profiles, regions, and the like out of your Terraform code as much as possible and use them as configuration, as follows:
terraform {
  required_version = "1.0.1"

  required_providers {
    aws = {
      version = ">= 3.56.0"
      source  = "hashicorp/aws"
    }
  }

  backend "s3" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
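The provider block above reads var.region and var.profile, so those variables also have to be declared somewhere in the code; a minimal sketch (the file name and descriptions are just assumptions):
# variables.tf
variable "region" {
  type        = string
  description = "AWS region to deploy into"
}

variable "profile" {
  type        = string
  description = "Named AWS CLI profile to use"
}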
Then use tfvars configuration files:
cat cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
profile = "prd-spe-rcr-web"
region = "eu-north-1"
bucket = "prd-bucket-spe"
foobar = "baz"
which you will apply during the terraform plan and apply calls as follows:
terraform -chdir=$tf_code_path plan -var-file=<<like-the-one-^^^>>.tfvars
terraform -chdir=$tf_code_path apply -var-file=<<like-the-one-^^^>>.tfvars -auto-approve
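Since the backend "s3" block above is left empty, its settings are supplied in a similar way at init time; a rough sketch (assuming the file passed via -backend-config holds only backend attributes, and noting that the s3 backend also expects a key):
terraform -chdir=$tf_code_path init -backend-config=<<like-the-one-^^^>>.tfvars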
As a rule of thumb, you SHOULD always separate your code and configuration; the more mixed they are, the deeper you will get into trouble. This applies to ANY programming language or project. Now, some wise heads will argue that Terraform code is in itself configuration, but no, it is not. The Terraform code is the declarative source code that is used to provision the infrastructure your application source code runs on.

Can't create cloudsql role for Service Account via api

I have been trying to use the api to create service accounts in GCP.
To create a service account I send the following post request:
import requests

base_url = f"https://iam.googleapis.com/v1/projects/{project}/serviceAccounts"
auth = f"?access_token={access_token}"
data = {"accountId": name}

# Create a service account
r = requests.post(base_url + auth, json=data)
This returns a 200 and creates a service account.
Then, this is the code that I use to create the specific roles:
sa = f"{name}#dotmudus-service.iam.gserviceaccount.com"
sa_url = base_url + f'/{sa}:setIamPolicy' + auth
data = {"policy":
{"bindings": [
{
"role": roles,
"members":
[
f"serviceAccount:{sa}"
]
}
]}
}
If roles is set to one of roles/viewer, roles/editor or roles/owner this approach does work.
However, if I want to use specifically roles/cloudsql.viewer, the API tells me that this option is not supported.
Here are the roles.
https://cloud.google.com/iam/docs/understanding-roles
I don't want to give this service account full viewer rights to my project, it's against the principle of least privilege.
How can I set specific roles from the api?
EDIT:
Here is the request and response using the Resource Manager API, with roles/cloudsql.admin as the role:
POST https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy?key={YOUR_API_KEY}
{
  "policy": {
    "bindings": [
      {
        "members": [
          "serviceAccount:sa@{project}.iam.gserviceaccount.com"
        ],
        "role": "roles/cloudsql.viewer"
      }
    ]
  }
}
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.cloudresourcemanager.projects.v1beta1.ProjectIamPolicyError",
        "type": "SOLO_REQUIRE_TOS_ACCEPTOR",
        "role": "roles/owner"
      }
    ]
  }
}
With the code provided, it appears that you are appending to the first base_url, which is not the correct context for modifying project roles.
This will try to send the appended path to: https://iam.googleapis.com/v1/projects/{project}/serviceAccounts
The POST path for adding roles needs to be: https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy
If you remove /serviceAccounts from the base_url, it should work.
Edited response to add more information due to your edit
OK, I see the issue here, sorry but I had to set up a new project to test this.
cloudresourcemanager.projects.setIamPolicy needs to replace the entire policy. It appears that you can add constraints to what you change, but you have to submit a complete policy in JSON for the project.
Note that gcloud has a --log-http option that will help you dig through some of these issues. If you run
gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NAME --role roles/cloudsql.viewer --log-http
It will show you how it pulls the existing policy, appends the new role, and adds it.
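As a rough sketch of that same read-modify-write cycle against the Resource Manager API (the project, token, and service account values below are placeholders, not taken from your setup):
import requests

project = "my-project"        # placeholder project ID
access_token = "ya29...."     # placeholder OAuth2 access token
sa = f"my-sa@{project}.iam.gserviceaccount.com"  # placeholder service account
crm = f"https://cloudresourcemanager.googleapis.com/v1/projects/{project}"
auth = f"?access_token={access_token}"

# 1. Pull the existing policy for the whole project.
policy = requests.post(crm + ":getIamPolicy" + auth, json={}).json()

# 2. Append the new binding to the existing ones.
policy.setdefault("bindings", []).append({
    "role": "roles/cloudsql.viewer",
    "members": [f"serviceAccount:{sa}"],
})

# 3. Submit the complete policy back (setIamPolicy replaces the project policy).
r = requests.post(crm + ":setIamPolicy" + auth, json={"policy": policy})
print(r.status_code, r.json())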
I would recommend using the example code provided here to make these changes if you don't want to use gcloud or the console to add the role to the user as this could impact the entire project.
Hopefully they improve the API for this need.

How to change ip to current url from subject in reset password email meteor

I have used the following code to set my reset email subject:
Accounts.emailTemplates.resetPassword.subject = function(user, url) {
  var ul = Meteor.absoluteUrl();
  var myArray = ul.split("//");
  var array = myArray[1].split('/');
  return "How to reset your password on " + array[0];
};
I want it to contain the current browser's URL, but it's not happening.
This is what the subject looks like:
How to reset your password on 139.59.9.214
but the desired outcome is:
How to reset your password on someName.com
where someName.com is my URL.
I would recommend handling this a bit differently. Your host name is tied to your environment, and depending on what your production environment looks like, deriving your hostname from the server might not always be the easiest thing to do (especially if you're behind proxies, load balancers, etc.). You could instead look into leveraging Meteor's Meteor.settings functionality, and create a settings file for each environment with a matching hostname setting. For example:
1) Create a settings_local.json file with the following contents:
{
  "private": {
    "hostname": "localhost:3000"
  }
}
2) Create a settings.json file with the following contents:
{
  "private": {
    "hostname": "somename.com"
  }
}
3) Adjust your code to look like:
Accounts.emailTemplates.resetPassword.subject = function (user, url) {
  const hostname = Meteor.settings.private.hostname;
  return `How to reset your password on ${hostname}`;
};
4) When working locally, start meteor like:
meteor --settings=settings_local.json
5) When deploying to production, make sure the contents of your settings.json file are taken into consideration. How you do this depends on how you're deploying to your prod environment. If using mup for example, it will automatically look for a settings.json to use in production. MDG's Galaxy will do the same.
