I wanted to try out Terraform on our OpenStack environment. I set it up, and it seems to work when only the following is defined:
provider "openstack" {
user_name = "test"
tenant_name = "test"
password = "testpassword"
auth_url = "https://test:5000/v3/"
region = "test"
}
I can run terraform plan without any problem; it says:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
When I try to add a resource:
resource "openstack_compute_instance_v2" "test" {
name = "test_server"
image_id = "test_id123"
flavor_id = "3"
key_pair = "test"
security_groups = ["default"]
network {
name = "Default Network"
}
}
When I run terraform plan, I now get:
Error: Error running plan: 1 error(s) occurred:
provider.openstack: Authentication failed
The authentication is not actually working; something in your provider section is incorrect. Terraform does not verify the provider credentials when there is no resource using that provider.
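One way to confirm this without creating anything: add a data source that exercises the provider and run terraform plan again. A minimal sketch (the image name is a placeholder; any resource or data source from the provider will do):

data "openstack_images_image_v2" "probe" {
  # any lookup like this forces the provider to authenticate during plan
  name        = "Ubuntu 16.04"
  most_recent = true
}

With this in place, the plan fails with the same authentication error, showing that credentials are only verified once something actually uses the provider.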
I validated your findings and then took it a step further. I created two providers, one for AWS and one for OpenStack, using your example. I then added a resource to create an AWS VPC. My AWS credentials were correct. When I ran terraform plan, it returned the action plan for building the VPC; it did not check the fake OpenStack credentials.
One other thing: once a provider has at least one resource, Terraform always uses the credentials, even if there is nothing to do.
provider "aws" {
access_key = "<redacted>"
secret_key = "<redacted>"
region = "us-east-1"
}
provider "openstack" {
user_name = "test"
tenant_name = "test"
password = "testpassword"
auth_url = "https://test:5000/v3/"
region = "test"
}
/* Create VPC */
resource "aws_vpc" "default" {
  cidr_block           = "10.200.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags {
    Name = "testing"
  }
}
This produced the following output, verifying that the OpenStack provider wasn't checked:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  + aws_vpc.default
      id:                               <computed>
      arn:                              <computed>
      assign_generated_ipv6_cidr_block: "false"
      cidr_block:                       "10.200.0.0/16"
      default_network_acl_id:           <computed>
      default_route_table_id:           <computed>
      default_security_group_id:        <computed>
      dhcp_options_id:                  <computed>
      enable_classiclink:               <computed>
      enable_classiclink_dns_support:   <computed>
      enable_dns_hostnames:             "true"
      enable_dns_support:               "true"
      instance_tenancy:                 "default"
      ipv6_association_id:              <computed>
      ipv6_cidr_block:                  <computed>
      main_route_table_id:              <computed>
      tags.%:                           "1"
      tags.Name:                        "testing"

Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
I am using GitHub Actions to deploy via Bicep:
- name: Login
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}

- name: Deploy Bicep file
  uses: azure/arm-deploy@v1
  with:
    scope: subscription
    subscriptionId: ${{ secrets.AZURE_CREDENTIALS_subscriptionId }}
    region: ${{ env.DEPLOY_REGION }}
    template: ${{ env.BICEP_ENTRY_FILE }}
    parameters: parameters.${{ inputs.selectedEnvironment }}.json
I used contributor access for my AZURE_CREDENTIALS, based on the output of the following command:
az ad sp create-for-rbac -n infra-bicep --role contributor --scopes /subscriptions/my-subscription-guid --sdk-auth
I am using Azure Key Vault with RBAC. This Bicep worked fine until I tried to give an Azure Web App read access to the Key Vault, as follows:
var kvSecretsUser = '4633458b-17de-408a-b874-0445c86b69e6'
var kvSecretsUserRole = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', kvSecretsUser)

resource kx_webapp_roleAssignments 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: 'kv-webapp-roleAssignments'
  scope: kv
  properties: {
    principalId: webappPrincipleId
    principalType: 'ServicePrincipal'
    roleDefinitionId: kvSecretsUserRole
  }
}
Then I was hit with the following error:
'Authorization failed for template resource 'kv-webapp-roleAssignments'
of type 'Microsoft.Authorization/roleAssignments'.
The client 'guid-value' with object id 'guid-value' does not have permission to perform action
'Microsoft.Authorization/roleAssignments/write' at scope
'/subscriptions/***/resourceGroups/rg-x/providers/Microsoft.KeyVault/vaults/
kv-x/providers/Microsoft.Authorization/roleAssignments/kv-webapp-roleAssignments'.'
What are the minimal permissions needed, what should my az ad sp create-for-rbac statement(s) be, and are there any other steps I need to take to assign role permissions?
To assign RBAC roles, you need either the User Access Administrator or the Owner role, both of which include the below permission:
Microsoft.Authorization/roleAssignments/write
With the Contributor role, you cannot assign RBAC roles to Azure resources. To confirm this, you can check this MS Doc.
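As a sketch of one option (the app ID and subscription GUID are placeholders), you could keep the existing service principal and grant it the User Access Administrator role on top of Contributor:

az role assignment create \
  --assignee <appId-of-your-service-principal> \
  --role "User Access Administrator" \
  --scope /subscriptions/my-subscription-guid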
I tried to reproduce the same in my environment and got the below results:
I used the same command and created a service principal with the Contributor role, as below:
az ad sp create-for-rbac -n infra-bicep --role contributor --scopes /subscriptions/my-subscription-guid --sdk-auth
Response: (screenshot of the new service principal's clientId, clientSecret, and tenant details)
I generated an access token via Postman with the below parameters:
POST https://login.microsoftonline.com/<tenantID>/oauth2/v2.0/token
grant_type: client_credentials
client_id: <clientID from above response>
client_secret: <clientSecret from above response>
scope: https://management.azure.com/.default
Response: (screenshot of the issued access token)
When I used this token to assign the Key Vault Secrets User role with the below API call, I got the same error as you:
PUT https://management.azure.com/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxxx/providers/Microsoft.Authorization/roleAssignments/xxxxxxxxxxx?api-version=2022-04-01
{
  "properties": {
    "roleDefinitionId": "/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/4633458b-17de-408a-b874-0445c86b69e6",
    "principalId": "456c2d5f-12e7-4448-88ba-xxxxxxxxx",
    "principalType": "ServicePrincipal"
  }
}
Response: (screenshot of the same authorization error)
To resolve the error, create a service principal with either the User Access Administrator or the Owner role.
In my case, I created a service principal with the Owner role, like below:
az ad sp create-for-rbac -n infra-bicep-owner --role owner --scopes /subscriptions/my-subscription-guid --sdk-auth
Response: (screenshot of the new service principal's credentials)
Now, I generated the access token again via Postman, replacing the clientId and clientSecret values:
POST https://login.microsoftonline.com/<tenantID>/oauth2/v2.0/token
grant_type: client_credentials
client_id: <clientID from above response>
client_secret: <clientSecret from above response>
scope: https://management.azure.com/.default
Response: (screenshot of the issued access token)
When I used this token to assign the Key Vault Secrets User role with the below API call, I got a successful response:
PUT https://management.azure.com/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxxx/providers/Microsoft.Authorization/roleAssignments/xxxxxxxxxxx?api-version=2022-04-01
{
  "properties": {
    "roleDefinitionId": "/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/4633458b-17de-408a-b874-0445c86b69e6",
    "principalId": "456c2d5f-12e7-4448-88ba-xxxxxxxxx",
    "principalType": "ServicePrincipal"
  }
}
Response: (screenshot of the role assignment created successfully)
UPDATE:
Considering the principle of least privilege, you can create a custom RBAC role instead of assigning the Owner role.
To create a custom RBAC role, follow these steps:
Go to Azure Portal -> Subscriptions -> Your Subscription -> Access control (IAM) -> Add -> Add custom role.
Fill in the name and description, making sure to select the Contributor role after choosing Clone a role.
Remove the Microsoft.Authorization/roleAssignments/write permission from NotActions.
Add the Microsoft.Authorization/roleAssignments/write permission to Actions.
Click Create.
You can create a service principal with the above custom role using this command:
az ad sp create-for-rbac -n infra_bicep_custom_role --role 'Custom Contributor' --scopes /subscriptions/my-subscription-guid --sdk-auth
Response: (screenshot of the service principal created with the custom role)
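If you prefer the CLI over the portal, the same custom role can also be created with az role definition create. A sketch, saved as custom-contributor.json (the NotActions list is simplified here; clone the full Contributor definition in practice, and the subscription GUID is a placeholder):

{
  "Name": "Custom Contributor",
  "Description": "Contributor, plus permission to create role assignments",
  "Actions": [
    "*",
    "Microsoft.Authorization/roleAssignments/write"
  ],
  "NotActions": [
    "Microsoft.Authorization/*/Delete",
    "Microsoft.Authorization/elevateAccess/Action"
  ],
  "AssignableScopes": [
    "/subscriptions/my-subscription-guid"
  ]
}

Then register it with:

az role definition create --role-definition @custom-contributor.json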
Most of our ECS services are not in the "new" long ARN format that allows tags to be set.
We recently added default tags to our aws provider e.g.:
provider "aws" {
region = local.workspace["aws_region"]
profile = local.workspace["aws_profile"]
default_tags {
tags = {
Environment = local.workspace["releaseStage"]
Owner = "terraform"
Project = "infrastructure"
Account = local.workspace["releaseStage"]
}
}
}
However, if we run terraform apply, it barks at ECS service resources, as they don't support tagging:
Error: error updating ECS Service (arn:aws:ecs:us-east-1:xxxx:service/myservice) tags: error tagging resource (arn:aws:ecs:us-east-1:xxxx:service/myservice): InvalidParameterException: Long arn format must be used for tagging operations
If I override the tags in the resource e.g.:
resource "aws_ecs_service" "myservice" {
name = "myservice"
...
tags = {
Environment = ""
Owner = ""
Project = ""
Account = ""
}
}
It works, but I never get a clean terraform plan as it always needs to evaluate the merged tags.
Is there a way to exclude tagging of default_tags with certain resources?
You should be able to use a lifecycle block with ignore_changes on tags, which tells Terraform to ignore any changes to that attribute. I'm not sure whether this works with default_tags, but it's worth a try.
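A minimal sketch of that idea (the rest of the service definition is elided, and whether this fully suppresses the default_tags diff is untested):

resource "aws_ecs_service" "myservice" {
  name = "myservice"
  # ...

  lifecycle {
    # ignore all tag drift on this resource, including the
    # tags merged in from the provider's default_tags block
    ignore_changes = [tags]
  }
}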
I am new to Terraform, and this is my first script trying it out:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_instance" "example" {
ami = "ami-2757f631"
instance_type = "t2.micro"
}
I have the script above stored on my Windows desktop in C:\TerraformScripts\First.tf.
When I ran it the first time, the script executed and created a new instance for me. I then ran it a second time after just changing the resource name from example to example2. I assumed it would create a new instance with the same configuration, since I had changed the name in the resource setting. Instead, it destroyed the instance created on the first run and then recreated it. Why is this happening without my specifying destroy?
Apologies if I missed something in the documentation, but I couldn't see it when I looked.
Thanks.
Terraform is a declarative language, which means that the script you write is telling terraform the state you want to get to (then terraform works out how to get there). It's effectively like saying "I want you to make sure I have an aws_instance", rather than "I want you to create an aws_instance".
If I'm understanding correctly, you are probably aiming to do this:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_instance" "example" {
ami = "ami-2757f631"
instance_type = "t2.micro"
}
resource "aws_instance" "example2" {
ami = "ami-2757f631"
instance_type = "t2.micro"
}
If you run terraform apply now, you will have two EC2 instances regardless of how many were created by the script previously. That's because under the hood, terraform is tracking the resources it's created previously for that script in a state file, comparing them to the current script, then working out what actions to take to make them line up.
Alternatively, you could use the count parameter to get multiple copies of the same resource:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_instance" "example" {
count = 2
ami = "ami-2757f631"
instance_type = "t2.micro"
}
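As an aside, if the intent really was just to rename aws_instance.example to aws_instance.example2 without destroying it, you can move the existing state entry to the new address and then rename the resource block:

terraform state mv aws_instance.example aws_instance.example2

After that, a plan against the renamed resource block shows no changes instead of a destroy-and-recreate.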
I'm trying to put the RPC permissions, along with the users and their passwords, in an external database. I've followed the documentation for Corda v3.3 (https://docs.corda.net/clientrpc.html#rpc-security-management).
It says that I need to create a "security" field for the node in question and fill out all the necessary information. I've done it, but as soon as I try to deploy the Node, it gives me this error:
"Could not set unknown property 'security' for object of type net.corda.plugins.Node."
The node's information looks like this in the build.gradle file:
node {
    name "O=myOrganisation,L=Lisbon,C=PT"
    p2pPort 10024
    rpcSettings {
        address("localhost:10025")
        adminAddress("localhost:10026")
    }
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "localhost:3306"
                    username = "*******"
                    password = "*******"
                    driverClassName = "com.mysql.jdbc.Driver"
                }
            }
        }
    }
    cordapps = [
        "$project.group:cordapp:$project.version"
    ]
}
You are confusing two syntaxes:
The syntax for configuring a node block inside a Cordform task such as deployNodes
The syntax for configuring a node directly via node.conf
The security settings are for inside node.conf. You have to create the node first, then modify the node's node.conf with these settings once it has been created.
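For illustration, the same settings translated into node.conf could look roughly like this (a sketch based on the block above; note that jdbcUrl should normally be a full JDBC URL with the jdbc:mysql:// scheme and a database name, which the original snippet omits):

security = {
    authService = {
        dataSource = {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection = {
                jdbcUrl = "jdbc:mysql://localhost:3306/rpc_security"
                username = "*******"
                password = "*******"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
    }
}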
Corda 4 will introduce an extraConfig option for use inside Cordform node blocks, as described here.
I have a set of Terraform files in a directory called myproject:
\myproject\ec2.tf
\myproject\provider.tf
\myproject\s3.tf
....
The provider.tf shows:
provider "aws" {
region = "us-west-1"
profile = "default"
}
So, if I run terraform apply in the myproject folder, a set of AWS resources is launched in us-west-1 under my account.
Now I want to introduce an AWS Glue resource, which is only available in a different region, us-west-2. How do I lay out the glue.tf file?
Currently I store it in a sub-directory under myproject and run terraform apply in that sub-directory, i.e.:
\myproject\glue\glue.tf
\myproject\glue\another_provider.tf
another_provider.tf is:
provider "aws" {
region = "us-west-2"
profile = "default"
}
Is this the only way to organize files that launch resources in different regions? Is there a better way?
If not, then I need another backend file in the glue sub-folder as well; besides, some common variables in the myproject directory cannot be shared.
--------- Update:
I followed the link posted by Phuong Nguyen:
provider "aws" {
region = "us-west-1"
profile = "default"
}
provider "aws" {
alias = "oregon"
region = "us-west-2"
profile = "default"
}
resource "aws_glue_connection" "example" {
provider = "aws.oregon"
....
}
But I saw:
Error: aws_glue_connection.example: Provider doesn't support resource: aws_glue_connection
You can use provider aliases to define multiple providers, e.g.:
# this is default provider
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

# additional provider
provider "aws" {
  alias   = "west-2"
  region  = "us-west-2"
  profile = "default"
}
And then in your glue.tf, you can refer to the aliased provider as:
resource "aws_glue_job" "example" {
provider = "aws.west-2"
# ...
}
More details in the Multiple Provider Instances section: https://www.terraform.io/docs/configuration/providers.html
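Regarding the aws_glue_connection error in the update: that message usually means the installed AWS provider plugin predates the resource. A hedged fix is to pin a newer provider version and re-run terraform init (the version floor below is a guess; check the provider's changelog):

provider "aws" {
  # aws_glue_connection was added in a later provider release;
  # verify the actual minimum version in the changelog
  version = ">= 1.30"
  region  = "us-west-1"
  profile = "default"
}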
Read my comment ...
Which basically means that you should keep AWS profiles, regions, and the like out of your Terraform code as much as possible, and treat them as configuration, as follows:
terraform {
  required_version = "1.0.1"

  required_providers {
    aws = {
      version = ">= 3.56.0"
      source  = "hashicorp/aws"
    }
  }

  backend "s3" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
Then use tfvars configuration files:
cat cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
profile = "prd-spe-rcr-web"
region = "eu-north-1"
bucket = "prd-bucket-spe"
foobar = "baz"
which you will pass during the terraform plan and apply calls as follows:
terraform -chdir=$tf_code_path plan -var-file=<<line-one-^^^>>.tfvars
terraform -chdir=$tf_code_path apply -var-file=<<like-the-one-^^^>>.tfvars -auto-approve
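Since the backend "s3" {} block above is left empty, a similar file, restricted to valid S3 backend keys (bucket, profile, region, plus a key entry), can presumably be passed to terraform init as well:

terraform -chdir=$tf_code_path init -backend-config=cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars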
As a rule of thumb, you SHOULD always separate your code and configuration; the more they are mixed, the deeper you will get into trouble. This applies to any programming language or project. Now, some wise heads will argue that Terraform code is in itself configuration, but no, it is not: the Terraform code is the declarative source code used to provision the infrastructure that your application source code runs on.