flow log for specific ENI - terraform-provider-aws

I am supposed to be able to create a VPC flow log for a specific instance network interface.
I have been able to create a flow log for the entire VPC, but not for a specific instance's network interface. If I create an instance, it comes with an ENI, so I would think I should be able to inspect the instance to find the ENI and get its ID.
For this source code:
resource "aws_instance" "master_inst" { ...}
resource "aws_flow_log" "example-instance-flow-log" {
provider = aws.region_master
iam_role_arn = aws_iam_role.master-vpc-flow-log-role.arn
log_destination = aws_cloudwatch_log_group.master-instance-flow-log.arn
traffic_type = "ALL"
eni_id = aws_instance.master_inst.network_interface.id
}
resource "aws_cloudwatch_log_group" "master-instance-flow-log" {
provider = aws.region_master
name = "master-instance-flow-log"
}
I am getting
Error: Cannot index a set value
│
│ on ../../modules/instances.tf line 78, in resource "aws_flow_log" "example-instance-flow-log":
│ 78: eni_id = aws_instance.master_inst.network_interface.id
│
│ Block type "network_interface" is represented by a set of objects, and set elements do not have addressable keys. To find elements matching specific criteria, use a "for" expression with an "if"
│ clause.

This does the trick.
In order for terraform destroy to clean up the log group, the role needs permission to delete the log group. Unfortunately, even with the delete action in the policy, about one time in three the log group is not actually deleted, so you have to keep the console open to delete it manually.
resource "aws_iam_role" "flowlog-role" {
provider = aws.region_master
name = "flowlog-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "flowlog-role-policy" {
provider = aws.region_master
name = "flowlog-role-policy"
role = aws_iam_role.flowlog-role.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:DeleteLogGroup",
"logs:CreateLogStream",
"lots:DeleteLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_cloudwatch_log_group" "master-instance-flowlog-grp" {
count = var.enable_instance_flowlog ? 1 : 0
provider = aws.region_master
name = "master-instance-flowlog-grp"
retention_in_days = 3 ## need to specify number of days otherwise terraform destroy will not remove log group
}
resource "aws_flow_log" "master-instance-flowlog" {
count = var.enable_instance_flowlog ? 1 : 0
provider = aws.region_master
iam_role_arn = aws_iam_role.flowlog-role.arn
log_destination = aws_cloudwatch_log_group.master-instance-flowlog-grp[count.index].arn
traffic_type = "ALL"
eni_id = aws_instance.master_inst.primary_network_interface_id
}
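For reference, the "for" expression route that the error message hints at would look roughly like this (a hypothetical sketch; the network_interface set on aws_instance is only populated when network_interface blocks are declared on the instance, which is why the primary_network_interface_id attribute used above is the simpler choice):
locals {
  # Option used above: the instance exports its primary ENI directly.
  master_eni_id = aws_instance.master_inst.primary_network_interface_id

  # Alternative sketched by the error message: filter the network_interface
  # set with a "for"/"if" expression (device_index 0 is the primary ENI).
  # Returns null if no network_interface blocks are configured.
  master_eni_id_alt = one([
    for ni in aws_instance.master_inst.network_interface : ni.network_interface_id
    if ni.device_index == 0
  ])
}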

Related

Google-beta seems to be using a non-existent project with google_firebase_project. What should I do?

Objective
I am trying to fix a Firebase deployment managed in Terraform. My module looks something like this...
data "google_client_config" "default_project" {
provider = google-beta
}
data "google_project" "project" {
provider = google-beta
project_id = var.gcp_project
}
resource "google_firebase_project" "default" {
provider = google-beta
project = var.gcp_project
}
# enable billing API
resource "google_project_service" "cloud_billing" {
provider = google-beta
project = google_firebase_project.default.id
service = "cloudbilling.googleapis.com"
}
# enable firebase
resource "google_project_service" "firebase" {
provider = google-beta
project = google_firebase_project.default.id
service = "firebase.googleapis.com"
}
# enable access context manage api
resource "google_project_service" "access_context" {
provider = google-beta
project = google_firebase_project.default.id
service = "accesscontextmanager.googleapis.com"
}
resource "google_firebase_web_app" "app" {
provider = google-beta
project = data.google_project.project.project_id
display_name = "firestore-controller-${google_firebase_project.default.display_name}"
depends_on = [
google_firebase_project.default,
google_project_service.firebase,
google_project_service.access_context,
google_project_service.cloud_billing
]
}
data "google_firebase_web_app_config" "app" {
provider = google-beta
web_app_id = google_firebase_web_app.app.app_id
}
resource "google_storage_bucket" "storage" {
provider = google-beta
name = "firestore-controller-${google_firebase_project.default.display_name}"
location = "US"
}
locals {
firebase_config = jsonencode({
appId = google_firebase_web_app.app.app_id
apiKey = data.google_firebase_web_app_config.app.api_key
authDomain = data.google_firebase_web_app_config.app.auth_domain
databaseURL = lookup(data.google_firebase_web_app_config.app, "database_url", "")
storageBucket = lookup(data.google_firebase_web_app_config.app, "storage_bucket", "")
messagingSenderId = lookup(data.google_firebase_web_app_config.app, "message_sender_id", "")
measurementId = lookup(data.google_firebase_web_app_config.app, "measurement_id", "")
})
}
resource "google_storage_bucket_object" "firebase_config" {
provider = google-beta
bucket = google_storage_bucket.storage.name
name = "firebase-config.json"
content = local.firebase_config
}
Issue
Unfortunately, this fails at google_firebase_project.default with the following message:
{
│ "#type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "domain": "googleapis.com",
│ "metadata": {
│ "consumer": "projects/764086051850",
│ "service": "firebase.googleapis.com"
│ },
│ "reason": "SERVICE_DISABLED"
│ }
This is strange, because a project with that number does not exist (unless it's some kind of root project that I'm having trouble finding). If this is the project number for some child of the project I am providing to google_firebase_project.default, that is also strange; var.gcp_project certainly has this service enabled.
What I've tried thus far
Removing tfstate.
Refactoring back and forth from legacy modules.
I have double-checked and confirmed that the google-beta provider does indeed recognize the correct project in its configuration when using data.google_project without specifying a project_id.
Where is this mysterious projects/764086051850 coming from?
cross-post
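One thing that may be worth ruling out (a hedged guess, not a confirmed answer): when the google-beta provider authenticates with Application Default Credentials, some API calls are quota-attributed to the credentials' own project rather than to var.gcp_project, and the unfamiliar project number in a SERVICE_DISABLED error can belong to that quota project. Pinning the quota/billing project to the target project would look roughly like this; user_project_override and billing_project are real provider arguments, but whether this resolves this particular error is an assumption:
provider "google-beta" {
  project               = var.gcp_project
  # Attribute quota/billing for API calls to the target project instead of
  # the project behind the credentials.
  user_project_override = true
  billing_project       = var.gcp_project
}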

Terraform aws_s3_bucket_website_configuration keeps creating website block of aws_s3_bucket resource

I'm using ~> 3.0 as the AWS provider version constraint on Terraform, and the last terraform init downloaded 3.75.1. When I ran terraform plan, a warning came up:
Warning: Argument is deprecated
on main.tf line 14, in resource "aws_s3_bucket" "xxx":
14: resource "aws_s3_bucket" "xxx" {
Use the aws_s3_bucket_website_configuration resource instead
My bucket resource was like this:
resource "aws_s3_bucket" "bucket" {
bucket = "bucket"
acl = "public-read"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket/*"
}
]
}
EOF
website {
index_document = "index.html"
error_document = "index.html"
}
}
Due to the latest provider changes and the deprecation warning I got because of them, I split my bucket resource into three resources, like below:
resource "aws_s3_bucket" "bucket" {
bucket = "bucket"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket/*"
}
]
}
EOF
}
resource "aws_s3_bucket_acl" "bucket-acl" {
bucket = aws_s3_bucket.bucket.id
acl = "public-read"
}
resource "aws_s3_bucket_website_configuration" "bucket-website-config" {
bucket = aws_s3_bucket.bucket.id
index_document {
suffix = "index.html"
}
error_document {
key = "index.html"
}
}
I ran terraform plan, and the output was as below:
# aws_s3_bucket.bucket will be updated in-place
~ resource "aws_s3_bucket" "bucket" {
~ acl = "public-read" -> "private"
id = "bucket"
tags = {}
# (13 unchanged attributes hidden)
- website {
- error_document = "index.html" -> null
- index_document = "index.html" -> null
}
# (1 unchanged block hidden)
}
# aws_s3_bucket_acl.bucket-acl will be created
+ resource "aws_s3_bucket_acl" "bucket-acl" {
+ acl = "public-read"
+ bucket = "bucket"
+ id = (known after apply)
+ access_control_policy {
+ grant {
+ permission = (known after apply)
+ grantee {
+ display_name = (known after apply)
+ email_address = (known after apply)
+ id = (known after apply)
+ type = (known after apply)
+ uri = (known after apply)
}
}
+ owner {
+ display_name = (known after apply)
+ id = (known after apply)
}
}
}
# aws_s3_bucket_website_configuration.bucket-website-config will be created
+ resource "aws_s3_bucket_website_configuration" "bucket-website-config" {
+ bucket = "bucket"
+ id = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
+ error_document {
+ key = "index.html"
}
+ index_document {
+ suffix = "index.html"
}
}
Despite the confusion (I couldn't understand the changes on aws_s3_bucket, since I'm basically using the same configuration values), I ran terraform apply to see what would happen.
After every change was applied, I ran terraform plan to make sure everything was up to date. From this point on, my environment entered a kind of vicious circle.
The second terraform plan output was:
# aws_s3_bucket.bucket will be updated in-place
~ resource "aws_s3_bucket" "bucket" {
id = "bucket"
tags = {}
# (14 unchanged attributes hidden)
- website {
- error_document = "index.html" -> null
- index_document = "index.html" -> null
}
# (1 unchanged block hidden)
}
As we can see, it tries to remove the website configuration from the bucket. I ran terraform apply for this as well, and after applying, I ran terraform plan for the third time:
# aws_s3_bucket_website_configuration.bucket-website-config will be created
+ resource "aws_s3_bucket_website_configuration" "bucket-website-config" {
+ bucket = "bucket"
+ id = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
+ error_document {
+ key = "index.html"
}
+ index_document {
+ suffix = "index.html"
}
}
When I apply this, Terraform tries to remove the website config again, and this circle of changes goes on and on.
Is this a bug? Has anyone else stumbled upon this issue? Is there any solution other than adding an ignore_changes block or downgrading the provider version?
Any help will be appreciated.
Thank you very much.
I had exactly the same case, and it turned out to be caused by a provider version that was too old.
I was also using a ~> 3.62 AWS provider.
According to the provider changelog, some of these resources were only added in 4.0.0:
New Resource: aws_s3_bucket_website_configuration (#22648)
New Resource: aws_s3_bucket_acl (#22853)
I switched to version >= 4.4 of the AWS provider, and afterwards everything worked as expected (just to mention it, I chose 4.4 for additional reasons not related to this problem; 4.0 should already have been enough).
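For completeness, bumping the constraint would look roughly like this (the surrounding module layout is an assumption); re-run terraform init -upgrade afterwards so the newer provider is actually downloaded:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # 4.0 added aws_s3_bucket_website_configuration and aws_s3_bucket_acl;
      # >= 4.4 is what was used in the answer above.
      version = ">= 4.4"
    }
  }
}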
As @lopin said, it's a problem caused by an old provider version. In addition to @Oguzhan Aygun's lifecycle workaround, you can use the old-provider method, which is the website block inside the aws_s3_bucket resource, like the following:
resource "aws_s3_bucket" "b" {
bucket = "s3-website-test.hashicorp.com"
website {
index_document = "index.html"
error_document = "error.html"
routing_rules = ...
}
}
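The ignore_changes workaround mentioned above would look roughly like this if you keep the new split resources on a 3.x provider (a sketch; the bucket's policy and other arguments are omitted). It simply stops the plan from trying to strip the website settings that are now managed by aws_s3_bucket_website_configuration:
resource "aws_s3_bucket" "bucket" {
  bucket = "bucket"

  lifecycle {
    # Leave the deprecated in-resource website (and acl) settings alone.
    ignore_changes = [website, acl]
  }
}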

Error: Error getting Backup Vault: AccessDeniedException:

Can someone please help with what this error means? I was configuring AWS Backup and got this error message. I have tried many things (IAM policy, etc.) but had no luck. Any assistance is much appreciated.
Error: Error getting Backup Vault: AccessDeniedException:
status code: 403, request id: 501c0713-0ce9-4879-93f6-1887322a38be
I ran into this issue using Terraform. I fixed it by adding the "backup-storage:MountCapsule" permission to the policy of the role I am using to create the resource. Here is a slightly edited policy and role configuration; hopefully this helps someone.
data "aws_iam_policy_document" "CloudFormationServicePolicy" {
statement {
sid = "AllResources"
effect = "Allow"
actions = [
"backup:*",
"backup-storage:MountCapsule",
...
]
resources = ["*"]
}
statement {
sid = "IAM"
effect = "Allow"
actions = ["iam:PassRole"]
resources = ["*"]
}
}
resource "aws_iam_policy" "CloudFormationServicePolicy" {
name = "${local.resource_name}-CloudFormationServicePolicy"
description = "policy for the IAM role "
path = "/${local.metadata["project"]}/${local.metadata["application"]}/"
policy = data.aws_iam_policy_document.CloudFormationServicePolicy.json
}
resource "aws_iam_role" "CloudFormationServiceRole" {
name = "${local.resource_name}-CloudFormationServiceRole"
description = "Allow cluster to manage node groups, fargate nodes and cloudwatch logs"
force_detach_policies = true
assume_role_policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Action" : "sts:AssumeRole",
"Principal" : {
"Service" : ["cloudformation.amazonaws.com", "ecs-tasks.amazonaws.com"]
},
"Effect" : "Allow",
"Sid" : "TrustStatement"
},
{
"Effect" : "Allow",
"Principal" : {
"AWS" : "arn:aws:iam::xxxxxxx:role/OrganizationAdministratorRole"
},
"Action" : "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "CloudFormationService_task_role_policy_attachment" {
role = aws_iam_role.CloudFormationServiceRole.name
policy_arn = aws_iam_policy.CloudFormationServicePolicy.arn
}
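For context, the resource whose read triggers the "getting Backup Vault" call is typically just an aws_backup_vault (a minimal, assumed example; the original poster's backup configuration is not shown). With permissions like the ones above attached to whatever principal runs Terraform, the read should succeed:
resource "aws_backup_vault" "example" {
  # Hypothetical vault name; reading this resource is what returns
  # AccessDeniedException when the permissions above are missing.
  name = "example-backup-vault"
}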

Pass variables from terraform to arm template

I am deploying an ARM template with Terraform.
We deploy all our Azure infrastructure with Terraform, but for AKS there are some preview features which are not in Terraform yet, so we want to deploy an AKS cluster with an ARM template.
If I create a Log Analytics workspace with Terraform, how can I pass the workspace ID to ARM?
resource "azurerm_resource_group" "test" {
name = "k8s-test-bram"
location = "westeurope"
}
resource "azurerm_log_analytics_workspace" "test" {
name = "lawtest"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "PerGB2018"
retention_in_days = 30
}
Here is a snippet of the AKS ARM template where I want to enable monitoring by referring to workspaceResourceId. But how do I define/declare the parameter so that it gets the ID of the workspace I created with Terraform?
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"enableRBAC": "[parameters('EnableRBAC')]",
"dnsPrefix": "[parameters('DnsPrefix')]",
"addonProfiles": {
"httpApplicationRouting": {
"enabled": false
},
omsagent": {
"enabled": true,
"config": {
"logAnalyticsWorkspaceResourceID": "[parameters('workspaceResourceId')]"
}
}
},
You could use the parameters property of the azurerm_template_deployment resource to pass in parameters:
parameters = {
"workspaceResourceId" = "${azurerm_log_analytics_workspace.test.id}"
}
I think it should look more or less like that; here's the official documentation on this resource.
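Fleshed out, the deployment might look roughly like this (a sketch; the template file name and deployment name are assumptions, and azurerm_template_deployment passes all parameter values as strings). The ARM template then only needs a matching "workspaceResourceId" parameter of type string:
resource "azurerm_template_deployment" "aks" {
  name                = "aks-preview-deployment"
  resource_group_name = azurerm_resource_group.test.name
  deployment_mode     = "Incremental"

  # ARM template containing the AKS resource and the workspaceResourceId parameter.
  template_body = file("${path.module}/aks.json")

  parameters = {
    workspaceResourceId = azurerm_log_analytics_workspace.test.id
  }
}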

Amazon S3: GetObject Request throwing an exception "Access denied" 403

I've just started to work with Amazon S3 in my ASP.NET project. I can upload images, delete them, and show them in the browser. But when I try to get an image object from code-behind with a simple GetObjectRequest and load it into a stream, I get an exception: "Access denied: The remote server returned an error: (403) Forbidden." It's very strange, because I can delete an object but have no access to get it.
Here is my Get Request code:
using (var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest1))
{
GetObjectRequest request = new GetObjectRequest
{
BucketName = bucketName,
Key = keyName
};
GetObjectResponse response = client.GetObject(request);
return response.ResponseStream;
}
This doesn't work.
This DELETE request, however, works correctly:
DeleteObjectRequest deleteObjectRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName
};
client.DeleteObject(deleteObjectRequest);
I think it could be a problem with my bucket policy, but I don't understand what exactly:
{
"Version": "2008-10-17",
"Id": "Policy1437483839592",
"Statement": [
{
"Sid": "Stmt1437483828676",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::ama.dyndns.tv/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"MyIP",
"MyTeammateIP"
]
}
}
},
{
"Sid": "Givenotaccessifrefererisnomysites",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::ama.dyndns.tv/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"MyIP",
"MyTeammateIP"
]
}
}
}
]
}
Your first bucket policy statement allows a GET request when the Referer HTTP request header is present and its value matches one of the supplied values. (Note that this is a very primitive access control mechanism, as the header is easily forged.)
The second statement denies requests where the referer doesn't match any value from the supplied list.
The referer is nothing more than a request header sent by the browser or HTTP user agent library. When you send your GET request from code, there will be no Referer header present unless you add one yourself as part of the request. A matching Deny policy overrides not only any matching Allow policy, it also overrides any authentication credentials you supply. Hence the problem.
If you don't set the acl on the object to something that allows public access (such as x-amz-acl: public-read) then the Deny policy is unnecessary. The object will not be downloadable in that case, because the deny is implicit unless the Allow policy is matched or you provide valid authentication credentials. Everything is denied by default in S3 unless you allow it via the object permissions/acl, bucket policy, or IAM user policy, and even if you do, a matching explicit Deny always prevails.
If the object does not exist and the executing code does not have ListBucket permission, then a 403 will be returned even if the calling code has getObject permissions.
Take a look at the permissions section: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
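To make the fix concrete (expressed as Terraform to match the rest of this page; whether the referer gate should be kept at all is an assumption about the poster's intent), dropping the explicit Deny and keeping only the Allow leaves authenticated SDK calls such as the C# GetObjectRequest unaffected, while anonymous requests without a matching referer are still denied by default:
resource "aws_s3_bucket_policy" "ama" {
  bucket = "ama.dyndns.tv"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowRefererGets"
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "arn:aws:s3:::ama.dyndns.tv/*"
        Condition = {
          StringLike = {
            "aws:Referer" = ["MyIP", "MyTeammateIP"]
          }
        }
      }
    ]
  })
}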
