Terraform: use a module output (API Gateway trigger endpoint) as a Lambda environment variable within the same module - terraform-provider-aws

I need to use the trigger's API endpoint value as a Lambda environment variable within the same module.
I can create the Lambda and the API Gateway trigger using the module, but while creating the Lambda I need to assign the API Gateway endpoint as an environment variable.
Can you please suggest a way to get the API Gateway endpoint so it can be assigned to an environment variable?
==> modules/lambda
main.tf
resource "aws_lambda_function" "create-lambda" { ... }
resource "aws_apigatewayv2_api" "api" { ... }
resource "aws_lambda_permission" "apigw" { ... }
outputs.tf
output "lambda-http-api-endpoint" {
  value       = aws_apigatewayv2_api.api.api_endpoint
  description = "Lambda Trigger http api endpoint"
}
==> main.lambda.create.tf
module "lambda_publish" {
  source = "../modules/lambda"
  environment_variables = {
    API_GATEWAY_ENDPOINT = module.lambda_publish.lambda-http-api-endpoint
  }
}
Terraform plan produces a cycle error instead of creating the Lambda and API Gateway and adding the gateway endpoint as an environment variable:
│ Error: Cycle: module.key.module.lambda_publish.aws_lambda_function.create-lambda, module.key.module.lambda_publish.aws_apigatewayv2_api.api, module.key.module.lambda_publish.output.lambda-http-api-endpoint (expand), module.key.module.lambda_publish.var.environment_variables (expand)
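One way to break the cycle (a sketch only, untested, with assumed names): reference the endpoint inside the module instead of feeding the module's own output back into itself, and make sure the API resource itself does not depend on the function. If the HTTP API is declared without a quick-create `target`, the Lambda can read its `api_endpoint` directly, and a separate integration and route wire the two together without a loop:

```hcl
# Sketch (untested): the API has no quick-create target, so it does not
# depend on the Lambda, and the Lambda can read api_endpoint without a cycle.
resource "aws_apigatewayv2_api" "api" {
  name          = "lambda-http-api" # assumed name
  protocol_type = "HTTP"
  # No `target` here: a quick-create target would make the API depend on
  # the function and recreate the cycle.
}

resource "aws_lambda_function" "create-lambda" {
  # ... function config ...
  environment {
    variables = {
      API_GATEWAY_ENDPOINT = aws_apigatewayv2_api.api.api_endpoint
    }
  }
}

# The dependency on the function lives here instead, which is fine:
# integration -> (api, lambda), lambda -> api, api -> nothing.
resource "aws_apigatewayv2_integration" "lambda" {
  api_id                 = aws_apigatewayv2_api.api.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.create-lambda.invoke_arn
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_route" "default" {
  api_id    = aws_apigatewayv2_api.api.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.lambda.id}"
}
```

The existing `aws_lambda_permission.apigw` in the module should still be needed so the API may invoke the function.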

Related

Using Vault UI to get secrets

I have the following policies:
path "/kv/dev/*" {
  capabilities = ["read", "list", "update"]
}
path "/kv/data/dev/*" {
  capabilities = ["read", "list", "update"]
}
Using the CLI, I am able to use the following command to get the secrets:
vault kv get -mount=kv dev/db
And it outputs the secrets correctly. The issue occurs when using the UI:
- With the input of dev/db I get: Ember Data Request POST /v1/sys/capabilities-self returned a 400 Payload (application/json) [object Object]
- With the input of /data/dev/db I get: undefined is not an object (evaluating 'n.data')
Any advice on how to access the secrets using the UI?
I think I got to the state you are looking for. Let me share what I did:
First, I specified in my terminal what I need for Vault:
export VAULT_TOKEN='the token I use to authenticate myself in the UI'
export VAULT_ADDR='my vault address'
Then I logged in the same way I would in the UI:
vault login -method=token token=$VAULT_TOKEN
Creating the policy:
vault policy write my-policy - << EOF
path "/kv/dev/*" {
capabilities = ["read","list", "update"]
}
path "/kv/data/dev/*" {
capabilities = ["read","list", "update"]
}
EOF
Enabling the secrets engine for a specific path, as you can see in this StackOverflow question:
vault secrets enable -path=kv kv
Inserting and reading secret:
vault kv put kv/dev/db value=yes
vault kv get -mount=kv dev/db
After all of these steps I can see the secret at:
VAULT_ADDR/ui/vault/secrets/kv/show/dev/db
So, if VAULT_ADDR were http://127.0.0.1:8200, the full path in the browser would be:
http://127.0.0.1:8200/ui/vault/secrets/kv/show/dev/db
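One detail worth checking (my understanding, verify against your setup): the two policy paths in the question differ because KV v1 and KV v2 address secrets differently, and the UI uses the raw API paths while the CLI hides the `data/` segment on v2. A quick way to confirm which version your mount runs:

```shell
# Check the KV version of the mount; paths differ between v1 and v2.
vault secrets list -detailed     # look for options map[version:2] on the kv/ mount

vault kv get -mount=kv dev/db    # CLI form from the question; works for both,
                                 # the CLI inserts data/ for you on KV v2
vault read kv/data/dev/db        # raw API path on KV v2
vault read kv/dev/db             # raw API path on KV v1
```

If the mount is v2, the policy must grant access on `kv/data/dev/*` (as the second stanza does) for the UI to read the secret contents.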

How to retrieve associated CloudFront Distribution URL in Amplify+Terraform?

Within Terraform, is it possible to retrieve the URL of an Amplify app's CloudFront Distribution?
If I were to create a new Next.js app within the web console, there is an important 200 rewrite that gets added in 'Redirects & Rewrites', which points all front-end paths to the associated CloudFront Distribution. However, I've now migrated to Terraform's aws_amplify_app, which doesn't add this rewrite automatically.
So, does anyone know how to do this within the TF provider? I appreciate that this may be impossible because the CloudFront Distribution is only created as part of the Amplify App itself. So, alternatively, if it is not exposed, is there a way for Amplify to handle this Next-specific logic when using Terraform (rather than just through the web console)?
Here is the relevant code from my main.tf, with a TODO comment showing what needs to be automatic.
resource "aws_amplify_app" "ui" {
  name = "my-ui"
  ...
  custom_rule {
    source = "/<*>"
    status = "200"
    ## TODO -- this should be automatically set as the URL of the newly created CloudFront distribution
    target = "https://xxxxxxxxx.cloudfront.net/<*>"
  }
}

firebase callable functions returning CORS error and not being called from client

I have been using Firebase Functions for quite some time, and all deployments of functions have been going quite smoothly. All of a sudden, any new functions deployed have stopped working, and any call from the client returns a CORS error. If I check the functions list in the Firebase dashboard, I can't see the functions being called, which is what I would expect if the functions simply didn't exist at all.
I am now just trying a simple basic function like below:
exports.createSession = functions.region('europe-west2').https.onCall(async (data, context) => {
  return { status: 200 };
});
On the frontend I am doing a simple call like below:
const createSessionFunction = functions.httpsCallable('createSession');
const response = await createSessionFunction({});
All the other functions that were created and deployed prior to this week are working fine. Any new functions are not.
The error I get is below:
Access to fetch at 'https://europe-west2-xxxxxxx.cloudfunctions.net/createSession' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
index.cjs.js:614 POST https://europe-west2-xxxxxxxz.cloudfunctions.net/createSession net::ERR_FAILED
My function list in the Firebase GUI shows this function does exist:
createSession - Request - https://europe-west2-xxxxxxxx.cloudfunctions.net/createSession - europe-west2 - Node.js 8 - 256 MB - 60s
However, the logs show that it is never called from the client when I try to test it, which means the client might not be detecting this function at all.
I have tried the following steps with no luck:
Delete and redeploy the functions
Rename the function and redeploy
Deploy the same new function on different applications (dev/test etc)
Any ideas?
This was resolved in the Google Cloud dashboard by granting all my functions public access. The default permission has changed from public to private.
https://cloud.google.com/functions/docs/securing/managing-access-iam#allowing_unauthenticated_function_invocation
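Granting public access can also be scripted. A sketch using the gcloud CLI (function name and region taken from the question; untested against your project):

```shell
# Allow unauthenticated invocations of the callable function.
gcloud functions add-iam-policy-binding createSession \
  --region=europe-west2 \
  --member=allUsers \
  --role=roles/cloudfunctions.invoker
```

If permissions are correct and the function is still never reached, it may also be worth confirming the client targets the right region, e.g. `firebase.app().functions('europe-west2').httpsCallable('createSession')`, since `httpsCallable` defaults to us-central1 and a missing function in that region produces the same CORS symptom.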

Is it possible to setup a custom hostname for AWS Transfer SFTP via Terraform

I'm trying to set up an SFTP server with a custom hostname using AWS Transfer. I'm managing the resource using Terraform. I've currently got the resource up and running, and I've used Terraform to create a Route53 record to point to the SFTP server, but the custom hostname entry on the SFTP dashboard is reading as blank.
And of course, when I create the server manually through the AWS console and associate a Route53 record with it, it looks like what I would expect.
I've looked through the terraform resource documentation and I've tried to see how it might be done via aws cli or cloudformation, but I haven't had any luck.
My server resource looks like:
resource "aws_transfer_server" "sftp" {
  identity_provider_type = "SERVICE_MANAGED"
  logging_role           = "${aws_iam_role.logging.arn}"
  force_destroy          = "false"
  tags = {
    Name = "${local.product}-${terraform.workspace}"
  }
}
and my Route53 record looks like:
resource "aws_route53_record" "dns_record_cname" {
  zone_id = "${data.aws_route53_zone.sftp.zone_id}"
  name    = "${local.product}-${terraform.workspace}"
  type    = "CNAME"
  records = ["${aws_transfer_server.sftp.endpoint}"]
  ttl     = "300"
}
Functionally, I can move forward with what I have, I can connect to the server with my DNS, but I'm trying to understand the complete picture.
From the AWS documentation:
When you create a server using AWS Cloud Development Kit (AWS CDK) or through the CLI, you must add a tag if you want that server to have a custom hostname. When you create a Transfer Family server by using the console, the tagging is done automatically.
So, you will need to be able to add those tags using Terraform. In v4.35.0 the provider added support for a new resource: aws_transfer_tag.
An example supplied in the GitHub issue (I haven't tested it personally yet):
resource "aws_transfer_server" "with_custom_domain" {
  # config here
}

resource "aws_transfer_tag" "with_custom_domain_route53_zone_id" {
  resource_arn = aws_transfer_server.with_custom_domain.arn
  key          = "aws:transfer:route53HostedZoneId"
  value        = "/hostedzone/ABCDE1111222233334444"
}

resource "aws_transfer_tag" "with_custom_domain_name" {
  resource_arn = aws_transfer_server.with_custom_domain.arn
  key          = "aws:transfer:customHostname"
  value        = "abc.example.com"
}

How to route requests to desired endpoint using Environment Variables in APIGEE

I have a situation where I need to route requests to the desired endpoint based on the environment the request hits, for example QA to QA and Prod to Prod.
I've configured a proxy and defined a default target host during the initial config.
Then I'm using JavaScript to decide the target host based on the environment the request comes in on:
var env = context.getVariable('environment.name');
if (env == "prod") {
  var host = 'https://prod.com';
}
if (env == "test") {
  var host = 'https://qa.com';
}
I've added this JS file as a step in the target endpoint's (default) PreFlow.
I see that all requests are still sent to the default host that I configured during the initial setup.
Am I missing something here? Please help.
Also, I've read about the Target Server environment config. I've configured the hosts, but how do I reference/use them in my proxy?
I usually set the target endpoint (the same as your host) in a Key Value Map under Apigee's 'Environment Configuration'.
Then I assign it to a variable (for example, endpointUrl) in a Key Value Maps Operations policy.
Finally, I use it in the Target Request Message like below:
<AssignVariable>
  <Name>target.url</Name>
  <Ref>endpointUrl</Ref>
</AssignVariable>
The advantage of this method is that if your host changes, you just edit the value in the Key Value Map rather than editing your code, and you don't need to re-deploy your API.
However, I'm answering from my own work experience only.
You might also try the Apigee Community, where you may find a solution that suits you.
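As an aside on the JavaScript approach in the question: the script computes `host` but never writes it into a flow variable, so Apigee discards it and keeps the default target. A sketch of the missing step, with a tiny stand-in for Apigee's `context` object (hypothetical, only so the logic can run anywhere; inside an Apigee JS policy `context` is provided for you):

```javascript
// Stand-in for Apigee's context object (assumption for illustration only).
var context = {
  vars: { 'environment.name': 'test' },
  getVariable: function (k) { return this.vars[k]; },
  setVariable: function (k, v) { this.vars[k] = v; }
};

var env = context.getVariable('environment.name');
var host = 'https://qa.com'; // fallback (assumed)
if (env === 'prod') { host = 'https://prod.com'; }
if (env === 'test') { host = 'https://qa.com'; }

// The step the original script is missing: without writing target.url,
// the computed host is discarded and the default target is used.
context.setVariable('target.url', host);
console.log(context.getVariable('target.url')); // → https://qa.com
```

The Key Value Map approach in the answer above is still preferable for values that change per environment, since it avoids a redeploy.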
