Is it possible to set up a custom hostname for AWS Transfer SFTP via Terraform?

I'm trying to set up an SFTP server with a custom hostname using AWS Transfer. I'm managing the resource using Terraform. I've currently got the resource up and running, and I've used Terraform to create a Route53 record to point to the SFTP server, but the custom hostname entry on the SFTP dashboard is reading as blank.
And of course, when I create the server manually through the AWS console and associate a Route53 record with it, it looks like what I would expect.
I've looked through the Terraform resource documentation and tried to see how it might be done via the AWS CLI or CloudFormation, but I haven't had any luck.
My server resource looks like:
resource "aws_transfer_server" "sftp" {
identity_provider_type = "SERVICE_MANAGED"
logging_role = "${aws_iam_role.logging.arn}"
force_destroy = "false"
tags {
Name = ${local.product}-${terraform.workspace}"
}
}
and my Route53 record looks like:
resource "aws_route53_record" "dns_record_cname" {
zone_id = "${data.aws_route53_zone.sftp.zone_id}"
name = "${local.product}-${terraform.workspace}"
type = "CNAME"
records = ["${aws_transfer_server.sftp.endpoint}"]
ttl = "300"
}
Functionally, I can move forward with what I have (I can connect to the server via my DNS record), but I'm trying to understand the complete picture.

From the AWS documentation:
When you create a server using AWS Cloud Development Kit (AWS CDK) or through the CLI, you must add a tag if you want that server to have a custom hostname. When you create a Transfer Family server by using the console, the tagging is done automatically.
So you will need to add those tags using Terraform. In v4.35.0 of the AWS provider, support was added for a new resource: aws_transfer_tag.
An example supplied in the GitHub issue (I haven't tested it personally yet):
resource "aws_transfer_server" "with_custom_domain" {
# config here
}
resource "aws_transfer_tag" "with_custom_domain_route53_zone_id" {
resource_arn = aws_transfer_server.with_custom_domain.arn
key = "aws:transfer:route53HostedZoneId"
value = "/hostedzone/ABCDE1111222233334444"
}
resource "aws_transfer_tag" "with_custom_domain_name" {
resource_arn = aws_transfer_server.with_custom_domain.arn
key = "aws:transfer:customHostname"
value = "abc.example.com"
}
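As an untested variant, the same tags could presumably be wired to the Route53 zone data source and CNAME record already defined in the question, instead of hard-coded values:

resource "aws_transfer_tag" "custom_hostname_zone_id" {
  resource_arn = aws_transfer_server.sftp.arn
  key          = "aws:transfer:route53HostedZoneId"
  # zone_id is prefixed with /hostedzone/ to match the format used in the example above
  value        = "/hostedzone/${data.aws_route53_zone.sftp.zone_id}"
}

resource "aws_transfer_tag" "custom_hostname" {
  resource_arn = aws_transfer_server.sftp.arn
  key          = "aws:transfer:customHostname"
  # use the FQDN of the CNAME record from the question
  value        = aws_route53_record.dns_record_cname.fqdn
}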

Related

Terraform kubernetes_pod resource is not created

I am very new to Terraform, and I have been given a project to make a basic WordPress website on Google Cloud Platform using Kubernetes, automated by Terraform. I am trying to set up the Kubernetes pod for WordPress as follows:
resource "kubernetes_pod" "kubPod" {
metadata {
labels = {
app = "MyApp"
}
name = "terraform-example"
}
spec {
container {
image = "wordpress:4.8-apache"
name = "mywp"
}
}
}
When I build the project using terraform apply, I get the following output:
kubernetes_pod_v1.kubPod: Creating...
and after that Terraform exits without creating the pod. I do not understand what is wrong with the code; I just suspect that it might be related to missing declarations (for example, provider configuration, as in the sketch below), but the examples I looked at were quite similar to my code above. I would be glad if I could get some help.
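A rough, untested sketch of the kind of declaration I mean: a kubernetes provider configured against the existing GKE cluster (the cluster name and location here are placeholders):

# Hypothetical kubernetes provider configuration for an existing GKE cluster
data "google_client_config" "default" {}

data "google_container_cluster" "wp" {
  name     = "my-gke-cluster"   # placeholder cluster name
  location = "europe-west1"     # placeholder location
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.wp.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.wp.master_auth[0].cluster_ca_certificate)
}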
My version information is as follows:
Terraform v1.2.8
on windows_amd64
provider registry.terraform.io/hashicorp/google v4.34.0
provider registry.terraform.io/hashicorp/kubernetes v2.13.1
provider registry.terraform.io/hashicorp/null v3.1.1
Regards.

Connect Apache Airflow to services that are behind an API gateway

How can I connect Airflow to a service that is accessible through an API gateway? I can't figure out how to create an Airflow connection and add the path to the service to the gateway's address.
When creating a new connection in Airflow's Admin tab, there are fields for port, hostname and extra params.
The Extra field works as JSON, where you add extra parameters to the connection string.
Example:
Extra:
{
"param1": "val1",
"param2": "val2"
}
The connection URL ends up looking like:
my-conn-type://login:password@<hostname>:<port>/param1=val1&param2=val2
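For example, I believe a connection with such extra params can be created from the CLI like this (Airflow 2.x flags; the connection id and values are placeholders):

# hypothetical example of creating a connection with extra parameters
airflow connections add my_gateway_conn \
    --conn-type http \
    --conn-host hostname \
    --conn-port 8444 \
    --conn-login login \
    --conn-password password \
    --conn-extra '{"param1": "val1", "param2": "val2"}'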
In my case I am trying to access a Livy server that is behind an Apache Knox API gateway, so the URL for accessing the service looks like this:
https://login:password@hostname:8444/gateway/cdp-proxy-api/livy
I can't find documentation for this in Airflow's docs either.
(Probably it's a newbie question, sorry.)
Thanks!

How to resolve 'error - InvalidDatasourceError: Datasource URL should use prisma'

So I am using Prisma as an ORM on my project to communicate with the database that I set up on AWS. Not happy with the AWS service, I am now switching my database to railway.app, which is working out well for me. However, I had set up a Prisma Data Proxy on my app with the AWS connection string, and now that I don't want/need it anymore I removed it, but I am getting an error:
error - InvalidDatasourceError: Datasource URL should use Prisma:// protocol.
If you are not using the Data Proxy, remove the data proxy from the preview features in your
schema and ensure that PRISMA_CLIENT_ENGINE_TYPE environment variable is not set to data proxy.
Since getting the error I have removed previewFeatures = ["dataProxy"] from the schema.prisma file to make it look like this (back to what it was before configuring the Data Proxy):
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
but the error still persists. How do I fix this?
Running prisma generate fixes this issue.
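For example (assuming the Prisma CLI is available as a dev dependency):

# regenerate the Prisma client after removing the dataProxy preview feature
npx prisma generate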

How to route requests to desired endpoint using Environment Variables in APIGEE

I have a situation where I need to route requests to the desired endpoint based on the environment the request hits, for example QA to QA and Prod to Prod.
I've configured a proxy and defined a default target host during initial config.
Then I'm using a JavaScript policy to decide the target host based on the environment the request comes in on.
var env = context.getVariable('environment.name');
if (env == "prod") {
  var host = 'https://prod.com';
}
if (env == "test") {
  var host = 'https://qa.com';
}
I've used this JS file as a step in the target endpoint (default) PreFlow.
I see that all requests are sent to the default host that I configured during the initial setup.
Am I missing something here? Please help.
I've also read about using the Target Server environment config. I've configured the hosts, but how do I reference/use them in my proxy?
I usually set the target endpoint (the same as your host) in a Key Value Map under 'Environment Configuration' in Apigee.
Then assign it to a variable (for example, a variable named endpointUrl) in a Key Value Map Operations policy, as in the sketch below.
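A minimal, untested sketch of such a policy (the policy name, map identifier and key are placeholders):

<KeyValueMapOperations async="false" continueOnError="false" enabled="true"
                       name="KVM-Get-Endpoint" mapIdentifier="environment-config">
  <!-- Read the key "targetEndpoint" from the KVM and assign it to the variable endpointUrl -->
  <Get assignTo="endpointUrl">
    <Key>
      <Parameter>targetEndpoint</Parameter>
    </Key>
  </Get>
  <Scope>environment</Scope>
</KeyValueMapOperations>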
Finally, use it in your Target Request Message like below.
<AssignVariable>
  <Name>target.url</Name>
  <Ref>endpointUrl</Ref>
</AssignVariable>
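For context, that AssignVariable element would typically sit inside an AssignMessage policy attached to the target endpoint request flow; a rough sketch (the policy name is a placeholder):

<AssignMessage async="false" continueOnError="false" enabled="true" name="AM-Set-Target-Url">
  <!-- Overwrite target.url with the value read from the KVM -->
  <AssignVariable>
    <Name>target.url</Name>
    <Ref>endpointUrl</Ref>
  </AssignVariable>
</AssignMessage>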
The advantage of this method is that if your host changes, you just edit the value in the Key Value Map rather than editing your code, and you do not need to re-deploy your API.
However, this answer is based on my work experience only.
You may also want to try the Apigee Community, where you may find a solution that suits you.

Telegraf - how to monitor multiple Tomcat instances?

I managed to gather data from a single Tomcat instance into Telegraf as follows.
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:19090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
## Request timeout
# timeout = "5s"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
Now I want to monitor multiple Tomcat instances, but there does not seem to be an example of how to do that. Does anybody know?
The answer turned out to be very simple. Just declare the inputs.tomcat block multiple times as follows.
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:19090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:29090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
As far as I recall, there are a couple of ways.
1) The easiest way is to use separate configuration files: create tomcat1.conf and place it under the /etc/telegraf/telegraf.d/ folder, using the same plugin you mentioned above (inputs.tomcat), and similarly create tomcat2.conf and so on for all your Tomcat instances. That way you can monitor multiple Tomcat instances. See if that helps! The con of this approach is that you have to create N tomcatXX.conf files under the telegraf.d folder (which can easily be fixed by creating these files on the fly while provisioning a machine with Ansible or similar tools, templating the file and iterating over the tomcatXX list).
2) The other way, which may help as well, uses just one configuration file.
In that one configuration file, use the following plugins together to capture what you are looking for. PS: if you use the inputs.exec plugin, the output generated by your custom script (the one you call from inputs.exec) must be in a known format (InfluxDB line protocol; a sketch follows the plugin list below) that Telegraf and InfluxDB can understand and store, or you'll see some minor errors, for which you can see a few of my other posts.
exec plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
http_* plugin (especially http_response): https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response
filestat plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat
logparser plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser
procstat plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat
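As mentioned above, a custom script called from inputs.exec would need to emit InfluxDB line protocol, roughly like this (the measurement, tag and field names here are made up):

tomcat_custom,instance=tomcat1 heap_used=123456i,threads_busy=4i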
Look at the plugin links mentioned above for what they do and how to set them up in Telegraf; that should get you most of what you are looking for if you don't want a separate conf file for each Tomcat instance.
https://github.com/influxdata/telegraf/tree/master/plugins/inputs contains all input plugins (see if there are some that you may be interested in).
See if you can use the prefix property efficiently to distinguish between the various metrics/events coming from these plugins.
