How to get Redshift snapshots with Terraform - terraform-provider-aws

Terraform provides data sources to look up RDS snapshots: aws_db_cluster_snapshot and aws_db_snapshot.
How can I get the cluster snapshots for a Redshift cluster in Terraform?
Thanks,
Bob

You can't (yet).
My workaround is to run:
aws redshift describe-cluster-snapshots --cluster-identifier ${cluster_identifier} --max-items 1 --query 'Snapshots[0].SnapshotIdentifier' --region ${aws_region}
As we run Terraform on Jenkins, it's a reasonable workaround. Alternatively, you could run something similar as a null_resource provisioner, output the result to a file, then use Terraform to read that file:
resource "null_resource" "redshift_snap" {
provisioner "local-exec" {
when = "create"
command = "aws redshift describe-cluster-snapshots --cluster-identifier ${var.cluster_identifier} --max-items 1 --query 'Snapshots[0].SnapshotIdentifier' --region ${var.region} > snapshot_identifier.txt"
}
}
snapshot_identifier = file("${path.module}/snapshot_identifier.txt")
I know it's a bit dirty, but it should work until Terraform releases a data source for Redshift snapshots.
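
If you would rather resolve the value at plan time than read it back from a file, another option is to wrap the same CLI call in a small script and feed it to Terraform's external data source. A minimal Python sketch, assuming boto3 is available where Terraform runs (the script name and result key below are my own, not from the answer above):

#!/usr/bin/env python3
# latest_redshift_snapshot.py - emit the newest snapshot id as JSON for
# Terraform's "external" data source (the query arrives as JSON on stdin).
import json
import sys

import boto3

query = json.load(sys.stdin)                      # {"cluster_identifier": "..."}
client = boto3.client("redshift")
snapshots = client.describe_cluster_snapshots(
    ClusterIdentifier=query["cluster_identifier"]
)["Snapshots"]
latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"]) if snapshots else {}
# The external data source expects a flat JSON object of string values.
print(json.dumps({"snapshot_identifier": latest.get("SnapshotIdentifier", "")}))

On the Terraform side you would declare a data "external" block whose program runs this script and whose query passes cluster_identifier, then reference data.external.<name>.result.snapshot_identifier.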

Related

Ansible Ad-Hoc command with ssh keys

I would like to set up Ansible on my Mac. I've done something similar in GNS3 and it worked, but here there are more factors I need to take into account. So, I have Ansible installed. I added hostnames in /etc/hosts and I can ping using the hostnames I provided there.
I have created an ansible folder which I am going to use and put ansible.cfg inside:
[defaults]
hostfile = ./hosts
host_key_checking = false
timeout = 5
inventory = ./hosts
In the same folder I have hosts file:
[tp-lab]
lab-acc0
When I try to run the following command: ansible tx-edge-acc0 -m ping
I am getting the following errors:
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Unhandled error in Python interpreter discovery for host tx-edge-acc0: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: Platform unknown on host tx-edge-acc0 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
tx-edge-acc0 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to tx-edge-acc0 closed.\r\n",
"module_stdout": "\r\nerror: unknown command: /bin/sh\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
Any idea what might be the problem here? Much appreciated.
At first glance, it seems that your Ansible controller does not load the configuration files (especially ansible.cfg) when the playbook is fired.
(From the documentation) Ansible searches for configuration files in the following order, processing the first file it finds and ignoring the rest:
$ANSIBLE_CONFIG if the environment variable is set.
ansible.cfg if it’s in the current directory.
~/.ansible.cfg if it’s in the user’s home directory.
/etc/ansible/ansible.cfg, the default config file.
Edit: For peace of mind, it is good to use full paths.
EDIT: Based on the comments:
$ cat /home/ansible/ansible.cfg
[defaults]
host_key_checking = False
inventory = /home/ansible/hosts # <-- use full path to inventory file
$ cat /home/ansible/hosts
[servers]
server-a
server-b
Command & output:
# Supplying inventory host group!
$ ansible servers -m ping
server-a | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
server-b | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}

Boto3 unable to connect to local DynamoDB running in Docker container

I'm at a complete loss. I have a Docker container running DynamoDB locally. From the terminal window, I run:
docker run -p 8010:8000 amazon/dynamodb-local
to start the container. It starts fine. I then run:
aws dynamodb list-tables --endpoint-url http://localhost:8010
to verify that the container and the local instance are working fine. I get:
{
"TableNames": []
}
That's exactly what I expect. It tells me that the aws client can connect to the local DB instance properly.
Now the problem. I get to a python shell, and type:
import boto3
db = boto3.client('dynamodb', region_name='us-east-1',
                  endpoint_url='http://localhost:8010', use_ssl=False, verify=False,
                  aws_access_key_id='my_secret_key',
                  aws_secret_access_key='my_secret_access_key')
print(db.list_tables())
I get a ConnectionRefusedError. I have tried the connection with and without the secret keys, with and without use_ssl and verify, and nothing works. At this point I'm thinking it must be a bug with boto3. What am I missing?
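Since the CLI call succeeds from the terminal while the Python shell gets ConnectionRefusedError, one quick way to narrow it down (a hedged diagnostic, not a fix) is to check from the same interpreter whether the mapped port is reachable at all, for example in case that shell runs in a different network namespace or container:

# Sanity check: can this interpreter open a TCP connection to the port
# the CLI used? If this prints False, the refusal happens below boto3.
import socket

def port_is_reachable(host="localhost", port=8010, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_is_reachable())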

Showing error while mounting EFS to an instance in my Elastic Beanstalk environment

I followed this procedure for attaching the EFS file system to instances created using Elastic Beanstalk:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-mount-efs-volumes/
But the Elastic Beanstalk logs show the following error:
[Instance: i-06593*****] Command failed on instance. Return code: 1 Output: (TRUNCATED)...fs ... mount -t efs -o tls fs-d9****:/ /efs Failed to resolve "fs-d9****.efs.us-east-1.amazonaws.com" - check that your file system ID is correct. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. ERROR: Mount command failed!. command 01_mount in .ebextensions/storage-efs-mountfilesystem.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I just used **** in the EFS ID for security.
Based on the comments:
The solution was to create a new EFS file system instead of using the original one.
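As a quick hedged check of the "file system ID is correct" hint in the mount error, boto3 can confirm whether a given ID exists in the region the environment runs in (the ID below is a placeholder, not the real one from the question):

# Does this EFS ID exist in this region? Placeholder ID; region taken from the error message.
import boto3
from botocore.exceptions import ClientError

efs = boto3.client("efs", region_name="us-east-1")
try:
    fs = efs.describe_file_systems(FileSystemId="fs-d9xxxxxx")["FileSystems"][0]
    print(fs["FileSystemId"], fs["LifeCycleState"])
except ClientError as err:
    # A FileSystemNotFound error here matches the DNS-resolution failure above.
    print(err.response["Error"]["Code"])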

How to setup AWS KMS on Airflow?

Can you please advise whether Airflow supports AWS KMS server-side encryption? If yes, is there any documentation on how to set it up? I am using Airflow version 1.9.0.
I tried creating an S3 connection with extra args like:
{"aws_access_key_id":"xx", "aws_secret_access_key": "xx", "sse": "aws:kms", "sse-kms-key-id": "xx"}
and used the S3 hook to upload a file in the code, but it is throwing this error:
an error occurred (accessdenied) when calling the createmultipartupload operation access denied
Whereas s3 cp from the command line did work:
aws s3 cp test.txt s3://xxx/xx/test.txt --sse aws:kms --sse-kms-key-id "xx"
upload: ./test.txt to s3://xxx/xx/test.txt
Thanks in advance.
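For what it's worth, the working CLI upload above corresponds to this plain boto3 call (bucket, key and KMS key ARN are placeholders); running it with the same credentials Airflow uses can show whether the access-denied error comes from the KMS key policy rather than from Airflow itself:

# boto3 equivalent of: aws s3 cp test.txt s3://xxx/xx/test.txt --sse aws:kms --sse-kms-key-id "xx"
# Bucket, key and KMS key ARN below are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="test.txt",
    Bucket="my-bucket",
    Key="xx/test.txt",
    ExtraArgs={
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/placeholder",
    },
)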

sparklyr - Connect remote hadoop cluster

Is it possible to connect sparklyr to a remote Hadoop cluster, or is it only possible to use it locally?
And if it is possible, how? :)
In my opinion, the connection from R to Hadoop via Spark is very important!
Do you mean a Hadoop or a Spark cluster? If Spark, you can try to connect through Livy; details here:
https://github.com/rstudio/sparklyr#connecting-through-livy
Note: Connecting to Spark clusters through Livy is under experimental development in sparklyr
You could use Livy, which is a REST API service for the Spark cluster.
Once you have set up your HDInsight cluster on Azure, check for the Livy service using curl:
#curl test
curl -k --user "admin:mypassword1!" -v -X GET "https://<yourclustername>.azurehdinsight.net/livy/sessions"
# RStudio code
sc <- spark_connect(
  master = "https://<yourclustername>.azurehdinsight.net/livy/",
  method = "livy",
  config = livy_config(
    username = "admin",
    password = rstudioapi::askForPassword("Livy password:")
  )
)
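
The same reachability check as the curl test, done from Python (cluster name and credentials are the same placeholders used above):

# Mirrors the curl test: list Livy sessions on the HDInsight cluster.
import requests

resp = requests.get(
    "https://<yourclustername>.azurehdinsight.net/livy/sessions",
    auth=("admin", "mypassword1!"),
    verify=False,  # same as curl -k; drop this once the certificate is trusted
)
print(resp.status_code, resp.json())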
A useful URL:
https://learn.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-livy-rest-interface
