How to add the EC2 metadata flag http-tokens: required at the time of EC2 creation using a Salt script - salt-stack

How do I configure the instance metadata option http-tokens: required at the time of EC2 creation using a Salt script?
The official AWS documentation shows that IMDSv2 can be enforced at instance creation time, but I did not find the proper parameter in Salt:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-options.html
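For reference, the setting described on that page maps to the MetadataOptions block of the EC2 RunInstances API, which is what any Salt module or cloud profile would ultimately have to pass through; whether your Salt version exposes it is the open question here. A minimal boto3 sketch of the underlying call, where the AMI ID, instance type and region are placeholders:

# Sketch of the underlying EC2 API call that enforces IMDSv2 at creation time.
# AMI ID, instance type and region are placeholders, not values from the question.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    MetadataOptions={
        "HttpTokens": "required",       # IMDSv2 only
        "HttpEndpoint": "enabled",
        "HttpPutResponseHopLimit": 1,
    },
)

# For an instance that already exists, the same option can be enforced with
# ec2.modify_instance_metadata_options(InstanceId="i-...", HttpTokens="required").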

Related

Managing DBT profile file in MWAA

I would like to use dbt in an MWAA Airflow environment. To achieve this I need to install dbt in the managed environment and from there run the dbt commands via the Airflow operators or CLI (BashOperator).
My problem with this solution is that I need to store the dbt profile file(s) - which contain the target / source database credentials - in S3. Otherwise the file is not going to be deployed to the Airflow worker nodes and hence cannot be used by dbt.
Is there any other option? I feel this is a big security risk and it also undermines the use of Airflow (because I would like to use its built-in password manager).
My ideas:
Create the profile file on the fly in the Airflow DAG as a task and write it out locally. I do not think this is a feasible workaround, because there is no guarantee that the dbt task is going to run on the same worker node on which my code created the file.
Move the profile file manually to S3 (Exclude it from CI/CD). Again, I see a security risk, as I am storing credentials on S3.
Create a custom operator which builds the profile file on the same machine the command will run on. Maintenance nightmare.
Use MWAA environment variables (https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html) and combine them with dbt's env_var function (https://docs.getdbt.com/reference/dbt-jinja-functions/env_var). Storing credentials in system-wide environment variables this way feels awkward.
Any good ideas or best practices?
@PeterRing, in our case we use dbt Cloud. Once the connection is set up in the Airflow UI, you call dbt Cloud job IDs to trigger the job (then use a sensor to monitor it until it completes).
If you can't use dbt Cloud, perhaps you can use AWS Secrets Manager to store your db profile/creds: Configuring an Apache Airflow connection using a Secrets Manager secret
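If dbt Cloud is not an option, one middle ground between the env_var idea and Secrets Manager is to keep the credentials in an Airflow connection (which MWAA can back with Secrets Manager) and hand them to dbt as environment variables, so profiles.yml only contains env_var() lookups. A rough sketch, where the connection id "dbt_warehouse", the DAG id, the profiles path and the variable names are all assumptions (Airflow 2.x import paths; append_env needs Airflow 2.3+):

# Rough sketch: read credentials from an Airflow connection (optionally backed
# by Secrets Manager on MWAA) and expose them to dbt as environment variables,
# so profiles.yml only references {{ env_var('DBT_USER') }} and friends.
# Connection id, DAG id, paths and variable names are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.hooks.base import BaseHook
from airflow.operators.bash import BashOperator

# Resolved at DAG parse time; the values are baked into the task's env dict.
conn = BaseHook.get_connection("dbt_warehouse")

with DAG("dbt_run", start_date=datetime(2023, 1, 1), schedule_interval=None) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --profiles-dir /usr/local/airflow/dags/dbt",
        env={
            "DBT_HOST": conn.host or "",
            "DBT_USER": conn.login or "",
            "DBT_PASSWORD": conn.password or "",
        },
        append_env=True,   # Airflow 2.3+: keep PATH etc. from the worker environment
    )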

AWS MSK - Update storage volume size by modifying the CF

I have created an MSK cluster and all the associated configuration using a CloudFormation template. Now I have to update the stack with more storage. Can I just modify the template to specify the new storage requirement and run the stack update again? Will it create a new MSK cluster?
Updating the version in the CF manifest will update the Kafka version in place (supported since May 2021).
Be sure that you have enough free disk space for the update, though; otherwise you'll get the error: Amazon CloudFormation wasn't able to complete the request because of an issue with the server.
This operation will never create a new cluster, since the cluster ID stays the same the whole time.
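To make the storage change concrete: on the AWS::MSK::Cluster resource the broker volume size sits under BrokerNodeGroupInfo / StorageInfo / EBSStorageInfo, so bumping that one property and re-running the stack update resizes the volumes without replacing the cluster. A trimmed template fragment, with all names, subnets and sizes as placeholders:

# Trimmed CloudFormation fragment; names, subnets and sizes are placeholders.
Resources:
  MSKCluster:
    Type: AWS::MSK::Cluster
    Properties:
      ClusterName: my-msk-cluster
      KafkaVersion: 2.8.1
      NumberOfBrokerNodes: 3
      BrokerNodeGroupInfo:
        InstanceType: kafka.m5.large
        ClientSubnets:
          - subnet-aaaa1111
          - subnet-bbbb2222
          - subnet-cccc3333
        StorageInfo:
          EBSStorageInfo:
            VolumeSize: 1000   # the only property changed for this update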

How to connect to postgres read replica in Hasura

I have the main postgres database running with Hasura. I want to add a new Hasura service that connects only to the read replica.
But I am getting this error:
..."hint":"You can use REPEATABLE READ instead.","message":"cannot use serializable mode in a hot standby","status_code":"0A000"...
I also tried adding --tx-iso=repeatable-read but no luck.
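For context, the error comes from Postgres itself rather than from Hasura: a hot-standby replica refuses SERIALIZABLE transactions, which is why the hint suggests REPEATABLE READ. A minimal psycopg2 sketch (the connection string is a placeholder) reproducing the behaviour the --tx-iso setting is meant to control:

# Minimal sketch of the underlying Postgres behaviour; the DSN is a placeholder.
# Run against a hot-standby replica to reproduce the 0A000 error.
import psycopg2
from psycopg2 import extensions

conn = psycopg2.connect("host=replica.example.com dbname=app user=readonly")

# SERIALIZABLE: the first statement of the transaction fails on a hot standby.
conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_SERIALIZABLE)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
except psycopg2.errors.FeatureNotSupported as exc:
    print(exc.pgcode, exc)   # 0A000: cannot use serializable mode in a hot standby
    conn.rollback()

# REPEATABLE READ (the hint in the error message) is accepted on the replica.
conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_REPEATABLE_READ)
with conn.cursor() as cur:
    cur.execute("SELECT 1")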

Airflow logs in s3 bucket

I would like to write the Airflow logs to S3. Following are the parameters that we need to set according to the docs:
remote_logging = True
remote_base_log_folder =
remote_log_conn_id =
If Airflow is running in AWS, why do I have to pass the AWS keys? Shouldn't the boto3 API be able to write/read to S3 if the correct permissions are set on the IAM role attached to the instance?
Fair point, but I think it allows for more flexibility if Airflow is not running on AWS, or if you want to use a specific set of credentials rather than give the entire instance access. It might also have been an easier implementation, because the underlying code for writing logs into S3 uses the S3Hook (https://github.com/apache/airflow/blob/1.10.9/airflow/utils/log/s3_task_handler.py#L47), which requires a connection id.
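To make that concrete: if the connection named in remote_log_conn_id carries no stored credentials, the S3Hook falls back to boto3's default credential chain, which picks up the instance profile or MWAA execution role, so no keys need to be written anywhere. A sketch of the relevant settings, where the bucket name and connection id are placeholders (in Airflow 1.10 these live under [core], in 2.x under [logging]):

# airflow.cfg sketch; the bucket name and connection id are placeholders.
remote_logging = True
remote_base_log_folder = s3://my-airflow-logs/dag-logs
remote_log_conn_id = aws_default
encrypt_s3_logs = False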

Instance creation in Openstack Nova - Logfile

I need to keep track of instance creation in OpenStack Nova.
That is, I need to perform some special operations on creation of a new instance in OpenStack.
So I need to know where all the details get stored (in a log file).
Please can someone guide me regarding the log file for tracking instance creation, or some other way to track the same.
As far as I am aware, you have to look in the following services' log files:
nova-scheduler (often installed on the controller node). This will show which 'server' will host the newly created Virtual Machine.
The logs of the nova-compute service running on the host where the Virtual Machine was instantiated.
You can additionally check the logs of qemu and libvirt (again, on the host where the Virtual Machine was instantiated).
Keep in mind that the info you will find there depends on the 'logging level' you have set in each service's configuration file. For more information about how you can configure the OpenStack components' logging, refer to the official documentation, "Logging and Monitoring".
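Besides the log files, if "some other way to track the same" is acceptable, the Compute API itself can be polled for newly created servers, which avoids depending on log formats and logging levels. A rough sketch using openstacksdk, where the cloud name (from clouds.yaml) and the polling interval are assumptions:

# Rough sketch: poll the Compute API for new servers instead of parsing nova logs.
# The cloud name "mycloud" and the 30-second interval are assumptions.
import time

import openstack

conn = openstack.connect(cloud="mycloud")
seen = {server.id for server in conn.compute.servers()}

while True:
    for server in conn.compute.servers():
        if server.id not in seen:
            seen.add(server.id)
            # Perform the "special operations" for the new instance here.
            print(f"new instance: {server.name} ({server.id}), "
                  f"status={server.status}, created={server.created_at}")
    time.sleep(30)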
