Passing AWS ParameterStore values to Apache process in Fargate? - wordpress

I'm using AWS to start up an Apache+WordPress task. The WP username and password values are stored in AWS Parameter Store and passed to the Fargate container as environment variables. I can shell into the Fargate container (as root) via the AWS CLI and verify that the variables from Parameter Store are present in root's environment and correct.
The Apache/WP process (the www-data user) is configured to retrieve the WP_DB_PASSWORD variable from its environment in wp-config.php via PHP's getenv('WP_DB_PASSWORD') function. Previously (on-prem) this was achieved using /etc/apache2/envvars, but I'd rather not have the cleartext DB password lying on disk like that.
How do I get the values from Parameter Store passed to the www-data environment without writing a file somewhere?
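One way to avoid both the env-var hand-off and a cleartext file is to have the application fetch the parameter from SSM at startup. A minimal sketch in Python/boto3 for illustration (the parameter name is a placeholder; the AWS SDK for PHP exposes the equivalent GetParameter call that could be used from wp-config.php, and the ECS secrets mechanism discussed in a later answer on this page is another option):
import boto3

# Assumes the task role allows ssm:GetParameter (and kms:Decrypt for SecureString values).
ssm = boto3.client("ssm")

resp = ssm.get_parameter(
    Name="/wordpress/WP_DB_PASSWORD",  # hypothetical parameter name
    WithDecryption=True,               # decrypt a SecureString parameter
)
db_password = resp["Parameter"]["Value"]
This keeps the secret out of files and out of the Apache process environment entirely.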

Related

Managing DBT profile file in MWAA

I would like to use dbt in an MWAA (Managed Workflows for Apache Airflow) environment. To achieve this I need to install dbt in the managed environment and from there run the dbt commands via Airflow operators or the CLI (BashOperator).
My problem with this solution is that I need to store the dbt profile file(s), which contain the target/source database credentials, in S3. Otherwise the file is not going to be deployed to the Airflow worker nodes and hence cannot be used by dbt.
Is there any other option? I feel this is a big security risk, and it also undermines the use of Airflow (because I would like to use its built-in password manager).
My ideas:
Create the profile file on the fly in the Airflow DAG as a task and write it out locally. I do not think this is a feasible workaround, because there is no guarantee that the dbt task will run on the same worker node where my code created the file.
Move the profile file manually to S3 (excluding it from CI/CD). Again, I see a security risk, as I am storing credentials on S3.
Create a custom operator which builds the profile file on the same machine the command will run on. A maintenance nightmare.
Use MWAA environment variables (https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html) and combine them with dbt's env_var function (https://docs.getdbt.com/reference/dbt-jinja-functions/env_var).
Storing credentials in system-wide environment variables this way feels awkward, though.
Any good ideas or best practices?
@PeterRing, in our case we use dbt Cloud. Once the connection is set up in the Airflow UI, you call dbt Cloud job IDs to trigger the job (then use a sensor to monitor it until it completes).
If you can't use dbt Cloud, perhaps you can use AWS Secrets Manager to store your DB profile/credentials: Configuring an Apache Airflow connection using a Secrets Manager secret
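Building on idea 4 above and the Secrets Manager suggestion: credentials can stay in Airflow's connection store (optionally backed by Secrets Manager) and be exported to dbt's env_var() only for the lifetime of the task. A rough sketch, assuming Airflow 2.x on MWAA, dbt installed on the workers, a hypothetical connection id dbt_db, and a profiles.yml that reads env_var('DBT_USER') / env_var('DBT_PASSWORD'):
import os
import subprocess
from datetime import datetime

from airflow import DAG
from airflow.hooks.base import BaseHook
from airflow.operators.python import PythonOperator

def run_dbt(**_):
    # Fetch credentials from Airflow's connection store at task run time.
    conn = BaseHook.get_connection("dbt_db")  # hypothetical connection id
    env = os.environ.copy()
    env["DBT_USER"] = conn.login or ""
    env["DBT_PASSWORD"] = conn.password or ""
    # profiles.yml picks these up via dbt's env_var() function; the path is a placeholder.
    subprocess.run(
        ["dbt", "run", "--profiles-dir", "/usr/local/airflow/dbt"],
        env=env,
        check=True,
    )

with DAG("dbt_run", start_date=datetime(2023, 1, 1), schedule_interval=None, catchup=False) as dag:
    PythonOperator(task_id="dbt_run", python_callable=run_dbt)
This keeps the credentials out of S3 and out of system-wide environment variables; they exist only in the environment of the task process.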

Using R and paws: How to set credentials using profile in config file?

I use SSO and a profile as defined in ~/.aws/config (MacOS) to access AWS services, for instance:
aws s3 ls --profile myprofilename
I would like to access AWS services from within R, using the paws package. To do this, I need to set my credentials in the R code. I want to do so by referencing the profile in the ~/.aws/config file (as opposed to listing access keys in the code), but I haven't been able to figure out how.
I looked at the extensive documentation here, but it doesn't seem to cover my use case.
The best I've been able to come up with is:
x = s3(config = list(credentials = list(profile = "myprofilename")))
x$list_objects()
... which throws an error: "Error in f(): No credentials provided", suggesting that the first line of code above does not connect to my profile as stored in ~/.aws/config.
An alternative is to generate a user/key with programmatic access to your S3 data. Then, assuming that ~/.aws/env contains the values of the generated key:
AWS_ACCESS_KEY_ID=abc
AWS_SECRET_ACCESS_KEY=123
AWS_REGION=us-east-1
insert the following line at the beginning of your file:
readRenviron("~/.aws/env")
This AWS blog provides details about how to get temporary credentials for programmatic access. If you can get the credentials and set the appropriate environment variables, then the code should work fine without the profile name.
Alternatively, you can try the following if you can get temporary credentials using the AWS CLI.
Check if you can generate temporary credentials
aws sts assume-role --role-arn <value> --role-session-name <some-meaningful-session-name> --profile myprofilename
If you can execute the above successfully, then you can use this method to automate generating credentials before your code runs.
Put the above command in a bash script, get-temp-credentials.sh, and have it emit a JSON document containing the temporary credentials in the format described in the documentation.
Add a new profile, programmatic-access, to ~/.aws/config:
[profile programmatic-access]
credential_process = "/path/to/get-temp-credentials.sh"
Finally, update the code to use the profile name programmatic-access.
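If it helps, the credential_process helper does not have to be a bash script; here is a rough Python equivalent of get-temp-credentials.sh (the role ARN and session name are placeholders), emitting the JSON shape the CLI/SDKs expect:
#!/usr/bin/env python3
import json
import boto3

# Assume the role using whatever base credentials boto3 can already find.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/my-role",   # placeholder role ARN
    RoleSessionName="paws-programmatic-access",         # placeholder session name
)["Credentials"]

# credential_process output format: Version 1 plus the temporary key material.
print(json.dumps({
    "Version": 1,
    "AccessKeyId": creds["AccessKeyId"],
    "SecretAccessKey": creds["SecretAccessKey"],
    "SessionToken": creds["SessionToken"],
    "Expiration": creds["Expiration"].isoformat(),
}))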
If you have AWS CLI credentials set up as a named profile, e.g. in ~/.aws/config:
[profile myprof]
region=eu-west-2
output=json
.. and credentials, e.g. ~/.aws/credentials:
[myprof]
aws_access_key_id = XXX
aws_secret_access_key = xxx
.. paws will use these if you add a line to ~/.Renviron:
AWS_PROFILE=myprof

Web.Config transforms for Multi-Tenant deployment of WebForms app in docker over AWS ECS

Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS and thus has its own web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the docker environment (one container per client), each copy runs as the central root application, so the route becomes /.
Challenge
Since the root image is going to be the same, how can web.config be overridden for each client deployment?
We shouldn't create multiple images (one per client) as that will mean having extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage applicable at ECS level which can provide client-specific values upon loading of corresponding containers.
Presenting the approach we used to solve this issue. Hope it may help others stuck in similar situations.
With the problem statement tied to having a single root image and applying any customization at runtime, we knew there needed to be a transformation of web.config at the time the corresponding container loads.
The solution was a PowerShell script that reads the web.config and replaces the specific values whose keys carry a custom prefix. The values are passed in through custom environment variables in ECS, and the web.config was updated so that those keys include the prefix.
Since the docker container can have only a single entry point, a new base image was created which instantiated an IIS server and called a PowerShell script at startup. That script invoked the transformation script and then started ServiceMonitor for the IIS worker process (w3wp).
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
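For illustration only (the actual implementation above was a PowerShell script, and the prefix and file path here are assumptions), the transform logic amounts to something like this sketch in Python:
import os
import xml.etree.ElementTree as ET

PREFIX = "CLIENT_"  # hypothetical prefix embedded in the web.config keys

tree = ET.parse("web.config")
for node in tree.getroot().iter("add"):
    # appSettings entries use key=/value=, connectionStrings entries use name=/connectionString=.
    key = node.get("key") or node.get("name")
    if key and key.startswith(PREFIX) and key in os.environ:
        if node.get("value") is not None:
            node.set("value", os.environ[key])
        elif node.get("connectionString") is not None:
            node.set("connectionString", os.environ[key])
tree.write("web.config")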
I would use environment variables with a startup transform for this, as the OP suggests; however, I want to make the point that you do not want sensitive information, like DB passwords, in plain ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything although you can choose to only encrypt the sensitive information and leave the others clear.
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample Ruby code for creating the task definition:
# Fetch every parameter stored under the application's namespace in Parameter Store.
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# Map each parameter to an ECS secret: the last path segment becomes the ENV variable name.
secrets = params.map{ |p| { name: p.name.split("/")[-1], value_from: p.arn } }
# Attach the secrets to the first container definition before registering the task definition.
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html

How to add an Airflow Pool via environment variable?

Just like it is possible to set connections via environment variables following the naming convention AIRFLOW_CONN_{CONN_ID}, is there a way to set pools?
This is so I can set up a local Docker test environment with all configurations populated.

Can you FTP into an EC2 instance if you opted out of creating a key pair when you generated it?

I followed this AWS tutorial to get a WordPress site up and running, but it instructed me not to use the key pair option, so now I cannot follow those instructions to FTP in and make simple CSS etc. changes.
Before I blow up the whole instance, am I missing an approach that can make FTP possible?
If you skipped creating a key pair during instance launch, you can't connect to it. The only way to connect to that instance with (S)FTP now is to put a working key on its disk:
Stop the instance.
Detach the EBS volume and attach it to the instance that you can connect to.
Mount the volume and append your public key to the .ssh/authorized_keys file in the home directory of the user named bitnami.
Unmount the volume, detach it, and attach it back to the original instance.
It seems like it's easier to just recreate the instance, this time with a key pair.
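If you do want to script the detach/attach steps above, here is a rough boto3 sketch (instance IDs, volume ID, and device names are placeholders; copying the key in step 3 is still done by hand on the rescue instance):
import boto3

ec2 = boto3.client("ec2")

# 1. Stop the locked-out instance and wait for it to stop.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
ec2.get_waiter("instance_stopped").wait(InstanceIds=["i-0123456789abcdef0"])

# 2. Detach its EBS volume and attach it to an instance you can reach.
ec2.detach_volume(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("volume_available").wait(VolumeIds=["vol-0123456789abcdef0"])
ec2.attach_volume(VolumeId="vol-0123456789abcdef0",
                  InstanceId="i-0fedcba9876543210",
                  Device="/dev/sdf")

# 3. (Manual) mount the volume on the rescue instance, append the public key
#    to /home/bitnami/.ssh/authorized_keys, then unmount it.

# 4. Detach from the rescue instance, reattach to the original instance as its
#    root device, and start it again.
ec2.detach_volume(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("volume_available").wait(VolumeIds=["vol-0123456789abcdef0"])
ec2.attach_volume(VolumeId="vol-0123456789abcdef0",
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/xvda")  # the original root device name may differ
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])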
