What's the aws cli command to create the default EMR-managed security groups?

When using the EMR web console, you can create a cluster and AWS automatically creates the EMR-managed security groups named "ElasticMapReduce-master" & "ElasticMapReduce-slave". How do you create those via the aws cli?
I found aws emr create-default-roles but there's no aws emr create-default-security-groups.

As of right now, it looks like you can't. See http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-man-sec-groups.html section "To specify Amazon EMR–managed security groups using the AWS CLI":
Amazon EMR–managed security groups are not supported in the Amazon EMR CLI.
Also see https://github.com/aws/aws-cli/issues/2485
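As a workaround sketch (the VPC, subnet, and security-group IDs below are placeholders), you can create your own security groups with aws ec2 create-security-group and pass them to aws emr create-cluster via --ec2-attributes, which accepts EmrManagedMasterSecurityGroup and EmrManagedSlaveSecurityGroup:
aws ec2 create-security-group --group-name my-emr-master --description "EMR master" --vpc-id vpc-11111111
aws ec2 create-security-group --group-name my-emr-core --description "EMR core/task" --vpc-id vpc-11111111
aws emr create-cluster --release-label emr-5.36.0 --use-default-roles \
  --instance-type m5.xlarge --instance-count 3 \
  --ec2-attributes SubnetId=subnet-22222222,EmrManagedMasterSecurityGroup=sg-33333333,EmrManagedSlaveSecurityGroup=sg-44444444
Whether EMR populates rules in groups you pass this way exactly like the auto-created ElasticMapReduce-master/slave groups is worth verifying against the docs linked above.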

Related

How can I run DynamoDB on my local machine with a serverless API structure (Node.js)?

I just want to run my serverless APIs locally. How can I achieve that? Do I need to run DynamoDB locally, or can I achieve this without configuring DynamoDB locally?
I was trying the DynamoDB Local plugin, but I want DynamoDB to run on AWS while my serverless APIs run on my local machine.
You can use environment variables to specify the AWS access key and secret key that your locally running service should use to connect to the DynamoDB service on AWS.
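For example, a minimal sketch (assuming the serverless-offline plugin is installed and using placeholder credentials) that runs the APIs locally while the DynamoDB calls go to AWS:
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX        # placeholder
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx    # placeholder
export AWS_REGION=us-east-1
serverless offline                                   # handlers run locally, DynamoDB SDK calls hit AWS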

Can I use the NebulaGraph Dashboard service deployed on AWS to manage a NebulaGraph database node on another cloud such as Google Cloud?

I deployed a Nebula Graph database Enterprise cluster on AWS according to their doc here. It has a NebulaGraph Dashboard service that seems to be able to manage different NebulaGraph nodes.
Does anyone know if I can use Dashboard to manage my NebulaGraph database on GCP?
It's technically doable, but not worth the hacking.
Dashboard can:
1. do lifecycle management (including scale-in/scale-out and stop/start) of services on hosts via SSH
2. do observability/monitoring of services/hosts via exporters (node or nebulagraph)
3. do NebulaGraph-related operations via GraphClient
In theory, as long as 2. and 3. are reachable network-wise, there is no blocking issue.
For 1., however, apart from the network perspective, lifecycle management (scale-out, for instance) overlaps with the cloud infrastructure orchestration (CloudFormation, Terraform, etc.). Until that is integrated (i.e., the Dashboard calls CloudFormation stacks or Terraform to provision nodes and/or service binaries), the scaling feature cannot be used from the Dashboard.

How to configure aws sso for terraform?

I have been using AWS as the cloud provider and Terraform as IaC. It's very annoying to copy and paste the credentials frequently. Is there any solution or workaround for this, such as using AWS SSO?
Premise
It was my understanding that there is a current issue between AWS SSO (authentication v2) and Terraform: only V1 authentication (access key and secret key) is reliably accepted.
For example, this open PR, this issue, or this ongoing referenced merge.
Work Around
There are a couple of projects that circumvent this issue by generating V1 creds from AWS SSO.
The one I use is a PyPI library called yawsso.
Try this:
pip3 install yawsso
yawsso login # this will authenticate - you no longer need to run 'aws sso login'
Note
Just make sure you use the right profile with export AWS_PROFILE=foo where "foo" would be in ~/.aws/config as [profile foo]
Bonus
yawsso will log you in on all profiles listed in the AWS config file, so you don't need to log in to each profile required at work one by one.
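A minimal sketch of the whole flow, assuming an SSO profile named foo in ~/.aws/config (the start URL, account ID, and role name are placeholders):
[profile foo]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111122223333
sso_role_name  = MyRole
region         = us-east-1
Then:
yawsso login                     # authenticates and syncs V1 credentials into ~/.aws/credentials
export AWS_PROFILE=foo
terraform init && terraform plan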

How to configure usage of AWS profile during amplify init?

During amplify init there is a question:
"Do you want to use AWS profile"
What is "AWS profile in this context"? When should i choose yes, and when no? What is the decision impact on the project?
After installing the AWS CLI, you can configure it using the aws configure command. You provide an access key, a secret access key, and a default region. Once you are done, this creates a default profile for your CLI. All your aws commands use credentials from this default profile, and your amplify init command refers to this profile.
You can have multiple AWS profiles for your CLI to use.
Coming to your question:
1) If your default AWS profile is configured for the same account where you want to deploy your Amplify project, you can say yes to that question.
2) If you are not sure what is in your default profile, you can opt for no and provide the access key, secret key, and other information yourself.
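For example, a sketch ("amplify-dev" is just an illustrative profile name) of keeping a separate named profile and picking it during init:
aws configure                        # creates/updates the [default] profile
aws configure --profile amplify-dev  # creates an additional named profile
amplify init                         # answer "yes" and select amplify-dev (or default) from the list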
Hope this will clear your doubt.

How to allow access to a Compute Engine VM in Airflow (Google Cloud Composer)

I am trying to run a bash command of the form ssh user@host "my bash command" using a BashOperator in Airflow. This works locally because my public key is on the target machine.
But I would like to run this command in Google Cloud Composer, which is Airflow + Google Kubernetes Engine. I understand that Airflow's core program runs in 3 pods named according to the pattern airflow-worker-xxxxxxxxx-yyyyy.
A naive solution was to create an SSH key for each pod and add its public key to the target machine in Compute Engine. That solution worked until today, when my 3 pods somehow changed and my SSH keys were gone. It was definitely not the best solution.
I have 2 questions:
Why has Google Cloud Composer changed my pods?
How can I resolve my issue?
Pod restarts are not specific to Composer. I would say this is more related to Kubernetes itself:
Pods aren’t intended to be treated as durable entities.
So in general, pods can be restarted for different reasons, and you shouldn't rely on any changes that you make on them.
How can I resolve my issue?
You can solve this by taking into account that Cloud Composer creates a Cloud Storage bucket and links it to your environment. You can access the different folders of this bucket from any of your workers, so you could store your key (a single key pair is enough) in "gs://bucket-name/data", which you can access through the mapped directory "/home/airflow/gcs/data". Docs here
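For example, a sketch (the key file name, bucket name, and host are placeholders): upload the private key to the data folder once, then reference it through the mapped path in your bash_command:
gsutil cp id_rsa gs://bucket-name/data/id_rsa
ssh -i /home/airflow/gcs/data/id_rsa -o StrictHostKeyChecking=no user@host "my bash command"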
