AWS Multifactor Authentication and shiny-server - r

I have a shiny-server app deployed on an AWS EC2 instance.
The app uses the aws.s3 library to read from and write to an S3 bucket.
The problem is that, for company policy reasons, I have to enable MFA on the AWS IAM users.
If I enable MFA on the user that the shiny-server instance uses, the app can no longer download from or upload to the S3 bucket (permission denied).
R code to read from the S3 bucket:
Sys.setenv("AWS_ACCESS_KEY_ID"     = "ACCESSKEY",
           "AWS_SECRET_ACCESS_KEY" = "SECRETACCESSKEY",
           "AWS_DEFAULT_REGION"    = "REGION",
           "AWS_SESSION_TOKEN"     = "")

aws.s3::s3read_using(FUN, trim_ws = TRUE, object = "myobject")
Is there some way to download/upload S3 files through R? I can use methods other than this one, but I can't change the IAM policy.

You can improve on your approach here. You should not be using IAM Users to access S3 from EC2 instances, so there should not be a need for 2-factor authentication in the first place.
When accessing AWS services, prefer IAM Roles over IAM Users wherever possible. You can read more about the different identity types in the official docs here.
Among other things, the temporary credentials that an IAM Role provides are rotated automatically behind the scenes, so you never have to store or pass around user credentials. Because the credentials are short-lived, the impact of a compromise is reduced.
You can refer to this guide from AWS Knowledge Center for steps to get this up and running.
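As a rough illustration of what this looks like from R once an IAM Role (instance profile) with S3 permissions is attached to the EC2 instance - the bucket, object, and region below are placeholders, and this assumes the aws.ec2metadata package is installed so that aws.s3/aws.signature can discover the role's temporary credentials from the instance metadata service:

# Minimal sketch, assuming the instance has an instance profile granting S3 access.
# install.packages(c("aws.s3", "aws.ec2metadata", "readr"))
library(aws.ec2metadata)  # lets aws.signature pick up the role's credentials
library(aws.s3)

# No access keys and no session token anywhere: the role's temporary
# credentials are fetched (and rotated) via the instance metadata service.
Sys.setenv("AWS_DEFAULT_REGION" = "eu-west-1")

df <- s3read_using(
  FUN    = readr::read_csv,
  object = "myobject",   # placeholder
  bucket = "mybucket"    # placeholder
)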

Related

Grant access to Cloud Storage to my Firebase users

My application has Firebase users (i.e. users created in Firebase Authentication, NOT in Firebase IAM or in GCP IAM). These users are not linked to a Gmail or Google Workspace (formerly G Suite) account, and are not part of my organization.
I need to grant each of these users write access (not read) to a Cloud Storage bucket (1 user = 1 bucket), while not allowing any kind of access to that bucket to unauthenticated users or to other Firebase users.
How would I go about doing that?
I have tried verifying auth and generating a presigned URL from my Cloud Functions backend, but it has turned out a bit problematic with uploading thousands of files, which is why I'm looking at alternatives.
Time-limited access is not a requirement for me either way (I'm fine with users only having a few hours of access or having forever access). Also, if one bucket per user is too problematic, one folder per user, all inside the same bucket, would also be acceptable.
I know that in AWS I could use Cognito User Pools for the users, and then link the users to an Identity Pool so they can obtain temporary AWS credentials with the required scope, but I haven't been able to find the equivalent in GCP. The service comparison table hasn't helped in this regard.
I realize I might have the wrong idea in my head, coming from AWS. I don't mind if I have to link my Firebase users to GCP IAM users or to Firebase IAM users for this, though to me it sounds counter-intuitive, and I haven't found any info on that either. Maybe I don't even need GCP credentials, but I haven't found a way to do this with a bucket ACL either. I'm open to anything.
Since your users are signed in with Firebase Authentication, the best way to control their access is through security rules that sit in front of the files in your storage bucket when you access them through the Firebase SDK.
Some examples of common access patterns are only allowing the owner of a file to access it, or attribute- and role-based access control.
When implementing security rules, keep in mind that the download URLs you can generate through the Firebase SDK (if you have read access to a file) provide public read-only access to that file as well. These download URLs bypass the rules, so you should only generate them for files that you want to be publicly accessible to anyone who has the URL.

Assign GCP functions service account roles to engage with Firebase using Terraform

I want to use the Firebase Admin SDK in my GCP cloud function, specifically for creating custom auth tokens.
I was getting auth/insufficient-permission errors after deployment and got to this thread. Note that it talks about Firebase functions, while I use pure GCP Cloud Functions.
To my understanding, GCP Cloud Functions uses the default App Engine service account, which is missing the Firebase Admin SDK admin service agent role.
I manually added it through the GCP console and it seems to solve the issue, but now I want to automate it via terraform where I manage my infrastructure.
How do I access the default App Engine service account? I think it's auto created when the GCP project is created.
How do I add the relevant role to it without affecting other service accounts that use that role?
Is this the right approach, or is there a better way I'm missing?
The relevant documentation I was looking at is here. Note that I'm using initializeApp() without arguments, i.e. letting the library discover the service account implicitly.
How to get the default App Engine service account through Terraform: google_app_engine_default_service_account
How to work with 'additional' IAM roles assigned to a service account:
IAM policy for service account
As a general recommendation, I would prefer to use a purpose-created service account and completely delete (or disable) the default App Engine service account.
Edit ==> Additional details as requested
Here is a description of the Cloud Functions service account at runtime:
The App Engine service account has the Editor role, which allows it broad access to many Google Cloud services. While this is the fastest way to develop functions, Google recommends using this default service account for testing and development only. For production, you should grant the service account only the minimum set of permissions required to achieve its goal.
Thus, it may be useful to delete/disable the App Engine service account, create a dedicated service account for the given cloud function, assign it only the minimum set of relevant IAM roles, and use that instead.
As a side note, I would also suggest deleting/disabling the default Compute Engine service account and deleting the default network with all of its firewall rules and subnetworks, but that is a separate story.

AWS CloudWatch with mobile applications

I have a backend system built in AWS and I'm using CloudWatch across all of the services for logging and monitoring. I really like the ability to send structured JSON logs into CloudWatch that are consistent and provide a lot of context around the log message. Querying the logs to get to the root of an issue, or just exploring the health of the environment, is simple, which makes CloudWatch a must-have for my backend.
Now I'm working on the frontend side of things, mobile applications using Xamarin.Forms. I know AWS has Amplify but I really wanted to stick with Xamarin.Forms as that's a skill set I've already got and I'm comfortable with. Since Amplify didn't support Xamarin.Forms I've been stuck looking at other options for logging - one of them being Microsoft's AppCenter.
If I go the AppCenter route I'll end up having to build a mapping between the AppCenter installation identifier and my users across the AWS and AppCenter environments. Before I start down that path I wanted to ask a couple of questions around best practice and the security of an alternative approach.
I'm considering using the AWS SDK for .Net, creating an IAM Role with a Policy that allows for X-Ray and CloudWatch PUT operations on a specific log group and then assigning it to an IAM User. I can issue access keys for the user and embed them in my apps config files. This would let me send log data right into CloudWatch from the mobile apps using something like NLog.
I noticed with AppCenter I have to provide a client secret to the app, which wouldn't be any different than providing an IAM User access key to my app for pushing into CloudWatch. I'm typically a little shy about issuing access keys from AWS but as long as the Policy is tight I can't think of any negative side-effects... other than someone flooding me with log data should they pull the key out of the app data.
An alternative route I'm exploring is, instead of embedding the access keys in my config files, requesting them from my API services and holding them in memory. The only downside is that logging might be a pain when the user doesn't have internet connectivity (I'll need to look at how NLog handles sinks that aren't currently available - queueing and flushing).
Is there anything else I'm not considering or is this approach a feasible solution with minimal risk?

How do I pass secrets stored in AWS Secrets Manager to a Docker container in SageMaker?

My code is in R, and I need to access an external database. I am storing the database credentials in AWS Secrets Manager.
I first tried using the paws library to get AWS secrets in R, but that would require storing an access key, secret key, and session token, and I want to avoid that.
Is there a better way to do this? I have created an IAM role for SageMaker. Is it possible to pass secrets as environment variables?
Edit: I wanted to trigger a SageMaker Processing job.
I found a simple solution: environment variables can be passed via the SageMaker SDK, which minimizes the dependencies.
https://sagemaker.readthedocs.io/en/stable/api/training/processing.html
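The linked documentation covers the Python SageMaker SDK. Since the rest of this stack is in R, a rough equivalent sketch using the underlying CreateProcessingJob API via paws might look like the following (the image URI, role ARN, job name, and the DB_SECRET_NAME variable are all placeholders):

# Sketch only: passes an environment variable into the Processing container
# via the CreateProcessingJob API. All identifiers below are illustrative.
library(paws)

sm <- sagemaker(config = list(region = "eu-west-1"))

sm$create_processing_job(
  ProcessingJobName   = "my-r-processing-job",
  RoleArn             = "arn:aws:iam::123456789012:role/MySageMakerRole",
  AppSpecification    = list(ImageUri = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-r-image:latest"),
  ProcessingResources = list(ClusterConfig = list(
    InstanceCount  = 1L,
    InstanceType   = "ml.m5.xlarge",
    VolumeSizeInGB = 30L
  )),
  Environment         = list(DB_SECRET_NAME = "my-db-credentials")  # visible inside the container
)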
As another answer suggested, paws can also be used to get secrets from AWS; that would be a better approach.
You should be able to use Paws for this. According to the documentation, it will use the IAM role configured for your SageMaker instance:
If you are running the package on an instance with an appropriate IAM role, Paws will use it automatically and you don’t need to do anything extra.
You only have to add the relevant access permissions (e.g. allow secretsmanager:GetSecretValue for Secrets Manager, or ssm:GetParameters if you use Parameter Store) to the SageMaker IAM role.
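For example, a minimal sketch in R, assuming the job runs with a SageMaker execution role that allows secretsmanager:GetSecretValue (the secret name and region are placeholders):

# No access key, secret key, or session token set anywhere: paws picks up the
# execution role's temporary credentials from the container automatically.
library(paws)
library(jsonlite)

secrets <- secretsmanager(config = list(region = "eu-west-1"))

resp  <- secrets$get_secret_value(SecretId = "my-db-credentials")
creds <- fromJSON(resp$SecretString)   # e.g. a list with username/password fields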

How do I give an OpenStack server permissions to call the OpenStack APIs?

I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you would like a program running on an OpenStack server to be able to programmatically make changes through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long-running compute job. It would be great if the compute job by itself would be able to copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities, for example a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Your job would then use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
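If you do end up calling the APIs directly from R instead of via an SDK, a very rough sketch of the token request might look like the one below. It is based on the Rackspace Cloud Identity v2.0 endpoint and the RAX-KSKEY:apiKeyCredentials auth type; treat the endpoint, username, and API key as placeholders and check the current Rackspace docs before relying on it.

# Sketch only: exchange a username + API key for a token, then pass the token
# as X-Auth-Token on subsequent Cloud Block Storage / Cloud Files calls.
library(httr)

resp <- POST(
  "https://identity.api.rackspacecloud.com/v2.0/tokens",
  body = list(
    auth = list(
      `RAX-KSKEY:apiKeyCredentials` = list(
        username = "compute-job-user",   # placeholder
        apiKey   = "0000000000000000"    # placeholder
      )
    )
  ),
  encode = "json"
)

token <- content(resp)$access$token$id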
