How do I give an OpenStack server permissions to call the OpenStack APIs?

I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you want a program running on an OpenStack server to be able to make changes programmatically through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long-running compute job. It would be great if the compute job itself could copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.

Rackspace's Role-Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities, for example a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Your job would then use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
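For example, with the pyrax Python SDK (one option for Rackspace; the container, volume, and credential names below are placeholders, not from the question), a rough sketch of the end-of-job cleanup could look like this:

    # Rough sketch using the pyrax SDK; names and credentials are placeholders.
    import pyrax

    # Authenticate as the restricted RBAC user with its API key.
    pyrax.set_setting("identity_type", "rackspace")
    pyrax.set_credentials("compute-job-user", "RESTRICTED_USER_API_KEY")

    # Upload the result files to Cloud Files.
    cf = pyrax.cloudfiles
    container = cf.create_container("job-results")
    cf.upload_file(container, "/mnt/scratch/results.tar.gz")

    # Detach and delete the temporary Cloud Block Storage volume
    # (unmount it at the OS level first, e.g. with umount).
    cbs = pyrax.cloud_blockstorage
    for vol in cbs.list():
        if vol.name == "job-scratch-volume":
            vol.detach()
            vol.delete()

Because the job authenticates as the restricted user, it can only touch the services that user's role allows.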

Related

Grant access to Cloud Storage to my Firebase users

My application has Firebase users (i.e. users created in Firebase Authentication, NOT in Firebase IAM or in GCP IAM). These users are not linked to a Gmail or Google Workspace (formerly G Suite) account, and are not part of my organization.
I need to grant each of these users write access (not read) to a Cloud Storage bucket (1 user = 1 bucket), while not allowing any kind of access to that bucket to unauthenticated users or to other Firebase users.
How would I go about doing that?
I have tried verifying auth and generating a presigned URL from my Cloud Functions backend, but it has turned out to be a bit problematic when uploading thousands of files, which is why I'm looking at alternatives.
Time-limited access is not a requirement for me either way (I'm fine with users only having a few hours of access or having forever access). Also, if one bucket per user is too problematic, one folder per user, all inside the same bucket, would also be acceptable.
I know that in AWS I could use Cognito User Pools for the users, and then link the users to an Identity Pool so they can obtain temporary AWS credentials with the required scope, but I haven't been able to find the equivalent in GCP. The service comparison table hasn't helped in this regard.
I realize I might have the wrong idea in my head, coming from AWS. I don't mind if I have to link my Firebase users to GCP IAM users or to Firebase IAM users for this, though to me it sounds counter-intuitive, and I haven't found any info on that either. Maybe I don't even need GCP credentials, but I haven't found a way to do this with a bucket ACL either. I'm open to anything.
Since your users are signed in with Firebase Authentication, the best way to control their access is through security rules that sit in front of the files in your storage bucket when you access them through the Firebase SDK.
Some examples of common access patterns are only allowing the owner of a file to access it, or attribute- or role-based access control.
When implementing security rules, keep in mind that the download URLs you can generate through the Firebase SDK (if you have read access to a file) provide public read-only access to the file too. These download URLs bypass the rules, so you should only generate them for files that you want to be publicly accessible to anyone who has that URL.
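As an illustration only (the path layout and names here are assumptions, not from the question), a per-user folder inside a single bucket could be protected with rules along these lines, keyed on the Firebase Authentication UID:

    rules_version = '2';
    service firebase.storage {
      match /b/{bucket}/o {
        // Each user may write only under their own folder; reads are denied.
        match /uploads/{uid}/{fileName} {
          allow write: if request.auth != null && request.auth.uid == uid;
          allow read: if false;
        }
      }
    }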

How Can I Obtain GCP service account credentials on Google Cloud Run?

This page explains both:
Obtaining and providing service account credentials manually for developing locally, deploying on-premises, or deploying to another public cloud.
Obtaining credentials on Compute Engine, Kubernetes Engine, App Engine flexible environment, and Cloud Functions
But there is no mention of obtaining credentials on Cloud Run. I'd appreciate instructions for obtaining credentials and setting up firebase-admin initializeApp and firebase initializeApp for authentication on Cloud Run.
The documentation suggests that you can use the default service account just like other Google Cloud products as described here. The Firebase Admin SDK should use that account when initialized with no parameters.
There are also steps described if you want to use a non-default service account, which you can simply configure in the console or provide with gcloud.
If you must provide a file that's readable at runtime, you will have to deploy an image with that file added to it. There is no short set of steps to add that file: you will have to make your Docker build include it in a readable location, and your code will need to know where to look for it in order to load it.
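As a rough sketch (shown here with the Python Admin SDK; the Node.js initializeApp() behaves the same way), initializing with no explicit credentials lets the SDK pick up Application Default Credentials, which on Cloud Run resolve to the service account attached to the service:

    import firebase_admin
    from firebase_admin import firestore

    # With no credentials passed in, the Admin SDK falls back to Application
    # Default Credentials. On Cloud Run these come from the service account
    # attached to the service (the default compute service account unless a
    # different one was chosen at deploy time).
    app = firebase_admin.initialize_app()
    db = firestore.client()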

Which service account is used when running Firebase Cloud Functions?

I'm trying to create a scheduled Cloud Function that exports my Firestore database to create backups. The code runs fine when served on my local machine (which uses my personal user account with the owner role) but fails once deployed. I already found out that I need to add the 'Storage Admin' and 'Datastore Import Export Admin' roles to the service account used when running the cloud function, but I can't figure out which service account is used for the functions.
Does anyone know which service account is used?
Firebase Cloud Functions use the {project-id}@appspot.gserviceaccount.com service account (the App Engine default service account). Roles and permissions added to this service account carry over to the Cloud Functions runtime.
Good to know: When using Google Cloud Functions, the service account being used while running the function can be defined when deploying the function.
Nowadays you can also specify a custom service account with the runWith() method if you prefer not to use the default one; it accepts a number of RuntimeOptions that can be defined.
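If in doubt, one way to confirm which account a deployed function actually runs as is to query the metadata server from inside the function; a small sketch in Python (the same endpoint works from any runtime):

    import requests

    # Ask the metadata server for the identity this function is running as.
    email = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email",
        headers={"Metadata-Flavor": "Google"},
    ).text
    print("Running as:", email)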

Mount Azure File Storage using SAS token for authentication

The documentation shows how to connect using Storage Account Key:
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
That does work. However, I'd like to mount file storage using read-only SAS token.
Is this possible?
Unfortunately, no. You must use the storage account key when mounting Azure file shares, and anyone who has the storage account name and key has full permissions to manage and operate the file shares. From the feedback item we know that Microsoft has no plans to change this:
At the moment, Microsoft does not have plans to support SAS tokens with SMB access. Instead, we are looking into supporting AD integration for mounted file shares.
A different approach is possible, and it is secure: you still mount with CIFS (net use on Windows), but you store the credentials in Azure Key Vault. Mount the share at boot (with systemd), fetching the credentials with curl. You need to grant the VM access to the Key Vault with an access policy; automating that part is a bit tricky, but it is possible.
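A sketch of that approach in Python on a Linux VM (the vault, secret, storage account, and share names are placeholders; it assumes the VM has a managed identity with get-secret access on the vault):

    import requests
    import subprocess

    VAULT_URL = "https://my-vault.vault.azure.net"   # placeholder vault
    SECRET_NAME = "storage-account-key"              # placeholder secret holding the account key

    # 1. Get an access token for Key Vault from the instance metadata service,
    #    using the VM's managed identity (no credentials stored on the VM).
    token = requests.get(
        "http://169.254.169.254/metadata/identity/oauth2/token",
        params={"api-version": "2018-02-01", "resource": "https://vault.azure.net"},
        headers={"Metadata": "true"},
    ).json()["access_token"]

    # 2. Read the storage account key out of Key Vault.
    account_key = requests.get(
        f"{VAULT_URL}/secrets/{SECRET_NAME}",
        params={"api-version": "7.4"},
        headers={"Authorization": f"Bearer {token}"},
    ).json()["value"]

    # 3. Mount the file share over SMB (on Windows the equivalent is `net use`).
    subprocess.run(
        [
            "mount", "-t", "cifs",
            "//mystorageacct.file.core.windows.net/myshare", "/mnt/myshare",
            "-o", f"vers=3.0,username=mystorageacct,password={account_key},serverino",
        ],
        check=True,
    )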

Accessing Amazon cloud drive from EC2

I'm using Amazon cloud drive to save many video recordings. I want to run my own program in the cloud to process and/or edit these files.
Is there a way to access the data in my cloud drive from a program running on an EC2 instance?
Yes and no.
You can use the Amazon Cloud Drive REST API to access files. To authenticate to Cloud Drive from your own device you also need a small web server to receive the authentication tokens.
The main problem is that you need to get security tokens by registering your security profile and having it whitelisted. Since September, whitelisting has not been available without a special review, which takes about a month and is almost certain to end in rejection. Even the documentation for Cloud Drive is no longer accessible on the developer console page.
I would recommend reconsidering the use of Amazon Cloud Drive.
