The documentation shows how to connect using Storage Account Key:
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
That does work. However, I'd like to mount file storage using read-only SAS token.
Is this possible?
Unfortunately, no. You must supply the storage account key when mounting Azure file shares, and anyone who has the storage account name and account key has full permissions to manage and operate the file shares. Judging from the feedback, Microsoft has no plan to change this.
At the moment, Microsoft does not have plans to support SAS tokens with SMB access. Instead, we are looking into supporting AD integration for mounted file shares.
It is possible with a different, still secure, approach. You still mount with cifs (net use on Windows), but you store the credentials in Key Vault. Mount the share at boot (with a systemd unit), fetching the credentials first, e.g. with curl against the Key Vault REST API. You need to grant the VM access to Key Vault via an access policy; automating that part is a bit tricky too, but it is possible.
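A minimal sketch of that flow in Python (using the Azure SDK rather than raw curl), assuming the VM has a managed identity with an access policy on the Key Vault; the vault URL, secret name, storage account, share and mount point below are all illustrative:

# Sketch: fetch the storage account key from Key Vault via the VM's managed
# identity, then mount the Azure file share over CIFS. All names are examples.
import subprocess
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()
secrets = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)
storage_key = secrets.get_secret("storage-account-key").value

subprocess.run(
    [
        "mount", "-t", "cifs",
        "//mystorageacct.file.core.windows.net/myshare",
        "/mnt/myshare",
        "-o", f"vers=3.0,username=mystorageacct,password={storage_key},dir_mode=0777,file_mode=0777",
    ],
    check=True,
)

Run from a systemd unit at boot, this gives the "mount on boot with credentials from Key Vault" flow without ever writing the account key to disk.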
My application has Firebase users (i.e. users created in Firebase Authentication, NOT in Firebase IAM or in GCP IAM). These users are not linked to a Gmail or Google Workspace (formerly G Suite) account, and are not part of my organization.
I need to grant each of these users write access (not read) to a Cloud Storage bucket (1 user = 1 bucket), while not allowing any kind of access to that bucket to unauthenticated users or to other Firebase users.
How would I go about doing that?
I have tried verifying auth and generating a presigned URL from my Cloud Functions backend, but it has turned out a bit problematic with uploading thousands of files, which is why I'm looking at alternatives.
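Roughly, what I tried looks like this (simplified sketch; the per-user bucket naming and function name are just illustrative):

# Simplified sketch of the Cloud Functions backend: verify the caller's
# Firebase ID token, then return a V4 signed URL allowing a single PUT
# upload into that user's bucket. Names are illustrative.
import datetime
import firebase_admin
from firebase_admin import auth
from google.cloud import storage

firebase_admin.initialize_app()
storage_client = storage.Client()

def create_upload_url(id_token: str, object_name: str) -> str:
    decoded = auth.verify_id_token(id_token)          # raises if the token is invalid
    uid = decoded["uid"]
    bucket = storage_client.bucket(f"uploads-{uid}")  # 1 user = 1 bucket
    blob = bucket.blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(hours=1),
        method="PUT",
    )

Doing this once per object is what became painful at thousands of files.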
Time-limited access is not a requirement for me either way (I'm fine with users only having a few hours of access or having forever access). Also, if one bucket per user is too problematic, one folder per user, all inside the same bucket, would also be acceptable.
I know that in AWS I could use Cognito User Pools for the users, and then link the users to an Identity Pool so they can obtain temporary AWS credentials with the required scope, but I haven't been able to find the equivalent in GCP. The service comparison table hasn't helped in this regard.
I realize I might have the wrong idea in my head, coming from AWS. I don't mind if I have to link my Firebase users to GCP IAM users or to Firebase IAM users for this, though to me it sounds counter-intuitive, and I haven't found any info on that either. Maybe I don't even need GCP credentials, but I haven't found a way to do this with a bucket ACL either. I'm open to anything.
Since your users are signed in with Firebase Authentication, the best way to control their access is through security rules that sit in front of the files in your storage bucket when you access them through the Firebase SDK.
Some examples of common access patterns are allowing only the owner of a file to access it, or attribute- and role-based access control.
When implementing security rules, keep in mind that the download URLs you can generate through the Firebase SDK (if you have read access to a file) provide public read-only access to that file too. These download URLs bypass the rules, so you should only generate them for files that you want to be publicly accessible to anyone who has the URL.
I want to have a per-client namespace and storage in my Kubernetes environment, where a dedicated instance of the app runs per client and only that client should be able to encrypt/decrypt the storage used by their app.
I have seen hundreds of examples of secrets encryption in a Kubernetes environment, but I am struggling to achieve actual storage encryption that is controlled by the client. Is it possible to have storage encryption in a K8s environment where only the client (and not the k8s admin) has knowledge of the encryption keys?
The only thing that comes to my mind, as already suggested in the comments, is HashiCorp Vault.
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.
Some of the features that you might want to check out:
API driven interface
You can access all of its features programmatically through its HTTP API. In addition, there are several officially supported libraries for programming languages (Go and Ruby). These libraries make interacting with Vault's API even more convenient. There is also a command-line interface available.
Data Encryption
Vault is capable of encrypting/decrypting data without storing it. The main implication is that if an intrusion occurs, the attacker will not have access to the real secrets even if the attack is successful. A minimal sketch of this, using the transit secrets engine, follows the feature list below.
Dynamic Secrets
Vault can generate secrets on-demand for some systems, such as AWS or SQL databases. For example, when an application needs to access an S3 bucket, it asks Vault for credentials, and Vault will generate an AWS keypair with valid permissions on demand. After creating these dynamic secrets, Vault will also automatically revoke them after the lease is up. This means that the secret does not exist until it is read.
Leasing and Renewal
All secrets in Vault have a lease associated with them. At the end of the lease, Vault will automatically revoke that secret. Clients are able to renew leases via built-in renew APIs.
Convenient Authentication
Vault supports authentication using tokens, which is convenient and secure.
Vault can also be customized and connected to various plugins to extend its functionality. All of this can also be controlled from a web graphical interface.
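Coming back to the storage-encryption question: the Data Encryption feature maps to Vault's transit secrets engine. A minimal sketch using the hvac Python client, assuming a transit key named client-a has already been created and only the client (not the k8s admin) holds the Vault token:

# Sketch: encrypt/decrypt per-client data with Vault's transit engine, so the
# encryption key never leaves Vault. The Vault address, token and key name
# ("client-a") are illustrative.
import base64
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

plaintext = base64.b64encode(b"per-client data").decode()
encrypted = client.secrets.transit.encrypt_data(name="client-a", plaintext=plaintext)
ciphertext = encrypted["data"]["ciphertext"]          # e.g. "vault:v1:..."

decrypted = client.secrets.transit.decrypt_data(name="client-a", ciphertext=ciphertext)
recovered = base64.b64decode(decrypted["data"]["plaintext"])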
I am a bit new to AWS and DynamoDB.
My aim is to embed a small piece of code.
The problem I am facing is how to make the connection in Python code. I made a connection using the AWS CLI by entering the access key ID and secret key.
But how do I do that in my code, given that I wish to deploy the code on other systems?
Thanks in advance !!
First of all, read the documentation for boto3 DynamoDB; it's pretty simple:
http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html
If you want to provide access keys while connecting to dynamo, you can do the following:
client = boto3.client('dynamodb', aws_access_key_id='yyyy', aws_secret_access_key='xxxx', region_name='***')
But remember, it is against security best practices to store such keys within the code.
For the best security, use IAM roles.
The boto3 driver will automatically pick up the IAM role if one is attached to the instance.
Link to the docs: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Also, if IAM roles are too complicated, you can install the AWS CLI and run aws configure on your server, and boto3 will use the keys from there (less secure than the previous approach).
After implementing one of the options, you can connect to DynamoDB without the keys from code:
client = boto3.client('dynamodb', region_name='***')
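From there, basic operations are straightforward. For example, with the higher-level resource interface (table name, key schema and region are just examples):

import boto3

# Credentials come from the IAM role or from `aws configure`, as described above.
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('my-table')  # example table with a string partition key "id"

table.put_item(Item={'id': '42', 'payload': 'hello'})
response = table.get_item(Key={'id': '42'})
print(response.get('Item'))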
Can somebody else get the Firebase credentials from my APK and use them? Is this prevented by adding the SHA-1 keys for Android?
If it is prevented, what do I need security rules for, since only code from my app with my SHA-1 can manipulate the database at all?
If it is not prevented, can somebody else use my Firebase database as long as their requests fit the security rules? (E.g. write a second client, which actually cannot do bad things but should not be allowed at all.)
I'm not sure how I should think about security rules:
A) Protecting data against access and manipulation by bad guys (in addition to B)?
B) Just a set of rules to keep data in a certain state and prevent my software from making invalid database requests?
A Firebase Database can be accessed via the REST API, or any of the client libraries. The decision about whether a client can or can't do something is entirely based on the rules.
You can even just access the Database URL in a web browser and see a JSON response by putting .json on the end, e.g. https://[YOUR_PROJECT_ID].firebaseio.com/.json
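For example, anyone can send the request below from any HTTP client; whether it returns data or a permission error depends entirely on your rules (the project ID is a placeholder):

import requests

# Whether this returns data or "Permission denied" is decided purely by the
# database's security rules. The project ID is a placeholder.
url = "https://YOUR_PROJECT_ID.firebaseio.com/.json"
response = requests.get(url)
print(response.status_code, response.json())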
So the answer is definitely B! The default rules in a new Firebase project are that read and write to the database require auth, but you can configure them to provide whatever levels of protection you need.
Take a look at the Database Rules quickstart to see what you can do!
We don't ship the Realtime Database secret (or any other "secret" material) in the json file that gets baked into your app. That file simply contains resource identifiers that allow us to know which resources (database, storage bucket, analytics, etc.) to properly authenticate to (we use Firebase Authentication for these purposes), and we handle server side authorization to ensure that users are properly logged in.
If you are authorizing your requests properly (using Firebase Realtime Database Rules, for instance), your data is secure!
I'd recommend watching The Key to Firebase Security, one of our I/O talks, which talks in greater detail about how this works.
firebaser here
Thanks to the new feature called Firebase App Check, it is now actually possible to limit calls to your Realtime Database to only those coming from iOS, Android and Web apps that are registered in your Firebase project.
You'll typically want to combine this with the user authentication based security that Mike and Ian describe in their answers, so that you have another shield against abusive users that do use your app.
I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you would like a program that is running on an OpenStack server to be able to programmatically make changes through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long running compute job. It would be great if the compute job by itself would be able to copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role-Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities, for example a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Your job would then use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
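If you do want to see the raw flow an SDK wraps, a rough sketch in Python with requests (username and API key are placeholders): you authenticate the restricted user against the identity endpoint, then pass the returned token to the other APIs.

import requests

# Authenticate the restricted RBAC user with its API key to obtain a token.
# Username and API key are placeholders.
auth_payload = {
    "auth": {
        "RAX-KSKEY:apiKeyCredentials": {
            "username": "compute-job-user",
            "apiKey": "xxxxxxxxxxxxxxxx",
        }
    }
}
resp = requests.post("https://identity.api.rackspacecloud.com/v2.0/tokens", json=auth_payload)
resp.raise_for_status()
token = resp.json()["access"]["token"]["id"]

# The token goes into the X-Auth-Token header of subsequent calls to the Cloud
# Block Storage and Cloud Files endpoints listed in the service catalog of the
# same response.
headers = {"X-Auth-Token": token}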