I have been doing a lot of research, but I can't figure out where I should store the encryption key in a production environment.
In my local environment I have a .env file, but it feels very risky to have the encryption key sitting there in plain text in a production environment. I could encrypt it, but then I would just have another key to store somewhere.
I am not using AWS or any other big cloud platform, so I can't use AWS KMS etc.
I have looked into alternatives to AWS KMS, such as Doppler (doppler.com). You can store the key there, but their API uses tokens to authenticate requests, so then I have to store the token somewhere safe... so it feels like I'm just chasing my own tail.
So I really need help here. Where should I store the encryption key? Where would you (and where can you) store it if you were not using any big cloud platform?
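For context, the pattern in question is simply reading the key from the environment at startup. A minimal sketch (TypeScript/Node.js is assumed here, and ENCRYPTION_KEY is a hypothetical variable name):

// Minimal sketch: load a hex-encoded AES key from an environment variable.
// In production the variable would be injected by the host/deploy tooling,
// not committed to the repo in a .env file.
const keyHex = process.env.ENCRYPTION_KEY;
if (!keyHex) {
  throw new Error("ENCRYPTION_KEY is not set");
}
const key = Buffer.from(keyHex, "hex"); // 32 bytes for AES-256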
We use AES encryption with a static key and a static initialization vector.
Encryption and decryption of data are done on Windows, macOS, and Android.
We know it is not secure to store the key in the app, and we do not care about security here: we need encryption not for security but for backward compatibility, so we can keep supporting a legacy data format. We know we do not provide any data security, and our customers know it too. Is there any way to force the Play Store to ignore the error and publish our app without moving the encryption and decryption functions to the NDK?
I have a use case to encrypt the data while loading it from an S3 bucket into Snowflake tables. The S3 bucket is enabled with SSE-S3.
The files in S3 are additionally encrypted using a KMS key before they are pushed to S3 (which I like to call double encryption). I wanted to understand how Snowflake handles decryption of these data files. To be specific, is the data in transit (while undergoing auto-ingest) also encrypted?
Secondly, if the external stage in Snowflake is configured with the same KMS key ID,
encryption = (type = 'AWS_SSE_KMS' kms_key_id = 'xxxx-yyyy')
will Snowflake decrypt the data files and make it readable upon querying the table on which the files are loaded?
Thanks in advance
Snowflake supports either client-side encryption or server-side encryption. Either can be configured to decrypt files staged in S3 buckets.
Client-side encryption:
AWS_CSE: Requires a MASTER_KEY value. The master key must be a 128-bit or 256-bit key in Base64-encoded form.
For more information, see the AWS documentation for client-side encryption. Note that for client-side encryption, Snowflake supports using a master key stored in Snowflake; using a master key stored in AWS Key Management Service (AWS KMS) is not supported.
Server-side encryption:
AWS_SSE_S3: Requires no additional encryption settings.
AWS_SSE_KMS: Accepts an optional KMS_KEY_ID value.
For more information, see the AWS documentation for server-side encryption.
Using AWS Key Management Service (KMS) to manage keys requires configuring an IAM policy. For information, see the KMS documentation.
Details: https://docs.snowflake.com/en/user-guide/data-load-s3-encrypt.html#aws-data-file-encryption
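For illustration, a complete stage definition along the lines of the fragment in the question might look like this (the stage name, URL, and storage integration below are hypothetical; the key ID placeholder is kept from the question):

-- Hypothetical external stage using SSE-KMS; Snowflake reads the encrypted
-- files transparently, provided its IAM role is allowed to use this KMS key.
CREATE OR REPLACE STAGE my_s3_stage
  URL = 's3://my-bucket/data/'
  STORAGE_INTEGRATION = my_s3_integration
  ENCRYPTION = (TYPE = 'AWS_SSE_KMS' KMS_KEY_ID = 'xxxx-yyyy');

With server-side encryption, the files are decrypted on the S3 side at retrieval time (the requester needs permission to use the KMS key), so data loaded through such a stage is readable when the table is queried.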
I'm not sure if I am misunderstanding something or if the Firebase docs contradict themselves.
Here they seem to suggest storing API keys in env variables:
https://firebase.google.com/docs/functions/config-env
For instance, to store the Client ID and API key for "Some Service", you might run:
firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID"
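(That same docs page then shows reading the values back inside the function code, roughly:)

import * as functions from "firebase-functions";

// Values set with `firebase functions:config:set` are read back like this
// (first-gen functions runtime config):
const someserviceKey = functions.config().someservice.key;
const someserviceId = functions.config().someservice.id;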
But here it seems to say never to do it:
https://firebase.google.com/support/guides/security-checklist#cloud_function_safety
Cloud Function safety
Never put sensitive information in a Cloud Function’s environment variables
Often in a self-hosted Node.js app, you use environment variables to contain sensitive information like private keys. Do not do this in Cloud Functions.
There isn't only one correct answer. Generally, storing critical/confidential data in plain text is a bad idea. It's better to use a dedicated service, such as Secret Manager, to store the secret.
However, you can imagine use cases like these:
Your deployment is automatic and no human can access your Cloud Functions parameters (and env vars) -> your secret stays secure even though it sits in plain text in the Cloud Functions env vars.
Your secret is stored in Secret Manager, but all the team members (and more) have the Secret Manager Accessor role, and everyone can browse and see the secret in plain text -> even though you use Secret Manager, your IAM role policy breaks the security and confidentiality of the secret; it's effectively public for everyone!
Think about security globally. There are best practices, but if you focus on only one topic, you can create a bigger breach right next to it!
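As a sketch of the Secret Manager approach (Node.js; the project and secret names below are hypothetical):

import { SecretManagerServiceClient } from "@google-cloud/secret-manager";

const client = new SecretManagerServiceClient();

async function getApiKey(): Promise<string> {
  // The function's service account needs the "Secret Manager Secret Accessor"
  // role, ideally granted on this one secret rather than project-wide.
  const [version] = await client.accessSecretVersion({
    name: "projects/my-project/secrets/someservice-key/versions/latest",
  });
  return Buffer.from(version.payload!.data as Uint8Array).toString("utf8");
}

This keeps the secret out of env vars and deploy artifacts, and access is controlled and audited through IAM.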
I currently have one SPA in ReactJS + one mobile application in Flutter + one REST API developed with SailsJS running on a separate server. I manage user authentication with the secure session cookie generated by Firebase Authentication, sent back by the API when we log in with valid credentials (id/password).
Now, I want to encrypt highly sensitive data (medicines, treatments, patients) in the Firestore database so that no one can see the data in the clear if an intrusion happens, or with basic admin console access to the production database.
Do I need to encrypt the data at the client level, considering that the connection between the clients and the API server is over HTTPS? Or can I just encrypt the received body at the API level before storing it in Firestore, and decrypt the encrypted data at the GET endpoints?
My idea is to generate an AES encryption key at user registration and store it in another database with a European/French hosting company, in order to avoid any risk from the US CLOUD Act or the like (user ID from Firebase Authentication <-> encryption key). Is this a good idea? What other solution can I choose to securely store and use my users' encryption keys?
Thanks for your help.
Do I need to encrypt the data at the client level, considering that the connection between the clients and the API server is over HTTPS? Or can I just encrypt the received body at the API level before storing it in Firestore, and decrypt the encrypted data at the GET endpoints?
If you encrypt/decrypt the data in your custom API, that API will need access to the encryption keys. While the chances are small, it does mean the keys could be taken from there and then used to compromise the data.
If you encrypt/decrypt the data in the client-side code, only that code will need access to those keys. If you then exchange the keys through some out-of-band mechanism, something that doesn't get stored on your servers along the way, there is no way for anyone with access to those servers to decrypt the data.
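To make that concrete, a minimal client-side sketch (TypeScript with Node's built-in crypto module and AES-256-GCM; the key is whatever 32-byte secret the out-of-band exchange produced, and it never touches the server):

import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

function encrypt(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // fresh random IV for every message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authentication tag, detects tampering
  return { iv, ciphertext, tag };  // all three are safe to store in Firestore
}

function decrypt(key: Buffer, iv: Buffer, ciphertext: Buffer, tag: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

In the actual clients (React, Flutter) you would use the platform's own crypto primitives (e.g. Web Crypto), but the shape is the same: the server only ever stores the IV, ciphertext, and tag.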
I want to have a per-client namespace and storage in my Kubernetes environment, where a dedicated instance of the app runs per client, and only that client should be able to encrypt/decrypt the storage used by their app.
I have seen hundreds of examples of Secrets encryption in a Kubernetes environment, but I am struggling to achieve actual storage encryption controlled by the client. Is it possible to have storage encryption in a K8s environment where only the client knows the encryption keys (and not the K8s admin)?
The only thing that comes to mind, as already suggested in the comments, is HashiCorp Vault.
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.
Some of the features you might want to check out:
API-driven interface
You can access all of its features programmatically through its HTTP API.
In addition, there are several officially supported libraries for programming languages (Go and Ruby). These libraries make interacting with Vault's API even more convenient. There is also a command-line interface available.
Data Encryption
Vault is capable of encrypting/decrypting data without storing it (the transit secrets engine; see the sketch after this list). The main implication is that even if an intrusion occurs and the attack succeeds, the attacker will not have access to the real secrets.
Dynamic Secrets
Vault can generate secrets on-demand for some systems, such as AWS or SQL databases. For example, when an application needs to access an S3 bucket, it asks Vault for credentials, and Vault will generate an AWS keypair with valid permissions on demand. After creating these dynamic secrets, Vault will also automatically revoke them after the lease is up. This means that the secret does not exist until it is read.
Leasing and Renewal
All secrets in Vault have a lease associated with them. At the end of the lease, Vault will automatically revoke that secret. Clients are able to renew leases via built-in renew APIs.
Convenient Authentication
Vault supports authentication using tokens, which is convenient and secure.
Vault can also be customized and connected to various plugins to extend its functionality. All of this can be controlled from a web graphical interface.
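As an example of the data-encryption point above (Vault's transit secrets engine), here is a minimal sketch against Vault's HTTP API from Node.js 18+ (the address, token, and the key name my-key are placeholders; the transit engine must be enabled and the key created beforehand):

// Encrypt data through Vault's transit engine; the key never leaves Vault.
const VAULT_ADDR = "https://vault.example.com:8200"; // placeholder address
const VAULT_TOKEN = process.env.VAULT_TOKEN!;        // per-client Vault token

async function vaultEncrypt(plaintext: string): Promise<string> {
  const res = await fetch(`${VAULT_ADDR}/v1/transit/encrypt/my-key`, {
    method: "POST",
    headers: { "X-Vault-Token": VAULT_TOKEN, "Content-Type": "application/json" },
    body: JSON.stringify({ plaintext: Buffer.from(plaintext).toString("base64") }),
  });
  const { data } = await res.json();
  return data.ciphertext; // e.g. "vault:v1:..." — safe to persist
}

Because each client authenticates with its own token and policy, the K8s admin can be denied access to the client's transit key while the client's app can still encrypt and decrypt its data.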