.NET Core: Store private keys in AWS - .net-core

I am implementing a secure system (using .NET Core 2.0) that needs to generate a key pair (public and private) and transmit the public key to a recipient. At the moment I generate the key pair (using the .NET Core crypto library) and persist the private key in the DB. I need to host this on an AWS EC2 instance.
I know this is bad practice (storing a private key in a DB), and I should instead generate these keys in a secure vault (AWS?) and persist the private key in the vault itself. The application then needs to retrieve the corresponding private key whenever decryption is required.
I went through many AWS docs but could not find a clear answer that covers this requirement. It would be great if someone could provide me with some clear instructions on how to achieve this.

You are right in pointing out that storing secrets yourself in a DB is bad practice. Depending on the extent of functionality you wish to offer via your application, you could use one of the AWS offerings below:
AWS Key Management Service
In case you need the key generation as well as the key storage to occur in AWS, Key Management Service (KMS) is the closest match. Here is a link to the AWS KMS home page, along with documentation. Bear in mind that choosing this option restricts the exact mechanism of key generation to whatever AWS offers out of the box. Also, the standard use case for KMS doesn't include generating keys in high volumes, which could be a possibility for your application.
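For illustration, here is a minimal sketch of that flow using boto3 (your code is .NET Core, but the same KMS operations exist in the AWS SDK for .NET); the alias alias/my-app-key refers to a hypothetical symmetric KMS key you would create beforehand:

    import boto3

    kms = boto3.client('kms', region_name='us-east-1')

    # Ask KMS to generate an RSA key pair under a (hypothetical) KMS key.
    pair = kms.generate_data_key_pair(
        KeyId='alias/my-app-key',
        KeyPairSpec='RSA_2048',
    )

    public_key = pair['PublicKey']                            # DER-encoded; send to the recipient
    encrypted_private_key = pair['PrivateKeyCiphertextBlob']  # encrypted under the KMS key; safe to persist

    # Later, when decryption is required, recover the plaintext private key:
    private_key = kms.decrypt(CiphertextBlob=encrypted_private_key)['Plaintext']

If you never want the plaintext private key returned at generation time, KMS also offers a GenerateDataKeyPairWithoutPlaintext operation.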
AWS Parameter Store
If you decide to include the key-generation logic within your application and leave the storing part to AWS, then Parameter Store is the offering for you. To add a new key to Parameter Store, you can do the following:
aws ssm put-parameter --name Generated_Public_Key --value "Generated_Private_Key" --type SecureString
When a client of your application requests a previously created private key by providing the public key, you can use the following:
aws ssm get-parameter --name User_Provided_Public_Key --with-decryption
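If you would rather do this from application code than from the CLI, here is a minimal boto3 sketch of the same two calls (the equivalent operations also exist in the AWS SDK for .NET):

    import boto3

    ssm = boto3.client('ssm')

    # Store the generated private key, encrypted as a SecureString.
    ssm.put_parameter(
        Name='Generated_Public_Key',
        Value='Generated_Private_Key',
        Type='SecureString',
    )

    # Retrieve and decrypt it when needed.
    response = ssm.get_parameter(Name='Generated_Public_Key', WithDecryption=True)
    private_key = response['Parameter']['Value']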
Just a side note in case you decide to look outside of AWS: Microsoft Azure has an offering similar to Parameter Store called Azure Key Vault.

You can try t-vault:
https://github.com/tmobile/t-vault
It's an open-source tool built on top of HashiCorp Vault. It simplifies secret management for applications.
Here is a quick demo

Related

Store private API key in Flutter

I am using Firebase and my own backend API to get data for my app.
The API requires a GCP key for access. This key expires every 90 days, so I cannot store the key in the client/phone.
I need some ideas of where I can store the key. I tried Firebase Remote Config and it works, but the Firebase documentation says that sensitive data should not be stored there.
Creating a backend service that returns the keys would not be secure, as anyone could call the service.
I need some suggestions. Is Firestore/Realtime Database an option? The app will only read the data; however, when the data changes (new keys), the app should get the latest.
Thanks for any suggestions.
While distributing a private API key to the app at runtime through a mechanism like Remote Config or a cloud database may reduce the risk of it being intercepted, it is not enough to deter a sufficiently motivated malicious user. That's why the Remote Config documentation recommends against it, and the same applies to other distribution mechanisms (such as the databases you mention).
If this is a private API key, you should not use it in client-side code, period. That's really the only solution. When you use it in client-side code, a malicious user may get access to it and then abuse the backend service that is protected by the private API key.
Private API keys should be kept private and only used in a trusted environment (such as your development machine, a server you control, or Cloud Functions). When you allow users of your app to make calls through that trusted environment (by defining your own API for them), you will have to secure that endpoint yourself to ensure only authorized users can access it.
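As a minimal sketch of such an endpoint, assuming a Python HTTP Cloud Function and the firebase_admin SDK (the handler name and header convention are illustrative, not a prescribed API):

    import firebase_admin
    from firebase_admin import auth

    firebase_admin.initialize_app()

    def get_backend_data(request):
        # The app sends its user's Firebase ID token: "Authorization: Bearer <token>".
        header = request.headers.get('Authorization', '')
        if not header.startswith('Bearer '):
            return 'Unauthorized', 401
        try:
            decoded = auth.verify_id_token(header.split(' ', 1)[1])  # decoded['uid'] identifies the caller
        except Exception:
            return 'Unauthorized', 401
        # Only now, server-side, use the private GCP key (e.g. read from an
        # environment variable or secret manager) to call the backend API,
        # and return just the resulting data to the app.
        ...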

How to encrypt the actual storage/volume used by Kubernetes pods with client-managed keys (least/zero knowledge of keys on the provider side)?

I want to have a per-client namespace and storage in my Kubernetes environment, where a dedicated instance of the app runs per client and only that client should be able to encrypt/decrypt the storage used by their app.
I have seen hundreds of examples of secrets encryption in a Kubernetes environment, but I am struggling to achieve actual storage encryption that is controlled by the client. Is it possible to have storage encryption in a K8s environment where only the client has knowledge of the encryption keys (and not the K8s admin)?
The only thing that comes to mind, as already suggested in the comments, is HashiCorp Vault.
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.
Some of the features that you might want to check out:
API driven interface
You can access all of its features programmatically via the HTTP API.
In addition, there are several officially supported libraries for programming languages (Go and Ruby). These libraries make interaction with Vault's API even more convenient. There is also a command-line interface available.
Data Encryption
Vault is capable of encrypting/decrypting data without storing it. The main implication is that if an intrusion into your application's datastore occurs, the attacker still does not obtain the real secrets, because Vault never persisted them there; see the sketch after this list.
Dynamic Secrets
Vault can generate secrets on-demand for some systems, such as AWS or SQL databases. For example, when an application needs to access an S3 bucket, it asks Vault for credentials, and Vault will generate an AWS keypair with valid permissions on demand. After creating these dynamic secrets, Vault will also automatically revoke them after the lease is up. This means that the secret does not exist until it is read.
Leasing and Renewal
All secrets in Vault have a lease associated with them. At the end of the lease, Vault will automatically revoke that secret. Clients are able to renew leases via built-in renew APIs.
Convenient Authentication
Vault supports authentication using tokens, which is convenient and secure.
Vault can also be customized and connected to various plugins to extend its functionality. All of this can be controlled from a web graphical interface.
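As a minimal sketch of the data-encryption feature, using the hvac Python client and assuming a reachable Vault server with the transit engine enabled and a key already created under the hypothetical name tenant-key (the address and token are placeholders):

    import base64
    import hvac

    client = hvac.Client(url='https://vault.example.com:8200', token='s.xxxxxxxx')

    # Vault's transit engine expects base64-encoded plaintext.
    plaintext = base64.b64encode(b'data to protect').decode()

    encrypted = client.secrets.transit.encrypt_data(
        name='tenant-key', plaintext=plaintext,
    )
    ciphertext = encrypted['data']['ciphertext']  # e.g. "vault:v1:..."

    # Vault encrypts/decrypts without ever storing the data itself.
    decrypted = client.secrets.transit.decrypt_data(
        name='tenant-key', ciphertext=ciphertext,
    )
    original = base64.b64decode(decrypted['data']['plaintext'])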

ASP.NET Core on AWS Lambda - Data Protection keys stored as env variable

I have a .NET Core web service, hosted on AWS Lambda, which requires the built-in cookie authentication. I therefore need to share the Data Protection key between multiple Lambda instances.
I would like to save the key either as an environment variable in Lambda or through AWS Secrets Manager (or whatever exactly it is called).
Can I configure Data Protection to read the key from an environment variable? Has anyone done something like this yet?
Thanks!
Have you taken a look at the AWS Data Protection storage provider that uses SSM Parameter Store? https://github.com/aws/aws-ssm-data-protection-provider-for-aspnet

Disclosing the Firebase apiKey

The Firebase documentation at https://firebase.google.com/docs/web/setup tells us we can safely expose the Firebase apiKey:
Note: The Firebase config object contains unique, but non-secret identifiers for your Firebase project.
The tutorial explains how to obtain the apiKey and insert it into the HTML code of our web app, so everyone can read that key. From this I would understand it is only an identification key.
But recently I received this message from Google:
We have detected a publicly accessible Google API key associated with the following Google Cloud Platform project:
[...]
The key was found at the following URL:
[...]
We believe that you or your organization may have inadvertently published the affected API key in public sources or on public websites (for example, credentials mistakenly uploaded to a service such as GitHub).
Please note that as the project/account owner, you are responsible for securing your keys. Therefore, we recommend that you take the following steps to remedy this situation:
If this key is intended to be public (or if a publicly accessible key isn't preventable):
Log in to the Google Cloud Console and review the API and billing activity on your account, ensuring the usage is in line with what you expected.
Add API key restrictions to your API key, if applicable.
If this key was NOT meant to be public:
Regenerate the compromised API key: search for Credentials in the Cloud Console, edit the leaked key, and use the Regenerate Key button to rotate the key. For more details, review the instructions on handling compromised GCP credentials.
Take immediate steps to ensure that your API key(s) are not embedded in public source code systems, stored in download directories, or unintentionally shared in other ways.
Add API key restrictions to your API key, if applicable.
In general I would say that the two sources of information contradict each other. Is it true that the apiKey is "non-secret"? Reading the related question "Is it safe to expose Firebase apiKey to the public?" I'm not really sure. I understand that the apiKey is enough to access the whole database if the rules allow it.
First question: can I be assured that the apiKey only gives access to the database (which can be restricted by rules), or does it also give access to other information about the project? What about Storage? Can users read files? Can they write them? The key is called "Web API key", so I understand it is a unique identifier of the project. Before receiving the message from Google I had considered it more an identifier than a key. Since every access to the project's API is a potential cost for me, the owner of the project, I understand that a key is required for billing purposes.
Second question: since I would like to have full control over what users can access in the database, my application presents a REST API as an interface to the database (using Cloud Functions), so users are not supposed to access the database directly. I have the following rules:
service cloud.firestore {
  match /databases/{database}/documents {
    match /global/public {
      allow read;
    }
  }
}
The intention is that users can only read the documents under /global/public (currently empty), so I think the database is secured. Now I wonder if I really need to expose the apiKey. Is the apiKey required for user authentication? If so, can I ignore the message from Google and leave the apiKey public?

How to establish a connection to DynamoDB in Python using boto3

I am a bit new to AWS and DynamoDB.
My aim is to embed a small piece of code.
The problem I am facing is how to make the connection in Python code. I made a connection using the AWS CLI by entering an access key ID and secret key.
But how do I do it in my code, since I wish to deploy the code on other systems?
Thanks in advance!
First of all, read the documentation for boto3 DynamoDB; it's pretty simple:
http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html
If you want to provide access keys while connecting to DynamoDB, you can do the following:
import boto3

client = boto3.client('dynamodb', aws_access_key_id='yyyy', aws_secret_access_key='xxxx', region_name='***')
But remember, storing such keys within the code is against security best practices.
For the best security posture, use IAM roles.
boto3 will automatically pick up the IAM role if it is attached to the instance.
Link to the docs: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Also, if IAM roles are too complicated, you can install the AWS CLI and run aws configure on your server, and boto3 will use the keys from there (less secure than the previous approach).
After implementing one of these options, you can connect to DynamoDB without any keys in the code:
client = boto3.client('dynamodb', region_name='***')
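And as a small example of actually using that connection, the higher-level resource interface is often more convenient (the table name my-table and its partition key id here are hypothetical):

    import boto3

    # No keys in code: boto3 picks up the IAM role or the `aws configure` profile.
    dynamodb = boto3.resource('dynamodb', region_name='***')
    table = dynamodb.Table('my-table')  # hypothetical table with partition key "id"

    table.put_item(Item={'id': '42', 'payload': 'hello'})
    item = table.get_item(Key={'id': '42'}).get('Item')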
