I am developing a web-based application that will be used by many third-party organisations in many countries around the world.
The browser-based client will feed sensitive data into a shared back-end database.
All organisations in all countries will read and write data in the same database.
I wish to encrypt the data entered in the browser so that it is safe while in transit
to the back-end database, i.e. client-side encryption.
I also wish to encrypt the data while at rest in my database.
This application will be developed using Java, JavaScript (React), and Scala.
The backend database will be MongoDB.
I cannot find a good key-management example/description, e.g. how a key is properly generated,
distributed, stored, replaced, deleted, and recovered during its lifetime.
I have the following choices/decisions to make:
Flavour of encryption, e.g. TripleDES, RSA, Blowfish, Twofish, AES, etc.
Symmetric or asymmetric key(s), and their length.
How should I securely distribute the keys to my clients?
How do I keep my keys safe on my back-end servers?
Whether keys should have a lifecycle: generated, distributed, stored, replaced, deleted.
How do I decrypt data that was encrypted with Key0 when I am now using Key1 or Key2?
How should I store my multiple keys for my multiple clients so that I can encrypt/decrypt
each client's data?
Use HTTPS with certificate pinning to secure the data in transit.
Use AES for encryption. Do not use TripleDES, Blowfish, or Twofish in new work, and RSA is not suitable for encrypting bulk data.
Use an HSM.
Encrypt the data with a long-lived key that is never distributed, and encrypt that key with short-lived keys that can be rotated as needed (see the sketch below).
Considering the scope of the project, get a cryptographic domain expert to design the security and vet the code.
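A minimal Java sketch of that last arrangement, using AES-GCM from the standard JCE. All names are illustrative, and the in-memory map stands in for keys that would really live in an HSM or KMS: the data is encrypted under a long-lived data key that never leaves the server, the data key is wrapped under a versioned key-encryption key (KEK), and the KEK version is stored alongside the wrapped key. This also answers the Key0/Key1 question: rotation means re-wrapping the small data key under a new KEK rather than re-encrypting the data, and old ciphertext stays decryptable because the stored version says which KEK to unwrap with.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Map;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch: a long-lived data key encrypts the data; rotating
// key-encryption keys (KEKs) wrap the data key. Only the wrapped data key
// and the KEK version are stored alongside the ciphertext.
public class EnvelopeSketch {
    private static final SecureRandom RNG = new SecureRandom();

    static SecretKey newAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // AES-GCM encrypt; the 12-byte IV is prepended to the ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        RNG.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(128, ivAndCiphertext, 0, 12));
        return c.doFinal(ivAndCiphertext, 12, ivAndCiphertext.length - 12);
    }

    public static void main(String[] args) throws Exception {
        // kekByVersion stands in for keys held in an HSM or KMS.
        Map<String, SecretKey> kekByVersion =
                Map.of("kek-v1", newAesKey(), "kek-v2", newAesKey());

        SecretKey dataKey = newAesKey();
        byte[] record = encrypt(dataKey, "sensitive row".getBytes(StandardCharsets.UTF_8));

        // Wrap the data key under the current KEK; store the version with it.
        String kekVersion = "kek-v2";
        byte[] wrappedDataKey = encrypt(kekByVersion.get(kekVersion), dataKey.getEncoded());

        // Later: look up the KEK by the stored version, unwrap, then decrypt.
        SecretKey unwrapped = new SecretKeySpec(
                decrypt(kekByVersion.get(kekVersion), wrappedDataKey), "AES");
        System.out.println(new String(decrypt(unwrapped, record), StandardCharsets.UTF_8));
    }
}
```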
We know that there are three options:
1. (default) Google-managed encryption keys
2. Customer-supplied encryption keys
3. Customer-managed encryption keys
For a particular customer's data, how can we restrict access so that the data remains unreadable even if it can be downloaded?
You cannot download encrypted data from GCP/BQ/etc. without the encryption key or access to the encryption key.
The exception is if you upload self-encrypted data as normal data.
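To make that self-encryption concrete, here is a minimal Java sketch (all names illustrative) that encrypts client-side with AES-GCM, so what you upload is an opaque blob that stays unreadable to anyone who downloads it without your key:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Encrypt before upload: GCP only ever stores ciphertext, and the key
// never leaves your side, so a downloaded copy is unreadable.
public class SelfEncryptBeforeUpload {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey(); // kept by you, never uploaded

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = c.doFinal("customer data".getBytes(StandardCharsets.UTF_8));

        // Upload iv + ciphertext as a normal object with the storage client
        // of your choice; decrypt locally after download with the same key.
    }
}
```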
Folks,
I need to encrypt some string data into a SQL database from an ASP.NET Core 2.0 MVC application.
I'm thinking of using the Data Protection API with PersistKeysToFileSystem so that I can restore the data to another server and decrypt the data using the same key file.
I am impressed with the performance of the DPAPI in .NET Core, and I don't want to go for any custom crypto solution, as it's too risky. I would be storing bulk uploads of data to SQL. Strings before encryption would be 200 characters or less.
I believe that DPAPI is considered more suited to encrypting small pieces of data, e.g. passwords, as opposed to SQL bulk operations. Do folks consider using DPAPI to encrypt data into a database a good use case?
The Data Protection API is not necessarily only for small pieces of data, but it is meant for relatively transient data. In other words, it's not really intended to be used to encrypt/decrypt long-term. The keys will be cycled at some point, and while old keys are kept around to allow for transition to new keys, you should not really rely on that.
According to the docs:
The ASP.NET Core data protection APIs are not primarily intended for indefinite persistence of confidential payloads. Other technologies like Windows CNG DPAPI and Azure Rights Management are more suited to the scenario of indefinite storage, and they have correspondingly strong key management capabilities.
It does go on to say that you can do so if you desire, though. However, things have to be handled in a different way if you might potentially be working with revoked keys. The documentation link above goes into all the detail on that. However, bear in mind that you're inherently operating on your data in a less secure way, since you're explicitly allowing revoked keys to be used to decrypt data.
A pair of Amazon Lambdas will symmetrically encrypt and decrypt a small piece of application data. I want to use Amazon KMS to facilitate this, because it solves the problems of secret storage and key rotation, and then some.
The Amazon KMS Developer Guide indicates:
These operations are designed to encrypt and decrypt data keys. They use an AWS KMS customer master key (CMK) in the encryption operations and they cannot accept more than 4 KB (4096 bytes) of data. Although you might use them to encrypt small amounts of data, such as a password or RSA key, they are not designed to encrypt application data.
It goes on to recommend using AWS Encryption SDK or the Amazon S3 encryption client for encrypting application data.
While the listed advantages of the AWS Encryption SDK are clear as day, and very attractive, especially to a developer who is not a cryptographer, let's assume for the purpose of this question that circumstances are not favorable to those alternatives.
If my application data is sure never to exceed 4k, why specifically shouldn't I simply use Amazon KMS to encrypt and decrypt this data?
Use case
My team is implementing a new authentication layer to be used across the services and APIs at our company. We're implementing a JWT specification, but while we intend to steer clear of the widely documented cryptographic grievances plaguing JWE/JWS-compliant token signing, we're symmetrically encrypting the payload. Thus, we keep the advantage of standard library implementations of non-cryptographic token validation operations (expiry and the rest), and we leave behind the cryptographic "foot-gun."
I suspect it's about performance: scaling and latency.
KMS Encrypt/Decrypt has a limit of 5,500 requests per second per account, which is shared with some other KMS operations.
"Why?" is also discussed a bit more thoroughly in the FAQ:
Why use envelope encryption? Why not just send data to AWS KMS to encrypt directly?
While AWS KMS does support sending data less than 4 KB to be encrypted, envelope encryption can offer significant performance benefits. When you encrypt data directly with KMS it must be transferred over the network. Envelope encryption reduces the network load for your application or AWS cloud service. Only the request and fulfillment of the data key through KMS must go over the network. Since the data key is always stored in encrypted form, it is easy and safe to distribute that key where you need it to go without worrying about it being exposed. Encrypted data keys are sent to AWS KMS and decrypted under master keys to ultimately allow you to decrypt your data. The data key is available directly in your application without having to send the entire block of data to AWS KMS and suffer network latency.
https://aws.amazon.com/kms/faqs/
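A sketch of that envelope flow with the AWS SDK for Java v2 (the key ARN is a placeholder and error handling is omitted): one GenerateDataKey call returns the data key twice, in plaintext for local encryption and encrypted under the CMK for storage, so the bulk data itself never crosses the network to KMS.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DataKeySpec;
import software.amazon.awssdk.services.kms.model.DecryptRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyResponse;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class KmsEnvelopeSketch {
    public static void main(String[] args) throws Exception {
        KmsClient kms = KmsClient.create();
        String keyArn = "arn:aws:kms:...:key/..."; // placeholder CMK

        // One KMS call fetches a fresh data key: plaintext for local use,
        // ciphertextBlob to store next to the data.
        GenerateDataKeyResponse dk = kms.generateDataKey(GenerateDataKeyRequest.builder()
                .keyId(keyArn)
                .keySpec(DataKeySpec.AES_256)
                .build());

        // Encrypt locally with AES-GCM; only the request above hit the network.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        SecretKeySpec key = new SecretKeySpec(dk.plaintext().asByteArray(), "AES");
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = c.doFinal("application data".getBytes(StandardCharsets.UTF_8));
        byte[] storedDataKey = dk.ciphertextBlob().asByteArray(); // store with ciphertext

        // To decrypt later: ask KMS to unwrap the stored key, then decrypt locally.
        SdkBytes plainKey = kms.decrypt(DecryptRequest.builder()
                .ciphertextBlob(SdkBytes.fromByteArray(storedDataKey))
                .build()).plaintext();
        Cipher d = Cipher.getInstance("AES/GCM/NoPadding");
        d.init(Cipher.DECRYPT_MODE, new SecretKeySpec(plainKey.asByteArray(), "AES"),
               new GCMParameterSpec(128, iv));
        System.out.println(new String(d.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```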
I am going through this issue with AWS support right now. There is the throttling limit mentioned in the accepted answer. Also, if you reuse and cache data keys as allowed by the SDK, you can save money at the expense of lowered security (one data key can decrypt multiple objects).
However, if neither of those is relevant to you, direct CMK encryption is appealing. The security is excellent because the data key cannot be leaked, and every decryption requires an API call to KMS and can be audited. The KMS Best Practices whitepaper states that encrypting credit card numbers in this way is PCI compliant.
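For comparison, a direct-CMK sketch with the same SDK (placeholder key ARN): the plaintext itself goes to KMS, so it must stay under 4 KB, and every Decrypt is an auditable KMS API call.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DecryptRequest;
import software.amazon.awssdk.services.kms.model.EncryptRequest;

public class KmsDirectSketch {
    public static void main(String[] args) {
        KmsClient kms = KmsClient.create();
        String keyArn = "arn:aws:kms:...:key/..."; // placeholder CMK

        // The plaintext travels to KMS; it must stay under 4 KB.
        SdkBytes ciphertext = kms.encrypt(EncryptRequest.builder()
                .keyId(keyArn)
                .plaintext(SdkBytes.fromUtf8String("small payload, e.g. a token"))
                .build()).ciphertextBlob();

        // Decryption is a KMS call too, so it is audited and rate-limited.
        String plaintext = kms.decrypt(DecryptRequest.builder()
                .ciphertextBlob(ciphertext)
                .build()).plaintext().asUtf8String();
        System.out.println(plaintext);
    }
}
```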
In light of the upcoming GDPR regulations, the company I work for is looking at upgrading its encryption algorithms and encrypting significantly more data than before. As the one appointed to take care of this, I have replaced our old CAST-128 encryption (I say encryption, but it was more like hashing: no salt, and the same ciphertext every time) with AES-256 and written the tools to migrate the data. However, the encryption key is still hardcoded in the application, and extractable within a couple of minutes with a disassembler.
Our product is a desktop application, which most of our clients have installed in-house. Most of them are also hosting their own DBs. Since they have the entirety of the product locally, securing the key seems like a pretty difficult task.
After some research, I've decided to go with the following approach. During installation, a random 256-bit key will be generated for every customer and used to encrypt their data with AES. The key itself will then be encrypted with DPAPI in user mode, where the only user who can access the data will be a newly created, locked-down domain service account with limited permissions that is unable to actually log in to the machine. The encrypted key will then be stored in an ACL-ed part of the registry. The encryption module will impersonate that user to perform its functions.
The problem is that since the key will be randomly generated at install time, and encrypted immediately, not even we will have it. If customers happen to delete this account, reinstall the server OS, or manage to lose the key in some other manner, the data will be unrecoverable. So after all that exposition, here comes the actual question:
I am thinking of having customers back up the part of the registry where the key is stored, and assuming that even after a reinstall or user deletion, as long as the same user account is created with the same password on the same machine, it will produce the same DPAPI secrets and be able to decrypt the key. However, I don't know whether that is the case, since I'm not sure how these secrets are generated in the first place. Can anyone confirm whether this is actually how it works? I'm also open to suggestions for a completely different key-storage approach if you can think of a better one.
I don't see the link with GDPR but let's say this is just context.
It takes more than the user account, its password, and the machine; there is more entropy added to the encryption of data with DPAPI.
See: https://msdn.microsoft.com/en-us/library/ms995355.aspx#windataprotection-dpapi_topic02
A small drawback to using the logon password is that all applications running under the same user can access any protected data that they know about. Of course, because applications must store their own protected data, gaining access to the data could be somewhat difficult for other applications, but certainly not impossible. To counteract this, DPAPI allows an application to use an additional secret when protecting data. This additional secret is then required to unprotect the data. Technically, this "secret" should be called secondary entropy. It is secondary because, while it doesn't strengthen the key used to encrypt the data, it does increase the difficulty of one application, running under the same user, to compromise another application's encryption key. Applications should be careful about how they use and store this entropy. If it is simply saved to a file unprotected, then adversaries could access the entropy and use it to unprotect an application's data. Additionally, the application can pass in a data structure that will be used by DPAPI to prompt the user. This "prompt structure" allows the user to specify an additional password for this particular data. We discuss this structure further in the Using DPAPI section.
For a security application I want to do the following:
All data related to a user is encrypted with that user's key (the key is unique to each user).
The only data that are not encrypted are the password (because it's already hashed, no need to encrypt it on top of that), the email (the identifier for login), and the key (to decrypt data on the server side).
The goal is to make data storage safe even if my database gets fully dumped, since the attacker will have to find out which algorithm(s) were used for the encryption, for each user, even if he has the key.
I'm making a RESTful API connected to this database, and I want to use Spring Data Neo4j + Spring REST and Spring Boot (I'm just going to do the API mapping myself, since all my attempts to let Spring generate the API implementation failed).
So the real question is: how do I encrypt/decrypt data in SDN's transactions? I mean I need to store the data encrypted and return it decrypted, so I need to be able to encrypt it on the Java side.
If I can't do it with SDN, I'll do it using the Neo4j Core API instead; I just wanted to give SDN a chance, since it can be a real time-saver.
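One way to give SDN that chance: Neo4j OGM supports property converters, so you can annotate the sensitive field with @Convert and do the crypto inside the converter. Below is a rough AES-GCM sketch under some assumptions: the UserKeys stand-in is hypothetical (you would resolve the per-user key yourself; note that OGM converters are stateless singletons, so that lookup has to reach into your own context), and the ciphertext is Base64-encoded into a plain string property.

```java
import org.neo4j.ogm.typeconversion.AttributeConverter;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Encrypts a String field on save and decrypts it on load.
public class EncryptedStringConverter implements AttributeConverter<String, String> {
    private static final SecureRandom RNG = new SecureRandom();

    @Override
    public String toGraphProperty(String value) {
        if (value == null) return null;
        try {
            byte[] iv = new byte[12];
            RNG.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, UserKeys.currentKey(), new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(value.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out); // stored in the graph
        } catch (Exception e) {
            throw new IllegalStateException("encryption failed", e);
        }
    }

    @Override
    public String toEntityAttribute(String stored) {
        if (stored == null) return null;
        try {
            byte[] in = Base64.getDecoder().decode(stored);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, UserKeys.currentKey(),
                   new GCMParameterSpec(128, in, 0, 12));
            return new String(c.doFinal(in, 12, in.length - 12), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException("decryption failed", e);
        }
    }

    // Hypothetical stand-in for your own per-user key store.
    static final class UserKeys {
        private static final SecretKey KEY = generate();
        static SecretKey currentKey() { return KEY; }
        private static SecretKey generate() {
            try {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(256);
                return kg.generateKey();
            } catch (Exception e) {
                throw new IllegalStateException(e);
            }
        }
    }
}
```

On the entity, the field would then be annotated with @Convert(EncryptedStringConverter.class) so OGM applies the converter transparently on save and load.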