Is it better to store encryption keys, or to regenerate them every time?

I'm currently studying web security, purely on my own, to expand my skillset going forward in my career. I've been studying different encryption techniques, how best to employ them, etc.
The current situation I'm thinking about involves a multi-tenant database. Each schema in the database represents a different tenant. The data in each schema needs to be secured not only from outsiders, but from the other tenants as well.
To do this, I would use symmetric encryption, with a different key for each schema. But that's where my question comes in.
There are two ways to handle the keys, as I see it. One would be to have a secure location for storing the keys, like on a separate server. That would still require storing each and every key.
The second way would be to re-generate the encryption key each time it is needed. The key would be derived from a series of values related to the tenant that are stored in the database. Each time someone connects to the application, the key is re-generated by concatenating those values in the exact same order and hashing them.
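To make the second option concrete, here is a minimal sketch of what I have in mind (the field names are just placeholders, and a real implementation would presumably use a proper KDF such as HKDF or PBKDF2 rather than a bare hash):

```python
import hashlib

def derive_tenant_key(tenant_record: dict) -> bytes:
    """Rebuild a tenant's key from attributes stored with the tenant.

    The field names are hypothetical; the point is that the same values,
    concatenated in the same order and hashed, always yield the same
    256-bit key, so nothing key-shaped has to be stored anywhere.
    """
    material = "|".join([
        tenant_record["tenant_id"],
        tenant_record["schema_name"],
        tenant_record["created_at"],
    ])
    return hashlib.sha256(material.encode("utf-8")).digest()
```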
I'm wondering if the second idea is overkill, or if it is a viable option. In general, though, I'm looking for guidance on how best to design database security policies.
Thank you.

Related

Can the DynamoDB "single table design" play nicely with a Microservices architecture?

Microservices - multiple DBs/tables
When I first read about Microservices (MS), one of the most striking things was that each MS has its own DB. I think I understand this concept now and I am embracing it.
NoSQL DBs - single table
I then started researching NoSQL DBs, namely DynamoDB. I watched this deep dive video where the presenter discusses the idea of taking a relational model (say, 4 tables) and representing the data in one table. He then uses various techniques to make the data super fast to query, even at scale.
Again, I think I understand this concept.
Combining the two is where I get confused. MSs want me to split things out into separate services and therefore separate DBs (or tables), but NoSQL patterns want me to have one table...
Do these 2 design patterns/architectures not work together or am I missing something?
If you combine the two ideas, then you end up with each microservice having its own database, and each database having only one table.
If you have multiple microservices running in the same AWS account, I can see why you might be confused, because you would end up having multiple tables in DynamoDB. There are some questions I will address to try to clear things up for you.
How can I have separate databases in DynamoDB?
In DynamoDB, the notion of “separate databases” isn't very meaningful. From DynamoDB’s perspective, each table is independent of every other table (unlike a relational database). There’s no hardware you need to manage, so you can’t see whether your tables are on the same servers or not, and there’s definitely no concept of database instances.
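For example, nothing stops two services from each creating their own table in the same account; the tables share nothing beyond the account itself. A rough boto3 sketch (the table and key names are made up):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Two microservices, two completely independent tables in one account.
for table_name in ("orders-service-data", "billing-service-data"):
    dynamodb.create_table(
        TableName=table_name,
        KeySchema=[
            {"AttributeName": "pk", "KeyType": "HASH"},
            {"AttributeName": "sk", "KeyType": "RANGE"},
        ],
        AttributeDefinitions=[
            {"AttributeName": "pk", "AttributeType": "S"},
            {"AttributeName": "sk", "AttributeType": "S"},
        ],
        BillingMode="PAY_PER_REQUEST",
    )
```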
How can I have separate databases if DynamoDB doesn’t have “separate databases”?
The goal is not necessarily to have a separate database for each microservice. The goal is to make sure that the only coupling between microservices happens through the APIs provided by the microservices. Having separate databases is one way to help enforce that (so that the microservices aren't tied to a shared internal data model), but it's not the only way.
So what should I do?
Each microservice should have whatever table(s) are necessary in order for it to function. Any given table should be read and written by only one microservice. In order to achieve isolation between microservices running in the same AWS account, you should use IAM policies to make sure that each microservice accesses only its own DynamoDB table. In some cases, you might be better off putting each microservice into its own AWS account to provide an even higher level of separation between them. (An added benefit of this approach is that if one of the accounts ever gets compromised, the attacker has access to only one of the microservices.)
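As a rough illustration of that IAM-based isolation (the role name, table name, and account/region identifiers below are made up), an inline policy can limit one service's role to its own table and nothing else:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical names; substitute your own role and table.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "dynamodb:GetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem",
            "dynamodb:Query",
        ],
        # Only this service's table; nothing else in the account.
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders-service-data",
    }],
}

iam.put_role_policy(
    RoleName="orders-service-role",
    PolicyName="orders-service-table-access",
    PolicyDocument=json.dumps(policy),
)
```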

Encrypting data in SQL Server Azure database with separate key for each user's data

I'm trying to create a service based on an Azure SQL Database backend.
The service will be multi-tenant, and would contain highly sensitive information from multiple "clients" (potentially hundreds of thousands) that must be strictly isolated from one another and secured heavily against data leaks "by design".
Using so many individual databases would not be feasible, as there will be a lot of clients with very little information per client.
I have looked into the transparent encryption offered by Azure, but this would essentially encrypt the whole database as one, so it would not, in other words, protect against leaks between clients (or to anyone else) caused by development errors or hostile attacks, and it's very critical that one "client's" information never comes into anyone else's hands.
So what I would really like to achieve is to encrypt each client's data in the database with a different key, so that to decrypt any data you might manage to extract for a particular client, you would have to obtain the key from that client (from their "physical" location), which would be virtually impossible for anyone to do.
Is it clear what I mean?
Do you guys have any suggestions for me on how to manage this problem, or know of any third-party solution that allows for this functionality? Any other advice?
You're looking at protecting/isolating the tenants "by design" in a single table, so why not check out Row-Level Security? You could configure it to serve up only the applicable rows to a specific tenant.
This doesn't directly address your initial question about encrypting the data with a separate key for each tenant. If you have a separate table for each tenant, then you could do this via Always Encrypted, but this would seem to involve some complexity in key management if you're trying to handle 200k keys.
AFAIK, there isn't native SQL Server functionality to encrypt each tenant's set of rows with a distinct key, though there may be some elegant solutions that I haven't seen yet. Of course, you could do this on the app side and store the ciphertext in SQL without any issues; the trick would be the same as with the Always Encrypted approach above: managing a large number of keys.
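If you go the app-side route, a minimal sketch of the idea, assuming Python and the cryptography package (the key store shown here is a stand-in; in practice the per-tenant keys would live in something like Azure Key Vault, an HSM, or at the client's own location, as you describe):

```python
from cryptography.fernet import Fernet

# Hypothetical key store: in a real system this lives outside the
# application database (Key Vault, HSM, the client's own site, ...).
tenant_keys = {"tenant-42": Fernet.generate_key()}

def encrypt_for_tenant(tenant_id: str, plaintext: str) -> bytes:
    """Encrypt a value with the tenant's own key before it is stored in SQL."""
    return Fernet(tenant_keys[tenant_id]).encrypt(plaintext.encode("utf-8"))

def decrypt_for_tenant(tenant_id: str, ciphertext: bytes) -> str:
    """Decrypt a value fetched from SQL, using only that tenant's key."""
    return Fernet(tenant_keys[tenant_id]).decrypt(ciphertext).decode("utf-8")
```

The database then only ever stores ciphertext bytes, so a leak of one tenant's rows is useless without that tenant's key.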

Handling Confidential Data in web application

I want to handle some confidential data in one of my web applications, such that the data can't be read by the developer or the database administrator.
We can easily hide the data from the DB administrator by implementing some encryption technique, but the developer can still see the data, since he is the one creating the decryption code. I want only the end user to be able to see his own data.
I can't encrypt the data using algorithms like PBKDF2, or DB-side encryption methods like TDE and EKM, because I would still need to keep the encryption key somewhere. If I keep it on the server side or in the DB, the developer can access it and decrypt the data. If I keep it on the client side, the user can't access the information from a separate machine.
So how do I handle this situation? Thanks in advance.
You are heading the direction of Zero Knowledge Web Applications, such as implemented by SpiderOak (see also crypton). These applications typically work by deriving a key from the user's password using something like PBKDF2, and performing encryption/decryption on client side. However, there are a number of complexities to overcome to make it true zero-knowledge, and also to meet usability requirements. One could write an essay on this, but instead I suggest you start by reading the linked references. If you have any questions, let me know.
In a nutshell, the "more zero-knowledge" you want the system to be, the harder it is to realise without sacrificing usability (one example is overcoming the points made in Javascript Cryptography Considered Harmful). However, there are various tradeoffs you can make in order to make it sufficiently difficult to cheat without affecting usability too much.
I need to keep the encryption key somewhere
No, you don't. The user only has to remember it. For convenience you could save it in the browser's local storage.
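A small sketch of the derivation step, assuming Python for illustration (the iteration count and salt handling are placeholders); in a true zero-knowledge design this runs on the client, so the server never sees the password or the derived key:

```python
import os
import hashlib
from typing import Optional

def derive_user_key(password: str, salt: Optional[bytes] = None) -> tuple:
    """Derive a symmetric key from the user's password with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)      # stored alongside the ciphertext
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return key, salt                   # the key itself never leaves the client
```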

Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data is only needed a couple of times a year, and then only by two employees.
I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place.
It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need to access the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.
Do you have a better suggestion? Which method would you recommend? More importantly why?
Encryption in SQL is really only good for securing the data as it rests on the server, although that doesn't mean it is unimportant. Since you mention that a prime concern is injection attacks or the like, my concern would be whether or not the application uses a single account (SQL or otherwise) to connect to the database, which would be common for a public internet site. If you use integrated authentication, or connect to SQL using the same credentials supplied to the application, then SQL's encryption might work fine.
However, if you're using a single login, SQL's encryption is going to manage encrypting and decrypting the data for you, based on your login. So, if your application is compromised, SQL may not be able to protect that data for you, as it implicitly decrypts it and doesn't know anything is wrong.
You may want to, as you suggested, encrypt/decrypt the data in the application, and store as bytes in the database. That way you control who can decrypt the data and when (for example, you could assign the key to decrypting this data to those few employees you mentioned that are in a specific role). You could look into Microsoft's Security Application Block, or Bouncy Castle, etc. for good encryption utilities. Just be careful about how you manage the key.
Update:
You could potentially use two connection strings: one normal, with no rights to the encrypted data, and one that has the key and the rights to the data. Then have your application use the appropriate connection when the user has the rights. Of course, that's pretty kludgy.
Some practices that we follow:
Never use dynamic SQL. It's completely unnecessary.
Regardless of #1, always parameterize your queries (see the sketch after this list). This alone will get rid of SQL injection, but there are lots of other entry points.
Use the least-privileged account you can for accessing the database server. This typically means the account should NOT have the ability to run ad hoc queries (see #1). It also means that it shouldn't have access to run any DDL statements (CREATE, DROP, ...).
Don't trust the web application, much less any input received from a browser. Sanitize everything. Web App servers are cracked on a regular basis.
We also deal with a lot of PII and are extremely strict (to the point of paranoia) about how the data is accessed and by whom. Everything that comes through the server is logged. To make sure this happens we only allow access to the database through stored procedures. The procs always test whether the user account is even authorized to execute the query. Further, they log when, who, and what. We do not have any mass delete queries at all.
Our IDs are completely non-guessable. This is for every table in the system.
We do not use ORM tools. They typically require way too much access to the database server to work right and we just aren't comfortable with that.
We do background checks on the DBA's and our other production support people every 6 months. Access to production is tightly controlled and actively monitored. We don't allow contractors access to production for any reason and everything is code reviewed prior to being allowed into the code base.
For the encrypted data, allow specific users access to the decryption keys. Change those keys often, as in once a month if possible.
ALL data transfer between machines is encrypted. Kerberos between servers and desktops; SSL between IIS and browsers.
Recognize and architect for the fact that a LOT of data theft is from internal employees. Either by actively hacking the system, actively granting unauthorized users access, or passively by installing crap (like IE 6) on their machines. Guess how Google got hacked.
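As a small illustration of points 1 and 2 in the list above (shown with Python and pyodbc against SQL Server; the connection string, table, and column names are made up):

```python
import pyodbc

def get_customer(conn, customer_id: int):
    # The value is bound as a parameter and never concatenated into the SQL
    # text, so a malicious customer_id cannot change the query's structure.
    sql = "SELECT customer_id, full_name FROM dbo.Customers WHERE customer_id = ?"
    cursor = conn.cursor()
    cursor.execute(sql, customer_id)
    row = cursor.fetchone()
    cursor.close()
    return row

# Illustrative connection; real credentials belong in configuration.
conn = pyodbc.connect("DSN=AppDatabase")
print(get_customer(conn, 42))
```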
The main question in your situation is identifying all of the parts that need access to the PII.
Things like: how does the information get into your system? The main thing here is: where does the initial encryption key get stored?
Your issue is key management. No matter how many ways you turn the problem around, you'll end up with one simple elementary fact: the service process needs access to the keys to encrypt the data (it is important that this is a background service, because that implies it cannot obtain the root key of the encryption hierarchy from a human-entered password whenever it is needed). Therefore compromise of the process leads to compromise of the key(s). There are ways to obfuscate this issue, but no ways to truly hide it. To put this into perspective though, only a compromise of the SQL Server process itself could expose this problem, which is a significantly higher bar than a SQL injection vulnerability.
You are trying to circumvent this problem by relying on the public key/private key asymmetry and using the public key to encrypt the data so that it can only be decrypted by the owner of the private key. That way the service does not need access to the private key, so if it is compromised it cannot be used to decrypt the data. Unfortunately this works only in theory. In the real world, RSA encryption is so slow that it cannot be used for bulk data. This is why the common RSA-based encryption scheme uses a symmetric key to encrypt the data and encrypts the symmetric key with the RSA key.
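That hybrid construction looks roughly like this (a sketch with Python's cryptography package; the key sizes and padding choices are illustrative): a fresh symmetric key encrypts the bulk data, and RSA only ever encrypts that small key.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The web tier would hold only the public key; the private key stays offline.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def hybrid_encrypt(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)   # per-record symmetric key
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(                # RSA protects only the key
        data_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext
```

Decryption then happens wherever the private key lives, e.g. on the offline fat client the question describes.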
My recommendation would be to stick with tried and tested approaches. Use a symmetric key to encrypt the data. Use an RSA key to encrypt the symmetric key(s). Have SQL Server own and control the RSA private key. Use the permission hierarchy to protect the RSA private key (really, there isn't anything better you could do). Use module signing to grant access to the encryption procedures. This way the ASP service itself does not even have the privileges to encrypt the data; it can only do so by means of the signed encryption procedure. It would take significant 'creative' administration/coding mistakes from your colleagues to compromise such a scheme, significantly more than a mere 'operator error'. A system administrator would have an easier path, but any solution that is designed to circumvent a sysadmin is doomed.

Is it possible to "measure" the usage (e.g. in MBs) per user of an SQL Server database in web-farm conditions?

I have an ASP.NET web application hosted in a web-farm environment, and I need a way to be able to indicate how much a user is using my database.
There are several reasons for this, and I mention a couple. First, because I pay for the database space per month, I want to have a reasonable way to charge my users. Second, it would be nice to know (again, on a per-user basis) when to inform the user to upgrade his subscription.
I don't have enough experience in RDBMS, I come from a different background (windows applications, graphics), and so I can't figure out if this is possible, and if it is, how this can be handled: through SQL or ASP.NET (some tool, library, etc.).
If you, also, have some other idea, I'd like to hear what you suggest.
Any other advice on this subject, including good places to learn, would also be appreciated.
It depends on your schema. If you use a database-per-user multi-tenant schema then it is very easy: the size of the database is the size consumed, which is really easy to measure and, more importantly, enforce. If you use a shared database schema then you'll need to keep track, in each table, of which rows belong to which user and do the accounting yourself. Both measurement and enforcement are more difficult and there is no general answer; you will have to write the code to account for the bytes used and to enforce any per-user maximum size constraint.
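For the shared-schema case, a rough sketch of what that accounting might look like (assuming Python and pyodbc; the table and column names are hypothetical, and the totals are approximate since they ignore indexes and row overhead):

```python
import pyodbc

# Sums the approximate row bytes per tenant in one shared table.
USAGE_SQL = """
    SELECT tenant_id,
           SUM(DATALENGTH(payload) + DATALENGTH(title)) AS approx_bytes
    FROM dbo.Documents
    GROUP BY tenant_id
"""

def usage_per_tenant(conn):
    cursor = conn.cursor()
    cursor.execute(USAGE_SQL)
    return {row.tenant_id: row.approx_bytes for row in cursor.fetchall()}
```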
