Here at the job, we're working on an ASP.NET MVC application for a proof of concept. Some of the operations that the application performs require transmission of credentials, so we're storing those creds in an encrypted section of the web.config. The difficulty we're having is that when one developer encrypts the data and commits it, the next developer who updates their local copy and tries to use that web.config gets exceptions, because their machine can't decrypt the section.
How ought we to handle this?
In the past, I've used the machine.config for sensitive credentials, e.g. connection strings and such. It's located at C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config.
This allows you to omit the credentials from commits altogether. Just make sure each developer and/or server has its own machine.config with the required credential settings.
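For example (a minimal sketch; "AppDb" is a hypothetical entry name), each machine carries its own connectionStrings entry in machine.config, and because web.config inherits from machine.config, the application reads it the usual way without knowing where the value lives:

```csharp
// Reads the "AppDb" connection string from the configuration hierarchy;
// the entry can live in machine.config instead of web.config, since
// web.config inherits from machine.config.
using System.Configuration;

public static class DbConfig
{
    public static string GetConnectionString() =>
        ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
}
```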
I'm assuming that you are using aspnet_regiis.exe to encrypt the section. If that is the case, the reason you are having problems is that the keys used for encryption/decryption are different on each machine.
One option is to use the same keys on all of the machines; from a configuration perspective this is similar to a web-farm setup, so you can use the information in this SO question. (Typically you'd create a common RSA key container, export it with aspnet_regiis -px ... -pri, and import it on each developer's machine with aspnet_regiis -pi.)
Alternatively, since there's an inherent assumption that the developers have access to the credentials anyway, leave the section decrypted until the app is deployed to the production server and encrypt it then. This is a common solution when the username/password are specified in the web.config as part of the connection string for database connections: the connection string is updated to point at the production DB server as part of the deployment process, just prior to encryption.
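If you take that route, the encryption step can be done on the server with aspnet_regiis -pef "connectionStrings" <app path>, or programmatically; a minimal sketch of the programmatic variant (the "/" application path is an assumption about your site layout):

```csharp
// Encrypts the connectionStrings section in-place on the target machine,
// using the same RSA provider aspnet_regiis uses. Run once, post-deployment.
using System.Configuration;
using System.Web.Configuration;

public static class ConfigProtector
{
    public static void ProtectConnectionStrings()
    {
        Configuration config = WebConfigurationManager.OpenWebConfiguration("/");
        ConfigurationSection section = config.GetSection("connectionStrings");

        if (section != null && !section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");
            config.Save();
        }
    }
}
```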
First of all, I'm not sure why you chose this option when there are much better ways to handle keys and secrets under DevOps best practices; this seems like the classic approach. Also, at debug time any developer can peek at the actual value, or spit it out into a log.
Anyway, taking the entire delivery life cycle as the context for this problem, here is what I would do to protect keys and secrets:
Do not store anything, even encrypted keys and secrets, that the team doesn't need in order to run the app locally; the exception is dev/local-environment values.
In web.config, keep only local or dev-environment keys and secrets.
In the release transformation, strip out all keys and secrets so dev values can't accidentally be used in other environments.
Use release-time variable replacement, which virtually every deployment tool supports; Azure DevOps/TFS, for example, offers it at the release-definition level, the stage level, via library variable groups, or, better still, via a key vault store with software+hardware encryption options (see the sketch after this list).
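To illustrate what those replacement tasks do, here is a toy sketch of the idea (the #{Name}# token syntax mirrors common "replace tokens" tasks; the file path and the source of the values are assumptions, since in a real pipeline they come from the variable group or key vault):

```csharp
// Toy version of a release-pipeline "replace tokens" step: substitutes
// #{Name}# placeholders in a config file with values supplied at release time.
using System.Collections.Generic;
using System.IO;

public static class TokenReplacer
{
    public static void Replace(string configPath, IDictionary<string, string> values)
    {
        string text = File.ReadAllText(configPath);
        foreach (KeyValuePair<string, string> pair in values)
        {
            text = text.Replace("#{" + pair.Key + "}#", pair.Value);
        }
        File.WriteAllText(configPath, text);
    }
}
```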
Hope this helps in your design approach at least.
In my .NET Core app (.NET 5), I'd like to store my environment variable values encrypted. This means I need to decrypt the values when they are loaded via EnvironmentVariablesConfigurationProvider. For example,
SET Product_SecretKeyEnc=#$SELOW#RLJLSKDFJ
In the product I'd like this realized as a configuration value "SecretKey" with a value of "DecryptedString".
So, I'd like to translate the key and the value during bootstrap.
This application is hosted in AWS Elastic Beanstalk, which does not have integration with AWS Secrets Manager. The AWS EB docs recommend storing configuration in environment variables, but I understand these are not secured. My intent in encrypting the environment variables is to prevent someone who gets hold of a dump from extracting anything useful.
Note: Andrew Lock does have a great blog post on using AWS Secrets Manager from .net core. But I thought the encrypted environment variables would suffice.
I agree with the commenter that you may be XY'ing this a bit, and that storing encrypted environment variables is not a good idea.
That said, if you persist, then what you need to do is implement your own provider derived from EnvironmentVariablesConfigurationProvider that knows how to identify which environment variables are encrypted and knows how to decrypt them.
You'd then add it to the set of configuration providers in the usual way.
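A minimal sketch of that provider, assuming the convention from the question (an "Enc" suffix marks encrypted variables) and a placeholder Decrypt routine you would swap for your real cipher:

```csharp
using System;
using System.Linq;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.EnvironmentVariables;

// Loads environment variables as usual, then rewrites any key ending in
// "Enc" to its decrypted key/value pair ("SecretKeyEnc" -> "SecretKey").
public class DecryptingEnvVarsConfigurationProvider : EnvironmentVariablesConfigurationProvider
{
    public DecryptingEnvVarsConfigurationProvider(string prefix) : base(prefix) { }

    public override void Load()
    {
        base.Load(); // populate Data from the environment

        foreach (string key in Data.Keys.ToList())
        {
            if (key.EndsWith("Enc", StringComparison.OrdinalIgnoreCase))
            {
                string plainKey = key.Substring(0, key.Length - "Enc".Length);
                Data[plainKey] = Decrypt(Data[key]);
                Data.Remove(key);
            }
        }
    }

    // Placeholder: substitute whatever decryption scheme you actually use.
    private static string Decrypt(string cipherText) =>
        throw new NotImplementedException();
}

public class DecryptingEnvVarsConfigurationSource : IConfigurationSource
{
    public string Prefix { get; set; }

    public IConfigurationProvider Build(IConfigurationBuilder builder) =>
        new DecryptingEnvVarsConfigurationProvider(Prefix);
}
```

You'd register it in ConfigureAppConfiguration, e.g. config.Add(new DecryptingEnvVarsConfigurationSource { Prefix = "Product_" }); note the base provider strips the prefix, so Product_SecretKeyEnc surfaces as SecretKey.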
We recently stored our project's connection strings in Azure Key Vault and retrieve them with the Azure Key Vault config builder for our local builds. This lets us get rid of the connection strings in our source-control repo. A fellow dev told me I should look into encrypting with aspnet_regiis, as it's the "de facto standard" for web.config secret encryption. I can't really find any docs that compare these two techniques. Is it possible, or would it be redundant, to use both? Can they be used in tandem?
If you are using Azure Key Vault today, then I would continue to do so, as that is a more future-proof approach than encrypting things in web.config. Especially if you later want to migrate to .NET Core, you can still keep using AKV.
Encrypting things in web.config is just a pain to administer. With AKV you can version your secrets and better control who has access to what.
Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data is needed only a couple of times a year, and then by only two employees.
I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place.
It seems to me the better method would be to asymmetrically encrypt the data in the web application, store the private key offline, and have a fat client that the two employees can run the few times a year they need the restricted data, so the data is decrypted only on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.
Do you have a better suggestion? Which method would you recommend? More importantly why?
Encryption in SQL Server is really only good for securing the data as it rests on the server, although that doesn't mean it is unimportant. Since you mention that a prime concern is injection attacks and the like, my concern would be whether or not the application uses a single account (SQL or otherwise) to connect to the database, which would be common for a public internet site. If you use integrated authentication, or connect to SQL Server using the same credentials supplied to the application, then SQL Server's encryption might work fine.
However, if you're using a single login, SQL's encryption is going to manage encrypting and decrypting the data for you, based on your login. So, if your application is compromised, SQL may not be able to protect that data for you, as it implicitly decrypts it and doesn't know anything is wrong.
You may want to, as you suggested, encrypt/decrypt the data in the application, and store as bytes in the database. That way you control who can decrypt the data and when (for example, you could assign the key to decrypting this data to those few employees you mentioned that are in a specific role). You could look into Microsoft's Security Application Block, or Bouncy Castle, etc. for good encryption utilities. Just be careful about how you manage the key.
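For illustration, a minimal sketch using the framework's built-in AES support (key management is deliberately out of scope; per the caveat above, the key must come from a store you control):

```csharp
// Encrypts/decrypts byte blobs with AES; the random IV is prepended to the
// ciphertext so decryption can recover it. Key handling is up to the caller.
using System;
using System.IO;
using System.Security.Cryptography;

public static class PiiCipher
{
    public static byte[] Encrypt(byte[] plaintext, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV();

            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length); // prepend IV
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(plaintext, 0, plaintext.Length);
                }
                return ms.ToArray();
            }
        }
    }

    public static byte[] Decrypt(byte[] blob, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            byte[] iv = new byte[aes.BlockSize / 8];
            Array.Copy(blob, 0, iv, 0, iv.Length);
            aes.IV = iv;

            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(blob, iv.Length, blob.Length - iv.Length);
                }
                return ms.ToArray();
            }
        }
    }
}
```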
Update:
You could potentially use two connection strings: one normal, with no rights to the encrypted data, and one that has the key and the rights to the data. Then have your application use the appropriate connection when the user has the rights. Of course, that's pretty kludgy.
Some practices that we follow:
Never use dynamic SQL. It's completely unnecessary.
Regardless of #1, always parameterize your queries (see the sketch after this list). This alone will get rid of SQL injection, but there are lots of other entry points.
Use the least-privileged account you can for accessing the database server. This typically means the account should NOT have the ability to run ad hoc queries (see #1). It also means it shouldn't have access to run any DDL statements (CREATE, DROP, ...).
Don't trust the web application, much less any input received from a browser. Sanitize everything. Web App servers are cracked on a regular basis.
We also deal with a lot of PII and are extremely strict (to the point of paranoia) on how the data is accessed and by whom. Everything that comes through the server is logged. To make sure this happens we only allow access to the database through stored procedures. The procs always test to see if the user account is even authorized to execute the query. Further they log when, who, and what. We do not have any mass delete queries at all.
Our IDs are completely non-guessable. This is for every table in the system.
We do not use ORM tools. They typically require way too much access to the database server to work right and we just aren't comfortable with that.
We do background checks on the DBA's and our other production support people every 6 months. Access to production is tightly controlled and actively monitored. We don't allow contractors access to production for any reason and everything is code reviewed prior to being allowed into the code base.
For the encrypted data, allow specific users access to the decryption keys. Change those keys often, as in once a month if possible.
ALL data transfer between machines is encrypted. Kerberos between servers and desktops; SSL between IIS and browsers.
Recognize and architect for the fact that a LOT of data theft is from internal employees. Either by actively hacking the system, actively granting unauthorized users access, or passively by installing crap (like IE 6) on their machines. Guess how Google got hacked.
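As referenced in #2, a minimal sketch of a parameterized query (the table, column, and connection string are hypothetical):

```csharp
// The user-supplied value travels as a typed parameter, never as SQL text,
// so it cannot change the structure of the statement.
using System.Data.SqlClient;

public static class CustomerQueries
{
    public static int CountByLastName(string connectionString, string lastName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.Customers WHERE LastName = @lastName", conn))
        {
            cmd.Parameters.AddWithValue("@lastName", lastName);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```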
The main question in your situation is identifying all of the parts that need access to the PII.
Things like how does the information get into your system? The main thing here is where does the initial encryption key get stored?
Your issue is key management. No matter how many ways you turn the problem around, you'll end up with one simple, elementary fact: the service process needs access to the keys to encrypt the data (it matters that this is a background service, because that implies it cannot obtain the root key of the encryption hierarchy from a human-entered password whenever it is needed). Therefore compromise of the process leads to compromise of the key(s). There are ways to obfuscate this issue, but no ways to truly hide it. To put this into perspective, though: only a compromise of the SQL Server process itself could expose this problem, which is a significantly higher bar than a SQL injection vulnerability.
You are trying to circumvent this problem by relying on public/private key asymmetry: use the public key to encrypt the data so that it can only be decrypted by the owner of the private key. The service then does not need access to the private key, so if compromised it cannot be used to decrypt the data. Unfortunately this works only in theory. In the real world, RSA encryption is so slow that it cannot be used for bulk data. This is why common RSA-based encryption schemes use a symmetric key to encrypt the data and encrypt the symmetric key with the RSA key.
My recommendation would be to stick with tried and tested approaches. Use a symmetric key to encrypt the data. Use an RSA key to encrypt the symmetric key(s). Have SQL Server own and control the RSA private key. Use the permission hierarchy to protect the RSA private key (really, there isn't anything better you could do). Use module signing to grant access to the encryption procedures. This way the ASP.NET service does not even have the privileges to encrypt the data itself; it can only do so by means of the signed encryption procedure. It would take significant 'creative' administration/coding mistakes from your colleagues to compromise such a scheme, significantly more than a mere 'operator error'. A system administrator would have an easier path, but any solution that is designed to circumvent a sysadmin is doomed.
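From the application's side such a scheme is invisible: the app's login holds EXECUTE on the signed procedure and nothing else; the procedure's signature is what grants the rights to open the symmetric key. A minimal sketch of the call (the procedure and parameter names are hypothetical):

```csharp
// The application cannot touch the tables or keys directly; it can only
// call the signed stored procedure, which encrypts and logs server-side.
using System.Data;
using System.Data.SqlClient;

public static class RestrictedData
{
    public static void Save(string connectionString, int customerId, byte[] pii)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.AddRestrictedData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CustomerId", customerId);
            cmd.Parameters.AddWithValue("@Data", pii);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```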
I want to make a secure website using ASP.NET, but when I publish it, the host's administrator can see all the data stored in my database (SQL Server). I want to hide my data and code from the host's administrator too. Are there any procedures to do that? Please point me to a good host I can use that will give me full administrative power over my website (where the domain owner also cannot access my databases and files). Thanks for your suggestions.
Have you looked at: SQL Server 2008 Transparent Data Encryption?
Also:
Understanding Transparent Data Encryption (TDE)
Have you considered using a Virtual Private Server? I believe with a VPS you should be able to have complete control over who has access to what at the operating system level.
You can encrypt data, but there's no way to protect code (especially not web-facing code). Frankly, though, the question doesn't make sense: if you have trust issues with someone you have an implicit trust relationship with, then you need to find a different provider.
If you don't trust anyone (personal psychology not withstanding) you need to host it yourself.
Addendum: look at it from the other way around: why would you host something for someone without being able to inspect it for security and even legal concerns?
If you want total security there's quite a few things you need to implement:
As others have said, you need physical encryption of your database. Merely blocking the host from accessing the database is not enough, because they have access to the physical database files and can use tools on them to read the data directly.
You will want to use web.config encryption
Walkthrough: Encrypting Configuration Information Using Protected Configuration
How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI
This is rather questionable security, however: since it requires a key container to be installed on the server, it would arguably be achievable for a nefarious administrator to copy your key and then use it to manually decrypt your web.config. To protect yourself further than that, you would need to create a secured web service (secured both for message transport, via SSL, and for the message itself, with the content encrypted inside the SSL transport tunnel; see WCF services security) that your application constantly talks to for protected data, such as the login users for the SQL Server database, and then apply rotating passwords so that an intercepted password might no longer be valid once it has been rotated.
After this point you will need to use source-code protection that includes decompilation protection and code obfuscation. This adds a layer of protection by preventing someone from viewing the source of your application directly for information about how else you protect it (though this will only go so far against a sophisticated cracker).
All in all, at this point you've achieved nearly the highest level of code/data security you can inside a hosted environment, but this goes back to the core problem: if you are concerned that the system operator is nefarious, then even all of these protections can still be beaten if the admin is skilled enough and has enough motivation.
If you need protection above and beyond this, you would really want to look at colocation hosting, or at the very least dedicated-server hosting, which would allow you to apply encryption at the operating-system level. This protects you from the most effective brute-strength attacks, which involve simply ripping the hard drives out of a machine and freezing the RAM (spraying it with an inverted air duster) to steal the encryption keys from the RAM itself after it is disconnected from the server, a cold-boot attack.
Having security that makes you immune (or nearly immune) to this kind of attack basically requires using TrueCrypt for native encryption of your file system, configured not to cache the keys/key files in memory. At that point the only remaining piece of security is to host at a reputable data center like ThePlanet or Rackspace, with 24/7 electronic surveillance, so that it would be nearly impossible for a nefarious employee to compromise your server without video recordings of it occurring.
Remove the BUILTIN\Administrators group from the sysadmin role. Obviously this can only be done by a server admin, but in a proper environment it is possible for domain admins to only be able to maintain servers and not see data.
In SQL Server 2008, the default is to not include this group.
As for code, you can obfuscate your DLLs, but there is no complete way to hide code from someone who can access the filesystem.
You won't be able to hide the source code, but you do have some options to make it less inviting to admins:
obfuscate - deter people from knowing what is happening syntactically. While they can follow the code and eventually figure it out (if they want), it requires more effort. After all, with enough effort and know-how, anything can be cracked.
encrypt - because the web page needs to be decrypted by the server, the server needs to have a key to decrypt it. This key needs to be stored in a file that the server (and thus the admin) has access to. Using some obfuscation, you can try to hide this (again), but anywhere there is symmetric encryption, a superuser has the ability to get at the key.
Note:
Any time something is encrypted, it will most likely require a decrypt to use/view. This process will have a negative performance impact.
When things are encrypted, especially from an admin's perspective, it is essentially an invitation for alarm; it creates curiosity. If it's data, that's one thing, but code should not need to be encrypted where there is trust. It's like saying that you have something to hide, generally meaning something "bad" that you don't want found out.
We are in the process of writing a native Windows app (MFC) that will be uploading some data to our web app. The Windows app will allow the user to log in, and after that it will periodically upload some data to our web app. The upload will be done via a simple HTTP POST to our web app. The concern I'm having is how we can ensure that the upload actually came from our app, and not from curl or something like that. I guess we're looking at some kind of public/private key encryption here. But I'm not sure if we can somehow just embed a public key in our win app executable and be done with it. Or would that public key be too easy to extract and use outside of our app?
Anyway, we're building both sides (client and server), so pretty much anything is an option, but it has to work over HTTP(S). However, we do not control the execution environment of the win (client) app, and the user running the app on his/her system is the only one who stands to gain by gaming the system.
Ultimately, it's not possible to prove the identity of an application this way when it's running on a machine you don't own. You could embed keys, play with hashes and checksums, but at the end of the day, anything that relies on code running on somebody else's machine can be faked. Keys can be extracted, code can be reverse-engineered- it's all security through obscurity.
Spend your time working on validation and data cleanup, and if you really want to secure something, secure the end-user with a client certificate. Anything else is just a waste of time and a false sense of security.
About the best you could do would be to use HTTPS with client certificates. Presumably with WinHTTP's interface.
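In the native MFC client you'd attach the certificate via WinHTTP options; for flavor, here is what the same idea looks like from .NET on the client side (a minimal sketch; the .pfx file, password, and URL are placeholders):

```csharp
// Attaches a client certificate so the server can authenticate the caller
// during the TLS handshake. "client.pfx" and the URL are placeholders.
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

public static class Uploader
{
    public static async Task<HttpResponseMessage> PostAsync(HttpContent content)
    {
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(new X509Certificate2("client.pfx", "pfx-password"));

        using (var client = new HttpClient(handler))
        {
            return await client.PostAsync("https://example.com/upload", content);
        }
    }
}
```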
But I'm not sure if we can somehow just embed a public key in our win app executable and be done with it.
If the client is to be identifying itself to the server, it would have to be the private key embedded.
Or would that be too easy to extract and use outside of our app?
If you don't control the client app's execution environment, anything your app can do can be analysed, automated and reproduced by an attacker that does control that environment.
You can put obfuscatory layers around the communications procedure if you must, but you'll never fix the problem. Multiplayer games have been trying to do this for years to combat cheating, but in the end it's just an obfuscation arms race that can never be won. Blizzard have way more resources than you, and they can't manage it either.
You have no control over the binaries once your app is distributed. If all the signing and encryption logic resides in your executable, it can be extracted. Clever coders will figure out the code and build interoperable systems when there's enough motivation to do so. That's why DRM doesn't work.
A complex system tying a key to the MAC address of a PC for instance is sure to fail.
Don't trust a particular executable or system; trust your users. Entrust each of them with a private key file protected by a passphrase, and explain to them how that key identifies them as submitters of content on your service.
Since you're controlling the client, you might as well embed the key in the application and make sure the users don't have read access to the application image. You'll need to separate the logic into two tiers: one that the user runs, and one that connects to the service over HTTP(S), since the user will always have read access to an application he's running.
If I understand correctly, the data is sent automatically after the user logs on - this sounds like only the service part is needed.