Encrypt/Decrypt across machines is a no-no

I'm using an identical call to "CryptUnprotectData" (exposed from Crypt32.dll) between XP and Vista. Works fine in XP. I get the following exception when I run in Vista:
"Decryption failed. Key not valid for use in specified state."
As expected, the versions of crypt32.dll are different between XP and Vista (w/XP actually having the more recent, possibly as a result of SP3 or some other update).
More specifically, I'm encrypting data, putting it in the registry, then reading and decrypting using "CryptUnprotectData". UAC is turned off.
Anyone seen this one before?

The CryptUnprotectData function documentation states that it usually only works when the user has the same logon credentials as the encrypter.
This suggests to me that maybe the key is tied to the user's current token. Since you mention Vista, this makes me think UAC and restricted tokens.
Can you show us some code? Can you give us more information about what you're doing with the data -- i.e. are you moving it between processes, or users, or computers?

Nice. Hopefully this is my bone-head move of the week! ;-)
This suggests to me that maybe the key is tied to the user's current token.
That was it. Turns out I was using encrypted data from another machine (the XP one) and trying to decrypt on the Vista machine.
As the MSDN documentation states:
Usually, only a user with the same logon credentials as the encrypter can decrypt the data. In addition, the encryption and decryption must be done on the same computer.
Once I re-encrypted the data on the Vista machine, decryption works as expected.
Thanks.
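For anyone who lands here later: if you're on .NET, the same behaviour is easy to reproduce with the managed DPAPI wrapper instead of P/Invoking Crypt32 directly. A minimal sketch (ProtectedData lives in System.Security.dll; the secret and scope here are just placeholders):

using System;
using System.Security.Cryptography;
using System.Text;

class DpapiRoundTrip
{
    static void Main()
    {
        byte[] plain = Encoding.UTF8.GetBytes("some secret to stash in the registry");

        // Protect() ties the blob to the current user (CurrentUser) or to this
        // machine (LocalMachine); either way it cannot be unprotected on a
        // different computer, which is what the MSDN quote above is saying.
        byte[] cipher = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);

        // Succeeds here; a blob produced on another machine (or under another
        // account) fails with "Key not valid for use in specified state."
        byte[] roundTrip = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);
        Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
    }
}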

Related

Scheduled process - providing key for encrypted config

I have developed a tool that loads a configuration file at runtime. Some of the values are encrypted with an AES key.
The tool will be scheduled to run on a regular basis from a remote machine. What is an acceptable way to provide the decryption key to the program? It has a command-line interface which I can pass it through. I currently see three options:
1. Provide the full key via the CLI, meaning the key is available in the clear at the OS config level (i.e. in the cron job).
2. Hardcode the key into the binary via source code. Not a good idea for a number of reasons (decompiling, and less portable).
3. Use a combination of 1 and 2, i.e. have a base key in the exe and then accept a partial key via the CLI. This way I can use the same build for multiple machines, but it doesn't solve the problem of decompiling the exe.
It is worth noting that I am not too worried about decompiling the exe to get the key. If it were a concern, I'm sure there are ways I could address it via obfuscation, etc.
Ultimately, if I were really security-conscious, I wouldn't be storing the password anywhere.
I'd like to hear what is considered best practice. Thanks.
I have added the Go tag because the tool is written in Go, just in case there is a magical Go package that might help; other than that, this question is not really specific to a technology.
UPDATE: I am trying to protect the key from external attackers, not the regular physical user of the machine.
Best practice for this kind of system is one of two things:
A sysadmin authenticates during startup, providing a password at the console. This is often extremely inconvenient, but is pretty easy to implement.
A hardware device is used to hold the credential. The most common and effective are called HSMs (Hardware Security Modules). They come in all kinds of formats, from USB keys to plug-in boards to external rack-mounted devices. HSMs come with their own API that you would need to interface with. The main feature of an HSM is that it never divulges its key, and it has physical safeguards to protect against the key being extracted. Your app sends it some data and it signs the data and returns it. That proves that the hardware module is connected to this machine.
For specific OSes, you can make use of the local secure credential storage, which can provide some reasonable protection. Windows and OS X in particular have these, generally keyed to some credential the admin is required to type at startup. I'm not aware of a particularly effective one for Linux, and in general this is pretty inconvenient in a server setting (because of manual sysadmin intervention).
In every case that I've worked on, an HSM was the best solution in the end. For simple uses (like starting an application), you can get them for a few hundred bucks. For a little more "roll-your-own," I've seen them as cheap as $50. (I'm not reviewing these particularly. I've mostly worked with a bit more expensive ones, but the basic idea is the same.)
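To make the first option concrete: nothing secret lives in the cron entry or on disk; an operator types a passphrase when the process starts, and the AES key is derived from it and held only in memory. A rough sketch (in C# purely for illustration, since the question is really language-agnostic; the salt handling and iteration count are assumptions, not recommendations):

using System;
using System.Security.Cryptography;
using System.Text;

class StartupKey
{
    static void Main()
    {
        Console.Write("Decryption passphrase: ");
        string passphrase = Console.ReadLine();   // ideally read without echo

        // PBKDF2: the salt must match the one used when the config values were
        // encrypted (store a random salt alongside the ciphertext, not this placeholder).
        byte[] salt = Encoding.UTF8.GetBytes("per-config-salt-placeholder");
        using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 100000))
        {
            byte[] aesKey = kdf.GetBytes(32);     // 256-bit AES key, held in memory only
            // ... decrypt the AES-encrypted config values with aesKey ...
            Array.Clear(aesKey, 0, aesKey.Length);
        }
    }
}

The HSM route replaces the console prompt entirely: the key never leaves the device, and the app just asks the device to decrypt or sign on its behalf.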

System.Security.Cryptography.CryptographicException: Not enough storage is available to process this command

Our asp.net app was working fine, then the DBA decided to encrypt the db password in the web.config. Now I'm getting this error:
System.Security.Cryptography.CryptographicException: Not enough storage is available to process this command.
There is only one other article on SO that has this error listed and the user resorted to a refactor instead of identifying a solution.
The weird thing is that we have plenty of space (RAM, HDD, etc.). Even weirder, three of the people on my team don't have this problem (with the exact same URL). Another guy had it yesterday, but it works for him today.
I'm worried about when we move this to prod, especially if this needs some kind of incremental storage or permissions for EACH user.
Edit: The other error that seems to show up is:
"Failed to decrypt using provider 'RsaProtectedConfigurationProvider'"
It turns out that this is a generic error message that happens whenever the server has trouble decrypting with RSA. It is not very helpful: misleading at worst, and very vague at best.
For us, the error was only happening for me because our dev servers are load-balanced (which I didn't know till today). The encryption key was generated on one machine (server1), but the encrypted web.config was deployed to both servers. When I got load-balanced onto server2, I saw this error (as would anyone else hitting server2).
The solution is to export the private key from server1 and install it onto server2.

How to keep multiple connectionString passwords safe, separate, and easy to deploy?

I know there are plenty of questions here already about this topic (I've read through as many as I could find), but I haven't yet been able to figure out how best to satisfy my particular criteria. Here are the goals:
The ASP.NET application will run on a few different web servers, including localhost workstations for development. This means encrypting web.config using a machine key is out. Each "type" or environment of web server (dev, test, prod) has its own corresponding database (dev, test, prod). We want to separate these connection strings so that a developer working on the "dev" code is not able to see any "prod" connection string passwords, nor allow these production passwords to ever get deployed to the wrong server or committed to SVN.
The application should be able to decide which connection string to attempt to use based on the server name (using a switch statement; a rough sketch appears at the end of this question). For example, "localhost" and "dev.example.com" should know to use the DevDatabaseConnectionString, "test.example.com" will use the TestDatabaseConnectionString, and "www.example.com" will use the ProdDatabaseConnectionString. The reason for this is to limit the chance of any deployment accidents, where the wrong type of web server connects to the wrong database.
Ideally, the exact same executables and web.config should be able to run on any of these environments, without needing to tailor or configure each environment separately every time that we deploy (something that seems like it would be easy to forget/mess up one day during a deployment, which is why we moved away from having just one connection string that has to be changed on each target). Deployment is currently accomplished via FTP. Update: Using "build events" and revising our deployment procedures is probably not a bad idea.
We will not have command-line access to the production web server. This means using aspnet_regiis.exe to encrypt the web.config is out. Update: We can do this programmatically so this point is moot.
We would prefer to not have to recompile the application whenever a password changes, so using web.config (or db.config or whatever) seems to make the most sense.
A developer should not be able to get to the production database password. If a developer checks the source code out onto their localhost laptop (which would determine that it should be using the DevDatabaseConnectionString, remember?) and the laptop gets lost or stolen, it should not be possible to get at the other connection strings. Thus, having a single RSA private key to un-encrypt all three passwords cannot be considered. (Contrary to #3 above, it does seem like we'd need to have three separate key files if we went this route; these could be installed once per machine, and should the wrong key file get deployed to the wrong server, the worst that should happen is that the app can't decrypt anything---and not allow the wrong host to access the wrong database!)
UPDATE/ADDENDUM: The app has several separate web-facing components to it: a classic ASMX Web Services project, an ASPX Web Forms app, and a newer MVC app. In order to not go mad having the same connection string configured in each of these separate projects for each separate environment, it would be nice to have this only appear in one place. (Probably in our DAL class library or in a single linked config file.)
I know this is probably a subjective question (asking for a "best" way to do something), but given the criteria I've mentioned, I'm hoping that a single best answer will indeed arise.
Thank you!
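Addendum: to make the server-name criterion concrete, the switch I have in mind is roughly the following (the hostnames and connection string names are the examples from above; the helper itself is illustrative only):

public static class DbConfig
{
    // Pick a connection string by the host name the request came in on.
    public static string Current(string host)
    {
        switch (host.ToLowerInvariant())
        {
            case "localhost":
            case "dev.example.com":
                return Get("DevDatabaseConnectionString");
            case "test.example.com":
                return Get("TestDatabaseConnectionString");
            case "www.example.com":
                return Get("ProdDatabaseConnectionString");
            default:
                // Fail closed rather than guessing a database.
                throw new System.InvalidOperationException("Unrecognized host: " + host);
        }
    }

    static string Get(string name)
    {
        return System.Configuration.ConfigurationManager.ConnectionStrings[name].ConnectionString;
    }
}

In ASP.NET this would be called with something like HttpContext.Current.Request.Url.Host, so the wrong kind of server can never silently pick up the wrong database.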
Integrated authentication/Windows authentication is a good option. No passwords, at least none that need be stored in the web.config. In fact, it's the option I prefer unless admins have explicitly taken it away from me.
Personally, for anything that varies by machine (which isn't just the connection string) I put an external reference in the web.config using this technique: http://www.devx.com/vb2themax/Tip/18880
When I throw code over the fence to the production server admin, he gets a new web.config, but doesn't get the external file-- he uses the one he had earlier.
You can have multiple web servers with the same encryption key. You would do this in machine.config; just ensure each key is the same.
One common practice is to store the first connection string encrypted somewhere on the machine, such as the registry. After the server connects using that string, it then retrieves all other connection strings, which are managed in the database (also encrypted). That way connection strings can be dynamically generated based on authorization requirements (requestor, application being used, etc.); for example, the same tables can be accessed with different rights depending on context and users/groups.
I believe this scenario addresses all (or most?) of your points.
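A rough sketch of that bootstrap step, assuming Windows, DPAPI for the protection, and a registry path/value name I've made up for illustration:

using System.Security.Cryptography;
using System.Text;
using Microsoft.Win32;

static class BootstrapConnection
{
    // Reads the DPAPI-protected "first" connection string from the registry.
    public static string Read()
    {
        byte[] cipher = (byte[])Registry.LocalMachine
            .OpenSubKey(@"SOFTWARE\MyApp")              // made-up key path
            .GetValue("BootstrapConnectionString");     // stored as REG_BINARY

        byte[] plain = ProtectedData.Unprotect(cipher, null, DataProtectionScope.LocalMachine);
        return Encoding.UTF8.GetString(plain);
        // The app connects with this string, then loads the remaining
        // (also encrypted) connection strings from the database.
    }
}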
(First, Wow, I think 2 or 3 "quick paragraphs" turned out a little longer than I'd thought! Here I go...)
I've come to the conclusion (perhaps you'll disagree with me on this) that the ability to "protect" the web.config whilst on the server (or by using aspnet_regiis) has only limited benefit, and is perhaps not such a good thing, as it may give a false sense of security. My theory is that if someone is able to obtain access to the filesystem in order to read this web.config in the first place, then they also probably have access to create their own simple ASPX file which can "unprotect" it and reveal its secrets to them. But if unauthorized people are trouncing around in your filesystem---well, then you have bigger problems at hand, so my whole concern is now moot! [1]
I also realize that there isn't a foolproof way to securely hide passwords within a DLL either, as they can eventually be disassembled and discovered, perhaps by using something like ILDASM. [2] An additional measure of obscurity (not security) can be obtained by obfuscating and encrypting your binaries, such as by using Dotfuscator, but this isn't to be considered "secure." And if someone has read access (and likely write access too) to your binaries and filesystem, you've again got bigger problems at hand, methinks.
To address the concerns I mentioned about not wanting the passwords to live on developer laptops or in SVN: solving this through a separate “.config” file that does not live in SVN is (now!) the blindingly obvious choice. Web.config can live happily in source control, while just the secret parts do not. However---and this is why I’m following up on my own question with such a long response---there are still a few extra steps I’ve taken to try and make this if not any more secure, then at least a little bit more obscure.
Connection strings we want to keep secret (those other than the development passwords) won't ever live as plain text in any files. These are now encrypted first with a secret (symmetric) key---using, of course, the new ridiculous Encryptinator(TM)! utility built just for this purpose---before they get placed in a copy of a "db.config" file. The db.config is then uploaded only to its respective server. The secret key is compiled directly into the DAL's dll, which itself would then (ideally!) be further obfuscated and encrypted with something like Dotfuscator. This will hopefully keep out any casual curiosity at the least.
I’m not going to worry much at all about the symmetric "DbKey" living in the DLLs or SVN or on developer laptops. It’s the passwords themselves I’ll keep out. We do still need to have a “db.config” file in the project in order to develop and debug, but it has all fake passwords in it except for development ones. Actual servers have actual copies with just their own proper secrets. The db.config file is typically reverted (using SVN) to a safe state and never stored with real secrets in our subversion repository.
With all this said, I know it’s not a perfect solution (does one exist?), and one that does still require a post-it note with some deployment reminders on it, but it does seem like enough of an extra layer of hassle that might very well keep out all but the most clever and determined attackers. I’ve had to resign myself to "good-enough" security which isn’t perfect, but does let me get back to work after feeling alright about having given it the ol’ "College Try!"
[1] Per my comment on June 15 here: http://www.dotnetcurry.com/ShowArticle.aspx?ID=185 (let me know if I'm off-base!). There is also some more good commentary in "Encrypting connection strings so other devs can't decrypt, but app still has access", in "Is encrypting web.config pointless?", and in "Encrypting web.config using Protected Configuration pointless?"
[2] Good discussion and food for thought on a different subject, but very related concepts, in "Securely store a password in program code?" What really hit home is the Pidgin FAQ linked from the selected answer: if someone has your program, they can get to its secrets.

Get unique System ID with Flex

Is there a way to get a unique machine-specific system ID in a Flex application running in a browser, so that it can be used, for example, to determine whether the machine is properly licensed to run the application?
I can't think of any way to do this based on the user's machine or OS. The whole point of browser applications is to have them able to run anywhere, any time, via a browser. To my knowledge Flash provides no information that could reasonably be converted into a unique machine ID for licensing purposes, not even the MAC address of a network card on the machine.
Personally, I think you'd be better off requiring a username/password for users to log in, and then using a session key stored in a cookie to allow the user to skip that step (e.g. a 'remember me on this computer' type of feature, such as GMail has). This has the advantage of the user being able to run the application from any PC they like.
Create a UUID inside Flex:
import mx.utils.UIDUtil;
var myUUID:String = UIDUtil.createUID(); // note: createUID() is based on a pseudo-random number and the current time, not on the machine
I suppose if you want to get really clever you could encrypt this string with a locally known salt and generate some encrypted license key that can't be shared. You could change the salts or keys at regular intervals to enforce license expiration.
You will need to manage the key data on a backend somehow.
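If that backend happens to be .NET, the server side of this could be as small as a keyed hash over the UID the client generated and sent up (a sketch only; HMAC is my substitution for the "encrypt with a locally known salt" idea above, and the secret and names are invented):

using System;
using System.Security.Cryptography;
using System.Text;

static class LicenseKeys
{
    // Server-side secret; illustrative value only.
    static readonly byte[] Secret = Encoding.UTF8.GetBytes("server-side-license-secret");

    // Derive a license key from the UID the Flex client generated.
    public static string ForClient(string clientUid)
    {
        using (var hmac = new HMACSHA256(Secret))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(clientUid)));
        }
    }

    // The client presents both the UID and the key; the server recomputes and compares.
    public static bool IsValid(string clientUid, string presentedKey)
    {
        return ForClient(clientUid) == presentedKey;
    }
}

Rotating the Secret (or including an expiry date in the hashed input) gives you the license-expiration behaviour mentioned above.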
ILog Elixir does this, but they do it through a traditional install process. The SWC files are watermarked, but when you enter your valid serial number, unmarked SWC files are unlocked and the source code is made available.
I don't have any details as to how they actually go about this, but it isn't directly through flex. Perhaps researching traditional software installation processes and unlocking encrypted data that way would produce the answer you are looking for.
You cannot really access machine-specific information like the MAC address or other IDs from a Flex app. You should probably handle that part with some other technology, such as ASP.NET or JSP.

asp.net Generating RSA public key pair without a key store

I have to do some secure communication between a windows service and an asp.net website. In the asp.net website I am generating a key pair, sending my public key to my windows service and then receiving the encrypted message from my service and decrypting with asp.net.
The first problem is this: the user profile is not created in ASP.NET, so I must use RSAParams.Flags = CspProviderFlags.UseMachineKeyStore.
This doesn't work with my hosting provider because I do not have access to the machine store.
I think my solution would be to generate the key pair in memory and never use the keystore, is this possible?
Check out http://www.codeproject.com/KB/security/EZRSA.aspx. Excerpt from the article:
"Help! What do we do?? A bit of Googling around, and a quick email to our (excellent) Web hosting providers Liquid Six, revealed that the reason for this lies deep inside the Windows crypt API, on which RSACryptoServiceProvider is based. Essentially, to allow scripts to load up their own private keys would compromise the security of the Windows key store, so all sensible Web hosting providers turn it off lest a rogue script steals / overwrites the hosting provider's own private keys. This strikes me as a major snafu in the Windows crypt API but there you go. I guess we're stuck with it.
Some more Googling turned up two essential resources: Chew Keong TAN's most excellent BigInteger class and some LGPL 'C' code to do the requisite calculations and PKCS#1 encapsulation from XySSL (originally written by Christopher Devine). These resources were particularly useful to me because (a) the ability to manipulate numbers with hundreds of digits is a specialist area, and (b) I hate ASN.1 (on which the PKCS#1 format is built). The calculations themselves are deceptively simple.
A day or two of stitching and patching later and EZRSA was born. EZRSA does pretty much everything that RSACryptoServiceProvider can do but entirely in managed code and without using the Windows crypt API. As a result, it will run anywhere, no matter what trust level your Web hosting provider imposes on you (which is what we needed)."
Hope it helps!
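To answer the "is this possible?" part directly: with the stock RSACryptoServiceProvider you can at least keep the key pair out of any persisted container by turning off key persistence, along these lines (whether the underlying CSP call is even allowed still depends on the trust level your host imposes, which is exactly the restriction EZRSA works around):

using System;
using System.Security.Cryptography;
using System.Text;

class EphemeralRsa
{
    static void Main()
    {
        // Machine store flag as in the question; the generated key is not persisted.
        var csp = new CspParameters { Flags = CspProviderFlags.UseMachineKeyStore };
        using (var rsa = new RSACryptoServiceProvider(2048, csp))
        {
            rsa.PersistKeyInCsp = false;                    // throw the key away on dispose

            string publicKeyXml = rsa.ToXmlString(false);   // ship this to the Windows service
            Console.WriteLine(publicKeyXml);

            // The service encrypts with the public key; here we just round-trip locally.
            byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes("secret message"), true);
            byte[] plain = rsa.Decrypt(cipher, true);
            Console.WriteLine(Encoding.UTF8.GetString(plain));
        }
    }
}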
