Is the filesystem for RavenDB encrypted?

I'm just trying to determine whether the files on the filesystem used by RavenDB are encrypted or not. Can someone just open the files on the filesystem and convert them from binary to ASCII directly, or are they encrypted?
I am trying to convince our management to give RavenDB a shot, but they have concerns about security. They gave the example that you can't just open up an MS SQL db file, convert it from binary to ASCII, and read it. So I am trying to verify whether RavenDB prevents that kind of thing as well.

Well, personally I think that your management sucks if they come up with such straw-man arguments.
To answer your question: no, you can't just open any file inside Raven's data folder with Notepad and expect to see something meaningful. So, for the ones that don't know how to program, they might as well be encrypted.
To convince your management you can tell them that Raven uses the same storage engine as Microsoft's Exchange Server does. If they want to dig deeper, it's called ESENT.

RavenDB storage is not encrypted. You can open it with Notepad and see some pieces of data. At the same time, I do not think that MS SQL encrypts files by default either.

RavenDB added encryption in mid-2012. Get RavenDB's “bundle:encryption” and then make sure your key is properly encrypted in the .NET config file or whatever.
http://ravendb.net/docs/article-page/3.0/csharp/server/bundles/encryption
http://ayende.com/blog/157473/awesome-ravendb-feature-of-the-day-encryption

SQL Server 2008 does have encryption, but you need to prepare the DB instance beforehand to enable it, then create the DB with encryption enabled, and then store data.
If you haven't, someone could just copy the DB files off the machine and open them in another SQL Server instance (or any tool that can read them).
With RavenDB, you can tick the box and off you go! (although I do not know the intricacies of moving backups to another machine and restoring them).
In relation to the point your management made, this is a relatively pointless argument.
If you had access directly to the file of a DB, it's game over. Encryption is your very last line of defence.
[I don't think hackers are going to be opening a 40GB file in Notepad... that's just silly :-)]
So instead of ending up at the worst case, you have to look at the controls you can implement to even get to that level of concern.
You need to work out how would someone even get to that file (and the costs associated with all of the mitigation techniques):
What if they steal the server, or the disk inside it?
What if they can get to the DB via a file share?
What if they can log onto the DB server?
What if a legitimate employee siphons off the data?
Physical Access
Restricting direct access to a server mitigates stealing it. You have to think about all of the preventative controls (door locks, ID cards, iris scanners), detective controls (alarm systems, CCTV) and how much you want to spend on that.
Hence why cloud computing is so attractive!
Access Controls
Access to the machine itself, whether via RDP or remotely to its file system, should then be restricted through Active Directory so that only a select few can reach it, probably IT support and database administrators. Being administrators, they should be vetted and trusted within the organisation (through an Information Security Governance Framework).
If you also wanted to reduce the risk even further, maybe implement 2 Factor Authentication like banks do, so that even knowing the username and password doesn't get you to the server!
Then there's the risk of employees of your company accessing it - legitimately and illegitimately. I mean why go to all of the trouble of buying security guards, dogs and a giant fence when users can query it anyway! You would only allow certain operations on certain parts of the data.
In summary ... 'defence in depth' is how you respond to it. There is always a risk that can be identified, but you need to consider the number of controls in place, add more if the risk is too high. But adding more controls to your organisation in general makes the system less user friendly.

Related

Cipher/Encrypt and uncrypt passwords in .properties files using Talend Data Integration

One suggested way to run jobs is to save context parameters in properties files.
Like this one:
#
#Wed Dec 16 18:23:03 CET 2015
MySQL_AdditionalParams=noDatetimeStringSync\=true
MySQL_Port=3306
MySQL_Login=root
MySQL_Password=secret_password_to_cipher
MySQL_Database=talend
MySQL_Server=localhost
This is really easy and useful, but the issue with this is that the passwords are saved in the clear.
So I'm looking for easy ways to cipher them.
Here are 2 very interesting questions already discussed on Stack Overflow about password ciphering techniques:
Encrypt passwords in configuration files
Securing passwords in properties file
But they are Java native and I'm searching for a better Talend integration. I've already tried different ways in my Talend jobs:
Simple obfuscation using base64 encoding of passwords
Using tEncrypt and tDecrypt components from the forge
Using the Jasypt or javax.crypto libraries
Using pwdstore routine from the forge
All these techniques are described in a tutorial (in French, sorry) explaining how to encrypt passwords in Talend
But another issue is encountered: the keys used to cipher/decipher are always in the clear, so if you know good ways to address this point I'll be glad to experiment with them.
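For illustration, the Jasypt route from the list above boils down to roughly this (a minimal sketch; note that the key passed to setPassword still has to be kept somewhere in the clear, which is exactly the remaining problem):
import org.jasypt.util.text.BasicTextEncryptor;

public class PasswordCipherSketch {
    public static void main(String[] args) {
        BasicTextEncryptor encryptor = new BasicTextEncryptor();
        encryptor.setPassword("key-that-is-still-in-clear"); // the unsolved part

        // Value to store in the .properties file instead of the plain password
        String ciphered = encryptor.encrypt("secret_password_to_cipher");
        System.out.println("MySQL_Password=" + ciphered);

        // At job startup, read the ciphered value back and decrypt it
        String plain = encryptor.decrypt(ciphered);
        System.out.println("Use this to open the MySQL connection: " + plain);
    }
}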
Fundamentally, anything an application can reach can also be reached by somebody breaking into the system or taking control of the application.
That holds even if you use obfuscation (such as base64 or something more advanced), or real encryption where the keys are available (even if they too are obfuscated).
So essentially there is no good enough way to do what you seek to do and, worse, it simply cannot exist.
So what do you do instead ?
1. Limit the rights
MySQL_Login=root is a big problem ... a compromise of the application will lead to an immediate compromise of the database (and its data).
So, limit the rights to what is absolutely needed for the application.
This should really be done and is quite easy to achieve.
2. Separate user and admin level access
If certain things are only needed after user interaction, you can use secrets provided by the user (e.g. a hash of the user's password can be XOR-ed with a stored value to produce a key that is not always present in the application or configuration files).
You can use this e.g. to separate permissions into two levels: the normal user level, which only has the bare minimal rights to make the application work for the average user (but e.g. not the application management rights that allow managing the application itself), and use the secrets kept by the user to keep (part of) the key outside of the application while there's no admin logged in to the administrative part of the application.
This is rarely done to be honest, nor all that easy.
But even with all that you essentially have to consider the access to e.g. the database to be compromised if the application is compromised.
That's also why data such as application users' passwords should not (must not) be stored in the database without proper precautions.
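As a rough sketch of the user-supplied-secret idea from point 2 (assuming PBKDF2 and a stored half-key; the names are illustrative, not a drop-in implementation):
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class UserDerivedKeySketch {
    // Derive key material from the admin's interactive password (PBKDF2),
    // then XOR it with a stored half-key so neither half alone is enough.
    static byte[] deriveAdminKey(char[] adminPassword, byte[] salt, byte[] storedHalf) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(adminPassword, salt, 100_000, storedHalf.length * 8);
        byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                         .generateSecret(spec).getEncoded();
        byte[] key = new byte[storedHalf.length];
        for (int i = 0; i < key.length; i++) {
            key[i] = (byte) (derived[i] ^ storedHalf[i]);
        }
        return key; // exists only while the admin session is active
    }
}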

Need a Optimised solution for Asp.Net Fileupload [duplicate]

So I'm using an app that stores images heavily in the DB. What's your outlook on this? I'm more of a type to store the location in the filesystem, than store it directly in the DB.
What do you think are the pros/cons?
I'm in charge of some applications that manage many TB of images. We've found storing file paths in the database to be best.
There are a couple of issues:
database storage is usually more expensive than file system storage
you can super-accelerate file system access with standard off the shelf products
for example, many web servers use the operating system's sendfile() system call to asynchronously send a file directly from the file system to the network interface. Images stored in a database don't benefit from this optimization.
things like web servers, etc, need no special coding or processing to access images in the file system
databases win out where transactional integrity between the image and metadata are important.
it is more complex to manage integrity between db metadata and file system data
it is difficult (within the context of a web application) to guarantee data has been flushed to disk on the filesystem
As with most issues, it's not as simple as it sounds. There are cases where it would make sense to store the images in the database.
You are storing images that are changing dynamically, say invoices, and you wanted to get an invoice as it was on 1 Jan 2007?
The government wants you to maintain 6 years of history
Images stored in the database do not require a different backup strategy. Images stored on filesystem do
It is easier to control access to the images if they are in a database. Idle admins can access any folder on disk. It takes a really determined admin to go snooping in a database to extract the images
On the other hand there are problems associated:
Require additional code to extract and stream the images
Latency may be slower than direct file access
Heavier load on the database server
File store. Facebook engineers had a great talk about it. One takeaway was to know the practical limit of files in a directory.
Needle in a Haystack: Efficient Storage of Billions of Photos
This might be a bit of a long shot, but if you're using (or planning on using) SQL Server 2008 I'd recommend having a look at the new FileStream data type.
FileStream solves most of the problems around storing the files in the DB:
The Blobs are actually stored as files in a folder.
The Blobs can be accessed using either a database connection or over the filesystem.
Backups are integrated.
Migration "just works".
However SQL's "Transparent Data Encryption" does not encrypt FileStream objects, so if that is a consideration, you may be better off just storing them as varbinary.
From the MSDN Article:
Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.
FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing.
File paths in the DB is definitely the way to go - I've heard story after story from customers with TBs of images: it became a nightmare trying to store any significant amount of images in a DB, and the performance hit alone is too much.
In my experience, sometimes the simplest solution is to name the images according to the primary key. So it's easy to find the image that belongs to a particular record, and vice versa. But at the same time you're not storing anything about the image in the database.
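A minimal sketch of that naming convention (assuming a single image root and a .jpg extension; both are illustrative choices):
import java.io.File;

public class ImagePathByPrimaryKey {
    private static final File IMAGE_ROOT = new File("/var/app/images"); // hypothetical root

    // The file name is simply the primary key, so record -> file and
    // file -> record lookups need no extra columns in the database.
    static File imageFileFor(long primaryKey) {
        return new File(IMAGE_ROOT, primaryKey + ".jpg");
    }

    static long primaryKeyFor(File imageFile) {
        String name = imageFile.getName();
        return Long.parseLong(name.substring(0, name.lastIndexOf('.')));
    }
}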
The trick here is to not become a zealot.
One thing to note here is that no one in the pro file system camp has listed a particular file system. Does this mean that everything from FAT16 to ZFS handily beats every database?
No.
The truth is that many databases beat many file systems, even when we're only talking about raw speed.
The correct course of action is to make the right decision for your precise scenario, and to do that, you'll need some numbers and some use case estimates.
In places where you MUST guarantee referential integrity and ACID compliance, storing images in the database is required.
You cannot transactionally guarantee that the image and the metadata about that image stored in the database refer to the same file. In other words, it is impossible to guarantee that the file on the filesystem is only ever altered at the same time and in the same transaction as the metadata.
As others have said, SQL 2008 comes with a Filestream type that allows you to store a filename or identifier as a pointer in the db while automatically storing the image on your filesystem, which is a great scenario.
If you're on an older database, then I'd say that if you're storing it as blob data, then you're really not going to get anything out of the database in the way of searching features, so it's probably best to store an address on a filesystem, and store the image that way.
That way you also save space, since the filesystem only uses the exact amount of space the image needs (or even less, if compressed).
Also, you could save the files with some structure or naming that lets you browse the raw images on your filesystem without any DB hits, or transfer the files in bulk to another system, hard drive, S3 or another scenario, updating only the location in your program but keeping the structure, again without the cost of pulling the images out of your DB when you need to increase storage.
It would probably also allow you to add some caching, based on commonly hit image URLs, to your web engine/program, so you're saving yourself there as well.
Small static images (not more than a couple of megs) that are not frequently edited, should be stored in the database. This method has several benefits including easier portability (images are transferred with the database), easier backup/restore (images are backed up with the database) and better scalability (a file system folder with thousands of little thumbnail files sounds like a scalability nightmare to me).
Serving up images from a database is easy: just implement an HTTP handler that serves the byte array returned from the DB server as a binary stream.
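The same idea in Java servlet terms might look roughly like this (a sketch, not the poster's code; it assumes the javax.servlet API, a JDBC DataSource and an images table with id and data columns):
import java.io.IOException;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class ImageServlet extends HttpServlet {
    private DataSource dataSource; // assumed to be injected or looked up elsewhere

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long id = Long.parseLong(req.getParameter("id"));
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT data FROM images WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }
                resp.setContentType("image/jpeg");   // or read a stored MIME type
                byte[] bytes = rs.getBytes("data");  // the blob column
                resp.setContentLength(bytes.length);
                OutputStream out = resp.getOutputStream();
                out.write(bytes);                    // stream the image to the client
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}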
Here's an interesting white paper on the topic.
To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem
The answer is "It depends." Certainly it would depend upon the database server and its approach to blob storage. It also depends on the type of data being stored in blobs, as well as how that data is to be accessed.
Smaller sized files can be efficiently stored and delivered using the database as the storage mechanism. Larger files would probably be best stored using the file system, especially if they will be modified/updated often. (blob fragmentation becomes an issue in regards to performance.)
Here's an additional point to keep in mind. One of the reasons supporting the use of a database to store the blobs is ACID compliance. However, the approach that the testers used in the white paper, (Bulk Logged option of SQL Server,) which doubled SQL Server throughput, effectively changed the 'D' in ACID to a 'd,' as the blob data was not logged with the initial writes for the transaction. Therefore, if full ACID compliance is an important requirement for your system, halve the SQL Server throughput figures for database writes when comparing file I/O to database blob I/O.
One thing that I haven't seen anyone mention yet but is definitely worth noting is that there are issues associated with storing large amounts of images in most filesystems too. For example if you take the approach mentioned above and name each image file after the primary key, on most filesystems you will run into issues if you try to put all of the images in one big directory once you reach a very large number of images (e.g. in the hundreds of thousands or millions).
One common solution to this is to hash them out into a balanced tree of subdirectories.
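For example, a sketch of one such layout (the two-level split and the .jpg suffix are just illustrative choices):
import java.io.File;

public class HashedImageTree {
    // Spread files over two levels of subdirectories derived from the id,
    // so no single directory ever holds millions of entries.
    static File pathFor(long id, File root) {
        String hex = String.format("%08x", id);   // e.g. 0004cb2f
        return new File(new File(new File(root, hex.substring(0, 2)), hex.substring(2, 4)),
                        hex + ".jpg");            // root/00/04/0004cb2f.jpg
    }
}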
Something nobody has mentioned is that the DB guarantees atomic actions, transactional integrity and deals with concurrency. Even referential integrity goes out of the window with a filesystem, so how do you know your file names are really still correct?
If you have your images in a file-system and someone is reading the file as you're writing a new version or even deleting the file - what happens?
We use blobs because they're easier to manage (backup, replication, transfer) too. They work well for us.
The problem with storing only filepaths to images in a database is that the database's integrity can no longer be forced.
If the actual image pointed to by the filepath becomes unavailable, the database unwittingly has an integrity error.
Given that the images are the actual data being sought after, and that they can be managed easier (the images won't suddenly disappear) in one integrated database rather than having to interface with some kind of filesystem (if the filesystem is independently accessed, the images MIGHT suddenly "disappear"), I'd go for storing them directly as a BLOB or such.
At a company where I used to work we stored 155 million images in an Oracle 8i (then 9i) database. 7.5TB worth.
Normally, I'm strongly against taking the most expensive and hardest-to-scale part of your infrastructure (the database) and putting all the load onto it. On the other hand: it greatly simplifies the backup strategy, especially when you have multiple web servers and need to somehow keep the data synchronized.
Like most other things, it depends on the expected size and budget.
We have implemented a document imaging system that stores all its images in SQL 2005 blob fields. There are several hundred GB at the moment and we are seeing excellent response times and little or no performance degradation. In addition, for regulatory compliance, we have a middleware layer that archives newly posted documents to an optical jukebox system which exposes them as a standard NTFS file system.
We've been very pleased with the results, particularly with respect to:
Ease of Replication and Backup
Ability to easily implement a document versioning system
If this is web-based application then there could be advantages to storing the images on a third-party storage delivery network, such as Amazon's S3 or the Nirvanix platform.
Assumption: Application is web enabled/web based
I'm surprised no one has really mentioned this ... delegate it out to others who are specialists -> use a 3rd party image/file hosting provider.
Store your files on a paid online service like
Amazon S3
Moso Cloud Storage
Another Stack Overflow thread talks about this here.
This thread explains why you should use a 3rd party hosting provider.
It's so worth it. They store it efficiently. No bandwidth consumed serving files from your servers to client requests, etc.
If you're not on SQL Server 2008 and you have some solid reasons for putting specific image files in the database, then you could take the "both" approach and use the file system as a temporary cache and use the database as the master repository.
For example, your business logic can check if an image file exists on disc before serving it up, retrieving from the database when necessary. This buys you the capability of multiple web servers and fewer sync issues.
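A rough sketch of that check-the-cache-first logic (the cache path and the DAL call are hypothetical placeholders):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CachedImageStore {
    private final Path cacheDir = Paths.get("/var/cache/images"); // hypothetical cache location

    // Serve from the local disc cache when possible; fall back to the
    // database (the master copy) and repopulate the cache on a miss.
    byte[] loadImage(long id) throws Exception {
        Path cached = cacheDir.resolve(id + ".jpg");
        if (Files.exists(cached)) {
            return Files.readAllBytes(cached);
        }
        byte[] bytes = loadImageFromDatabase(id);  // hypothetical DAL call
        Files.createDirectories(cacheDir);
        Files.write(cached, bytes);
        return bytes;
    }

    private byte[] loadImageFromDatabase(long id) {
        throw new UnsupportedOperationException("replace with your data access code");
    }
}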
I'm not sure how much of a "real world" example this is, but I currently have an application out there that stores details for a trading card game, including the images for the cards. Granted, the record count for the database is only 2851 records to date, but given the fact that certain cards are released multiple times and have alternate artwork, it was actually more efficient size-wise to scan the "primary square" of the artwork and then dynamically generate the border and miscellaneous effects for the card when requested.
The original creator of this image library created a data access class that renders the image based on the request, and it does it quite fast for viewing an individual card.
This also eases deployment/updates when new cards are released, instead of zipping up an entire folder of images and sending those down the pipe and ensuring the proper folder structure is created, I simply update the database and have the user download it again. This currently sizes up to 56MB, which isn't great, but I'm working on an incremental update feature for future releases. In addition, there is a "no images" version of the application that allows those over dial-up to get the application without the download delay.
This solution has worked great to date since the application itself is targeted as a single instance on the desktop. There is a web site where all of this data is archived for online access, but I would in no way use the same solution for this. I agree the file access would be preferable because it would scale better to the frequency and volume of requests being made for the images.
Hopefully this isn't too much babble, but I saw the topic and wanted to provide some of my insights from a relatively successful small/medium scale application.
SQL Server 2008 offers a solution that has the best of both worlds : The filestream data type.
Manage it like a regular table and have the performance of the file system.
It depends on the number of images you are going to store and also their sizes. I have used databases to store images in the past and my experience has been fairly good.
IMO, Pros of using database to store images are,
A. You don't need FS structure to hold your images
B. Database indexes perform better than FS trees when a larger number of items is to be stored
C. A smartly tuned database does a good job of caching query results
D. Backups are simple. It also works well if you have replication set up and content is delivered from a server near the user. In such cases, explicit synchronization is not required.
If your images are going to be small (say < 64k) and the storage engine of your db supports inline (in record) BLOBs, it improves performance further as no indirection is required (Locality of reference is achieved).
Storing images may be a bad idea when you are dealing with a small number of huge images. Another problem with storing images in the db is that metadata like creation and modification dates must be handled by your application.
I have recently created a PHP/MySQL app which stores PDFs/Word files in a MySQL table (as big as 40MB per file so far).
Pros:
Uploaded files are replicated to backup server along with everything else, no separate backup strategy is needed (peace of mind).
Setting up the web server is slightly simpler because I don't need to have an uploads/ folder and tell all my applications where it is.
I get to use transactions for edits to improve data integrity - I don't have to worry about orphaned and missing files
Cons:
mysqldump now takes a looooong time because there is 500MB of file data in one of the tables.
Overall not very memory/cpu efficient when compared to filesystem
I'd call my implementation a success, it takes care of backup requirements and simplifies the layout of the project. The performance is fine for the 20-30 people who use the app.
In my experience I have had to manage both situations: images stored in the database and images on the file system with the path stored in the db.
The first solution, images in database, is somewhat "cleaner" as your data access layer will have to deal only with database objects; but this is good only when you have to deal with low numbers.
Obviously database access performance degrades when you deal with binary large objects, and the database size will grow a lot, causing further performance loss... and normally database space is much more expensive than file system space.
On the other hand having large binary objects stored in file system will cause you to have backup plans that have to consider both database and file system, and this can be an issue for some systems.
Another reason to go for the file system is when you have to share your image data (or sounds, video, whatever) with third-party access: these days I'm developing a web app that uses images that have to be accessed from "outside" my web farm in such a way that a database access to retrieve binary data is simply impossible. So sometimes there are also design considerations that will drive you to a choice.
Consider also, when making this choice, whether you have to deal with permissions and authentication when accessing binary objects: these requirements can normally be solved more easily when data is stored in the db.
I once worked on an image processing application. We stored the uploaded images in a directory that was something like /images/[today's date]/[id number]. But we also extracted the metadata (exif data) from the images and stored that in the database, along with a timestamp and such.
In a previous project I stored images on the filesystem, and that caused a lot of headaches with backups, replication, and the filesystem getting out of sync with the database.
In my latest project I'm storing images in the database, and caching them on the filesystem, and it works really well. I've had no problems so far.
Second the recommendation on file paths. I've worked on a couple of projects that needed to manage large-ish asset collections, and any attempts to store things directly in the DB resulted in pain and frustration long-term.
The only real "pro" I can think of regarding storing them in the DB is the potential for easier securing of individual image assets. If there are no file paths to use, and all images are streamed straight out of the DB, there's no danger of a user finding files they shouldn't have access to.
That seems like it would be better solved with an intermediary script pulling data from a web-inaccessible file store, though. So the DB storage isn't REALLY necessary.
The word on the street is that unless you are a database vendor trying to prove that your database can do it (like, let's say Microsoft boasting about Terraserver storing a bajillion images in SQL Server) it's not a very good idea. When the alternative - storing images on file servers and paths in the database is so much easier, why bother? Blob fields are kind of like the off-road capabilities of SUVs - most people don't use them, those who do usually get in trouble, and then there are those who do, but only for the fun of it.
Storing an image in the database still means that the image data ends up somewhere in the file system but obscured so that you cannot access it directly.
+ves:
database integrity
it's easy to manage since you don't have to worry about keeping the filesystem in sync when an image is added or deleted
-ves:
performance penalty -- a database lookup is usually slower than a filesystem lookup
you cannot edit the image directly (crop, resize)
Both methods are common and practiced. Have a look at the advantages and disadvantages. Either way, you'll have to think about how to overcome the disadvantages. Storing in database usually means tweaking database parameters and implement some kind of caching. Using filesystem requires you to find some way of keeping filesystem+database in sync.
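One common pattern for that sync problem is sketched below (table and column names are illustrative): write the file first, then commit the database row that points at it, and delete the file again if the commit fails.
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ImageWriteSync {
    // Write the file, then commit the row that points at it; if the commit
    // fails, remove the orphaned file so db and filesystem stay consistent.
    void saveImage(Connection con, long id, Path file, byte[] bytes) throws Exception {
        Files.write(file, bytes);
        try {
            con.setAutoCommit(false);
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO images (id, path) VALUES (?, ?)")) {
                ps.setLong(1, id);
                ps.setString(2, file.toString());
                ps.executeUpdate();
            }
            con.commit();
        } catch (Exception e) {
            con.rollback();
            Files.deleteIfExists(file); // undo the filesystem half by hand
            throw e;
        }
    }
}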

How does one prevent passwords and other sensitive information from appearing in an ASP.NET dump?

How does one prevent passwords and other sensitive data submitted to and received from ASP.NET web pages in IIS/ASP.NET dump files?
Steps to reproduce
Using Visual Studio 2010, create an ASP.NET MVC 3 intranet application.
Configure it to use IIS 7.5.
Fire it up and register an account (say bob123 as the user and Pa$$w0Rd as the password). I'm assuming that the SQL Express database is created and the site is fully functional.
Using task manager, right click on the w3wp process and create a dump.
Open the dump in an editor capable of displaying its contents as hex, such as SlickEdit.
Search for "Pa$$w0Rd" and "Pa%24%24w0Rd" in the hex dump. You should be able to find several copies of it stored as ASCII, Unicode, or encoded.
Note that it doesn't matter whether you use HTTPS because it only encrypts the communication. ASP.NET stores that data in the clear in memory or disk.
The problem
Common wisdom has it to encrypt sensitive data and not to store it in the clear. However an employee may receive a dump of an IIS/ASP.NET application and discover passwords and other confidential data of users because this information is neither encrypted, nor is memory used by ASP.NET cleared after usage.
This puts them at risk simply because they have access to it. Dumps are sometimes shared with partners (such as Microsoft) to help them diagnose issues in their code. It is a necessary part of diagnosing some really complex problems in one's application.
Things I looked at
Use SecureString for passwords and other sensitive data. However, the ASP.NET Membership provider, along with other frameworks like WCF, often accepts passwords as System.String, which means that those copies will still be in the dump.
Looked to see if there is anything in the framework to clear out a copy of System.String when it is no longer being used. I couldn't find anything.
Investigated whether one can zero out the memory used for requests and responses once IIS is done with it, but I was unable to find anything.
I investigated whether one can encrypt files IIS receives (as HttpPostedFile) so that they are not stored in the clear. We may receive documents that are extremely confidential and every step is made to encrypt and protect them on the server. However, someone can extract them in the clear from an IIS dump.
What I was hoping for is to tell IIS/ASP.NET that a specific request/response contains sensitive data and that IIS/ASP.NET will clear out the memory when it is done using it.
A dump file by definition dumps all the memory the application uses at the moment it is dumped. If you were to create a filter so that certain things were excluded, then you could never be sure that you had enough data to zero in on a problem.
Would you be comfortable handing over your databases / configuration settings to a third party? If not, then you probably shouldn't be handing over dump files either. (imho)
I know this doesn't answer the question directly but why not look at this problem differently.
You could easily put together some JavaScript to do the hashing client-side. I would combine the password with something random such as a GUID that is sent down by the server and is valid only for a single use. The data exchange would no longer contain the password and the hash could easily be compared server-side. It would only be valid for the session, so someone looking at the dump data couldn't use the hash to "authenticate" themselves in future.
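A sketch of the server side of that scheme (it assumes the server can reconstruct the same value, e.g. from a stored password-equivalent; a real implementation would add per-user salting and a proper password-hashing scheme):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.UUID;

public class ChallengeResponseSketch {
    // Issue a single-use nonce to the client before it submits credentials.
    static String newNonce() {
        return UUID.randomUUID().toString();
    }

    // The client sends SHA-256(secret + nonce) instead of the secret itself;
    // the server recomputes the same value from the secret it holds and
    // compares in constant time.
    static boolean verify(String storedSecret, String nonce, String submittedHash) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest((storedSecret + nonce).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return MessageDigest.isEqual(hex.toString().getBytes(StandardCharsets.UTF_8),
                                     submittedHash.getBytes(StandardCharsets.UTF_8));
    }
}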
On the file side, how are these files being uploaded? Directly from a web page? There are quite a few JavaScript libraries that do encryption (Blowfish) and I would definitely use this option when sending a sensitive file. It does have a speed penalty but you can be sure that the data is secure.

Encrypted, Compressible, Cross Platform, File system in a file

We wish to make a desktop application that searches a locally packaged text database that will be a few GB in size. We are thinking of using lucene.
So basically the user will search for a few words and the local Lucene database will give back a result. However, we want to prevent the user from taking a full text dump of the Lucene index, as the text database is valuable and proprietary. A web application is not the solution here, as the customer would like this desktop application to work in areas where the internet is not available.
How do we encrypt Lucene's database so that only the client application can access Lucene's index and a prying user can't take a full text dump of the index?
One way of doing this, we thought, was if the Lucene index could be stored on an encrypted file system within a file (something like TrueCrypt). So the desktop application would "mount" the file containing the Lucene indexes.
And this needs to be cross-platform (Linux, Windows)... We would be using Qt or Java to write the desktop application.
Is there an easier/better way to do this?
[This is for a client. Yes, yes, conceptually this is a bad thing :-) but this is how they want it. Basically the point is that only the desktop application should be able to access the Lucene index and no one else. Someone pointed out that this is essentially DRM. Yeah, it resembles DRM]
How do we encrypt Lucene's database so that only the client application can access Lucene's index and a prying user can't take a full text dump of the index?
You don't. The user would have the key and the encrypted data, so they could access everything. You can bury the key in an obfuscated file, but that only adds a slight delay. It certainly will not keep out prying users. You need to rethink.
The problem here is that you're trying to both provide the user with data and deny it from them at the same time. This is basically the DRM problem under a different name - the attacker (user) is in full control of the application's environment (hardware and OS). No security is possible in such a situation, only obfuscation and an illusion of security.
While you can make it harder for the user to get to the unencrypted data, you can never prevent it - because that would mean breaking your app. Probably the closest thing is to provide a sealed hardware box, but IMHO that would make it unusable.
Note that making a half-assed illusion of security might be sufficient from a legal standpoint (e.g. DMCA's anti-circumvention clauses) - but that's outside SO's scope.
Technically, there is little you can do. Lucene is written in Java and Java code can always be decompiled or run in a debugger to get the key which you need to store somewhere (probably in the license key which you sell the user).
Your only option is the law (or the contract with the user). The text data is copyrighted, so you can sue the user if they use it in any way that is outside the scope of the license agreement.
Or you can write your own text indexing system.
Or buy a commercial one which meets your needs.
[EDIT] If you want to use an encrypted index, just implement your own FSDirectory. Check the source for SimpleFSDirectory for an example.
Why not build an index that contains only the data that the user can access and ship that index with the desktop app?
TrueCrypt sounds like a solid plan to me. You can mount volumes and encrypt them in all sorts of crazy overkill ways, and access them just as any other file.
No, it isn't entirely secure, but it should work well enough.
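A crude software analogue of the mount idea, sketched under the assumption that each index file was encrypted file-by-file with the same AES transformation: decrypt everything into a temporary directory at startup and point FSDirectory.open at it. As the other answers stress, the key still ships with the application, so this is obfuscation rather than real protection.
import java.io.InputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.SecretKey;

public class EncryptedIndexUnpacker {
    // Decrypt the shipped index files into a temp directory so Lucene can
    // read them; the plaintext does exist on disk while the app runs.
    static Path unpack(Path encryptedDir, SecretKey key) throws Exception {
        Path workDir = Files.createTempDirectory("index");
        try (DirectoryStream<Path> files = Files.newDirectoryStream(encryptedDir)) {
            for (Path encrypted : files) {
                Cipher cipher = Cipher.getInstance("AES"); // assumes files were encrypted the same way
                cipher.init(Cipher.DECRYPT_MODE, key);
                try (InputStream in = new CipherInputStream(Files.newInputStream(encrypted), cipher)) {
                    Files.copy(in, workDir.resolve(encrypted.getFileName().toString()));
                }
            }
        }
        return workDir; // hand this to FSDirectory.open(...)
    }
}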
One-way hash function.
You don't store the plaintext, you store hashes. When you want to search for a term, you push the term through the function and then search for the hash. If there's a match in the database, return thumbs up.
Are you willing to entertain false positives in order to save space? Bloom filter.
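A minimal sketch of the hashed-term idea (SHA-256 and lower-cased terms assumed): index hashTerm(term) instead of the term itself, and hash the user's query words the same way at search time. Exact-match lookups still work, but the stored values cannot be read back as text.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashedTermSketch {
    // Hash a term before indexing it; apply the same function to query words.
    static String hashTerm(String term) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(term.toLowerCase().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}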

How to keep multiple connectionString passwords safe, separate, and easy to deploy?

I know there are plenty of questions here already about this topic (I've read through as many as I could find), but I haven't yet been able to figure out how best to satisfy my particular criteria. Here are the goals:
The ASP.NET application will run on a few different web servers, including localhost workstations for development. This means encrypting web.config using a machine key is out. Each "type" or environment of web server (dev, test, prod) has its own corresponding database (dev, test, prod). We want to separate these connection strings so that a developer working on the "dev" code is not able to see any "prod" connection string passwords, nor allow these production passwords to ever get deployed to the wrong server or committed to SVN.
The application should be able to decide which connection string to attempt to use based on the server name (using a switch statement). For example, "localhost" and "dev.example.com" should know to use the DevDatabaseConnectionString, "test.example.com" will use the TestDatabaseConnectionString, and "www.example.com" will use the ProdDatabaseConnectionString. The reason for this is to limit the chance for any deployment accidents, where the wrong type of web server connects to the wrong database.
Ideally, the exact same executables and web.config should be able to run on any of these environments, without needing to tailor or configure each environment separately every time that we deploy (something that seems like it would be easy to forget/mess up one day during a deployment, which is why we moved away from having just one connection string that has to be changed on each target). Deployment is currently accomplished via FTP. Update: Using "build events" and revising our deployment procedures is probably not a bad idea.
We will not have command-line access to the production web server. This means using aspnet_regiis.exe to encrypt the web.config is out. Update: We can do this programmatically so this point is moot.
We would prefer to not have to recompile the application whenever a password changes, so using web.config (or db.config or whatever) seems to make the most sense.
A developer should not be able to get to the production database password. If a developer checks the source code out onto their localhost laptop (which would determine that it should be using the DevDatabaseConnectionString, remember?) and the laptop gets lost or stolen, it should not be possible to get at the other connection strings. Thus, having a single RSA private key to un-encrypt all three passwords cannot be considered. (Contrary to #3 above, it does seem like we'd need to have three separate key files if we went this route; these could be installed once per machine, and should the wrong key file get deployed to the wrong server, the worst that should happen is that the app can't decrypt anything---and not allow the wrong host to access the wrong database!)
UPDATE/ADDENDUM: The app has several separate web-facing components to it: a classic ASMX Web Services project, an ASPX Web Forms app, and a newer MVC app. In order to not go mad having the same connection string configured in each of these separate projects for each separate environment, it would be nice to have this only appear in one place. (Probably in our DAL class library or in a single linked config file.)
I know this is probably a subjective question (asking for a "best" way to do something), but given the criteria I've mentioned, I'm hoping that a single best answer will indeed arise.
Thank you!
Integrated authentication/Windows authentication is a good option. No passwords, at least none that need be stored in the web.config. In fact, it's the option I prefer unless admins have explicitly taken it away from me.
Personally, for anything that varies by machine (which isn't just the connection string) I put in an external reference from the web.config using this technique: http://www.devx.com/vb2themax/Tip/18880
When I throw code over the fence to the production server admin, he gets a new web.config, but doesn't get the external file-- he uses the one he had earlier.
You can have multiple web servers with the same encrypted key; you would do this in machine.config, just ensure each key is the same.
One common practice is to store the first connection string encrypted somewhere on the machine, such as the registry. After the server connects using that string, it then retrieves all other connection strings, which would be managed in the database (also encrypted). That way connection strings can be dynamically generated based on authorization requirements (requestor, application being used, etc.); for example, the same tables can be accessed with different rights depending on context and users/groups.
I believe this scenario addresses all (or most?) of your points.
(First, Wow, I think 2 or 3 "quick paragraphs" turned out a little longer than I'd thought! Here I go...)
I've come to the conclusion (perhaps you'll disagree with me on this) that the ability to "protect" the web.config whilst on the server (or by using aspnet_regiis) has only limited benefit, and is perhaps not such a good thing as it may give a false sense of security. My theory is that if someone is able to obtain access to the filesystem in order to read this web.config in the first place, then they also probably have access to create their own simple ASPX file which can "unprotect" it and reveal its secrets to them. But if unauthorized people are trouncing around in your filesystem... well, then you have bigger problems at hand, so my whole concern is now moot! 1
I also realize that there isn't a foolproof way to securely hide passwords within a DLL either, as they can eventually be disassembled and discovered, perhaps by using something like ILDASM. 2 An additional measure of security by obscurity can be obtained by obfuscating and encrypting your binaries, such as by using Dotfuscator, but this isn't to be considered "secure." And again, if someone has read access (and likely write access too) to your binaries and filesystem, you've again got bigger problems at hand methinks.
To address the concerns I mentioned about not wanting the passwords to live on developer laptops or in SVN: solving this through a separate “.config” file that does not live in SVN is (now!) the blindingly obvious choice. Web.config can live happily in source control, while just the secret parts do not. However---and this is why I’m following up on my own question with such a long response---there are still a few extra steps I’ve taken to try and make this if not any more secure, then at least a little bit more obscure.
Connection strings we want to try to keep secret (those other than the development passwords) won’t ever live as plain text in any files. These are now encrypted first with a secret (symmetric) key---using, of course, the new ridiculous Encryptinator(TM)! utility built just for this purpose---before they get placed in a copy of a “db.config” file. The db.config is then just uploaded only to its respective server. The secret key is compiled directly into the DAL’s dll, which itself would then (ideally!) be further obfuscated and encrypted with something like Dotfuscator. This will hopefully keep out any casual curiosity at the least.
I’m not going to worry much at all about the symmetric "DbKey" living in the DLLs or SVN or on developer laptops. It’s the passwords themselves I’ll keep out. We do still need to have a “db.config” file in the project in order to develop and debug, but it has all fake passwords in it except for development ones. Actual servers have actual copies with just their own proper secrets. The db.config file is typically reverted (using SVN) to a safe state and never stored with real secrets in our subversion repository.
With all this said, I know it’s not a perfect solution (does one exist?), and one that does still require a post-it note with some deployment reminders on it, but it does seem like enough of an extra layer of hassle that might very well keep out all but the most clever and determined attackers. I’ve had to resign myself to "good-enough" security which isn’t perfect, but does let me get back to work after feeling alright about having given it the ol’ "College Try!"
1. Per my comment on June 15 here: http://www.dotnetcurry.com/ShowArticle.aspx?ID=185 (let me know if I'm off-base!). Some more good commentary here: Encrypting connection strings so other devs can't decrypt, but app still has access; here: Is encrypting web.config pointless?; and here: Encrypting web.config using Protected Configuration pointless?
2. Good discussion and food for thought on a different subject but very-related concepts here: Securely store a password in program code? - what really hit home is the Pidgin FAQ linked from the selected answer: If someone has your program, they can get to its secrets.
