We've had to extend our website to communicate user credentials to a supplier's website (in the query string) using AES with a 256-bit key; however, they are using a static IV when decrypting the information.
I've advised that the IV should not be static, and that doing so is against our standards, but if they change it on their end we would incur the [big] costs, so we have agreed to accept this as a security risk and use the same IV (much to my extreme frustration).
What I wanted to know is, how much of a security threat is this? I need to be able to communicate this effectively to management so that they know exactly what they are agreeing to.
*UPDATE:* We are also using the same key throughout.
Thanks
Using a static IV is always a bad idea, but the exact consequences depend on the mode of operation in use. In all of them, the same plaintext will produce the same ciphertext, but there may be additional vulnerabilities. For example, in CFB mode with a static key, an attacker can recover the keystream from one known plaintext and use it to decrypt at least the first block of every subsequent message!
Using a static IV is always a bad idea. Using a static key is always a bad idea. I bet that your supplier has compiled the static key into their binaries.
Sadly, I've seen this before. Your supplier has a requirement that they implement encryption and they are attempting to implement the encryption in a manner that's as transparent as possible---or as "checkbox" as possible. That is, they aren't really using encryption to provide security, they are using it to satisfy a checkbox requirement.
My suggestion is that you see if the supplier would be willing to forsake this home-brewed encryption approach and instead run their system over SSL. Then you get the advantage of using a quality standard security protocol with known properties. It's clear from your question that neither your supplier nor you should be attempting to design a security protocol. You should, instead, use one that is free and available on every platform.
As far as I know (and I hope others will correct me if I'm wrong / the user will verify this), you lose a significant amount of security by keeping a static key and IV. The most significant effect you should notice is that when you encrypt a specific plaintext (say usernameA+passwordB), you get the same ciphertext every time.
This is great for pattern analysis by attackers, and seems like a password-equivalent that would give attackers the keys to the kingdom:
Pattern analysis: The attacker can see that the encrypted user+password combination "gobbledygook" is used every night just before the CEO leaves work. The attacker can then leverage that information in the future to remotely detect when the CEO leaves.
Password equivalent: You are passing this username+password in the URL. Why can't someone else pass exactly the same value and get the same results you do? If they can, the encrypted data is a plaintext equivalent for the purposes of gaining access, defeating the purpose of encrypting the data.
What I wanted to know is, how much of a security threat is this? I need to be able to communicate this effectively to management so that they know exactly what they are agreeing to.
A good example of reusing the same nonce is Sony vs. Geohot (on a different algorithm, though); you can see how that worked out for Sony :) To the point: using the same IV can have mild or catastrophic consequences depending on the AES mode of operation you use. If you use CTR mode, then everything you encrypted is as good as plaintext. In CBC mode, the same plaintext always encrypts to the same ciphertext, and messages that share a prefix share the corresponding ciphertext blocks.
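To make this concrete, here is a minimal sketch in Python (using the third-party cryptography package; the key, IV, and credential strings are made up for illustration) showing both effects: identical plaintexts become identical ciphertexts, and with a reused keystream one known plaintext lets an attacker decrypt any other message.

```python
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(32)   # fixed 256-bit key (all zeros, purely for illustration)
IV = bytes(16)    # fixed IV, reused for every message

def encrypt_ctr(data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(KEY), modes.CTR(IV), default_backend()).encryptor()
    return enc.update(data) + enc.finalize()

# 1. Identical plaintexts yield identical ciphertexts, so traffic is linkable
#    and the ciphertext itself becomes a password-equivalent token.
assert encrypt_ctr(b"alice:hunter2") == encrypt_ctr(b"alice:hunter2")

# 2. With a reused keystream (CTR/OFB, and the first block of CFB), one known
#    plaintext/ciphertext pair recovers the keystream...
known_pt = b"alice:hunter2"
keystream = bytes(c ^ p for c, p in zip(encrypt_ctr(known_pt), known_pt))

# ...which then decrypts any other message (up to the known length).
victim_ct = encrypt_ctr(b"bob:letmein99")
print(bytes(c ^ k for c, k in zip(victim_ct, keystream)))  # b'bob:letmein99'
```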
Related
I have a project for a website running on Django. One function of it needs to store a username/password for a third-party website, so it needs to be symmetric encryption, as it must use these credentials in an automated process.
Storing credentials is never a good idea, I know, but for this case there is no other option.
My idea so far is to create a Django app that will save and use these passwords and do nothing else. With this I can have two "web servers" that will not receive any requests from outside, but only get tasked via Redis or something. Therefore I can isolate them to some degree (they are the only servers that will have access to this extra DB, they will not handle any web requests, etc.).
First question: Does this plan sound solid or is there a major flaw?
Second question is about the encryption itself:
AES requires an encryption key for all its work, ok that needs to be "secured" in some way. But I am more interested in the IV.
Every user can have one or more credential sets saved in the extra DB. Would it be a good idea to use some sort of hash over the user ID to generate a per-user custom IV? Most of the time I see the IV just being randomly generated, but then I would also have to store it somewhere in addition to the key.
For me it gets a bit confusing here. I need the key and IV to decrypt, but I would "store" them the same way. So if one gets compromised, wouldn't it be likely that the IV is compromised too? Would it then make any difference if I generate the IV on the fly via a known procedure? The problem then is that everyone could know the IV if they know their user ID, as the code will be open source....
In the end, I need some direction guidance as how to handle key and best unique IV per user. Thank you very much for reading so far :-)
Does this plan sound solid or is there a major flaw?
The need to store user credentials is IMHO a design flaw, but at least you are aware of it.
Having a separate credential service with a dedicated datastore seems to be the best you can do under the stated conditions. I don't like the option of storing user credentials, but let's skip the academic discussion and get to practical things.
AES requires an encryption key for all its work, ok that needs to be "secured" in some way.
Yes, and therein lies the whole problem.
to generate a per user custom IV?
The IV allows reusing the same key for multiple encryptions, so effectively it needs to be unique for each ciphertext (if a user has multiple passwords, you need a separate IV for each password). Very commonly the IV is prepended to the ciphertext, as it is needed to decrypt it.
Would it then make any difference if I generate the IV on the fly over a known procedure?
IV doesn't need to be secret itself.
Some encryption modes require the IV to be unpredictable (e.g. CBC mode), so it's best to generate the IV randomly. Some modes use the IV as a counter so you can encrypt/decrypt only part of the data (such as CTR or OFB), but it is still required that the IV be unique for each key and encryption.
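As a rough sketch of what that looks like in practice (Python, using the third-party cryptography package and AES-GCM; the function and variable names are illustrative, not from your project): generate a fresh random nonce/IV for every encryption and store it right next to the ciphertext. Only the key is secret.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_credential(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                      # fresh, random, per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                   # prepend the nonce; it is not secret

def decrypt_credential(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # this is the part that must be protected
blob = encrypt_credential(key, b"third-party password")
assert decrypt_credential(key, blob) == b"third-party password"
```

GCM also authenticates the ciphertext, which plain CBC would not give you; the key-management problem you already identified stays exactly the same.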
This is a question about whether my security process is adequate for the kind of information I am storing.
I am building a website using ASP.NET 4.0 with a SQL backend and need to know how my security would hold up with regards to passwords and hashes etc.
I don't store any critical information on someone - No real names, addresses, credit card details or anything like that... just email and username.
For now, I am deliberately leaving out some specifics as I am not sure if telling you them will weaken my security but if not I can reveal slightly more.
Here is how I do it:
The user registers with their email and a unique username up to 50 chars long
They create a password (minimum 6 chars) using any characters on the keyboard (I HTMLEncode the input and am using parameterized stored procedures so I don't restrict the chars)
I send them an email with a link to verify they are real.
I use FormsAuthentication to set an auth cookie but I'm not using SSL at the moment... I understand the implications of sending auth details across plain http but I have asked my host to add the cert so it should be ready shortly.
It's the hashing bit I need to be sure of!
I create a random 100 character salt from the following char set (I just use the System.Random class, nothing cryptographic) - abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ0123456789!£$%^*()_{}[]#~#<,>.?
This is then merged with the password and then hashed using SHA-512 (SHA512Managed class) tens of thousands of times (takes nearly 2 seconds on my i7 laptop to generate the final hash).
This final hash is then converted to a base64 string and compared with the already-hashed password in the database (the salt is stored in another column in the DB too)
A few questions (ignore the lack of SSL for the moment, I just haven't bought the certificate yet but it will be ready in a week or so):
Does this strike you as secure enough? I understand there are degrees of security and that given enough time and resources anything is breakable but given that I don't store critical data, does it seem like enough?
Would revealing the actual number of times I hash the password weaken my security?
Does a 100 character salt make any difference over, say, a 20 character one?
By revealing how I join a password and salt together, would that weaken my security?
So, let's try to answer your questions one by one:
Does this strike you as secure enough? I understand there are degrees of security and that given enough time and resources anything is breakable but given that I don't store critical data, does it seem like enough?
No. It is definitely not "secure enough".
Without seeing code, it's hard to say more. But the fact that you're doing a straight SHA-512 instead of an HMAC indicates one problem. Not because you need to be using an HMAC, but because most algorithms that are designed for this purpose use HMAC under the hood (for several reasons).
And it seems likely (just from your wording) that you're doing hash = SHA512(hash), which is known to be a poor construction.
So without seeing code, it's hard to say for sure, but it's not pointing in the right direction...
Would revealing the actual number of times I hash the password weaken my security?
No, it shouldn't. If it does, you have a problem somewhere else in the algorithm.
Does a 100 character salt make any difference over, say, a 20 character one?
Nope. All the salt does is make the hash unique (forcing the attacker to attack each password separately). All you need is a salt long enough to be statistically unique. Thanks to the Birthday Problem, 128 bits is more than enough for a 1/10^12 chance of collision. Which is plenty for us. So that means that 16 characters is the upper bound on salt effectiveness.
That doesn't mean it's bad to use a longer salt. It just means that making it longer than 16 characters doesn't significantly increase the security it provides...
By revealing how I join a password and salt together, would that weaken my security?
If it does, your algorithm is severely flawed; you would be relying on security through obscurity.
The Real Answer
The real answer here is to not re-invent the wheel. Algorithms like PBKDF2 and BCRYPT exist for exactly this purpose. So use them.
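For illustration, here is what the whole scheme collapses to once you let PBKDF2 do the work (a Python sketch using only the standard library; .NET has Rfc2898DeriveBytes for the same purpose, and the iteration count and salt length below are illustrative defaults, not recommendations tuned for your hardware):

```python
import hashlib, hmac, os

ITERATIONS = 200_000          # tune so hashing takes a noticeable fraction of a second

def hash_password(password: str):
    salt = os.urandom(16)     # 128 bits of random salt is plenty
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, ITERATIONS)
    return salt, digest       # store both alongside each other in the DB

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, expected)   # constant-time comparison
```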
Further Information (Note that these talk about PHP, but the concepts are 100% applicable to ASP.NET and C#):
YouTube Video - Password Storage and Hacking in PHP
Blog Post - The Rainbow Table Is Dead
Blog Post - Properly Salting Passwords
PHP password_hash RFC
Blog Post - Seven Ways To Screw Up BCrypt
In theory, your hashing scheme sounds ok. In practice, it sounds like you have rolled your own crypto, which is bad. Use bcrypt, scrypt, or pbkdf2. All of these are designed by security professionals.
Would revealing the iteration count weaken your security? Not really, but I don't think anyone needs to know that anyway.
Does a 100-character salt make any difference over a 20-character one? No. It just needs to be unique for every user; the purpose of the salt is to prevent precalculation of hashes / rainbow-table attacks.
Would revealing how you join the password and salt weaken your security? That question no longer applies once you make use of bcrypt (or scrypt or PBKDF2).
http://security.stackexchange.com has some topics on the subject; you should check them out.
Some extra notes: serious attackers will crack SHA-512 hashes far faster than your laptop can. For example, you could rent a server with a few Tesla GPUs from Amazon or similar and start cracking at a rate of a few billion hashes per second. scrypt makes some effort to prevent this by using memory-intensive operations.
A 6-character minimum for passwords is not enough; go with at least 8. A related image gives rough estimates of brute-force cracking times; I haven't verified the numbers, but it gives you the general idea (excluding dictionary attacks, which can target longer passwords).
I am tasked with implementing a dongle-based copy protection scheme for an application. I realize that no matter what I do, someone will crack it, but I want to at least make it a little more difficult than an if-statement checking whether a dongle is present.
My approach is to encrypt critical data that the application needs for proper execution. During runtime, the decryption key is retrieved from the dongle (our chosen model has some suitable API functions for that), the data is decrypted and the application is happy.
Of course, a determined attacker can intercept that decryption key and also get ahold of the decrypted data. That's ok. But what should be hard is to substitute their own data. So I'm looking for an encryption scheme where knowing the decryption key doesn't enable someone to encrypt their own data.
That's obviously asymmetric encryption. But for every such algorithm I found so far, the encryption (or public) key can be generated from the decryption (or private) key, which is exactly what I'm trying to avoid.
Note: simply signing the data won't help much, since (unless I'm totally misunderstanding such signatures) verifying the signature will just be another if-statement, which is easily circumvented.
So... any ideas?
The moment the private key is known to the attacker you won't have any secret information to differentiate yourself from the others.
To make it harder for the attacker: you might want to expire each (public key, private key) pair after an application-specific time T and generate a new pair based on the previous pair, both on the dongle and on your own machine, independently. This way the attacker needs constant access to the dongle to be able to encrypt his data with the new private key, or has to run his private-key-extraction algorithm as often as every T.
You probably want to run the decryption on the dongle. There are a few pieces of hardware that help with this (I just googled this one, for example). There are likely many others. Dallas Semiconductor used to have a Java-powered iButton that would allow you to run code on a small dongle-like device, but I don't think they have it anymore.
Some of these allow you to execute code in the dongle. So maybe a critical function that is hard to recreate yet doesn't require high performance might work? Perhaps a license key validation algorithm.
Maybe you could include code in the dongle that has to be put into memory in order for the program to run. This would be a little harder to break, but might be hard to implement depending on what tools you are using to make your program.
You probably also want to study up on some anti-debugging subjects. I remember seeing a few publications a while back, but here is at least one. This is another layer that will make it harder to crack.
Dependency on an Internet connection may also be an option. You have to be careful here to not piss off your customers if they can't get your code to run without an Internet connection.
You can also check out FlexLM (or whatever it is called these days). It works, but it is a beast. They also try to negotiate a percentage of your company's gross profit for the license fee if I recall correctly (it's been years....I think we told them to stuff it when they asked for that.)
Good luck!
To answer my own question (somewhat): it is possible to do this with RSA, but most APIs (including that of OpenSSL's crypto library) need to be "tricked" into doing it. The reason you can generate the public key, given the private key, is that:
It is common practice for implementations of RSA to save p and q (those big prime numbers) in the private key data structure.
Since the public key (which consists of the modulus N and some exponent e) is public anyway, there's (usually) no point in choosing an obscure e. Thus, there are a handful of standard values that are used commonly, like 3 or 65537. So even if p and q are unknown, you might be able to "guess" the public exponent.
However, RSA is symmetrical in the sense that anything you encrypt with the public key can be decrypted with the private key and vice versa. So what I've done (I'm a monster) is to let the crypto library generate an RSA key. You can choose your own public exponent there, which will later be used to decrypt (contrary to the normal way). Then, I switch around the public and private exponent in the key data structure.
Some tips for anyone trying to do something similar with the crypto library:
In the RSA data structure, clear out everything but n and e / d, depending on whether you want to encrypt or decrypt with that particular key.
Turn off blinding with RSA_blinding_off. It requires the encryption exponent even when decrypting, which is not what we want. Note that this might open you up to some attacks.
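To illustrate the idea without the OpenSSL plumbing, here is a hedged sketch in Python: it uses the cryptography package only to generate a key, then does the exponentiation on bare integers with no padding, purely to show the exponent swap (not production code).

```python
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
nums = key.private_numbers()
n, e, d = nums.public_numbers.n, nums.public_numbers.e, nums.d

# The vendor keeps (n, d) and uses it to "encrypt" (really: sign) the critical
# data at build time.
message = int.from_bytes(b"critical application data", "big")
ciphertext = pow(message, d, n)

# The shipped application / dongle only ever holds (n, e): enough to decrypt,
# but not enough to produce new valid ciphertexts (that would require d, or
# p and q, which are never shipped).
assert pow(ciphertext, e, n) == message
```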
If someone needs more help, leave a comment and I'll edit this post with more information.
I've an idea in my mind but I've no idea what the magic words are to use in Google - I'm hoping to describe the idea here and maybe someone will know what I'm looking for.
Imagine you have a database. Lots of data. It's encrypted. What I'm looking for is an encryption scheme whereby, to decrypt, a variable N must at a given time hold the value M (obtained from a third party, like a hardware token), or decryption fails.
So imagine AES - well, AES is just a single key. If you have the key, you're in. Now imagine AES modified in such a way that the algorithm itself requires an extra fact, above and beyond the key - this extra datum from an external source, and where that datum varies over time.
Does this exist? does it have a name?
This is easy to do with the help of a trusted third party. Yeah, I know, you probably want a solution that doesn't need one, but bear with me — we'll get to that, or at least close to that.
Anyway, if you have a suitable trusted third party, this is easy: after encrypting your file with AES, you just send your AES key to the third party, ask them to encrypt it with their own key, to send the result back to you, and to publish their key at some specific time in the future. At that point (but no sooner), anyone who has the encrypted AES key can now decrypt it and use it to decrypt the file.
Of course, the third party may need a lot of key-encryption keys, each to be published at a different time. Rather than storing them all on a disk or something, an easier way is for them to generate each key-encryption key from a secret master key and the designated release time, e.g. by applying a suitable key-derivation function to them. That way, a distinct and (apparently) independent key can be generated for any desired release date or time.
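A minimal sketch of that derivation step (Python; HMAC-SHA256 standing in for "a suitable key-derivation function", with a placeholder master key):

```python
import hmac, hashlib

MASTER_KEY = b"\x00" * 32          # the third party's long-term secret (placeholder)

def key_for_release_time(release_date: str) -> bytes:
    """Deterministically derive the key-encryption key published on that date."""
    return hmac.new(MASTER_KEY, release_date.encode(), hashlib.sha256).digest()

# The same input always yields the same 256-bit key, so nothing needs to be
# stored per date; the third party simply publishes
# key_for_release_time("2030-01-01") when that day arrives.
```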
In some cases, this solution might actually be practical. For example, the "trusted third party" might be a tamper-resistant hardware security module with a built-in real time clock and a secure external interface that allows keys to be encrypted for any release date, but to be decrypted only for dates that have passed.
However, if the trusted third party is a remote entity providing a global service, sending each AES key to them for encryption may be impractical, not to mention a potential security risk. In that case, public-key cryptography can provide a solution: instead of using symmetric encryption to encrypt the file encryption keys (which would require them either to know the file encryption key or to release the key-encryption key), the trusted third party can instead generate a public/private key pair for each release date and publish the public half of the key pair immediately, but refuse to disclose the private half until the specified release date. Anyone else holding the public key may encrypt their own keys with it, but nobody can decrypt them until the corresponding private key has been disclosed.
(Another partial solution would be to use secret sharing to split the AES key into the shares and to send only one share to the third party for encryption. Like the public-key solution described above, this would avoid disclosing the AES key to the third party, but unlike the public-key solution, it would still require two-way communication between the encryptor and the trusted third party.)
The obvious problem with both of the solutions above is that you (and everyone else involved) do need to trust the third party generating the keys: if the third party is dishonest or compromised by an attacker, they can easily disclose the private keys ahead of time.
There is, however, a clever method published in 2006 by Michael Rabin and Christopher Thorpe (and mentioned in this answer on crypto.SE by one of the authors) that gets at least partially around the problem. The trick is to distribute the key generation among a network of several more or less trustworthy third parties in such a way that, even if a limited number of the parties are dishonest or compromised, none of them can learn the private keys until a sufficient majority of the parties agree that it is indeed time to release them.
The Rabin & Thorpe protocol also protects against a variety of other possible attacks by compromised parties, such as attempts to prevent the disclosure of private keys at the designated time or to cause the generated private or public keys not to match. I don't claim to understand their protocol entirely, but, given that it's based on a combination of existing and well-studied cryptographic techniques, I see no reason why it shouldn't meet its stated security specifications.
Of course, the major difficulty here is that, for those security specifications to actually amount to anything useful, you do need a distributed network of key generators large enough that no single attacker can plausibly compromise a sufficient majority of them. Establishing and maintaining such a network is not a trivial exercise.
Yes, the kind of encryption you are looking for exists. It is called timed-release encryption, abbreviated TRE. Here is a paper about it: http://cs.brown.edu/~foteini/papers/MathTRE.pdf
The following is an excerpt from the abstract of the above paper:
There are nowadays various e-business applications, such as sealed-bid auctions and electronic voting, that require time-delayed decryption of encrypted data. The literature offers at least three main categories of protocols that provide such timed-release encryption (TRE).
They rely either on forcing the recipient of a message to solve some time-consuming, non-parallelizable problem before being able to decrypt, or on the use of a trusted entity responsible for providing a piece of information which is necessary for decryption.
I personally like another name, which is "time capsule cryptography", probably coined at crypto.stackoverflow.com: Time Capsule cryptography?.
A quick answer is no: the key used to decrypt the data cannot change over time, unless you decrypt and re-encrypt the whole database periodically (which I suppose is not feasible).
The solution suggested by Ilmari Karonen is the only feasible one, but it needs a trusted third party; furthermore, once the master AES key is obtained, it is reusable in the future: you cannot use 'one-time pads' with that solution.
If you want your token to be time-based, you can use the TOTP algorithm.
TOTP can help you generate the value (M) that the variable (N) must hold at a given time. The service requesting access to your database would attach a token generated using TOTP. When validating the token at the access provider's end, you check whether it holds the correct value based on the current time. You'll need a shared key at both ends to generate the same TOTP value.
The advantage of TOTP is that the value changes with time and one token cannot be reused.
I have implemented a similar thing for two factor authentication.
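For reference, the TOTP computation itself is tiny. Here is a sketch of RFC 6238 with the usual defaults (30-second step, 6 digits, HMAC-SHA1), using only the Python standard library:

```python
import hashlib, hmac, struct, time

def totp(shared_key: bytes, period: int = 30, digits: int = 6) -> str:
    counter = int(time.time() // period)                 # both ends derive this from the clock
    mac = hmac.new(shared_key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both ends hold shared_key; the verifier recomputes the code (usually allowing
# a window of plus/minus one period for clock drift) and compares.
```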
"One time Password" could be your google words.
I believe what you are looking for is called Public Key Cryptography or Public Key Encryption.
Another good word to google is "asymmetric key encryption scheme".
Google that and I'm quite sure you'll find what you're looking for.
For more information, see Wikipedia's article.
An example of this is Diffie–Hellman key exchange.
Edit (putting things into perspective)
The second key can be derived by an algorithm that uses a specific time (for example, the time the data was inserted) to generate it, and it can be stored in another location.
As others have pointed out, a one-time password (OTP) may be a good solution for the scenario you proposed.
There's an OTP implementation in C# that you might take a look at: https://code.google.com/p/otpnet/.
Ideally, we want a generator that depends on the time, but I don't know of any algorithm that can do that today.
More generally, if Alice wants to let Bob know about something at a specific point in time, you can consider this setup:
Assume we have a public algorithm that has two parameters: a very large random seed number and the expected number of seconds the algorithm will take to find the unique solution of the problem.
Alice generates a large seed.
Alice runs it first on her computer and computes the solution to the problem. It is the key. She encrypts the message with this key and sends it to Bob along with the seed.
As soon as Bob receives the message, Bob runs the algorithm with the correct seed and finds the solution. He then decrypts the message with this key.
Three flaws exist with this approach:
Some computers can be faster than others, so the algorithm has to be made in such a way as to minimize the discrepancies between two different computers.
It requires a proof of work which may be OK in most scenarios (hello Bitcoin!).
If Bob has some delay, then it will take him more time to see this message.
However, if the algorithm is independent of the machine it runs on, and the seed is large enough, it is guaranteed that Bob will not see the content of the message before the deadline.
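A toy sketch of that setup in Python (iterated hashing stands in for the "time-consuming, non-parallelizable problem", and the iteration count is what Alice tunes to the desired delay):

```python
import hashlib, os

def solve_puzzle(seed: bytes, iterations: int) -> bytes:
    value = seed
    for _ in range(iterations):          # inherently sequential: step i needs step i-1
        value = hashlib.sha256(value).digest()
    return value                         # Alice uses this as the encryption key

seed = os.urandom(32)                    # sent to Bob in the clear, with the ciphertext
key = solve_puzzle(seed, 50_000_000)     # Alice pays the cost once; Bob must repeat it
```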
There are many articles and quotes on the web saying that a 'salt' must be kept secret. Even the Wikipedia entry on Salt:
For best security, the salt value is kept secret, separate from the password database. This provides an advantage when a database is stolen, but the salt is not. To determine a password from a stolen hash, an attacker cannot simply try common passwords (such as English language words or names). Rather, they must calculate the hashes of random characters (at least for the portion of the input they know is the salt), which is much slower.
Since I happen to know for a fact that encryption salts (or initialization vectors) are OK to store in clear text along with the encrypted text, I want to ask why this misconception is perpetuated.
My opinion is that the origin of the problem is a common confusion between the encryption salt (the block cipher's initialization vector) and the hashing 'salt'. When storing hashed passwords, it is common practice to add a nonce, or 'salt', and it is (marginally) true that this 'salt' is better kept secret. Which in turn makes it not a salt at all, but a key, similar to the much more clearly named secret in HMAC. If you look at the article Storing Passwords - done right!, which is linked from the Wikipedia 'Salt' entry, you'll see that it is talking about this kind of 'salt', the password-hashing one. I happen to disagree with most of these schemes because I believe that a password storage scheme should also allow for HTTP Digest authentication, in which case the only possible storage is the HA1 digest of username:realm:password; see Storing password in tables and Digest authentication.
If you have an opinion on this issue, please post here as a response.
Do you think that the salt for block cipher encryption should be hidden? Explain why and how.
Do you agree that the blanket statement 'salts should be hidden' originates from salted hashing and does not apply to encryption?
Should we include stream ciphers in the discussion (RC4)?
If you are talking about the IV in a block cipher, it definitely should be in the clear. Most people who try to keep the IV secret end up making their cipher weaker.
The IV should be random and different for each encryption. A random secret IV is very difficult to manage, so some people simply use a fixed IV, defeating the purpose of the IV.
I used to work with a database where passwords were encrypted using a secret, fixed IV. The same password always encrypted to the same ciphertext, which is very prone to rainbow-table attacks.
Do you think that the salt for block cipher encryption should be hidden? Explain why and how.
No, it shouldn't. The strength of a block cipher relies on the key. IMO you should not try to increase the strength of your encryption by adding extra secrets. If the cipher and key are not strong enough, then you need to change the cipher or key length, not start keeping other bits of data secret. Security is hard enough, so keep it simple.
Like LFSR Consulting says:
There are people that are much smarter than you and I that have spent more time thinking about this topic than you or I ever will.
Which is a loaded answer, to say the least. There are folks who are only marginally in the honest category and will overlook some restraints when money is available, and there are plenty of people with no skin in the game who will lower the boundaries for that type.
Then, not too far away, there is a type of risk that comes from social factors, which is almost impossible to program away. For such a person, setting up a device solely to "break the locks" can be an exercise of pure pleasure, for no gain or measurable reason. That said, you asked that those who have an opinion please respond, so here goes:
Do you think that the salt for block cipher encryption should be hidden? Explain why and how.
Think of it this way: it adds to the computational strength needed, but it's just one more thing to hide if it has to be hidden. In and of itself, being forced to hide something (a salt, an IV, or anything else) places the entity doing the security in the position of being forced to do something, and any time the opposition can dictate what you must do, they can manipulate you. If the salt leaks, that should be caught by cross-controls that detect the leak and make replacement salts available. There is no perfect cipher, save the OTP, and even that can be compromised somehow, as the greatest risk comes from within.
In my opinion, the only solution is to be selective about whom you do any security work for; the issue of protecting salts leads to questions that are relevant to the threat model. Obviously, keys have to be protected. If you have to protect the salt, you probably need to dust off your burger-flippin' resume and question the overall security approach of those for whom you are working.
There is no answer, actually.
Do you agree that the blanket statement 'salts should be hidden' originates from salted hashing and does not apply to encryption?
Who said this, where, and on what basis?
Should we include stream ciphers in discussion (RC4)?
A cipher is a cipher - what difference would it make?
In CBC mode, each ciphertext block serves as the IV for the next block, so by definition the IV cannot be secret: every ciphertext block is an IV.
The first block is not very different. An attacker who knows the length of the plaintext would have a strong hint that the first transmitted block is the IV.
BLOCK 1 could be the IV, or encrypted with a well-known IV
BLOCK 2 is encrypted with BLOCK 1 as its IV
...
BLOCK N is encrypted with BLOCK N-1 as its IV
Still, whenever possible, I generate a random (non-null) IV and give it to each party out-of-band. But the security gain is probably not that important.
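A small sketch (Python, third-party cryptography package) of why a secret IV buys so little in CBC: decrypting with the wrong IV only garbles the first block, because every later block uses the previous ciphertext block as its IV.

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
plaintext = b"A" * 16 + b"B" * 16 + b"C" * 16          # three whole blocks

enc = Cipher(algorithms.AES(key), modes.CBC(iv), default_backend()).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

# Decrypt with a deliberately wrong IV: only the first 16 bytes come out wrong.
dec = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * 16), default_backend()).decryptor()
recovered = dec.update(ciphertext) + dec.finalize()
assert recovered[16:] == plaintext[16:]
```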
The purpose of a per-record salt is to make the task of reversing the hashes much harder, so if a password database is exposed, the effort required to break the passwords is increased. Assuming that the attacker knows exactly how you perform the hash, rather than constructing a single rainbow table for the entire database, they need to do this for every entry in the database.
The per-record salt is usually some combination of fields in the record that vary greatly between records. Transaction time, account number, and transaction number are all good examples of fields that can be used in a per-record salt. A record salt should come from other fields in the record. So yes, it is not secret, but you should avoid publicising the method of calculation.
There is a separate issue with a database-wide salt. This is a sort of key, and it protects against the attacker using existing rainbow tables to crack the passwords. The database-wide salt should be stored separately, so that if the database is compromised it is unlikely that the attacker will get this value as well.
A database-wide salt should be treated as though it were a key, and access to the salt value should be moderately protected. One way of doing this is to split the salt into components that are managed in different domains: one component in the code, one in a configuration file, one in the database. Only the running code should be able to read all of these and combine them together using a bit-wise XOR.
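A sketch of that combination step (Python; the component names and sources are illustrative):

```python
def combine_salt(code_part: bytes, config_part: bytes, db_part: bytes) -> bytes:
    """XOR together salt components held in the code, a config file, and the database."""
    assert len(code_part) == len(config_part) == len(db_part)
    return bytes(a ^ b ^ c for a, b, c in zip(code_part, config_part, db_part))
```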
The last area is where many fail. There must be a way to change these salt values and/or the algorithm. If a security incident occurs, we may want to be able to change the salt values easily. The database should have a salt-version field, and the code will use the version to identify which salts to use and in what combination. Encryption or hash creation always uses the latest salt algorithm, but the decode/verify function always uses the algorithm specified in the record. This way a low-priority thread can read through the database decrypting and re-encrypting the entries.