Encrypt multiple strings that have only a few possible values

I want to AES-encrypt information in a database that can take only a few values. E.g. I want to encrypt the gender field, and the value can only be "male" or "female".
The encryption key is the same for multiple rows (I cannot guarantee a unique key for each row).
My question: Is it safe to just encrypt the raw value, or is it better to pad the original value with random noise and a stop character before encryption:
male -> 214214.male.73297 -> [encrypted text] (after decryption, the noise can be removed)
Or are there other best practices for this issue?
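
A minimal sketch of the trade-off, assuming the pycryptodome package: with a deterministic mode (ECB below) every "male" row produces the same ciphertext, so the two values are distinguishable even though AES itself is not broken, while a fresh random nonce or IV per row (GCM below) removes that leak without manual noise padding and also authenticates the ciphertext.

    import os
    from Crypto.Cipher import AES
    from Crypto.Util.Padding import pad

    key = os.urandom(16)                 # one key shared by all rows

    # Deterministic encryption: equal plaintexts give equal ciphertexts.
    ecb = lambda v: AES.new(key, AES.MODE_ECB).encrypt(pad(v, 16))
    print(ecb(b'male') == ecb(b'male'))  # True: identical rows are visible

    def encrypt_row(value: bytes) -> bytes:
        nonce = os.urandom(12)           # fresh randomness per row
        ct, tag = AES.new(key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(value)
        return nonce + tag + ct

    # Randomized encryption: identical rows are no longer linkable.
    print(encrypt_row(b'male') == encrypt_row(b'male'))  # False

    # Caveat: the GCM ciphertext length still equals the plaintext length, so
    # pad all values to a fixed length first if "male" vs "female" must not be
    # distinguishable by length alone.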

Related

Is it secure to use SHA hash of the secret value as its external ID?

I am trying to design a system where I need to store users' secret values (private and public key strings) in a database. The storing of the secrets itself will be done with the help of HashiCorp Vault. But I have one more requirement that disallows storing two equal pairs (private key + public key).
Since I am not able to check key uniqueness before storing, I have to store a hash of the original secrets. My idea is to calculate a SHA hash of the secret data and compare it with the already saved hashes. So I wonder: is this a working solution, and can I use this digest as an external ID for accessing the data (because the hash implies the uniqueness of the data entry)? Hope for your help.
My idea is to calculate a SHA hash of the secret data and compare it with the already saved hashes
I'd say a cryptographic hash is the best option you have when there is no other unique identifier.
(because the hash implies the uniqueness of the data entry)
And that's a wrong assumption. Even though cryptographic hashes are designed to have a negligible collision probability (the probability that two different inputs have the same hash value), in principle there is still some (very small) probability.
For controlled (formatted) inputs I'd say the collision probability is so minuscule that you could boldly use the hashes as unique identifiers, but be prepared to handle the very rare case that a collision occurs (you could probably post it and become famous).
calculate a SHA hash of the secret data
Concerning security: it is very hard (effectively impossible) to compute the input value from its hash (assuming a cryptographic hash that is currently considered secure).
Beware of the space size, though: if you have, say, 1000 known values, it is trivial to check which secret value has a certain hash. Assuming you store key pairs, it should be OK.
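
A minimal sketch of the idea, using only the standard library; the private_pem/public_pem arguments are placeholders for whatever encoding of the key pair you actually store:

    import hashlib

    def external_id(private_pem: str, public_pem: str) -> str:
        # Hash a canonical encoding of the pair; the digest reveals nothing
        # practical about the keys as long as the key space is large.
        return hashlib.sha256((private_pem + '\n' + public_pem).encode()).hexdigest()

    seen = set()   # stands in for the IDs already recorded in the database

    def store(private_pem: str, public_pem: str) -> str:
        ext_id = external_id(private_pem, public_pem)
        if ext_id in seen:
            raise ValueError('an identical key pair is already stored')
        seen.add(ext_id)
        return ext_id

    print(store('-----BEGIN PRIVATE KEY-----...', '-----BEGIN PUBLIC KEY-----...'))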

3DES: does identical ciphertext mean identical keys?

Can we assume that the same encryption key was used to encrypt data if the encrypted data are the same?
For example, the plaintext is 'This is sample'.
The first time, we use the 3DES algorithm and an encryption key to encrypt it. The encrypted data becomes 'MNBVCXZ'.
The second time, we again use the 3DES algorithm and an encryption key to encrypt it. The encrypted data again becomes 'MNBVCXZ'.
My questions are:
Can I assume a static encryption key was used in this encryption process?
How many keys can be used to encrypt data using the 3DES algorithm?
Can I assume a static encryption key was used in this encryption process?
Yes (with very high probability) if you performed the encryption yourself; no, if an adversary can perform the encryption and the plaintext/ciphertext is relatively small.
As 3DES does indeed have 2^168 possible keys and 2^64 possible blocks, it should be obvious that some keys will encrypt a single plaintext to the same ciphertext. Finding such a pair of keys requires about 2^32 calculations on average (because of the birthday paradox).
If the plaintext is larger (requires more than one block to encrypt) then the chance of finding a different key that produces the same ciphertext quickly drops to zero.
If one of the keys is fixed in advance it will take about 2^64 calculations to find another key. And - for the same reason - there is only a 1 / 2^64 chance that two keys unfortunately produce the same ciphertext for a specific plaintext.
If you want to make the calculations yourself, more information here on the crypto site.
How many keys can be used to encrypt data using the 3DES algorithm?
2^168 if you consider the full set of possible keys, i.e. you allow DES-ABC keys. These keys are encoded as 192 bits including parity. This would include DES-ABA and DES-AAA keys (the latter is equivalent to single DES).
2^112 if you consider only DES-ABA keys. These keys are encoded as 128 bits including parity. This would include single DES.
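
A minimal sketch of the situation in the question, assuming the pycryptodome package: 3DES in ECB mode is deterministic, so the same key and plaintext always yield the same ciphertext, while a different key yields a different ciphertext with overwhelming probability.

    from Crypto.Cipher import DES3
    from Crypto.Util.Padding import pad

    pt = pad(b'This is sample', 8)           # 3DES works on 8-byte blocks
    k1 = bytes(range(24))                    # 24-byte (192-bit) keying-option-3 key
    k2 = bytes(range(100, 124))

    c1 = DES3.new(k1, DES3.MODE_ECB).encrypt(pt)
    c2 = DES3.new(k1, DES3.MODE_ECB).encrypt(pt)
    c3 = DES3.new(k2, DES3.MODE_ECB).encrypt(pt)

    print(c1 == c2)   # True: same key, same plaintext, same ciphertext
    print(c1 == c3)   # False (with overwhelming probability): different key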

AES Encryption and Obfuscating IDs

I was considering hashing small blocks of sensitive ID data, but I need to maintain the full uniqueness of the data blocks as a whole once obfuscated.
So I came up with the idea of encrypting some publicly known input data (say, 128 bits of zeroes), using the data I want to obfuscate as the key/password, and then throwing the key away, thus protecting the original data from ever being discovered.
I already know about hashing algorithms, but my problem is that I need to maintain full uniqueness (generally speaking a 1:1 mapping of input to output) while still making it impossible to retrieve the actual input. A hash cannot serve this function because information is lost during the process.
It is not necessary that the data be retrieved once "encrypted". It is only to be used as an ID number from then on.
An actual GUID/UUID is not suitable here because I need to manually control the identifiers on a per-identifier basis. The IDs cannot be unknown or arbitrarily generated data.
EDIT: To clarify exactly what these identifiers are made of:
(unencrypted) 64bit Time Stamp
ID Generation Counter (one count for each filetype)
Random Data (to make multiple encrypted keys dissimilar)
MAC Address (or if that's not available, set top bit + random digits)
Other PC-Specific Information (from registry)
The whole thing should add up to 192 bits, but the encrypted section's content size(s) could vary (this is by no means a final specification).
Given:
A static IV value
Any arbitrary 128bit key
A static 128 bits of input
Are AES keys treated in a fashion that would result in a 1:1 key<---->output mapping, given the same input and IV value?
No. AES is, in the abstract, a family of permutations of which you select a random one with the key. It is the case that for one of those permutations (i.e. for encryption under a given AES key) you will not get collisions, because permutations are bijective.
However, for two different permutations (i.e. encryption under different AES keys, which is what you have), there is no guarantee whatsoever that you don't get a collision. Indeed, because of the birthday paradox, the likelihood of a collision is probably higher than you think.
If your IDs are short (< 1024 bits) you could just do an RSA encryption of them, which would give you what you want. You'd just need to forget the private key.
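
A hedged sketch of that last suggestion: textbook (unpadded) RSA is deterministic and injective for inputs smaller than the modulus, which is exactly the 1:1 property asked for, though it is not a general-purpose encryption scheme. Key generation assumes the pycryptodome package; note that anyone holding (n, e) can confirm a guessed ID by re-encrypting it, so this only helps if the ID space is large.

    from Crypto.PublicKey import RSA

    key = RSA.generate(2048)
    n, e = key.n, key.e      # keep only the public parameters
    del key                  # "forget" the private key (a real system must erase it properly)

    def obfuscate_id(raw_id: bytes) -> bytes:
        m = int.from_bytes(raw_id, 'big')
        assert m < n, 'the ID must be shorter than the modulus'
        c = pow(m, e, n)     # deterministic: the same raw_id always maps to the same value
        return c.to_bytes(256, 'big')

    print(obfuscate_id(b'example-24-byte-id-value').hex()[:32] + '...')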

How collision resistant are encryption algorithms?

How hard is it, for a given ciphertext generated by a given (symmetric or asymmetric) encryption algorithm working on a plaintext/key pair, to find a different plaintext/key pair that yields the same ciphertext?
And how hard is it to find two plaintext/key pairs that lead to the same ciphertext?
What led to this question, is another question that might turn out to have nothing to do with the above questions:
If you have a ciphertext and a key and want to decrypt it using some decryption routine, the routine usually tells you if the key was correct. But how does it know? Does it look for some pattern in the resulting plaintext that indicates that the decryption was successful? Does there exist another key that results in some different plaintext, which also contains the pattern and is therefore reported "valid" by the routine?
Follow-up question inspired by answers and comments:
If the allowed plaintext/key pairs were restricted in one of the following ways (or both):
1) The plaintext starts with the KCV (Key check value) of the key.
2) The plaintext starts with a hash value of some plaintext/key combination
Would this make finding collisions infeasible? Is it even clear that such a plaintext/key pair exists?
The answer to your question, as you phrased it, is that there is no collision resistance whatsoever.
Symmetric case
Let's presume you have a plaintext PT with a length that is a multiple of the block length of the underlying block cipher. You generate a random IV and encrypt the plaintext using a key K, CBC mode, and no padding.
Producing a plaintext PT' and key K' that produce the same ciphertext CT is easy. Simply select K' at random, decrypt CT using key K' and the IV, and you get your colliding PT'.
This gets a bit more complicated if you also use padding, but it is still possible. If you use PKCS#5/7 padding, just keep generating keys until you find one such that the last octet of your decrypted text PT' is 0x01. This will take about 256 attempts on average.
To make such collision finding infeasible, you have to use a message authentication code (MAC).
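
A minimal sketch of the construction above, assuming the pycryptodome package: pick any ciphertext CT and IV, choose a second key at random, and CBC-decrypt with no padding; the result is a plaintext that encrypts to the same CT under that second key.

    import os
    from Crypto.Cipher import AES

    iv = os.urandom(16)
    key = os.urandom(16)
    pt = b'sixteen byte msg'                       # exactly one block, no padding
    ct = AES.new(key, AES.MODE_CBC, iv).encrypt(pt)

    key2 = os.urandom(16)                          # arbitrary second key
    pt2 = AES.new(key2, AES.MODE_CBC, iv).decrypt(ct)

    # Both (pt, key) and (pt2, key2) encrypt to the same ct under the same IV.
    assert AES.new(key2, AES.MODE_CBC, iv).encrypt(pt2) == ct
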
Asymmetric case
Something similar applies to RSA public-key encryption. If you use no padding (which obviously isn't recommended and possibly not even supported by most cryptographic libraries) and use a public key (N, E) to encrypt PT into CT, simply generate a second key pair (N', E', D') such that N' > N; then PT' = CT^D' (mod N') will encrypt into CT under (N', E').
If you are using PKCS#1 v1.5 padding for your RSA encryption, the most significant octet after the RSA private key operation has to be 0x02, which it will be with a probability of approximately one in 256. Furthermore, the first 0x00-valued octet has to occur no sooner than at index 9, which will happen with high probability (approximately 0.97). Hence, you will have to generate on average some 264 random RSA key pairs of the same bit size before you hit one that, for some plaintext, could have produced the same ciphertext.
If you are using RSA-OAEP padding, however, the private key decryption is guaranteed to fail unless the ciphertext was generated using the corresponding public key.
If you're encrypting some plaintext (length n), then there are 2^n unique input strings, and each must result in a unique ciphertext (otherwise it wouldn't be reversible). Therefore, all possible strings of length n are valid ciphertexts. But this is true for all keys. Therefore, for any given ciphertext, there are 2^k ways of obtaining it, each with a different key of length k.
Therefore, to answer your first question: very easy! Just pick an arbitrary key, and "decrypt" the ciphertext. You will get the plaintext that matches the key.
I'm not sure what you mean by "the routine usually tells you if the key was correct".
One simple way to check the validity of a key is to add a known part to the plaintext before encryption. If the decryption doesn't reproduce that, it's not the right key.
The known part should not be a constant, since that would be an instant crib. But it could, for example, be a hash of the plaintext; if hashing the decrypted text yields the same hash value, the key is probably correct (barring hash collisions).
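
A minimal sketch of that key-check idea, assuming the pycryptodome package: prepend a hash of the plaintext before encrypting, and after decryption recompute the hash to decide whether the key was (probably) correct. As noted above, a real design should use a MAC or an authenticated mode rather than a bare hash.

    import hashlib, os
    from Crypto.Cipher import AES

    def seal(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(8)
        body = hashlib.sha256(plaintext).digest() + plaintext
        return nonce + AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(body)

    def open_with(key: bytes, blob: bytes):
        nonce, enc = blob[:8], blob[8:]
        body = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(enc)
        digest, plaintext = body[:32], body[32:]
        if hashlib.sha256(plaintext).digest() != digest:
            return None                  # wrong key (or modified ciphertext)
        return plaintext

    key = os.urandom(16)
    blob = seal(key, b'attack at dawn')
    assert open_with(key, blob) == b'attack at dawn'
    assert open_with(os.urandom(16), blob) is None   # a wrong key is rejected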

Error correcting key encryption

Say I have a scheme that derives a key from N different inputs. Each of the inputs may not be completely secure on its own (e.g. bad passwords), but in combination they are secure. The simple way to do this is to concatenate all of the inputs in order and use a hash of the concatenation as the result.
Now I want to allow key derivation (or rather key decryption) given only N-1 out of the N inputs. A simple way to do this is to generate a random key K, generate N temporary keys from N different subsets of the inputs, each with one input missing (i.e. Hash(input_{1}, ..., input_{N-1}), Hash(input_{0}, input_{2}, ..., input_{N-1}), Hash(input_{0}, input_{1}, input_{3}, ..., input_{N-1}), ..., Hash(input_{0}, ..., input_{N-2})), then encrypt K with each of the N keys and store all of the results.
Now I want a generalized solution, where I can decrypt the key using only K out of the N inputs. The naive way to expand the scheme above requires storing (N choose N-K) values, which quickly becomes infeasible.
Is there a good algorithm for this, that does not entail this much storage?
I have thought about a way to use something like Shamir's Secret Sharing Scheme, but cannot think of a good way, since the inputs are fixed.
Error Correcting Codes are the most direct way to deal with the problem. They are not, however, particularly easy to implement.
The best approach would be to use a Reed-Solomon code. When you derive the key for the first time, you also calculate the redundancy required by the code and store it. When you want to recalculate the key, you use the redundancy to correct the wrong or missing inputs.
To encrypt / create:
Take the N inputs. Turn each into a block in a good/secure way. Use Reed-Solomon to generate M redundancy blocks from the N-block combination. You now have N+M blocks, of which you need only a total of N to generate the original N blocks.
Use the N blocks to encrypt or create a secure key.
If the first, store the encrypted key and the M redundancy blocks. If the second, store only the M redundancy blocks.
To decrypt / retrieve:
Take N - R correct input blocks, where R <= M. Combine them with the redundancy blocks you stored to create the original N blocks. Use the original N blocks to decrypt or create the secure key.
(Thanks to https://stackoverflow.com/users/492020/giacomo-verticale: this is essentially what he/she said, but I think a little more explicit/clearer.)
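
A hedged sketch of the create/retrieve steps above. It assumes the reedsolo package (a 1.x version, whose decode() returns a (message, codeword, errata-positions) tuple), and it reduces each input to a 16-byte digest as one possible way to "turn each into a block"; missing inputs are treated as erasures, which is what the stored parity can repair. Wrong (rather than missing) inputs count as errors and need roughly twice as much parity.

    import hashlib
    from reedsolo import RSCodec

    DIGEST = 16
    N = 4                                 # number of inputs
    MISSING_OK = 2                        # how many inputs may be absent later
    rsc = RSCodec(MISSING_OK * DIGEST)    # one erasure symbol per missing digest byte

    def digest(value: str) -> bytes:
        return hashlib.sha256(value.encode()).digest()[:DIGEST]

    def create(inputs):
        msg = b''.join(digest(v) for v in inputs)
        codeword = rsc.encode(msg)
        parity = bytes(codeword[len(msg):])     # store only the redundancy blocks
        key = hashlib.sha256(msg).digest()      # use msg to create (or encrypt) the real key
        return key, parity

    def retrieve(inputs, parity):
        # inputs has length N; missing entries are None
        msg, erasures = bytearray(), []
        for i, v in enumerate(inputs):
            if v is None:
                erasures.extend(range(i * DIGEST, (i + 1) * DIGEST))
                msg.extend(b'\x00' * DIGEST)    # placeholder for the unknown digest
            else:
                msg.extend(digest(v))
        codeword = bytes(msg) + parity
        recovered, _, _ = rsc.decode(codeword, erase_pos=erasures, only_erasures=True)
        return hashlib.sha256(bytes(recovered)).digest()

    key, parity = create(['pw-one', 'pw-two', 'pw-three', 'pw-four'])
    assert retrieve(['pw-one', None, 'pw-three', None], parity) == key
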
Shamir's secret sharing is a technique used when you want to split a secret into multiple shares such that only a combination of at least k parts reveals the initial secret. If you are not sure about the correctness of the initiator (the dealer) and you want to verify this, you use verifiable secret sharing. Both are based on polynomial interpolation.
One approach would be to generate a purely random key (or derive it by hashing all of the inputs, if you want to avoid an RNG for some reason), split it using a k-of-n threshold scheme, and encrypt each share using the individual password inputs (e.g. send them through PBKDF2 with 100000 iterations and then encrypt/MAC with AES-CTR/HMAC). This would require less storage than storing hash subsets: roughly N * (share size + salt size + MAC size).
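
A hedged sketch of this approach using only the standard library: a random master key is split with Shamir's scheme over a prime field, and each share is wrapped with a key derived from one password via PBKDF2. The XOR wrapping below stands in for the AES-CTR + HMAC step described above, and pow(d, -1, PRIME) needs Python 3.8+.

    import hashlib, os, secrets

    PRIME = 2**521 - 1      # Mersenne prime, comfortably larger than a 256-bit key
    SHARE_BYTES = 66        # enough bytes to hold any value mod PRIME

    def split(secret, k, n):
        # random degree-(k-1) polynomial with the secret as constant term
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    def mask(password, salt):
        return hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000, SHARE_BYTES)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    master = secrets.randbelow(2**256)      # the key that actually protects the data
    passwords = ['pw-a', 'pw-b', 'pw-c', 'pw-d']
    salts = [os.urandom(16) for _ in passwords]
    stored = [(x, xor(y.to_bytes(SHARE_BYTES, 'big'), mask(pw, s)))
              for (x, y), pw, s in zip(split(master, 3, 4), passwords, salts)]

    # Later: any 3 of the 4 passwords unwrap enough shares to rebuild the master key.
    idx = [0, 2, 3]
    shares = [(stored[i][0], int.from_bytes(xor(stored[i][1], mask(passwords[i], salts[i])), 'big'))
              for i in idx]
    assert combine(shares) == master
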
Rather than simply allowing a few errors out of a large number of inputs, you should divide the inputs into groups and allow some number of errors in each group. If you were to allow 4 errors out of 64 inputs, you would have to store C(64,4) = 635,376 encrypted keys, but if you break that up into two groups of 32, allowing two errors per group, then you would only need 2 * C(32,2) = 992 encrypted keys.
Once you have decrypted the key information from each group, use that as input for decrypting the key that you ultimately want.
Also, the keys acquired from each group must not be trivial in comparison to the key that you ultimately want. Do not simply break a 256-bit key into eight 32-bit pieces. Doing this would allow someone who could decrypt 7 of those pieces to attempt a brute-force attack on the last piece. If you want access to a 256-bit key, then you must work with 256-bit keys for the whole procedure.
