I am receiving an encrypted file and its key from a partner. The key has itself been encrypted using the public key from our digital certificate.
When I attempt to decrypt the key with our private key using the following command, I get the padding error shown below:
C:\openssl rsautl -decrypt -in xxxx_Key -inkey xxxxprivatekey.pem -hexdump -out aeskey.txt
Loading 'screen' into random state - done
RSA operation error
5612:error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding er
ror:.\crypto\rsa\rsa_pk1.c:273:
5612:error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed:.\
crypto\rsa\rsa_eay.c:602:
If I add the -raw switch to the decrypt, it appears to work, but the resulting hexdump is way larger than I'm expecting. Can anyone offer advice as to what may be going on here? Thanks!
My guess is that you are decrypting with the wrong private key or your ciphertext is corrupted.
In RSA, padding is used to extend the length of the message being encrypted to the size of the modulus (so 1024-bit RSA pads messages to 1024 bits). PKCS1 type 2 is (I believe) another name for the encryption padding from PKCS#1 v1.5, which prepends 0x00 || 0x02 || (random non-zero bytes) || 0x00 to the message. When decrypting, the first check is that the message starts with 0x00 0x02. Then all bytes up to and including the next 0x00 are stripped off, yielding the original message; the padding bytes are required to be non-zero precisely so that this 0x00 unambiguously marks where the message starts. If the start is not 0x00 0x02, or there is no second 0x00 byte, then there is a padding error.
If you ignore the padding check, you most likely will get a message the same size as the RSA modulus, since no padding is stripped off. Considering most RSA moduli are at least 1024 bits, this will be much larger than an AES key.
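To make the check concrete, here is a minimal sketch of type 2 unpadding (strip_pkcs1_type2 is a hypothetical helper, not OpenSSL's actual code, and it is not constant-time, so it is for illustration only):

#include <cstdint>
#include <stdexcept>
#include <vector>

// Sketch of PKCS#1 v1.5 type 2 unpadding: expect 0x00 0x02, skip the
// non-zero random padding, then strip everything through the 0x00 separator.
std::vector<uint8_t> strip_pkcs1_type2(const std::vector<uint8_t> &em)
{
    if (em.size() < 11 || em[0] != 0x00 || em[1] != 0x02)
        throw std::runtime_error("padding check failed");
    size_t i = 2;
    while (i < em.size() && em[i] != 0x00)
        ++i;                                   // scan for the second 0x00
    if (i == em.size() || i < 10)              // need at least 8 padding bytes
        throw std::runtime_error("padding check failed");
    return std::vector<uint8_t>(em.begin() + i + 1, em.end());  // the message
}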
I've generated a random 256 bit symmetric key, in a file, to use for encrypting some data using the OpenSSL command line which I need to decrypt later programmatically using the OpenSSL library. I'm not having success, and I think the problem might be in the initialization vector I'm using (or not using).
I encrypt the data using this command:
/usr/bin/openssl enc -aes-256-cbc -salt -in input_filename -out output_filename -pass file:keyfile
I'm using the following call to initialize the decrypting of the data:
EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, keyfile.data(), nullptr)
keyfile is a vector<unsigned char> that holds the 32 bytes of the key. My question is regarding that last parameter. It's supposed to be an initialization vector to the cipher algorithm. I didn't specify an IV when encrypting, so some default must have been used.
Does passing nullptr for that parameter mean "use the default"? Is the default null, and nothing is added to the first cipher block?
I should mention that I'm able to decrypt from the command line without supplying an IV.
What is the default IV when encrypting with EVP_aes_256_cbc() [sic] cipher...
Does passing nullptr for that parameter mean "use the default"? Is the default null, and nothing is added to the first cipher block?
There is none. You have to supply it. For completeness, the IV should be non-predictable.
Non-Predictable is slightly different than both Unique and Random. For example, SSLv3 used to use the last block of ciphertext for the next block's IV. It was Unique, but it was neither Random nor Non-Predictable, and it made SSLv3 vulnerable to chosen plaintext attacks.
Other libraries do clever things like provide a null vector (a string of 0's). Their attackers thank them for it. Also see Why is using a Non-Random IV with CBC Mode a vulnerability? on Stack Overflow and Is AES in CBC mode secure if a known and/or fixed IV is used? on Crypto.SE.
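For illustration, a minimal sketch of supplying your own random IV through OpenSSL's EVP interface (encrypt_init_with_random_iv is a hypothetical helper; error handling omitted):

#include <openssl/evp.h>
#include <openssl/rand.h>

// Sketch: there is no default IV; generate a fresh, non-predictable one
// per message and pass it in explicitly.
void encrypt_init_with_random_iv(EVP_CIPHER_CTX *ctx, const unsigned char key[32])
{
    unsigned char iv[16];                       // AES block size
    RAND_bytes(iv, sizeof(iv));                 // non-predictable IV
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
    // ... EVP_EncryptUpdate / EVP_EncryptFinal_ex ...
    // Store or send the IV alongside the ciphertext; it need not be secret.
}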
/usr/bin/openssl enc -aes-256-cbc...
I should mention that I'm able to decrypt from the command line without supplying an IV.
OpenSSL uses an internal mashup/key-derivation function which takes the password and derives a key and IV. It's called EVP_BytesToKey, and you can read about it in the man pages. The man pages also say:
If the total key and IV length is less than the digest length and MD5 is used then the derivation algorithm is compatible with PKCS#5 v1.5 otherwise a non standard extension is used to derive the extra data.
There are plenty of examples of EVP_BytesToKey once you know what to look for. Openssl password to key is one in C. How to decrypt file in Java encrypted with openssl command using AES is one in Java.
EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, keyfile.data(), nullptr)
I didn't specify an IV when encrypting, so some default must have been used.
Check your return values. A call should have failed somewhere along the path. Maybe not at EVP_DecryptInit_ex, but surely before EVP_DecryptFinal.
If it's not failing, then please file a bug report.
EVP_DecryptInit_ex is an interface to the AES decryption primitive. That is just one piece of what you need to decrypt the OpenSSL encryption format. The OpenSSL encryption format is not well documented, but you can work it backwards from the code and some of the docs. The key and IV computation is explained in the EVP_BytesToKey documentation:
The key and IV is derived by concatenating D_1, D_2, etc. until enough data is available for the key and IV. D_i is defined as:
D_i = HASH^count(D_(i-1) || data || salt)
where || denotes concatenation, D_0 is empty, HASH is the digest algorithm in use, HASH^1(data) is simply HASH(data), HASH^2(data) is HASH(HASH(data)), and so on.
The initial bytes are used for the key and the subsequent bytes for the IV.
"HASH" here is MD5. In practice, this means you compute hashes like this:
Hash0 = ''
Hash1 = MD5(Hash0 + Password + Salt)
Hash2 = MD5(Hash1 + Password + Salt)
Hash3 = MD5(Hash2 + Password + Salt)
...
Then you pull off the bytes you need for the key, followed by the bytes you need for the IV. For AES-128 that means Hash1 is the key and Hash2 is the IV. For AES-256, the key is Hash1+Hash2 (concatenated, not added) and Hash3 is the IV.
You need to strip off the leading Salted__ header, then use the salt to compute the key and IV. Then you'll have the pieces to feed into EVP_DecryptInit_ex.
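Putting that together, here is a sketch of the setup (init_decrypt is a hypothetical helper; error handling is minimal, and it assumes the historical MD5 default used by older versions of openssl enc):

#include <cstring>
#include <openssl/evp.h>

// Sketch: parse the "Salted__" header from `openssl enc` output, derive
// key and IV with EVP_BytesToKey (count = 1, MD5), then initialize decryption.
bool init_decrypt(EVP_CIPHER_CTX *ctx, const unsigned char *file, size_t len,
                  const char *password)
{
    if (len < 16 || std::memcmp(file, "Salted__", 8) != 0)
        return false;
    const unsigned char *salt = file + 8;       // 8-byte salt after the magic
    unsigned char key[32], iv[16];
    EVP_BytesToKey(EVP_aes_256_cbc(), EVP_md5(), salt,
                   (const unsigned char *)password, std::strlen(password),
                   1, key, iv);
    // The actual ciphertext starts at file + 16.
    return EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) == 1;
}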
Since you're doing this in C++, though, you can probably just dig through the enc code and reuse it (after verifying its license is compatible with your use).
Note that the OpenSSL IV is randomly generated, since it's the output of a hashing process involving a random salt. The security of the first block doesn't depend on the IV being random per se; it just requires that a particular IV+Key pair never be repeated. The OpenSSL process ensures that as long as the random salt is never repeated.
It is possible that using MD5 this way entangles the key and IV in a way that leaks information, but I've never seen an analysis that claims that. If you have to use the OpenSSL format, I wouldn't have any hesitations over its IV generation. The big problems with the OpenSSL format are that it's fast to brute force (4 rounds of MD5 is not enough stretching) and that it lacks any authentication.
We are developing an application that has to work with data that is encrypted by LoraWan (https://www.lora-alliance.org).
We have already found the documentation of how they encrypt their data, and have been reading through it for the past few days (https://www.lora-alliance.org/sites/default/files/2018-04/lorawantm_specification_-v1.1.pdf) but currently still can't solve our problem.
We have to use AES 128-bit ECB decryption with zero-padding to decrypt the messages, but it's not working: the encrypted messages we are receiving are not long enough for AES-128, so the algorithm throws a "Data is not a complete block" exception on the last line.
An example key we receive looks like this: D6740C0B8417FF1295D878B130784BC5 (not a real key). It is 32 characters long, so 32 bytes, but if we treat it as hexadecimal, it becomes 16 bytes long, which is what AES-128 needs. This is the code we use to convert the hex from a string:
public static string HextoString(string InputText)
{
    byte[] hex = Enumerable.Range(0, InputText.Length)
        .Where(x => x % 2 == 0)
        .Select(x => Convert.ToByte(InputText.Substring(x, 2), 16))
        .ToArray();
    return System.Text.Encoding.ASCII.GetString(hex);
}
(A small thing to note about the above code: we are not sure which encoding to use, as we could not find it in the Lora documentation and they have not told us, but depending on this small setting we could be messing up our decryption, though we have tried all the possible combinations: ASCII, UTF-8, UTF-7, etc.)
An example message we receive is: d3 73 4c, which we assume is also hexadecimal. That is only 6 characters, i.e. 3 bytes once decoded, compared to the 16 bytes we'd need at minimum to match the key length.
This is the code for Aes 128 decrypt we are using:
private static string Aes128Decrypt(string cipherText, string key)
{
    string decrypted = null;
    var cipherPlainTextBytes = HexStringToByteArray(cipherText);
    //var cipherPlainTextBytes = ForcedZeroPadding(HexStringToByteArray(cipherText));
    var keyBytes = HexStringToByteArray(key);
    using (var aes = new AesCryptoServiceProvider())
    {
        aes.KeySize = 128;
        aes.Key = keyBytes;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.Zeros;
        ICryptoTransform decryptor = aes.CreateDecryptor(aes.Key, aes.IV);
        using (MemoryStream ms = new MemoryStream(cipherPlainTextBytes, 0, cipherPlainTextBytes.Length))
        using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
        using (StreamReader sr = new StreamReader(cs))
        {
            decrypted = sr.ReadToEnd();
        }
    }
    return decrypted;
}
So obviously this is going to return "Data is an incomplete block" at sr.ReadToEnd().
As you can see from the commented-out line in the example, we have also tried to pad the ciphertext to the correct size with a zero byte array of the correct length (16 minus the ciphertext length), in which case the algorithm runs fine, but it returns complete gibberish and not the original text.
We have already tried all of the modes of operation and messed around with the padding modes as well. They are not providing us with anything but a ciphertext and a key for that text. No initialization vector either, so we assumed we are supposed to generate one every time (but for ECB it isn't even needed, iirc).
What's more, they are able to encrypt and decrypt their messages just fine. The most puzzling part is that I have been googling this for days now and I cannot find a SINGLE example where the CIPHERTEXT is shorter than the key during decryption.
Obviously I have found examples where the message being encrypted is shorter than what is needed, but that is what padding is for on the ENCRYPTION side (right?). So when you receive the padded message, you can tell the algorithm which padding mode was used to bring it to the correct length, and it can then separate the padding from the message. But in all of those cases the received message is already the correct length when decryption starts.
So the question is: what are we doing wrong? Is there some way to decrypt ciphertexts that are shorter than the key? Or are they messing up somewhere and producing ciphers that are too short?
Thanks for any help.
In AES-ECB, the only valid ciphertext shorter than 16 bytes is the empty one. That 16-byte limit is the block size (not the key size) of AES, which happens to match the key size for AES-128.
Therefore, the question's
An example message we receive is: d3 73 4c
does not show an ECB-encrypted message (since a comment tells us it comes from JSON, it can't be raw bytes that merely display as hex). And that's way too short to be a FRMPayload (per this comment) for a Join-Accept, since the spec says of the latter:
1625 The message is either 16 or 32 bytes long.
Could it be that whatever that JSON message contains is not a full FRMPayload, but a fragment of a packet, encoded as hexadecimal pairs with space separators? As long as it has not been figured out how to build a FRMPayload, there's no point in deciphering it.
Update: If that mystery message is always 3 bytes, and if it is always the same for a given key (or available a single time per key), then per Maarten Bodewes's comment it might be a Key Check Value. The KCV is often the first 3 bytes of the encryption of the all-zero value with the key per the raw block cipher (equivalently: per ECB). Herbert Hanewinkel's javascript AES can work fully offline (which is necessary to not expose the key) and can be used to manually validate a hypothesis. It shows that for the 16-byte key given in the question, a KCV would be cd15e1 (or c076fc per the variant in the next section).
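For reference, here is a sketch of computing a KCV of that conventional form (in C++ with OpenSSL rather than the question's C#; print_kcv is a hypothetical helper):

#include <cstdio>
#include <openssl/aes.h>

// Sketch: a conventional KCV is the first 3 bytes of the raw AES-ECB
// encryption of an all-zero block under the key being checked.
void print_kcv(const unsigned char key[16])
{
    AES_KEY aes;
    AES_set_encrypt_key(key, 128, &aes);
    unsigned char zero[16] = {0}, out[16];
    AES_encrypt(zero, out, &aes);               // one raw ECB block
    printf("KCV: %02x%02x%02x\n", out[0], out[1], out[2]);
}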
Also: CreateDecryptor is used to craft a gizmo in charge of the ECB decryption. That's likely incorrect in the context of decryption of a LoraWan payload, which requires ECB encryption for decryption of some fields:
1626 Note: AES decrypt operation in ECB mode is used to encrypt the join-accept message so that the end-device can use an AES encrypt operation to decrypt the message. This way an end device only has to implement AES encrypt but not AES decrypt.
In the context of decrypting LoraWan packets, you want to communicate with the AES engine using byte arrays, not strings. Strings have an encoding, while LoraWan ciphertext and the corresponding plaintext do not. Others seem to have managed to coerce the nice .NET do-it-all crypto API into getting this low-level job done.
In the HextoString code, I vaguely get that the intention (and perhaps the outcome) is that hex becomes the original hex input as a byte array (fully rid of hexadecimal and other encoding sin; in which case the variable hex should be renamed to something along the lines of pure_bytes). But then I'm at a loss about System.Text.Encoding.ASCII.GetString(hex). I'd be surprised if it just created a byte string from a byte array, or turned the key back into hexadecimal for later feeding to HexStringToByteArray in Aes128Decrypt. Plus this makes me fear that any byte in [0x80..0xFF] might turn into 0x3F, which is not nice for the key, the ciphertext, and the corresponding LoraWan payload. These have no character encoding once de-hexified.
My conclusion is that if HexStringToByteArray does what its name suggests, and given the current interface of Aes128Decrypt, HextoString should simply remove whitespace (or is unneeded if HexStringToByteArray removes whitespace on the fly). But my recommendation is to change the interface to use byte arrays, not strings (see previous section).
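For what that interface could look like, a minimal C++ sketch (hex_to_bytes is a hypothetical helper; the same idea carries over to C#) that de-hexifies straight into a byte array, with no character encoding anywhere:

#include <cstdint>
#include <string>
#include <vector>

// Sketch: hex string to bytes, no text encoding involved.
// (Whitespace, if any, must be stripped before calling this.)
std::vector<uint8_t> hex_to_bytes(const std::string &hex)
{
    std::vector<uint8_t> out;
    out.reserve(hex.size() / 2);
    for (size_t i = 0; i + 1 < hex.size(); i += 2)
        out.push_back((uint8_t)std::stoi(hex.substr(i, 2), nullptr, 16));
    return out;
}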
As an aside: creating an ICryptoTransform object from its key is supposed to be performed once for multiple uses of the object.
The code encrypts/decrypts with the OpenSSL library using a call like the following...
EVP_EncryptInit_ex( ctx, EVP_aes_256_cbc(), NULL, key, iv)
It works even when the key length is not 256 bits (32 bytes).
The key length can be anything. Why?
For example, this works fine, and no error is received:
char key[]="012345678901234567890";
Why does encryption with the EVP_CIPHER EVP_aes_256_cbc() succeed when the key length is not 256 bits?
You seem to be asking why you can encrypt using EVP_aes_256_cbc when the key is smaller, like 128-bits.
If you supply an undersized key you are reading random bytes/garbage at the tail of the key bytes. The function is reading bytes, but you don't know what they are. You may get lucky on the local machine and be able to encrypt and decrypt. It will almost certainly fail to decrypt on a different machine.
Valgrind should alert you to the problem of reading uninitialized [key] memory. ASan should alert you to a read in a guard zone.
I don't believe EVP_aes_256_cbc pads or expands the key. Like @Zaph said, always use the correct size. If you need to "stretch" a smaller key into a bigger one, then see the HMAC-based Extract-and-Expand Key Derivation Function (HKDF), which extracts entropy and then expands it.
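As a sketch of that approach (OpenSSL 1.1.0+ exposes HKDF through the EVP_PKEY interface; stretch_key is a hypothetical helper name):

#include <openssl/evp.h>
#include <openssl/kdf.h>

// Sketch: expand a short secret into a full 256-bit key with HKDF-SHA256
// instead of letting EVP read past the end of an undersized buffer.
bool stretch_key(const unsigned char *secret, size_t secret_len,
                 unsigned char out[32])
{
    size_t out_len = 32;
    EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
    bool ok = pctx != NULL
        && EVP_PKEY_derive_init(pctx) > 0
        && EVP_PKEY_CTX_set_hkdf_md(pctx, EVP_sha256()) > 0
        && EVP_PKEY_CTX_set1_hkdf_key(pctx, secret, secret_len) > 0
        && EVP_PKEY_derive(pctx, out, &out_len) > 0;
    EVP_PKEY_CTX_free(pctx);
    return ok;
}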
I have used the following code to encrypt a string in C++ with OpenSSL, and it works fine only if the text is up to 256 bytes; it doesn't work for larger sizes:
#include <cstdio>
#include <cstring>
#include <openssl/pem.h>
#include <openssl/rsa.h>
#include <openssl/err.h>

void encrypt(char* message, int sourceSize, char* encryptedMessage, int &destSize)
{
    char err[130];
    RSA *rsa_pubkey = NULL;
    FILE *rsa_pub_file = fopen("pubkey_file.bin", "rb");
    if (PEM_read_RSAPublicKey(rsa_pub_file, &rsa_pubkey, NULL, NULL) == NULL)
    {
        printf("Error reading public key\n");
    }
    if ((destSize = RSA_public_encrypt(sourceSize, (unsigned char*)message,
                                       (unsigned char*)encryptedMessage,
                                       rsa_pubkey, RSA_PKCS1_OAEP_PADDING)) == -1)
    {
        ERR_load_crypto_strings();
        ERR_error_string(ERR_get_error(), err);
        fprintf(stderr, "Error encrypting message: %s\n", err);
    }
    fclose(rsa_pub_file);
}

char msg[1024];
strcpy(msg, "A test message");
int size;
char encrypted[1024];
encrypt(msg, strlen(msg), encrypted, size);
But if the size of the string crosses 256, it doesn't work and generates a "data too large" error.
What should I do to make it work for strings of any size?
But if the size of the string crosses 256, it doesn't work and generates a "data too large" error.
Correct. RSA encryption is defined as c = m ^ e mod n. The message cannot be larger than the modulus n. You can get the modulus size in bytes with RSA_size().
More correctly, the limit is the modulus size minus the padding size, because messages are padded. OAEP padding overhead is approximately 41 bytes, so the limit is somewhere around RSA_size(rsa) - 41.
I omitted PKCS#1 v1.5 padding because it's insecure. In the sources, it is #define RSA_PKCS1_PADDING_SIZE 11. See A bad couple of years for the cryptographic token industry for an approachable discussion.
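In code, the resulting limit looks something like this sketch (max_oaep_plaintext is a hypothetical helper; 42 is the OAEP overhead of 2 * hLen + 2 bytes with SHA-1):

#include <openssl/rsa.h>

// Sketch: maximum plaintext length for RSA with OAEP (SHA-1) under this key.
int max_oaep_plaintext(RSA *rsa)
{
    return RSA_size(rsa) - 42;   // modulus bytes minus OAEP overhead
}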
What should I do to make it work for strings of any size?
You would need an arbitrarily large key ;) But OpenSSL limits your key size to 16K bits, so you can't make the keys arbitrarily large.
Plus, it's hard to generate those large keys. Generation time grows with key size, and you could spend a couple of days generating one.
In your particular case:
char msg[1024];
...
encrypt(msg, sizeof(msg), ...);
Try generating a key that is (1024 + 64) * 8 or 8704-bits. That should handle the 1024-byte buffer with padding.
To use public key encryption in this case, you should encrypt the string with a symmetric cipher like AES. Then, encrypt the AES key with the public RSA key. That's basically how SSL/TLS and others operate.
If you use the hybrid encryption, you should choose a mode like EAX or GCM; and not CBC mode. EAX and GCM are authenticated encryption modes, and they provide both confidentiality and authenticity. With authenticated encryption, you will provide privacy and be able to detect tampering.
If you choose to use OpenSSL with AES/GCM, then see the examples of how to use it on the OpenSSL wiki at EVP Authenticated Encryption and Decryption.
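For instance, a condensed sketch of the encrypt side from that wiki page (gcm_encrypt is a hypothetical helper; error handling omitted):

#include <openssl/evp.h>

// Sketch: AES-256-GCM encryption; the tag must be kept and verified
// during decryption, or tampering goes undetected.
int gcm_encrypt(const unsigned char *pt, int pt_len,
                const unsigned char key[32], const unsigned char iv[12],
                unsigned char *ct, unsigned char tag[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, ct_len = 0;
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);  // 12-byte default IV
    EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len);
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ct + len, &len);
    ct_len += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);    // authentication tag
    EVP_CIPHER_CTX_free(ctx);
    return ct_len;
}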
There are a couple of cryptosystems that provide the service as a package. See Shoup's Elliptic Curve Integrated Encryption Scheme (ECIES), or Abdalla, Bellare, and Rogaway's Diffie-Hellman Authenticated Encryption Scheme (DHAES). Unfortunately, OpenSSL does not provide either.
But if you are interested in ECIES or DHAES, then take a look at Crypto++. The library offers both of them. For an example of its usage in Crypto++, see Elliptic Curve Integrated Encryption Scheme.
I have found the solution at https://shanetully.com/2012/06/openssl-rsa-aes-and-c/.
The functions Crypto::aesEncrypt() and Crypto::aesDecrypt() are able to encrypt/decrypt strings of any size.
I have an application to decrypt media packets.
It requires me to provide a master key and a salt key.
My SDP provides me (after negotiation ended) with:
AES_CM_128_HMAC_SHA1_80 inline:Fu8vxnU4x1fcCzbhNrtDV0eq4RnaK4n2/jarOigZ
According to the SDP RFC, the string after "inline:" is the
"concatenated master key and salt, base64 encoded"
where the master key is X bytes and the salt is Y bytes.
I am trying:
byte[] masterAndSalt = Convert.FromBase64String("Fu8vxnU4x1fcCzbhNrtDV0eq4RnaK4n2/jarOigZ");
and then taking the first X bytes as the master key and the other Y as the salt,
but my app says my keys are wrong. I don't understand; should I use something other than Convert.FromBase64String?
OK, I got it right.
With the AES_CM_128_HMAC_SHA1_80 cipher, the master key is 16 bytes and the salt is 14 bytes long.
What should be done is to use Convert.FromBase64String on the key,
which produces a 30-byte array; take the first 16 bytes as the master key and the last 14 as the salt.
The decryption algorithm then derives the session key and salt from them (along with other info).
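For illustration, here is the same split sketched in C++ with OpenSSL's base64 decoder (split_sdes_key is a hypothetical helper; the C# Convert.FromBase64String call above is equivalent):

#include <cstring>
#include <openssl/evp.h>

// Sketch: decode the SDES "inline:" value (40 base64 chars -> 30 bytes)
// and split it into the 16-byte master key and 14-byte master salt.
void split_sdes_key(const char *b64, unsigned char master_key[16],
                    unsigned char master_salt[14])
{
    unsigned char buf[32];                      // room for the 30 decoded bytes
    EVP_DecodeBlock(buf, (const unsigned char *)b64, (int)std::strlen(b64));
    std::memcpy(master_key, buf, 16);
    std::memcpy(master_salt, buf + 16, 14);
}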