I am encrypting strings using C#'s Rijndael (AES) implementation. I generate a key and an IV, and use them to encrypt strings and values that I can then save to disk (I am using Unity3D's PlayerPrefs).
The problem I am facing is that the PlayerPrefs keys and values need to be valid character sequences, and the encrypted bytes are not necessarily valid.
So, after encrypting my string with my key and IV, I get a byte array that I convert to a string with Encoding.Unicode, but (sometimes) when I try to save it, I get an error message:
byte[] encryptedBytes = Encode("someText", encryptionKey, initVector);
string encodedString = Encoding.Unicode.GetString(encryptedBytes);
PlayerPrefs.SetString("SecretData",encodedString);
PlayerPrefs.Save();
Error:
invalid utf-16 sequence at -1073752512 (missing surrogate tail)
Any way to make sure the string is in a valid format?
The bytes returned by the encrypt function are indistinguishable from random and will not, in general, form a valid character sequence in any text encoding. To convert the result into a string (if required), use Base64.
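For example, a minimal sketch of the save/load round trip, reusing the question's own Encode call (Convert.ToBase64String and Convert.FromBase64String map arbitrary bytes to a plain ASCII string and back):

byte[] encryptedBytes = Encode("someText", encryptionKey, initVector);
// Base64 produces a valid string for any byte sequence, unlike Encoding.Unicode.GetString,
// which requires the bytes to already be well-formed UTF-16.
string encodedString = Convert.ToBase64String(encryptedBytes);
PlayerPrefs.SetString("SecretData", encodedString);
PlayerPrefs.Save();
// Later, reverse the conversion before decrypting:
byte[] storedBytes = Convert.FromBase64String(PlayerPrefs.GetString("SecretData"));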
Related
I am trying to decode filenames sent over HTTP, but the strings coming from the browser are different from what I expect.
In my test I used a file named ç.jpg.
What I need is the name %C3%A7.jpg.
But the browser is sending %C3%83%C2%A7.jpg.
It's not UTF-8, UTF-16 or UTF-32.
As another example, I tested the file name €.jpg.
What I need is the name %E2%82%AC.jpg.
But I am receiving %C3%A2%E2%80%9A%C2%AC.jpg.
How can I convert these names to UTF-8?
Ok I played with this for about 30 minutes and I finally figured it out.
This is how the original string was encoded:
The string was in UTF-8
Some encoding mechanism thought it was CP1252, and based on that wrong assumption re-encoded it to UTF-8 again.
The resulting string was url-encoded.
To get back to a real UTF-8 string, this is what I did. (Note: I used PHP; I don't know what you are using, but it should be doable in other languages just the same.)
$input = '%C3%A2%E2%80%9A%C2%AC %C3%83%C2%A7';
$str1 = urldecode($input);
echo iconv('UTF-8', 'CP1252', $str1);
// output "€ ç"
So that conversion is counter-intuitive. We're converting to CP1252, but still end up with a UTF-8 string. This only works because an existing UTF-8 string was falsely treated as CP1252, and that incorrect interpretation was then converted to UTF-8 again; I'm just reversing that double-encoding.
In other languages there might be a few more steps; this works in a single line in PHP because its strings are bytes, not characters.
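For example, a rough C# equivalent of the same reversal (a sketch; on .NET Core/.NET 5+ the Windows-1252 encoding additionally requires the System.Text.Encoding.CodePages package and a call to Encoding.RegisterProvider):

using System;
using System.Net;
using System.Text;

class FixDoubleEncoding
{
    static void Main()
    {
        string input = "%C3%A2%E2%80%9A%C2%AC %C3%83%C2%A7";
        // Undo the URL encoding first.
        string mojibake = WebUtility.UrlDecode(input);
        // Encode the characters back to CP1252 bytes (recovering the original UTF-8 bytes),
        // then decode those bytes as UTF-8.
        byte[] utf8Bytes = Encoding.GetEncoding(1252).GetBytes(mojibake);
        Console.WriteLine(Encoding.UTF8.GetString(utf8Bytes)); // "€ ç"
    }
}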
Here is the function call, with variable names in place:
encrypt(arguments.input, key, algorithm, encoding, arguments.salt, iterations)
I'm using a 256-bit AES key, which is 44 characters long.
I am choosing AES and base64 for the algorithm and encoding.
I've tried various ways of generating a salt: createUUID(), generatesecretkey('AES', 128) and generatesecretkey('AES', 256).
The encrypted result is always the same for the same input, even though the salt changes each time. It's as if the salt is being ignored, and there is no error to suggest why.
I also note that iterations has no effect on the encryption either.
The algorithm "AES" is actually shorthand for "AES/ECB/PKCS5Padding" (ie algorithm/mode/padding). When using the default ECB mode, the iv will be ignored. Use the longhand algorithm form to specify CBC mode, ie "AES/CBC/PKCS5Padding"
Runnable Example on Trycf.com:
<cfscript>
for (i = 1; i <= 5; i++) {
    key = "ji3fd0ZKB87COPz5ZwqsQEQKcuRggtvvO98t3mZFxns=";
    // generate different iv's for DEMO only
    uuid = CreateUUID();
    iv = BinaryDecode( replace(uuid, "-", "", "all"), "hex");
    input = "This is plain text to be encrypted";
    encoding = "base64";
    algorithm = "AES/CBC/PKCS5Padding";
    encrypted = encrypt(input, key, algorithm, encoding, iv);
    decrypted = decrypt(encrypted, key, algorithm, encoding, iv);
    writeOutput("<hr>["& i &"] encrypted="& encrypted );
    writeOutput("<br>["& i &"] decrypted="& decrypted );
    writeOutput("<br>["& i &"] iv="& uuid );
}
</cfscript>
Note: To use larger keys, like 256-bit, you must first install the JCE Unlimited Strength Jurisdiction Policy Files.
AES only supports three key lengths: 16, 24 and 32 bytes. Note that 44 characters is 352 bits, which is none of these. But it appears that the encrypt method expects a Base64-encoded string as the key, so a 44-character Base64 key (which decodes to 32 bytes) seems correct. The documentation does not detail the key form.
Also note that the IV (arguments.salt) must be exactly one block in size; for AES that is 16 bytes.
See Encrypt for more information.
For more help please supply the encrypt arguments and the result.
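As a quick sanity check (shown in C# here, since the underlying key handling is language-agnostic), the 44-character Base64 key from the runnable example above does decode to exactly 32 bytes:

using System;

class KeyLengthCheck
{
    static void Main()
    {
        // 44-character Base64 key from the example above.
        string base64Key = "ji3fd0ZKB87COPz5ZwqsQEQKcuRggtvvO98t3mZFxns=";
        byte[] keyBytes = Convert.FromBase64String(base64Key);
        Console.WriteLine(keyBytes.Length);     // 32 (bytes)
        Console.WriteLine(keyBytes.Length * 8); // 256 (bits)
    }
}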
I am encrypting plain text using RSA and converting the result to a Base64 string. Then, before decrypting, I altered the Base64 string and tried to decrypt it... and it gave me back the same original text.
Is there anything wrong?
Original Plain Text: 007189562312
Output Base64 string : VfZN7WXwVz7Rrxb+W08u9F0N9Yt52DUnfCOrF6eltK3tzUUYw7KgvY3C8c+XER5nk6yfQFI9qChAes/czWOjKzIRMUTgGPjPPBfAwUjCv4Acodg7F0+EwPkdnV7Pu7jmQtp4IMgGaNpZpt33DgV5AJYj3Uze0A3w7wSQ6/tIgL4=
Altered Base64 String : VfZN7WXwVz7Rrxb+W08u9F0N9Yt52DUnfCOrF6eltK3tzUUYw7KgvY3C8c+XER5nk6yfQFI9qChAes/czWOjKzIRMUTgGPjPPBfAwUjCv4Acodg7F0+EwPkdnV7Pu7jmQtp4IMgGaNpZpt33DgV5AJYj3Uze0A3w7wSQ6/tIgL4=55
Please explain. Thank you.
I'm assuming you're asking whether the altered ciphertext should have thrown an error when decrypting. It looks like the altered string only adds two characters to the end and is otherwise the same string.
Your Base64 library probably makes some reasonable assumptions when parsing Base64 data. Base64 works by encoding 3 bytes into 4 characters. If the data length is not a multiple of 3, the final group must be padded; that is signalled by the = at the end of the encoded string.
This also means that during parsing the library knows the padding marks the end and stops parsing there. Since the alteration appears after the padding at the end of the string, the decoded ciphertext didn't effectively change.
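To illustrate the 3-bytes-to-4-characters rule and the padding (a small C# sketch, not the RSA code itself):

using System;

class Base64PaddingDemo
{
    static void Main()
    {
        // Each 3-byte group becomes 4 characters; '=' padding marks an incomplete final group.
        Console.WriteLine(Convert.ToBase64String(new byte[] { 1 }));       // "AQ==" (1 byte)
        Console.WriteLine(Convert.ToBase64String(new byte[] { 1, 2 }));    // "AQI=" (2 bytes)
        Console.WriteLine(Convert.ToBase64String(new byte[] { 1, 2, 3 })); // "AQID" (3 bytes, no padding)
    }
}

A lenient decoder that stops reading at the first = will therefore ignore the trailing 55; a stricter decoder would reject the altered string instead.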
My company is working on a project that will put card readers in the field. The readers use DUKPT TripleDES encryption, so we will need to develop software that will decrypt the card data on our servers.
I have just started to scratch the surface on this one, but I find myself stuck on a seemingly simple problem: generating the IPEK (the first step in recreating the symmetric key).
The IPEK is a 16-byte value created by concatenating two Triple DES-encrypted 8-byte values (given as hex strings).
I have tried ECB and CBC (with an all-zero IV) modes, with and without padding, but the result of each individual encryption is always 16 bytes or more (2 or more blocks), when I need a result that is the same size as the input. In fact, throughout this process the ciphertexts should be the same size as the plaintexts being encrypted.
<cfset x = encrypt("FFFF9876543210E0",binaryEncode(binaryDecode("0123456789ABCDEFFEDCBA98765432100123456789ABCDEF", "hex"), "base64") ,"DESEDE/CBC/PKCS5Padding","hex",BinaryDecode("0000000000000000","hex"))>
Result: 3C65DEC44CC216A686B2481BECE788D197F730A72D4A8CDD
If you use the NoPadding flag, the result is:
3C65DEC44CC216A686B2481BECE788D1
I have also tried encoding the plaintext hex message as base64 (as the key is). In the example above that returns a result of:
DE5BCC68EB1B2E14CEC35EB22AF04EFC.
If you do the same, except using the NoPadding flag, it errors with "Input length not multiple of 8 bytes."
I am new to cryptography, so hopefully I'm making some kind of very basic error here. Why are the ciphertexts generated by these block cipher algorithms not the same lengths as the plaintext messages?
For a little more background, as a "work through it" exercise, I have been trying to replicate the work laid out here:
https://www.parthenonsoftware.com/blog/how-to-decrypt-magnetic-stripe-scanner-data-with-dukpt/
I'm not sure if it is related, and it may not be the answer you are looking for, but I spent some time testing bug ID 3842326. When using different attributes, CF handles the seed and salt differently under the hood. For example, if you pass in a variable as the string to encrypt rather than a constant (a hard-coded string in the function call), the resulting string changes every time. That probably indicates different method signatures; in your example, with one flag vs. another, you are seeing something similar.
Adobe's response is that, since the resulting string can be decrypted in either case, this is not really a bug, more a behavior to note. Can your resulting string be decrypted?
The problem is that encrypt() expects the input to be a UTF-8 string. So you are actually encrypting the literal characters F-F-F-F-9.... rather than the value of that string when decoded as hexadecimal.
Instead, you need to decode the hex string into binary, then use the encryptBinary() function. (Note: I did not see an IV mentioned in the link, so my guess is they are using ECB mode, not CBC.) Since the function also returns binary, use binaryEncode to convert the result to a more friendly hex string.
Edit: Switching to ECB + "NoPadding" yields the desired result:
ksnInHex = "FFFF9876543210E0";
bdkInHex = "0123456789ABCDEFFEDCBA98765432100123456789ABCDEF";
ksnBytes = binaryDecode(ksnInHex, "hex");
bdkBase64 = binaryEncode(binaryDecode(bdkInHex, "hex"), "base64");
bytes = encryptBinary(ksnBytes, bdkBase64, "DESEDE/ECB/NoPadding");
leftRegister = binaryEncode(bytes, "hex");
... which produces:
6AC292FAA1315B4D
In order to do this we want to start with our original 16 byte BDK
... and XOR it with the following mask ....
Unfortunately, most of the CF math functions are limited to 32-bit integers, so you probably cannot do that next step using native CF functions alone. One option is to use Java's BigInteger class: create a large integer from each hex string and use the xor() method to apply the mask. Finally, use the toString(radix) method to return the result as a hex string:
bdkText ="0123456789ABCDEFFEDCBA9876543210";
maskText = "C0C0C0C000000000C0C0C0C000000000";
// use radix=16 to create integers from the hex strings
bdk = createObject("java", "java.math.BigInteger").init(bdkText, 16);
mask = createObject("java", "java.math.BigInteger").init(maskText, 16);
// apply the mask and convert the result to hex (upper case)
newKeyHex = ucase( bdk.xor(mask).toString(16) );
WriteOutput("<br>newKey="& newKeyHex);
writeOutput("<br>expected=C1E385A789ABCDEF3E1C7A5876543210");
That should be enough to get you back on track. Given some of CF's limitations here, Java would be a better fit IMO. If you are comfortable with it, you could write a small Java class and invoke that from CF instead.
I have a method which encodes some key-value entries into an ASCII string using percent-encoding.
The resulting value is expected to be used as an HTTP header value.
With the following entries:
("English", "love")
("한국어", "사랑")
The method generates
%ED%95%9C%EA%B5%AD%EC%96%B4=%EC%82%AC%EB%9E%91&English=love
Which looks like
key=value(&key=value)*
Keys and values are percent-encoded.
An encoded key and its value are joined with =.
The key=value pairs are concatenated with &.
My question is: can this output string be used as an HTTP header field value?
Are there any problems or concerns?
As long as you use printable US-ASCII, there shouldn't be a problem.
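As a rough sketch of one way to build such a value (in C#, assuming the keys and values are ordinary Unicode strings), percent-encoding each key and value keeps the joined result within printable US-ASCII:

using System;
using System.Collections.Generic;
using System.Linq;

class HeaderValueDemo
{
    static void Main()
    {
        var entries = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("한국어", "사랑"),
            new KeyValuePair<string, string>("English", "love")
        };

        // Uri.EscapeDataString percent-encodes the UTF-8 bytes of each string,
        // so the joined result contains only printable US-ASCII characters.
        string headerValue = string.Join("&",
            entries.Select(e => Uri.EscapeDataString(e.Key) + "=" + Uri.EscapeDataString(e.Value)));

        Console.WriteLine(headerValue);
        // %ED%95%9C%EA%B5%AD%EC%96%B4=%EC%82%AC%EB%9E%91&English=love
    }
}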