We have a legacy ASP.NET site which uses the encryption methods here:
http://www.codekeep.net/snippets/af1cd375-059a-4175-93d7-25eea2c5c660.aspx
When we call the following method, the page loads very slowly and eventually a Connection Reset error is returned:
Decrypt(" ", true);
If the method is called multiple times in subsequent page requests, the Application Pool goes down.
This is occurring on a Windows 2008 server running .NET framework v3.5.
I narrowed the problem down to the TransformFinalBlock() call.
NOTE: on Cassini, I do not get a connection timeout; instead the following exception is thrown:
System.Security.Cryptography.CryptographicException: Bad Data
Calling Decrypt() for other strings causes no problems in any environment.
Why is this happening? Is it a bug in TripleDESCryptoServiceProvider?
Obviously, I could filter the cipherString to reject " " and avoid this particular issue. However, I am worried that some other cipherString values that I am not suspecting will cause the DoS.
UPDATE 2011.06.28
The following is the minimal code to reproduce the issue:
// problem occurs when toEncryptArray is an empty array {}
byte[] toEncryptArray = {};
MD5CryptoServiceProvider hashmd5 = new MD5CryptoServiceProvider();
byte[] keyArray = hashmd5.ComputeHash(UTF8Encoding.UTF8.GetBytes("dummy_key"));
hashmd5.Clear();
TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();
tdes.Key = keyArray;
tdes.Mode = CipherMode.ECB;
tdes.Padding = PaddingMode.PKCS7;
ICryptoTransform cTransform = tdes.CreateDecryptor();
// the following line can crash the ASP.NET application pool (it may need to be called multiple times).
byte[] resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length);
tdes.Clear();
The issue, as mentioned above, is that the decryption logic does not properly handle the case where the input cipher is a zero-length array.
A ticket was created for this:
http://connect.microsoft.com/VisualStudio/feedback/details/678150/denial-of-service-in-tripledescryptoserviceprovider
Note, it seems to work OK when running .NET framework 4.0.
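Until the framework can be upgraded, a defensive guard in front of the legacy Decrypt method avoids the crash on 3.5. A minimal sketch (SafeDecrypt is a hypothetical wrapper name, not part of the original snippet):

// Reject inputs that cannot possibly be valid 3DES ciphertext before they
// reach TransformFinalBlock. Note that Convert.FromBase64String(" ") returns
// an empty array rather than throwing, which is exactly the problem case.
public static string SafeDecrypt(string cipherString, bool useHashing)
{
    byte[] cipherBytes;
    try
    {
        cipherBytes = Convert.FromBase64String(cipherString);
    }
    catch (FormatException)
    {
        return null; // not Base64 at all
    }
    // 3DES is a 64-bit block cipher: valid ciphertext is a non-zero multiple of 8 bytes.
    if (cipherBytes.Length == 0 || cipherBytes.Length % 8 != 0)
        return null;
    return Decrypt(cipherString, useHashing); // the original legacy method
}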
The final block is where the padding is. In your example, the single space is both the first and the last block. DES/Triple DES is a 64-bit block cipher, so the ciphertext should be a multiple of 8 bytes (64 bits).
I don't have the environment to test it, but did you try playing with the padding options? Padding with more spaces won't do, because the padding won't match.
A common padding scheme is PKCS5. For a single byte (that encrypted to the space character), your plaintext should be, in hex:
0x?? 0x07 0x07 0x07 0x07 0x07 0x07 0x07
But your code sample expects Base64 input, which means that your input string must be:
A valid Base64 string
One whose decoded length is a non-zero multiple of 8 bytes (a single 8-byte block encodes to 12 Base64 characters)
Any other string can be rejected.
The true value looks like it enables a MAC, which means that your input plaintext should be followed by a hash (MD5 in your code). It is there to help you detect changes to the ciphertext, which is useful when you encrypt binary data. If you can easily detect garbled plaintext, you can set it to false.
Related
We are developing an application that has to work with data that is encrypted by LoRaWAN (https://www.lora-alliance.org).
We have already found the documentation of how they encrypt their data and have been reading through it for the past few days (https://www.lora-alliance.org/sites/default/files/2018-04/lorawantm_specification_-v1.1.pdf), but we still can't solve our problem.
We have to use AES 128-bit ECB decryption with zero-padding to decrypt the messages, but it is not working: the encrypted messages we receive are not long enough for AES-128, so the algorithm throws a "Data is not a complete block" exception on the last line.
An example key we receive looks like this: D6740C0B8417FF1295D878B130784BC5 (not a real key). It is 32 characters long, so 32 bytes, but if we treat it as hexadecimal it becomes 16 bytes long, which is what AES 128-bit needs. This is the code we use to convert the hex from a string:
public static string HextoString(string InputText)
{
    byte[] hex = Enumerable.Range(0, InputText.Length)
                           .Where(x => x % 2 == 0)
                           .Select(x => Convert.ToByte(InputText.Substring(x, 2), 16))
                           .ToArray();
    return System.Text.Encoding.ASCII.GetString(hex);
}
(A small thing to note about the above code: we are not sure which Encoding to use, as we could not find it in the Lora documentation and they have not told us; depending on this small setting we could be messing up our decryption, though we have tried all the likely candidates: ASCII, UTF-8, UTF-7, etc.)
An example message we receive is: d3 73 4c, which we assume is also hexadecimal. That is only 6 hex characters, i.e. 3 bytes once decoded, compared to the minimum of 16 bytes we would need to match the key length.
This is the code for Aes 128 decrypt we are using:
private static string Aes128Decrypt(string cipherText, string key)
{
    string decrypted = null;
    var cipherPlainTextBytes = HexStringToByteArray(cipherText);
    //var cipherPlainTextBytes = ForcedZeroPadding(HexStringToByteArray(cipherText));
    var keyBytes = HexStringToByteArray(key);

    using (var aes = new AesCryptoServiceProvider())
    {
        aes.KeySize = 128;
        aes.Key = keyBytes;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.Zeros;

        ICryptoTransform decryptor = aes.CreateDecryptor(aes.Key, aes.IV);
        using (MemoryStream ms = new MemoryStream(cipherPlainTextBytes, 0, cipherPlainTextBytes.Length))
        using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
        using (StreamReader sr = new StreamReader(cs))
        {
            decrypted = sr.ReadToEnd();
        }
    }
    return decrypted;
}
So obviously this is going to return "Data is not a complete block" at sr.ReadToEnd().
As you can see from the commented-out line in the example, we have also tried to "pad" the ciphertext to the correct size with a zero-byte array of the right length (16 minus the cipherText length), in which case the algorithm runs fine, but it returns complete gibberish and not the original text.
We have already tried all of the modes of operation and messed around with the padding modes as well. They are not providing us with anything but a cipherText and a key for that text. There is no initialization vector either, so we assume we are supposed to generate that every time (though for ECB it isn't even needed, IIRC).
What's more, they are able to encrypt and decrypt their messages just fine. What is most puzzling is that I have been googling this for days now and I cannot find a SINGLE example where the CIPHERTEXT is shorter than the key during decryption.
Obviously I have found examples where the message being encrypted is shorter than what is needed, but that is what padding is for on the ENCRYPTION side (right?). So when you receive the padded message, you can tell the algorithm which padding mode was used to bring it to the correct length, so it can separate the padding from the message. But in all of those cases the received message is of the correct length during decryption.
So the question is - what are we doing wrong? is there some way to decrypt with ciphertexts that are shorter than the key? Or are they messing up somewhere by producing ciphers that are too short?
Thanks for any help.
In AES-ECB, the only valid ciphertext shorter than 16 bytes is the empty one. That 16-byte limit is the block size (not the key size) of AES, which happens to match the key size for AES-128.
Therefore, the question's
An example message we receive is: d3 73 4c
does not show an ECB-encrypted message (a comment tells it comes from JSON, so it can't be raw bytes that merely display as hex). And that's way too short to be a FRMPayload (per this comment) for a Join-Accept, since the spec says of the latter:
1625 The message is either 16 or 32 bytes long.
Could it be that whatever that JSON message contains is not a full FRMPayload, but a fragment of a packet, encoded as hexadecimal pairs with space separators? As long as it is not figured out how to build a FRMPayload, there is no point in deciphering it.
Update: If that mystery message is always 3 bytes, and if it is always the same for a given key (or available a single time per key), then per Maarten Bodewes's comment it might be a Key Check Value. The KCV is often the first 3 bytes of the encryption of the all-zero value with the key per the raw block cipher (equivalently: per ECB). Herbert Hanewinkel's JavaScript AES can work fully offline (which is necessary to avoid exposing the key) and can be used to manually validate a hypothesis. It tells that for the 16-byte key given in the question, a KCV would be cd15e1 (or c076fc per the variant in the next section).
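For reference, a KCV can also be checked in a few lines of C#; a minimal sketch under that definition (encrypt one all-zero block with the raw cipher and keep the first 3 bytes), which for the key in the question should print cd15e1:

using System;
using System.Security.Cryptography;

// Sketch: KCV = first 3 bytes of the AES-ECB encryption of an all-zero block.
static string KeyCheckValue(byte[] key)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;
        aes.Mode = CipherMode.ECB;      // raw block cipher
        aes.Padding = PaddingMode.None; // input is exactly one block
        using (var enc = aes.CreateEncryptor())
        {
            byte[] block = enc.TransformFinalBlock(new byte[16], 0, 16);
            return BitConverter.ToString(block, 0, 3).Replace("-", "").ToLowerInvariant();
        }
    }
}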
Also: the code uses CreateDecryptor to craft a gizmo in charge of the ECB decryption. That's likely incorrect in the context of decryption of a LoRaWAN payload, which requires ECB encryption for the decryption of some fields:
1626 Note: AES decrypt operation in ECB mode is used to encrypt the join-accept message so that the end-device can use an AES encrypt operation to decrypt the message. This way an end device only has to implement AES encrypt but not AES decrypt.
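So on the .NET side, decrypting a join-accept would use CreateEncryptor rather than CreateDecryptor. A minimal sketch, assuming payload already holds the 16- or 32-byte encrypted field:

// Sketch: a LoRaWAN join-accept is decrypted with an AES *encrypt* operation (ECB, no padding).
static byte[] DecryptJoinAccept(byte[] key, byte[] payload)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.None; // the payload is a whole number of blocks
        using (var enc = aes.CreateEncryptor())
            return enc.TransformFinalBlock(payload, 0, payload.Length);
    }
}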
In the context of decryption of LoRaWAN packets, you want to communicate with the AES engine using byte arrays, not strings. Strings have an encoding, whereas LoRaWAN ciphertext and the corresponding plaintext do not. Others seem to have managed to coerce the nice .NET do-it-all crypto API into getting this low-level job done.
In the HextoString code, I vaguely get that the intention (and perhaps the outcome) is that hex becomes the original hex input as a byte array (fully rid of hexadecimal and other encoding sins; in which case the variable hex should be renamed to something along the lines of pure_bytes). But then I'm at a loss about System.Text.Encoding.ASCII.GetString(hex). I'd be surprised if it just created a byte string from a byte array, or turned the key back into hexadecimal for later feeding to HexStringToByteArray in Aes128Decrypt. Plus, this makes me fear that any byte in [0x80..0xFF] might turn into 0x3F, which is not nice for the key, the ciphertext, and the corresponding LoRaWAN payload: these have no character encoding once de-hexified.
My conclusion is that if HexStringToByteArray does what its name suggests, and given the current interface of Aes128Decrypt, HextoString should simply remove whitespace (or is unneeded if HexStringToByteArray removes whitespace on the fly). But my recommendation is to change the interface to use byte arrays, not strings (see the previous section).
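A sketch of what that byte-array interface could look like (the names HexToBytes and Aes128EcbDecrypt are mine):

// Hex helper: works on bytes end to end; no character encoding involved.
static byte[] HexToBytes(string hex)
{
    hex = hex.Replace(" ", ""); // tolerate "d3 73 4c" style input
    byte[] bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hex.Substring(2 * i, 2), 16);
    return bytes;
}

static byte[] Aes128EcbDecrypt(byte[] key, byte[] cipher)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.None; // LoRaWAN fields are whole blocks already
        using (var dec = aes.CreateDecryptor()) // created once; reusable (see aside below)
            return dec.TransformFinalBlock(cipher, 0, cipher.Length);
    }
}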
As an aside: an ICryptoTransform object is supposed to be created from its key once and then used multiple times.
I have a client who is implementing ZNode which uses the aspnet_Membership table to store a password. This table contains an encrypted password, the password salt and is using the "PasswordFormat" of 2. From what I gather, "2" is a recoverable encrypted password.
The ColdFusion server is BlueDragon 9 Alpha. If you don't know BD, no worries, anything that ColdFusion supports "should" work and I have CF 10 to test it on as well.
If you know a better way to do this I'm all ears. I need to be able to create a user/password and store it in the ASP membership table via ColdFusion. In addition I need to be able to check the user/password for login.
When looking at the Web.config file, the ZnodeMembershipProvider is a "System.Web.Security.SqlMembershipProvider" type.
The machineKey entry looks like this: (took out the two key values)
<machineKey decryption="AES"
decryptionKey="[64 character string]"
validation="SHA1"
validationKey="[128 character string]"/>
If I try something like this:
Encrypt('myPassword', '[64 character string]', 'AES', 'Base64')
It says "Specified key is not a valid size for this algorithm."
I'm not very savvy on encryption or .NET. Thanks in advance.
I believe that .NET Password tables use Triple-DES, not AES. Try this instead.
Encrypt('myPassword', '[64 character string]', '3DES', 'Base64')
This answer I wrote up about DNN (DotNetNuke) authentication should do the trick (assuming no differences between ACF and BD). Essentially, there are few differences in how .NET and CF handle encryption. The primary ones are:
Encoding:
.NET uses UTF-16LE
CF always uses UTF-8. In ACF, this means you must use encryptBinary instead of encrypt. (I am not sure about OpenBD.)
Key Format:
.NET uses hexadecimal
CF typically uses base64, so you may need to convert the keys first.
Encryption Mode:
.NET defaults to CBC mode (requires IV)
CF defaults to ECB (no IV required)
In case the other link dies, here is the full example. While it uses 3DES, the basic concept is the same for AES. Note: In Java, the larger key sizes (i.e. 192, 256) are only available if the Sun Unlimited Strength Jurisdiction Policy Files are installed.
3DES Example:
// sample values
plainPassword = "password12345";
base64Salt = "x7le6CBSEvsFeqklvLbMUw==";
hexDecryptKey = "303132333435363738393031323334353637383930313233";
// first extract the bytes of the salt and password
saltBytes = binaryDecode(base64Salt, "base64");
passBytes = charsetDecode(plainPassword, "UTF-16LE" );
// next combine the bytes. note, the returned arrays are immutable,
// so we cannot use the standard CF tricks to merge them
// NOTE: If BlueDragon does not include "org.apache.commons...."
// just loop through the arrays and merge them manually
ArrayUtils = createObject("java", "org.apache.commons.lang.ArrayUtils");
dataBytes = ArrayUtils.addAll( saltBytes, passBytes );
// convert DNN hex key to base64 for ColdFusion
base64Key = binaryEncode(binaryDecode( hexDecryptKey, "hex"), "base64");
// create an IV and initialize it with all zeroes
// block size: 16 => AES, 8 => DES or TripleDES
blockSize = 8;
iv = javacast("byte[]", listToArray(repeatString("0,", blockSize)));
// encrypt using CBC mode
bytes = encryptBinary(dataBytes, base64Key, "DESede/CBC/PKCS5Padding", iv);
// result: WBAnoV+7cLVI95LwVQhtysHb5/pjqVG35nP5Zdu7T/Cn94Sd8v1Vk9zpjQSFGSkv
WriteOutput("encrypted password="& binaryEncode( bytes, "base64" ));
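For comparison, here is my reconstruction of the same operation in C# (a sketch, not code from the membership provider itself); it should produce the same Base64 output as the ColdFusion example above:

using System;
using System.Security.Cryptography;
using System.Text;

// salt bytes + UTF-16LE password bytes, 3DES/CBC with an all-zero IV
byte[] salt = Convert.FromBase64String("x7le6CBSEvsFeqklvLbMUw==");
byte[] pass = Encoding.Unicode.GetBytes("password12345"); // .NET strings are UTF-16LE
byte[] data = new byte[salt.Length + pass.Length];
Buffer.BlockCopy(salt, 0, data, 0, salt.Length);
Buffer.BlockCopy(pass, 0, data, salt.Length, pass.Length);

// the DNN key is hexadecimal: 48 hex characters => 24 key bytes
string hexKey = "303132333435363738393031323334353637383930313233";
byte[] key = new byte[hexKey.Length / 2];
for (int i = 0; i < key.Length; i++)
    key[i] = Convert.ToByte(hexKey.Substring(2 * i, 2), 16);

using (var tdes = new TripleDESCryptoServiceProvider())
{
    tdes.Key = key;
    tdes.Mode = CipherMode.CBC;
    tdes.Padding = PaddingMode.PKCS7; // PKCS5 and PKCS7 coincide for 8-byte blocks
    tdes.IV = new byte[8];            // all zeroes, matching the ColdFusion example
    using (var enc = tdes.CreateEncryptor())
    {
        byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
        Console.WriteLine("encrypted password=" + Convert.ToBase64String(cipher));
    }
}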
Asp.Net 4, C#, Oracle 11g
Hi, I'm trying to save the content of an HTML file to an Oracle CLOB column. The HTML file is uploaded to the server through an asp:Upload button. It works fine most of the time; the problem is that sometimes the stream in the button's FileContent property has an odd number of bytes, and the Write method of the CLOB column throws an exception stating that it requires an even number of bytes.
How can I solve this problem? Is there anything I can do to make my HTML files have an even number of bytes? The HTML files are encoded as UTF-8, and changing the encoding does modify the number of bytes, but it does not guarantee an even count.
Thanks in advance
Edit: For now, I'm just increasing the size of the buffer by 1 when the stream length is odd, then writing the stream, at its own length, into the buffer, leaving the last byte of the buffer at its default value. Please advise of any potential errors in doing it this way:
var buffer = new byte[stream.Length % 2 > 0 ? stream.Length + 1 : stream.Length];
stream.Read(buffer, 0, (int)stream.Length);
clob.Write(buffer, 0, buffer.Length);
Thanks again
Edit: the previous solution didn't work. The new approach consists of converting the stream to a string, adding a space at the end of the string, and then converting it back to a stream. It has been working fine so far. Sorry I can't post the code; I couldn't figure out how to overcome StackOverflow's policy of 4 spaces for code.
I'm assuming that you are using the System.Data.OracleClient classes (as opposed to Oracle's ODP.NET).
The OracleLob class has no method for writing a string, which I would expect for handling CLOBs. Instead, the documentation says:
The .NET Framework Data Provider for Oracle handles all CLOB and NCLOB
data as Unicode. Therefore, when accessing CLOB and NCLOB data types,
you are always dealing with the number of bytes, where each character
is 2 bytes. For example, if a string of text containing three
characters is saved as an NCLOB on an Oracle server where the
character set is 4 bytes per character, and you perform a Write
operation, you specify the length of the string as 6 bytes, although
it is stored as 12 bytes on the server.
In this context, Unicode means the UTF-16 encoding, which requires 2 bytes for most characters and 4 bytes for characters in the supplementary planes.
So if you have a string, you have to convert it to UTF-16 first:
byte[] utf16Bytes = Encoding.Unicode.GetBytes(str);
clob.Write(utf16Bytes, 0, utf16Bytes.Length);
Or you can use a StreamWriter to achieve the same:
OracleLob clob = ...
using (StreamWriter writer = new StreamWriter(clob, Encoding.Unicode))
{
writer.Write(str);
}
If your data is in a UTF-8 encoded byte array, then you have to convert it to UTF-16:
byte[] utf8Data = ...
byte[] utf16Data = Encoding.Convert(Encoding.UTF8, Encoding.Unicode, utf8Data);
clob.Write(utf16Data, 0, utf16Data.Length);
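Putting this together for the upload scenario in the question (a sketch: fileUpload stands for the ASP.NET FileUpload control on the page, an assumption about your markup, and clob is an already-open OracleLob):

// UTF-8 bytes from the uploaded HTML file -> UTF-16 bytes for the CLOB.
byte[] utf8Upload = fileUpload.FileBytes; // hypothetical FileUpload instance
byte[] utf16Upload = Encoding.Convert(Encoding.UTF8, Encoding.Unicode, utf8Upload);
// The byte count is now always even: every UTF-16 code unit is 2 bytes.
clob.Write(utf16Upload, 0, utf16Upload.Length);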
I tried this on Flex 3 and am facing an issue with uploading JPG/PNG images: trace of readUTFBytes returns the correct byte length, but tmpFileContent is truncated; it appears to upload just 3 characters of data to the server through the PHP script, which makes the image unusable. I have no issue with non-image formats. What is wrong here?
var tmpFileContent:String = fileRef.data.readUTFBytes(fileRef.data.length);
Is String capable of handling bytes?
I'm not sure what you're looking to do with the image, but you might want to read this:
http://livedocs.adobe.com/flex/3/html/help.html?content=Filesystem_15.html
You may also need a image encoder such as the JPEGEncoder: http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/mx/graphics/codec/JPEGEncoder.html
You could always encode using base64:
var enc:Base64Encoder = new Base64Encoder();
enc.encodeBytes(fileRef.data);
var base64data:String = enc.drain();
The method used in the tutorial is not going to work safely for anything but text files. An arbitrary binary format is likely to contain zeros. A zero (a byte whose value is 0) is generally considered a string terminator in many languages/platforms. This is also the case in ActionScript, as this code shows:
var str:String = "abc\x00def";
trace(str);
The string will be truncated to "abc", since 0x00 is considered to mark the end of a string.
I think your best bet is to encode the content to base64 as maclema suggested. On the PHP side, decode it back before writing the file with something like:
file_put_contents($myFilePath, base64_decode($fileData["filedata"]));
Also, file_put_contents is binary-safe as far as PHP strings go; but if you use fopen() instead, open with fopen('your_path', "wb"), then fwrite() and fclose(). Notice the "b" in "wb", which stands for binary: without that flag (on Windows) you'll probably have problems with some characters (newline and carriage return, for example).
Added:
Perhaps, following davr's suggestion, you could try sending the data as a ByteArray to see if AMFPHP handles it correctly.
PHP does allow embedded NULs in strings, as this code shows:
$str = "a\x00b";
var_dump(ord($str[0])); // 97
var_dump(ord($str[1])); // 0
var_dump(ord($str[2])); // 98
So, if AMFPHP converts the bytearray to a string and does not mangle it in the process, this could actually work.
// method saves files on the server
function uploadFiles($fileData) {
// new file path and name
// to avoid overwriting files we prepend the microtime to the file name
$myFilePath = '../../_uploads/'.
preg_replace("/[^0-9]+/","_",microtime()).'_'.$fileData["filename"];
// writing on the disk
$fp = fopen($myFilePath,"wb");
if($fp) {
fwrite($fp,$fileData["filedata"]);
fclose($fp);
}
// returning response - is not used anywhere
return true;
}
Otherwise, try echoing var_dump($fileData['filedata']) to see what type AMFPHP actually converts the data to. Perhaps it uses an array (not sure), but given how strings work in PHP (much like a buffer of single-byte characters), I think it could be just using strings.
I'm having some trouble matching the value returned from RSA-signing a Base64 SHA-1 hash in the ActionScript as3crypto library with the result returned by C#.
I'm passing a Base64 hash, decoded as a byte array, into the sign() function provided by as3crypto and Base64-encoding the result.
However, this result never matches the result returned by a C# function which performs the same task. Does it matter that the function takes in and returns hex even though it works at the byte array level?
Please see my signing function below to check I haven't missed anything!
private function signHash(hashInBase64:String):String
{
var src:ByteArray = Base64.decodeToByteArray(hashInBase64);
var key:RSAKey = getRSAKey();
var dst:ByteArray = new ByteArray();
key.sign(src, dst, src.length);
return Base64.encodeByteArray(dst);
}
Anyone had much experience with the AS3Crypto library?
Any help would be great!!!
Thanks,
Jon
I assume that your C# version is using RSA PKCS #1 version 1.5. The standard computes signatures by doing an RSA private key operation over a byte string composed as
0x00 0x01 || 0xff* || 0x00 || OID || hash
Looking at the as3crypto code shows that the RSAKey class does not add any OID during the sign operation. Hence if you don't do it yourself, you'll get incorrect results.
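For comparison, the .NET side inserts the OID into the PKCS#1 DigestInfo structure automatically when you name the hash algorithm; a minimal C# sketch:

using System;
using System.Security.Cryptography;

// Sketch: PKCS#1 v1.5 signature over a SHA-1 hash. SignHash adds the OID
// (1.3.14.3.2.26 for SHA-1) before the private-key operation.
static string SignSha1Hash(RSACryptoServiceProvider rsa, string hashInBase64)
{
    byte[] hash = Convert.FromBase64String(hashInBase64);
    byte[] signature = rsa.SignHash(hash, CryptoConfig.MapNameToOID("SHA1"));
    return Convert.ToBase64String(signature);
}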
Looking at the code also shows that as3crypto is vulnerable to this attack, because it does not verify the padding properly. This attack is more than 3 years old, so it seems like a good idea to use a different library than as3crypto.
There is now an ActionScript crypto library compatible with .NET: http://code.google.com/p/flame. It looks like it supports RSA exactly the way .NET does.