AES Decryption IV error: must be a class [B value

I received the following data from a vendor so that we can decrypt data on our end.
Algorithm : AES 256bit
Key : test123xxxxxx
Key Length : 32
Initialize Vector: ei8B3hcD8VhnN_cK
Built in methods : YES (From inbuilt class CommonCryptor.h method with variable CCCryptorStatus).
Please note I have no idea if the last line has any relevance to our decryption.
I attempted the following on a sample string that we should be able to decode.
<cfset item = "eLgExhcox5Ro1kPB1OlJP1w6tEJ3x94gM/QJS5dCZkyjEVfNjIid3R7JP4l1WZD1" />
<cfoutput>#decrypt(#item#, #key#, 'AES', 'Base64', #iv# )#</cfoutput>
The error I receive is: The value of parameter 5, which is currently ei8B3hcD8VhnN_cK, must be a class [B value. I cannot find anything about this error.
I am also assuming the encoding is Base64, which I am confirming with the vendor. Is there anything else I'm missing?

My guess would be it is complaining that the IV value is not binary. If your IV value is a base64 string, use binaryDecode(yourIVString, "base64") to get the binary value.
a class [B value
The [B refers to the expected object: an array of bytes. Apparently [B is the "binary name [...] as specified by the Java Language Specification (§13.1)". You will see the same thing if you create a byte[] array and dump the class name:
// show binary and canonical class names
arr = javacast("byte[]", [1]);
writeOutput("name="& arr.getClass().name);
writeOutput("<br>canonicalName="& arr.getClass().canonicalName);
Side note: if you are using a 256 bit key, be sure you have installed the JCE (Java Cryptography Extension) Unlimited Strength Jurisdiction Policy Files first. Otherwise, you are limited to a maximum key size of 128 bits.
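To make the expected type concrete, here is a minimal Java sketch of the same decryption (ColdFusion runs on the JVM, so this is roughly what happens underneath). The CBC mode, the PKCS5 padding, and the treatment of the key and IV as raw bytes are all assumptions the vendor would need to confirm:
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class VendorDecrypt {
    public static void main(String[] args) throws Exception {
        String item = "eLgExhcox5Ro1kPB1OlJP1w6tEJ3x94gM/QJS5dCZkyjEVfNjIid3R7JP4l1WZD1";
        // Assumption: the 32-character key and the 16-character IV are used as raw bytes.
        byte[] key = "0123456789abcdef0123456789abcdef".getBytes(StandardCharsets.UTF_8); // placeholder key
        byte[] iv = "ei8B3hcD8VhnN_cK".getBytes(StandardCharsets.UTF_8); // a byte[], i.e. class [B
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding"); // assumed mode and padding
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] plain = cipher.doFinal(Base64.getDecoder().decode(item));
        System.out.println(new String(plain, StandardCharsets.UTF_8));
    }
}
The point the error message is making is the same in both languages: parameter 5 must arrive as a byte array. In CFML that means passing the result of binaryDecode(iv, "base64"), or charsetDecode(iv, "utf-8") if the IV turns out to be the literal 16 characters, rather than the string itself.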

Related

Verifying DSA Signature generated by PKCS#11 with OpenSSL

I want to sign a SHA-256 hash with DSA using the PKCS#11 Java Wrapper for the PKCS#11 API of a hardware security module. For this purpose I select the mechanism CKM_DSA, load the corresponding DSA key from the token, and have the data (read as a byte array) signed. The key I use for testing is 1024 bits long.
Everything seems to work fine: the key is loaded, and Session.sign() yields a byte[] array of length 40. This corresponds to the PKCS#11 spec, which says:
"For the purposes of this mechanism, a DSA signature is a 40-byte string,
corresponding to the concatenation of the DSA values r and s, each represented most significant byte first."
Now I want to verify this signature using OpenSSL, i.e., using
openssl dgst -d -sha256 -verify ${PUBLIC_KEY} -signature signature.der <raw input file>
This works if I
a) created the signature using OpenSSL, or
b) created the signature using Bouncy Castle, encoding the result as an ASN.1 DER sequence.
Now I want to do the same with the PKCS#11 signature. My question is: how do I format this 40-byte array? I tried the following:
//sign data
byte[] signedData = this.pkcs11Session.sign(dataToSign);
//convert result
byte[] r = new byte[20];
byte[] s = new byte[20];
System.arraycopy(signedData, 0, r, 0, 20);
System.arraycopy(signedData, 19, s, 0, 20);
//encode result
ASN1EncodableVector v = new ASN1EncodableVector();
v.add(new ASN1Integer(r));
v.add(new ASN1Integer(s));
return new DERSequence(v).getEncoded(ASN1Encoding.DER);
The encoding part seems to be correct, because it works if I produce r and s directly with Bouncy Castle and another software key. Besides, OpenSSL accepts the input format, but the verification fails, sometimes with an error and sometimes just with "Verification failure".
Thus, I assume the conversion of the PKCS#11 signature to r and s is wrong. Can someone help finding the mistake?
You probably have to convert the r and s values to a BigInteger before you encode them. The reason for this is that ASN.1 uses signed value encoding, while DSA produces unsigned values. So you've got a pretty high chance of getting a negative value in your ASN.1, which will result in an error.
To perform the conversion, use new BigInteger(1, r) and new BigInteger(1, s) and put the results into the ASN1Integer instances. Here the 1 indicates that the input is to be treated as an unsigned (positive) value.
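Putting that together, a corrected conversion might look like the sketch below. Note also that, per the spec quoted above, s starts at offset 20; the arraycopy from offset 19 in the question makes r and s overlap by one byte:
import java.io.IOException;
import java.math.BigInteger;
import org.bouncycastle.asn1.ASN1EncodableVector;
import org.bouncycastle.asn1.ASN1Encoding;
import org.bouncycastle.asn1.ASN1Integer;
import org.bouncycastle.asn1.DERSequence;

public class DsaSignatureConverter {
    // Convert a raw 40-byte PKCS#11 DSA signature (r || s) to a DER SEQUENCE
    // of two INTEGERs, which is the format OpenSSL expects.
    static byte[] rawToDer(byte[] signedData) throws IOException {
        byte[] r = new byte[20];
        byte[] s = new byte[20];
        System.arraycopy(signedData, 0, r, 0, 20);
        System.arraycopy(signedData, 20, s, 0, 20); // s starts at offset 20, not 19
        ASN1EncodableVector v = new ASN1EncodableVector();
        v.add(new ASN1Integer(new BigInteger(1, r))); // 1 = treat the bytes as unsigned
        v.add(new ASN1Integer(new BigInteger(1, s)));
        return new DERSequence(v).getEncoded(ASN1Encoding.DER);
    }
}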

Ada83 Unchecked Conversion of Record in Declaration

I want to declare a constant as a 16 bit integer of type Word and assign a value to it. To support portability between Big and Little Endian platforms, I can't safely use an assignment like this one:
Special_Value : Constant Word := 16#1234#;
because the byte order might be misinterpreted.
So I use a record like this:
Type Double_Byte Is Record
   Byte_1 : Byte;  -- most significant byte
   Byte_0 : Byte;  -- least significant byte
End Record;

For Double_Byte Use Record
   Byte_1 At 0 Range 0..7;
   Byte_0 At 0 Range 8..15;
End Record;
However, in some cases, I have a large number of pre-configuration assignments that look like this:
Value_1 : Constant Word := 16#1234#;
This is very readable, but endianness issues can cause it to be misinterpreted in a number of ways (in the debugger, for example).
Because I have many lines where I do this, I tried the following because it is fairly compact as source code. It is working, but I'm not sure why, or what part of the Ada Reference Manual covers this concept:
Value_1 : Constant Word := DByte_To_Word((Byte_1 => 16#12#,
                                          Byte_0 => 16#34#));
where DByte_To_Word is defined as
Function DByte_To_Word Is New Unchecked_Conversion(Double_Byte, Word);
I think I have seen something in the ARM that allows me to do this, but not the way I described above. I can't find it and I don't know what I would be searching for.
There’s nothing unusual about your call to DByte_To_Word; (Byte_1 => 16#12#, Byte_0 => 16#34#) is a perfectly legitimate record aggregate of type Double_Byte, see LRM83 4.3.1.
But! It’s true that, on a big-endian machine, the first (lowest-addressed) byte of your Word will contain 16#12#, whereas on a little-endian machine it will contain 16#34#. The CPU takes care of all of that; if you print the value of Special_Value you will get 16#1234# (or 0x1234) no matter which endianness the computer implements.
The only time you’ll encounter endianness issues is when you’re copying binary data between machines of different endianness, for example over the network or via a file.
If your debugger gets confused about this, you need a better debugger!
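To illustrate the value-versus-representation distinction outside Ada, here is a small Java sketch (an analogy, not Ada): the in-memory byte order depends on endianness, but the value itself does not.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        short value = 0x1234; // the value itself is endian-independent
        for (ByteOrder order : new ByteOrder[] { ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN }) {
            ByteBuffer buf = ByteBuffer.allocate(2).order(order);
            buf.putShort(value);
            // The memory layout differs: 12 34 (big-endian) vs 34 12 (little-endian)...
            System.out.printf("%-14s bytes = %02x %02x%n", order, buf.get(0), buf.get(1));
        }
        // ...but the printed value is 0x1234 either way, just as the answer says.
        System.out.printf("value = 0x%04x%n", value);
    }
}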

Lua Encryption with Shared Key

I've been using this open-source function to encode and decode strings via Base64, and I was wondering if there was a way to have a specific 'key' shared among me and some friends, so that only the people who have this key can properly encrypt or decrypt the messages.
-- Lua 5.1+ base64 v3.0 (c) 2009 by Alex Kloss <alexthkloss@web.de>
-- licensed under the terms of the LGPL2
-- character table string
local b='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
-- encoding
function enc(data)
    return ((data:gsub('.', function(x)
        local r,b='',x:byte()
        for i=8,1,-1 do r=r..(b%2^i-b%2^(i-1)>0 and '1' or '0') end
        return r
    end)..'0000'):gsub('%d%d%d?%d?%d?%d?', function(x)
        if (#x < 6) then return '' end
        local c=0
        for i=1,6 do c=c+(x:sub(i,i)=='1' and 2^(6-i) or 0) end
        return b:sub(c+1,c+1)
    end)..({ '', '==', '=' })[#data%3+1])
end
-- decoding
function dec(data)
    data = string.gsub(data, '[^'..b..'=]', '')
    return (data:gsub('.', function(x)
        if (x == '=') then return '' end
        local r,f='',(b:find(x)-1)
        for i=6,1,-1 do r=r..(f%2^i-f%2^(i-1)>0 and '1' or '0') end
        return r
    end):gsub('%d%d%d?%d?%d?%d?%d?%d?', function(x)
        if (#x ~= 8) then return '' end
        local c=0
        for i=1,8 do c=c+(x:sub(i,i)=='1' and 2^(8-i) or 0) end
        return string.char(c)
    end))
end
So, say a function like this was given to me and three friends, and we all had a private string key called 'flibble'... How could we share messages undecipherable by others?
No, not with Base64. Base64 is not encryption, it's encoding. Base64 does not take a key as a parameter; it just takes binary data and converts it to printable ASCII.
There are of course tricks to make Base64 look a bit more like ciphertext: just use a shuffled alphabet (in your case, in the variable b). That is, however, a simple substitution cipher; as such it should be considered obfuscation rather than encryption. I could explain to a random high-school student how to crack it.
Generally you need to first encrypt using a block cipher with a mode of operation, and then perform the encoding. You'll need something like AES for the confidentiality and HMAC for the integrity and authenticity of the messages.
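For illustration, here is a minimal encrypt-then-MAC sketch in Java (the keys, mode, and transport format are assumptions for the example; in Lua you would do the same through a crypto binding):
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class SharedKeyMessage {
    // Assumption: two independent 16-byte keys agreed upon out of band
    // (in practice, derive them from the shared secret with a KDF).
    static final byte[] ENC_KEY = "0123456789abcdef".getBytes();
    static final byte[] MAC_KEY = "fedcba9876543210".getBytes();

    static String seal(byte[] message) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv); // fresh random IV for every message
        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(ENC_KEY, "AES"), new IvParameterSpec(iv));
        byte[] ct = aes.doFinal(message);
        Mac hmac = Mac.getInstance("HmacSHA256"); // encrypt-then-MAC over IV || ciphertext
        hmac.init(new SecretKeySpec(MAC_KEY, "HmacSHA256"));
        hmac.update(iv);
        byte[] tag = hmac.doFinal(ct);
        Base64.Encoder b64 = Base64.getEncoder(); // Base64 is applied only at the very end
        return b64.encodeToString(iv) + "." + b64.encodeToString(ct) + "." + b64.encodeToString(tag);
    }
}
The receiving side decodes the three parts, recomputes and verifies the HMAC before doing anything else, and only then decrypts. Anyone without the two keys can still decode the Base64, but all they get is ciphertext.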
I would recommend something like luacrypto. You really don't want to implement crypto primitives in a high-level language such as Lua, for performance reasons alone. Many Lua libraries offer just AES or HMAC but not both, and many seem to be one-man projects rather than well-supported, maintained libraries, so choose carefully.

Lua Alien - Pointer Arithmetic and Dereferencing

My goal is to call Windows' GetModuleInformation function to get a MODULEINFO struct back. This is all working fine. The problem comes as a result of me wanting to do pointer arithmetic and dereferences on the LPVOID lpBaseOfDll which is part of the MODULEINFO.
Here is my code to call the function in Lua:
require "luarocks.require"
require "alien"
sizeofMODULEINFO = 12 --Gotten from sizeof(MODULEINFO) in Visual Studio
MODULEINFO = alien.defstruct{
    {"lpBaseOfDll", "pointer"}; --Does this need to be a buffer? If so, how?
    {"SizeOfImage", "ulong"};
    {"EntryPoint", "pointer"};
}
local GetModuleInformation = alien.Kernel32.K32GetModuleInformation
GetModuleInformation:types{ret = "int", abi = "stdcall", "long", "pointer", "pointer", "ulong"}
local GetModuleHandle = alien.Kernel32.GetModuleHandleA
GetModuleHandle:types{ret = "pointer", abi = "stdcall", "pointer"}
local GetCurrentProcess = alien.Kernel32.GetCurrentProcess
GetCurrentProcess:types{ret = "long", abi = "stdcall"}
local mod = MODULEINFO:new() --Create struct (needs buffer?)
local currentProcess = GetCurrentProcess()
local moduleHandle = GetModuleHandle("myModule.dll")
local success = GetModuleInformation(currentProcess, moduleHandle, mod(), sizeofMODULEINFO)
if success == 0 then --If there is an error, exit
    return 0
end
local dataPtr = mod.lpBaseOfDll
--Now how do I do pointer arithmetic and/or dereference "dataPtr"?
At this point, mod.SizeOfImage seems to be giving me the values I expect, so I know the functions are being called and the struct is being populated. However, I cannot do pointer arithmetic on mod.lpBaseOfDll because it is a userdata.
The only information in the Alien Documentation that may address what I'm trying to do are these:
Pointer Unpacking
Alien also provides three convenience functions that let you
dereference a pointer and convert the value to a Lua type:
alien.tostring takes a userdata (usually returned from a function that has a pointer return value), casts it to char*, and returns a Lua
string. You can supply an optional size argument (if you don’t Alien
calls strlen on the buffer first).
alien.toint takes a userdata, casts it to int*, dereferences it and returns it as a number. If you pass it a number it assumes the
userdata is an array with this number of elements.
alien.toshort, alien.tolong, alien.tofloat, and alien.todouble are like alien.toint, but work with the respective typecasts.
Unsigned versions are also available.
My issue with those is that I would need to go byte by byte, and there is no alien.tochar function. Also, and more importantly, this still doesn't solve the problem of accessing elements beyond the base address.
Buffers
After making a buffer you can pass it in place of any argument of
string or pointer type.
...
You can also pass a buffer or other userdata to the new method of your
struct type, and in this case this will be the backing store of the
struct instance you are creating. This is useful for unpacking a
foreign struct that a C function returned.
These seem to suggest I can use an alien.buffer as the backing for MODULEINFO's LPVOID lpBaseOfDll. Buffers are described as byte arrays, which can be indexed using this notation: buf[1], buf[2], etc. Additionally, buffers are indexed byte by byte, so this would ideally solve all problems (if I am understanding this correctly).
Unfortunately, I cannot find any examples of this anywhere (not in the docs, Stack Overflow, Google, etc.), so I have no idea how to do it. I've tried a few variations of syntax, but nearly every one gives a runtime error (and the others simply do not work as expected).
Any insight on how I might be able to go byte-by-byte (C char-by-char) across the mod.lpBaseOfDll through dereferences and pointer arithmetic?
I need to go byte-by-byte, and there is no alien.tochar function.
Sounds like alien.tostring has you covered:
alien.tostring takes a userdata (usually returned from a function that has a pointer return value), casts it to char*, and returns a Lua string. You can supply an optional size argument (if you don’t Alien calls strlen on the buffer first).
Lua strings can contain arbitrary byte values, including 0 (i.e. they aren't null-terminated like C strings), so as long as you pass a size argument to alien.tostring you can get back data as a byte buffer, aka Lua string, and do whatever you please with the bytes.
It sounds like you can't tell it to start at an arbitrary offset from the given pointer address. The easiest way to tell for sure, if the documentation doesn't tell you, is to look at the source. It would probably be trivial to add an offset parameter.

Microsoft CNG BCryptEncrypt returning ciphertext == plaintext

I am trying to implement an AES-OFB wrapper around CNG's AES for symmetric encryption.
I have run into an issue that I cannot understand. I have created an AES algorithm handle (BCRYPT_AES_ALGORITHM) and imported an AES key. I then attempt to generate 16 bytes of keystream for XORing with my plaintext/ciphertext. The first time I run through this mechanism, keyStreamPtr changes from one random-looking byte stream to another; however, by the third pass (the third set of 16 keystream bytes) I start getting the same output back, and it happens every time from then on.
status = BCryptEncrypt((BCRYPT_KEY_HANDLE)keyHandle,
                       keyStreamPtr,
                       keyStreamLength,
                       NULL,            // no padding info
                       NULL,            // no IV
                       0,               // no IV
                       keyStreamPtr,    // output written over the input buffer
                       keyStreamLength,
                       &Length,
                       0);              // no option flags
Has anybody ever seen anything like this? Why would AES ever return ciphertext identical to the plaintext input? Again, this is for an AES-OFB implementation. Perhaps I am doing something wrong?
The only thing I can think of is that you XOR with the key stream twice. If you do that, you effectively perform an encrypt followed by a decrypt: P XOR C XOR C = P, where C is the key stream and P is the plaintext. You might want to look at the buffer/stream handling within your code.
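For comparison, here is a minimal OFB sketch in Java rather than CNG (the helper and its key handling are assumptions): each keystream block is produced by encrypting the previous one, and the data is XORed with the keystream exactly once.
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class OfbSketch {
    // OFB mode built from a raw AES block cipher. Encryption and decryption
    // are the same operation, since only the keystream is ever encrypted.
    static byte[] ofb(byte[] key, byte[] iv, byte[] input) throws Exception {
        Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding"); // single-block AES
        ecb.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        byte[] out = new byte[input.length];
        byte[] feedback = iv.clone(); // the feedback register starts at the IV
        for (int off = 0; off < input.length; off += 16) {
            feedback = ecb.doFinal(feedback); // next keystream block = AES(previous block)
            for (int i = 0; i < 16 && off + i < input.length; i++) {
                out[off + i] = (byte) (input[off + i] ^ feedback[i]); // XOR exactly once
            }
        }
        return out;
    }
}
Keeping the keystream buffer separate from the data buffer, as here, makes it easy to see that each data byte is XORed with the keystream only once.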
