I want to sign a SHA-256 hash with DSA using the PKCS#11 Java wrapper for the PKCS#11 API of a hardware security module. For this purpose I select the mechanism CKM_DSA, load the corresponding DSA key from the token, and have the data (read as a byte array) signed. The key I use for testing is 1024 bits long.
Everything seems to work fine: the key is loaded, and Session.sign() yields a byte[] of length 40. This corresponds to the PKCS#11 spec, which says:
"For the purposes of this mechanism, a DSA signature is a 40-byte string,
corresponding to the concatenation of the DSA values r and s, each represented most significant byte first."
Now I want to verify this signature using OpenSSL, i.e., using
openssl dgst -d -sha256 -verify ${PUBLIC_KEY} -signature signature.der <raw input file>
This works if I
a) create the signature using OpenSSL, or
b) create the signature using Bouncy Castle and encode the result as an ASN.1 DER sequence.
Now I want to do the same with the PKCS#11 signature. My question is: how do I format this 40-byte array? I tried the following:
//sign data
byte[] signedData = this.pkcs11Session.sign(dataToSign);
//convert result
byte[] r = new byte[20];
byte[] s = new byte[20];
System.arraycopy(signedData, 0, r, 0, 20);
System.arraycopy(signedData, 20, s, 0, 20);
//encode result
ASN1EncodableVector v = new ASN1EncodableVector();
v.add(new ASN1Integer(r));
v.add(new ASN1Integer(s));
return new DERSequence(v).getEncoded(ASN1Encoding.DER);
The encoding part seems to be correct, because it works if I produce r and s directly with Bouncy Castle and another, software-based key. Besides, openssl accepts the input format, but verification fails sometimes with an error and sometimes just with "Verification Failure".
Thus, I assume my conversion of the PKCS#11 signature to r and s is wrong. Can someone help me find the mistake?
You probably have to convert the r and s values to a BigInteger before encoding them. The reason for this is that ASN.1 INTEGERs use signed (two's complement) encoding, while DSA produces unsigned values. So you've got a pretty high chance of ending up with a negative value in your ASN.1, which will result in an error.
To perform the conversion, use new BigInteger(1, r) and new BigInteger(1, s) and put the results into the ASN1Integer instances. Here the signum argument 1 indicates that the byte array holds an unsigned, positive value.
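Putting it together, a minimal sketch of the corrected conversion (assuming Bouncy Castle, as in your own snippet, and the 40-byte r || s layout from the spec):
//sign data
byte[] signedData = this.pkcs11Session.sign(dataToSign);
//split the raw signature into r and s (20 bytes each for a 1024-bit key)
int half = signedData.length / 2;
byte[] r = new byte[half];
byte[] s = new byte[half];
System.arraycopy(signedData, 0, r, 0, half);
System.arraycopy(signedData, half, s, 0, half);
//encode as a DER SEQUENCE; signum 1 forces an unsigned interpretation
ASN1EncodableVector v = new ASN1EncodableVector();
v.add(new ASN1Integer(new BigInteger(1, r)));
v.add(new ASN1Integer(new BigInteger(1, s)));
return new DERSequence(v).getEncoded(ASN1Encoding.DER);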
I have a NASM 64-bit DLL called from ctypes. The program multiplies two 64-bit integers and returns a 128-bit integer, so I am using xmm SIMD instructions. It loops 10,000 times and stores its results in a memory buffer created by malloc.
Here is the part of the NASM code where the SIMD calculations are performed:
cvtsi2sd xmm0,rax
mov rax,[pcalc_result_0]
cvtsi2sd xmm1,rax
PMULUDQ xmm0,xmm1
lea rdi,[rel s_ptr] ; Pointer
mov rbp,qword[rdi]
mov rcx,[s_ctr]
;movdqa [rbp + rcx],xmm0
movdqu [rbp + rcx],xmm0
add rcx,16
The movdqa instruction does not work (the program crashes, even though it's assembled with the align=16 directive). The movdqu instruction does work, but when I return the array to ctypes, I need to convert the returned pointer to 128 bits, and there is no 128-bit ctypes datatype. Here's the relevant part of the ctypes code:
CallName.argtypes = [ctypes.POINTER(ctypes.c_double)]
CallName.restype = ctypes.POINTER(ctypes.c_int64)
n0 = ctypes.cast(a[0],ctypes.POINTER(ctypes.c_int64))
n0_size = int(a[0+1] / 8)
x0 = n0[:n0_size]
where x0 is the returned array converted to a usable form, but not to 128 bits.
There is a post at Handling 128-bit integers with ctypes that deals with passing 128-bit arrays in but not out.
My questions are:
-- Should I use an instruction other than movdqa or movdqu? Of the many SIMD instructions, these seem the most appropriate.
-- Python can handle integers up to any arbitrary size, but apparently ctypes can't. Is there any way to use 128-bit integers from ctypes when there is no ctypes size larger than 64 bits?
You can use 16-byte buffers to represent 128-bit integers and convert to and from byte format on the Python side. Such a buffer may not be 16-byte aligned, so you should use movdqu. I would also use an input/output parameter instead of a return value, so Python can manage the memory:
>>> import ctypes
>>> value = 0xaabbccddeeff
>>> int128 = ctypes.create_string_buffer(value.to_bytes(16,'little',signed=True))
>>> int128
<ctypes.c_char_Array_17 object at 0x000001ECCB1D41C8>
>>> int128.raw
b'\xff\xee\xdd\xcc\xbb\xaa\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
(NOTE: the buffer gets null-terminated, which is why it is 17 bytes.)
Pass this writable buffer to your function; the function can write the result back into the same buffer. On return, use the following to convert it back to a Python integer:
>>> hex(int.from_bytes(int128.raw[:16],'little',signed=True))
'0xaabbccddeeff'
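For completeness, a sketch of the round trip; mylib.dll and multiply128 are hypothetical stand-ins for your DLL and its exported function:
import ctypes

lib = ctypes.WinDLL("mylib.dll")               # hypothetical DLL name
lib.multiply128.argtypes = [ctypes.c_char_p]   # in/out 16-byte buffer
lib.multiply128.restype = None

buf = ctypes.create_string_buffer(16)          # writable, zero-initialized
lib.multiply128(buf)                           # the DLL writes the 128-bit result
result = int.from_bytes(buf.raw[:16], 'little', signed=True)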
I've been using this open-source function to encrypt and decrypt strings via base64, and I was wondering whether there is a way to share a specific 'key' among me and some friends, so that only the people who have this 'key' can properly encrypt or decrypt the messages.
-- Lua 5.1+ base64 v3.0 (c) 2009 by Alex Kloss <alexthkloss@web.de>
-- licensed under the terms of the LGPL2
-- character table string
local b='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
-- encoding
function enc(data)
return ((data:gsub('.', function(x)
local r,b='',x:byte()
for i=8,1,-1 do r=r..(b%2^i-b%2^(i-1)>0 and '1' or '0') end
return r;
end)..'0000'):gsub('%d%d%d?%d?%d?%d?', function(x)
if (#x < 6) then return '' end
local c=0
for i=1,6 do c=c+(x:sub(i,i)=='1' and 2^(6-i) or 0) end
return b:sub(c+1,c+1)
end)..({ '', '==', '=' })[#data%3+1])
end
-- decoding
function dec(data)
data = string.gsub(data, '[^'..b..'=]', '')
return (data:gsub('.', function(x)
if (x == '=') then return '' end
local r,f='',(b:find(x)-1)
for i=6,1,-1 do r=r..(f%2^i-f%2^(i-1)>0 and '1' or '0') end
return r;
end):gsub('%d%d%d?%d?%d?%d?%d?%d?', function(x)
if (#x ~= 8) then return '' end
local c=0
for i=1,8 do c=c+(x:sub(i,i)=='1' and 2^(8-i) or 0) end
return string.char(c)
end))
end
So, say a function like this was given to me and three friends, and we all had a private string key called 'flibble'... How could we share messages undecipherable by others?
No, not with base 64. Base 64 is not encryption, it's encoding. Base 64 does not take a key as a parameter; it just takes binary data and converts it to printable ASCII.
There are of course tricks to make base 64 look a bit more like ciphertext: just put in a shuffled alphabet (in your case, in the variable b). That is, however, simple substitution; as such it should be considered obfuscation rather than encryption. I could explain to a random high-school student how to crack it.
Generally you need to first encrypt using a block cipher with a mode of operation, and only then perform the encoding. You'll need something like AES for the confidentiality and HMAC for the integrity and authenticity of the messages.
I would recommend something like luacrypto. You really don't want to implement crypto in a high-level language such as Lua, for performance reasons alone. Many Lua libraries offer just AES or just HMAC, and many seem to be one-man projects rather than well supported/maintained libraries, so choose carefully.
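As a rough sketch of the encrypt-then-MAC-then-encode flow (this assumes luacrypto's crypto.encrypt, crypto.hmac.digest and crypto.rand.bytes; check the names against the version you install, and note that encKey/macKey must be full-length random keys derived from your password, not the literal string 'flibble'):
local crypto = require("crypto")

-- encrypt-then-MAC, then base64-encode with the enc() from the question
local function protect(message, encKey, macKey)
  local iv = crypto.rand.bytes(16)                      -- fresh IV per message
  local ct = crypto.encrypt("aes-256-cbc", message, encKey, iv)
  local tag = crypto.hmac.digest("sha256", iv .. ct, macKey, true)
  return enc(iv .. ct .. tag)
end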
My goal is to call Windows' GetModuleInformation function to get a MODULEINFO struct back. This is all working fine. The problem comes as a result of me wanting to do pointer arithmetic and dereferences on the LPVOID lpBaseOfDll which is part of the MODULEINFO.
Here is my code to call the function in Lua:
require "luarocks.require"
require "alien"
sizeofMODULEINFO = 12 --Gotten from sizeof(MODULEINFO) from Visual Studio
MODULEINFO = alien.defstruct{
{"lpBaseOfDll", "pointer"}; --Does this need to be a buffer? If so, how?
{"SizeOfImage", "ulong"};
{"EntryPoint", "pointer"};
}
local GetModuleInformation = alien.Kernel32.K32GetModuleInformation
GetModuleInformation:types{ret = "int", abi = "stdcall", "long", "pointer", "pointer", "ulong"}
local GetModuleHandle = alien.Kernel32.GetModuleHandleA
GetModuleHandle:types{ret = "pointer", abi = "stdcall", "pointer"}
local GetCurrentProcess = alien.Kernel32.GetCurrentProcess
GetCurrentProcess:types{ret = "long", abi = "stdcall"}
local mod = MODULEINFO:new() --Create struct (needs buffer?)
local currentProcess = GetCurrentProcess()
local moduleHandle = GetModuleHandle("myModule.dll")
local success = GetModuleInformation(currentProcess, moduleHandle, mod(), sizeofMODULEINFO)
if success == 0 then --If there is an error, exit
return 0
end
local dataPtr = mod.lpBaseOfDll
--Now how do I do pointer arithmetic and/or dereference "dataPtr"?
At this point, mod.SizeOfImage is giving me the values I expect, so I know the functions are being called and the struct is being populated. However, I cannot do pointer arithmetic on mod.lpBaseOfDll because it is a userdata.
The only information in the Alien documentation that may address what I'm trying to do is the following:
Pointer Unpacking
Alien also provides three convenience functions that let you
dereference a pointer and convert the value to a Lua type:
alien.tostring takes a userdata (usually returned from a function that has a pointer return value), casts it to char*, and returns a Lua
string. You can supply an optional size argument (if you don’t Alien
calls strlen on the buffer first).
alien.toint takes a userdata, casts it to int*, dereferences it and returns it as a number. If you pass it a number it assumes the
userdata is an array with this number of elements.
alien.toshort, alien.tolong, alien.tofloat, and alien.todouble are like alien.toint, but work with the respective typecasts.
Unsigned versions are also available.
My issue with those is that I would need to go byte-by-byte, and there is no alien.tochar function. Also, and more importantly, they still don't solve the problem of reaching bytes beyond the base address.
Buffers
After making a buffer you can pass it in place of any argument of
string or pointer type.
...
You can also pass a buffer or other userdata to the new method of your
struct type, and in this case this will be the backing store of the
struct instance you are creating. This is useful for unpacking a
foreign struct that a C function returned.
These seem to suggest I can use an alien.buffer as the argument of MODULEINFO's LPVOID lpBaseOfDll. And buffers are described as byte arrays, which can be indexed using this notation: buf[1], buf[2], etc. Additionally, buffers go by bytes, so this would ideally solve all problems. (If I am understanding this correctly).
Unfortunately, I cannot find any examples of this anywhere (not in the docs, Stack Overflow, Google, etc.), so I have no idea how to do it. I've tried a few variations of syntax, but nearly every one gives a runtime error (the others simply do not work as expected).
Any insight on how I might be able to go byte-by-byte (C char-by-char) across the mod.lpBaseOfDll through dereferences and pointer arithmetic?
I need to go byte-by-byte, and there is no alien.tochar function.
Sounds like alien.tostring has you covered:
alien.tostring takes a userdata (usually returned from a function that has a pointer return value), casts it to char*, and returns a Lua string. You can supply an optional size argument (if you don’t Alien calls strlen on the buffer first).
Lua strings can contain arbitrary byte values, including 0 (i.e. they aren't null-terminated like C strings), so as long as you pass a size argument to alien.tostring you can get back data as a byte buffer, aka Lua string, and do whatever you please with the bytes.
It sounds like you can't tell it to start at an arbitrary offset from the given pointer address. The easiest way to tell for sure, if the documentation doesn't tell you, is to look at the source. It would probably be trivial to add an offset parameter.
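For example, a minimal sketch of walking the first bytes at the module base (this assumes the mod.lpBaseOfDll field from your code above and an arbitrary length of 64 bytes):
local N = 64                                  -- how many bytes to inspect
local bytes = alien.tostring(mod.lpBaseOfDll, N)
for i = 1, N do
  local byteVal = bytes:byte(i)               -- the i-th byte as a number 0..255
  io.write(string.format("%02X ", byteVal))
end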
I received the following data from a vendor so that we can decrypt data on our end.
Algorithm : AES 256bit
Key : test123xxxxxx
Key Length : 32
Initialize Vector: ei8B3hcD8VhnN_cK
Built in methods : YES (From inbuilt class CommonCryptor.h method with variable CCCryptorStatus).
Please note I have no idea if the last line has any relevance to our decryption.
I attempted the following on a sample string that we should be able to decode.
<cfset item = "eLgExhcox5Ro1kPB1OlJP1w6tEJ3x94gM/QJS5dCZkyjEVfNjIid3R7JP4l1WZD1" />
<cfoutput>#decrypt(#item#, #key#, 'AES', 'Base64', #iv# )#</cfoutput>
The error I receive is: "The value of parameter 5, which is currently ei8B3hcD8VhnN_cK, must be a class [B value", about which I cannot find anything.
I am also assuming the encoding is Base64, which I am confirming with the vendor. Is there anything else I'm missing?
My guess would be that it is complaining because the IV value is not binary. If your IV value is a base64 string, use binaryDecode(yourIVString, "base64") to get the binary value.
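Something along these lines; a sketch that assumes the vendor's IV string really is base64-encoded:
<cfset binaryIV = binaryDecode(iv, "base64") />
<cfset plainText = decrypt(item, key, "AES", "Base64", binaryIV) />
<cfoutput>#plainText#</cfoutput>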
a class [B value
The [B refers to the expected object: an array of bytes. Apparently [B is the "binary name [...] as specified by the Java Language Specification (§13.1)". You will see the same thing if you create a byte[] array and dump the class name:
// show binary and canonical class names
arr = javacast("byte[]", [1]);
writeOutput("name="& arr.getClass().name);
writeOutput("<br>canonicalName="& arr.getClass().canonicalName);
Side note, if you are using a 256 bit key, be sure you have installed the (JCE) Unlimited Strength Jurisdiction Policy Files first. Otherwise, you are limited to 128 bit keys maximum.
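A quick way to check whether the policy files are active is to ask the JVM for the maximum allowed key length (javax.crypto.Cipher.getMaxAllowedKeyLength is a standard Java API; it returns Integer.MAX_VALUE when the unlimited policy is installed):
<cfset maxKeyLen = createObject("java", "javax.crypto.Cipher").getMaxAllowedKeyLength("AES") />
<cfoutput>Max AES key size: #maxKeyLen# bits</cfoutput>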
I am trying to implement an AES-OFB wrapper around CNG's AES for symmetric encryption.
I have run into an issue that I cannot understand. I have created an AES algorithm handle (BCRYPT_AES_ALGORITHM) and imported an AES key. I then attempt to generate a 16-byte keystream for use in XORing my plaintext/ciphertext. The first time I run through this mechanism, keyStreamPtr changes from one random byte stream to another; by the 3rd pass (the 3rd 16-byte block of keystream), however, I start getting the same output, and it repeats forever.
status = BCryptEncrypt((BCRYPT_KEY_HANDLE)keyHandle,
keyStreamPtr,
keyStreamLength,
NULL, //no padding
NULL, // no IV
0, // no IV
keyStreamPtr,
keyStreamLength,
&Length,
0); // no option flags
Has anybody ever seen anything like this? Why would AES ever return ciphertext identical to the plaintext that was its input? Again, this is for an AES-OFB implementation... Perhaps I am doing something wrong?
The only thing I can think of is that you apply the key stream twice. If you do, you effectively perform an encrypt followed by a decrypt: P XOR C XOR C = P, where C is the key stream and P is the plain text. You might want to look at the buffer/stream handling within your code.
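For reference, a sketch of one way to structure the OFB loop so the keystream buffer is never mixed with data. It assumes the key was created on an algorithm handle whose chaining mode was set to ECB (so each BCryptEncrypt call is a single raw block encryption) and that keystream is seeded with the IV before the first call:
#include <windows.h>
#include <bcrypt.h>

#define BLOCK_LEN 16

/* one block of OFB keystream: keystream = AES-ECB(key, keystream) */
static NTSTATUS NextKeystreamBlock(BCRYPT_KEY_HANDLE hKey, UCHAR keystream[BLOCK_LEN])
{
    ULONG written = 0;
    /* encrypt in place; with ECB there is no IV and no padding */
    return BCryptEncrypt(hKey, keystream, BLOCK_LEN, NULL, NULL, 0,
                         keystream, BLOCK_LEN, &written, 0);
}

/* XOR the keystream into a separate data buffer; never XOR the data
   into the keystream buffer itself, or the feedback chain is destroyed */
static NTSTATUS OfbXor(BCRYPT_KEY_HANDLE hKey, UCHAR *data, ULONG len,
                       UCHAR keystream[BLOCK_LEN])
{
    for (ULONG i = 0; i < len; i++) {
        if (i % BLOCK_LEN == 0) {
            NTSTATUS status = NextKeystreamBlock(hKey, keystream);
            if (!BCRYPT_SUCCESS(status)) return status;
        }
        data[i] ^= keystream[i % BLOCK_LEN];
    }
    return 0; /* STATUS_SUCCESS */
}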