How do I create a multidimensional Cython array of fixed size?

I am trying to convert a python list of lists to a cython multidimensional array.
The list has 300,000 elements, and each element is a list of 10 integers, created randomly for this example. The way I tried works fine as long as my Cython multidimensional array is not bigger than roughly [210000][10]. My actual project is of course more complex, but I believe that if I get this example to work, the rest is just more of the same.
I have a cython file "array_cy.pyx" with the following content:
cpdef doublearray(list list1):
    cdef int[200000][10] a
    cdef int i
    cdef int y
    cdef int j
    cdef int value = 0
    for i in range(200000):
        for y in range(10):
            a[i][y] = list1[i][y]
    print("doublearray")
    print(a[40000][6])
cpdef doublearray1(list list1):
    cdef int[300000][10] a
    cdef int i
    cdef int y
    cdef int value = 0
    for i in range(300000):
        for y in range(10):
            a[i][y] = list1[i][y]
    print("doublearray1")
    print(a[40000][6])
Then in the main.py I have
import array_cy
import random

list1 = []
for i in range(300000):
    list2 = []
    for j in range(10):
        list2.append(random.randint(0, 22))
    list1.append(list2)

array_cy.doublearray(list1)
array_cy.doublearray1(list1)
And the output is:
doublearray
4
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
So the function doublearray(list) works fine and the output is some random number as expected. But doublearray1(list) gives SIGSEGV. If in doublearray1(list) I comment out the line
print(a[40000][6])
it also runs through without a problem, which makes sense because I never try to access the array. I don't understand why it does not work. I thought that in C the limit on the number of elements in an array was determined by the hardware. My goal is to convert the Python list of lists into a Cython multidimensional array that I can access without any Python interaction.
The suggested question is about using malloc. I think that is what I need, but I still don't get it to work, because if I change the two functions to:
cpdef doublearray(list list1):
    cdef int[200000][10] a = <int**> malloc(200000 * 10 * sizeof(int))
    cdef int i
    cdef int y
    cdef int j
    cdef int value = 0
    for i in range(200000):
        for y in range(10):
            a[i][y] = list1[i][y]
    print("doublearray")
    print(a[40000][6])
cpdef doublearray1(list list1):
    cdef int[300000][10] a = <int**> malloc(300000 * 10 * sizeof(int))
    cdef int i
    cdef int y
    cdef int value = 0
    for i in range(300000):
        for y in range(10):
            a[i][y] = list1[i][y]
    print("doublearray1")
    print(a[40000][6])
still only the smaller array works.

A local int[300000][10] array lives on the stack, and at roughly 12 MB (300,000 × 10 × 4 bytes) it exceeds the typical stack size limit (often 8 MB), which is why the larger function segfaults. The way to do this in C is to flatten the list of lists of length 10 into a 1D array, using malloc to allocate enough space on the heap and freeing it afterwards. Another way is to use an array of pointers.
from libc.stdlib cimport malloc, free

cpdef doublearray1(list list1):
    cdef int *a = <int *> malloc(300000 * 10 * sizeof(int))
    cdef int i
    cdef int y
    for i in range(300000):
        for y in range(10):
            a[i*10 + y] = list1[i][y]
    print("doublearray1")
    # same as a[2][5] in a 2D array
    print(a[25])
    free(a)
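The row-major index arithmetic used above (element (i, y) lives at i*10 + y) can be checked in plain Python, without Cython. This is only a sketch; the names table and flat, and the smaller row count, are just for illustration:

```python
# Row-major flattening: element (i, y) of a rows x cols table
# lives at flat index i*cols + y.
rows, cols = 300, 10  # smaller than the question's 300,000 rows, same layout

# Fill each cell with its own flat index so the mapping is easy to verify.
table = [[i * cols + y for y in range(cols)] for i in range(rows)]

flat = [0] * (rows * cols)
for i in range(rows):
    for y in range(cols):
        flat[i * cols + y] = table[i][y]

# flat[2*cols + 5] addresses the same element as table[2][5]
assert flat[2 * cols + 5] == table[2][5]
```

The same arithmetic works for any fixed inner length, which is why the answer above can replace the 2D stack array with a single heap allocation.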


Cannot convert Cython memoryviewslice to ndarray

I am trying to write an explicit Successive Overrelaxation (SOR) function over a 2D matrix, in this case for an electrostatic potential.
When trying to optimize this in Cython, I get an error that I am not quite sure I understand.
%%cython
cimport cython
import numpy as np
cimport numpy as np
from libc.math cimport pi

# SOR function
#cython.boundscheck(False)
#cython.wraparound(False)
#cython.initializedcheck(False)
#cython.nonecheck(False)
def SOR_potential(np.float64_t[:, :] potential, mask, int max_iter, float error_threshold, float alpha):
    # the ints
    cdef int height = potential.shape[0]
    cdef int width = potential.shape[1]  # more general, non-quadratic
    cdef int it = 0
    # the floats
    cdef float error = 0.0
    cdef float sor_adjustment
    # the copy array we will iterate over and return
    cdef np.ndarray[np.float64_t, ndim=2] input_matrix = potential.copy()
    # set the ideal alpha if user input is 0.0
    if alpha == 0.0:
        alpha = 2/(1+(pi/((height+width)*0.5)))
    # start the SOR loop. The for loops omit the 0 and -1 indices
    # because they are *shadow points* used for Neumann boundary conditions
    cdef int row, col
    # iteration loop
    while True:
        # 2-stencil loop
        for row in range(1, height-1):
            for col in range(1, width-1):
                if not(mask[row][col]):
                    potential[row][col] = 0.25*(input_matrix[row-1][col] + \
                                                input_matrix[row+1][col] + \
                                                input_matrix[row][col-1] + \
                                                input_matrix[row][col+1])
                    sor_adjustment = alpha * (potential[row][col] - input_matrix[row][col])
                    input_matrix[row][col] = sor_adjustment + input_matrix[row][col]
                    error += np.abs(input_matrix[row][col] - potential[row][col])
        # by the end of this loop input_matrix and potential have different values
        if error < error_threshold:
            break
        elif it > max_iter:
            break
        else:
            error = 0
            it = it + 1
    return input_matrix, error, it
and I used a very simple example for an array to see if it would give an error output.
test = [[True, False], [True, False]]
pot = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float64)
SOR_potential(pot, test, 50, 0.1, 0.0)
Gives out this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [30], line 1
----> 1 SOR_potential(pot, test, 50, 0.1, 0.0)
File _cython_magic_6c09a5060df996862b8e35adacc0e25c.pyx:21, in _cython_magic_6c09a5060df996862b8e35adacc0e25c.SOR_potential()
TypeError: Cannot convert _cython_magic_6c09a5060df996862b8e35adacc0e25c._memoryviewslice to numpy.ndarray
But when I delete the np.float64_t[:, :] part from
def SOR_potential(np.float64_t[:, :] potential,...)
the code works. Of course, the simple 2x2 matrix will not converge but it gives no errors. Where is the mistake here?
I also tried importing the modules differently as suggested here
Cython: how to resolve TypeError: Cannot convert memoryviewslice to numpy.ndarray?
but I got 2 errors instead of 1 where there were type mismatches.
Note: I would also like to ask how I would declare a numpy array of booleans to use as the type of the "mask" argument of the function.
A minimal reproducible example of your error message would look like this:
def foo(np.float64_t[:, :] A):
    cdef np.ndarray[np.float64_t, ndim=2] B = A.copy()
    # ... do something with B ...
    return B
The problem is that A is a memoryview while B is an np.ndarray. If both A and B are memoryviews, i.e.
def foo(np.float64_t[:, :] A):
    cdef np.float64_t[:, :] B = A.copy()
    # ... do something with B ...
    return np.asarray(B)
your example will compile without errors. Note that you then need to call np.asarray if you want to return a np.ndarray.
Regarding your second question: You could use a memoryview with dtype np.uint8_t
def foo(np.float64_t[:, :] A, np.uint8_t[:, :] mask):
    cdef np.float64_t[:, :] B = A.copy()
    # ... do something with B and mask ...
    return np.asarray(B)
and call it like this from Python:
mask = np.array([[True, True], [False, False]], dtype=bool)
A = np.ones((2,2), dtype=np.float64)
foo(A, mask)
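Why a uint8 memoryview can stand in for a boolean mask can be checked with plain NumPy, no Cython required. This is a sketch of the underlying layout assumption: a NumPy bool array stores one byte per element, so it can be reinterpreted as uint8 without copying:

```python
import numpy as np

# A boolean NumPy array uses one byte per element, which is why a
# np.uint8_t[:, :] memoryview can work with it after a cheap reinterpretation.
mask = np.array([[True, True], [False, False]], dtype=bool)
as_u8 = mask.view(np.uint8)  # same buffer, no copy: True -> 1, False -> 0

assert as_u8.dtype == np.uint8
assert as_u8.tolist() == [[1, 1], [0, 0]]
```

If the memoryview argument ever rejects the bool array directly, passing mask.view(np.uint8) is the zero-copy fallback.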
PS: If your arrays' buffers are guaranteed to be C-contiguous, you can use contiguous memoryviews for better performance:
def foo(np.float64_t[:, ::1] A, np.uint8_t[:, ::1] mask):
    cdef np.float64_t[:, ::1] B = A.copy()
    # ... do something with B and mask ...
    return np.asarray(B)

TypeError: initializer for ctype 'unsigned int *' must be a cdata pointer, not bytes

I am trying to convert a PIL image to a leptonica PIX. Here is my code (Python 3.6):
import os, cffi
from PIL import Image
# initialize leptonica
ffi = cffi.FFI()
ffi.cdef("""
typedef int l_int32;
typedef unsigned int l_uint32;
struct Pix;
typedef struct Pix PIX;
PIX * pixCreate (int width, int height, int depth);
l_int32 pixSetData (PIX *pix, l_uint32 *data);
""")
leptonica = ffi.dlopen(os.path.join(os.getcwd(), "leptonica-1.78.0.dll"))
# convert PIL to PIX
im = Image.open("test.png").convert("RGBA")
depth = 32
width, height = im.size
data = im.tobytes("raw", "RGBA")
pixs = leptonica.pixCreate(width, height, depth)
leptonica.pixSetData(pixs, data)
pixSetData fails with the message: TypeError: initializer for ctype 'unsigned int *' must be a cdata pointer, not bytes.
How to convert bytes object (data) to cdata pointer?
I got an answer from Armin Rigo at the python-cffi forum:
Assuming you have the recent cffi 1.12, you can do:
leptonica.pixSetData(pixs, ffi.from_buffer("l_uint32[]", data))
The backward-compatible way is more complicated because we need to
make sure an intermediate object stays alive:
p = ffi.from_buffer(data)
leptonica.pixSetData(pixs, ffi.cast("l_uint32 *", p))
# 'p' must still be alive here after the call, so put it in a variable above!
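The same keep-alive concern exists in the standard library's ctypes, which can illustrate the bytes-to-pointer step without needing cffi or leptonica installed. This is only an analogy sketch, not the cffi API; the buffer copy and names are hypothetical:

```python
import ctypes

# Three 32-bit little-endian words packed into a bytes object
# (assumes a little-endian host for the value check below).
data = (0x01020304).to_bytes(4, "little") * 3

# bytes is immutable, so make a mutable copy that C code could write through.
buf = (ctypes.c_uint32 * (len(data) // 4)).from_buffer_copy(data)

# Cast the array to the 'unsigned int *' a C signature would expect.
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_uint32))

assert ptr[0] == 0x01020304
# As in the cffi answer: 'buf' must stay alive as long as 'ptr' may be used.
```

The key point is identical to Armin Rigo's note: the Python object owning the memory must outlive every C call that uses the pointer derived from it.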
PIL and Leptonica seem not to share exactly the same raw format, at least RGBA vs. ABGR. What worked for me was to use uncompressed TIFF as a fast and dependable data exchange format.
# Add these to ffi.cdef():
#
# typedef unsigned char l_uint8;
# PIX * pixReadMem(const l_uint8 *data, size_t size);
# l_ok pixWriteMem(l_uint8 **pdata, size_t *psize, PIX *pix, l_int32 format);
from io import BytesIO
import PIL.Image

IFF_TIFF = 4

def img_pil_to_lepto(pilimage):
    with BytesIO() as bytesio:
        pilimage.save(bytesio, 'TIFF')
        tiff_bytes = bytesio.getvalue()
    cdata = ffi.from_buffer('l_uint8[]', tiff_bytes)
    pix = leptonica.pixReadMem(cdata, len(tiff_bytes))
    return pix

def img_lepto_to_pil(pix):
    cdata_ptr = ffi.new('l_uint8**')
    size_ptr = ffi.new('size_t*')
    leptonica.pixWriteMem(cdata_ptr, size_ptr, pix, IFF_TIFF)
    cdata = cdata_ptr[0]
    size = size_ptr[0]
    tiff_bytes = bytes(ffi.buffer(cdata, size))
    with BytesIO(tiff_bytes) as bytesio:
        pilimage = PIL.Image.open(bytesio).copy()
    return pilimage

cython - determining the number of items in pointer variable

How would I determine the number of elements a pointer variable points to in Cython? I saw that in C one way seems to be sizeof(ptr)/sizeof(int) if the pointer points to int variables, but that doesn't seem to work in Cython. E.g. when I tried to join two memoryviews into a single pointer like so:
from libc.stdlib cimport malloc, free

cdef int * join(int[:] a, int[:] b):
    cdef:
        int n_a = a.shape[0]
        int n_b = b.shape[0]
        int new_size = n_a + n_b
        int *joined = <int *> malloc(new_size*sizeof(int))
        int i
    try:
        for i in range(n_a):
            joined[i] = a[i]
        for i in range(n_b):
            joined[n_a+i] = b[i]
        return joined
    finally:
        free(joined)
#cython.cdivision(True)
def join_memviews(int[:] n, int[:] m):
    cdef int[:] arr_fst = n
    cdef int[:] arr_snd = m
    cdef int *arr_new
    cdef int new_size
    arr_new = join(arr_fst, arr_snd)
    new_size = sizeof(arr_new)/sizeof(int)
    return [arr_new[i] for i in range(new_size)]
I do not get the desired result when calling join_memviews from a python script, e.g.:
# in python
a = np.array([1,2])
b = np.array([3,4])
a_b = join_memviews(a,b)
I also tried using the types
DTYPE = np.int
ctypedef np.int_t DTYPE_t
as the argument inside sizeof(), but that didn't work either.
Edit: The handling of the pointer variable was apparently a bit careless of me. I hope the following is fine (even though it might not be a prudent approach):
cdef int * join(int[:] a, int[:] b, int new_size):
    cdef:
        int n_a = a.shape[0]
        int n_b = b.shape[0]
        int *joined = <int *> malloc(new_size*sizeof(int))
        int i
    for i in range(n_a):
        joined[i] = a[i]
    for i in range(n_b):
        joined[n_a+i] = b[i]
    return joined

def join_memviews(int[:] n, int[:] m):
    cdef int[:] arr_fst = n
    cdef int[:] arr_snd = m
    cdef int *arr_new
    cdef int new_size = n.shape[0] + m.shape[0]
    try:
        arr_new = join(arr_fst, arr_snd, new_size)
        return [arr_new[i] for i in range(new_size)]
    finally:
        free(arr_new)
You can't. It doesn't work in C either. sizeof(ptr) returns the amount of memory used to store the pointer itself (typically 4 or 8 bytes, depending on your system), not the length of the array it points to. The lengths of your malloc'd arrays are something you need to keep track of manually.
Additionally the following code is a recipe for disaster:
cdef int *joined = <int *> malloc(new_size*sizeof(int))
try:
    return joined
finally:
    free(joined)
The free happens immediately on function exit so that an invalid pointer is returned to the calling function.
You should be using properly managed Python arrays (either from numpy or the standard library array module) unless you absolutely can't avoid raw pointers.
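A sketch of the managed alternative in plain Python, using the standard library array module, which tracks its own length so no sizeof() trickery is needed (the function name join_arrays is just for illustration):

```python
from array import array

def join_arrays(a, b):
    """Concatenate two int arrays. The result knows its own length,
    unlike a raw malloc'd pointer."""
    joined = array('i', a)  # 'i' = signed C int
    joined.extend(b)
    return joined

a = array('i', [1, 2])
b = array('i', [3, 4])
a_b = join_arrays(a, b)

assert a_b.tolist() == [1, 2, 3, 4]
assert len(a_b) == len(a) + len(b)  # the length travels with the object
```

The same holds for numpy: np.concatenate returns an array whose .shape carries the size, and its buffer can be handed to a Cython memoryview without manual bookkeeping.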

cython static shaped array views

In cython, one can use array views, e.g.
cdef void func(float[:, :] arr)
In my usage the second dimension should always have a shape of 2. Can I tell cython this? I was thinking of something like:
cdef void func(float[:, 2] arr)
but this results in invalid syntax. Or is it possible to have something more similar to C++, e.g.
cdef void func(tuple<float, float>[:] arr)
Thanks in advance!
You can use a 2D static array instead; just use the pointer notation. Here is how you achieve it:
def pyfunc():
    # static 1D array
    cdef float *arr1d = [1, -1, 0, 2, -1, -1, 4]
    # static 2D array
    cdef float[2] *arr2d = [[1., 2.], [3., 4.]]
    # pass to a "cdef"ed function
    cfunc(arr2d)

# your function signature would now look like this
cdef void cfunc(float[2] *arr2d):
    print("my 2D static array")
    print(arr2d[0][0], arr2d[0][1], arr2d[1][0], arr2d[1][1])
Calling it you get:
>>> pyfunc()
my 2D static array
1.0, 2.0, 3.0, 4.0
I don't think this is really supported, but if you want to do this then the best way is probably to use memoryviews of structs (which are compatible with numpy's custom dtypes):
import numpy as np

cdef packed struct Pair1:  # packed ensures it matches custom numpy dtypes
                           # (but probably doesn't matter here!)
    double x
    double y

# Pair1 matches arrays of this dtype
pair_1_dtype = [('x', np.float64), ('y', np.float64)]

cdef packed struct Pair2:
    double data[2]

pair_2_dtype = [('data', np.float64, (2,))]

def pair_func1(Pair1[::1] x):
    # do some very basic work
    cdef Pair1 p
    cdef Py_ssize_t i
    p.x = 0; p.y = 0
    for i in range(x.shape[0]):
        p.x += x[i].x
        p.y += x[i].y
    return p  # take advantage of auto-conversion to a dict

def pair_func2(Pair2[::1] x):
    # do some very basic work
    cdef Pair2 p
    cdef Py_ssize_t i
    p.data[0] = 0; p.data[1] = 0
    for i in range(x.shape[0]):
        p.data[0] += x[i].data[0]
        p.data[1] += x[i].data[1]
    return p  # take advantage of auto-conversion to a dict
and a function to show you how to call it:
def call_pair_funcs_example():
    # generate data of correct dtype
    d = np.random.rand(100, 2)
    d1 = d.view(dtype=pair_1_dtype).reshape(-1)
    print(pair_func1(d1))
    d2 = d.view(dtype=pair_2_dtype).reshape(-1)
    print(pair_func2(d2))
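The dtype-view trick used in call_pair_funcs_example can be checked with plain NumPy, independent of the Cython structs. A sketch, assuming the same {x, y} float64 layout as pair_1_dtype above:

```python
import numpy as np

# Reinterpret an (N, 2) float64 array as a 1D structured array of
# {x: float64, y: float64} records: same buffer, no copy.
pair_dtype = np.dtype([('x', np.float64), ('y', np.float64)])

d = np.arange(8, dtype=np.float64).reshape(4, 2)  # rows: (0,1), (2,3), ...
records = d.view(pair_dtype).reshape(-1)

assert records.shape == (4,)
assert records[1]['x'] == 2.0 and records[1]['y'] == 3.0
# Summing a field matches summing the corresponding column,
# which is what pair_func1 computes on the Cython side.
assert records['x'].sum() == d[:, 0].sum()
```

This is why the view/reshape pair in the answer works: the structured itemsize (16 bytes) exactly matches one row of the (N, 2) float64 array.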
The thing I'd like to have done is:
ctypedef double[2] Pair3

def pair_func3(Pair3[::1] x):
    # do some very basic work
    cdef Pair3 p
    cdef Py_ssize_t i
    p[0] = 0; p[1] = 0
    for i in range(x.shape[0]):
        p[0] += x[i][0]
        p[1] += x[i][1]
    return p  # ???
That compiles successfully, but I couldn't find any way of converting it from numpy. If you could work out how to get this version to work then I think it would be the most elegant solution.
Note that I'm not convinced of the performance advantages of any of these solutions. Your best move is probably to tell Cython that the trailing dimension is contiguous in memory (e.g. double [:,::1]) but let it be any size.

What is the difference between int -> int -> int and (int*int) -> int in SML?

I have noticed that there are 2 ways of defining functions in SML. For example if you take the add function, these are the two ways:
fun add x y = x+y;
fun add(x,y) = x+y;
The first method creates the function type as:
val add = fn : int -> int -> int
The second one creates the function type as:
val add = fn : int * int -> int
What is the difference between these two types for the same function? And also why are there two types for the same function?
If we remove the syntactic sugar from your two definitions they become:
val add = fn x => fn y => x+y
and
val add = fn xy =>
    case xy of
        (x,y) => x+y
So in the first case add is a function that takes an argument x and returns another function, which takes an argument y and then returns x+y. This technique of simulating multiple arguments by returning another function is known as currying.
In the second case add is a function that takes a tuple as an argument and then adds the two elements of the tuple.
This also explains the two different types. -> is the function arrow, which associates to the right, meaning int -> int -> int is the same as int -> (int -> int) describing a function that takes an int and returns an int -> int function.
* on the other hand is the syntax used for tuple types, that is int * int is the type of tuples containing two ints, so int * int -> int (which is parenthesized as (int * int) -> int because * has higher precedence than ->) describes a function that takes a tuple of two ints and returns an int.
The reason those two functions are different is the phenomenon of currying. Currying is the ability to rewrite any function of n arguments as a chain of single-argument functions, each of which returns a function that consumes the next argument. This is what each -> sign represents; the correspondence between the tupled and curried forms is also mirrored in logic via the Curry-Howard correspondence. So:
fun addCurry x y = x + y (* int -> int -> int *)
fun addProd (x,y) = x + y (* (int*int) -> int *)
tells us that addCurry is the curried form of addProd. The two compute the same sum, but they have different types and are applied differently, so they are not interchangeable.
int * int is a product type: it describes a pair whose two components are both int. int -> int is an arrow type: a function that takes an int and returns an int.
If you're interested, you may also want to know that every SML function takes exactly one argument, so there are really only two styles of passing "multiple" arguments:
1) Curried arguments
2) Tuples - fun addProd (x,y) receives the single tuple (x,y) as its argument.
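The curried/tupled distinction can be mimicked in Python closures; this is only an illustration of the idea, since the document's real subject is the SML types:

```python
# Curried style: each call consumes one argument and returns a function,
# mirroring SML's  int -> int -> int.
def add_curry(x):
    def inner(y):
        return x + y
    return inner

# Tupled style: one argument that happens to be a pair,
# mirroring SML's  (int * int) -> int.
def add_prod(pair):
    x, y = pair
    return x + y

assert add_curry(2)(3) == 5
assert add_prod((2, 3)) == 5

# Partial application falls out of currying for free:
add2 = add_curry(2)
assert add2(40) == 42
```

The partial-application step at the end is the practical payoff of the curried form: add_curry 2 in SML is likewise a valid int -> int function on its own.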
