Is there a way to work around the limitation in PyOpenCL whereby:
array.data
fails with
pyopencl.array.ArrayHasOffsetError: The operation you are attempting does not yet support arrays that start at an offset from the beginning of their buffer.
I tried:
a.base_data[a.offset: a.offset + a.nbytes]
This seems to work sometimes, but other times I get:
pyopencl.LogicError: clCreateSubBuffer failed: invalid value
clCreateSubBuffer needs the offset (called the origin here) to be aligned, and the origin plus size must fall within the limits of the buffer.
CL_INVALID_VALUE is returned in errcode_ret if the region specified by
(origin, size) is out of bounds in buffer.
CL_MISALIGNED_SUB_BUFFER_OFFSET is returned in errcode_ret if there
are no devices in context associated with buffer for which the origin
value is aligned to the CL_DEVICE_MEM_BASE_ADDR_ALIGN value.
For the particular error you are seeing, it looks like either your program or pyopencl is miscalculating the size of the array after the offset. Even if you fix this, you may still have problems if the original offset is not aligned to CL_DEVICE_MEM_BASE_ADDR_ALIGN.
Having said that, NVIDIA seems to break from the spec and allow arbitrary offsets, so your mileage may vary depending on the hardware.
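As a quick sanity check, you can query the required alignment and test the offset yourself. A minimal sketch, assuming a is the pyopencl.array.Array from your snippet and ctx is its context (the alignment is reported in bits, hence the division by 8):
dev = ctx.devices[0]
align_bytes = dev.mem_base_addr_align // 8  # CL_DEVICE_MEM_BASE_ADDR_ALIGN is in bits

if a.offset % align_bytes != 0:
    print("offset %d is not aligned to %d bytes; clCreateSubBuffer may fail on this device"
          % (a.offset, align_bytes))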
If you're just looking to get a buffer that marks the start of the array data, to pass to a kernel, you don't have to worry about the size. Here's a function that gets a size-1 buffer that points to the start of the offset data:
def data_ptr(array):
    if array.offset:
        return array.base_data.get_sub_region(array.offset, 1)
    else:
        return array.data
You can use this to pass to a kernel, if you need a pointer to the start of the offset data. Here's an example, where I want to set a sub-region clV of array clA to the value 3. I use data_ptr to get a pointer to the start of clV's data.
import numpy as np
import pyopencl as cl
import pyopencl.array
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
m, n = 5, 5
A = np.random.uniform(size=(m, n)).astype(np.float32)
clA = cl.array.Array(queue, A.shape, A.dtype)
clA.set(A)
clV = clA[1::2, 1::2]
def data_ptr(array):
    if array.offset:
        return array.base_data.get_sub_region(array.offset, 1)
    else:
        return array.data
source = """
__kernel void fn(long si, long sj, __global float *Y)
{
const int i = get_global_id(0);
const int j = get_global_id(1);
Y[i*si + j*sj] = 3;
}
"""
kernel = cl.Program(ctx, source).build().fn
gsize = clV.shape
lsize = None
estrides = (np.array(clV.strides) // clV.dtype.itemsize).astype(np.int64)  # element strides as int64, matching the kernel's long arguments
kernel(queue, gsize, lsize, estrides[0], estrides[1], data_ptr(clV))
print(clA.get())
I've run into some very strange OpenCL behavior; a minimal code sample is included below.
Starting at some seemingly random index (usually divisible by 32), values are not written to the array if I add one extra operation beforehand (g_idata[ai] = g_idata[ai-1]). Notably, I get the correct result if I:
just read the value and write a literal instead (see SHOW_BUG), or
add if (ai >= n) g_idata[0]+=0; at the beginning (see the commented lines).
Tested on Intel and NVIDIA.
import numpy as np
import pyopencl as cl
ctx = cl.create_some_context()
prg = cl.Program(ctx, """
__kernel void prescan(__global float *g_idata, const int n) {
int thid = get_global_id(0);
int ai = thid*2+1;
// if the lines below are uncommented, the bug disappears
//if (ai >= n){
// g_idata[0]+=0;
//}
bool SHOW_BUG=1;
// make a dummy operation
if (SHOW_BUG)
g_idata[ai] = g_idata[ai-1];
else {
g_idata[ai-1]; //dummy read
g_idata[ai] = 3.14f; //constant write
}
barrier(CLK_GLOBAL_MEM_FENCE);
//set 0,1,2,3... as result
g_idata[thid] = thid;
}
""").build()
prescan_kernel = prg.prescan
prescan_kernel.set_scalar_arg_dtypes([None, np.int32])
def main():
    N = 512
    a_np = (np.random.random((N,))).astype(np.float32)
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=a_np)
    global_size = (512,)
    local_size = None
    prescan_kernel(queue, global_size, local_size, a_g, N)
    cl.enqueue_copy(queue, a_np, a_g)
    correct = np.array(range(N))
    #assert np.allclose(a_np, 3.14), np.where(3.14 != a_np)
    assert np.allclose(a_np, correct), np.where(correct != a_np)

if __name__ == '__main__':
    for i in range(25):
        main()
Several things in your code will, according to the OpenCL spec, create undefined behavior.
These include:
Accessing out-of-range memory: with ai = thid*2+1, the array needs at least N*2 elements for N work-items, but it only has N.
Multiple work-items (threads) accessing the same index of the array (read or write).
Furthermore, a barrier only synchronizes work-items within a single work-group, so it cannot provide the cross-work-group ordering your code relies on.
Undefined behavior can manifest differently on different platforms; it may sometimes crash the driver or even take down the OS. Please fix these problems first and then describe whatever issues remain.
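As a minimal sketch of an access pattern that avoids both problems (each work-item reads and writes only the element it owns, so no cross-item synchronization is needed; this is of course not the prefix scan you are ultimately trying to write):
__kernel void fill_with_ids(__global float *g_idata, const int n) {
    int thid = get_global_id(0);
    if (thid < n) {
        // every index is touched by exactly one work-item
        g_idata[thid] = thid;
    }
}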
I am using PyOpenCL in combination with Python 3.7.
When calling the same kernel from multiple processes, each with its own context pointing to the same GPU device, I get performance improvements that scale almost linearly with the number of processes.
I can imagine that running processes in parallel makes some overlapping transfers possible, where a kernel of process A executes while process B sends data to the graphics card, but that should not be responsible for such a boost in performance.
Below is a code example where I implemented a dummy application in which some data is decoded.
When setting n_processes=1 I get around 12 Mbit/sec, while when setting n_processes=4 I get 45 Mbit/sec.
I am using a single AMD Radeon VII graphics card.
Has anyone a good explanation for that phenomenon?
Update:
I profiled the script using CodeXL. Seems like there is a lot of time wasted between kernel executions and multiple processes are able to make use of it.
import logging
import multiprocessing as mp
import pyopencl as cl
import pyopencl.array as cl_array
from mako.template import Template
import numpy as np
import time
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(process)d %(levelname)-8s [%(filename)s:%(lineno)d] %(message)s')
kernelsource = """
float boxplus(float a,float b)
{
float boxp=log((1+exp(a+b))/(exp(a)+exp(b)));
return boxp;
}
void kernel test(global const float* in,
global const int* permutation_vector,
global float* out)
{
int gid = get_global_id(0);
int p = gid; // permutation index
float t = 0.0;
for(int k=1; k<10;k++){
p = permutation_vector[p];
t= boxplus(in[p],in[gid]);
}
out[gid] = t;
}
"""
class MyProcess(mp.Process):
    def __init__(self, q):
        super().__init__()
        self.q = q

    def run(self) -> None:
        platform = cl.get_platforms()
        my_gpu_devices = [platform[0].get_devices(device_type=cl.device_type.GPU)[0]]
        ctx = cl.Context(devices=my_gpu_devices)
        queue = cl.CommandQueue(ctx)
        tpl = Template(kernelsource)
        rendered_tp = tpl.render()
        prg = cl.Program(ctx, str(rendered_tp)).build()
        size = 100000  # shape of random input array
        dtype = np.float64
        output_buffer = cl_array.empty(queue, size, dtype=dtype)
        input_buffer = cl_array.empty(queue, size, dtype=dtype)
        permutation = np.random.permutation(size)
        # int32 to match the kernel's "global const int*" argument
        permutation_buffer = cl_array.to_device(queue, permutation.astype(np.int32))

        def decode(data_in):
            input_buffer.set(data_in)
            for i in range(10):
                prg.test(queue, input_buffer.shape, None,
                         input_buffer.data,
                         permutation_buffer.data,
                         output_buffer.data)
            queue.finish()
            return output_buffer.get()

        counter = 1
        while True:
            data_in = np.random.normal(size=size).astype(dtype)
            data_out = decode(data_in)
            if counter % 100 == 0:
                self.q.put(size * 100)
                counter = 1
            else:
                counter += 1
def run_test_multi_cpu_single_gpu():
    q = mp.Queue()
    n_processes = 4
    for i in range(n_processes):
        MyProcess(q).start()
    t0 = time.time()
    symbols_sum = q.get()
    i = 0
    while True:
        i += 1
        print('{} Mbit/sec'.format(1 / 1e6 * symbols_sum / (time.time() - t0 + 1e-15)))
        symbols = q.get()
        symbols_sum += symbols

if __name__ == '__main__':
    run_test_multi_cpu_single_gpu()
The kernel loop does too little work; its runtime must be roughly comparable to the kernel launch overhead, which in turn is comparable to a Python function call overhead.
for(int k=1; k<10;k++){
p = permutation_vector[p];
t= boxplus(in[p],in[gid]);
}
This latency is probably hidden behind another process's kernel launch latency, and that launch latency is in turn hidden behind a third process's function call overhead. The GPU could take on much more work: the for loop runs only 10 iterations, so the whole kernel is O(N). Even low-end GPUs only get saturated by thousands of iterations, or by O(N*N) work.
Also, buffer reads/writes and compute overlap, as you said.
So if the kernel takes all the time in that profiling window, is there no capacity left on the graphics card?
The GPU can also overlap multiple compute kernels if it has that capability and if each piece of work is small enough to leave some in-flight threads available for others. The number of in-flight threads can be as high as 40 per shader: 40 * 3840 = 153600 instructions issued/pipelined per cycle (or per few cycles), or let's say about 3.46 TFLOPS.
At 3.46 TFLOPS, even at 1000 FLOP per 64-bit data element, it can stream data at a 3.46 GB/s rate. That is without any pipelining inside the kernel (read element 1, compute, write result, read element 2). But it does pipeline: just after the first elements start computing, the next batch of items is mapped onto the same shaders and starts loading new data, so it can sustain hundreds of GB/s, which is more than PCIe bandwidth.
Also, the CPU can't preprocess/postprocess at that rate, so the buffer copies and the CPU are the bottlenecks, and with multiple processes they hide behind each other.
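To see how small each kernel actually is, you could enable queue profiling and time it with events. A rough sketch, reusing ctx, prg and the buffers from your script (the profile counters are in nanoseconds):
queue = cl.CommandQueue(ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)
evt = prg.test(queue, input_buffer.shape, None,
               input_buffer.data, permutation_buffer.data, output_buffer.data)
evt.wait()
print("kernel time:     %.3f ms" % ((evt.profile.end - evt.profile.start) * 1e-6))
print("queued-to-start: %.3f ms" % ((evt.profile.start - evt.profile.queued) * 1e-6))
If the queued-to-start gap dwarfs the kernel time, launch overhead is dominating, which is exactly the situation multiple processes can exploit.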
I am trying to implement a general matrix-matrix multiplication OpenCL kernel, one that conforms to C = α*A*B + β*C.
The Kernel
I did some research online and decided to use a modified kernel from this website as a starting point. The main modification I have made is that allocation of local memory as working space is now dynamic. Below is the kernel I have written:
__kernel
void clkernel_gemm(const uint M, const uint N, const uint K, const float alpha,
                   __global const float* A, __global const float* B, const float beta,
                   __global float* C, __local float* Asub, __local float* Bsub) {

    const uint row = get_local_id(0);
    const uint col = get_local_id(1);
    const uint TS = get_local_size(0); // Tile size
    const uint globalRow = TS * get_group_id(0) + row; // Row ID of C (0..M)
    const uint globalCol = TS * get_group_id(1) + col; // Col ID of C (0..N)

    // Initialise the accumulation register
    float acc = 0.0f;

    // Loop over all tiles
    const int numtiles = K / TS;
    for (int t = 0; t < numtiles; t++) {
        const int tiledRow = TS * t + row;
        const int tiledCol = TS * t + col;
        Asub[col * TS + row] = A[tiledCol * M + globalRow];
        Bsub[col * TS + row] = B[globalCol * K + tiledRow];

        barrier(CLK_LOCAL_MEM_FENCE);

        for (int k = 0; k < TS; k++) {
            acc += Asub[k * TS + row] * Bsub[col * TS + k] * alpha;
        }

        barrier(CLK_LOCAL_MEM_FENCE);
    }

    C[globalCol * M + globalRow] = fma(beta, C[globalCol * M + globalRow], acc);
}
Tile Size (TS) is now a value defined in the calling code, which looks like this:
// A, B and C are 2D matrices, their cl::Buffers have already been set up
// and values appropriately set.
kernel.setArg(0, (cl_int)nrowA);
kernel.setArg(1, (cl_int)ncolB);
kernel.setArg(2, (cl_int)ncolA);
kernel.setArg(3, alpha);
kernel.setArg(4, A_buffer);
kernel.setArg(5, B_buffer);
kernel.setArg(6, beta);
kernel.setArg(7, C_buffer);
kernel.setArg(8, cl::Local(sizeof(float) * nrowA * ncolB));
kernel.setArg(9, cl::Local(sizeof(float) * nrowA * ncolB));
cl::NDRange global(nrowA, ncolB);
cl::NDRange local(nrowA, ncolB);
status = cmdq.enqueueNDRangeKernel(kernel, cl::NDRange(0), global, local);
The Problem
The problem I am encountering is that unit tests I have written (with Google's gtest) will randomly fail, but only for this particular kernel. (I have 20 other kernels in the same .cl source file that pass tests 100% of the time.)
I have a test that multiplies a 1x4 float matrix {0.0, 1.0, 2.0, 3.0} with a transposed version of itself {{0.0}, {1.0}, {2.0}, {3.0}}. The expected output is {14.0}.
However, I can get this correct result maybe just 75% of the time.
Sometimes, I can get 23.0 (GTX 970), 17.01 (GTX 750) or just -nan and 0.0 (all 3 devices). The curious part is, the respective incorrect results seem to be unique to the devices; I cannot seem to, for example, get 23.0 on the Intel CPU or the GTX 750.
I am baffled because if I have made an algorithmic or mathematical mistake, the mistake should be consistent; instead I am getting incorrect results only randomly.
What am I doing wrong here?
Things I have tried
I have verified that the data going into the kernels are correct.
I have tried initializing both __local buffers to 0.0, but this causes all results to become wrong (though frankly, I'm not really sure how to initialize them properly).
I have written a test program that only executes this kernel to rule out any race conditions interacting with the rest of my program, but the bug still happens.
Other points to note
I am using the C++ wrapper retrieved directly from the Github page.
To use the wrapper, I have defined CL_HPP_MINIMUM_OPENCL_VERSION 120 and CL_HPP_TARGET_OPENCL_VERSION 120.
I am compiling the kernels with the -cl-std=CL1.2 flag.
All cl::Buffers are created with only the CL_MEM_READ_WRITE flag.
I am testing this on Ubuntu 16.04, Ubuntu 14.04, and Debian 8.
I have tested this on Intel CPUs with the Intel OpenCL Runtime 16.1 for Ubuntu installed. The runtime reports that it supports up to OpenCL 1.2
I have tested this on both Nvidia GTX 760 and 970. Nvidia only supports up to OpenCL 1.2.
All 3 platforms exhibit the same problem with varying frequency.
This looks like a complicated one. There are several things to address and they won't fit into comments, so I'll post all this as an answer even though it does not solve your problem (yet).
I am baffled because if I have made an algorithmic or mathematical
mistake, the mistake should be consistent; instead I am getting
incorrect results only randomly.
Such behavior is a typical indicator of a race condition.
I have tried to initialize both __local memory to 0.0, but this causes
all results to become wrong (but frankly, I'm not really sure how to
initialize it properly)
Actually this is a good thing. Finally we have some consistency.
Initializing local memory
Initializing local memory can be done using the work items, e.g. if you have a 1D workgroup of 16 items and your local memory consists of 16 floats, just do this:
local float* ptr = ... // your pointer to local memory
int idx = get_local_id(0); // get the index for the current work-item
ptr[idx] = 0.f; // init with value 0
barrier(CLK_LOCAL_MEM_FENCE); // synchronize local memory access within workgroup
If your local memory is larger, e.g. 64 floats for those 16 work items, you will have to use a loop in which each work item initializes 4 values; at least, that is the most efficient way. However, nothing stops you from having every work item initialize every value in local memory, even though that is complete nonsense, since you would essentially be initializing it multiple times.
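A rough sketch of that loop, assuming a 1D workgroup of 16 items and a local buffer of LOCAL_SIZE = 64 floats:
local float* ptr = ...           // your pointer to local memory
int idx = get_local_id(0);       // index of the current work-item
int wg  = get_local_size(0);     // work-group size, here 16
for (int i = idx; i < LOCAL_SIZE; i += wg)
    ptr[i] = 0.f;                // each work item initializes LOCAL_SIZE/wg values
barrier(CLK_LOCAL_MEM_FENCE);    // synchronize before anyone reads the local memory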
Your changes
The original algorithm looks like it is specifically designed to use square tiles.
__local float Asub[TS][TS];
__local float Bsub[TS][TS];
Not only that, but the size of the local memory matches the workgroup size, 32x32 in their example.
When I look at your kernel arguments for local memory, I can see that you size them using the values that are M and N in the original algorithm (nrowA and ncolB). That doesn't seem correct: the local tiles should hold TS x TS floats, matching the workgroup size, not a whole M x N matrix.
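For illustration only (TS here is a host-side tile-size constant you would have to define; it is not in your snippet, and the matrix dimensions would need to be multiples of it), the local arguments and ranges would then look something like:
// tile-sized scratch space: one tile of A and one tile of B per workgroup
kernel.setArg(8, cl::Local(sizeof(float) * TS * TS));
kernel.setArg(9, cl::Local(sizeof(float) * TS * TS));
cl::NDRange global(nrowA, ncolB);
cl::NDRange local(TS, TS);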
Update 1
Since you have not described whether the original algorithm works for you, this is what you should do to find your error:
Create a set of test data. Make sure you only use data sizes that are actually supported by the original algorithm (e.g. minimum size, multiples of x, etc.). Also use large data sets, since some errors only show up when multiple workgroups are dispatched.
Use the original, unaltered algorithm with your test data sets and verify the results.
Change the algorithm only so that it uses dynamically sized local memory instead of fixed-size local memory, but make sure it has the same size as in the fixed-size approach. This is what you tried, but I think it failed for the reason described under "Your changes".
When passing buffers as arguments to OpenCL kernels, will the address of the buffer, as seen by the kernel code, remain the same for the same buffer?
I used the code below to check, and the addresses do seem to be the same. However, I can't find anything in the standard that guarantees this.
import pyopencl as cl
import numpy as np
def main():
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    buf = cl.Buffer(ctx, mf.READ_ONLY, 1000)
    buf2 = cl.Buffer(ctx, mf.READ_WRITE, 8)
    prg = cl.Program(ctx, """
    __kernel void
    get_addr(__global const int *in, __global long *out)
    {
        *out = (long)in;
    }
    """).build()
    knl = prg.get_addr
    knl.set_args(buf, buf2)
    cl.enqueue_task(queue, knl)
    b = np.empty([1], dtype=np.int64)
    cl.enqueue_copy(queue, b, buf2).wait()
    print(b[0])
    prg = cl.Program(ctx, """
    __kernel void
    get_addr(__global const int *in, __global long *out)
    {
        *out = (long)in;
    }
    """).build()
    knl = prg.get_addr
    knl.set_args(buf, buf2)
    cl.enqueue_task(queue, knl)
    b = np.empty([1], dtype=np.int64)
    cl.enqueue_copy(queue, b, buf2).wait()
    print(b[0])

if __name__ == '__main__':
    main()
The use case is that I am running a simulation using OpenCL which has many arrays of parameters. To avoid passing all these arrays around as kernel arguments, I store their pointers in a struct and pass a pointer to that struct around instead. Since this struct will be used many times (and by all work items), I would like to avoid refilling it on every run of every kernel, and I would like to know whether the pointers can change between different runs/work items.
It is not guaranteed for OpenCL 1.x. This is why it is unsafe to store pointers in buffers. The runtime is allowed to move the allocation for each kernel launch. There is no guarantee that it will move it, and of course it is reasonable to expect that the buffer will not often need to move so it isn't surprising that you'd see the result you see. If you allocate a lot more buffers and cycle through them to force the runtime to move them around you will be more likely to see the issue.
For OpenCL 2.0 the shared virtual memory feature guarantees this by definition: the address couldn't be shared if it kept changing.
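To make the "allocate more buffers and cycle through them" experiment concrete, here is a rough sketch reusing ctx, queue, mf and the get_addr kernel (knl) from the script above; whether any address actually changes is entirely up to the implementation:
# many buffers, to encourage the runtime to move allocations around
bufs = [cl.Buffer(ctx, mf.READ_ONLY, 1 << 20) for _ in range(64)]
out = cl.Buffer(ctx, mf.READ_WRITE, 8)
b = np.empty(1, dtype=np.int64)
seen = {}
for trial in range(10):
    for i, buf in enumerate(bufs):
        knl.set_args(buf, out)
        cl.enqueue_task(queue, knl)
        cl.enqueue_copy(queue, b, out).wait()
        if i in seen and seen[i] != int(b[0]):
            print("buffer %d moved: 0x%x -> 0x%x" % (i, seen[i], int(b[0])))
        seen[i] = int(b[0])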
I've been having some trouble making a copy of an image using PyOpenCL. I wanted to try copying first because I ultimately want to do other processing, but I'm not able to get even this basic task of accessing every pixel to work. Please help me find the error.
Here is the program
import pyopencl as cl
import numpy
from PIL import Image
import sys
img = Image.open(sys.argv[1])
img_arr = numpy.asarray(img).astype(numpy.uint8)
dim = img_arr.shape
host_arr = img_arr.reshape(-1)
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_arr)
dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, host_arr.nbytes)
kernel_code = """
__kernel void copyImage(__global const uint8 *a, __global uint8 *c)
{
int rowid = get_global_id(0);
int colid = get_global_id(1);
int ncols = %d;
int npix = %d; //number of pixels, 3 for RGB 4 for RGBA
int index = rowid * ncols * npix + colid * npix;
c[index + 0] = a[index + 0];
c[index + 1] = a[index + 1];
c[index + 2] = a[index + 2];
}
""" % (dim[1], dim[2])
prg = cl.Program(ctx, kernel_code).build()
prg.copyImage(queue, (dim[0], dim[1]) , None, a_buf, dest_buf)
result = numpy.empty_like(host_arr)
cl.enqueue_copy(queue, result, dest_buf)
result_reshaped = result.reshape(dim)
img2 = Image.fromarray(result_reshaped, "RGB")
img2.save("new_image_gpu.bmp")
The image I gave as input and the program's output are shown below (images not included here). The output contains black lines, and I'm not able to make sense of why they appear.
Please help me solve this bug.
Thank You
OK! So I've found a workaround: I changed all uint8 to int in the kernel, and removed astype(numpy.uint8) from the numpy array. I don't know why, I just tried this and it worked. An explanation as to why would be helpful. Also, does this mean it will take much more memory now?
It works, but I think it now takes much more memory. A workaround that keeps uint8 would be helpful.
There is a mismatch between the datatypes you are using in Python and OpenCL. In numpy, uint8 is an 8-bit unsigned integer (which I presume is what you were after). In OpenCL, uint8 is an 8-element vector of 32-bit unsigned integers. The correct datatype for an 8-bit unsigned integer in OpenCL is simply uchar. So your astype(numpy.uint8) is fine, but it should be accompanied by __global const uchar* / __global uchar* arrays in your OpenCL kernel.
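In other words, keeping astype(numpy.uint8) on the host and changing only the kernel's pointer types should be enough. A sketch of the corrected kernel signature (the body stays exactly as you wrote it):
__kernel void copyImage(__global const uchar *a, __global uchar *c)
{
    /* body unchanged: uchar is OpenCL's 8-bit unsigned integer type,
       matching numpy.uint8, so each channel stays one byte */
}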
If you are dealing with images, I would also recommend looking into OpenCL's dedicated image types, which can take advantage of the native support for handling images available in some hardware.