OpenCL to search array and set a flag - opencl

I'm brand new to using OpenCL, and this seems like it should be very simple, so bear with me.
I'm writing a simple kernel to scan an array and look for a particular value. If that value is found anywhere in the array, I'd like a flag to be set. If the value is not found, the flag should remain 0.
Currently I'm creating a cl_mem object to hold an int:
cl_mem outputFlag = clCreateBuffer(mCLContext, CL_MEM_WRITE_ONLY, sizeof(cl_int), NULL, NULL);
setting it as a kernel argument:
clSetKernelArg(mCLKernels[1], 1, sizeof(cl_mem), &outputFlag);
and executing my kernel, which looks like:
__kernel void checkForHole(__global uchar *image, __global int found, uchar holeValue)
{
    int i = get_global_id(0);
    int j = get_global_id(1);
    uchar sample = image[i*j];
    if (sample == holeValue) {
        found = 1;
    }
}
Note that my array is 2D, though it shouldn't matter.
When I put a printf statement inside my found condition, it does get called (the value is found). But when I read back my value via:
cl_int result;
errorCode = clEnqueueReadBuffer(mCLCommandQueue, outputFlag, CL_TRUE, 0, sizeof(cl_int), &result, 0, NULL, NULL);
I get 0. Is there a proper way to set a flag in OpenCL? It would also be nice if there were a way to halt the entire execution and just return my value once it is found.
Can I write a bool return type kernel and just return true?
Thanks!

In the kernel, the output flag should be a pointer to an int, not an int.
Change the kernel parameter to __global int *found and write to it with *found = 1;.
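For reference, here is the kernel with that one change applied (a sketch; the image[i*j] indexing is left exactly as in the question):
__kernel void checkForHole(__global uchar *image, __global int *found, uchar holeValue)
{
    int i = get_global_id(0);
    int j = get_global_id(1);
    uchar sample = image[i*j];
    if (sample == holeValue) {
        *found = 1; // every matching work item stores the same value, so no atomic is needed
    }
}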
I always seem to figure out my issues just by writing them here....
If anyone knows a way to halt the execution though, or if it's even possible, I'd still be interested :)

Related

Copying global on-device pointer address back and forth between device and host

I created a buffer on the OpenCL device (a GPU), and from the host I need to know its global on-device address. I want to store that address in a second buffer, so that the kernel can read the second buffer, find the address of the first, and access its contents.
If that's confusing, here's what I'm trying to do: I create a generic floats-containing buffer representing a 2D image, and from the host I build a todo list of everything my kernel needs to draw: which lines, which circles, which images. The kernel has to learn where to find each image from that list alone; the reference to an image cannot be passed as a kernel argument, because the kernel might draw no images or a thousand different ones, all depending on what the list says. So the image has to be referenced inside the buffer that serves as the kernel's todo list.
The awkward way I've done it so far:
To do so, after creating the image buffer, I call a function that runs a small kernel; the kernel receives the buffer and writes its global on-device address, as a ulong, into another buffer, which the host then reads back into a 64-bit integer:
uint64_t get_clmem_device_address(clctx_t *clctx, cl_mem buf)
{
    const char kernel_source[] =
        "kernel void get_global_ptr_address(global void *ptr, global ulong *devaddr) \n"
        "{                                                                            \n"
        "    *devaddr = (ulong) ptr;                                                  \n"
        "}                                                                            \n";

    cl_int ret;
    static int init = 1;
    static cl_program program;
    static cl_kernel kernel;
    size_t global_work_size[1];
    static cl_mem ret_buffer;
    uint64_t devaddr;

    // Build the program, kernel and output buffer only once
    if (init)
    {
        init = 0;
        ret = build_cl_program(clctx, &program, kernel_source);
        ret = create_cl_kernel(clctx, program, &kernel, "get_global_ptr_address");
        ret_buffer = clCreateBuffer(clctx->context, CL_MEM_WRITE_ONLY, 1*sizeof(uint64_t), NULL, &ret);
    }

    if (kernel == NULL)
        return 0;   // kernel creation failed

    // Run the kernel
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), &ret_buffer);
    global_work_size[0] = 1;
    ret = clEnqueueNDRangeKernel(clctx->command_queue, kernel, 1, NULL, global_work_size, NULL, 0, NULL, NULL);   // enqueue the kernel
    ret = clEnqueueReadBuffer(clctx->command_queue, ret_buffer, CL_FALSE, 0, 1*sizeof(uint64_t), &devaddr, 0, NULL, NULL);   // copy the value
    ret = clFlush(clctx->command_queue);
    clFinish(clctx->command_queue);

    return devaddr;
}
Apparently this works (it does return a number, though it's hard to know whether it's correct). I then put this devaddr (a 64-bit integer on the host) into the todo-list buffer that the kernel uses to know what to do, and when the list calls for it the kernel calls the function below, where le points to the relevant todo-list entry and the 64-bit address is its first element:
float4 blit_sprite(global uint *le, float4 pv)
{
    const int2 p = (int2) (get_global_id(0), get_global_id(1));
    ulong devaddr;
    global float4 *im;
    int2 im_dim;

    devaddr = ((global ulong *) le)[0];   // global address for the start of the image as a ulong
    im_dim.x = le[2];
    im_dim.y = le[3];
    im = (global float4 *) devaddr;       // the ulong is turned into a proper global pointer

    if (p.x < im_dim.x)
        if (p.y < im_dim.y)
            pv += im[p.y * im_dim.x + p.x];   // this gives me a CL_OUT_OF_RESOURCES error, even when changing it to im[0]

    return pv;
}
Big surprise: at first this didn't work. It gave me a CL_OUT_OF_RESOURCES error, which I assumed meant my im pointer wasn't valid. It turns out it actually works; the error only appeared when I used two different contexts. But it's still pretty unwieldy.
Is there a less weird way to do what I want to do?
The OpenCL standard doesn't guarantee that memory objects stay at the same physical address between kernel calls, so an on-device address obtained in one kernel is only valid within that single NDRange. That's one of the reasons OpenCL memory objects are exposed on the host side as opaque handles.
You can, however, save the offset to the memory object's first byte in the first kernel and pass it to the second kernel. Every time you launch your kernel, take the actual device-side address available inside the kernel and add the saved offset to it. That would be perfectly "legal".
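A minimal sketch of that idea, with hypothetical names: keep all image data in one pooled buffer that is passed to every launch, store an element offset in the todo list instead of a raw device address, and rebuild the pointer inside each kernel invocation.
float4 blit_sprite(global float4 *pool, global uint *le, float4 pv)
{
    const int2 p = (int2) (get_global_id(0), get_global_id(1));
    ulong offset = ((global ulong *) le)[0];   // element offset into the pool, saved earlier
    global float4 *im = pool + offset;         // a valid pointer, rebuilt on this launch
    int2 im_dim = (int2) ((int) le[2], (int) le[3]);

    if (p.x < im_dim.x && p.y < im_dim.y)
        pv += im[p.y * im_dim.x + p.x];
    return pv;
}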

Simple Vector Geometric Progression Design in OpenCL

I'm new to OpenCL, and in order to get a better grasp of a few concepts I contrived a simple example of a geometric progression, as follows (emphasis on contrived):
Arrays of N values and N coefficients (whose values could be anything, but in the example they are all the same) are allocated.
M steps are performed in sequence, where each value in the values array is multiplied by its corresponding coefficient in the coefficients array and assigned as the new value in the values array. Each step needs to fully complete before the next step can begin. I know this part is a bit contrived, but it is a requirement I want to enforce to help my understanding of OpenCL.
I'm only interested in the values in the values array after the final step has completed.
Here is the very simple OpenCL kernel (MultiplyVectors.cl):
__kernel void MultiplyVectors (__global float4* x, __global float4* y, __global float4* result)
{
    int i = get_global_id(0);
    result[i] = x[i] * y[i];
}
And here is the host program (main.cpp):
#include <CL/cl.hpp>
#include <vector>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
int main ()
{
    auto context = cl::Context (CL_DEVICE_TYPE_GPU);

    auto *sourceFile = fopen ("MultiplyVectors.cl", "r");
    if (sourceFile == nullptr)
    {
        perror ("Couldn't open the source file");
        return 1;
    }
    fseek (sourceFile, 0, SEEK_END);
    const auto sourceSize = ftell (sourceFile);
    auto *sourceBuffer = new char [sourceSize + 1];
    sourceBuffer[sourceSize] = '\0';
    rewind (sourceFile);
    fread (sourceBuffer, sizeof (char), sourceSize, sourceFile);
    fclose (sourceFile);

    auto program = cl::Program (context, cl::Program::Sources {std::make_pair (sourceBuffer, sourceSize + 1)});
    delete[] sourceBuffer;

    const auto devices = context.getInfo<CL_CONTEXT_DEVICES> ();
    program.build (devices);
    auto kernel = cl::Kernel (program, "MultiplyVectors");

    const size_t vectorSize = 1024;

    float coeffs[vectorSize] {};
    for (size_t i = 0; i < vectorSize; ++i)
    {
        coeffs[i] = 1.000001;
    }
    auto coeffsBuffer = cl::Buffer (context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof (coeffs), coeffs);

    float values[vectorSize] {};
    for (size_t i = 0; i < vectorSize; ++i)
    {
        values[i] = static_cast<float> (i);
    }
    auto valuesBuffer = cl::Buffer (context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, sizeof (values), values);

    kernel.setArg (0, coeffsBuffer);
    kernel.setArg (1, valuesBuffer);
    kernel.setArg (2, valuesBuffer);

    auto commandQueue = cl::CommandQueue (context, devices[0]);

    for (size_t i = 0; i < 1000000; ++i)
    {
        commandQueue.enqueueNDRangeKernel (kernel, cl::NDRange (0), cl::NDRange (vectorSize / 4), cl::NullRange);
    }

    printf ("All kernels enqueued. Waiting to read buffer after last kernel...");
    commandQueue.enqueueReadBuffer (valuesBuffer, CL_TRUE, 0, sizeof (values), values);

    return 0;
}
What I'm basically asking for is advice on how best to optimize this OpenCL program to run on a GPU. Based on my limited OpenCL experience, I have the following questions to get the conversation going:
Could I be handling the buffers better? I'd like to minimize any unnecessary ferrying of data between the host and the GPU.
What's the optimal work-group configuration (in general at least; I know this can vary by GPU)? I'm not actually sharing any data between work items, and it doesn't seem like I'd benefit much from work groups here, but just in case.
Should I be allocating and loading anything into local memory for a work group (if that would make sense at all)?
I'm currently enqueuing one kernel for each step, which creates a work item for every 4 floats to take advantage of a hypothetical GPU with a SIMD width of 128 bits. I'm attempting to enqueue all of this asynchronously at once (although I'm noticing the Nvidia implementation I have seems to block on each enqueue until the kernel is complete) and then wait on the final one to complete; see the sketch after this list. Is there an altogether better approach to this that I'm missing?
Is there a design that would allow for only one call to enqueueNDRangeKernel (instead of one call per step) while maintaining the ability for each step to be efficiently processed in parallel?
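As an illustration of the enqueue-everything-then-wait-once pattern mentioned above, here is a minimal sketch (assuming an in-order queue; stepCount is a hypothetical stand-in for the number of steps) that attaches an event only to the final kernel:
cl::Event lastEvent;
for (size_t i = 0; i < stepCount; ++i)
{
    // Only the last enqueue gets an event; the in-order queue still
    // guarantees each step finishes before the next begins.
    commandQueue.enqueueNDRangeKernel (kernel, cl::NullRange,
                                       cl::NDRange (vectorSize / 4), cl::NullRange,
                                       nullptr, (i + 1 == stepCount) ? &lastEvent : nullptr);
}
lastEvent.wait (); // blocks exactly once, after the final step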
Obviously I know that the example problem I'm solving could be done in much better ways, but I wanted as simple an example as possible that illustrates a vector of values being operated on in a series of steps, where each step has to complete fully before the next. Any help and pointers on how best to go about this would be greatly appreciated.
Thanks!

OpenCL - adding to a single global value

I'm fighting a bug related to adding to a single global value from an OpenCL kernel.
Consider this (oversimplified) example:
__kernel void some_kernel(__global unsigned int *ops) {
    unsigned int somevalue = ...; // a non-zero value is assigned here
    *ops += somevalue;
}
I pass in an argument initialized as zero through clCreateBuffer and clEnqueueWriteBuffer. I assumed that after adding to the value, letting the queue finish and reading it back, I'd get a non-zero value.
Then I figured this might be some weird conflict, so I tried to do an atomic operation:
__kernel void some_kernel(__global unsigned int *ops) {
    unsigned int somevalue = ...; // a non-zero value is assigned here
    atomic_add(ops, somevalue);
}
Alas, no dice - after reading the value back to a host pointer, it's still zero. I've already verified that somevalue has non-zero values in kernel executions, and am at a loss.
By request, the code for creating the memory:
unsigned int *cpu_ops = new unsigned int;
*cpu_ops = 0;

cl_mem_flags flags = CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR;
cl_int error;

cl_mem buffer = clCreateBuffer(context, flags, sizeof(unsigned int), (void*)cpu_ops, &error);
// error code check snipped

error = clEnqueueWriteBuffer(queue, buffer, CL_TRUE, 0, sizeof(unsigned int), (void*)cpu_ops, 0, NULL, NULL);
// error code check snipped

// snip: program setup - it checks out, no errors

cl_kernel some_kernel = clCreateKernel(program, "some_kernel", &error);
// error code check snipped

error = clSetKernelArg(some_kernel, 0, sizeof(cl_mem), &buffer);
// error code check snipped

// global_work_size and local_work_size set elsewhere
error = clEnqueueNDRangeKernel(queue, some_kernel, 1, NULL, &global_work_size, &local_work_size, 0, NULL, NULL);
// error code check snipped

clFinish(queue);

error = clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, sizeof(unsigned int), (void*)cpu_ops, 0, NULL, NULL);
// error code check snipped

// at this point, cpu_ops still has its initial value (whatever that value might have been set to)
I've skipped the error checking code since it does not error out. I'm actually using a bunch of custom helper functions for sending and receiving data, setting up the platform and context, compiling the program and so on, so the above is constructed of the bodies of the appropriate helpers with the parameters' names changed to make sense.
I'm fairly sure that this is a slip-up or a lack of understanding on my part, but I desperately need input on this.
Never mind. I was confused about my memory handles - just a stupid error. The code is probably fine.
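For anyone who hits the same symptom, a quick sanity check of the atomic pattern above (a hypothetical minimal kernel): each work item adds 1, so after an NDRange of N work items the host should read back exactly N.
__kernel void count_items(__global unsigned int *ops) {
    atomic_add(ops, 1u); // after N work items complete, *ops == N
}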

Basic OpenCL Mutex Implementation (Currently Hanging)

I am trying to write a mutex for OpenCL. The idea is for every individual work item to be able to pass through a critical section atomically. Currently, I believe the problem may be that a thread warp is unable to proceed while one thread in the warp holds the lock.
My current simple kernel, for summing numbers, is below. "numbers" is an array of floats as input, "sum" is a one-element array for the result, and "semaphore" is a one-element array holding the semaphore. I based it heavily on the example here.
void acquire(__global int* semaphore) {
    int occupied;
    do {
        occupied = atom_xchg(semaphore, 1);
    } while (occupied > 0);
}

void release(__global int* semaphore) {
    atom_xchg(semaphore, 0); // the previous value, which is returned, is ignored
}

__kernel void test_kernel(__global float* numbers, __global float* sum, __global int* semaphore) {
    int i = get_global_id(0);
    acquire(semaphore);
    *sum += numbers[i];
    release(semaphore);
}
I am calling the kernel effectively like:
int numof_dimensions = 1;
size_t offset_global[1] = {0};
size_t size_global[1] = {4000}; // the length of the numbers array
size_t* size_local = NULL;
clEnqueueNDRangeKernel(command_queue, kernel, numof_dimensions, offset_global, size_global, size_local, 0, NULL, NULL);
When run as above, the graphics card hangs and the driver restarts itself. How can I fix this?
What you are trying to do is not possible because of the GPU execution model, where all threads on a "processor" share the instruction pointer, even in branches. Here is a post that explains the problem in detail: http://vansa.ic.cz/author/admin/.
BTW, the example code that you found has the exact same problem and would never work.
The answer to this might seem obvious in retrospect, but it isn't unless you've thought of it.
Basically, the local work-group size the GPU picks for you (on the order of a thread warp) is greater than 1, so the threads within a warp lock up: one of them holds the semaphore, but its warp-mates keep spinning on the lock, and the holder can never advance to release it. To fix it, explicitly specify a local size of 1 (i.e. size_t size_local[1] = {1};). Doing this produces a correct result.
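In context, the fixed launch from the question looks like this (same variable names as the snippet above):
size_t size_local[1] = {1}; // one work item per work group, so warp-mates can't deadlock on the lock
clEnqueueNDRangeKernel(command_queue, kernel, numof_dimensions, offset_global, size_global, size_local, 0, NULL, NULL);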

clSetKernelArg arg_value other than memory object

In OpenCL, can I set a kernel argument as follows?
cl_uint a = 0;
kernel.setArg(0, sizeof(a), &a);
I want to read and write one value from/to a kernel function, not only write to it.
Setting a kernel argument in this manner can only be used for inputs to the kernel. Any output you want to read (either in a subsequent kernel or from the host program) must be written to a buffer or an image. In your case, that means you need to create a single-element buffer and pass the buffer to the kernel.
One way to think about this is that when you call setArg with the parameter &a, the OpenCL kernel is using the value of a, not the location of a. If the kernel were to write to kernel argument zero, your host program would have no way of recovering the value that was written.
Your code creates an argument of type unsigned int, not a pointer to unsigned int.
clSetKernelArg takes a pointer to the argument value, not the value itself.
If you want to pass a pointer argument, you will have to create a buffer with clCreateBuffer (even if it's just one value in there) and call clSetKernelArg with the resulting cl_mem.
The following code creates a buffer for 1 cl_uint in __global memory, and copies the value of my_value to it. After running the kernel, it copies the (possibly modified) value back to my_value.
cl_uint my_value = 0;
const unsigned int count = 1;
// Allocate buffer
cl_mem hDeviceMem = clCreateBuffer(hContext, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, count * sizeof(cl_uint), &my_value, &nError);
// Set pointer to buffer as argument
clSetKernelArg(hKernel, 0, sizeof(cl_mem), &hDeviceMem);
// Run kernel
clEnqueueNDRangeKernel(...);
// Copy values back
clEnqueueReadBuffer(hCmdQueue, hDeviceMem, CL_TRUE, 0, count * sizeof(cl_uint), &my_value, 0, NULL, NULL);
Your kernel should then look like this:
__kernel void myKernel(__global unsigned int* value)
{
    // read/write to *value here
}
This should work the same as passing a one-element vector as a param. You might have to declare the parameter as __global uint *aParam in your kernel definition.
