Is it possible to build the same program twice in OpenCL with different preprocessor options?

Given the following code, where p is a cl_program created from some source code, what happens if I run:
*err = clBuildProgram(p,
                      1,
                      m_gpu_device_id,
                      str0, // compiler options, see the specification for more details
                      0,
                      0);
cl_kernel kernel0 = clCreateKernel(p,                // the program where the kernel is
                                   "nn_feedforward", // the kernel's name, i.e. the kernel function's name as declared in the code
                                   err);
*err = clBuildProgram(p, 1, m_gpu_device_id, str1, 0, 0);
cl_kernel kernel1 = clCreateKernel(p, "nn_feedforward", err);
Will kernel1 work with the options from str1, in contrast to kernel0 with the str0 options, or will the first kernel get overwritten in some way?


Non blocking kernel launches in OpenCL intel implementation

I have the following skeleton code
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
                             &global_item_size, NULL, 0, NULL, NULL);
printf("print immediately\n");
I thought, and read somewhere, that clEnqueueNDRangeKernel is a non-blocking call and that the CPU continues execution immediately after enqueuing the kernel.
But I see different behaviour: the printf statement executes only after the kernel completes. Why am I seeing this behaviour? How do I make kernel calls non-blocking?
Yes, clEnqueueNDRangeKernel() is supposed to be non-blocking. However, the code you show does not let you conclude definitively that the kernel finishes before the printf statement. There are several possibilities:
The kernel is not enqueued properly or fails to run. You need to check whether the return value ret is CL_SUCCESS, and if not, fix whatever needs fixing and try again.
The kernel runs fast, and the thread on which it runs is likely to be given priority, so that the printf statement ends up executing after the kernel finishes.
The kernel is actually still running during the printf statement, since nothing in your code lets you conclude otherwise. To check whether the kernel is running or finished, you need to use an event. For example:
cl_event evt = NULL;
cl_int ret, evt_status;
// ...
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
                             &global_item_size, NULL, 0, NULL, &evt);
// Check whether the kernel has finished or not
if (ret == CL_SUCCESS)
{
    clGetEventInfo(evt, CL_EVENT_COMMAND_EXECUTION_STATUS,
                   sizeof(cl_int), (void *) &evt_status, NULL);
    if (evt_status == CL_COMPLETE)
        printf("Kernel is finished\n");
    else
        printf("Kernel is NOT finished\n");
}
else
{
    printf("Something's wrong: %d\n", ret);
}
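The asynchronous-enqueue model can be illustrated with a host-side analogy in plain C using POSIX threads (nothing OpenCL-specific; `kernel_work` and `run_async_demo` are hypothetical names): starting a thread is like enqueuing a kernel, the caller continues immediately, and joining the thread plays the role of clFinish().

```c
#include <pthread.h>
#include <stdio.h>

static int result = 0;

// Simulated "kernel": does its work asynchronously on another thread.
static void *kernel_work(void *arg)
{
    (void) arg;
    result = 42;                    // the "device" produces its result
    return NULL;
}

// "Enqueue" the work, print immediately, then "clFinish" by joining.
int run_async_demo(void)
{
    pthread_t t;
    if (pthread_create(&t, NULL, kernel_work, NULL) != 0)
        return -1;                  // could not start the "kernel"
    printf("print immediately\n"); // may run before or after kernel_work
    pthread_join(t, NULL);          // block until the work is done
    return result;                  // only guaranteed to be 42 after the join
}
```

Just as in the OpenCL case, reading `result` before the join would be a race; the join (like clFinish or waiting on an event) is what establishes that the work has completed.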

Copying global on-device pointer address back and forth between device and host

I created a buffer on the OpenCL device (a GPU), and from the host I need to know that buffer's global on-device pointer address, so that I can store the address in a second buffer; the kernel then reads the address from the second buffer and uses it to access the contents of the first.
If that's confusing, here's what I'm trying to do: I create a generic floats-containing buffer representing a 2D image. From the host I then build a to-do list of everything my kernel needs to draw: which lines, which circles, which images. From that list the kernel has to know where to find each image, but an image reference cannot be passed as a kernel argument, because the kernel might draw no image at all, or a thousand different ones, depending entirely on what the list says. So each image has to be referenced inside the buffer that serves as the kernel's to-do list.
The awkward way I've done it so far:
To do so, I tried making a function that, after the image buffer is created, runs a kernel that takes the buffer and writes its global on-device address as a ulong into another buffer; the host then stores that value in a 64-bit integer, like this:
uint64_t get_clmem_device_address(clctx_t *clctx, cl_mem buf)
{
    const char kernel_source[] =
    "kernel void get_global_ptr_address(global void *ptr, global ulong *devaddr) \n"
    "{ \n"
    "    *devaddr = (ulong) ptr; \n"
    "} \n";
    cl_int ret;
    static int init = 1;
    static cl_program program;
    static cl_kernel kernel;
    static cl_mem ret_buffer;
    size_t global_work_size[1];
    uint64_t devaddr = 0;

    if (init)
    {
        init = 0;
        ret = build_cl_program(clctx, &program, kernel_source);
        ret = create_cl_kernel(clctx, program, &kernel, "get_global_ptr_address");
        ret_buffer = clCreateBuffer(clctx->context, CL_MEM_WRITE_ONLY, 1*sizeof(uint64_t), NULL, &ret);
    }

    if (kernel == NULL)
        return 0;

    // Run the kernel
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), &ret_buffer);
    global_work_size[0] = 1;
    ret = clEnqueueNDRangeKernel(clctx->command_queue, kernel, 1, NULL, global_work_size, NULL, 0, NULL, NULL); // enqueue the kernel
    ret = clEnqueueReadBuffer(clctx->command_queue, ret_buffer, CL_FALSE, 0, 1*sizeof(uint64_t), &devaddr, 0, NULL, NULL); // copy the value back
    clFinish(clctx->command_queue); // wait until the read has completed

    return devaddr;
}
Apparently this works (it does return a number, although it's hard to know whether it's correct). I then put this devaddr (a 64-bit integer on the host) into the to-do list buffer that the kernel reads to know what to do. When the list calls for it, the kernel invokes the function below, where le is a pointer to the relevant entry in the to-do list and the 64-bit address is its first element:
float4 blit_sprite(global uint *le, float4 pv)
{
    const int2 p = (int2) (get_global_id(0), get_global_id(1));
    ulong devaddr;
    global float4 *im;
    int2 im_dim;

    devaddr = ((global ulong *) le)[0]; // global address of the start of the image, as a ulong
    im_dim.x = le[2];
    im_dim.y = le[3];
    im = (global float4 *) devaddr;     // the ulong is turned back into a proper global pointer

    if (p.x < im_dim.x)
        if (p.y < im_dim.y)
            pv += im[p.y * im_dim.x + p.x]; // this gives me a CL_OUT_OF_RESOURCES error, even when changing it to im[0]
    return pv;
}
But, big surprise, this didn't work: it gave me a CL_OUT_OF_RESOURCES error, which I assumed meant my im pointer wasn't valid. (Edit: it actually works; it only failed when I used two different contexts. But it's still pretty unwieldy.)
Is there a less weird way to do what I want to do?
The OpenCL standard doesn't guarantee that memory objects will not be physically reallocated between kernel calls, so the original device-side address is valid only within a single kernel NDRange. That's one of the reasons OpenCL memory objects are represented on the host side as opaque structure pointers.
However, you can save the offset to the memory object's first byte in the 1st kernel and pass it to the 2nd kernel. Each time you launch your kernel, you obtain the actual device-side address within the kernel and increment it by the saved offset. That would be perfectly "legal".
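To make the offset idea concrete, here is a host-side sketch in plain C (an analogy: the same arithmetic works inside a kernel with `global` pointers, where `base` is the buffer argument the kernel receives on each launch; `save_offset` and `resolve_offset` are hypothetical names):

```c
#include <stdint.h>

// "Kernel 1": record where a sub-object lives, as an offset from the
// buffer's base address rather than as an absolute device address.
uint64_t save_offset(const float *base, const float *sub_object)
{
    return (uint64_t)((uintptr_t)sub_object - (uintptr_t)base);
}

// "Kernel 2": rebuild a valid pointer from the base address actually
// received on this launch, plus the saved offset.
const float *resolve_offset(const float *base, uint64_t offset)
{
    return (const float *)((uintptr_t)base + (uintptr_t)offset);
}
```

Even if the runtime relocates the buffer between launches, `base` is fresh on every launch, so `base + offset` always points at the same element of the same buffer.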

Reading an external kernel in OpenCL

I have the following lines of code, which I use first to determine the file size of the .cl file I am reading from (loading it into a buffer), and subsequently to build my program and kernel from that buffer. Assume calculate.cl contains a simple vector-addition kernel.
//get size of kernel source
FILE *f = fopen("calculate.cl", "r");
fseek(f, 0, SEEK_END);
size_t programSize = ftell(f);
rewind(f);
//load kernel into buffer
char *programBuffer = (char*)malloc(programSize + 1);
programBuffer[programSize] = '\0';
fread(programBuffer, sizeof(char), programSize, f);
fclose(f);
//create program from buffer
cl_program program = clCreateProgramWithSource(context, 1, (const char**) &programBuffer, &programSize, &status);
//build program for devices
status = clBuildProgram(program, numDevices, devices, NULL, NULL, NULL);
//create the kernel
cl_kernel calculate = clCreateKernel(program, "calculate", &status);
However, when I run my program, the output produced is zero instead of the intended vector-addition results. I've verified that the problem is not with the kernel itself (a different method of loading the external kernel worked and gave the intended results), but I am still curious why this initial method did not work.
Any help?
The problem's been solved.
Following bl0z0's suggestion and looking up the error, I found the solution here:
OpenCL: Expected identifier in kernel
Thanks everyone :D I really appreciate it!
I believe this gives the program size in terms of the number of chars:
size_t programSize = ftell(f);
and here you need to allocate in terms of bytes:
char *programBuffer = (char*)malloc(programSize + 1);
so I think that previous line should be
char *programBuffer = (char*)malloc(programSize * sizeof(char) + 1);
Double-check this by just printing the programBuffer.
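For reference, a more defensive loader can sidestep the most common cause of this symptom (opening the file in text mode, where on some platforms ftell can report more bytes than fread actually returns, leaving garbage at the end of the buffer). This is a sketch; `load_program_source` is a hypothetical helper name:

```c
#include <stdio.h>
#include <stdlib.h>

// Load an entire kernel-source file into a NUL-terminated buffer.
// Returns NULL on failure; on success stores the byte count in *size_out.
char *load_program_source(const char *path, size_t *size_out)
{
    FILE *f = fopen(path, "rb"); // binary mode: ftell matches what fread returns
    if (f == NULL)
        return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    if (size < 0) { fclose(f); return NULL; }

    char *buffer = malloc((size_t)size + 1);
    if (buffer == NULL) { fclose(f); return NULL; }

    size_t nread = fread(buffer, 1, (size_t)size, f);
    fclose(f);
    buffer[nread] = '\0'; // terminate at what was actually read
    *size_out = nread;
    return buffer;
}
```

The buffer and size returned here can be passed straight to clCreateProgramWithSource, as in the question's code.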

How to make data transfer between host and device faster?

I am writing a program in which I need to transfer data repeatedly between host and device. I will try to minimize the transfers as best I can, but is there a faster way to perform them? The array copied to the device changes on each iteration, so the device needs to be updated with the new values. Any suggestions/pointers/help will be appreciated.
for (i = 0; i <= SEVERALCALLS; i++) {
    wrtBuffer = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR, sizeof(double) * num, NULL, &ret);
    if (ret != 0) {
        printf("clCreateBuffer wrtBuffer error: %d. couldn't load\n", ret);
        exit(1);
    }
    // update cti array
    ret = clEnqueueWriteBuffer(command_queue, wrtBuffer, CL_TRUE, 0, sizeof(double) * num, cti, 0, NULL, NULL);
    if (ret != 0) {
        printf("clEnqueueWriteBuffer wrtBuffer error: %d. couldn't load\n", ret);
        exit(1);
    }
    // NDRange kernel call
    ret = clEnqueueReadBuffer(command_queue, readBuffer, CL_TRUE, 0, sizeof(double) * num, newcti, 0, NULL, NULL);
    if (ret != 0) {
        printf("clEnqueueReadBuffer readBuffer error: %d. couldn't load\n", ret);
        exit(1);
    }
}
Three ways to optimize this:
On an integrated GPU (like Intel's, or an AMD APU), use "zero copy" buffers so you don't pay for any transfers.
On NVIDIA, use pinned host memory as the host-side source for clEnqueueWriteBuffer, or as the receive buffer for clEnqueueReadBuffer. This will be faster than using ordinary malloc'ed memory and won't block.
Overlap transfer and compute: use three command queues (one for upload, one for compute, one for download) and use events to enforce dependencies. See NVIDIA's example, oclCopyComputeOverlap (although it is suboptimal; it can go slightly faster than they say it can).

OpenCL - adding to a single global value

I'm fighting a bug related to adding to a single global value from an OpenCL kernel.
Consider this (oversimplified) example:
__kernel void some_kernel(__global unsigned int *ops) {
    unsigned int somevalue = ...; // a non-zero value is assigned here
    *ops += somevalue;
}
I pass in an argument initialized to zero through clCreateBuffer and clEnqueueWriteBuffer. I assumed that after adding to the value, letting the queue finish, and reading the value back, I'd get a non-zero result.
Then I figured this might be some weird write conflict, so I tried an atomic operation:
__kernel void some_kernel(__global unsigned int *ops) {
    unsigned int somevalue = ...; // a non-zero value is assigned here
    atomic_add(ops, somevalue);
}
Alas, no dice: after reading the value back to a host pointer, it's still zero. I've verified that somevalue takes non-zero values during kernel execution, and I am at a loss.
By request, the code for creating the memory:
unsigned int *cpu_ops = new unsigned int;
*cpu_ops = 0;
cl_mem_flags flags = CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR;
cl_int error;
cl_mem buffer = clCreateBuffer(context, flags, sizeof(unsigned int), (void*)cpu_ops, &error);
// error code check snipped
error = clEnqueueWriteBuffer(queue, buffer, CL_TRUE, 0, sizeof(unsigned int), (void*)cpu_ops, 0, NULL, NULL);
// error code check snipped
// snip: program setup - it checks out, no errors
cl_kernel some_kernel = clCreateKernel(program, "some_kernel", &error);
// error code check snipped
error = clSetKernelArg(some_kernel, 0, sizeof(cl_mem), &buffer);
// error code check snipped
// global_work_size and local_work_size set elsewhere
error = clEnqueueNDRangeKernel(queue, some_kernel, 1, NULL, &global_work_size, &local_work_size, 0, NULL, NULL);
// error code check snipped
clFinish(queue);
error = clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, sizeof(unsigned int), (void*)cpu_ops, 0, NULL, NULL);
// error code check snipped
// at this point, cpu_ops still has its initial value (whatever that value was set to)
I've skipped the error-checking code since nothing errors out. I'm actually using a bunch of custom helper functions for sending and receiving data, setting up the platform and context, compiling the program, and so on, so the above is assembled from the bodies of the appropriate helpers, with parameter names changed to make sense.
I'm fairly sure this is a slip-up or a lack of understanding on my part, but I desperately need input on this.
Never mind. I was confused about my memory handles - just a stupid error. The code is probably fine.
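As an aside, the non-atomic *ops += somevalue really is a data race when many work-items target the same address, even though it wasn't the culprit here. A host-side analogy in plain C (C11 atomics, with POSIX threads standing in for work-items; `work_item` and `run_work_items` are hypothetical names, nothing here is OpenCL API) shows why the atomic form is the one that sums correctly:

```c
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 8
#define NADDS    10000

static atomic_uint ops; // stand-in for the single __global counter

// Each thread plays the role of a work-item adding to the shared value.
static void *work_item(void *arg)
{
    (void) arg;
    for (int i = 0; i < NADDS; i++)
        atomic_fetch_add(&ops, 1u); // analogous to atomic_add(ops, somevalue)
    return NULL;
}

unsigned int run_work_items(void)
{
    pthread_t threads[NTHREADS];
    atomic_store(&ops, 0u);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, work_item, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    // Always NTHREADS * NADDS with the atomic add; a plain += here could
    // lose updates when two threads read-modify-write concurrently.
    return atomic_load(&ops);
}
```

The same reasoning applies on the device: with many work-items, only the atomic_add version is guaranteed to accumulate every contribution.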
