I want to ask a simple question: can the time difference between creating the OpenCL context and executing the OpenCL kernel be called kernel time in OpenCL code? And is
time_t start1,end1;
clock_t start2,end2;
start1=time(NULL);
start2=clock();
capable of calculating this time?
Briefly, no. The right way to measure kernel time is to get the OpenCL event associated with the kernel and collect its profiling info. It's done like this:
cl_int ret;
// Profiling must be enabled when the command queue is created
cl_command_queue queue = clCreateCommandQueue(context, device, CL_QUEUE_PROFILING_ENABLE, &ret);
...
cl_event my_event;
ret = clEnqueueNDRangeKernel(queue, kernel, 1, global_offset, global_size, local_size, num_events, wait_list, &my_event);
// Profiling info is only valid once the command has completed
clWaitForEvents(1, &my_event);
cl_ulong start, finish;
ret = clGetEventProfilingInfo(my_event, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &start, NULL);
ret = clGetEventProfilingInfo(my_event, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &finish, NULL);
cl_ulong time_ns = finish - start;
time_ns is the time in nanoseconds between kernel start and end. Don't forget to check the return codes.
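If you want the result in friendlier units, converting to milliseconds is a one-liner (using time_ns from above):
printf("Kernel time: %.3f ms\n", time_ns * 1e-6);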
I have the following skeleton code
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
&global_item_size,NULL,0, NULL, NULL);
printf("print immediately\n ");
I thought, and have read somewhere, that clEnqueueNDRangeKernel is a non-blocking call and the CPU continues its execution immediately after enqueuing the kernel.
But I see different behaviour: the printf statement executes after the kernel completes execution. Why am I seeing this behaviour? How do I make kernel calls non-blocking?
Yes, clEnqueueNDRangeKernel() is supposed to be non-blocking. However, the code you show does not let you conclude definitively that the kernel finishes before the printf statement. There are several possibilities:
The kernel is not enqueued properly or fails to run. You need to check if the return value ret is CL_SUCCESS, and if not, fix whatever needs to be fixed and try again.
The kernel runs fast and the thread on which the kernel runs is likely to be given priority, such that the printf statement ends up being executed after the kernel finishes.
The kernel is actually running during the printf statement, since nothing in your code allows you to conclude otherwise. To check if the kernel is running or finished, you need to use an event. For example:
cl_event evt = NULL;
cl_int ret, evt_status;
// ...
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
                             &global_item_size, NULL, 0, NULL, &evt);
// Check if it's finished or not
if (ret == CL_SUCCESS)
{
    clGetEventInfo(evt, CL_EVENT_COMMAND_EXECUTION_STATUS,
                   sizeof(cl_int), (void*) &evt_status, NULL);
    if (evt_status == CL_COMPLETE)
        printf("Kernel is finished\n");
    else
        printf("Kernel is NOT finished\n");
}
else
{
    printf("Something's wrong: %d\n", ret);
}
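One more caveat: enqueuing alone doesn't guarantee the command has even been submitted to the device. A minimal sketch, reusing the variables above, that stays non-blocking but forces submission:
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
                             &global_item_size, NULL, 0, NULL, &evt);
clFlush(command_queue);    // push the command to the device without waiting
printf("print immediately\n");
clWaitForEvents(1, &evt);  // block only when the result is actually needed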
I have the following lines of code, which I use to first determine the file size of the .cl file I am reading from (and loading into a buffer), and subsequently to build my program and kernel from the buffer. Assume calculate.cl contains a simple vector addition kernel.
//get size of kernel source
FILE *f = fopen("calculate.cl", "r");
fseek(f, 0, SEEK_END);
size_t programSize = ftell(f);
rewind(f);
//load kernel into buffer
char *programBuffer = (char*)malloc(programSize + 1);
programBuffer[programSize] = '\0';
fread(programBuffer, sizeof(char), programSize, f);
fclose(f);
//create program from buffer
cl_program program = clCreateProgramWithSource(context, 1, (const char**) &programBuffer, &programSize, &status);
//build program for devices
status = clBuildProgram(program, numDevices, devices, NULL, NULL, NULL);
//create the kernel
cl_kernel calculate = clCreateKernel(program, "calculate", &status);
However, when I run my program, the output produced is zero instead of the intended vector addition results. I've verified that the problem is not with the kernel itself (I used a different method to load an external kernel, which worked and gave me the intended results), but I am still curious as to why this initial method did not work.
Any help?
The problem's been solved.
Following bl0z0's suggestion and looking up the error, I found the solution here:
OpenCL: Expected identifier in kernel
Thanks everyone :D I really appreciate it!
I believe this gives the program size in terms of the number of chars:
size_t programSize = ftell(f);
and here you need to allocate in terms of bytes:
char *programBuffer = (char*)malloc(programSize + 1);
so I think the previous line should be
char *programBuffer = (char*)malloc(programSize * sizeof(char) + 1);
Double check this by just printing the programBuffer.
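For what it's worth, the classic pitfall with this loading pattern is the text-mode fopen: on Windows, fread() can return fewer bytes than ftell() reported because of newline translation, leaving garbage before the terminator. A hedged sketch of a more robust loader, using binary mode and terminating at the byte count fread() actually returns:
FILE *f = fopen("calculate.cl", "rb");   // binary mode: no newline translation
fseek(f, 0, SEEK_END);
size_t programSize = (size_t)ftell(f);
rewind(f);
char *programBuffer = (char*)malloc(programSize + 1);
size_t bytesRead = fread(programBuffer, 1, programSize, f);
programBuffer[bytesRead] = '\0';         // terminate at what was actually read
fclose(f);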
I am writing a program and I need to perform some data transfers repeatedly between host and device. I will try to minimize them as best I can, but is there any faster way to perform the transfers? The array copied to the device changes on each iteration, so the device needs to be updated with the new array values. Any suggestions/pointers/help will be appreciated.
for (i = 0; i <= SEVERALCALLS; i++) {
    wrtBuffer = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR, sizeof(double) * num, NULL, &ret);
    if (ret != 0) {
        printf("clCreateBuffer wrtBuffer error: %d. couldn't load\n", ret);
        exit(1);
    }
    // update cti array
    ret = clEnqueueWriteBuffer(command_queue, wrtBuffer, CL_TRUE, 0, sizeof(double) * num, cti, 0, NULL, NULL);
    if (ret != 0) {
        printf("clEnqueueWriteBuffer wrtBuffer error: %d. couldn't load\n", ret);
        exit(1);
    }
    // NDRange Kernel call
    ret = clEnqueueReadBuffer(command_queue, readBuffer, CL_TRUE, 0, sizeof(double) * num, newcti, 0, NULL, NULL);
    if (ret != 0) {
        printf("clEnqueueReadBuffer readBuffer error: %d. couldn't load\n", ret);
        exit(1);
    }
}
Three ways to optimize this:
On an integrated GPU (like an Intel or AMD APU), use "zero copy" buffers so you won't pay for any transfers at all.
On NVIDIA, use pinned host memory as the host-side source for clEnqueueWriteBuffer or as the receive buffer for clEnqueueReadBuffer; see the sketch after this list. This will be faster than using ordinary malloc'd memory and won't block.
Overlap transfer and compute. Use three command queues, one for upload, one for compute, and one for download, and use events to enforce dependencies. See NVIDIA's example: oclCopyComputeOverlap (although it is suboptimal; it can go slightly faster than they claim).
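Here is a minimal sketch of the pinned-memory pattern from NVIDIA's samples, assuming num, cti, wrtBuffer, context and command_queue from the question; pinned_buf and staging are illustrative names:
// Allocate once, outside the loop: a pinned host-side staging area.
cl_mem pinned_buf = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                   sizeof(double) * num, NULL, &ret);
double *staging = (double *)clEnqueueMapBuffer(command_queue, pinned_buf, CL_TRUE,
                                               CL_MAP_WRITE, 0, sizeof(double) * num,
                                               0, NULL, NULL, &ret);
// Each iteration: fill the pinned region, then copy to the device buffer
// without blocking (CL_FALSE); synchronize later via an event or clFinish().
memcpy(staging, cti, sizeof(double) * num);
ret = clEnqueueWriteBuffer(command_queue, wrtBuffer, CL_FALSE, 0,
                           sizeof(double) * num, staging, 0, NULL, NULL);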
I use the C++ wrapper and create a buffer with the following code:
cl_int err(0);
unsigned int size;
void *data = GetData(/*out*/ size);
cl::Buffer buf(m_ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
size, data, &err);
free(data);
After that, the working set of my application increases by size bytes. And since I have a 32-bit application, I can't allocate more than 1.5 GB in total, but the GPU has 3 GB.
Is it possible to allocate more buffers?
PS. size is less than 128 MB.
Update: I use only one device and it is a GPU (GeForce GTX 780, NVIDIA, driver 337.88).
I don't know the exact reason why the problem occurs; it is probably some underlying driver behaviour.
However, dropping the CL_MEM_COPY_HOST_PTR flag seems to solve it on the NVIDIA platform, and probably on AMD as well. So I'm writing a proper answer.
cl_int err(0);
unsigned int size;
void *data = GetData(/*out*/ size);
cl::Buffer buffer(m_ctx, CL_MEM_READ_WRITE,
                  size, NULL, &err);
err = queue.enqueueWriteBuffer(buffer, CL_TRUE, 0, size, data);
free(data);
A partial answer to your question: the maximal size of a single memory chunk that can be allocated can be queried like this:
cl_ulong max_buffer_size = 0;
cl_int ret_code = clGetDeviceInfo(Device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
sizeof(cl_ulong), &max_buffer_size, NULL);
if(ret_code != CL_SUCCESS){
fprintf(stderr, "Error %d happened \n", ret_code);
}
Generally, the OpenCL API doesn't allow you to allocate big (over a couple of hundred megabytes) memory objects.
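For reference, the spec only guarantees that CL_DEVICE_MAX_MEM_ALLOC_SIZE is at least a quarter of CL_DEVICE_GLOBAL_MEM_SIZE (and at least 128 MB), so it's worth querying both:
cl_ulong global_mem_size = 0;
ret_code = clGetDeviceInfo(Device, CL_DEVICE_GLOBAL_MEM_SIZE,
                           sizeof(cl_ulong), &global_mem_size, NULL);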
DarkZeros was right in the comments. It seems to be implementation-dependent, and using clEnqueueWriteBuffer() instead of the CL_MEM_COPY_HOST_PTR flag solves the problem.
I want to traverse a tree on the GPU with OpenCL, so I assemble the tree in a contiguous block on the host and change the addresses of all its pointers so as to be consistent on the device, as follows:
TreeAddressDevice = (size_t)BaseAddressDevice + ((size_t)TreeAddressHost - (size_t)BaseAddressHost);
I need the base address of the memory buffer on the device.
On the host I allocate memory for the buffer as follows:
cl_mem tree_d = clCreateBuffer(...);
The problem is that cl_mems are objects that track an internal representation of the data. Technically they're handles to an object, not pointers to the data. The only way to access a cl_mem from within a kernel is to pass it in as an argument via clSetKernelArg.
Here http://www.khronos.org/message_boards/viewtopic.php?f=37&t=2900 I found the following solution, but it does not work:
__kernel void getPtr( __global void *ptr, __global void *out )
{
*out = ptr;
}
that can be invoked as follows
...
cl_mem auxBuf = clCreateBuffer( context, CL_MEM_READ_WRITE, sizeof(void*), NULL, NULL );
void *gpuPtr;
clSetKernelArg( getterKernel, 0, sizeof(cl_mem), &myBuf );
clSetKernelArg( getterKernel, 1, sizeof(cl_mem), &auxBuf );
clEnqueueTask( commandQueue, getterKernel, 0, NULL, NULL );
clEnqueueReadBuffer( commandQueue, auxBuf, CL_TRUE, 0, sizeof(void*), &gpuPtr, 0, NULL, NULL );
clReleaseMemObject(auxBuf);
...
Now "gpuPtr" should contain the address of the beginning of "myBuf" in GPU memory space.
Is the solution obvious and I just can't find it? How can I get a pointer to device memory back when creating buffers?
It's because in the OpenCL model, host memory and device memory are disjoint. A pointer in device memory will have no meaning on the host.
You can map a device buffer to host memory using clEnqueueMapBuffer. The mapping will synchronize device to host, and unmapping will synchronize back host to device.
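A minimal map/unmap sketch (buf, queue, nbytes and err are placeholders for your own handles):
void *host_view = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE,
                                     0, nbytes, 0, NULL, NULL, &err);
/* read or modify host_view[0..nbytes) here */
err = clEnqueueUnmapMemObject(queue, buf, host_view, 0, NULL, NULL);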
Update. As you explain in the comments, you want to send a tree structure to the GPU. One solution would be to store all tree nodes inside an array, replacing pointers to nodes with indices in the array.
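A sketch of what such a node might look like (field names are illustrative); the same struct layout is declared in the kernel source, and traversal becomes index chasing instead of pointer chasing:
typedef struct {
    float payload;   // whatever data a node carries
    int   left;      // index of the left child in the node array, -1 if none
    int   right;     // index of the right child, -1 if none
} Node;

// Host: nodes[] is one contiguous array, so a single clEnqueueWriteBuffer
// uploads the whole tree and no addresses need patching. In the kernel:
//     int cur = 0;                 // root at index 0
//     while (cur != -1) cur = go_left ? nodes[cur].left : nodes[cur].right;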
As Eric pointed out, there are two sets of memory to consider: host memory and device memory. Basically, OpenCL tries to hide the gritty details of this interaction by introducing the buffer object for us to interact with in our program on the host side. Now, as you noted, the problem with this methodology is that it hides away the details of our device when we want to do something trickier than the OpenCL developers intended or allowed in their scope. The solution here is to remember that OpenCL kernels use C99 and that the language allows us to access pointers without any issue. With this in mind, we can just demand the pointer be stored in an unsigned integer variable to be referenced later.
Your implementation was on the right track, but it needed a little bit more C syntax to finish up the transfer.
OpenCL Kernel:
// Kernel used to obtain the device-side address of a target buffer
__kernel void mem_ptr(__global char *buffer, __global ulong *ptr)
{
    ptr[0] = (ulong)&buffer[0];
}
// Kernel demonstrating how to use that address again after we extract it
__kernel void use_ptr(__global ulong *ptr)
{
    __global char *print_me = (__global char *)ptr[0];
    /* Code that uses all of our hard work */
    /* ... */
}
Host Program:
// Create the buffer that we want the device pointer from (target_buffer)
// and a place to store it (ptr_buffer).
cl_mem target_buffer = clCreateBuffer(context, CL_MEM_READ_WRITE,
MEM_SIZE * sizeof(char), NULL, &ret);
cl_mem ptr_buffer = clCreateBuffer(context, CL_MEM_READ_WRITE,
1 * sizeof(cl_ulong), NULL, &ret);
/* Setup the rest of our OpenCL program */
/* .... */
// Setup our kernel arguments from the host...
ret = clSetKernelArg(kernel_mem_ptr, 0, sizeof(cl_mem), (void *)&target_buffer);
ret = clSetKernelArg(kernel_mem_ptr, 1, sizeof(cl_mem), (void *)&ptr_buffer);
ret = clEnqueueTask(command_queue, kernel_mem_ptr, 0, NULL, NULL);
// Now it's just a matter of storing the pointer where we want to use it for later.
ret = clEnqueueCopyBuffer(command_queue, ptr_buffer, dst_buffer, 0, 1 * sizeof(cl_ulong),
sizeof(cl_ulong), 0, NULL, NULL);
ret = clEnqueueReadBuffer(command_queue, ptr_buffer, CL_TRUE, 0,
1 * sizeof(cl_ulong), buffer_ptrs, 0, NULL, NULL);
There you have it. Keep in mind that you don't have to use the char type I used; this works for any type. However, I'd recommend cl_ulong for storing the pointers. For devices with less than 4 GB of accessible memory a uint would suffice, but for devices with a larger address space you have to use cl_ulong. If you absolutely need to save space on a device whose memory exceeds 4 GB, you might be able to create a struct that stores the lower 32 bits of the address in a uint, with the remaining high bits stored in a smaller type.
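A purely illustrative sketch of that packed layout (not from the original answer):
// Hypothetical packed device address: 32 low bits in a uint plus a few
// high bits, reassembled into a ulong before use.
typedef struct {
    uint  lo;   // lower 32 bits of the address
    uchar hi;   // high bits; 8 extra bits cover up to 1 TB of address space
} packed_ptr;

// Inside a kernel: ulong addr = ((ulong)p.hi << 32) | p.lo;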