Throughput calculation in OpenCL

I am trying to calculate the throughput of my kernel, which is written in OpenCL, but I am not sure how to do that. I found a file generated after compilation (the .attrb file) that shows a throughput of 0.435, but I am not sure what that means. Is there any other way to find the throughput?

The throughput of a kernel in OpenCL is calculated as:
(NumReadBytes + NumWriteBytes) / ElapsedTime
To measure the time, use a cl_event (the command queue must be created with CL_QUEUE_PROFILING_ENABLE):
double getDuration(cl_event event)
{
    cl_ulong start_time, end_time;
    // Profiling info is only valid once the command has completed.
    clWaitForEvents(1, &event);
    clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START,
                            sizeof(cl_ulong), &start_time, NULL);
    clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END,
                            sizeof(cl_ulong), &end_time, NULL);
    // Timestamps are in nanoseconds; convert to milliseconds.
    return (end_time - start_time) * 1e-6;
}

cl_event timer;
cl_int ret = clEnqueueNDRangeKernel(cq, kernel, 1, p_global_work_offset, &global_work_size,
                                    &local_work_size, 0, NULL, &timer);
printf("G:%zu L:%zu T:%f ms\n", global_work_size, local_work_size, getDuration(timer));

This is a very vague question.
Do you mean only the kernel, without loading the data?
What does the kernel do? On what kind of hardware are you running it? How is your data organized, and how do you manage your buffers?
Is everything in global memory? Are you accounting for latencies as well? Do you need to maximize throughput? Are you optimizing for specific hardware?
Many questions come to mind.

Related

Darknet - OpenCL weird continuous increment of time in clEnqueueNDRangeKernel

I am facing an issue with the OpenCL version of Darknet. I dug into the implementation and realized that the problem is in the call to a softmax kernel (which happens in https://github.com/ganyc717/Darknet-On-OpenCL/blob/c13fefc66a13da5805986937fccd486b2b313c24/darknet_cl/src/blas_kernels_cl.cpp#L1020). I reported it in an issue on GitHub (https://github.com/ganyc717/Darknet-On-OpenCL/issues/4), but meanwhile I am trying to understand what could be happening.
I profiled the time the algorithm takes to perform the prediction, and it increases over runs. Out of curiosity, I decided to reload the whole network before each run, and then the time spent in the prediction remains stable; so it seems to me that the problem depends on the continuous execution of the algorithm.
What is weird to me is that what seems to get slower over time is the call to the kernel, i.e. the call to clEnqueueNDRangeKernel. I am not an expert in OpenCL, but it does not seem logical that executing the kernel several times gets slower. Could it be a memory issue? How could it affect the execution time? I am a bit lost; any help is appreciated.
PS: a similar issue was reported in "A weird Timing issue with clEnqueueNDRangeKernel in OpenCL", but no answer is accepted there. It has a comment about the way of measuring time, but I don't think that applies to my case because the time is clearly growing.
EDIT:
I modified the code to create the command queue with CL_QUEUE_PROFILING_ENABLE. Then I added the following lines to profile the enqueued kernel:
cl_ulong time_start;
cl_ulong time_end;
clWaitForEvents(1, &e); // profiling info is only valid once the kernel has finished
clGetEventProfilingInfo(e, CL_PROFILING_COMMAND_START, sizeof(time_start), &time_start, NULL);
clGetEventProfilingInfo(e, CL_PROFILING_COMMAND_END, sizeof(time_end), &time_end, NULL);
double nanoSeconds = (double)(time_end - time_start);
printf("OpenCl Execution time is: %0.3f milliseconds\n", nanoSeconds / 1000000.0);
These time measurements remain stable... which confuses me even more. It seems that the GPU run itself takes the same time, but the CPU-side measurement of the enqueue call grows over time:
clock_t t1 = clock();
cl_event e;
cl_int status = clEnqueueNDRangeKernel(*cl->queue, kernel, 3, NULL, global_size, NULL, 0, NULL, &e);
clock_t t2 = clock();
printf("enqueue : \t %f\n", (float)(t2 - t1) / CLOCKS_PER_SEC);

OpenCL work item ordering

With OpenCL 1.1, is it possible to force work-items to order their execution so that they wait until higher-priority items have finished?
I've tried various implementations and always seem to get stuck when executing my kernel on a GPU (NVIDIA, OpenCL 1.1), although running on a CPU works fine.
My most recent attempt, below, hangs on a GPU. My suspicion is that the GPU splits the global work into local work-groups that get suspended, creating a deadlock. I typically run a global size several multiples of my local size, and this is important for scaling up my calculation.
kernel void ordered_workitem_kernel(global uint *min_active_id_g)
{
    uint i = get_global_id(0);
    min_active_id_g[0] = 0;
    barrier(CLK_GLOBAL_MEM_FENCE);
    while (i >= min_active_id_g[0]) {
        // do something interesting here
        if (i == min_active_id_g[0])
            atomic_inc(&min_active_id_g[0]);
    }
}
Perhaps there's a better way to do this? Any suggestions?

OpenCL workers count always 1

I am new to GPU development, and I was wondering how many workers (threads) concurrently execute my kernel, so I used the kernel below:
kernel void helloWorld(global int* result)
{
    int gid = get_local_id(0);
    if (gid > result[0])
    {
        result[0] = gid;
    }
}
but when running on an Intel Core i7, result[0] is always 0, and when run on an NVIDIA GPU it is also always 0.
OpenCL divides the threads on the device into work-groups.
A work-group is scheduled on a single execution unit (note that several work-groups can share one execution unit).
It is possible to let the compiler decide the size of the work-groups.
However, you pick the total number of threads when starting the kernel.
Example:
clEnqueueNDRangeKernel(command_queue, cl_exec, 1, NULL, &tasksize, &local_size_in, 0, NULL, NULL)
tasksize = total number of threads
local_size = number of threads in a group
So tasksize / local_size is the number of work-groups you will have.
If you pass NULL instead of local_size, the compiler decides the size of the work-groups.
There are several constraints on local_size.
Take a look here: cl api
In my experience, the size the compiler chooses is usually the optimal one.
This can be different in complex cases, where you have specific knowledge of what happens at runtime that is not available at compile time.
Also, every device has a maximum work-group size:
CL_DEVICE_MAX_WORK_GROUP_SIZE

OpenCL and Tesla M1060

I'm using the Tesla M1060 for GPGPU computation. It has the following specs:
# of Tesla GPUs 1
# of Streaming Processor Cores (XXX per processor) 240
Memory Interface (512-bit per GPU) 512-bit
When I use OpenCL, I can display the following board information:
available platform OpenCL 1.1 CUDA 6.5.14
device Tesla M1060 type:CL_DEVICE_TYPE_GPU
max compute units:30
max work item dimensions:3
max work item sizes (dim:0):512
max work item sizes (dim:1):512
max work item sizes (dim:2):64
global mem size(bytes):4294770688 local mem size:16383
How can I relate the GPU card information to the OpenCL memory information?
For example:
What does "Memory Interface" mean? Is it linked to a Work Item?
How can I relate the "240 cores" of the GPU to Work-Groups/Items?
How can I map the work-groups to it (what would be the number of Work-Groups to use)?
Thanks
EDIT:
After the following answers, there is a thing that is still unclear to me:
The CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE value is 32 for the kernel I use.
However, my device has a CL_DEVICE_MAX_COMPUTE_UNITS value of 30.
In the OpenCL 1.1 Api, it is written (p. 15):
Compute Unit: An OpenCL device has one or more compute units. A work-group executes on a single compute unit
It seems that either something is inconsistent here, or that I didn't fully understand the difference between Work-Groups and Compute Units.
As previously stated, when I set the work-group size (BLOCK_SIZE) to 32, the program fails with the following error:
Entry function uses too much shared data (0x4020 bytes, 0x4000 max).
The value 16 works.
Addendum
Here is my Kernel signature:
// enable double precision (not enabled by default)
#ifdef cl_khr_fp64
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#else
#error "IEEE-754 double precision not supported by OpenCL implementation."
#endif

#define BLOCK_SIZE 16 // --> this is what defines the WG size to me

__kernel __attribute__((reqd_work_group_size(BLOCK_SIZE, BLOCK_SIZE, 1)))
void mmult(__global double *A, __global double *B, __global double *C, const unsigned int q)
{
    __local double A_sub[BLOCK_SIZE][BLOCK_SIZE];
    __local double B_sub[BLOCK_SIZE][BLOCK_SIZE];
    // stuff that does matrix multiplication with __local
}
In the host code part:
#define BLOCK_SIZE 16
...
const size_t local_work_size[2] = {BLOCK_SIZE, BLOCK_SIZE};
...
status = clEnqueueNDRangeKernel(command_queue, kernel, 2, NULL, global_work_size, local_work_size, 0, NULL, NULL);
The memory interface doesn't mean anything to an OpenCL application. It is the number of bits the memory controller uses for reading from and writing to memory (the GDDR part of modern GPUs). The maximum global memory bandwidth is approximately pipelineWidth * memoryClockSpeed, but since OpenCL is meant to be cross-platform, you won't really need this value unless you are trying to figure out an upper bound for memory performance. Knowing about the 512-bit interface is somewhat useful when you're dealing with memory coalescing. wiki: Coalescing (computer science)
The max work item sizes have to do with 1) how the hardware schedules computations, and 2) the amount of low-level memory on the device -- eg. private memory and local memory.
The 240 figure doesn't matter much to OpenCL either. You can determine that each of the 30 compute units is made up of 8 streaming processor cores for this GPU architecture (because 240 / 30 = 8). If you query CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, it will very likely be a multiple of 8 for this device. See: clGetKernelWorkGroupInfo
I have answered similar questions about work-group sizing; see here and here.
Ultimately, you need to tune your application and kernels based on your own benchmarking results. I find it worth the time to write tests with various work-group sizes and eventually hard-code the optimal size.
Adding another answer to address your local memory issue.
Entry function uses too much shared data (0x4020 bytes, 0x4000 max)
Since you are allocating A_sub and B_sub, each of size 32 * 32 * sizeof(double), you run out of local memory. The device should allow you to allocate 16 KiB, or 0x4000 bytes, of local memory without an issue.
0x4020 is 32 bytes, or 4 doubles, more than what your device allows. There are only two things I can think of that may cause the error: 1) there could be a bug in your device or drivers preventing you from allocating the full 16 KiB, or 2) you are allocating memory somewhere else in your kernel.
You will have to use a BLOCK_SIZE value less than 32 to work around this for now.
There's good news though. If you only want to hit a multiple of CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE as a work group size, BLOCK_SIZE=16 already does this for you. (16*16 = 256 = 32*8). To better take advantage of local memory, try BLOCK_SIZE=24. (576=32*18)

Effect of local_work_size on performance, and why

Hello everyone,
I am new to OpenCL and trying to explore it.
What does local_work_size do in an OpenCL program, and how does it affect performance?
I am working on an image-processing algorithm, and for my OpenCL kernel I used:
size_t local_item_size = 1;
size_t global_item_size = (int)(ceil((float)(D_can_width * D_can_height) / local_item_size)) * local_item_size; // process the entire list
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_item_size, &local_item_size, 0, NULL, NULL);
For the same kernel, when I changed
size_t local_item_size = 16;
keeping everything else the same,
I got around 4-5x faster performance.
The local-work-size, aka work-group-size, is the number of work-items in each work-group.
Each work-group is executed on a compute-unit, which is able to handle many work-items, not just one.
So when you use groups that are too small, you waste some computing power and only get coarse parallelization at the compute-unit level.
But if you have too many work-items in a group, you can also lose opportunities for parallelization, as some compute-units may be left unused while others are overloaded.
So you could test many values to find the best one, or just let OpenCL pick a good one for you by passing NULL as the local-work-size.
PS: I'd be interested in knowing the performance with OpenCL's choice compared to your previous values, so could you please run a test and post the results?
Thanks :)
