I am facing an issue with the OpenCL version of Darknet. I dug into the implementation and realized that the problem is in the call to a softmax kernel (which happens in https://github.com/ganyc717/Darknet-On-OpenCL/blob/c13fefc66a13da5805986937fccd486b2b313c24/darknet_cl/src/blas_kernels_cl.cpp#L1020). I reported it in an issue on GitHub (https://github.com/ganyc717/Darknet-On-OpenCL/issues/4), but in the meantime I am trying to understand what could be happening.
I profiled the time the algorithm takes to perform a prediction, and it increases over successive runs. Out of curiosity, I decided to reload the whole network before each run, and then the prediction time remains stable, so it seems to me that the slowdown depends on the continuous execution of the algorithm.
What is weird to me is that the part that seems to get slower over time is the call to the kernel, i.e. the call to clEnqueueNDRangeKernel. I am not an expert in OpenCL, but it does not seem logical that executing the same kernel several times gets slower. Could it be a memory issue? How could that affect the execution time? I am a bit lost; any help is appreciated.
PS: a similar issue was reported in "A weird timing issue with clEnqueueNDRangeKernel in OpenCL", but no answer is accepted there. It has a comment about the way of measuring the time, but I don't think that is my case, because the time is clearly growing.
EDIT:
I modified the code to create the command queue with CL_QUEUE_PROFILING_ENABLE. Then I added the following lines to profile the enqueue call:
cl_ulong time_start;
cl_ulong time_end;

// make sure the command has completed before reading the profiling counters
clWaitForEvents(1, &e);
clGetEventProfilingInfo(e, CL_PROFILING_COMMAND_START, sizeof(time_start), &time_start, NULL);
clGetEventProfilingInfo(e, CL_PROFILING_COMMAND_END, sizeof(time_end), &time_end, NULL);

double nanoSeconds = (double)(time_end - time_start);
printf("OpenCL execution time is: %0.3f milliseconds\n", nanoSeconds / 1000000.0);
These time measurements remain stable... which confuses me even more. It seems that the GPU run itself takes the same time, but the CPU-side measurement of the enqueue call keeps growing:
clock_t t1 = clock();
cl_event e;
cl_int status = clEnqueueNDRangeKernel(*cl->queue, kernel, 3, NULL, global_size, NULL, 0, NULL, &e);
clock_t t2 = clock();
printf("enqueue : \t %f\n", (float)(t2 - t1) / CLOCKS_PER_SEC);
Related
I am trying to calculate the throughput of my kernel, which is written in OpenCL, but I am not sure how to do that. I found a file generated after compilation that shows a throughput of 0.435 (found in the .attrb file), but I am not sure what that means. Is there any other way to find the throughput?
Throughput of a kernel in OpenCL can be calculated as:
(NumReadBytes + NumWriteBytes) / ElapsedTime
For measuring the time, use a cl_event:
double getDuration(cl_event event)
{
    cl_ulong start_time, end_time;

    // the queue must be created with CL_QUEUE_PROFILING_ENABLE, and the command
    // must have completed before the profiling counters are valid
    clWaitForEvents(1, &event);
    clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START,
                            sizeof(cl_ulong), &start_time, NULL);
    clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END,
                            sizeof(cl_ulong), &end_time, NULL);

    double total_time = (end_time - start_time) * 1e-6;  // nanoseconds -> milliseconds
    return total_time;
}
cl_event timer;
int ret = clEnqueueNDRangeKernel(cq, kernel, 1, p_global_work_offset, &global_work_size,
                                 &local_work_size, 0, NULL, &timer);
printf("G:%zu L:%zu T:%fms\n", global_work_size, local_work_size, getDuration(timer));
This is a very vague question.
Do you mean only the kernel, without loading the data?
What is the kernel going to do, on what kind of hardware are you running it, how is your data organized, and how do you manage your buffers?
Is everything in global memory? Are you also accounting for latencies? Do you need to maximize the throughput? Are you optimizing for specific hardware?
Many questions come to mind.
I am working on an OpenCL reduction and I found that AMD and Nvidia both have an example like the following kernel (this one is taken from Nvidia's website, but AMD has a similar one):
__kernel void reduce2(__global T *g_idata, __global T *g_odata, unsigned int n, __local T *sdata)
{
    // load shared mem
    unsigned int tid = get_local_id(0);
    unsigned int i = get_global_id(0);

    sdata[tid] = (i < n) ? g_idata[i] : 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    // do reduction in shared mem
    for (unsigned int s = get_local_size(0) / 2; s > 0; s >>= 1)
    {
        if (tid < s)
        {
            sdata[tid] += sdata[tid + s];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // write result for this block to global mem
    if (tid == 0) g_odata[get_group_id(0)] = sdata[0];
}
I have two questions:
The code above reduces an array to another, smaller array. I am just wondering why all the examples I have seen do the same thing instead of reducing an array directly to a single element, which is the usual meaning of "reduction" (IMHO). This should be easily achievable with an outer loop inside the kernel. Is there a special reason for this?
I have implemented this reduction and found it quite slow. Is there any optimisation I can do to improve it? I saw another example that used some unrolling to avoid the synchronisation in the loop, but I did not quite get the idea; can you explain it a bit?
The reduction problem in a multithreaded environment is a very special parallel problem. There is one chain of work that has to be done sequentially: the tree of partial sums that collapses, stride by stride in powers of two, down to element 0.
Even if you had infinite threads for processing, you would still need log2(N) passes through the array to reduce it to a single element.
In a real system your number of threads (work-items) is limited but high (~128-2048). So, in order to use them efficiently, all of them have to have something to do. But the problem becomes less and less parallel and more and more serial as the reduction shrinks, which is why these algorithms only bother with the big, parallel part and let the CPU do the rest of the reduction.
To make a long story short: you can reduce an array from 1024 to 512 elements in one pass, but you need a pass of the same kind just to go from 2 elements to 1. In that last case all threads except one are idle, an incredible waste of GPU resources (99.7% idle).
As you can see, there is no point in doing this last part of the reduction on the GPU. It is easier to simply copy the small remainder to the CPU and finish it sequentially there.
Answering your question: yes, it is slow, and it always will be. If there were a magic trick to solve it, AMD and nVIDIA would already be using it, don't you think? :)
For question 1: this kernel reduces a big array into a smaller one rather than into a single element because there is no synchronization possible between work-groups. Each work-group can reduce its portion of the array to one element, but then all of those per-group results have to be written to global memory before a new pass can be performed. This goes on until the array is small enough to be handled by a single work-group.
For question 2: there are several approaches to performing a reduction, with different performance. How to improve performance for this kind of problem is discussed in this article from the AMD resources. I hope you'll find it useful.
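To make the multi-pass idea concrete, a rough host-side sketch could look like the code below. The names (reduceKernel, bufA, bufB, queue) and the work-group size are assumptions of mine, not taken from the SDK samples:

size_t wg = 256;                        // assumed work-group size
size_t n = N;                           // number of elements to reduce
cl_mem in = bufA, out = bufB;

while (n > wg) {
    size_t groups = (n + wg - 1) / wg;
    size_t global = groups * wg;
    unsigned int count = (unsigned int)n;

    clSetKernelArg(reduceKernel, 0, sizeof(cl_mem), &in);
    clSetKernelArg(reduceKernel, 1, sizeof(cl_mem), &out);
    clSetKernelArg(reduceKernel, 2, sizeof(unsigned int), &count);
    clSetKernelArg(reduceKernel, 3, wg * sizeof(float), NULL);   // the __local sdata argument

    clEnqueueNDRangeKernel(queue, reduceKernel, 1, NULL, &global, &wg, 0, NULL, NULL);

    n = groups;                             // one partial sum per work-group survives
    cl_mem tmp = in; in = out; out = tmp;   // ping-pong the buffers
}
// read the remaining n partial sums back and finish the sum on the CPU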
I am writing OpenCL code to find the shortest paths from each node to the others in a graph using BFS. (Here are the details of what I am doing:
Shortest paths by BFS, porting a code from CUDA to openCL
and here is how I split the data that I pass to clEnqueueNDRangeKernel:
size_t global_size, local_size;
local_size = 1024;
global_size = ceil(e_count / (float)local_size) * local_size;

cl_event sync1;
err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                             &global_size, &local_size, 0, NULL, &sync1); // wait for this to finish (to synchronize)
err = clWaitForEvents(1, &sync1);
The code works well with a number of edges <= 50000 (though considerably slower than its equivalent CPU version). When I increased the number of edges, the program just exited and gave error -58 (after clEnqueueNDRangeKernel).
I am using an NVIDIA GeForce 630M.
How can I figure out what happened and how to fix the problem?
Best Regards
Error -58 is CL_INVALID_EVENT (as you can see in cl.h), and it isn't returned by clEnqueueNDRangeKernel, only by clWaitForEvents. So you're probably only checking for errors on the latter function. In order to find the actual error, you should check the return value of clEnqueueNDRangeKernel against the different status constants it can return. In fact, you should do this for all host-side OpenCL functions, or else it will be very difficult to determine exactly what kind of errors are occurring.
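For example, something along these lines (purely illustrative, reusing the variable names from your snippet):

cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                    &global_size, &local_size, 0, NULL, &sync1);
if (err != CL_SUCCESS) {
    // e.g. CL_OUT_OF_RESOURCES == -5, CL_INVALID_WORK_GROUP_SIZE == -54
    fprintf(stderr, "clEnqueueNDRangeKernel failed with error %d\n", err);
    exit(EXIT_FAILURE);
}
err = clWaitForEvents(1, &sync1);
if (err != CL_SUCCESS) {
    fprintf(stderr, "clWaitForEvents failed with error %d\n", err);
    exit(EXIT_FAILURE);
}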
In this specific case, I bet you have a memory related error, such as CL_OUT_OF_RESOURCES (not enough local or private memory for your kernels).
Hope this helps.
Hello everyone, I am new to OpenCL and trying to learn more about it.
What is the role of local_work_size in an OpenCL program, and how does it affect performance?
I am working on an image processing algorithm, and for my OpenCL kernel I used:
size_t local_item_size = 1;
size_t global_item_size = (int) (ceil((float)(D_can_width*D_can_height)/local_item_size))*local_item_size; // Process the entire lists
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,&global_item_size, &local_item_size, 0, NULL, NULL);
and for the same kernel, when I changed it to
size_t local_item_size = 16;
keeping everything else the same, I got around 4-5 times faster performance.
The local-work-size, aka work-group-size, is the number of work-items in each work-group.
Each work-group is executed on a compute-unit which is able to handle a bunch of work-items, not only one.
So when you use groups that are too small, you waste some computing power and only get coarse parallelization at the compute-unit level.
But if you have too many work-items in a group, you can also lose some opportunity for parallelization, because some compute-units may be left unused while others are overloaded.
So you could test many values to find the best one, or just let OpenCL pick a good one for you by passing NULL as the local-work-size.
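Concretely, letting the implementation choose just means passing NULL for the local size in your call, e.g.:

// same call as in the question, but the implementation picks the work-group size
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
                             &global_item_size, NULL, 0, NULL, NULL);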
PS: I'd be interested in knowing the performance with OpenCL's choice compared to your previous values, so could you please run that test and post the results?
Thanks :)
I'm taking my first steps in OpenCL (and CUDA) for my internship. All nice and well, I now have working OpenCL code, but the computation times are way too high, I think. My guess is that I'm doing too much I/O, but I don't know where that could be.
The code for the main is at http://pastebin.com/i4A6kPfn, and the kernel at http://pastebin.com/Wefrqifh. I start measuring time after segmentPunten(segmentArray, begin, eind); has returned, and I stop measuring after the last clEnqueueReadBuffer.
Computation time on an Nvidia GT440 is 38.6 seconds, on a GT555M 35.5 seconds, on an Athlon II X4 5.6 seconds, and on an Intel P8600 6 seconds.
Can someone explain this to me? Why are the computation times so high, and what solutions are there?
What it is supposed to do (short version): calculate how much noise load is produced by an airplane that is passing by.
Long version: there are several Observer Points (OPs), which are the points at which the sound of a passing airplane is measured. The flight path is segmented into 10,000 segments; this is done in the function segmentPunten. The double for loop in the main gives each OP a coordinate. There are two kernels. The first one calculates the distance from a single OP to a single segment; this is saved in the array "afstanden". The second kernel calculates the sound load in an OP from all the segments.
Just eyeballing your kernel, I see this:
kernel void SEL(global const float *afstanden, global double *totaalSEL,
                const int aantalSegmenten)
{
    // ...
    for (i = 0; i < aantalSegmenten; i++) {
        double distance = afstanden[threadID * aantalSegmenten + i];
        // ...
    }
    // ...
}
It looks like aantalSegmenten is being set to 1000. You have a loop in each kernel that accesses global memory 1000 times. Without crawling through the code, I'm guessing that many of these accesses overlap when considering your computation as a whole. Is that the case? Will two work-items access the same global memory? If so, you will see a potentially huge win on the GPU from rewriting your algorithm to partition the work such that you read each piece of global memory only once, saving it in local memory. After that, each work-item in the work-group that needs that location can read it quickly.
As an aside, the CL specification allows you to omit the leading __ from CL keywords like global and kernel. I don't think many newcomers to CL realize that.
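Here is a generic sketch of the local-memory pattern described above (it also uses the prefix-less keywords from the aside); it is not your actual kernel: if every work-item in a group needs the same n values, the group can stage them in local memory once, assuming they fit, and read them from there:

kernel void example(global const float *data, global float *out,
                    const int n, local float *shared)
{
    int lid = get_local_id(0);
    int lsz = get_local_size(0);
    int gid = get_global_id(0);

    // cooperative copy: the whole group loads the n shared values once
    for (int j = lid; j < n; j += lsz)
        shared[j] = data[j];
    barrier(CLK_LOCAL_MEM_FENCE);

    // every work-item now reads from fast local memory instead of global memory
    float acc = 0.0f;
    for (int j = 0; j < n; j++)
        acc += shared[j];
    out[gid] = acc;
}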
Before optimizing further, you should first get an understanding of what is taking all that time. Is it the kernel compiles, the data transfers, or the actual kernel execution?
As mentioned above, you can get rid of the kernel compiles by caching the results. I believe some OpenCL implementations (the Apple one, at least) already do this automatically. With others, you may need to do the caching manually. Here are instructions for the caching.
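A rough sketch of what the manual caching can look like (single device, error checking and the actual file I/O omitted; this is an assumption about the approach, not code from the linked instructions):

// after the first successful clBuildProgram: fetch the device binary and store it
size_t bin_size;
clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES, sizeof(bin_size), &bin_size, NULL);
unsigned char *bin = (unsigned char *)malloc(bin_size);
clGetProgramInfo(program, CL_PROGRAM_BINARIES, sizeof(bin), &bin, NULL);
// ... write bin / bin_size to a cache file keyed by device and source hash ...

// on later runs: create the program from the cached binary instead of recompiling the source
cl_int bin_status, err;
cl_program cached = clCreateProgramWithBinary(context, 1, &device, &bin_size,
                                              (const unsigned char **)&bin,
                                              &bin_status, &err);
clBuildProgram(cached, 1, &device, NULL, NULL, NULL);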
If the performance bottleneck is the kernel itself, you can probably get a major speed-up by organizing the 'afstanden' array lookups differently. Currently, when a block of threads performs a read from memory, the addresses are spread out through memory, which is a real killer for GPU performance. You'd ideally want to index the array with something like afstanden[ndx*NUM_THREADS + threadID], which would make accesses from a work-group load a contiguous block of memory. This is much faster than the current, essentially random, memory lookups.
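As a fragment, the difference is only in the index expression; NUM_THREADS here stands for the total number of work-items (an assumption on my part), and the transposed layout of course has to be produced on the host side first:

// current layout: consecutive work-items read addresses aantalSegmenten apart (uncoalesced)
distance = afstanden[threadID * aantalSegmenten + ndx];

// transposed layout: consecutive work-items read consecutive addresses (coalesced)
distance = afstanden[ndx * NUM_THREADS + threadID];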
First of all, you are measuring not the computation time but the whole kernel read-in/compile/execute mumbo-jumbo. To do a fair comparison, measure the computation time from the first "non-static" part of your program (for example, from the first clSetKernelArg to the last clEnqueueReadBuffer).
If the execution time is still too high, then you can use some kind of profiler (such as the Visual Profiler from NVidia) and read the OpenCL Best Practices guide, which is included in the CUDA Toolkit documentation.
As for the raw kernel execution time: consider (and measure) whether you really need double precision for your calculations, because double precision is artificially slowed down on consumer-grade NVidia cards.
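If single precision turns out to be accurate enough for the noise-load sums, the change is essentially switching the kernel arguments and host buffers from double to float. Purely as an illustration (the body below is a stand-in, not your actual SEL computation):

kernel void SEL_sp(global const float *afstanden, global float *totaalSEL,
                   const int aantalSegmenten)
{
    int threadID = get_global_id(0);
    float sum = 0.0f;                  // float accumulator instead of double
    for (int i = 0; i < aantalSegmenten; i++)
        sum += afstanden[threadID * aantalSegmenten + i];
    totaalSEL[threadID] = sum;
}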