Queued kernels slower than expected on AMD GPUs only - OpenCL

I am running a benchmark like the one shown below:
CHECK( context = clCreateContext(props, 1, &device, NULL, NULL, &_err); );
CHECK( queue = clCreateCommandQueue(context, device, 0, &_err); );
#define SYNC() clFinish(queue)
#define LAUNCH(glob, loc, kernel) OCL(clEnqueueNDRangeKernel(queue, kernel, 2, \
                                                             NULL, glob, loc,  \
                                                             0, NULL, NULL))
/* Build program, set arguments over here */
START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
}
SYNC();
STOP;
printf("Time taken (plus) : %lf\n", uSec / iter);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (minus): %lf\n", uSec / iter);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (both) : %lf\n", uSec / iter);
The results look weird:
Time taken (plus) : 31.450000
Time taken (minus): 28.120000
Time taken (both) : 2256.380000
START and STOP are just macros that start and stop a timer, recording the elapsed time in microseconds in uSec; the other relevant macros are shown above.
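For reference, a plausible definition of the timing macros (not taken from the original post; it assumes uSec holds the elapsed time in microseconds as a double):
/* Hypothetical timer macros based on gettimeofday(). */
#include <sys/time.h>
static struct timeval _t0, _t1;
static double uSec;
#define START gettimeofday(&_t0, NULL)
#define STOP  do { gettimeofday(&_t1, NULL); \
                   uSec = (_t1.tv_sec - _t0.tv_sec) * 1e6 + \
                          (_t1.tv_usec - _t0.tv_usec); } while (0)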
I am not sure why queuing up the kernels is slowing them down (and only on AMD GPUs)!
EDIT: I am using a Radeon 7970.
EDIT: Both kernels operate on independent memory. Also, here is the system information.
OS: Ubuntu 11.10
fglrxinfo:
display: :0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon HD 7900 Series
OpenGL version string: 4.2.11762 Compatibility Profile Context

I think the answer has to do with the caching of data on newer GPUs (specifically the Radeon 7970, which uses the Graphics Core Next (GCN) architecture).
One of the advantages of this architecture is its caching capabilities (somewhat close to CPU caching at this point). If you perform calls like this:
PLUS
PLUS
PLUS
....
Then the memory stays resident in the inner caches of the GPU. On the other hand, if you make calls like this:
PLUS
MINUS
PLUS
MINUS
...
Where the two kernels have different memory objects associated with them, the data is kicked out of the caches on each CU, so it has to be fetched again from the much slower global memory.
Two easy ways to test whether this is the case:
1) Run only Pluses with a varying number of iterations. As the number of iterations increases, the average time will go down because the cost of the first run (which brings the data in) is amortized. You should also notice that all calls after the first take roughly the same time.
2) Make the Plus and Minus kernels run on the same memory objects. If the slowdown is caused by the caching of memory objects, then the overall run time should be the average of the individual running times of PLUS and MINUS (depending perhaps on experiment 1); a sketch of this test follows below.
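A minimal sketch of test 2, reusing the macros and variables from the question; buf, nbytes and the kernel argument index 0 are illustrative assumptions, not taken from the original post:
/* Point both kernels at the same buffer so they share cached data
   (buf, nbytes and argument index 0 are assumptions). */
cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE, nbytes, NULL, &_err);
clSetKernelArg(plus_kernel,  0, sizeof(cl_mem), &buf);
clSetKernelArg(minus_kernel, 0, sizeof(cl_mem), &buf);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (both, shared buffer): %lf\n", uSec / iter);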
Let me know if you find out if this is actually the case!

Related

Equivalent of cudaSetDevice in OpenCL?

I have a function that I wrote for one GPU; it runs for about 10 seconds with one set of arguments, and I have a very long list of argument sets to go through. I would like to use both of my AMD GPUs, so I have some wrapper code that launches two threads and runs my function on thread 0 with gpu_idx 0 and on thread 1 with gpu_idx 1.
I have a CUDA version for another machine, where I just run checkCudaErrors(cudaSetDevice((unsigned int)device_id)); to get the desired behavior.
With OpenCL I have tried the following:
void createDevice(int device_idx)
{
    cl_device_id *devices;

    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    HANDLE_CLERROR_G(ret);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_ALL, 0, NULL, &ret_num_devices);
    HANDLE_CLERROR_G(ret);

    devices = (cl_device_id*)malloc(ret_num_devices*sizeof(cl_device_id));
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_ALL, ret_num_devices, devices, &ret_num_devices);
    HANDLE_CLERROR_G(ret);

    if (device_idx >= ret_num_devices)
    {
        fprintf(stderr, "Found %i devices but asked for device at index %i\n", ret_num_devices, device_idx);
        exit(1);
    }

    device_id = devices[device_idx];

    // usleep(((unsigned int)(500000*(1-device_idx)))); // without this line multithreaded 2 gpu execution does not work.
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
    HANDLE_CLERROR_G(ret);
}
context is a static variable in my *.c file that I later use again when I create the kernel.
This code works when I run with only device_idx 0 or only device_idx 1, and even when I manually run the executable "simultaneously" in two terminal windows with device_idx 0 and device_idx 1.
BUT, there is something about the threads being "too" concurrent that prevents this code from working. In fact, depending on the amount of sleep (commented above), I get different behavior: sometimes both threads do work on GPU 0, sometimes both do work on GPU 1, and sometimes the threads are balanced across both GPUs. If I sleep for too little time I get CL_INVALID_CONTEXT, and if I don't sleep at all I get CL_INVALID_KERNEL_NAME.
As I said, I don't get any errors when running on GPU 0 or GPU 1 alone, only when spawning multiple threads that call this code (as an *.so with an extern "C" function from Go) simultaneously with device_idx 0 in thread 0 and device_idx 1 in thread 1.
How can I solve this? I am attached to the idea of having an executable that works on one GPU, for which I specify which GPU to use, and that specification should be respected.
What is the proper way to pick the device when both devices need to be used, one completely separate from the other?
Whoops! Instead of saving device_id into a static variable, I now return it from the above code and use it as a local variable, and everything works as expected and is thread safe.
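A minimal sketch of that change (the out_context parameter and the cleanup are illustrative additions, not taken from the original code):
/* Return the device instead of storing it in file-scope static state. */
cl_device_id createDevice(int device_idx, cl_context *out_context)
{
    cl_platform_id platform_id;
    cl_uint ret_num_platforms, ret_num_devices;
    cl_device_id *devices, device_id;
    cl_int ret;

    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    HANDLE_CLERROR_G(ret);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_ALL, 0, NULL, &ret_num_devices);
    HANDLE_CLERROR_G(ret);

    devices = (cl_device_id*)malloc(ret_num_devices * sizeof(cl_device_id));
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_ALL, ret_num_devices, devices, &ret_num_devices);
    HANDLE_CLERROR_G(ret);

    if (device_idx >= (int)ret_num_devices)
    {
        fprintf(stderr, "Found %u devices but asked for device at index %i\n", ret_num_devices, device_idx);
        exit(1);
    }

    device_id = devices[device_idx];   /* local, not static */
    *out_context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
    HANDLE_CLERROR_G(ret);

    free(devices);
    return device_id;
}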

XC8 builds font tables from top ROM

I wrote a bare-bones program template in XC8 (1.37) that I use to develop and test new GLCD functions for the 18F family. Programming is done via a PICkit 3. Since I need to quickly reprogram the code several times, it is really important that programming is as fast as possible.
Typically, the code size is around 2K and it takes less than 10 seconds to program.
Everything is fine until I have to use a font table, defined as:
const char font8[] = {....
Now, with just $400 bytes added, the compiler places the table at the end of ROM, and programming the 64K of memory takes more than a minute.
Is there any way to avoid this?
I tried manually limiting the memory range in the MPLAB X options, but this is annoying and a little unsafe (sometimes part of the code is truncated).
A while back I had to write some code for emissions testing, where I needed to copy data between extreme ends of RAM. To do that I needed to specify the exact memory addresses. You can also use the C extension __at() construct. http://ww1.microchip.com/downloads/en/DeviceDoc/50002053F.pdf#page=27
int scanMode __at(0x200);
const char keys[] __at(123) = { 'r', 's', 'u', 'd' };

int modify(int x) __at(0x1000)
{
    return x * 2 + 3;
}
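Applied to the font table from the question, a sketch might look like the following; 0x0800 is only a placeholder address, so pick something just above your code to keep the programmed range small and contiguous:
/* Pin the table near the program code instead of the top of ROM, so the
   programmer only has to write a small range. Address is illustrative. */
const char font8[] __at(0x0800) = {
    0x00, 0x00, 0x00, 0x00, 0x00,   /* glyph data ... */
};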

Parallel copy and OpenCL kernel execution

I would like to implement an image filtering algorithm using OpenCL but the image size is very large (4096 x 4096). I understand that the copy time to the OpenCL device may take too long.
Do you think it makes sense to address this problem by using a parallel copy in combination with OpenCL kernel execution?
E.g., below is my approach:
1) Split the full image into 2 parts.
2) Copy the first half to the device.
3) Execute the image filtering kernel on the device, then copy the 2nd half of the image to the device.
4) Block the kernel execution until the first half completes, then call the kernel again to process the 2nd part.
5) Block until the 2nd part finishes.
OpenCL's thread of execution is completely independent of your application, so there is no need to "wait" after each call. Just flush all the commands to OpenCL and it will schedule them properly.
The only requirement is to have two queues, in order to be able to run commands in parallel: an I/O queue and an execution queue. A single queue (even in out-of-order mode) cannot be relied on to run two operations in parallel.
Here is one example approach with events; you can call clFlush() on the queues just after doing the enqueues in order to speed them up.
//Create 2 queues (at creation only!)
mQueueIO = cl::CommandQueue(context, device[0], 0);
mQueueRun = cl::CommandQueue(context, device[0], 0);

//Every time you run your image filter
//Queue the 2 writes
cl::Event wev1; //Event to know when the write finishes
mQueueIO.enqueueWriteBuffer(ImageBufferCL, CL_FALSE, 0, size/2, imageCPU, NULL, &wev1);
cl::Event wev2; //Event to know when the write finishes
mQueueIO.enqueueWriteBuffer(ImageBufferCL, CL_FALSE, size/2, size/2, imageCPU+size/2, NULL, &wev2);

//Queue the 2 runs (with the proper dependencies)
std::vector<cl::Event> wait;
wait.push_back(wev1);
cl::Event ev1; //Event to track the finish of the run command
mQueueRun.enqueueNDRangeKernel(kernel, cl::NDRange(0), cl::NDRange(size/2), cl::NDRange(localsize), &wait, &ev1);
wait[0] = wev2;
cl::Event ev2; //Event to track the finish of the run command
mQueueRun.enqueueNDRangeKernel(kernel, cl::NDRange(size/2), cl::NDRange(size/2), cl::NDRange(localsize), &wait, &ev2);

//Read back the data when it has finished
std::vector<cl::Event> rev(2);
wait[0] = ev1;
mQueueIO.enqueueReadBuffer(ImageBufferCL, CL_FALSE, 0, size/2, imageCPU, &wait, &rev[0]);
wait[0] = ev2;
mQueueIO.enqueueReadBuffer(ImageBufferCL, CL_FALSE, size/2, size/2, imageCPU + size/2, &wait, &rev[1]);

rev[0].wait();
rev[1].wait();
Notice how I create two events for the writes, which are the wait events of the kernel runs, and two events for the kernel runs, which are the wait events for the reads.
In the last part I create another two events for the reads, but they are not really needed; you could use blocking reads instead.
Try using out-of-order queues - most implementations' hardware should support them. You'll want to use the global offset parameter in your kernels along with get_global_id() where applicable. At some point you will get diminishing returns with a division strategy like this, but there should exist a split count that gives a good payoff in latency reduction - I would guess [2, 100] is a good interval to brute-force profile. Be aware that only one kernel can write to a memory buffer at a time, and make sure the input buffer is const (read-only). Be aware that you must also merge the results from the N buffer splits into one output with a final kernel - this means you will effectively write all pixels twice to GDS. OpenCL 2.0 may be able to save us all these divided writes with its image types, if you are able to use it.
cl::CommandQueue queue(context, device, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE);
cl::Event last_event;
std::vector<cl::Event> events;
// Initialize with however many splits you have; ensure there is enough space
// for what is written, and update the kernel to write only to its own region.
std::vector<cl::Buffer> output_buffers;

// You might approach finer granularity with even more splits -
// just make sure the kernel uses the global offset,
// in which case adjust this code into a loop.
set_args(kernel, image_input, output_buffers[0]);
queue.enqueueNDRangeKernel(kernel, cl::NDRange(0, 0),
                           cl::NDRange(cols * local_size[0], (rows/2) * local_size[1]),
                           cl::NDRange(local_size[0], local_size[1]), NULL, &last_event);
events.push_back(last_event);

set_args(kernel, image_input, output_buffers[1]);
queue.enqueueNDRangeKernel(kernel, cl::NDRange(0, (rows/2) * local_size[1]),
                           cl::NDRange(cols * local_size[0], (rows - rows/2) * local_size[1]),
                           cl::NDRange(local_size[0], local_size[1]), NULL, &last_event);
events.push_back(last_event);

// The merge kernel waits for both halves to finish.
set_args(merge_buffers_kernel, output_buffers...);
queue.enqueueNDRangeKernel(merge_buffers_kernel, cl::NDRange(0, 0),
                           cl::NDRange(cols * local_size[0], rows * local_size[1]),
                           cl::NDRange(local_size[0], local_size[1]), &events, &last_event);
events.push_back(last_event);
cl::Event::waitForEvents(events);

Why doesn't the Nvidia OpenCL compiler (nvcc) reuse registers?

I'm doing a small OpenCL benchmark using Nvidia drivers;
my kernel performs 1024 fused multiply-adds and stores the result in an array:
#define FLOPS_MACRO_1(x) { (x) = (x) * 0.99f + 10.f; } // Multiply-add
#define FLOPS_MACRO_2(x) { FLOPS_MACRO_1(x) FLOPS_MACRO_1(x) }
#define FLOPS_MACRO_4(x) { FLOPS_MACRO_2(x) FLOPS_MACRO_2(x) }
#define FLOPS_MACRO_8(x) { FLOPS_MACRO_4(x) FLOPS_MACRO_4(x) }
// more recursive macros ...
#define FLOPS_MACRO_1024(x) { FLOPS_MACRO_512(x) FLOPS_MACRO_512(x) }
__kernel void ocl_Kernel_FLOPS(int iNbElts, __global float *pf)
{
    for (unsigned i = get_global_id(0); i < iNbElts; i += get_global_size(0))
    {
        float f = (float) i;
        FLOPS_MACRO_1024(f)
        pf[i] = f;
    }
}
But when I look in the PTX generated, I see this:
.entry ocl_Kernel_FLOPS(
.param .u32 ocl_Kernel_FLOPS_param_0,
.param .u32 .ptr .global .align 4 ocl_Kernel_FLOPS_param_1
)
{
.reg .f32 %f<1026>; // 1026 float registers !
.reg .pred %p<3>;
.reg .s32 %r<19>;
ld.param.u32 %r1, [ocl_Kernel_FLOPS_param_0];
// some more code unrelated to the problem
// ...
BB1_1:
and.b32 %r13, %r18, 65535;
cvt.rn.f32.u32 %f1, %r13;
fma.rn.f32 %f2, %f1, 0f3F7D70A4, 0f41200000;
fma.rn.f32 %f3, %f2, 0f3F7D70A4, 0f41200000;
fma.rn.f32 %f4, %f3, 0f3F7D70A4, 0f41200000;
fma.rn.f32 %f5, %f4, 0f3F7D70A4, 0f41200000;
// etc
// ...
If I am correct, the PTX uses 1026 float registers to perform the 1024 operations and never reuses a register, even though it could perform all the multiply-add operations using only 2 registers. 1026 is far above the maximum number of registers a thread is allowed to have (according to the specs), so I guess this ends up in memory spilling.
Is it a compiler bug or am I totally missing something ?
I am using nvcc version 6.5 on a Quadro K2000 GPU.
EDIT
Actually I did miss something in the specs:
"Since PTX supports virtual registers, it is quite common for a compiler frontend to generate
a large number of register names. Rather than require explicit declaration of every name,
PTX supports a syntax for creating a set of variables having a common prefix string
appended with integer suffixes. For example, suppose a program uses a large number, say
one hundred, of .b32 variables, named %r0, %r1, ..., %r99"
The PTX file format is intended to describe a virtual machine and instruction set architecture:
PTX defines a virtual machine and ISA for general purpose parallel thread execution. PTX programs are translated at install time to the target hardware instruction set. The PTX-to-GPU translator and driver enable NVIDIA GPUs to be used as programmable parallel computers.
So the PTX output that you are obtaining there is not a form of "GPU assembler". It is only an intermediate representation, intended to be capable of describing virtually any form of parallel computation.
The PTX representation is then compiled into actual binaries for the respective target GPU. This is important in order to be able to abstract from the actual architecture; specifically, regarding your example, it should be possible to use the same PTX representation of a program regardless of the number of registers that are available on a specific target machine. The 1026 "registers" that you see there are virtual registers, and in the end they may be mapped to the (few) real hardware registers that are actually available. You may add the --ptxas-options=-v argument to NVCC during compilation to obtain additional information about the register usage.
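Since the kernel here is built through the OpenCL API rather than compiled with nvcc directly, a rough equivalent is sketched below; it assumes the driver exposes NVIDIA's cl_nv_compiler_options extension, whose -cl-nv-verbose option makes the build log include the post-PTX register usage reported by ptxas:
/* Sketch: request verbose ptxas output (assumes cl_nv_compiler_options)
   and print the build log, which then contains the real register counts. */
clBuildProgram(program, 1, &device, "-cl-nv-verbose", NULL, NULL);

size_t log_size = 0;
clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
char *log = (char *)malloc(log_size);
clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
printf("%s\n", log);
free(log);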
(This is roughly the same idea as the one behind LLVM - namely, to have a representation that can be optimized and reasoned about while abstracting both from the original source code and from the actual target architecture.)

OpenMP and MPI hybrid program

I have a machine with 8 processors. I want to alternate between using OpenMP and MPI in my code like this:
OpenMP phase:
ranks 1-7 wait on a MPI_Barrier
rank 0 uses all 8 processors with OpenMP
MPI phase:
rank 0 reaches barrier and all ranks use one processor each
So far, I've done:
set I_MPI_WAIT_MODE to 1 so that ranks 1-7 don't use the CPU while waiting at the barrier.
called omp_set_num_threads(8) on rank 0 so that it launches 8 OpenMP threads.
This almost works: rank 0 does launch 8 threads, but they are all confined to one processor. During the OpenMP phase I get 8 threads from rank 0 running on one processor while all the other processors sit idle.
How do I tell MPI to allow rank 0 to use the other processors? I am using Intel MPI, but could switch to OpenMPI or MPICH if needed.
The following code shows an example of how to save the CPU affinity mask before the OpenMP part, alter it to allow all CPUs for the duration of the parallel region, and then restore the previous mask afterwards. The code is Linux-specific, and it makes no sense unless you enable process pinning in the MPI library: in Open MPI, pinning is activated by passing --bind-to-core or --bind-to-socket to mpiexec; in Intel MPI it is deactivated by setting I_MPI_PIN to disable (the default on 4.x is to pin processes).
#define _GNU_SOURCE
#include <sched.h>
...
cpu_set_t *oldmask, *mask;
size_t size;
int nrcpus = 256; // 256 cores should be more than enough
int i;

// Save the old affinity mask
oldmask = CPU_ALLOC(nrcpus);
size = CPU_ALLOC_SIZE(nrcpus);
CPU_ZERO_S(size, oldmask);
if (sched_getaffinity(0, size, oldmask) == -1) { error }

// Temporarily allow running on all processors
mask = CPU_ALLOC(nrcpus);
CPU_ZERO_S(size, mask); // clear before setting bits
for (i = 0; i < nrcpus; i++)
    CPU_SET_S(i, size, mask);
if (sched_setaffinity(0, size, mask) == -1) { error }

#pragma omp parallel
{
}

CPU_FREE(mask);

// Restore the saved affinity mask
if (sched_setaffinity(0, size, oldmask) == -1) { error }
CPU_FREE(oldmask);
...
You can also tweak the pinning arguments of the OpenMP run-time. For GCC/libgomp the affinity is controlled by the GOMP_CPU_AFFINITY environment variable, while for Intel compilers it is KMP_AFFINITY. You can still use the code above if the OpenMP run-time intersects the supplied affinity mask with that of the process.
Just for the sake of completeness - saving, setting and restoring the affinity mask on Windows:
#include <windows.h>
...
HANDLE hCurrentProc, hDupCurrentProc;
DWORD_PTR dwpSysAffinityMask, dwpProcAffinityMask;

// Obtain a usable handle to the current process
hCurrentProc = GetCurrentProcess();
DuplicateHandle(hCurrentProc, hCurrentProc, hCurrentProc,
                &hDupCurrentProc, 0, FALSE, DUPLICATE_SAME_ACCESS);

// Get the old affinity mask
GetProcessAffinityMask(hDupCurrentProc,
                       &dwpProcAffinityMask, &dwpSysAffinityMask);

// Temporarily allow running on all CPUs in the system affinity mask
SetProcessAffinityMask(hDupCurrentProc, dwpSysAffinityMask);

#pragma omp parallel
{
}

// Restore the old affinity mask
SetProcessAffinityMask(hDupCurrentProc, dwpProcAffinityMask);

CloseHandle(hDupCurrentProc);
...
Should work with a single processor group (up to 64 logical processors).
Thanks all for the comments and answers. You are all right. It's all about the "PIN" option.
To solve my problem, I just had to:
I_MPI_WAIT_MODE=1
I_MPI_PIN_DOMAIN=omp
Simple as that. Now all processors are available to all ranks.
The option
I_MPI_DEBUG=4
shows which processors each rank gets.
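As an extra sanity check (not part of the original answer), each rank can also print its own affinity mask from inside the program, using the same sched_getaffinity() call as in the Linux snippet above:
/* Hypothetical helper: print which CPUs the calling rank may run on. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

void print_affinity(int rank)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        printf("rank %d may run on CPUs:", rank);
        for (int i = 0; i < CPU_SETSIZE; i++)
            if (CPU_ISSET(i, &mask))
                printf(" %d", i);
        printf("\n");
    }
}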
