Restrict number of GPUs for AMD OpenCL - opencl

Is there a way to restrict the number of GPUs used by AMD OpenCL platforms? For NVIDIA platforms one can simply set the environment variable CUDA_VISIBLE_DEVICES to limit the set of GPUs available to OpenCL.
EDIT: I know that I can create a context with a reduced set of devices. However, I am looking for ways to control the number of devices exposed by the OpenCL platform from "outside" the application.

AMD have the GPU_DEVICE_ORDINAL environment variable for both Windows and Linux. This allows you to specify the indices of the GPUs that you want to be visible from your OpenCL application. For example:
jprice@nowai:~/benchmark$ python benchmark.py -clinfo
Platform 0: AMD Accelerated Parallel Processing
-> Device 0: Tahiti
-> Device 1: Tahiti
-> Device 2: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz
jprice@nowai:~/benchmark$ export GPU_DEVICE_ORDINAL=0
jprice@nowai:~/benchmark$ python benchmark.py -clinfo
Platform 0: AMD Accelerated Parallel Processing
-> Device 0: Tahiti
-> Device 1: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz
A more detailed description can be found in the AMD APP OpenCL Programming Guide (currently in section 2.4.3 "Masking Visible Devices"):
http://developer.amd.com/wordpress/media/2013/07/AMD_Accelerated_Parallel_Processing_OpenCL_Programming_Guide-rev-2.7.pdf

The OpenCL host API lets you limit the number of devices when you retrieve the device ID list:
cl_int clGetDeviceIDs(
    cl_platform_id platform,
    cl_device_type device_type,
    cl_uint num_entries,   // Caps the number of device IDs returned
    cl_device_id *devices,
    cl_uint *num_devices)
The device id pointer *devices can be used to create the context with a specific number of devices.
Here is what the spec says
num_entries is the number of cl_device entries that can be added to
devices. If devices is not NULL, the num_entries must be greater than
zero. devices returns a list of OpenCL devices found. The cl_device_id
values returned in devices can be used to identify a specific OpenCL
device. If devices argument is NULL, this argument is ignored. The
number of OpenCL devices returned is the minimum of the value
specified by num_entries or the number of OpenCL devices whose type
matches device_type. num_devices returns the number of OpenCL devices
available that match device_type. If num_devices is NULL, this
argument is ignored
cl_context clCreateContext(
    const cl_context_properties *properties,
    cl_uint num_devices,   // Number of devices
    const cl_device_id *devices,
    void (CL_CALLBACK *pfn_notify)(
        const char *errinfo,
        const void *private_info, size_t cb,
        void *user_data
    ),
    void *user_data,
    cl_int *errcode_ret)
Each device is then addressed through its own command queue.
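For example, here is a minimal host-side sketch of this approach (error handling omitted; the platform index and the limit of two GPUs are arbitrary assumptions for illustration):
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    // Ask for at most 2 GPU devices: num_entries caps how many IDs are stored in `devices`.
    cl_device_id devices[2];
    cl_uint numAvailable = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 2, devices, &numAvailable);

    // num_devices reports how many matching devices exist in total,
    // so clamp it to the number of IDs we actually asked for.
    cl_uint numUsed = (numAvailable < 2) ? numAvailable : 2;

    // Create the context only over the restricted set of devices.
    cl_context context = clCreateContext(NULL, numUsed, devices, NULL, NULL, NULL);
    printf("Context created with %u device(s)\n", numUsed);

    clReleaseContext(context);
    return 0;
}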

There is not a portable solution defined by the OpenCL specification.
NVIDIA has the solution you mentioned. I don't think AMD has a standard; your OpenCL programs will have to come up with a way to share the available devices.
Note that AMD does have OpenCL extensions for "device fission" (functionality that became part of core OpenCL 1.2), which is used for splitting up a single device among multiple programs (but that is different from what you are asking).

Related

OpenCL (in-kernel) callable SVD kernel code?

I'm studying how to offload some quite heavy calculations on GPUs.
Although on my machine I have an NVIDIA RTX GPU, I would like to avoid using
CUDA in order to develop something portable on other GPUs as well (at least in its core).
Thus the choice of OpenCL.
Now, my biggest concern is that, within the core that is suitable for offloading, I make intensive use of LAPACK's SVD implementation.
However, OpenCL kernel code cannot:
Be linked to external libraries. There is a "workaround" using clEnqueueNativeKernel(), but it does not seem to apply in this case (the call happens within a kernel itself), not to mention that it is not very portable, since the device needs to support the CL_EXEC_NATIVE_KERNEL capability;
Accept function pointers as kernel arguments.
So, does anyone know of an open-source SVD implementation in OpenCL kernel code that can be called from within a parent OpenCL kernel?
I googled and found several libraries/implementations of SVD for GPU offload, but I couldn't see how to "embed" them into an OpenCL kernel (they all seem to be implementations meant to be launched from host code). If I'm wrong, please correct me. Any help is more than welcome.
Implement an event-callback API between host and kernel using only atomic functions such that:
void callExternalLib(volatile __global int * ptr)
{
    atomic_inc(ptr); // signal the host that the external call is requested
    // if clWaitForEvents is not supported in the kernel, spin on the flag
    while(atomic_add(ptr, 0) == 1)
    {
        // somehow wait until signal 0 is received
    }
    dynamicParallelismLaunchRestOfTheAlgorithm();
}
__kernel void test(volatile __global int * communication, __global int * data)
{
    callExternalLib(communication);
}
// at the same time on the host, with a dedicated event thread
// (if OpenCL events do not work between GPU and host);
// here `ptr` is a std::atomic<int> view of the same shared (SVM) memory
while(ptr.load() == 0)
{
    std::this_thread::yield();
}
if(ptr.load() == CALL_SVD)
{
    clMagmaCopyToGraphicsCard(); // not required if the buffer handle can be shared
    clMagmaComputeOnGPU();
    clMagmaCopyToHost();         // not required if the buffer handle can be shared
    copyToYourOpenCLBuffer();    // not required if the buffer handle can be shared
    ptr--;                       // inform the kernel's threads that the clMAGMA call has finished
}
From https://man.opencl.org/atomic_store.html:
With fine-grained system SVM, sharing happens at the granularity of
individual loads and stores anywhere in host memory. Memory
consistency is always guaranteed at synchronization points, but to
obtain finer control over consistency, the OpenCL atomics functions
may be used to ensure that the updates to individual data values made
by one unit of execution are visible to other execution units. In
particular, when a host thread needs fine control over the consistency
of memory that is shared with one or more OpenCL devices, it must use
atomic and fence operations that are compatible with the C11 atomic
operations.
I don't know if your graphics card / driver supports this. OpenCL 2.0 may not be fully supported by all GPUs.
To make host-side libraries run directly on GPU, you'll need to convert some parts by hand:
allocations
math functions' implementations like sqrt, cos, sin, exp
intrinsic functions (GPUs can't run AVX, maybe except Intel's Xeon Phi?)
alignments of structs, arrays
dependencies on other libraries
maybe even calling conventions? (some GPUs don't have a real call stack)
The latency of an atomically triggered GPU-side library call should be negligible if the work is heavy, but the approach is not suitable when every clock cycle counts on the GPU side. So it wouldn't be a good fit for small matrices.

Intel FPGA OpenCL: Track down reason for low kernel clock frequency

I'm implementing an OpenCL design for an Intel Cyclone V FPGA. It is based on a modified version of the Terasic DE10 Standard OpenCL BSP.
The modification adds a connection to an external AD converter card attached to the FPGA board. For this, a custom VHDL-based Qsys block was implemented that streams samples at a rate of approx. 16 MHz into an Avalon dual-clock FIFO. The output side of the FIFO is connected to the kernel clock domain, and the Avalon Streaming output of the FIFO is exported to the OpenCL kernel as a channel, as described in the Intel® FPGA SDK for OpenCL™ Standard Edition Programming Guide under topic 5.4.5.4 (Implementing I/O Channels Using the io Channels Attribute).
Currently, the CL kernel simply fetches blocks of contiguous data from the channel and writes them to a global memory buffer. The host then appends the samples to a file. This works stably for AD converter sample rates up to 1 MHz; higher sample rates produce lots of dropouts.
Enabling profiling and using the Intel Dynamic Profiler for OpenCL revealed the reason: the average kernel clock is as low as 1.3 MHz. However, since an OpenCL system is not compiled through the Quartus IDE but through the command line via aoc, there is not much information on what was synthesized and why the clock frequency is so low. How do I track down the bottleneck of my design with the tools provided by Intel?
Here is a screenshot of the profiling results:
Edit:
Here is the relevant part of the CL kernel and the Qsys Design. Note that the TX path that you find in both is currently not used.
#pragma OPENCL EXTENSION cl_intel_channels : enable

struct TwoChannelSample
{
    short2 chanA;
    short2 chanB;
};

#define FIFO_DEPTH 32768

channel struct TwoChannelSample rxSamps __attribute__((depth(0))) __attribute__((io("THDB_ADA_rxSamples")));
channel struct TwoChannelSample txSamps __attribute__((depth(0))) __attribute__((io("THDB_ADA_txSamples")));
channel ushort stateChan __attribute__((depth(0))) __attribute__((io("THDB_ADA_state")));

kernel void thdbADARxTxCallback (global const float2* restrict txSamples,
                                 global float2* restrict rxSamples,
                                 global ushort* restrict interfaceState)
{
    // get state from interface
    *interfaceState = read_channel_intel (stateChan);
    // Process sample-wise
    for (int i = 0; i < FIFO_DEPTH; ++i)
    {
        struct TwoChannelSample rxSample = read_channel_intel (rxSamps);
        rxSamples[i].x = (float)rxSample.chanA.x;
        rxSamples[i].y = (float)rxSample.chanA.y;
        rxSamples[i + FIFO_DEPTH].x = (float)rxSample.chanB.x;
        rxSamples[i + FIFO_DEPTH].y = (float)rxSample.chanB.y;
    }
}

Got completely confused on how to OpenCL data transfer

I'm learning OpenCL and attempt to utilize it on some low-latency scenario, so I'm really concerned with the memory transferring delay.
According to NVidia's OpenCL Best Practices Guide, and also by many other places, direct read/write on buffer object should be avoided. Instead, we should use map/unmap utility. In that guide, a demonstrative code is given like this:
cl_mem cmPinnedBufIn = clCreateBuffer(cxGPUContext, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR, memSize, NULL, NULL);
cl_mem cmDevBufIn = clCreateBuffer(cxGPUContext, CL_MEM_READ_ONLY, memSize, NULL, NULL);
unsigned char* cDataIn = (unsigned char*) clEnqueueMapBuffer(cqCommandQue, cmPinnedBufIn, CL_TRUE, CL_MAP_WRITE, 0, memSize, 0, NULL, NULL, NULL);
for(unsigned int i = 0; i < memSize; i++)
{
    cDataIn[i] = (unsigned char)(i & 0xff);
}
clEnqueueWriteBuffer(cqCommandQue, cmDevBufIn, CL_FALSE, 0, memSize, cDataIn, 0, NULL, NULL);
In this code snippet, two buffer objects are generated explicitly, and a write-to-device operation is also explicitly called.
If my understanding is correct, when you call clCreateBuffer with CL_MEM_ALLOC_HOST_PTR or CL_MEM_USE_HOST_PTR, the storage of the buffer object is created on the host side, probably in DMA-able (pinned) memory, and no storage is allocated on the device side. So the above code actually creates two separate storages. If so:
What would happen if I call map buffer on cmDevBufIn, which does not have host-side memory?
For CPU-integrated GPUs, there is no separate graphics memory. In particular, for newer AMD APUs, the memory address space is also unified. So creating two buffer objects seems wrong. What is the best practice for integrated platforms?
Is there any way to write a single memory-transfer code path for all platforms? Or must I write several different suites of memory-transfer code to achieve the best performance on NVIDIA, discrete AMD GPUs, older AMD APUs, newer AMD APUs and Intel HD Graphics......
Unfortunately, it's different for each vendor.
NVIDIA claims their best bandwidth is when you use read/write buffer where the host memory is "pinned", which can be achieved by creating a buffer with CL_MEM_ALLOC_HOST_PTR and mapping it (I think your example is that). You should also compare that to just mapping and unmapping the device memory; their more recent drivers have gotten better at that.
With AMD you can just map/unmap the device buffer to get full speed. They also have a bunch of vendor-specific buffer flags which can make certain scenarios faster; you should study them but more importantly create benchmarks that try out everything to see what actually works best for your task.
With both discrete devices you should use separate command queues for the transfer operations so they can overlap with other (non-dependent) compute operations (look up various compute overlap examples). Furthermore, some higher end discrete GPUs can be downloading one buffer at the same time they are uploading another (using dual DMA engines), so you could be uploading one batch of work while you're computing another while you're downloading the result of a third. When written elegantly, this isn't even much more code than the strictly sequential version, but you have to use OpenCL events to synchronize between command queues. NVIDIA has a GTC talk you can watch that shows how to do this for video frames every 16 ms.
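As a rough illustration of the multi-queue idea, here is a hedged sketch using two in-order queues and an event dependency (context, device, kernel, the buffers devBufIn/devBufOut, the host arrays hostIn/hostOut, and the sizes memSize and globalSize are all assumed to exist; this is not code from the talk):
cl_command_queue xferQueue = clCreateCommandQueue(context, device, 0, NULL);
cl_command_queue compQueue = clCreateCommandQueue(context, device, 0, NULL);
cl_event uploadDone, kernelDone;
// upload on the transfer queue...
clEnqueueWriteBuffer(xferQueue, devBufIn, CL_FALSE, 0, memSize, hostIn, 0, NULL, &uploadDone);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &devBufIn);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &devBufOut);
// ...compute on the other queue, gated only by the upload event, so
// independent transfers enqueued on xferQueue can overlap with the kernel
clEnqueueNDRangeKernel(compQueue, kernel, 1, NULL, &globalSize, NULL, 1, &uploadDone, &kernelDone);
clEnqueueReadBuffer(xferQueue, devBufOut, CL_FALSE, 0, memSize, hostOut, 1, &kernelDone, NULL);
clFinish(xferQueue);
clReleaseEvent(uploadDone);
clReleaseEvent(kernelDone);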
With AMD's APU and with Intel's Integrated Graphics, the map/unmap of the "device" buffer is "free" since it is in main memory. Don't use read/write buffer here or you'll be paying for unneeded transfers.
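For example, a minimal sketch of that map/unmap pattern on an integrated GPU (assuming an existing context, queue and memSize; error handling omitted):
cl_int err;
cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR, memSize, NULL, &err);
// Map the buffer: on integrated devices this should not copy anything.
unsigned char *ptr = (unsigned char *) clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE, 0, memSize, 0, NULL, NULL, &err);
for(size_t i = 0; i < memSize; i++)
{
    ptr[i] = (unsigned char)(i & 0xff);
}
// Unmap before the kernel uses the buffer; again, no transfer is expected here.
clEnqueueUnmapMemObject(queue, buf, ptr, 0, NULL, NULL);
// ... set buf as a kernel argument and enqueue the kernel ...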

OpenCL - How do I query for a device's SIMD width?

In CUDA, there is a concept of a warp, which is defined as the maximum number of threads that can execute the same instruction simultaneously within a single processing element. For NVIDIA, this warp size is 32 for all of their cards currently on the market.
In ATI cards, there is a similar concept, but the terminology in this context is wavefront. After some hunting around, I found out that the ATI card I have has a wavefront size of 64.
My question is, what can I do to query for this SIMD width at runtime for OpenCL?
I found the answer I was looking for. It turns out that you don't query the device for this information, you query the kernel object (in OpenCL). My source is:
http://www.hpc.lsu.edu/training/tutorials/sc10/tutorials/SC10Tutorials/docs/M13/M13.pdf
(Page 108)
which says:
The most efficient work group sizes are likely to be multiples of the native hardware execution width
wavefront size in AMD speak/warp size in Nvidia speak
Query device for CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE
So, in short, the answer appears to be to call the clGetKernelWorkGroupInfo() method with a param name of CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE. See this link for more information on this method:
http://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clGetKernelWorkGroupInfo.html
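For example, a minimal host-side sketch (assuming you already have a device and a built kernel object):
size_t preferredMultiple = 0;
cl_int err = clGetKernelWorkGroupInfo(kernel, device,
                                      CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                                      sizeof(preferredMultiple), &preferredMultiple, NULL);
if (err == CL_SUCCESS)
    printf("Preferred work-group size multiple (warp/wavefront hint): %zu\n", preferredMultiple);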
On AMD, you can query CL_DEVICE_WAVEFRONT_WIDTH_AMD. That's different from CL_DEVICE_SIMD_WIDTH_AMD, which returns the number of threads it executes in each clock cycle. The latter may be smaller than the wavefront size, in which case it takes multiple clock cycles to execute one instruction for all the threads in a wavefront.
On NVIDIA, you can query the warp size with clGetDeviceInfo using CL_DEVICE_WARP_SIZE_NV (although this is always 32 for current GPUs). However, this is an extension, as OpenCL itself defines nothing like warps or wavefronts. I don't know of any AMD extension that would allow querying the wavefront size.
For AMD: clGetDeviceInfo(..., CL_DEVICE_WAVEFRONT_WIDTH_AMD, ...) (if cl_amd_device_attribute_query extension supported)
For Nvidia: clGetDeviceInfo(..., CL_DEVICE_WARP_SIZE_NV, ...) (if cl_nv_device_attribute_query extension supported)
But there is no uniform way. The approach suggested by Jonathan DeCarlo doesn't always work; I was using it for GPUs when these two extensions are not supported - for example Intel iGPUs - but recently I got wrong results on an Intel HD 4600:
Intel HD 4600 reports CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE=32, while in fact Intel GPUs seem to have a wavefront of 16; I got incorrect results, and everything works fine if barriers are used assuming wavefront=16.
P.S. I don't have enough reputation to comment on Jonathan DeCarlo's answer about this; I'd be glad if somebody added a comment.
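To illustrate the two extension-based queries mentioned at the top of this answer, here is a hedged sketch (assuming a cl_device_id device; the parameter values are the ones defined in the Khronos cl_ext.h header, redefined here only as a fallback):
#ifndef CL_DEVICE_WARP_SIZE_NV
#define CL_DEVICE_WARP_SIZE_NV        0x4003  // from cl_nv_device_attribute_query
#endif
#ifndef CL_DEVICE_WAVEFRONT_WIDTH_AMD
#define CL_DEVICE_WAVEFRONT_WIDTH_AMD 0x4043  // from cl_amd_device_attribute_query
#endif
cl_uint width = 0;
char extensions[4096] = {0};
clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(extensions), extensions, NULL);
if (strstr(extensions, "cl_nv_device_attribute_query"))
    clGetDeviceInfo(device, CL_DEVICE_WARP_SIZE_NV, sizeof(width), &width, NULL);
else if (strstr(extensions, "cl_amd_device_attribute_query"))
    clGetDeviceInfo(device, CL_DEVICE_WAVEFRONT_WIDTH_AMD, sizeof(width), &width, NULL);
// width == 0 means neither extension is available; fall back to
// CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE or a sub-group query.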
The closest to actual SIMD width is the result of get_max_sub_group_size() kernel runtime function from cl_khr_subgroups extension. It returns min(SIMD-width, work-group-size).
Worth attention is also function get_sub_group_size() which returns the size of the current sub-group, which is never bigger than SIMD width: for example if SIMD width is 32 and group size is 40, then get_sub_group_size for threads 0..31 will return 32 and for threads 32..39, it will return 8.
Footnote: to use this extension, add #pragma OPENCL EXTENSION cl_khr_subgroups : enable at the top of your OpenCL kernel code.
UPDATE:
it seems that there's also corresponding host level function clGetKernelSubGroupInfo, but jocl that I use does not have a binding for it, so I cannot verify if it works.
Currently, if I need to check SIMD width at the host level, I run a helper kernel which calls get_max_sub_group_size() and stores it into its result buffer:
// run it with max work-group size
__kernel void getSimdWidth(__global uint *simdWidth) {
if (get_local_id(0) == 0) simdWidth[0] = get_max_sub_group_size();
}
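A rough host-side sketch for running this helper kernel (assuming it was built into program, and that context, queue and device already exist):
cl_int err;
cl_kernel k = clCreateKernel(program, "getSimdWidth", &err);
cl_mem out = clCreateBuffer(context, CL_MEM_WRITE_ONLY, sizeof(cl_uint), NULL, &err);
clSetKernelArg(k, 0, sizeof(cl_mem), &out);
// launch a single work-group at the kernel's maximum work-group size,
// so get_max_sub_group_size() is not limited by a small group
size_t maxWg = 0;
clGetKernelWorkGroupInfo(k, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(maxWg), &maxWg, NULL);
clEnqueueNDRangeKernel(queue, k, 1, NULL, &maxWg, &maxWg, 0, NULL, NULL);
cl_uint simdWidth = 0;
clEnqueueReadBuffer(queue, out, CL_TRUE, 0, sizeof(simdWidth), &simdWidth, 0, NULL, NULL);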
You can use clGetDeviceInfo to get the maximum number of work-items you can have in a work-group for each dimension. This is most likely a multiple of your wavefront size.
See: http://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clGetDeviceInfo.html
For CUDA (on NVIDIA), please take a look at section B.4.5 of the CUDA Programming Guide from NVIDIA. There is a built-in variable (warpSize) containing this information, which you can query at runtime. For AMD, I'm not sure if there is such a variable.

How many threads (or work-item) can run at the same time?

I'm new in GPGPU programming and I'm working with NVIDIA implementation of OpenCL.
My question is how to determine the limits of a GPU device (in number of threads).
From what I understood, there are a number of work-groups (equivalent of blocks in CUDA) that contain a number of work-items (~ CUDA threads).
How do I get the number of work-groups present on my card (that can run at the same time) and the number of work-items present in one work-group?
What does CL_DEVICE_MAX_COMPUTE_UNITS correspond to?
The Khronos specification speaks of cores ("The number of parallel compute cores on the OpenCL device."). How does that differ from the CUDA cores given in the specification of my graphics card? In my case OpenCL reports 14, and my GeForce 8800 GT has 112 cores according to the NVIDIA website.
Does CL_DEVICE_MAX_WORK_GROUP_SIZE (512 in my case) correspond to the total number of work-items given to a specific work-group, or to the number of work-items that can run at the same time in a work-group?
Any suggestions would be extremely appreciated.
The OpenCL standard does not specify how the abstract execution model provided by OpenCL is mapped to the hardware. You can enqueue any number T of threads (work items), and provide a workgroup size (WG), with at least the following constraints (see OpenCL spec 5.7.3 and 5.8 for details):
WG must divide T
WG must be at most DEVICE_MAX_WORK_GROUP_SIZE
WG must be at most the CL_KERNEL_WORK_GROUP_SIZE returned by clGetKernelWorkGroupInfo; it may be smaller than the device's max work-group size if the kernel consumes a lot of resources.
The implementation manages the execution of the kernel on the hardware. All threads of a single workgroup must be scheduled on a single "multiprocessor", but a single multiprocessor can manage several workgroups at the same time.
Threads inside a workgroup are executed by groups of 32 (NVIDIA warp) or 64 (AMD wavefront). Each micro-architecture does this in a different way. You will find more details in NVIDIA and AMD forums, and in the various docs provided by each vendor.
To answer your question: there is no limit to the number of threads. In the real world, your problem is limited by the size of inputs/outputs, i.e. the size of the device memory. To process a 4GB buffer of float, you can enqueue 1G threads, with WG=256 for example. The device will have to schedule 4M workgroups on its small number (say between 2 and 40) of multiprocessors.
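For concreteness, here is a small hedged sketch of querying these limits at runtime (assuming a device and a built kernel):
cl_uint computeUnits = 0;
size_t deviceMaxWg = 0, kernelMaxWg = 0;
clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(computeUnits), &computeUnits, NULL);
clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(deviceMaxWg), &deviceMaxWg, NULL);
clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(kernelMaxWg), &kernelMaxWg, NULL);
printf("Compute units (multiprocessors):     %u\n", computeUnits);
printf("Device max work-group size:          %zu\n", deviceMaxWg);
printf("Max work-group size for this kernel: %zu\n", kernelMaxWg);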
