What is the meaning of having a certain number of OpenCL work-items on a CPU? - opencl

I'm trying to understand why I can have more work-items on a CPU than on a GPU in one dimension.
PLATFORM 0 DEVICE 0
== CPU ==
DEVICE_VENDOR: Intel
DEVICE NAME: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
MAXIMUM NUMBER OF PARALLEL COMPUTE UNITS: 4
MAXIMUM DIMENSIONS FOR THE GLOBAL/LOCAL WORK ITEM IDs: 3
MAXIMUM NUMBER OF WORK-ITEMS IN EACH DIMENSION: (1024 1 1 )
MAXIMUM NUMBER OF WORK-ITEMS IN A WORK-GROUP: 1024
PLATFORM 0 DEVICE 1
== GPU ==
DEVICE_VENDOR: Intel Inc.
DEVICE NAME: Intel(R) Iris(TM) Graphics 6100
MAXIMUM NUMBER OF PARALLEL COMPUTE UNITS: 48
MAXIMUM DIMENSIONS FOR THE GLOBAL/LOCAL WORK ITEM IDs: 3
MAXIMUM NUMBER OF WORK-ITEMS IN EACH DIMENSION: (256 256 256 )
MAXIMUM NUMBER OF WORK-ITEMS IN A WORK-GROUP: 256
The above is the result of my test code to print the information of the actual hardware that the OpenCL framework can use.
I really do not understand why the maximum number of work-items is 1024 in the CPU section. What is the real meaning of having that many work-items?
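For reference, a query along these lines (not the asker's actual test code; a minimal sketch using the standard clGetDeviceInfo calls, with illustrative names and no error handling) produces exactly these four values:

#include <stdio.h>
#include <CL/cl.h>

/* Sketch: print the work-item limits discussed above for one device.
   Platform/device selection is assumed to have happened elsewhere. */
void print_limits(cl_device_id dev) {
    cl_uint cu, dims;
    size_t wg, sizes[3];
    clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cu), &cu, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, sizeof(dims), &dims, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_MAX_WORK_ITEM_SIZES, sizeof(sizes), sizes, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(wg), &wg, NULL);
    printf("compute units: %u, dims: %u, max sizes: (%zu %zu %zu), max group: %zu\n",
           cu, dims, sizes[0], sizes[1], sizes[2], wg);
}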

CPUs are more general-purpose than GPUs. Their OpenCL implementation looks serialized (but interleaved at the instruction level) within a work-group, since each compute unit is a physical core that issues a work-group as a whole. Because work-items are serialized/interleaved, performance relies on instructions in flight. CPUs keep roughly 100-200 instructions in flight, and if those instructions are AVX/SSE instructions, you can expect 800-1600 scalar data operations in flight. That is well within range of 1024 work-items per work-group, provided the OpenCL implementation vectorizes under the hood.
Since GPUs use massive thread-level parallelism to fill their pipelines and keep more instructions in flight, they don't need as much ILP as CPUs, so they can work fine with just 256 threads per work-group, and those threads run in parallel. Thread-level parallelism fills pipelines more easily than instruction-level parallelism. Intel has 7-way, Nvidia 16-way, and AMD 40-way thread-level parallelism per pipeline. Each subslice of the Iris 6100 has (8 EUs) 64 pipelines, and 64 pipelines x 7 means it can have multiple work-groups in flight too, just like Nvidia and AMD GPUs. Probably having more threads/work-items per work-group wouldn't yield more performance for that iGPU, and having more than 1024 threads per work-group wouldn't yield more performance for that CPU.
The CPU also has 256 kB of L2 cache per compute unit, which may be another factor limiting the maximum to 1024 work-items per work-group, since the state of each work-item has to be saved efficiently.
As an image processing example:
You can divide and conquer an image by processing 32x32 patches of it on the CPU (1024 work-items per work-group). But this needs re-computation of 2D indices in the kernel, since the CPU effectively supports only a 1D work-group shape.
You can divide and conquer an image by processing 16x16 patches of it on the iGPU (256 work-items per work-group).
Other work-group shapes that stay within the limits are:
256x1 on the iGPU
1024x1 on the CPU
8x8x4 on the iGPU
1x256x1 on the iGPU
1x1x256 on the iGPU
but not 1x1024x1 on the CPU, because only its first dimension may exceed 1.
These shapes are work-items per work-group, and generally they are a fraction of the maximum number of in-flight work-items a compute unit allows.
For this image processing example, up to several thousand pixels can be in flight per compute unit, or up to 50k-100k pixels in flight for a high-end GPU.
Having only 1 in the other dimensions for the CPU (imo) originates from the CPU's OpenCL implementation being an emulation: it has no hardware to accelerate the computation of thread-id values for the other dimensions. GPUs probably do have hardware support for this, so they can offer more dimensions without losing performance, whereas a 1D kernel on a CPU has to compute modulos and divisions to emulate the 2nd and 3rd dimensions, which is a bottleneck for simple kernels.
If CPUs emulated the 2nd and 3rd dimensions too, there would be modulos and divisions going on in the background, with further slow-downs inside the kernel if developers unknowingly flatten a 3D kernel into 1D indices on top of that. GPUs may not even be computing modulos under the hood; the indices could come from lookup tables as fast as registers, or from some other quickly accessed constants.
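As a rough illustration of that emulation cost, this is what a 1D kernel recovering 2D pixel coordinates looks like; `width` is a hypothetical parameter for the image row length, and the per-pixel work is a placeholder:

// Sketch: a 1D kernel emulating 2D indexing, as a CPU-style 1D launch would need.
kernel void copy_pixels_1d(global const float* in, global float* out, const int width) {
    const int id = get_global_id(0);
    const int x = id % width;   // the modulo the text refers to
    const int y = id / width;   // the division the text refers to
    out[y * width + x] = in[y * width + x];  // placeholder per-pixel work
}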
This is just a per-work-group limitation. You can launch many work-groups per kernel launch, so it shouldn't affect the maximum image size you can process on different devices such as a CPU, GPU, or iGPU. Each image is processed by multiple work-groups, tiled anywhere from 1x1x1 up to 32x32x1 or some other size.

Related

Check the number of Nvidia cores utilized

Is there a way to check the number of stream processors and cores utilized by an OpenCL kernel?
No. However, you can make guesses based on your application: if the number of work-items is much larger than the number of CUDA cores and the work-group size is 32 or larger, all stream processors are used at the same time.
If the number of work-items is about the same as or lower than the number of CUDA cores, you won't have full utilization.
If you set the work-group size to 16, only half of the CUDA cores will be used at any time, but the unused half is blocked and cannot do other work. (So always set the work-group size to 32 or larger.)
Tools like nvidia-smi can tell you the time-averaged GPU usage. So if you run your kernel over and over without any delay in between, the usage indicates the average fraction of used CUDA cores at any time.

OpenCL Compute units and GPU Processing units mismatch

I'm a bit confused about compute units. I have an Nvidia GTX 1650 Ti graphics card. When I query max_compute_units, it returns 16 units, and max_work_group_size is 1024.
But when I executed the kernel:
int i = get_global_id(0);
result[i] = get_local_id(0);
I get a repeating local ID range from 0 to 255. How does this relate to the max_compute_units returned by the graphics card? Is this an error in the max_compute_units value, and does the GPU actually have more compute units than it indicates? Or does OpenCL's get_local_id have its own distribution logic not tied to the hardware? Thx!
OpenCL compute units refer to streaming multiprocessors (SMs) on Nvidia GPUs or compute units (CUs) on AMD GPUs. Each SM contains 128 CUDA cores (Pascal and earlier) or 64 CUDA cores (Turing/Volta). For AMD, each CU contains 64 stream processors. This refers to the hardware. The more SMs/CUs, the faster the GPU (within the same microarchitecture).
The work-group size / local ID refer to how you group threads in software into so-called thread blocks. Thread blocks are useful for matrix multiplications, for example, because within a thread block communication between threads is possible via shared memory. Thread blocks can have different sizes (sort of an optimization parameter: either 32, 64, 128, 256, 512 or 1024, up to max_work_group_size). Depending on your GPU, some intermediate values might also work. On the hardware (at least for Nvidia), the thread blocks are executed as so-called warps (groups of 32 threads) on the SMs. For Turing, one SM can compute 2 warps simultaneously. If you choose a thread block size of 16, each warp only computes 16 threads and the other 16 are idle, so you only get half the performance.
In your example with the local ID (this is the index within the thread block) between 0 and 255, your thread block size is 256. You define the thread block size in the kernel call as the "local range". max_work_group_size does not correlate with max_compute_units in any way; both are hardware/driver limitations.
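For illustration, a minimal host-side sketch (placeholder handle names, no error handling) of setting that local range, which is what makes get_local_id(0) repeat from 0 to 255:

// Sketch: launching with an explicit work-group ("thread block") size of 256.
// `queue` and `example_kernel` are placeholders created elsewhere.
size_t global_size = 1024 * 1024;  // must be a multiple of local_size
size_t local_size  = 256;          // get_local_id(0) then runs 0..255
clEnqueueNDRangeKernel(queue, example_kernel, 1, NULL,
                       &global_size, &local_size, 0, NULL, NULL);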

How do I plan this least-squares computation on a GPU?

I'm writing a program called PerfectTIN (https://github.com/phma/perfecttin) which does lots of least-squares adjustments of a TIN to fit a point cloud. Each adjustment takes some contiguous group of triangles and adjusts the elevations of up to 8 points, which are corners of the triangles, to fit the dots in the triangles. I have it working on SMP. At the start of processing, it does only one adjustment at a time, so it splits the adjustment into tasks, each of which takes some dots, all of which are in the same triangle. Each thread takes a task from a queue and computes a small square matrix and a small column vector. When they're all ready, the adjustment routine adds up the matrices and the vectors and finishes the least squares computation.
I'd like to process tasks on the GPU as well as the CPU. The data needed for a task are
The three corners of the triangle (x,y,z)
The coordinates of the dots (x,y,z).
The output data are
A symmetric matrix with up to nine nonzero entries (since it's symmetric, I need only compute six numbers)
A column vector with the same number of rows.
The number of dots is a multiple of 1024, except for a few tasks which I can handle in the CPU (the total number of dots in a triangle can be any nonnegative integer). For a fairly large point cloud of 56 million dots, some tasks are larger than 131072 dots.
Here is part of the output of clinfo (if you need other parts, let me know):
Platform Name Clover
Number of devices 1
Device Name Radeon RX 590 Series (POLARIS10, DRM 3.33.0, 5.3.0-7625-generic, LLVM 9.0.0)
Device Vendor AMD
Device Vendor ID 0x1002
Device Version OpenCL 1.1 Mesa 19.2.8
Driver Version 19.2.8
Device OpenCL C Version OpenCL C 1.1
Device Type GPU
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Max compute units 36
Max clock frequency 1545MHz
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Preferred work group size multiple 64
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 2 / 2
half 8 / 8 (cl_khr_fp16)
float 4 / 4
double 2 / 2 (cl_khr_fp64)
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
If I understand right, if I put one dot in each core of the GPU, the total number of dots I can process at once is 36×256=9×1024=9216. Could I put four dots in each core, since a work group would then have 1024 dots? In this case I could process 36864 dots at once. How many dots should each core process? What if a task is bigger than the GPU can hold? What if several tasks (possibly from different triangles) fit in the GPU?
Of course I want this code to run on other GPUs than mine. I'm going to use OpenCL for portability. What different GPUs (description rather than name) am I going to encounter?
If I understand right, if I put one dot in each core of the GPU, the total number of dots I can process at once is 36×256=9×1024=9216.
Not quite. The total number of dots is not limited by the maximum work group size. The work group size is the number of GPU threads working synchronously in a group. Within a work group, you can share data via local memory, which can be useful to speed up certain calculations like matrix multiplications for example (cache tiling).
The idea of GPU parallelization is to split the problem up into as many independent parts as possible. In C++, something like
void example(float* data, const int N) {
    for(int n=0; n<N; n++) {
        data[n] += 1.0f;
    }
}
in OpenCL becomes this
kernel void example(global float* data) {
    const int n = get_global_id(0);
    data[n] += 1.0f;
}
where the global range of the kernel is set to N.
Each GPU thread should process only a single dot. The number of threads (the global range) can and should be much larger than the number of GPU cores available. If you don't explicitly need local memory (all threads can work independently), you can set the local work-group size to 32, 64, 128, or 256 - it doesn't matter much, though there might be some performance difference between these values. However, the global range (total number of threads / points) must be a multiple of the work-group size.
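One common way to satisfy that multiple-of-work-group-size requirement for an arbitrary number of dots is to round the global range up and guard the padded work-items inside the kernel; a sketch with placeholder names:

// Sketch: pad the global range and pass the true element count to the kernel.
// `num_dots` is a cl_uint; `queue` and `kernel` are assumed to exist already.
size_t local_size  = 64;  // matches the "preferred work group size multiple" above
size_t global_size = ((num_dots + local_size - 1) / local_size) * local_size;
clSetKernelArg(kernel, 0, sizeof(cl_uint), &num_dots);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                       &global_size, &local_size, 0, NULL, NULL);
// In the kernel: if (get_global_id(0) >= num_dots) return;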
What if a task is bigger than the GPU can hold?
I assume you mean the case where your data set does not fit into video memory all at once. In that case you can do the computation in several batches, exchanging the GPU buffers via PCIe transfers. But that comes at a large performance penalty.
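Such batching could look roughly like the following sketch (buffer and variable names are placeholders; blocking transfers are used for simplicity):

// Sketch: process the point data in batches that fit in one device buffer.
// host_xyz holds packed (x,y,z) floats; batch_dots is sized to fit in VRAM;
// queue, kernel, dots_buf and partial_buf are assumed to exist already.
for (size_t offset = 0; offset < total_dots; offset += batch_dots) {
    size_t n = (total_dots - offset < batch_dots) ? total_dots - offset : batch_dots;
    clEnqueueWriteBuffer(queue, dots_buf, CL_TRUE, 0, n * 3 * sizeof(float),
                         host_xyz + 3 * offset, 0, NULL, NULL);   // PCIe upload
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, partial_buf, CL_TRUE, 0, n * sizeof(float),
                        host_partial + offset, 0, NULL, NULL);    // PCIe download
}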
Of course I want this code to run on other GPUs than mine. I'm going to use OpenCL for portability. What different GPUs (description rather than name) am I going to encounter?
OpenCL is excellent for portability across devices and operating systems. Other than AMD GPUs and CPUs, you will encounter Nvidia GPUs, which only support OpenCL 1.2, and Intel GPUs and CPUs. If the graphics drivers are installed, your code will run on all of them without issues. Just be aware that the amount of video memory can be vastly different.

Constant task size - the same execution time on 1x and 2x CPU - OpenCL

I have a problem understanding my results for an integral algorithm (implemented in OpenCL).
I have access to two Intel Xeon E5-2680 v3 CPUs; each has 12 cores.
From OpenCL, for reasons I don't know, I can see only one device, but I can request 12 or 24 cores, so I guess it does not matter whether I "see" one device or two, as long as all 24 cores (2 CPUs) are used.
I ran those tasks with a max local size of 4096 and a minimal global size of 4096, and for 1 CPU and 2 CPUs the execution time was the same. I then changed the global size to 2*4096, 4*4096, 8*4096, and when I reached a global size of 16*4096, 1 CPU started slowing down while 2 CPUs kept speeding up; for every larger global size after that it stayed that way, with 2 CPUs 2x faster than 1 CPU.
I don't understand why we can't see the advantage of 2 CPUs over 1 CPU from the beginning.
What is also important to me: I was collecting power consumption for the CPUs, and at that last global size (8*4096) where the execution time of 1 and 2 CPUs is the same, I can see slightly lower power consumption for 2 CPUs. As the global size grew, the 2-CPU consumption stayed lower than on 1 CPU, I guess because of the 2x faster execution time, but shouldn't it be equal to or bigger than on 1 CPU?
What may be important: I checked that both the 1-CPU and 2-CPU runs always stay at 2.5 GHz; the frequency is not changing.
My questions regarding the above are:
Why do 1 CPU and 2 CPUs have equal execution time at smaller global sizes?
Why do 2 CPUs have lower power consumption at bigger global sizes?
Why, at the one point where global size = 8*4096 and execution times are equal, do I see slightly lower power consumption with 2 CPUs than with 1 CPU?
I should add that every run was repeated 10 times, so these results are not accidental.
Here are my results:
Why do 1 CPU and 2 CPUs have equal execution time at smaller global sizes?
Because you used 4096 as the local size. Each compute unit of a CPU is 1 core. You used 16*4096 as the global size, so 16 cores were used. Probably the kernel is memory-bound, or one core accesses the other CPU's cache or memory, so it wouldn't matter whether 1 core or N cores were used. When you increase the global size, the other CPU's memory gets used more often and the memory access pattern becomes more symmetric.
Why do 2 CPUs have lower power consumption at bigger global sizes?
2 CPUs have more cache, so they can schedule more kernels at the same time; reusing data from cache may also take less power than accessing RAM. Getting data from RAM should consume more power than getting it from cache.
Why, at the one point where global size = 8*4096 and execution times are equal, do I see slightly lower power consumption with 2 CPUs than with 1 CPU?
With 8 cores in use (8 * local size), a single CPU may have been doing all the work, and even if it wasn't, the same memory bank groups could be in use by both CPUs, with memory bandwidth as the bottleneck. Again, 2 CPUs have more cache, so there must be some data reuse that takes advantage of the bigger cache and decreases power consumption.
You should try different device fission combinations to get maximum locality and data shareability for the cores. Threads could otherwise be distributed randomly among CPUs, cores, and hardware threads. Device fission solves this problem and gives more control over thread scheduling.
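As a sketch of what device fission looks like with OpenCL 1.2 (assuming `cpu_device` is the single 24-core device the asker sees; error handling omitted), partitioning along NUMA boundaries gives one sub-device per physical CPU:

// Sketch: split the CPU device along NUMA boundaries with device fission,
// so each sub-device maps to one physical CPU socket.
cl_device_partition_property props[] = {
    CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN,
    CL_DEVICE_AFFINITY_DOMAIN_NUMA,
    0
};
cl_device_id sub_devices[2];
cl_uint num_sub = 0;
clCreateSubDevices(cpu_device, props, 2, sub_devices, &num_sub);
// Create a context/queue per sub-device and enqueue work on each, keeping
// each work-group's data local to that socket's cores and memory.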

Computing Maximum Concurrent Workgroups

I was wondering if there was a standard way to programmatically determine the maximum number of concurrent workgroups that can run on a GPU.
For example, on an NVIDIA card with 5 compute units (or SMs), there can be a maximum of 8 workgroups (or blocks) per compute unit, so the maximum number of workgroups that can run concurrently is 40.
Since I can find the number of compute units with clGetDeviceInfo, all I need is the maximum number of workgroups that can be run on a compute unit.
Thanks!
The maximum number of work-groups per execution unit/SM is limited by hardware resources. Take the Intel Gen8 GPU as an example: it contains 16 barrier registers per sub-slice, so no more than 16 work-groups can run simultaneously on a sub-slice.
Another limit is the amount of shared local memory available per sub-slice (64 KB). If, for example, a work-group requires 32 KB of shared local memory, only 2 such work-groups can run concurrently, regardless of work-group size.
I typically use the number of compute units as the number of work-groups. I like to scale up the size of the groups to saturate the hardware, rather than force the GPU to schedule many work-groups "simultaneously".
I don't know of a way to determine the max number of groups without looking it up on the vendor specs.
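What you can query programmatically are the per-device and per-kernel work-group limits, which at least bound the answer; a sketch with placeholder handles:

// Sketch: limits that are queryable, even though the maximum number of
// resident work-groups per compute unit is not. `device` and `kernel`
// are assumed to exist already.
cl_uint cu;
size_t max_wg, preferred_multiple;
clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cu), &cu, NULL);
clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                         sizeof(max_wg), &max_wg, NULL);
clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                         sizeof(preferred_multiple), &preferred_multiple, NULL);

These calls give the hard per-work-group ceiling and the preferred multiple for a specific kernel, but the number of work-groups resident per compute unit still has to come from vendor documentation, as noted above.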
