OpenCL and AMD GPU architecture understanding

So I was reading about the architecture of GCN 1st-generation GPUs described in the paper here, and I'm a bit confused about the size of the vector ALUs and some other things.
1) According to it, each compute unit has 1 scalar unit and 4 SIMDs. Each of these 4 SIMDs has 16 ALUs to perform vector operations. The paper states that the ALUs natively execute single-precision floating point and 24-bit integer at full speed, and double precision and 32-bit integer at reduced speeds.
What I want to know is: why do 32-bit integers execute at reduced speed when 32-bit SP floating point executes at full speed?
2) Secondly, we know that for AMD GCN GPUs, each SIMD array executes one quarter of a wavefront per cycle, so a full wavefront takes 4 cycles. When an instruction is assigned to a SIMD unit, does it replicate across all 4, or does it take 4 different cycles for each SIMD unit to get an instruction?
If all 4 SIMD units execute the same instruction, then theoretically this gets us 4 wavefronts per 4 cycles. If it's the second case, then only 1 wavefront gets completed by the 4th cycle.
Although note that, according to the GCN whitepaper, the Local Data Share (LDS) coalesces 16 lanes from 2 different SIMD units each cycle, so this gets us 2 complete wavefronts per 4 cycles. This seems to hint at the first case, since there is no way to complete more than 1 wavefront per 4 cycles if the instruction isn't replicated across SIMD units.
3) Lastly I want to ask about a scenario.
Suppose I have a 2D workgroup assigned to a compute unit. The workgroup consists of 8x8 = 64 work items. Will the compute unit form 1 wavefront and execute it over 4 cycles on 1 SIMD unit, while the other 3 SIMD units remain idle? Or will something else happen?

why do 32-bit integers execute at reduced speed when 32-bit SP floating point executes at full speed?
If you look at how 32-bit floats are represented, you'll notice there is a sign bit, 8 bits of exponent, and a 23-bit stored mantissa (24 bits of effective precision with the implicit leading 1). Presumably the GPU can use the floating-point ALU's capabilities directly to operate on 24-bit integers stored in what would normally be the mantissa. To operate on larger integers, explicit long multiplication of some kind has to be done (like 64-bit arithmetic on a 32-bit CPU), which slows things down.
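OpenCL C exposes this fast path directly through the mul24/mad24 built-ins, which only promise 24 bits of integer precision and can therefore map onto the float ALU. A minimal sketch (the kernel name and buffer layout here are just illustrative, not from the question):

// Hypothetical kernel: index arithmetic kept within 24 bits so the
// compiler can use the fast 24-bit integer path of the float ALU.
kernel void scale_rows(global float* data, const int width) {
    const int row = get_global_id(0);
    const int col = get_global_id(1);
    // mad24(row, width, col) == row * width + col, valid while the
    // operands fit in 24 bits (indices below roughly 16.7 million)
    const int idx = mad24(row, width, col);
    data[idx] *= 2.0f;
}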

How do I plan this least-squares computation on a GPU?

I'm writing a program called PerfectTIN (https://github.com/phma/perfecttin) which does lots of least-squares adjustments of a TIN to fit a point cloud. Each adjustment takes some contiguous group of triangles and adjusts the elevations of up to 8 points, which are corners of the triangles, to fit the dots in the triangles. I have it working on SMP. At the start of processing, it does only one adjustment at a time, so it splits the adjustment into tasks, each of which takes some dots, all of which are in the same triangle. Each thread takes a task from a queue and computes a small square matrix and a small column vector. When they're all ready, the adjustment routine adds up the matrices and the vectors and finishes the least squares computation.
I'd like to process tasks on the GPU as well as the CPU. The data needed for a task are
The three corners of the triangle (x,y,z)
The coordinates of the dots (x,y,z).
The output data are
A symmetric matrix with up to nine nonzero entries (since it's symmetric, I need only compute six numbers)
A column vector with the same number of rows.
The number of dots is a multiple of 1024, except for a few tasks which I can handle in the CPU (the total number of dots in a triangle can be any nonnegative integer). For a fairly large point cloud of 56 million dots, some tasks are larger than 131072 dots.
Here is part of the output of clinfo (if you need other parts, let me know):
Platform Name Clover
Number of devices 1
Device Name Radeon RX 590 Series (POLARIS10, DRM 3.33.0, 5.3.0-7625-generic, LLVM 9.0.0)
Device Vendor AMD
Device Vendor ID 0x1002
Device Version OpenCL 1.1 Mesa 19.2.8
Driver Version 19.2.8
Device OpenCL C Version OpenCL C 1.1
Device Type GPU
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Max compute units 36
Max clock frequency 1545MHz
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Preferred work group size multiple 64
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 2 / 2
half 8 / 8 (cl_khr_fp16)
float 4 / 4
double 2 / 2 (cl_khr_fp64)
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
If I understand right, if I put one dot in each core of the GPU, the total number of dots I can process at once is 36×256=9×1024=9216. Could I put four dots in each core, since a work group would then have 1024 dots? In this case I could process 36864 dots at once. How many dots should each core process? What if a task is bigger than the GPU can hold? What if several tasks (possibly from different triangles) fit in the GPU?
Of course I want this code to run on other GPUs than mine. I'm going to use OpenCL for portability. What different GPUs (description rather than name) am I going to encounter?
If I understand right, if I put one dot in each core of the GPU, the total number of dots I can process at once is 36×256=9×1024=9216.
Not quite. The total number of dots is not limited by the maximum work group size. The work group size is the number of GPU threads working synchronously in a group. Within a work group, you can share data via local memory, which can be useful to speed up certain calculations, for example matrix multiplication (cache tiling).
The idea of GPU parallelization is to split the problem up into as many independent parts as possible. In C++, something like
void example(float* data, const int N) {
    for (int n = 0; n < N; n++) {
        data[n] += 1.0f;
    }
}
in OpenCL becomes this
kernel void example(global float* data) {
    const int n = get_global_id(0);
    data[n] += 1.0f;
}
where the global range of the kernel is set to N.
Each GPU thread should process only a single dot. The number of threads (global range) can and should be much larger than the number of GPU cores available. If you don't explicitly need local memory (all threads can work independently), you can set the local work group size to 32, 64, 128 or 256 - it doesn't matter much, but there might be some performance difference between these values. However, the global range (total number of threads / points) must be a multiple of the work group size.
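As a hedged host-side sketch of those two sizes (error handling omitted; queue, kernel and N are assumed to already exist, and N is assumed to be padded to a multiple of the local size):

// One work item per dot; local size of 64 matches the preferred
// work group size multiple reported by clinfo above.
size_t global_size = N;
size_t local_size  = 64;
cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                    &global_size, &local_size,
                                    0, NULL, NULL);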
What if a task is bigger than the GPU can hold?
I assume you mean the case where your data set does not fit into video memory all at once. In that case you can do the computation in several batches, exchanging the GPU buffers via PCIe transfers between batches. But that comes at a large performance penalty.
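A rough sketch of such batching (all names - dev_dots, host_dots, batch_dots and so on - are placeholders for your own buffers and sizes, and error checks are omitted):

// Process the point cloud in chunks that fit in the device buffer.
for (size_t offset = 0; offset < total_dots; offset += batch_dots) {
    size_t this_batch = (total_dots - offset < batch_dots)
                        ? total_dots - offset : batch_dots;
    // PCIe upload of this chunk (blocking call for simplicity)
    clEnqueueWriteBuffer(queue, dev_dots, CL_TRUE, 0,
                         this_batch * sizeof(cl_float3),
                         host_dots + offset, 0, NULL, NULL);
    size_t global = this_batch;   // assumed padded to the work group size
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL,
                           0, NULL, NULL);
    // PCIe download of the partial results for this chunk
    clEnqueueReadBuffer(queue, dev_result, CL_TRUE, 0, result_bytes,
                        host_result, 0, NULL, NULL);
}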
Of course I want this code to run on other GPUs than mine. I'm going to use OpenCL for portability. What different GPUs (description rather than name) am I going to encounter?
OpenCL is excellent for portability across devices and operating systems. Other than AMD GPUs and CPUs, you will encounter Nvidia GPUs, which only support OpenCL 1.2, and Intel GPUs and CPUs. If the graphics drivers are installed, your code will run on all of them without issues. Just be aware that the amount of video memory can be vastly different.

OpenCL: Confused by CL_DEVICE_MAX_COMPUTE_UNITS

I'm confused by CL_DEVICE_MAX_COMPUTE_UNITS. For instance, for my Intel GPU on Mac, this number is 48. Does this mean the max number of parallel tasks running at the same time is 48, or a multiple of 48, maybe 96, 144...? (I know each compute unit is composed of 1 or more processing elements and each processing element is actually in charge of a "thread". What if each of the 48 compute units is composed of more than one processing element?) In other words, for my Mac, is the "ideal" speedup, although impossible in reality, 48 times faster than a CPU core (assuming the single-"core" computation speed of the CPU and GPU is the same), or a multiple of 48, maybe 96, 144...?
Summary: Your speedup is a little complicated, but your machine's (Intel GPU, probably GEN8 or GEN9) fp32 throughput is 768 FLOPs per (GPU) clock, and 1536 for fp16. Let's assume fp32, so the ceiling is something less than 768x (maybe a third of that, depending on CPU speed). See below for the reasoning and some very important caveats.
A Quick Aside on CL_DEVICE_MAX_COMPUTE_UNITS:
Intel does something wonky with CL_DEVICE_MAX_COMPUTE_UNITS in its GPU driver.
The clGetDeviceInfo documentation (OpenCL 2.0) says of CL_DEVICE_MAX_COMPUTE_UNITS:
The number of parallel compute units on the OpenCL device. A
work-group executes on a single compute unit. The minimum value is 1.
However, the Intel Graphics driver does not actually follow this definition and instead returns the number of EUs (Execution Units). An EU is a grouping of the SIMD ALUs plus slots for 7 different SIMD threads (registers and so on). Each SIMD thread represents 8, 16, or 32 work items depending on what the compiler picks (we want higher, but register pressure can force us lower).
A workgroup is actually limited to a "Slice" (see the figure in section 5.5 "Slice Architecture"), which happens to be 24 EUs in recent hardware. Pick the GEN8 or GEN9 documents. Each slice has its own SLM, barriers, and L3. Given that your Apple machine is reporting 48 EUs, I'd say you have two slices.
Maximum Speedup:
Let's ignore this major annoyance and work with the EU number (and with those arch docs above). For "speedup" I'm comparing against a single-threaded FP32 calculation on the CPU. With good parallelization etc. on the CPU, the speedup would be less, of course.
Each of the 48 EUs can issue two SIMD4 operations per clock in ideal circumstances. Assuming those are fused multiply-adds (so really two FLOPs per lane), that gives us:
48 EUs * 2 SIMD4 ops per EU * 2 (FLOPs per lane for a fused multiply-add)
= 192 SIMD4 "ops" per clock
= 192 * 4 lanes = 768 FLOPs per clock for single-precision floating point
So your ideal speedup is actually ~768. But there's a bunch of things that chip into this ideal number.
Setup and teardown time. Let's ignore this (assume the workload (WL) time dominates the runtime).
The GPU clock maxes out around a gigahertz while the CPU runs faster. Factor that ratio in (crudely 1/3 maybe? 3 GHz on the CPU vs 1 GHz on the GPU).
If the computation is not heavily multiply-add ("mad") dominated, divide by 2, since I doubled above. Many important workloads are "mad"-dominated, though.
The execution must be mostly non-divergent. If a SIMD thread branches into an if-then-else, the entire SIMD thread (8, 16, or 32 work items) has to execute both paths (see the sketch after this list).
Register bank collision delays can reduce EU ALU throughput. Typically the compiler does a great job avoiding this, but it can theoretically chew into your performance a bit (usually a few percent depending on register pressure).
Buffer address calculation can chew off a few percent too (the EU must spend time doing integer compute to read and write addresses).
If one uses too much SLM or too many barriers, the GPU must leave some of the EUs idle so that there's enough SLM for each work item on the machine. (You can tweak your algorithm to fix this.)
We must keep the WL compute bound. If we blow out any cache in the data access hierarchy, we run into scenarios where no thread is ready to run on an EU and it must stall. Assume we avoid this.
I'm probably forgetting other things that can go wrong.
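To illustrate the divergence point from the list above, a tiny sketch (kernel name made up): when the lanes of a SIMD thread disagree on the condition, the thread steps through both paths, with the non-participating lanes masked off.

kernel void divergence_demo(global float* data) {
    const int i = get_global_id(0);
    if (data[i] > 0.0f) {        // if lanes disagree here...
        data[i] = sqrt(data[i]); // ...the SIMD thread runs this path
    } else {
        data[i] = 0.0f;          // ...and then this one, lanes masked
    }
}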
We call efficiency the percentage of the theoretical peak achieved. So if our workload runs at ~530 FLOPs per clock, we are about 69% efficient relative to the theoretical 768. I've seen very carefully tuned workloads exceed 90% efficiency, but it definitely can take some work.
The ideal speedup you can get is the total number of processing elements, which in your case corresponds to 48 × the number of processing elements per compute unit. I do not know of a way to get the number of processing elements from OpenCL (that does not mean it is not possible); however, you can just google it for your GPU.
To my knowledge, a compute unit consists of one or more processing elements (for GPUs, usually a lot), a register file, and some local memory. The threads of a compute unit are executed in a SIMD (single instruction, multiple data) fashion. This means that the threads of a compute unit all execute the same operation, but on different data.
Also, the speedup you get depends on how you execute the kernel function. Since a single work-group cannot run on multiple compute units, you need a sufficient number of work-groups to fully utilize all of the compute units. In addition, the work-group size should be a multiple of CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, as shown in the sketch below.
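You can query that value at run time with clGetKernelWorkGroupInfo; a minimal sketch (assuming a kernel and device handle already exist):

// Ask the driver for the preferred work group size multiple of this kernel.
size_t preferred_multiple = 0;
clGetKernelWorkGroupInfo(kernel, device,
                         CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                         sizeof(preferred_multiple), &preferred_multiple, NULL);
// Then pick a local work size that is a multiple of preferred_multiple.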

Intel Gen8 architecture calculating total kernel instances per execution unit

I am taking the reference from the intel_gen8_arch document.
A few sections are causing confusion in my understanding of the SIMD engine concept.
5.3.2 SIMD FPUs
Within each EU, the primary computation units are a pair of SIMD floating-point units (FPUs).
Although called FPUs, they support both floating-point and integer computation. These units
can SIMD execute up to four 32-bit floating-point (or integer) operations, or SIMD execute up to
eight 16-bit integer or 16-bit floating-point operations. Each SIMD FPU can complete simultaneous add and multiply
(MAD) floating-point instructions every cycle. Thus each EU is capable of 16 32-bit floating-point
operations per cycle: (add + mul) x 2 FPUs x SIMD-4.
The above lines of the documents clearly states the maximum floating point operations that can be done on each Execution Unit.
First doubt:
I think it is referring to each hardware thread of the Execution Unit rather than the whole Execution Unit.
In section 5.3.5 it mentions
On Gen8 compute architecture, most SPMD programming models employ this style code
generation and EU processor execution. Effectively, each SPMD kernel instance appears to
execute serially and independently within its own SIMD lane. In actuality, each thread executes
a SIMD-Width number of kernel instances concurrently. Thus for a SIMD-16 compile of a
compute kernel, it is possible for SIMD-16 x 7 threads = 112 kernel instances to be executing
concurrently on a single EU. Similarly, for a SIMD-32 compile of a compute kernel, 32 x 7
threads = 224 kernel instances could be executing concurrently on a single EU.
Now the illustration in this section seems to contradict section 5.3.2.
Specifically:
1) Since it says each HW thread of an EU has two SIMD-4 units, how does SIMD-16 work? How do we arrive at the figure of 224 kernel instances on 7 threads?
Also, how do we compile the kernel in SIMD-16 or SIMD-32 mode?
Section 5.3.2 is indeed saying that each EU can perform 16 32-bit ops per cycle.
Each EU has two FPUs, each of which can do 4 ops per cycle.
2 pipes * 4 ops per pipe * 2 (since MADs are add+mul) = 16 ops per cycle
There are 7 threads per EU (see figure 3), but each cycle the EU can only choose instructions from two of the 7 that are ready (one instruction for each pipe).
As Mai alluded to above, think of a SIMD16 instruction as 4 of those SIMD4 ops. Hence, it takes 4 cycles to complete one. A SIMD32 instruction will take 8 cycles through those same SIMD4 pipes. So regardless of the SIMD width, the machine throughput is the same (theoretically). "Wider" SIMD just means you use more registers and fewer threads per workload.
There's no easy way to choose the kernel compilation width (SIMD8, SIMD16, or SIMD32), and you probably don't want to do that for most workloads. Nevertheless, there is an Intel extension your driver might support, cl_intel_subgroups, that lets you control the thread width. (You have to annotate the kernel with a special attribute.) This can be useful if you want SIMD channels (lanes) to share data with each other directly (without extra trips through SLM or global memory).
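If the driver advertises cl_intel_subgroups (and cl_intel_required_subgroup_size, which provides the attribute), a hedged sketch of what that annotation looks like:

#pragma OPENCL EXTENSION cl_intel_subgroups : enable

// Request a SIMD-16 compile and let lanes exchange data directly.
__attribute__((intel_reqd_sub_group_size(16)))
kernel void lane_shuffle(global float* data) {
    const int i = get_global_id(0);
    float v = data[i];
    // read the value held by lane 0 of this subgroup, no SLM round trip
    float lane0_v = intel_sub_group_shuffle(v, 0);
    data[i] = v - lane0_v;
}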
Also check out this presentation from IDF. Slides 80-87 illustrate the mapping from compiler SIMD (e.g. SIMD32 or SIMD16) to the EUs.

How to calculate peak FLOPS in GPGPU hardware?

I want to calculate the theoretical peak performance of graphics hardware. Well, actually I want to understand the calculation.
Example with an AMD Radeon HD 6670:
The AMD Accelerated Parallel Processing Programming Guide (http://developer.amd.com/download/AMD_Accelerated_Parallel_Processing_OpenCL_Programming_Guide.pdf) tells me in the middle of page 6-42 to take the number of Stream Cores (96), multiply it by the number of operations per cycle for each Stream Core (let's take an ADD with single precision, which would be 5), and multiply that by the core clock (800 MHz). That results in:
96 * 5 FLOPS * 800MHz = 384,000 MFLOPS = 384 GFLOPS
The very same document tells me on page D-4 that this particular device has a peak throughput of 768 GFLOPS, which is twice of what I just calculated. Wikipedia and the AMD homepage state the same.
So my question is: Where am I missing the factor of two?
I am not sure about AMD hardware, but I remember that NVIDIA counted a MAD (multiply-add) operation as two FLOPs. Since MADs are performed in one cycle, the theoretical performance is multiplied by two.
480 processing elements * 2 operations per cycle (one addition pipeline + one multiplication pipeline per element) * 800 MHz = 768 GFLOPS
When the code has too many levels of branching, it drops to 1-4 shaders per compute unit, which means 6-24 of them in total, and this translates to as low as 10-40 GFLOPS, which is even slower than a single CPU core.
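If you want to estimate the peak yourself at run time, the compute unit count and clock can be queried from OpenCL; the FLOPs per compute unit per clock is the part you have to know for your architecture (the factor below is only an assumed placeholder, not something the API can tell you):

cl_uint cus = 0, mhz = 0;
clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cus), &cus, NULL);
clGetDeviceInfo(device, CL_DEVICE_MAX_CLOCK_FREQUENCY, sizeof(mhz), &mhz, NULL);
// Architecture-specific assumption: for a VLIW5 compute unit with 16 stream
// cores, 16 * 5 elements * 2 (MAD counted as two ops) = 160 FLOPs per clock.
double flops_per_cu_per_clock = 160.0;
double peak_gflops = cus * flops_per_cu_per_clock * (mhz * 1e6) / 1e9;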

Work_dim in NDRange

I cannot understand what work_dim is for in clEnqueueNDRangeKernel().
So, what is the difference between work_dim=1 and work_dim=2?
And why work items are grouped into work groups?
Is a work item or a work group a thread running on the device (or neither)?
Thanks ahead!
work_dim is the number of dimensions for the clEnqueueNDRangeKernel() execution.
If you specify work_dim = 1, then the global and local work sizes are one-dimensional. Thus, inside the kernels you can only access info in the first dimension, e.g. get_global_id(0), etc.
If you specify work_dim = 2 or 3, then you must also specify 2- or 3-dimensional global and local work sizes; in that case, you can access info inside the kernels in 2 or 3 dimensions, e.g. get_global_id(1) or get_group_id(2).
In practice you can do everything in 1D, but when dealing with 2D or 3D data it may be simpler to use 2/3-dimensional kernels directly; for example, in the case of 2D data such as an image, if each thread/work-item is to deal with a single pixel, each thread/work-item could deal with the pixel at coordinates (x,y), with x = get_global_id(0) and y = get_global_id(1), as in the sketch below.
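A minimal 2D kernel over an image-like buffer, enqueued with work_dim = 2 (the kernel name and the brightness offset are just illustrative):

kernel void brighten(global uchar* pixels, const int width) {
    const int x = get_global_id(0);   // column
    const int y = get_global_id(1);   // row
    const int idx = y * width + x;    // row-major pixel index
    pixels[idx] = (uchar)min((int)pixels[idx] + 10, 255);
}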
A work-item is a thread, while work-groups are groups of work-items/threads.
I believe the division into work-groups / work-items is related to the hardware architecture of GPUs and other accelerators (e.g. Cell/BE); you can map the execution of work-groups to GPU Streaming Multiprocessors (in NVIDIA parlance) or SPUs (in IBM/Cell parlance), while the corresponding work-items would run inside the execution units of the Streaming Multiprocessors and/or SPUs. It's not uncommon to have work group size = 1 if you are executing kernels on a CPU (e.g. for a quad-core, you would have 4 work groups, each with one work item - though in my experience it's usually better to have more work groups than CPU cores).
Check the OpenCL reference manual, as well as the OpenCL manual for whichever device you are programming. The quick reference card is also very helpful.
