OpenCL kernel compiler optimisations - opencl

I'm using OpenCL on OS X, and I was wondering if someone could tell me which compiler is used to generate the GPU binary from the OpenCL kernel source code. On OS X, is the OpenCL kernel compiled to LLVM first, then optimized, and finally compiled to GPU native code? I was also wondering whether the OpenCL kernel compiler performs optimisations on the kernel, such as loop-invariant code motion.

Yes, on Mac OS X all OpenCL code is compiled to LLVM IR, which is then run through device-specific optimization and code generation.
You can generate LLVM bitcode files offline and use the result with clCreateProgramWithBinary. The openclc compiler lives inside the OpenCL framework (/System/Library/Frameworks/OpenCL.framework/Libraries/openclc). You need these options (arch can be i386, x86_64, or gpu_32):
openclc -c -o foo.bc -arch gpu_32 -emit-llvm foo.cl
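Once you have foo.bc, the host side looks roughly like this (a minimal sketch, most error handling omitted; the file name and the single-GPU assumption are just for illustration):
/* Minimal sketch: hand an offline-built bitcode file to the OpenCL runtime. */
#include <stdio.h>
#include <stdlib.h>
#include <OpenCL/opencl.h>

int main(void)
{
    /* Read the bitcode produced by openclc. */
    FILE *f = fopen("foo.bc", "rb");
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    rewind(f);
    unsigned char *bits = malloc(len);
    fread(bits, 1, len, f);
    fclose(f);

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* The runtime still runs device-specific codegen on the bitcode. */
    cl_int binary_status;
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &len,
        (const unsigned char **)&bits, &binary_status, &err);
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    printf("binary status: %d, build result: %d\n", binary_status, err);

    free(bits);
    return 0;
}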

Related

How to compile opencl-kernel-file(.cl) to LLVM IR

This question is related to LLVM/Clang.
I already know how to compile an OpenCL kernel file (.cl) using the OpenCL API (clBuildProgram() and clGetProgramBuildInfo()).
My question is this:
How do I compile an OpenCL kernel file (.cl) to LLVM IR with OpenCL 1.2 or higher?
In other words, how do I compile an OpenCL kernel file (.cl) to LLVM IR without libclc?
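For context, the online path I already use looks roughly like this (a simplified sketch, most error handling omitted):
/* Simplified sketch of the online compile path via the OpenCL API. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

cl_program build_online(cl_context ctx, cl_device_id dev, const char *src)
{
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

    if (clBuildProgram(prog, 1, &dev, "-cl-std=CL1.2", NULL, NULL) != CL_SUCCESS) {
        /* On failure, fetch and print the driver compiler's build log. */
        size_t log_size;
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
        char *log = malloc(log_size);
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
        fprintf(stderr, "%s\n", log);
        free(log);
    }
    return prog;
}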
I have tried various methods to get the LLVM IR of an OpenCL kernel file.
I first followed the Clang user manual (https://clang.llvm.org/docs/UsersManual.html#opencl-features), but it did not work.
Secondly, I found a way to use libclc.
The commands are:
clang++ -emit-llvm -c -target nvptx64-nvidia-nvcl -Dcl_clang_storage_class_specifiers -include /usr/local/include/clc/clc.h -fpack-struct=64 -o "$#".bc "$#"
llvm-link "$#".bc /usr/local/lib/clc/nvptx64--nvidiacl.bc -o "$#".linked.bc
llc -mcpu=sm_52 -march=nvptx64 "$#".linked.bc -o "$#".nvptx.s
This method worked fine, but since libclc was built against the OpenCL 1.1 specification, it cannot be used with OpenCL 1.2 or later code, such as code using printf.
Also, this method uses libclc, which implements the OpenCL built-in functions as ordinary functions. You can see in the assembly (PTX) of the resulting OpenCL binary that it performs an actual function call instead of inlining the operation. I am concerned that this will affect GPU behavior and performance, such as execution time.
So now I am looking for a way to replace compilation using libclc.
As a last resort, I'm considering using libclc with the NVPTX backend and AMDGPU backend of LLVM.
But if there is already another way, I want to use it.
(I expect that the OpenCL front-end I have not found yet exists in clang)
My program's scenario is:
1. There is an OpenCL kernel source file (.cl).
2. Compile the file to LLVM IR.
3. Apply IR-level processing to the IR.
4. Compile (using llc) the IR to a binary for each GPU target (nvptx, amdgcn, ...).
5. Using the binary, run the host (.c or .cpp linked with the OpenCL library) via clCreateProgramWithBinary().
Right now, when I compile the kernel source file to LLVM IR, I have to include the libclc header (the -include option in the first command above) to compile the built-in functions, and I have to link the libclc libraries before compiling the IR to a binary.
My environment:
GTX 960
- NVIDIA's binary appears in nvptx format
- I'm using sm_52 nvptx for my GPU
Ubuntu Linux 16.04 LTS
LLVM/Clang 5.0.0
- If there is another way, I am willing to change the LLVM version.
Thanks in advance!
Clang 9 (and up) can compile OpenCL kernels written in the OpenCL C language. You can tell Clang to emit LLVM IR by passing the -emit-llvm flag (add -S to output the IR in text rather than bitcode form), and specify which version of the OpenCL standard to target with e.g. -cl-std=CL2.0. Clang currently supports up to OpenCL 2.0.
By default, Clang will not add the standard OpenCL headers, so if your kernel uses any of the OpenCL built-in functions you may see an error like the following:
clang-9 -c -x cl -emit-llvm -S -cl-std=CL2.0 my_kernel.cl -o my_kernel.ll
my_kernel.cl:17:12: error: implicit declaration of function 'get_global_id' is invalid in OpenCL
int i = get_global_id(0);
^
1 error generated.
You can tell Clang to include the standard OpenCL headers by passing the -finclude-default-header flag to the Clang frontend, e.g.
clang-9 -c -x cl -emit-llvm -S -cl-std=CL2.0 -Xclang -finclude-default-header my_kernel.cl -o my_kernel.ll
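For a quick sanity check, a trivial kernel like the following (a made-up my_kernel.cl, not the one from your error message) compiles cleanly with the command above once the default header is included:
/* my_kernel.cl - minimal kernel that uses an OpenCL built-in function. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    int i = get_global_id(0);  /* declared by the default header */
    c[i] = a[i] + b[i];
}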
(I expect that the OpenCL front-end I have not found yet exists in clang)
There is an OpenCL front-end in Clang - and you're using it, otherwise you couldn't compile a single line of OpenCL with Clang. The frontend is the part of Clang that recognizes the OpenCL language. There is no OpenCL backend of any kind in LLVM; that's not LLVM's job - it's the job of the various OpenCL implementations to provide the proper libraries. Clang+LLVM just recognizes the language and compiles it to bitcode and machine binaries; that's all it does.
in the assembly(ptx) of result opencl binary, it goes straight to the function call instead of converting it to an inline assembly.
You could try linking against a different library instead of libclc, if you can find one. Perhaps NVIDIA's CUDA ships some bitcode libraries somewhere, but then again there are licensing issues... By the way, are you 100% sure you need LLVM IR? Getting OpenCL binaries through the OpenCL runtime, or using SPIR-V, might give you faster binaries and would certainly be less painful to work with. Even if you manage to produce nice LLVM IR, you'll need a runtime that actually accepts it (I could be wrong, but I doubt the proprietary AMD/NVIDIA OpenCL implementations will just accept arbitrary LLVM IR as input).
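If you do go the SPIR-V route, the host side would look roughly like this (an untested sketch; clCreateProgramWithIL is core only in OpenCL 2.1+, and older 2.0 runtimes expose it as clCreateProgramWithILKHR via the cl_khr_il_program extension):
/* Sketch: feed a SPIR-V module to an OpenCL 2.1+ runtime. */
#define CL_TARGET_OPENCL_VERSION 210
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

cl_program create_from_spirv(cl_context ctx, const char *path)
{
    FILE *f = fopen(path, "rb");
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    rewind(f);
    void *il = malloc(len);
    fread(il, 1, len, f);
    fclose(f);

    cl_int err;
    cl_program prog = clCreateProgramWithIL(ctx, il, len, &err);
    if (err != CL_SUCCESS)
        fprintf(stderr, "clCreateProgramWithIL failed: %d\n", err);
    free(il);
    return prog;  /* still needs clBuildProgram before kernels can be created */
}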
Clang does not provide a standard CL declaration header file (the analogue of C's stdio.h), which is why you're getting "undefined type float" and similar errors.
If you get such a header, you can mark it as an implicit include using "clang -include cl.h -x cl [your filename here]".
One such declaration header can be retrieved from the reference OpenCL compiler implementation at
https://github.com/KhronosGroup/SPIR-Tools/blob/master/headers/opencl_spir.h
And by the way, consider using this compiler, which generates SPIR (albeit 1.0) that can be fed into OpenCL drivers as input.

Confusion about compiling with AVX512

I'm reading this document about how to compile C/C++ code using the Intel C++ compiler with AVX-512 support on an Intel Knights Landing.
However, I'm a little bit confused about this part:
-xMIC-AVX512: use this option to generate AVX-512F, AVX-512CD, AVX-512ER and AVX-512PF.
-xCORE-AVX512: use this option to generate AVX-512F, AVX-512CD, AVX-512BW, AVX-512DQ and AVX-512VL.
For example, to generate Intel AVX-512 instructions for the Intel Xeon Phi processor x200, you should use the option -xMIC-AVX512. For example, on a Linux system:
$ icc -xMIC-AVX512 application.c
This compiler option is useful when you want to build a huge binary for the Intel Xeon Phi processor x200. Instead of building it on the coprocessor where it will take more time, build it on an Intel Xeon processor-based machine.
My Xeon Phi KNL doesn't have a coprocessor (no need to ssh into micX or to compile with the -mmic flag). However, I don't understand whether it's better to use -xMIC or -xCORE.
Secondly, about -ax instead of -x:
This compiler option is useful when you try to build a binary that can run on multiple platforms.
So -ax is used for cross-platform support, but is there any performance difference compared to -x?
For the first question, please use -xMIC-AVX512 if you want to compile for the Intel Xeon Phi processor x200 (aka the KNL processor). Note that the phrase in the paper that you mentioned was mistyped; it should read "This compiler option is useful when you want to build a huge binary for the Intel Xeon Phi processor x200. Instead of building it on the Intel Xeon Phi processor x200 where it will take more time, build it on an Intel Xeon processor-based machine."
For the second question, there should not be a performance difference if you run the binaries on an Intel Xeon Phi processor x200. However, the size of the binary compiled with -ax should be bigger than the one compiled with the -x option.
Another option from the link you provided is to build with -xCOMMON-AVX512. This is a tempting option because in my case it has all the instructions that I need, and I can use the same option for both a KNL and a Skylake-AVX512 system. Since I don't build on a KNL system I cannot use -xHost (or -march=native with GCC).
However, -xCOMMON-AVX512 should NOT be used with KNL. The reason is that it generates the vzeroupper instruction (https://godbolt.org/z/PgFX55), which is not only unnecessary, it is actually very slow on a KNL system.
Agner Fog writes in the KNL section of his micro-architecture manual:
The VZEROALL or VZEROUPPER instructions are not only superfluous here, they are actually
harmful for the performance. A VZEROALL or VZEROUPPER instruction takes 36 clock cycles
in 64 bit mode...
Therefore, for a KNL system you should use -xMIC-AVX512; for other systems with AVX-512 you should use -xCORE-AVX512 (or -xSKYLAKE-AVX512). I use -qopt-zmm-usage=high as well.
I am not aware of a switch for ICC to disable vzeroupper once it is enabled (with GCC you can use -mno-vzeroupper).
Incidentally, by the same logic you should use -march=knl with GCC and not -mavx512f (-mavx512f -mno-vzeroupper may work if you are sure you don't need AVX512ER or AVX512PF).
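To make this concrete, the flags above mostly matter for loops the compiler auto-vectorizes, e.g. a toy function like the one below: with -xMIC-AVX512 (or -march=knl under GCC) the loop gets vectorized with 512-bit zmm registers, while with -xCOMMON-AVX512 the compiler may additionally emit the vzeroupper instruction around it, which is exactly what hurts on KNL.
/* Toy example of a loop the compiler can vectorize with AVX-512. */
void saxpy(float a, const float *restrict x, float *restrict y, int n)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}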

Is there an offline OpenCL compiler (for NVIDIA graphics cards)?

The normal way to run an OpenCL program is to include the OpenCL kernel source, which is compiled at runtime (online compilation).
But I've seen examples of compiling OpenCL to a binary beforehand, called offline compilation. I'm aware of the disadvantages (reduced compatibility across hardware).
There used to be an offline compiler at http://www.fixstars.com/en/ but it does not seem to be available anymore.
So is there an offline compiler for OpenCL available, in particular for NVIDIA-based cards?
Someone suggested that nvcc.exe in NVidia's CUDA SDK could compile .cl files with
nvcc -x cu file.cl -o file.out -lOpenCL
...but it complains about a missing cl.exe, at least on Windows. This might be worth checking out, however: http://clcc.sourceforge.net/
As well as:
https://github.com/HSAFoundation/CLOC (AMD-maintained offline compiler)
https://github.com/Maratyszcza/clcc (includes also links to above ones and more)
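A practical middle ground, if the goal is simply to avoid recompiling the kernel at every run: build once through the normal OpenCL runtime on the target machine, then save the device binary that clGetProgramInfo returns and reload it later with clCreateProgramWithBinary (on NVIDIA the returned "binary" is typically PTX). A rough single-device sketch, error handling omitted:
/* Rough sketch: cache the device binary after one online build. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

void save_program_binary(cl_program prog, const char *path)
{
    /* Assumes the program was built for exactly one device. */
    size_t size;
    clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES, sizeof(size), &size, NULL);

    unsigned char *binary = malloc(size);
    unsigned char *binaries[1] = { binary };
    clGetProgramInfo(prog, CL_PROGRAM_BINARIES, sizeof(binaries), binaries, NULL);

    FILE *f = fopen(path, "wb");
    fwrite(binary, 1, size, f);
    fclose(f);
    free(binary);
    /* On later runs, read the file back and pass it to clCreateProgramWithBinary. */
}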

OpenCL Simple "Hello World!" program compiles correctly but spits out garbage when executed

As the title suggests, I have copied verbatim the hello.cl and hello.c files from Fixstars' online OpenCL book at http://www.fixstars.com/en/opencl/book/OpenCLProgrammingBook/first-opencl-program.html, and cannot get correct output.
I compile the program with
gcc -lOpenCL hello.c -o hello
and execute it normally with
./hello
But my output reads something like
���.
I run Arch Linux and have installed OpenCL, the headers, and the NVIDIA implementation. I would like to continue learning OpenCL but simply cannot continue if my programs won't run. Does anyone have any idea what is occurring? Additionally, if anyone has any advice on how to debug this, I would be immensely happy.
EDIT: I was using Nouveau drivers instead of the Nvidia ones. Nouveau does not support OpenCL. This was the problem.
Nouveau does NOT support OpenCL yet. Replace nouveau with nvidia and check to make sure libcl, libcl-headers, and opencl-nvidia are all correctly installed.
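For debugging this kind of setup problem in general, a small listing program along these lines (assuming the OpenCL headers and an ICD loader are installed) shows what the runtime can actually see; with only nouveau installed you would most likely get zero platforms back:
/* clinfo-lite: list the platforms and devices the OpenCL runtime can see. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    printf("%u platform(s) found\n", num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
        printf("platform %u: %s, %u device(s)\n", p, name, num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dev_name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dev_name), dev_name, NULL);
            printf("  device %u: %s\n", d, dev_name);
        }
    }
    return 0;
}
Compile it the same way as the hello example (gcc list_devices.c -o list_devices -lOpenCL) and run it before and after switching drivers.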

How can a recent version of GCC (4.6) be used together with Qt under Mac OS?

My problem is related to the one discussed here:
Is there a way that OpenMP can operate on Qt spawned threads?
When I tried to run my Qt-based program, which uses an OpenMP clause in a secondary thread, under Mac OS, it crashed. After browsing the web, I now understand that the crash is caused by a bug in the rather old version (4.2) of GCC supplied by Apple.
Then I downloaded the latest 4.6 version of gcc from http://hpc.sourceforge.net and tried to compile the project, but I got the following errors from g++ compiler:
unrecognized option ‘-arch’
unrecognized option ‘-Xarch_x86_64’
I learned that this is because these options can only be interpreted by Apple's custom-configured GCC, not by a standard GCC build.
Could anybody please help me overcome this issue and configure g++ 4.6 for use with Qt so that I get bug-free OpenMP support? I admit that I'm a newbie to compilers and programming on the Mac OS platform and would like to port my code over from a Visual Studio/Qt environment.
Many thanks in advance!
If you aren't afraid of messing with your Qt installation, then change the QMAKE_CFLAGS_X86_64 entry in ~/QtSDK/Desktop/Qt/4.8.1/gcc/mkspecs/common/g++-macx.conf.
Replace ‘-Xarch_x86_64’ with ‘-arch x86_64’.
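For example, the edited line in g++-macx.conf ends up looking something like this (the exact set of surrounding flags depends on your Qt version, so treat this as illustrative only):
# mkspecs/common/g++-macx.conf (illustrative)
# before:
#   QMAKE_CFLAGS_X86_64 += -Xarch_x86_64 ...other flags...
# after:
QMAKE_CFLAGS_X86_64 += -arch x86_64 ...other flags...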
You can use your non-Apple GCC 4.6 to compile a binary for each architecture you want to support (using --target=${ARCH} should be fine for i386 and x86_64). Then, once you have a binary for each architecture, use lipo like so:
lipo -create -arch i386 binary_32bit -arch x86_64 binary_64bit -output binary_universal
This will create a fat binary (aka universal binary) named binary_universal from binary_32bit and binary_64bit.
Or you could use clang/llvm instead of gcc, which probably won't have the bug you described and (if supplied via Apple's developer tools) should be able to compile universal binaries directly.
You should run qmake with the corresponding -spec option; for example, to use gcc46 on FreeBSD you would run qmake like this:
qmake -spec freebsd-g++46
Lipo can indeed be used to put multiple object files together into a "fat" object file; in fact, it turns out this is just what Apple's compiler does. Their GCC is actually a driver that maps each architecture to the appropriate compiler for that architecture and then mashes the objects together using lipo.
see: http://lists.macosforge.org/pipermail/macports-dev/2011-September/016210.html
Here is the source file for that driver:
http://opensource.apple.com/source/gcc/gcc-5666.3/driverdriver.c
All one needs to do to get a new version of GCC to honor the -arch flag is to modify this driver and point it at a wrapper script for your version of GCC that adds the appropriate flags for the given architecture and then passes along the rest of the arguments. Something like this:
#!/bin/sh
/opt/local/bin/gcc-mp-4.6 -m32 "$@"
and
#!/bin/sh
/opt/local/bin/gcc-mp-4.6 -m64 "$@"
Here is a link that talks about how to do it, and provides a cmake project to easily get the macports version of GCC fixed up and supporting the -arch flag for the two intel architectures:
http://thecoderslife.blogspot.com/2015/07/building-with-gcc-46-and-xcode-4.html
