OpenCL, Vulkan, SYCL

I am trying to understand the OpenCL ecosystem and how Vulkan comes into play.
I understand that OpenCL is a framework to execute code on GPUs as well as CPUs, using kernels that may be compiled to SPIR.
Vulkan can also be used as a compute API, using the same SPIR language.
SYCL is a new specification that allows writing OpenCL code as proper standard-conforming C++14. It is my understanding that there are no free implementations of this specification yet.
Given that,
How does OpenCL relate to Vulkan? I understand that OpenCL is higher level and abstracts the devices, but does (or could) it use Vulkan internally, instead of relying on vendor-specific drivers?
Vulkan is advertised as both a compute and graphics API, however I found very few resources for the compute part. Why is that?
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL? (OpenCL is sadly notorious for being slower than CUDA.)
Does SYCL use OpenCL internally, or could it use Vulkan? Or does it use neither and instead rely on low-level, vendor-specific APIs to be implemented?

How does OpenCL relate to Vulkan? I understand that OpenCL is higher level and abstracts the devices, but does (or could) it use Vulkan internally?
They're not related to each other at all.
Well, they do technically use the same intermediate language (SPIR-V), but Vulkan forbids the Kernel execution model and OpenCL forbids the Shader execution model. Because of that, you can't just take a SPIR-V module meant for OpenCL and stick it in Vulkan, or vice versa.
Vulkan is advertised as both a compute and graphics API, however I found very few resources for the compute part - why is that?
Because the Khronos Group likes misleading marketing blurbs.
Vulkan is no more of a compute API than OpenGL. It may have Compute Shaders, but they're limited in functionality. The kind of stuff you can do in an OpenCL compute operation is just not available through OpenGL/Vulkan CS's.
Vulkan CS's, like OpenGL's CS's, are intended to be used for one thing: to support graphics operations. To do frustum culling, build indirect graphics commands, manipulate particle systems, and other such things. CS's operate at the same numerical precision as graphical shaders.
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL?
The performance of a compute system is based primarily on the quality of its implementation. It's not OpenCL that's slow; it's your OpenCL implementation that's slower than it possibly could be.
Vulkan CS's are no different in this regard. The performance will be based on the maturity of the drivers.
Also, there's the fact that, again, there's a lot of stuff you can do in an OpenCL compute operation that you cannot do in a Vulkan CS.
Does SYCL use OpenCL internally, or could it use Vulkan?
From the Khronos Group:
SYCL (pronounced ‘sickle’) is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency of OpenCL...
So yes, it's built on top of OpenCL.
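For a rough sense of what that looks like from the programmer's side, here is a minimal vector-add sketch in SYCL 1.2-style C++ (untested and purely illustrative; the kernel name vadd and the sizes are made up). The underlying OpenCL device, context and queue are hidden behind the cl::sycl objects:

#include <CL/sycl.hpp>
#include <vector>

int main() {
  std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
  {
    cl::sycl::queue q;  // default selector picks an OpenCL device
    cl::sycl::buffer<float, 1> A(a.data(), cl::sycl::range<1>(a.size()));
    cl::sycl::buffer<float, 1> B(b.data(), cl::sycl::range<1>(b.size()));
    cl::sycl::buffer<float, 1> C(c.data(), cl::sycl::range<1>(c.size()));

    q.submit([&](cl::sycl::handler &cgh) {
      auto ra = A.get_access<cl::sycl::access::mode::read>(cgh);
      auto rb = B.get_access<cl::sycl::access::mode::read>(cgh);
      auto wc = C.get_access<cl::sycl::access::mode::write>(cgh);
      // The lambda below is what gets compiled for the device.
      cgh.parallel_for<class vadd>(cl::sycl::range<1>(a.size()),
                                   [=](cl::sycl::id<1> i) {
        wc[i] = ra[i] + rb[i];
      });
    });
  }  // buffers go out of scope here and copy results back into c
  return 0;
}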

How does OpenCL relate to Vulkan?
They can both pipeline separable work from host to GPU and GPU to host, using queues and multiple threads to reduce communication overhead (something DirectX/OpenGL cannot do?).
OpenCL: initial release August 28, 2009.
- Broader hardware support.
- Pointers are allowed, but only for use on the device.
- You can use local memory shared between threads (see the small kernel sketch after this comparison).
- Much easier to get a hello world running.
- Has API overhead for commands, unless they are device-side queued.
- You can choose implicit multi-device synchronization or explicit management.
- Bugs are mostly fixed for 1.2, but I don't know about version 2.0.
Vulkan: initial release February 16, 2016 (but in progress since 2014).
- Narrower hardware support.
- Can SPIR-V handle pointers? Maybe not?
- No local-memory option?
- Hard to get a hello world running.
- Less API overhead.
- Can you choose implicit multi-device management?
- Still buggy for the Dota 2 game and some other games.
- Using both the graphics and compute pipelines at the same time can hide even more latency.
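A tiny OpenCL C sketch of the "local memory shared between threads" point above: a per-work-group sum staged through __local memory (names and sizes are illustrative, not from the question):

__kernel void group_sum(__global const float *in,
                        __global float *partial,
                        __local float *scratch) {
    size_t lid = get_local_id(0);

    /* Stage one element per work-item into fast on-chip local memory. */
    scratch[lid] = in[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);

    /* Tree reduction inside the work-group, entirely in local memory. */
    for (size_t s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s)
            scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0)
        partial[get_group_id(0)] = scratch[0];
}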
If OpenCL had Vulkan inside it, that would have been hidden from the public for 7-9 years, since OpenCL predates Vulkan by that much. And if they could have layered it like that, why didn't they do it for OpenGL? (Maybe because of pressure from PhysX/CUDA?)
Vulkan is advertised as both a compute and graphics API, however I found very few resources for the compute part - why is that?
It needs more time, just like OpenCL did.
You can check info about compute shaders here:
https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#fundamentals-floatingpoint
Here is an example of a particle system managed by compute shaders:
https://github.com/SaschaWillems/Vulkan/tree/master/computeparticles
Below that, there are ray tracers and image-processing examples too.
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL?
Vulkan doesn't need to synchronize with another API; its synchronization is about command buffers between command queues.
OpenCL needs to synchronize with OpenGL or DirectX (or Vulkan?) before using a shared buffer (CL-GL or DX-CL interop buffers). This has an overhead, and you need to hide it using buffer swapping and pipelining. If no shared buffer exists, it can run concurrently with OpenGL or DirectX on modern hardware.
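For reference, the acquire/release handshake that creates this overhead looks roughly like the following host-side sketch (error handling trimmed; gl_vbo is assumed to be an existing OpenGL buffer object, and the CL context must have been created with GL sharing enabled):

#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>

void run_kernel_on_gl_buffer(cl_context ctx, cl_command_queue queue,
                             cl_kernel kernel, unsigned int gl_vbo) {
    cl_int err;

    /* Wrap the OpenGL buffer as an OpenCL memory object. */
    cl_mem shared = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, gl_vbo, &err);
    if (err != CL_SUCCESS) return;

    /* OpenGL must be finished with the buffer before OpenCL touches it. */
    glFinish();
    clEnqueueAcquireGLObjects(queue, 1, &shared, 0, NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &shared);
    size_t global = 1024;  /* illustrative work size */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* Hand the buffer back to OpenGL and wait so GL sees the results. */
    clEnqueueReleaseGLObjects(queue, 1, &shared, 0, NULL, NULL);
    clFinish(queue);

    clReleaseMemObject(shared);
}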
OpenCL is sadly notorious for being slower than CUDA
It was, but now it's mature and challenges CUDA, especially with much wider hardware support, from all gaming GPUs to FPGAs, using version 2.1. For example, in the future Intel could put an FPGA into a Core i3 and expose it as a many-core (soft x86 core IP) CPU, closing the gap between GPU and CPU performance to upgrade its CPU-PhysX gaming experience, or simply let an OpenCL physics implementation shape it and use at least 90% of the die area effectively, instead of the 10-20% a soft core uses.
At the same price, AMD GPUs can compute faster with OpenCL, and at the same compute power, Intel iGPUs draw less power. (Edit: except when algorithms are sensitive to cache performance, where Nvidia has the upper hand.)
Besides, I wrote an SGEMM OpenCL kernel and ran it on an HD 7870 at 1.1 TFLOPS, then checked the internet and saw an SGEMM benchmark on a GTX 680 with the same performance, using a popular CUDA title! (The price ratio of the GTX 680 to the HD 7870 was 2.) (Edit: Nvidia's cc 3.0 doesn't use the L1 cache when reading global arrays, and my kernel was purely local/shared memory plus some registers, "tiled".)
Does SYCL use OpenCL internally or could it use Vulkan? Or does it use neither and instead rely on low-level, vendor-specific APIs to be implemented?
Here,
https://www.khronos.org/assets/uploads/developers/library/2015-iwocl/Khronos-SYCL-May15.pdf
says
Provides methods for dealing with targets that do not have OpenCL (yet!)
A fallback CPU implementation is debuggable!
So it can fall back to a pure threaded version (similar to Java's Aparapi).
- Can access OpenCL objects from SYCL objects
- Can construct SYCL objects from OpenCL objects
- Interop with OpenGL remains in SYCL
  - Uses the same structures/types
So it uses OpenCL (maybe not directly, but through an upgraded driver interface?); it develops in parallel with OpenCL but can fall back to plain threads.
from the smallest OpenCL 1.2 embedded device to the most advanced OpenCL 2.2 accelerators

Related

OpenCL: Writing to pointer in main memory

Is it possible, using OpenCL's DMA capabilities, to write to a main memory address that is passed into the cl program? I understand doing so would likely break the program, but the intent here is to run a GPU process and then overwrite the address space of the CPU program used to run it, so breakage is expected.
Thanks!
Which version of the OpenCL API are you targeting?
In OpenCL 2.0 and above you can use Shared Virtual Memory (SVM) to share addresses between the host and device(s), on platforms that support it.
You can get more information about it in the Intel OpenCL SVM overview.
If you are using previous versions, or your hardware does not support it, you can use pinned memory with the appropriate flags to clCreateBuffer - in particular, CL_MEM_USE_HOST_PTR or CL_MEM_ALLOC_HOST_PTR; see the clCreateBuffer documentation at Khronos.
Note that CL_MEM_USE_HOST_PTR comes with some alignment restrictions.
In general, in OpenCL, when and how the DMA is used depends on the hardware platform, so you should refer to the vendor documentation for details.
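As a hedged illustration of the pinned-memory route: the sketch below wraps an existing host allocation with CL_MEM_USE_HOST_PTR, lets a kernel write into it, and then maps the buffer so the device's writes are guaranteed to be visible through the original pointer. Names and error handling are simplified, and the kernel launch itself is elided:

#include <CL/cl.h>

void device_writes_to_host_memory(cl_context ctx, cl_command_queue queue,
                                  void *host_ptr, size_t bytes) {
    cl_int err;

    /* Ask the runtime to use host_ptr as backing store. An unaligned
     * pointer may silently force an internal copy (see the alignment
     * note above). */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                bytes, host_ptr, &err);
    if (err != CL_SUCCESS) return;

    /* ... set buf as a kernel argument and enqueue the kernel here ... */

    /* Map/unmap is what makes the device's writes visible via host_ptr. */
    void *view = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_READ,
                                    0, bytes, 0, NULL, NULL, &err);
    if (err == CL_SUCCESS)
        clEnqueueUnmapMemObject(queue, buf, view, 0, NULL, NULL);

    clReleaseMemObject(buf);
}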

Checking if a GPU is integrated or not

I couldn't find any query for whether a device is integrated/embedded in the CPU, i.e. whether it uses system RAM or its own dedicated GDDR memory. I could benchmark mapping/unmapping versus reading/writing to reach a conclusion, but the device could be under load at that time and give misleading results, and it would add complexity to the already complex load-balancing algorithm I'm using.
Is there a simple way to check if a GPU uses the same memory as the CPU, so I can choose mapping/unmapping directly instead of reading/writing?
Edit: there is CL_DEVICE_LOCAL_MEM_TYPE, which returns CL_GLOBAL or CL_LOCAL. Is this an indication of integratedness?
OpenCL 1.x has the device query CL_DEVICE_HOST_UNIFIED_MEMORY:
Is CL_TRUE if the device and the host have a unified memory subsystem and is CL_FALSE otherwise.
This query is deprecated as of OpenCL 2.0, but should probably still work on OpenCL 2.x platforms for now. Otherwise, you may be able to produce a heuristic from the result of CL_DEVICE_SVM_CAPABILITIES instead.
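A minimal sketch of that query (the deprecation caveat above applies; the helper name is made up):

#include <CL/cl.h>

/* Returns 1 if the device reports a unified memory subsystem with the host,
 * which is a reasonable hint that map/unmap will beat read/write. */
int has_unified_memory(cl_device_id dev) {
    cl_bool unified = CL_FALSE;
    cl_int err = clGetDeviceInfo(dev, CL_DEVICE_HOST_UNIFIED_MEMORY,
                                 sizeof(unified), &unified, NULL);
    return (err == CL_SUCCESS) && (unified == CL_TRUE);
}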

OpenCL bytecode running on another card

I have a program that uses OpenCL for calculation. The OpenCL code is big and compile time is about 2 minutes at 100% CPU load. Of course, I save the binary result of the compilation, and the second launch loads the OpenCL program from that binary. Can I use the same binary on another video card with the same chip but different characteristics (RAM, clock, etc.)?
As far as the OpenCL specification is concerned, you only have guarantees that a program binary can be re-used on the same device on which it was created.
In reality, the binaries that are returned by many OpenCL implementations are compatible with a wider range of devices available from that same vendor. For example, NVIDIA return PTX when you request binaries from their implementation, which is a reasonably high level intermediate representation (i.e. not native instructions). This is certainly compatible with other devices using the same architecture on which it was created (e.g. all GK110 devices, or all GF104 devices), and quite likely to be portable across a range of other NVIDIA GPU architectures too. Other vendors also return various types of intermediate representation (usually LLVM IR based) that allow this kind of binary compatibility.
So yes, you can probably re-use binaries across different devices that have the same architecture, but you'll really just have to try it and see. You could always implement a scheme that tries to use the binary and, if that fails, falls back to the source code.
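A hedged sketch of that try-binary-then-fall-back scheme (file I/O, build-log handling and re-caching of the new binary are left out; the parameter names are placeholders):

#include <CL/cl.h>
#include <stddef.h>

cl_program load_program(cl_context ctx, cl_device_id dev,
                        const unsigned char *binary, size_t binary_size,
                        const char *source) {
    cl_int err, bin_status;
    cl_program prog = NULL;

    if (binary != NULL) {
        /* Try the cached binary first. */
        prog = clCreateProgramWithBinary(ctx, 1, &dev, &binary_size,
                                         &binary, &bin_status, &err);
        if (err == CL_SUCCESS &&
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL) == CL_SUCCESS)
            return prog;  /* the binary works on this device */
        if (prog != NULL)
            clReleaseProgram(prog);
    }

    /* Binary missing or incompatible: rebuild from OpenCL C source. */
    prog = clCreateProgramWithSource(ctx, 1, &source, NULL, &err);
    if (err != CL_SUCCESS)
        return NULL;
    if (clBuildProgram(prog, 1, &dev, NULL, NULL, NULL) != CL_SUCCESS) {
        clReleaseProgram(prog);
        return NULL;
    }
    return prog;
}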
In the future, we will hopefully see a large number of vendors supporting the recently ratified SPIR specification, which is a platform-portable intermediate representation for OpenCL device programs. This would allow you to generate binaries that are not only compatible with devices from a single vendor's architecture, but also across devices from many other vendors that also support SPIR. There would clearly be some remaining compilation overhead to lower SPIR to the native instruction set, but this should still result in significant speed-ups compared to compiling raw OpenCL C code.

Can a GPU be the host of an OpenCL program?

Little disclaimer: this is more of a theoretical/academic question than an actual problem I have.
The usual way of setting up a parallel program in OpenCL is to write a C/C++ program which sets up the devices (GPUs and/or CPUs), the kernel, and the data buffers for executing the kernel on the device.
This program gets launched from the host, which is usually a CPU.
Would it be possible to write an OpenCL program where the host is a GPU and the devices are other GPUs and/or CPUs?
What would be the prerequisites for such a scenario?
Does one need a special GPU, or would it be possible to use any OpenCL-capable GPU?
Are you looking for a complete host or just a kernel launcher?
The upcoming CUDA (v5.0) introduces a feature to launch a kernel from inside a kernel, so a device can be used to launch a kernel on itself. Maybe this feature will be supported by OpenCL too in the near future.
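For reference, OpenCL 2.0 later added exactly this as device-side enqueue; a rough, untested sketch of what it looks like in OpenCL C 2.0 (the data layout is only an illustration):

// Parent kernel: one work-item enqueues a child grid on the device itself,
// without a round trip to the host.
__kernel void parent(__global int *data, int n) {
    if (get_global_id(0) == 0) {
        queue_t q = get_default_queue();
        enqueue_kernel(q, CLK_ENQUEUE_FLAGS_NO_WAIT,
                       ndrange_1D(n),
                       ^{ data[get_global_id(0)] *= 2; });
    }
}

On the host side you would also need to create an on-device queue (CL_QUEUE_ON_DEVICE | CL_QUEUE_ON_DEVICE_DEFAULT) so that get_default_queue() returns something usable.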

Why doesn't OpenCL support recursion?

I am currently working on an OpenCL project and I wonder why it does not support recursion. Is it related to parallelism?
It is related to the target hardware, I think. Supporting recursion requires several hardware features that certain classes of OpenCL devices (i.e. GPUs) don't have. Without them, maintaining a call stack and doing indirect code branching isn't practical. NVIDIA doesn't support recursion on all of its CUDA-capable hardware for the same reason.
It's not OpenCL, it's the GPU hardware. AMD has laid out a future instruction set architecture that will support recursion. GPUs have large numbers of registers (up to 32 K), so be careful what you ask for: pushing/popping 32 K registers for a recursive call will not be speedy.
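The usual workaround, for what it's worth, is to rewrite the recursion with an explicit fixed-size stack in private memory. A sketch (the binary-tree layout, the depth limit of 64, and all names are invented for illustration):

// Sums a small binary tree per work-item without recursion: an explicit
// stack of node indices replaces the call stack the hardware doesn't have.
__kernel void tree_sum(__global const int *value,
                       __global const int *left,   /* child index or -1 */
                       __global const int *right,  /* child index or -1 */
                       __global const int *roots,
                       __global int *result) {
    size_t gid = get_global_id(0);
    int stack[64];   /* fixed-depth private stack instead of recursion */
    int top = 0;
    int sum = 0;

    stack[top++] = roots[gid];
    while (top > 0) {
        int node = stack[--top];
        if (node < 0)
            continue;
        sum += value[node];
        if (top <= 62) {  /* leave room for both children */
            stack[top++] = left[node];
            stack[top++] = right[node];
        }
    }
    result[gid] = sum;
}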
