GPU in-use memory in OpenCL

Is there any way to query a GPU device to find the amount of memory currently in use with OpenCL? I want to allocate as much memory as I can.

There is no standard way of getting such information. Some alternatives (pretty poor alternatives, but anyway):
CUDA provides such functionality via cuMemGetInfo
GL_ATI_meminfo and NVX_gpu_memory_info OpenGL extensions
nvidia-smi application
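None of those come from OpenCL itself; what standard OpenCL can tell you is the total global memory and the largest single allocation the implementation will accept. A minimal sketch, assuming the first platform and its first GPU device, with error handling omitted:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_ulong global_mem = 0, max_alloc = 0;
        /* Total global memory on the device (not the amount currently free). */
        clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(global_mem), &global_mem, NULL);
        /* Largest single buffer allocation the implementation guarantees. */
        clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(max_alloc), &max_alloc, NULL);

        printf("Global memory:    %llu MB\n", (unsigned long long)(global_mem >> 20));
        printf("Max single alloc: %llu MB\n", (unsigned long long)(max_alloc >> 20));
        return 0;
    }

In practice, to grab "as much as possible" you usually allocate in chunks and treat CL_MEM_OBJECT_ALLOCATION_FAILURE or CL_OUT_OF_RESOURCES (which may only surface when a buffer is first used, since allocation can be deferred) as the stopping condition.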

Related

OpenCL: Writing to pointer in main memory

Is it possible, using OpenCL's DMA capabilities, to write to a main memory address that is passed into the cl program? I understand doing so would likely break the program, but the intent here is to run a GPU process and then overwrite the address space of the CPU program used to run it, so breakage is expected.
Thanks!
Which version of the OpenCL API are you targeting?
In OpenCL 2.0 and above you can use Shared Virtual Memory (SVM) to share addresses between the host and device(s) on platforms that support it.
You can get more information about it in the Intel OpenCL SVM overview.
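For platforms that do report SVM support, a minimal coarse-grained SVM sketch could look like the following; the context, queue, and kernel are assumed to exist already, and error checking is omitted:

    #include <CL/cl.h>

    /* Coarse-grained SVM (OpenCL 2.0+): allocate a region visible to both host
       and device, fill it on the host, then hand the same pointer to a kernel. */
    void svm_example(cl_context ctx, cl_command_queue queue, cl_kernel kernel, size_t n)
    {
        float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

        /* Coarse-grained SVM requires mapping before the host touches the memory. */
        clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, NULL, NULL);
        for (size_t i = 0; i < n; ++i) data[i] = 0.0f;
        clEnqueueSVMUnmap(queue, data, 0, NULL, NULL);

        /* The kernel sees the same address the host used. */
        clSetKernelArgSVMPointer(kernel, 0, data);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(queue);

        clSVMFree(ctx, data);
    }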
If you are using previous versions, or your hardware does not support it, you can use pinned memory with the appropriate flags to clCreateBuffer. In particular, CL_MEM_USE_HOST_PTR or CL_MEM_ALLOC_HOST_PTR, see clCreateBuffer in Khronos.
Note that CL_MEM_USE_HOST_PTR comes with some alignment restrictions.
In general, in OpenCL, when and how the DMA is used depends on the hardware platform, so you should refer to the vendor documentation for details.
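A minimal sketch of the pre-2.0 pinned-memory route, assuming an existing context and a page-aligned host allocation; the 4096-byte alignment used here is a common vendor requirement, not a universal rule, so check your platform's documentation:

    #include <stdlib.h>
    #include <CL/cl.h>

    /* Wrap an existing, page-aligned host allocation in a cl_mem object so the
       runtime can use it directly (zero-copy/DMA where the hardware allows it). */
    cl_mem wrap_host_memory(cl_context context, size_t size, void **host_ptr_out)
    {
        void *host_ptr = NULL;
        cl_int err;

        if (posix_memalign(&host_ptr, 4096, size) != 0)
            return NULL;

        cl_mem buf = clCreateBuffer(context,
                                    CL_MEM_USE_HOST_PTR | CL_MEM_READ_WRITE,
                                    size, host_ptr, &err);
        if (err != CL_SUCCESS) {
            free(host_ptr);
            return NULL;
        }
        *host_ptr_out = host_ptr;
        return buf;
    }

Whether this results in true zero-copy or a hidden copy is implementation defined, which is exactly the vendor-specific behaviour mentioned above.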

CUDA MPS for OpenCL?

CUDA MPS allows you to run multiple processes in parallel on the GPU, thus fully utilizing the GPU with work that would not take full advantage of it on its own. Is there an equivalent for OpenCL? Or is there a different approach in OpenCL?
If you use multiple OpenCL command queues that don't have event interdependencies, an OpenCL runtime could keep the GPU cores busy with varied work from each queue. It's really up to the implementation as to whether this actually happens. You'd need to check each vendor's OpenCL guide to see if they support concurrent GPU kernels.
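A minimal sketch of that idea, assuming a context, a device, and two kernels whose arguments are already set; whether the two kernels actually overlap on the GPU is entirely up to the driver:

    #include <CL/cl.h>

    /* Enqueue two independent kernels on separate in-order queues (OpenCL 1.2 API;
       OpenCL 2.0+ would use clCreateCommandQueueWithProperties instead).
       With no events linking them, the runtime is free to overlap their execution. */
    void run_possibly_concurrent(cl_context ctx, cl_device_id dev,
                                 cl_kernel kernel_a, cl_kernel kernel_b,
                                 size_t global_size)
    {
        cl_int err;
        cl_command_queue q1 = clCreateCommandQueue(ctx, dev, 0, &err);
        cl_command_queue q2 = clCreateCommandQueue(ctx, dev, 0, &err);

        clEnqueueNDRangeKernel(q1, kernel_a, 1, NULL, &global_size, NULL, 0, NULL, NULL);
        clEnqueueNDRangeKernel(q2, kernel_b, 1, NULL, &global_size, NULL, 0, NULL, NULL);

        clFlush(q1);
        clFlush(q2);
        clFinish(q1);
        clFinish(q2);

        clReleaseCommandQueue(q1);
        clReleaseCommandQueue(q2);
    }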

OpenCL, Vulkan, SYCL

I am trying to understand the OpenCL ecosystem and how Vulkan comes into play.
I understand that OpenCL is a framework to execute code on GPUs as well as CPUs, using kernels that may be compiled to SPIR.
Vulkan can also be used as a compute-API using the same SPIR language.
SYCL is a new specification that allows writing OpenCL code as proper standard-conforming C++14. It is my understanding that there are no free implementations of this specification yet.
Given that,
How does OpenCL relate to Vulkan? I understand that OpenCL is higher level and abstracts the devices, but does (or could) it use Vulkan internally? (instead of relying on vendor-specific drivers)
Vulkan is advertised as both a compute and graphics API, however I found very few resources for the compute part. Why is that?
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL? (OpenCL is sadly notorious for being slower than CUDA.)
Does SYCL use OpenCL internally or could it use Vulkan? Or does it use neither and instead rely on low-level, vendor-specific APIs to be implemented?
How does OpenCL relate to Vulkan? I understand that OpenCL is higher level and abstracts the devices, but does (or could) it use Vulkan internally?
They're not related to each other at all.
Well, they do technically use the same intermediate shader language, but Vulkan forbids the Kernel execution model, and OpenCL forbids the Shader execution model. Because of that, you can't just take a shader meant for OpenCL and stick it in Vulkan, or vice-versa.
Vulkan is advertised as both a compute and graphics API, however I found very few resources for the compute part. Why is that?
Because the Khronos Group likes misleading marketing blurbs.
Vulkan is no more of a compute API than OpenGL. It may have Compute Shaders, but they're limited in functionality. The kind of stuff you can do in an OpenCL compute operation is just not available through OpenGL/Vulkan CS's.
Vulkan CS's, like OpenGL's CS's, are intended to be used for one thing: to support graphics operations. To do frustum culling, build indirect graphics commands, manipulate particle systems, and other such things. CS's operate at the same numerical precision as graphical shaders.
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL?
The performance of a compute system is based primarily on the quality of its implementation. It's not OpenCL that's slow; it's your OpenCL implementation that's slower than it possibly could be.
Vulkan CS's are no different in this regard. The performance will be based on the maturity of the drivers.
Also, there's the fact that, again, there's a lot of stuff you can do in an OpenCL compute operation that you cannot do in a Vulkan CS.
Does SYCL use OpenCL internally or could it use Vulkan?
From the Khronos Group:
SYCL (pronounced ‘sickle’) is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency of OpenCL...
So yes, it's built on top of OpenCL.
How does OpenCL relate to Vulkan?
They both can pipeline separable work from host to GPU and GPU to host, using queues and multiple threads to reduce communication overhead; DirectX/OpenGL cannot do this in the same way, as far as I know.
OpenCL: initial release August 28, 2009. Broader hardware support. Pointers are allowed, but only for use on the device. You can use local memory shared between work-items. Much easier to get a hello world running. Has API overhead for commands unless they are device-side queued. You can choose implicit multi-device synchronization or explicit management. Bugs are mostly fixed for 1.2, but I don't know about version 2.0.
Vulkan: initial release February 16, 2016 (but in progress since 2014). Narrower hardware support. Can SPIR-V handle pointers? Maybe not. No local-memory option? Harder to get a hello world running. Less API overhead. Can you choose implicit multi-device management? Still buggy for the Dota 2 game and some other games. Using both the graphics and compute pipelines at the same time can hide even more latency.
If OpenCL had Vulkan inside it, that would have been hidden from the public for 7-9 years. And if they could have added it, why didn't they do the same for OpenGL? (Maybe because of pressure from PhysX/CUDA?)
Vulkan is advertised as both a compute and graphics API, however I found very few resources for the compute part. Why is that?
It needs more time, just like OpenCL.
You can check info about compute shaders here:
https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#fundamentals-floatingpoint
Here is an example of particle system managed by compute shaders:
https://github.com/SaschaWillems/Vulkan/tree/master/computeparticles
Below that, there are raytracer and image-processing examples too.
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL?
Vulkan doesn't need to synchronize with another API; it is about command-buffer synchronization between its own command queues.
OpenCL needs to synchronize with OpenGL or DirectX (or Vulkan?) before using a shared buffer (CL-GL or DX-CL interop buffers). This has an overhead, and you need to hide it using buffer swapping and pipelining. If no shared buffer exists, it can run concurrently with OpenGL or DirectX on modern hardware.
OpenCL is sadly notorious for being slower than CUDA
It was, but it is now mature and challenges CUDA, especially with much wider hardware support, from all gaming GPUs to FPGAs as of version 2.1. For example, in the future Intel could put an FPGA into a Core i3 and enable it as a (soft x86 core IP) many-core CPU model, closing the gap between GPU and CPU performance to improve its CPU-PhysX gaming experience, or simply let an OpenCL physics implementation shape it and use at least 90% of the die area instead of the 10-20% a soft core uses effectively.
At the same price, AMD GPUs can compute faster with OpenCL, and at the same compute power, Intel iGPUs draw less power. (Edit: except when algorithms are sensitive to cache performance, where Nvidia has the upper hand.)
Besides, I wrote an SGEMM OpenCL kernel and ran it on an HD 7870 at 1.1 TFLOPS, then checked the internet and saw an SGEMM benchmark on a GTX 680 with the same performance, using a popular CUDA title! (The price ratio of the GTX 680 to the HD 7870 was 2.) (Edit: Nvidia's cc3.0 doesn't use the L1 cache when reading global arrays, and my kernel was purely local/shared memory plus some registers, "tiled".)
Does SYCL use OpenCL internally or could it use Vulkan? Or does it use neither and instead rely on low-level, vendor-specific APIs to be implemented?
Here,
https://www.khronos.org/assets/uploads/developers/library/2015-iwocl/Khronos-SYCL-May15.pdf
says
Provides methods for dealing with targets that do not have OpenCL (yet!)
A fallback CPU implementation is debuggable!
So it can fall back to a pure threaded version (similar to Java's Aparapi).
Can access OpenCL objects from SYCL objects
Can construct SYCL objects from OpenCL object
Interop with OpenGL remains in SYCL
- Uses the same structures/types
So it uses OpenCL (maybe not directly, but through upgraded driver communication?); it develops in parallel with OpenCL but can fall back to threads.
from the smallest OpenCL 1.2 embedded device to the most advanced OpenCL 2.2 accelerators

Checking if a GPU is integrated or not

I couldn't find any query to tell whether a device is integrated/embedded in the CPU (and uses system RAM) or has its own dedicated GDDR memory. I could benchmark mapping/unmapping versus reading/writing to reach a conclusion, but the device could be under load at that time and behave misleadingly, and it would add complexity to the already complex load-balancing algorithm I'm using.
Is there a simple way to check if a GPU uses the same memory as the CPU, so I can choose mapping/unmapping directly instead of reading/writing?
Edit: there is CL_DEVICE_LOCAL_MEM_TYPE, which is CL_GLOBAL or CL_LOCAL. Is this an indication of integratedness?
OpenCL 1.x has the device query CL_DEVICE_HOST_UNIFIED_MEMORY:
Is CL_TRUE if the device and the host have a unified memory subsystem and is CL_FALSE otherwise.
This query is deprecated as of OpenCL 2.0, but should probably still work on OpenCL 2.x platforms for now. Otherwise, you may be able to produce a heuristic from the result of CL_DEVICE_SVM_CAPABILITIES instead.
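A minimal sketch of that query; since the flag is deprecated in OpenCL 2.0+, the result is best treated as a hint:

    #include <CL/cl.h>

    /* Returns 1 if the device reports a unified host/device memory subsystem.
       A failed query (e.g. on newer platforms) is treated as "not unified",
       in which case an SVM-capability heuristic could be used instead. */
    int has_unified_memory(cl_device_id device)
    {
        cl_bool unified = CL_FALSE;
        cl_int err = clGetDeviceInfo(device, CL_DEVICE_HOST_UNIFIED_MEMORY,
                                     sizeof(unified), &unified, NULL);
        return (err == CL_SUCCESS) && (unified == CL_TRUE);
    }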

Accessing the file system using a CPU device in OpenCL

I am a newbie to OpenCL. I have a doubt about how OpenCL functions when a kernel is running on a CPU device. Suppose we have a kernel running on a CPU device: can it read from a file on the disk? If yes, then how? If no, then why not?
Can you please suggest a source for detailed information?
Thanks in advance.
It can't, simply because not every OpenCL device has a file system or even a disk.
You can't. OpenCL tries to unify access to computing power, while the file system depends on the OS. If you want this feature, threads (C++11 threads, pthreads, ...) or OpenMP should be able to handle it, because it's a CPU-only thing.
It doesn't make sense to allow device kernels to access the filesystem, because most of the semantics of filesystem access are essentially incompatible with the massively parallel nature of device kernels.
There are two ways to work around this, considering you're only asking about the CPU:
if you intend to use OpenCL as a way to do multithreading on the CPU, consider using what OpenCL calls “native kernels”, which are essentially just plain C functions called within an OpenCL context;
a more general approach, which might work on GPU too, is to mmap the files you want to operate on and pass the resulting pointers to clCreateBuffer with the CL_MEM_USE_HOST_PTR flag, as sketched below.
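A minimal sketch of that second option on a POSIX system, assuming an existing context; the alignment and page-size caveats from the earlier answers still apply:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <CL/cl.h>

    /* Map a file into host memory and expose it to OpenCL as a read-only buffer. */
    cl_mem map_file_as_buffer(cl_context context, const char *path, size_t *size_out)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return NULL;

        struct stat st;
        if (fstat(fd, &st) != 0) { close(fd); return NULL; }

        void *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);  /* the mapping remains valid after closing the descriptor */
        if (data == MAP_FAILED) return NULL;

        cl_int err;
        cl_mem buf = clCreateBuffer(context, CL_MEM_USE_HOST_PTR | CL_MEM_READ_ONLY,
                                    (size_t)st.st_size, data, &err);
        if (err != CL_SUCCESS) {
            munmap(data, (size_t)st.st_size);
            return NULL;
        }
        *size_out = (size_t)st.st_size;
        return buf;
    }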
