How to determine host's CPU? - opencl

We have a multi-device system that distributes the main task across the devices. Each subtask consists of:
enqueue write buffer
enqueue kernel
enqueue read buffer
All enqueues are asynchronous and the command queues are in-order. We assign a callback to the cl_event of the enqueue read buffer, in which we determine whether the main task is completed. If it isn't, we schedule one more subtask on the queue.
Unfortunately, we've found that keeping the host's CPU busy prevents it from processing callbacks from the other devices (GPUs), so most of the time those devices are not involved in the work. The idea is to exclude the host's CPU from the list of devices we use to complete the main task.
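For reference, a minimal sketch of the subtask pattern described above (OpenCL 1.1+ C API, error checking omitted; queue, kernel, dev_buf, the host pointers and schedule_next_subtask() are placeholders for the application's own objects and scheduling logic):

    /* Sketch only: assumes <CL/cl.h>, an in-order queue, and that
     * schedule_next_subtask() is the application's own scheduling hook. */
    void CL_CALLBACK on_read_done(cl_event ev, cl_int status, void *user_data)
    {
        /* Keep callbacks short; heavy work here can stall the driver's
         * callback thread (which is part of the problem described above). */
        if (status == CL_COMPLETE)
            schedule_next_subtask(user_data);
        clReleaseEvent(ev);
    }

    void enqueue_subtask(cl_command_queue queue, cl_kernel kernel,
                         cl_mem dev_buf, const void *host_in, void *host_out,
                         size_t bytes, size_t gws, void *task_state)
    {
        cl_event read_done;

        clEnqueueWriteBuffer(queue, dev_buf, CL_FALSE, 0, bytes, host_in,
                             0, NULL, NULL);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &gws, NULL,
                               0, NULL, NULL);
        clEnqueueReadBuffer(queue, dev_buf, CL_FALSE, 0, bytes, host_out,
                            0, NULL, &read_done);

        clSetEventCallback(read_done, CL_COMPLETE, on_read_done, task_state);
        clFlush(queue);  /* make sure the commands actually reach the device */
    }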

You should look into device fission. If your platform supports this feature, you will be able to create an OpenCL device with any combination of CPU cores. Look here for details. This extension will allow you to keep some number of cores free for your host application.
I like how it allows you to create sub-devices which share various levels of cache memory. You might be interested in CL_DEVICE_PARTITION_BY_NAMES_EXT (search for "CL_DEVICE_PARTITION_BY_NAMES_EXT" on the page).
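As a rough illustration, here is how the core OpenCL 1.2 partitioning API could be used to leave a couple of cores free for the host (the extension entry point clCreateSubDevicesEXT is analogous). cpu_dev is assumed to be the CPU's cl_device_id, the device is assumed to have more than two compute units, and error checking is omitted:

    cl_uint num_cu = 0;
    clGetDeviceInfo(cpu_dev, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(num_cu), &num_cu, NULL);

    /* Give all but two cores to OpenCL, keeping two for host code
     * and callback processing. */
    cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_COUNTS,
        (cl_device_partition_property)(num_cu - 2),
        CL_DEVICE_PARTITION_BY_COUNTS_LIST_END,
        0
    };

    cl_device_id compute_part;
    cl_uint n = 0;
    clCreateSubDevices(cpu_dev, props, 1, &compute_part, &n);
    /* Build the context/queue with compute_part instead of cpu_dev. */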

Related

OpenCL: can I do simultaneous "read" operations?

I have an OpenCL buffer created with the read and write flags. Can I access the same memory address simultaneously? Say, calling enqueueReadBuffer and a kernel that doesn't modify the contents, out-of-order and without wait lists, or two calls to kernels that only read from the buffer.
Yes, you can. Create two queues, then call clEnqueueReadBuffer and clEnqueueNDRangeKernel on different queues.
It ultimately depends on whether the device and driver support executing different queues at the same time. Most GPUs can, while embedded devices may or may not.
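A minimal sketch of the two-queue approach (error checking omitted; ctx, dev, krn, buf, host_ptr and bytes are assumed to already exist, and krn only reads buf):

    cl_int err;
    cl_command_queue q0 = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_command_queue q1 = clCreateCommandQueue(ctx, dev, 0, &err);

    size_t gws = 1024;

    /* Read-only kernel on one queue... */
    clSetKernelArg(krn, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q0, krn, 1, NULL, &gws, NULL, 0, NULL, NULL);

    /* ...and a non-blocking read of the same buffer on the other.
     * Whether they actually overlap is up to the device and driver. */
    clEnqueueReadBuffer(q1, buf, CL_FALSE, 0, bytes, host_ptr, 0, NULL, NULL);

    clFinish(q0);
    clFinish(q1);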

User mode and kernel mode: different programs at the same time

Is it possible that one process is running in kernel mode and another in user mode at the same time?
I know it's not a coding question, but please guide me if someone knows the answer.
For two processes to actually be running at the same time, you must have multiple CPUs. And indeed, when you have multiple CPUs, what runs on the different CPUs is very loosely coupled, and you can definitely have one process running user code on one CPU while another process runs kernel code (e.g., doing some work inside a system call) on another CPU.
If you are asking about just one CPU, then you can't have two running processes at the same time. But what you can have is two runnable processes, which means two processes that are both ready to run; since there is just one CPU, only one of them can actually run. One of the runnable processes might be in user mode - e.g., consider a long-running tight loop that was preempted after its time quota was over. Another runnable process might be in kernel mode - e.g., consider a process that did a read() system call from disk: the kernel sent the read request to the disk, and the request has since completed, so now the process is ready to run again in kernel mode and finish the read() call.
Yes, it is possible. Even multiple processes can be in kernel mode at the same time.
It's just that a single process cannot be in both modes at the same time.
Correct me if I'm wrong, but I suppose there are no processes in kernel mode, only threads.

How does the host send OpenCL kernels and arguments to the GPU at the assembly level?

So you get a kernel and compile it. You set the cl_buffers for the arguments and then clSetKernelArg the two together.
You then enqueue the kernel to run and read back the buffer.
Now, how does the host program tell the GPU the instructions to run? E.g., I'm on a 2017 MBP with a Radeon Pro 460. At the assembly level, what instructions are called in the host process to tell the GPU "here's what you're going to run", and what mechanism lets the cl_buffers be read by the GPU?
In fact, if you can point me to a detailed explanation of all of this I'd be quite pleased. I'm a toolchain engineer and I'm curious about the toolchain aspects of GPU programming, but I'm finding it incredibly hard to find good resources on it.
It pretty much all runs through the GPU driver. The kernel/shader compiler, etc. tend to live in a user space component, but when it comes down to issuing DMAs, memory-mapping, and responding to interrupts (GPU events), that part is at least to some extent covered by the kernel-based component of the GPU driver.
A very simple explanation is that the kernel compiler generates a GPU-model-specific code binary, this gets uploaded to VRAM via DMA, and then a request is added to the GPU's command queue to run a kernel with reference to the VRAM address where that kernel is stored.
With regard to OpenCL memory buffers, there are essentially 3 ways I can think of that this can be implemented:
A buffer is stored in VRAM, and when the CPU needs access to it, that range of VRAM is mapped onto a PCI BAR, which can then be memory-mapped by the CPU for direct access.
The buffer is stored entirely in System RAM, and when the GPU accesses it, it uses DMA to perform read and write operations.
Copies of the buffer are stored both in VRAM and system RAM; the GPU uses the VRAM copy and the CPU uses the system RAM copy. Whenever one processor needs to access the buffer after the other has made modifications to it, DMA is used to copy the newer copy across.
On GPUs with UMA (Intel IGP, AMD APUs, most mobile platforms, etc.) VRAM and system RAM are the same thing, so they can essentially use the best bits of methods 1 & 2.
If you want to take a deep dive on this, I'd say look into the open source GPU drivers on Linux.
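For what it's worth, all three strategies hide behind the same host API calls; whether a map turns into a direct BAR mapping, a DMA transfer, or a shadow-copy synchronisation is the driver's decision. A small sketch (error checks omitted; q, buf and bytes assumed to exist, headers <CL/cl.h> and <stdlib.h> included):

    /* Option A: map the buffer and access it "in place" if the driver can. */
    cl_int err;
    float *p = (float *)clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ,
                                           0, bytes, 0, NULL, NULL, &err);
    /* ... read through p ... */
    clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);

    /* Option B: ask for an explicit copy into host memory. */
    float *host_copy = malloc(bytes);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, host_copy, 0, NULL, NULL);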
Enqueueing the kernel means asking the OpenCL driver to submit work to dedicated HW for execution. In OpenCL, for example, you would call an enqueue API such as clEnqueueNDRangeKernel, which adds a dispatch-compute-workload command to the command queue - cl_command_queue.
From the spec:
The command-queue can be used to queue a set of operations (referred to as commands) in order.
https://www.khronos.org/registry/OpenCL/specs/2.2/html/OpenCL_API.html#_command_queues
Next, the implementation of this API triggers the HW to process the commands recorded in the command queue (which holds the actual commands in a format the particular HW understands). The HW might have several queues and process them in parallel. In any case, after the workload from a queue has been processed, the HW informs the KMD (kernel-mode driver) via an interrupt, and the KMD is responsible for propagating this update to the OpenCL driver via the OpenCL event mechanism, which lets the user track workload execution status - see https://www.khronos.org/registry/OpenCL/specs/2.2/html/OpenCL_API.html#clWaitForEvents.
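Putting that description together on the host side, a sketch might look like this (error checks omitted; q, krn, buf, out and bytes assumed to exist): the enqueue call only records a command, and the event is how the host learns that the HW has actually executed it.

    cl_event done;
    size_t gws = 4096;

    clEnqueueNDRangeKernel(q, krn, 1, NULL, &gws, NULL, 0, NULL, &done);
    clFlush(q);                 /* hand the recorded commands to the driver/HW */

    clWaitForEvents(1, &done);  /* returns once the HW has signalled completion */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, out, 0, NULL, NULL);
    clReleaseEvent(done);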
To get a better idea of how an OpenCL driver interacts with the HW, you could take a look at an open-source implementation (pocl), for example:
https://github.com/pocl/pocl/blob/master/lib/CL/clEnqueueNativeKernel.c

Non blocking system call and mode switch

Suppose we invoke a system call for asynchronous IO. At the time of invoking the system call, the mode changes from user mode to kernel mode. After the invocation, the mode should immediately change back to user mode so that the user application can proceed further (as the call is non-blocking).
Now, if the mode is changed back to user mode, how will the kernel proceed with the IO? Will the kernel perform asynchronous IO in user mode?
IO means two different things (at two different levels of abstractions):
from an application point of view, from a process running in user-mode, calling any system call (listed in syscalls(2) for Linux) related to input or output, e.g. read(2), .... Notice that aio_read(3) is not listed as a system call (it is some library function using other system calls, see aio(7)).
on the raw hardware, any physical input or output operation sending data (or orders) to actual IO devices (e.g. SATA disks, USB ports, etc...)
Asynchronous or synchronous IO for a process means just calling some suitable subset of system calls, since system calls are the only way a process can interact with the kernel, and since in user-mode no physical IO is directly possible.
Read Operating Systems: Three Easy Pieces (freely downloadable) to get a better view of OSes.
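As a concrete example of the first, application-level meaning, here is a minimal use of aio_read(3) (Linux/POSIX, link with -lrt; the file name is arbitrary): the call returns immediately and the process keeps running in user mode while the kernel performs the physical IO on its behalf.

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[4096];
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
        if (fd < 0) { perror("open"); return 1; }

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;

        if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }

        /* Non-blocking: we are back in user mode immediately and could do
         * other work here while the IO is in flight. */
        while (aio_error(&cb) == EINPROGRESS)
            ; /* poll (a real program would do useful work or use signals) */

        ssize_t n = aio_return(&cb);
        printf("read %zd bytes asynchronously\n", n);
        close(fd);
        return 0;
    }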
Will kernel perform asynchronous IO in user mode ?
This shows some confusion. In practice, inside the kernel, physical IO is generally (and probably always) initiated by interrupt handlers (which might configure some DMA, etc.). A hardware interrupt switches the processor to kernel mode (actually, the supervisor mode of the ISA).
A blocking system call (e.g. read(2) when physical IO is needed because the data is not in the page cache) doesn't block the entire computer: it is just the calling process that becomes "blocked", so it is rescheduled. The kernel will schedule some other runnable process. Much later, after the kernel has handled many interrupts, the blocked process becomes runnable again and can be rescheduled to run.
Processes are themselves (with files) one of the major abstractions (provided by the kernel) to application code.
In other words, at the conceptual level, the kernel scheduler is coded in some continuation-passing style.
See also kernelnewbies and OSDEV.
The asynchronous IO will be performed on behalf of the process; the kernel will handle it almost as usual while the process continues to run. In blocking mode, the process is simply suspended.
The kernel has access to every process's address space, so it can fill or read data in a process's user space regardless of what that process is currently doing.

OpenCL clEnqueueReadBuffer During Kernel Execution?

Can queued kernels continue to execute while an OpenCL clEnqueueReadBuffer operation is occurring?
In other words, is clEnqueueReadBuffer a blocking operation on the device?
From a host API point of view, clEnqueueReadBuffer can be blocking or not, depending on whether you set the blocking_read parameter to CL_TRUE or CL_FALSE.
If you set it to not block, then the read just gets queued and you should use an event (or a subsequent blocking call) to determine when it has finished (i.e., before you access the memory that you are reading into).
If you set it to block, the call won't return until the read is done. The memory being read to will be correct. Also (and answering your actual question) any operations you queued prior to the clEnqueueReadBuffer will all have to finish first before the read starts (see exception note below).
All clEnqueue* API calls are asynchronous, but some have "blocking" parameters you can set. Using one is equivalent to using the non-blocking version and then calling clFinish. The command queue will be flushed to the device and your host thread won't continue until the work has finished. Of course, it is hard to keep the GPU always busy doing it this way, since now it doesn't have any work, but if you queue up new work fast enough you can still keep it reasonably busy.
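The two variants side by side, as a sketch (error checks omitted; q, buf, out and bytes assumed to exist):

    /* Blocking: does not return until the data is in "out"; on an in-order
     * queue, everything enqueued before it has also finished by then. */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, out, 0, NULL, NULL);

    /* Non-blocking: returns immediately; do not touch "out" until the
     * event has completed. */
    cl_event read_done;
    clEnqueueReadBuffer(q, buf, CL_FALSE, 0, bytes, out, 0, NULL, &read_done);
    /* ... enqueue more work or do host-side work here ... */
    clWaitForEvents(1, &read_done);   /* now "out" is valid */
    clReleaseEvent(read_done);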
This all assumes a single, in-order command queue. If your command queue is out-of-order and your device supports out-of-order queues then enqueued items can execute in any order that doesn't violate the event_wait_list parameters you provided. Likewise, you can have multiple command queues, which can again be executed in any order that doesn't violate the event_wait_list parameters you provided. Typically, they are used to overlap memory transfers and compute, and to keep multiple compute units busy. Out-of-order command queues and multiple command queues are both advanced OpenCL concepts and shouldn't be attempted until you fully understand and have experience with in-order command queues.
Clarification added later after DarkZeros pointed out the "on the device" part of the OP's question: My answer was from the host thread API point of view. On the device, with an in-order command queue all downstream commands are blocked by the current command. With an out-of-order queue they are only blocked by the event_wait_list. However, out-of-order command queues are not well supported in today's drivers. With multiple command queues, in theory commands are only blocked by prior commands (if in-order) and the event_wait_list. In reality, there are sometimes special vendor rules that prevent the free flowing of potentially non-blocked commands that you might like. This is often because the multiple OpenCL command queues get transferred to device-side memory and compute queues, and get executed in-order there. So depending on the order that you add commands to your multiple command queues, they might get interleaved in such a way that they block in sub-optimal ways. The best solution I'm aware of is to either be careful about the order you enqueue (based on knowledge of this implementation detail), or use one queue for memory and one for compute, which matches the device-side queueing.
If overlap of memory and compute is your goal, both AMD and NVIDIA provide examples of how to overlap memory and compute operations, and, for GPUs that support multiple compute operations, how to do that too. NVIDIA examples are hard to get hold of, but they are out there (from CUDA 4 days).
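To make the "one queue for memory, one for compute" suggestion concrete, here is a sketch of overlapping an upload with a kernel by chaining events between two in-order queues (error checks omitted; ctx, dev, krn, bufA/bufB, hostA/hostB, resA, bytes and gws are placeholders for objects you have already created):

    cl_int err;
    cl_command_queue q_mem = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_command_queue q_cmp = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_event wrote_a, ran_a;

    /* Upload chunk A on the transfer queue... */
    clEnqueueWriteBuffer(q_mem, bufA, CL_FALSE, 0, bytes, hostA,
                         0, NULL, &wrote_a);

    /* ...and make the compute queue wait only on that upload. */
    clSetKernelArg(krn, 0, sizeof(cl_mem), &bufA);
    clEnqueueNDRangeKernel(q_cmp, krn, 1, NULL, &gws, NULL,
                           1, &wrote_a, &ran_a);

    /* Meanwhile the transfer queue is free to upload chunk B, which can
     * overlap with the kernel working on chunk A. */
    clEnqueueWriteBuffer(q_mem, bufB, CL_FALSE, 0, bytes, hostB,
                         0, NULL, NULL);

    /* Read A's results back once its kernel has finished. */
    clEnqueueReadBuffer(q_mem, bufA, CL_FALSE, 0, bytes, resA,
                        1, &ran_a, NULL);

    clFlush(q_mem);
    clFlush(q_cmp);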
