OpenCL and multiple video cards

My understanding of the difference between CPUs and GPUs is that GPUs are not general-purpose processors: if a video card contains 10 GPUs, each GPU actually shares the same program pointer, so to optimize parallelism on the GPU I need to ensure each GPU is running the same code.
Synchronisation is not a problem on the same card since each GPU is physically running in parallel so they should all complete at the same time.
My question is: how does this work across multiple cards? At the speeds they operate at, doesn't the hardware introduce slight differences in execution time, such that a calculation on a GPU on one card may finish sooner or later than the same calculation on a GPU on another card?
Thanks.

Synchronisation is not a problem on the same card since each GPU is physically running in parallel so they should all complete at the same time.
This is not true. Different threads on a GPU may complete at different times due to differences in memory access latency, for example. That is why there are synchronization primitives in OpenCL such as the barrier command. You can never assume that your threads are running precisely in parallel.
The same is true for multiple GPUs. There is no guarantee that they are in sync, so you will need to rely on API calls such as clFinish to explicitly synchronize their work.
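For example, a minimal host-side sketch of that explicit synchronization (queue_a/queue_b, kernel_a/kernel_b and global_size are hypothetical names; the queues are assumed to belong to the two devices):

```c
#include <CL/cl.h>

/* Hypothetical sketch: run work on two devices (e.g. two cards), then
 * synchronize explicitly. The queues and kernels are assumed to have been
 * created against the corresponding devices already. */
void run_on_two_devices(cl_command_queue queue_a, cl_kernel kernel_a,
                        cl_command_queue queue_b, cl_kernel kernel_b,
                        size_t global_size)
{
    clEnqueueNDRangeKernel(queue_a, kernel_a, 1, NULL, &global_size, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(queue_b, kernel_b, 1, NULL, &global_size, NULL, 0, NULL, NULL);

    /* There is no implicit ordering between the two devices:
     * block on each queue before consuming the results. */
    clFinish(queue_a);
    clFinish(queue_b);
}
```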

I think you may be confused about how threads work on a GPU. First, to address the issue of multiple GPUs: multiple GPUs NEVER share the program pointer, so they will almost never complete a kernel at the same time.
On a single GPU, only threads that are executing ON THE SAME COMPUTE UNIT (or SM in NVIDIA parlance) AND are part of the same warp/wavefront are guaranteed to execute in sync.
You can never really count on this, but for some devices the compiler can determine that this will be the case (I am specifically thinking of some AMD devices, as long as the workgroup size is hardcoded to 64).
In any case, as @vocaro pointed out, that's why you need to use a barrier for local memory.
To emphasize, even on the same GPU, threads are not executing in parallel across the whole device - only within each compute unit.
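As a minimal illustration (a hypothetical kernel, not taken from the question), this is the usual pattern when work-items in a work-group share local memory:

```c
// Hypothetical OpenCL C kernel: work-items stage data in local memory,
// then synchronize before reading what their neighbours wrote.
__kernel void reverse_in_group(__global const float *in,
                               __global float *out,
                               __local float *tmp)
{
    size_t lid = get_local_id(0);
    size_t gid = get_global_id(0);
    size_t lsz = get_local_size(0);

    tmp[lid] = in[gid];

    /* Without this barrier there is no guarantee that the other work-items
     * in the group have written tmp[] yet, even on the same compute unit. */
    barrier(CLK_LOCAL_MEM_FENCE);

    out[gid] = tmp[lsz - 1 - lid];
}
```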

Related

OpenCL kernel queueing delays

I have a gigantic pile of data, 100 GB, but only 1 GB of video memory. I need to enqueue my kernel many times with MaxWorkgroupSize-sized chunks. That works out to roughly 10,000 kernel enqueues and 100 memory transfers. How badly will this affect my execution time? Also, is there a faster way of processing so much data? Would I be better off just running on my CPU with 8 threads, since then there are no data transfers or kernel launch delays? I'm asking before I code the thing because I want to make sure I have the right approach.
It depends on the nature of the work. GPUs are SIMD machines. If you are typically doing the same thing for each work item (e.g. branches normally go the same way for each one), that bodes well for a GPU. Even so, there are OpenCL implementations for an 8-thread CPU as well. Also, on platforms like Intel's embedded GPU (AMD's too?) you should consider the CL_MEM_USE_HOST_PTR flag on the memory buffer; you can use it to get zero-copy behaviour.
Enqueueing the same kernel multiple times doesn't impose any per-enqueue performance hit compared to a single kernel run. If anything, it becomes a little faster due to caching.
Also, you can run your code on CPU & GPU simultaneously, as both are OpenCL-compatible devices.
Your device can also use memory objects allocated from the host's RAM (the CL_MEM_ALLOC_HOST_PTR and CL_MEM_USE_HOST_PTR flags in clCreateBuffer()). Either way, memory transfers may not be the bottleneck.
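As a rough sketch of the chunked approach (all names hypothetical, error checking omitted, and assuming the data can be processed in independent fixed-size pieces):

```c
#include <CL/cl.h>
#include <stddef.h>

/* Hypothetical sketch: process a huge host array in chunks that fit in
 * device memory. CL_MEM_USE_HOST_PTR asks the implementation to use the
 * host allocation directly, which on some devices avoids a copy. */
void process_in_chunks(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                       float *host_data, size_t total, size_t chunk)
{
    for (size_t offset = 0; offset < total; offset += chunk) {
        size_t n = (total - offset < chunk) ? (total - offset) : chunk;

        cl_mem buf = clCreateBuffer(ctx,
                                    CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                    n * sizeof(float),
                                    host_data + offset, NULL);

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

        /* Map/unmap so the results for this chunk are guaranteed to be
         * visible through host_data again. */
        void *p = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_READ,
                                     0, n * sizeof(float), 0, NULL, NULL, NULL);
        clEnqueueUnmapMemObject(queue, buf, p, 0, NULL, NULL);
        clReleaseMemObject(buf);
    }
}
```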

Use GPU and CPU wisely

I'm a newbie to OpenCL and have just started learning. I wanted to know whether it is possible to execute a few threads on the GPU and the remaining threads on the CPU. In other words, if I launch 100 threads and assume I have an 8-core CPU, is it possible that 8 of the 100 threads will execute on the CPU while the remaining 92 run on the GPU? Can OpenCL help me do this job smoothly?
I wanted to know whether it is possible to execute a few threads on the GPU and the remaining threads on the CPU?
Yes
In other words, if I launch 100 threads and assume I have an 8-core CPU, is it possible that 8 of the 100 threads will execute on the CPU while the remaining 92 run on the GPU?
No. That description suggests that you'd be viewing the GPU & CPU as a single compute resource. You can't do that.
That doesn't mean you can't have both working on the same task.
The GPU and CPU will be considered to be separate OpenCL devices.
You can write code that can talk to multiple devices.
You can compile the same kernel for multiple devices.
You can ask for multiple devices to do work at the same time.
...but...
None of this is automatic.
OpenCL won't split a single NDRange (or equivalent) call between multiple devices.
This means you'd have to schedule tasks between the two devices yourself.
There's going to be quite a large disparity in speed, so keeping it optimal will require more than "92 here, 8 there".
What I've found works better is having the CPU work on a different task whilst the GPU is working. Maybe preparing the next piece of work for the GPU, or post-processing the results from the GPU. Sometimes this is normal code. Sometimes it's OpenCL.
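As a rough illustration of that manual scheduling (hypothetical names, error handling omitted; the two kernels are assumed to have been built from the same source for their respective devices):

```c
#include <CL/cl.h>

/* Hypothetical sketch: the programmer, not OpenCL, decides how much of the
 * range goes to each device. */
void split_range(cl_command_queue gpu_queue, cl_kernel gpu_kernel,
                 cl_command_queue cpu_queue, cl_kernel cpu_kernel,
                 size_t total_items, size_t cpu_share)
{
    size_t gpu_items  = total_items - cpu_share;
    size_t cpu_offset = gpu_items;

    /* GPU handles the first part of the range, CPU handles the rest. */
    clEnqueueNDRangeKernel(gpu_queue, gpu_kernel, 1, NULL,
                           &gpu_items, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(cpu_queue, cpu_kernel, 1, &cpu_offset,
                           &cpu_share, NULL, 0, NULL, NULL);

    /* The devices run independently; wait for both before merging results. */
    clFinish(gpu_queue);
    clFinish(cpu_queue);
}
```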
You can use multiple OpenCL devices to work on your algorithm, but the workload needs to be partitioned carefully enough that the work across devices is balanced properly, or else the overhead may make your runtime worse.
The AMD OpenCL Programming Guide, section 4.7, clearly covers using multiple OpenCL devices, so my answer is: yes, you can divide the work across multiple devices, smoothly, if and only if your scheduling algorithm is smart enough to balance the whole thing.
OpenCL code is compiled at run time for the selected device (CPU, or a specific GPU model).
You can switch which target you use for different tasks, but you can't (with any implementation I know of) split the same task between the CPU and GPU.

Parallelism in OpenCL on 1 cpu device

Is it possible to achieve the same level of parallelism with a multi-core CPU device as with multiple heterogeneous devices (like a GPU and CPU) in OpenCL?
I have an Intel i5 and am looking to optimise my code. When I query the platform for devices, I get only one device back: the CPU. I was wondering how I could optimise my code using this.
Also, if I used a single command queue for this device, would the application automatically assign the kernels to different compute devices or does it have to be done manually by the programmer?
Can a cpu device achieve the same level of parallelism as a gpu? Pretty much always no.
The number of compute units in a gpu is almost always more than in a cpu. For example, $50 can get you a video card with 10 compute units (Radeon 6450). The cheapest 8-core cpus on newegg are going for $189 (desktop cpu) and $269 (server).
The compute units of a cpu will run faster due to clock speed, and execute branching code much better than a gpu. You want a cpu if your workload has a lot of conditional statements.
A gpu will execute the same instructions on many pieces of data. The 6450 gpu has 16 'stream processors' per compute unit to make this happen. Gpus are great when you have to do the same (small/medium) tasks many times. Matrix multiplication, n-body computations, reduction operations, and some sorting algorithms run much better on gpu/accelerator hardware than on a cpu.
I answered a similar question with more detail a few weeks ago. (This one)
Getting back to your question about the "same level of parallelism": cpus don't have the same level of parallelism as gpus, except in cases where the gpu underperforms on the execution of the actual kernel.
On your i5 system, there would be only one cpu device. This represents the entire cpu. When you query for the number of compute units, opencl will return the number of cores you have. If you want to use all cores, you just run the kernel on your device, and opencl will use all of the compute units (cores) for you.
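For example, a minimal sketch of that query (assuming a platform that exposes a CPU device):

```c
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_uint compute_units;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    /* On a CPU device this typically reports the number of (logical) cores. */
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(compute_units), &compute_units, NULL);
    printf("CPU device has %u compute units\n", compute_units);
    return 0;
}
```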
Short answer: yes, it will run in parallel and no, no need to do it manually.
Long answer:
Also, if I used a single command queue for this device, would the application automatically assign the kernels to different compute devices [...]
Either you need to revise your OpenCL vocabulary or I didn't understand your question. You only have one device and core != device!
One CPU, regardless of how many cores it has, is one device. The same goes for a GPU: one GPU, which has hundreds of cores, is still only one device. You send jobs to the device through the queue and the device's driver. Your jobs can (and will) be split up into work-items. Then some work-items (how many depends on the device/driver) are executed in parallel. On the GPU as well as on the CPU, each work-item executes one instance of the kernel. (This might not be completely true, but it is a very helpful abstraction.)
If you enqueue several kernels in one queue (without connecting them through a wait event!), the driver may or may not run them in parallel.
It is the very goal of OpenCL to let you compute work-items in parallel, regardless of whether it is using several devices' cores or only a single device's cores.
If this confuses you, watch these really good (and long) videos: http://macresearch.org/opencl
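To make the event point above concrete, here is a hedged sketch (hypothetical names, error handling omitted) of chaining two kernels on one queue:

```c
#include <CL/cl.h>

/* Hypothetical sketch: chaining two kernels through an event forces an
 * ordering even on an out-of-order queue; without the event the driver is
 * free to overlap them (or not). */
void chained_kernels(cl_command_queue queue, cl_kernel first, cl_kernel second,
                     size_t global_size)
{
    cl_event first_done;

    clEnqueueNDRangeKernel(queue, first, 1, NULL, &global_size, NULL,
                           0, NULL, &first_done);

    /* 'second' will not start until 'first' has completed. */
    clEnqueueNDRangeKernel(queue, second, 1, NULL, &global_size, NULL,
                           1, &first_done, NULL);

    clReleaseEvent(first_done);
    clFinish(queue);
}
```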
How are you determining the OpenCL device count? I have an Intel i3 laptop that gives me 2 OpenCL compute units; it has 2 cores.
According to Intel's spec, an i5-2300 has 4 cores and supports 4 threads; it isn't hyper-threaded. I would expect an OpenCL query for the number of compute units to return 4.

OpenCL - waste of host computing power

I am new to OpenCL. Please tell me: can the host CPU be used only for allocating memory to the device, or can we use it as an OpenCL device too? (Because after the allocation is done, the host CPU would otherwise be idle.)
You can use a CPU as a compute device. OpenCL even allows multi-core/multi-processor systems to segment cores into separate compute units. I like to use this feature to divide the CPUs on my system into groups based on NUMA nodes. It is also possible to divide a CPU into compute devices which all share the same level of cache memory (L1, L2, L3 or L4).
You need a platform that supports it, such as AMD's SDK. I know there are ways to have Nvidia and AMD platforms on the same machine, but I have never had to do so myself.
Also, the OpenCL event/callback system lets you use your CPU as you normally would while the GPU kernels are executing. That way, you can run OpenMP or any other host code while you wait for the GPU kernel to finish.
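If your platform supports it, partitioning a CPU by affinity domain looks roughly like this with the OpenCL 1.2 clCreateSubDevices API (a sketch, error handling omitted):

```c
#include <CL/cl.h>

/* Hypothetical sketch: split a CPU device into sub-devices, one per NUMA
 * node, so each can be driven as its own compute device (OpenCL 1.2). */
void split_cpu_by_numa(cl_device_id cpu_device)
{
    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN,
        CL_DEVICE_AFFINITY_DOMAIN_NUMA,
        0
    };

    cl_uint num_sub = 0;
    clCreateSubDevices(cpu_device, props, 0, NULL, &num_sub);

    cl_device_id sub[16];               /* assume a small upper bound here */
    if (num_sub > 16) num_sub = 16;
    clCreateSubDevices(cpu_device, props, num_sub, sub, NULL);

    /* Each entry of sub[] can now get its own context and command queue. */
}
```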
There's no reason the CPU has to be idle, but it needs a separate job to do. Once you've submitted work to OpenCL you can:
Get on with something else, like preparing the next set of work, or performing calculation on something completely different.
Have the CPU set up as another compute device, and so submit a piece of work to it.
Personally, I tend to find myself needing the first case more often, as it's rare that I have two tasks that are independent and both lend themselves to the OpenCL style. The trick is keeping things balanced so you're not waiting a long time for the GPU task to finish, or leaving the GPU idle while the CPU is getting on with other work.
It's the same problem OpenGL coders had to conquer. Avoiding being CPU or GPU bound, and balancing between the two for best performance.
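A minimal sketch of the first option, keeping the CPU busy while the GPU works (hypothetical names, error handling omitted):

```c
#include <CL/cl.h>

/* Hypothetical sketch: keep the CPU busy while the GPU crunches.
 * do_cpu_side_work() stands in for whatever independent host work exists. */
extern void do_cpu_side_work(void);

void overlap_cpu_and_gpu(cl_command_queue gpu_queue, cl_kernel kernel,
                         size_t global_size)
{
    /* Non-blocking: this returns as soon as the work is queued. */
    clEnqueueNDRangeKernel(gpu_queue, kernel, 1, NULL, &global_size, NULL,
                           0, NULL, NULL);
    clFlush(gpu_queue);          /* make sure the device actually starts */

    do_cpu_side_work();          /* e.g. prepare the next batch of input */

    clFinish(gpu_queue);         /* now wait for the GPU results */
}
```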

Sharing the GPU between OpenCL capable programs

Is there a method to share the GPU between two separate OpenCL capable programs, or more specifically between two separate processes that simultaneously both require the GPU to execute OpenCL kernels? If so, how is this done?
It depends what you call sharing.
In general, you can create 2 processes that both create an OpenCL device, on the same GPU. It's then the driver/OS/GPU's responsibility to make sure things just work.
That said, most implementations will time-slice the GPU execution to make that happen (just like it happens for graphics).
I sense this is not exactly what you're after, though. Can you expand your question with a use case?
Current GPUs (except NVIDIA's Fermi) do not support simultaneous execution of more than one kernel. Moreover, to this date GPUs do not support preemptive multitasking; it is completely cooperative! A kernel's execution cannot be suspended and resumed later on. So the granularity of any time-based GPU sharing depends on the kernels' execution times.
If you have multiple programs running that require GPU access, you should therefore make sure that your kernels have short runtimes (< 100 ms is a rule of thumb), so that GPU time can be time-sliced among the kernels that want GPU cycles. This is also important because otherwise the host system's graphics will become very unresponsive, since they need GPU access too. It can go as far as a kernel stuck in an endless or very long loop apparently freezing the system.
