OpenCL provides local memory, similar to shared memory in CUDA. In CUDA we have to use volatile with shared memory, because if you don't declare a shared array as volatile, the compiler is free to optimize shared-memory locations by keeping them in registers. That becomes a problem when threads communicate with each other. My question is: do we have to follow the same approach (using volatile) in an OpenCL kernel as well, and if so, how should I do it?
1) You don't need to use volatile with CUDA shared memory. Here is a good answer explaining that. Quote:
__syncthreads() call is sufficient to force thread synchronization as well as to force any register-cached values in shared memory to be evicted back to shared memory.
2) the OpenCL equivalent of __syncthreads() is barrier(CLK_LOCAL_MEM_FENCE). There is also a weaker mem_fence which is (supposedly) comparable to CUDA's __threadfence or __threadfence_block.
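For illustration, here is a minimal kernel sketch (a hypothetical partial-sum reduction, not taken from the question) showing the usual OpenCL pattern: work-items stage data in local memory, synchronize with barrier(CLK_LOCAL_MEM_FENCE), and then read each other's values, with no volatile qualifier needed:

```c
// Hypothetical partial-sum reduction: work-items exchange values through
// local memory. The barrier both synchronizes the work-group and makes the
// local-memory writes visible to the other work-items, so no volatile is
// needed on `scratch`.
__kernel void partial_sum(__global const float *in,
                          __global float *out,
                          __local float *scratch)
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    scratch[lid] = in[gid];
    barrier(CLK_LOCAL_MEM_FENCE);            // OpenCL analogue of __syncthreads()

    for (size_t stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);        // needed on every iteration
    }

    if (lid == 0)
        out[get_group_id(0)] = scratch[0];
}
```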
Is it possible, using OpenCL's DMA capabilities, to write to a main memory address that is passed into the cl program? I understand doing so would likely break the program, but the intent here is to run a GPU process and then overwrite the address space of the CPU program used to run it, so breakage is expected.
Thanks!
Which version of the OpenCL API are you targeting?
In OpenCL 2.0 and above you can use Shared Virtual Memory (SVM) to share an address space between the host and device(s) on platforms that support it.
You can get more information about it in the Intel OpenCL SVM overview.
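As a rough illustration of the SVM route, here is a sketch with made-up function and variable names, assuming an OpenCL 2.0 platform with coarse-grained SVM support:

```c
#include <CL/cl.h>

/* Sketch only: error checking omitted. Assumes `context`, `queue` and
 * `kernel` were created on an OpenCL 2.0 platform with SVM support,
 * and that the kernel takes a single global float* argument. */
void run_with_svm(cl_context context, cl_command_queue queue,
                  cl_kernel kernel, size_t n)
{
    /* Coarse-grained SVM: the same pointer is valid on host and device. */
    float *data = (float *)clSVMAlloc(context, CL_MEM_READ_WRITE,
                                      n * sizeof(float), 0);

    /* Host access to coarse-grained SVM must be bracketed by map/unmap. */
    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, data,
                    n * sizeof(float), 0, NULL, NULL);
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;
    clEnqueueSVMUnmap(queue, data, 0, NULL, NULL);

    /* Pass the raw pointer instead of a cl_mem object. */
    clSetKernelArgSVMPointer(kernel, 0, data);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(queue);

    clSVMFree(context, data);
}
```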
If you are using previous versions, or your hardware does not support it, you can use pinned memory with the appropriate flags to clCreateBuffer. In particular, CL_MEM_USE_HOST_PTR or CL_MEM_ALLOC_HOST_PTR, see clCreateBuffer in Khronos.
Note that CL_MEM_USE_HOST_PTR comes with some alignment restrictions.
In general, in OpenCL, when and how the DMA is used depends on the hardware platform, so you should refer to the vendor documentation for details.
I am trying to understand when to use CL_MEM_USE_HOST_PTR on an Intel CPU-GPU SoC.
Reading this guide, I came across the following:
If your application uses a specific memory management algorithm, or if you want to wrap existing native application memory allocations, you can pass a pointer to clCreateBuffer along with the CL_MEM_USE_HOST_PTR flag.
Can someone explain, with an example, what is meant by a "specific memory management algorithm" and by "wrap existing native application memory allocations"?
The CL_MEM_USE_HOST_PTR flag means that the memory for the OpenCL memory object is not allocated on the device side; instead, memory allocated on the host side is used. The contents may still be cached, but this is opaque to the user.
Imagine that you have a complicated library with its own sophisticated memory allocation mechanisms (e.g. reference counting). It is not easy (usually impossible) to allocate the OpenCL memory objects "by hand", as they must have the same lifetime as the objects allocated by the library (and possibly the same alignment), etc.
In that case, a much easier way is to use the CL_MEM_USE_HOST_PTR flag when creating the OpenCL memory objects; all the object handling is then done under the hood. This can save you a lot of pain, especially in big projects written in plain C, where managing memory objects is always tricky.
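Here is a hypothetical sketch of that wrapping case: an array the library already owns is exposed to OpenCL without reallocating it. The function and variable names are made up, and the alignment notes reflect Intel's zero-copy guidance, so check your vendor documentation:

```c
#include <CL/cl.h>

/* Sketch only: `lib_array` stands in for memory owned by an existing
 * library and must stay alive at least as long as the returned cl_mem.
 * For zero-copy on Intel devices it should also meet the documented
 * alignment requirements (typically page-aligned); check the vendor guide. */
cl_mem wrap_library_array(cl_context context, float *lib_array, size_t n)
{
    cl_int err;
    cl_mem buf = clCreateBuffer(context,
                                CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                n * sizeof(float),
                                lib_array,      /* reuse the library's memory */
                                &err);
    return (err == CL_SUCCESS) ? buf : NULL;
}
```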
I'd like to know what exactly happens when we assign a memory object to a context in OpenCL.
Does the runtime copy the data to all of the devices that are associated with the context?
I'd be thankful if you could help me understand this issue :-)
Generally, the copy happens when the runtime handles the clEnqueueWriteBuffer / clEnqueueReadBuffer commands.
However, if you created the memory object using certain combinations of flags, the runtime can choose to copy the memory sooner than that (like right after creation) or later (like on-demand before running a kernel or even on-demand as it needs it). Vendor documentation often indicates if they take special advantage of any of these flags.
A couple of the "interesting" variations:
Shared memory (Intel Integrated Graphics GPUs, AMD APUs, and CPU drivers): You can allocate a buffer and never copy it to the device because the device can access host memory.
On-demand paging: Some discrete GPUs can copy buffer memory over PCIe as it is read or written by a kernel.
Those are both "advanced" usage of OpenCL buffers. You should probably start with "regular" buffers and work your way up if they don't do what you need.
This post describes the extra flags fairly well.
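To make the timing question concrete, here is a hedged sketch (made-up names, no error checking) contrasting an explicit clEnqueueWriteBuffer with CL_MEM_COPY_HOST_PTR; exactly when the runtime performs the copy in the second case is implementation-defined:

```c
#include <CL/cl.h>

/* Sketch only: `host_data` is an existing host array of `n` floats. */
void create_and_fill(cl_context ctx, cl_command_queue q,
                     float *host_data, size_t n)
{
    cl_int err;
    size_t bytes = n * sizeof(float);

    /* Variant 1: plain buffer; the data moves when the write command is
     * enqueued (the typical case described above). */
    cl_mem a = clCreateBuffer(ctx, CL_MEM_READ_ONLY, bytes, NULL, &err);
    clEnqueueWriteBuffer(q, a, CL_TRUE, 0, bytes, host_data, 0, NULL, NULL);

    /* Variant 2: CL_MEM_COPY_HOST_PTR asks the runtime to take its own copy
     * of host_data at creation time; no separate write command is needed,
     * but when the data actually reaches each device is up to the runtime. */
    cl_mem b = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                              bytes, host_data, &err);

    clReleaseMemObject(a);
    clReleaseMemObject(b);
}
```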
I am solving a 2d Laplace equation using OpenCL.
The global memory access version runs faster than the one using shared memory.
The algorithm used for shared memory is the same as the one in the OpenCL Game of Life code.
https://www.olcf.ornl.gov/tutorials/opencl-game-of-life/
If anyone has faced the same problem please help. If anyone wants to see the kernel I can post it.
If your global-memory version really runs faster than your local-memory version (assuming both are equally optimized for the memory space they use), maybe this paper could answer your question.
Here's a summary of what it says:
Usage of local memory in a kernel adds another constraint on the number of concurrent work-groups that can run on the same compute unit.
Thus, in certain cases, it may be more efficient to remove this constraint and live with the higher latency of global memory accesses. More wavefronts (warps in NVIDIA parlance; each work-group is divided into wavefronts/warps) running on the same compute unit allow your GPU to hide latency better: if one is waiting for a memory access to complete, another can compute in the meantime.
In the end, each work-group will take more wall-clock time to finish, but your GPU will be kept completely busy because it is running more of them concurrently.
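For reference, here is a hedged sketch of the kind of local-memory tiling the Game of Life tutorial applies, adapted to a 5-point Jacobi/Laplace step (a made-up kernel, assuming the global size is a multiple of TILE in each dimension); the per-work-group tile array is exactly the resource that limits how many work-groups a compute unit can keep resident:

```c
// Hypothetical 5-point Jacobi step with local-memory tiling (halo of 1).
// Each work-group allocates a (TILE+2) x (TILE+2) float tile; the more local
// memory a work-group uses, the fewer work-groups a compute unit can keep
// resident, which is the occupancy trade-off described in the paper.
#define TILE 16

__kernel void jacobi_tiled(__global const float *u_in,
                           __global float *u_out,
                           const int width, const int height)
{
    __local float tile[TILE + 2][TILE + 2];

    const int gx = get_global_id(0), gy = get_global_id(1);
    const int lx = get_local_id(0) + 1, ly = get_local_id(1) + 1;

    // Each work-item loads its own point; edge work-items also load the halo.
    tile[ly][lx] = u_in[gy * width + gx];
    if (get_local_id(0) == 0 && gx > 0)
        tile[ly][0] = u_in[gy * width + gx - 1];
    if (get_local_id(0) == TILE - 1 && gx < width - 1)
        tile[ly][TILE + 1] = u_in[gy * width + gx + 1];
    if (get_local_id(1) == 0 && gy > 0)
        tile[0][lx] = u_in[(gy - 1) * width + gx];
    if (get_local_id(1) == TILE - 1 && gy < height - 1)
        tile[TILE + 1][lx] = u_in[(gy + 1) * width + gx];
    barrier(CLK_LOCAL_MEM_FENCE);

    if (gx > 0 && gx < width - 1 && gy > 0 && gy < height - 1)
        u_out[gy * width + gx] = 0.25f * (tile[ly][lx - 1] + tile[ly][lx + 1] +
                                          tile[ly - 1][lx] + tile[ly + 1][lx]);
}
```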
No, it doesn't. It only says that, all other things being equal, an access to local memory is faster than an access to global memory. It seems to me that the global accesses in your kernel are being coalesced, which yields better performance.
Using shared memory (memory shared with the CPU) isn't always going to be faster. On a modern graphics card, it would only be faster when the GPU and CPU are both performing operations on the same data and need to share information with each other, since the memory then doesn't have to be copied from the card to the system and vice versa.
However, if your program runs entirely on the GPU, it could very well execute faster using the card's own memory (GDDR5) exclusively, since the GPU's memory is not only likely to be much faster than your system's, there is also no latency from reading memory over the PCI-E lanes.
Think of the graphics card's memory as a kind of "L3 cache" and your system's memory as a resource shared by the entire system; you only use it when multiple devices need to share information (or if your cache is full). I'm not a CUDA or OpenCL programmer; I've never even written Hello World with these APIs. I've only read a few white papers, so this is just common sense (or maybe my Computer Science degree is useful after all).
For example: to run an algorithm on an array, we must create a buffer from that array.
But on an Intel/AMD CPU, the device uses the system's DDR as global memory.
In the end, the array exists twice in memory. Is there a way to use the array already in memory without allocating a buffer?
You can ask OpenCL to use the original memory area by setting the CL_MEM_USE_HOST_PTR flag when creating the buffer.
If the kernel is run on a CPU no memory copy will occur.
If run on a GPU a copy might occur if the OpenCL runtime thinks it's more suitable.
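A minimal sketch of that suggestion, assuming the array in question already lives in host memory (names are made up, error checking omitted); mapping the buffer instead of reading it back avoids a second copy whenever the runtime can honour zero-copy, as it typically does on CPU devices:

```c
#include <CL/cl.h>

/* Sketch only: `array` is the existing host array of `n` floats and the
 * kernel takes it as its single buffer argument. Error checking omitted. */
void process_in_place(cl_context ctx, cl_command_queue q,
                      cl_kernel kernel, float *array, size_t n)
{
    cl_int err;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                n * sizeof(float), array, &err);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* Map instead of clEnqueueReadBuffer: on a CPU device this is typically
     * a no-op, so the data is never duplicated. */
    float *view = (float *)clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ,
                                              0, n * sizeof(float),
                                              0, NULL, NULL, &err);
    /* ... use view[0..n-1] on the host ... */
    clEnqueueUnmapMemObject(q, buf, view, 0, NULL, NULL);
    clReleaseMemObject(buf);
}
```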
The CPU has access to the machine's memory, but doesn't have access to the GPU's memory. Likewise, the GPU has access to its own memory, but not to the host machine's. This is the reason that you must transfer the information between those - they are two completely separate memory spaces.
Unlike a pure GPGPU setup, with OpenCL the kernel might run on the CPU itself, so there is no need to copy the buffer; but OpenCL still requires you to express the transfer explicitly, and the implementation simply skips it when it is running on the host computer.