Asynchronous CUDA transfer calls not behaving asynchronously

I am using my GPU concurrently with my CPU. When I profile memory transfers I find that the async calls in cuBLAS do not behave asynchronously.
I have code that does something like the following
cudaEvent_t event;
cudaEventCreate(&event);
// time-point A
cublasSetVectorAsync(n, elemSize, x, incx, y, incy, 0); // copy host x -> device y in stream 0
cudaEventRecord(event);
// time-point B
cudaEventSynchronize(event);
// time-point C
I'm using sys/time.h to profile (code omitted for clarity). I find that the cublasSetVectorAsync call dominates the time as though it were behaving synchronously, i.e. the duration A-B is much longer than the duration B-C and increases as I increase the size of the transfer.
What are possible reasons for this? Is there some environment variable I need to set somewhere or an updated driver that I need to use?
I'm using a GeForce GTX 285 with Cuda compilation tools, release 4.1, V0.2.1221

cublasSetVectorAsync is a thin wrapper around cudaMemcpyAsync. Unfortunately, in some circumstances, the name of this function is a misnomer, as explained on this page from the CUDA reference manual.
Notably:
For transfers from pageable host memory to device memory, a stream sync is performed before the copy is initiated. The function will return once the pageable buffer has been copied to the staging memory for DMA transfer to device memory, but the DMA to final destination may not have completed.
And
For transfers from pageable host memory to device memory, host memory is copied to a staging buffer immediately (no device synchronization is performed). The function will return once the pageable buffer has been copied to the staging memory. The DMA transfer to final destination may not have completed.
So the solution to your problem is likely to just allocate x, your host data array, using cudaHostAlloc, rather than standard malloc (or C++ new).
Alternatively, if your GPU and CUDA version support it, you can use malloc and then call cudaHostRegister on the malloc-ed pointer. Note the condition in the documentation that you must create your CUDA context with the cudaDeviceMapHost flag for cudaHostRegister to have any effect (see the documentation for cudaSetDeviceFlags).
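For example, a minimal sketch of both approaches (not the original code; n and d_y, a device array allocated with cudaMalloc, are assumed from the question's context):
// Option 1: allocate the host array as pinned memory up front.
float *x = NULL;
cudaHostAlloc((void **)&x, n * sizeof(float), cudaHostAllocDefault);
// ... fill x ...
// With pinned host memory this call should return almost immediately;
// the DMA to the device proceeds asynchronously in stream 0.
cublasSetVectorAsync(n, sizeof(float), x, 1, d_y, 1, 0);
// Option 2: pin an already malloc-ed buffer in place (CUDA 4.0+).
float *x2 = (float *)malloc(n * sizeof(float));
cudaHostRegister(x2, n * sizeof(float), cudaHostRegisterDefault);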

In cuBLAS/cuSPARSE, things take place in stream 0 if you don't specify a different stream. To specify a stream, you have to use cublasSetStream (see cuBLAS documentation).
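A minimal sketch of the handle-based (v2) API, with illustrative names:
cublasHandle_t handle;
cublasCreate(&handle);
cudaStream_t stream;
cudaStreamCreate(&stream);
cublasSetStream(handle, stream); // cuBLAS kernels launched via this handle now run in 'stream'
// cublasSetVectorAsync takes the stream directly as its last argument:
cublasSetVectorAsync(n, elemSize, x, incx, y, incy, stream);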

Related

OpenCL Buffer Creation

I am fairly new to OpenCL and although I have understood everything up to now, I am having trouble understanding how buffer objects work.
I haven't understood where a buffer object is stored. In this StackOverflow question it is stated that:
If you have one device only, probably (99.99%) is going to be in the device. (In rare cases it may be in the host if the device does not have enough memory for the time being)
To me, this means that buffer objects are stored in device memory. However, as is stated in this StackOverflow question, if the flag CL_MEM_ALLOC_HOST_PTR is used in clCreateBuffer, the memory used will most likely be pinned memory. My understanding is that, when memory is pinned it will not be swapped out. This means that pinned memory MUST be located in RAM, not in device memory.
So what is actually happening?
What I would like to know is what the flags:
CL_MEM_USE_HOST_PTR
CL_MEM_COPY_HOST_PTR
CL_MEM_ALLOC_HOST_PTR
imply about the location of the buffer.
Thank you
Let's first have a look at the signature of clCreateBuffer:
cl_mem clCreateBuffer(
cl_context context,
cl_mem_flags flags,
size_t size,
void *host_ptr,
cl_int *errcode_ret)
There is no argument here that tells the OpenCL runtime which device's memory the buffer should be placed in, as a context can have multiple devices. The runtime only knows once we actually use a buffer object, e.g. read from or write to it, because those operations need a command queue that is connected to a specific device.
Every memory object can reside in either host memory or the memory of one of the context's devices, and the runtime may migrate it as needed. So in general, every memory object might have a piece of internal host memory within the OpenCL runtime. What the runtime actually does is implementation-dependent, so we cannot make too many assumptions and get no portable guarantees. That means everything about pinning etc. is implementation-dependent; you can only hope for the best, and avoid patterns that will definitely prevent the use of pinned memory.
Why do we want pinned memory?
Pinned memory means that the virtual address of our memory page in our process' address space has a fixed translation into a physical memory address of the RAM. This enables DMA (Direct Memory Access) transfers (which operate on physical addresses) between the device memory of a GPU and the CPU memory over PCIe. DMA lowers the CPU load and possibly increases copy speed. So we want the internal host storage of our OpenCL memory objects to be pinned, to increase the performance of data transfers between the internal host storage and the device memory of an OpenCL memory object.
As a basic rule of thumb: if your runtime allocates the host memory, it might be pinned. If you allocate it in your application code, the runtime will pessimistically assume it is not pinned - which usually is a correct assumption.
CL_MEM_USE_HOST_PTR
Allows us to provide memory to the OpenCL implementation for internal host-storage of the object. It does not mean that the memory object will not be migrated into device memory if we call a kernel. As that memory is user-provided, the runtime cannot assume it to be pinned. This might lead to an additional copy between the un-pinned internal host storage and a pinned buffer prior to device transfer, to enable DMA for host-device-transfers.
CL_MEM_ALLOC_HOST_PTR
We tell the runtime to allocate host memory for the object. It could be pinned.
CL_MEM_COPY_HOST_PTR
We provide host memory to copy-initialise our buffer from, not to use it internally. We can also combine it with CL_MEM_ALLOC_HOST_PTR. The runtime will allocate memory for internal host storage. It could be pinned.
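To make the flags concrete, here is a minimal sketch (context creation and error handling omitted; names are placeholders):
cl_int err;
size_t bytes = 1024 * sizeof(float);
float host_data[1024]; // existing, application-owned memory
// CL_MEM_USE_HOST_PTR: use the application's memory as host-side storage.
cl_mem buf_use = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                                bytes, host_data, &err);
// CL_MEM_ALLOC_HOST_PTR: let the runtime allocate (possibly pinned) host memory.
cl_mem buf_alloc = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                  bytes, NULL, &err);
// CL_MEM_ALLOC_HOST_PTR | CL_MEM_COPY_HOST_PTR: runtime allocates, then copy-initialises from host_data.
cl_mem buf_copy = clCreateBuffer(context,
                                 CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR | CL_MEM_COPY_HOST_PTR,
                                 bytes, host_data, &err);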
Hope that helps.
The specification is (deliberately?) vague on the topic, leaving a lot of freedom to implementors. So unless an OpenCL implementation you are targeting makes explicit guarantees for the flags, you should treat them as advisory.
First off, CL_MEM_COPY_HOST_PTR actually has nothing to do with allocation, it just means that you would like clCreateBuffer to pre-fill the allocated memory with the contents of the memory at the host_ptr you passed to the call. This is as if you called clCreateBuffer with host_ptr = NULL and without this flag, and then made a blocking clEnqueueWriteBuffer call to write the entire buffer.
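In other words, roughly (a sketch with placeholder names; error handling omitted):
cl_mem buf = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                            bytes, host_ptr, &err);
// behaves like:
cl_mem buf2 = clCreateBuffer(context, CL_MEM_READ_ONLY, bytes, NULL, &err);
clEnqueueWriteBuffer(queue, buf2, CL_TRUE /* blocking */, 0, bytes, host_ptr,
                     0, NULL, NULL);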
Regarding allocation modes:
CL_MEM_USE_HOST_PTR - this means you've pre-allocated some memory, correctly aligned, and would like to use this as backing memory for the buffer. The implementation can still allocate device memory and copy back and forth between your buffer and the allocated memory, if the device does not support directly accessing host memory, or if the driver decides that a shadow copy to VRAM will be more efficient than directly accessing system memory. On implementations that can read directly from system memory though, this is one option for zero-copy buffers.
CL_MEM_ALLOC_HOST_PTR - This is a hint to tell the OpenCL implementation that you're planning to access the buffer from the host side by mapping it into host address space, but unlike CL_MEM_USE_HOST_PTR, you are leaving the allocation itself to the OpenCL implementation. For implementations that support it, this is another option for zero-copy buffers: create the buffer, map it to the host, get a host algorithm or I/O to write to the mapped memory, then unmap it and use it in a GPU kernel (a sketch of this pattern follows below). Unlike CL_MEM_USE_HOST_PTR, this leaves the door open for using VRAM that can be mapped directly to the CPU's address space (e.g. PCIe BARs).
Default (neither of the above 2): Allocate wherever most convenient for the device. Typically VRAM, and if memory-mapping into host memory is not supported by the device, this typically means that if you map it into host address space, you end up with 2 copies of the buffer, one in VRAM and one in system memory, while the OpenCL implementation internally copies back and forth between the 2.
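The CL_MEM_ALLOC_HOST_PTR map/write/unmap pattern mentioned above might look like this (a sketch with placeholder names; error handling omitted):
cl_mem buf = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                            bytes, NULL, &err);
void *p = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                             0, bytes, 0, NULL, NULL, &err);
// ... have a host algorithm or I/O fill the mapped memory at p ...
clEnqueueUnmapMemObject(queue, buf, p, 0, NULL, NULL);
// buf can now be set as a kernel argument and used on the device.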
Note that the implementation may also use any access flags provided (CL_MEM_HOST_WRITE_ONLY, CL_MEM_HOST_READ_ONLY, CL_MEM_HOST_NO_ACCESS, CL_MEM_WRITE_ONLY, CL_MEM_READ_ONLY, and CL_MEM_READ_WRITE) to influence the decision where to allocate memory.
Finally, regarding "pinned" memory: many modern systems have an IOMMU, and when this is active, system memory access from devices can cause IOMMU page faults, so the host memory technically doesn't even need to be resident. In any case, the OpenCL implementation is typically deeply integrated with a kernel-level device driver, which can typically pin system memory ranges (exclude them from paging) on demand. So if using CL_MEM_USE_HOST_PTR you just need to make sure you provide appropriately aligned memory, and the implementation will take care of pinning for you.

Simultaneously use OpenCL buffer in host and kernel

After creating the OpenCL buffer, we need to map it on the host side, populate the required data, and unmap it so that the kernel can use it. For a read-only OpenCL buffer, is it possible to use it on the host side as well as the kernel side simultaneously?
No, not if you're using map/unmap. The content of the host memory range is invalid after the unmap. Perhaps you could use clEnqueueWriteBuffer instead, and then the host memory that you used as the source will still be host memory you can use on the host side.
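For instance (a sketch with placeholder names; error handling omitted):
// The source stays ordinary, valid host memory after this call:
clEnqueueWriteBuffer(queue, buf, CL_TRUE /* blocking */, 0, bytes,
                     host_src, 0, NULL, NULL);
// The host can keep reading host_src while a kernel reads the
// separate device-side copy held in buf.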
Again, not with regular memory. In general you can share memory between the GPU and CPU concurrently, even to communicate between them; look into Shared Virtual Memory (supported by AMD and Intel).
Non-standard CPU/GPU communication is pretty rare for the simple reason that one doesn't get to assume what order the ND range executes in. An implementation can dispatch the workgroups in any order it desires. So if the buffer's contents were changing as the kernel dispatches new workgroups, you wouldn't have any control over the dataflow sequence.
Rare exceptions like "persistent kernels" where the kernel continues running (as if processing a stream) do exist, but I know less about this.

Memory Object Assignation to Context Mechanism In OpenCL

I'd like to know what exactly happens when we assign a memory object to a context in OpenCL.
Does the runtime copy the data to all of the devices associated with the context?
I'd be thankful if you help me understand this issue :-)
Generally and typically the copy happens when the runtime handles the clEnqueueWriteBuffer / clEnqueueReadBuffer commands.
However, if you created the memory object using certain combinations of flags, the runtime can choose to copy the memory sooner than that (like right after creation) or later (like on-demand before running a kernel or even on-demand as it needs it). Vendor documentation often indicates if they take special advantage of any of these flags.
A couple of the "interesting" variations:
Shared memory (Intel Integrated Graphics GPUs, AMD APUs, and CPU drivers): You can allocate a buffer and never copy it to the device because the device can access host memory.
On-demand paging: Some discrete GPUs can copy buffer memory over PCIe as it is read or written by a kernel.
Those are both "advanced" usage of OpenCL buffers. You should probably start with "regular" buffers and work your way up if they don't do what you need.
This post describes the extra flags fairly well.

Memory location and allocation

For example: to run an algorithm on an array, we must use a buffer created from that array.
But with an Intel/AMD CPU, the system's DDR is used as global memory.
So the array ends up existing twice in memory. Is there a way to use the array that is already in memory without allocating a buffer?
You can ask OpenCL to use the original memory area by setting the CL_MEM_USE_HOST_PTR flag when creating the buffer.
If the kernel is run on a CPU no memory copy will occur.
If run on a GPU a copy might occur if the OpenCL runtime thinks it's more suitable.
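A sketch of that approach (placeholder names; error handling omitted):
float data[4096]; // the array that already exists on the host
cl_mem buf = clCreateBuffer(context,
                            CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                            sizeof(data), data, &err);
// On a CPU device the kernel can typically work on 'data' in place;
// on a GPU the runtime may still create a device-side copy.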
The CPU has access to the machine's memory, but doesn't have access to the GPU's memory. Likewise, the GPU has access to its own memory, but not to the host machine's. This is the reason that you must transfer the information between those - they are two completely separate memory spaces.
As opposed to GPU-only GPGPU, with OpenCL the kernel might run on the CPU itself, so there is no need to copy the buffer; but OpenCL still always requires you to explicitly transfer the memory - it's just that the implementation will skip the copy if the kernel is running on the host machine.

Non-blocking write into an in-order queue

I have a buffer created with the CL_MEM_USE_HOST_PTR | CL_MEM_READ_WRITE flags. I have used this in one kernel and then downloaded (queue.enqueueReadBuffer(...)) the data back to the host memory that was set when the buffer was created. I have modified the data on the CPU and now I'd like to use them in another kernel.
When I uploaded (queue.enqueueWriteBuffer) the data manually using a non-blocking write and then enqueued a kernel with this buffer as an argument, it returned the CL_OUT_OF_RESOURCES error. A blocking write was just fine.
Why did this happen? I thought that the blocking/non-blocking version only controls whether I can work with the memory on the CPU after the enqueueWriteBuffer call returns; with an in-order queue there should be no difference for the kernel.
The second question is whether I have to upload the data manually at all - does CL_MEM_USE_HOST_PTR mean that the data has to be uploaded from host to device every time some kernel uses the buffer as an argument? Since I have to download the data manually when I need them, does the above-mentioned flag have any advantages?
Thanks
I can't be sure of the specific problem for your CL_OUT_OF_RESOURCES error. This error seems to be raised as kind of a catch-all for problems in the system, so the actual error you're getting might be caused by something else in your program (maybe the kernel).
In regards to using CL_MEM_USE_HOST_PTR, you still have to manually upload the data. The OpenCL specification states:
This flag is valid only if host_ptr is not NULL. If specified, it indicates that the application wants the OpenCL implementation to use memory referenced by host_ptr as the storage bits for the memory object. OpenCL implementations are allowed to cache the buffer contents pointed to by host_ptr in device memory. This cached copy can be used when kernels are executed on a device.
For some devices the data will be cached in device memory. In order to sync your data you would have to use clEnqueueReadBuffer / clEnqueueWriteBuffer or clEnqueueMapBuffer / clEnqueueUnmapMemObject. For discrete CPU+GPU combinations (i.e. a separate GPU card), I'm not sure what benefit there would be to using CL_MEM_USE_HOST_PTR, since the data will be cached anyway.
Upon reading the specification, there might be some performance benefit to using clEnqueueMapBuffer / clEnqueueUnmapMemObject instead of clEnqueueReadBuffer / clEnqueueWriteBuffer, but I haven't tested this on any real devices.
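A rough sketch of the map/unmap variant for this scenario - modify the data on the CPU between two kernels - with placeholder names and no error handling:
// Make the buffer's current contents visible to the host and modify them:
void *p = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE,
                             0, bytes, 0, NULL, NULL, &err);
// ... update the data through p on the CPU ...
clEnqueueUnmapMemObject(queue, buf, p, 0, NULL, NULL);
// After the unmap, the updated contents are available to the next kernel.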
Best of luck!
