Asynchronous execution of commands from two command queues in OpenCL

I am trying to build an application that uses both the CPU and the GPU at the same time with OpenCL. Specifically, I have two kernels, one to execute on the CPU and one on the GPU. The CPU kernel changes the content of one buffer, and the GPU kernel does other work once it detects that the buffer has been changed by the CPU.
__kernel void cpuKernel(__global uint * dst1, const uint size)
{
    uint tid = get_global_id(0);
    uint gsize = get_global_size(0);
    while(tid < size)
    {
        atomic_xchg(&dst1[tid], 10);
        tid += gsize;
    }
}
__kernel void gpuKernel(__global uint * dst1, __global uint * dst2, const uint size)
{
    uint tid = get_global_id(0);
    uint gsize = get_global_size(0);
    while(tid < size)
    {
        while(dst1[tid] != 10)
            ; // spin until the CPU kernel has written this element
        dst2[tid] = dst1[tid];
        tid += gsize;
    }
}
As shown above, cpuKernel changes each element of the dst1 buffer to 10; correspondingly, once the GPU detects such a change, it assigns that element's value (10) to the same position in another buffer, dst2. cpuKernel is enqueued in command1, which is associated with the CPU device, and gpuKernel is enqueued in command2, which is associated with the GPU device. Both command queues were created with the CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE flag, as sketched below.
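A minimal sketch of that queue setup, assuming cpuDevice and gpuDevice have already been selected from the same context (the variable names are illustrative, not from the original code):

cl_int err;
cl_command_queue command1 = clCreateCommandQueue(context, cpuDevice,
        CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);  /* CPU queue */
cl_command_queue command2 = clCreateCommandQueue(context, gpuDevice,
        CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);  /* GPU queue */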
Then I test two cases (the enqueue calls are abbreviated):
case 1:
    clEnqueueNDRangeKernel(command2, gpuKernel);
    clEnqueueNDRangeKernel(command1, cpuKernel);
    clFinish(command1);
    clFinish(command2);
case 2:
    clEnqueueNDRangeKernel(command1, cpuKernel);
    clFinish(command1);
    clEnqueueNDRangeKernel(command2, gpuKernel);
    clFinish(command2);
But the results show that the time consumed in the two cases is nearly the same. I expected some overlap in case 1, but there is none. Can anyone help me? Thanks!
Or, can anyone explain how to implement two kernels running asynchronously on two devices in OpenCL?

You are asking too much. As you have probably noticed, buffer objects belong to a context, while command queues belong to devices.
If a kernel operates on a buffer object, the corresponding data must be present on the device running it. If you do not transfer it explicitly with clEnqueueWriteBuffer(), OpenCL will do that for you.
Hence, if you modify a buffer object with a kernel on one device (for example the CPU), and just after with a kernel on another device (for example the GPU), the OpenCL driver will wait for the first kernel to finish, transfer the data, and then run the second kernel.
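By contrast, if the two kernels work on independent buffers, the two queues can genuinely run concurrently. A minimal host-side sketch, where queueCpu/queueGpu, bufCpu/bufGpu and the global sizes are hypothetical names set up elsewhere, and cpuKernel/gpuKernel stand for the corresponding cl_kernel objects:

cl_event evCpu, evGpu;
/* each kernel touches only its own buffer, so no implicit transfer serializes them */
clSetKernelArg(cpuKernel, 0, sizeof(cl_mem), &bufCpu);
clSetKernelArg(gpuKernel, 0, sizeof(cl_mem), &bufGpu);
clEnqueueNDRangeKernel(queueCpu, cpuKernel, 1, NULL, &globalCpu, NULL, 0, NULL, &evCpu);
clEnqueueNDRangeKernel(queueGpu, gpuKernel, 1, NULL, &globalGpu, NULL, 0, NULL, &evGpu);
/* both kernels are now in flight on their respective devices */
cl_event evs[2] = { evCpu, evGpu };
clWaitForEvents(2, evs);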

Related

Understanding Performance Behavior of Random Writes to Global Memory

I'm running experiments aiming to understand the behavior of random read and write access to global memory.
The following kernel reads from an input vector (groupColumn) with a coalesced access pattern and reads random entries from a hash table in global memory.
struct Entry {
    uint group;
    uint payload;
};
typedef struct Entry Entry;

__kernel void global_random_write_access(__global const uint* restrict groupColumn,
                                         __global Entry* globalHashTable,
                                         __const uint HASH_TABLE_SIZE,
                                         __const uint HASH_TABLE_SIZE_BITS,
                                         __const uint BATCH,
                                         __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;
    uint sum = 0;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        // hash keys are pre-computed
        uint hash_key = groupColumn[idx];                    // coalesced read access
        __global Entry* entry = &globalHashTable[hash_key];  // pointer arithmetic
        sum += entry->payload;                               // random read
    }
    if (local_id < HASH_TABLE_SIZE) {
        globalHashTable[local_id].payload = sum;             // rare coalesced write
    }
}
I ran this kernel on a NVIDIA V100 card with multiple iterations. The variance of the results is very low, thus, I only plot one dot per group configuration. The input data size is 1 GiB and each thread processes 128 entries (BATCH = 128). Here are the results:
So far so good. The V100 has a max memory bandwidth of roughly 840GiB/sec and the measurements are close enough, given the fact that there are random memory reads involved.
Now I'm testing random writes to global memory with the following kernel:
__kernel void global_random_write_access(__global const uint* restrict groupColumn,
                                         __global Entry* globalHashTable,
                                         __const uint HASH_TABLE_SIZE,
                                         __const uint HASH_TABLE_SIZE_BITS,
                                         __const uint BATCH,
                                         __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;
    uint sum = 0;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        // hash keys are pre-computed
        uint hash_key = groupColumn[idx];                    // coalesced read access
        __global Entry* entry = &globalHashTable[hash_key];  // pointer arithmetic
        sum += i;
        entry->payload = sum;                                // random write
    }
    if (local_id < HASH_TABLE_SIZE) {
        globalHashTable[local_id].payload = sum;             // rare coalesced write
    }
}
Godbolt: OpenCL -> PTX
The performance drops significantly to a few GiB/sec for few groups.
I can't make any sense of the behavior. As soon as the hash table reaches the size of L1 the performance seems to be limited by L2. For fewer groups the performance is way lower. I don't really understand what the limiting factors are.
The CUDA documentation doesn't say much about how store instructions are handled internally. The only thing I could find is that the st.wb PTX instruction (Cache Operations) might cause a hit on stale L1 cache if another thread tried to read the same address via ld.ca. However, there are no reads to the hash table involved here.
Any hints or links to understanding the performance behavior are much appreciated.
Edit:
I actually found a bug in my code that didn't pre-compute the hash keys. The access to global memory wasn't random, but actually coalesced due to how I generated the values. I further simplified my experiments by removing the hash table. Now I only have one integer input column and one integer output column. Again, I want to see how writes to global memory actually behave for different memory ranges. Ultimately, I want to understand which hardware properties influence the performance of writes to global memory and see if I can predict, based on the code, what performance to expect.
I tested this with two kernels that do the following:
Read from input, write to output
Read from input, read from output and write to output
I also applied two different access patterns by generating the values in the group column (see the sketch after this list):
SEQUENTIAL: sequentially increasing numbers until the current group's size is reached. This pattern leads to a coalesced memory access when reading and writing from the output column.
RANDOM: uniformly distributed random numbers within the current group's size. This pattern leads to a misaligned memory access when reading and writing from the output column.
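A minimal host-side sketch of how such a group column could be filled, under the assumptions stated above (fill_group_column and numGroups are hypothetical names, not from the original code):

#include <stdint.h>
#include <stdlib.h>

enum Pattern { SEQUENTIAL_PATTERN, RANDOM_PATTERN };

/* fills the group column with either sequentially increasing indices or
 * uniformly distributed random indices, both bounded by the group count */
static void fill_group_column(uint32_t *groupColumn, size_t n,
                              uint32_t numGroups, enum Pattern pattern)
{
    for (size_t i = 0; i < n; ++i) {
        if (pattern == SEQUENTIAL_PATTERN)
            groupColumn[i] = (uint32_t)(i % numGroups);       /* coalesced output access */
        else
            groupColumn[i] = (uint32_t)(rand() % numGroups);  /* random output access */
    }
}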
(1) Read & Write
__kernel void global_write_access(__global const uint* restrict groupColumn,
                                  __global uint* restrict output,
                                  __const uint BATCH,
                                  __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;
    uint sum = 0;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        uint group = groupColumn[idx];  // coalesced read access
        sum += i;
        output[group] = sum;            // write (coalesced | random)
    }
}
PTX Code: https://godbolt.org/z/19nTdK
(2) Read, Read & Write
__kernel void global_read_write_access(__global const uint* restrict groupColumn,
                                       __global uint* restrict output,
                                       __const uint BATCH,
                                       __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        uint group = groupColumn[idx];  // coalesced read access
        output[group] += 1;             // read & write (coalesced | random)
    }
}
PTX Code: https://godbolt.org/z/b647cz
As ProjectPhysX pointed out, the access pattern makes a huge difference. However, for small groups the performance is quite similar for both access patterns. In general, I would like to better understand the shape of the curves and which hardware properties, architectural features etc. influence this shape.
From the cuda programming guide I learned that global memory accesses are conducted via 32-, 64-, or 128-byte transactions. Accesses to L2 are done via 32-byte transactions. So up to 8 integer words can be accessed via a single transaction. This might explain the plateau with a bump at 8 groups at the beginning of the curve. After that more transactions are needed and performance drops.
One cache line is 128 bytes long (both in L1 and L2), hence 32 integers fit into a single cache line. For more groups, more cache lines are required, which can potentially be processed in parallel by more memory controllers. That might be the reason for the performance increase here. 8 memory controllers are available on the V100, so I would expect the performance to peak at 256 groups. It doesn't, though. Instead, performance steadily increases until reaching 4096 groups and plateaus there at roughly 750 GiB/sec.
The plateauing in your second performance plot is GPU saturation: for only a few work groups, the GPU is partly idle, and the latencies involved in launching the kernel significantly reduce performance. Above 8192 groups, the GPU fully saturates its memory bandwidth. The plateau is only at ~520 GB/s because of the misaligned writes (which have low performance on the V100) and also the "rare coalesced write" in the if-block, which happens at least once per group. With branching within the group, all other threads have to wait for the single write operation to finish. Also, this write is not coalesced, because it does not happen for every thread in the group. On the V100, misaligned write performance is very poor, at max. ~120 GB/s; see the benchmark here.
Note that if you comment out the if-part, the compiler sees that you do not do anything with sum and optimizes everything away, leaving you with a blank kernel in PTX.
The first performance graph is a bit more confusing to me. The only difference between the first kernel and the second is that the random write in the loop is replaced by a random read. Generally, read performance on the V100 is much better (~840 GB/s, regardless of coalesced/misaligned) than misaligned write performance, so performance is expected to be much better overall, and indeed it is. However, I can't make sense of the performance dropping for more groups, where saturation should theoretically be better. But the performance drop isn't really that significant, at ~760 GB/s vs. ~730 GB/s.
To summarize, you are observing that the performance penalty for misaligned writes (~120 GB/s vs. ~900 GB/s for coalesced writes) is much larger than for reads, where performance is about the same for coalesced/misaligned at ~840 GB/s. This is a common thing for GPUs, with some variance of course between microarchitectures. Typically there is at least some performance penalty for misaligned reads, but not as large as for misaligned writes.
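For illustration, a minimal pair of toy kernels in the spirit of the numbers quoted above, which could be used to compare coalesced and scattered writes (the kernel names and the modulo-based scatter are an assumption of mine, not the referenced benchmark):

__kernel void write_coalesced(__global uint* out) {
    uint i = get_global_id(0);
    out[i] = i;                   // neighbouring work-items write neighbouring addresses
}

__kernel void write_scattered(__global uint* out, const uint stride, const uint n) {
    uint i = get_global_id(0);
    out[(i * stride) % n] = i;    // neighbouring work-items are spread across cache lines
}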

speedup when using float4, opencl

I have the following OpenCL kernel function to compute the column sums of an image.
__kernel void columnSum(__global float* src, __global float* dst, int srcCols,
                        int srcRows, int srcStep, int dstStep)
{
    const int x = get_global_id(0);
    srcStep >>= 2;
    dstStep >>= 2;
    if (x < srcCols)
    {
        int srcIdx = x;
        int dstIdx = x;
        float sum = 0;
        for (int y = 0; y < srcRows; ++y)
        {
            sum += src[srcIdx];
            dst[dstIdx] = sum;
            srcIdx += srcStep;
            dstIdx += dstStep;
        }
    }
}
I assign each thread to process one column here, so that many threads can compute the column sums in parallel.
I also rewrote the kernel with float4, so that each thread can read 4 elements of a row at a time from the source image, as shown below.
__kernel void columnSum(__global float* src, __global float* dst, int srcCols,
                        int srcRows, int srcStep, int dstStep)
{
    const int x = get_global_id(0);
    srcStep >>= 2;
    dstStep >>= 2;
    if (x < srcCols/4)
    {
        int srcIdx = x;
        int dstIdx = x;
        float4 sum = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
        for (int y = 0; y < srcRows; ++y)
        {
            float4 temp2;
            temp2 = vload4(0, &src[4 * srcIdx]);
            sum = sum + temp2;
            vstore4(sum, 0, &dst[4 * dstIdx]);
            srcIdx += (srcStep/4);
            dstIdx += (dstStep/4);
        }
    }
}
In this case, theoretically, I thought the time consumed by the second kernel to process an image should be 1/4 of the time consumed by the first kernel. However, no matter how large the image is, the two kernels consume almost the same time. I don't know why. Can you give me some ideas?
OpenCL vector data types like float4 were a better fit for older GPU architectures, especially AMD's GPUs. Modern GPUs don't have SIMD registers available for individual work-items; they are scalar in that respect. CL_DEVICE_PREFERRED_VECTOR_WIDTH_* equals 1 for the OpenCL driver on NVIDIA Kepler GPUs and Intel HD integrated graphics. So adding float4 vectors on a modern GPU should require 4 operations. On the other hand, the OpenCL driver on an Intel Core CPU has CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT equal to 4, so these vectors can be added in a single step. You can query this yourself, as sketched below.
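A minimal sketch of that query, assuming device is a cl_device_id you have already obtained:

cl_uint width = 0;
clGetDeviceInfo(device, CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT,
                sizeof(width), &width, NULL);
printf("preferred float vector width: %u\n", width);  /* 1 on most modern GPUs, 4 or 8 on CPUs */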
You are directly reading the values from the "src" array (global memory), which is typically around 400 times slower than private memory. Your bottleneck is definitely the memory access, not the "add" operation itself.
When you move from float to float4, the vector operation (add/multiply/...) is more efficient thanks to the ability of the GPU to operate with vectors. However, the read/write to global memory remains the same.
And since that is the main bottleneck, you will not see any speedup at all.
If you want to speed up your algorithm, you should move to local memory. However, you then have to manage the memory and choose the proper block size manually.
Which architecture do you use?
Using float4 gives higher instruction-level parallelism (and thus requires 4 times fewer threads), so it should theoretically be faster (see http://www.cs.berkeley.edu/~volkov/volkov10-GTC.pdf).
However, do I understand correctly that in your kernel you are doing a prefix sum (you store the partial sum at every iteration of y)? If so, because of the stores, the bottleneck is the memory writes.
I think on the GPU, float4 is not a SIMD operation in OpenCL. In other words, if you add two float4 values, the sum is done in four steps rather than all at once. Floatn is really designed for the CPU. On the GPU, floatn serves only as a convenient syntax, at least on Nvidia cards. Each thread on the GPU acts as if it were a scalar processor without SIMD. But the threads in a warp are not independent like they are on the CPU. The right way to think of the GPGPU model is Single Instruction Multiple Threads (SIMT).
http://www.yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html
Have you tried running your code on the CPU? I think the float4 code should run quicker (potentially four times quicker) than the scalar code on the CPU. Also, if you have a CPU with AVX, then you should try float8; if the float4 code is faster on the CPU, then float8 should be even faster on a CPU with AVX (a sketch of a float8 variant follows).
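A minimal float8 variant of the column-sum kernel, as a sketch only, assuming srcCols and the steps (in floats) are multiples of 8:

__kernel void columnSum_f8(__global float* src, __global float* dst, int srcCols,
                           int srcRows, int srcStep, int dstStep)
{
    const int x = get_global_id(0);
    srcStep >>= 2;   // bytes -> floats
    dstStep >>= 2;
    if (x < srcCols/8)
    {
        int srcIdx = x;
        int dstIdx = x;
        float8 sum = (float8)(0.0f);
        for (int y = 0; y < srcRows; ++y)
        {
            sum += vload8(0, &src[8 * srcIdx]);   // 8 columns per work-item
            vstore8(sum, 0, &dst[8 * dstIdx]);
            srcIdx += (srcStep/8);
            dstIdx += (dstStep/8);
        }
    }
}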
Try defining an __attribute__ on the kernel and see the changes in run time.
For example, try:
__kernel void __attribute__((vec_type_hint(int)))
or
__kernel void __attribute__((vec_type_hint(int4)))
or some floatN as you want.
Read more:
https://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/functionQualifiers.html

OpenCL reduction from private to local then global?

The following kernel computes an acoustic pressure field, with each thread computing its own private instance of the pressure vector, which then needs to be summed down into global memory.
I'm pretty sure the code which computes the pressure vector is correct, but I'm still having trouble making this produce the expected result.
int gid = get_global_id(0);
int lid = get_local_id(0);
int nGroups = get_num_groups(0);
int groupSize = get_local_size(0);
int groupID = get_group_id(0);

/* Each workitem gets private storage for the pressure field.
 * The private instances are then summed into local storage at the end. */
private float2 pressure[HYD_DIM_TOTAL];
local float2 pressure_local[HYD_DIM_TOTAL];

/* Code which computes value of 'pressure' */

//wait for all workgroups to finish accessing any memory
barrier(CLK_GLOBAL_MEM_FENCE | CLK_LOCAL_MEM_FENCE);

/// sum all results in a workgroup into local buffer:
for(i=0; i<groupSize; i++){
    //each thread sums its own private instance into the local buffer
    if (i == lid){
        for(iHyd=0; iHyd<HYD_DIM_TOTAL; iHyd++){
            pressure_local[iHyd] += pressure[iHyd];
        }
    }
    //make sure all threads in workgroup get updated values of the local buffer
    barrier(CLK_LOCAL_MEM_FENCE);
}

/// copy all the results into global storage
//1st thread in each workgroup writes the group's local buffer to global memory
if(lid == 0){
    for(iHyd=0; iHyd<HYD_DIM_TOTAL; iHyd++){
        pressure_global[groupID + nGroups*iHyd] = pressure_local[iHyd];
    }
}
barrier(CLK_GLOBAL_MEM_FENCE);

/// sum the various instances in global memory into a single one
// 1st thread sums global instances
if(gid == 0){
    for(iGroup=1; iGroup<nGroups; iGroup++){
        //we only need to sum the results from the 1st group onward
        for(iHyd=0; iHyd<HYD_DIM_TOTAL; iHyd++){
            pressure_global[iHyd] += pressure_global[iGroup*HYD_DIM_TOTAL + iHyd];
            barrier(CLK_GLOBAL_MEM_FENCE);
        }
    }
}
Some notes on data dimensions:
The total number of threads will vary between 100 and 2000, but may on occasion lie outside this interval.
groupSize will depend on the hardware, but I'm currently using values between 1 (CPU) and 32 (GPU).
HYD_DIM_TOTAL is known at compile time and varies between 4 and 32 (will generally, but not necessarily, be a power of 2).
Is there anything blatantly wrong with this reduction code?
PS: I run this on an i7 3930k with AMD APP SDK 2.8 and on an NVIDIA GTX580.
I notice two issues here, one big, one smaller:
This code suggests that you have a misunderstanding of what a barrier does. A barrier never synchronizes across multiple workgroups. It only synchronizes within a workgroup. The CLK_GLOBAL_MEM_FENCE makes it look like it is global synchronization, but it really isn't. That flag just fences all of the current work item's accesses to global memory. So outstanding writes will be globally observable after a barrier with this flag. But it does not change the barrier's synchronization behavior, which is only at the scope of a workgroup. There is no global synchronization in OpenCL, beyond launching another NDRange or Task.
The first for loop causes multiple work items to overwrite each other's computation. The indexing of pressure_local with iHyd will be done by each work item with the same iHyd, and pressure_local is never initialized, so this will produce undefined results.
Hope this helps.
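A minimal sketch of one way to restructure the in-group accumulation, under the assumption that the final cross-group sum is moved to the host or to a second kernel launch (variable names follow the question's code):

// zero the local buffer cooperatively before accumulating into it
for (int iHyd = lid; iHyd < HYD_DIM_TOTAL; iHyd += groupSize)
    pressure_local[iHyd] = (float2)(0.0f, 0.0f);
barrier(CLK_LOCAL_MEM_FENCE);

// work items take turns adding their private results into local memory
for (int i = 0; i < groupSize; i++){
    if (i == lid){
        for (int iHyd = 0; iHyd < HYD_DIM_TOTAL; iHyd++)
            pressure_local[iHyd] += pressure[iHyd];
    }
    barrier(CLK_LOCAL_MEM_FENCE);
}

// the first work item of each group writes the group's partial sums out
if (lid == 0){
    for (int iHyd = 0; iHyd < HYD_DIM_TOTAL; iHyd++)
        pressure_global[groupID + nGroups*iHyd] = pressure_local[iHyd];
}
// the per-group results in pressure_global are then summed on the host
// (or by a second kernel), since one NDRange cannot synchronize globally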

Basic OpenCL Mutex Implementation (Currently Hanging)

I am trying to write a mutex for OpenCL. The idea is for each individual work item to be able to proceed atomically. Currently, I believe the problem may be that thread warps are unable to proceed when one thread in a warp holds the lock.
My current simple kernel, below, sums numbers. "numbers" is an array of floats as input, "sum" is a one-element array for the result, and "semaphore" is a one-element array holding the semaphore. I based it heavily on the example here.
void acquire(__global int* semaphore) {
    int occupied;
    do {
        occupied = atom_xchg(semaphore, 1);
    } while (occupied > 0);
}

void release(__global int* semaphore) {
    atom_xchg(semaphore, 0); //the previous value, which is returned, is ignored
}

__kernel void test_kernel(__global float* numbers, __global float* sum, __global int* semaphore) {
    int i = get_global_id(0);
    acquire(semaphore);
    *sum += numbers[i];
    release(semaphore);
}
I am calling the kernel effectively like:
int numof_dimensions = 1;
size_t offset_global[1] = {0};
size_t size_global[1] = {4000}; //the length of the numbers array
size_t* size_local = NULL;
clEnqueueNDRangeKernel(command_queue, kernel, numof_dimensions,
                       offset_global, size_global, size_local, 0, NULL, NULL);
As above, when running, the graphics card hangs, and the driver restarts itself. How can I fix it so that it doesn't?
What you are trying to do is not possible because of the GPU execution model, where all threads on a "processor" share the instruction pointer, even in branches. Here is a post that explains the problem in detail: http://vansa.ic.cz/author/admin/.
BTW, the example code that you found has the exact same problem and would never work.
The answer to this might seem obvious in retrospect, but it isn't until you think of it.
Basically, the GPU's choice of the ideal local work-group size (the size of a thread warp) is greater than 1, and so the thread warps lock up. To fix it, you just need to specify a local size of 1 (i.e. "size_t size_local[1] = {1};"), as in the snippet below. Doing this produces a correct result.
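For reference, a sketch of the modified launch, based on the enqueue call from the question:

size_t size_local[1] = {1};  // force one work item per work group
clEnqueueNDRangeKernel(command_queue, kernel, numof_dimensions,
                       offset_global, size_global, size_local, 0, NULL, NULL);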

Effect of using page-able memory for asynchronous memory copy?

In CUDA C Best Practices Guide Version 5.0, Section 6.1.2, it is written that:
In contrast with cudaMemcpy(), the asynchronous transfer version
requires pinned host memory (see Pinned Memory), and it contains an
additional argument, a stream ID.
This implies that the cudaMemcpyAsync function should fail if I use plain pageable memory.
But that is not what happened.
Just for testing purpose, I tried the following program:
Kernel:
__global__ void kernel_increment(float* src, float* dst, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n)
        dst[tid] = src[tid] + 1.0f;
}
Main:
int main()
{
    float *hPtr1, *hPtr2, *dPtr1, *dPtr2;
    const int n = 1000;
    size_t bytes = n * sizeof(float);
    cudaStream_t str1, str2;

    hPtr1 = new float[n];
    hPtr2 = new float[n];
    for (int i = 0; i < n; i++)
        hPtr1[i] = static_cast<float>(i);

    cudaMalloc<float>(&dPtr1, bytes);
    cudaMalloc<float>(&dPtr2, bytes);

    dim3 block(16);
    dim3 grid((n + block.x - 1) / block.x);

    cudaStreamCreate(&str1);
    cudaStreamCreate(&str2);

    cudaMemcpyAsync(dPtr1, hPtr1, bytes, cudaMemcpyHostToDevice, str1);
    kernel_increment<<<grid, block, 0, str2>>>(dPtr1, dPtr2, n);
    cudaMemcpyAsync(hPtr2, dPtr2, bytes, cudaMemcpyDeviceToHost, str1);

    printf("Status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaDeviceSynchronize();
    printf("Status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaStreamDestroy(str1);
    cudaStreamDestroy(str2);
    cudaFree(dPtr1);
    cudaFree(dPtr2);

    for (int i = 0; i < n; i++)
        std::cout << hPtr2[i] << std::endl;

    delete[] hPtr1;
    delete[] hPtr2;
    return 0;
}
The program gave correct output; the array was incremented successfully.
How did cudaMemcpyAsync execute without page locked memory?
Am I missing something here?
cudaMemcpyAsync is fundamentally an asynchronous version of cudaMemcpy. This means that it doesn't block the calling host thread when the copy call is issued. That is the basic behaviour of the call.
Optionally, if the call is launched into a non-default stream, and if the host memory is a pinned allocation, and the device has a free DMA copy engine, the copy operation can happen while the GPU simultaneously performs another operation: either kernel execution or another copy (in the case of a GPU with two DMA copy engines). If any of these conditions is not satisfied, the operation on the GPU is functionally identical to a standard cudaMemcpy call, i.e. it serialises operations on the GPU, and no simultaneous copy-kernel execution or simultaneous multiple copies can occur. The only difference is that the operation doesn't block the calling host thread.
In your example code, the host source and destination memory are not pinned. So the memory transfer cannot overlap with kernel execution (i.e. they serialise operations on the GPU). The calls are still asynchronous on the host. So what you have is functionally equivalent to:
cudaMemcpy(dPtr1,hPtr1,bytes,cudaMemcpyHostToDevice);
kernel_increment<<<grid,block>>>(dPtr1,dPtr2,n);
cudaMemcpy(hPtr2,dPtr2,bytes,cudaMemcpyDeviceToHost);
with the exception that all the calls are asynchronous on the host, so the host thread blocks at the cudaDeviceSynchronize() call rather than at each of the memory transfer calls.
This is absolutely expected behaviour.
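For comparison, a minimal sketch of the same pattern with pinned host memory, which is what the asynchronous copies need in order to actually overlap with other GPU work (this keeps everything in one stream so the copy-kernel-copy ordering is preserved; it is an illustration, not the questioner's code):

float *hPtr1 = nullptr, *hPtr2 = nullptr;
cudaMallocHost((void**)&hPtr1, bytes);   // page-locked allocations instead of new[]
cudaMallocHost((void**)&hPtr2, bytes);

cudaMemcpyAsync(dPtr1, hPtr1, bytes, cudaMemcpyHostToDevice, str1);
kernel_increment<<<grid, block, 0, str1>>>(dPtr1, dPtr2, n);
cudaMemcpyAsync(hPtr2, dPtr2, bytes, cudaMemcpyDeviceToHost, str1);
cudaStreamSynchronize(str1);             // host blocks here instead of at each copy

cudaFreeHost(hPtr1);
cudaFreeHost(hPtr2);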
