Performance degrades a lot when using mpirun to execute my program - mpi

I am new to the field of MPI. I wrote my program using the Intel Math Kernel Library, and I want to compute a matrix-matrix multiplication by blocks, which means I split the large matrix X into many small matrices along the columns as follows. My matrix is large, so each time I only compute (N, M) x (M, N), where I can set M manually.
XX^T = X_1X_1^T + X_2X_2^T + ... + X_nX_n^T
I first set the total number of threads to 16 and M to 1024, then I run my program directly as follows. Checking my CPU state, I find that the CPU usage is 1600%, which is normal.
./MMNET_MPI --block 1024 --numThreads 16
However, when I try to run my program using MPI as follows, the CPU usage is only 200-300%. Strangely, if I change the block size to 64, I get a small improvement, with CPU usage up to 1200%.
mpirun -n 1 --bind-to none ./MMNET_MPI --block 1024 --numThreads 16
I do not know what the problem is. It seems that mpirun applies some default settings that affect my program. The following is part of my matrix multiplication code. The #pragma omp parallel for directive extracts the small N by M matrix from the compressed format in parallel. After that I use cblas_dgemv to compute the matrix-matrix multiplication.
#include "MemoryUtils.h"
#include "Timer.h"
#include "omp.h"
#include <mpi.h>
#include <mkl.h>
#include <iostream>
using namespace std;
int main(int argc, char** argv) {
omp_set_num_threads(16);
Timer timer;
double start_time = timer.get_time();
MPI_Init(&argc, &argv);
int total_process;
int id;
MPI_Comm_size(MPI_COMM_WORLD, &total_process);
MPI_Comm_rank(MPI_COMM_WORLD, &id);
if (id == 0) {
cout << "========== Testing MPI properties for MMNET ==========" << endl;
}
cout << "Initialize the random matrix ..." << endl;
unsigned long N = 30000;
unsigned long M = 500000;
unsigned long snpsPerBlock = 1024;
auto* matrix = ALIGN_ALLOCATE_DOUBLES(N*M);
auto* vector = ALIGN_ALLOCATE_DOUBLES(N);
auto* result = ALIGN_ALLOCATE_DOUBLES(M);
auto *temp1 = ALIGN_ALLOCATE_DOUBLES(snpsPerBlock);
memset(result, 0, sizeof(double) * M);
cout << "Time for allocating is " << timer.update_time() << " sec" << endl;
memset(matrix, 1.1234, sizeof(double) * N * M);
memset(vector, 1.5678, sizeof(double) * N);
// #pragma omp parallel for
// for (unsigned long row = 0; row < N * M; row++) {
// matrix[row] = (double)rand() / RAND_MAX;
// }
// #pragma omp parallel for
// for (unsigned long row = 0; row < N; row++) {
// vector[row] = (double)rand() / RAND_MAX;
// }
cout << "Time for generating data is " << timer.update_time() << " sec" << endl;
cout << "Starting calculating..." << endl;
for (unsigned long m0 = 0; m0 < M; m0 += snpsPerBlock) {
uint64 snpsPerBLockCrop = std::min(M, m0 + snpsPerBlock) - m0;
auto* snpBlock = matrix + m0 * N;
MKL_INT row = N;
MKL_INT col = snpsPerBLockCrop;
double alpha = 1.0;
MKL_INT lda = N;
MKL_INT incx = 1;
double beta = 0.0;
MKL_INT incy = 1;
cblas_dgemv(CblasColMajor, CblasTrans, row, col, alpha, snpBlock, lda, vector, incx, beta, temp1, incy);
// compute XA
double beta1 = 1.0;
cblas_dgemv(CblasColMajor, CblasNoTrans, row, col, alpha, snpBlock, lda, temp1, incx, beta1, result, incy);
}
cout << "Time for computation is " << timer.update_time() << " sec" << endl;
ALIGN_FREE(matrix);
ALIGN_FREE(vector);
ALIGN_FREE(result);
ALIGN_FREE(temp1);
return 0;
}
My CPU information is as follows.
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 44
On-line CPU(s) list: 0-43
Thread(s) per core: 1
Core(s) per socket: 22
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
Stepping: 4
CPU MHz: 1252.786
CPU max MHz: 2101.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-21
NUMA node1 CPU(s): 22-43

MKL by default implements some intelligent dynamic selection of the number of threads to use. This is controlled by the variable MKL_DYNAMIC, which is set to TRUE by default. The documentation for MKL states:
If you [sic] are able to detect the presence of MPI, but cannot determine if it has been called in a thread-safe mode (it is impossible to detect this with MPICH 1.2.x, for instance), and MKL_DYNAMIC has not been changed from its default value of TRUE, Intel MKL will run one thread.
Since you call MPI_Init() and not MPI_Init_thread() to initialise MPI, you are effectively asking for single-threaded MPI level (MPI_THREAD_SINGLE). The library is free to provide you any threading level and it will conservatively stick to MPI_THREAD_SINGLE. You can check that by calling MPI_Query_thread(&provided) after the initialisation and see if the output value is greater than MPI_THREAD_SINGLE.
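For example, a minimal sketch of that check, inserted right after MPI_Init() in your code:
int provided;
MPI_Query_thread(&provided);   // what the library actually gave us
if (provided == MPI_THREAD_SINGLE) {
    // MKL's MPI detection will treat this as "not thread-safe" and may
    // silently limit itself to one thread
    cout << "MPI provides only MPI_THREAD_SINGLE" << endl;
}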
Since you are mixing OpenMP and threaded MKL with MPI, you should really tell MPI to initialise at a higher threading support level by calling MPI_Init_thread():
int provided;
MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
// This ensures that MPI actually provides MPI_THREAD_MULTIPLE
if (provided < MPI_THREAD_MULTIPLE) {
    // Complain
}
(Technically, you only need MPI_THREAD_FUNNELED if you do not make MPI calls outside the main thread, but that is not a thread-safe mode as MKL understands it.)
Even if you request a certain thread support level from MPI, there is no guarantee that you will get it, which is why you have to examine the provided level. Also, older Open MPI versions must explicitly be built with such support - the default is not to build support for MPI_THREAD_MULTIPLE, as some network modules are not thread-safe. You can check whether that's the case by running ompi_info and looking for a line similar to this one:
Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes, OMPI progress: no, ORTE progress: yes, Event lib: yes)
Now, the reality is that most threaded software that does not make MPI calls outside the main thread runs perfectly fine even if MPI does not provide a higher level of thread support than MPI_THREAD_SINGLE, i.e., with most MPI implementations MPI_THREAD_SINGLE is equivalent to MPI_THREAD_FUNNELED. In that case, setting MKL_DYNAMIC to FALSE should make MKL behave as it does when run without mpirun:
mpirun -x MKL_DYNAMIC=FALSE ...
In any case, since your program accepts the number of threads as an argument, simply call both mkl_set_num_threads() and omp_set_num_threads() and do not rely on magical default mechanisms.
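For example, a sketch, where numThreads stands for whatever you parse from --numThreads:
int numThreads = 16;              // e.g. parsed from --numThreads
omp_set_num_threads(numThreads);  // threads for your own OpenMP regions
mkl_set_num_threads(numThreads);  // threads for MKL's internal parallelism
mkl_set_dynamic(0);               // same effect as MKL_DYNAMIC=FALSE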
Edit: Enabling full thread support has consequences - increased latency, and some network modules may refuse to work, for example the InfiniBand module in older Open MPI versions, resulting in the library quietly switching to slower transports such as TCP/IP. It is better to request MPI_THREAD_FUNNELED and to explicitly set the number of MKL and OpenMP threads.

Related

OpenCL 'non-blocking' reads have higher cost than expected

Consider the following code, which enqueues between 1 and 100000 'non-blocking' random access buffer reads and measures the time:
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>
#include <vector>
#include <iostream>
#include <chrono>
#include <stdio.h>
static const int size = 100000;
int host_buf[size];
int main() {
    cl::Context ctx(CL_DEVICE_TYPE_DEFAULT, nullptr, nullptr, nullptr);
    std::vector<cl::Device> devices;
    ctx.getInfo(CL_CONTEXT_DEVICES, &devices);
    printf("Using OpenCL devices: \n");
    for (auto &dev : devices) {
        std::string dev_name = dev.getInfo<CL_DEVICE_NAME>();
        printf("  %s\n", dev_name.c_str());
    }
    cl::CommandQueue queue(ctx);
    cl::Buffer gpu_buf(ctx, CL_MEM_READ_WRITE, sizeof(int) * size, nullptr, nullptr);
    std::vector<int> values(size);

    // Warmup
    queue.enqueueReadBuffer(gpu_buf, false, 0, sizeof(int), &(host_buf[0]));
    queue.finish();

    // Run from 1 to 100000 sized chunks
    for (int k = 1; k <= size; k *= 10) {
        auto cstart = std::chrono::high_resolution_clock::now();
        for (int j = 0; j < k; j++)
            queue.enqueueReadBuffer(gpu_buf, false, sizeof(int) * (j * (size / k)), sizeof(int), &(host_buf[j]));
        queue.finish();
        auto cend = std::chrono::high_resolution_clock::now();
        double time = std::chrono::duration<double>(cend - cstart).count() * 1000000.0;
        printf("%8d: %8.02f us\n", k, time);
    }
    return 0;
}
As always, there is some random variation but the typical output for me is like this:
1: 10.03 us
10: 107.93 us
100: 794.54 us
1000: 8301.35 us
10000: 83741.06 us
100000: 981607.26 us
Whilst I did expect a relatively high latency for a single read, given the need for a PCIe round trip, I am surprised at the high cost of adding subsequent reads to the queue - as if there isn't really a 'queue' at all but each read adds the full latency penalty. This is on a GTX 960 with Linux and driver version 455.45.01.
Is this expected behavior?
Do other GPUs behave the same way?
Is there any workaround other than always doing random-access reads from inside a kernel?
You are using a single in-order command queue. Hence, all enqueued reads are performed sequentially by the hardware / driver.
The 'non-blocking' aspect simply means that the call itself is asynchronous and will not block your host code while the GPU is working.
In your code, you call queue.finish() (i.e. clFinish), which blocks until all reads are done.
So yes, this is the expected behavior. You will pay the full time penalty for each DMA transfer.
As long as you create an in-order command queue (the default), other GPUs will behave the same.
If your hardware / driver supports out-of-order queues, you could use them to potentially overlap DMA transfers. Alternatively, you could use multiple in-order queues. But the performance is of course hardware and driver dependent.
Using multiple queues / out-of-order queues is a bit more advanced. You should make sure to properly use events to avoid race conditions or undefined behavior.
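A sketch of the out-of-order variant with the C++ wrapper from your code (your platform must actually support the out-of-order property):
// Out-of-order queue: the driver may overlap independent transfers.
cl::CommandQueue ooo_queue(ctx, devices[0], CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE);
cl::Event read_done;
ooo_queue.enqueueReadBuffer(gpu_buf, CL_FALSE, 0, sizeof(int), &host_buf[0], nullptr, &read_done);
// pass read_done in the wait list of any later command that depends on this read
read_done.wait();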
To reduce the latency associated with GPU-host DMA transfers, it is recommended that you use a pinned host buffer rather than a std::vector. Pinned host buffers are usually created via clCreateBuffer with the CL_MEM_ALLOC_HOST_PTR flag.
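A sketch of the pinned-buffer approach (the size and flags are just an example):
// Allocate pinned (page-locked) host memory through OpenCL, then map it to get
// a plain pointer that can be used as the destination of enqueueReadBuffer.
cl::Buffer pinned(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR, sizeof(int) * size);
int* pinned_ptr = static_cast<int*>(queue.enqueueMapBuffer(
    pinned, CL_TRUE, CL_MAP_READ | CL_MAP_WRITE, 0, sizeof(int) * size));
// ... read into pinned_ptr instead of host_buf ...
queue.enqueueUnmapMemObject(pinned, pinned_ptr);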

Multidimensional array allocation with Cuda Unified Memory on Power 8

I'm trying to allocate multidimensional arrays using CUDA unified memory (UMA) on a Power 8 system. However, I'm having issues as the size gets bigger. The code I'm using is below. When the size is 24 x 24 x 24 x 5 it works fine. When I increase it to 64 x 64 x 64 x 8 I get "out of memory", even though I have memory available on my device. AFAIK, I should be able to allocate as much memory via UMA as the GPU device has physical memory, so I would not expect any error. Currently my main configuration is Power 8 and a Tesla K40, where I am getting a segfault at runtime. However, I tried the code piece I provided on an x86 + K40 machine, and it surprisingly worked.
BTW, if you can tell me another way to do this, apart from transforming all my code from 4D arrays to 1D arrays, I'd really appreciate it.
Thanks in advance
Driver: Nvidia 361
#include <iostream>
#include <cuda_runtime.h>

void* operator new[] (size_t len) throw(std::bad_alloc) {
    void *ptr;
    cudaMallocManaged(&ptr, len);
    return ptr;
}

template<typename T>
T**** create_4d(int a, int b, int c, int d) {
    T**** ary = new T***[a];
    for (int i = 0; i < a; ++i) {
        ary[i] = new T**[b];
        for (int j = 0; j < b; ++j) {
            ary[i][j] = new T*[c];
            for (int k = 0; k < c; ++k) {
                ary[i][j][k] = new T[d];
            }
        }
    }
    return ary;
}

int main() {
    double ****data;
    std::cout << "allocating..." << std::endl;
    data = create_4d<double>(32,65,65,5);
    std::cout << "Hooreey !!!" << std::endl;
    //segfault here
    std::cout << "allocating..." << std::endl;
    data = create_4d<double>(64,65,65,5);
    std::cout << "Hooreey !!!" << std::endl;
    return 0;
}
There's been a considerable amount of dialog on your cross-posting here including an answer to your main question. I'll use this answer to summarize what is there as well as to answer this question specifically:
BTW, if you can tell me another way to do this, apart from transforming all my code from 4D arrays to 1D arrays, I'd really appreciate it.
One of your claims is that you are doing proper error checking (" I caught error propoerly."). You are not. CUDA runtime API calls (including cudaMallocManaged) by themselves do not generate C++ style exceptions, so your throw specification on the new operator definition is meaningless. CUDA runtime API calls return an error code. If you want to do proper error checking, you must collect this error code and process it. If you collect the error code, you can use it to generate an exception if you wish, and an example of how you might do that is contained in the canonical proper CUDA error checking question, as one of the answers by Jared Hoberock. As a result of this oversight, when your allocations eventually fail, you are ignoring this, and then when you attempt to use those (non-) allocated areas for subsequent pointer storage, you generate a seg fault.
The proximal reason for the allocation failure is that you are in fact running out of memory, as discussed in your cross-posting. You can confirm this easily enough with proper error checking. Managed allocations have a granularity, and so when you request allocations of relatively small amounts, you are in fact using more memory than you think - the small allocations you are requesting are each being rounded up to the allocation granularity. The size of the allocation granularity varies by system type, and so the OpenPower system you are operating on has a much larger allocation granularity than the x86 system you compared it to, and as a result you were not running out of memory on the x86 system, but you were on the Power system. As discussed in your cross-posting, this is easy to verify with strategic calls to cudaMemGetInfo.
From a performance perspective, this is a pretty bad approach to multidimensional allocations for several reasons:
The allocations you are creating are disjoint, connected by pointers. Therefore, to access an element by pointer dereferencing, it requires 3 or 4 such dereferences to go through a 4-subscripted pointer array. Each of these dereferences will involve a device memory access. Compared to using simulated 4-D access into a 1-D (flat) allocation, this will be noticeably slower. The arithmetic associated with converting the 4-D simulated access into a single linear index will be much faster than traversing through memory via pointer-chasing.
Since the allocations you are creating are disjoint, the managed memory subsystem cannot coalesce them into a single transfer, and therefore, under the hood, a number of transfers equal to the product of your first 3 dimensions will take place, at kernel launch time (and presumably at termination, ie. at the next cudaDeviceSynchronize() call). This data must all be transferred of course, but you will be doing a large number of very small transfers, compared to a single transfer for a "flat" allocation. The associated overhead of the large number of small transfers can be significant.
As we've seen, the allocation granularity can seriously impact the memory usage efficiency of such an allocation scheme. What should be only using a small percentage of system memory ends up using all of system memory.
Operations that work on contiguous data from "row" to "row" of such an allocation will fail, because the allocations are disjoint. For example, such a matrix or a subsection of such a matrix could not be reliably passed to a CUBLAS linear algebra routine, as the expectation for that matrix would have contiguity of row storage in memory associated with it.
The ideal solution would be to create a single flat allocation, and then use simulated 4-D indexing to create a single linear index. Such an approach would address all 4 concerns above. However it requires perhaps substantial code refactoring.
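For illustration, the flat variant might look roughly like this; idx4 and create_4d_contiguous are names made up for this sketch, not part of the code below:
#include <cuda_runtime.h>
#include <assert.h>

// Map 4 subscripts (i,j,k,l) with extents (a,b,c,d) onto a single linear index.
__host__ __device__ inline size_t idx4(int i, int j, int k, int l,
                                       int b, int c, int d) {
    return (((size_t)i * b + j) * c + k) * d + l;
}

double *create_4d_contiguous(int a, int b, int c, int d) {
    double *data;
    cudaError_t err = cudaMallocManaged(&data, sizeof(double) * a * b * c * d);
    assert(err == cudaSuccess);
    return data;  // element (i,j,k,l) is data[idx4(i,j,k,l,b,c,d)]
}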
We can however come up with an alternate approach, which preserves the 4-subscripted indexing, but otherwise addresses the concerns in items 2, 3, and 4 above by creating a single underlying flat allocation.
What follows is a worked example. We will actually create 2 managed allocations: one underlying flat allocation for data storage, and one underlying flat allocation (regardless of dimensionality) for pointer storage. It would be possible to combine these two into a single allocation with some careful alignment work, but that is not required to achieve any of the proposed benefits.
The basic methodology is covered in various other CUDA questions here on the SO tag, but most of those have host-side usage (only) in view, since they did not have UM in view. However, UM allows us to extend the methodology to host- and device-side usage. We will start by creating a single "base" allocation of the necessary size to store the data. Then we will create an allocation for the pointer array, and we will then work through the pointer array, fixing up each pointer to point to the correct location in the pointer array, or else to the correct location in the "base" data array.
Here's a worked example, demonstrating host and device usage, and including proper error checking:
$ cat t1271.cu
#include <iostream>
#include <assert.h>

template<typename T>
T**** create_4d_flat(int a, int b, int c, int d) {
    T *base;
    cudaError_t err = cudaMallocManaged(&base, a*b*c*d*sizeof(T));
    assert(err == cudaSuccess);
    T ****ary;
    err = cudaMallocManaged(&ary, (a+a*b+a*b*c)*sizeof(T*));
    assert(err == cudaSuccess);
    for (int i = 0; i < a; i++) {
        ary[i] = (T ***)((ary + a) + i*b);
        for (int j = 0; j < b; j++) {
            ary[i][j] = (T **)((ary + a + a*b) + i*b*c + j*c);
            for (int k = 0; k < c; k++)
                ary[i][j][k] = base + ((i*b+j)*c + k)*d;
        }
    }
    return ary;
}

template<typename T>
void free_4d_flat(T**** ary) {
    if (ary[0][0][0]) cudaFree(ary[0][0][0]);
    if (ary) cudaFree(ary);
}

template<typename T>
__global__ void fill(T**** data, int a, int b, int c, int d) {
    unsigned long long int val = 0;
    for (int i = 0; i < a; i++)
        for (int j = 0; j < b; j++)
            for (int k = 0; k < c; k++)
                for (int l = 0; l < d; l++)
                    data[i][j][k][l] = val++;
}

void report_gpu_mem()
{
    size_t free, total;
    cudaMemGetInfo(&free, &total);
    std::cout << "Free = " << free << " Total = " << total << std::endl;
}

int main() {
    report_gpu_mem();
    unsigned long long int ****data2;
    std::cout << "allocating..." << std::endl;
    data2 = create_4d_flat<unsigned long long int>(64, 63, 62, 5);
    report_gpu_mem();
    fill<<<1,1>>>(data2, 64, 63, 62, 5);
    cudaError_t err = cudaDeviceSynchronize();
    assert(err == cudaSuccess);
    std::cout << "validating..." << std::endl;
    for (int i = 0; i < 64*63*62*5; i++)
        if (*(data2[0][0][0] + i) != i) {
            std::cout << "mismatch at " << i << " was " << *(data2[0][0][0] + i) << std::endl;
            return -1;
        }
    free_4d_flat(data2);
    return 0;
}
$ nvcc -arch=sm_35 -o t1271 t1271.cu
$ cuda-memcheck ./t1271
========= CUDA-MEMCHECK
Free = 5904859136 Total = 5975900160
allocating...
Free = 5892276224 Total = 5975900160
validating...
========= ERROR SUMMARY: 0 errors
$
Notes:
This still involves pointer chasing inefficiency. I don't know of a method to avoid that without removing the multiple subscript arrangement.
I've elected to use 2 different indexing schemes in host and device code. In device code, I am using a normal 4-subscripted index, to demonstrate the utility of that. In host code, I am using a "flat" index, to demonstrate that the underlying storage is contiguous and contiguously addressable.

OpenCL on Xeon Phi: 2D Convolution Experience - OpenCL vs OpenMP

The performance of a Xeon Phi benchmarked with 2D convolution in OpenCL seems much better than an OpenMP implementation, even with compiler-enabled vectorization. The OpenMP version was run in Phi native mode, and timing measured only the computation part: the for-loop. For the OpenCL implementation, timing covered only the kernel computation as well: no data transfer included. The OpenMP-enabled version was tested with 2, 4, 60, 120, and 240 threads; 240 threads gave the best performance with a balanced thread affinity setting. But OpenCL was around 17x better even against the 240-thread OpenMP baseline with pragma-enabled vectorization in the source code. Input image sizes range from 1024x1024 up to 16384x16384, and filter sizes from 3x3 up to 17x17. In all runs, OpenCL was faster than OpenMP. Is this an expected speedup for OpenCL? It seems too good to be true.
EDIT:
Compilation (openmp)
icc Convolve.cpp -fopenmp -mmic -O3 -vec-report1 -o conv.mic
Convolve.cpp(71): (col. 17) remark: LOOP WAS VECTORIZED
Source (Convolve.cpp):
void Convolution_Threaded(float * pInput, float * pFilter, float * pOutput,
                          const int nInWidth, const int nWidth, const int nHeight,
                          const int nFilterWidth, const int nNumThreads)
{
    #pragma omp parallel for num_threads(nNumThreads)
    for (int yOut = 0; yOut < nHeight; yOut++)
    {
        const int yInTopLeft = yOut;
        for (int xOut = 0; xOut < nWidth; xOut++)
        {
            const int xInTopLeft = xOut;
            float sum = 0;
            for (int r = 0; r < nFilterWidth; r++)
            {
                const int idxFtmp = r * nFilterWidth;
                const int yIn = yInTopLeft + r;
                const int idxIntmp = yIn * nInWidth + xInTopLeft;
                #pragma ivdep           // discards any data dependencies assumed by compiler
                #pragma vector aligned  // all data accessed in the loop is properly aligned
                for (int c = 0; c < nFilterWidth; c++)
                {
                    const int idxF = idxFtmp + c;
                    const int idxIn = idxIntmp + c;
                    sum += pFilter[idxF] * pInput[idxIn];
                }
            }
            const int idxOut = yOut * nWidth + xOut;
            pOutput[idxOut] = sum;
        }
    }
}
Source 2 (convolve.cl)
__kernel void Convolve(const __global float * pInput,
                       __constant float * pFilter,
                       __global float * pOutput,
                       const int nInWidth,
                       const int nFilterWidth)
{
    const int nWidth = get_global_size(0);
    const int xOut = get_global_id(0);
    const int yOut = get_global_id(1);
    const int xInTopLeft = xOut;
    const int yInTopLeft = yOut;
    float sum = 0;
    for (int r = 0; r < nFilterWidth; r++)
    {
        const int idxFtmp = r * nFilterWidth;
        const int yIn = yInTopLeft + r;
        const int idxIntmp = yIn * nInWidth + xInTopLeft;
        for (int c = 0; c < nFilterWidth; c++)
        {
            const int idxF = idxFtmp + c;
            const int idxIn = idxIntmp + c;
            sum += pFilter[idxF] * pInput[idxIn];
        }
    }
    const int idxOut = yOut * nWidth + xOut;
    pOutput[idxOut] = sum;
}
Result of OpenMP (in comparison with OpenCL):
        image      filter  exec Time (ms)
OpenMP  2048x2048  3x3     23.4
OpenCL  2048x2048  3x3     1.04*
*Raw kernel execution time. Data transfer time over PCI bus not included.
Previously (with #pragma ivdep and #pragma vector aligned on the inner-most loop):
Compiler output:
Convolve.cpp(24): (col. 17) remark: LOOP WAS VECTORIZED
Program output:
120 Cores: 0.0087 ms
After advice from @jprice (with #pragma simd on the horizontal loop):
Compiler output:
Convolve.cpp(24): (col. 9) remark: **SIMD** LOOP WAS VECTORIZED
Program output:
120 Cores: 0.00305 ms
OpenMP is now 2.8x faster than its previous execution. A fair comparison can now be made with OpenCL!
Thanks to jprice and to everyone who contributed. I learnt huge lessons from you all.
EDIT:
Here are my results and comparison:
        image      filter  exec Time (ms)
OpenMP  2048x2048  3x3     4.3
OpenCL  2048x2048  3x3     1.04
Speedup: 4.1X
Can OpenCL really be this much faster than OpenMP?
Intel's OpenCL implementation will use what they call "implicit vectorisation" in order to take advantage of vector floating point units. This involves mapping work-items onto SIMD lanes. In your example, each work-item is processing a single pixel, which means that each hardware thread will be processing 16 pixels at a time using the Xeon Phi's 512-bit vector units.
By contrast, your OpenMP code is parallelising across pixels, and then vectorising the computation within a pixel. This is almost certainly where the performance difference is coming from.
In order to get ICC to vectorize your OpenMP code in a manner that is similar to the implicitly vectorised OpenCL code, you should remove your #pragma ivdep and #pragma vector aligned statements from the innermost loop, and instead just place a #pragma simd in front of the horizontal pixel loop:
#pragma omp parallel for num_threads(nNumThreads)
for (int yOut = 0; yOut < nHeight; yOut++)
{
    const int yInTopLeft = yOut;
    #pragma simd
    for (int xOut = 0; xOut < nWidth; xOut++)
    {
When I compile this with ICC, it reports that it is successfully vectorising the desired loop.
Your OpenMP program uses one thread per row of the image, and the pixels in the same row are vectorized. That is equivalent to having a one-dimensional work-group in OpenCL, where each work-group processes one row of the image. But in your OpenCL code, it seems that you have a two-dimensional work-group. Each work-group (mapped onto one thread on the Phi) processes a BLOCK of the image, not a ROW of the image. The cache hit rate will be different.
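If you want the OpenCL layout to mirror the OpenMP one, you can request a row-shaped local size from the host. A sketch with the C API, assuming queue, kernel, nWidth and nHeight exist in your host code; the local size must not exceed the kernel's work-group limit:
size_t global[2] = { (size_t)nWidth, (size_t)nHeight };
size_t local[2]  = { (size_t)nWidth, 1 };  // one work-group = one row
cl_int err = clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, local, 0, NULL, NULL);
// For wide images, split the row instead, e.g. local = { 128, 1 }.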

Speedup when using float4, OpenCL

I have the following OpenCL kernel function to get the column sum of an image.
__kernel void columnSum(__global float* src, __global float* dst, int srcCols,
                        int srcRows, int srcStep, int dstStep)
{
    const int x = get_global_id(0);
    srcStep >>= 2;
    dstStep >>= 2;
    if (x < srcCols)
    {
        int srcIdx = x;
        int dstIdx = x;
        float sum = 0;
        for (int y = 0; y < srcRows; ++y)
        {
            sum += src[srcIdx];
            dst[dstIdx] = sum;
            srcIdx += srcStep;
            dstIdx += dstStep;
        }
    }
}
I assign each thread to process a column here, so that many threads can compute the column sums in parallel.
I also used float4 to rewrite the above kernel so that each thread can read 4 elements in a row at a time from the source image, as shown below.
__kernel void columnSum(__global float* src, __global float* dst, int srcCols,
                        int srcRows, int srcStep, int dstStep)
{
    const int x = get_global_id(0);
    srcStep >>= 2;
    dstStep >>= 2;
    if (x < srcCols/4)
    {
        int srcIdx = x;
        int dstIdx = x;
        float4 sum = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
        for (int y = 0; y < srcRows; ++y)
        {
            float4 temp2;
            temp2 = vload4(0, &src[4 * srcIdx]);
            sum = sum + temp2;
            vstore4(sum, 0, &dst[4 * dstIdx]);
            srcIdx += (srcStep/4);
            dstIdx += (dstStep/4);
        }
    }
}
In this case, theoretically, I think the time consumed by the second kernel to process an image should be 1/4 of the time consumed by the first kernel. However, no matter how large the image is, the two kernels take almost the same time. I don't know why. Can you give me some ideas?
OpenCL vector data types like float4 were fitting better the older GPU architectures, especially AMD's GPUs. Modern GPUs don't have SIMD registers available for individual work-items, they are scalar in that respect. CL_DEVICE_PREFERRED_VECTOR_WIDTH_* equals 1 for OpenCL driver on NVIDIA Kepler GPU and Intel HD integrated graphics. So adding float4 vectors on modern GPU should require 4 operations. On the other hand, OpenCL driver on Intel Core CPU has CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT equal to 4, so these vectors could be added in a single step.
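You can check this on your own hardware with a query like the following (a sketch; device is your cl_device_id):
// Preferred native float vector width: 1 on most modern GPUs, 4 or 8 on CPUs.
cl_uint width = 0;
clGetDeviceInfo(device, CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT, sizeof(width), &width, NULL);
printf("preferred float vector width: %u\n", width);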
You are directly reading the values from the "src" array (global memory), which is typically 400 times slower than private memory. Your bottleneck is definitely the memory access, not the "add" operation itself.
When you move from float to float4, the vector operation (add/multiply/...) is more efficient thanks to the ability of the GPU to operate with vectors. However, the read/write to global memory remains the same.
And since that is the main bottleneck, you will not see any speedup at all.
If you want to speed up your algorithm, you should move to local memory. However, you then have to handle the memory management yourself and choose a proper block size.
Which architecture do you use?
Using float4 gives higher instruction-level parallelism (and thus requires 4 times fewer threads), so in theory it should be faster (see http://www.cs.berkeley.edu/~volkov/volkov10-GTC.pdf).
However, did I understand correctly that in your kernel you are doing a prefix sum (you store the partial sum at every iteration of y)? If so, because of the stores, the bottleneck is the memory writes.
I think on the GPU float4 is not a SIMD operation in OpenCL. In other words, if you add two float4 values, the sum is done in four steps rather than all at once. floatn is really designed for the CPU. On the GPU, floatn serves only as convenient syntax, at least on Nvidia cards. Each thread on the GPU acts as if it is a scalar processor without SIMD. But the threads in a warp are not independent like they are on the CPU. The right way to think of the GPGPU model is Single Instruction Multiple Threads (SIMT).
http://www.yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html
Have you tried running your code on the CPU? I think the code with float4 should run quicker (potentially four times quicker) than the scalar code on the CPU. Also, if you have a CPU with AVX, then you should try float8. If the float4 code is faster on the CPU, then float8 should be even faster on a CPU with AVX.
Try defining an __attribute__ on the kernel and see the changes in run timing.
For example, try to define:
__kernel void __attribute__((vec_type_hint(int)))
or
__kernel void __attribute__((vec_type_hint(int4)))
or some floatN as you want.
Read more:
https://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/functionQualifiers.html

OpenCL kernel work-group size restriction

So I keep running into strange errors when I call my kernels; the stated max kernel work-group size is one, while the work group size of my device (my Macbook) is decidedly higher than that. What possible causes could there be for the kernels restricting the code to a single work group? Here's one of my kernels:
__kernel
void termination_kernel(const int Elements,
                        __global float* c_I,
                        __global float* c_Ihat,
                        __global float* c_rI,
                        __local float* s_a)
{
    const int bdim = 128;
    int n = get_global_id(0);
    const int tx = get_local_id(0);  // thread index in thread-block (0-indexed)
    const int bx = get_group_id(0);  // block index (0-indexed)
    const int gx = get_num_groups(0);

    // is thread in range for the addition
    float d = 0.f;
    while (n < Elements) {
        d += pow(c_I[n] - c_Ihat[n], 2);
        n += gx * bdim;
    }

    // assume bx power of 2
    int alive = bdim / 2;
    s_a[tx] = d;
    barrier(CLK_LOCAL_MEM_FENCE);
    while (alive > 1) {
        if (tx < alive)
            s_a[tx] += s_a[tx + alive];
        alive /= 2;
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (tx == 0)
        c_rI[bx] = s_a[0] + s_a[1];
}
and the error returned is
OpenCL Error (via pfn_notify): [CL_INVALID_WORK_GROUP_SIZE] : OpenCL Error : clEnqueueNDRangeKernel
failed: total work group size (128) is greater than the device can support (1)
OpenCL Error: 'clEnqueueNDRangeKernel(queue, kernel_N, dim, NULL, global_N, local_N, 0, NULL, NULL)'
I know it says the restriction is on the device, but debugging shows that
CL_DEVICE_MAX_WORK_GROUP_SIZE = 1024
and
CL_KERNEL_WORK_GROUP_SIZE = 1
The kernel construction is called by
char *KernelSource_T = readSource("Includes/termination_kernel.cl");
cl_program program_T = clCreateProgramWithSource(context, 1, (const char **) &KernelSource_T, NULL, &err);
clBuildProgram(program_T, 1, &device, flags, NULL, NULL);
cl_kernel kernel_T = clCreateKernel(program_T, "termination_kernel", &err);
I'd include the calling function, but I'm not sure if it's relevant; my intuition is that it's something in the kernel code that's forcing the restriction. Any ideas? Thanks in advance for the help!
Apple OpenCL doesn't support work-groups larger than [1, 1, 1] on the CPU. I have no idea why, but that's how it's been at least up to OSX 10.9.2. Larger work-groups are fine on the GPU, though.
CL_KERNEL_WORK_GROUP_SIZE tells you how large the maximum work group size can be for this particular kernel. OpenCL's runtime determines that by inspecting the kernel code. CL_KERNEL_WORK_GROUP_SIZE will be a number less or equal to CL_DEVICE_MAX_WORK_GROUP_SIZE.
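For example, you can query the per-kernel limit before choosing a local size; this sketch uses the device and kernel_T variables from your host code:
size_t kernel_wg_size = 0;
clGetKernelWorkGroupInfo(kernel_T, device, CL_KERNEL_WORK_GROUP_SIZE,
                         sizeof(kernel_wg_size), &kernel_wg_size, NULL);
printf("max work-group size for this kernel: %zu\n", kernel_wg_size);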
Perhaps the amount of local memory available is too small for that work-group size. Can you please show the kernel arguments? You can try reducing the work-group size: start with 2, 4, 8, 16, 32, 64, 128 and so on, making sure it is a power of 2.
Time has passed since Tomi's answer, and it seems that Apple has become slightly more flexible on this aspect. On my OS X 10.12.3 (still OpenCL 1.2), it is possible to use up to CL_DEVICE_MAX_WORK_GROUP_SIZE in the first dimension.
According to the documentation, it is also possible to get the maximum number of work-items in each dimension of a work-group through CL_DEVICE_MAX_WORK_ITEM_SIZES.
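A sketch of that query, assuming the usual three work-item dimensions and that device is your cl_device_id:
size_t max_sizes[3];  // most devices report 3 dimensions (CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS)
clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_ITEM_SIZES, sizeof(max_sizes), max_sizes, NULL);
printf("max work-items per dimension: %zu x %zu x %zu\n", max_sizes[0], max_sizes[1], max_sizes[2]);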
