I'm trying to allocate multidimensional arrays using CUDA UMA on a Power 8 system. However, I run into problems as the size gets bigger. The code I'm using is below. A size of 24 x 24 x 24 x 5 works fine. When I increase it to 64 x 64 x 64 x 8, I get "out of memory" even though my device has memory available. As far as I know, I should be able to allocate memory via UMA up to the physical memory size of the GPU device, so I would not expect any error. My main configuration is Power 8 with a Tesla K40, and I get a seg fault during runtime. However, I tried the same code on an x86 + K40 machine, and it surprisingly worked.
By the way, if you can tell me another way to do this, apart from transforming all my code from 4D arrays to 1D arrays, I'd really appreciate it.
Thanks in advance
Driver: Nvidia 361
#include <iostream>
#include <cuda_runtime.h>

void* operator new[] (size_t len) throw(std::bad_alloc) {
    void *ptr;
    cudaMallocManaged(&ptr, len);
    return ptr;
}

template<typename T>
T**** create_4d(int a, int b, int c, int d){
    T**** ary = new T***[a];
    for(int i = 0; i < a; ++i)
    {
        ary[i] = new T**[b];
        for(int j = 0; j < b; ++j){
            ary[i][j] = new T*[c];
            for(int k = 0; k < c; ++k){
                ary[i][j][k] = new T[d];
            }
        }
    }
    return ary;
}

int main() {
    double ****data;

    std::cout << "allocating..." << std::endl;
    data = create_4d<double>(32,65,65,5);
    std::cout << "Hooreey !!!" << std::endl;

    //segfault here
    std::cout << "allocating..." << std::endl;
    data = create_4d<double>(64,65,65,5);
    std::cout << "Hooreey !!!" << std::endl;

    return 0;
}
There's been a considerable amount of dialog on your cross-posting here including an answer to your main question. I'll use this answer to summarize what is there as well as to answer this question specifically:
By the way, if you can tell me another way to do this, apart from transforming all my code from 4D arrays to 1D arrays, I'd really appreciate it.
One of your claims is that you are doing proper error checking ("I caught the error properly."). You are not. CUDA runtime API calls (including cudaMallocManaged) do not by themselves generate C++-style exceptions, so the throw specification on your new operator definition is meaningless. CUDA runtime API calls return an error code. If you want to do proper error checking, you must collect this error code and process it. If you collect the error code, you can use it to generate an exception if you wish; an example of how you might do that is contained in the canonical proper CUDA error checking question, in one of the answers by Jared Hoberock. As a result of this oversight, when your allocations eventually fail, you ignore the failure, and then when you attempt to use those (non-)allocated areas for subsequent pointer storage, you generate a seg fault.
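For illustration only, here is a minimal sketch (not a drop-in fix for your program) of what a checked version of the overloaded operator might look like, collecting the cudaError_t instead of discarding it:

#include <new>
#include <cstddef>
#include <cuda_runtime.h>

// Sketch: collect the error code returned by cudaMallocManaged and act on it,
// rather than ignoring it as the original operator new[] does.
void* operator new[] (size_t len) {
    void *ptr = nullptr;
    cudaError_t err = cudaMallocManaged(&ptr, len);
    if (err != cudaSuccess)
        throw std::bad_alloc();  // or report cudaGetErrorString(err) first
    return ptr;
}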
The proximal reason for the allocation failure is that you are in fact running out of memory, as discussed in your cross-posting. You can confirm this easily enough with proper error checking. Managed allocations have a granularity, so when you request relatively small allocations, you are in fact using more memory than you think: each small allocation you request is rounded up to the allocation granularity. The size of the allocation granularity varies by system type; the OpenPower system you are operating on has a much larger allocation granularity than the x86 system you compared it to. As a result, you were not running out of memory on the x86 system, but you were on the Power system. As discussed in your cross-posting, this is easy to verify with strategic calls to cudaMemGetInfo.
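As a quick sketch of that verification (assuming a system, like yours, where managed allocations are physically backed in device memory at allocation time; the 4096-byte request is just an arbitrary small size):

#include <iostream>
#include <cuda_runtime.h>

int main() {
    size_t free_before, free_after, total;
    cudaMemGetInfo(&free_before, &total);

    void *p = nullptr;
    cudaMallocManaged(&p, 4096);  // deliberately small request

    cudaMemGetInfo(&free_after, &total);
    // The drop in free memory reflects the granularity the allocator actually
    // used, which can be far larger than the 4096 bytes requested.
    std::cout << "requested 4096 bytes, free memory dropped by "
              << (free_before - free_after) << " bytes" << std::endl;
    cudaFree(p);
    return 0;
}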
From a performance perspective, this is a pretty bad approach to multidimensional allocations for several reasons:
The allocations you are creating are disjoint, connected by pointers. Therefore, to access an element by pointer dereferencing, it requires 3 or 4 such dereferences to go through a 4-subscripted pointer array. Each of these dereferences will involve a device memory access. Compared to using simulated 4-D access into a 1-D (flat) allocation, this will be noticeably slower. The arithmetic associated with converting the 4-D simulated access into a single linear index will be much faster than traversing through memory via pointer-chasing.
Since the allocations you are creating are disjoint, the managed memory subsystem cannot coalesce them into a single transfer, and therefore, under the hood, a number of transfers equal to the product of your first 3 dimensions will take place, at kernel launch time (and presumably at termination, ie. at the next cudaDeviceSynchronize() call). This data must all be transferred of course, but you will be doing a large number of very small transfers, compared to a single transfer for a "flat" allocation. The associated overhead of the large number of small transfers can be significant.
As we've seen, the allocation granularity can seriously impact the memory usage efficiency of such an allocation scheme. What should use only a small percentage of system memory ends up using all of it.
Operations that work on contiguous data from "row" to "row" of such an allocation will fail, because the allocations are disjoint. For example, such a matrix or a subsection of such a matrix could not be reliably passed to a CUBLAS linear algebra routine, as the expectation for that matrix would have contiguity of row storage in memory associated with it.
The ideal solution would be to create a single flat allocation, and then use simulated 4-D indexing to create a single linear index. Such an approach would address all 4 concerns above. However it requires perhaps substantial code refactoring.
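For reference, the simulated 4-D indexing could be as simple as the following sketch (the helper name idx4 is mine, not part of the code below):

// Row-major linearization of a conceptual T[a][b][c][d] array stored flat.
// Only the trailing dimensions b, c, d are needed to compute the index.
template <typename T>
__host__ __device__ T& idx4(T *base, int i, int j, int k, int l, int b, int c, int d) {
    return base[((i*b + j)*c + k)*d + l];
}
// usage, in host or device code: idx4(data, i, j, k, l, B, C, D) = 1.0;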
We can however come up with an alternate approach, which preserves the 4-subscripted indexing, but otherwise addresses the concerns in items 2, 3, and 4 above by creating a single underlying flat allocation.
What follows is a worked example. We will actually create 2 managed allocations: one underlying flat allocation for data storage, and one underlying flat allocation (regardless of dimensionality) for pointer storage. It would be possible to combine these two into a single allocation with some careful alignment work, but that is not required to achieve any of the proposed benefits.
The basic methodology is covered in various other CUDA questions here on the SO tag, but most of those have host-side usage (only) in view, since they did not have UM in view. However, UM allows us to extend the methodology to host- and device-side usage. We will start by creating a single "base" allocation of the necessary size to store the data. Then we will create an allocation for the pointer array, and we will then work through the pointer array, fixing up each pointer to point to the correct location in the pointer array, or else to the correct location in the "base" data array.
Here's a worked example, demonstrating host and device usage, and including proper error checking:
$ cat t1271.cu
#include <iostream>
#include <assert.h>

template<typename T>
T**** create_4d_flat(int a, int b, int c, int d){
    // one flat allocation for the data itself
    T *base;
    cudaError_t err = cudaMallocManaged(&base, a*b*c*d*sizeof(T));
    assert(err == cudaSuccess);
    // one flat allocation for all three levels of pointers
    T ****ary;
    err = cudaMallocManaged(&ary, (a+a*b+a*b*c)*sizeof(T*));
    assert(err == cudaSuccess);
    for (int i = 0; i < a; i++){
        // first a entries point into the next region of the pointer array
        ary[i] = (T ***)((ary + a) + i*b);
        for (int j = 0; j < b; j++){
            // next a*b entries point into the last region of the pointer array
            ary[i][j] = (T **)((ary + a + a*b) + i*b*c + j*c);
            for (int k = 0; k < c; k++)
                // final a*b*c entries point into the flat data allocation
                ary[i][j][k] = base + ((i*b+j)*c + k)*d;}}
    return ary;
}

template<typename T>
void free_4d_flat(T**** ary){
    if (ary[0][0][0]) cudaFree(ary[0][0][0]); // frees the flat data allocation
    if (ary) cudaFree(ary);                   // frees the flat pointer allocation
}

template<typename T>
__global__ void fill(T**** data, int a, int b, int c, int d){
    unsigned long long int val = 0;
    for (int i = 0; i < a; i++)
        for (int j = 0; j < b; j++)
            for (int k = 0; k < c; k++)
                for (int l = 0; l < d; l++)
                    data[i][j][k][l] = val++;
}

void report_gpu_mem()
{
    size_t free, total;
    cudaMemGetInfo(&free, &total);
    std::cout << "Free = " << free << " Total = " << total << std::endl;
}

int main() {
    report_gpu_mem();
    unsigned long long int ****data2;
    std::cout << "allocating..." << std::endl;
    data2 = create_4d_flat<unsigned long long int>(64, 63, 62, 5);
    report_gpu_mem();
    fill<<<1,1>>>(data2, 64, 63, 62, 5);
    cudaError_t err = cudaDeviceSynchronize();
    assert(err == cudaSuccess);
    std::cout << "validating..." << std::endl;
    for (int i = 0; i < 64*63*62*5; i++)
        if (*(data2[0][0][0] + i) != i) {std::cout << "mismatch at " << i << " was " << *(data2[0][0][0] + i) << std::endl; return -1;}
    free_4d_flat(data2);
    return 0;
}
$ nvcc -arch=sm_35 -o t1271 t1271.cu
$ cuda-memcheck ./t1271
========= CUDA-MEMCHECK
Free = 5904859136 Total = 5975900160
allocating...
Free = 5892276224 Total = 5975900160
validating...
========= ERROR SUMMARY: 0 errors
$
Notes:
This still involves pointer chasing inefficiency. I don't know of a method to avoid that without removing the multiple subscript arrangement.
I've elected to use 2 different indexing schemes in host and device code. In device code, I am using a normal 4-subscripted index, to demonstrate the utility of that. In host code, I am using a "flat" index, to demonstrate that the underlying storage is contiguous and contiguously addressable.
Related
I'm running experiments aiming to understand the behavior of random read and write access to global memory.
The following kernel reads from an input vector (groupColumn) with a coalesced access pattern and reads random entries from a hash table in global memory.
struct Entry {
    uint group;
    uint payload;
};
typedef struct Entry Entry;

__kernel void global_random_write_access(__global const uint* restrict groupColumn,
                                         __global Entry* globalHashTable,
                                         __const uint HASH_TABLE_SIZE,
                                         __const uint HASH_TABLE_SIZE_BITS,
                                         __const uint BATCH,
                                         __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;

    uint sum = 0;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        // hash keys are pre-computed
        uint hash_key = groupColumn[idx];                    // coalesced read access
        __global Entry* entry = &globalHashTable[hash_key];  // pointer arithmetic
        sum += entry->payload;                               // random read
    }

    if (local_id < HASH_TABLE_SIZE) {
        globalHashTable[local_id].payload = sum;             // rare coalesced write
    }
}
I ran this kernel on an NVIDIA V100 card for multiple iterations. The variance of the results is very low; thus, I only plot one dot per group configuration. The input data size is 1 GiB, and each thread processes 128 entries (BATCH = 128). Here are the results:
So far so good. The V100 has a max memory bandwidth of roughly 840GiB/sec, and the measurements are close enough, given that there are random memory reads involved.
Now I'm testing random writes to global memory with the following kernel:
__kernel void global_random_write_access(__global const uint* restrict groupColumn,
                                         __global Entry* globalHashTable,
                                         __const uint HASH_TABLE_SIZE,
                                         __const uint HASH_TABLE_SIZE_BITS,
                                         __const uint BATCH,
                                         __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;

    uint sum = 0;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        // hash keys are pre-computed
        uint hash_key = groupColumn[idx];                    // coalesced read access
        __global Entry* entry = &globalHashTable[hash_key];  // pointer arithmetic
        sum += i;
        entry->payload = sum;                                // random write
    }

    if (local_id < HASH_TABLE_SIZE) {
        globalHashTable[local_id].payload = sum;             // rare coalesced write
    }
}
Godbolt: OpenCL -> PTX
The performance drops significantly, to a few GiB/sec, for a small number of groups.
I can't make sense of this behavior. As soon as the hash table reaches the size of L1, the performance seems to be limited by L2. For fewer groups the performance is way lower. I don't really understand what the limiting factors are.
The CUDA documentation doesn't say much about how store instructions are handled internally. The only thing I could find is that the st.wb PTX instruction (Cache Operations) might cause a hit on stale L1 cache if another thread were to try to read the same address via ld.ca. However, there are no reads to the hash table involved here.
Any hints or links to understanding the performance behavior are much appreciated.
Edit:
I actually found a bug in my code: it didn't pre-compute the hash keys. The access to global memory wasn't random, but actually coalesced, due to how I generated the values. I further simplified my experiments by removing the hash table. Now I only have one integer input column and one integer output column. Again, I want to see how writes to global memory actually behave for different memory ranges. Ultimately, I want to understand which hardware properties influence the performance of writes to global memory, and see whether I can predict, based on the code, what performance to expect.
I tested this with two kernels that do the following:
Read from input, write to output
Read from input, read from output and write to output
I also applied two different access patterns, by generating the values in the group column (a sketch of how such columns could be generated follows the list):
SEQUENTIAL: sequentially increasing numbers until the current group's size is reached. This pattern leads to a coalesced memory access when reading and writing from the output column.
RANDOM: uniformly distributed random numbers within the current group's size. This pattern leads to a misaligned memory access when reading and writing from the output column.
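For concreteness, here is a small host-side sketch of how the two kinds of group column might be generated (my own illustration; the function name and RNG choice are not from the actual experiment):

#include <vector>
#include <random>
#include <cstdint>
#include <cstddef>

// Build a group column of n entries for a given group count.
// SEQUENTIAL cycles through 0..groups-1; RANDOM draws uniformly from [0, groups).
std::vector<std::uint32_t> make_group_column(std::size_t n, std::uint32_t groups, bool random) {
    std::vector<std::uint32_t> column(n);
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<std::uint32_t> dist(0, groups - 1);
    for (std::size_t i = 0; i < n; ++i)
        column[i] = random ? dist(rng) : static_cast<std::uint32_t>(i % groups);
    return column;
}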
(1) Read & Write
__kernel void global_write_access(__global const uint* restrict groupColumn,
                                  __global uint *restrict output,
                                  __const uint BATCH,
                                  __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;

    uint sum = 0;
    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        uint group = groupColumn[idx]; // coalesced read access
        sum += i;
        output[group] = sum;           // write (coalesced | random)
    }
}
PTX Code: https://godbolt.org/z/19nTdK
(2) Read, Read & Write
__kernel void global_read_write_access(__global const uint* restrict groupColumn,
                                       __global uint *restrict output,
                                       __const uint BATCH,
                                       __const uint STRIDE) {
    int global_id = get_global_id(0);
    int local_id = get_local_id(0);
    uint end = BATCH * STRIDE;

    for (int i = 0; i < end; i += STRIDE) {
        uint idx = global_id + i;
        uint group = groupColumn[idx]; // coalesced read access
        output[group] += 1;            // read & write (coalesced | random)
    }
}
PTX Code: https://godbolt.org/z/b647cz
As ProjectPhysX pointed out, the access pattern makes a huge difference. However, for small groups the performance is quite similar for both access patterns. In general, I would like to better understand the shape of the curves and which hardware properties, architectural features etc. influence this shape.
From the CUDA programming guide I learned that global memory accesses are conducted via 32-, 64-, or 128-byte transactions. Accesses to L2 are done via 32-byte transactions, so up to 8 integer words can be accessed via a single transaction. This might explain the plateau, with a bump at 8 groups, at the beginning of the curve. After that, more transactions are needed and performance drops.
One cache line is 128 bytes long (in both L1 and L2); hence, 32 integers fit into a single cache line. For more groups, more cache lines are required, which can potentially be processed in parallel by more memory controllers. That might be the reason for the performance increase here. 8 memory controllers are available on the V100, so I would expect the performance to peak at 256 groups. It doesn't, though. Instead, performance steadily increases until reaching 4096 groups, and plateaus there at roughly 750 GiB/sec.
The plateauing in your second performance plot is GPU saturation: for only a few work groups, the GPU is partly idle, and the latencies involved in launching the kernel significantly reduce performance. Above 8192 groups, the GPU fully saturates its memory bandwidth. The plateau is only at ~520GB/s because of the misaligned writes (which have low performance on the V100) and also the "rare coalesced write" in the if-block, which happens at least once per group. With branching within the group, all other threads have to wait for the single write operation to finish. Also, this write is not coalesced, because it does not happen for every thread in the group. On the V100, misaligned write performance is very poor, at most ~120GB/s; see the benchmark here.
Note that if you comment out the if-part, the compiler sees that you do not do anything with sum and optimizes everything away, leaving you with a blank kernel in PTX.
The first performance graph is a bit more confusing to me. The only difference between the first kernel and the second is that the random write in the loop is replaced by a random read. Generally, read performance on the V100 is much better (~840GB/s, regardless of coalesced/misaligned) than misaligned write performance, so overall performance is expected to be much better, and indeed it is. However, I can't make sense of the performance dropping for more groups, where saturation should theoretically be better. But the performance drop isn't really that significant, at ~760GB/s vs. 730GB/s.
To summarize, you are observing that the performance penalty for misaligned writes (~120GB/s vs. ~900GB/s for coalesced writes) is much larger than for reads, where performance is about the same for coalesced/misaligned at ~840GB/s. This is common for GPUs, with some variance, of course, between microarchitectures. Typically there is at least some performance penalty for misaligned reads, but not as large as for misaligned writes.
I have a question that I found many threads about, but none explicitly answered it.
I am trying to have a multidimensional array inside the kernel of the GPU using thrust. Flattening would be difficult, as all the dimensions are non-homogeneous and I go up to 4D. Now I know I cannot have device_vectors of device_vectors, for whichever underlying reason (explanation would be welcome), so I tried going the way over raw-pointers.
My reasoning is: a raw pointer points to memory on the GPU; why else would I be able to access it from within the kernel? So I should technically be able to have a device_vector which holds raw pointers, all of which should be accessible from within the GPU. This way I constructed the following code:
thrust::device_vector<Vector3r*> d_fluidmodelParticlePositions(nModels);
thrust::device_vector<unsigned int***> d_allFluidNeighborParticles(nModels);
thrust::device_vector<unsigned int**> d_nFluidNeighborsCrossFluids(nModels);

for(unsigned int fluidModelIndex = 0; fluidModelIndex < nModels; fluidModelIndex++)
{
    FluidModel *model = sim->getFluidModelFromPointSet(fluidModelIndex);
    const unsigned int numParticles = model->numActiveParticles();

    thrust::device_vector<Vector3r> d_neighborPositions(model->getPositions().begin(), model->getPositions().end());
    d_fluidmodelParticlePositions[fluidModelIndex] = CudaHelper::GetPointer(d_neighborPositions);

    thrust::device_vector<unsigned int**> d_fluidNeighborIndexes(nModels);
    thrust::device_vector<unsigned int*> d_nNeighborsFluid(nModels);
    for(unsigned int pid = 0; pid < nModels; pid++)
    {
        FluidModel *fm_neighbor = sim->getFluidModelFromPointSet(pid);

        thrust::device_vector<unsigned int> d_nNeighbors(numParticles);
        thrust::device_vector<unsigned int*> d_neighborIndexesArray(numParticles);

        for(unsigned int i = 0; i < numParticles; i++)
        {
            const unsigned int nNeighbors = sim->numberOfNeighbors(fluidModelIndex, pid, i);
            d_nNeighbors[i] = nNeighbors;

            thrust::device_vector<unsigned int> d_neighborIndexes(nNeighbors);
            for(unsigned int j = 0; j < nNeighbors; j++)
            {
                d_neighborIndexes[j] = sim->getNeighbor(fluidModelIndex, pid, i, j);
            }
            d_neighborIndexesArray[i] = CudaHelper::GetPointer(d_neighborIndexes);
        }
        d_fluidNeighborIndexes[pid] = CudaHelper::GetPointer(d_neighborIndexesArray);
        d_nNeighborsFluid[pid] = CudaHelper::GetPointer(d_nNeighbors);
    }
    d_allFluidNeighborParticles[fluidModelIndex] = CudaHelper::GetPointer(d_fluidNeighborIndexes);
    d_nFluidNeighborsCrossFluids[fluidModelIndex] = CudaHelper::GetPointer(d_nNeighborsFluid);
}
Now the compiler won't complain, and accessing for example d_nFluidNeighborsCrossFluids from within the kernel will work, but it returns wrong values. I access it like this (again, from within a kernel):
d_nFluidNeighborsCrossFluids[iterator1][iterator2][iterator3];
// Note: out of bounds indexing guaranteed to not happen, indexing is definitely right
The question is, why does it return wrong values? The logic behind it should work in my opinion, since my indexing is correct and the pointers should be valid addresses from within the kernel.
Thank you already for your time and have a great day.
EDIT:
Here is a minimal reproducible example. For some reason the values appear right despite having the same structure as my code, but cuda-memcheck reveals some errors. Uncommenting the two commented lines leads me to the main problem I am trying to solve. What is cuda-memcheck telling me here?
/* Part of this example has been taken from code of Robert Crovella
   in a comment below */
#include <thrust/device_vector.h>
#include <stdio.h>

template<typename T>
static T* GetPointer(thrust::device_vector<T> &vector)
{
    return thrust::raw_pointer_cast(vector.data());
}

__global__
void k(unsigned int ***nFluidNeighborsCrossFluids, unsigned int ****allFluidNeighborParticles){
    const unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if(i > 49)
        return;
    printf("i: %d nNeighbors: %d\n", i, nFluidNeighborsCrossFluids[0][0][i]);
    //for(int j = 0; j < nFluidNeighborsCrossFluids[0][0][i]; j++)
    //    printf("i: %d j: %d neighbors: %d\n", i, j, allFluidNeighborParticles[0][0][i][j]);
}

int main(){
    const unsigned int nModels = 2;
    const int numParticles = 50;

    thrust::device_vector<unsigned int**> d_nFluidNeighborsCrossFluids(nModels);
    thrust::device_vector<unsigned int***> d_allFluidNeighborParticles(nModels);

    for(unsigned int fluidModelIndex = 0; fluidModelIndex < nModels; fluidModelIndex++)
    {
        thrust::device_vector<unsigned int*> d_nNeighborsFluid(nModels);
        thrust::device_vector<unsigned int**> d_fluidNeighborIndexes(nModels);
        for(unsigned int pid = 0; pid < nModels; pid++)
        {
            thrust::device_vector<unsigned int> d_nNeighbors(numParticles);
            thrust::device_vector<unsigned int*> d_neighborIndexesArray(numParticles);
            for(unsigned int i = 0; i < numParticles; i++)
            {
                const unsigned int nNeighbors = i;
                d_nNeighbors[i] = nNeighbors;
                thrust::device_vector<unsigned int> d_neighborIndexes(nNeighbors);
                for(unsigned int j = 0; j < nNeighbors; j++)
                {
                    d_neighborIndexes[j] = i + j;
                }
                d_neighborIndexesArray[i] = GetPointer(d_neighborIndexes);
            }
            d_nNeighborsFluid[pid] = GetPointer(d_nNeighbors);
            d_fluidNeighborIndexes[pid] = GetPointer(d_neighborIndexesArray);
        }
        d_nFluidNeighborsCrossFluids[fluidModelIndex] = GetPointer(d_nNeighborsFluid);
        d_allFluidNeighborParticles[fluidModelIndex] = GetPointer(d_fluidNeighborIndexes);
    }

    k<<<256, 256>>>(GetPointer(d_nFluidNeighborsCrossFluids), GetPointer(d_allFluidNeighborParticles));

    if (cudaGetLastError() != cudaSuccess)
        printf("Sync kernel error: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaDeviceSynchronize();
}
A device_vector is a class definition. That class has various methods and operators associated with it. The thing that allows you to do this:
d_nFluidNeighborsCrossFluids[...]...;
is a square-bracket operator. That operator is a host operator (only). It is not usable in device code. Issues like this give rise to the general statements that "thrust::device_vector is not usable in device code." The device_vector object itself is generally not usable. However the data it contains is usable in device code, if you attempt to access it via a raw pointer.
Here is an example of a thrust device vector that contains an array of pointers to the data contained in other device vectors. That data is usable in device code, as long as you don't attempt to make use of the thrust::device_vector object itself:
$ cat t1509.cu
#include <thrust/device_vector.h>
#include <stdio.h>

template <typename T>
__global__ void k(T **data){
    printf("the first element of vector 1 is: %d\n", (int)(data[0][0]));
    printf("the first element of vector 2 is: %d\n", (int)(data[1][0]));
    printf("the first element of vector 3 is: %d\n", (int)(data[2][0]));
}

int main(){
    thrust::device_vector<int> vector_1(1,1);
    thrust::device_vector<int> vector_2(1,2);
    thrust::device_vector<int> vector_3(1,3);
    thrust::device_vector<int *> pointer_vector(3);
    pointer_vector[0] = thrust::raw_pointer_cast(vector_1.data());
    pointer_vector[1] = thrust::raw_pointer_cast(vector_2.data());
    pointer_vector[2] = thrust::raw_pointer_cast(vector_3.data());
    k<<<1,1>>>(thrust::raw_pointer_cast(pointer_vector.data()));
    cudaDeviceSynchronize();
}
$ nvcc -o t1509 t1509.cu
$ cuda-memcheck ./t1509
========= CUDA-MEMCHECK
the first element of vector 1 is: 1
the first element of vector 2 is: 2
the first element of vector 3 is: 3
========= ERROR SUMMARY: 0 errors
$
EDIT: In the MCVE you have now posted, you point out that an ordinary run of the code appears to give correct results, but when you use cuda-memcheck, errors are reported. You have a general design problem that will cause this.
In C++, when an object is defined within a curly-braces region:
{
    {
        Object A;
        // object A is in-scope here
    }
    // object A is out-of-scope here
}
// object A is out of scope here
k<<<...>>>(anything that points to something in object A); // is illegal
and you exit that region, the object defined within the region is now out of scope. For objects with constructors/destructors, this usually means the destructor of the object will be called when it goes out-of-scope. For a thrust::device_vector (or std::vector) this will deallocate any underlying storage associated with that vector. That does not necessarily "erase" any data, but attempts to use that data are illegal and would be considered UB (undefined behavior) in C++.
When you establish pointers to such data inside an in-scope region, and then go out-of-scope, those pointers no longer point to anything that would be legal to access, so attempts to dereference the pointer would be illegal/UB. Your code is doing this. Yes, it does appear to give the correct answer, because nothing is actually erased on deallocation, but the code design is illegal, and cuda-memcheck will highlight that.
I suppose one fix would be to pull all this stuff out of the inner curly-braces, and put it at main scope, just like the d_nFluidNeighborsCrossFluids device_vector is. But you might also want to rethink your general data organization strategy and flatten your data.
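As a rough sketch of that fix (my own illustration, loosely modeled on the MCVE; the container name neighborStorage is hypothetical), the owning vectors are kept alive at main scope while only their raw pointers are handed around:

#include <thrust/device_vector.h>
#include <vector>

int main(){
    const int numParticles = 50;
    // Owning vectors live at main scope, so their device storage outlives the kernel launch.
    std::vector<thrust::device_vector<unsigned int>> neighborStorage;
    neighborStorage.reserve(numParticles); // prevent reallocation, which would copy the
                                           // device_vectors and invalidate the stored pointers
    thrust::device_vector<unsigned int*> d_neighborIndexesArray(numParticles);
    for (int i = 0; i < numParticles; i++){
        neighborStorage.emplace_back(i);   // a device_vector of size i, kept alive
        d_neighborIndexesArray[i] = thrust::raw_pointer_cast(neighborStorage.back().data());
    }
    // safe to launch a kernel here with thrust::raw_pointer_cast(d_neighborIndexesArray.data())
    return 0;
}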
You should really provide a minimal, complete, verifiable/reproducible example; yours is neither minimal, nor complete, nor verifiable.
I will, however, answer your side-question:
I know I cannot have device_vectors of device_vectors, for whichever underlying reason (explanation would be welcome)
While a device_vector refers to a bunch of data on the GPU, it's a host-side data structure - otherwise you would not be able to use it in host-side code. On the host side, what it holds should be something like: the capacity, the size in elements, the device-side pointer to the actual data, and maybe more information. This is similar to how an std::vector variable may refer to data that's on the heap, but if you create the variable locally, the fields I mentioned above will exist on the stack.
Now, those fields of the device vector that are located in host memory are not generally accessible from the device-side. In device-side code you would typically use the raw pointer to the device-side data the device_vector manages.
Also, note that if you have a thrust::device_vector<T> v, each use of operator[] means a bunch of separate CUDA calls to copy data to or from the device (unless there's some caching going on under the hood). So you really want to avoid using square-brackets with this structure.
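For example (a sketch, assuming you want the whole vector on the host), a single bulk copy avoids the per-element traffic:

#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/copy.h>

int main(){
    thrust::device_vector<int> v(1000, 42);
    // One bulk device-to-host transfer for the whole range,
    // instead of 1000 separate copies via v[i].
    thrust::host_vector<int> h(v.size());
    thrust::copy(v.begin(), v.end(), h.begin());
    return 0;
}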
Finally, remember that pointer-chasing can be a performance killer, especially on a GPU. You might want to consider massaging your data structure somewhat in order to make it amenable to flattening.
I wrote the following code to test my NVIDIA and AMD GPUs:
kernel void computeLayerOutput_Rolled(
    global Layer* layers,
    global float* weights,
    global float* output,
    constant int* restrict netSpec,
    int layer)
{
    const int n = get_global_size(0);
    const int nodeNumber = get_global_id(0); //There will be an offset depending on the layer we are operating on
    int numberOfWeights;
    float t;

    //getPosition(i, netSpec, &layer, &nodeNumber);
    numberOfWeights = layers[layer].nodes[nodeNumber].numberOfWeights;

    //if (sizeof(Layer) > 60000) // This is the extra code added for nvidia
    //    exit(0);

    t = 0;
    for (unsigned int j = 0; j != numberOfWeights; ++j)
        t += threeD_access(weights, layer, nodeNumber, j, MAXSIZE, MAXSIZE) *
             twoD_access(output, layer-1, j, MAXSIZE);
    twoD_access(output, layer, nodeNumber, MAXSIZE) = sigmoid(t);
}
At the beginning, I did not add the code checking the size of Layer, and it worked on an AMD Kalindi GPU, but crashed and reported error code -36 on an NVIDIA Tesla C2075.
Since I had rewritten the struct type Layer and decreased its size a lot beforehand, I decided to check the size of Layer to determine whether this struct was defined well in the kernel code. So I added this code:
if (sizeof(Layer) > 60000)
    exit(0);
Then it was OK on NVIDIA. However, the strange thing is, when I add // before this check, just as in the code given above, it still works. (I believe I do not need to run make clean && make when I rewrite something in kernel code, but I still did it.) Nevertheless, when I roll back to the version that does not contain this comment, it fails and error code -36 appears again. It really puzzles me. I think the two versions of my code are identical, aren't they?
I started to learn coding backwards: high level first. This has the obvious liability of missing some basic concepts that I should definitely know, and when I try to learn a low level language, it throws me.
I have tried many times to understand pointers; however, the explanations rapidly go over my head, usually because all of the example code is in languages that use pointers, which I don't understand in other respects, and then I spin.
I am the most (and very at that) fluent in Javascript.
How would you explain pointers to a sad Javascript developer like me? Could someone provide me with a practical, real-life example?
Maybe even show how, if Javascript had pointers, you could do x, and how a pointer is different from a raw variable because of y.
Here's an attempt at a self-contained answer from first principles.
Pointers are part of a type system that permits the implementation of reference semantics. Here's how. We suppose that our language has a type system, by which every variable is of a certain type. C is a good example, but many languages work like this. So we can have a bunch of variables:
int a = 10;
int b = 25;
Further, we assume that function arguments are always copied from the caller scope into the function scope. (This is also true for many real languages, though the details can quickly become subtle when the type system gets 'hidden' from the user, e.g. in Java.) So let's have a function:
void foo(int x, int y);
When calling foo(a, b), the variables a and b are copied into local variables x and y corresponding to the formal parameters, and those copies are visible within the function scope. Whatever the function does with x and y has no effect on the variables a and b at the call site. The entire function call is opaque to the caller.
Now let's move on to pointers. A language with pointers contains, for every object type T, a related type T *, which is the type "pointer to T". Values of type T * are produced by taking the address of an existing object of type T. So a language that has pointers also needs to have a way to produce pointers, which is "taking the address of something". The purpose of a pointer is to store the address of an object.
But that's only one half of the picture. The other half is what to do with the address of an object. The main reason for caring about the address of an object is to be able to refer to the object whose address is being stored. This object is obtained by a second operation, suitably called dereferencing, which, when applied to a pointer, produces the object being "pointed to". Importantly, we do not get a copy of the object; we get the actual object.
In C, the address-of operator is spelled &, and the dereference operator is spelled *.
int * p = &a; // p stores the address of 'a'
*p = 12; // now a == 12
The first operand of the final assignment, *p, is the object a itself. Both a and *p are the same object.
Now why is this useful? Because we can pass pointers to functions to allow functions to change things outside the function's own scope. Pointers allow for indirection, and thus for referencing. You can tell the function about "something else". Here's the standard example:
void swap(int * p, int * q)
{
int tmp = *p;
*p = *q;
*q = tmp;
}
We can tell the function swap about our variables a and b by giving it the addresses of those variables:
swap(&a, &b);
In this way, we are using pointers to implement reference semantics for the function swap. The function gets to refer to variables elsewhere and can modify them.
The fundamental mechanism of reference semantics can be summarized thus:
The caller takes the address of the object to be referred to:
T a;
mangle_me(&a);
The callee takes a pointer parameter and dereferences the pointer to access the referred-to value.
void mangle_me(T * p)
{
    // use *p
}
Reference semantics are important for many aspects of programming, and many programming languages supply them in some way or another. For example, C++ adds native reference support to the language, largely removing the need for pointers. Go uses explicit pointers, but offers some notational "convenience" by sometimes automagically dereferencing a pointer. Java and Python "hide" pointer-ness inside their type system, e.g. the type of a variable is in some sense a pointer to the type of the object. In some languages, some types like ints are naked value types, and others (like lists and dictionaries) are "hidden-pointer-included" reference types. Your mileage may vary.
C++ rules are fairly simple and consistent. I actually find how Javascript handles object references and prototypes way more unintuitive.
Preface A: Why is Javascript A Bad Place To Start?
The first thing you need to fundamentally understand before you can tackle pointers is variables. You need to know what they are and how the computer keeps track of them.
Coming from a Javascript background you are used to every variable assigned to an object being a reference. That is, two variables can reference the same object. This is essentially pointers without any syntax to allow for more intricate use. You are also used to implicit copies of "basic" types like numbers. That is to say:
var a = MyObject;
var b = a;
Now if you change b you also change a. You would need to explicitly copy MyObject in order to have two variables pointing to different instances of it!
var a = 5;
var b = a;
Now if you change b, a is not actually changed. This is because assigning a to b when a is a simple type will copy it automatically for you. You cannot get the same behavior as objects with simple numbers, and vice versa, so when you want two variables to refer to the same number you have to wrap it in an object. There is no explicit way to indicate how you want to handle references vs. copies for primitive types.
You can see how this inconsistent behavior, with no variation in syntax (but an extreme variation in behavior), can make the relationship between variables and what they contain muddy. For this reason I highly suggest banishing this mental model for a moment as we continue on our journey to understand explicit pointers.
Preface B: YOLO: Variable Lifetime On The Stack
So, let's talk from here on out in C++ terms. C++ is one of the most explicit languages in terms of what a variable is vs a pointer. C++ is a good entry point because it is low level enough to talk in terms of memory and lifespan, but high level enough to understand things at a decent level of abstraction.
So, in C++ when you create any variable it exists in a certain scope. There are two ways to create a variable, on the stack, and on the heap.
The stack refers to the call stack of your application. Every brace pair pushes a new context onto the stack (and pops it when it runs out). When you create a local variable, it exists in that particular stack frame, when that stack frame is popped the variable is destroyed.
A simple example of scope:
#include <iostream>
#include <string>

struct ScopeTest{
    ScopeTest(std::string a_name):
        name(a_name){
        std::cout << "Create " << name << std::endl;
    }
    ~ScopeTest(){
        std::cout << "Destroy " << name << std::endl;
    }
    ScopeTest(ScopeTest &a_copied){
        std::cout << "Copy " << a_copied.name << std::endl;
        name = a_copied.name + "(copy)";
        a_copied.name += "(original)";
    }
    std::string name;
};

ScopeTest getVariable(){ //Stack frame push
    ScopeTest c("c"); //Create c
    return c; //Copy c + Destroy c(original)
}

int main(){
    ScopeTest a("a"); //Create a
    {
        ScopeTest b("b"); //Create b
        ScopeTest d = getVariable();
    } //Destroy c(copy) + Destroy b
} //Destroy a
Output:
Create a
Create b
Create c
Copy c
Destroy c(original)
Destroy c(copy)
Destroy b
Destroy a
This should illustrate explicitly how a variable ties its life to the stack, how it is copied around, and when it dies.
Preface C: YOLO Variable Lifetime on the Heap
So, that's interesting conceptually, but variables can also be allocated outside of the stack, this is called "heap" memory because it is largely structure-less. The issue with heap memory is that you don't really have automatic cleanup based on scope. So you need a way to tie it to some kind of "handle" to keep track of it.
I'll illustrate here:
{
    new ScopeTest("a"); //Create a
} //Whoa, we haven't destroyed it! Now we are leaking memory!
So, clearly we can't just say "new X" without keeping track of it. The memory gets allocated, but doesn't tie itself to a lifespan so it lives forever (like a memory vampire!)
In Javascript you can just tie it to a variable and the object dies when the last reference to it dies. Later I'll talk about a more advanced topic in C++ which allows for that, but for now let's look at simple pointers.
In C++ when you allocate a variable with new, the best way to track it is to assign it to a pointer.
Preface D: Pointers and The Heap
As I suggested, we can track allocated memory on the heap with a pointer. Our previous leaky program can be fixed like so:
{
    ScopeTest *a = new ScopeTest("a"); //Create a
    delete a; //Destroy a
}
ScopeTest *a; creates a pointer, and assigning it to a new ScopeTest("a") gives us a handle we can actually use to clean up and refer to the variable which exists in heap memory. I know heap memory sounds kinda confusing, but it's basically a jumble of memory that you can point to and say "hey you, I want a variable with no lifespan, make one and let me point at it".
Any variable created with the new keyword must be matched by exactly one delete, or it will live forever, using up memory. If you try to delete any memory address other than 0 (deleting 0 is a no-op) more than once, you could be deleting memory not under your program's control, which results in undefined behavior.
ScopeTest *a; declares a pointer. From here on out, any time you say "a" you are referring to a specific memory address. *a will refer to the actual object at that memory address, and you can access properties of it (*a).name. a-> in C++ is a special operator that does the same thing as (*a).
{
    ScopeTest *a = new ScopeTest("a"); //Create a
    std::cout << a << ": " << (*a).name << ", " << a->name << std::endl;
    delete a; //Destroy a
}
Output for the above will look something like:
007FB430: a, a
Where 007FB430 is a hex representation of a memory address.
So in the purest sense, a pointer is literally a memory address and the ability to treat that address as a variable.
The Relationship Between Pointers and Variables
We don't just have to use pointers with heap allocated memory though! We can assign a pointer to any memory, even memory living on the stack. Just be careful your pointer doesn't out-live the memory it points to or you'll have a dangling pointer which could do bad things if you continue to try and use it.
It is always the programmer's job to make sure a pointer is valid, there are literally 0 checks in place in C++ to help you out when dealing with bare memory.
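To make that warning concrete, here is a tiny sketch (my own) of a dangling pointer:

int* makeDangling(){
    int local = 42;   // lives in this function's stack frame
    return &local;    // WRONG: 'local' is destroyed when the function returns
}

// int *p = makeDangling();
// std::cout << *p;   // undefined behavior: may print garbage, crash, or appear to "work"

With that hazard in mind, back to well-behaved pointers: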
int a = 5; //variable named a has a value of 5.
int *pA = &a; //pointer named pA is now referencing the memory address of a (we reference "a" with & to get the address).
Now pA refers to the same value as &a, that is to say, it is the address of a.
*pA refers to the same value as a.
You can treat *pA = 6; the same as a = 6. Observe (continuing from the above two lines of code):
std::cout << *pA << ", " << a << std::endl; //output 5, 5
a = 6;
std::cout << *pA << ", " << a << std::endl; //output 6, 6
*pA = 7;
std::cout << *pA << ", " << a << std::endl; //output 7, 7
You can see why *pA is called a "pointer". It is literally pointing to the same address in memory as a. So far we have been using *pA to de-reference the pointer and access the value at the address it points to.
Pointers have a few interesting properties. One of those properties is that you can change the object a pointer points at.
int b = 20;
pA = &b;
std::cout << *pA << ", " << a << ", " << b << std::endl; //output 20, 7, 20
*pA = 25;
std::cout << *pA << ", " << a << ", " << b << std::endl; //output 25, 7, 25
pA = &a;
std::cout << *pA << ", " << a << ", " << b << std::endl; //output 7, 7, 25
*pA = 8;
std::cout << *pA << ", " << a << ", " << b << std::endl; //output 8, 8, 25
b = 30;
pA = &b;
std::cout << *pA << ", " << a << ", " << b << std::endl; //output 30, 8, 30
So you can see that a pointer is really just a handle to a point in memory. This can be exceptionally useful in many cases, do not write it off just because this sample is simplistic.
Now, the next thing you need to know about pointers is that you can increment them, as long as the memory you are incrementing into belongs to your program. The most common example is C strings. In modern C++, strings are stored in a container called std::string (use that), but I will use an old C-style string to demonstrate array access with a pointer.
Pay close attention to ++letter. What this does is increment the memory address the pointer is looking at by the size of the type it is pointing to.
Let's break this down a bit more; re-read the above sentence a few times, then continue on.
If I have a type where sizeof(T) == 4, every ++myPointerValue will shift the pointer 4 bytes in memory, to point to the next "value" of that type. This is part of why the pointer "type" matters.
char text[] { 'H', 'e', 'l', 'l', 'o', '\0' }; //could be char text[] = "Hello"; but I want to show the \0 explicitly
for (char* letter = &text[0]; *letter != '\0'; ++letter){
    std::cout << "[" << *letter << "]";
}
std::cout << std::endl;
The above will loop over the string as long as no '\0' (null) character is encountered. Keep in mind this can be dangerous and is a common source of insecurity in programs: assuming your array is terminated by some value, and then getting an array that overflows, allowing arbitrary memory to be read. That's a high-level description, anyway.
For that reason it is much better to be explicit about string length, and to use safer methods such as std::string in regular use.
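For comparison, a sketch of the same loop with std::string, which carries its length with it, so there is no sentinel value to overrun:

#include <iostream>
#include <string>

int main(){
    std::string text = "Hello";
    for (char letter : text){
        std::cout << "[" << letter << "]";
    }
    std::cout << std::endl;
}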
Alright, and as a final example to put things into context: let's say I have several discrete "cells" that I want to link together into one coherent "list". The most natural implementation of this with non-contiguous memory is to use pointers to direct each node to the next one in the sequence.
With pointers you can create all sorts of complex data structures, trees, lists, and more!
struct Node {
    int value = 0;
    Node* previous = nullptr;
    Node* next = nullptr;
};

struct List {
    List(){
        head = new Node();
        tail = head;
    }
    ~List(){
        std::cout << "Destructor: " << std::endl;
        Node* current = head;
        while (current != nullptr){
            Node* next = current->next;
            std::cout << "Deleting: " << current->value << std::endl;
            delete current;
            current = next;
        }
    }
    void Append(int value){
        Node* previous = tail;
        tail = new Node();
        tail->value = value;
        tail->previous = previous;
        previous->next = tail;
    }
    void Print(){
        std::cout << "Printing the List:" << std::endl;
        for (Node* current = head; current != nullptr; current = current->next){
            std::cout << current->value << std::endl;
        }
    }
    Node* tail;
    Node* head;
};
And putting it to use:
List sampleList;
sampleList.Append(5);
sampleList.Append(6);
sampleList.Append(7);
sampleList.Append(8);
sampleList.Print();
List may seem complicated at a glance, but I am not introducing any new concepts here. These are exactly the same things I covered above, just implemented with a purpose.
Homework for you to completely understand pointers would be to provide two methods in List:
Node* NodeForIndex(int index)
void InsertNodeAtIndex(int index, int value)
This list implementation is exceptionally poor. std::list is a much better example, but in most cases, due to data locality, you really want to stick with std::vector. Pointers are exceptionally powerful tools and fundamental to computer science. You need to understand them to appreciate how the common data types you rely on every day are composed, and in time you will come to appreciate the explicit separation of value from pointer in C++.
Beyond simple pointers: std::shared_ptr
std::shared_ptr gives C++ the ability to deal with reference counted pointers. That is to say, it gives a similar behavior to Javascript object assignment (where an object is destroyed when the last reference to that object is set to null or destroyed).
std::shared_ptr is just like any other stack based variable. It ties its lifetime to the stack, and then holds a pointer to memory allocated on the heap. In this regard, it encapsulates the concept of a pointer in a safer manner than having to remember to delete.
Let's re-visit our earlier example that did leak memory:
{
    new ScopeTest("a"); //Create a
} //Whoa, we haven't destroyed it! Now we are leaking memory!
With a shared_ptr we can do the following:
{
    std::shared_ptr<ScopeTest> a(new ScopeTest("a")); //Create a
} //Destroy a
And, a little more complex:
{
    std::shared_ptr<ScopeTest> showingSharedOwnership;
    {
        std::shared_ptr<ScopeTest> a(new ScopeTest("a")); //"Create a" (ref count 1)
        showingSharedOwnership = a; //increments a's ref count by 1. (now 2)
    } //the shared_ptr named a is destroyed, decrements ref count by 1. (now 1)
} //"Destroy a" showingSharedOwnership dies and decrements the ref count by 1. (now 0)
I won't go too much further here, but this should open your mind to pointers.
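One side note (my addition, not part of the original answer): in modern C++ you would usually write std::make_shared instead of a bare new; it has the same ownership semantics but performs a single allocation for the object and its reference-count block together:

#include <memory>

struct Widget { int value = 0; };

int main(){
    // Equivalent ownership to std::shared_ptr<Widget> w(new Widget());
    auto w = std::make_shared<Widget>();
    w->value = 5;
}   // last reference dies here; the Widget is destroyed automatically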
I'm new to OpenCL and in order to get a better grasp of a few concepts I contrived a simple example of a geometric progression as follows (emphasis on contrived):
An array of N values and N coefficients (whose values could be anything, but in the example they all are the same) are allocated.
M steps are performed in sequence, where each value in the values array is multiplied by its corresponding coefficient in the coefficients array and assigned as the new value in the values array. Each step needs to fully complete before the next step can complete. I know this part is a bit contrived, but this is a requirement I want to enforce to help my understanding of OpenCL.
I'm only interested in the values in the values array after the final step has completed.
Here is the very simple OpenCL kernel (MultiplyVectors.cl):
__kernel void MultiplyVectors (__global float4* x, __global float4* y, __global float4* result)
{
    int i = get_global_id(0);
    result[i] = x[i] * y[i];
}
And here is the host program (main.cpp):
#include <CL/cl.hpp>
#include <vector>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main ()
{
    auto context = cl::Context (CL_DEVICE_TYPE_GPU);

    auto *sourceFile = fopen("MultiplyVectors.cl", "r");
    if (sourceFile == nullptr)
    {
        perror("Couldn't open the source file");
        return 1;
    }
    fseek(sourceFile, 0, SEEK_END);
    const auto sourceSize = ftell(sourceFile);
    auto *sourceBuffer = new char [sourceSize + 1];
    sourceBuffer[sourceSize] = '\0';
    rewind(sourceFile);
    fread(sourceBuffer, sizeof(char), sourceSize, sourceFile);
    fclose(sourceFile);

    auto program = cl::Program (context, cl::Program::Sources {std::make_pair (sourceBuffer, sourceSize + 1)});
    delete[] sourceBuffer;

    const auto devices = context.getInfo<CL_CONTEXT_DEVICES> ();
    program.build (devices);
    auto kernel = cl::Kernel (program, "MultiplyVectors");

    const size_t vectorSize = 1024;
    float coeffs[vectorSize] {};
    for (size_t i = 0; i < vectorSize; ++i)
    {
        coeffs[i] = 1.000001;
    }
    auto coeffsBuffer = cl::Buffer (context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof (coeffs), coeffs);

    float values[vectorSize] {};
    for (size_t i = 0; i < vectorSize; ++i)
    {
        values[i] = static_cast<float> (i);
    }
    auto valuesBuffer = cl::Buffer (context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, sizeof (values), values);

    kernel.setArg (0, coeffsBuffer);
    kernel.setArg (1, valuesBuffer);
    kernel.setArg (2, valuesBuffer);

    auto commandQueue = cl::CommandQueue (context, devices[0]);
    for (size_t i = 0; i < 1000000; ++i)
    {
        commandQueue.enqueueNDRangeKernel (kernel, cl::NDRange (0), cl::NDRange (vectorSize / 4), cl::NullRange);
    }

    printf ("All kernels enqueued. Waiting to read buffer after last kernel...");
    commandQueue.enqueueReadBuffer (valuesBuffer, CL_TRUE, 0, sizeof (values), values);

    return 0;
}
What I'm basically asking for is advice on how to best optimize this OpenCL program to run on a GPU. I have the following questions, based on my limited OpenCL experience, to get the conversation going:
Could I be handling the buffers better? I'd like to minimize any unnecessary ferrying of data between the host and the GPU.
What's the optimal work group configuration (in general at least; I know this can vary by GPU)? I'm not actually sharing any data between work items, and it doesn't seem like I'd benefit much from work groups here, but just in case.
Should I be allocating and loading anything into local memory for a work group (if that would make sense at all)?
I'm currently enqueueing one kernel for each step, which will create a work item for each 4 floats to take advantage of a hypothetical GPU with a SIMD width of 128 bits. I'm attempting to enqueue all of this asynchronously (although I'm noticing the Nvidia implementation I have seems to block on each enqueue until the kernel is complete) and then wait on the final one to complete. Is there a better approach to this that I'm missing?
Is there a design that would allow for only one call to enqueueNDRangeKernel (instead of one call per step) while maintaining the ability for each step to be efficiently processed in parallel?
Obviously, I know that the example problem I'm solving can be done in much better ways, but I wanted as simple an example as possible that illustrates a vector of values being operated on in a series of steps, where each step has to be completed fully before the next. Any help and pointers on how to best go about this would be greatly appreciated.
Thanks!