I have a very large kernel which uses ~1000 temporary variables to compute ~1000 equations. It is therefore safe to assume that all of the temporary variables will be spilled to private off-chip memory, a.k.a. CUDA "local" memory (I know it's bad, but there is no other way).
My question is whether common subexpressions are eliminated between neighboring lines like these:
const float t747 = t472*t28*t26*t715*t30*t11;
const float t748 = t472*t28*t717*t26*t30*t11;
As you can see, the only difference is the variable t717 versus t715. The question is whether those two lines translate into 7 or 12 global loads.
If the target compiler (for an Nvidia Kepler GPU in my case) does not use registers to cache common subexpressions between lines, I will need to implement that caching myself.
Note: All code is generated automatically, so manual tuning won't be possible.
EDIT: All t0-t999 variables are declared as "const float".
The compiler translates all the global reads as direct reads, so in your case that is 12 reads.
This is because global memory is treated as volatile, so the values cannot be kept cached in registers across statements. However, if you simply do this (I think you know it, but anyway...):
const float temp = t472*t28*t26*t30*t11;
const float t747 = temp*t715;
const float t748 = temp*t717;
The compiler will translate that into 7 global reads.
NOTE: At least this was valid on older architectures; I don't know whether some newer compiler/architecture can cleverly detect these cases and optimize them.
I have the following OpenCL kernel, which copies values from one buffer to another, optionally inverting the value (the 'invert' arg can be 1 or -1):-
__kernel void extraction(__global const short* src_buff, __global short* dest_buff, const int record_len, const int invert)
{
    int i = get_global_id(0); // Index of record in buffer
    int j = get_global_id(1); // Index of value in record
    dest_buff[(i * record_len) + j] = src_buff[(i * record_len) + j] * invert;
}
The source buffer contains one or more "records", each containing N (record_len) short values. All records in the buffer are of equal length, and record_len is always a multiple of 32.
The global size is 2D (number of records in the buffer, record length), and I chose this as it seemed to make best use of the GPU parallel processing, with each thread being responsible for copying just one value in one record in the buffer.
(The local work size is set to NULL by the way, allowing OpenCL to determine the value itself).
After reading about vectors recently, I was wondering if I could use these to improve on the performance? I understand the concept of vectors but I'm not sure how to use them in practice, partly due to lack of good examples.
I'm sure the kernel's performance is pretty reasonable already, so this is mainly out of curiosity to see what difference it would make using vectors (or other more suitable approaches).
At the risk of being a bit naive here, could I simply change the two buffer arg types to short16, and change the second value in the 2-D global size from "record length" to "record length / 16"? Would this result in each kernel thread copying a block of 16 short values between the buffers?
Your naive assumption is basically correct, though you may want to add a hint to the compiler that this kernel is optimized for the vector type (Section 6.7.2 of the spec). In your case, you would add
__attribute__((vec_type_hint(short16)))
above your kernel function. So in your example, you would have
__attribute__((vec_type_hint(short16)))
__kernel void extraction(__global const short16* src_buff, __global short16* dest_buff, const int record_len, const int invert)
{
    int i = get_global_id(0); // Index of record in buffer
    int j = get_global_id(1); // Index of value in record
    dest_buff[(i * record_len) + j] = src_buff[(i * record_len) + j] * (short)invert; // cast so the scalar matches the vector element type
}
You are correct in that your 2nd global dimension should be divided by 16, and your record_len should also be divided by 16. Also, if you were to specify the local size instead of giving it NULL, you would also want to divide that by 16.
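For concreteness, the host-side changes might look roughly like this (a sketch in plain C; queue, kernel, num_records and record_len are placeholders for whatever your host code already uses, and the local size shown is only an example):

#include <CL/cl.h>

/* Hypothetical helper: enqueue the short16 version of the kernel.
 * record_len is the record length in shorts; the kernel sees it divided by 16. */
cl_int enqueue_extraction16(cl_command_queue queue, cl_kernel kernel,
                            size_t num_records, size_t record_len)
{
    cl_int vec_len = (cl_int)(record_len / 16);
    clSetKernelArg(kernel, 2, sizeof(cl_int), &vec_len);  /* the record_len argument */

    size_t global[2] = { num_records, record_len / 16 };  /* 2nd dimension divided by 16 */
    size_t local[2]  = { 1, 32 };                         /* example only; must divide global evenly */
    return clEnqueueNDRangeKernel(queue, kernel, 2, NULL,
                                  global, local, 0, NULL, NULL);
}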
There are some other things to consider though.
You might think choosing the largest vector size should provide the best performance, especially with such a simple kernel. But in my experience, that is rarely the optimal size. You can try asking clGetDeviceInfo for CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT, but for me this is rarely accurate (also, it may give you 1, meaning the compiler will try auto-vectorization or the device doesn't have vector hardware). It is best to try different vector sizes and see which is fastest.
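For reference, the query itself is straightforward (a sketch; device is assumed to come from your existing clGetDeviceIDs call):

#include <CL/cl.h>
#include <stdio.h>

/* Sketch: query the preferred short vector width for a device. */
void print_preferred_short_width(cl_device_id device)
{
    cl_uint width = 0;
    clGetDeviceInfo(device, CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT,
                    sizeof(width), &width, NULL);
    printf("Preferred short vector width: %u\n", width);  /* 1 may mean scalar hardware / auto-vectorization */
}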
If your device supports auto-vectorization, and you want to give it a go, it may help to remove your record_len parameter and replace it with get_global_size(1) so the compiler/driver can take care of dividing record_len by whatever vector size it picks. I would recommend doing this anyway, assuming record_len is equal to the global size you gave that dimension.
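A sketch of what that variant could look like (assuming the 2nd global dimension is exactly the record length in short16 elements):

__attribute__((vec_type_hint(short16)))
__kernel void extraction(__global const short16* src_buff,
                         __global short16* dest_buff,
                         const int invert)
{
    int i = get_global_id(0);      // record index
    int j = get_global_id(1);      // position within record, in short16 units
    int len = get_global_size(1);  // record length in short16 units
    dest_buff[(i * len) + j] = src_buff[(i * len) + j] * (short)invert;
}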
Also, you gave NULL to the local size argument so that the implementation picks a size automatically. It is guaranteed to pick a size that works, but it will not necessarily pick the most optimal size.
Lastly, for general OpenCL optimizations, you may want to take a look at the NVIDIA OpenCL Best Practices Guide for NVidia hardware, or the AMD APP SDK OpenCL User Guide for AMD GPU hardware. The NVidia one is from 2009, and I'm not sure how much their hardware has changed since. Notice though that it actually says:
The CUDA architecture is a scalar architecture. Therefore, there is no performance benefit from using vector types and instructions. These should only be used for convenience.
Older AMD hardware (pre-GCN) benefited from using vector types, but AMD suggests not using them on GCN devices (see mogu's comment). Also if you are targeting a CPU, it will use AVX hardware if available.
Newbie to OpenCL here. I'm trying to convert a numerical method I've written to OpenCL for acceleration. I'm using the PyOpenCL package as I've written this once in Python already and as far as I can tell there's no compelling reason to use the C version. I'm all ears if I'm wrong on this, though.
I've managed to translate over most of the functionality I need into OpenCL kernels. My question is on how to (properly) tell OpenCL to ignore my boundary/ghost cells. The reason I need to do this is that my method (for example) for point i accesses cells at [i-2:i+2], so if i=1, I'll run off the edge of the array. So I add some extra points that serve to prevent this, and then just tell my algorithm to only run on points [2:nPts-2]. It's easy to see how to do this with a for loop, but I'm a little more unclear on the 'right' way to do this for a kernel.
Is it sufficient to do, for example (pseudocode)
__kernel void myMethod(...) {
    int gid = get_global_id(0);
    if (gid < nGhostCells || gid >= nPts - nGhostCells) {
        retVal[gid] = 0;
        return;
    }
    // Otherwise perform my calculations
}
or is there another/more appropriate way to enforce this constraint?
It looks sufficient.
The branch takes the same path for the nPts - nGhostCells*2 interior points, and it is predictable if nPts and nGhostCells are compile-time constants. Even if it is not predictable, a sufficiently large nPts relative to nGhostCells (say 1024 vs 3) should not be noticeably slower than a zero-branching version, apart from the latency of the "or" operation. Even that "or" latency should be hidden behind the array-access latency, thanks to thread-level parallelism.
At those "break" points, mostly 16 or 32 threads would lose some performance and only for several clock cycles because of the lock-step running of SIMD-like architectures.
If you happen to write chaotically branching code, such as data-driven code paths, then you should split it into different kernels (for different regions) or sort the data before the kernel so that the average divergence between neighboring threads is minimized.
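For completeness, a minimal sketch of the zero-branching alternative mentioned above: launch only nPts - 2*nGhostCells work-items and offset the index, so no guard is needed (names are taken from the question; nGhostCells could equally be baked in with -D at build time):

__kernel void myMethodInterior(__global float* retVal, const int nGhostCells)
{
    // Launched with global size nPts - 2*nGhostCells, so gid always lands
    // on an interior point and [gid-2 : gid+2] stays in range.
    int gid = get_global_id(0) + nGhostCells;
    // ... perform the calculation on retVal[gid]
}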
I need to pass a bunch of constants into my OpenCL kernel. Luckily, these are mostly known at compile-time, meaning: kernel compile-time. And therefore I can pass them in as a bunch of defines like -D leftOuterMargin=3 -D rightOuterMargin=2 -D leftInnerMargin=1 -D rightInnerMargin=2 ....
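For example, on the host side those defines end up in the kernel build options, something like this (a sketch in plain C; program and device come from the usual setup):

#include <CL/cl.h>

/* Sketch: how I currently pass the constants at kernel compile time. */
cl_int build_with_margins(cl_program program, cl_device_id device)
{
    const char *opts =
        "-D leftOuterMargin=3 -D rightOuterMargin=2 "
        "-D leftInnerMargin=1 -D rightInnerMargin=2";
    return clBuildProgram(program, 1, &device, opts, NULL, NULL);
}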
But this gets a bit unwieldy, and makes it hard to write re-usable functions inside the kernel. I'm looking for something a bit more structured, like structs. However, structs would seem to be stored either in constant space (if created using a global constant instantiation, probably via an appropriate #define), or private space (if created inside the kernel function, again probably via an appropriate #define)?
What options are available for structuring constant data in a kernel, that is known at compile-time? Some things I hope for:
structured, can just pass a single pointer-like thing into reusable methods, rather than having 8-16 clumsily-named #defines, or 8 parameters into each non-kernel method
the values load quickly when used, just like normal #defined values
the values can be used by the compiler for optimizations, eg if I use one for a loop upper-bound, that loop can be fully unwrapped at compile time
won't increase register pressure
relatively standard, will work generally, across different gpu platforms/manufacturers
This:
the values load quickly when used, just like normal #defined values
the values can be used by the compiler for optimizations, eg if I use one for a loop upper-bound, that loop can be fully unwrapped at compile time
won't increase register pressure
Is incompatible with this:
can just pass a single pointer-like thing into reusable methods [...] or 8 parameters into each non-kernel method
If you want the values to be constant expressions (in other words, "known to the compiler"), then the only options are #define and global-scope const. No passing values dynamically by parameter or by indirection.
I suggest you make a struct with your different options:
struct Options {
    int leftOuterMargin;
    int rightOuterMargin;
    int leftInnerMargin;
    int rightInnerMargin;
    // and so on ...
};
Then you can define a header included in all translation units where the constants are required:
// constants.h
static const struct Options constants = {
    .leftOuterMargin = 3,
    .rightOuterMargin = 2,
    .leftInnerMargin = 1,
    .rightInnerMargin = 2
};
The compiler should be able to optimize your code just as well as if you had used #define.
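For illustration, here is a rough sketch of how a reusable non-kernel helper could use the struct; because the initializer of constants is a compile-time constant, the compiler can fold the field accesses and fully unroll the loop, much like with #defined values (note: on OpenCL 1.x you may need to place the struct in the __constant address space):

#include "constants.h"   // the header sketched above

// Hypothetical helper: sums the first leftOuterMargin values of a row.
// The loop bound is a compile-time constant, so the loop can be fully unrolled.
float sumLeftMargin(__global const float* row)
{
    float acc = 0.0f;
    for (int k = 0; k < constants.leftOuterMargin; ++k)
        acc += row[k];
    return acc;
}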
I was wondering about the differences between variables and constants, as I see different declarations of variables/constants in the code written by ex-colleagues.
I know that a variable is something that can be changed throughout the code, while the value of a constant is fixed and can't be changed. So far I've written everything as variables (even when the value will not change). Is my practice incorrect? Perhaps my code is not complicated, which is why I use variables all the time.
Anyhow, if my understanding is proven wrong, please enlighten me with the correct guidelines on this matter.
It is a good code practice to use constants whenever possible.
At compile time it is known that only read operations can be performed on those values, so the compiler can apply some access optimizations to the code automatically, which can improve performance.
Another difference is that constants are stored in a separate, preallocated section of your binary (compiler dependent, but on most compilers this is what happens), which makes them easier to access, and they don't get allocated/deallocated all the time (another performance optimization).
And finally, constants can be evaluated at compile time.
For example, if you have an expression made entirely of constants, something like the following:
float a = const1 * const2 / const3 + const4;
Then the whole expression will be evaluated at compile time, saving cycles at runtime (since the value will always be the same).
Some popular constants that benefit from this sort of optimization are PI, PI/2, PI/4, and 1/PI.
const int const_a = 10;
int static_a = 70;

public void sample()
{
    static_a = const_a + 10; // This is correct
    // const_a = 88;         // This is wrong
}
In the above example, if we declare a variable as const, we cannot assign to it anywhere, but we can still read and use its value.
I'm trying to write a histogram kernel in OpenCL to compute 256-bin R, G, and B histograms of an RGBA32F input image. My kernel looks like this:
const sampler_t mSampler = CLK_NORMALIZED_COORDS_FALSE |
CLK_ADDRESS_CLAMP|
CLK_FILTER_NEAREST;
__kernel void computeHistogram(read_only image2d_t input, __global int* rOutput,
__global int* gOutput, __global int* bOutput)
{
    int2 coords = {get_global_id(0), get_global_id(1)};
    float4 sample = read_imagef(input, mSampler, coords);
    uchar rbin = floor(sample.x * 255.0f);
    uchar gbin = floor(sample.y * 255.0f);
    uchar bbin = floor(sample.z * 255.0f);
    rOutput[rbin]++;
    gOutput[gbin]++;
    bOutput[bbin]++;
}
When I run it on a 2100 x 894 image (1,877,400 pixels) I tend to see only around 1,870,000 total values being recorded when I sum up the histogram values for each channel. It's also a different number each time. I did expect this, since once in a while two work-items probably grab the same value from the output array and increment it, effectively cancelling out one increment operation (I'm assuming?).
The 1,870,000 output is for a {1,1} workgroup size (which is what seems to get set by default if I don't specify otherwise). If I force a larger workgroup size like {10,6}, I get a drastically smaller sum in my histogram (proportional to the change in workgroup size). This seemed strange to me, but I'm guessing what happens is that all of the work items in the group increment the output array value at the same time, and so it just counts as a single increment?
Anyways, I've read in the spec that OpenCL has no global memory synchronization, only synchronization within local workgroups using their __local memory. The histogram example by nVidia breaks up the histogram workload into a bunch of subproblems of a specific size, computes their partial histograms, then merges the results into a single histogram afterwards. This doesn't seem like it'll work all that well for images of arbitrary size. I suppose I could pad the image data out with dummy values...
Being new to OpenCL, I guess I'm wondering if there's a more straightforward way to do this (since it seems like it should be a relatively straightforward GPGPU problem).
Thanks!
As stated before, you write into shared memory unsynchronized and non-atomically, which leads to lost updates. If the picture is big enough, I have a suggestion:
Split the work into a one-dimensional range over columns or rows. Let each work-item sum up the histogram for its column or row, and afterwards accumulate it into the global histogram with atomic atom_inc. This keeps most of the summing in private memory, which is much faster, and reduces the number of atomic operations.
If you work in two dimensions, you can do it on parts of the picture.
[EDIT:]
I think I have a better answer ;-)
Have a look at: http://developer.download.nvidia.com/compute/opencl/sdk/website/samples.html#oclHistogram
They have an interesting implementation there...
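For illustration, here is a rough sketch of the related per-work-group approach (a 256-bin __local histogram per group, merged into the global histogram with atomics; red channel only for brevity, and it assumes the global size matches the image dimensions and at least OpenCL 1.1 atomics):

__kernel void histogramLocal(read_only image2d_t input, __global int* rOutput)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
    __local int localHist[256];
    int lid = get_local_id(1) * get_local_size(0) + get_local_id(0);
    int lsize = get_local_size(0) * get_local_size(1);

    for (int b = lid; b < 256; b += lsize)      // zero the local histogram
        localHist[b] = 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    int2 coords = (int2)(get_global_id(0), get_global_id(1));
    float4 sample = read_imagef(input, smp, coords);
    atomic_inc(&localHist[(int)(sample.x * 255.0f)]);
    barrier(CLK_LOCAL_MEM_FENCE);

    for (int b = lid; b < 256; b += lsize)      // merge into the global histogram
        atomic_add(&rOutput[b], localHist[b]);
}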
Yes, you're writing to shared memory from many work-items at the same time, so you will lose updates if you don't perform them in a safe way (or worse, so just don't do it). Increasing the group size actually increases the utilization of your compute device, which in turn increases the likelihood of conflicts, so you end up losing more updates.
However, you seem to be confusing synchronization (ordering of thread execution) with shared-memory updates (which typically require either atomic operations, or synchronization plus memory barriers, to make sure the memory updates are visible to other threads that are synchronized).
Synchronization plus barriers is not particularly useful in your case (and, as you noted, it is not available for global synchronization anyway; the reason is that two work-groups may never run concurrently, so trying to synchronize them is nonsensical). It is typically used when all threads first work on generating a common data set, and then all start to consume that data set with a different access pattern.
In your case, you can use atomic operations (e.g. atom_inc, see http://www.cmsoft.com.br/index.php?option=com_content&view=category&layout=blog&id=113&Itemid=168). However, note that updating a highly contended memory address (say, because you have thousands of threads all trying to write to only 256 ints) is likely to yield poor performance. All the hoops typical histogram code goes through are there to reduce the contention on the histogram data.
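For reference, the smallest change to your original kernel along those lines is to turn the plain increments into atomics (a sketch using the OpenCL 1.1 built-in atomic_inc; on OpenCL 1.0 you would use atom_inc plus the base-atomics extension). It is correct, but every work-item contends for just 256 bins per channel, so expect it to be slow:

__kernel void computeHistogram(read_only image2d_t input, __global int* rOutput,
                               __global int* gOutput, __global int* bOutput)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
    int2 coords = (int2)(get_global_id(0), get_global_id(1));
    float4 sample = read_imagef(input, smp, coords);
    atomic_inc(&rOutput[(int)(sample.x * 255.0f)]);  // one atomic per channel per pixel
    atomic_inc(&gOutput[(int)(sample.y * 255.0f)]);
    atomic_inc(&bOutput[(int)(sample.z * 255.0f)]);
}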
You can check
The histogram example from AMD Accelerated Parallel Processing (APP) SDK.
Chapter 14 - Image Histogram of OpenCL Programming Guide book (ISBN-13: 978-0-321-74964-2).
GPU Histogram - Sample code from Apple