Strange CUDA behavior in vector multiplication program

I'm having some trouble with a very basic CUDA program. It multiplies two vectors on the host and on the device and then compares them, which works without a problem. The trouble starts when I try to test different numbers of threads and blocks for learning purposes. I have the following kernel:
__global__ void multiplyVectorsCUDA(float *a, float *b, float *c, int N){
    int idx = threadIdx.x;
    if (idx < N)
        c[idx] = a[idx] * b[idx];
}
which I call like:
multiplyVectorsCUDA<<<nBlocks, nThreads>>>(vector_a_d, vector_b_d, vector_c_d, N);
For the moment I've fixed nBlocks to 1, so I only vary the vector size N and the number of threads nThreads. From what I understand, there will be a thread for each multiplication, so N and nThreads should be equal.
The problem is the following:
1. I first call the kernel with N=16 and nThreads<16, which doesn't work. (This is expected.)
2. Then I call it with N=16 and nThreads=16, which works fine. (Again, as expected.)
3. But when I call it again with N=16 and nThreads<16, it still works!
I don't understand why the last call doesn't fail like the first one. It only fails again if I restart my PC.
Has anyone run into something like this before or can explain this behavior?

Wait, so are you calling all three in a row? I don't know the rest of your code, but are you sure you're clearing out the graphics memory you allocated between runs? If not, that could explain why it doesn't work the first time but does the third time with the same values: the output buffer still holds the results of the earlier successful run. It would also explain why it only fails again after a reboot, since rebooting clears all the allocated memory.
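If that's the case, a minimal sketch of the fix (assuming the buffers were allocated with cudaMalloc as usual; the names are from the question):
// Hypothetical: clear the device output buffer before every launch, so a
// partially-covering launch cannot be masked by results left over from an
// earlier successful run.
cudaMemset(vector_c_d, 0, N * sizeof(float));
multiplyVectorsCUDA<<<nBlocks, nThreads>>>(vector_a_d, vector_b_d, vector_c_d, N);
cudaDeviceSynchronize();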

Don't know if it's OK to answer my own question, but I realized I had a bug in my code when comparing the host and device vectors (that part of the code wasn't posted). Sorry for the inconvenience. Could someone please close this post, since it won't let me delete it?

Related

Why does my attempt at global synchronization not work?

I'm trying to use the code below, but the kernel exits after executing the loop only once.
If I remove the "while(...)" line, the loop runs, but the results are of course a mess.
If I declare "volatile __global uint *g_barrier", it freezes the PC with a black screen for a while and then the program deadlocks.
__kernel void Some_Kernel(__global uint *g_barrier)
{
    uint i, t;
    for (i = 1; i < MAX; i++) {
        // some useful code here
        barrier(CLK_GLOBAL_MEM_FENCE);
        if (get_local_id(0) == 0) atomic_add(g_barrier, 1);
        t = i * get_num_groups(0);
        while (*g_barrier < t); // try to sync it all
    }
}
You seem to be expecting all work groups to be scheduled to run in parallel. OpenCL does not guarantee this to happen. Some work groups may not start until some other work groups have entirely completed running the kernel.
Moreover, barriers only synchronise within a work group. Atomic operations on global memory are atomic with regard to other work groups too, but there is no guarantee about order.
If you need other work groups to complete some code before running some other code, you will need to enqueue each of those chunks of work separately on an in-order command queue (or connect them appropriately using events on an out-of-order queue). So for your example code, you need to remove your for and while loops, enqueue your kernel MAX-1 times, and pass i as a kernel argument, as sketched below.
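For example, a minimal host-side sketch of that idea (kernel, queue, global_size and local_size are assumed to be set up elsewhere, and the kernel is assumed to take i as its second argument):
// Hypothetical: enqueue one pass per iteration instead of looping inside
// the kernel; an in-order queue runs each enqueue to completion before
// starting the next, which gives the cross-work-group ordering you want.
for (cl_uint i = 1; i < MAX; i++) {
    clSetKernelArg(kernel, 1, sizeof(cl_uint), &i);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                           &global_size, &local_size, 0, NULL, NULL);
}
clFinish(queue);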
Depending on the capabilities of your device and the size of your data set, your other option is to submit only one large work group, though this is unlikely to give you good performance unless you have a lot of such smaller tasks which are independent from one another.
(I will point out that there is a good chance your question suffers from the XY problem - you have not stated the overall problem your code is trying to solve. So there may be better solutions than the ones I have suggested.)

Operate only on a subset of buffer in OpenCL kernel

Newbie to OpenCL here. I'm trying to convert a numerical method I've written to OpenCL for acceleration. I'm using the PyOpenCL package as I've written this once in Python already and as far as I can tell there's no compelling reason to use the C version. I'm all ears if I'm wrong on this, though.
I've managed to translate over most of the functionality I need in to OpenCL kernels. My question is on how to (properly) tell OpenCL to ignore my boundary/ghost cells. The reason I need to do this is that my method (for example) for point i accesses cells at [i-2:i+2], so if i=1, I'll run off the end of the array. So - I add some extra points that serve to prevent this, and then just tell my algorithm to only run on points [2:nPts-2]. It's easy to see how to do this with a for loop, but I'm a little more unclear on the 'right' way to do this for a kernel.
Is it sufficient to do, for example (pseudocode)
__kernel void myMethod(...) {
    gid = get_global_id(0);
    if (gid < nGhostCells || gid > nPts-nGhostCells) {
        retVal[gid] = 0;
    }
    // Otherwise perform my calculations
}
or is there another/more appropriate way to enforce this constraint?
It looks sufficient.
The branch goes the same way for nPts - nGhostCells*2 of the points, and it is predictable if nPts and nGhostCells are compile-time constants. Even if it is not predictable, a sufficiently large nPts relative to nGhostCells (say 1024 vs 3) should not be noticeably slower than a branch-free version, apart from the latency of the "or" operation. Even that latency should be hidden behind the array-access latency, thanks to thread-level parallelism.
Only at the boundary points will typically 16 or 32 threads lose some performance, and only for several clock cycles, because of the lock-step execution of SIMD-like architectures.
If you happen to have chaotic branching, such as a data-driven code path, then you should split the work into different kernels (one per region), or sort the data before the kernel so that the branch divergence between neighboring threads is minimized.
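For completeness, a sketch of the guard pattern from the question as a full kernel (the float stencil body, the argument list, and the >= on the upper bound are assumptions about the intended indexing):
__kernel void myMethod(__global const float *in,
                       __global float *retVal,
                       const int nPts,
                       const int nGhostCells)
{
    int gid = get_global_id(0);
    // Ghost/boundary cells: write a known value and return early.
    if (gid < nGhostCells || gid >= nPts - nGhostCells) {
        retVal[gid] = 0.0f;
        return;
    }
    // Interior point: the [gid-2, gid+2] stencil is now safe.
    retVal[gid] = 0.25f * (in[gid-2] + in[gid-1] + in[gid+1] + in[gid+2]);
}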

Product of range in Prolog

I need to write a program which calculates the product over a range (the product of 1/ln(j) for j from Nt to N).
I have written the following code:
mult(N,N,R,R).
mult(N,Nt,R,Rt) :-
    N1 = Nt+1,
    R1 = Rt*(1/(log(Nt))),
    mult(N,N1,R,R1).
This should implement the basic product from Nt to N of 1/ln(j). As far as I understand, it has to stop when Nt and N are equal. However, I can't get it working:
?- mult(10,2,R,1), write(R).
ERROR: Out of global stack
Is there another way to implement this loop, without using SWI-Prolog's default libraries?
Your program never terminates! To see this, consider the following failure-slice of your program:
mult(N,N,R,R) :- false.
mult(N,Nt,R,Rt) :-
    N1 = Nt+1,
    R1 = Rt*(1/(log(Nt))),
    mult(N,N1,R,R1), false.
This new program never terminates, and thus the original program doesn't terminate either. To see why, consider the two (=)/2 goals. In the first, the new variable N1 is unified with something; this will always succeed. Similarly, the second goal will always succeed. There is never a possibility of failure before the recursive goal, and thus the program never terminates.
You need to add some goal, or replace existing goals, in the visible part. Maybe add
    N > Nt.
Further, it might be a good idea to replace the two (=)/2 goals by (is)/2. But this is not required for termination, strictly speaking.
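Putting both suggestions together, a sketch of the repaired predicate (one reading of the intended meaning):
% The N > Nt guard makes the recursive clause fail once the upper bound
% is reached, so the recursion terminates; is/2 actually evaluates the
% arithmetic instead of building ever-growing terms.
mult(N, N, R, R).
mult(N, Nt, R, Rt) :-
    N > Nt,
    N1 is Nt + 1,
    R1 is Rt * (1 / log(Nt)),
    mult(N, N1, R, R1).
A query such as mult(10, 2, R, 1) now terminates instead of running out of the global stack.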
Out of global stack means you entered a too-long chain of recursion, possibly an infinite one.
The problem stems from using = instead of is for the arithmetic.
mult(N,N,R,R).
mult(N,Nt,R,Rt) :-
    N1 is Nt+1,
    R1 is Rt*(1/(log(Nt))),
    mult(N,N1,R,R1).
You might want to insert a cut in your first clause (making it mult(N,N,R,R) :- !.) to avoid going on after getting an answer.
If you have a graphical debugger (like the one in SWI-Prolog), try setting 'trace' and 'debug' on and running. You'll soon realize that performing N1 = Nt+1 with Nt as 2 yields the term 2+1. Since 2+1+1+1+(...) will never unify with 10, that's the problem right there.

Does OpenCL always zero-initialize device memory?

I've noticed that often, global and constant device memory is initialized to 0. Is this a universal rule? I wasn't able to find anything in the standard.
No, it doesn't. For instance, I had this small kernel to test atomic add:
kernel void atomicAdd(volatile global int *result){
    atomic_add(&result[0], 1);
}
Calling it with this host code (pyopencl + unittest):
def test_atomic_add(self):
    NDRange = (4, 4)
    result = np.zeros(1, dtype=np.int32)
    out_buf = cl.Buffer(self.ctx, self.mf.WRITE_ONLY, size=result.nbytes)
    self.prog.atomicAdd(self.queue, NDRange, NDRange, out_buf)
    cl.enqueue_copy(self.queue, result, out_buf).wait()
    self.assertEqual(result, 16)
was always returning the correct value when using my CPU. However, on an ATI HD 5450 the returned value was always junk.
And if I recall correctly, on an NVIDIA card the first run returned the correct value, i.e. 16, but on the following runs the values were 32, 48, etc. It was reusing the same memory location, with the old value still stored there.
When I corrected my host code with this line (copying the 0 value to the buffer):
out_buf = cl.Buffer(self.ctx, self.mf.WRITE_ONLY | self.mf.COPY_HOST_PTR, hostbuf=result)
everything worked fine on all devices.
As far as I know, there is no sentence in the standard that states this.
Maybe some driver implementations do it automatically, but you shouldn't rely on it.
I remember once having a case where a buffer was not initialized to 0, but I can't remember the exact OS + driver combination.
What is probably going on is that a typical OS does not use even 1% of today's device memory, so when you start an OpenCL application there is a high probability that your allocation lands in a zone that has never been touched (and therefore still reads as zero).
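If you need deterministic contents, fill the buffer yourself. A minimal sketch (requires OpenCL 1.2; queue, out_buf and nbytes are assumptions):
// Hypothetical: explicitly zero-fill a freshly created buffer instead of
// relying on the driver handing out zeroed memory.
cl_int zero = 0;
clEnqueueFillBuffer(queue, out_buf, &zero, sizeof(zero),
                    0, nbytes, 0, NULL, NULL);
clFinish(queue);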
It depends on the platform you are developing for. As @DarkZeros mentioned in the previous answer, the spec does not mandate anything; see page 104 of the OpenCL 2.1 spec.
However, in our experience on Mali GPUs, the driver initializes all elements of a newly allocated buffer to zero. That holds for the first touch only: later on, when the buffer is released and its memory space is reused by a new buffer, that memory is not re-initialized to zero. Again, the first touch sees zero values; after that, you see the usual gibberish.
Hope this helps after such a long time!

OpenCL kernel not saving results?

I'm taking my first steps in OpenCL, and I have a problem where, I think, the results aren't saved correctly. I first made a simple version of the program that works, giving the correct results, so main is functioning correctly, the arrays are filled the way they are supposed to be, and so on.
Brief explanation of the program:
I have a grid of 4x4 points (the OP array) in a field that is 40x40. There is a plane flying above this field, and its route is divided into 100 segments (the Segment array). Coord is a typedef struct, currently with only a double x and a double y value. I know there are simpler solutions, but I have to use this later on, so I do it this way. The kernel is supposed to calculate the distance from a point (an OP) to a segment. The result has to be saved in the array Distances.
Description of the kernel: it gets its global id, from which it calculates which OP and which segment it has to work with. With that info it fetches the x and y of those two and calculates the x and y distances between them. From those distances and the flight altitude it then computes a square root.
The problem is that the Distances table only shows zeros :P
Kernel: http://pastebin.com/U9hTWvv2 (note: the parameter declared as __global const Coord* Afstanden has to be __global double* Afstanden)
OpenCL code in main: http://pastebin.com/H3mPwuUH (because I'm not 100% sure I'm doing that part right)
I can give you the whole program, but it's mostly Dutch :P
I tried to explain everything as well as possible; if something is not clear, I'll try to clear it up ;)
As I made numerous little mistakes in the first kernel, here is its successor: http://pastebin.com/rs6T1nKs. The OpenCL part is also slightly updated: http://pastebin.com/4NbUQ40L, because it had one or two faults in it as well.
It should work now, in my opinion... but it doesn't; the problem still stands.
Well, it seems that the "Afstanden" variable is declared as a "const Coord*". Remove the constness - i.e., make it "Coord*" only - and make sure that when you call clCreateBuffer you pass "CL_MEM_WRITE_ONLY" or "CL_MEM_READ_WRITE" as an argument.
"The problem is, that the Distances table only shows zero's :P"
You are probably initializing Afstanden on the CPU side with zeros. Am I correct? This would explain this behavior.
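For reference, a minimal host-side sketch of the pattern both answers point at (the buffer size, the argument index, and the identifiers are assumptions based on the question):
// Hypothetical: create Afstanden as a writable device buffer, launch the
// kernel, then read the distances back. If the read-back is missing, the
// host array keeps whatever it was initialized with (e.g. zeros).
cl_int err;
cl_mem afstanden_buf = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
                                      nOP * nSeg * sizeof(double), NULL, &err);
clSetKernelArg(kernel, 3, sizeof(cl_mem), &afstanden_buf);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
clEnqueueReadBuffer(queue, afstanden_buf, CL_TRUE, 0,
                    nOP * nSeg * sizeof(double), afstanden, 0, NULL, NULL);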
