How do I make the following sequential code parallel? - opencl

I want to make the following kernel code parallel. In the code below, the size of a is n, the sizes of b and c are 8*n, and the size of d is some value less than n (e.g. 3*n/4):
j = 0;
for (i = 0; i < n; i++)
{
    if (a[b[i]] != a[c[i]])
    {
        d[j] = b[i];
        j++;
    }
}
Since the numbers of elements of a and d aren't the same, I am facing a problem with setting i = get_global_id(0): if the 'if' condition fails for some work-items, nothing would be placed in the corresponding elements of d. So how do I make this parallel?
If not that, then is it possible to delete the "no value" positions of d inside the kernel, if I store the positions where values were placed in d in a separate array?

Basically this is parallel array compaction based on a predicate.
Have a look at the techniques described in
Parallel Prefix Sum (Scan) with CUDA, or in Thrust:
http://docs.nvidia.com/cuda/thrust/index.html
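For reference, here is a minimal sequential sketch (plain C, my own illustration, not from either source) of how scan-based compaction works: an exclusive prefix sum over the predicate flags gives each passing element its output slot, and each of the three passes has a standard parallel equivalent.

#include <stdlib.h>

/* Compaction via scan: flag -> exclusive prefix sum -> scatter.
   Returns the number of elements written to d. */
int compact(const int *a, const int *b, const int *c, int *d, int n)
{
    int *slot = malloc((size_t)n * sizeof(int));
    int count = 0;
    for (int i = 0; i < n; i++) {   /* exclusive scan of the predicate */
        slot[i] = count;
        count += (a[b[i]] != a[c[i]]);
    }
    for (int i = 0; i < n; i++)     /* scatter: every slot is known up front */
        if (a[b[i]] != a[c[i]])
            d[slot[i]] = b[i];
    free(slot);
    return count;
}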

Related

Iterating results back into an OpenCL kernel

I have written an OpenCL kernel that takes 25 million points and checks them relative to two lines, A and B. It then outputs two lists: set A, containing all of the points found to be beyond line A, and likewise for B.
I'd like to run the kernel repeatedly, updating the input points with each of the line result sets in turn (and also updating the checking line). I'm guessing that reading the two result sets out of the kernel, forming them into arrays, and then passing them back in one at a time as inputs is quite a slow solution.
As an alternative, I've tested keeping a global index in the kernel that logs which points relate to which line. This is updated in each line-checking cycle. During each iteration, the index for each point in the overall set is switched to 0 (no line), A, B, and so forth (i.e. the related line id). In subsequent iterations, only points whose index matches the 'live' set being checked in that cycle (i.e. tagged with A for set A) are tested further.
The problem is that, in each iteration, the kernels still have to check through the full index (i.e. all 25m points) to discover whether or not they are in the 'live' set. As a result, the speed of each cycle does not significantly improve as the size of the result set decreases over time. Again, this seems a slow solution: whilst it avoids passing too much information between GPU and CPU, it means that a large number of the work-items aren't doing very much work at all.
Is there an alternative solution to what I am trying to do here?
You could use atomics to sort the outputs into two arrays: i.e. if a point is in A, get its position by incrementing the A counter and write it into A, and do the same for B.
Using global atomics on everything might be horribly slow (fast on AMD, slow on NVIDIA, no idea about other devices). Instead, you can use a local atomic_inc on a zero-initialized local integer to do exactly the same thing (but only for the local set of work-items), and then at the end do an atomic_add to each global counter based on your local counters.
To put this more clearly in code (my explanation is not great):
__local int local_a, local_b;
int lid = get_local_id(0);

// zero the per-work-group counters first
if (lid == 0)
{
    local_a = 0;
    local_b = 0;
}
barrier(CLK_LOCAL_MEM_FENCE);

// each work-item claims a slot within its group's A or B run
int id;
if (is_a)
    id = atomic_inc(&local_a);
else
    id = atomic_inc(&local_b);
barrier(CLK_LOCAL_MEM_FENCE);

// one work-item reserves the group's runs in the global buffers
__local int a_base, b_base;
if (lid == 0)
{
    a_base = atomic_add(a_counter, local_a);
    b_base = atomic_add(b_counter, local_b);
}
barrier(CLK_LOCAL_MEM_FENCE);

// scatter each work-item's data to its reserved slot
if (is_a)
    a_buffer[id + a_base] = data;
else
    b_buffer[id + b_base] = data;
This involves faffing around with atomics, which are inherently slow, but depending on how quickly your dataset reduces it might be much faster overall. Additionally, if the B data is not considered live, you can omit getting the B ids and all the atomics involving B, as well as the write-back.
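On the host side, the payoff of the two global counters is that only the live prefix of each buffer needs to be read back. A minimal sketch of that step (the buffer names, the cl_float4 point type, and the minimal error handling are my assumptions, not part of the answer):

#include <CL/cl.h>

/* Read the A counter, then read back only that many results. */
cl_int read_live_set(cl_command_queue queue, cl_mem a_counter_buf,
                     cl_mem a_buf, cl_float4 *out, cl_int *count)
{
    cl_int err = clEnqueueReadBuffer(queue, a_counter_buf, CL_TRUE, 0,
                                     sizeof(cl_int), count, 0, NULL, NULL);
    if (err != CL_SUCCESS)
        return err;
    return clEnqueueReadBuffer(queue, a_buf, CL_TRUE, 0,
                               (size_t)(*count) * sizeof(cl_float4),
                               out, 0, NULL, NULL);
}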

Replacing functions with Table Lookups

I've been watching this MSDN video with Brian Beckman and I'd like to better understand something he says:
Every imperative programmer goes through this phase of learning that
functions can be replaced with table lookups
Now, I'm a C# programmer who never went to university, so perhaps somewhere along the line I missed out on something everyone else learned to understand.
What does Brian mean by:
functions can be replaced with table lookups
Are there practical examples of this being done and does it apply to all functions? He gives the example of the sin function, which I can make sense of, but how do I make sense of this in more general terms?
Brian just showed that functions are data too. A function in general is just a mapping of one set to another: y = f(x) is a mapping of the set {x} to the set {y}: f : X -> Y. Tables are mappings as well: [x1, x2, ..., xn] -> [y1, y2, ..., yn].
If a function operates on a finite set (which is the case in programming), then it can be replaced with a table that represents that mapping. As Brian mentioned, every imperative programmer goes through this phase of realizing that functions can be replaced with table lookups, purely for performance reasons.
But that doesn't mean that all functions easily can, or should, be replaced with tables. It only means that you theoretically can do it for every function. So the conclusion is that functions are data, because tables are (in the context of programming, of course).
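As a concrete sketch of that claim (my own example, in C, not from the answer): a pure function on a small finite domain and the table representing the same mapping are interchangeable.

#include <assert.h>

/* A pure function on a finite domain... */
int square(int n) { return n * n; }

/* ...and the table that represents the same mapping on {0..5}. */
static const int square_table[] = { 0, 1, 4, 9, 16, 25 };

int main(void)
{
    for (int n = 0; n <= 5; n++)
        assert(square(n) == square_table[n]); /* same mapping, two forms */
    return 0;
}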
There is a lovely trick in Mathematica that creates a table as a side-effect of evaluating function-calls-as-rewrite-rules. Consider the classic slow-fibonacci
fib[1] = 1
fib[2] = 1
fib[n_] := fib[n-1] + fib[n-2]
The first two lines create table entries for the inputs 1 and 2. This is exactly the same as saying
var fibTable = {};
fibTable[1] = 1;
fibTable[2] = 1;
in JavaScript. The third line of Mathematica says "please install a rewrite rule that will replace any occurrence of fib[n_], after substituting the pattern variable n_ with the actual argument of the occurrence, with fib[n-1] + fib[n-2]." The rewriter will iterate this procedure, and eventually produce the value of fib[n] after an exponential number of rewrites. This is just like the recursive function-call form that we get in JavaScript with
function fib(n) {
    var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
    return result;
}
Notice it checks the table first for the two values we have explicitly stored before making the recursive calls. The Mathematica evaluator does this check automatically, because the order of presentation of the rules is important -- Mathematica checks the more specific rules first and the more general rules later. That's why Mathematica has two assignment forms, = and :=: the former is for specific rules whose right-hand sides can be evaluated at the time the rule is defined; the latter is for general rules whose right-hand sides must be evaluated when the rule is applied.
Now, in Mathematica, if we say
fib[4]
it gets rewritten to
fib[3] + fib[2]
then to
fib[2] + fib[1] + 1
then to
1 + 1 + 1
and finally to 3, which does not change on the next rewrite. You can imagine that if we say fib[35], we will generate enormous expressions, fill up memory, and melt the CPU. But the trick is to replace the final rewrite rule with the following:
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
This says "please replace every occurrence of fib[n_] with an expression that will install a new specific rule for the value of fib[n] and also produce the value." This one runs much faster because it expands the rule-base -- the table of values! -- at run time.
We can do likewise in JavaScript
function fib(n) {
    var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
    fibTable[n] = result;
    return result;
}
This runs MUCH faster than the prior definition of fib.
This is called "automemoization" [sic -- not "memorization" but "memoization" as in creating a memo for yourself].
Of course, in the real world, you must manage the sizes of the tables that get created. To inspect the tables in Mathematica, do
DownValues[fib]
To inspect them in JavaScript, do just
fibTable
in a REPL such as that supported by Node.JS.
In the context of functional programming, there is the concept of referential transparency. A function that is referentially transparent can be replaced with its value for any given argument (or set of arguments), without changing the behaviour of the program.
Referential Transparency
For example, consider a function F that takes 1 argument, n. F is referentially transparent, so F(n) can be replaced with the value of F evaluated at n. It makes no difference to the program.
In C#, this would look like:
using System;

public class Square
{
    public static int apply(int n)
    {
        return n * n;
    }

    public static void Main()
    {
        // Should print 4
        Console.WriteLine(Square.apply(2));
    }
}
(I'm not very familiar with C#, coming from a Java background, so you'll have to forgive me if this example isn't quite syntactically correct).
It's obvious here that the function apply cannot have any other value than 4 when called with an argument of 2, since it's just returning the square of its argument. The value of the function only depends on its argument, n; in other words, referential transparency.
I ask you, then: what is the difference between Console.WriteLine(Square.apply(2)) and Console.WriteLine(4)? The answer is that there's no difference at all, for all intents and purposes. We could go through the entire program, replacing every instance of Square.apply(n) with the value returned by Square.apply(n), and the results would be exactly the same.
So what did Brian Beckman mean by his statement about replacing function calls with a table lookup? He was referring to this property of referentially transparent functions. If Square.apply(2) can be replaced with 4 with no impact on program behaviour, then why not just cache the value when the first call is made, and put it in a table indexed by the function's arguments? A lookup table for values of Square.apply(n) would look somewhat like this:
n:               0  1  2  3  4   5  ...
Square.apply(n): 0  1  4  9  16  25 ...
And for any call to Square.apply(n), instead of calling the function, we can simply find the cached value for n in the table, and replace the function call with this value. It's fairly obvious that this will most likely bring about a large speed increase in the program.
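A minimal sketch of that cache-on-first-call idea in C (the table size and the fallback policy are my assumptions, purely for illustration):

#include <stdbool.h>

#define TABLE_SIZE 1024

static int  cache[TABLE_SIZE];
static bool cached[TABLE_SIZE];

/* Referential transparency is what makes this safe: square(n) always
   yields the same value for the same n, so the first result can be
   stored and reused. */
int square_memo(int n)
{
    if (n < 0 || n >= TABLE_SIZE)
        return n * n;              /* fall back outside the table */
    if (!cached[n]) {
        cache[n]  = n * n;
        cached[n] = true;
    }
    return cache[n];
}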

Convergence using for loop in R

I have the following code:
for(n in 1:1000){
..............
}
This will run ............ 1000 times. I haven't put the full code in because it's extremely long and not relevant to the answer.
My question: is there any way I can get the code to run until it reaches a specified convergence value, to four decimal places? Initial values are fed into the equation, which generates new values, and the process iterates until convergence is attained (as specified above).
EDIT
I have a set of 4 values at the end of my code with different labels (A, B, C, D). Within my code there are two separate functions, each of which calculates different values and feeds the other. So when I say convergence, I mean that function 1 passes specific values to function 2, function 2 calculates new values for A, B, C and D, the cycle continues, and convergence is reached when these values, as calculated by function 2, are the same as the last time around.
The key question I'm asking here is what form the code should take (the answers below suggest that repeat is preferable) and how to code the convergence criterion correctly, since the assignment notation for successive iterations will be the same.
Just making an answer out of my comment: I think repeat will often be the best fit here. It doesn't require you to evaluate the condition at the start, and it doesn't stop after a finite number of iterations (unless of course that is what you want):
repeat
{
    # Do stuff
    if (condition) break
}
If you are just looking for a way of exiting for loops you can just use break.
for (n in 1:1000)
{
    ...
    if (condition)
        break
}
You could always just use a while loop if you don't know how many iterations it will take. The general form could look something like this:
while (insert_convergence_check_here) {
    insert_your_code_here
}
Edit: In response to nico's comment I should add that you could also follow this pattern to essentially create a do/while loop in case you need the loop to run at least once before you can check the convergence criteria.
continue_indicator <- TRUE
while (continue_indicator) {
    insert_your_code_here
    continue_indicator <- convergence_check_here
}
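To make the convergence test itself concrete, here is a minimal sketch (in C for illustration; the update function and the exact tolerance are my assumptions): "the same to four decimal places" becomes an absolute difference between successive iterates below 1e-4.

#include <math.h>
#include <stdio.h>

/* Illustrative update step: the fixed-point iteration x -> cos(x). */
static double update(double x) { return cos(x); }

int main(void)
{
    const double tol = 1e-4;   /* agreement to four decimal places */
    double old_x = 0.0;
    double new_x = update(old_x);
    int iter = 1;

    /* Iterate until successive values agree within tol. */
    while (fabs(new_x - old_x) >= tol) {
        old_x = new_x;
        new_x = update(old_x);
        iter++;
    }
    printf("converged to %.4f after %d iterations\n", new_x, iter);
    return 0;
}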

Performing operations on CUDA matrices while reading from a global Point

Hey there,
I have a mathematical function (multidimensional, meaning there's an index I pass to the C++ function that selects which single mathematical sub-function to evaluate). E.g. let's say I have a mathematical function like this:
f = Vector(x^2*y^2 / y^2 / x^2*z^2)
I would implement it like this:
double myFunc(int function_index)
{
    switch (function_index)
    {
    case 1:
        return PNT[0]*PNT[0]*PNT[1]*PNT[1];
    case 2:
        return PNT[1]*PNT[1];
    case 3:
        return PNT[2]*PNT[2]*PNT[1]*PNT[1];
    default:
        return 0.0; // guard against an out-of-range index
    }
}
where PNT is defined globally as double PNT[ NUM_COORDINATES ]. Now I want to implement the derivative of each function with respect to each coordinate, thus generating the derivative matrix (columns = coordinates; rows = individual functions). I have already written my kernel, which works so far and which calls myFunc().
The problem is: to calculate the derivative of the mathematical sub-function i with respect to coordinate j, I would use the following code in sequential mode (e.g. on a CPU); it is simplified here, because usually you would decrease h until you reach a certain precision of your derivative:
f0 = myFunc(i);                     // unperturbed value of sub-function i
PNT[j] += h;                        // shift coordinate j
derivative = (myFunc(i) - f0) / h;  // forward difference
PNT[j] -= h;                        // restore the original point
Now, as I want to do this in parallel on the GPU, a problem comes up: what to do with PNT? I have to increase a certain coordinate by h, calculate the value, and then decrease it again. How do I do this without 'disturbing' the other threads? I can't modify PNT, because other threads need the 'original' point to modify their own coordinate.
My second idea was to store one modified copy of the point for each thread, but I discarded it quite fast: with some thousands of threads in parallel, that is a bad and probably slow idea (and perhaps not realizable at all because of memory limits).
'FINAL' SOLUTION
So what I currently do is the following: a preprocessor macro adds the value add at run time (without storing it anywhere) to the coordinate identified by coordinate_index.
#define X(n) ((coordinate_index == n) ? (PNT[n] + add) : PNT[n])

__device__ double myFunc(int function_index, int coordinate_index, double add)
{
    // Example: f[i] = x[i]^3
    return X(function_index) * X(function_index) * X(function_index);
}
That works quite nicely and fast. With a derivative matrix of 10000 functions and 10000 coordinates, it takes only about 0.5 seconds. PNT is defined either globally or in constant memory as __constant__ double PNT[ NUM_COORDINATES ];, depending on the preprocessor variable USE_CONST.
The line return X(function_index) * X(function_index) * X(function_index); is just an example in which every sub-function follows the same scheme, mathematically speaking:
f = Vector(x0^3 / x1^3 / ... / xN^3)
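To show how such a myFunc is consumed, here is a hedged sketch of a derivative kernel (the kernel name, indexing, and launch shape are my assumptions, not the poster's code): each thread evaluates one entry of the derivative matrix by calling myFunc twice, once unperturbed and once with add = h, so the shared PNT is never written.

__global__ void derivativeMatrix(double *deriv, double h, int n)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y; // function index
    int j = blockIdx.x * blockDim.x + threadIdx.x; // coordinate index
    if (i < n && j < n) {
        double f0 = myFunc(i, j, 0.0);    // unperturbed value
        double f1 = myFunc(i, j, h);      // coordinate j shifted by h
        deriv[i * n + j] = (f1 - f0) / h; // forward difference
    }
}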
NOW THE BIG PROBLEM ARISES:
myFunc is a mathematical function which the user should be able to implement however they like. E.g. they could also implement the following mathematical function:
f = Vector(x0^2*x1^2*...*xN^2 / x0^2*x1^2*...*xN^2 / ... / x0^2*x1^2*...*xN^2)
with every sub-function looking the same. As the programmer, you should only have to code once, independently of the implemented mathematical function. When the above function is implemented in C++, it looks like the following:
__device__ double myFunc(int function_index, int coordinate_index, double add)
{
    double ret = 1.0;
    for (int i = 0; i < NUM_COORDINATES; i++)
        ret *= X(i) * X(i);
    return ret;
}
And now the memory accesses are very 'weird' and bad for performance, because each thread needs to access every element of PNT (twice, in this example). Surely, in such a case where every function looks the same, I could rewrite the complete algorithm that surrounds the calls to myFunc, but as I stated already: I don't want my code to depend on the user-implemented function myFunc...
Could anybody come up with an idea how to solve this problem??
Thanks!
Rewinding back to the beginning and starting with a clean sheet, it seems you want to be able to do two things:

1. compute an arbitrary scalar-valued function over an input array
2. approximate the partial derivative of an arbitrary scalar-valued function over the input array, using first-order-accurate finite differencing
While the function is scalar-valued and arbitrary, it seems that there are, in fact, two clear forms this function can take:

1. a scalar-valued function with scalar arguments
2. a scalar-valued function with vector arguments
You appear to have started with the first type of function and put together code to deal with computing both the function and its approximate derivative, and you are now wrestling with the problem of how to deal with the second case using the same code.
If this is a reasonable summary of the problem, then please indicate so in a comment and I will continue to expand it with some code samples and concepts. If it isn't, I will delete it in a few days.
In comments, I have been trying to suggest that conflating the first type of function with the second is not a good approach. The requirements for correctness in parallel execution, and the best way of extracting parallelism and performance on the GPU are very different. You would be better served by treating both types of functions separately in two different code frameworks with different usage models. When a given mathematical expression needs to be implemented, the "user" should make a basic classification as to whether that expression is like the model of the first type of function, or the second. The act of classification is what drives algorithmic selection in your code. This type of "classification by algorithm" is almost universal in well designed libraries - you can find it in C++ template libraries like Boost and the STL, and you can find it in legacy Fortran codes like the BLAS.
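The promised code samples never materialized in this answer; purely as an illustration of the "classification by algorithm" idea (all names here are mine, not from the answer), the two function types might sit behind separate interfaces so that each can be given its own parallelization strategy:

#include <stddef.h>

/* Type 1: depends on a few scalar coordinates. Perturbing one
   coordinate is cheap, so one thread per (function, coordinate)
   entry of the derivative matrix parallelizes cleanly. */
typedef double (*scalar_fn)(const double *pnt, int function_index);

/* Type 2: depends on the whole coordinate vector. Every evaluation
   reads all n coordinates, so a cooperative scheme (e.g. one block
   per function with a shared reduction) is the better fit. */
typedef double (*vector_fn)(const double *pnt, size_t n);

/* The user classifies the expression once; the classification,
   not the expression itself, selects the algorithm. */
enum fn_kind { FN_SCALAR_ARGS, FN_VECTOR_ARGS };

struct user_function {
    enum fn_kind kind;
    union { scalar_fn s; vector_fn v; } fn;
};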

maple 13 tridiagonal matrix help

I am trying to make a 100 x 100 tridiagonal matrix with 2's going down the diagonal and -1's surrounding the 2's. I can make a tridiagonal matrix with only 1's in the three diagonals and perform matrix addition to get what I want, but I want to know if there is a way to customize the three diagonals to whatever you want. The Maple help doesn't list anything useful.
The Matrix function in the LinearAlgebra package can be called with a parameter (init) that is a function that can assign a value to each entry of the matrix depending on its position.
This would work:
f := (i, j) -> if i = j then 2 elif abs(i - j) = 1 then -1 else 0; end if;
Matrix(100, f);
LinearAlgebra[BandMatrix] works too (and will be WAY faster), especially if you use storage=band[1]. You should probably use shape=symmetric as well.
The answers involving an initializer function f will do O(n^2) work for a square n x n Matrix. Ideally, this task should be O(n), since there are just under 3*n entries to be filled.
Suppose also that you want a resulting Matrix without any special (e.g. band) storage or indexing function (so that you can later write to any part of it arbitrarily). And suppose that you don't want to get around such an issue by wrapping the band-structure Matrix in another generic Matrix() call, which would double the temporary memory used and produce collectible garbage.
Here are two ways to do it (without applying f to each entry in an O(n^2) manner, or using a separate do-loop). The first involves creating the three bands as temporaries (which is garbage to be collected, but at least not n^2 worth of it).
M:=Matrix(100,[[-1$99],[2$100],[-1$99]],scan=band[1,1]);
This second way uses a routine which walks M and populates it with just the three scalar values (hence not needing the 3 band lists explicitly).
M := Matrix(100):
ArrayTools:-Fill(100, 2, M, 0, 100+1);   # main diagonal: 100 entries of 2, stride 101
ArrayTools:-Fill(99, -1, M, 1, 100+1);   # one off-diagonal: 99 entries of -1
ArrayTools:-Fill(99, -1, M, 100, 100+1); # the other off-diagonal: 99 entries of -1
Note that ArrayTools:-Fill is a compiled external routine, and so in principle might well be faster than an interpreted Maple-language method. It would be especially fast for a Matrix M with a hardware datatype such as float[8].
By the way, the reason that the arrow procedure above failed with the error "invalid arrow procedure" is likely that it was entered in 2D Math mode. The 2D Math parser of Maple 13 does not understand the if...then...end syntax as the body of an arrow operator. Alternatives (apart from writing f as a proc, as someone else answered) are to enter f (unedited) in 1D Maple notation mode, or to edit f to use the operator form of if. Perhaps the operator form of if here requires a nested if to handle the elif. For example,
f := (i,j) -> `if`(i=j,2,`if`(abs(i-j)=1,-1,0));
Matrix(100,f);
jmbr's proposed solutions can be adapted to work:
f := proc(i, j)
    if i = j then 2
    elif abs(i - j) = 1 then -1
    else 0
    end if
end proc;

Matrix(100, f);
Also, I understand your comment as saying you later need to destroy the band matrix nature, which prevents you from using BandMatrix - is that right? The easiest solution to that is to wrap the BandMatrix call in a regular Matrix call, which will give you a Matrix you can change however you'd like:
Matrix(LinearAlgebra:-BandMatrix([1,2,1], 1, 100));
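For what it's worth, the strided-fill idea behind ArrayTools:-Fill translates directly to other languages. A minimal C sketch (my own illustration; dense row-major storage assumed): in an n x n array stored contiguously, a stride of n+1 walks down a diagonal, so each band is filled in one O(n) pass.

#include <stdlib.h>

static void fill_strided(double *base, int count, double value, int stride)
{
    for (int i = 0; i < count; i++)
        base[i * stride] = value;
}

/* Build the 2 / -1 tridiagonal matrix with three O(n) passes. */
double *tridiagonal(int n)
{
    double *M = calloc((size_t)n * n, sizeof(double)); /* zeroed */
    fill_strided(M,     n,      2.0, n + 1); /* main diagonal */
    fill_strided(M + 1, n - 1, -1.0, n + 1); /* superdiagonal */
    fill_strided(M + n, n - 1, -1.0, n + 1); /* subdiagonal   */
    return M;                                /* caller frees  */
}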
