Efficiently multiply OpenCL vector components? - opencl

I have a float8 vector type whose components I multiply using vector component addressing as follows (note that the variable v below isn't actually a constant in my real code):
float8 v = (float8) (1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f);
float result = v.s0 * v.s1 * v.s2 * v.s3 * v.s4 * v.s5 * v.s6 * v.s7;
However, this prevents my kernel from being vectorised when compiled with the Intel Code Builder:
Device build started
Device build done
Kernel <test> was not vectorized
To overcome this I started creating copies of the vector, masking the required components, and multiplying them all together before calling the dot function, but this all seemed rather inefficient and convoluted.
My question is therefore: how can I multiply the components of my vector in an efficient, vectorised manner?

My comment was wrong, as it is not a dot product you need for the result; it is simply a multiplication of 8 numbers. Parallel work data should be parallel, not packed into the same container. If you want to multiply s0, s1, s2, ..., s7 then you put them in consecutive vector variables:
variable-1: s0 p0 r0 q0 .... z0
variable-2: s1 p1 r1 q1 .... z1
variable-8: s7 p7 .... z7
You can multiply those at SIMD speed, getting 8 multiplications at a time with the float8 type, and continue for as many factors as you need, not just 8.
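For illustration, here is a hypothetical kernel sketch of that layout in OpenCL C (the buffer layout and names are my own assumption, not from the question): factors holds, per work item, eight consecutive float8 values, where value k carries the k-th factor of eight independent products.
__kernel void products8(__global const float8 *factors, /* 8 float8 values per work item */
                        __global float8 *out)
{
    size_t i = get_global_id(0);
    float8 p = factors[8 * i];
    for (int k = 1; k < 8; ++k)
        p *= factors[8 * i + k];   /* one float8 multiply = 8 independent products */
    out[i] = p;                    /* lane j holds the product of the 8 factors of problem j */
}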
At each multiplication you are responsible for checking for errors and overflows. But when the hardware does 8 multiplications in a single instruction, which order do you want? Multiplied in increasing index order (serial, slow), or something like a pairwise multiplication on tree elements (fewer dependent multiplications, faster, but possibly different floating-point results)? The order of operations may be important sometimes.
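As an aside (my own sketch, not part of the original answer): if the eight factors really must stay inside one float8, the reduction can at least be written in tree order using the standard .lo/.hi swizzles, so every step is still a SIMD multiply over the remaining lanes.
float product8(float8 v)
{
    float4 p4 = v.lo * v.hi;     /* 4 partial products in one multiply */
    float2 p2 = p4.lo * p4.hi;   /* 2 partial products */
    return p2.lo * p2.hi;        /* product of all 8 components */
}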
If it is a GPU, simply multiply the items; instruction-level parallelism and the GPU's hyperthreading engine achieve the efficiency. If it is a CPU, you should first check whether your CPU supports vertical multiplication instructions (I doubt such a thing exists); if not, then you need to multiply over array elements, not vector components. That should be easier to vectorise, since it is contiguous data in main memory and a CPU does not give explicit control over local memory.

Related

Set Theory & Geometry: Two arcs on the same circle overlap with wrapping values

As a background, I'm a computer programmer and I'm working on a software library that allows a computer to quickly search through all dates to find a set of dates that satisfies a criteria. For example:
I want a list of every possible time that has ever occurred on a Friday or a Saturday, in April or May, during the first week of the month.
My library uses numerical sets to efficiently represent ranges of dates that satisfy a criteria.
I've been thinking about ways to improve the performance of some parts of the app, and I think that by combining sets and some geometry I can really improve my results. However, my geometry is a bit rusty and I was hoping you might be able to help.
Here's my thought:
Certain elements of time can be represented as a circular dial. For example, minutes can be positioned on a clock with values between 0...59. We could store valid ranges as a list of arcs. For example, if we wanted all times that end with :05-:10, we could store [5, 10]. If we wanted all times that end with :45-:59 or :00-:15, we could store [45, 15]. Notice how this last arc "loops around" the dial. Here's a mockup showing different ranges intersecting on a dial.
My question is this:
Given a set of whole numbers between N...M arranged into a circle.
Given Arc1, which is represented by [A, B], and Arc2, which is represented by [C, D], where A, B, C, and D are all within the range N...M.
How do I determine:
A. Whether the arcs intersect.
B. If they do, what their intersection is.
C. If they do, what their union is.
Thank you so much for your help. If you're not able to help, if you can point me in the right direction, that would be great.
Thanks!
A simple and safe approach is to split the intervals that straddle 0. Then you perform pairwise interval intersection/union (for instance, if A < D and C < B then the intersection is [max(A,C), min(B,D)]), and merge the results if they meet at 0.
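As a rough C++ sketch of that idea (my own illustration; the names and the assumption that the dial has positions 0..n-1 are mine):
#include <vector>
#include <utility>
#include <algorithm>

using Interval = std::pair<int, int>;               // inclusive [lo, hi] with lo <= hi

std::vector<Interval> splitAtZero(int a, int b, int n) {
    if (a <= b) return {{a, b}};                    // ordinary arc
    return {{a, n - 1}, {0, b}};                    // wrapping arc, e.g. [45, 15] -> [45, 59] and [0, 15]
}

std::vector<Interval> intersectAll(const std::vector<Interval>& xs,
                                   const std::vector<Interval>& ys) {
    std::vector<Interval> out;
    for (const auto& x : xs)
        for (const auto& y : ys) {
            int lo = std::max(x.first, y.first);
            int hi = std::min(x.second, y.second); // pairwise interval intersection
            if (lo <= hi) out.push_back({lo, hi});
        }
    return out;
}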
It seems the primitive operation to implement would be something like 'is the number X contained in the arc [A,B]'. Once you have that, you could implement an [A,B]/[C,D] arc-intersection predicate along these lines:
Arc intersection means exactly that at least one of the following conditions is met:
C is contained in [A,B]
D is contained in [A,B]
A is contained in [C,D]
B is contained in [C,D]
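As a sketch of how that predicate could look with plain modular arithmetic (my own illustration in C++, assuming integer positions 0..N-1 on the dial):
bool onArc(int a, int b, int x, int n) {
    // clockwise distance from a to x is no larger than the distance from a to b
    return (x - a + n) % n <= (b - a + n) % n;
}

bool arcsIntersect(int a, int b, int c, int d, int n) {
    // the four containment conditions listed above
    return onArc(a, b, c, n) || onArc(a, b, d, n) ||
           onArc(c, d, a, n) || onArc(c, d, b, n);
}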
One way to implement this contained-in-arc test without any branches is with some trigonometry and the vector cross product. I'm not sure it would be faster (the math/branches performance trade-off is entirely empirical), but it might be worth a try.
Denote Xa = sin(A/N * 2*PI), Ya = cos(A/N * 2*PI), and similarly for Xb, Yb, Xc, Yc, etc.
C is contained in [A,B] is equivalent to:
Xa * Yc - Ya * Xc > 0
AND
Xc * Yb - Yc * Xb > 0
You can complete the other 3 conditions in an identical manner.
Hope this turns out useful.

How do you do parallel matrix multiplication in Julia?

Is there a good way to do parallel matrix multiplication in julia? I tried using DArrays, but it was significantly slower than just a single-thread multiplication.
Parallel in what sense? If you mean single-machine, multi-threaded, then Julia does this by default as OpenBLAS (the underlying linear algebra library used) is multithreaded.
If you mean multiple-machine, distributed-computing-style, then you will be encountering a lot of communications overhead that will only be worth it for very large problems, and a customized approach might be needed.
The problem is most likely that direct (possibly single-threaded) matrix multiplication is normally performed by an optimized library function. In the case of OpenBLAS, it is already multithreaded. For arrays of size 2000x2000, the simple matrix multiplication
@time c = sa * sb;
results in 0.3 seconds multithreaded and 0.7 seconds single-threaded.
Splitting off a single dimension of the multiplication makes the times even worse, reaching around 17 seconds in single-threaded mode.
@time for j = 1:n
    sc[:,j] = sa[:,:] * sb[:,j]
end
shared arrays
The solution to your problem might be the use of shared arrays, which share the same data across your processes on a single computer. Please note that shared arrays are still marked as experimental.
# create shared arrays and initialize them with random numbers
sa = SharedArray(Float64,(n,n),init = s -> s[localindexes(s)] = rand(length(localindexes(s))))
sb = SharedArray(Float64,(n,n),init = s -> s[localindexes(s)] = rand(length(localindexes(s))))
sc = SharedArray(Float64,(n,n));
Then you have to create a function, which performs a cheap matrix multiplication on a subset of the matrix.
@everywhere function mymatmul!(n, w, sa, sb, sc)
    # works only for 4 workers and n divisible by 4
    range = 1+(w-2)*div(n,4) : (w-1)*div(n,4)
    sc[:,range] = sa[:,:] * sb[:,range]
end
Finally, the main process tells the workers to work on their part.
@time @sync begin
    for w in workers()
        @async remotecall_wait(w, mymatmul!, n, w, sa, sb, sc)
    end
end
which takes around 0.3 seconds, the same as the multithreaded single-process time.
It sounds like you're interested in dense matrices, in which case see the other answers. Should you be (or become) interested in sparse matrices, see https://github.com/madeleineudell/ParallelSparseMatMul.jl.

How to store a polynomial?

Integers can be used to store individual numbers, but not mathematical expressions. For example, let's say I have the expression:
6x^2 + 5x + 3
How would I store the polynomial? I could create my own object, but I don't see how I could represent the polynomial through member data. I do not want to create a function that just evaluates a passed-in argument, because I not only need to evaluate it but also need to manipulate the expression.
Is a vector my only option or is there a more apt solution?
A simple yet inefficient way would be to store it as a list of coefficients. For example, the polynomial in the question would look like this:
[6, 5, 3]
If a term is missing, place a zero in its place. For instance, the polynomial 2x^3 - 4x + 7 would be represented like this:
[2, 0, -4, 7]
The degree of the polynomial is given by the length of the list minus one. This representation has one serious disadvantage: for sparse polynomials, the list will contain a lot of zeros.
A more reasonable representation of the term list of a sparse polynomial is as a list of the nonzero terms, where each term is a list containing the order of the term and the coefficient for that order; the degree of the polynomial is given by the order of the first term. For example, the polynomial x^100+2x^2+1 would be represented by this list:
[[100, 1], [2, 2], [0, 1]]
As an example of how useful this representation is, the book SICP builds a simple but very effective symbolic algebra system using the second representation for polynomials described above.
A list is not the only option.
You can use a map (dictionary) mapping the exponent to the corresponding coefficient.
Using a map, your example would be
{2: 6, 1: 5, 0: 3}
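A possible C++ sketch of that idea (illustrative only; the names are mine), with evaluation and multiplication to show why the exponent-to-coefficient map is convenient:
#include <map>
#include <cmath>

using Poly = std::map<int, double>;                 // exponent -> coefficient

double evaluate(const Poly& p, double x) {
    double sum = 0.0;
    for (const auto& term : p)
        sum += term.second * std::pow(x, term.first);
    return sum;
}

Poly multiply(const Poly& a, const Poly& b) {
    Poly out;
    for (const auto& ta : a)
        for (const auto& tb : b)
            out[ta.first + tb.first] += ta.second * tb.second;  // x^i * x^j = x^(i+j)
    return out;
}

// The polynomial from the question, 6x^2 + 5x + 3:
// Poly p = {{2, 6.0}, {1, 5.0}, {0, 3.0}};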
A list of (coefficient, exponent) pairs is quite standard. If you know your polynomial is dense, that is, all the exponent positions are small integers in the range 0 to some small maximum exponent, you can use the array, as I see Óscar Lopez just posted. :)
You can represent expressions as Expression Trees. See for example .NET Expression Trees.
This allows for much more complex expressions than simple polynomials and those expressions can also use multiple variables.
In .NET you can manipulate the expression tree as a tree AND you can evaluate it as a function.
Expression<Func<double,double>> polynomial = x => (x * x + 2 * x - 1);
double result = polynomial.Compile()(23.0);
An object-oriented approach would say that a Polynomial is a collection of Monomials, and a Monomial encapsulates a coefficient and exponent together.
This approach works even when you have a polynomial like this:
y(x) = x^1000 + 1
An approach that tied a data structure to a polynomial order would be terribly wasteful for this pathological case.
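A minimal sketch of that object model in C++ (names are illustrative):
#include <vector>

struct Monomial { double coefficient; int exponent; };

struct Polynomial {
    std::vector<Monomial> terms;   // only the nonzero terms are stored
};

// y(x) = x^1000 + 1 needs just two Monomials instead of 1001 array slots:
// Polynomial y{{{1.0, 1000}, {1.0, 0}}};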
You need to store two things:
The degree of your polynomial (e.g. "3")
A list containing each coefficient (e.g. "{3, 0, 2}")
In standard C++, "std::vector<>" and "std::list<>" can do both.
Vector/array is the obvious choice. Depending on the kind of expressions, you may consider some sort of sparse vector type (custom made, e.g. based on a dictionary, or even a linked list if your expressions have only 2-3 non-zero coefficients, such as 5x^100 + x).
In either case, exposing it through a custom class/interface would be beneficial, as you can replace the implementation later. You would likely want to provide the standard operations (+, -, *, equals) if you plan to write a lot of expression-manipulation code.
Just store the coefficients in an array or vector. For example, in C++ if you are only using integer coefficients, you could use std::vector<int>, or for real numbers, std::vector<double>. Then you just push the coefficients in order and access them by variable exponent number.
For example (again in C++), to store 5*x^3 + 9*x - 2 you might do:
std::vector<int> poly;
poly.push_back(-2); // x^0, accessed with poly[0]
poly.push_back(9); // x^1, accessed with poly[1]
poly.push_back(0); // x^2, etc
poly.push_back(5); // x^3, etc
If you have large, sparse polynomials, then maybe you'd want to use a map instead of a vector. If you have fixed-size polynomials, then perhaps use a fixed-length array instead of a vector.
I've used C++ for examples, but this same scheme can be used in any language.
You can also transform it into reverse Polish notation:
6x^2 + 5x + 3 -> x 2 ^ 6 * x 5 * + 3 +
Here x and the numbers are "pushed" onto a stack, and the operations (^, *, +) take the two topmost values from the stack and replace them with the result of the operation. In the end the resulting value is on top of the stack.
In this form it's easy to calculate arbitrarily complex expressions.
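A rough C++ sketch of that stack evaluation (my own illustration; splitting the expression into postfix tokens is assumed to be done already):
#include <stack>
#include <string>
#include <vector>
#include <cmath>

double evalRpn(const std::vector<std::string>& tokens, double x) {
    std::stack<double> s;
    for (const std::string& t : tokens) {
        if (t == "x") {
            s.push(x);                       // the variable
        } else if (t == "+" || t == "*" || t == "^") {
            double b = s.top(); s.pop();     // operations consume the two
            double a = s.top(); s.pop();     // topmost values...
            s.push(t == "+" ? a + b : t == "*" ? a * b : std::pow(a, b));
        } else {
            s.push(std::stod(t));            // a constant
        }
    }
    return s.top();                          // ...and the result is left on the stack
}

// "x 2 ^ 6 * x 5 * + 3 +" evaluated at x = 2 gives 6*4 + 5*2 + 3 = 37.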
This representation is also close to tree representation of expressions where non-leaf tree nodes represent operations and functions and leaf nodes are for constants and variables.
What's good about trees is that you can also easily evaluate expressions and you can also do things like symbolic differentiation on them. Both have recursive nature.

Efficient Multiplication of Varying-Length #s [Conceptual]

EDIT
So it seems I "underestimated" what varying-length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation whose complexity depends on the number of digits in each operand, as opposed to its numerical value, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = (a/2) * 2b           if a is even
a * b = ((a-1)/2) * 2b + b   if a is odd
My pseudocode is:
rpa(x, y){
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
I have 3 questions:
Is this efficient for arbitrary-length numbers? I implemented it in C and tried numbers of varying length. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (max 1 recursive call across all states)
n / b = size of each subproblem = n / 1 -> b = 1 (problem doesn't change size...?)
f(n^d) = work done outside recursive calls = 1 -> d = 0 (the addition when a is odd)
a = 1, b^d = 1, a = b^d -> complexity is in n^d*log(n) = log(n)
this makes sense logically since we are halving the problem at each step, right?
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
This actually changes everything about the problem (and makes your algorithm incorrect).
It means that 1234 is provided as 1,2,3,4 and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of the number of additions, multiplications, and divisions.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (but don't just copy it, that's not what your teacher wants); it is one of the fastest for this kind of problem.
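For instance, even the plain grade-school method already has to work digit by digit once the operands are strings. A rough C++ sketch (my own illustration, not Karatsuba; Karatsuba improves on the O(n*m) single-digit multiplications this one performs by splitting the digit strings recursively):
#include <string>
#include <vector>

std::string multiplyDigitStrings(const std::string& a, const std::string& b) {
    std::vector<int> prod(a.size() + b.size(), 0);
    for (int i = (int)a.size() - 1; i >= 0; --i)
        for (int j = (int)b.size() - 1; j >= 0; --j) {
            int p = (a[i] - '0') * (b[j] - '0') + prod[i + j + 1]; // one single-digit multiplication
            prod[i + j + 1] = p % 10;                              // digit stays in place
            prod[i + j] += p / 10;                                 // carry moves one place left
        }
    std::string s;
    for (int d : prod)
        if (!(s.empty() && d == 0)) s += (char)('0' + d);          // strip leading zeros
    return s.empty() ? "0" : s;
}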
Regarding 3): Native integers are limited in how large (or small) the numbers they can represent are (32- or 64-bit integers, for example). To represent arbitrary-length numbers you can choose strings, because then you are not really limited in that way. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)

Sum all elements in a quadword vector in ARM assembly with NEON

I'm rather new to assembly, and although the ARM Information Center is often helpful, sometimes the instructions can be a little confusing to a newbie. Basically, what I need to do is sum the 4 float values in a quadword register and store the result in a single-precision register. I think the instruction VPADD can do what I need, but I'm not quite sure.
You might try this (it's not in ASM, but you should be able to convert it easily):
float32x2_t r = vadd_f32(vget_high_f32(m_type), vget_low_f32(m_type));
return vget_lane_f32(vpadd_f32(r, r), 0);
In ASM it would be probably only VADD and VPADD.
I'm not sure if this is the only (or the most optimal) way to do it, but I haven't figured out or found a better one...
PS. I'm new to NEON too
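For reference, the same idea as a self-contained function with intrinsics (assuming a float32x4_t input and an ARM target with NEON; needs <arm_neon.h>):
#include <arm_neon.h>

float horizontal_sum(float32x4_t v)                               /* v = {a, b, c, d} */
{
    float32x2_t r = vadd_f32(vget_high_f32(v), vget_low_f32(v));  /* {a+c, b+d} */
    return vget_lane_f32(vpadd_f32(r, r), 0);                     /* (a+c)+(b+d) */
}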
It seems that you want the sum of an array of some length, not just four float values.
In that case your code will work, but it is far from optimized:
many pipeline interlocks
an unnecessary 32-bit addition per iteration
Assuming the length of the array is a multiple of 8 and at least 16 :
    vldmia  pSrc!, {q0-q1}
    sub     count, count, #8
loop:
    pld     [pSrc, #32]
    vldmia  pSrc!, {q3-q4}
    subs    count, count, #8
    vadd.f32    q0, q0, q3
    vadd.f32    q1, q1, q4
    bgt     loop
    vadd.f32    q0, q0, q1
    vpadd.f32   d0, d0, d1
    vadd.f32    s0, s0, s1
pld (an ARM instruction, not a NEON one) is crucial for performance. It drastically increases the cache hit rate.
I hope the rest of the code above is self explanatory.
You will notice that this version is many times faster than your initial one.
Here is the code in ASM:
vpadd.f32 d1, d6, d7    @ q3 is the register whose contents all need to be summed
vadd.f32  s1, s2, s3    @ now add the two halves of d1 together (the sum)
vadd.f32  s0, s0, s1    @ sum += s1
I may have forgotten to mention that in C the code would look like this:
float sum = 1.0f;
sum += number1 * number2;
I have omitted the multiplication from this little piece of asm code.
