loop with array access is slow in Julia

I did a comparison between loops with and without array access, as below, and found that the performance difference between the two was huge: 1.463677 [sec] vs 0.086808 [sec].
Could you explain how to improve my code with array access and why this happens?
@inline dist2(p, q) = sqrt((p[1]-q[1])^2 + (p[2]-q[2])^2)

function rand_gen()
    r2set = Array[]
    for i = 1:10000
        r2_add = rand(2, 1)
        push!(r2set, r2_add)
    end
    return r2set
end

function test()
    N = 10000
    r2set = rand_gen()
    a = [1 1]
    b = [2 2]
    @time for i = 1:N, j = 1:N
        dist2(r2set[i], r2set[j])
    end
    @time for i = 1:N, j = 1:N
        dist2(a, b)
    end
end

test()

Make r2set have a concrete type like this (see also https://docs.julialang.org/en/latest/manual/performance-tips/#Avoid-containers-with-abstract-type-parameters-1):
@inline dist2(p, q) = sqrt((p[1]-q[1])^2 + (p[2]-q[2])^2)

function rand_gen()
    r2set = Matrix{Float64}[]
    for i = 1:10000
        r2_add = rand(2, 1)
        push!(r2set, r2_add)
    end
    return r2set
end

function test()
    N = 10000
    r2set = rand_gen()
    a = [1 1]
    b = [2 2]
    @time for i = 1:N, j = 1:N
        dist2(r2set[i], r2set[j])
    end
    @time for i = 1:N, j = 1:N
        dist2(a, b)
    end
end

test()
And now the tests are:
julia> test()
0.347000 seconds
0.147696 seconds
which is already better.
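You can check the difference directly (a small illustration added in this edit; isconcretetype is the Julia 1.x name, older versions had isleaftype):

julia> isconcretetype(eltype(Array[]))            # the question's container
false

julia> isconcretetype(eltype(Matrix{Float64}[]))  # the answer's container
true

With a concrete element type the compiler knows the layout of every element, so indexing into r2set no longer needs dynamic dispatch.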
Now if you really want speed, use an immutable type, e.g. a Tuple rather than an array:
@inline dist2(p, q) = sqrt((p[1]-q[1])^2 + (p[2]-q[2])^2)

function rand_gen()
    r2set = Tuple{Float64,Float64}[]
    for i = 1:10000
        r2_add = (rand(), rand())
        push!(r2set, r2_add)
    end
    return r2set
end

function test()
    N = 10000
    r2set = rand_gen()
    a = (1, 1)
    b = (2, 2)
    s = 0.0
    @time for i = 1:N, j = 1:N
        @inbounds s += dist2(r2set[i], r2set[j])
    end
    @time for i = 1:N, j = 1:N
        s += dist2(a, b)
    end
end

test()
And you will get comparable speed for both:
julia> test()
0.038901 seconds
0.039666 seconds
julia> test()
0.041379 seconds
0.039910 seconds
Note that I have added the accumulation into s because without it Julia optimized out the loops entirely, after noticing that they do no work.
The key is that if you store arrays in an array, then the outer array holds pointers to the inner arrays, while with immutable types the data is stored directly.
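A quick way to see this difference is Base's isbitstype check (a hedged illustration added in this edit; in older Julia versions the function was called isbits):

julia> isbitstype(Tuple{Float64,Float64})  # stored inline, element after element
true

julia> isbitstype(Matrix{Float64})         # stored as one pointer per element
false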

Related

nested async and sync in Julia

I want to run multiple (A, B) task pairs which have no connection with each other. Each task A includes multiple small tasks a. Task A has to be done before B starts within the same (A, B) pair. So it's like:
@async for loop do multiple (A, B)s
    @sync do one (A, B)
        @async for loop do task A
            do task a
        do task B
How can I achieve this?
I have tried:
b = Vector{String}(undef, 6)
ta = @async for i in range(1, length(b))
    a = Vector{String}(undef, 6)
    @async for j in range(1, length(a)) # task A
        a[j] = "hello "
        sleep(1) # task a
    end
    # task B
    b[i] = prod(a) * "world!"
    sleep(1)
end
@time wait(ta)
Fail: task B does not wait for task A in the same pair.
b = Vector{String}(undef, 6)
ta = @async for i in range(1, length(b))
    a = Vector{String}(undef, 6)
    for j in range(1, length(a)) # task A
        @async begin
            a[j] = "hello "
            sleep(1) # task a
        end
    end
    # task B
    b[i] = prod(a) * "world!"
    sleep(1)
end
@time wait(ta)
Fail: task B does not wait for task A in the same pair.
b = Vector{String}(undef, 6)
ta = @async for i in range(1, length(b))
    a = Vector{String}(undef, 6)
    taA = @async for j in range(1, length(a)) # task A
        a[j] = "hello "
        sleep(1) # task a
    end
    wait(taA)
    # task B
    b[i] = prod(a) * "world!"
    sleep(1)
end
@time wait(ta)
Fail: the multiple (A, B) pairs are not async to each other.
b = Vector{String}(undef, 6)
ta = @async for i in range(1, length(b))
    a = Vector{String}(undef, 6)
    taA = @async for j in range(1, length(a)) # task A
        a[j] = "hello "
        sleep(1) # task a
    end
    while !istaskdone(taA)
        sleep(0.5)
    end
    b[i] = prod(a) * "world!"
    sleep(1)
end
@time wait(ta)
Fail: the multiple (A, B) pairs are not async to each other.
If I understand your question right, you want to have:
@sync for (A, B) in ABs
    @async begin
        @sync for a in A
            @async do_task_A(a)
        end
        do_task_B(B)
    end
end
As an example, consider the function:
function do_task(t, x)
    sleep(1/x)
    println("Done $t : $x")
    flush(stdout)
end
Let's run it!
julia> @time @sync for (A, B) in [([1,2], 100), ([3,4], 101), ([5,6], 102)]
           @async begin
               @sync for a in A
                   @async do_task(:A, a)
               end
               do_task(:B, B)
           end
       end
Done A : 6
Done A : 5
Done B : 102
Done A : 4
Done A : 3
Done B : 101
Done A : 2
Done A : 1
Done B : 100
1.079950 seconds (20.25 k allocations: 1020.699 KiB, 3.52% compilation time)
You can see that tasks got executed asynchronously exactly in the order you requested.
This is all from @Przemyslaw Szufel's answer; I just sum it up as a note:
@sync for loop do task A
    @async do task(a)
do task(B)
Then task B will wait until task A finishes, and the multiple tasks a inside task A will run async to each other.
It is the building block for doing multiple tasks a and then doing task B.
If you want to nest it, then just:
@sync for loop do task (A, B)
    @async begin
        @sync for loop do task A
            @async do task(a)
        do task(B)
    end
do task C
Use the @sync macro before the for loop and then use the @async macro to wrap the content of the for loop. Then the multiple little tasks in the for loop will run async to each other, but the code after the for loop will wait until the for loop finishes.
Example code from @Przemyslaw Szufel:
function do_task(t, x)
    println("start $t : $x")
    flush(stdout)
    sleep(1 / x)
    println("Done $t : $x")
    flush(stdout)
end

@time @sync for (A, B) in [([1, 2], 100), ([3, 4], 101), ([5, 6], 102)]
    @async begin
        @sync for a in A
            @async do_task(:A, a)
        end
        do_task(:B, B)
    end
end
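Extending that example with a final task C that runs only after every (A, B) pair has finished (a minimal sketch added in this edit; the :C label and the trailing do_task call are illustrative, not part of the original answer):

@time begin
    @sync for (A, B) in [([1, 2], 100), ([3, 4], 101), ([5, 6], 102)]
        @async begin
            @sync for a in A
                @async do_task(:A, a)
            end
            do_task(:B, B)
        end
    end
    do_task(:C, 1)  # reached only once the @sync block above is done
end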

Pmap with Euclidean Distance Operations

I am very new to the Julia programming language and I am testing out some Euclidean distance operations I typically perform in other languages. The functions work when called serially, but the pmap calls are not returning the desired results. Could someone take a look and let me know if I am going about this the right way? Is pmap even the best way to approach this?
using Distributed

# Example data
d1 = randn(50000, 3)
d2 = randn(50000, 3)
First Function: Euclidean Distance Matrix
function EDM(m1, m2)
    n1 = size(m1, 1)
    n2 = size(m2, 1)
    k = size(m1, 2)
    Dist = zeros(n1, n2)
    for i in 1:n1
        for j in 1:n2
            dtemp = 0
            for a in 1:k
                dtemp += (m1[i,a] - m2[j,a])^2
            end
            Dist[i,j] = sqrt(dtemp)
        end
    end
    return Dist
end
# pmap call
function pmap_EDM(m1, m2)
    return pmap(EDM, m1, m2)
end
Second Function: Minimum Euclidean Distances Unidirectional
function MED(m1, m2)
    n1 = size(m1, 1)
    n2 = size(m2, 1)
    k = size(m1, 2)
    Dist = zeros(n1, 1)
    for i in 1:n1
        dsum = Inf
        for j in 1:n2
            dtemp = 0
            for a in 1:k
                dtemp += (m1[i,a] - m2[j,a])^2
            end
            dtemp = sqrt(dtemp)
            if dtemp < dsum
                dsum = copy(dtemp)
            end
        end
        Dist[i,1] = dsum
    end
    return Dist
end
# pmap call
function pmap_MED(m1, m2)
    return pmap(MED, m1, m2)
end
Third Function: Minimum Euclidean Distances and Corresponding Indices Unidirectional
function MEDI(m1, m2)
    n1 = size(m1, 1)
    n2 = size(m2, 1)
    k = size(m1, 2)
    Dist = zeros(n1, 2)
    for i in 1:n1
        dsum = Inf
        dsum_ind = 0
        for j in 1:n2
            dtemp = 0
            for a in 1:k
                dtemp += (m1[i,a] - m2[j,a])^2
            end
            dtemp = sqrt(dtemp)
            if dtemp < dsum
                dsum = copy(dtemp)
                dsum_ind = copy(j)
            end
        end
        Dist[i,1] = dsum
        Dist[i,2] = dsum_ind
    end
    return Dist
end
# pmap call
function pmap_MEDI(m1, m2)
    return pmap(MEDI, m1, m2)
end
Calling functions
r1 = EDM(d1, d2)  # serial
r2 = pmap_EDM(d1, d2)
r3 = MED(d1, d2)  # serial
r4 = pmap_MED(d1, d2)
r5 = MEDI(d1, d2) # serial
r6 = pmap_MEDI(d1, d2)
Edited:
The first function should return a simple Euclidean distance matrix with the distances between each row in one array to every row in the second array. The second and third functions are deviations of this to return a subset of those distances based on the minimum distance for each row in one array to every other row in another array (with the third function returning the index position of the minimum distance). The distances do not appear to be calculated correctly and the latter two functions using pmap are returning an nx3 matrix rather than nx1 and nx2 respectively.
Edited 2: example using smaller data set to show results
d1 = randn(5,3)
d2 = randn(5,3)
julia> EDM(d1,d2)
5×5 Array{Float64,2}:
2.60637 3.18867 1.0745 2.60328 1.58608
1.2763 2.31037 3.04379 2.74113 2.00452
1.70024 2.07731 3.12397 2.60893 2.05932
2.44581 1.57345 0.910323 1.08718 0.407675
3.42936 1.13001 2.18345 1.08764 1.70883
julia> pmap_EDM(d1,d2)
5×3 Array{Array{Float64,2},2}:
[0.397928] [2.39283] [0.953501]
[1.06776] [0.815057] [1.87973]
[0.151963] [3.05161] [0.650967]
[0.571021] [0.275554] [0.883151]
[0.109293] [0.635398] [1.58254]
julia> MED(d1,d2)
5×1 Array{Float64,2}:
1.0744953977891307
1.2762979313081781
1.7002448697495505
0.40767454400155695
1.0876399289364607
julia> pmap_MED(d1,d2)
5×3 Array{Array{Float64,2},2}:
[0.397928] [2.39283] [0.953501]
[1.06776] [0.815057] [1.87973]
[0.151963] [3.05161] [0.650967]
[0.571021] [0.275554] [0.883151]
[0.109293] [0.635398] [1.58254]
julia> MEDI(d1,d2)
5×2 Array{Float64,2}:
1.0745 3.0
1.2763 1.0
1.70024 1.0
0.407675 5.0
1.08764 4.0
julia> pmap_MEDI(d1,d2)
5×3 Array{Array{Float64,2},2}:
[0.397928 1.0] [2.39283 1.0] [0.953501 1.0]
[1.06776 1.0] [0.815057 1.0] [1.87973 1.0]
[0.151963 1.0] [3.05161 1.0] [0.650967 1.0]
[0.571021 1.0] [0.275554 1.0] [0.883151 1.0]
[0.109293 1.0] [0.635398 1.0] [1.58254 1.0]
Edited 3: @distributed version of function two
using Distributed
using SharedArrays

# Minimum Euclidean Distances Unidirectional
@everywhere function MD(v1, m2)
    n = size(m2, 1)
    dsum = Inf
    for j in 1:n
        dtemp = sqrt((v1[1] - m2[j,1])^2 + (v1[2] - m2[j,2])^2 + (v1[3] - m2[j,3])^2)
        if dtemp < dsum
            dsum = dtemp
        end
    end
    return dsum
end

function MED(m1, m2)
    n1 = size(m1, 1)
    Dist = SharedArray{Float64}(n1)
    m3 = SharedArray{Float64}(m2)
    @sync @distributed for k in 1:n1
        Dist[k] = MD(m1[k,:], m3)
    end
    return Dist
end
I did not go into the details of your code, but could it be that you apply pmap at the wrong code level?
For instance if you have the following serial code
for i = 1:imax
    # do some work
end
You would write this as:
function function_for_single_iteration(i)
    # do some work
end

pmap(function_for_single_iteration, 1:imax)
Essentially, pmap replaces an (outer) for loop.
Before using pmap, I usually first use the serial map function to check that I have the same results.
Note that pmap and map would return a vector. In your case, probably a vector of vectors of distances. You would need to use cat to turn this into a matrix.
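A minimal sketch of that idea applied to the MED computation above (an illustration added in this edit, not the answerer's code; min_dist_row is a hypothetical helper, and worker processes are assumed to have been added with addprocs):

using Distributed
addprocs(4)

@everywhere function min_dist_row(v1, m2)
    # minimum Euclidean distance from the vector v1 to every row of m2
    dsum = Inf
    for j in 1:size(m2, 1)
        dtemp = 0.0
        for a in 1:length(v1)
            dtemp += (v1[a] - m2[j,a])^2
        end
        dsum = min(dsum, sqrt(dtemp))
    end
    return dsum
end

d1 = randn(1000, 3); d2 = randn(1000, 3)
rows = [d1[i, :] for i in 1:size(d1, 1)]       # one element per outer-loop iteration
result = pmap(v -> min_dist_row(v, d2), rows)  # Vector{Float64} of length size(d1, 1)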

How to avoid memory allocation in Julia?

Consider the following simple Julia code operating on four complex matrices:
n = 400
z = eye(Complex{Float64}, n)
id = eye(Complex{Float64}, n)
fc = map(x -> rand(Complex{Float64}), id)
cr = map(x -> rand(Complex{Float64}), id)
s = 0.1 + 0.1im
@time for j = 1:n
    for i = 1:n
        z[i,j] = id[i,j] - fc[i,j]^s * cr[i,j]
    end
end
The timing shows a few million memory allocations, despite all variables being preallocated:
0.072718 seconds (1.12 M allocations: 34.204 MB, 7.22% gc time)
How can I avoid all those allocations (and GC)?
One of the first tips for performant Julia code is to avoid using global variables. This alone can cut the number of allocations by a factor of 7. If you must use globals, one way to improve their performance is to declare them const. const prevents a change of type, but a change of value is still possible (with a warning).
Consider this modified code, still without using functions:
const n = 400
z = Array{Complex{Float64}}(n, n)
const id = eye(Complex{Float64}, n)
const fc = map(x -> rand(Complex{Float64}), id)
const cr = map(x -> rand(Complex{Float64}), id)
const s = 0.1 + 0.1im
@time for j = 1:n
    for i = 1:n
        z[i,j] = id[i,j] - fc[i,j]^s * cr[i,j]
    end
end
The timing shows this result:
0.028882 seconds (160.00 k allocations: 4.883 MB)
Not only did the number of allocations drop by a factor of 7, but the execution is also 2.2 times faster.
Now let's apply the second tip for high-performance Julia code: write everything in functions. Wrapping the above code in a function z_mat(n):
function z_mat(n)
    z = Array{Complex{Float64}}(n, n)
    id = eye(Complex{Float64}, n)
    fc = map(x -> rand(Complex{Float64}), id)
    cr = map(x -> rand(Complex{Float64}), id)
    s = 1.0 + 1.0im
    @time for j = 1:n
        for i = 1:n
            z[i,j] = id[i,j] - fc[i,j]^s * cr[i,j]
        end
    end
end
and running:
z_mat(40)
  0.000273 seconds
@time z_mat(400)
  0.027273 seconds
  0.032443 seconds (429 allocations: 9.779 MB)
That is 2610 times fewer allocations than the original code for the whole function, because the loop itself performs zero allocations.
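For reference, a rough sketch of the same benchmark in Julia 1.x (an addition in this edit; eye was removed in favor of I from LinearAlgebra and uninitialized arrays take undef, so the timings above are not directly comparable):

using LinearAlgebra

function z_mat(n)
    z  = Matrix{ComplexF64}(undef, n, n)
    id = Matrix{ComplexF64}(I, n, n)
    fc = rand(ComplexF64, n, n)
    cr = rand(ComplexF64, n, n)
    s  = 1.0 + 1.0im
    @time for j = 1:n, i = 1:n
        z[i,j] = id[i,j] - fc[i,j]^s * cr[i,j]
    end
    return z
end

z_mat(40)   # warm-up call so the timing below excludes compilation
z_mat(400)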

Conditional closures in Julia

In many applications of map(f, X), it helps to create closures that, depending on parameters, apply different functions f to the data X.
I can think of at least the following three ways to do this (note that the second for some reason does not work; a bug?):
f0(x, y) = x + y
f1(x, y, p) = x + y^p

function g0(power::Bool, X, y)
    if power
        f = x -> f1(x, y, 2.0)
    else
        f = x -> f0(x, y)
    end
    map(f, X)
end

function g1(power::Bool, X, y)
    if power
        f(x) = f1(x, y, 2.0)
    else
        f(x) = f0(x, y)
    end
    map(f, X)
end

abstract FunType
abstract PowerFun <: FunType
abstract NoPowerFun <: FunType

function g2{S<:FunType}(T::Type{S}, X, y)
    f(::Type{PowerFun}, x) = f1(x, y, 2.0)
    f(::Type{NoPowerFun}, x) = f0(x, y)
    map(x -> f(T, x), X)
end
X = 1.0:1000000.0
burnin0 = g0(true, X, 4.0) + g0(false, X, 4.0);
burnin1 = g1(true, X, 4.0) + g1(false, X, 4.0);
burnin2 = g2(PowerFun, X, 4.0) + g2(NoPowerFun, X, 4.0);
@time r0true = g0(true, X, 4.0);   # 0.019515 seconds (12 allocations: 7.630 MB)
@time r0false = g0(false, X, 4.0); # 0.002984 seconds (12 allocations: 7.630 MB)
@time r1true = g1(true, X, 4.0);   # 0.004517 seconds (8 allocations: 7.630 MB, 26.28% gc time)
@time r1false = g1(false, X, 4.0); # UndefVarError: f not defined
@time r2true = g2(PowerFun, X, 4.0);    # 0.085673 seconds (2.00 M allocations: 38.147 MB, 3.90% gc time)
@time r2false = g2(NoPowerFun, X, 4.0); # 0.234087 seconds (2.00 M allocations: 38.147 MB, 60.61% gc time)
What is the optimal way to do this in Julia?
There's no need to use map here at all. Using a closure doesn't make things simpler or faster. Just use "dot-broadcasting" to apply the functions directly:
function g3(X, y, power=1)
    if power != 1
        return f1.(X, y, power) # or simply X .+ y^power
    else
        return f0.(X, y)        # or simply X .+ y
    end
end
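A short usage sketch (added in this edit, assuming the f0 and f1 definitions from the question):

f0(x, y) = x + y
f1(x, y, p) = x + y^p

X = 1.0:1000000.0
r_pow   = g3(X, 4.0, 2.0) # broadcasts f1: each element becomes x + 4.0^2.0
r_plain = g3(X, 4.0)      # broadcasts f0: each element becomes x + 4.0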

How can I do a bitwise-or reduction along an axis of a boolean array in Julia?

I'm trying to find the best way to do a bitwise-or reduction of a 3D boolean array of masks to 2D in Julia.
I can always write a for loop, of course:
x = randbool(3,3,3)
out = copy(x[:,:,1])
for i = 1:3
    for j = 1:3
        for k = 2:3
            out[i,j] |= x[i,j,k]
        end
    end
end
But I'm wondering if there is a better way to do the reduction.
A simple answer would be
out = x[:,:,1] | x[:,:,2] | x[:,:,3]
but I did some benchmarking:
function simple(n,x)
    out = x[:,:,1] | x[:,:,2]
    for k = 3:n
        @inbounds out |= x[:,:,k]
    end
    return out
end
function forloops(n,x)
    out = copy(x[:,:,1])
    for i = 1:n
        for j = 1:n
            for k = 2:n
                @inbounds out[i,j] |= x[i,j,k]
            end
        end
    end
    return out
end
function forloopscolfirst(n,x)
    out = copy(x[:,:,1])
    for j = 1:n
        for i = 1:n
            for k = 2:n
                @inbounds out[i,j] |= x[i,j,k]
            end
        end
    end
    return out
end
shorty(n,x) = |([x[:,:,i] for i in 1:n]...)
timholy(n,x) = any(x,3)
function runtest(n)
    x = randbool(n,n,n)
    @time out1 = simple(n,x)
    @time out2 = forloops(n,x)
    @time out3 = forloopscolfirst(n,x)
    @time out4 = shorty(n,x)
    @time out5 = timholy(n,x)
    println(all(out1 .== out2))
    println(all(out1 .== out3))
    println(all(out1 .== out4))
    println(all(out1 .== out5))
end
runtest(3)
runtest(500)
which gave the following results
# For 500
simple: 0.039403016 seconds (39716840 bytes allocated)
forloops: 6.259421683 seconds (77504 bytes allocated)
forloopscolfirst: 1.809124505 seconds (77504 bytes allocated)
shorty: 0.050384062 seconds (39464608 bytes allocated)
timholy: 2.396887396 seconds (31784 bytes allocated)
So I'd go with simple or shorty.
Try any(x, 3). Just typing a little more here so StackOverflow doesn't nix this response.
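(A side note added in this edit, not part of the original answer: in Julia 1.x the reduction dimension is passed as a keyword, randbool is gone, and the singleton dimension can be dropped explicitly.)

x = rand(Bool, 3, 3, 3)                 # randbool(3,3,3) in the old versions above
out = dropdims(any(x; dims=3), dims=3)  # 3×3 boolean matrix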
There are various standard optimization tricks and hints that can be applied, but the critical observation to make here is that Julia organizes arrays in column-major rather than row-major order. For small arrays this is not easily seen, but when the arrays grow large it's telling. There is a function reduce that is optimized to apply a function over a collection (in this case OR), but it comes at a cost. If the number of combining steps is relatively small, it's better to simply loop. In all cases, minimizing the number of memory accesses is better overall. Below are various attempts at optimization with these two things in mind.
Various Attempts and Observations
Initial function
Here's a function that takes your example and generalizes it.
function boolReduce1(x)
    out = copy(x[:,:,1])
    for i = 1:size(x,1)
        for j = 1:size(x,2)
            for k = 2:size(x,3)
                out[i,j] |= x[i,j,k]
            end
        end
    end
    out
end
Creating a fairly large array (b = randbool(2000,2000,20), as used in the timings below), we can time its performance:
julia> @time boolReduce1(b);
elapsed time: 42.372058096 seconds (1056704 bytes allocated)
Applying optimizations
Here's another similar version, but with the standard type hints, use of @inbounds, and the loop order inverted.
function boolReduce2(b::BitArray{3})
    a = BitArray{2}(size(b)[1:2]...)
    for j = 1:size(b,2)
        for i = 1:size(b,1)
            @inbounds a[i,j] = b[i,j,1]
            for k = 2:size(b,3)
                @inbounds a[i,j] |= b[i,j,k]
            end
        end
    end
    a
end
And take the time
julia> @time boolReduce2(b);
elapsed time: 12.892392891 seconds (500520 bytes allocated)
The insight
The 2nd function is a lot faster, and less memory is allocated because no temporary array was created. But what if we simply take the first function and swap the order of the loops?
function boolReduce3(x)
    out = copy(x[:,:,1])
    for j = 1:size(x,2)
        for i = 1:size(x,1)
            for k = 2:size(x,3)
                out[i,j] |= x[i,j,k]
            end
        end
    end
    out
end
and take the time now:
julia> @time boolReduce3(b);
elapsed time: 12.451501749 seconds (1056704 bytes allocated)
That's just as fast as the 2nd function.
Using reduce
There is a function called reduce that we can use to eliminate the 3rd loop. It repeatedly applies an operation to the elements of a collection, each time combining the next element with the result of the previous application. This is exactly what we want.
function boolReduce4(b)
    a = BitArray{2}(size(b)[1:2]...)
    for j = 1:size(b,2)
        for i = 1:size(b,1)
            @inbounds a[i,j] = reduce(|, b[i,j,:])
        end
    end
    a
end
Now take its time:
julia> @time boolReduce4(b);
elapsed time: 15.828273008 seconds (1503092520 bytes allocated, 4.07% gc time)
That's OK, but not even as fast as the simple optimized original. The reason: look at all of the extra memory that was allocated. Data has to be copied from all over to produce the input for reduce.
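(An aside added in this edit, not part of the original answer: in later Julia versions the slice copy can be avoided with view, as in this variant of boolReduce4.)

function boolReduce4v(b)
    a = falses(size(b,1), size(b,2))
    for j = 1:size(b,2), i = 1:size(b,1)
        # view references the slice b[i,j,:] without materializing a copy
        @inbounds a[i,j] = reduce(|, view(b, i, j, :))
    end
    a
end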
Combining things
But what if we push the insight as far as we can: instead of reducing over the last index, reduce over the first?
function boolReduceX(b)
    a = BitArray{2}(size(b)[2:3]...)
    for j = 1:size(b,3)
        for i = 1:size(b,2)
            @inbounds a[i,j] = reduce(|, b[:,i,j])
        end
    end
    a
end
And now create a similar array and time it.
julia> c = randbool(200,2000,2000);
julia> @time boolReduceX(c);
elapsed time: 1.877547669 seconds (927092520 bytes allocated, 21.66% gc time)
Resulting in a function 20x faster than the original version for large arrays. Pretty good.
But what about medium sizes?
If the size is very large then the above function appears best, but if the data set is smaller, the use of reduce doesn't pay back enough and the following is faster. Keeping the running value in a temporary variable speeds things up compared to version 2. Another version of boolReduceX using a loop instead of reduce (not shown here) was even faster.
function boolReduce5(b)
    a = BitArray{2}(size(b)[1:2]...)
    for j = 1:size(b,2)
        for i = 1:size(b,1)
            @inbounds t = b[i,j,1]
            for k = 2:size(b,3)
                @inbounds t |= b[i,j,k]
            end
            @inbounds a[i,j] = t
        end
    end
    a
end
julia> b = randbool(2000,2000,20);
julia> c = randbool(20,2000,2000);
julia> @time boolReduceX(c);
elapsed time: 1.535334322 seconds (799092520 bytes allocated, 23.79% gc time)
julia> @time boolReduce5(b);
elapsed time: 0.491410981 seconds (500520 bytes allocated)
It is faster to devectorize. It's just a matter of how much work you want to put in. The naïve devectorized approach is slow because it's a BitArray: extracting contiguous regions and bitwise OR can both be done a 64-bit chunk at a time, but the naïve devectorized approach operates an element at a time. On top of that, indexing BitArrays is slow, both because there is a sequence of bit operations involved and because it can't presently be inlined due to the bounds check. Here's a strategy that is devectorized but exploits the structure of the BitArray. Most of the code is copy-pasted from copy_chunks! in bitarray.jl and I didn't try to prettify it (sorry!).
function devec(n::Int, x::BitArray)
    src = x.chunks
    out = falses(n, n)
    dest = out.chunks
    numbits = n*n
    kd0 = 1
    ld0 = 0
    for j = 1:n
        pos_s = (n*n)*(j-1)+1
        kd1, ld1 = Base.get_chunks_id(numbits - 1)
        ks0, ls0 = Base.get_chunks_id(pos_s)
        ks1, ls1 = Base.get_chunks_id(pos_s + numbits - 1)
        delta_kd = kd1 - kd0
        delta_ks = ks1 - ks0
        u = Base._msk64
        if delta_kd == 0
            msk_d0 = ~(u << ld0) | (u << (ld1+1))
        else
            msk_d0 = ~(u << ld0)
            msk_d1 = (u << (ld1+1))
        end
        if delta_ks == 0
            msk_s0 = (u << ls0) & ~(u << (ls1+1))
        else
            msk_s0 = (u << ls0)
        end
        chunk_s0 = Base.glue_src_bitchunks(src, ks0, ks1, msk_s0, ls0)
        dest[kd0] |= (dest[kd0] & msk_d0) | ((chunk_s0 << ld0) & ~msk_d0)
        delta_kd == 0 && continue
        for i = 1 : kd1 - kd0
            chunk_s1 = Base.glue_src_bitchunks(src, ks0 + i, ks1, msk_s0, ls0)
            chunk_s = (chunk_s0 >>> (64 - ld0)) | (chunk_s1 << ld0)
            dest[kd0 + i] |= chunk_s
            chunk_s0 = chunk_s1
        end
    end
    out
end
With Iain's benchmarks, this gives me:
simple: 0.051321131 seconds (46356000 bytes allocated, 30.03% gc time)
forloops: 6.226652258 seconds (92976 bytes allocated)
forloopscolfirst: 2.099381939 seconds (89472 bytes allocated)
shorty: 0.060194226 seconds (46387760 bytes allocated, 36.27% gc time)
timholy: 2.464298752 seconds (31784 bytes allocated)
devec: 0.008734413 seconds (31472 bytes allocated)
