In Julia, how to skip certain numbers in a for loop

In a Julia for loop, I want to skip the numbers that are divisible by 500. For example, in the loop below, I want to skip i = 500, 1000, 1500, 2000, 2500, ..., 10000. How can I do that?
n=10000
result = zeros(n)
for i = 1:n
result[i] = i
end

Just use continue:
for i = 1:n
iszero(i%500) && continue
result[i] = i
end

A more efficient option than continue is to use Iterators.filter to create a lazy iterator that filters out the values you don't want.
For the example given in the OP, this would be:
n=10000
result = zeros(n)
iter = Iterators.filter(i -> (i % 500) != 0, 1:n)
for i = iter
result[i] = i
end
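Note the difference from Base's eager filter, which materializes a vector before the loop even starts; Iterators.filter allocates nothing up front. A quick sketch:

# Eager vs. lazy filtering (both yield the same values):
eager = filter(i -> i % 500 != 0, 1:n)            # materializes a new Vector
lazy  = Iterators.filter(i -> i % 500 != 0, 1:n)  # nothing allocated until iterated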
A benchmark comparing with @jling's answer:
julia> @btime for i = 1:n
iszero(i%500) && continue
result[i] = i
end
1.693 ms (28979 allocations: 609.06 KiB)
julia> @btime for i = iter
result[i] = i
end
1.177 ms (28920 allocations: 607.81 KiB)
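One caveat worth noting (our observation, not part of the original answers): both timings above run at global scope, which is why a trivial loop reports tens of thousands of allocations from untyped globals. Wrapped in a function, the per-iteration allocations disappear:

# Hypothetical wrapper to avoid untyped-global overhead in benchmarks:
function fill_skip!(result, n)
    for i = 1:n
        iszero(i % 500) && continue
        result[i] = i
    end
    return result
end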

While the answer by @jling is perfectly correct, sometimes you have an explicit list of values that you want to skip. In such cases Not may be an elegant solution:
julia> skipvals = [2,4,5];
julia> for i in (1:7)[Not(skipvals)]
print("$i ")
end
1 3 6 7
Not is part of the InvertedIndices package, but it is also re-exported via DataFrames, so in most data-science code it is already there for you to use :)
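If you prefer to stay in Base, setdiff gives the same effect at the cost of materializing a vector (a minimal sketch, no packages assumed):

julia> skipvals = [2,4,5];
julia> for i in setdiff(1:7, skipvals)
print("$i ")
end
1 3 6 7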

Related

Fast summation of symmetric matrix in Julia

I want to sum all elements in a matrix A of dimension n×n. The matrix is symmetric and has 0s on the diagonal. The fastest way I have found to do so is simply
sum(A). However, this seems wasteful, since it doesn't use the fact that I only need to sum the lower triangle of the matrix. However, sum(tril(A, -1)) is significantly slower, and sum(A[i, j] for i = 1:n-1 for j = i+1:n) even more so. Is there a more efficient way to sum the matrix?
Edit: The solution by @AboAmmar performs well. Here is code to compare (with the diagonal summed separately, which can be removed if the diagonal is all zeros):
using BenchmarkTools
using LinearAlgebra
function sum_triu(A)
m, n = size(A)
@assert m == n
s = zero(eltype(A))
for j = 2:n
@simd for i = 1:j-1
s += @inbounds A[i,j]
end
end
s *= 2
for i = 1:n
s += A[i, i]
end
return s
end
N = 1000
A = Symmetric(rand(0:9,N,N))
A -= diagm(diag(A))
@btime sum(A)
@btime 2 * sum(tril(A))
@btime sum_triu(A)
This is 2.7X faster than sum for an n = 1000 matrix. Make sure to add @simd before the inner loop and use @inbounds. Also, use the correct loop order for fast memory access.
function sum_triu(A)
m, n = size(A)
@assert m == n
s = zero(eltype(A))
for j = 1:n
@simd for i = 1:j
s += @inbounds A[i,j]
end
end
return 2 * s
end
Example run on my PC:
sum_triu(A) = 499268.7328022966
sum(A) = 499268.73280229873
93.000 μs (0 allocations: 0 bytes)
249.900 μs (0 allocations: 0 bytes)
How about
2 * sum(LowerTriangular(A))
help?> LA.LowerTriangular
LowerTriangular(A::AbstractMatrix)
Construct a LowerTriangular view of the matrix A.
tril creates a new matrix, which allocates memory. Since LowerTriangular is a view into the existing matrix, there is no allocation.
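A minimal sketch of that allocation difference (assuming a 1000×1000 Float64 matrix; the size in the comment is approximate):

using LinearAlgebra

A = rand(1000, 1000)  # any square matrix works for this check

@allocated sum(tril(A, -1))         # tril materializes a full 1000×1000 copy (~8 MB)
@allocated sum(LowerTriangular(A))  # the wrapper reads A in place; no copy is made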

Improving the speed of a for loop in Julia

Here is my code on the Julia platform, and I would like to speed it up. Is there any way to make this faster? It takes 0.5 seconds for a dataset of 50k*50k. I was expecting Julia to be a lot faster than this, so I am not sure whether I am doing something silly in my implementation.
ar = [[1,2,3,4,5], [2,3,4,5,6,7,8], [4,7,8,9], [9,10], [2,3,4,5]]
SV = rand(10,5)
function h_score_0(ar ,SV)
m = length(ar)
SC = Array{Float64,2}(undef, size(SV, 2), m)
for iter = 1:m
nodes = ar[iter]
for jj = 1:size(SV, 2)
mx = maximum(SV[nodes, jj])
mn = minimum(SV[nodes, jj])
term1 = (mx - mn)^2;
SC[jj, iter] = (term1);
end
end
return score = sum(SC, dims = 1)
end
You have some unnecessary allocations in your code:
mx = maximum(SV[nodes, jj])
mn = minimum(SV[nodes, jj])
Slices allocate, so each of these lines makes a copy of the data; you're actually copying the data twice, once on each line. You can either make sure to copy only once, or better still, use view so there is no copy at all (note that view is much faster on Julia v1.5, in case you are using an older version).
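A minimal sketch of the difference (the function names here are illustrative, not part of the original answer):

f_slice(SV, nodes, j) = maximum(SV[nodes, j])        # the slice allocates a copy
f_view(SV, nodes, j)  = maximum(view(SV, nodes, j))  # the view reads the data in place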
SC = Array{Float64,2}(undef, size(SV, 2), m)
And there is no reason to create a matrix here and sum over it afterwards; just accumulate while you iterate:
score[i] += (mx - mn)^2
Here's a function that is >5x as fast on my laptop for the input data you specified:
function h_score_1(ar, SV)
score = zeros(eltype(SV), length(ar))
@inbounds for i in eachindex(ar)
nodes = ar[i]
for j in axes(SV, 2)
SVview = view(SV, nodes, j)
mx = maximum(SVview)
mn = minimum(SVview)
score[i] += (mx - mn)^2
end
end
return score
end
This function outputs a one-dimensional vector instead of the 1×N matrix your original function returns.
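If the 1×N shape matters downstream, it can be recovered without copying (a quick sketch):

score_row = reshape(score, 1, :)  # 1×N reshape; shares the underlying data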
In principle, this could be even faster if we replace
mx = maximum(SVview)
mn = minimum(SVview)
with
(mn, mx) = extrema(SVview)
which only traverses the vector once, instead of twice. Unfortunately, there is a performance issue with extrema, so it is currently not as fast as separate maximum/minimum calls: https://github.com/JuliaLang/julia/issues/31442
Finally, to squeeze out the best possible performance at the cost of brevity, we can avoid creating a view at all and turn the calls to maximum and minimum into a single explicit loop traversal:
function h_score_2(ar, SV)
score = zeros(eltype(SV), length(ar))
@inbounds for i in eachindex(ar)
nodes = ar[i]
for j in axes(SV, 2)
mx, mn = -Inf, +Inf
for node in nodes
x = SV[node, j]
mx = ifelse(x > mx, x, mx)
mn = ifelse(x < mn, x, mn)
end
score[i] += (mx - mn)^2
end
end
return score
end
This also avoids the performance issue that extrema suffers, and looks up the SV element once per node. Although this version is annoying to write, it's substantially faster, even on Julia 1.5 where views are free. Here are some benchmark timings with your test data:
julia> using BenchmarkTools
julia> @btime h_score_0($ar, $SV)
2.344 μs (52 allocations: 6.19 KiB)
1×5 Matrix{Float64}:
1.95458 2.94592 2.79438 0.709745 1.85877
julia> @btime h_score_1($ar, $SV)
392.035 ns (1 allocation: 128 bytes)
5-element Vector{Float64}:
1.9545848011260765
2.9459235098820167
2.794383144368953
0.7097448590904598
1.8587691646610984
julia> @btime h_score_2($ar, $SV)
118.243 ns (1 allocation: 128 bytes)
5-element Vector{Float64}:
1.9545848011260765
2.9459235098820167
2.794383144368953
0.7097448590904598
1.8587691646610984
So explicitly writing out the innermost loop is worth it here, reducing time by another 3x or so. It's annoying that the Julia compiler isn't yet able to generate code this efficient, but it does get smarter with every version. On the other hand, the explicit loop version will be fast forever, so if this code is really performance critical, it's probably worth writing it out like this.

List of vectors slower in Julia than R?

I tried to speed up an R function by porting it to Julia, but to my surprise Julia was slower. The function sequentially updates a list of vectors (array of arrays in Julia). Beforehand the index of the list element to be updated is unknown and the length of the new vector is unknown.
I have written a test function that demonstrates the behavior.
Julia
function MyTest(n)
a = [[0.0] for i in 1:n]
for i in 1:n
a[i] = cumsum(ones(i))
end
a
end
R
MyTest <- function(n){
a <- as.list(rep(0, n))
for (i in 1:n)
a[[i]] <- cumsum(rep(1, i))
a
}
By setting n to 5000, 10000 and 20000, typical computing times are (median of 21 tests):
R: 0.14, 0.45, and 1.28 seconds
Julia: 0.31, 3.38, and 27.03 seconds
I used a windows-laptop with 64 bit Julia-1.3.1 and 64 bit R-3.6.1.
Both of these functions use 64-bit floating-point types. My real problem involves integers, where R is even more favorable. But the integer comparison isn't fair, since R uses 32-bit integers and Julia 64-bit.
Is there something I can do to speed up Julia, or is Julia really much slower than R in this case?
I don't quite see how you get your test results. Assuming you want 32 bit integers, as you said, then we have
julia> function mytest(n)
a = Vector{Vector{Int32}}(undef, n)
for i in 1:n
a[i] = cumsum(ones(i))
end
return a
end
mytest (generic function with 1 method)
julia> @btime mytest(20000);
1.108 s (111810 allocations: 3.73 GiB)
Just by getting rid of those allocations (cumsum(ones(i)) allocates a Float64 vector for ones, another for cumsum, and a third for the conversion to Int32, while collect(UnitRange{Int32}(1, i)) allocates only the final Int32 vector), we already get down to the following:
julia> function mytest(n)
a = Vector{Vector{Int32}}(undef, n)
@inbounds for i in 1:n
a[i] = collect(UnitRange{Int32}(1, i))
end
return a
end
mytest (generic function with 1 method)
julia> @btime mytest(20000);
115.702 ms (35906 allocations: 765.40 MiB)
Further devectorization does not even help:
julia> function mytest(n)
a = Vector{Vector{Int32}}(undef, n)
@inbounds for i in 1:n
v = Vector{Int32}(undef, i)
v[1] = 1
@inbounds for j = 2:i
v[j] = v[j-1] + 1
end
a[i] = v
end
return a
end
mytest (generic function with 1 method)
julia> @btime mytest(20000);
188.856 ms (35906 allocations: 765.40 MiB)
But with a couple of threads (I assume the inner arrays are independent), we get a 2x speed-up:
julia> Threads.nthreads()
4
julia> function mytest(n)
a = Vector{Vector{Int32}}(undef, n)
Threads.@threads for i in 1:n
v = Vector{Int32}(undef, i)
v[1] = 1
@inbounds for j = 2:i
v[j] = v[j-1] + 1
end
a[i] = v
end
return a
end
mytest (generic function with 1 method)
julia> @btime mytest(20000);
99.718 ms (35891 allocations: 763.13 MiB)
But this is only about as fast as the second variant above.
That is, for the specific case of cumsum. Other inner functions are slower, of course, but can be equally threaded, and optimized in the same ways, with possibly different results.
(This is on Julia 1.2, 12 GiB RAM, and an older i7.)
Perhaps R is doing some type of buffering for such simple functions?
Here is the Julia version with buffering:
using Memoize
@memoize function cumsum_ones(i)
cumsum(ones(i))
end
function MyTest2(n)
a = Vector{Vector{Float64}}(undef, n)
for i in 1:n
a[i] = cumsum_ones(i)
end
a
end
With the cache warmed up, the timings look like this:
julia> @btime MyTest2(5000);
442.500 μs (10002 allocations: 195.39 KiB)
julia> @btime MyTest2(10000);
939.499 μs (20002 allocations: 390.70 KiB)
julia> @btime MyTest2(20000);
3.554 ms (40002 allocations: 781.33 KiB)
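One caveat with the memoized version (expected from how Memoize caches results): repeated calls return the same cached array object rather than a fresh copy, so entries of a built from the same i alias each other, unlike in the R version, where each cumsum allocates a fresh vector:

julia> cumsum_ones(5) === cumsum_ones(5)  # same cached object, not a copy
true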

Julia: Searching for a column in a sorted matrix

I have a matrix that is sorted like the one shown below
1 1 2 2 3
1 2 3 4 1
2 1 2 1 1
It's a bit hard for me to describe the ordering, but hopefully it's clear from the example. The rough idea is that we first sort on the first row, then the second, and so on; i.e., the columns are in lexicographic order, as produced by sortslices(matrix, dims=2).
I would like to find a specific column in the matrix, and that column may or may not exist in it.
I tried the following code:
index = searchsortedfirst(1:total_cols, col, lt=(index,x) -> (matrix[:, index] < x))
The above code works, but it is slow. I profiled the code, and it spends a lot of time in "_get_index". I then tried the following
@views index = searchsortedfirst(1:total_cols, col, lt=(index,x) -> (matrix[:, index] < x))
As expected this helped a lot, likely due to the slices I'm taking. However, is there a better way to go about this? There still seems to be a lot of overhead, and I feel there might be a cleaner way to write it that would also be easier to optimize. That said, I absolutely value speed over clarity.
Here is some code I wrote to compare binary vs. linear search.
using Profile
function test_search()
max_val = 20
rows = 4
matrix = rand(1:max_val, rows, 10^5)
matrix = Array{Int64,2}(sortslices(matrix, dims=2))
indices = @time @profile lin_search(matrix, rows, max_val, 10^3)
indices = @time @profile bin_search(matrix, rows, max_val, 10^3)
end
function bin_search(matrix, rows, max_val, repeats)
indices = zeros(repeats)
x = zeros(Int64, rows)
cols = size(matrix)[2]
for i = 1:repeats
x = rand(1:max_val, rows)
@inbounds @views index = searchsortedfirst(1:cols, x, lt=(index,x)->(matrix[:,index] < x))
indices[i] = index
end
return indices
end
function array_eq(matrix, index, y, rows)
for i=1:rows
@inbounds if view(matrix, i, index) != y[i]
return false
end
end
return true
end
function lin_search(matrix, rows, max_val, repeats)
indices = zeros(repeats)
x = zeros(Int64, rows)
cols = size(matrix)[2]
for i = 1:repeats
index = cols + 1
x = rand(1:max_val, rows)
for j=1:cols
if array_eq(matrix, j, x, rows)
index = j;
break
end
end
indices[i] = index
end
return indices
end
Profile.clear()
test_search()
Here is some sample output
0.041356 seconds (68.90 k allocations: 3.431 MiB)
0.070224 seconds (110.45 k allocations: 5.418 MiB)
After adding some more @inbounds, it looks like a linear search is faster than binary. Seems strange when there are 10^5 columns.
If speed is most important, why not simply use the fact that Julia allows you to write fast loops?
julia> function findcol(M, col)
@inbounds @views for c in axes(M, 2)
M[:,c] == col && return c
end
return nothing
end
findcol (generic function with 1 method)
julia> col = [2,3,2];
julia> M = [1 1 2 2 3;
1 2 3 4 1;
2 1 2 1 1];
julia> @btime findcol($M, $col)
32.854 ns (3 allocations: 144 bytes)
3
This should probably be fast enough and does not even take into account any ordering.
I discovered two issues, that when fixed result in both linear and binary searches being much faster. And the binary search becomes faster than linear.
First, there was some type instability. I changed one of the lines to
matrix::Array{Int64,2} = Array{Int64,2}(sortslices(matrix, dims=2))
This resulted in an order-of-magnitude speedup. It also turns out that using @views does not do anything in the following code:
@inbounds @views index = searchsortedfirst(1:cols, x, lt=(index,x)->(matrix[:,index] < x))
I am new to Julia, but my hunch is that matrix[:,index] gets copied no matter what when it appears inside the anonymous function, which would make sense given how closures capture variables.
If I write a separate, non-anonymous comparison function, that copy goes away. The linear search never copied slices in the first place, so this fix mainly sped up the binary search.
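A minimal sketch of that change (col_lt is a hypothetical helper name, not from the original post; it compares the column in place, element by element, instead of slicing):

function col_lt(matrix, index, x)
    # Hypothetical helper: lexicographic compare of column `index` against
    # vector `x`, with no slice copy.
    for i in axes(matrix, 1)
        matrix[i, index] < x[i] && return true
        matrix[i, index] > x[i] && return false
    end
    return false  # equal columns are not strictly less
end

index = searchsortedfirst(1:total_cols, col, lt=(c, v) -> col_lt(matrix, c, v))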

How can I do a bitwise-or reduction along an axis of a boolean array in Julia?

I'm trying to find the best way to do a bitwise-or reduction of a 3D boolean array of masks to 2D in Julia.
I can always write a for loop, of course:
x = randbool(3,3,3)
out = copy(x[:,:,1])
for i = 1:3
for j = 1:3
for k = 2:3
out[i,j] |= x[i,j,k]
end
end
end
But I'm wondering if there is a better way to do the reduction.
A simple answer would be
out = x[:,:,1] | x[:,:,2] | x[:,:,3]
but I did some benchmarking:
function simple(n,x)
out = x[:,:,1] | x[:,:,2]
for k = 3:n
@inbounds out |= x[:,:,k]
end
return out
end
function forloops(n,x)
out = copy(x[:,:,1])
for i = 1:n
for j = 1:n
for k = 2:n
@inbounds out[i,j] |= x[i,j,k]
end
end
end
return out
end
function forloopscolfirst(n,x)
out = copy(x[:,:,1])
for j = 1:n
for i = 1:n
for k = 2:n
@inbounds out[i,j] |= x[i,j,k]
end
end
end
return out
end
shorty(n,x) = |([x[:,:,i] for i in 1:n]...)
timholy(n,x) = any(x,3)
function runtest(n)
x = randbool(n,n,n)
@time out1 = simple(n,x)
@time out2 = forloops(n,x)
@time out3 = forloopscolfirst(n,x)
@time out4 = shorty(n,x)
@time out5 = timholy(n,x)
println(all(out1 .== out2))
println(all(out1 .== out3))
println(all(out1 .== out4))
println(all(out1 .== out5))
end
runtest(3)
runtest(500)
which gave the following results
# For 500
simple: 0.039403016 seconds (39716840 bytes allocated)
forloops: 6.259421683 seconds (77504 bytes allocated)
forloopscolfirst 1.809124505 seconds (77504 bytes allocated)
shorty: elapsed time: 0.050384062 seconds (39464608 bytes allocated)
timholy: 2.396887396 seconds (31784 bytes allocated)
So I'd go with simple or shorty
Try any(x, 3). Just typing a little more here so StackOverflow doesn't nix this response.
There are various standard optimization tricks and hints that can be applied, but the critical observation to make here is that Julia stores arrays in column-major rather than row-major order. For small arrays this is not easily seen, but as the arrays grow large it becomes telling. There is a reduce method provided that is optimized to apply a function (in this case OR) across a collection, but it comes at a cost. If the number of combining steps is relatively small, it is better to simply loop. In all cases, minimizing the number of memory accesses matters most. Below are various attempts at optimization with these two things in mind.
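To make the column-major point concrete, a minimal standalone sketch (array name and sizes are arbitrary, not from the original answer): the only difference between these two functions is the loop order, yet the first touches memory contiguously while the second strides.

function sum_colmajor(A)
    s = zero(eltype(A))
    for j = 1:size(A,2), i = 1:size(A,1)  # inner loop over i walks down columns: contiguous
        s += A[i,j]
    end
    return s
end

function sum_rowmajor(A)
    s = zero(eltype(A))
    for i = 1:size(A,1), j = 1:size(A,2)  # inner loop over j strides across memory: slow
        s += A[i,j]
    end
    return s
end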
Various Attempts and Observations
Initial function
Here's a function that takes your example and generalizes it.
function boolReduce1(x)
out = copy(x[:,:,1])
for i = 1:size(x,1)
for j = 1:size(x,2)
for k = 2:size(x,3)
out[i,j] |= x[i,j,k]
end
end
end
out
end
Creating a fairly large array b (e.g. b = randbool(2000,2000,20), as in the timings at the end), we can time its performance:
julia> @time boolReduce1(b);
elapsed time: 42.372058096 seconds (1056704 bytes allocated)
Applying optimizations
Here's a similar version, but with standard type hints, use of @inbounds, and the loops inverted.
function boolReduce2(b::BitArray{3})
a = BitArray{2}(size(b)[1:2]...)
for j = 1:size(b,2)
for i = 1:size(b,1)
@inbounds a[i,j] = b[i,j,1]
for k = 2:size(b,3)
@inbounds a[i,j] |= b[i,j,k]
end
end
end
a
end
And take the time
julia> @time boolReduce2(b);
elapsed time: 12.892392891 seconds (500520 bytes allocated)
The insight
The 2nd function is a lot faster, and less memory is allocated because no temporary array was created. But what if we simply take the first function and invert the array indexing?
function boolReduce3(x)
out = copy(x[:,:,1])
for j = 1:size(x,2)
for i = 1:size(x,1)
for k = 2:size(x,3)
out[i,j] |= x[i,j,k]
end
end
end
out
end
and take the time now
julia> @time boolReduce3(b);
elapsed time: 12.451501749 seconds (1056704 bytes allocated)
That's just as fast as the 2nd function.
Using reduce
There is a function called reduce that we can use to eliminate the 3rd loop. It repeatedly applies an operation, combining each element with the result of the previous application. This is exactly what we want.
function boolReduce4(b)
a = BitArray{2}(size(b)[1:2]...)
for j = 1:size(b,2)
for i = 1:size(b,1)
@inbounds a[i,j] = reduce(|,b[i,j,:])
end
end
a
end
Now take its time:
julia> @time boolReduce4(b);
elapsed time: 15.828273008 seconds (1503092520 bytes allocated, 4.07% gc time)
That's OK, but not even as fast as the simple optimized original. The reason can be seen in all of the extra memory that was allocated: data has to be copied from all over to produce the input for reduce.
Combining things
But what if we push the insight as far as we can, and reduce over the first index instead of the last?
function boolReduceX(b)
a = BitArray{2}(size(b)[2:3]...)
for j = 1:size(b,3)
for i = 1:size(b,2)
@inbounds a[i,j] = reduce(|,b[:,i,j])
end
end
a
end
And now create a similar array and time it.
julia> c = randbool(200,2000,2000);
julia> @time boolReduceX(c);
elapsed time: 1.877547669 seconds (927092520 bytes allocated, 21.66% gc time)
The result is a function 20x faster than the original version for large arrays. Pretty good.
But what if medium size?
If the size is very large then the above function appears best, but if the data set is smaller, the use of reduce doesn't pay for itself and the following is faster. Introducing a temporary variable speeds things up over version 2. Another version of boolReduceX using a loop instead of reduce was faster still; a sketch of such a version appears after the timings below.
function boolReduce5(b)
a = BitArray{2}(size(b)[1:2]...)
for j = 1:size(b,2)
for i = 1:size(b,1)
@inbounds t = b[i,j,1]
for k = 2:size(b,3)
@inbounds t |= b[i,j,k]
end
@inbounds a[i,j] = t
end
end
a
end
julia> b = randbool(2000,2000,20);
julia> c = randbool(20,2000,2000);
julia> @time boolReduceX(c);
elapsed time: 1.535334322 seconds (799092520 bytes allocated, 23.79% gc time)
julia> @time boolReduce5(b);
elapsed time: 0.491410981 seconds (500520 bytes allocated)
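For reference, here is a sketch of what such a loop-based boolReduceX might look like (our reconstruction, not the original author's code): it reduces over the first index with an explicit inner loop, keeping the contiguous memory access.

# Reconstruction of the loop-based boolReduceX mentioned above:
function boolReduceXloop(b)
    a = BitArray{2}(size(b)[2:3]...)
    for j = 1:size(b,3)
        for i = 1:size(b,2)
            @inbounds t = b[1,i,j]
            for k = 2:size(b,1)  # first index varies fastest: contiguous reads
                @inbounds t |= b[k,i,j]
            end
            @inbounds a[i,j] = t
        end
    end
    a
end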
It is faster to devectorize; it's just a matter of how much work you want to put in. The naïve devectorized approach is slow because it's a BitArray: extracting contiguous regions and bitwise OR can both be done a 64-bit chunk at a time, but the naïve devectorized approach operates one element at a time. On top of that, indexing BitArrays is slow, both because a sequence of bit operations is involved and because it can't presently be inlined due to the bounds check. Here's a strategy that is devectorized but exploits the structure of the BitArray. Most of the code is copy-pasted from copy_chunks! in bitarray.jl, and I didn't try to prettify it (sorry!).
function devec(n::Int, x::BitArray)
src = x.chunks
out = falses(n, n)
dest = out.chunks
numbits = n*n
kd0 = 1
ld0 = 0
for j = 1:n
pos_s = (n*n)*(j-1)+1
kd1, ld1 = Base.get_chunks_id(numbits - 1)
ks0, ls0 = Base.get_chunks_id(pos_s)
ks1, ls1 = Base.get_chunks_id(pos_s + numbits - 1)
delta_kd = kd1 - kd0
delta_ks = ks1 - ks0
u = Base._msk64
if delta_kd == 0
msk_d0 = ~(u << ld0) | (u << (ld1+1))
else
msk_d0 = ~(u << ld0)
msk_d1 = (u << (ld1+1))
end
if delta_ks == 0
msk_s0 = (u << ls0) & ~(u << (ls1+1))
else
msk_s0 = (u << ls0)
end
chunk_s0 = Base.glue_src_bitchunks(src, ks0, ks1, msk_s0, ls0)
dest[kd0] |= (dest[kd0] & msk_d0) | ((chunk_s0 << ld0) & ~msk_d0)
delta_kd == 0 && continue
for i = 1 : kd1 - kd0
chunk_s1 = Base.glue_src_bitchunks(src, ks0 + i, ks1, msk_s0, ls0)
chunk_s = (chunk_s0 >>> (64 - ld0)) | (chunk_s1 << ld0)
dest[kd0 + i] |= chunk_s
chunk_s0 = chunk_s1
end
end
out
end
With Iain's benchmarks, this gives me:
simple: 0.051321131 seconds (46356000 bytes allocated, 30.03% gc time)
forloops: 6.226652258 seconds (92976 bytes allocated)
forloopscolfirst: 2.099381939 seconds (89472 bytes allocated)
shorty: 0.060194226 seconds (46387760 bytes allocated, 36.27% gc time)
timholy: 2.464298752 seconds (31784 bytes allocated)
devec: 0.008734413 seconds (31472 bytes allocated)
