How to zero out small values in an array? - julia

Is there a generic way to zero out small values in an array?
By "small" I mean elements whose absolute value is less than some threshold like 10.0^-5.
Edit: For now, I loop with eachindex.
function sparsify(a, eps)
    for i in eachindex(a)
        if abs(a[i]) < eps
            a[i] = 0
        end
    end
end

Why not just apply a mask with an element-wise less-than comparison?
julia> x = rand(Float32, 100);

julia> eps = 0.5;

julia> x[abs.(x) .< eps] .= 0;
or as a function (note that the function modifies the vector x in place):
julia> sparsify!(x, eps) = x[abs.(x) .< eps] .= 0;
You could also replace 0 with zero(eltype(x)) to ensure the fill value has the same type as the elements of x.
The temporary boolean mask created by abs.(x) .< eps compares every element of x against eps; every element that satisfies the condition is then set to 0.
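Putting the pieces together, here is a minimal sketch of an in-place version that uses zero(eltype(x)) (the function name and threshold are illustrative):
# Zero out entries of x whose absolute value is below `threshold`, in place.
function sparsify!(x::AbstractArray, threshold)
    x[abs.(x) .< threshold] .= zero(eltype(x))   # fill value matches eltype(x)
    return x
end

x = rand(Float32, 100)
sparsify!(x, 0.5f0)   # 0.5f0 is a Float32 literal matching eltype(x)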

To complete Imanol Luengo's answer and extend it to multiple dimensions:
x[abs.(x) .< eps(eltype(x))] .= zero(eltype(x))
(Here eps(eltype(x)) is the machine epsilon of the element type, used as the threshold.)
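For example, on a small matrix (a quick sketch; eps(Float64) is about 2.2e-16):
julia> x = [1e-20 1.0; -1e-18 2.0];

julia> x[abs.(x) .< eps(eltype(x))] .= zero(eltype(x));

julia> x
2×2 Matrix{Float64}:
 0.0  1.0
 0.0  2.0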

I ended up with a vectorized method, which is much shorter.
sparsify(x, eps) = abs(x) < eps ? 0.0 : x
@vectorize_2arg Float64 sparsify
(Note: @vectorize_2arg belongs to pre-0.6 Julia; on 1.x you would simply broadcast, sparsify.(x, eps).)

Disclaimer (2019): The below answer is badly out of date, and refers to an old version of Julia (<0.7). In version 1.x you should instead use x .= 0 or fill!(x, 0).
What approach to choose depends on what you need. If you just need a simple one-liner, then the vectorized version is fine. But if you want optimal performance, a loop will serve you better.
Here are a few alternatives compared by performance. Do keep in mind that map is slow on version 0.4. The timings here are done with version 0.5.
function zerofy!(x, vmin)
    for (i, val) in enumerate(x)
        if abs(val) < vmin
            x[i] = zero(eltype(x))
        end
    end
end
zerofy2!(x, vmin) = ( x[abs(x) .< vmin] = zero(eltype(x)) )
zerofy3(x, eps) = abs(x) < eps ? 0.0 : x
@vectorize_2arg Float64 zerofy3
zerofy4(y, vmin) = map(x -> abs(x)<vmin ? zero(x) : x, y)
zerofy4!(y, vmin) = map!(x -> abs(x)<vmin ? zero(x) : x, y)
function time_zerofy(n, vmin)
    x1 = rand(n)
    x2, x3, x4, x5 = copy(x1), copy(x1), copy(x1), copy(x1)
    @time zerofy!(x1, vmin)
    @time zerofy2!(x2, vmin)
    @time zerofy3(x3, vmin)
    @time zerofy4(x4, vmin)
    @time zerofy4!(x5, vmin)
    return nothing
end
julia> time_zerofy(10^8, 0.1)
0.122510 seconds
1.078589 seconds (73.25 k allocations: 778.590 MB, 5.42% gc time)
0.558914 seconds (2 allocations: 762.940 MB)
0.688640 seconds (5 allocations: 762.940 MB)
0.243921 seconds
There's a pretty big difference between the loop (fastest) and the naively vectorized one.
Edit: zerofy3! => zerofy3 since it's not in-place.
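For readers on Julia 1.x, here is a rough sketch of modern equivalents of the versions above (untimed; broadcasting and the two-argument map! replace the old vectorized forms):
# Explicit loop, still the fastest option on 1.x:
function zerofy!(x, vmin)
    for i in eachindex(x)
        if abs(x[i]) < vmin
            x[i] = zero(eltype(x))
        end
    end
    return x
end

# Broadcasting replacements for zerofy2! and zerofy4!:
zerofy2!(x, vmin) = (x[abs.(x) .< vmin] .= zero(eltype(x)); x)
zerofy4!(x, vmin) = map!(v -> abs(v) < vmin ? zero(v) : v, x, x)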

Related

Optim Julia parameter meaning

I'm trying to use Optim in Julia to solve a two-variable minimization problem, similar to the following:
x = [1.0, 2.0, 3.0]
y = 1.0 .+ 2.0 .* x .+ [-0.3, 0.3, -0.1]
function sqerror(betas, X, Y)
    err = 0.0
    for i in 1:length(X)
        pred_i = betas[1] + betas[2] * X[i]
        err += (Y[i] - pred_i)^2
    end
    return err
end
res = optimize(b -> sqerror(b, x, y), [0.0,0.0])
res.minimizer
I do not quite understand what [0.0,0.0] means. From the documentation (http://julianlsolvers.github.io/Optim.jl/v0.9.3/user/minimization/), my understanding is that it is the initial condition. However, if I change it to [0.0, 0.0, 0.0], the algorithm still works even though I only have two unknowns, and it gives me three minimizers instead of two. I was wondering if anyone knows what [0.0,0.0] really stands for.
It is the initial value. optimize by itself cannot know how many parameters your sqerror function takes; you specify that by passing this initial value.
For example if you add dimensionality check to sqerror you will get a proper error:
julia> function sqerror(betas::AbstractVector, X::AbstractVector, Y::AbstractVector)
           @assert length(betas) == 2
           err = 0.0
           for i in eachindex(X, Y)
               pred_i = betas[1] + betas[2] * X[i]
               err += (Y[i] - pred_i)^2
           end
           return err
       end
sqerror (generic function with 2 methods)
julia> optimize(b -> sqerror(b, x, y), [0.0,0.0,0.0])
ERROR: AssertionError: length(betas) == 2
Note that I also changed the loop range to eachindex(X, Y), which makes the function check that the X and Y vectors have aligned indices.
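For example, eachindex with two arguments throws when the index ranges disagree (the exact message varies between Julia versions):
julia> eachindex([1.0, 2.0, 3.0], [1.0, 2.0])
ERROR: DimensionMismatch: all inputs to eachindex must have the same indices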
Finally, if you want performance and want to reduce compilation cost (e.g., if you run this optimization many times), it would be better to define your objective function like this:
objective_factory(x, y) = b -> sqerror(b, x, y)
optimize(objective_factory(x, y), [0.0,0.0])

Fast summation of symmetric matrix in Julia

I want to sum all elements in a matrix A of dimension n times n. The matrix is symmetric and has zeros on the diagonal. The fastest way I have found is simply sum(A). However, this seems wasteful, since it doesn't use the fact that I only need to calculate the lower triangle of the matrix. Yet sum(tril(A, -1)) is significantly slower, and sum(A[i, j] for i = 1:n-1 for j = i+1:n) even more so. Is there a more efficient way to sum the matrix?
Edit: The solution by @AboAmmar performs well. Here is code to compare (summing the diagonal separately, which can be removed if there are only zeros on the diagonal):
using BenchmarkTools
using LinearAlgebra
function sum_triu(A)
    m, n = size(A)
    @assert m == n
    s = zero(eltype(A))
    for j = 2:n
        @simd for i = 1:j-1
            s += @inbounds A[i,j]
        end
    end
    s *= 2
    for i = 1:n
        s += A[i, i]
    end
    return s
end
N = 1000
A = Symmetric(rand(0:9,N,N))
A -= diagm(diag(A))
@btime sum(A)
@btime 2 * sum(tril(A))
@btime sum_triu(A)
This is 2.7X faster than sum for an n = 1000 matrix. Make sure to add @simd before the inner loop and to use @inbounds. Also, use the correct loop order for fast, column-major memory access.
function sum_triu(A)
    m, n = size(A)
    @assert m == n
    s = zero(eltype(A))
    for j = 1:n
        @simd for i = 1:j
            s += @inbounds A[i,j]
        end
    end
    return 2 * s
end
Example run on my PC:
sum_triu(A) = 499268.7328022966
sum(A) = 499268.73280229873
93.000 μs (0 allocations: 0 bytes)
249.900 μs (0 allocations: 0 bytes)
How about
2 * sum(LowerTriangular(A))
help?> LA.LowerTriangular
  LowerTriangular(A::AbstractMatrix)

  Construct a LowerTriangular view of the matrix A.
tril creates a new matrix, which allocates memory. Since a LowerTriangular is a view into the existing matrix, there's no memory allocation.
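You can check this with @allocated (a quick sketch; run each line once beforehand so compilation allocations don't distort the numbers):
julia> using LinearAlgebra

julia> A = rand(1000, 1000);

julia> @allocated sum(tril(A))             # materializes a new 1000×1000 matrix, ~8 MB

julia> @allocated sum(LowerTriangular(A))  # wraps A; only a small constant overhead
Also note that 2 * sum(LowerTriangular(A)) counts the diagonal twice, which is harmless here only because the question guarantees a zero diagonal.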

Row-wise operations between matrices in Julia

I'm attempting to translate the equivalent of the following Python code (from SMT GEKPLS) into Julia:
def differences(X, Y):
    D = X[:, np.newaxis, :] - Y[np.newaxis, :, :]
    return D.reshape((-1, X.shape[1]))
So, given an input like this:
X = np.array([[1.0,1.0,1.0], [2.0,2.0,2.0]])
Y = np.array([[1.0,2.0,3.0], [4.0,5.0,6.0], [7.0,8.0,9.0]])
diff = differences(X,Y)
We get an output (diff) that looks like this:
[[ 0. -1. -2.]
[-3. -4. -5.]
[-6. -7. -8.]
[ 1. 0. -1.]
[-2. -3. -4.]
[-5. -6. -7.]]
What is an efficient way to do this with Julia code? I expect the X and Y input matrices to be quite large.
After some thinking, I came to this function:
function differences(X, Y)
    Rx = repeat(X, inner=(size(Y, 1), 1))
    Ry = repeat(Y, size(X, 1))
    Rx - Ry
end
I hope I was helpful.
Here's a version that avoids repeat, which creates unnecessary data duplication:
function diffs_row(X, Y)
    N = size(X, 2)
    return reshape(reshape(X', 1, N, :) .- Y', N, :)'
end
The reason for all the adjoints ' is that it isn't really natural to operate row-wise in Julia. Julia arrays are column-major so reshape will retrieve data column-wise. If you decide instead to change the orientation of the data, you could write
function diffs_col(X, Y)
    N = size(X, 1)
    return reshape(reshape(X, N, 1, :) .- Y, N, :)
end
instead.
One often sees this when translating numpy code to Julia. Numpy is natively row-major, so the translation becomes a bit awkward. You should consider changing your data layout to be column major in many cases.
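For instance, with the sample data stored column-major (each point as a column), diffs_col produces the same differences as columns of the result, with no transposes (a quick sketch):
julia> Xc = [1.0 2.0; 1.0 2.0; 1.0 2.0];              # columns are the two X points

julia> Yc = [1.0 4.0 7.0; 2.0 5.0 8.0; 3.0 6.0 9.0];  # columns are the three Y points

julia> diffs_col(Xc, Yc)
3×6 Matrix{Float64}:
  0.0  -3.0  -6.0   1.0  -2.0  -5.0
 -1.0  -4.0  -7.0   0.0  -3.0  -6.0
 -2.0  -5.0  -8.0  -1.0  -4.0  -7.0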
This might be faster than other alternatives, while still being easy to understand.
[x .- y for x ∈ X for y ∈ Y]
6-element Vector{Vector{Float64}}:
[0.0, -1.0, -2.0]
[-3.0, -4.0, -5.0]
[-6.0, -7.0, -8.0]
[1.0, 0.0, -1.0]
[-2.0, -3.0, -4.0]
[-5.0, -6.0, -7.0]
The one thing I disliked about numpy is that one has to remember exactly which function goes with which combination of input parameters. In Julia, a traditional loop can serve as an efficient drop-in replacement for most algorithms.
Addendum: The above might be the fastest solution as I said, provided that working with a Vector{Vector{Float64}} is not an issue. If it is, here is another solution that outputs a Matrix{Float64} while being fast as well.
function diffr(X, Y)
    i, l, m, n = 0, length(first(X)), length(X), length(Y)
    Z = Matrix{Float64}(undef, m*n, l)
    for x in X, y in Y
        Z[i+=1, :] .= x .- y
    end
    Z
end
And here is a performance comparison of all posted solutions on my computer.
@btime [x .- y for x ∈ $X for y ∈ $Y]  # 312.245 ns (9 allocations: 656 bytes)
@btime diffr($X, $Y)                   # 73.868 ns (1 allocation: 208 bytes)
@btime differences($X, $Y)             # 439.000 ns (12 allocations: 896 bytes)
@btime diffs_row($X, $Y)               # 463.131 ns (11 allocations: 784 bytes)

Improving the speed of a for loop in Julia

Here is my code on the Julia platform, and I would like to speed it up. Is there any way to make it faster? It takes 0.5 seconds for a dataset of 50k*50k. I was expecting Julia to be a lot faster than this, or perhaps I am doing a silly implementation.
ar = [[1,2,3,4,5], [2,3,4,5,6,7,8], [4,7,8,9], [9,10], [2,3,4,5]]
SV = rand(10,5)
function h_score_0(ar, SV)
    m = length(ar)
    SC = Array{Float64,2}(undef, size(SV, 2), m)
    for iter = 1:m
        nodes = ar[iter]
        for jj = 1:size(SV, 2)
            mx = maximum(SV[nodes, jj])
            mn = minimum(SV[nodes, jj])
            term1 = (mx - mn)^2
            SC[jj, iter] = term1
        end
    end
    return sum(SC, dims = 1)
end
You have some unnecessary allocations in your code:
mx = maximum(SV[nodes, jj])
mn = minimum(SV[nodes, jj])
Slices allocate, so each of these lines makes a copy of the data; you're actually copying the data twice, once on each line. You can either make sure to copy only once, or even better, use view, so there is no copy at all (note that view is much faster on Julia v1.5, in case you are using an older version).
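As a quick illustration (a sketch; the @views macro rewrites the slices in an expression into views):
julia> SV = rand(10, 5); nodes = [1, 3, 5];

julia> maximum(SV[nodes, 1])           # the slice copies the selected elements first

julia> maximum(view(SV, nodes, 1))     # the view reads SV directly, no copy

julia> @views maximum(SV[nodes, 1])    # equivalent to the view version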
SC = Array{Float64,2}(undef, size(SV, 2), m)
And there is no reason to create a matrix here and sum over it afterwards; just accumulate while you iterate:
score[i] += (mx - mn)^2
Here's a function that is >5x as fast on my laptop for the input data you specified:
function h_score_1(ar, SV)
    score = zeros(eltype(SV), length(ar))
    @inbounds for i in eachindex(ar)
        nodes = ar[i]
        for j in axes(SV, 2)
            SVview = view(SV, nodes, j)
            mx = maximum(SVview)
            mn = minimum(SVview)
            score[i] += (mx - mn)^2
        end
    end
    return score
end
This function outputs a one-dimensional vector instead of a 1xN matrix in your original function.
In principle, this could be even faster if we replace
mx = maximum(SVview)
mn = minimum(SVview)
with
(mn, mx) = extrema(SVview)
which only traverses the vector once, instead of twice. Unfortunately, there is a performance issue with extrema, so it is currently not as fast as separate maximum/minimum calls: https://github.com/JuliaLang/julia/issues/31442
Finally, to get the absolute best performance at the cost of brevity, we can avoid creating a view at all and turn the calls to maximum and minimum into a single explicit loop traversal:
function h_score_2(ar, SV)
    score = zeros(eltype(SV), length(ar))
    @inbounds for i in eachindex(ar)
        nodes = ar[i]
        for j in axes(SV, 2)
            mx, mn = -Inf, +Inf
            for node in nodes
                x = SV[node, j]
                mx = ifelse(x > mx, x, mx)
                mn = ifelse(x < mn, x, mn)
            end
            score[i] += (mx - mn)^2
        end
    end
    return score
end
This also avoids the performance issue that extrema suffers, and looks up the SV element once per node. Although this version is annoying to write, it's substantially faster, even on Julia 1.5 where views are free. Here are some benchmark timings with your test data:
julia> using BenchmarkTools
julia> @btime h_score_0($ar, $SV)
2.344 μs (52 allocations: 6.19 KiB)
1×5 Matrix{Float64}:
1.95458 2.94592 2.79438 0.709745 1.85877
julia> @btime h_score_1($ar, $SV)
392.035 ns (1 allocation: 128 bytes)
5-element Vector{Float64}:
1.9545848011260765
2.9459235098820167
2.794383144368953
0.7097448590904598
1.8587691646610984
julia> @btime h_score_2($ar, $SV)
118.243 ns (1 allocation: 128 bytes)
5-element Vector{Float64}:
1.9545848011260765
2.9459235098820167
2.794383144368953
0.7097448590904598
1.8587691646610984
So explicitly writing out the innermost loop is worth it here, reducing time by another 3x or so. It's annoying that the Julia compiler isn't yet able to generate code this efficient, but it does get smarter with every version. On the other hand, the explicit loop version will be fast forever, so if this code is really performance critical, it's probably worth writing it out like this.

How to find the index of the last maximum in julialang?

I have an array that contains repeated nonnegative integers, e.g., A=[5,5,5,0,1,1,0,0,0,3,3,0,0]. I would like to find the position of the last maximum in A. That is the largest index i such that A[i]>=A[j] for all j. In my example, i=3.
I tried to find the indices of all maxima of A, then take the maximum of those indices:
A = [5,5,5,0,1,1,0,0,0,3,3,0,0];
Amax = maximum(A);
i = maximum(find(x -> x == Amax, A));
Is there any better way?
length(A) - indmax(@view A[end:-1:1]) + 1
should be pretty fast, but I didn't benchmark it.
EDIT: I should note that, by definition, @crstnbr's solution (writing the algorithm from scratch) is faster (how much faster is shown in Xiaodai's response). This is an attempt to do it using Julia's built-in array functions.
What about findlast(A .== maximum(A)) (which of course is conceptually similar to your approach)?
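For the example array from the question this returns the expected index:
julia> A = [5,5,5,0,1,1,0,0,0,3,3,0,0];

julia> findlast(A .== maximum(A))
3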
The fastest approach would probably be an explicit loop implementation like this:
function lastindmax(x)
    k = 1
    m = x[1]
    @inbounds for i in eachindex(x)
        if x[i] >= m
            k = i
            m = x[i]
        end
    end
    return k
end
I tried @Michael's solution and @crstnbr's solution, and I found the latter much faster.
a = rand(Int8(1):Int8(5), 1_000_000_000)
@time length(a) - indmax(@view a[end:-1:1]) + 1 # 19 seconds
@time length(a) - indmax(@view a[end:-1:1]) + 1 # 18 seconds
function lastindmax(x)
    k = 1
    m = x[1]
    @inbounds for i in eachindex(x)
        if x[i] >= m
            k = i
            m = x[i]
        end
    end
    return k
end
@time lastindmax(a) # 3 seconds
@time lastindmax(a) # 2.8 seconds
Michael's solution doesn't support Strings (ERROR: MethodError: no method matching view(::String, ::StepRange{Int64,Int64})) or sequences, so I add another solution:
julia> lastimax(x) = maximum((j,i) for (i,j) in enumerate(x))[2]
julia> A="abžcdž"; lastimax(A) # unicode is OK
6
julia> lastimax(i^2 for i in -10:7)
1
If you would rather not catch an exception for an empty sequence:
julia> lastimax(x) = !isempty(x) ? maximum((j,i) for (i,j) in enumerate(x))[2] : 0;
julia> lastimax(i for i in 1:3 if i>4)
0
Simple(!) benchmarks:
This is up to 10 times slower than Michael's solution for Float64:
julia> mlastimax(A) = length(A) - indmax(@view A[end:-1:1]) + 1;

julia> A = rand(Float64, 1_000_000); @time lastimax(A); @time mlastimax(A)
0.166389 seconds (4.00 M allocations: 91.553 MiB, 4.63% gc time)
0.019560 seconds (6 allocations: 240 bytes)
80346
(I am surprised) it is 2 times faster for Int64!
julia> A = rand(Int64, 1_000_000); @time lastimax(A); @time mlastimax(A)
0.015453 seconds (10 allocations: 304 bytes)
0.031197 seconds (6 allocations: 240 bytes)
423400
it is 2-3 times slower for Strings
julia> A = ["A$i" for i in 1:1_000_000]; #time lastimax(A); #time mlastimax(A)
0.175117 seconds (2.00 M allocations: 61.035 MiB, 41.29% gc time)
0.077098 seconds (7 allocations: 272 bytes)
999999
EDIT2:
@crstnbr's solution is faster and works with Strings too (though it doesn't work with generators). There is a difference between lastindmax and lastimax: the first returns a byte index, the second a character index:
julia> S = "1š3456789ž"
julia> length(S)
10
julia> lastindmax(S) # return value is bigger than length
11
julia> lastimax(S) # return character index (which is not byte index to String) of last max character
10
julia> S[chr2ind(S, lastimax(S))]
'ž': Unicode U+017e (category Ll: Letter, lowercase)
julia> S[chr2ind(S, lastimax(S))]==S[lastindmax(S)]
true
