I'm working with roughly 100000x100000 Hermitian complex sparse matrices, with roughly 5% of the entries populated, and want to calculate their eigenvalues/eigenvectors.
So far I've been using Arpack.jl's eigs(A).
But this stops working well as soon as I crank the size higher than 5000.
For the benchmarks I've been using the following code to generate test matrices:
using Arpack
using SparseArrays
using ProgressMeter

pop = 0.05
n = 3000  # for example
A = spzeros(Complex{Float64}, n, n)
@showprogress for _ in 1:round(Int, pop * n^2)
    A[rand(1:n), rand(1:n)] = rand(Complex{Float64})
end
# make A Hermitian-ish (note: conj is elementwise, so A + conj(A) is not
# truly Hermitian; A + A' would be)
A = A + conj(A)
t = @elapsed eigs(A, maxiter=1500)  # ends up being ~ 13 seconds
For n ~ 3000 the eigs() call already takes 13 seconds on my machine, and for bigger n it doesn't finish in any 'reasonable' time or quits outright.
Is there a specialized package/method for this?
Any help is appreciated.
https://github.com/JuliaLinearAlgebra/ArnoldiMethod.jl seems to be what you want:
julia> let pop=0.05, n=3000
           A = sprand(Complex{Float64}, n, n, pop)
           A = A + conj(A)
           @time eigs(A; maxiter=1500)
           @time decomp, history = partialschur(A, nev=10, tol=1e-6, which=LM());
       end;
10.521786 seconds (50.73 k allocations: 3.458 MiB, 0.04% gc time)
2.244129 seconds (19 allocations: 1.892 MiB)
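Since you also want the eigenvectors: ArnoldiMethod.jl can turn the partial Schur decomposition into an eigendecomposition with partialeigen. A minimal sketch, with decomp as returned by partialschur above:

using ArnoldiMethod
decomp, history = partialschur(A, nev=10, tol=1e-6, which=LM())
vals, vecs = partialeigen(decomp)  # eigenvalues and the corresponding eigenvectors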
sanity check:
julia> a, (b, c) = let pop=0.05, n=300
           A = sprand(Complex{Float64}, n, n, pop)
           A = A + conj(A)
           eigs(A; maxiter=2500), partialschur(A, nev=6, tol=1e-6, which=LM());
       end;
julia> a[1]
6-element Vector{ComplexF64}:
14.5707071003175 + 8.218901803015509e-16im
4.493079744504954 - 0.8390429567118733im
4.493079744504933 + 0.8390429567118641im
-0.3415176925293196 + 4.254184281244591im
-0.3415176925293088 - 4.25418428124452im
0.49406553681541177 - 4.229680489599233im
julia> b
PartialSchur decomposition (ComplexF64) of dimension 6
eigenvalues:
6-element Vector{ComplexF64}:
14.570707100307926 + 7.10698633463049e-12im
4.493079906269516 + 0.8390429076809746im
4.493079701528448 - 0.8390430155670777im
-0.3415174262177961 + 4.254183175902487im
-0.34151626930774975 - 4.25418321627979im
0.49406543866702 + 4.229680079205066im
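Incidentally, the clearly complex eigenvalues in both outputs show that A + conj(A) is not actually Hermitian: conj conjugates elementwise without transposing. A quick check, as a sketch:

using LinearAlgebra, SparseArrays
A = sprand(Complex{Float64}, 300, 300, 0.05)
ishermitian(A + conj(A))  # false in general: conj is elementwise
ishermitian(A + A')       # true: ' is the conjugate transpose

With a truly Hermitian matrix, the computed eigenvalues would come out (numerically) real.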
I have the following coupled system of ODEs (that come from discretizing an integrodifferential PDE):

du_j/dt = -x_j^beta*u_j + deltax*sum_{i=1}^N x_i^beta*u_i,   j = 1, ..., N

where x_i = i*deltax and deltax = 1/(N+1). The x_i's are points on an x-grid that I control. I can solve this with the following simple piece of code:
using DifferentialEquations

function ode_syst(du, u, p, t)
    N = Int64(p[1])
    beta = p[2]
    deltax = 1/(N+1)
    xs = [deltax*i for i in 1:N]
    for j in 1:N
        du[j] = -xs[j]^(beta)*u[j] + deltax*sum([u[i]*xs[i]^(beta) for i in 1:N])
    end
end
N = 1000
u0 = ones(N)
beta = 2.0
p = [N, beta]
tspan = (0.0, 10^3);
prob = ODEProblem(ode_syst,u0,tspan,p);
sol = solve(prob);
However, as I make my grid finer, i.e. increase N, the computation time grows rapidly (I guess the scaling is quadratic in N). Is there any suggestion on how to implement this using either distributed parallelism or multithreading?
Additional information: I attach the profiling diagram, which might be useful to understand where the program spends most of the time.
I've looked into your code and found a few problems, such as accidentally introduced O(N^2) behavior due to recalculating the sum term.
My improved version uses the Tullio package to get a further speed-up from vectorization. Tullio also has tunable parameters which would allow for multithreading if your system becomes big enough; see the options section of its documentation for what you can tune. You might also see GPU support there; I didn't test it, but it might yield a further speed-up or break horribly. I also chose to get the length from the actual array, which should make the use more economical and less error-prone.
using Tullio

function ode_syst_t(du, u, p, t)
    N = length(du)
    beta = p[1]
    deltax = 1/(N+1)
    @tullio s := deltax*(u[i]*(i*deltax)^(beta))
    @tullio du[j] = -(j*deltax)^(beta)*u[j] + s
    return nothing
end
Your code:
@btime sol = solve(prob);
80.592 s (1349001 allocations: 10.22 GiB)
My code:
prob2 = ODEProblem(ode_syst_t, u0, tspan, [2.0]);
@btime sol2 = solve(prob2);
1.171 s (696 allocations: 18.50 MiB)
and the results more or less agree:
julia> sum(abs2, sol2(1000.0) .- sol(1000.0))
1.079046922815598e-14
I also benchmarked Lutz Lehmann's solution (the rewritten ode_syst from the answer below, here named ode_syst_lehm):
prob3 = ODEProblem(ode_syst_lehm, u0, tspan, p);
@btime sol3 = solve(prob3);
1.338 s (3348 allocations: 39.38 MiB)
However, as we scale N to 1000000 with a tspan of (0.0, 10.0):
prob2 = ODEProblem(ode_syst_t, u0, tspan, [2.0]);
@time solve(prob2);
2.429239 seconds (280 allocations: 869.768 MiB, 13.60% gc time)
prob3 = ODEProblem(ode_syst_lehm, u0, tspan, p);
@time solve(prob3);
5.961889 seconds (580 allocations: 1.967 GiB, 11.08% gc time)
my code becomes more than twice as fast, because it can use both cores of my old and rusty machine.
Analyze the formula. Obviously, the atomic terms repeat. So they should only be computed once.
function ode_syst(du, u, p, t)
    N = Int64(p[1])
    beta = p[2]
    deltax = 1/(N+1)
    xs = [deltax*i for i in 1:N]
    term = [xs[i]^(beta)*u[i] for i in 1:N]
    term_sum = deltax*sum(term)
    for j in 1:N
        du[j] = -term[j] + term_sum
    end
end
This should scale only linearly in N.
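One could push this one step further and hoist the per-call allocations out of the right-hand side entirely. A minimal sketch (make_rhs is a hypothetical helper, not part of either answer above): the x_i^beta weights are computed once, and a scratch vector is reused on every call:

function make_rhs(N, beta)
    deltax = 1/(N+1)
    wts = [(deltax*i)^beta for i in 1:N]  # x_i^beta, computed once
    term = similar(wts)                   # scratch buffer, reused on every call
    function rhs!(du, u, p, t)
        @. term = wts * u
        s = deltax*sum(term)
        @. du = -term + s
        return nothing
    end
    return rhs!
end

prob4 = ODEProblem(make_rhs(N, beta), u0, tspan);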
In many applications of map(f, X), it helps to create closures that, depending on parameters, apply different functions f to the data X.
I can think of at least the following three ways to do this (note that for some reason the second one does not work; a bug?):
f0(x, y) = x + y
f1(x, y, p) = x + y^p

function g0(power::Bool, X, y)
    if power
        f = x -> f1(x, y, 2.0)
    else
        f = x -> f0(x, y)
    end
    map(f, X)
end

function g1(power::Bool, X, y)
    if power
        f(x) = f1(x, y, 2.0)
    else
        f(x) = f0(x, y)
    end
    map(f, X)
end

abstract FunType
abstract PowerFun <: FunType
abstract NoPowerFun <: FunType

function g2{S<:FunType}(T::Type{S}, X, y)
    f(::Type{PowerFun}, x) = f1(x, y, 2.0)
    f(::Type{NoPowerFun}, x) = f0(x, y)
    map(x -> f(T, x), X)
end
X = 1.0:1000000.0
burnin0 = g0(true,X,4.0) + g0(false,X,4.0);
burnin1 = g1(true,X,4.0) + g1(false,X,4.0);
burnin2 = g2(PowerFun,X,4.0) + g2(NoPowerFun,X,4.0);
@time r0true = g0(true,X,4.0);   # 0.019515 seconds (12 allocations: 7.630 MB)
@time r0false = g0(false,X,4.0); # 0.002984 seconds (12 allocations: 7.630 MB)
@time r1true = g1(true,X,4.0);   # 0.004517 seconds (8 allocations: 7.630 MB, 26.28% gc time)
@time r1false = g1(false,X,4.0); # UndefVarError: f not defined
@time r2true = g2(PowerFun,X,4.0);    # 0.085673 seconds (2.00 M allocations: 38.147 MB, 3.90% gc time)
@time r2false = g2(NoPowerFun,X,4.0); # 0.234087 seconds (2.00 M allocations: 38.147 MB, 60.61% gc time)
What is the optimal way to do this in Julia?
There's no need to use map here at all. Using a closure doesn't make things simpler or faster. Just use "dot-broadcasting" to apply the functions directly:
function g3(X, y, power=1)
    if power != 1
        return f1.(X, y, power)  # or simply X .+ y^power
    else
        return f0.(X, y)         # or simply X .+ y
    end
end
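For example, calling it with the same X as above:

X = 1.0:1000000.0
r_plain = g3(X, 4.0)       # elementwise x + y
r_power = g3(X, 4.0, 2.0)  # elementwise x + y^power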
I have this code (primitive heat transfer):
function heat(first, second, m)
    @sync @parallel for d = 2:m-1
        for c = 2:m-1
            @inbounds second[c,d] = (first[c,d] + first[c+1,d] + first[c-1,d] + first[c,d+1] + first[c,d-1]) / 5.0;
        end
    end
end
m = parse(Int, ARGS[1])  # size of matrix
firstm = SharedArray(Float64, (m,m))
secondm = SharedArray(Float64, (m,m))

for c = 1:m
    for d = 1:m
        if c == m || d == 1
            firstm[c,d] = 100.0
            secondm[c,d] = 100.0
        else
            firstm[c,d] = 0.0
            secondm[c,d] = 0.0
        end
    end
end

@time for i = 0:opak  # opak is the iteration count, defined elsewhere
    heat(firstm, secondm, m)
    firstm, secondm = secondm, firstm
end
This code gives good times when run sequentially, but when I add @parallel it slows down even if I run on one worker. I just need an explanation of why this is happening. Change the code only if it doesn't alter the algorithm of the heat function.
Have a look at http://docs.julialang.org/en/release-0.4/manual/performance-tips/ . Contrary to what is advised there, you make heavy use of global variables. They are assumed to be able to change type at any time, so they have to be boxed and unboxed every time they are referenced. The question Julia pi approximation slow suffers from the same problem. To make your function faster, pass the globals as input arguments to the function.
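A minimal sketch of that advice, using a hypothetical wrapper run! so that the arrays and the iteration count are local variables instead of globals:

function run!(firstm, secondm, m, iters)
    for i = 0:iters
        heat(firstm, secondm, m)
        firstm, secondm = secondm, firstm  # swaps local bindings only
    end
    return firstm
end

@time run!(firstm, secondm, m, opak)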
There are some points to consider. One of them is the size of m. If it is small, parallelism adds a lot of overhead for little gain:
julia 36967257.jl 4
# Parallel:
0.040434 seconds (4.44 k allocations: 241.606 KB)
# Normal:
0.042141 seconds (29.13 k allocations: 1.308 MB)
For bigger m you could have better results:
julia 36967257.jl 4000
# Parallel:
0.054848 seconds (4.46 k allocations: 241.935 KB)
# Normal:
3.779843 seconds (29.13 k allocations: 1.308 MB)
Plus two remarks:
1/ initialisation could be simplified to:
for c = 1:m, d = 1:m
    if c == m || d == 1
        firstm[c,d] = 100.0
        secondm[c,d] = 100.0
    else
        firstm[c,d] = 0.0
        secondm[c,d] = 0.0
    end
end
2/ your finite difference scheme does not look stable. Please take a look at linear multistep methods or ADI/Crank-Nicolson.
I am using Julia version 0.4.5 and I am experiencing the following issue:
As far as I know, taking the inner product of a sparse vector with a dense vector should be about as fast as updating the dense vector by a sparse vector, yet the latter is much slower:
A = sprand(100000, 100000, 0.01)
w = rand(100000)

@time for i = 1:100000
    w += A[:,i]
end
26.304380 seconds (1.30 M allocations: 150.556 GB, 8.16% gc time)

@time for i = 1:100000
    A[:,i]'*w
end
0.815443 seconds (921.91 k allocations: 1.540 GB, 5.58% gc time)
I created a simple sparse matrix type of my own, and its addition code performed roughly the same as the inner product.
Am I doing something wrong? I feel like there should be a special function for the operation w += A[:,i], but I couldn't find it.
Any help is appreciated.
I asked the same question on GitHub, and we came to the following conclusion. The type SparseVector was added as of Julia 0.4, and with it the BLAS-style function LinAlg.axpy!, which updates a (possibly dense) vector x in place by a sparse vector y multiplied by a scalar a, i.e. performs x += a*y efficiently. However, in Julia 0.4 it is not implemented properly; it works only in Julia 0.5:
@time for i = 1:100000
    LinAlg.axpy!(1, A[:,i], w)
end
1.041587 seconds (799.49 k allocations: 1.530 GB, 8.01% gc time)
However, this code is still sub-optimal, as it materializes the SparseVector A[:,i] on every iteration. One can get an even faster version with the following function:
function upd!(w, A, i, c)
    rowval = A.rowval
    nzval = A.nzval
    @inbounds for j in nzrange(A, i)
        w[rowval[j]] += c*nzval[j]
    end
    return w
end

@time for i = 1:100000
    upd!(w, A, i, 1)
end
0.500323 seconds (99.49 k allocations: 1.518 MB)
This is exactly what I needed to achieve; after some research we managed to get there. Thanks everyone!
Assuming you want to compute w += c * A[:, i], there is an easy way to vectorize it:
>>> A = sprand(100000, 100000, 0.01)
>>> c = rand(100000)
>>> r1 = zeros(100000)
>>> @time for i = 1:100000
>>>     r1 += A[:, i] * c[i]
>>> end
29.997412 seconds (1.90 M allocations: 152.077 GB, 12.73% gc time)
>>> @time r2 = sum(A .* c', 2);
1.191850 seconds (50 allocations: 1.493 GB, 0.14% gc time)
>>> all(r1 == r2)
true
First, create a vector c of the constants to multiply with. Then multiply the columns of A element-wise by the values of c (A .* c' broadcasts internally). Last, reduce over the columns of A (the sum(..., 2) part).
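Note that this column-scaled sum is exactly the matrix-vector product, so the sparse multiply kernel can do the whole thing in one call; the results agree up to floating-point rounding order:

>>> r3 = A * c
>>> isapprox(vec(r2), r3)
true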
I would like to initialize a 3d tensor (multi-dimensional array) with the values of the "diagonal Gaussian"
exp(-32*(u^2 + 16*(v^2 + w^2)))
where u = 1/sqrt(3)*(x+y+z) and v,w are any two coordinates orthogonal to u, discretised on a uniform mesh on [-1,1]^3. The following code achieves this:
function gaussian3d(n)
    q = qr(ones(3,1), thin=false)[1]
    x = linspace(-1., 1., n)
    p = Array(Float64, (n,n,n))
    square(x) = x*x
    Base.@nloops 3 i p begin
        @inbounds p[i_1,i_2,i_3] =
            exp(
                -32*(
                    square(q[1,1]*x[i_1] + q[2,1]*x[i_2] + q[3,1]*x[i_3])
                    + 16*(
                        square(q[1,2]*x[i_1] + q[2,2]*x[i_2] + q[3,2]*x[i_3]) +
                        square(q[1,3]*x[i_1] + q[2,3]*x[i_2] + q[3,3]*x[i_3])
                    )
                )
            )
    end
    return p
end
It seems to be quite slow, however. For example, if I replace the defining function with exp(x*y*z), the code runs 50x faster. Also, the @time macro reports ~20% GC time for the above code, and I do not understand where it comes from. (These numeric values were obtained with n = 128.) My questions therefore are:
How can I speed up this piece of code?
Where is the memory allocation hidden that causes the GC overhead?
Knowing nothing of 3D tensors with values of the "diagonal Gaussian", using the square comment from the original post, "typing" q (@code_warntype helps here: big performance jump!), and further specializing the @nloops, this works much faster on the platforms I tried it on.
julia> square(x::Float64) = x * x
square (generic function with 1 method)

julia> function my_gaussian3d(n)
           q::Array{Float64,2} = qr(ones(3,1), thin=false)[1]
           x = linspace(-1., 1., n)
           p = Array(Float64, (n,n,n))
           Base.@nloops 3 i p d->x_d=x[i_d] begin
               @inbounds p[i_1,i_2,i_3] =
                   exp(
                       -32*(
                           square(q[1,1]*x_1 + q[2,1]*x_2 + q[3,1]*x_3)
                           + 16*(
                               square(q[1,2]*x_1 + q[2,2]*x_2 + q[3,2]*x_3) +
                               square(q[1,3]*x_1 + q[2,3]*x_2 + q[3,3]*x_3)
                           )
                       )
                   )
           end
           return p
       end
my_gaussian3d (generic function with 1 method)
julia> @time gaussian3d(128);
elapsed time: 3.952389337 seconds (1264 MB allocated, 4.50% gc time in 57 pauses with 0 full sweep)

julia> @time gaussian3d(128);
elapsed time: 3.527316699 seconds (1264 MB allocated, 4.42% gc time in 58 pauses with 0 full sweep)

julia> @time my_gaussian3d(128);
elapsed time: 0.285837566 seconds (16 MB allocated)

julia> @time my_gaussian3d(128);
elapsed time: 0.28476448 seconds (16 MB allocated, 1.22% gc time in 0 pauses with 0 full sweep)
julia> my_gaussian3d(128) == gaussian3d(128)
true