How to solve a linear system where both inputs are sparse? - julia

Is there any equivalent to scipy.sparse.linalg.spsolve in Julia? Here's the description of the function in Python.
In [59]: ?spsolve
Signature: spsolve(A, b, permc_spec=None, use_umfpack=True)
Docstring:
Solve the sparse linear system Ax=b, where b may be a vector or a matrix.
I couldn't find this in Julia's LinearAlgebra and SparseArrays. Is there anything I missed, or any alternatives?
Thanks
EDIT
For example:
In [71]: A = sparse.csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
In [72]: B = sparse.csc_matrix([[2, 0], [-1, 0], [2, 0]], dtype=float)
In [73]: spsolve(A, B).data
Out[73]: array([ 1., -3.])
In [74]: spsolve(A, B).toarray()
Out[74]:
array([[ 0., 0.],
[ 1., 0.],
[-3., 0.]])
In Julia, with \ operator
julia> A = Float64.(sparse([3 2 0; 1 -1 0; 0 5 1]))
3×3 SparseMatrixCSC{Float64,Int64} with 6 stored entries:
[1, 1] = 3.0
[2, 1] = 1.0
[1, 2] = 2.0
[2, 2] = -1.0
[3, 2] = 5.0
[3, 3] = 1.0
julia> B = Float64.(sparse([2 0; -1 0; 2 0]))
3×2 SparseMatrixCSC{Float64,Int64} with 3 stored entries:
[1, 1] = 2.0
[2, 1] = -1.0
[3, 1] = 2.0
julia> A \ B
ERROR: MethodError: no method matching ldiv!(::SuiteSparse.UMFPACK.UmfpackLU{Float64,Int64}, ::SparseMatrixCSC{Float64,Int64})
Closest candidates are:
ldiv!(::Number, ::AbstractArray) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/generic.jl:236
ldiv!(::SymTridiagonal, ::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T; shift) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/tridiag.jl:208
ldiv!(::LU{T,Tridiagonal{T,V}}, ::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T) where {T, V} at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/lu.jl:588
...
Stacktrace:
[1] \(::SuiteSparse.UMFPACK.UmfpackLU{Float64,Int64}, ::SparseMatrixCSC{Float64,Int64}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/factorization.jl:99
[2] \(::SparseMatrixCSC{Float64,Int64}, ::SparseMatrixCSC{Float64,Int64}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/SparseArrays/src/linalg.jl:1430
[3] top-level scope at REPL[81]:1

Yes, it's the \ function.
julia> using SparseArrays, LinearAlgebra
julia> A = sprand(Float64, 20, 20, 0.01) + I # just adding the identity matrix so A is non-singular.
julia> typeof(A)
SparseMatrixCSC{Float64,Int64}
julia> v = rand(20);
julia> A \ v
20-element Array{Float64,1}:
0.5930744938331236
0.8726507741810358
0.6846427450637211
0.3135234897986168
0.8366321472466727
0.11338490488638651
0.3679058951515244
0.4931583108292607
0.3057947282994271
0.27481281228206955
0.888942874188458
0.905356044150361
0.17546911165214607
0.13636389619386557
0.9607381212005248
0.2518153541168824
0.6237205353883974
0.6588050295549153
0.14748809413104935
0.9806131247053784
Edit in response to question edit:
If you want v here to instead be a sparse matrix B, we can proceed by using the QR decomposition of B: since B = QR, we have A \ B = (A \ Q) * R (note that cases where B is truly sparse are rare):
function myspsolve(A, B)
    qrB = qr(B)
    Q, R = qrB.Q, qrB.R
    # pad R with zero rows so that Q * R has the same size as B
    R = [R; zeros(size(Q, 2) - size(R, 1), size(R, 2))]
    (A \ Q) * R
end
now:
julia> A = Float64.(sparse([3 2 0; 1 -1 0; 0 5 1]))
3×3 SparseMatrixCSC{Float64,Int64} with 6 stored entries:
[1, 1] = 3.0
[2, 1] = 1.0
[1, 2] = 2.0
[2, 2] = -1.0
[3, 2] = 5.0
[3, 3] = 1.0
julia> B = Float64.(sparse([2 0; -1 0; 2 0]))
3×2 SparseMatrixCSC{Float64,Int64} with 3 stored entries:
[1, 1] = 2.0
[2, 1] = -1.0
[3, 1] = 2.0
julia> myspsolve(A, B)
3×2 Array{Float64,2}:
0.0 0.0
1.0 0.0
-3.0 0.0
and we can test to make sure we did it right:
julia> myspsolve(A, B) ≈ A \ collect(B)
true
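An alternative, not from the answer above but a minimal sketch under the same setup: since the error comes from ldiv! lacking a method for a sparse right-hand side, you can factorize A once and solve against each column of B, densifying one column at a time. The helper name solve_columns is purely illustrative:
using SparseArrays, LinearAlgebra

function solve_columns(A::SparseMatrixCSC, B::SparseMatrixCSC)
    F = lu(A)                           # sparse LU factorization (UMFPACK)
    X = zeros(size(A, 2), size(B, 2))
    for j in 1:size(B, 2)
        X[:, j] = F \ Vector(B[:, j])   # solve against one densified column at a time
    end
    return X
end
Reusing the factorization F across columns is what makes this cheaper than calling A \ b separately for each column; the result should again satisfy solve_columns(A, B) ≈ A \ collect(B).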

Related

Julia JuMP: define a multidimensional variable when a dimension depends on other dimension

While defining linear programming variables, I have to consider
index_i = 1:3
index_j = J = [1:2, 1:5, 1:3]
I want to define a variable x indexed by both i and j, such that i is in {1,2,3} and j is in {1,2} if i is 1, {1,2,3,4,5} if i is 2, and {1,2,3} if i is 3.
I tried several syntaxes but none of them worked. Any suggestion?
I wonder why this is not working:
@variable(m, e[i for i in I, j for j in J[i]])
I'm expecting a result like this:
e[1,1]
e[1,2]
e[1,3]
e[2,1]
e[2,2]
e[2,3]
e[2,4]
e[2,5]
e[3,1]
e[3,2]
e[3,3]
Assuming I=1:3 and J=[1:2, 1:5, 1:3]
you can do:
julia> @variable(m, e[i in I, j in J[i]])
JuMP.Containers.SparseAxisArray{VariableRef, 2, Tuple{Int64, Int64}} with 10 entries:
[1, 1] = e[1,1]
[1, 2] = e[1,2]
[2, 1] = e[2,1]
[2, 2] = e[2,2]
[2, 3] = e[2,3]
[2, 4] = e[2,4]
[2, 5] = e[2,5]
[3, 1] = e[3,1]
[3, 2] = e[3,2]
[3, 3] = e[3,3]
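For reference, a minimal, self-contained sketch of the setup the answer assumes (the model m is not shown above; attaching a solver is not needed just to declare variables):
using JuMP

m = Model()
I = 1:3
J = [1:2, 1:5, 1:3]
@variable(m, e[i in I, j in J[i]])   # ragged index sets produce a SparseAxisArray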

GaussianMixtures.jl - Equivalent of sklearn GaussianMixture.predict

How can I do the equivalent of the following scikit-learn code using GaussianMixtures.jl?
import numpy as np
from sklearn.mixture import GaussianMixture
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
gm.predict([[0, 0], [12, 3]]) #prints "array([1, 0])"
Here's how I'm currently handling this requirement:
using GaussianMixtures
using Random
Random.seed!(23);
function _cluster_predict(gmm::GMM, X::Matrix)
    llpg_X = llpg(gmm, X)
    return map(argmin, eachrow(llpg_X))
end
X = [1.0 2; 1 4; 1 0; 10 2; 10 4; 10 0] + rand(Float64, (6, 2))
gm = GMM(2, X)
_cluster_predict(gm, [0.0 0.0; 12.0 3.0]) #returns [2,1]
Is there a better approach?

Using Julia's dot notation and in place operation

How do I use both Julia's dot notation to do elementwise operations AND ensure that the result is saved in an already existing array?
function myfun(x, y)
    return x + y
end
a = myfun(1, 2) # Results in a == 3
a = myfun.([1 2], [3; 4]) # Results in a == [4 5; 5 6]
function myfun!(x, y, out)
    out .= x + y
end
a = zeros(2, 2)
myfun!.([1 2], [3; 4], a) # Results in a DimensionMismatch error
Also, does @. a = myfun([1 2], [3; 4]) write the result to a in the same way as I am trying to achieve with myfun!()? That is, does that line write the result directly to a without storing the result anywhere else first?
Your code should be:
julia> function myfun!(x, y, out)
           out .= x .+ y
       end
myfun! (generic function with 1 method)
julia> myfun!([1 2], [3; 4], a)
2×2 Matrix{Float64}:
4.0 5.0
5.0 6.0
julia> a
2×2 Matrix{Float64}:
4.0 5.0
5.0 6.0
As for @. a = myfun([1 2], [3; 4]) - the answer is yes: it does not create temporary arrays and operates in-place.
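For instance, a quick sketch reusing the broadcastable myfun from the question shows the fused, in-place assignment:
julia> a = zeros(2, 2);

julia> @. a = myfun([1 2], [3; 4]);   # expands to a .= myfun.([1 2], [3; 4])

julia> a
2×2 Matrix{Float64}:
 4.0  5.0
 5.0  6.0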
This isn't commonly required, and there are usually better ways to achieve this, but it's possible to broadcast on an output argument by using Ref values that point inside the output array.
julia> a = zeros(2, 2)
2×2 Matrix{Float64}:
0.0 0.0
0.0 0.0
julia> function myfun!(out, x, y)
           out[] = x + y
       end
myfun! (generic function with 1 method)
julia> myfun!.((Ref(a, i) for i in LinearIndices(a)), [1 2], [3; 4])
2×2 Matrix{Int64}:
4 5
5 6
julia> a
2×2 Matrix{Float64}:
4.0 5.0
5.0 6.0
Edit: Changed out to be the first parameter as per the Style guide - thanks to @phipsgabler for the reminder.

learning to index batches in JuliaLang

I have an example Python batch-processing snippet that I am attempting to recreate in JuliaLang:
softmax_outputs = np.array([[ 0.7, 0.1, 0.2 ],
[ 0.1, 0.5, 0.4 ],
[ 0.02, 0.9, 0.08]])
class_targets = [0, 1, 1]
print(softmax_outputs[[0,1,2], class_targets])
>>>
[0.7 0.5 0.9]
Julia
softmax_outputs = [ 0.7  0.1  0.2
                    0.1  0.5  0.4
                    0.02 0.9  0.08]
class_targets = [1 2 2]
println(softmax_outputs[[1,2,3],class_targets])
[0.7; 0.1; 0.02]
[0.1; 0.5; 0.9]
[0.1; 0.5; 0.9]
julia>
You can use CartesianIndex like this:
julia> softmax_outputs[CartesianIndex.([1, 2, 3], [1, 2, 2])]
3-element Vector{Float64}:
0.7
0.5
0.9
(note that [1, 2, 2] is a vector, not a matrix like in your code)
Alternatively (assuming you always want to index the whole range) you can write:
julia> getindex.(eachrow(softmax_outputs), [1, 2, 2])
3-element Vector{Float64}:
0.7
0.5
0.9
Finally you could use a comprehension (which is probably most natural when porting from Python):
julia> [softmax_outputs[i, j] for (i, j) in zip([1, 2, 3], [1, 2, 2])]
3-element Vector{Float64}:
0.7
0.5
0.9

How to use nested list comprehension in julia

How do I turn the following code into a nested list comprehension?
node_x = 5
node_y = 5
node_z = 5
xyz = Matrix(undef, node_x*node_y*node_z,3)
ii = 0
dx = 1.0
for k in 0:node_z-1
    for j in 0:node_y-1
        for i in 0:node_x-1
            x = i * dx
            y = j * dx
            z = k * dx
            ii += 1
            #println([x, y, z])
            xyz[ii, 1] = x
            xyz[ii, 2] = y
            xyz[ii, 3] = z
        end
    end
end
In python and numpy, I can write such as following codes.
xyz = np.array([[i*dx, j*dx, k*dx] for k in range(node_z) for j in range(node_y) for i in range(node_x)])
Comprehensions can be nested in just the same way; it's only range that works a bit differently, and in your case you can use the start:stop range syntax:
julia> [[i*dx, j*dx, k*dx] for k in 1:node_z for j in 1:node_y for i in 1:node_x]
125-element Vector{Vector{Float64}}:
[1.0, 1.0, 1.0]
[2.0, 1.0, 1.0]
⋮
[4.0, 5.0, 5.0]
[5.0, 5.0, 5.0]
To get the same array as your Python example, you'd have to turn the 3-element vectors into 1×3 rows and vertically concatenate them:
julia> vcat(([i*dx j*dx k*dx] for k in 1:node_z for j in 1:node_y for i in 1:node_x)...)
125×3 Matrix{Float64}:
1.0 1.0 1.0
2.0 1.0 1.0
⋮
4.0 5.0 5.0
5.0 5.0 5.0
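As a side note, Python's range(node_x) corresponds to 0:node_x-1 in Julia, so to reproduce the exact values of the NumPy example (which start at 0.0) and to avoid splatting a long generator, a sketch like this should also work:
julia> rows = [[i*dx j*dx k*dx] for k in 0:node_z-1 for j in 0:node_y-1 for i in 0:node_x-1];

julia> reduce(vcat, rows)
125×3 Matrix{Float64}:
 0.0  0.0  0.0
 1.0  0.0  0.0
 ⋮
 3.0  4.0  4.0
 4.0  4.0  4.0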
