Julia triangular matrix vector BLAS wrapper BLAS.trmv

I am trying to use the BLAS function dtrmv for triangular matrix vector multiply. According to the docs:
trmv!(ul, tA, dA, A, b)
Returns op(A)*b, where op is determined by tA (N for identity, T for transpose A, and C for conjugate transpose A). Only the ul triangle (U for upper, L for lower) of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones if U, or non-unit if N). The multiplication occurs in-place on b.
I'm having trouble actually using this. Here is my example:
julia> BLAS.trmv('L','N','N',Mchol,Z)
ERROR: MethodError: `trmv` has no method matching trmv(::Char, ::Char, ::Char, ::LowerTriangular{Float64,Array{Float64,2}}, ::Array{Float64,1})
Closest candidates are:
trmv(::Char, ::Char, ::Char, ::Union{DenseArray{Float64,2},SubArray{Float64,2,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Union{DenseArray{Float64,1},SubArray{Float64,1,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}})
trmv(::Char, ::Char, ::Char, ::Union{DenseArray{Float32,2},SubArray{Float32,2,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Union{DenseArray{Float32,1},SubArray{Float32,1,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}})
trmv(::Char, ::Char, ::Char, ::Union{DenseArray{Complex{Float64},2},SubArray{Complex{Float64},2,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Union{DenseArray{Complex{Float64},1},SubArray{Complex{Float64},1,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}})
...
julia> typeof(Mchol)
LowerTriangular{Float64,Array{Float64,2}}
julia> typeof(Z)
Array{Float64,1}
I'm having trouble interpreting the error. Can anyone help?
EDIT: SOLVED
Mchol as computed by
Mchol = chol(M)'
does not work, but Mchol computed by
LAPACK.potrf!('L', Mchol)
works.

You can use Mchol=chol(M)' but you'll have to extract the buffer first, i.e. BLAS.trmv('L','N','N',Mchol.data,Z). However, I'd recommend that you don't call trmv directly. Most often you should use the Ax_mul_Bx! family of functions. In this case, the most efficient would probably be to compute
Mchol = chol(M)
Ac_mul_B!(Mchol,Z)
This will call BLAS.trmv when the elements are one of the four BLAS element types but in contrast to BLAS.trmv it will still work for e.g. BigFloat elements.
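For concreteness, here is a minimal sketch of both approaches using the old 0.x-style API from the question; M and Z below are made-up stand-ins:
M = [4.0 1.0; 1.0 3.0]                     # a small positive-definite example
Z = [1.0, 2.0]
Mchol = chol(M)'                           # LowerTriangular wrapper
BLAS.trmv('L', 'N', 'N', Mchol.data, Z)    # pass the dense buffer; returns L*Z without mutating Z
U = chol(M)                                # UpperTriangular factor
Ac_mul_B!(U, copy(Z))                      # computes U'*Z = L*Z in place, dispatching to trmv for BLAS element types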

Related

How does the methods function work in Julia?

The methods function returns the method table of a function, as also mentioned here. I am looking for an explanation of how the function works.
Consider the following example in Julia 1.7:
julia> f(a::Int64,b::Int64=1,c::Float64=1.0) = (a+b+c)
f (generic function with 3 methods)
julia> g(a::Int64,b::Int64=1;c::Float64=1.0) = (a+b+c)
g (generic function with 2 methods)
julia> methods(f)
# 3 methods for generic function "f":
[1] f(a::Int64) in Main at REPL[1]:1
[2] f(a::Int64, b::Int64) in Main at REPL[1]:1
[3] f(a::Int64, b::Int64, c::Float64) in Main at REPL[1]:1
julia> methods(g)
# 2 methods for generic function "g":
[1] g(a::Int64) in Main at REPL[1]:1
[2] g(a::Int64, b::Int64; c) in Main at REPL[1]:1
julia> f(1,1.0)
ERROR: MethodError: no method matching f(::Int64, ::Float64)
Closest candidates are:
f(::Int64) at REPL[1]:1
f(::Int64, ::Int64) at REPL[1]:1
f(::Int64, ::Int64, ::Float64) at REPL[1]:1
Stacktrace:
[1] top-level scope
@ REPL[4]:1
julia> g(1,c=1.0)
3.0
julia>
It is not quite clear to me why there is no method f(::Int64, ::Float64) (hence the error). I am also wondering why there is no error for g(1,c=1.0) given that g(::Int64, ::Float64) or g(::Int64, c) are not listed as valid methods for g.
Ah, so to be a bit technical, this is really a question about how type annotations, dispatch, optional arguments, and keyword arguments work in Julia; the methods function just gives you some insight into that process, but it is not the methods function that makes those decisions. To answer your individual questions:
It is not quite clear to me why there is no method f(::Int64, ::Float64) (hence the error).
There is no method for this because you can only omit optional positional (non-keyword) arguments contiguously from the last positional argument. Consider the following case:
julia> f(a=1, b=1, c=1, d=1) = a + 2b + 3c + 4d
f (generic function with 8 methods)
julia> f(2,4)
If there were no such rule, the compiler would have no idea whether the 2 and 4 provided were supposed to go to a and b, or to a and d, or to a and c, or any other combination; the call would be ambiguous. So there is a rule: the first argument goes to a, the second to b, and the omitted ones are c and d. Even though you have specified defaults, you cannot omit middle arguments; you can only omit the last N optional arguments. This is just the rule, and it applies whether or not you have added type annotations.
I am also wondering why there is no error for g(1,c=1.0) given that g(::Int64, ::Float64) or g(::Int64, c) are not listed as valid methods for g.
Firstly, there is no method for g(::Int64, ::Float64) or g(::Int64, c) because keyword arguments (in this example, c) do not participate in dispatch. There is no error for g(1,c=1.0) because the optional argument b falls back to its default, so you are actually calling g(1,1,c=1.0). By writing c=1.0 you have explicitly specified that the 1.0 is assigned to c, so it cannot possibly be the value for b; b therefore falls back to its default, 1.
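To make that lowering concrete, these two calls are equivalent:
julia> g(1, c=1.0)      # b falls back to its default of 1
3.0
julia> g(1, 1, c=1.0)
3.0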

Error plotting array with exponential operation

I am new to Julia and I am trying to create a plot with the following:
xi2 = range(0,sqrt(6),step=1e-3)
collect(xi2)
plot(xi2, 1-xi2^2/6, label="n = 0")
When I try this though, I have the error:
MethodError: no method matching ^(::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, ::Int64)
Closest candidates are:
^(::Union{AbstractChar, AbstractString}, ::Integer) at C:\Users\Acer\AppData\Local\Programs\Julia-1.7.0\share\julia\base\strings\basic.jl:721
^(::Rational, ::Integer) at C:\Users\Acer\AppData\Local\Programs\Julia-1.7.0\share\julia\base\rational.jl:475
^(::Complex{<:AbstractFloat}, ::Integer) at C:\Users\Acer\AppData\Local\Programs\Julia-1.7.0\share\julia\base\complex.jl:839
...
What am I missing here?
You want the elements of xi2 raised to the power of two, so you want element-wise operations using the dot operator:
julia> xi2 = range(0,sqrt(6),step=1e-3);
julia> plot(xi2, 1 .- xi2.^2/6, label="n = 0")
(The collect step was unnecessary, since most array operations can be performed on a range directly. And in case you did want to collect - i.e. allocate memory and make it a full array - you have to assign the result of collect to some variable. In your original code, the elements were being collected into an array, but then thrown away since the result wasn't assigned to anything.)
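If you prefer not to sprinkle dots by hand, the @. macro broadcasts every operation in the expression it wraps; a small sketch, assuming Plots.jl is loaded as in your original code:
julia> plot(xi2, (@. 1 - xi2^2/6), label="n = 0")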

Julia - exponentiating a matrix returned from another function

I have functions f1 and f2 returning matrices m1 and m2, which are calculated using Diagonal, Tridiagonal, SymTridiagonal from LinearAlgebra package.
In a new function f3 I try computing
j = m1 - m2*im
m3 = exp(j)
but this gives a MethodError unless I use j = Matrix(m1 - m2*im), saying that there is no matching method for exp(::LinearAlgebra.Tridiagonal ...).
My question is how can I do this computation in the most optimal way? I am a total beginner in Julia.
Unless j has a very special structure (i.e. its exponential is sparse, which is unlikely), the best you can do AFAICT is to use a dense matrix as the input to exp:
m3 = LinearAlgebra.exp!([float(x) for x in Tridiagonal(dl, d, du)])
If you expect m3 to be sparse then I think currently there is no special algorithm implemented for that case in Julia.
Note that I use exp! to do the operation in place and use a comprehension to make sure the argument to exp! is dense. As exp! expects LinearAlgebra.BlasFloat (that is, Union{Complex{Float32}, Complex{Float64}, Float32, Float64}), I use float to make sure that the elements of j are appropriately converted. Note that it might fail if you work with e.g. BigFloat or Float16 values; in this case you have to do an appropriate conversion to the expected types yourself.
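If you do not need the in-place variant, a simpler (if slightly less memory-efficient) sketch is to densify j before the call; the complex Tridiagonal below is just a made-up stand-in for m1 - m2*im:
using LinearAlgebra
j = Tridiagonal([1.0+0im, 1.0+0im], [2.0-1.0im, 2.0-1.0im, 2.0-1.0im], [1.0+0im, 1.0+0im])
m3 = exp(Matrix(j))   # convert to a dense Matrix first; exp has no Tridiagonal method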

Error using ifft with Julia

I am trying to compute the inverse Fourier transform of an array of coefficients using ifft with Julia.
I have N complex numbers in an array organized as Y = [Y_0, ..., Y_(N-1)] representing my Fourier coefficients, and by computing
ifft(Y)
I get the following error message:
MethodError: no method matching plan_bfft(::Array{Complex,1},
::UnitRange{Int64}) Closest candidates are:
plan_bfft{T<:Union{Complex{Float32},Complex{Float64}},N}(::Union{Base.ReshapedArray{T<:Union{Complex{Float32},Complex{Float64}},N,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N}}},DenseArray{T<:Union{Complex{Float32},Complex{Float64}},N},SubArray{T<:Union{Complex{Float32},Complex{Float64}},N,A<:Union{Base.ReshapedArray{T,N,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N}},L}},
::Any; flags, timelimit) at fft/FFTW.jl:601
plan_bfft{T<:Real}(::AbstractArray{T<:Real,N}, ::Any; kws...) at
dft.jl:205
plan_bfft{T<:Union{Integer,Rational{T<:Integer}}}(::AbstractArray{Complex{T<:Union{Integer,Rational}},N},
::Any; kws...) at dft.jl:207 ...
in #plan_ifft#15(::Array{Any,1}, ::Function, ::Array{Complex,1},
::UnitRange{Int64}) at ./dft.jl:268 in #plan_ifft#3(::Array{Any,1},
::Function, ::Array{Complex,1}) at ./dft.jl:58 in
ifft(::Array{Complex,1}) at ./dft.jl:56
Could anyone help with this?
When I ask for typeof(Y), the answer is Array{Complex,1}.
Thank you
Just a guess here: ifft expects the array elements to be of type Complex{Float64}, not Complex. Furthermore,
julia> Complex<:Complex{Float64}
false
How did you get an array of Complex?
When using Complex{Float64} things work correctly:
julia> Y=complex([1.,2.,3.],[4.,3.,2.])
3-element Array{Complex{Float64},1}:
1.0+4.0im
2.0+3.0im
3.0+2.0im
julia> ifft(Y)
3-element Array{Complex{Float64},1}:
2.0+3.0im
-0.788675+0.211325im
-0.211325+0.788675im
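If your Y really does have the abstract element type Complex, converting it to a concrete element type before calling ifft should be enough; a sketch (note that on recent Julia versions ifft lives in the FFTW package rather than Base):
Yc = convert(Vector{Complex{Float64}}, Y)
ifft(Yc)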

Pearson's r in Julia

I couldn't find a ready-made function in Julia to compute Pearson's r, so I resorted to trying to make it myself; however, I ran into trouble.
code:
r(x,y) = (sum(x*y) - (sum(x)*sum(y))/length(x))/sqrt((sum(x^2)-(sum(x)^2)/length(x))*(sum(y^2)-(sum(y)^2)/length(x)))
If I attempt to run this on two arrays:
b = [4,8,12,16,20,24,28]
q = [5,10,15,20,25,30,35]
I get the following error:
ERROR: `*` has no method matching *(::Array{Int64,1}, ::Array{Int64,1})
in r at none:1
Pearson's r is available in Julia as cor:
julia> cor(b,q)
1.0
When you're looking for functions in Julia, the apropos function can be very helpful:
julia> apropos("pearson")
Base.cov(v1[, v2][, vardim=1, corrected=true, mean=nothing])
Base.cor(v1[, v2][, vardim=1, mean=nothing])
The issue you're running into with your definition is the difference between elementwise multiplication/exponentiation and matrix multiplication/exponentiation. In order to get the elementwise behavior you intend, you need to use .* and .^:
r(x,y) = (sum(x.*y) - (sum(x)*sum(y))/length(x))/sqrt((sum(x.^2)-(sum(x)^2)/length(x))*(sum(y.^2)-(sum(y)^2)/length(x)))
With only those three changes, your r definition seems to match Julia's cor to within a few ULPs:
julia> cor(b,q)
1.0
julia> x,y = randn(10),randn(10)
([-0.2384626335813905,0.0793838075714518,2.395918475924737,-1.6271954454542266,-0.7001484742860653,-0.33511064476423336,-1.5419149314518956,-0.8284664940238087,-0.6136547926069563,-0.1723749334766532],[0.08581770755520171,2.208288163473674,-0.5603452667737798,-3.0599443201343854,0.585509815026569,0.3876891298047877,-0.8368409374755644,1.672421071281691,0.19652240951291933,0.9838306761261647])
julia> r(x,y)
0.23514468093214283
julia> cor(x,y)
0.23514468093214275
Julia's cor is defined iteratively (this is the zero-mean implementation — calling cor first subtracts the mean and then calls corzm) which means fewer allocations and better performance. I can't speak to the numerical accuracy.
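If you do want to keep your own function, a minimal sketch of the same two-pass idea (subtract the means, then take a normalized dot product) is numerically better behaved than the single-pass formula; this is not Base's actual implementation, and on recent Julia versions mean and cor live in the Statistics standard library:
function pearson(x, y)
    xm = x .- mean(x)                 # center both vectors
    ym = y .- mean(y)
    sum(xm .* ym) / sqrt(sum(xm .^ 2) * sum(ym .^ 2))
end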
Your function is trying to multiply two column vectors. You will need to transpose one of them. Consider:
> [1,2]*[3,4]
ERROR: `*` has no method matching *(::Array{Int64,1}, ::Array{Int64,1})
but:
> [1,2]'*[3,4]
1-element Array{Int64,1}:
11
and:
> [1,2]*[3,4]'
2x2 Array{Int64,2}:
3 4
6 8
