I've been going through a tutorial on factorization in Julia. For the sake of practice, I am trying to take the eigendecomposition of a matrix and recreate the original matrix, using the formula:
A = VλV⁻¹
Where V is a matrix of eigenvectors, λ is a diagonal matrix of eigenvalues, and V⁻¹ is the inverse of V.
What confuses me is that the eigenvalues are returned as a vector, while the guides I found state they should be returned as a diagonal matrix.
Code example:
using LinearAlgebra
# Create matrix
A = rand(3, 3)
# Eigendecomposition
AEig = eigen(A)
λ = AEig.values   # 3-element Vector{Float64}
V = AEig.vectors  # 3×3 Matrix{Float64}
# Attempt to recompose the original matrix
Acomp = V * λ * inv(V)
A ≈ Acomp
Trying to multiply the vector and matrices returns an error:
DimensionMismatch("A has dimensions (3,1) but B has dimensions (3,3)")
This occurs because multiplying V with λ returns a 3-element vector, which Julia then tries to multiply with V⁻¹, a 3×3 matrix. My question is: is there a straightforward way to create a diagonal matrix from a vector? Alternatively, can the "recomposition" of the original matrix be achieved another way?
You can use the identity matrix I from LinearAlgebra like this:
julia> λ
3-element Vector{Float64}:
 -0.4445656542213612
  0.5573883013610712
  1.310095519651262
julia> λ .* I(3)
3×3 Matrix{Float64}:
 -0.444566  -0.0       -0.0
  0.0        0.557388   0.0
  0.0        0.0        1.3101
The .* there broadcasts: every element of the vector multiplies the corresponding row of the matrix.
[Edit:] After posting the question, I discovered another way to create a diagonal matrix, using the Diagonal() function. While the above solution works, this one has somewhat simpler syntax:
julia> Diagonal(λ)
3×3 Diagonal{Float64, Vector{Float64}}:
 -0.444566   ⋅         ⋅
   ⋅        0.557388   ⋅
   ⋅         ⋅        1.3101
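Putting it together, here is a minimal sketch of the full recomposition using Diagonal (F is just a shorter name for the factorization object; note that eigen of a general real matrix may return complex eigenvalues, in which case Acomp has negligible imaginary parts):
using LinearAlgebra

A = rand(3, 3)
F = eigen(A)
# Turn the eigenvalue vector into a proper diagonal matrix before multiplying
Acomp = F.vectors * Diagonal(F.values) * inv(F.vectors)
A ≈ Acomp  # true, up to floating-point error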
I want to fill an array in Julia with positive real numbers, but I only found information on how to do it with integers or with real numbers (including negatives). Is it possible?
Thanks!
You can use any mathematical formula that maps the [0, 1) range onto [0, +∞):
For example, if x is your random variable in the [0, 1) range (obtained with e.g. rand() for float data types):
tan(x * pi / 2)
atanh(x)
log(x) ^ 2
-log(x) / x ^ p (for p non-negative integer -- it will change the number distribution)
There are many other functions.
Of course the numbers are no longer uniformly distributed, but a uniform distribution on an unbounded range is impossible anyway.
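For illustration, a minimal sketch applying one of these transforms elementwise (any of the others works the same way):
x = rand(100)            # uniform samples in [0, 1)
y = tan.(x .* (pi / 2))  # mapped onto [0, +∞)
all(y .>= 0)             # true; note rand() can return exactly 0.0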
Technically, the built-in randexp fulfils your requirement: the exponential distribution has the positive reals as its support. In practice, though, the scale of the numbers you get is much smaller. The same holds for abs ∘ randn, the half-normal distribution. (In both cases, you could multiply the results by a large positive number to increase the variance to your requirements.)
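For example (randexp lives in the Random standard library):
using Random

x = randexp(5)        # Exponential(1): its support is the positive reals
y = abs.(randn(5))    # half-normal distribution
x_scaled = 1000 .* x  # rescale if you need larger magnitudes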
Here's a funny alternative: you can generate uniformly random bits and reinterpret them as floats (forcing the sign bit to positive):
julia> bitrand(3*64).chunks
3-element Vector{UInt64}:
0xe7c7c52703987e68
0xc221b9864e7bab7e
0xa45b39faa65b446e
julia> reinterpret(Float64, bitrand(3*64).chunks)
3-element reinterpret(Float64, ::Vector{UInt64}):
2.8135484124856866e-108
-4.521596431965459e53
-5.836451011310255e78
julia> abs.(reinterpret(Float64, bitrand(3*64).chunks))
3-element Vector{Float64}:
1.6467305137006711e236
3.3503597018864875e-260
1.2211675821672628e77
julia> bitstring.(abs.(reinterpret(Float64, bitrand(3*64).chunks)))
3-element Vector{String}:
"0000110000011000001110000110001111010000011110111101000101101101"
"0011000010110101111100111011110100111100011011000101001100010011"
"0110111000001000101011010100011011010010100111111011001000001100"
This is still not a uniform distribution on the values, though, since floats are spaced further apart the larger the exponent gets.
I have a general real matrix (i.e. not symmetric or Hermitian, etc.), and I would like to find its right eigenvectors and corresponding left eigenvectors in Julia.
Julia's eigen function returns the right eigenvectors only. I can find the left eigenvectors by doing
eigen(copy(M'))
but this requires copying the whole matrix and performing the eigendecomposition again, and there is no guarantee that the eigenvectors will be in the same order. (The copy is necessary because there is no eigen method for matrices of type Adjoint.)
In Python we have scipy.linalg.eig, which can compute the left and right eigenvectors simultaneously in a single pass; this is more efficient and guarantees that they come in the same order. Is there something similar in Julia?
The left eigenvectors can be computed by taking the inverse of the matrix formed by the right eigenvectors:
using LinearAlgebra
A = [1 0.1; 0.1 1]
F = eigen(A)
Q = eigvecs(F) # right eigenvectors
QL = inv(eigvecs(F)) # left eigenvectors
Λ = Diagonal(eigvals(F))
# check the results
A * Q ≈ Q * Λ # returns true
QL * A ≈ Λ * QL # returns true, too
# in general we have:
A ≈ Q * Λ * inv(Q)
In the above example QL are the left eigenvectors.
If the left eigenvectors are only applied to a vector, it is preferable to compute Q \ v instead of inv(Q) * v.
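For instance, continuing the snippet above with a hypothetical test vector v:
v = [1.0, 2.0]
QL * v ≈ Q \ v  # true; the backslash solves the linear system without forming inv(Q)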
I use the SVD factorization, which decomposes a matrix as USV' = M. The columns of U are the left singular vectors, the columns of V are the right singular vectors, and S is diagonal with the singular values, which are the square roots of the eigenvalues of M'M.
Note that U = V (and, since V is orthogonal, inv(V) = V') if and only if A is symmetric and positive definite (as in PCA, where both are used interchangeably on a correlation matrix).
Here are some sources where I confirmed my info:
https://www.cc.gatech.edu/~dellaert/pubs/svd-note.pdf
https://en.wikipedia.org/wiki/Singular_value_decomposition
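A quick sketch checking that relationship in Julia, reusing the symmetric matrix A from the first answer:
using LinearAlgebra

A = [1 0.1; 0.1 1]      # symmetric positive definite
F = svd(A)
F.U ≈ F.V               # true, because A is symmetric positive definite
F.S                     # singular values
sqrt.(eigvals(A' * A))  # the same values, possibly in a different order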
I'm looking at the numerics of some matrices that depend on a parameter x. The matrix has real eigenvalues for certain values of x, but for other values I get a degeneracy in both eigenvalues and eigenvectors (the occurrence of exceptional points).
One of the simplest examples to get an exceptional point is with the matrix:
julia> h = [1 1im; 1im -1]
2×2 Array{Complex{Int64},2}:
 1+0im   0+1im
 0+1im  -1+0im
The eigenvalues, b[1] from b = eig(h), are both numerically zero, as they should be:
2-element Array{Complex{Float64},1}:
 -2.22045e-16+0.0im
          0.0+0.0im
However, I would like to know why Julia gives me the eigenvectors:
julia> b[2][:,1]
2-element Array{Complex{Float64},1}:
-0.0-0.707107im
0.707107+0.0im
julia> b[2][:,2]
2-element Array{Complex{Float64},1}:
0.707107+0.0im
0.0+0.707107im
Since in this case the eigenvalue is zero, I think it doesn't really matter what the associated eigenvector is. But if the eigenvalues coalesce somewhere in the complex plane, do I really get two equal eigenvectors? Is there a specific way to treat these cases in Julia?
The kernel of your matrix consists of multiples of (1, i)', which is what you get. Since the matrix is not the zero matrix, it has rank 1 and thus also co-rank 1, so the eigenspace has dimension 1. The generalized eigenspace is the full space: you get h*(1,0)' = (1,i)', so in the basis ((1,i)', (1,0)') the linear operator has the matrix [[0,1],[0,0]], its Jordan normal form.
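A numerical sketch of that claim (P below holds the basis vectors from the answer as its columns):
using LinearAlgebra

h = [1 1im; 1im -1]
P = [1 1; im 0]     # columns are (1, i)' and (1, 0)'
J = inv(P) * h * P  # ≈ [0 1; 0 0], the Jordan block for the double eigenvalue 0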
julia> r
3×3 Array{Float64,2}:
 -1.77951  -0.79521     -2.57472
  0.0       0.630793     0.630793
  0.0       0.0         -1.66533e-16
julia> sort(abs(diag(r)))[1]
1.6653345369377348e-16
julia> isequal(floor(sort(abs(diag(r)))[1]),0)
true
But this is not right:
julia> isequal(sort(abs(diag(r)))[1],convert(AbstractFloat,0.0))
false
Is there a function in Julia to check for floating point equivalent to zero?
-1.66533e-16 is not equal to zero. It is, however, approximately equal to zero (with respect to a particular tolerance), and Julia provides just such a function:
isapprox(1e-16, 0.0; atol=1e-15, rtol=0)
Edit: or, as Chris pointed out, a good choice for atol is eps(), which corresponds to machine precision for that particular type:
julia> isapprox(1e-20, 0.0; atol=eps(Float64), rtol=0)
true
Do read the description for isapprox to see what the default arguments mean, and to see if you prefer an "absolute" or "relative" tolerance approach (or a mixed approach). Though for a comparison to zero specifically, using an absolute tolerance is fine and probably more intuitive.
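Applied to the value from the question, for instance:
julia> isapprox(-1.66533e-16, 0.0; atol=eps(Float64), rtol=0)
true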
I'm trying to execute this equation in Scilab; however, I'm getting error: 59 of function %s_pow called ..., even though I define x.
n=0:1:3;
x=[0:0.1:2];
z = factorial(3); w = factorial(n);u = factorial(3-n);
y = z /(w.*u);
t = y.*x^n*(1-x)^(3-n)
(At this point I haven't added the plot command yet, although I assume it would be plot(t)?)
Thanks for any input.
The powers x^n and (1-x)^(3-n) on the last line both cause the problem, because x and n are vectors and they are not the same size.
As mentioned in the documentation, the power operation can only be performed between the following combinations (a fixed version of your last line is sketched after the quoted rules):
(A:square)^(b:scalar): If A is a square matrix and b is a scalar, then A^b is the matrix A to the power b.
(A:matrix).^(b:scalar): If b is a scalar and A is a matrix, then A.^b is the matrix formed by the elements of A to the power b (elementwise power). If A is a vector and b is a scalar, then A^b and A.^b perform the same operation (i.e. elementwise power).
(A:scalar).^(b:matrix): If A is a scalar and b is a matrix (or vector), then A^b and A.^b are the matrices (or vectors) formed by a^(b(i,j)).
(A:matrix).^(b:matrix): If A and b are vectors (matrices) of the same size, then A.^b is the A(i)^b(i) vector (the A(i,j)^b(i,j) matrix).
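One way to satisfy these rules is to loop over the entries of n, so that every power has the form (vector).^(scalar). A sketch, assuming you want one curve of t per value of n (the binomial-coefficient line is moved inside the loop accordingly):
n = 0:1:3;
x = 0:0.1:2;
t = zeros(length(n), length(x));
for i = 1:length(n)
    // binomial coefficient 3! / (n! * (3-n)!) for this n
    y = factorial(3) / (factorial(n(i)) * factorial(3 - n(i)));
    // elementwise powers of a vector by a scalar are allowed
    t(i,:) = y * x.^n(i) .* (1 - x).^(3 - n(i));
end
plot(x, t')  // one curve per row of t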