I'm interested in computing the log of the determinant of a large, sparse, complex (floating point) matrix. My first thought is to use an LU decomposition, i.e.:
srand(123)
A=complex.(rand(3,3), rand(3,3))
LUF=lufact(A)
LUFs=lufact(sparse(A))
if round(det(LUFs[:L])*det(LUFs[:U]) - det(A[LUFs[:p], LUFs[:q]]), 5) == 0
    println("Sparse LU determinant is correct\n")
else
    println("Sparse LU determinant is NOT correct\n")
end
which will always print out the "NOT correct" option. Furthermore,
round.(LUFs[:L]*LUFs[:U], 5)==round.(A[LUFs[:p], LUFs[:q]], 5)
always comes out as false.
If instead I try to directly use
logdet(LUFs)
or
logdet(sparse(A))
I get an error:
LoadError: MethodError: no method matching logabsdet(::Base.SparseArrays.UMFPACK.UmfpackLU{Complex{Float64},Int64})
Closest candidates are:
logabsdet(!Matched::Base.LinAlg.UnitUpperTriangular{T,S} where S<:(AbstractArray{T,2} where T)) where T at linalg/triangular.jl:2184
logabsdet(!Matched::Base.LinAlg.UnitLowerTriangular{T,S} where S<:(AbstractArray{T,2} where T)) where T at linalg/triangular.jl:2185
logabsdet(!Matched::Union{LowerTriangular{T,S} where S<:(AbstractArray{T,2} where T), UpperTriangular{T,S} where S<:(AbstractArray{T,2} where T)}) where T at linalg/triangular.jl:2189
...
while loading...
I'm not sure if it's something wrong in the way I've coded it (I'm a beginner transitioning from Matlab), or if there's something wrong with my Julia install (though I have replicated these results on another computer). Any pointers you could give me would be great!
The reason is that the sparse LU has a scaling factor as well, which can be extracted with LUFs[:Rs] (in Julia 0.6; LUFs.Rs in Julia 0.7 and later). Hence the computation becomes
julia> det(LUFs[:U])/prod(LUFs[:Rs])
0.4576579970452131 - 0.07585833005688908im
julia> det(A)
0.4576579970452133 - 0.07585833005688908im
We should probably have logabsdet for the sparse case as well. However, is your matrix positive definite by any chance? If so, you'd be able to compute the logdet of the Cholesky factorization.
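For the log-determinant itself, a rough sketch along the same lines (Julia 0.6 syntax, reusing LUFs from above; it relies on det(A) = det(U)/prod(Rs) as in the computation above, assumes L has a unit diagonal, and ignores the ±1 sign contributed by the row/column permutations):
# accumulate the log-determinant from the diagonal of U and the row scaling Rs
logdetA = sum(log.(diag(LUFs[:U]))) - sum(log.(LUFs[:Rs]))
# compare with the dense result (imaginary parts may differ by a multiple of 2π)
log(det(A))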
I'm trying to do some Compressed Sensing exercises in Julia, but I realized that the discrete cosine transform (using FFTW.jl) of an identity matrix doesn't look like the result from other programming languages (namely Mathematica and Matlab).
For example in Julia
using Plots, FFTW, LinearAlgebra
n = 100
Psi = dct(Matrix(1.0I,n,n))
heatmap(Psi)
results in this matrix (which is essentially an identity matrix with some noise)
But in Matlab
imagesc(dct(eye(100,100),'Type',2))
this is the result (as expected)
Finally in Mathematica
MatrixPlot[N[FourierDCTMatrix[100, 2]], PlotLegends -> Automatic]
returns this
Why does Julia behave so differently?
And is this normal?
Matlab (and I guess Mathematica) applies the dct to each column of your matrix. FFTW performs a 2-dimensional dct when the input is two-dimensional. The same happens for fft.
If you want column-wise transformation, you can specify the dimension:
Psi1 = dct(Matrix(1.0I,n,n), 1); # along first dimension
heatmap(Psi1)
Notice that the direction of the y-axis is opposite for Plots.jl relative to Matlab.
(BTW, you can also just write I(n) or 1.0I(n) instead of Matrix(1.0I,n,n))
This is something that sets Julia apart from some other languages. It tends to treat matrices as matrices, and not just as a collection of vectors or a bunch of scalars. For example, exp(M) and log(M) for matrices do not operate elementwise, but calculate the matrix exponential and matrix logarithm according to their linear algebra definitions.
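For instance, a small illustration:
using LinearAlgebra
M = [0.0 1.0; -1.0 0.0]
exp(M)    # matrix exponential: a rotation matrix in this case
exp.(M)   # broadcast form: exp applied to each entry separately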
I'm struggling with how to simplify the quotient of two normal distribution functions in R. I'm calculating a conditional skew-normal density, so I have the division of these two functions:
pnorm(alpha0+t(alpha2)%*%chol2inv(chol(omega2))%*%t(y2-xi2.1))/pnorm(tau2.1)
where alpha0+t(alpha2)%*%chol2inv(chol(omega2))%*%t(y2-xi2.1) and tau2.1 are real numbers. For example, sometimes I end up with pnorm(-50)/pnorm(-40), i.e. the indeterminate form 0/0. But these values are not really zero; R is just underflowing them to 0. I tried to use the erf function, but I got the same problem (0/0).
Any hint on how can I overcome this issue?
pnorm has a log.p argument, which makes it return log(p). Change your expression to exp(log(p1) - log(p2)):
exp(pnorm(-50, log.p = TRUE) - pnorm(-40, log.p = TRUE))
#[1] 2.95577e-196
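Applied to the original expression (same names as above):
exp(pnorm(alpha0 + t(alpha2) %*% chol2inv(chol(omega2)) %*% t(y2 - xi2.1), log.p = TRUE) -
    pnorm(tau2.1, log.p = TRUE))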
I have a square symmetric real matrix S of dimension 31. I want to compute its trace (nuclear) norm, Frobenius (Hilbert–Schmidt) norm and operator (spectral) norm. I am using eigen:
x <- eigen(S, only.values = TRUE)$values  # eigenvalues of S
sum(abs(x))     # trace (nuclear) norm
sqrt(sum(x^2))  # Frobenius norm
max(abs(x))     # operator (spectral) norm
Is there a faster way to do this? For the Frobenius norm, I suppose that sqrt(sum(S^2)) should be faster. I also believe that there should be a way to compute the operator norm faster, as it only requires the maximal and minimal eigenvalues rather than all of them. I am not sure, however, how to handle the trace norm efficiently. It can be computed as the trace of the matrix square root of t(S) %*% S, but (at least to my knowledge) computing the matrix square root is done using eigen too (see my code below if helpful).
I don't know if it helps at all, but I also know that S+diag(31) is positive semidefinite.
I need to do this for a lot of matrices (4 000 000 or so) so even mild improvements would be consequential.
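As a quick sanity check of the sum(S^2) idea (a small sketch with a random symmetric matrix standing in for S):
S <- crossprod(matrix(rnorm(31 * 31), 31))   # example 31x31 symmetric matrix
x <- eigen(S, only.values = TRUE)$values
all.equal(sqrt(sum(S^2)), sqrt(sum(x^2)))    # TRUE: entries vs. eigenvalues
all.equal(sqrt(sum(S^2)), norm(S, "F"))      # TRUE: matches base R's Frobenius norm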
Here is the code for the matrix square root I am using:
sqm <- function(A)
{
  # matrix square root via the eigendecomposition A = V diag(d) t(V):
  # returns V diag(sqrt(d)) t(V); assumes A is symmetric positive semidefinite
  A <- eigen(A)
  A$vectors %*% (sqrt(A$values) * t(A$vectors))
}
I am trying to calculate P^100 where P is my transition matrix. I want to do this by diagonalizing P so that we have P = Q*D*Q^-1.
Of course, if I can get P to be of this form, then I can easily calculate P^100 = Q*D^100*Q^-1 (where * denotes matrix multiplication).
I discovered that if you just do P^5, all you get back is a matrix where each entry of P has been raised to the 5th power, rather than the fifth matrix power (P %*% P %*% P %*% P %*% P).
I found a question on here that asks how to check if a matrix is diagonalizable but not how to explicitly construct the diagonalization of a matrix. In MATLAB it's super easy but well, I'm using R and not MATLAB.
The eigen() function will compute eigenvalues and eigenvectors for you (the matrix of eigenvectors is Q in your expression, diag() of the eigenvalues is D).
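A minimal sketch of that (assuming P is diagonalizable; for a general matrix the eigenvalues and eigenvectors may be complex):
e <- eigen(P)
Q <- e$vectors                    # Q: matrix of eigenvectors
D100 <- diag(e$values^100)        # D^100: eigenvalues raised to the 100th power
P100 <- Q %*% D100 %*% solve(Q)   # P^100 = Q D^100 Q^-1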
You could also use the %^% operator in the expm package, or functions from other packages described in the answers to this question.
The advantages of using someone else's code are that it's already been tested and debugged, and may use faster or more robust algorithms (e.g., it's often more efficient to compute the matrix power by composing powers of two of the matrix rather than doing the eigenvector computations). The advantage of writing your own method is that you'll understand it better.
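For illustration, the powers-of-two idea is roughly this (a sketch, not any package's actual code; mat_pow is a made-up name):
mat_pow <- function(P, n) {
  result <- diag(nrow(P))                     # start from the identity
  while (n > 0) {
    if (n %% 2 == 1) result <- result %*% P   # multiply in the current power when that bit of n is set
    P <- P %*% P                              # square
    n <- n %/% 2
  }
  result
}
# mat_pow(P, 100)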
I want to minimize the function FLogV below (working with a multivariate normal distribution; Z is the N×C data matrix, SIGMA is the C×C variance-covariance matrix of the data, and R is a vector of length C):
FLogV <- function(P){
  # (here I define the parameters, P, within R and SIGMA)
  logC <- (C/2)*N*log(2*pi) + (1/2)*N*log(det(SIGMA))
  SOMA.t <- 0
  for (j in 1:N){
    SOMA.t <- SOMA.t + sum(t(Z[j,] - R) %*% solve(SIGMA) %*% (Z[j,] - R))
  }
  MlogV <- logC + (1/2)*SOMA.t
  return(MlogV)
}
minLogV <- optim(P, FLogV)
All this is part of a longer program that was already tested and works well, except for the most important step: I can't run the optimization because I get this error:
“Error in solve.default(SIGMA) :
system is computationally singular: reciprocal condition number = 3.57726e-55”
If I use ginv() or pseudoinverse() or qr.solve() I get:
“Error in svd(X) : infinite or missing values in 'x'”
The thing is: if I take the SIGMA matrix after the error message, I can run solve(SIGMA), the eigenvalues are all positive, and the determinant is very small but positive:
det(SIGMA)
[1] 3.384674e-76
eigen(SIGMA)$values
[1] 0.066490265 0.024034173 0.018738777 0.015718562 0.013568884 0.013086845
….
[31] 0.002414433 0.002061556 0.001795105 0.001607811
I have already read several papers about handling matrices like SIGMA (which are close to singular), and tried several transformations of the data's scale and form, but I found that for a 34x34 matrix like this one, once det(SIGMA) gets close to e-40, R treats it as 0 and the calculation fails. I also can't reduce the matrix dimensions, and I can't insert corrections for near-singular matrices inside my function, because optim has to be able to evaluate it. I really appreciate any suggestion for this problem.
Thanks in advance,
Maria D.
It isn't clear from your post whether the failure is coming from det() or solve().
If it's just the solve() in the quadratic term, you may want to try the two-argument version of solve(); it can be a bit more stable. solve(X, Y) is the same as solve(X) %*% Y.
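For example, the quadratic term inside the loop could become (using the question's names):
SOMA.t <- SOMA.t + sum(t(Z[j,] - R) %*% solve(SIGMA, Z[j,] - R))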
If you can factor SIGMA using chol(), you get a triangular factor: in R, chol() returns an upper-triangular U with t(U) %*% U == SIGMA. The determinant of SIGMA is then the square of the product of U's diagonal entries (so log(det(SIGMA)) is 2*sum(log(diag(U))), with no underflowing determinant ever formed), and you might try this for the quadratic term:
crossprod(backsolve(U, Z[j,] - R, transpose = TRUE))
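Putting both pieces together, a sketch of a stabler objective might look like this (same names as in the question; it assumes SIGMA is symmetric positive definite so that chol() succeeds):
FLogV <- function(P){
  # (here I define the parameters, P, within R and SIGMA)
  U <- chol(SIGMA)   # upper triangular, t(U) %*% U == SIGMA
  # (1/2)*N*log(det(SIGMA)) computed as N*sum(log(diag(U))), avoiding det() underflow
  logC <- (C/2)*N*log(2*pi) + N*sum(log(diag(U)))
  SOMA.t <- 0
  for (j in 1:N){
    # (Z[j,] - R)' SIGMA^-1 (Z[j,] - R) via a triangular solve instead of solve(SIGMA)
    SOMA.t <- SOMA.t + sum(crossprod(backsolve(U, Z[j,] - R, transpose = TRUE)))
  }
  logC + (1/2)*SOMA.t
}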