QR decomposition in R - Forcing a Positive Diagonal

I have a problem with the qr() function in R. My input matrix is positive definite, so I expected qr() to return a triangular matrix R whose diagonal entries are all positive. However, I found some negative values on the diagonal. How can I address this problem?
Suppose we have a matrix y that looks like this:
[1,] 0.07018171 -0.07249188 -0.01952050
[2,] -0.09617788 0.52664014 -0.02930578
[3,] -0.01962719 -0.09521439 0.81718699
It is positive-definite:
> eigen(y)$values
[1] 0.82631283 0.53350907 0.05418694
When I apply qr() in R, it gives me
Q =
[,1] [,2] [,3]
[1,] -0.5816076 -0.6157887 0.5315420
[2,] 0.7970423 -0.5620336 0.2210021
[3,] 0.1626538 0.5521980 0.8176926
and R =
[1,] -0.1206685 0.4464293 0.1209139
[2,] 0.0000000 -0.3039269 0.4797403
[3,] 0.0000000 0.0000000 0.6513551
where the diagonal is not all positive.
Many thanks.
Here is the matrix:
structure(c(0.07018171, -0.09617788, -0.01962719, -0.07249188,
0.52664014, -0.09521439, -0.0195205, -0.02930578, 0.81718699), .Dim = c(3L,
3L))

You can simply multiply R by a diagonal matrix built from the signs of its diagonal entries to force the diagonal to be positive, and then adjust the corresponding columns of Q. Q is then still an orthogonal matrix.
Sample code
qr.decom <- qr(A)          # A is the input matrix
Q <- qr.Q(qr.decom)
R <- qr.R(qr.decom)
sgn <- sign(diag(R))       # signs of the diagonal entries of R
R.new <- diag(sgn) %*% R   # flip rows of R so its diagonal becomes positive
Q.new <- Q %*% diag(sgn)   # flip the matching columns of Q
Then R.new has positive diagonal elements.
We can use the example matrix from the question to try it in R.
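For instance, a quick check (the all.equal() verification lines are my addition, not part of the original answer):
y <- structure(c(0.07018171, -0.09617788, -0.01962719, -0.07249188,
0.52664014, -0.09521439, -0.0195205, -0.02930578, 0.81718699), .Dim = c(3L, 3L))
qr.decom <- qr(y)
Q <- qr.Q(qr.decom)
R <- qr.R(qr.decom)
sgn <- sign(diag(R))
R.new <- diag(sgn) %*% R
Q.new <- Q %*% diag(sgn)
diag(R.new)                           # all positive now
all.equal(Q.new %*% R.new, y)         # the product is unchanged
all.equal(crossprod(Q.new), diag(3))  # Q.new is still orthogonal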

I think you can also use pracma::gramSchmidt. This function automatically returns a Gram-Schmidt QR decomposition with positive entries on the diagonal of R. Hope it helps.
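As a rough sketch, assuming the pracma package is installed and reusing the matrix y from above:
library(pracma)
gs <- gramSchmidt(y)           # returns a list with components Q and R
diag(gs$R)                     # diagonal of R is positive
all.equal(gs$Q %*% gs$R, y)    # Q %*% R reproduces y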

Related

create a random non-singular matrix reliably

How can I create a matrix of pseudo-random values that is guaranteed to be non-singular? I tried the code below, but it failed. I suppose I could just loop until I got one by chance but I would prefer a more elegant "R-like" solution if anyone has an idea.
library(matrixcalc)
exampledf<- matrix(ceiling(runif(16,0,50)), ncol=4)
is.singular.matrix(exampledf) #this may or may not return false
Using a while loop:
library(matrixcalc)
exampledf <- matrix(0, nrow = 4, ncol = 4)  # start from a known singular matrix
while (is.singular.matrix(exampledf)) {
  exampledf <- matrix(ceiling(runif(16, 0, 50)), ncol = 4)
}
I suppose one method that actually guarantees (rather than merely makes it likely) that the matrix is non-singular is to start from a known non-singular matrix and apply the basic row operations used, for example, in Gaussian elimination: 1. add/subtract a multiple of one row to/from another row, or 2. multiply a row by a non-zero constant.
Depending on how "random" and how dense you want your matrix to be, you can start from the identity matrix and multiply all of its elements by a random non-zero constant. Afterwards, you can apply a randomly selected set of the operations above, which will still result in a non-singular matrix. You can even apply a predefined sequence of operations, but use a randomly selected constant at each step.
An alternative is to start from an upper triangular matrix whose main-diagonal entries are all non-zero, since the determinant of a triangular matrix is the product of the elements on the main diagonal. This effectively boils down to generating N random non-zero numbers, placing them on the main diagonal, and setting the rest of the entries (above the main diagonal) to whatever you like. If you want the matrix to be fully dense, add the first row to every other row of the matrix (see the sketch below).
Of course this approach (like probably any other) assumes that the matrix is reasonably well conditioned numerically and that the singularity will not be affected by precision errors (as you know, the precision of floating-point types in all programming languages is limited). You would do well to avoid very small or very large values, which can make the method numerically unstable.
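Here is a small sketch of the triangular construction described above; the variable names and distributions are my own choices, not from the original answer:
set.seed(1)
n <- 4
A <- diag(runif(n, 1, 10))                        # non-zero diagonal => non-zero determinant
A[upper.tri(A)] <- runif(n * (n - 1) / 2, -5, 5)  # fill the upper triangle freely
A[-1, ] <- sweep(A[-1, ], 2, A[1, ], "+")         # add row 1 to every other row (an elementary
                                                  # row operation, so the determinant is unchanged)
det(A)  # still the product of the original diagonal entries, hence non-zero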
It should be fairly unlikely that this will produce a singular matrix:
Mat1 <- matrix(rnorm(100), ncol=4)
Mat2 <- matrix(rnorm(100), ncol=4)
crossprod(Mat1,Mat2)
[,1] [,2] [,3] [,4]
[1,] 0.8138 5.112 2.945 -5.003
[2,] 4.9755 -2.420 1.801 -4.188
[3,] -3.8579 8.791 -2.594 3.340
[4,] 7.2057 6.426 2.663 -1.235
solve( crossprod(Mat1,Mat2) )
[,1] [,2] [,3] [,4]
[1,] -0.11273 0.15811 0.05616 0.07241
[2,] 0.03387 0.01187 0.07626 0.02881
[3,] 0.19007 -0.60377 -0.40665 0.17771
[4,] -0.07174 -0.31751 -0.15228 0.14582
inv1000 <- replicate(1000, {
Mat1 <- matrix(rnorm(100), ncol=4)
Mat2 <- matrix(rnorm(100), ncol=4)
try(solve( crossprod(Mat1,Mat2)))} )
str(inv1000)
#num [1:4, 1:4, 1:1000] 0.1163 0.0328 0.3424 -0.227 0.0347 ...
max(inv1000)
#[1] 451.6
inv100000 <- replicate(100000, {
Mat1 <- matrix(rnorm(100), ncol=4)
Mat2 <- matrix(rnorm(100), ncol=4)
is.singular.matrix( crossprod(Mat1,Mat2))} )
sum(inv100000)
#[1] 0

Can I generate bivariate normal random variables with correlation 1 using Cholesky factorization?

Is it possible to set a correlation of 1 using the Cholesky decomposition technique?
set.seed(88)
mu<- 0
sigma<-1
x<-rnorm(10000, mu, sigma)
y<-rnorm(10000, mu, sigma)
MAT<-cbind(x,y)
cor(MAT[,1],MAT[,2])
#this doesn't work because 1 makes it NOT positive-definite. any number 0 to .99 works
correlationMAT<- matrix(1,nrow = 2,ncol = 2)
U<-chol(correlationMAT)
newMAT<- MAT %*% U
cor(newMAT[,1], newMAT[,2]) #.....but I want to make this cor = 1
Any ideas?
Actually you can, by using pivoted Cholesky factorization.
correlationMAT<- matrix(1,nrow = 2,ncol = 2)
U <- chol(correlationMAT, pivot = TRUE)
#Warning message:
#In chol.default(correlationMAT, pivot = TRUE) :
# the matrix is either rank-deficient or indefinite
U
# [,1] [,2]
#[1,] 1 1
#[2,] 0 0
#attr(,"pivot")
#[1] 1 2
#attr(,"rank")
#[1] 1
Note that U has two identical columns. If we compute MAT %*% U, we replicate MAT[, 1] twice, which means the second random variable will be identical to the first one.
newMAT<- MAT %*% U
cor(newMAT)
# [,1] [,2]
#[1,] 1 1
#[2,] 1 1
You don't need to worry that the two random variables are identical. Remember, this only means they are identical after standardization (to N(0, 1)). You can rescale them by different standard deviations and then shift them by different means to make them different.
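For instance, a quick sketch of that rescaling; the means and standard deviations below are arbitrary choices of mine:
x1 <-  2 + 0.5 * newMAT[, 1]          # mean 2,  sd scaled by 0.5
x2 <- -1 + 3.0 * newMAT[, 2]          # mean -1, sd scaled by 3
c(cor(x1, x2), mean(x1), mean(x2))    # correlation is still 1, but the variables differ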
Pivoted Cholesky factorization is very useful. My answer for this post: Generate multivariate normal r.v.'s with rank-deficient covariance via Pivoted Cholesky Factorization gives a more comprehensive picture.

How to compute the power of a matrix in R [duplicate]

This question already has answers here:
A^k for matrix multiplication in R?
(6 answers)
Closed 9 years ago.
I'm trying to compute the -0.5 power of the following matrix:
S <- matrix(c(0.088150041, 0.001017491 , 0.001017491, 0.084634294),nrow=2)
In Matlab, the result is (S^(-0.5)):
S^(-0.5)
ans =
3.3683 -0.0200
-0.0200 3.4376
> library(expm)
> solve(sqrtm(S))
[,1] [,2]
[1,] 3.36830328 -0.02004191
[2,] -0.02004191 3.43755429
After some time, the following solution came up:
"%^%" <- function(S, power)
with(eigen(S), vectors %*% (values^power * t(vectors)))
S%^%(-0.5)
The result gives the expected answer:
[,1] [,2]
[1,] 3.36830328 -0.02004191
[2,] -0.02004191 3.43755430
The square root of a matrix is not necessarily unique (most real numbers have at least two square roots, so this is not specific to matrices). There are multiple algorithms for generating a square root of a matrix. Others have shown the approach using expm and eigenvalues, but the Cholesky decomposition is another possibility (see the chol function).
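As a brief illustration of the Cholesky route (my example, using the S from the question), chol() gives an upper-triangular U with t(U) %*% U equal to S, i.e. a non-symmetric square root:
S <- matrix(c(0.088150041, 0.001017491, 0.001017491, 0.084634294), nrow = 2)
U <- chol(S)              # upper-triangular Cholesky factor
all.equal(t(U) %*% U, S)  # TRUE: U is a square root of S in the Cholesky sense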
To extend this answer beyond square roots, the following function exp.mat() generalizes the "Moore–Penrose pseudoinverse" of a matrix and lets you calculate arbitrary powers of a matrix via a singular value decomposition (SVD) (it even works for non-square matrices, although I don't know when one would need that).
exp.mat() function:
#The exp.mat function can calculate the pseudoinverse of a matrix (EXP=-1)
#and other exponents of matrices, such as square roots (EXP=0.5) or the square
#root of its inverse (EXP=-0.5).
#The function arguments are a matrix (MAT), an exponent (EXP), and a tolerance
#level for non-zero singular values.
exp.mat <- function(MAT, EXP, tol = NULL){
  MAT <- as.matrix(MAT)
  matdim <- dim(MAT)
  if(is.null(tol)){
    tol <- min(1e-7, .Machine$double.eps * max(matdim) * max(MAT))
  }
  if(matdim[1] >= matdim[2]){
    svd1 <- svd(MAT)
    keep <- which(svd1$d > tol)
    res <- t(svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep]))
  }
  if(matdim[1] < matdim[2]){
    svd1 <- svd(t(MAT))
    keep <- which(svd1$d > tol)
    res <- svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep])
  }
  return(res)
}
Example
S <- matrix(c(0.088150041, 0.001017491 , 0.001017491, 0.084634294),nrow=2)
exp.mat(S, -0.5)
# [,1] [,2]
#[1,] 3.36830328 -0.02004191
#[2,] -0.02004191 3.43755429
Other examples can be found here.

using eigenvalues to test for singularity: identifying collinear columns

I am trying to check if my matrix is singular using the eigenvalues approach (i.e. if one of the eigenvalues is zero then the matrix is singular). Here is the code:
z <- matrix(c(-3,2,1,4,-9,6,3,12,5,5,9,4),nrow=4,ncol=3)
eigen(t(z)%*%z)$values
I know the eigenvalues are sorted in descending order. Can someone please let me know if there is a way to find out which eigenvalue is associated with which column of the matrix? I need to remove the collinear columns.
It might be obvious in the example above, but it is just an example, intended to save you the time of creating a new matrix.
Example:
z <- matrix(c(-3,2,1,4,-9,6,3,12,5,5,9,4),nrow=4,ncol=3)
m <- crossprod(z) ## slightly more efficient than t(z) %*% z
The eigenvalues show that the third one is numerically zero, so the third eigenvector is the one associated with the collinear combination:
ee <- eigen(m)
(evals <- zapsmall(ee$values))
## [1] 322.7585 124.2415 0.0000
Now examine the corresponding eigenvectors, which are listed as columns corresponding to their respective eigenvalues:
(evecs <- zapsmall(ee$vectors))
##            [,1]       [,2]       [,3]
## [1,] -0.2975496 -0.1070713  0.9486833
## [2,] -0.8926487 -0.3212138 -0.3162278
## [3,] -0.3385891  0.9409343  0.0000000
The third eigenvalue is zero; the first two elements of the third eigenvector (evecs[,3]) are non-zero, which tells you that columns 1 and 2 are collinear.
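As a quick sanity check (this line is mine, not from the original answer), the ratio of the two columns is constant:
z[, 2] / z[, 1]
## [1] 3 3 3 3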
Here's a way to automate this test:
testcols <- function(ee) {
  ## split the eigenvector matrix into a list, by columns
  evecs <- split(zapsmall(ee$vectors), col(ee$vectors))
  ## for zero eigenvalues, list the non-zero eigenvector components
  mapply(function(val, vec) {
    if (val != 0) NULL else which(vec != 0)
  }, zapsmall(ee$values), evecs)
}
testcols(ee)
## [[1]]
## NULL
## [[2]]
## NULL
## [[3]]
## [1] 1 2
You can use tmp <- svd(z) to compute a singular value decomposition. The singular values are then stored in the vector tmp$d (shown below as a diagonal matrix via diag(tmp$d)); a numerically zero singular value indicates that the matrix is rank-deficient. This also works with a non-square matrix.
> diag(tmp$d)
[,1] [,2] [,3]
[1,] 17.96548 0.00000 0.000000e+00
[2,] 0.00000 11.14637 0.000000e+00
[3,] 0.00000 0.00000 8.787239e-16

Determining if a matrix is diagonalizable in the R Programming Language

I have a matrix and I would like to know if it is diagonalizable. How do I do this in the R programming language?
If you have a given matrix, m, then one way is to take the eigenvectors times the diagonal matrix of the eigenvalues times the inverse of the eigenvector matrix. That should give us back the original matrix. In R that looks like:
m <- matrix( c(1:16), nrow = 4)
p <- eigen(m)$vectors
d <- diag(eigen(m)$values)
p %*% d %*% solve(p)
m
So in that example p %*% d %*% solve(p) should be the same as m.
You can implement the full algorithm to check whether the matrix reduces to a Jordan form or to a diagonal one (see e.g. this document). Or you can take the quick and dirty way: for an n-dimensional square matrix, use eigen(M)$values and check that there are n distinct values. For random matrices this always suffices, since degeneracy has probability 0.
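A rough sketch of that quick check (the rounding tolerance is my own assumption):
M <- matrix(rnorm(16), nrow = 4)
vals <- eigen(M)$values                    # possibly complex
length(unique(round(vals, 8))) == nrow(M)  # TRUE when there are n distinct eigenvalues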
P.S.: Based on a simple observation by JD Long below, I recalled that a necessary and sufficient condition for diagonalizability is that the eigenvectors span the original space. To check this, verify that the eigenvector matrix has full rank (no zero eigenvalue). So here is the code:
diagflag <- function(m, tol = 1e-10){
  x <- eigen(m)$vectors
  y <- min(abs(eigen(x)$values))
  return(y > tol)
}
# nondiagonalizable matrix
m1 = matrix(c(1,1,0,1),nrow=2)
# diagonalizable matrix
m2 = matrix(c(-1,1,0,1),nrow=2)
> m1
[,1] [,2]
[1,] 1 0
[2,] 1 1
> diagflag(m1)
[1] FALSE
> m2
[,1] [,2]
[1,] -1 0
[2,] 1 1
> diagflag(m2)
[1] TRUE
You might want to check out this page for some basic discussion and code. You'll need to search for "diagonalized" which is where the relevant portion begins.
All matrices that are symmetric across the diagonal are diagonalizable by orthogonal matrices. In fact, if you want diagonalizability only by orthogonal matrix conjugation, i.e. D = PAP' where P' just stands for the transpose of P, then symmetry across the diagonal, i.e. A_{ij} = A_{ji}, is exactly equivalent to that kind of diagonalizability.
If the matrix is not symmetric, then diagonalizability does not mean D = PAP' but merely D = PAP^{-1}, and we do not necessarily have P' = P^{-1}, which is the condition of orthogonality.
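A small illustration of the symmetric case (the example matrix is my own):
A <- matrix(c(2, 1, 1, 3), nrow = 2)         # symmetric
e <- eigen(A)
P <- e$vectors
all.equal(t(P), solve(P))                    # the eigenvector matrix is orthogonal
all.equal(P %*% diag(e$values) %*% t(P), A)  # orthogonal diagonalization recovers A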
You need to do something more substantial, and there is probably a better way, but you could just compute the eigenvectors and check that their rank equals the total dimension (a sketch follows below).
See this discussion for a more detailed explanation.
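A minimal sketch of that rank check; the helper name is_diagonalizable and the tolerance are my own choices:
is_diagonalizable <- function(m, tol = 1e-10) {
  v <- eigen(m)$vectors            # eigenvectors as columns (possibly complex)
  sum(svd(v)$d > tol) == nrow(m)   # full rank <=> the eigenvectors span the space
}
is_diagonalizable(matrix(c(1, 1, 0, 1), nrow = 2))   # FALSE (defective matrix)
is_diagonalizable(matrix(c(-1, 1, 0, 1), nrow = 2))  # TRUE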

Resources