I am trying to check if my matrix is singular using the eigenvalues approach (i.e. if one of the eigenvalues is zero then the matrix is singular). Here is the code:
z <- matrix(c(-3,2,1,4,-9,6,3,12,5,5,9,4),nrow=4,ncol=3)
eigen(t(z)%*%z)$values
I know the eigenvalues are sorted in descending order. Is there a way to find out which eigenvalue is associated with which column of the matrix? I need to remove the collinear columns.
It might be obvious in the example above, but that matrix is only an example, included to save you the time of creating one yourself.
Example:
z <- matrix(c(-3,2,1,4,-9,6,3,12,5,5,9,4),nrow=4,ncol=3)
m <- crossprod(z) ## slightly more efficient than t(z) %*% z
Compute the eigen decomposition and look at the eigenvalues; a zero eigenvalue signals a collinear combination of columns:
ee <- eigen(m)
(evals <- zapsmall(ee$values))
## [1] 322.7585 124.2415 0.0000
Now examine the corresponding eigenvectors, which are listed as columns corresponding to their respective eigenvalues:
(evecs <- zapsmall(ee$vectors))
##            [,1]       [,2]       [,3]
## [1,] -0.2975496 -0.1070713  0.9486833
## [2,] -0.8926487 -0.3212138 -0.3162278
## [3,] -0.3385891  0.9409343  0.0000000
The third eigenvalue is zero; the first two elements of the third eigenvector (evecs[,3]) are non-zero, which tells you that columns 1 and 2 are collinear.
Here's a way to automate this test:
testcols <- function(ee) {
  ## split the eigenvector matrix into a list, by columns
  evecs <- split(zapsmall(ee$vectors), col(ee$vectors))
  ## for each zero eigenvalue, list the non-zero eigenvector components
  mapply(function(val, vec) {
    if (val != 0) NULL else which(vec != 0)
  }, zapsmall(ee$values), evecs)
}
testcols(ee)
## [[1]]
## NULL
## [[2]]
## NULL
## [[3]]
## [1] 1 2
You can also use tmp <- svd(z) to compute a singular value decomposition. The singular values are returned in the vector tmp$d (diag(tmp$d) arranges them as a diagonal matrix); they are the square roots of the eigenvalues of t(z) %*% z, so a (numerically) zero singular value again means the matrix is singular. This also works with a non-square matrix.
> diag(tmp$d)
[,1] [,2] [,3]
[1,] 17.96548 0.00000 0.000000e+00
[2,] 0.00000 11.14637 0.000000e+00
[3,] 0.00000 0.00000 8.787239e-16
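As a quick consistency check (my own addition, not part of the original answer), the squared singular values reproduce the eigenvalues of crossprod(z) from above:
tmp <- svd(z)
zapsmall(tmp$d^2)                     # squared singular values
zapsmall(eigen(crossprod(z))$values)  # same values, up to rounding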
I have a problem with the qr function in R. My input matrix is positive definite, so qr() should give me an upper-triangular factor R whose diagonal entries are all positive. However, I found some negative values on the diagonal. How can I address this problem?
Suppose we have a matrix y that looks like this:
            [,1]        [,2]        [,3]
[1,]  0.07018171 -0.07249188 -0.01952050
[2,] -0.09617788  0.52664014 -0.02930578
[3,] -0.01962719 -0.09521439  0.81718699
It is positive-definite:
> eigen(y)$values
[1] 0.82631283 0.53350907 0.05418694
When I apply qr() in R, it gives me
Q =
[,1] [,2] [,3]
[1,] -0.5816076 -0.6157887 0.5315420
[2,] 0.7970423 -0.5620336 0.2210021
[3,] 0.1626538 0.5521980 0.8176926
and R =
           [,1]       [,2]      [,3]
[1,] -0.1206685  0.4464293 0.1209139
[2,]  0.0000000 -0.3039269 0.4797403
[3,]  0.0000000  0.0000000 0.6513551
in which the diagonal is not all positive.
Many thanks.
Here is the matrix:
structure(c(0.07018171, -0.09617788, -0.01962719, -0.07249188,
0.52664014, -0.09521439, -0.0195205, -0.02930578, 0.81718699), .Dim = c(3L,
3L))
We can simply multiply R on the left by a diagonal matrix built from sign(diag(R)) to force the diagonal entries to be positive, and then adjust the corresponding columns of Q. Q is then still an orthogonal matrix.
Sample code
qr.decom <- qr(A)         # A is the input matrix (e.g. the matrix y above)
Q <- qr.Q(qr.decom)
R <- qr.R(qr.decom)
sgn <- sign(diag(R))      # signs of the diagonal of R
R.new <- diag(sgn) %*% R  # flip the rows of R whose diagonal entry is negative
Q.new <- Q %*% diag(sgn)  # flip the matching columns of Q, so Q.new %*% R.new == Q %*% R
Then R.new has positive diagonal elements.
We can try it on the example matrix from the question.
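For instance, here is a quick check of my own (y is the matrix from the structure() call above; output omitted):
y <- structure(c(0.07018171, -0.09617788, -0.01962719, -0.07249188,
                 0.52664014, -0.09521439, -0.0195205, -0.02930578,
                 0.81718699), .Dim = c(3L, 3L))
A <- y                          # plug the question's matrix into the sample code above
qr.decom <- qr(A)
Q <- qr.Q(qr.decom)
R <- qr.R(qr.decom)
sgn <- sign(diag(R))
R.new <- diag(sgn) %*% R
Q.new <- Q %*% diag(sgn)
diag(R.new)                     # now all positive
all.equal(Q.new %*% R.new, A)   # the product is unchanged
zapsmall(crossprod(Q.new))      # still orthogonal: identity up to rounding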
I think you can also use pracma::gramSchmidt. That function returns a Gram-Schmidt QR decomposition whose R factor has positive entries on the diagonal. Hope it helps.
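A minimal sketch of that alternative, assuming the pracma package is installed and y is the matrix from the question:
library(pracma)
gs <- gramSchmidt(y)   # returns a list with components Q and R
diag(gs$R)             # the diagonal of R should come out positive
gs$Q %*% gs$R          # reconstructs y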
How can I create a matrix of pseudo-random values that is guaranteed to be non-singular? I tried the code below, but it failed. I suppose I could just loop until I got one by chance but I would prefer a more elegant "R-like" solution if anyone has an idea.
library(matrixcalc)
exampledf<- matrix(ceiling(runif(16,0,50)), ncol=4)
is.singular.matrix(exampledf) #this may or may not return false
Using a while loop:
library(matrixcalc)
## start from a known singular matrix (is.singular.matrix() errors on NULL)
exampledf <- matrix(0, nrow = 4, ncol = 4)
## regenerate while the matrix is singular
while (is.singular.matrix(exampledf)) {
  exampledf <- matrix(ceiling(runif(16, 0, 50)), ncol = 4)
}
I suppose one method that guarantees (not just makes it very likely, but actually guarantees) that the matrix is non-singular is to start from a known non-singular matrix and apply the basic row operations used, for example, in Gaussian elimination: 1. add or subtract a multiple of one row from another row, or 2. multiply a row by a non-zero constant.
Depending on how "random" and how dense you want your matrix to be, you can start from the identity matrix and multiply all of its elements by a random non-zero constant. Afterwards, you can apply a randomly selected set of operations from above, which will still result in a non-singular matrix. You can even apply a predefined set of operations, but use a randomly selected constant at each step.
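Here is a rough sketch of that idea (my own illustration; the constants and the number of row operations are arbitrary):
set.seed(42)
n <- 4
A <- diag(runif(n, 1, 10))  # scaled identity: non-singular by construction
for (k in 1:20) {
  i <- sample(n, 1)
  j <- sample(setdiff(1:n, i), 1)
  A[i, ] <- A[i, ] + runif(1, -2, 2) * A[j, ]  # adding a multiple of another row preserves non-singularity
}
det(A)  # non-zero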
An alternative could be to start from an upper-triangular matrix in which the product of the main-diagonal entries is non-zero. This works because the determinant of a triangular matrix is the product of the elements on the main diagonal. It effectively boils down to generating N non-zero random numbers, placing them on the main diagonal, and setting the entries above the main diagonal to whatever you like. If you want the matrix to be fully dense, add the first row to every other row of the matrix.
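A sketch of the triangular variant (again my own illustration):
set.seed(42)
n <- 4
U <- matrix(0, n, n)
U[upper.tri(U)] <- runif(n * (n - 1) / 2, 0, 50)  # anything above the diagonal
diag(U) <- runif(n, 1, 50)                        # non-zero diagonal => det(U) != 0
M <- U
for (i in 2:n) M[i, ] <- M[i, ] + M[1, ]          # densify; row operations keep it non-singular
det(M)  # equals prod(diag(U)), hence non-zero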
Of course this approach (like probably any other) assumes that the matrix is numerically well-behaved and that the singularity check will not be thrown off by precision errors (the precision of numeric types is limited in every programming language). You would do well to avoid very small or very large values, which can make the method numerically unstable.
It should be fairly unlikely that this will produce a singular matrix:
Mat1 <- matrix(rnorm(100), ncol=4)
Mat2 <- matrix(rnorm(100), ncol=4)
crossprod(Mat1,Mat2)
[,1] [,2] [,3] [,4]
[1,] 0.8138 5.112 2.945 -5.003
[2,] 4.9755 -2.420 1.801 -4.188
[3,] -3.8579 8.791 -2.594 3.340
[4,] 7.2057 6.426 2.663 -1.235
solve( crossprod(Mat1,Mat2) )
[,1] [,2] [,3] [,4]
[1,] -0.11273 0.15811 0.05616 0.07241
[2,] 0.03387 0.01187 0.07626 0.02881
[3,] 0.19007 -0.60377 -0.40665 0.17771
[4,] -0.07174 -0.31751 -0.15228 0.14582
inv1000 <- replicate(1000, {
  Mat1 <- matrix(rnorm(100), ncol = 4)
  Mat2 <- matrix(rnorm(100), ncol = 4)
  try(solve(crossprod(Mat1, Mat2)))
})
str(inv1000)
#num [1:4, 1:4, 1:1000] 0.1163 0.0328 0.3424 -0.227 0.0347 ...
max(inv1000)
#[1] 451.6
> inv100000 <- replicate(100000, {Mat1 <- matrix(rnorm(100), ncol=4)
+ Mat2 <- matrix(rnorm(100), ncol=4)
+ is.singular.matrix( crossprod(Mat1,Mat2))} )
> sum(inv100000)
[1] 0
I'm trying to compute the PCA scores, and part of the algorithm says: subtract the mean of the matrix and divide by the standard deviation.
I have the following 2x2 matrix: A = [1 3; 2 4]. In Matlab, say, I do the following:
mean(A) -> this gives me back a vector of 2 values (column-based), 1.5 and 3.5, which in this instance seems correct to me.
In R, however, mean(A) returns just a single value (the mean of all the elements). The same goes for the standard deviation.
So my question is, which is right? For the purposes of this function (in the algorithm):
function(x) {(x - mean(x))/sd(x)} (http://strata.uga.edu/software/pdf/pcaTutorial.pdf)
Should I be subtracting the column-wise means (two values, as in Matlab) or the single overall mean (as in R)?
Thanks
The R command that will do this in one swoop for matrices or dataframes is scale()
> A = matrix(c(1, 3, 2, 4), 2)
> scale(A)
[,1] [,2]
[1,] -0.7071068 -0.7071068
[2,] 0.7071068 0.7071068
attr(,"scaled:center")
[1] 2 3
attr(,"scaled:scale")
[1] 1.414214 1.414214
It's done by column. When you used mean() you got the mean of all four numbers rather than the column means. That is not what you want if you are doing PCA calculations.
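To make the column-wise behaviour explicit, here is a small check of my own that reproduces what scale() does, using the standardization function from the question:
A <- matrix(c(1, 3, 2, 4), 2)
apply(A, 2, function(x) (x - mean(x))/sd(x))  # same numbers as scale(A)
colMeans(A)                                   # the per-column means (2 and 3) used as centers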
I have this correlation matrix
A
        [,1]    [,2]    [,3]    [,4]    [,5]
[1,] 1.00000 0.00975 0.97245 0.43887 0.02241
[2,] 0.00975 1.00000 0.15428 0.69141 0.86307
[3,] 0.97245 0.15428 1.00000 0.51472 0.12193
[4,] 0.43887 0.69141 0.51472 1.00000 0.77765
[5,] 0.02241 0.86307 0.12193 0.77765 1.00000
And I need to get the eigenvalues, eigenvectors and loadings in R.
When I use the princomp(A, cor=TRUE) function I get the variances (eigenvalues),
but when I use the eigen(A) function I get the eigenvalues and eigenvectors, and the eigenvalues in this case are different from the ones princomp() gives.
Which function is the right one for getting the eigenvalues?
I believe you are referring to a PCA when you talk of eigenvalues, eigenvectors and loadings. princomp with cor=TRUE is essentially doing the following:
###Step1
#correlation matrix
Acs <- scale(A, center=TRUE, scale=TRUE)
COR <- (t(Acs) %*% Acs) / (nrow(Acs)-1)
COR ; cor(Acs) # equal
###STEP 2
# Decompose matrix using eigen() to derive PC loadings
E <- eigen(COR)
E$vectors # loadings
E$values # eigen values
###Step 3
# Project data on loadings to derive new coordinates (principal components)
B <- Acs %*% E$vectors
eigen(M) gives you the correct eigenvalues and eigenvectors of M.
princomp() is to be handed the data matrix - you are mistakenly feeding it the correlation matrix!
princomp(A) will treat A as the data, compute a correlation matrix from it, and then take that matrix's eigenvectors and eigenvalues. So the eigenvalues of A itself (even if A held the data, as it is supposed to) are not just irrelevant; they are of course different from what princomp() comes up with in the end.
For an illustration of performing a PCA in R see here: http://www.joyofdata.de/blog/illustration-of-principal-component-analysis-pca/
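A quick sketch of that point, using simulated data rather than the correlation matrix from the question (my own illustration):
set.seed(1)
X <- matrix(rnorm(100 * 5), ncol = 5)  # pretend data: 100 observations, 5 variables
eigen(cor(X))$values                   # eigenvalues of the data's correlation matrix
princomp(X, cor = TRUE)$sdev^2         # the variances princomp reports -- these should agree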
Is there a function that can convert a covariance matrix built using log-returns into a covariance matrix based on simple arithmetic returns?
Motivation: We'd like to use a mean-variance utility function where expected returns and variances are specified in arithmetic terms. However, estimating returns and covariances is often performed with log-returns because of the additivity property of log returns, and we assume asset prices follow a lognormal stochastic process.
Meucci describes a process for generating an arithmetic-returns-based covariance matrix for a generic/arbitrary distribution of lognormal returns on page 5 of the Appendix.
Here's my translation of the formulae:
linreturn <- function(mu, Sigma) {
  ## mean of arithmetic returns: exp(mu_i + Sigma_ii/2) - 1
  m <- exp(mu + diag(Sigma)/2) - 1
  x1 <- outer(mu, mu, "+")                      # mu_i + mu_j
  x2 <- outer(diag(Sigma), diag(Sigma), "+")/2  # (Sigma_ii + Sigma_jj)/2
  ## covariance of arithmetic returns
  S <- exp(x1 + x2) * (exp(Sigma) - 1)
  list(mean = m, vcov = S)
}
edit: fixed -1 issue based on comments.
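For reference, these are the standard moments of a multivariate lognormal, which is what the function above computes: if the log-returns $r = \log(1+R)$ are $N(\mu, \Sigma)$, then
$$E[R_i] = e^{\mu_i + \Sigma_{ii}/2} - 1, \qquad \operatorname{Cov}(R_i, R_j) = e^{\mu_i + \mu_j + (\Sigma_{ii} + \Sigma_{jj})/2}\left(e^{\Sigma_{ij}} - 1\right).$$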
Try an example:
m1 <- c(1,2)
S1 <- matrix(c(1,0.2,0.2,1),nrow=2)
Generate multivariate log-normal returns:
set.seed(1001)
r1 <- exp(MASS::mvrnorm(200000,mu=m1,Sigma=S1))-1
colMeans(r1)
## [1] 3.485976 11.214211
var(r1)
## [,1] [,2]
## [1,] 34.4021 12.4062
## [2,] 12.4062 263.7382
Compare with expected results from formulae:
linreturn(m1,S1)
## $mean
## [1] 3.481689 11.182494
## $vcov
## [,1] [,2]
## [1,] 34.51261 12.08818
## [2,] 12.08818 255.01563