I have this correlation matrix
A
        [,1]    [,2]    [,3]    [,4]    [,5]
[1,] 1.00000 0.00975 0.97245 0.43887 0.02241
[2,] 0.00975 1.00000 0.15428 0.69141 0.86307
[3,] 0.97245 0.15428 1.00000 0.51472 0.12193
[4,] 0.43887 0.69141 0.51472 1.00000 0.77765
[5,] 0.02241 0.86307 0.12193 0.77765 1.00000
And I need to get the eigenvalues, eigenvectors and loadings in R.
When I use princomp(A, cor=TRUE) I get the variances (eigenvalues), and when I use eigen(A) I get both the eigenvalues and eigenvectors, but the eigenvalues in this case differ from what princomp gives.
Which function is the right one to get the eigenvalues?
I believe you are referring to a principal component analysis (PCA) when you talk of eigenvalues, eigenvectors and loadings. princomp is essentially doing the following (when cor=TRUE):
### Step 1
# scale the data and compute the correlation matrix
Acs <- scale(A, center = TRUE, scale = TRUE)
COR <- (t(Acs) %*% Acs) / (nrow(Acs) - 1)
COR ; cor(Acs) # equal
### Step 2
# decompose the correlation matrix using eigen() to derive the PC loadings
E <- eigen(COR)
E$vectors # loadings (eigenvectors)
E$values  # eigenvalues
### Step 3
# project the data onto the loadings to derive the new coordinates (principal components)
B <- Acs %*% E$vectors
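To see that these steps really are what princomp() reports, here is a minimal sketch of a check (X is a hypothetical data matrix simulated only for illustration):
set.seed(1)
X  <- matrix(rnorm(100 * 5), ncol = 5)               # hypothetical data matrix
pc <- princomp(X, cor = TRUE)
all.equal(unname(pc$sdev^2), eigen(cor(X))$values)   # variances equal the eigenvalues
unclass(pc$loadings)                                 # loadings equal the eigenvectors (up to sign)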
eigen(M) gives you the correct eigenvalues and eigenvectors of M.
princomp() is to be handed the data matrix - you are mistakenly feeding it the correlation matrix!
princomp(A, cor=TRUE) will treat A as the data, compute a correlation matrix from it, and then take that matrix's eigenvectors and eigenvalues. So the eigenvalues of A itself (in case A actually holds the data, as it is supposed to) are not just irrelevant, they are of course different from what princomp() comes up with at the end.
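If, on the other hand, you really only have the correlation matrix (as in the question), eigen(A) is the right call; alternatively you can hand the matrix to princomp() through its covmat argument rather than as data, for example:
pc <- princomp(covmat = A)   # A is treated as a covariance/correlation matrix
pc$sdev^2                    # now agrees with eigen(A)$values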
For an illustration of performing a PCA in R see here: http://www.joyofdata.de/blog/illustration-of-principal-component-analysis-pca/
I was working on an exercise about the steady-state vector of a Markov chain. I was able to calculate it manually, but I'm lost on how to calculate the steady state using R. Is there a function or library I can use for this? Any help is appreciated. This is the transition matrix I'm using to calculate the steady state:
P.data <- c(0.95, 0.03, 0.05, 0.97)
P <- matrix(P.data, nrow = 2, ncol = 2, byrow = TRUE)
P
The steady state is a left eigenvector with corresponding eigenvalue 1. To calculate the eigenvectors/eigenvalues in R, there is the function eigen, but it calculates the right eigenvectors, so you have to transpose the Markov matrix.
> P.data <- c(0.95,0.03,0.05,0.97)
> P <- matrix(P.data, nrow=2, ncol=2, byrow=TRUE)
> eigen(t(P))
eigen() decomposition
$values
[1] 1.00 0.92
$vectors
[,1] [,2]
[1,] -0.7071068 -0.8574929
[2,] -0.7071068 0.5144958
Eigenvectors are only defined up to a multiplicative factor, so rescale the eigenvector belonging to eigenvalue 1 so that its coefficients sum to 1; here that gives the steady state (0.5, 0.5).
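A minimal sketch of that rescaling, continuing from the output above:
v <- eigen(t(P))$vectors[, 1]   # eigenvector belonging to eigenvalue 1 (listed first here)
v / sum(v)                      # steady-state distribution: 0.5 0.5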
See this post and its answers for more details.
I'm trying to compute the -0.5 power of the following matrix:
S <- matrix(c(0.088150041, 0.001017491 , 0.001017491, 0.084634294),nrow=2)
In Matlab, the result is (S^(-0.5)):
S^(-0.5)
ans =
3.3683 -0.0200
-0.0200 3.4376
> library(expm)
> solve(sqrtm(S))
[,1] [,2]
[1,] 3.36830328 -0.02004191
[2,] -0.02004191 3.43755429
After some time, the following solution came up:
"%^%" <- function(S, power)
with(eigen(S), vectors %*% (values^power * t(vectors)))
S%^%(-0.5)
The result gives the expected answer:
[,1] [,2]
[1,] 3.36830328 -0.02004191
[2,] -0.02004191 3.43755430
The square root of a matrix is not necessarily unique (most real numbers have two square roots, so this is not unique to matrices). There are multiple algorithms for generating a square root of a matrix. Others have shown the approach using expm and eigenvalues, but the Cholesky decomposition is another possibility (see the chol function).
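For completeness, a hedged sketch of the Cholesky route; note that it yields a different (triangular, non-symmetric) square root, so its inverse will not coincide entry-by-entry with the symmetric solve(sqrtm(S)) result above:
R <- chol(S)    # upper-triangular factor, t(R) %*% R reproduces S
crossprod(R)    # equals S
solve(R)        # a triangular "inverse square root": solve(R) %*% t(solve(R)) equals solve(S)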
To extend this answer beyond square roots, the following function exp.mat() generalizes the Moore–Penrose pseudoinverse of a matrix and allows one to calculate arbitrary matrix exponents via a singular value decomposition (SVD) (it even works for non-square matrices, although I don't know when one would need that).
exp.mat() function:
# The exp.mat function can calculate the pseudoinverse of a matrix (EXP = -1)
# and other exponents of matrices, such as square roots (EXP = 0.5) or the
# square root of its inverse (EXP = -0.5).
# The function arguments are a matrix (MAT), an exponent (EXP), and a tolerance
# level for non-zero singular values.
exp.mat <- function(MAT, EXP, tol = NULL){
  MAT <- as.matrix(MAT)
  matdim <- dim(MAT)
  if(is.null(tol)){
    tol <- min(1e-7, .Machine$double.eps * max(matdim) * max(MAT))
  }
  if(matdim[1] >= matdim[2]){
    svd1 <- svd(MAT)
    keep <- which(svd1$d > tol)
    res <- t(svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep]))
  }
  if(matdim[1] < matdim[2]){
    svd1 <- svd(t(MAT))
    keep <- which(svd1$d > tol)
    res <- svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep])
  }
  return(res)
}
Example
S <- matrix(c(0.088150041, 0.001017491 , 0.001017491, 0.084634294),nrow=2)
exp.mat(S, -0.5)
# [,1] [,2]
#[1,] 3.36830328 -0.02004191
#[2,] -0.02004191 3.43755429
Other examples can be found here.
I am trying to check if my matrix is singular using the eigenvalues approach (i.e. if one of the eigenvalues is zero then the matrix is singular). Here is the code:
z <- matrix(c(-3,2,1,4,-9,6,3,12,5,5,9,4),nrow=4,ncol=3)
eigen(t(z)%*%z)$values
I know the eigenvalues are sorted in descending order. Can someone please let me know if there is a way to find out which eigenvalue is associated with which column of the matrix? I need to remove the collinear columns.
It might be obvious in the example above, but it is just an example, intended to save you the time of creating a new matrix.
Example:
z <- matrix(c(-3,2,1,4,-9,6,3,12,5,5,9,4),nrow=4,ncol=3)
m <- crossprod(z) ## slightly more efficient than t(z) %*% z
Computing the eigenvalues shows that the third one is (numerically) zero, so it is the third eigenvector that identifies the collinear combination:
ee <- eigen(m)
(evals <- zapsmall(ee$values))
## [1] 322.7585 124.2415 0.0000
Now examine the corresponding eigenvectors, which are listed as columns corresponding to their respective eigenvalues:
(evecs <- zapsmall(ee$vectors))
##            [,1]       [,2]       [,3]
## [1,] -0.2975496 -0.1070713  0.9486833
## [2,] -0.8926487 -0.3212138 -0.3162278
## [3,] -0.3385891  0.9409343  0.0000000
The third eigenvalue is zero; the first two elements of the third eigenvector (evecs[,3]) are non-zero, which tells you that columns 1 and 2 are collinear.
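As a quick sanity check on the z from the question, column 2 is indeed an exact multiple of column 1:
z[, 2] / z[, 1]
## [1] 3 3 3 3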
Here's a way to automate this test:
testcols <- function(ee) {
  ## split the eigenvector matrix into a list, by columns
  evecs <- split(zapsmall(ee$vectors), col(ee$vectors))
  ## for zero eigenvalues, list the non-zero eigenvector components
  mapply(function(val, vec) {
    if (val != 0) NULL else which(vec != 0)
  }, zapsmall(ee$values), evecs)
}
testcols(ee)
## [[1]]
## NULL
## [[2]]
## NULL
## [[3]]
## [1] 1 2
You can use tmp <- svd(z) to do a singular value decomposition. The singular values are then stored in the vector tmp$d (displayed below as a diagonal matrix via diag(tmp$d)); a (numerically) zero singular value again indicates rank deficiency. This also works with a non-square matrix.
> diag(tmp$d)
[,1] [,2] [,3]
[1,] 17.96548 0.00000 0.000000e+00
[2,] 0.00000 11.14637 0.000000e+00
[3,] 0.00000 0.00000 8.787239e-16
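The two approaches agree: the singular values of z are the square roots of the eigenvalues of t(z) %*% z, so a (numerically) zero singular value flags the same rank deficiency. A quick check:
sqrt(pmax(eigen(crossprod(z))$values, 0))   # pmax() guards against tiny negative round-off;
                                            # matches svd(z)$d, with the last value numerically zero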
Is there a function that can convert a covariance matrix built using log-returns into a covariance matrix based on simple arithmetic returns?
Motivation: We'd like to use a mean-variance utility function where expected returns and variance are specified in arithmetic terms. However, estimating returns and covariances is often performed with log-returns because of the additivity property of log returns, and we assume asset prices follow a lognormal stochastic process.
Meucci describes a process to generate an arithmetic-returns-based covariance matrix for a generic/arbitrary distribution of lognormal returns on page 5 of the Appendix.
Here's my translation of the formulae:
linreturn <- function(mu, Sigma) {
  ## arithmetic means: exp(mu_i + Sigma_ii/2) - 1
  m <- exp(mu + diag(Sigma)/2) - 1
  ## arithmetic covariances: exp(mu_i + mu_j + (Sigma_ii + Sigma_jj)/2) * (exp(Sigma_ij) - 1)
  x1 <- outer(mu, mu, "+")
  x2 <- outer(diag(Sigma), diag(Sigma), "+")/2
  S <- exp(x1 + x2) * (exp(Sigma) - 1)
  list(mean = m, vcov = S)
}
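For reference, these are the standard moments of a shifted multivariate lognormal: if X ~ N(mu, Sigma) and the arithmetic return is R_i = exp(X_i) - 1, then E[R_i] = exp(mu_i + Sigma_ii/2) - 1 and Cov(R_i, R_j) = E[exp(X_i + X_j)] - E[exp(X_i)] E[exp(X_j)] = exp(mu_i + mu_j + (Sigma_ii + Sigma_jj)/2) * (exp(Sigma_ij) - 1); the constant shift by -1 does not affect the covariance.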
edit: fixed -1 issue based on comments.
Try an example:
m1 <- c(1,2)
S1 <- matrix(c(1,0.2,0.2,1),nrow=2)
Generate multivariate log-normal returns:
set.seed(1001)
r1 <- exp(MASS::mvrnorm(200000,mu=m1,Sigma=S1))-1
colMeans(r1)
## [1] 3.485976 11.214211
var(r1)
## [,1] [,2]
## [1,] 34.4021 12.4062
## [2,] 12.4062 263.7382
Compare with expected results from formulae:
linreturn(m1,S1)
## $mean
## [1] 3.481689 11.182494
## $vcov
## [,1] [,2]
## [1,] 34.51261 12.08818
## [2,] 12.08818 255.01563
I have a matrix and I would like to know if it is diagonalizable. How do I do this in the R programming language?
If you have a given matrix, m, then one way is to take the matrix of eigenvectors times the diagonal matrix of eigenvalues times the inverse of the eigenvector matrix. That should give us back the original matrix. In R that looks like:
m <- matrix( c(1:16), nrow = 4)
p <- eigen(m)$vectors
d <- diag(eigen(m)$values)
p %*% d %*% solve(p)
m
So in that example, p %*% d %*% solve(p) should be the same as m.
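Rather than eyeballing the two printouts, a minimal numerical check (Re() just drops any spurious zero imaginary parts that eigen() may return for a non-symmetric matrix):
all.equal(Re(p %*% d %*% solve(p)), m + 0)   # should be TRUE; m + 0 coerces the integer matrix to double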
You can implement the full algorithm to check whether the matrix reduces to a Jordan form or a diagonal one (see e.g., this document). Or you can take the quick and dirty way: for an n-dimensional square matrix, use eigen(M)$values and check that there are n distinct values. For random matrices this almost always suffices: degeneracy has probability 0.
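A minimal sketch of that shortcut (M is just a hypothetical 2 x 2 example; n distinct eigenvalues are sufficient, though not necessary, for diagonalizability):
M <- matrix(c(-1, 1, 0, 1), nrow = 2)
length(unique(zapsmall(eigen(M)$values))) == nrow(M)   # TRUE: two distinct eigenvalues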
P.S.: based on a simple observation by JD Long below, I recalled that a necessary and sufficient condition for diagonalizability is that the eigenvectors span the original space. To check this, just verify that the eigenvector matrix has full rank (none of its eigenvalues is zero). So here is the code:
diagflag <- function(m, tol = 1e-10){
  x <- eigen(m)$vectors
  y <- min(abs(eigen(x)$values))
  return(y > tol)
}
# nondiagonalizable matrix
m1 = matrix(c(1,1,0,1),nrow=2)
# diagonalizable matrix
m2 = matrix(c(-1,1,0,1),nrow=2)
> m1
     [,1] [,2]
[1,]    1    0
[2,]    1    1
> diagflag(m1)
[1] FALSE
> m2
     [,1] [,2]
[1,]   -1    0
[2,]    1    1
> diagflag(m2)
[1] TRUE
You might want to check out this page for some basic discussion and code. You'll need to search for "diagonalized" which is where the relevant portion begins.
All matrices that are symmetric across the diagonal are diagonalizable by orthogonal matrices. In fact, if you want diagonalizability only by orthogonal matrix conjugation, i.e. D = PAP' where P' just stands for the transpose, then symmetry across the diagonal, i.e. A_{ij} = A_{ji}, is exactly equivalent to that kind of diagonalizability.
If the matrix is not symmetric, then diagonalizability means not D = PAP' but merely D = PAP^{-1}, and we do not necessarily have P' = P^{-1}, which is the condition of orthogonality.
In that case you need to do something more substantial, and there is probably a better way, but you could just compute the eigenvectors and check that their rank equals the full dimension.
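A hedged sketch combining the two observations above (A is just a hypothetical example): check symmetry first, and only fall back to the eigenvector-rank test (e.g. the diagflag() function from the earlier answer) when the matrix is not symmetric.
A <- matrix(c(2, 1, 1, 3), nrow = 2)
isSymmetric(A)   # TRUE, so A is diagonalizable (by an orthogonal matrix) with no further work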
See this discussion for a more detailed explanation.