Can I generate bivariate normal random variables with correlation 1 using Cholesky factorization? - r

Is it possible to set a correlation of 1 using the Cholesky decomposition technique?
set.seed(88)
mu<- 0
sigma<-1
x<-rnorm(10000, mu, sigma)
y<-rnorm(10000, mu, sigma)
MAT<-cbind(x,y)
cor(MAT[,1],MAT[,2])
#this doesn't work because 1 makes it NOT positive-definite. any number 0 to .99 works
correlationMAT<- matrix(1,nrow = 2,ncol = 2)
U<-chol(correlationMAT)
newMAT<- MAT %*% U
cor(newMAT[,1], newMAT[,2]) #.....but I want to make this cor = 1
Any ideas?

Actually you can, by using pivoted Cholesky factorization.
correlationMAT<- matrix(1,nrow = 2,ncol = 2)
U <- chol(correlationMAT, pivot = TRUE)
#Warning message:
#In chol.default(correlationMAT, pivot = TRUE) :
# the matrix is either rank-deficient or indefinite
U
# [,1] [,2]
#[1,] 1 1
#[2,] 0 0
#attr(,"pivot")
#[1] 1 2
#attr(,"rank")
#[1] 1
Note that U has two identical columns. If we compute MAT %*% U, we replicate MAT[, 1] twice, which means the second random variable will be identical to the first one.
newMAT<- MAT %*% U
cor(newMAT)
# [,1] [,2]
#[1,] 1 1
#[2,] 1 1
You don't need to worry that the two random variables are identical. Remember, this only means they are identical after standardization (to N(0, 1)). You can rescale them by different standard deviations, then shift them by different means to make them distinct.
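For example, a minimal sketch (the scale and shift values below are arbitrary, purely for illustration):
z1 <- 2 + 3 * newMAT[, 1]      # scale by 3, shift by 2
z2 <- -1 + 0.5 * newMAT[, 2]   # scale by 0.5, shift by -1
cor(z1, z2)                    # still 1, but the two variables now differ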
Pivoted Cholesky factorization is very useful. My answer to the post "Generate multivariate normal r.v.'s with rank-deficient covariance via Pivoted Cholesky Factorization" gives a more comprehensive picture.

Related

OLS estimator in R

I'm trying to compute OLS estimators manually in R for given vectors and matrices, but when I apply the formula beta = (x'x)^-1 (x'y), R tells me that there is a dimension issue, and I can't figure out why.
My code is
nr = 100
nc = 1000
x=matrix(rnorm(nr * nc, mean=1, sd=1), nrow = nr)
epsilon=matrix(rnorm(nr * nc, mean=0, sd=1), nrow = nr)
k=c(1,2,4,8)
eta1=((epsilon^1-mean(epsilon^1))/(mean(epsilon^(1*2))-mean(epsilon^1)^2)^(1/2))
eta2=((epsilon^2-mean(epsilon^2))/(mean(epsilon^(2*2))-mean(epsilon^2)^2)^(1/2))
eta4=((epsilon^4-mean(epsilon^4))/(mean(epsilon^(4*2))-mean(epsilon^4)^2)^(1/2))
eta8=((epsilon^8-mean(epsilon^8))/(mean(epsilon^(8*2))-mean(epsilon^8)^2)^(1/2))
y1=x+eta1
y2=x+eta2
y4=x+eta4
y8=x+eta8
beta1=inv(t(x)*x)*(t(x)*y1)
beta2=inv(t(x)*x)*(t(x)*y2)
beta4=inv(t(x)*x)*(t(x)*y4)
beta8=inv(t(x)*x)*(t(x)*y8)
Also, I feel that there should be a way to loop through the values of k to get this automated, instead of doing each eta by hand. So, a bit of help in this area would also be appreciated.
The output I'm looking for is a vector of beta for each of the different values of k.
You have several issues. First, y and epsilon should be n x 1 matrices, but you have n x m matrices instead. Second, you should use matrix multiplication, which is %*% in R, i.e. t(x) %*% y1; you are using element-wise multiplication (*) instead.
For the sake of simplicity, let's create a matrix with 5 columns. My approach is to create a dependent variable that is related to the columns of x (the independent variables, or the feature matrix in machine-learning terminology).
nr = 100
nc = 5
x=matrix(rnorm(nr * nc, mean=1, sd=1), nrow = nr)
epsilon=matrix(rnorm(nr, mean=0, sd=1), nrow = nr) # it should be nx1
k=c(1,2,4,8)
eta1=((epsilon^1-mean(epsilon^1))/(mean(epsilon^(1*2))-mean(epsilon^1)^2)^(1/2))
eta2=((epsilon^2-mean(epsilon^2))/(mean(epsilon^(2*2))-mean(epsilon^2)^2)^(1/2))
eta4=((epsilon^4-mean(epsilon^4))/(mean(epsilon^(4*2))-mean(epsilon^4)^2)^(1/2))
eta8=((epsilon^8-mean(epsilon^8))/(mean(epsilon^(8*2))-mean(epsilon^8)^2)^(1/2))
To check the output, we should create the y values wisely. So let's define some betas and create y values with respect to them. At the end, we can compare the output with the inputs we defined. Note that you should have 5 betas for 5 columns.
# made up betas
beta1_real <- 1:5
beta2_real <- -4:0
beta4_real <- 7:11
beta8_real <- seq(0.25,1.25,0.25)
To create the y values,
y1= 10 + x %*% matrix(beta1_real) + eta1
y2= 20 + x %*% matrix(beta2_real) + eta2
y4= 30 + x %*% matrix(beta4_real) + eta4
y8= 40 + x %*% matrix(beta8_real) + eta8
Here I also added a constant term to each y. To recover the constant term in the estimates, we should prepend a column of ones to our x matrix:
x <- cbind(matrix(1,nrow = nr),x)
The rest is almost the same as yours. The only differences are that I used solve instead of inv and used matrix multiplication (%*%):
beta1=solve(t(x)%*%x)%*%(t(x)%*%y1)
beta2=solve(t(x)%*%x)%*%(t(x)%*%y2)
beta4=solve(t(x)%*%x)%*%(t(x)%*%y4)
beta8=solve(t(x)%*%x)%*%(t(x)%*%y8)
If we compare the outputs,
beta1_real was,
# [1] 1 2 3 4 5
and the output of beta1 is,
# [,1]
# [1,] 10.0049631
# [2,] 0.9632124
# [3,] 1.8987402
# [4,] 2.9816673
# [5,] 4.2111817
# [6,] 4.9529084
The results are similar. The 10 at the beginning is the constant term I added. The differences stem from the error term (eta) applied.
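The looping part of the question wasn't addressed above. A minimal sketch, reusing the objects defined in this answer (the list names are my own):
# build all etas in one go instead of writing eta1, eta2, eta4, eta8 by hand
etas <- lapply(k, function(kk)
  (epsilon^kk - mean(epsilon^kk)) /
    sqrt(mean(epsilon^(2 * kk)) - mean(epsilon^kk)^2))
names(etas) <- paste0("eta", k)   # etas$eta1 is the same as eta1 above
# likewise, loop over the four regressions instead of writing each beta by hand
ys <- list(y1, y2, y4, y8)
beta_hat <- lapply(ys, function(y) solve(t(x) %*% x) %*% (t(x) %*% y))
names(beta_hat) <- paste0("k=", k)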

cor function with NA values due to 0 variance

Beginner R user here. I am using the cor function to get Kendall's tau-b rank correlation coefficient between 2 columns of a dataframe. Examples of such columns are as follows:
A B
1 1
1 2
1 3
when I use cor(d,method="kendall")
The result is NA for the correlation between A and B. Shouldn't it be 0? And if not, is there a way that I can replace this NA result with 0 using a parameter in the cor function?
Consider what would happen if we slightly perturb the constant column. We get vastly different solutions depending on the particular perturbation used. In fact we can get any correlation we like with different perturbations. As a result it really makes no sense to use any particular value for the correlation and it would be best left as NA.
x <- c(1, 1, 1)
y <- 1:3
cor(x + (1:3) * 1e-10, y, method = "spearman")
## [1] 1
cor(x - (1:3) * 1e-10, y, method = "spearman")
## [1] -1
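That said, if you still want 0 instead of NA, there is (as far as I know) no cor() argument for this, but you can overwrite the NA entries afterwards; a minimal sketch:
r <- cor(d, method = "kendall")   # d is the data frame from the question
r[is.na(r)] <- 0                  # replace undefined correlations with 0
r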

QR decomposition in R - Forcing a Positive Diagonal

I have a problem with the qr function in R. My input matrix is positive definite, so I expected the R factor returned by qr() to be upper triangular with an all-positive diagonal. However, I found that there are some negative values on the diagonal. How can I address this problem?
Suppose we have a matrix y that looks like this:
[1,] 0.07018171 -0.07249188 -0.01952050
[2,] -0.09617788 0.52664014 -0.02930578
[3,] -0.01962719 -0.09521439 0.81718699
It is positive-definite:
> eigen(y)$values
[1] 0.82631283 0.53350907 0.05418694
When I apply qr() in R, it gives me
Q =
[,1] [,2] [,3]
[1,] -0.5816076 -0.6157887 0.5315420
[2,] 0.7970423 -0.5620336 0.2210021
[3,] 0.1626538 0.5521980 0.8176926
and R =
[1,] -0.1206685 0.4464293 0.1209139
[2,] 0.0000000 -0.3039269 0.4797403
[3,] 0.0000000 0.0000000 0.6513551
where the diagonal is not positive.
Many thanks.
Here is the matrix:
structure(c(0.07018171, -0.09617788, -0.01962719, -0.07249188,
0.52664014, -0.09521439, -0.0195205, -0.02930578, 0.81718699), .Dim = c(3L,
3L))
I can simply multiply R by a diagonal matrix built from sign(diag(R)) to force the diagonal entries to be positive, and then adjust the corresponding columns of Q. Q is then still an orthogonal matrix.
Sample code
qr.decom <- qr(A)          # A is the matrix to decompose (y in the question)
Q <- qr.Q(qr.decom)
R <- qr.R(qr.decom)
sgn <- sign(diag(R))       # signs of the diagonal of R
R.new <- diag(sgn) %*% R   # flip rows of R so its diagonal becomes positive
Q.new <- Q %*% diag(sgn)   # flip matching columns of Q to compensate
Then R.new has positive diagonal elements.
We can try it on the example matrix from the question.
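For instance, a quick sketch applying the same steps to the structure() matrix given above:
y <- structure(c(0.07018171, -0.09617788, -0.01962719, -0.07249188,
                 0.52664014, -0.09521439, -0.0195205, -0.02930578, 0.81718699),
               .Dim = c(3L, 3L))
qr.decom <- qr(y)
Q <- qr.Q(qr.decom)
R <- qr.R(qr.decom)
sgn <- sign(diag(R))
R.new <- diag(sgn) %*% R
Q.new <- Q %*% diag(sgn)
diag(R.new)        # now all positive
Q.new %*% R.new    # reproduces y up to rounding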
I think you can also use pracma::gramSchmidt. This function automatically returns a Gram-Schmidt decomposition with positive entries on the diagonal. Hope it helps.
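A minimal sketch of that (assuming the pracma package is installed; gramSchmidt returns a list with Q and R components):
library(pracma)
gs <- gramSchmidt(y)   # y is the matrix from the question
diag(gs$R)             # positive by construction
gs$Q %*% gs$R          # reproduces y up to rounding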

Using R to honor correlations for LatinHypercube / Monte Carlo trials

I am currently using python and RPY to use the functionality inside R.
How do I use R library to generate Monte carlo samples that honor the correlation between 2 variables..
e.g
if variable A and B have a correlation of 85% (0.85), i need to generate all the monte carlo samples honoring that correlation between A & B.
Would appreciate if anyone can share ideas / snippets
Thanks
The rank correlation method of Iman and Conover seems to be a widely used and general approach to producing correlated Monte Carlo samples for computer-based experiments, sensitivity analysis, etc. Unfortunately, I have only just come across this and don't have access to the PDF, so I don't know how the authors actually implement their method, but you could follow this up.
Their method is more general because each variable can come from a different distribution, unlike the multivariate normal of @Dirk's answer.
Update: I found an R implementation of the above approach in package mc2d, in particular you want the cornode() function.
Here is an example taken from ?cornode
> require(mc2d)
> x1 <- rnorm(1000)
> x2 <- rnorm(1000)
> x3 <- rnorm(1000)
> mat <- cbind(x1, x2, x3)
> ## Target
> (corr <- matrix(c(1, 0.5, 0.2, 0.5, 1, 0.2, 0.2, 0.2, 1), ncol=3))
[,1] [,2] [,3]
[1,] 1.0 0.5 0.2
[2,] 0.5 1.0 0.2
[3,] 0.2 0.2 1.0
> ## Before
> cor(mat, method="spearman")
x1 x2 x3
x1 1.00000000 0.01218894 -0.02203357
x2 0.01218894 1.00000000 0.02298695
x3 -0.02203357 0.02298695 1.00000000
> matc <- cornode(mat, target=corr, result=TRUE)
Spearman Rank Correlation Post Function
x1 x2 x3
x1 1.0000000 0.4515535 0.1739153
x2 0.4515535 1.0000000 0.1646381
x3 0.1739153 0.1646381 1.0000000
The rank correlations in matc are now very close to the target correlations of corr.
The idea with this is that you draw the samples separately from the distribution for each variable, and then use the Iman & Conover approach to make the samples as close to the target correlations as possible.
That is a FAQ. Here is one answer using a recommended package:
R> library(MASS)
R> example(mvrnorm)
mvrnrmR> Sigma <- matrix(c(10,3,3,2),2,2)
mvrnrmR> Sigma
[,1] [,2]
[1,] 10 3
[2,] 3 2
mvrnrmR> var(mvrnorm(n=1000, rep(0, 2), Sigma))
[,1] [,2]
[1,] 8.82287 2.63987
[2,] 2.63987 1.93637
mvrnrmR> var(mvrnorm(n=1000, rep(0, 2), Sigma, empirical = TRUE))
[,1] [,2]
[1,] 10 3
[2,] 3 2
R>
Switching between correlation and covariance is straightforward (hint: outer product of vector of standard deviations).
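A minimal sketch of that hint (the correlation and standard deviations below are made up for illustration):
Rmat  <- matrix(c(1, 0.85, 0.85, 1), 2, 2)   # target correlation matrix
sds   <- c(2, 0.5)                           # standard deviations of A and B
Sigma <- outer(sds, sds) * Rmat              # covariance = diag(sds) %*% Rmat %*% diag(sds)
cov2cor(Sigma)                               # converts back to the correlation matrix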
This question was not tagged as python, but based on your comment it looks like you might be looking for a Python solution as well. The most basic Python implementation of Iman Conover that I can concoct looks like the following (actually numpy):
import numpy as np

def makeCorrelated(y, corMatrix):
    # draw a multivariate-normal "driver" sample with the target correlation
    c = np.random.multivariate_normal(np.zeros(np.size(y, 0)), corMatrix, np.size(y, 1))
    # rank each driver column; one row of ranks per variable
    key = np.argsort(np.argsort(c, axis=0), axis=0).T
    # reorder each sorted marginal sample according to the driver's ranks
    out = np.array([np.take(np.sort(row), k) for row, k in zip(y, key)])
    return out
where y is an array of samples from the marginal distributions and corMatrix is a positive semi-definite, symmetric correlation matrix. Given that this function uses multivariate_normal() for the c matrix, you can tell it uses an implied Gaussian copula. To use different copula structures you'll need different drivers for the c matrix.

Determining if a matrix is diagonalizable in the R Programming Language

I have a matrix and I would like to know if it is diagonalizable. How do I do this in the R programming language?
If you have a given matrix, m, then one way is to take the matrix of eigenvectors times the diagonal matrix of eigenvalues times the inverse of the eigenvector matrix. That should give us back the original matrix. In R that looks like:
m <- matrix( c(1:16), nrow = 4)
p <- eigen(m)$vectors
d <- diag(eigen(m)$values)
p %*% d %*% solve(p)
m
So in that example, p %*% d %*% solve(p) should be the same as m.
You can implement the full algorithm to check if the matrix reduces to a Jordan form or a diagonal one (see e.g., this document). Or you can take the quick and dirty way: for an n-dimensional square matrix, use eigen(M)$values and check that they are n distinct values. For random matrices this always suffices: degeneracy has probability 0.
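A minimal sketch of that quick check (the function name and tolerance are my own; eigenvalues may be complex, so compare moduli of pairwise differences):
distinct_eigen <- function(M, tol = 1e-8) {
  vals <- eigen(M)$values
  gaps <- abs(outer(vals, vals, "-"))     # pairwise distances between eigenvalues
  all(gaps[upper.tri(gaps)] > tol)        # TRUE if all n eigenvalues are distinct
}
distinct_eigen(matrix(rnorm(16), 4, 4))   # almost surely TRUE for a random matrix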
P.S.: based on a simple observation by JD Long below, I recalled that a necessary and sufficient condition for diagonalizability is that the eigenvectors span the original space. To check this, just verify that the eigenvector matrix has full rank (no zero eigenvalue). So here is the code:
diagflag <- function(m, tol = 1e-10) {
  x <- eigen(m)$vectors
  y <- min(abs(eigen(x)$values))
  return(y > tol)
}
# nondiagonalizable matrix
m1 = matrix(c(1,1,0,1),nrow=2)
# diagonalizable matrix
m2 = matrix(c(-1,1,0,1),nrow=2)
> m1
[,1] [,2]
[1,] 1 0
[2,] 1 1
> diagflag(m1)
[1] FALSE
> m2
[,1] [,2]
[1,] -1 0
[2,] 1 1
> diagflag(m2)
[1] TRUE
You might want to check out this page for some basic discussion and code. You'll need to search for "diagonalized" which is where the relevant portion begins.
All matrices that are symmetric across the diagonal are diagonalizable by orthogonal matrices. In fact, if you want diagonalizability only by orthogonal matrix conjugation, i.e. D = PAP' where P' just stands for the transpose of P, then symmetry across the diagonal, i.e. A_{ij} = A_{ji}, is exactly equivalent to diagonalizability.
If the matrix is not symmetric, then diagonalizability means not D = PAP' but merely D = PAP^{-1}, and we do not necessarily have P' = P^{-1}, which is the condition of orthogonality.
You need to do something more substantial, and there is probably a better way, but you could just compute the eigenvectors and check that their rank equals the total dimension.
See this discussion for a more detailed explanation.
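A minimal sketch of that rank check (the name and tolerance are my own; the smallest singular value of the eigenvector matrix serves as a numerically robust stand-in for the rank test):
is_diagonalizable <- function(M, tol = 1e-10) {
  # diagonalizable iff the eigenvectors span the space,
  # i.e. the eigenvector matrix is (numerically) nonsingular
  min(svd(eigen(M)$vectors)$d) > tol
}
is_diagonalizable(matrix(c(1, 1, 0, 1), nrow = 2))   # FALSE (Jordan block)
is_diagonalizable(matrix(c(-1, 1, 0, 1), nrow = 2))  # TRUE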
