How to obtain right eigenvectors of matrix in R?

Edit: the problem in my question was that I tried to find the matrix S from equation 8, but that equation had an error.
How can I directly obtain the right eigenvectors of a matrix in R? eigen() gives only left eigenvectors.
Really the last edit: I've made a big mess here, but this question is really important to me.
eigen() provides some matrix of eigenvectors; from the function help:
"If ‘r <- eigen(A)’, and ‘V <- r$vectors; lam <- r$values’, then
A = V Lmbd V^(-1)
(up to numerical fuzz), where Lmbd = diag(lam)"
that is, A V = V Lmbd, where V is a matrix. Now we check it:
set.seed(1)
A<-matrix(rnorm(16),4,4)
Lmbd=diag(eigen(A)$values)
V=eigen(A)$vectors
> A%*%V
[,1] [,2] [,3] [,4]
[1,] 0.0479968+0.5065111i 0.0479968-0.5065111i 0.2000725+0i 0.30290103+0i
[2,] -0.2150354+1.1746298i -0.2150354-1.1746298i -0.4751152+0i -0.76691563+0i
[3,] -0.2536875-0.2877404i -0.2536875+0.2877404i 1.3564475+0i 0.27756026+0i
[4,] 0.9537141-0.0371259i 0.9537141+0.0371259i 0.3245555+0i -0.03050335+0i
> V%*%Lmbd
[,1] [,2] [,3] [,4]
[1,] 0.0479968+0.5065111i 0.0479968-0.5065111i 0.2000725+0i 0.30290103+0i
[2,] -0.2150354+1.1746298i -0.2150354-1.1746298i -0.4751152+0i -0.76691563+0i
[3,] -0.2536875-0.2877404i -0.2536875+0.2877404i 1.3564475+0i 0.27756026+0i
[4,] 0.9537141-0.0371259i 0.9537141+0.0371259i 0.3245555+0i -0.03050335+0i
and I would like to find the matrix of right eigenvectors R.
The equation defining the matrix of left eigenvectors L is:
L A = LambdaM L
The equation defining the matrix of right eigenvectors R is:
A R = LambdaM R
but eigen() provides only the matrix V:
A V = V Lmbd
I would like to obtain the matrices R and LambdaM for a real matrix A, which may be negative-definite.

A worked example.
Default (= right eigenvectors):
m <- matrix(1:9,nrow=3)
e <- eigen(m)
e1 <- e$vectors
zapsmall((m %*% e1)/e1) ## right e'vec
## [,1] [,2] [,3]
## [1,] 16.11684 -1.116844 0
## [2,] 16.11684 -1.116844 0
## [3,] 16.11684 -1.116844 0
Left eigenvectors:
eL <- eigen(t(m))
eL1 <- eL$vectors
(We have to go to a little more effort, since we need to multiply by row vectors on the left; if we extracted just a single eigenvector, R's ignorance of row/column vector distinctions would make it "do the right thing", i.e. (eL1[,1] %*% m)/eL1[,1] just works.)
zapsmall(t(eL1) %*% m/(t(eL1)))
## [,1] [,2] [,3]
## [1,] 16.116844 16.116844 16.116844
## [2,] -1.116844 -1.116844 -1.116844
## [3,] 0.000000 0.000000 0.000000

This should work.
Given a matrix A:
lefteigen <- function(A) {
  t(eigen(t(A))$vectors)
}
This works because every left eigenvector of a matrix is the transpose of a right eigenvector of its transpose.
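As a quick check (a sketch of my own, not part of the original answer), the rows of lefteigen(A) should satisfy the defining relation L A = Lmbd L, where Lmbd holds the eigenvalues of t(A) (the same as those of A):

```r
# Verify that the rows of lefteigen(A) are left eigenvectors of A
lefteigen <- function(A) {
  t(eigen(t(A))$vectors)
}

set.seed(1)
A <- matrix(rnorm(16), 4, 4)
L <- lefteigen(A)
lam <- eigen(t(A))$values          # eigenvalues of t(A) = eigenvalues of A

# L %*% A should equal diag(lam) %*% L up to numerical fuzz
max(Mod(L %*% A - diag(lam) %*% L))  # essentially zero
```

Mod() is used so the check also works when A has complex eigenvalues.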

Related

Is there a way to generate a matrix in R with at least some negative eigenvalues?

I want to generate a matrix with at least some negative eigenvalues. I am attempting to use the spectral decomposition of a matrix to do so, but it does not guarantee at least one negative eigenvalue.
Here is a simple example that may help you construct such a matrix:
library(pracma)
N <- 3
U <- randortho(N, type = "orthonormal")
A <- diag(sample(c(-runif(1),rnorm(N-1)))) # ensure at least one negative eigenvalue
M <- U %*% A %*% t(U)
then
> M
[,1] [,2] [,3]
[1,] -0.36818879 0.02406988 0.1634275
[2,] 0.02406988 -0.72613068 -0.1872272
[3,] 0.16342748 -0.18722722 -0.3116400
To double check the eigenvalues
> eig(M)
[1] -0.1432527 -0.4484647 -0.8142421
and
> A
[,1] [,2] [,3]
[1,] -0.1432527 0.0000000 0.0000000
[2,] 0.0000000 -0.4484647 0.0000000
[3,] 0.0000000 0.0000000 -0.8142421
What if you generate a random positive definite matrix and multiply it by -1 ... ?
In this example I'll create a lower-triangular matrix with positive diagonal and multiply it by its transpose (there are lots of other ways):
set.seed(101)
m <- matrix(0,5,5)
m[lower.tri(m,diag=TRUE)] <- rnorm(15)
diag(m) <- abs(diag(m))
m2 <- m %*% t(m)
(If you only want it semidefinite you just need to make sure the diagonal is non-negative ...)
Is it pos def?
v1 <- eigen(m2)$values
## [1] 5.976142640 1.908831945 0.904991040 0.037025982 0.002181558
all(v1>0) ## TRUE
Check that evals(-m2) == -evals(m2) ...
v2 <- eigen(-m2)$values
all(v2<0) ## TRUE
all.equal(sort(v1),-v2) ## TRUE
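As one more of the "lots of other ways" (a sketch of my own): take minus the crossproduct of a random matrix, which is negative semidefinite by construction, and almost surely strictly negative definite when the entries come from a continuous distribution:

```r
set.seed(101)
mm <- matrix(rnorm(25), 5, 5)
negdef <- -crossprod(mm)    # -(t(mm) %*% mm) is negative (semi)definite
eigen(negdef)$values        # all negative (up to numerical fuzz)
```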

How can i generate joint probabilities matrix with R?

I am trying to generate a matrix of joint probabilities. It's a symmetric matrix. The main diagonal elements are interpreted as probabilities p(A_i) that a binary variable A_i equals 1. The off-diagonal elements are the probabilities p(A_i A_j) that both A_i and A_j are 1. This matrix should satisfy the following conditions:
0 ≤ p(A_i) ≤ 1
max(0, p(A_i) + p(A_j) − 1) ≤ p(A_i A_j) ≤ min(p(A_i), p(A_j)), for i ≠ j
p(A_i) + p(A_j) + p(A_k) − p(A_i A_j) − p(A_i A_k) − p(A_j A_k) ≤ 1, for i ≠ j, i ≠ k, j ≠ k
These conditions are checked with check.commonprob.
I built a function to generate this matrix respecting these conditions:
# First I need another function to make the matrix symmetric
makeSymm <- function(m) {
  m[upper.tri(m)] <- t(m)[upper.tri(m)]
  return(m)
}

b <- matrix(0, 10, 10)

# The function that generates joint probabilities
library(bindata)  # provides check.commonprob()
joint <- function(b, x, y, u, z, k, m) {
  repeat {
    diag(b) <- runif(k, min = x, max = y)
    b[lower.tri(b, diag = FALSE)] <- runif(m, min = u, max = z)
    b <- makeSymm(b)
    c <- check.commonprob(b)
    if (c == TRUE) break
  }
  return(b)
}
Since b is a 10*10 matrix, there are 10 diagonal elements and 45 elements in the lower triangle. I got this result:
b=joint(b,0.4,0.6,0.2,0.4,10,45)
> b
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 0.4479626 0.2128775 0.3103472 0.2342798 0.2719423 0.3114339 0.3978305
[2,] 0.2128775 0.4413829 0.2603543 0.2935595 0.2556380 0.2486850 0.2694443
[3,] 0.3103472 0.2603543 0.5170409 0.3003153 0.2651415 0.3410199 0.2321201
[4,] 0.2342798 0.2935595 0.3003153 0.5930984 0.2719581 0.3982266 0.3157343
[5,] 0.2719423 0.2556380 0.2651415 0.2719581 0.4031691 0.2157856 0.3016181
[6,] 0.3114339 0.2486850 0.3410199 0.3982266 0.2157856 0.4042654 0.2595399
[7,] 0.3978305 0.2694443 0.2321201 0.3157343 0.3016181 0.2595399 0.5195244
[8,] 0.3154185 0.3174374 0.2920965 0.3259053 0.2847335 0.3560568 0.2070868
[9,] 0.2892746 0.2510410 0.3232922 0.2970148 0.3070217 0.3445408 0.3180946
[10,] 0.2948818 0.2264481 0.3210267 0.2866854 0.3783635 0.3427585 0.2306935
[,8] [,9] [,10]
[1,] 0.3154185 0.2892746 0.2948818
[2,] 0.3174374 0.2510410 0.2264481
[3,] 0.2920965 0.3232922 0.3210267
[4,] 0.3259053 0.2970148 0.2866854
[5,] 0.2847335 0.3070217 0.3783635
[6,] 0.3560568 0.3445408 0.3427585
[7,] 0.2070868 0.3180946 0.2306935
[8,] 0.5958957 0.2710500 0.2318991
[9,] 0.2710500 0.5003779 0.2512744
[10,] 0.2318991 0.2512744 0.5004233
Up to now, everything seems good, but when I tried to generate a 100*100 matrix, I noticed that beyond about 20*20 the running time becomes very long (hours), and I never get a result because I have to stop it.
Do you have any suggestions to improve this function so I can try it on a 100*100 matrix? Also, can I stipulate the mean and the standard deviation of the joint probabilities matrix in advance? Thanks!
If you are simply trying to generate examples of such matrices and don't have any other constraints, you can do so by generating observations from a population that would be implicitly described by such a matrix and then tabulate the observed probabilities. You can start by writing a function which does the tabulation:
p.matrix <- function(A){
  n <- nrow(A)
  k <- ncol(A)
  outer(1:n, 1:n, Vectorize(function(i, j) sum(A[i, ] * A[j, ]))) / k
}
The above function can take any binary matrix and turn it into a matrix of probabilities that will satisfy check.commonprob. To get a matrix of a given size you can do something like:
prob.matrix <- function(n, p = 0.5){
  k <- max(1000, 10 * n^2)
  pop <- replicate(k, ifelse(runif(n) < p, 1, 0))
  p.matrix(pop)
}
For example:
> M <- prob.matrix(4,c(0.1,0.9,0.3,0.4))
> M
[,1] [,2] [,3] [,4]
[1,] 0.098 0.090 0.019 0.042
[2,] 0.090 0.903 0.278 0.366
[3,] 0.019 0.278 0.306 0.121
[4,] 0.042 0.366 0.121 0.410
> bindata::check.commonprob(M)
[1] TRUE
For n = 100 this takes about 30 seconds on my machine.
In this function the resulting variables are basically uncorrelated. To get correlated variables, replace the simple ifelse() call by a custom function which e.g. doesn't allow for runs of 3 or more consecutive 1's. If you want finer control on the correlations, you would need to first be clear on just what you would want them to be.
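For example, one simple way (an illustration of my own, not the only option) to induce positive correlation is to threshold a latent Gaussian with a shared factor; the tabulated matrix still satisfies the Fréchet bounds automatically because it is tabulated from actual observations:

```r
# p.matrix as defined above
p.matrix <- function(A) {
  n <- nrow(A)
  outer(1:n, 1:n, Vectorize(function(i, j) sum(A[i, ] * A[j, ]))) / ncol(A)
}

set.seed(42)
n <- 4; k <- 2000; rho <- 0.5                        # rho: weight of the shared factor
common <- matrix(rep(rnorm(k), each = n), nrow = n)  # shared factor, one value per column
noise  <- matrix(rnorm(n * k), nrow = n)             # idiosyncratic part
pop <- (sqrt(rho) * common + sqrt(1 - rho) * noise > 0) * 1
M <- p.matrix(pop)   # symmetric, and p(AiAj) <= min(p(Ai), p(Aj)) by construction
```

Larger rho gives more strongly correlated variables; rho = 0 recovers the uncorrelated case.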

Computing the pseudo inverse of a matrix in R

I am trying to compute the pseudo inverse of a matrix, call it M, which might look the following way:
M=matrix(c(-1,-1,1,0,0,1),nrow=2,ncol=3)
What I need is the left inverse of this matrix, such that:
M_inv_l M=I
Using the MASS package, I am able to find the right inverse:
M_inv_r=ginv(M)
Where M M_inv_r=I.
Is there a way to compute the left inverse instead of the right inverse? I haven't been able to find an answer on the forum.
Thanks
A matrix of full row rank has a right inverse:
> M %*% ginv(M)
[,1] [,2]
[1,] 1.000000e+00 -2.220446e-16
[2,] 1.110223e-16 1.000000e+00
A matrix of full column rank has a left inverse:
> ginv(t(M)) %*% t(M)
[,1] [,2]
[1,] 1.000000e+00 0
[2,] -5.551115e-17 1
See the Wikipedia article on generalized inverses.
I don't think that this is possible in general - you're trying to solve 9 linear equations with only 6 unknowns. Specifically, look at the top row of your inverse:
-1* Minv[1,1] + -1*Minv[1,2] = 1 [1]
1* Minv[1,1] + 0*Minv[1,2] = 0 => Minv[1,1]=0 [2]
0* Minv[1,1] + 1*Minv[1,2] = 0 => Minv[1,2]=0 [3]
It should be clear that substituting [2] and [3] into [1] produces a contradiction.
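To see this numerically (a small check of my own, assuming the MASS package): ginv(M) is only a one-sided inverse here, so multiplying on the left of M does not give the 3x3 identity:

```r
library(MASS)
M <- matrix(c(-1, -1, 1, 0, 0, 1), nrow = 2, ncol = 3)
round(M %*% ginv(M), 10)   # 2x2 identity: ginv(M) is a right inverse of M
round(ginv(M) %*% M, 10)   # 3x3 projection onto the row space of M, NOT the identity
```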

Weighted variance-covariance matrices and lapply

I have a list prob with 50 elements. Each element is a 601x3 matrix of probabilities, each row of which represents a complete sample space (i.e., each row of each matrix sums to 1). For instance, here are the first five rows of the first element of prob:
> prob[[1]][1:5,]
[,1] [,2] [,3]
[1,] 0.6027004 0.3655563 0.03174335
[2,] 0.6013667 0.3665756 0.03205767
[3,] 0.6000306 0.3675946 0.03237481
[4,] 0.5986921 0.3686131 0.03269480
[5,] 0.5973513 0.3696311 0.03301765
Now, what I want to do is to create the following matrix for each row of each matrix/element in the list prob. Taking the first row, let a = .603, b = .366, and c = .032 (rounding to three decimal places). Then,
> w
[,1] [,2] [,3]
[1,] a*(1-a) -a*b -a*c
[2,] -b*a b*(1-b) -b*c
[3,] -c*a -c*b c*(1-c)
Such that:
> w
[,1] [,2] [,3]
[1,] 0.239391 -0.220698 -0.019296
[2,] -0.220698 0.232044 -0.011712
[3,] -0.019296 -0.011712 0.030976
I want to obtain a similar 3x3 matrix 600 more times (for the rest of the rows of this matrix) and then to repeat this entire process 49 more times for the rest of the elements of prob. The only thing I can think of is to call apply within lapply so that I am accessing each row of each matrix one-at-a-time. I'm sure that is not an elegant way to do this (not to mention I can't get it to work), but I can't think of anything else. Can anyone help me out with this? I'd also love to hear suggestions for using a different structure (e.g., is it bad to use matrices within lists?).
Running this process with lapply on a list of similarly dimensioned matrices should be very simple. If it represents a challenge, then you should post the dput(.) output for a two element list with similar matrices. The challenge is really to do the processing row by row which is illustrated here with the output being a 3x3xN array:
w <- apply(M, 1, function(rw)
  diag(rw * (1 - rw)) +
    rbind(rw * c(0,      -rw[1], -rw[1]),
          rw * c(-rw[2],  0,     -rw[2]),
          rw * c(-rw[3], -rw[3],  0))
)
w
[,1] [,2] [,3] [,4] [,5]
[1,] 0.23945263 0.23972479 0.23999388 0.24025987 0.24052272
[2,] -0.22032093 -0.22044636 -0.22056801 -0.22068575 -0.22079962
[3,] -0.01913173 -0.01927842 -0.01942588 -0.01957412 -0.01972314
[4,] -0.22032093 -0.22044636 -0.22056801 -0.22068575 -0.22079962
[5,] 0.23192489 0.23219793 0.23246881 0.23273748 0.23300395
[6,] -0.01160398 -0.01175156 -0.01190081 -0.01205173 -0.01220435
[7,] -0.01913173 -0.01927842 -0.01942588 -0.01957412 -0.01972314
[8,] -0.01160398 -0.01175156 -0.01190081 -0.01205173 -0.01220435
[9,] 0.03073571 0.03102998 0.03132668 0.03162585 0.03192748
w <- array(w, c(3,3,5) )
w
, , 1
[,1] [,2] [,3]
[1,] 0.23945263 -0.22032093 -0.01913173
[2,] -0.22032093 0.23192489 -0.01160398
[3,] -0.01913173 -0.01160398 0.03073571
, , 2
[,1] [,2] [,3]
[1,] 0.23972479 -0.22044636 -0.01927842
[2,] -0.22044636 0.23219793 -0.01175156
[3,] -0.01927842 -0.01175156 0.03102998
.... snipped remaining output
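As an aside (my observation, not from the original answer), the matrix being built for each row p is the multinomial covariance diag(p) − p pᵀ, so the per-row function can be written more compactly with tcrossprod():

```r
set.seed(1)
M <- matrix(runif(15), 5, 3)
M <- M / rowSums(M)            # rows sum to 1, like the probability matrices

# w[, , i] is diag(p) - p %*% t(p) for row i of M:
# diagonal p*(1-p), off-diagonal -p_i*p_j
w <- apply(M, 1, function(p) diag(p) - tcrossprod(p))
w <- array(w, c(3, 3, nrow(M)))
```

This is equivalent to the diag(...) + rbind(...) construction above, just with the algebra folded into one expression.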

Mystified by qr.Q(): what is an orthonormal matrix in "compact" form?

R has a qr() function, which performs QR decomposition using either LINPACK or LAPACK (in my experience, the latter is 5% faster). The main object returned is a matrix "qr" whose upper triangle contains R (i.e. R = qr[upper.tri(qr)]). So far so good. The lower triangular part of qr contains Q "in compact form". One can extract Q from the qr decomposition by using qr.Q(). I would like to find the inverse of qr.Q(). In other words, I have Q and R and would like to put them into a "qr" object. R is trivial, but Q is not. The goal is to apply qr.solve() to it, which is much faster than solve() on large systems.
Introduction
R uses the LINPACK dqrdc routine by default, or the LAPACK DGEQP3 routine when specified, for computing the QR decomposition. Both routines compute the decomposition using Householder reflections. An m x n matrix A is decomposed into an m x n economy-size orthogonal matrix (Q) and an n x n upper triangular matrix (R) as A = QR, where Q can be computed as the product of t Householder reflection matrices, with t being the lesser of m-1 and n: Q = H1H2...Ht.
Each reflection matrix Hi can be represented by a vector of length m-i+1. For example, H1 requires a length-m vector for compact storage. All but one entry of this vector is placed in the first column of the lower triangle of the input matrix (the diagonal is used by the R factor). Therefore, each reflection needs one more scalar of storage, and this is provided by an auxiliary vector (called $qraux in the result from R's qr).
The compact representation used is different between the LINPACK and LAPACK routines.
The LINPACK Way
A Householder reflection is computed as Hi = I - viviT/pi, where I is the identity matrix, pi is the corresponding entry in $qraux, and vi is as follows:
vi[1..i-1] = 0,
vi[i] = pi
vi[i+1:m] = A[i+1..m, i] (i.e., a column of the lower triangle of A after calling qr)
LINPACK Example
Let's work through the example from the QR decomposition article at Wikipedia in R.
The matrix being decomposed is
> A <- matrix(c(12, 6, -4, -51, 167, 24, 4, -68, -41), nrow=3)
> A
[,1] [,2] [,3]
[1,] 12 -51 4
[2,] 6 167 -68
[3,] -4 24 -41
We do the decomposition, and the most relevant portions of the result are shown below:
> Aqr = qr(A)
> Aqr
$qr
[,1] [,2] [,3]
[1,] -14.0000000 -21.0000000 14
[2,] 0.4285714 -175.0000000 70
[3,] -0.2857143 0.1107692 -35
[snip...]
$qraux
[1] 1.857143 1.993846 35.000000
[snip...]
This decomposition was done (under the covers) by computing two Householder reflections and multiplying them by A to get R. We will now recreate the reflections from the information in $qr.
> p = Aqr$qraux # for convenience
> v1 <- matrix(c(p[1], Aqr$qr[2:3,1]))
> v1
[,1]
[1,] 1.8571429
[2,] 0.4285714
[3,] -0.2857143
> v2 <- matrix(c(0, p[2], Aqr$qr[3,2]))
> v2
[,1]
[1,] 0.0000000
[2,] 1.9938462
[3,] 0.1107692
> I = diag(3) # identity matrix
> H1 = I - v1 %*% t(v1)/p[1] # I - v1*v1^T/p[1]
> H2 = I - v2 %*% t(v2)/p[2] # I - v2*v2^T/p[2]
> Q = H1 %*% H2
> Q
[,1] [,2] [,3]
[1,] -0.8571429 0.3942857 0.33142857
[2,] -0.4285714 -0.9028571 -0.03428571
[3,] 0.2857143 -0.1714286 0.94285714
Now let's verify the Q computed above is correct:
> qr.Q(Aqr)
[,1] [,2] [,3]
[1,] -0.8571429 0.3942857 0.33142857
[2,] -0.4285714 -0.9028571 -0.03428571
[3,] 0.2857143 -0.1714286 0.94285714
Looks good! We can also verify QR is equal to A.
> R = qr.R(Aqr) # extract R from Aqr$qr
> Q %*% R
[,1] [,2] [,3]
[1,] 12 -51 4
[2,] 6 167 -68
[3,] -4 24 -41
The LAPACK Way
A Householder reflection is computed as Hi = I - piviviT, where I is the identity matrix, pi is the corresponding entry in $qraux, and vi is as follows:
vi[1..i-1] = 0,
vi[i] = 1
vi[i+1:m] = A[i+1..m, i] (i.e., a column of the lower triangle of A after calling qr)
There is another twist when using the LAPACK routine in R: column pivoting is used, so the decomposition is solving a different, related problem: AP = QR, where P is a permutation matrix.
LAPACK Example
This section does the same example as before.
> A <- matrix(c(12, 6, -4, -51, 167, 24, 4, -68, -41), nrow=3)
> Bqr = qr(A, LAPACK=TRUE)
> Bqr
$qr
[,1] [,2] [,3]
[1,] 176.2554964 -71.1694118 1.668033
[2,] -0.7348557 35.4388886 -2.180855
[3,] -0.1056080 0.6859203 -13.728129
[snip...]
$qraux
[1] 1.289353 1.360094 0.000000
$pivot
[1] 2 3 1
attr(,"useLAPACK")
[1] TRUE
[snip...]
Notice the $pivot field; we will come back to that. Now we generate Q from the information in Bqr.
> p = Bqr$qraux # for convenience
> v1 = matrix(c(1, Bqr$qr[2:3,1]))
> v1
[,1]
[1,] 1.0000000
[2,] -0.7348557
[3,] -0.1056080
> v2 = matrix(c(0, 1, Bqr$qr[3,2]))
> v2
[,1]
[1,] 0.0000000
[2,] 1.0000000
[3,] 0.6859203
> H1 = I - p[1]*v1 %*% t(v1) # I - p[1]*v1*v1^T
> H2 = I - p[2]*v2 %*% t(v2) # I - p[2]*v2*v2^T
> Q = H1 %*% H2
> Q
[,1] [,2] [,3]
[1,] -0.2893527 -0.46821615 -0.8348944
[2,] 0.9474882 -0.01602261 -0.3193891
[3,] 0.1361660 -0.88346868 0.4482655
Once again, the Q computed above agrees with the R-provided Q.
> qr.Q(Bqr)
[,1] [,2] [,3]
[1,] -0.2893527 -0.46821615 -0.8348944
[2,] 0.9474882 -0.01602261 -0.3193891
[3,] 0.1361660 -0.88346868 0.4482655
Finally, let's compute QR.
> R = qr.R(Bqr)
> Q %*% R
[,1] [,2] [,3]
[1,] -51 4 12
[2,] 167 -68 6
[3,] 24 -41 -4
Notice the difference? QR is A with its columns permuted given the order in Bqr$pivot above.
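To recover A itself, one can undo the permutation by reordering the columns of QR with order(Bqr$pivot). A small sketch continuing the example above:

```r
A <- matrix(c(12, 6, -4, -51, 167, 24, 4, -68, -41), nrow = 3)
Bqr <- qr(A, LAPACK = TRUE)
QR <- qr.Q(Bqr) %*% qr.R(Bqr)

all.equal(QR, A[, Bqr$pivot])          # QR reproduces the pivoted columns
all.equal(QR[, order(Bqr$pivot)], A)   # reordering the columns recovers A
```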
I have researched this same problem the OP asks about, and I don't think it is possible. Basically the OP's question is whether, having the explicitly computed Q, one can recover H1 H2 ... Ht. I do not think this is possible without computing the QR from scratch, but I would also be very interested to know whether such a solution exists.
I have a similar issue to the OP's, but in a different context: my iterative algorithm needs to mutate the matrix A by adding columns and/or rows. The first time, the QR is computed using DGEQRF, and thus in the compact LAPACK format. After the matrix A is mutated, e.g. with new rows, I can quickly build a new set of reflectors or rotations that annihilate the non-zero elements of the lowest diagonal of my existing R and build a new R. But now I have a set of H1_old H2_old ... Hn_old and H1_new H2_new ... Hn_new (and similarly taus) which can't be merged into a single compact QR representation. The two possibilities I see (and maybe the OP has the same two) are:
Always maintain Q and R explicitly separated, whether computed the first time or after every update, at the cost of extra flops but keeping the required memory well bounded.
Stick to the compact LAPACK format, but then every time a new update comes in, keep a list of all these mini-sets of update reflectors. At the point of solving the system, one would do a big Q'*c, i.e. H1_u3*H2_u3*...*Hn_u3*H1_u2*H2_u2*...*Hn_u2*H1_u1*H2_u1...*Hn_u1*H1*H2*...*Hn*c, where ui is the QR update number. This is potentially a lot of multiplications to do and memory to keep track of, but definitely the fastest way.
The long answer from David basically explains what the compact QR format is, but not how to get to this compact format with the explicitly computed Q and R as input.
