Calculate inverse of a non-square matrix in R

I'm pretty new to the R language and I'm trying to find out how to calculate the inverse of a matrix that isn't square (non-square? irregular? I'm unsure of the correct terminology).
From my book and a quick Google search, I've found that you can use solve(a) to find the inverse of a matrix if a is square.
The matrix I have created is, from what I understand, not square:
> matX <- matrix(c(rep(1, 8), 2,3,4,0,6,4,3,7, -2,-4,3,5,7,8,9,11), nrow=8, ncol=3)
> matX
     [,1] [,2] [,3]
[1,]    1    2   -2
[2,]    1    3   -4
[3,]    1    4    3
[4,]    1    0    5
[5,]    1    6    7
[6,]    1    4    8
[7,]    1    3    9
[8,]    1    7   11
Is there a function for inverting a matrix of this shape, or will I have to do something to each element? The solve() function gives this error:
Error in solve.default(matX) : 'a' (8 x 3) must be square
The calculation I'm trying to achieve from the above matrix is: (matX'matX)^-1
Thanks in advance.

ginv: ginv in the MASS package will give the generalized inverse of a matrix. Premultiplying the original matrix by it will give the identity:
library(MASS)
inv <- ginv(matX)
# test it out
inv %*% matX
##               [,1]         [,2]          [,3]
## [1,]  1.000000e+00 6.661338e-16  4.440892e-15
## [2,] -8.326673e-17 1.000000e+00 -1.110223e-15
## [3,]  6.938894e-17 8.326673e-17  1.000000e+00
As suggested in the comments this can be displayed in a nicer way using zapsmall:
zapsmall(inv %*% matX)
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1
The inverse of matX'matX is now:
tcrossprod(inv)
##              [,1]         [,2]         [,3]
## [1,]  0.513763935 -0.104219636 -0.002371406
## [2,] -0.104219636  0.038700372 -0.007798748
## [3,] -0.002371406 -0.007798748  0.006625269
solve: However, if your aim is to calculate the inverse of matX'matX, you don't need the generalized inverse in the first place. This does not involve MASS:
solve(crossprod(matX))
##              [,1]         [,2]         [,3]
## [1,]  0.513763935 -0.104219636 -0.002371406
## [2,] -0.104219636  0.038700372 -0.007798748
## [3,] -0.002371406 -0.007798748  0.006625269
svd: The singular value decomposition can also be used and similarly does not require MASS. If matX = U D V', then (matX'matX)^-1 = V D^-2 V':
with(svd(matX), v %*% diag(1/d^2) %*% t(v))
##              [,1]         [,2]         [,3]
## [1,]  0.513763935 -0.104219636 -0.002371406
## [2,] -0.104219636  0.038700372 -0.007798748
## [3,] -0.002371406 -0.007798748  0.006625269
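As a quick sanity check (a sketch, assuming matX as defined in the question), all three routes give the same matrix up to floating-point error:
library(MASS)
inv1 <- tcrossprod(ginv(matX))                       # via the generalized inverse
inv2 <- solve(crossprod(matX))                       # via solve on matX'matX
inv3 <- with(svd(matX), v %*% diag(1/d^2) %*% t(v))  # via the SVD
all.equal(inv1, inv2)  # should be TRUE
all.equal(inv2, inv3)  # should be TRUE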

You can compute what's called the "Moore–Penrose pseudoinverse". Here's a function, exp.mat, that will do this for you. There is also an example outlining its use here.
exp.mat():
# The exp.mat function can calculate the pseudoinverse of a matrix (EXP = -1)
# and other exponents of matrices, such as square roots (EXP = 0.5) or the
# square root of its inverse (EXP = -0.5).
# The function arguments are a matrix (MAT), an exponent (EXP), and a tolerance
# level for non-zero singular values.
exp.mat <- function(MAT, EXP, tol = NULL) {
  MAT <- as.matrix(MAT)
  matdim <- dim(MAT)
  if (is.null(tol)) {
    tol <- min(1e-7, .Machine$double.eps * max(matdim) * max(MAT))
  }
  if (matdim[1] >= matdim[2]) {
    svd1 <- svd(MAT)
    keep <- which(svd1$d > tol)
    res <- t(svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep]))
  }
  if (matdim[1] < matdim[2]) {
    svd1 <- svd(t(MAT))
    keep <- which(svd1$d > tol)
    res <- svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep])
  }
  return(res)
}
example of use:
source("exp.mat.R")
X <- matrix(c(rep(1, 8), 2,3,4,0,6,4,3,7, -2,-4,3,5,7,8,9,11), nrow=8, ncol=3)
iX <- exp.mat(X, -1)
zapsmall(iX %*% X)  # results in the identity matrix
     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    1    0
[3,]    0    0    1
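As a cross-check (a sketch, assuming the MASS package is available), exp.mat(X, -1) should agree with the Moore–Penrose inverse from MASS::ginv:
library(MASS)
all.equal(exp.mat(X, -1), ginv(X))  # should be TRUE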

Related

Schur decomposition of a complex matrix

I don't understand why the Schur decomposition doesn't work on a complex matrix.
My program for testing is:
library(Matrix)
M <- matrix(data=c(2-1i,0+1i,3-1i,0+1i,1+0i,0+1i,1+0i,1+1i,2+0i), nrow=3, ncol=3, byrow=FALSE)
M
S <- Schur(M)
S
(S$Q) %*% (S$T) %*% (solve(S$Q))
Results are:
> M
     [,1] [,2] [,3]
[1,] 2-1i 0+1i 1+0i
[2,] 0+1i 1+0i 1+1i
[3,] 3-1i 0+1i 2+0i
>
> S <- Schur(M)
Warning message:
In Schur(M) : imaginary parts discarded in coercion
>
> S
$Q
     [,1]  [,2]   [,3]
[1,]    0 0.500 -0.866
[2,]    1 0.000  0.000
[3,]    0 0.866  0.500

$T
     [,1]  [,2]    [,3]
[1,]    1 0.866  0.5000
[2,]    0 3.732 -2.0000
[3,]    0 0.000  0.2679

$EValues
[1] 1.0000 3.7321 0.2679

> (S$Q) %*% (S$T) %*% (solve(S$Q))
     [,1] [,2] [,3]
[1,]    2    0    1
[2,]    0    1    1
[3,]    3    0    2
So Q T Q^{-1} does not give M back in its true complex form... What code/instructions am I missing, please?
As said in @Eldioo's comment, Matrix::Schur deals only with real matrices. For complex matrices, you can use the QZ package:
library(QZ)
M <- matrix(data=c(2-1i,0+1i,3-1i,0+1i,1+0i,0+1i,1+0i,1+1i,2+0i),
            nrow=3, ncol=3, byrow=FALSE)
schur <- qz(M)
> all.equal(M, schur$Q %*% schur$T %*% solve(schur$Q))
[1] TRUE
> all.equal(M, schur$Q %*% schur$T %*% t(Conj(schur$Q)))
[1] TRUE
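For a complex matrix, Q is unitary rather than merely orthogonal, which is why the conjugate transpose t(Conj(schur$Q)) also inverts it above. A small sketch to verify, using the schur object just computed:
zapsmall(schur$Q %*% t(Conj(schur$Q)))  # should print the identity matrix (with 0i imaginary parts)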

eigen() in R: How to return non-normalized eigenvectors

#eigen values and vectors
a <- matrix(c(2, -1, -1, 2), 2)
eigen(a)
I am trying to find eigenvalues and eigenvectors in R. The eigen function works for the eigenvalues, but the eigenvector values look wrong to me. Is there any way to fix that?
Some pencil-and-paper work tells you:
the eigenvector for eigenvalue 3 is (-s, s) for any non-zero real value s;
the eigenvector for eigenvalue 1 is (t, t) for any non-zero real value t.
Scaling the eigenvectors to unit length gives
s = ± sqrt(0.5) = ±0.7071068
t = ± sqrt(0.5) = ±0.7071068
Scaling is good because if the matrix is real symmetric, the matrix of eigenvectors is orthonormal, so that its inverse is its transpose. Taking your real symmetric matrix a for example:
a <- matrix(c(2, -1, -1, 2), 2)
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

E <- eigen(a)
d <- E[[1]]
#[1] 3 1
u <- E[[2]]
#           [,1]       [,2]
#[1,] -0.7071068 -0.7071068
#[2,]  0.7071068 -0.7071068

u %*% diag(d) %*% solve(u)  ## don't do this stupid computation in practice
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

u %*% diag(d) %*% t(u)  ## don't do this stupid computation in practice
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

crossprod(u)
#     [,1] [,2]
#[1,]    1    0
#[2,]    0    1

tcrossprod(u)
#     [,1] [,2]
#[1,]    1    0
#[2,]    0    1
How to find eigenvectors using textbook method
The textbook method is to solve the homogeneous system (A - λI)x = 0 for a null space basis. The NullSpace function in this answer would be helpful.
## your matrix
a <- matrix(c(2, -1, -1, 2), 2)
## knowing that eigenvalues are 3 and 1
## eigenvector for eigenvalue 3
NullSpace(a - diag(3, nrow(a)))
#     [,1]
#[1,]   -1
#[2,]    1
## eigenvector for eigenvalue 1
NullSpace(a - diag(1, nrow(a)))
#     [,1]
#[1,]    1
#[2,]    1
As you can see, they are not "normalized". By contrast, pracma::nullspace gives "normalized" eigenvectors, so you get something consistent with the output of eigen (up to possible sign flipping):
library(pracma)
nullspace(a - diag(3, nrow(a)))
#           [,1]
#[1,] -0.7071068
#[2,]  0.7071068
nullspace(a - diag(1, nrow(a)))
#          [,1]
#[1,] 0.7071068
#[2,] 0.7071068
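Normalizing the textbook eigenvector by hand reproduces that scaling (a sketch, assuming the NullSpace function from the linked answer is defined):
v <- NullSpace(a - diag(3, nrow(a)))
v / sqrt(sum(v^2))  # unit length, matching eigen()'s output up to sign
#           [,1]
#[1,] -0.7071068
#[2,]  0.7071068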

R: Is there a simple and efficient way to get back the list of building block matrices of a block-diagonal matrix?

I'm looking for a (built-in) function which efficiently returns the list of building blocks of a block-diagonal matrix in the following way (rather than iterating over the slots to get the list manually):
#construct bdiag-matrix
library("Matrix")
listElems <- list(matrix(1:4,ncol=2,nrow=2),matrix(5:8,ncol=2,nrow=2))
mat <- bdiag(listElems)
#get back the list
res <- theFunctionImLookingFor(mat)
The result res yields the building blocks:
[[1]]
     [,1] [,2]
[1,]    1    3
[2,]    2    4

[[2]]
     [,1] [,2]
[1,]    5    7
[2,]    6    8
Edit: Regarding my use case, the list elements in listElems are square and symmetric matrices. If the block is a diagonal matrix, theFunctionImLookingFor should return a list element for each diagonal element.
However, the function should be able to deal with building block matrices like
     [,1] [,2] [,3]
[1,]    1    1    0
[2,]    1    1    1
[3,]    0    1    1
or
     [,1] [,2] [,3]
[1,]    1    0    1
[2,]    0    1    1
[3,]    1    1    1
i.e. deal with zeros in blocks, which are not diagonal matrices.
I hope this will work for all your cases; the test at the bottom includes a block that contains zeroes.
library(igraph)

theFunctionImLookingFor <- function(mat, plot.graph = FALSE) {
  stopifnot(nrow(mat) == ncol(mat))
  # treat the non-zero pattern of mat as an adjacency matrix; set the diagonal
  # to 1 so that 1x1 blocks and zero diagonal entries stay connected
  x <- mat
  diag(x) <- 1
  # summary() on a sparse Matrix lists the (i, j) positions of the non-zeros
  edges <- as.matrix(summary(x)[c("i", "j")])
  g <- graph.edgelist(edges, directed = FALSE)
  if (plot.graph) plot(g)
  # vertices reachable from each other (within nrow(mat) steps) form one block
  groups <- unique(Map(sort, neighborhood(g, nrow(mat))))
  sub.Mat <- Map(`[`, list(mat), groups, groups, drop = FALSE)
  sub.mat <- Map(as.matrix, sub.Mat)
  return(sub.mat)
}
listElems <- list(matrix(1:4, ncol=2, nrow=2),
                  matrix(5:8, ncol=2, nrow=2),
                  matrix(c(0, 1, 0, 0, 0, 1, 0, 0, 1), ncol=3, nrow=3),
                  matrix(1:1, ncol=1, nrow=1))
mat <- bdiag(listElems)
theFunctionImLookingFor(mat, plot.graph = TRUE)
# [[1]]
#      [,1] [,2]
# [1,]    1    3
# [2,]    2    4
#
# [[2]]
#      [,1] [,2]
# [1,]    5    7
# [2,]    6    8
#
# [[3]]
#      [,1] [,2] [,3]
# [1,]    0    0    0
# [2,]    1    0    0
# [3,]    0    1    1
#
# [[4]]
#      [,1]
# [1,]    1
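For what it's worth, the trick in the function above is to treat the non-zero pattern of mat as the adjacency matrix of an undirected graph: rows and columns in the same connected component form one block. A more direct sketch of the same idea (an alternative, not the original answer's code) using igraph's components():
blocks_of <- function(mat) {
  stopifnot(nrow(mat) == ncol(mat))
  x <- mat
  diag(x) <- 1                     # keep 1x1 blocks and zero diagonals connected
  adj <- 1 * (x != 0)              # 0/1 sparsity pattern
  # mode = "max" symmetrizes the pattern, so non-symmetric blocks are fine
  g <- igraph::graph_from_adjacency_matrix(adj, mode = "max")
  groups <- split(seq_len(nrow(mat)), igraph::components(g)$membership)
  lapply(groups, function(idx) as.matrix(mat[idx, idx, drop = FALSE]))
}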

Solving non-square linear system with R

How do you solve a non-square linear system A X = B with R?
(in the case where the system has no solution or infinitely many solutions)
Example :
A = matrix(c(0,1,-2,3,5,-3,1,-2,5,-2,-1,1), 3, 4, T)
B = matrix(c(-17,28,11), 3, 1, T)
A
     [,1] [,2] [,3] [,4]
[1,]    0    1   -2    3
[2,]    5   -3    1   -2
[3,]    5   -2   -1    1
B
     [,1]
[1,]  -17
[2,]   28
[3,]   11
If the matrix A has more rows than columns (an overdetermined system), then you should use a least-squares fit, as sketched below. If the matrix A has fewer rows than columns (an underdetermined system), then you can use the singular value decomposition. Each approach does the best it can to give you a solution by making assumptions.
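For the overdetermined case, base R can compute the least-squares solution directly from the QR decomposition. A minimal sketch (with a made-up, full-column-rank system, not this question's A):
set.seed(42)
A_tall <- matrix(rnorm(12), nrow=4, ncol=3)  # more rows than columns
b_tall <- rnorm(4)
qr.solve(A_tall, b_tall)  # least-squares solution minimizing ||A x - b||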
Here's a link that shows how to use SVD as a solver:
http://www.ecse.rpi.edu/~qji/CV/svd_review.pdf
Let's apply it to your problem and see if it works:
Your input matrix A and known RHS vector B:
> A = matrix(c(0,1,-2,3,5,-3,1,-2,5,-2,-1,1), 3, 4, T)
> B = matrix(c(-17,28,11), 3, 1, T)
> A
     [,1] [,2] [,3] [,4]
[1,]    0    1   -2    3
[2,]    5   -3    1   -2
[3,]    5   -2   -1    1
> B
     [,1]
[1,]  -17
[2,]   28
[3,]   11
Let's decompose your A matrix:
> asvd = svd(A)
> asvd
$d
[1] 8.007081e+00 4.459446e+00 4.022656e-16

$u
           [,1]       [,2]       [,3]
[1,] -0.1295469 -0.8061540  0.5773503
[2,]  0.7629233  0.2908861  0.5773503
[3,]  0.6333764 -0.5152679 -0.5773503

$v
            [,1]       [,2]       [,3]
[1,]  0.87191556 -0.2515803 -0.1764323
[2,] -0.46022634 -0.1453716 -0.4694190
[3,]  0.04853711  0.5423235  0.6394484
[4,] -0.15999723 -0.7883272  0.5827720

> adiag = diag(1/asvd$d)
> adiag
          [,1]      [,2]        [,3]
[1,] 0.1248895 0.0000000 0.00000e+00
[2,] 0.0000000 0.2242431 0.00000e+00
[3,] 0.0000000 0.0000000 2.48592e+15
Here's the key: the third singular value in d is very small; conversely, the corresponding diagonal element in adiag is very large. Before solving, set it equal to zero:
> adiag[3,3] = 0
> adiag
          [,1]      [,2] [,3]
[1,] 0.1248895 0.0000000    0
[2,] 0.0000000 0.2242431    0
[3,] 0.0000000 0.0000000    0
Now let's compute the solution (see slide 16 in the link I gave you above):
> solution = asvd$v %*% adiag %*% t(asvd$u) %*% B
> solution
          [,1]
[1,]  2.411765
[2,] -2.282353
[3,]  2.152941
[4,] -3.470588
Now that we have a solution, let's substitute it back to see if it gives us the same B:
> check = A %*% solution
> check
     [,1]
[1,]  -17
[2,]   28
[3,]   11
That's the B side you started with, so I think we're good.
Here's another nice SVD discussion from AMS:
http://www.ams.org/samplings/feature-column/fcarc-svd
The aim is to solve Ax = b,
where A is p by q, x is q by 1 and b is p by 1, for x given A and b.
Approach 1: Generalized Inverse (Moore-Penrose)
https://en.wikipedia.org/wiki/Generalized_inverse
Multiplying both sides of the equation by A', the transpose of A, we get
A'Ax = A'b
Note that A'A is now a q by q matrix. One way to solve this is to multiply both sides of the equation by the inverse of A'A, which gives
x = (A'A)^{-1} A'b
This is the theory behind the generalized inverse. Here G = (A'A)^{-1} A' is a pseudo-inverse of A.
library(MASS)
ginv(A) %*% B
#          [,1]
#[1,]  2.411765
#[2,] -2.282353
#[3,]  2.152941
#[4,] -3.470588
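Note that for this particular A (rank 2), A'A is singular, so the formula G = (A'A)^{-1} A' cannot be applied literally; ginv computes the Moore-Penrose inverse via the SVD instead. For a full-column-rank matrix the two coincide, as this sketch with a made-up matrix A2 shows:
set.seed(1)
A2 <- matrix(rnorm(12), nrow=4, ncol=3)  # full column rank (almost surely)
all.equal(solve(crossprod(A2)) %*% t(A2), ginv(A2))  # should be TRUE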
Approach 2: Generalized Inverse using SVD.
@duffymo used the SVD to obtain a pseudo-inverse of A.
However, the last elements of svd(A)$d may not always be as small as in this example, so one probably shouldn't use that method as is. Here's an example where none of the last elements of svd(A)$d is close to zero.
A <- as.matrix(iris[11:13, -5])
A
#    Sepal.Length Sepal.Width Petal.Length Petal.Width
# 11          5.4         3.7          1.5         0.2
# 12          4.8         3.4          1.6         0.2
# 13          4.8         3.0          1.4         0.1
svd(A)$d
# [1] 10.7820526 0.2630862 0.1677126
One option would be to look at the singular values of cor(A):
svd(cor(A))$d
# [1] 2.904194e+00 1.095806e+00 1.876146e-16 1.155796e-17
Now it is clear that only two large singular values are present. So one can apply svd on A to get the pseudo-inverse as below.
svda <- svd(A)
G = svda$v[, 1:2] %*% diag(1/svda$d[1:2]) %*% t(svda$u[, 1:2])
# to get x
G %*% B
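Since the third singular value was dropped when building G, the recovered x reproduces B only approximately. A quick check (a sketch, reusing B from the question):
x <- G %*% B
A %*% x  # recovers only the part of B lying in the span of the first two singular directions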

Creating a new matrix in R using old matrix values as exponents

If I have a matrix mat1
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
it is possible to square each individual value with a very simple command:
mat1 * mat1
     [,1] [,2] [,3]
[1,]    1    9   25
[2,]    4   16   36
Now, what I want to do is create a new matrix where every value is computed as e^(old_value), e.g. e^1, e^2, e^3 and so forth. How can I do this?
exp computes the exponential function
> mat1 <- matrix(1:6, nrow=2)
> exp(mat1)
         [,1]     [,2]     [,3]
[1,] 2.718282 20.08554 148.4132
[2,] 7.389056 54.59815 403.4288
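exp, like most of R's math functions, is vectorized, so it operates elementwise on a matrix; other bases work the same way. A small sketch:
mat1 <- matrix(1:6, nrow=2)
all.equal(exp(mat1), matrix(exp(1:6), nrow=2))  # elementwise, should be TRUE
2^mat1  # elementwise powers of 2, same idea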
