I have an upper triangular matrix and I'd like to compute its inverse in a fast way. I tried qr.solve() but I have the feeling that it's equivalent to solve(), and that it does not exploit the triangular nature of the input matrix. What is the best way to do it?
Try backsolve() and use the identity matrix with appropriate dimension as the right-hand value.
library(microbenchmark)
n <- 2000
n.cov <- 1000
X <- matrix(sample(c(0L, 1L), size = n * n.cov, replace = TRUE),
            nrow = n, ncol = n.cov)
R <- chol(crossprod(X))
rm(X)
microbenchmark(
  backsolve = backsolve(r = R, x = diag(ncol(R))),
  solve = solve(R),
  times = 10)
Unit: milliseconds
      expr      min       lq     mean   median       uq      max neval
 backsolve 467.2802 467.5142 469.4457 468.1578 468.6501 482.2431    10
     solve 748.2351 748.8318 750.0764 750.3319 750.9583 751.5005    10
If you want to get the inverse of your matrix M, you can simply use backsolve(M, diag(dim(M)[1])). For example:
M <- matrix(rnorm(100), 10, 10) # random matrix
M[lower.tri(M)] <- 0 # make it triangular
Minv <- backsolve(M, diag(dim(M)[1]))
M %*% Minv
When running this code you should see that M %*% Minv is the identity matrix up to numerical error: the off-diagonal coefficients are not exactly zero but tiny (~1e-15), due to floating-point rounding.
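As a quick sanity check (a minimal sketch; the diagonal shift is only there to keep the example matrix well conditioned):

```r
set.seed(1)
M <- matrix(rnorm(100), 10, 10) + 5 * diag(10)  # shift diagonal: well conditioned
M[lower.tri(M)] <- 0                            # make it upper triangular
Minv <- backsolve(M, diag(nrow(M)))
# M %*% Minv equals the identity up to ~1e-15 rounding noise
all.equal(M %*% Minv, diag(nrow(M)))            # TRUE
```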
I have, of late, confronted the same problem. My solution has been to load the Matrix package, and then define the following two R functions serving as wrappers for that task:
my.backsolve <- function(A, ...) solve(triu(A), ...)
my.forwardsolve <- function(A, ...) solve(tril(A), ...)
These need the Matrix package because triu and tril are defined in it. Then my.backsolve(A) (resp. my.forwardsolve(A)) computes the inverse of A for the upper (resp. lower) triangular case.
A minor issue, though, is that the result of each of the above two R functions is of class Matrix. If that causes problems, apply as.vector or as.matrix to the result, according to whether the r.h.s. is a vector or a matrix (in the algebraic sense).
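A small usage check of these wrappers (assuming the Matrix package is available; as.matrix drops the Matrix class as just discussed):

```r
library(Matrix)
my.backsolve <- function(A, ...) solve(triu(A), ...)
A <- matrix(c(2, 0, 1, 4), 2, 2)        # a 2 x 2 upper triangular matrix
Ainv <- as.matrix(my.backsolve(A))      # inverse via the triangular solver
all.equal(unname(Ainv), solve(A))       # TRUE
```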
It seems solve performs somewhat faster than qr.solve but qr.solve is more robust.
n <- 50
mymatrix <- matrix(0, nrow=n, ncol=n)
fun1 <- function() {
  for (i in 1:n) {
    mymatrix[i, i:n] <- rnorm(n - i + 1) + 3
  }
  solve(mymatrix)
}
fun2 <- function() {
  for (i in 1:n) {
    mymatrix[i, i:n] <- rnorm(n - i + 1) + 3
  }
  qr.solve(mymatrix)
}
> system.time(for (i in 1:1000) fun1())
user system elapsed
1.92 0.03 1.95
> system.time(for (i in 1:1000) fun2())
user system elapsed
2.92 0.00 2.92
Note that if we remove the +3 when editing the matrix cells, solve will almost always fail and qr.solve will still usually give an answer.
> set.seed(0)
> fun1()
Error in solve.default(mymatrix) :
system is computationally singular: reciprocal condition number = 3.3223e-22
> set.seed(0)
> fun2()
[returns the inverted matrix]
The base R function chol2inv might do the trick for inverting a triangular matrix. It inverts a symmetric, positive definite square matrix from its Choleski decomposition; equivalently, it computes (X'X)^(-1) from the R part of the QR decomposition of X.
Even more generally, given an upper triangular matrix R, compute (R'R)^(-1).
See https://stat.ethz.ch/R-manual/R-devel/library/base/html/chol2inv.html
I compared the speeds with microbenchmark: qr.solve is slowest, solve ranks 3rd in speed, backsolve ranks 2nd, and chol2inv is the fastest.
cma <- chol(ma <- cbind(1, 1:3, c(1,3,7)))
microbenchmark(qr.solve(ma), solve(ma), cma <- chol(ma), chol2inv(cma),
               invcma <- backsolve(cma, diag(nrow(cma))),
               backinvma <- tcrossprod(invcma))
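As a sanity check (mirroring the example in ?chol2inv), chol2inv applied to the Cholesky factor reproduces the ordinary inverse:

```r
ma  <- cbind(1, 1:3, c(1, 3, 7))   # symmetric positive definite
cma <- chol(ma)
# chol2inv(cma) computes (cma' cma)^(-1), which is exactly ma^(-1)
all.equal(chol2inv(cma) %*% ma, diag(3))  # TRUE
```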
Related
I assumed that chol2inv(chol(x)) and solve(x) are two different methods that arrive at the same result in all cases. Consider for instance a matrix S:
A <- matrix(rnorm(3*3), 3, 3)
S <- t(A) %*% A
where the following two commands will give equivalent results:
solve(S)
chol2inv(chol(S))
Now consider the transpose of the Cholesky decomposition of S:
L <- t(chol(S))
where now the results of the following two commands do not give equivalent results anymore:
solve(L)
chol2inv(chol(L))
This surprised me a bit. Is this expected behavior?
chol expects (without checking) that its first argument x is a symmetric positive definite matrix, and it operates only on the upper triangular part of x. Thus, if L is a lower triangular matrix and D = diag(diag(L)) is its diagonal part, then chol(L) is actually equivalent to chol(D), and chol2inv(chol(L)) is actually equivalent to solve(D).
set.seed(141339L)
n <- 3L
S <- crossprod(matrix(rnorm(n * n), n, n))
L <- t(chol(S))
D <- diag(diag(L))
all.equal(chol(L), chol(D)) # TRUE
all.equal(chol2inv(chol(L)), solve(D)) # TRUE
I need to compute the sum of squares crossproduct matrix (indeed the trace of this matrix) in a multivariate linear model, with Y (n x q) and X (n x p). Standard R code for doing that is:
require(MASS)
require(car)
# Example data
q <- 10
n <- 1000
p <- 10
Y <- mvrnorm(n, mu = rep(0, q), Sigma = diag(q))
X <- as.data.frame(mvrnorm(n, mu = rnorm(p), Sigma = diag(p)))
# Fit lm
fit <- lm( Y ~ ., data = X )
# Type I sums of squares
summary(manova(fit))$SS
# Type III sums of squares
type = 3 # could be also 2 (II)
car::Anova(fit, type = type)$SSP
This has to be done thousands of times and, unfortunately, it gets slow when the number of predictors is relatively large. As I am often interested only in a subset of s predictors, I tried to re-implement this calculation. Although my implementation, directly translating the linear algebra for s = 1 (below), is faster for small sample sizes (n),
# Hat matrix (X here stands for the actual design matrix)
H <- tcrossprod(tcrossprod(X, solve(crossprod(X))), X)
# Remove predictor of interest (e.g. 2)
X.r <- X[, -2]
H1 <- tcrossprod(tcrossprod(X.r, solve(crossprod(X.r))), X.r)
# Compute e.g. type III sum of squares
SS <- crossprod(Y, H - H1) %*% Y
car still goes faster for large n.
I already tried an Rcpp implementation, without much success, as these matrix products in R already use very efficient code.
Any hint on how to do this faster?
UPDATE
After reading the answers, I tried the solution proposed in this post, which relies on QR/SVD/Cholesky factorization for the hat matrix calculation. However, it seems that car::Anova is still faster at computing all p = 30 matrices than my code is at computing just one (s = 1)! For e.g. n = 5000, q = 10:
Unit: milliseconds
 expr       min        lq      mean    median        uq       max neval
   ME 1137.5692 1202.9888 1257.8979 1251.6834 1318.9282 1398.9343    10
   QR 1005.9082 1031.9911 1084.5594 1037.5659 1095.7449 1364.9508    10
  SVD 1026.8815 1065.4629 1152.6631 1087.9585 1241.4977 1446.8318    10
 Chol  969.9089 1056.3093 1115.9608 1102.1169 1210.7782 1267.1274    10
  CAR  205.1665  211.8523  218.6195  214.6761  222.0973  242.4617    10
UPDATE 2
The best solution for now was to go over the car::Anova code (i.e. functions car:::Anova.III.mlm and subsequently car:::linearHypothesis.mlm) and re-implement them to account for a subset of predictors, instead of all of them.
The relevant code by car is as follows (I skipped checks, and simplified a bit):
B <- coef(fit) # Model coefficients
M <- model.matrix(fit) # Model matrix M
V <- solve(crossprod(M)) # (M'M)^(-1)
p <- ncol(M) # Number of predictors in M
I.p <- diag(p) # Identity (p x p)
terms <- labels(terms(fit)) # terms (add intercept)
terms <- c("(Intercept)", terms)
n.terms <- length(terms)
assign <- fit$assign # assignation terms <-> p variables
SSP <- as.list(rep(0, n.terms)) # Initialize empty list for sums of squares cross-product matrices
names(SSP) <- terms
for (term in 1:n.terms) {
  subs <- which(assign == term - 1)
  L <- I.p[subs, , drop = FALSE]
  SSP[[term]] <- t(L %*% B) %*% solve(L %*% V %*% t(L)) %*% (L %*% B)
}
Then it is just a matter of selecting the subset of terms.
This line and the similar one below it for H1 could probably be improved:
H <- tcrossprod(tcrossprod(X, solve(crossprod(X))), X)
The general idea is that you should rarely use solve(Y) %*% Z, because it is the same as solve(Y, Z) but slower. I haven't fully expanded your tcrossprod calls to see what the best equivalent formulation of the expressions for H and H1 would be.
You could also look at this question https://stats.stackexchange.com/questions/139969/speeding-up-hat-matrices-like-xxx-1x-projection-matrices-and-other-as for a description of doing it via QR decomposition.
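A minimal sketch of that solve(Y, Z) rewrite applied to the hat matrix from the question (synthetic X; both forms agree up to rounding):

```r
set.seed(42)
X <- matrix(rnorm(100 * 3), 100, 3)
# Original form: build (X'X)^(-1) explicitly, then multiply
H.inv <- tcrossprod(tcrossprod(X, solve(crossprod(X))), X)
# Rewritten form: let solve() handle the linear system directly
H.sys <- X %*% solve(crossprod(X), t(X))
all.equal(H.inv, H.sys)  # TRUE
```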
When making predictions for a linear statistical model we usually have a model matrix X of predictors corresponding to the points at which we want to make predictions; a vector of coefficients beta; and a variance-covariance matrix V. Computing the predictions is just X %*% beta. The most straightforward way to compute the variances of the predictions is
diag(X %*% V %*% t(X))
or slightly more efficiently
diag(X %*% tcrossprod(V,X))
However, this is very inefficient, because it constructs an n*n matrix when all we really want is the diagonal. I know I could write some Rcpp-loopy thing that would compute just the diagonal terms, but I'm wondering if there is an existing linear algebra trick in R that will nicely do what I want ... (if someone wants to write the Rcpp-loopy thing for me as an answer I wouldn't object, but I'd prefer a pure-R solution)
FWIW predict.lm seems to do something clever by multiplying X by the inverse of the R component of the QR-decomposition of the lm; I'm not sure that's always going to be available, but it might be a good starting point (see here)
Along the lines of this Octave/Matlab question, for two matrices A and B, we can use the fact that the nth diagonal entry of AB is the product of the nth row of A with the nth column of B. We can naively extend that to the case of three matrices, ABC. I have not considered how to optimize for the case C = t(A), but aside from that, this code looks like a promising speedup:
start_time <- Sys.time()
A=matrix(1:1000000, nrow = 1000, ncol = 1000)
B=matrix(1000000:1, nrow = 1000, ncol = 1000)
# Try one of these two
res=diag(A %*% B %*% t(A)) # ~0.47s
res=rowSums(A * t(B %*% t(A))) # ~0.27s
end_time <- Sys.time()
print(end_time - start_time)
Using tcrossprod did not appear to accelerate the results when I ran this code. However, just using the row-sum-dot-product approach appears to be a lot more efficient already, at least on this silly example, which suggests (though I'm not sure) that rowSums is not computing the full intermediate matrices before returning the diagonal entries, as I'd expect happens with diag.
I am not quite sure how efficient this is, but:
Find U such that V = U %*% t(U); this is possible since V is a covariance matrix.
XU = X %*% U
result = apply(XU, 1, function(x) sum(x^2))
Demo
V <- cov(iris[, -5])
X <- as.matrix(iris[1:5, -5])
Using SVD
svd_v <- svd(V)
U <- svd_v$u %*% diag(sqrt(svd_v$d))
XU = X %*% U
apply(XU, 1, function(x) sum(x^2))
# 1 2 3 4 5
#41.35342 39.36286 35.42369 38.25584 40.30839
Another approach, though this also isn't going to be faster than @davewy's. Note that chol(V) returns the upper factor U with V = t(U) %*% U, so X has to be multiplied by t(U) (here via tcrossprod) before squaring:
U <- chol(V)               # V = t(U) %*% U
XU <- tcrossprod(X, U)^2   # row i is (U %*% x_i)', squared elementwise
rowSums(XU)
I recently found emulator::quad.diag(), which is just
colSums(crossprod(M, Conj(x)) * x)
This is slightly better than @davewy's solution (although the overall differences are smaller than I thought they would be).
library(microbenchmark)
microbenchmark(full = diag(A %*% B %*% t(A)),
               davewy = rowSums(A * t(B %*% t(A))),
               emu = quad.diag(A, B))
Unit: milliseconds
expr min lq mean median uq max neval cld
full 32.76241 35.49665 39.51683 37.63958 41.46561 57.41370 100 c
davewy 22.74787 25.06874 28.42179 26.97330 29.68895 45.38188 100 b
emu 17.68390 20.21322 23.59981 22.09324 24.80734 43.60953 100 a
I am a newcomer to R, having migrated from GAUSS because of its license verification issues.
I want to speed up the following code, which creates an n×k matrix A. Given the n×1 vector x and the vectors of parameters mu and sig (both of them k-dimensional), A is created as A[i,j] = dnorm(x[i], mu[j], sig[j]). The following code works fine for small numbers n = 40, k = 4, but slows down significantly when n is around 10^6 and k is about the same size as n^(1/3).
I am doing a simulation experiment to verify bootstrap validity, so I need to repeatedly compute the matrix A #ofsimulations × #bootstraps times, and it becomes a little time-consuming as I want to experiment with many different values of n and k. I vectorized the code as much as I could (thanks to the vector arguments of dnorm), but can I ask for more speed-up?
Preemptive thanks for any help.
x = rnorm(40)
mu = c(-1,0,4,5)
sig = c(2^2,0.5^2,2^2,3^2)
n = length(x)
k = length(mu)
A = matrix(NA,n,k)
for(j in 1:k){
  A[,j] = dnorm(x, mu[j], sig[j])
}
Your method can be put into a function like this
A.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  A <- matrix(NA, n, k)
  for (j in 1:k) A[, j] <- dnorm(x, mu[j], sig[j])
  A
}
and it's clear that you are filling the matrix A column by column.
R stores the entries of a matrix columnwise (just like Fortran).
This means that the matrix can be filled with a single call of dnorm, using suitable repetitions of x, mu, and sig. The vector z will hold the columns of the desired matrix stacked on top of each other, and the matrix to be returned can then be formed from that vector just by specifying the number of rows and columns. See the following function:
B.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  z <- dnorm(rep(x, times = k), rep(mu, each = n), rep(sig, each = n))
  B <- matrix(z, nrow = n, ncol = k)
  B
}
Let's make an example with your data and test this as follows:
N <- 40
set.seed(11)
x <- rnorm(N)
mu <- c(-1,0,4,5)
sig <- c(2^2,0.5^2,2^2,3^2)
A <- A.fill(x,mu,sig)
B <- B.fill(x,mu,sig)
all.equal(A,B)
# [1] TRUE
I'm assuming that n is an integer multiple of k.
Addition
As noted in the comments B.fill is quite slow for large values of n.
The reason lies in the construct rep(...,each=...).
So, is there a way to speed up A.fill?
I tested this function:
C.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  sapply(1:k, function(j) dnorm(x, mu[j], sig[j]), simplify = TRUE)
}
This function is about 20% faster than A.fill.
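A quick check that C.fill reproduces A.fill on the question's example data (redefining both here so the snippet is self-contained):

```r
A.fill <- function(x, mu, sig) {
  A <- matrix(NA, length(x), length(mu))
  for (j in seq_along(mu)) A[, j] <- dnorm(x, mu[j], sig[j])
  A
}
C.fill <- function(x, mu, sig)
  sapply(seq_along(mu), function(j) dnorm(x, mu[j], sig[j]))

set.seed(11)
x   <- rnorm(40)
mu  <- c(-1, 0, 4, 5)
sig <- c(2^2, 0.5^2, 2^2, 3^2)
all.equal(A.fill(x, mu, sig), C.fill(x, mu, sig))  # TRUE
```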
I have a long vector x, and another vector v which contains lengths. I would like to sum x in stretches, so that the answer y is a vector of length length(v), where y[1] is sum(x[1:v[1]]), y[2] is sum(x[(1+v[1]):(v[1]+v[2])]), and so on. Essentially this is performing sparse matrix multiplication from a space of dimension length(x) to one of dimension length(v). However, I would prefer not to bring in "advanced machinery", although I might have to. It does need to be very, very fast. Can anyone think of anything simpler than using a sparse matrix package?
Example -
x <- c(1,1,3,4,5)
v <- c(2,3)
y <- myFunc(x,v)
y should be c(2,12)
I am open to any pre-processing - e.g, storing in v the starting indexes of each stretch.
y <- cumsum(x)[cumsum(v)]
y <- c(y[1], diff(y))
This looks like it's doing extra work because it's computing the cumsum for the whole vector, but it's actually faster than the other solutions so far, for both small and large numbers of groups.
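On the question's example, the two steps look like this:

```r
x <- c(1, 1, 3, 4, 5)
v <- c(2, 3)
y <- cumsum(x)[cumsum(v)]  # running totals at the group boundaries: c(2, 14)
y <- c(y[1], diff(y))      # difference them to recover per-group sums
y                          # c(2, 12)
```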
Here's how I simulated the data
set.seed(5)
N <- 1e6
n <- 10
x <- round(runif(N,0,100),1)
v <- as.vector(table(sample(n, N, replace=TRUE)))
On my machine the timings with n <- 10 are:
Brandon Bertelsen (for loop): 0.017
Ramnath (rowsum): 0.057
John (split/apply): 0.280
Aaron (cumsum): 0.008
changing to n <- 1e5 the timings are:
Brandon Bertelsen (for loop): 2.181
Ramnath (rowsum): 0.226
John (split/apply): 0.852
Aaron (cumsum): 0.015
I suspect this is faster than doing matrix multiplication, even with a sparse matrix package, because one doesn't have to form the matrix or do any multiplication. If more speed is needed, I suspect it could be sped up by writing it in C; not hard to do with the inline and rcpp packages, but I'll leave that to you.
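For reference, a hedged sketch of that C route using Rcpp (the function name groupSums is made up for this example; requires the Rcpp package and a working compiler):

```r
library(Rcpp)
cppFunction('
NumericVector groupSums(NumericVector x, IntegerVector v) {
  NumericVector y(v.size());
  int pos = 0;
  for (int i = 0; i < v.size(); ++i) {
    double s = 0.0;                          // running sum for group i
    for (int j = 0; j < v[i]; ++j) s += x[pos++];
    y[i] = s;
  }
  return y;
}')
groupSums(c(1, 1, 3, 4, 5), c(2L, 3L))  # c(2, 12)
```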
You can do this using rowsum. It should be reasonably fast as it uses C code in the background.
y <- rowsum(x, rep(1:length(v), v))
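Note that rowsum returns a one-column matrix with the group labels as row names; on the question's example:

```r
x <- c(1, 1, 3, 4, 5)
v <- c(2, 3)
y <- rowsum(x, rep(seq_along(v), v))  # one-column matrix, rownames "1", "2"
as.vector(y)                          # c(2, 12)
```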
Here's a slightly different tack.
s <- rep(1:length(v), v)
l <- split(x, s)
y <- sapply(l, sum)
Try something like this, tracking the cumulative group boundaries (since v holds lengths, not end indices):
cv <- c(0, cumsum(v))
y <- numeric(length(v))
for (i in 1:length(v)) {
  y[i] <- sum(x[(cv[i] + 1):cv[i + 1]])
}