Efficient Triangular Backsubstitution in R

I am interested in solving the linear system of equations Ax = b, where A is a lower-triangular (n × n) matrix and b is an (n × 1) vector, with n ≈ 600k.
I coded up backsubstitution in R and it is fast for matrices of size up to about 1000, but really slow for larger n (≈ 600k). I know that naive backsubstitution is O(n^2).
My R function is below; does anyone know of a more efficient (vectorized, parallelized, etc.) way of doing it that scales to large n?
Backsubstitution
backsub <- function(X, y)
{
  # column-oriented back substitution: process the columns of X from last to first
  l <- dim(X)
  n <- l[1]
  p <- l[2]
  y <- as.matrix(y)
  for (j in seq(p, 1, -1))
  {
    # divide by the diagonal entry, then subtract column j's contribution
    # from the remaining right-hand side entries
    y[j, 1] <- y[j, 1] / X[j, j]
    if ((j - 1) > 0)
      y[1:(j - 1), 1] <- y[1:(j - 1), 1] - (y[j, 1] * X[1:(j - 1), j])
  }
  return(y)
}

How about the R function backsolve? It calls the Level 3 BLAS routine dtrsm, which is probably what you want to be doing. In general, you won't beat BLAS/LAPACK linear algebra routines: they're insanely optimized.
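A minimal sketch of what that looks like (the matrix L and vector b below are random stand-ins for the A and b of the question): with upper.tri = FALSE, backsolve solves a lower-triangular system directly and never forms an inverse.

n <- 5
L <- matrix(rnorm(n * n), n, n)
L[upper.tri(L)] <- 0            # make it lower triangular
diag(L) <- diag(L) + n          # keep it comfortably non-singular
b <- rnorm(n)

x <- backsolve(L, b, upper.tri = FALSE)   # triangular solve via BLAS
all.equal(drop(L %*% x), b)               # should be TRUE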

Related

Fast computation of matrix trace norm in R

I have a square symmetric real matrix S of dimension 31. I want to compute its trace (nuclear) norm, Frobenius (Hilbert--Schmidt) norm and operator (spectral) norm. I am using eigen:
x <- eigen(S, only.values = TRUE)$values
sum(abs(x))
sqrt(sum(x^2))
max(abs(x))
Is there a faster way to do this? For the Frobenius norm, I suppose that sqrt(sum(S^2)) should be faster. I also believe there should be a way to compute the operator norm faster, as it only requires the maximal and minimal eigenvalues rather than all of them. I am not sure, however, how to handle the trace norm efficiently. It can be computed as the trace of the matrix square root of t(S) %*% S, but (at least to my knowledge) computing the matrix square root is done using eigen too (see my code below if helpful).
I don't know if it helps at all, but I also know that S+diag(31) is positive semidefinite.
I need to do this for a lot of matrices (4 000 000 or so) so even mild improvements would be consequential.
Here is the code for the matrix square root I am using:
sqm <- function(A)
{
  # matrix square root of a symmetric PSD matrix via its eigendecomposition
  A <- eigen(A)
  A$vectors %*% (sqrt(A$values) * t(A$vectors))
}
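No answer is recorded here, but as a quick illustration of the claims above (my own sketch, not part of the thread): for a symmetric S the Frobenius norm can be computed directly from the entries, and base R's norm() returns the spectral norm; both agree with the eigenvalue route in the question.

set.seed(1)
S <- crossprod(matrix(rnorm(31 * 31), 31))    # symmetric test matrix
x <- eigen(S, only.values = TRUE)$values
all.equal(sqrt(sum(x^2)), sqrt(sum(S^2)))     # Frobenius norm, both routes
all.equal(max(abs(x)), norm(S, type = "2"))   # spectral norm via norm()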

How to calculate eigenvalues and eigenvectors without the eigen function in R

I want to compute eigenvectors and eigenvalues without using the built-in function eigen.
Hand calculation is possible for simple matrices, but it is hard to see how to implement it in R.
I want code that can be used for any x matrix (n × p).
I would be really grateful if you could provide me with an idea or code.
A <- t(x) %*% x
eigen(A)$vectors # I don't want to use 'eigen'
eigen(A)$values
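No answer is recorded for this question; purely as an illustration (my own sketch, not the asker's intended method), a power iteration recovers the dominant eigenvalue/eigenvector of a symmetric matrix such as A = t(x) %*% x without calling eigen. The remaining eigenpairs would need deflation or a full QR iteration on top of this.

power_iter <- function(A, tol = 1e-10, max_iter = 1000) {
  # power iteration: repeatedly multiply a random vector by A and normalise
  v <- rnorm(nrow(A))
  v <- v / sqrt(sum(v^2))
  lambda <- 0
  for (i in seq_len(max_iter)) {
    w <- A %*% v
    lambda_new <- drop(crossprod(v, w))   # Rayleigh quotient estimate
    v <- drop(w) / sqrt(sum(w^2))
    if (abs(lambda_new - lambda) < tol) break
    lambda <- lambda_new
  }
  list(value = lambda_new, vector = v)
}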

Efficient use of Choleski decomposition in R

This question is related to this one and this one
I have two full rank matrices A1, A2 each
of dimension p x p and a p-vector y.
These matrices are closely related in the sense that
matrix A2 is a rank one update of matrix A1.
I'm interested in the vector
β2 | (β1, y, A1, A2, A1^(-1))
where
β2 = (A2' A2)^(-1) (A2' y)
and
β1 = (A1' A1)^(-1) (A1' y)
Now, in a previous question here I have been advised
to estimate β2 by the Choleski approach since the Choleski
decomposition is easy to update using R functions such as chud()
in package SamplerCompare.
Below are two functions to solve linear systems in R, the first one uses
the solve() function and the second one the Choleski approach
(the second one I can efficiently update).
fx01 <- function(ll,A,y) chol2inv(chol(crossprod(A))) %*% crossprod(A,y)
fx03 <- function(ll,A,y) solve(A,y)
p <- 5
A <- matrix(rnorm(p^2),p,p)
y <- rnorm(p)
system.time(lapply(1:1000,fx01,A=A,y=y))
system.time(lapply(1:1000,fx03,A=A,y=y))
My question is: for p small, both functions seem to be comparable
(actually fx01 is even faster). But as I increase p,
fx01 becomes increasingly slower, so that for p = 100,
fx03 is three times as fast as fx01.
What is causing the performance deterioration of fx01, and can it
be improved/solved (maybe my implementation of the Choleski is too naive)? Shouldn't I be using functions from the Choleski constellation such as backsolve, and if so, how?
A %*% B is the R lingo for matrix multiplication of A by B.
crossprod(A,B) is the R lingo for A' B (i.e. the transpose of A multiplying the matrix/vector B).
solve(A,b) solves for x the linear system A x=b.
chol(A) is the Choleski decomposition of a PSD matrix A.
chol2inv computes (X' X)^(-1) from the R part of the QR decomposition of X.
Your 'fx01' implementation is, as you mentioned, somewhat naive and is performing far more work than the 'fx03' approach. In linear algebra (my apologies for the main StackOverflow not supporting LaTeX!), 'fx01' performs:
B := A' A in roughly n^3 flops.
L := chol(B) in roughly 1/3 n^3 flops.
L := inv(L) in roughly 1/3 n^3 flops.
B := L' L in roughly 1/3 n^3 flops.
z := A' y in roughly 2n^2 flops.
x := B z in roughly 2n^2 flops.
Thus, the cost looks very similar to 2n^3 + 4n^2, whereas your 'fx03' approach uses the default 'solve' routine, which likely performs an LU decomposition with partial pivoting (2/3 n^3 flops) and two triangular solves (plus pivoting) in 2n^2 flops. Your 'fx01' approach therefore performs three times as much work asymptotically, which agrees nicely with your experimental results. Note that if A were real symmetric or complex Hermitian, an LDL' factorization and solve would only require half as much work.
With that said, I think that you should replace your Cholesky update of A' A with a more stable QR update of A, as I just answered in your previous question. A QR decomposition costs roughly 4/3 n^3 flops and a rank-one update to a QR decomposition is only O(n^2), so this approach only makes sense for general A when there is more than just one related solve that is simply a rank-one modification.
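For completeness, here is a sketch of the triangular-solve variant the asker hints at (an illustration of the standard normal-equations route using backsolve/forwardsolve, not the QR update recommended above); 'fx02' is a name I am introducing, and it drops into the same timing harness as fx01 and fx03.

fx02 <- function(ll, A, y) {
  # A'A = R'R with R upper triangular; solve R'z = A'y, then R beta = z
  R <- chol(crossprod(A))
  z <- forwardsolve(t(R), crossprod(A, y))
  backsolve(R, z)
}
system.time(lapply(1:1000, fx02, A = A, y = y))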

Algorithm for finding an equidistributed solution to a linear congruence system

I face the following problem in a cryptographic application: I am given a set of linear congruences
a[1]*x[1]+a[2]*x[2]+a[3]*x[3] == d[1] (mod p)
b[1]*x[1]+b[2]*x[2]+b[3]*x[3] == d[2] (mod p)
c[1]*x[1]+c[2]*x[2]+c[3]*x[3] == d[3] (mod p)
Here, x is unknown and a, b, c, d are given.
The system is most likely underdetermined, so I have a large solution space. I need an algorithm that finds an equidistributed solution (that means equidistributed in the solution space) to that problem using a pseudo-random number generator (or fails).
Most standard algorithms for linear equation systems that I know from my linear algebra courses are not directly applicable to congruences as far as I can see...
My current, "safe" algorithm works as follows: find all variables that appear in only one equation and assign them a random value. Now if, in each row, only one variable is unassigned, assign its value according to the congruence. Otherwise fail.
Can anyone give me a clue how to solve this problem in general?
You can use Gaussian elimination and similar algorithms just like you learned in your linear algebra courses, but all arithmetic is performed mod p (p is a prime). The one important difference is in the definition of "division": to compute a / b you instead compute a * (1/b) (in words, "a times b inverse"). Consider the following changes to the math operations normally used:
addition: a+b becomes a+b mod p
subtraction: a-b becomes a-b mod p
multiplication: a*b becomes a*b mod p
division: a/b becomes: if p divides b, then "error: divide by zero", else a * (1/b) mod p
To compute the inverse of b mod p you can use the extended Euclidean algorithm or alternatively compute b^(p-2) mod p.
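A small sketch of that division rule, written in R to match the rest of this page (the helper name inv_mod is mine): the extended Euclidean algorithm gives 1/b mod p, after which a / b becomes (a * inv_mod(b, p)) %% p. Note that R's doubles only represent integers exactly up to 2^53, so a realistic cryptographic modulus would need a big-integer package such as gmp.

inv_mod <- function(b, p) {
  # extended Euclidean algorithm: find t0 with b * t0 == 1 (mod p)
  b <- b %% p
  if (b == 0) stop("divide by zero mod p")
  r0 <- p; r1 <- b
  t0 <- 0; t1 <- 1
  while (r1 != 0) {
    q   <- r0 %/% r1
    tmp <- r0 - q * r1; r0 <- r1; r1 <- tmp
    tmp <- t0 - q * t1; t0 <- t1; t1 <- tmp
  }
  t0 %% p
}
(3 * inv_mod(4, 7)) %% 7   # "3 / 4" mod 7: inverse of 4 is 2, so the result is 6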
Rather than trying to roll this yourself, look for an existing library or package. I think Sage can do this, and certainly Mathematica, Maple, and similar commercial math tools can.

SVD for sparse matrix in R

I've got a sparse Matrix in R that's apparently too big for me to run as.matrix() on (though it's not super-huge either). The as.matrix() call in question is inside the svd() function, so I'm wondering if anyone knows a different implementation of SVD that doesn't require first converting to a dense matrix.
The irlba package has a very fast SVD implementation for sparse matrices.
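A minimal usage sketch (the matrix M below is just a random stand-in): irlba works directly on sparse Matrix objects, so no as.matrix() call is needed.

library(Matrix)
library(irlba)
M <- rsparsematrix(100000, 2000, density = 0.001)  # example dgCMatrix
fit <- irlba(M, nv = 5)        # truncated SVD: fit$d, fit$u, fit$v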
You can do a very impressive bit of sparse SVD in R using random projection as described in http://arxiv.org/abs/0909.4061
Here is some sample code:
# computes first k singular values of A with corresponding singular vectors
incore_stoch_svd <- function(A, k) {
  p <- 10  # may need a larger value here
  n <- dim(A)[1]
  m <- dim(A)[2]
  # random projection of A
  Y <- A %*% matrix(rnorm((k + p) * m), ncol = k + p)
  # the left part of the decomposition works for A (approximately)
  Q <- qr.Q(qr(Y))
  # taking that off gives us something small to decompose
  B <- t(Q) %*% A
  # decomposing B gives us singular values and right vectors for A
  s <- svd(B)
  U <- Q %*% s$u
  # and then we can put it all together for a complete result
  return(list(u = U, v = s$v, d = s$d))
}
So here's what I ended up doing. It's relatively straightforward to write a routine that dumps a sparse matrix (class dgCMatrix) to a text file in SVDLIBC's "sparse text" format, then call the svd executable, and read the three resultant text files back into R.
The catch is that it's pretty inefficient - it takes me about 10 seconds to read & write the files, but the actual SVD calculation takes only about 0.2 seconds or so. Still, this is of course way better than not being able to perform the calculation at all, so I'm happy. =)
rARPACK is the package you need. It works like a charm and is very fast because the heavy lifting is done in compiled C/C++ code.
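A quick sketch of the call (again with a random sparse stand-in M): svds() from rARPACK takes a sparse Matrix and the number of singular values wanted.

library(Matrix)
library(rARPACK)
M <- rsparsematrix(100000, 2000, density = 0.001)
res <- svds(M, k = 5)          # res$d, res$u, res$v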
