Crout LU decomposition in R

Just wondering if anybody would share an implementation of the Crout algorithm for an LU decomposition (A = L * U) in R. There is an lu() function in the pracma package, but it uses Doolittle's method instead.
> A
     [,1] [,2] [,3]
[1,]  -10   30   50
[2,]   -6   16   22
[3,]   -2    1   -5
> lu(A)
$L
     [,1] [,2] [,3]
[1,]  1.0  0.0    0
[2,]  0.6  1.0    0
[3,]  0.2  2.5    1
$U
     [,1] [,2] [,3]
[1,]  -10   30   50
[2,]    0   -2   -8
[3,]    0    0    5
Whereas for the Crout algorithm you'd get something like this instead:
$L
     [,1] [,2] [,3]
[1,]  -10    0    0
[2,]   -6   -2    0
[3,]   -2   -5    5
$U
     [,1] [,2] [,3]
[1,]    1   -3   -5
[2,]    0    1    4
[3,]    0    0    1
I've been googling for something like that for a while but haven't found any working implementation in R.
Thanks!

It is not difficult to convert some MATLAB code to an R implementation:
LUcrout <- function(A) {
    n <- nrow(A)
    L <- matrix(0, n, n); U <- matrix(0, n, n)
    for (i in 1:n) {
        L[i, 1] <- A[i, 1]
        U[i, i] <- 1
    }
    for (j in 2:n) {
        U[1, j] <- A[1, j] / L[1, 1]
    }
    for (i in 2:n) {
        for (j in 2:i) {
            L[i, j] <- A[i, j] - L[i, 1:(j-1)] %*% U[1:(j-1), j]
        }
        if (i < n) {
            for (j in (i+1):n) {
                U[i, j] <- (A[i, j] - L[i, 1:(i-1)] %*% U[1:(i-1), j]) / L[i, i]
            }
        }
    }
    return(list(L = L, U = U))
}
Applied to your example matrix A, it returns:
A = matrix(c(-10, 30, 50,
             -6, 16, 22,
             -2,  1, -5), 3, 3, byrow = TRUE)
> LUcrout(A)
$L
     [,1] [,2] [,3]
[1,]  -10    0    0
[2,]   -6   -2    0
[3,]   -2   -5    5
$U
     [,1] [,2] [,3]
[1,]    1   -3   -5
[2,]    0    1    4
[3,]    0    0    1
which is not the same as what you suggested, but is identical to what MATLAB returns (see the 'Crout-matrix-decomposition' page on Wikipedia).
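As a quick sanity check (an addition, not part of the original answer), the factors returned by LUcrout do reconstruct A:
LU <- LUcrout(A)
LU$L %*% LU$U    # reproduces A
#      [,1] [,2] [,3]
# [1,]  -10   30   50
# [2,]   -6   16   22
# [3,]   -2    1   -5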
LU decompositions are not unique. What advantages does the Crout algorithm have, in your opinion?

Take the Doolittle decomposition of the transpose of A, swap L and U, and take their transposes.
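A minimal sketch of that trick, assuming pracma's lu() (which returns the Doolittle factors $L and $U shown above):
library(pracma)

crout_via_doolittle <- function(A) {
    d <- lu(t(A))                  # Doolittle: t(A) = d$L %*% d$U, unit diagonal on d$L
    list(L = t(d$U), U = t(d$L))   # A = t(d$U) %*% t(d$L): lower * unit-upper, i.e. Crout
}

crout_via_doolittle(A)             # matches LUcrout(A) above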

Related

How to write the function to create a diagonal matrix from upper right to lower left in R?

I would like to write a function whose input is a square matrix and which returns a square matrix in which the numbers on the diagonal running from the upper right corner down to the lower left corner are preserved and all other numbers are zero.
For example, suppose A is the following 4 x 4 matrix (sorry, I do not know how to type the matrix expression):
[1,2,3,4]
[5,6,7,8]
[9,10,11,12]
[13,14,15,16]
How can I write a function in R without any loops to transform the matrix into this?
[0,0,0,4]
[0,0,7,0]
[0,10,0,0]
[13,0,0,0]
This feels like a gymnastics exercise...
xy <- matrix(1:16, ncol = 4, byrow = TRUE)
xy <- apply(xy, MARGIN = 1, rev)
xy[lower.tri(xy)] <- 0
xy[upper.tri(xy)] <- 0
t(apply(xy, MARGIN = 1, rev))
[,1] [,2] [,3] [,4]
[1,] 0 0 0 4
[2,] 0 0 7 0
[3,] 0 10 0 0
[4,] 13 0 0 0
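As an aside (not part of the original answers), the anti-diagonal is exactly the set of cells where the row index plus the column index equals n + 1, so row() and col() give a loop-free mask in one expression:
xy <- matrix(1:16, ncol = 4, byrow = TRUE)
xy * (row(xy) + col(xy) == ncol(xy) + 1)   # logical mask coerced to 0/1
#      [,1] [,2] [,3] [,4]
# [1,]    0    0    0    4
# [2,]    0    0    7    0
# [3,]    0   10    0    0
# [4,]   13    0    0    0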
Here is another option.
mat <- matrix(1:16, 4, byrow = TRUE)
idx <- cbind(seq_len(nrow(mat)),
             ncol(mat):1)
values <- mat[idx]
mat <- matrix(0, nrow = dim(mat)[1], ncol = dim(mat)[2])
mat[idx] <- values
mat
# [,1] [,2] [,3] [,4]
#[1,] 0 0 0 4
#[2,] 0 0 7 0
#[3,] 0 10 0 0
#[4,] 13 0 0 0
A non-apply solution using some maths to generate the indices, borrowing xy from @Roman:
xy <- matrix(1:16, ncol = 4, byrow = TRUE)
ind <- nrow(xy)
xy[setdiff(1:length(xy), seq(ind, by = ind -1, length.out = ind))] <- 0
xy
# [,1] [,2] [,3] [,4]
#[1,] 0 0 0 4
#[2,] 0 0 7 0
#[3,] 0 10 0 0
#[4,] 13 0 0 0
Trying it on a 5 x 5 matrix:
xy <- matrix(1:25, 5, byrow = TRUE)
ind <- nrow(xy)
xy[setdiff(1:length(xy), seq(ind, by = ind -1, length.out = ind))] <- 0
xy
# [,1] [,2] [,3] [,4] [,5]
#[1,] 0 0 0 0 5
#[2,] 0 0 0 9 0
#[3,] 0 0 13 0 0
#[4,] 0 17 0 0 0
#[5,] 21 0 0 0 0
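For reference (an added note, not from the original answer), the seq() call generates the column-major positions of the anti-diagonal, which are exactly the entries the code keeps:
n <- 4
seq(n, by = n - 1, length.out = n)
# [1]  4  7 10 13   # column-major indices of the anti-diagonal in a 4 x 4 matrix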
This answer takes a slightly different approach than the other answers. Instead of trying to zero out everything except the anti-diagonal, we can just build the anti-diagonal by itself:
m <- matrix(rep(0, 16), nrow = 4, byrow = TRUE)
for (i in 0:15) {
    row <- floor(i / 4)
    col <- i %% 4
    if (i == 3 + (row * 3)) {
        m[row + 1, col + 1] <- i + 1
    }
}
m
[,1] [,2] [,3] [,4]
[1,] 0 0 0 4
[2,] 0 0 7 0
[3,] 0 10 0 0
[4,] 13 0 0 0
I just thought of a way to reverse the original diag function from base R.
You can see its source by just typing diag in the console.
Here is the change I made in my diag_reverse:
y <- x[((m - 1L):0L * (dim(x)[1L])) + (1L:m)] # m is min(dim(x))
And here's the complete function (I kept all the code except that one line):
diag_reverse <- function (x = 1, nrow, ncol, names = TRUE)
{
    if (is.matrix(x)) {
        if (nargs() > 1L && (nargs() > 2L || any(names(match.call()) %in%
            c("nrow", "ncol"))))
            stop("'nrow' or 'ncol' cannot be specified when 'x' is a matrix")
        if ((m <- min(dim(x))) == 0L)
            return(vector(typeof(x), 0L))
        y <- x[((m - 1L):0L * (dim(x)[1L])) + (1L:m)] # HERE I made the change
        if (names) {
            nms <- dimnames(x)
            if (is.list(nms) && !any(vapply(nms, is.null, NA)) &&
                identical((nm <- nms[[1L]][seq_len(m)]), nms[[2L]][seq_len(m)]))
                names(y) <- nm
        }
        return(y)
    }
    if (is.array(x) && length(dim(x)) != 1L)
        stop("'x' is an array, but not one-dimensional.")
    if (missing(x))
        n <- nrow
    else if (length(x) == 1L && nargs() == 1L) {
        n <- as.integer(x)
        x <- 1
    }
    else n <- length(x)
    if (!missing(nrow))
        n <- nrow
    if (missing(ncol))
        ncol <- n
    .Internal(diag(x, n, ncol))
}
Then we can call it:
m <- matrix(1:16, nrow = 4, ncol = 4, byrow = TRUE)
diag_reverse(m)
#[1] 4 7 10 13
I'll test it on other matrices to see if it always gives the correct answer.
The apply family are really just loops with a bow tie.
Here is a way to do it without apply, with some input checking; it should work on a matrix of any size.
off_diag <- function(X)
{
    if (!is.matrix(X)) stop('Argument is not a matrix')
    n <- nrow(X)
    if (ncol(X) != n) stop('Matrix is not square')
    if (n < 2) return(X)
    Y <- X * c(0, rep(rep(c(0, 1), c(n - 2, 1)), n), rep(0, n - 1))
    return(Y)
}
mat <- matrix(1:16, 4, byrow = TRUE)
off_diag(mat)
# [,1] [,2] [,3] [,4]
# [1,] 0 0 0 4
# [2,] 0 0 7 0
# [3,] 0 10 0 0
# [4,] 13 0 0 0
Edit: improvement
I realised my function will fail if there are NAs, since NA * 0 is NA, and it will not work on character matrices either; it also doesn't check that the matrix has numeric mode. So instead I use the same setup to build a logical index vector. The new version handles numeric vectors, character vectors and NAs:
minor_diag <- function(X)
{
    if (!is.matrix(X)) stop('Argument is not a matrix')
    n <- nrow(X)
    if (ncol(X) != n) stop('Matrix is not square')
    if (n < 2) return(X)
    index <- c(TRUE, rep(rep(c(TRUE, FALSE), c(n - 2, 1)), n), rep(TRUE, n - 1))
    X[index] <- 0
    return(X)
}
mat <- matrix(letters[1:16], 4, byrow = TRUE)
minor_diag(mat)
## [,1] [,2] [,3] [,4]
## [1,] "0" "0" "0" "d"
## [2,] "0" "0" "g" "0"
## [3,] "0" "j" "0" "0"
## [4,] "m" "0" "0" "0"
minor_diag(matrix(NA,2,2))
## [,1] [,2]
## [1,] 0 NA
## [2,] NA 0
A one-liner without loops: reverse the rows, take the ordinary diagonal, rebuild a diagonal matrix from it, and reverse the rows back.
#setup
n <- 5
A <- matrix(1:(n^2), n)
#solution
diag(diag(A[n:1,]))[n:1,]

R chol and positive semi-definite matrix

I have the following matrix:
j <- matrix(c(1,1,.5,1,1,.5,.5,.5,1), nrow=3, ncol=3)
This matrix is positive semi-definite, because all of its eigenvalues are >= 0.
Source: https://math.stackexchange.com/questions/40849/how-to-check-if-a-symmetric-4-times4-matrix-is-positive-semi-definite
> eigen(j, symmetric = TRUE)
$values
[1] 2.3660254 0.6339746 0.0000000

$vectors
           [,1]       [,2]          [,3]
[1,] -0.6279630 -0.3250576  7.071068e-01
[2,] -0.6279630 -0.3250576 -7.071068e-01
[3,] -0.4597008  0.8880738 -1.942890e-15
However, the Cholesky decomposition fails...
> chol(j)
Error in chol.default(j) :
the leading minor of order 2 is not positive definite
I also adapted some code from the internet...
cholesky_matrix <- function(A){
    # http://rosettacode.org/wiki/Cholesky_decomposition#C
    L <- matrix(0, nrow = nrow(A), ncol = ncol(A))
    colnames(L) <- colnames(A)
    rownames(L) <- rownames(A)
    m <- ncol(L)
    for (i in 1:m) {
        for (j in 1:i) {
            s <- 0
            if (j > 1) {
                for (k in 1:(j-1)) {
                    s <- s + L[i,k] * L[j,k]
                }
            }
            if (i == j) {
                L[i,j] <- sqrt(A[i,i] - s)
            } else {
                L[i,j] <- (1 / L[j,j]) * (A[i,j] - s)
            }
        }
    }
    return(L)
}
Which also "fails" with NaNs.
> cholesky_matrix(j)
     [,1] [,2] [,3]
[1,]  1.0    0    0
[2,]  1.0    0    0
[3,]  0.5  NaN  NaN
Does anyone have any idea what is going on? Why is my decomposition failing?
The eigenvalues of your matrix are
> eigen(j)
$values
[1] 2.366025e+00 6.339746e-01 4.440892e-16
the last of which is effectively zero, within the limits of numerical precision. Per ?chol:
Compute the Choleski factorization of a real symmetric positive-definite square matrix.
(emphasis mine)
That said, you can still get the decomposition by setting pivot=TRUE, which is able to handle semi-definiteness:
> chol(j, pivot=TRUE)
     [,1]      [,2] [,3]
[1,]    1 0.5000000    1
[2,]    0 0.8660254    0
[3,]    0 0.0000000    0
attr(,"pivot")
[1] 1 3 2
attr(,"rank")
[1] 2
Warning message:
In chol.default(j, pivot = TRUE) :
  the matrix is either rank-deficient or indefinite
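Note that with pivoting the factor corresponds to the row/column-permuted matrix, so you have to undo the pivot when checking it. A quick sketch of the reconstruction (pivoting details as described in ?chol):
R <- chol(j, pivot = TRUE)
piv <- attr(R, "pivot")
crossprod(R[, order(piv)])   # un-pivot the columns, then t(.) %*% .; recovers j
#      [,1] [,2] [,3]
# [1,]  1.0  1.0  0.5
# [2,]  1.0  1.0  0.5
# [3,]  0.5  0.5  1.0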

Raising a Matrix to a Power

I'm trying to create a function that will give me the value of a matrix once it has been raised to a power. This is what I've done so far:
A <- matrix(c(1,2,3,4,0,1,2,3,0,0,1,2,0,0,0,1), nrow = 4, ncol = 4)
power <- function(A, n){
    if (n == 0) {
        return(diag(4))
    } else {
        return(A %*% A^(n-1))
    }
}
OUTCOME:
> power(A,4)
     [,1] [,2] [,3] [,4]
[1,]    1    0    0    0
[2,]   10    1    0    0
[3,]   46   10    1    0
[4,]  146   46   10    1
This is giving a different value from what my calculator gets and I'm trying to figure what I'm doing wrong. Any help is appreciated!
We could use %^% from library(expm)
library(expm)
A%*%(A%^%3)
Using this in a function
power <- function(A, n){
    if (n == 0) {
        return(diag(4))
    } else {
        return(A %*% (A %^% (n-1)))
    }
}
power(A,4)
#     [,1] [,2] [,3] [,4]
#[1,]    1    0    0    0
#[2,]    8    1    0    0
#[3,]   36    8    1    0
#[4,]  120   36    8    1
According to the description in ?matpow
Compute the k-th power of a matrix. Whereas ‘x^k’ computes
element wise powers, ‘x %^% k’ corresponds to k - 1 matrix
multiplications, ‘x %*% x %*% ... %*% x’.
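That distinction is the source of the bug in the question: A^(n-1) only raises each entry of A to a power. A quick illustration (added here for clarity):
library(expm)
A <- matrix(c(1,2,3,4,0,1,2,3,0,0,1,2,0,0,0,1), nrow = 4, ncol = 4)
A^2        # element-wise squares (what the original power() effectively used)
A %^% 2    # true matrix product A %*% A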
Or a base R option is Reduce with %*% (but this would be slow compared to %^%).
Reduce(`%*%`,replicate(4, A, simplify=FALSE))
In a function,
power1 <- function(A, n){
    if (n == 0) {
        return(diag(4))
    } else {
        Reduce(`%*%`, replicate(n, A, simplify = FALSE))
    }
}
power1(A,4)
#     [,1] [,2] [,3] [,4]
#[1,]    1    0    0    0
#[2,]    8    1    0    0
#[3,]   36    8    1    0
#[4,]  120   36    8    1
You have a problem with the way you are computing your matrix product: A^(n-1) raises each element of A to a power instead of performing matrix multiplication. I use a while loop inside your power() function instead. It simply multiplies the input matrix against itself n times and then returns the result. Here is a base R solution which is a continuation of the direction in which you were already going.
A <- matrix(c(1,2,3,4,0,1,2,3,0,0,1,2,0,0,0,1), nrow = 4, ncol = 4)
power <- function(A, n){
    B <- diag(nrow(A))
    if (n == 0) {
        return(diag(nrow(A)))
    } else {
        while (n > 0) {
            B <- A %*% B
            n <- n - 1
        }
        return(B)
    }
}
> power(A, 4)
     [,1] [,2] [,3] [,4]
[1,]    1    0    0    0
[2,]    8    1    0    0
[3,]   36    8    1    0
[4,]  120   36    8    1
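As a further sketch (my addition, not from the answer): if n can be large, exponentiation by squaring needs only O(log n) matrix multiplications instead of n:
power_by_squaring <- function(A, n) {
    result <- diag(nrow(A))
    while (n > 0) {
        if (n %% 2 == 1) result <- result %*% A   # fold in A when the current bit of n is set
        A <- A %*% A                              # square for the next bit
        n <- n %/% 2
    }
    result
}
power_by_squaring(A, 4)   # same result as power(A, 4)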
I assume you want actual matrix multiplication. You have to multiply the matrix by itself repeatedly, as many times as the power requires, so you can do two things:
Write code to multiply the matrix.
Loop that multiplication.

R index matrix with vector / create index matrix from index vector

I am looking for an easier way to do the following:
m <- matrix(0, nrow = 3, 3)
v <- c(1, 3, 2)
for (i in 1:nrow(m)) {
    m[[i, v[i]]] <- 1
}
The above code creates the following index matrix:
[,1] [,2] [,3]
[1,] 1 0 0
[2,] 0 0 1
[3,] 0 1 0
There surely must be a better way to do this?!
One way to do it without pre-defining the matrix would be to use outer:
num.col <- 3
outer(v, seq_len(num.col), "==") * 1
# [,1] [,2] [,3]
# [1,] 1 0 0
# [2,] 0 0 1
# [3,] 0 1 0
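Another common idiom worth mentioning (an aside, not from the original answer) is indexing with a two-column matrix built by cbind, which assigns the ones directly:
v <- c(1, 3, 2)
m <- matrix(0, nrow = length(v), ncol = 3)
m[cbind(seq_along(v), v)] <- 1   # row i, column v[i]
m
#      [,1] [,2] [,3]
# [1,]    1    0    0
# [2,]    0    0    1
# [3,]    0    1    0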

R Combining Matrices

I have two matrices of the same dimensions, both square: one is binary (zero or one) and the other is an integer matrix.
I'd like an efficient way of combining them in a specific way, without iterating over each element.
The way I'd like to combine them is to produce a resultant matrix from matrix A and matrix B that, for each element, takes the lowest number that is not zero.
Can anyone think of a trick in R to achieve this? I've tried to do it mathematically but keep coming up short; I was wondering if there was a way to overlay the matrices with a conditional statement.
matA <- matrix(-8:7, 4, 4); set.seed(123)
matB <- matrix(sample(0:1, 16, repl = TRUE), 4, 4)
matC <- matrix(NA, nrow(matA), ncol(matA))
matC[] <- pmin(matA, matB)
matC[matB == 0] <- matA[matB == 0]
matB
#-----------
     [,1] [,2] [,3] [,4]
[1,]    0    1    1    1
[2,]    1    0    0    1
[3,]    0    1    1    0
[4,]    1    1    0    1
matC
#---------
     [,1] [,2] [,3] [,4]
[1,]   -8   -4    0    1
[2,]   -7   -3    1    1
[3,]   -6   -2    1    6
[4,]   -5   -1    3    1
flodel's method produces:
> ifelse(matB == 0, matB, pmin(matA, matB))
     [,1] [,2] [,3] [,4]
[1,]    0   -4    0    1
[2,]   -7    0    0    1
[3,]    0   -2    1    0
[4,]   -5   -1    0    1
mnel's method produces:
> (matB * !matA) + matA
     [,1] [,2] [,3] [,4]
[1,]   -8   -4    1    4
[2,]   -7   -3    1    5
[3,]   -6   -2    2    6
[4,]   -5   -1    3    7
My guess is:
ifelse(A == 0, B, pmin(A, B))
or maybe
ifelse(A == 0, B, ifelse(B == 0, A, pmin(A, B)))
If this is not what you are looking for, please clarify (and maybe provide an example).
Following @A_Skeleton's comment on scaling, you could break your matrix into chunks:
mnel <- function(matA, matB) {
    (matB * !matA) + matA
}

# mcombine takes a function as its `method` argument
mcombine <- function(matA, matB, method) {
    chunkSize <- 10000
    matC <- matrix(0, nrow(matA), ncol(matA))
    nChunks <- nrow(matA) %/% chunkSize
    for (i in seq_len(nChunks)) {
        curRange <- (chunkSize * (i - 1) + 1):(i * chunkSize)
        matC[curRange, ] <- method(matA[curRange, ], matB[curRange, ])
    }
    # handle the rows left over when the dimensions don't divide exactly into chunks
    if (nChunks * chunkSize < nrow(matA)) {
        lastRange <- (nChunks * chunkSize + 1):nrow(matA)
        matC[lastRange, ] <- method(matA[lastRange, ], matB[lastRange, ])
    }
    matC
}
# Using mnel's method:
matC <- mcombine(matA, matB, mnel)
# Using mnel's method:
matC <- mcombine(matA, matB, mnel)
