SVD calculation in R

How do I efficiently reconstruct the original matrix from its singular value decomposition (SVD) in R?
Using A = svd$u %*% svd$d %*% t(svd$v) is not an efficient way to get the matrix A back (and svd$d is returned as a vector, so it first has to be expanded into a diagonal matrix).

Try svd(A)$u %*% diag(svd(A)$d) %*% t(svd(A)$v).
set.seed(12345)
A <- matrix(data=runif(n=9, min=1, max=9), nrow=3)
A
[,1] [,2] [,3]
[1,] 6.767231 8.088997 3.600763
[2,] 8.006186 4.651848 5.073795
[3,] 7.087859 2.330974 6.821642
s <- svd(A)
D <- diag(s$d)
s$u %*% D %*% t(s$v)
[,1] [,2] [,3]
[1,] 6.767231 8.088997 3.600763
[2,] 8.006186 4.651848 5.073795
[3,] 7.087859 2.330974 6.821642

Improving upon the answer by @MYaseen208:
(s$u) %*% (t(s$v) * s$d)
This has one fewer matrix multiplication (an O(n^3) operation): multiplying t(s$v) element-wise by the vector s$d scales the rows of t(s$v), which is exactly diag(s$d) %*% t(s$v) without ever forming the diagonal matrix.
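As a quick sanity check (a minimal sketch reusing the A and s objects from above), both reconstructions return the original matrix:
s <- svd(A)
full <- s$u %*% diag(s$d) %*% t(s$v)   # reconstruction with an explicit diagonal matrix
fast <- s$u %*% (t(s$v) * s$d)         # reconstruction without forming diag(s$d)
all.equal(A, full)   # TRUE
all.equal(A, fast)   # TRUE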

Related

How to write solve(X'X) in R more succinctly?

I think I remember seeing a shorthand for solve(t(X) %*% X) in R, but I can't remember what it was. Is there something like that? Just a way to do that in fewer keystrokes?
Maybe you're thinking of crossprod()? It's not fewer keystrokes, but is a bit more elegant and, according to its help file, it can be slightly faster than the naive construction.
x <- matrix(rnorm(9), ncol=3)
solve(crossprod(x))
# [,1] [,2] [,3]
# [1,] 1.34638151 -0.02957435 0.8010735
# [2,] -0.02957435 0.32780020 -0.1786295
# [3,] 0.80107345 -0.17862950 1.4533671
solve(t(x) %*% x)
# [,1] [,2] [,3]
# [1,] 1.34638151 -0.02957435 0.8010735
# [2,] -0.02957435 0.32780020 -0.1786295
# [3,] 0.80107345 -0.17862950 1.4533671
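If you want to check the speed claim yourself, here is a rough timing sketch (the matrix size is an arbitrary choice; actual timings depend on your BLAS):
x <- matrix(rnorm(2000 * 500), ncol = 500)
system.time(crossprod(x))             # t(x) %*% x computed in a single optimised call
system.time(t(x) %*% x)               # explicit transpose followed by %*%
all.equal(crossprod(x), t(x) %*% x)   # TRUE: both give the same matrix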

Performing element-wise standard deviation in R with two matrices

As the title suggests, I am looking for a way to get the standard deviation per element from two separate matrices. However, I am quite the beginner at R and I can't seem to figure out how to do this. Below is an example of what I am trying to accomplish with a small sample of my data (first three rows).
I have two matrices with coordinates (df143 and df143_2, or matrices A and B if you will):
A:
[1,] 21.729504 -55.66055 -37.26477
[2,] 39.445610 -67.67449 -32.19464
[3,] 57.604027 -54.16734 -28.48679
B:
[1,] 21.706865 -55.50722 -37.57840
[2,] 39.553314 -67.68414 -31.95995
[3,] 57.286247 -54.13008 -28.44446
I am looking for a matrix output that shows the standard deviation per element of the two combined matrices.
Or you can do it in base R:
matrix(mapply(function(x, y) sd(c(x, y)), A, B), ncol = ncol(A))
# [,1] [,2] [,3]
#[1,] 0.01600819 0.10842068 0.22176990
#[2,] 0.07615823 0.00682358 0.16595089
#[3,] 0.22470439 0.02634680 0.02993183
I believe this is what you're looking to do:
library(abind)
a <- c(21.729504, -55.66055, -37.26477, 39.445610, -67.67449, -32.19464, 57.604027, -54.16734, -28.48679)
a <- matrix(a, ncol=3, byrow=TRUE)
b <- c(21.706865, -55.50722, -37.57840, 39.553314, -67.68414, -31.95995, 57.286247, -54.13008, -28.44446)
b <- matrix(b, ncol=3, byrow=TRUE)
m <- abind(a, b, along=3)
apply(m, 1:2, sd)
## [,1] [,2] [,3]
## [1,] 0.01600819 0.10842068 0.22176990
## [2,] 0.07615823 0.00682358 0.16595089
## [3,] 0.22470439 0.02634680 0.02993183
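A further option, not in the original answers: for exactly two observations the sample standard deviation reduces to |x - y| / sqrt(2) (the mean of two values is their midpoint, so each deviation is |x - y| / 2), which means the whole computation can be vectorised with plain matrix arithmetic:
abs(a - b) / sqrt(2)   # same values as the mapply/abind results above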

How to generate a symmetric random matrix?

I want to generate a random matrix which should be symmetric.
I have tried this:
matrix(sample(0:1, 25, TRUE), 5, 5)
but it is not necessarily symmetric.
How can I do that?
Another quite interesting option is based on the following mathematical fact: any matrix multiplied by its own transpose is symmetric.
> A <- matrix(runif(25), 5, 5)
> A %*% t(A)
[,1] [,2] [,3] [,4] [,5]
[1,] 1.727769 1.0337816 1.2195505 1.4661507 1.1041355
[2,] 1.033782 1.0037048 0.7368944 0.9073632 0.7643080
[3,] 1.219551 0.7368944 1.8383986 1.3309980 0.9867812
[4,] 1.466151 0.9073632 1.3309980 1.3845322 1.0034140
[5,] 1.104135 0.7643080 0.9867812 1.0034140 0.9376534
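As a side note (not part of the original answer), base R's tcrossprod() computes the same product A %*% t(A) in a single call:
A <- matrix(runif(25), 5, 5)
S <- tcrossprod(A)   # equivalent to A %*% t(A)
isSymmetric(S)       # TRUE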
Try this from the Matrix package:
library(Matrix)
x <- Matrix(rnorm(9), 3)
x
3 x 3 Matrix of class "dgeMatrix"
[,1] [,2] [,3]
[1,] -0.9873338 0.8965887 -0.6041742
[2,] -0.3729662 -0.5882091 -0.2383262
[3,] 2.1263985 -0.3550972 0.1067264
X <- forceSymmetric(x)
X
3 x 3 Matrix of class "dsyMatrix"
[,1] [,2] [,3]
[1,] -0.9873338 0.8965887 -0.6041742
[2,] 0.8965887 -0.5882091 -0.2383262
[3,] -0.6041742 -0.2383262 0.1067264
If you don't want to use a package:
n <- 3
x <- matrix(rnorm(n * n), n)
ind <- lower.tri(x)
x[ind] <- t(x)[ind]   # copy the upper triangle into the lower triangle
x
I like this one:
n <- 3
aux <- matrix(NA, nrow = n, ncol = n)
for (i in 1:n) {
  for (j in i:n) {
    aux[i, j] <- sample(1:n, 1)
    aux[j, i] <- aux[i, j]   # mirror across the diagonal to keep the matrix symmetric
  }
}
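Coming back to the 0/1 matrix from the question, the lower.tri() trick above applies directly (a small sketch, not from the original answers):
m <- matrix(sample(0:1, 25, TRUE), 5, 5)
m[lower.tri(m)] <- t(m)[lower.tri(m)]   # mirror the upper triangle into the lower
isSymmetric(m)                          # TRUE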

Choleski Decomposition in R to get the inverse when pivot = TRUE

I am using the Cholesky decomposition to compute the inverse of a matrix that is positive semidefinite. However, when my matrix becomes extremely large and contains zeros, it is no longer (numerically, from the computer's point of view) positive definite. To get around this problem I use the pivot = TRUE option of chol() in R. However, as you will see below, the two versions return the same output but with the rows and columns of the matrix rearranged. Is there a way (or a transformation) to make them the same? Here is my code:
X = matrix(rnorm(9), nrow = 3)
A = X %*% t(X)

inv1 = function(A) {
  Q = chol(A)
  L = t(Q)
  inverse = solve(Q) %*% solve(L)
  return(inverse)
}

inv2 = function(A) {
  Q = chol(A, pivot = TRUE)
  L = t(Q)
  inverse = solve(Q) %*% solve(L)
  return(inverse)
}
Which when run results in:
> inv1(A)
[,1] [,2] [,3]
[1,] 9.956119 -8.187262 -4.320911
[2,] -8.187262 7.469862 3.756087
[3,] -4.320911 3.756087 3.813175
>
> inv2(A)
[,1] [,2] [,3]
[1,] 7.469862 3.756087 -8.187262
[2,] 3.756087 3.813175 -4.320911
[3,] -8.187262 -4.320911 9.956119
Is there a way to get the two answers to match? I want inv2() to return the answer from inv1().
That is explained in ?chol: the column permutation is returned as an attribute.
inv2 <- function(A) {
  Q <- chol(A, pivot = TRUE)
  Q <- Q[, order(attr(Q, "pivot"))]
  Qi <- solve(Q)
  Qi %*% t(Qi)
}
inv2(A)
solve(A) # Identical
Typically, this kind of pivoting can be undone with a permutation matrix, for example:
M <- matrix(rnorm(9), 3)
M
[,1] [,2] [,3]
[1,] 1.2109251 -0.58668426 -0.4311855
[2,] -0.8574944 0.07003322 -0.6112794
[3,] 0.4660271 -0.47364400 -1.6554356
library(Matrix)
pm1 <- as(as.integer(c(2,3,1)), "pMatrix")
M %*% pm1
[,1] [,2] [,3]
[1,] -0.4311855 1.2109251 -0.58668426
[2,] -0.6112794 -0.8574944 0.07003322
[3,] -1.6554356 0.4660271 -0.47364400
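The two answers connect: the pivot attribute returned by chol(A, pivot = TRUE) can itself be turned into a pMatrix, and multiplying by it reorders the columns exactly as Q[, order(attr(Q, "pivot"))] does. A small sketch, assuming A is the positive definite matrix from the question:
library(Matrix)
Q   <- chol(A, pivot = TRUE)
piv <- attr(Q, "pivot")
R1  <- Q[, order(piv)]                                   # undo the pivot by reordering columns
R2  <- as.matrix(Q %*% as(as.integer(piv), "pMatrix"))   # or by a permutation matrix
all.equal(R1, R2, check.attributes = FALSE)   # TRUE
all.equal(t(R1) %*% R1, A)                    # TRUE: the factorisation recovers A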

Inverse of matrix in R

I was wondering: what is your recommended way to compute the inverse of a matrix?
The approaches I have found so far do not seem satisfactory. For example,
> c=rbind(c(1, -1/4), c(-1/4, 1))
> c
[,1] [,2]
[1,] 1.00 -0.25
[2,] -0.25 1.00
> inv(c)
Error: could not find function "inv"
> solve(c)
[,1] [,2]
[1,] 1.0666667 0.2666667
[2,] 0.2666667 1.0666667
> solve(c)*c
[,1] [,2]
[1,] 1.06666667 -0.06666667
[2,] -0.06666667 1.06666667
> qr.solve(c)*c
[,1] [,2]
[1,] 1.06666667 -0.06666667
[2,] -0.06666667 1.06666667
Thanks!
solve(c) does give the correct inverse. The issue with your code is that you are using the wrong operator for matrix multiplication. You should use solve(c) %*% c to invoke matrix multiplication in R.
R performs element by element multiplication when you invoke solve(c) * c.
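For example, reusing the matrix from the question (round() is only there to suppress floating-point noise):
c <- rbind(c(1, -1/4), c(-1/4, 1))
round(solve(c) %*% c, 10)   # the 2 x 2 identity matrix, as expected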
You can use the function ginv() (Moore-Penrose generalized inverse) from the MASS package.
Note that if you care about speed and do not need to worry about singularities, solve() should be preferred to ginv() because it is much faster, as you can check:
require(MASS)
mat <- matrix(rnorm(1e6), nrow = 1e3, ncol = 1e3)

t0 <- proc.time()
inv0 <- ginv(mat)    # Moore-Penrose generalized inverse (SVD-based)
proc.time() - t0

t1 <- proc.time()
inv1 <- solve(mat)   # ordinary inverse
proc.time() - t1
Use solve(matrix) for large matrices: in my experience, for matrices larger than about 1820 x 1820, inv() from the matlib package and ginv() from MASS either take far longer or fail outright because of RAM limits.
