Compute the null space of a sparse matrix in R

I found functions (null or nullspace) to find the null space of a regular matrix in R, but I couldn't find any function or package that works on a sparse matrix (sparseMatrix).
Does anybody know how to do this?

If you take a look at the code of ggm::null, you will see that it is based on the QR decomposition of the input matrix.
On the other hand, the Matrix package provides its own method to compute the QR decomposition of a sparse matrix.
For example:
library(Matrix)
A  <- matrix(rep(0:1, 3), 3, 2)
As <- Matrix(A, sparse = TRUE)
# The columns of the complete Q factor beyond the rank of A span the
# orthogonal complement of its column space:
r <- qr(A)$rank
qr.Q(qr(A),  complete = TRUE)[, -seq_len(r), drop = FALSE]
qr.Q(qr(As), complete = TRUE)[, -seq_len(r), drop = FALSE]
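Wrapped up as a reusable helper, the same idea looks like this (a sketch; the function name null_basis is my own, and I use Matrix::rankMatrix to get the numerical rank):

```r
library(Matrix)

# Sketch of a helper (not a published API): an orthonormal basis for
# the orthogonal complement of col(M) -- what ggm::null returns --
# taken from the complete Q factor of qr(M).
null_basis <- function(M) {
  r <- as.integer(rankMatrix(M))     # numerical rank
  Q <- qr.Q(qr(M), complete = TRUE)  # full orthogonal factor
  Q[, seq_len(ncol(Q)) > r, drop = FALSE]
}

A <- matrix(rep(0:1, 3), 3, 2)  # rank 2, so the complement is 1-dimensional
N <- null_basis(A)
max(abs(crossprod(N, A)))       # numerically zero
```

The same call works when M is a sparseMatrix, since qr() then dispatches to the Matrix package's sparse QR; note, though, that the sparse Q can differ from the dense Q by a fill-reducing permutation (see the sparse-QR question further down this page).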


R pnorm function with sparse matrix

I would like to find the p-values of a large sparse matrix. All the elements in this matrix are standard normal z-scores. I want to use the pnorm function, but pnorm does not support sparse matrices. Other than converting the sparse matrix to a full matrix, is there a more efficient way?
Any suggestions are appreciated!
If it is a sparse matrix, all the zero entries map to the single value pnorm(0); what remains is to calculate pnorm for the non-zero values, which you can do. For example, a sparse matrix:
library(Matrix)
data <- rnorm(1e5)
zero_index <- sample(1e5, 9e4)  # positions to zero out
data[zero_index] <- 0
mat <- matrix(data, ncol = 100)
mat_sparse <- Matrix(mat, sparse = TRUE)
Create a matrix prefilled with pnorm(0), then overwrite only the non-zero positions:
mat_pnorm <- matrix(pnorm(0), nrow = nrow(mat_sparse), ncol = ncol(mat_sparse))
nzData <- summary(mat_sparse)  # (i, j, x) triplets of the non-zero entries
mat_pnorm[as.matrix(nzData[, 1:2])] <- pnorm(nzData$x)
all.equal(mat_pnorm, pnorm(mat))
[1] TRUE
You did not specify how you would like the p-values, but you can easily have them cast into a vector instead of the matrix used above.
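The same idea in a slightly more compact form (a sketch; the function name sparse_pnorm is my own, and it assumes a dgCMatrix-style sparse input so that summary() yields row/column/value triplets):

```r
library(Matrix)

# Sketch: p-values for a sparse matrix of z-scores. Zero cells all
# share the value pnorm(0) = 0.5, so prefill with it and overwrite
# only the non-zero positions. The result is necessarily dense.
sparse_pnorm <- function(S) {
  out <- matrix(pnorm(0), nrow = nrow(S), ncol = ncol(S))
  nz  <- summary(S)                 # (i, j, x) triplets
  out[cbind(nz$i, nz$j)] <- pnorm(nz$x)
  out
}

S <- Matrix(c(0, 1.96, -1, 0), 2, 2, sparse = TRUE)
sparse_pnorm(S)
```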

Invert singular matrices in R

I am trying to grasp the basic concept of invertible and non-invertible matrices.
I created a random non-singular square matrix
S <- matrix(rnorm(100, 0, 1), ncol = 10, nrow = 10)
I know that this matrix is positive definite (thus invertible) because when I decompose the matrix S into its eigenvalues, their product is positive.
eig_S <- eigen(S)
eig_S$values
[1] 3.0883683+0.000000i -2.0577317+1.558181i -2.0577317-1.558181i 1.6884120+1.353997i 1.6884120-1.353997i
[6] -2.1295086+0.000000i 0.1805059+1.942696i 0.1805059-1.942696i -0.8874465+0.000000i 0.8528495+0.000000i
solve(S)
According to this paper, we can compute the inverse of a non-singular matrix by its SVD too:
S = U %*% D %*% t(V)
(where U and V are eigenvectors and D eigenvalues, please do correct me if I am wrong).
The inverse then is S^-1 = V %*% solve(D) %*% t(U).
Indeed, I can run the formula in R:
s <- svd(S)
s$v %*% solve(diag(s$d)) %*% t(s$u)
Which produces exactly the same result as solve(S).
My first question is:
1) Does s$d indeed represent the eigenvalues of S? Because s$d and eig_S$values are quite different.
Now the second part,
If I create a singular matrix
I <- matrix(rnorm(100, 0, 1), ncol = 5, nrow = 20)
I <- I%*%t(I)
eig_I <- eigen(I)
eig_I$values
[1] 3.750029e+01 2.489995e+01 1.554184e+01 1.120580e+01 8.674039e+00 3.082593e-15 5.529794e-16 3.227684e-16
[9] 2.834454e-16 5.876634e-17 -1.139421e-18 -2.304783e-17 -6.636508e-17 -7.309336e-17 -1.744084e-16 -2.561197e-16
[17] -3.075499e-16 -4.150320e-16 -7.164553e-16 -3.727682e-15
The solve function will produce an error
solve(I)
system is computationally singular: reciprocal condition number =
1.61045e-19
So, again according to the same paper we can use the SVD
i <- svd(I)
solve(i$u %*% diag(i$d) %*% t(i$v))
which produces the same error.
Then I tried to use the Cholesky decomposition for matrix inversion
Conj(t(I))%*%solve(I%*%Conj(t(I)))
and again I get the same error.
Could someone please explain where I am using the equations wrong?
I know that for the matrix I%*%Conj(t(I)), the determinant of the eigenvalue matrix is positive, but the matrix is not of full rank due to the initial multiplication that I did.
j <- eigen(I%*%Conj(t(I)))
det(diag(j$values))
[1] 3.17708e-196
qr(I %*% Conj(t(I)))$rank
[1] 5
UPDATE 1: Following the comments below, and after going through the paper/Wikipedia page again, I used these two pieces of code. They produce some results, but I am not sure about their validity; the first seems more believable. The SVD solution:
i$v %*% diag(1/i$d) %*% t(i$u)
and the Cholesky
Conj(t(I)) %*% (I %*% Conj(t(I)))^(-1)
I am not sure if I interpreted the two sources correctly though.
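To answer the underlying problem: a singular matrix has no inverse, but it does have a Moore-Penrose pseudoinverse, which is the SVD formula with the (numerically) zero singular values left out rather than divided by. A minimal sketch (the function name pinv and the tolerance are my own choices; in practice MASS::ginv does the same job):

```r
# Sketch: Moore-Penrose pseudoinverse via the SVD, keeping only the
# singular values above a relative tolerance instead of inverting all
# of them (which is what blows up for a singular matrix).
pinv <- function(M, tol = 1e-10) {
  s    <- svd(M)
  keep <- s$d > tol * max(s$d)
  s$v[, keep, drop = FALSE] %*%
    diag(1 / s$d[keep], sum(keep)) %*%
    t(s$u[, keep, drop = FALSE])
}

I2 <- matrix(rnorm(100), ncol = 5)  # 20 x 5
I2 <- I2 %*% t(I2)                  # 20 x 20 but rank 5: singular
P  <- pinv(I2)
max(abs(I2 %*% P %*% I2 - I2))      # ~ 0: the defining property of a pseudoinverse
```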

making matrix with eigenvalues

I am trying to make a diagonal matrix of eigenvalues.
Here is my code:
E = eigen(cor(A))
VAL = E$values
VEC = E$vectors
so I get a vector of eigenvalues, but how do I turn it into a matrix?
I guess I could use cbind() and manually build the eigenvalue matrix, but there has to be a more correct way.
You can use diag:
diag(E$values)
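As a quick check that this diagonal matrix is the right object, the eigendecomposition reconstructs the original (symmetric) matrix. A small sketch, where A is just some example numeric data:

```r
# For a symmetric matrix C, eigen() gives orthonormal vectors, so
# C = VEC %*% diag(values) %*% t(VEC).
A   <- matrix(rnorm(50), ncol = 5)    # example data, 10 x 5
C   <- cor(A)
E   <- eigen(C)
VAL <- diag(E$values)                 # the diagonal eigenvalue matrix
VEC <- E$vectors
max(abs(VEC %*% VAL %*% t(VEC) - C))  # numerically zero
```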

Scalar Multiplication in R

I'm trying to perform simple scalar multiplication in R, but I'm running into a bit of an issue.
In linear algebra I would simply compute cA, multiplying every entry of the matrix A by the scalar c (here c = 3).
Here's how I've implemented this in R:
A <- matrix(1:4, 2, byrow = TRUE)
c <- matrix(rep(3, 4), 2)
A * c
This produces the correct output, but creating the scalar matrix c will be cumbersome when it comes to larger matrices.
Is there a better way to do this?
In R, * multiplies elementwise, so multiplying a matrix by a plain number is already scalar multiplication. For matrix multiplication use %*%; t() is the transpose and solve() gives you the inverse. Here are some examples:
a <- matrix(1:4, 2, 2)
3 * a            # scalar multiplication
c(1:2) %*% a     # vector-matrix product
c(1:2) %*% t(a)  # with the transpose
solve(a)         # matrix inverse
Here is a link: matrix algebra in R
Use the function drop() to convert a 1x1 matrix into a "real" scalar. Then you can write drop(c) * A and you don't need to replace c with the value itself.
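For instance, when a 1x1 matrix falls out of another computation (a minimal sketch; c1 is a made-up example name, avoiding c which shadows base::c):

```r
A  <- matrix(1:4, 2, byrow = TRUE)
c1 <- matrix(3, 1, 1)  # a 1x1 matrix, e.g. the result of t(x) %*% y
# A * c1 fails with "non-conformable arrays", because * wants equal
# dimensions; drop() turns the 1x1 matrix into a plain scalar first:
drop(c1) * A
```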

sparse-QR more rows than original matrix

I did a sparse QR decomposition with the "Matrix" package in R:
library(Matrix)
a <- Matrix(runif(20), nrow = 5, sparse = TRUE)
a[3:5, ] <- 0                      # now a is a 5x4 matrix
b <- qr.R(qr(a), complete = TRUE)  # but now b is a 7x4 matrix!
Does anyone know why? Note that if I keep a dense, the bug(?) does not appear.
I'll assume you did not see the warning, otherwise you would have mentioned it, right?
Warning message:
In qr.R(qr(a), complete = T) :
qr.R(< sparse >) may differ from qr.R(< dense >) because of permutations
Now if you are asking what those permutations mean, it's a different story...
The help("sparseQR-class") page may have more info on the issue:
However, because the matrix Q is not uniquely defined, the results of qr.qy and qr.qty do not necessarily match those from the corresponding dense matrix calculations.
Maybe it is the same with qr.R?
Finally, further down on the same help page:
qr.R --- signature(qr = "sparseQR"): compute the upper triangular R matrix of the QR decomposition. Note that this currently warns because of possible permutation mismatch with the classical qr.R() result; you can suppress these warnings by setting options() either "Matrix.quiet.qr.R" or (the more general) "Matrix.quiet" to TRUE.
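A small sketch of that suppression option, plus a sanity check of my own (not from the docs) that the sparse R is still upper triangular despite the extra rows:

```r
library(Matrix)

a <- Matrix(runif(20), nrow = 5, sparse = TRUE)
a[3:5, ] <- 0

# Silence the permutation warning, as help("sparseQR-class") describes:
options(Matrix.quiet.qr.R = TRUE)
R <- qr.R(qr(a), complete = TRUE)

dim(R)                             # can have more rows than a itself
Rd <- as.matrix(R)
all(abs(Rd[lower.tri(Rd)]) < 1e-12)  # everything below the diagonal is zero
```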
