Rank of a matrix in R

I want to test the rank of a matrix. Can someone recommend a package/function in R for this?

You can try the function qr() ("qr" because it performs a QR decomposition):
#define a matrix for this example
M <- matrix(data = rnorm(12), ncol = 3)
#run the function qr()
qr(M)$rank
#Alternative: load the Matrix package...
require(Matrix)
#...and run the function rankMatrix()
rankMatrix(M)[1]

http://cran.r-project.org/web/packages/Matrix/Matrix.pdf, page 101
http://cran.r-project.org/web/packages/matrixcalc/matrixcalc.pdf, page 12
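As an aside, the rank reported here is a numerical rank and depends on a tolerance, which both qr() and rankMatrix() expose via a tol argument. A small illustrative sketch (the matrix M2 and the tolerance values are made up for demonstration):
# a nearly rank-deficient matrix: column 2 is column 1 plus a tiny perturbation
M2 <- cbind(c(1, 2, 3), c(1, 2, 3) + 1e-10, rnorm(3))
qr(M2)$rank                     # 2 with the default tolerance
qr(M2, tol = 1e-12)$rank        # 3 with a much tighter tolerance
rankMatrix(M2, tol = 1e-12)[1]  # rankMatrix() accepts a tolerance as well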

You can use the pracma (Practical Numerical Math) package, which provides a large number of functions from numerical analysis and linear algebra, numerical optimization, differential equations, and time series, plus some well-known special mathematical functions.
Install it using the following command in the R console:
install.packages("pracma", repos="http://R-Forge.R-project.org")
Then load the library and call its Rank() function:
library(pracma)
Rank(M)  # replace M with your matrix object
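As a quick illustrative cross-check (the matrix M3 below is made up for demonstration), all three approaches mentioned on this page should agree on a deliberately rank-deficient matrix:
library(Matrix)
library(pracma)
M3 <- cbind(1:4, 2 * (1:4), rnorm(4))  # second column is an exact multiple of the first
qr(M3)$rank        # 2
rankMatrix(M3)[1]  # 2
Rank(M3)           # 2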

Related

Solving system of ODEs in vector/matrix form in R (with deSolve?)

So I want to ask whether there's any way to define and solve a system of differential equations in R using matrix notation.
I know usually you do something like
lotka_volterra <- function(t, a, b, c, d, x, y) {
  dx <- a*x + b*x*y
  dy <- d*x*y - c*y
  return(list(c(dx, dy)))
}
But I want to do
lotka_volterra <- function(t, M, v, x) {
  dx <- x * (M %*% x) + v * x
  return(list(dx))
}
where x is a vector of length 2, M is a 2x2 matrix, and v is a vector of length 2, i.e. I want to define the system of differential equations using matrix/vector notation.
This is important because my actual system is significantly more complex; I'd rather define 1 differential equation with 1 matrix of interaction parameters and 1 vector of growth parameters than 11 different differential equations with 100+ parameters.
I can define the function as above, but when it comes to using the ode function from deSolve, parms is expected to be a named vector of parameters, which of course does not accept non-scalar values.
Is this at all possible in R with deSolve, or another package? If not, I'll look into MATLAB or Python, though at present I don't know how it's done in either of those languages either.
Many thanks,
H
With my low reputation (points), I apologize for posting this as an answer when it should probably just be a comment. Going back to your question, have you tried this link? Also, as an alternative, have you tried MANOPT, a MATLAB toolbox? It's open source, just like R. I came across MANOPT in a paper whose problem boils down to solving a system of ODEs involving purely matrices.
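For what it's worth, deSolve's ode() does not actually require scalar parameters: parms can be any R object, including a list holding a matrix and a vector. A minimal sketch along these lines (the function name lv_matrix and the values of M, v and the initial state are made-up examples for a 2-species system):
library(deSolve)
lv_matrix <- function(t, x, parms) {
  with(parms, {
    dx <- x * (M %*% x) + v * x  # interaction term plus growth term, element-wise in x
    list(c(dx))
  })
}
M <- matrix(c(-0.1, 0.02, -0.02, -0.1), 2, 2)  # hypothetical interaction matrix
v <- c(1.0, 0.5)                               # hypothetical growth rates
out <- ode(y = c(1, 1), times = seq(0, 50, by = 0.1),
           func = lv_matrix, parms = list(M = M, v = v))
head(out)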

qr function in R and MATLAB

I have a question about converting a matlab function into R, and I was hoping that someone could help.
The standard QR decomposition used in both matlab and R is referred to as qr(). To my understanding, the standard way of performing a qr decomposition in both languages is:
Matlab:
[Q,R] = qr(A) satisfying QR=A
R:
z <- qr(A)
Q <- qr.Q(z)
R <- qr.R(z)
Both of these provide me with the same results. Unfortunately, this is not what I need. What I need is this:
Matlab:
[Q,R,e] = qr(A,0) which produces an economy-size decomposition in which e is a permutation vector so that A(:,e) = Q*R.
R:
No clue
I have tried comparing [Q,R,E] = qr(A) with
z <- qr(A);
Q <- qr.Q(z);
R <- qr.R(z);
E <- diag(ncol(A))[z$pivot]
and results seem identical for variables Q and E (but different for R). So depending on the defined inputs/outputs there will be different results (which makes sense).
So my question is:
Is there a way in R that can mimic this [Q,R,e]=qr(A,0) in Matlab?
I have tried digging into the MATLAB function, but it leads down a long and tortuous road of endless function definitions, and I was hoping for a better solution.
Any help would be much appreciated, and if I've missed something obvious, I apologize.
I think the difference comes down to the numerical library underlying the calculations. By default, R's qr function uses the (very old) LINPACK routines, but if I do
z <- qr(X, LAPACK = TRUE)
then R uses LAPACK and the results seem to match MATLAB's (which is probably also using LAPACK underneath). Either way we see the expected relationship with X:
z <- qr(X, LAPACK = FALSE)
all.equal(X[, z$pivot], qr.Q(z) %*% qr.R(z), check.attributes = FALSE)
# [1] TRUE
z <- qr(X, LAPACK = TRUE)
all.equal(X[, z$pivot], qr.Q(z) %*% qr.R(z), check.attributes = FALSE)
# [1] TRUE
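If you want a single call that mimics MATLAB's [Q,R,e] = qr(A,0), a small wrapper along these lines should work (qr_econ is a made-up helper name, not a base R function; qr.Q() already returns the economy-size Q by default):
qr_econ <- function(A) {
  z <- qr(A, LAPACK = TRUE)  # column-pivoted QR via LAPACK
  list(Q = qr.Q(z),          # economy-size Q (nrow(A) x ncol(A))
       R = qr.R(z),          # upper-triangular R
       e = z$pivot)          # permutation vector, so that A[, e] = Q %*% R
}
A <- matrix(rnorm(20), 5, 4)
f <- qr_econ(A)
all.equal(A[, f$e], f$Q %*% f$R, check.attributes = FALSE)  # TRUE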

How to sample from a Kendall distribution function for a given copula?

I'm creating a sample vector v = (v_1, ..., v_m) from a Gauss copula with a given correlation matrix CM, and from this I want to create some new variables z_i = K_i^{-1}(v_i), where K_i is the Kendall distribution function of a Gumbel copula with parameter CorPar.
In the “working” part I’m creating a correlation matrix and then I create my random vector v.
library(QRM)
library(copula)
library(matrixcalc)
library(Matrix)
CM <- matrix(runif(25),5,5)
CM_PSD <- nearPD(CM, corr=TRUE)$mat
v <- rcopula.gauss(1,as.matrix(CM_PSD))
CorPar <- 1.54
Now I want to get my z, but I'm failing to run the R code. As far as I learned from my research, this should work somehow with the function qK from the copula package.
http://artax.karlin.mff.cuni.cz/r-help/library/copula/html/K.html
qK(u, cop, d, n.MC=0, method=c("default", "simple", "sort", "discrete", "monoH.FC"), u.grid, ...)
u is the evaluation point (so my v_i), and since my copula is a 2-dimensional Gumbel copula, I'm guessing d should be set to 2.
But I’m constantly failing on the cop part and the logic behind an acopula.
Can you please help me with this issue?
After quite a few more tries and time I finally solved the question by myself.
I don't know for sure why it works, but it does, since I can validate my results against a different program. :)
cop_z <- onacopulaL("Gumbel", list(CorPar, 1:2))
z <- qK(v, cop_z@copula, 2)
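As a rough sanity check (assuming the objects created above), the copula package also provides pK(), the Kendall distribution function that qK() inverts, so round-tripping v should approximately recover it:
z_check <- pK(z, cop_z@copula, d = 2)
max(abs(as.vector(v) - z_check))  # should be small (numerical inversion error only)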

Time taken to krige in gstat package in R

The following R program creates an interpolated surface from 470 data points, using the Walker Lake data in the gstat package.
source("D:/kriging/allfunctions.r") # Reads in all functions.
source("D:/kriging/panel.gamma0.r") # Reads in panel function for xyplot.
library(lattice) # Needed for "xyplot" function.
library(geoR) # Needed for "polygrid" function.
library(akima)
library(gstat);
library(sp);
walk470 <- read.table("D:/kriging/walk470.txt",header=T)
attach(walk470)
coordinates(walk470) = ~x+y
walk.var1 <- variogram(v ~ x+y, data = walk470, width = 10)  # the width has to be tuned, resulting in different point pairs
plot(walk.var1,xlab="Distance",ylab="Semivariance",main="Variogram for V, Lag Spacing = 5")
model1.out <- fit.variogram(walk.var1,vgm(70000,"Sph",40,20000))
plot(walk.var1, model=model1.out,xlab="Distance",ylab="Semivariance",main="Variogram for V, Lag Spacing = 10")
poly <- chull(coordinates(walk470))
plot(coordinates(walk470),type="n",xlab="X",ylab="Y",cex.lab=1.6,main="Plot of Sample and Prediction Sites",cex.axis=1.5,cex.main=1.6)
lines(coordinates(walk470)[poly,])
poly.in <- polygrid(seq(2.5,247.5,5),seq(2.5,297.5,5),coordinates(walk470)[poly,])
points(poly.in)
points(coordinates(walk470),pch=16)
coordinates(poly.in) <- ~ x+y
krige.out <- krige(v ~ 1, walk470,poly.in, model=model1.out)
print(krige.out)
This program calculates the following for each of the 2688 prediction points:
a (470x470) matrix inversion
a (470x470) by (470x1) matrix multiplication
Is the gstat package using some smart way of calculating this? I knew from a previous Stack Overflow query that it uses a Cholesky decomposition for matrix inversion. Is it normal for one machine to calculate this so quickly?
It uses an LDL' decomposition, which is similar to Cholesky. As you are using global kriging, the covariance matrix needs to be decomposed only once; then, for each prediction point, a system is solved, which is O(n). No 470x470 matrix ever gets inverted, nor are solutions obtained by multiplying by an inverse. Inverses are a notational device, but are avoided as a computational strategy whenever possible. In R, for instance, compare the runtime of solve(A, b) with solve(A) %*% b.
Use the source, Luke!
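A quick way to convince yourself of that last point (purely an illustrative benchmark on an arbitrary symmetric positive-definite matrix, not gstat code):
set.seed(1)
n <- 1500
A <- crossprod(matrix(rnorm(n * n), n))  # a symmetric positive-definite matrix
b <- rnorm(n)
system.time(x1 <- solve(A, b))           # factor and solve directly
system.time(x2 <- solve(A) %*% b)        # form the explicit inverse, then multiply (slower)
max(abs(x1 - as.vector(x2)))             # essentially zero: same answer, different cost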

Generalized Inverse in R

I can use the ginv function from the MASS library to get the Moore-Penrose generalized inverse of a matrix.
m <- matrix(1:9, 3, 3)
library(MASS)
ginv(m)
In SAS we have more than one function to get a generalized inverse of a matrix. SVD can be used to find a generalized inverse, but again this is the Moore-Penrose one. I wonder if there is any function in R to get a generalized inverse of a matrix (which is not unique) other than the Moore-Penrose generalized inverse. Thanks in advance for your help and time.
Edit
A generalized inverse of a matrix A is defined as any matrix G that
satisfies the equation AGA = A.
Such a G is not necessarily the Moore-Penrose generalized inverse, so it is not unique.
Most of the time you don't really want the inverse of a matrix, because the end result can be ruined by rounding errors by the time you're done.
It's more typical to compute an LU decomposition with partial pivoting and scaling, then use it to perform forward/back substitution on the right-hand-side vector to get the solution. This is especially helpful if you have multiple RHS vectors, because you can apply the factorization repeatedly.
You need the Matrix package to do this.
True, it's a great inconvenience when R packages are no longer available. Alternatively, you can use the pracma package.
And your Moore-Penrose generalized inverse:
pinv(m)
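If you really do need a generalized inverse other than the Moore-Penrose one, one option (a sketch, not a packaged function) is to start from any g-inverse G0, e.g. MASS::ginv or pracma::pinv, and use the fact that G = G0 + W - G0 A W A G0 also satisfies A G A = A for an arbitrary conformable matrix W:
library(MASS)
m <- matrix(1:9, 3, 3)        # rank 2, so its generalized inverse is not unique
G0 <- ginv(m)                 # Moore-Penrose inverse
W  <- matrix(rnorm(9), 3, 3)  # arbitrary perturbation
G  <- G0 + W - G0 %*% m %*% W %*% m %*% G0
max(abs(m %*% G %*% m - m))   # essentially zero, so G satisfies AGA = A
isTRUE(all.equal(G, G0))      # FALSE: a different generalized inverse than ginv(m)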
