Is there an equivalent to the MATLAB function ordschur (documentation here) in R?
The function re-orders the Schur factorization X = U*T*U' produced by the schur function and returns the reordered Schur matrix TS and the cumulative orthogonal transformation US such that X = US*TS*US'. I am particularly interested in the 'lhp' method - also described in the MATLAB documentation link.
Note that there is a function Schur in the R package Matrix (see the CRAN documentation here) which performs the Schur decomposition of a square matrix and returns its eigenvalues. Update: this function also returns the orthogonal (unitary) matrix U.
As far as I know, MATLAB uses the ?TRSEN routine from LAPACK to perform the reordering. You can look at a limited implementation here. To bring this functionality into R you would have to wrap this routine yourself.
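For reference, here is a minimal sketch of the unordered decomposition you would start from in R; the reordering step itself would still need a wrapper around LAPACK's dtrsen, since, as noted above, neither base R nor Matrix exposes it. The matrix A below is just an illustrative example.

library(Matrix)
A  <- Matrix(rnorm(16), 4, 4)
sc <- Schur(A)   # object of class "Schur" with slots T, Q, EValues
sc@T             # (quasi-)upper-triangular factor
sc@Q             # orthogonal factor, so A = Q %*% T %*% t(Q)
sc@EValues       # eigenvalues; the 'lhp' ordering would move those with Re(.) < 0 to the top left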
I am trying to calculate P^100 where P is my transition matrix. I want to do this by diagonalizing P so that way we have P = Q*D*Q^-1.
Of course, if I can get P to be of this form, then I can easily calculate P^100 = Q*D^100*Q^-1 (where * denotes matrix multiplication).
I discovered that if you just do P^5, all you get back is a matrix in which each entry of P has been raised to the 5th power, rather than the fifth power of the matrix (P*P*P*P*P).
I found a question on here that asks how to check if a matrix is diagonalizable but not how to explicitly construct the diagonalization of a matrix. In MATLAB it's super easy but well, I'm using R and not MATLAB.
The eigen() function will compute eigenvalues and eigenvectors for you (the matrix of eigenvectors is Q in your expression, diag() of the eigenvalues is D).
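For example, a minimal sketch along those lines, assuming P is diagonalizable (for a non-symmetric transition matrix the eigenvalues may come back complex, in which case you can take Re() of the result at the end):

e    <- eigen(P)
Q    <- e$vectors                 # matrix of eigenvectors
D100 <- diag(e$values^100)        # D^100
P100 <- Q %*% D100 %*% solve(Q)   # P^100 in the matrix sense
# P100 <- Re(P100)                # drop negligible imaginary parts if needed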
You could also use the %^% operator in the expm package, or functions from other packages described in the answers to this question.
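For instance, with expm's operator:

library(expm)
P100 <- P %^% 100   # matrix power, i.e. P multiplied by itself 100 times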
The advantages of using someone else's code are that it has already been tested and debugged, and it may use faster or more robust algorithms (e.g., it is often more efficient to compute a matrix power by repeated squaring, multiplying together powers of two of the matrix, than by doing the eigenvector computation). The advantage of writing your own method is that you'll understand it better.
The example is based on an example in Shumway and Stoffer's "Time Series Analysis and Its Applications: With R Examples". In the original example phi, cq, and cr were scalars, so the authors could use fdHess without any issues (see the commented-out line in the code).
para = list(phi, cq, cr)
Linn = function(para){  # evaluate the likelihood at the estimates
  # kf = Kfilter0(num, y, 1, mu0, Sigma0, para[1], para[2], para[3])  # original scalar version (para a vector)
  kf = Kfilter0(num, y, A = h, mu0, Sigma0, para[[1]], para[[2]], para[[3]])
  return(kf$like)
}
emhess = fdHess(para, function(para) Linn(para))
SE = sqrt(diag(solve(emhess$Hessian)))
I would like to generalize the code so that it can be applied to multivariate time series models. So in the code shown phi, cq, and cr are n*n arrays.
Is there a package that can calculate the Hessian for a scalar valued function with matrix arguments?
The closest match I can find is this (I also looked at nlme and numDeriv):
calculating the Gradient and the Hessian in R
In this case all the arguments are passed as a single vector, so the function being called has to be modified to take that vector of arguments and reconstruct the required matrices.
Is there a method that would allow me to calculate the Hessian of a scalar-valued function with matrix arguments without changing the function being called? It seems such a common problem that there would be an off-the-shelf answer, but I haven't been able to find one.
Baz
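One possible workaround, sketched below and not tested against the full Shumway-Stoffer example, is to leave Linn untouched and wrap it instead: flatten the matrix arguments into one numeric vector for the differentiation routine and rebuild the list inside the wrapper. This uses numDeriv::hessian and base R's unlist()/relist(); the object names are the ones from the code above.

library(numDeriv)
skeleton <- list(phi = phi, cq = cq, cr = cr)              # the n*n matrix arguments
wrapper  <- function(theta) Linn(relist(theta, skeleton))  # rebuild the list, call Linn unchanged
H  <- hessian(wrapper, unlist(skeleton))                   # Hessian w.r.t. all matrix entries
SE <- sqrt(diag(solve(H)))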
I want to compute the rank of a matrix; can someone recommend a package/function in R for this?
You can try the function qr ("qr", because it performs a QR decomposition):
#define a matrix for this example
M <- matrix(data = rnorm(12), ncol = 3)
#run the function qr()
qr(M)$rank
#Alternative: load the Matrix package...
require(Matrix)
#...and run the function rankMatrix()
rankMatrix(M)[1]
http://cran.r-project.org/web/packages/Matrix/Matrix.pdf, page 101
http://cran.r-project.org/web/packages/matrixcalc/matrixcalc.pdf, page 12
You can use the pracma library (Practical Numerical Math): it provides a large number of functions from numerical analysis and linear algebra, numerical optimization, differential equations, and time series, plus some well-known special mathematical functions.
Install it with the following command in the R console:
install.packages("pracma", repos="http://R-Forge.R-project.org")
Then load the library and call Rank() on your matrix:
library(pracma)
Rank(M)   # where M is your matrix object
I can use the ginv function from the MASS library to get the Moore-Penrose generalized inverse of a matrix.
m <- matrix(1:9, 3, 3)
library(MASS)
ginv(m)
In SAS we have more than one function to get a generalized inverse of a matrix. The SVD can be used to find a generalized inverse, but again this is Moore-Penrose. I wonder if there is any function in R to get a generalized inverse of a matrix (which is not unique) other than the Moore-Penrose generalized inverse. Thanks in advance for your help and time.
Edit
A generalized inverse of a matrix A is defined as any matrix G that satisfies the equation AGA = A.
Such a G is not necessarily the Moore-Penrose generalized inverse, so it is not unique.
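For what it's worth, one standard way to produce generalized inverses other than the Moore-Penrose one is the parametrization G = G0 + U - G0 A U A G0, where G0 is any particular generalized inverse and U is an arbitrary conformable matrix; every such G satisfies AGA = A. A small sketch (the example matrix is hypothetical):

library(MASS)
A  <- matrix(1:9, 3, 3)                     # rank 2, so its generalized inverse is not unique
G0 <- ginv(A)                               # Moore-Penrose inverse as the starting point
U  <- matrix(rnorm(9), 3, 3)                # any conformable matrix
G  <- G0 + U - G0 %*% A %*% U %*% A %*% G0  # another generalized inverse
max(abs(A %*% G %*% A - A))                 # ~ 0, so AGA = A holds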
Most of the time you don't really want the inverse of a matrix, because the end result can be ruined by rounding errors by the time you're done.
It's more typical to compute an LU decomposition with partial pivoting and scaling, and then use it to perform forward/back substitution on the right-hand-side vector to get the solution. This is especially helpful if you have multiple RHS vectors, because you can reuse the factorization.
You need the Matrix package to do this.
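As an illustration of the solve-rather-than-invert point: base R's solve() already goes through an LU factorization with partial pivoting via LAPACK, and Matrix::lu() exposes the factorization explicitly if you want to inspect or reuse it. The matrices below are just an example.

set.seed(1)
A <- matrix(rnorm(9), 3, 3)
B <- matrix(rnorm(6), 3, 2)   # several right-hand-side vectors as columns
X <- solve(A, B)              # solves A %*% X = B without ever forming solve(A)
max(abs(A %*% X - B))         # residual check, ~ 0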
Yes, true, it's a great inconvenience when R packages are no longer available. Alternatively, you can use the pracma package.
And for the Moore-Penrose generalized inverse:
library(pracma)
pinv(m)
After performing the Schur factorization using function Schur in library 'Matrix', how do I find the associated unitary matrix in R?
I can do this in MATLAB with the schur function (documentation here); however, the Schur function in R's Matrix package only gives me the triangular factor $T$ in the Schur factorization $$ A = U T U' $$
Looking at the docs for the "Matrix" package, I noticed that the 'Schur' class has a slot for Q, which is the 'Square orthogonal "Matrix"' associated with the decomposition. So you want to do:
Sch.A <- Schur(A)
U <- Sch.A@Q   # orthogonal factor, so A = U %*% Sch.A@T %*% t(U)
This is mildly confusing because they quote the decomposition as $A = Q^{\top} T Q$, which is perhaps why you missed it.
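A quick round-trip check (A here is just an example "Matrix" object):

library(Matrix)
A   <- Matrix(rnorm(9), 3, 3)
Sch <- Schur(A)
max(abs(Sch@Q %*% Sch@T %*% t(Sch@Q) - A))   # ~ 0, i.e. A = Q T Q'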