How do I find generalized eigenvalues and eigenvectors using the Eigen3 library?
In Octave/MATLAB, the eigenvalue function has the form [V, lambda] = eig(A, B).
I could only find this class in the Eigen3 library, but it was not helpful for validating the results of the MATLAB/Octave code above.
You'll want to use the EigenSolver class, which is located in the Eigen/Eigenvalues header. Either use the EigenSolver constructor that takes a matrix parameter, or call the compute method with a matrix, and it will solve for that matrix's eigenvalues and eigenvectors. Then you can use the eigenvalues() and eigenvectors() methods to retrieve them.
This question is old. Anyway, anyone still looking for this should consider the GeneralizedEigenSolver (http://eigen.tuxfamily.org/dox-devel/classEigen_1_1GeneralizedEigenSolver.html) available in the Eigen library, although, at the time of writing and as far as I know, it is not completely ready.
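For a quick numerical cross-check of whatever Eigen produces, SciPy's scipy.linalg.eig solves the same generalized problem A v = lambda B v when given two matrices, mirroring MATLAB's eig(A, B). A minimal sketch (the matrices here are arbitrary examples, not from the question):

```python
# Cross-check of the generalized eigenproblem A v = lambda B v,
# equivalent to MATLAB/Octave's [V, lambda] = eig(A, B).
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Passing a second matrix makes eig solve the generalized problem.
eigvals, eigvecs = eig(A, B)

# Each pair should satisfy A v = lambda * B v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * (B @ v))
```

Comparing these values against Eigen's GeneralizedEigenSolver output (up to eigenvector scaling and ordering) is a reasonable way to validate the C++ results.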
Related
I was wondering if there was a function in LAPACK for orthonormalizing the columns of a very tall and skinny matrix. A similar previous question asked this, presumably in the context of a square matrix. My setting is as follows: I have an M-by-N matrix A whose columns I am trying to orthonormalize.
So my first thought was to do a QR decomposition. The functions for doing a QR decomposition in LAPACK seem to be dgeqrf and dormqr. Great. However, my problem is as follows: my matrix A is so tall that I don't want to actually compute all of Q, because it is M by M. In fact, I can't afford to instantiate an M-by-M matrix at all during any of my computation (it would not fit in memory). I would rather compute just the matrix that Wikipedia calls Q1. However, I can't seem to find a way to make this work.
The weird thing is that I think it is possible. NumPy, in particular, has a function numpy.linalg.qr that appears to do just this. However, even after reading its source code, I can't figure out how it uses LAPACK calls to get this to work.
Do folks have ideas? I would strongly prefer this to only use LAPACK functions, because I am hoping to port this code to cuSOLVER, which has implemented several LAPACK functions (including dgeqrf and dormqr) for the GPU.
You want the "thin" or "economy size" version of QR. In matlab, you can do this with:
[Q,R] = qr(A,0);
I haven't used LAPACK directly, but I would imagine there's a corresponding call there. It appears that you can do this in Python with:
numpy.linalg.qr(a, mode='reduced')
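A small sketch of the thin decomposition in NumPy; note that Q1 has shape (M, N), so the full M-by-M Q is never formed. (If I remember correctly, at the LAPACK level this corresponds to dgeqrf followed by dorgqr asked to generate only the leading N columns of Q, but check the LAPACK docs before relying on that.)

```python
# Thin ("reduced") QR of a tall, skinny matrix: Q1 is M-by-N, not M-by-M.
import numpy as np

M, N = 10000, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N))

Q1, R = np.linalg.qr(A, mode='reduced')   # Q1: (M, N), R: (N, N)

assert Q1.shape == (M, N)                 # never materializes an M x M matrix
assert np.allclose(Q1.T @ Q1, np.eye(N))  # columns of Q1 are orthonormal
assert np.allclose(Q1 @ R, A)             # A = Q1 R up to floating point
```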
I aim to use maximum likelihood methods (usually about 10^5 iterations) with a probability distribution that creates very big integers and very small float values that cannot be stored in a numeric nor in a float type.
I thought I would use as.bigq in the gmp package. My issue is that one can only add, subtract, multiply and divide two objects of class/type bigq, while my distribution actually contains logarithm, power, gamma and confluent hypergeometric functions.
What is my best option to deal with this issue?
Should I use another package?
Should I code all these functions for bigq objects?
Coding these functions in R may make them very slow, right?
How to write the logarithm function using only the +,-,*,/ operators? Should I approximate this function using a Taylor series expansion?
How to write the power function using only the +,-,*,/ operators when the exponent is not an integer?
How to write the confluent hypergeometric function (the equivalent of the Hypergeometric1F1Regularized[..] function in Mathematica)?
I could eventually write these functions in C and call them from R, but it sounds like a lot of complicated work for not much gain, especially if I have to use the gmp library in C as well to handle these big numbers.
Most likely, all your problems can be solved with Rmpfr, which allows you to use all of the functions returned by getGroupMembers("Math") with arbitrary precision.
Vignette: http://cran.r-project.org/web/packages/Rmpfr/vignettes/Rmpfr-pkg.pdf
Simple example of what it can do:
test <- mpfr(rnorm(100, mean = 0, sd = .0001), 240)  # 240 bits of precision
Reduce("*", test)  # product of 100 values near 1e-4: would underflow a double
I don't THINK it has hypergeometric functions though...
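If going outside R is an option, Python's mpmath package offers arbitrary-precision log, power, and gamma, and it does provide the confluent hypergeometric function as hyp1f1, which could at least serve as a cross-check for the piece Rmpfr may be missing. A sketch:

```python
# Arbitrary-precision evaluation with mpmath: values like gamma(500)
# overflow a double but are handled exactly here.
from mpmath import mp, mpf, gamma, log, hyp1f1

mp.dps = 50                       # 50 significant decimal digits

big = gamma(mpf(500))             # roughly 1e1130, far beyond double range
loglik_term = log(big)            # log evaluated at arbitrary precision

# Confluent hypergeometric 1F1(a; b; z) at arbitrary precision.
h = hyp1f1(mpf('0.5'), mpf('1.5'), mpf('-2.0'))

assert big > mpf('1e1000')
assert loglik_term > 0
assert 0 < h < 1
```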
I want to compute the Generalized Singular Value Decomposition (GSVD) for sparse matrices A and B. Therefore I am looking for an implementation that is capable of using a special data structure for sparse matrices.
The only implementation I found (here) is part of the LAPACK package which is written in Fortran 77.
It works quite well, but unfortunately it can't handle sparse matrices.
MATLAB's gsvd accepts sparse matrices. I believe Octave (freely available) supports gsvd as well.
I asked the same question on Scicomp and got good answers. The post can be found here.
I've been searching R help for a way to find a primitive function (antiderivative), because in my case I can't do it with the integrate function. Is there any way to find one?
If it is a one-off, you can use a computer-algebra system (Maxima, Maple, Wolfram Alpha, etc.).
If you want to do it from R, you can use the Ryacas package.
For instance, yacas(expression(integrate(sin))) returns -Cos(x).
There is no general analytical method to generate F where F' = f for an arbitrary known f, but when the bounds are known you can always approximate the value of the definite integral numerically, for instance with the trapezoid rule.
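To make both routes concrete, here is a sketch in Python: SymPy finds the antiderivative symbolically (the analogue of the yacas call above), and a hand-rolled trapezoid rule approximates a definite integral when no closed form is available.

```python
# Symbolic antiderivative (analogue of yacas's Integrate), plus a
# trapezoid-rule approximation for a definite integral.
import numpy as np
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.sin(x), x)          # antiderivative: -cos(x)
assert F == -sp.cos(x)
assert sp.diff(F, x) == sp.sin(x)       # check F' = f

# Trapezoid approximation of the integral of sin on [0, pi] (exact value: 2).
xs = np.linspace(0.0, np.pi, 1001)
ys = np.sin(xs)
h = xs[1] - xs[0]
approx = h * (ys[0] / 2 + ys[1:-1].sum() + ys[-1] / 2)
assert abs(approx - 2.0) < 1e-4
```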
I want to get the singular values of a matrix in R to compute the principal components, and then run princomp(x) as well to compare the results.
I know princomp() would give the principal components.
Question
How do I get the principal components from $d, $u, and $v (the output of s = svd(x))?
One way or another, you should probably look into prcomp, which calculates PCA using svd instead of eigen (as in princomp). That way, if all you want is the PCA output, but calculated using svd, you're golden.
Also, if you type stats:::prcomp.default at the command line, you can see how it's using the output of svd yourself.
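As a sketch of the same recipe in Python (the relationships, not the API, are what carries over to R's svd output): center the data, take the SVD, then the scores are U * d (equivalently the centered data times v) and the loadings are the columns of v.

```python
# PCA built from an SVD by hand: scores = U * d, loadings = V,
# sdev = d / sqrt(n - 1), mirroring the quantities prcomp reports.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3)) * np.array([3.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)                  # prcomp centers by default
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * d                           # principal component scores
loadings = Vt.T                          # columns = principal directions
sdev = d / np.sqrt(X.shape[0] - 1)       # prcomp's sdev

assert np.allclose(scores, Xc @ Vt.T)    # same scores either way
assert np.allclose(scores.var(axis=0, ddof=1), sdev**2)
```

Eigenvector signs from svd are arbitrary, so a comparison against princomp or prcomp output should allow for sign flips of whole columns.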