I am running an eigendecomposition in R and get the eigenvalues and eigenvectors. However, the eigenvectors I get are normalized. I have two questions:
1. Why does R report normalized eigenvectors instead of the non-normalized ones?
2. How can we get the non-normalized eigenvectors so that we can match the answers in the text?
Thanks and I would appreciate any help.
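For illustration: eigenvectors are only defined up to a nonzero scalar multiple, so eigen()'s unit-length columns and a textbook's eigenvectors differ only by a per-column rescaling. A minimal R sketch, assuming the text follows the common convention of fixing the first entry of each eigenvector to 1 (adjust if your text uses a different convention):

    # eigen() normalizes each eigenvector column to unit Euclidean length.
    A <- matrix(c(2, 1, 1, 2), 2, 2)
    e <- eigen(A)

    # Any nonzero rescaling of a column is still an eigenvector; to match a
    # text that fixes the first entry of each vector to 1, divide each column
    # by its first entry:
    V <- sweep(e$vectors, 2, e$vectors[1, ], "/")
    V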
What is the best way of calculating the diagonal of the inverse of a symmetric dense matrix (2000 x 2000)? Currently I calculate the inverse first using solve(x) and then extract the diagonal with diag(y). It works, but I'm wondering whether there is a better way to do it so the code runs faster. I tried chol2inv(), but it didn't work since my matrix is not positive-definite.
Update:
For anyone who may be interested, I was able to speed up the matrix inversion by using an optimized math library, Intel MKL. It takes 3 seconds to invert a 2000 x 2000 matrix on my machine. Intel MKL is available with Microsoft R Open.
If your matrix has no nice properties such as being symmetric, diagonal, or positive-definite, your only choice, sadly, is diag(solve(x))
How long does that take to run on your matrix?
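Since the matrix in the question is symmetric, one alternative worth benchmarking is to read the diagonal of the inverse off an eigendecomposition instead of forming the full inverse. A minimal sketch (diag_inv_sym is a made-up helper name; both routes are O(n^3), so this is not guaranteed to be faster, only different):

    # If x = Q diag(lambda) Q', then solve(x) = Q diag(1/lambda) Q',
    # so the i-th diagonal entry of the inverse is sum_j Q[i, j]^2 / lambda[j].
    diag_inv_sym <- function(x) {
      e <- eigen(x, symmetric = TRUE)
      as.vector((e$vectors^2) %*% (1 / e$values))
    }

    # Check against the direct route on a small symmetric,
    # not-necessarily-positive-definite matrix:
    set.seed(1)
    a <- matrix(rnorm(25), 5, 5); x <- a + t(a)
    all.equal(diag_inv_sym(x), diag(solve(x)))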
Using CUBLAS I performed matrix inversion of an N x N matrix containing random single-precision floating-point values with up to six decimal places. After obtaining the inverse (and verifying it using this website), I multiplied the inverse matrix by the original matrix (using a CUDA matrix multiplication program), hoping to get the exact identity matrix. But the identity matrix had some error in it. Can you please explain why this happened? I am attaching the output of my program below.
What you see is numerical error. It is common, and it is due to the finite precision of floating-point computations. You could start from here.
https://en.m.wikipedia.org/wiki/Numerical_error
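As a rough illustration of the effect (an R sketch in double precision; the single-precision arithmetic on the GPU behaves the same way, just with errors around 1e-7 instead of 1e-15):

    # Multiplying a matrix by its computed inverse gives the identity only
    # up to round-off: the off-diagonal entries are tiny but not exactly zero.
    set.seed(42)
    A <- matrix(runif(16), 4, 4)
    I_approx <- A %*% solve(A)
    max(abs(I_approx - diag(4)))   # small but nonzero, on the order of 1e-15
    all.equal(I_approx, diag(4))   # TRUE under the default numerical tolerance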
I am trying to solve a 5x5 Cholesky decomposition (of a variance-covariance matrix) symbolically, entirely in terms of unknowns (no numeric constants).
A simplified version, for the sake of giving an example, would be a 2x2 decomposition:
[a 0]   [a b]   [U1 U2]
[b c] * [0 c] = [U2 U3]
Is there software (I'm proficient in R, so if R can do it, that would be great) that could solve the above to yield the left-hand variables in terms of the right-hand variables? I.e., this would be the final answer:
a = sqrt(U1)
b = U2/sqrt(U1)
c = sqrt(U3 - U2^2/U1)
Take a look at this Wikipedia section.
The symbolic definition of the (i,j)th entry of the decomposition is defined recursively in terms of the entries above and to the left. You could implement these recursions using MATLAB's Symbolic Math Toolbox and then apply them (symbolically) to obtain your formulas for the 5x5 case. Be warned that you'll probably end up with extremely complicated formulas for some of the unknowns, and, except in unusual circumstances, it will be fine to implement the decomposition iteratively even for a fixed-size 5x5 matrix; a numeric sketch of the recursion follows below.
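For concreteness, here is that recursion implemented numerically in R (a minimal sketch; chol_lower is a made-up name, and in practice you would just use the built-in chol(), which returns the upper-triangular factor):

    # Cholesky-Banachiewicz recursion for the lower factor L with U = L %*% t(L):
    # L[j, j] = sqrt(U[j, j] - sum_{k < j} L[j, k]^2)
    # L[i, j] = (U[i, j] - sum_{k < j} L[i, k] * L[j, k]) / L[j, j]  for i > j
    chol_lower <- function(U) {
      n <- nrow(U)
      L <- matrix(0, n, n)
      for (j in 1:n) {
        s <- if (j > 1) sum(L[j, 1:(j - 1)]^2) else 0
        L[j, j] <- sqrt(U[j, j] - s)
        if (j < n) {
          for (i in (j + 1):n) {
            s <- if (j > 1) sum(L[i, 1:(j - 1)] * L[j, 1:(j - 1)]) else 0
            L[i, j] <- (U[i, j] - s) / L[j, j]
          }
        }
      }
      L
    }

    # Check against R's built-in factorization (chol() returns the upper factor):
    set.seed(1)
    U <- crossprod(matrix(rnorm(9), 3, 3))
    all.equal(chol_lower(U), t(chol(U)))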
I am trying to run the full SVD of a large (120k x 600k), sparse (0.1% non-zero values) matrix M. Due to memory limitations all my previous attempts failed (with SVDLIBC, Octave, and R), and I am (almost) resigned to exploring other approaches to my problem (LSA).
However, at the moment I am only interested in the singular values (the diagonal entries of the matrix S), not in the left/right singular vector matrices U and V.
Is there a way to compute those singular values without storing the dense matrix M and/or the singular vector matrices U and V in memory?
Any help will be greatly appreciated.
[EDIT] My server configuration: 3.5 GHz/3.9 GHz (6 cores / 12 threads), 128 GB of RAM
Looking up the meaning of those values (the elements of the matrix S from an SVD) on Wikipedia, we get:
The non-zero singular values of M (found on the diagonal entries of Σ) are the square roots of the non-zero eigenvalues of both M*M and MM*.
So you can look for the eigenvalues of the matrix MM' (120k x 120k) without explicitly building the matrix, of course.
By the way, I don't think you are interested in ALL the eigenvalues (or singular values) of a matrix with such dimensions. I do not think any algorithm would give sufficiently accurate results.
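If the leading singular values are enough for the LSA use case, one concrete route in R is a truncated SVD computed on the sparse matrix directly, e.g. with the irlba package (a sketch assuming the Matrix and irlba packages are installed; the dimensions below are scaled down from the question's):

    # Truncated SVD of a sparse matrix without ever densifying it.
    library(Matrix)
    library(irlba)

    set.seed(1)
    M <- rsparsematrix(12000, 60000, density = 0.001)  # stand-in for the 120k x 600k matrix
    fit <- irlba(M, nv = 50)  # 50 largest singular values (vectors come along but can be ignored)
    head(fit$d)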
How comfortable are you with Fortran? I think you should be able to complete the computations using prebuilt packages available here and/or here. Also, if you're open to C++ and a decomposition using randomized and re-orthonormalized matrices, you can try the code at the Google Code project called redsvd. (I can't post the link because I don't have the requisite reputation for three links, but you can search for redsvd and find it readily.)
What's the best way to find additional orthonormal columns of Q? I have computed the reduced QR decomposition already, but need the full QR decomposition.
I assume there is a standard approach to this, but I've been having trouble finding it.
You might wonder why I need the full Q matrix. I'm using it to apply a constraint matrix for "natural" splines to a truncated power series basis expansion. I'm doing this in Java, but am looking for a language-independent answer.
Successively add columns to Q in the following way:
1. Pick a vector not already in the span of the columns of Q.
2. Orthogonalize it with respect to the columns of Q and normalize it.
3. Add the normalized vector as a new column of Q.
4. Add a row of zeros to the bottom of R.
For reference, see these illustrative albeit mathematical lecture notes.
Just in case: the "orthogonalization" of a new vector against existing ones is an old technique called the Gram-Schmidt process, and there is a modified variant that is numerically stable.
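A minimal R sketch of that completion process (complete_Q is a made-up helper name; the second projection is the classic "reorthogonalize once" trick for numerical stability):

    # Complete a reduced QR factor Q (n x k, orthonormal columns) to a full
    # n x n orthogonal matrix by Gram-Schmidt against random candidates.
    complete_Q <- function(Q) {
      n <- nrow(Q)
      while (ncol(Q) < n) {
        v <- rnorm(n)                    # almost surely not in span(Q)
        v <- v - Q %*% crossprod(Q, v)   # remove components along current columns
        v <- v - Q %*% crossprod(Q, v)   # reorthogonalize once for stability
        Q <- cbind(Q, v / sqrt(sum(v^2)))
      }
      Q
    }

    # Usage with R's reduced decomposition of a tall matrix:
    set.seed(1)
    A <- matrix(rnorm(20), 5, 4)
    Qfull <- complete_Q(qr.Q(qr(A)))
    max(abs(crossprod(Qfull) - diag(5)))   # ~ machine precision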