Solving a system of unknowns in terms of unknowns - R

I am trying to solve a 5x5 Cholesky decomposition (for a variance-covariance matrix) all in terms of unknowns (no constants).
A simplified version, for the sake of giving an example, would be a 2x2 decomposition:
[[a,0],[b,c]]*[[a,b],[0,c]]=[[U1,U2],[U2,U3]]
Is there software (I'm proficient in R, so if R can do it that would be great) that could solve the above to express the left-hand variables in terms of the right-hand variables? I.e., this would be the final answer:
a = sqrt(U1)
b = U2/sqrt(U1)
c = sqrt(U3 - U2^2/U1)

Take a look at this Wikipedia section.
The symbolic definition of the (i,j)th entry of the decomposition is defined recursively in terms of the entries above and to the left. You could implement these recursions using Matlab's Symbolic Math Toolbox and then apply them (symbolically) to obtain your formulas for the 5x5 case. Be warned that you'll probably end up with extremely complicated formulas for some of the unknowns, and - excepting unusual circumstances - it will be fine to implement the decomposition iteratively even for a fixed size 5x5 matrix.
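Since the question asks about R: the same recursion can also be carried out in base R without a symbolic toolbox, by building the formulas as character strings. A minimal sketch (the function name symbolic_chol and the naming U11, U12, ... for the entries of the symmetric input are my own choices, and no algebraic simplification is attempted):
# Symbolic Cholesky via the standard recursion, keeping every entry as a string:
#   L[j,j] = sqrt(A[j,j] - sum_{k<j} L[j,k]^2)
#   L[i,j] = (A[i,j] - sum_{k<j} L[i,k]*L[j,k]) / L[j,j]   for i > j
symbolic_chol <- function(n) {
  A <- outer(seq_len(n), seq_len(n),
             function(i, j) sprintf("U%d%d", pmin(i, j), pmax(i, j)))
  L <- matrix("0", n, n)
  for (j in seq_len(n)) {
    if (j == 1) {
      L[1, 1] <- sprintf("sqrt(%s)", A[1, 1])
    } else {
      sq <- paste(sprintf("(%s)^2", L[j, 1:(j - 1)]), collapse = " + ")
      L[j, j] <- sprintf("sqrt(%s - (%s))", A[j, j], sq)
    }
    if (j < n) for (i in (j + 1):n) {
      if (j == 1) {
        L[i, 1] <- sprintf("(%s) / (%s)", A[i, 1], L[1, 1])
      } else {
        cross <- paste(sprintf("(%s)*(%s)", L[i, 1:(j - 1)], L[j, 1:(j - 1)]),
                       collapse = " + ")
        L[i, j] <- sprintf("((%s) - (%s)) / (%s)", A[i, j], cross, L[j, j])
      }
    }
  }
  L
}
symbolic_chol(2)
# [1,] "sqrt(U11)"           "0"
# [2,] "(U12) / (sqrt(U11))" "sqrt(U22 - (((U12) / (sqrt(U11)))^2))"
# i.e. a = sqrt(U1), b = U2/sqrt(U1), c = sqrt(U3 - U2^2/U1) in the question's notation.
Calling symbolic_chol(5) already produces very long strings, which illustrates the warning above: evaluating the decomposition numerically is usually the better option.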

Related

How does R choose eigenvectors?

When given a non-defective matrix with repeated eigenvalues, how does the R function eigen choose a basis for the eigenspace? E.g., if I call eigen on the identity matrix, it gives me the standard basis. How did it choose that basis over any other orthonormal basis?
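In code, the call I mean is simply:
eigen(diag(3))$vectors
# returns the 3x3 identity matrix, i.e. the standard basis, as described above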
Still not a full answer, but digging a little deeper: the source code of eigen shows that for real, symmetric matrices it calls .Internal(La_rs(x, only.values))
The La_rs function is found here, and going through the code shows that it calls the LAPACK function dsyevr
The dsyevr function is documented here:
DSYEVR first reduces the matrix A to tridiagonal form T with a call
to DSYTRD. Then, whenever possible, DSYEVR calls DSTEMR to compute
the eigenspectrum using Relatively Robust Representations. DSTEMR
computes eigenvalues by the dqds algorithm, while orthogonal
eigenvectors are computed from various "good" L D L^T representations
(also known as Relatively Robust Representations).
The comments provide this link that gives more expository detail:
The next task is to compute an eigenvector for $\lambda - s$. For each $\hat{\lambda}$ the algorithm computes, with care, an optimal twisted factorization
...
obtained by implementing triangular factorization both from top down and bottom up and joining them at a well chosen index r ...
[emphasis added]. The emphasized words suggest that there are some devils in the details; if you want to go further down the rabbit hole, it looks like the internal dlarrv function is where the eigenvectors actually get calculated ...
For more details, see DSTEMR's documentation and:
Inderjit S. Dhillon and Beresford N. Parlett, "Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices," Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.
Inderjit Dhillon and Beresford Parlett, "Orthogonal Eigenvectors and Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25, 2004. Also LAPACK Working Note 154.
Inderjit Dhillon, "A new O(n^2) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem," Computer Science Division Technical Report No. UCB/CSD-97-971, UC Berkeley, May 1997.
It probably uses some algorithm written in FORTRAN a long time ago.
I suspect there is a procedure which is performed on the matrix to adjust it into a form from which eigenvalues and eigenvectors can be easily determined. I also suspect that this procedure won't need to do anything to an identity matrix to get it into the required form and so the eigenvalues and eigenvectors are just read off immediately.
In the general case of degenerate eigenvalues the answers you get will depend on the details of this algorithm. I doubt there is any choice being made - it's just whatever it spits out first.
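To make that last point concrete, here is a small R experiment (the matrix B below is my own construction, not something from the question): B is symmetric with eigenvalue 2 of multiplicity two, and the basis returned for that eigenspace is simply whatever the underlying LAPACK routine produces:
set.seed(1)
Q <- qr.Q(qr(matrix(rnorm(9), 3, 3)))   # a random orthogonal matrix
B <- Q %*% diag(c(2, 2, 1)) %*% t(Q)    # symmetric, eigenvalues 2, 2, 1
e <- eigen(B, symmetric = TRUE)
e$values                                # approximately 2, 2, 1
V <- e$vectors[, 1:2]                   # some orthonormal basis of the 2-eigenspace
round(crossprod(V), 10)                 # the 2x2 identity: the basis is orthonormal
# Replacing V by V %*% G for any 2x2 rotation matrix G gives another equally valid
# basis; which one you get is determined by dsyevr/dstemr, not chosen by eigen().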

Homogeneous eigenvalue sampling of a sparse unitary matrix

I work with Julia, but I think the question is more general. Suppose that one wants to find the spectrum of a very large (sparse) unitary matrix U numerically. As has been reported in many places, diagonalizing by brute force using eigs ends without the eigenvalues converging.
The trick would then be to work with simpler expressions, i.e. with
U_Re = real(U + U')*0.5
U_Im = real((U - U')*-0.5im)
My question is, is there a way to obtain a uniform sampling in finding the eigenvalues? That is, I would like to obtain, say 10e3 eigenvalues for U_Re and U_Im in the interval [-1,1].
I am not entirely sure how uniform sampling of the eigenvalues would work, but I think you are looking for ARPACK. ARPACK would use matrix-vector products to find your eigenvalues, so it is not clear that the Re/Im decomposition is even required in this case (hard to say without knowing a lot about U).
Also, you might want to look at the FEAST algorithm, which would benefit a lot from the given search contour.
I am not aware of existing Julia bindings for those libraries, but I don't think that is a problem, since Julia can call C functions.
These are only some brief ideas, and Computational Science might be a better place to find the right crowd. However, a lot more detail would be required about U, its sparsity and size, and about what "uniform sampling of eigenvalues in the interval" means.
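To sketch the ARPACK-style idea in code (the question is in Julia, but the sketch below uses R and the RSpectra package, which implements ARPACK-like iterative eigensolvers; the sparse symmetric matrix is only a stand-in for U_Re, and the shift grid is an arbitrary choice): each shift-and-invert call returns the eigenvalues closest to one shift, so spreading shifts over [-1, 1] gives rough coverage of the interval rather than a strictly uniform sample:
library(Matrix)
library(RSpectra)
set.seed(42)
n <- 2000
M <- rsparsematrix(n, n, density = 0.001)   # stand-in sparse matrix
U_Re <- (M + t(M)) / 2                      # plays the role of real(U + U')*0.5
shifts <- seq(-0.95, 0.95, length.out = 20) # where in [-1, 1] to look
vals <- lapply(shifts, function(s)
  eigs_sym(U_Re, k = 10, sigma = s)$values) # shift-invert: eigenvalues nearest s
sampled <- sort(unique(round(unlist(vals), 12)))
length(sampled)                             # up to 20 * 10 distinct eigenvalues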

Mathematical constrained optimization in R

I have a mathematical optimization problem which I wish to solve in R. Consider this system/problem:
How can I solve this problem in R?
In this model, Budget, p_l for all l, and mu_target are fixed constants, while mu is a given m-dimensional vector and R is a given n-by-m matrix.
I have looked into constrOptim and lp, but I don't have the imagination to implement the constraints.
Those functions require that I have a "constraint" matrix, but my problem is that I simply don't know how to design that constraint matrix. There are not many examples with decision variables on both sides of the equations.
Have a look at the nloptr package. It has quite extensive documentation with examples, and lots of algorithms to choose from, depending on what problem you are trying to solve.
NLoptr link
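Since the model itself is not reproduced above, the following is only a minimal sketch of where each piece goes in nloptr, with a hypothetical objective and budget/target-return style constraints standing in for the real ones (nloptr expects inequality constraints in the form g(x) <= 0):
library(nloptr)
m  <- 4
mu <- c(0.05, 0.07, 0.06, 0.10)    # placeholder for the given m-dimensional vector mu
Budget    <- 1                     # placeholder constants
mu_target <- 0.07
eval_f <- function(x) sum(x^2)     # placeholder objective; replace with the real one
eval_g_ineq <- function(x) c(
  sum(x) - Budget,                 # sum(x) <= Budget
  mu_target - sum(mu * x)          # mu' x >= mu_target
)
res <- nloptr(
  x0          = rep(Budget / m, m),
  eval_f      = eval_f,
  lb          = rep(0, m),
  ub          = rep(Budget, m),
  eval_g_ineq = eval_g_ineq,
  opts        = list(algorithm = "NLOPT_LN_COBYLA", xtol_rel = 1e-8, maxeval = 10000)
)
res$solution
The point is that constraints are supplied as functions of the decision vector rather than as a constraint matrix, so decision variables "on both sides" are not a problem: move everything to one side and return it from eval_g_ineq (or eval_g_eq for equalities, with an algorithm that supports them).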

Lapack Orthonormalization Function for Rectangular Matrix

I was wondering if there was a function in LAPACK for orthonormalizing the columns of a very tall and skinny matrix. A similar previous question asked this, presumably in the context of a square matrix. My setting is as follows: I have an M by N matrix A whose columns I am trying to orthonormalize.
So, my first thought was to do a QR decomposition. The functions for doing a QR decomposition in LAPACK seem to be dgeqrf and dormqr. Great. However, my problem is as follows: my matrix A is so tall that I don't want to actually compute all of Q, because it is M by M. In fact, I can't afford to instantiate an M by M matrix at all during any of my computation (it would not fit in memory). I would rather compute just the matrix that Wikipedia calls Q1. However, I can't seem to find a way to make this work.
The weird thing is that I think it is possible. Numpy, in particular, has a function numpy.linalg.qr that appears to do just this. However, even after reading their source code, I can't figure out how they are using LAPACK calls to get this to work.
Do folks have ideas? I would strongly prefer this to use only LAPACK functions, because I am hoping to port this code to cuSOLVER, which has implemented several LAPACK functions (including dgeqrf and dormqr) for the GPU.
You want the "thin" or "economy-size" version of QR. In Matlab, you can do this with:
[Q,R] = qr(A,0);
I haven't used LAPACK directly, but the corresponding route there appears to be dgeqrf followed by dorgqr, requesting only the first N columns of Q. It appears that you can do this in Python with:
numpy.linalg.qr(a, mode='reduced')
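For what it's worth, base R behaves the same way: qr.Q() returns only the thin M-by-N factor by default. A quick illustration (the 1000-by-5 matrix is just an example):
set.seed(1)
A   <- matrix(rnorm(1000 * 5), nrow = 1000)   # tall and skinny
qrA <- qr(A)
Q1  <- qr.Q(qrA)                              # 1000 x 5: only the thin factor is formed
R1  <- qr.R(qrA)                              # 5 x 5 upper triangular
max(abs(crossprod(Q1) - diag(5)))             # ~0: the columns are orthonormal
max(abs(A[, qrA$pivot] - Q1 %*% R1))          # ~0: reconstructs A (up to column pivoting)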

Finding full QR decomposition from reduced QR

What's the best way to find additional orthonormal columns of Q? I have computed the reduced QR decomposition already, but need the full QR decomposition.
I assume there is a standard approach to this, but I've been having trouble finding it.
You might wonder why I need the full Q matrix. I'm using it to apply a constraint matrix for "natural" splines to a truncated power series basis expansion. I'm doing this in Java, but am looking for a language-independent answer.
Successively add columns to Q in the following way:
1. Pick a vector not already in the span of Q.
2. Orthogonalize it with respect to the columns of Q, and normalize it.
3. Add the orthonormalized vector as a new column of Q.
4. Add a row of zeros to the bottom of R.
For reference, see these illustrative albeit mathematical lecture notes
Just in case: the "orthogonalization" of a new vector against existing columns is the old Gram-Schmidt process, and the modified Gram-Schmidt variant is the numerically stable one; a small sketch is given below.
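A minimal sketch of those steps in R (the name complete_q and the random test matrix are mine; for robustness you would also guard against a candidate vector that is nearly in the span of Q, and possibly re-orthogonalize):
# Extend an m x n matrix Q with orthonormal columns to a full m x m orthogonal matrix.
complete_q <- function(Q) {
  m <- nrow(Q)
  while (ncol(Q) < m) {
    v <- rnorm(m)                         # a random vector is almost surely not in span(Q)
    for (k in seq_len(ncol(Q)))           # modified Gram-Schmidt against existing columns
      v <- v - sum(Q[, k] * v) * Q[, k]
    Q <- cbind(Q, v / sqrt(sum(v * v)))   # normalize and append
  }
  Q
}
A     <- matrix(rnorm(6 * 3), 6, 3)
Q1    <- qr.Q(qr(A))                      # reduced Q: 6 x 3
Qfull <- complete_q(Q1)                   # full Q: 6 x 6
max(abs(crossprod(Qfull) - diag(6)))      # ~0: Qfull is orthogonal
# The full R is the reduced R with (m - n) rows of zeros appended at the bottom.
(If the reduced factorization happened to come from R's qr(), then qr.Q(qr(A), complete = TRUE) returns the full Q directly.)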

Resources