BLAS/LAPACK routine for doing Gaussian elimination

I'm a new user of BLAS/LAPACK, and I'm just wondering: is there a routine which does Gaussian elimination, or even Gauss-Jordan elimination? I googled and looked through the documentation, but still couldn't find one.
Thanks a lot for helping me out!

Gaussian elimination is basically the same as LU factorization. The routine xGETRF computes the LU factorization (e.g., DGETRF for real double precision matrices); the U factor, which corresponds to the matrix after Gaussian elimination, is stored in the upper triangular part (including the diagonal) of the matrix A on exit.
LU factorization / Gaussian elimination is commonly used to solve linear systems of equations. You can use the xGETRS routine to solve a linear system once you have computed the LU factorization.
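For concreteness, here is a minimal C++ sketch of that workflow through the LAPACKE interface (the matrix values, and the use of LAPACKE rather than the raw Fortran interface, are just illustrative assumptions):
// Gaussian elimination as LU factorization: dgetrf factors A, dgetrs then
// solves A x = b with the stored factors. Link with -llapacke -llapack.
#include <lapacke.h>
#include <cstdio>

int main() {
    // 3x3 example matrix in column-major order, and a right-hand side b.
    double A[9] = {2, 4, 8,    // column 1
                   1, 3, 7,    // column 2
                   1, 3, 9};   // column 3
    double b[3] = {1, 2, 3};
    lapack_int ipiv[3];

    // LU factorization with partial pivoting: A = P * L * U. On exit, U (the
    // "eliminated" matrix) sits in the upper triangle of A, the unit lower
    // triangle of L sits below the diagonal, and the pivots are in ipiv.
    lapack_int info = LAPACKE_dgetrf(LAPACK_COL_MAJOR, 3, 3, A, 3, ipiv);
    if (info != 0) return 1;   // info > 0 means a zero pivot (singular matrix)

    // Forward/back substitution with the stored factors; the solution overwrites b.
    LAPACKE_dgetrs(LAPACK_COL_MAJOR, 'N', 3, 1, A, 3, ipiv, b, 3);
    std::printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);
}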

Related

Gaussian Matrix?

Does a "Gaussian Matrix" refer to a matrix that has undergone gaussian elimination to become an upper triangular matrix (the U in lu factorization), or instead to the elimination matrix that multiplies a matrix to make it upper triangular via elimination steps?
To be honest, I never heard of a Gaussian matrix in the context of Gaussian/Gauss-Jordan elimination. Rather, the matrices were referred to directly by their property. For example - as you mentioned - upper triangular matrix. The term does not pop up on Wikipedia article either. Even in this thesis the term only arises for realizations of Gaussian random variables organized to a matrix; no direct relation to the elimination procedure.
However, I know Gaussian matrices from a very different context. Namely, as the Gram matrix when a Gaussian kernel is applied. This seems to be somewhat backed by WolframAlpha.
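To illustrate that usage (my own example, with an assumed bandwidth sigma): the Gaussian-kernel Gram matrix has entries K(i, j) = exp(-||x_i - x_j||^2 / (2*sigma^2)), which can be built like this with Eigen:
// Gram matrix of a Gaussian (RBF) kernel over a few sample points.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    Eigen::MatrixXd X(4, 2);          // 4 sample points in R^2, one per row
    X << 0, 0,
         1, 0,
         0, 1,
         2, 2;
    const double sigma = 1.0;         // assumed kernel bandwidth

    Eigen::MatrixXd K(4, 4);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            K(i, j) = std::exp(-(X.row(i) - X.row(j)).squaredNorm() / (2 * sigma * sigma));

    std::cout << K << std::endl;      // symmetric, positive semidefinite
}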

how does R choose eigenvectors?

When given a matrix with repeated eigenvalues, but one that is non-defective, how does the R function eigen choose a basis for the eigenspace? E.g., if I call eigen on the identity matrix, it gives me the standard basis. How did it choose that basis over any other orthonormal basis?
Still not a full answer, but digging a little deeper: the source code of eigen shows that for real, symmetric matrices it calls .Internal(La_rs(x, only.values)).
The La_rs function is found here, and going through the code shows that it calls the LAPACK function dsyevr.
The dsyevr function is documented here:
DSYEVR first reduces the matrix A to tridiagonal form T with a call
to DSYTRD. Then, whenever possible, DSYEVR calls DSTEMR to compute
the eigenspectrum using Relatively Robust Representations. DSTEMR
computes eigenvalues by the dqds algorithm, while orthogonal
eigenvectors are computed from various "good" L D L^T representations
(also known as Relatively Robust Representations).
The comments provide this link that gives more expository detail:
The next task is to compute an eigenvector for $\lambda - s$. For each $\hat{\lambda}$ the algorithm computes, with care, an optimal twisted factorization
...
obtained by implementing triangular factorization both from top down and bottom up and joining them at a well chosen index r ...
[emphasis added]. The emphasized words suggest that there are some devils in the details; if you want to go further down the rabbit hole, it looks like the internal dlarrv function is where the eigenvectors actually get calculated ...
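To make the chain concrete, here is a rough C++ sketch of calling the same routine directly through the LAPACKE interface on the 3x3 identity (the uplo setting and tolerances below are my assumptions, not necessarily what R passes):
// What R's eigen() on a symmetric matrix boils down to: a call to dsyevr.
// The eigenvectors in z are whatever the dqds / twisted-factorization
// machinery produces; for the identity that is the standard basis.
#include <lapacke.h>
#include <cstdio>

int main() {
    double a[9] = {1, 0, 0,  0, 1, 0,  0, 0, 1};   // column-major identity
    double w[3], z[9];
    lapack_int m, isuppz[6];

    lapack_int info = LAPACKE_dsyevr(LAPACK_COL_MAJOR, 'V', 'A', 'L',
                                     3, a, 3, 0.0, 0.0, 0, 0,
                                     0.0, &m, w, z, 3, isuppz);
    if (info != 0) return 1;
    for (int j = 0; j < m; ++j)
        std::printf("lambda = %g, v = (%g, %g, %g)\n",
                    w[j], z[0 + 3*j], z[1 + 3*j], z[2 + 3*j]);
}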
For more details, see DSTEMR's documentation and:
Inderjit S. Dhillon and Beresford N. Parlett, "Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices," Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.
Inderjit Dhillon and Beresford Parlett, "Orthogonal Eigenvectors and Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25, 2004. Also LAPACK Working Note 154.
Inderjit Dhillon, "A new O(n^2) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem," Computer Science Division Technical Report No. UCB/CSD-97-971, UC Berkeley, May 1997.
It probably uses some algorithm written in FORTRAN a long time ago.
I suspect there is a procedure performed on the matrix to bring it into a form from which eigenvalues and eigenvectors can be easily determined. I also suspect that this procedure won't need to do anything to an identity matrix to get it into the required form, and so the eigenvalues and eigenvectors are just read off immediately.
In the general case of degenerate eigenvalues, the answers you get will depend on the details of this algorithm. I doubt there is any deliberate choice being made - it's just whatever the algorithm spits out first.

Why does IPOPT evaluate objective function despite breaching constraints?

I'm using IPOPT within Julia. My objective function will throw an error for certain parameter values (specifically, though I assume this doesn't matter, it involves a Cholesky decomposition of a covariance matrix and so requires that the covariance matrix be positive definite). As such, I nonlinearly constrain the parameters so that they cannot produce an error. Despite this constraint, IPOPT still insists on evaluating the objective function at parameters which cause it to throw an error. This causes my script to crash, resulting in misery and pain.
I'm interested in why, in general, IPOPT would evaluate the function at parameters that breach the constraints. (I've ensured that it is indeed checking the constraints before evaluating the function.) If possible, I would like to know how I can stop it doing this.
I have set IPOPT's 'bound_relax_factor' parameter to zero; this doesn't help. I understand I could ask the objective function to return NaN instead of throwing an error, but when I do, IPOPT seems to get even more confused and does not end up converging. Poor thing.
I'm happy to provide some example code if it would help.
Many thanks in advance :):)
EDIT:
A commenter suggested I ask my objective function to return a bad objective value when the constraints are violated. Unfortunately, when I do, Ipopt goes from a point evaluating at 2.0016x10^2 to a point evaluating at 10^10; I'm not sure why, and I worry there's something quite fundamental about IPOPT I'm not understanding.
Setting 'constr_viol_tol' and 'acceptable_constr_viol_tol' to their minimal values doesn't noticeably affect the optimisation, nor does 'over-constraining' my parameters (i.e. ensuring they cannot be anywhere near an unacceptable value).
The only constraints that Ipopt is guaranteed to satisfy at all intermediate iterations are simple upper and lower bounds on variables. Any other linear or nonlinear equality or inequality constraint will not necessarily be satisfied until the solver has finished converging at the final iteration (if it can get to a point that satisfies the termination conditions). Guaranteeing that intermediate iterates are always feasible in the presence of arbitrary non-convex equality and inequality constraints is not tractable. The Newton step direction is based on local first and second order derivative information, so it is only an approximation and may leave the set of feasible points if the problem has nontrivial curvature. Think about the space of points where x * y == constant as an example.
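A tiny numerical sketch of that last example (my own illustration): stepping along the tangent direction of the constraint x * y == 2, which is all a linearized model sees, immediately violates the constraint.
// Moving along the tangent of {x * y == 2} from a feasible point leaves the
// feasible set because the constraint has curvature.
#include <cstdio>

int main() {
    double x = 1.0, y = 2.0;                 // feasible: x * y == 2
    double gx = y, gy = x;                   // constraint gradient at (x, y)
    double tx = gy, ty = -gx;                // a tangent direction (orthogonal to the gradient)
    for (double alpha : {0.1, 0.5, 1.0}) {
        double xn = x + alpha * tx, yn = y + alpha * ty;
        std::printf("alpha = %.1f: x*y = %.3f (violation = %.3f)\n",
                    alpha, xn * yn, xn * yn - 2.0);
    }
}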
You should reformulate your problem to avoid needing to evaluate objective or constraint functions at invalid points. For example, instead of taking the Cholesky factorization of a covariance matrix constructed from your data, introduce a unit lower triangular matrix L and a diagonal matrix D. Impose lower bound constraints D[i, i] >= 0 for all i in 1:size(D,1), and nonlinear equality constraints L * D * L' == A where A is your covariance matrix. Then use L * sqrtm(D) anywhere you need to operate on the Cholesky factorization (this is a possibly semidefinite factorization, so more of a modified Cholesky representation than the classical strictly positive definite L * L' factorization).
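As a small illustration (my own Eigen sketch, separate from the Ipopt formulation itself) of why L * sqrtm(D) can stand in for the Cholesky factor of A = L*D*L':
// Given a unit lower-triangular L and a nonnegative diagonal D (the decision
// variables in the reformulation above), C = L * sqrt(D) satisfies C*C' == A.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3d L;                       // unit lower triangular (example values)
    L << 1,    0,    0,
         0.5,  1,    0,
         0.25, -0.3, 1;
    Eigen::Vector3d d(4.0, 2.0, 1.0);        // D[i,i] >= 0 enforced as simple bounds

    Eigen::Matrix3d A = L * d.asDiagonal() * L.transpose();   // A == L*D*L'
    Eigen::Matrix3d C = L * d.cwiseSqrt().asDiagonal();       // acts like chol(A)

    // C*C' reproduces A, so C can be used wherever the Cholesky factor is needed.
    std::cout << (C * C.transpose() - A).norm() << std::endl; // ~0
}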
Note that if your problem is convex, then there is likely a specialized formulation that a conic solver will be more efficient at solving than a general-purpose nonlinear solver like Ipopt.

Warm starting symmetric eigenvalue computation?

Do any standard (LAPACK / ARPACK / etc) implementations of the symmetric eigenvalue problem allow "warm starting"? That is, can they be accelerated if I already have a pretty good guess for the eigenvalues and eigenvectors of my matrix.
With Rayleigh quotient iteration or power iteration, this should be pretty obvious, but I don't see how to do this with standard eigensolver software. I'd prefer not to write my own eigensolver.
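For concreteness, here is a rough sketch (using Eigen) of what I mean in the power-iteration case: warm starting just means seeding the iteration with the previous eigenvector estimate instead of a random vector.
// Power iteration where the starting vector carries the "warm" information.
#include <Eigen/Dense>
#include <iostream>

Eigen::VectorXd power_iteration(const Eigen::MatrixXd& A, Eigen::VectorXd v, int max_iters = 1000) {
    for (int k = 0; k < max_iters; ++k) {
        Eigen::VectorXd w = A * v;
        w.normalize();
        if ((w - v).norm() < 1e-12) return w;   // converged
        v = w;
    }
    return v;
}

int main() {
    Eigen::MatrixXd A(3, 3);
    A << 4, 1, 0,
         1, 3, 1,
         0, 1, 2;
    Eigen::VectorXd guess = Eigen::VectorXd::Ones(3).normalized();  // previous estimate
    std::cout << power_iteration(A, guess).transpose() << std::endl;
}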
What you need is an iterative eigenvalue algorithm.
LAPACK uses direct eigensolvers, so having an estimate of the eigenvectors is of no use. There is an iterative QR routine among them, but it requires Hessenberg matrices, so I do not think you could use those routines.
You could use the ARPACK library: specify the starting vector and set the info argument equal to one.
I also suggest you reconsider writing your own QR solver; it is very simple.
A basic QR iteration using LAPACK could be:
initialize A (and accumulate the Q factors if eigenvectors are needed)
repeat
    factor A = Q R        (dgeqrf)
    form A := R Q         (dormqr, applying Q from the right)
until the off-diagonal norm of A is small (dnrm2)
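A rough Eigen sketch of that iteration, using householderQr() in place of the raw dgeqrf/dormqr calls (the test matrix, tolerance, and iteration cap are my assumptions):
// Unshifted QR iteration: repeatedly factor A = Q*R and replace A with R*Q;
// for a symmetric matrix A converges to a diagonal matrix of eigenvalues.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A(3, 3);
    A << 4, 1, 0,
         1, 3, 1,
         0, 1, 2;                                      // symmetric test matrix

    for (int iter = 0; iter < 500; ++iter) {
        Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);   // factor A = Q * R
        Eigen::MatrixXd Q = qr.householderQ();
        Eigen::MatrixXd R = qr.matrixQR().triangularView<Eigen::Upper>();
        A = R * Q;                                     // next iterate

        Eigen::MatrixXd offdiag = A;
        offdiag.diagonal().setZero();
        if (offdiag.norm() < 1e-12) break;             // off-diagonal part is ~0
    }
    std::cout << "eigenvalue estimates: " << A.diagonal().transpose() << std::endl;
}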

For a 3x3 only symmetric and positive definite linear system, is Cholesky still faster than Householder?

I am trying to solve a linear system Ax=b where A is 3x3 symmetric positive definite.
Though it is small in scale, I will have to repeat it for different As millions of times, so efficiency is still important.
There are many solvers for such linear systems in Eigen (C++).
I personally prefer householderQr().solve(), llt().solve(), and ldlt().solve().
I know that when n is very large, solvers based on Cholesky decomposition are faster than those based on Householder QR. But for my case, when n is only 3, how can I compare their relative efficiency? Is there a formula for the exact floating-point operation count?
thanks
Yes, Cholesky will still be faster. A Cholesky factorization costs about n^3/3 flops, versus roughly 4n^3/3 for a Householder QR factorization. The only reason to use QR would be if your matrix were very ill-conditioned.
If you need to solve these systems a million times and efficiency is important, I'd recommend calling LAPACK directly. You want the DPOSV function.
http://www.netlib.org/lapack/lug/node26.html#1272
http://www.netlib.org/clapack/what/double/dposv.c
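For reference, a minimal sketch of calling DPOSV from C++ through the LAPACKE interface (the matrix values are just an example, and error handling is minimal):
// Solve a 3x3 SPD system A x = b with DPOSV (Cholesky factorization + solve).
// Link with -llapacke -llapack.
#include <lapacke.h>
#include <cstdio>

int main() {
    // Column-major storage; with uplo = 'L' only the lower triangle is referenced.
    double A[9] = {4, 1, 2,    // column 1
                   1, 3, 0,    // column 2
                   2, 0, 5};   // column 3
    double b[3] = {1, 2, 3};

    lapack_int info = LAPACKE_dposv(LAPACK_COL_MAJOR, 'L', 3, 1, A, 3, b, 3);
    if (info != 0) {
        std::printf("matrix is not positive definite (info = %d)\n", (int)info);
        return 1;
    }
    std::printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);   // solution overwrites b
}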

Resources