LAPACK's `dtrcon` underlying algorithm

I am currently trying to reconstruct some of the functionality of R's kappa condition number estimation function, which estimates the condition number of a matrix X by:

1. Working out the QR decomposition of X.
2. Calling LAPACK's dtrcon or LINPACK's dtrco (depending on the underlying dependencies on the user's system) and calculating the condition number of R, the upper triangular factor, which should have the same condition number as X (see here).
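In base R terms, the two steps look roughly like this. This is a minimal sketch, assuming that rcond() with triangular = TRUE goes through dtrcon (as the kappa/rcond documentation describes), not a reconstruction of kappa's exact code path:

```r
set.seed(1)
X <- matrix(rnorm(25), 5, 5)

# Step 1: QR decomposition of X
R <- qr.R(qr(X))

# Step 2: reciprocal condition number estimate of the triangular factor;
# with triangular = TRUE this should dispatch to LAPACK's dtrcon
est <- 1 / rcond(R, norm = "O", triangular = TRUE)

est
kappa(X)  # compare with kappa()'s own estimate (the norms may differ)
```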
I have been trying to understand what the LAPACK and LINPACK algorithms do, as it may be extremely useful for my own coding.
I have managed to find the algorithm that LINPACK uses, which is described here, but have had no luck finding the origin of LAPACK's algorithm. The comments in R's kappa function suggest that the two use different algorithms (see here), but I am unsure...
Long story short, my question is:
Does anyone know whether LAPACK's dtrcon and LINPACK's dtrco use the same algorithm, and if not, what algorithm does LAPACK's dtrcon use?
Thank you in advance!

Related

How does R choose eigenvectors?

When given a matrix with repeated eigenvalues, but one that is non-defective, how does the R function eigen choose a basis for the eigenspace? E.g. if I call eigen on the identity matrix, it gives me the standard basis. How did it choose that basis over any other orthonormal basis?
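For concreteness, the behaviour in question:

```r
# The 3x3 identity has the single eigenvalue 1 with a three-dimensional
# eigenspace; any orthonormal basis of R^3 would be a valid answer, yet
# eigen() returns the standard basis
eigen(diag(3))$vectors
```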
Still not a full answer, but digging a little deeper: the source code of eigen shows that for real, symmetric matrices it calls .Internal(La_rs(x, only.values)).
The La_rs function is found here, and going through the code shows that it calls the LAPACK function dsyevr.
The dsyevr function is documented here:
DSYEVR first reduces the matrix A to tridiagonal form T with a call
to DSYTRD. Then, whenever possible, DSYEVR calls DSTEMR to compute
the eigenspectrum using Relatively Robust Representations. DSTEMR
computes eigenvalues by the dqds algorithm, while orthogonal
eigenvectors are computed from various "good" L D L^T representations
(also known as Relatively Robust Representations).
The comments provide this link that gives more expository detail:
The next task is to compute an eigenvector for $\lambda - s$. For each $\hat{\lambda}$ the algorithm computes, with care, an optimal twisted factorization
...
obtained by implementing triangular factorization both from top down and bottom up and joining them at a well chosen index r ...
[emphasis added]. The emphasized words suggest that there are some devils in the details; if you want to go further down the rabbit hole, it looks like the internal dlarrv function is where the eigenvectors actually get calculated ...
For more details, see DSTEMR's documentation and:

Inderjit S. Dhillon and Beresford N. Parlett, "Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices," Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.

Inderjit S. Dhillon and Beresford N. Parlett, "Orthogonal eigenvectors and relative gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25, 2004. Also LAPACK Working Note 154.

Inderjit S. Dhillon, "A new O(n^2) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem," Computer Science Division Technical Report No. UCB/CSD-97-971, UC Berkeley, May 1997.
It probably uses some algorithm written in FORTRAN a long time ago.
I suspect there is a procedure which is performed on the matrix to adjust it into a form from which eigenvalues and eigenvectors can be easily determined. I also suspect that this procedure won't need to do anything to an identity matrix to get it into the required form and so the eigenvalues and eigenvectors are just read off immediately.
In the general case of degenerate eigenvalues the answers you get will depend on the details of this algorithm. I doubt there is any choice being made - it's just whatever it spits out first.
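A hypothetical experiment along these lines: build a symmetric matrix whose repeated eigenvalue has an eigenspace that is not axis-aligned, and look at which basis comes back:

```r
set.seed(1)
Q <- qr.Q(qr(matrix(rnorm(9), 3, 3)))  # a random orthogonal matrix
B <- Q %*% diag(c(2, 1, 1)) %*% t(Q)   # eigenvalue 1 is repeated

# The two columns spanning the 1-eigenspace are whatever dsyevr's
# internal factorizations happen to produce, not a canonical choice
eigen(B, symmetric = TRUE)$vectors
```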

Mathematical constrained optimization in R

I have a mathematical optimization problem which I wish to solve in R. Consider this system/problem:
How can I solve this problem in R?
In this model Budget, p_l for all l, and mu_target are fixed constants, while mu is a given m-dimensional vector and R is a given n by m matrix.
I have looked into constrOptim and lp, but I don't have the imagination to implement the constraints.
Those functions require that I have a "constraint" matrix, but my problem is that I simply don't know how to design that constraint matrix. There are not many examples with decision variables on both sides of the equations.
Have a look at the nloptr package. It has quite extensive documentation with examples, and lots of algorithms to choose from, depending on what problem you are trying to solve.
NLoptr link
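Since the original system above was an image that did not survive, here is a hedged, generic nloptr sketch with the same flavour of constraints; the objective and constraints are hypothetical stand-ins. nloptr's convention is g(x) <= 0 for inequalities and h(x) = 0 for equalities:

```r
library(nloptr)

# Stand-in problem: minimize a linear objective c'x subject to a budget
# equality sum(x) = 1, bounds 0 <= x <= 1, and a quadratic inequality
# x'Qx <= r2
cvec <- c(1, -2, 3)
Qmat <- diag(3)
r2   <- 0.5

res <- nloptr(
  x0          = rep(1/3, 3),
  eval_f      = function(x) sum(cvec * x),
  lb          = rep(0, 3),
  ub          = rep(1, 3),
  eval_g_ineq = function(x) drop(t(x) %*% Qmat %*% x) - r2,  # <= 0
  eval_g_eq   = function(x) sum(x) - 1,                      # == 0
  opts        = list(algorithm = "NLOPT_GN_ISRES",  # supports both kinds
                     xtol_rel = 1e-8, maxeval = 50000)
)
res$solution
```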

SVM using quadprog in R

This set of exercises has the student use a QP solver to solve an SVM in R. The suggested solver is the quadprog package. The quadratic problem is given as:
From the remark about the linear SVM, $K=XX'$, so $K$ is usually a singular matrix, of rank at most $p$ where $X$ is $n\times p$. But the solver quadprog requires a positive definite matrix, not just a PSD one, in the place of $K$, as mentioned in many places (and verified). Any ideas what the instructor had in mind?
I think the workaround would be to add a small number (such as 1e-7) to the diagonal elements of the matrix which is supposed to be positive definite. I am not certain about the math behind it, but the sources below, as well as my experience, suggest that this solution works.
source: https://stats.stackexchange.com/questions/179900/optimizing-a-support-vector-machine-with-quadratic-programming
source: https://teazrq.github.io/stat542/hw/HW6.pdf
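A sketch of the trick on hypothetical toy data, with the dual hard-margin SVM cast into solve.QP's min (1/2) b'Db - d'b subject to A'b >= b0 form:

```r
library(quadprog)

set.seed(42)
n <- 20; p <- 2
X <- matrix(rnorm(n * p), n, p)
y <- ifelse(X[, 1] + X[, 2] > 0, 1, -1)  # linearly separable labels

K    <- X %*% t(X)              # linear kernel: PSD, rank <= p
Dmat <- (y %*% t(y)) * K +      # dual objective matrix
        diag(1e-7, n)           # the ridge: nudges PSD to strictly PD
dvec <- rep(1, n)
Amat <- cbind(y, diag(n))       # columns: sum(alpha * y) = 0, alpha >= 0
bvec <- rep(0, n + 1)

sol   <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
alpha <- sol$solution           # support vectors are those with alpha > 0
```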

Choosing the proper optimisation algorithm in R

I am trying to find the extremum of a linear objective function with linear equality, linear inequality, and nonlinear (quadratic) inequality constraints. The problem is that I have already tried many algorithms, from packages like nloptr, Rsolnp, and NlcOptim, and every time I have obtained different results. What is more, the results differ (in many cases) from those of the GRG algorithm in Excel, which can find better results in terms of minimising the objective function.
So far solnp (Rsolnp package) gives some good results, and after proper calibration the results are even better than those from Excel's GRG algorithm. Results from solnl (NlcOptim) are average and very different, even if the data input is only slightly changed.
The nloptr function (nloptr package) implements a large number of algorithms. I tried a few (I do not remember exactly which) and the results were still average and completely different from those obtained so far from the other algorithms.
My knowledge of optimisation algorithms is really poor and my attempts so far have been based on a more or less random selection of algorithms. Could you advise some algorithms implemented in R that can handle such a problem, and explain which one is better than another, and why? Maybe there is some framework or decision tree for choosing the proper optimisation algorithm.
If it helps: I am trying to find the optimal weights of portfolio assets, where the objective is to minimise portfolio risk (standard deviation), subject to the constraints that all asset weights sum to 1 and are greater than or equal to 0, and that the portfolio return equals a defined value. A sketch of this formulation follows below.
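Treated as in the last paragraph, with the variance as the objective, this is a quadratic program (quadratic objective, linear constraints), so a dedicated QP solver such as quadprog finds the exact optimum and removes the algorithm-choice problem entirely. A sketch with made-up data:

```r
library(quadprog)

set.seed(1)
m  <- 4
hist_ret <- matrix(rnorm(200 * m, 0.01, 0.05), 200, m)  # fake return history
S  <- cov(hist_ret)        # covariance of asset returns
mu <- colMeans(hist_ret)   # expected returns
mu_target <- mean(mu)      # required portfolio return

# minimize w'Sw  subject to  sum(w) = 1,  mu'w = mu_target,  w >= 0
Amat <- cbind(rep(1, m), mu, diag(m))
bvec <- c(1, mu_target, rep(0, m))
sol  <- solve.QP(Dmat = 2 * S, dvec = rep(0, m),
                 Amat = Amat, bvec = bvec, meq = 2)
w <- sol$solution          # optimal weights
```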

Warm starting symmetric eigenvalue computation?

Do any standard (LAPACK / ARPACK / etc.) implementations of the symmetric eigenvalue problem allow "warm starting"? That is, can they be accelerated if I already have a pretty good guess for the eigenvalues and eigenvectors of my matrix?
With Rayleigh quotient iteration or power iteration this would be straightforward, but I don't see how to do it with standard eigensolver software. I'd prefer not to write my own eigensolver.
What you need is an iterative eigenvalue algorithm.
LAPACK uses direct eigensolvers, so having an estimate of the eigenvectors is of no use. There is an iterative QR refinement among its routines, but it requires Hessenberg matrices, so I do not think you could use those routines.
You could use the ARPACK library: specify the starting vector and set the info argument equal to one.
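In R, one way to reach ARPACK's starting-vector mechanism is through the RSpectra package. The initvec option below is an assumption based on RSpectra's documented options; check ?eigs_sym in your installed version:

```r
library(RSpectra)

set.seed(1)
A <- crossprod(matrix(rnorm(100), 10, 10))  # symmetric test matrix
# pretend we already have a good guess at the leading eigenvector
guess <- eigen(A)$vectors[, 1] + rnorm(10, sd = 1e-3)

# warm start: pass the guess as ARPACK's starting vector (the info = 1 /
# resid mechanism mentioned above, assumed exposed as opts$initvec)
res <- eigs_sym(A, k = 1, which = "LM", opts = list(initvec = guess))
res$values
```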
Also, I suggest reconsidering writing your own QR solver; it is very simple.
A basic QR iteration using LAPACK could be:

```
initialize A
repeat
    factor A = QR        (dgeqrf)
    form   A := RQ       (dormqr)
until convergence        (e.g. off-diagonal norm via dnrm2)
```
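And a toy version of that loop in R using qr(), just to show the iteration converging for a symmetric matrix:

```r
set.seed(1)
A0 <- crossprod(matrix(rnorm(16), 4, 4))  # random symmetric matrix
A  <- A0
for (k in 1:500) {
  f <- qr(A)                  # A = QR (via R's qr())
  A <- qr.R(f) %*% qr.Q(f)    # A <- RQ = Q'AQ, similar to A
}
sort(diag(A))           # approximate eigenvalues
sort(eigen(A0)$values)  # reference values
```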
