calculating the pseudoinverse with PARI/GP

How can I calculate the pseudoinverse of an arbitrary m×n matrix in PARI/GP? Is there a simple way, or do I have to program the process completely?

Jörg Arndt has written code for the (Moore-Penrose) pseudoinverse here:
http://www.jjj.de/pari/
where it appears under matsvd.gpi.
In the simple (but common) case where the matrix has full column rank, you can compute it as
pseudoinverse(M) = my(ct=conj(M)~); (ct*M)^-1 * ct; \\ ct is the conjugate transpose of M
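For comparison, here is the same full-column-rank formula as a quick NumPy sketch (my own illustration, not Jörg Arndt's code; for rank-deficient matrices you would need the SVD-based route instead):

import numpy as np

def pinv_full_column_rank(M):
    # pinv(M) = (M^H M)^{-1} M^H, valid only when M has full column rank
    ct = M.conj().T
    return np.linalg.solve(ct @ M, ct)

# sanity check against NumPy's SVD-based pseudoinverse
M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert np.allclose(pinv_full_column_rank(M), np.linalg.pinv(M))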

I suppose you mean the Moore-Penrose pseudoinverse?
The tutorial and the manual for Pari/GP do not mention pseudoinverse, so you'll probably have to code your own solution.
The Wikipedia entry may help. You could also find algorithms in good advanced linear algebra books, for example Jonathan Golan's The Linear Algebra a Beginning Graduate Student Ought to Know.

Related

Linear and bilinear constraint with Gurobi

Looking at Gurobi's examples for programs, there is one for QCPs and one for bilinear programs, and I was wondering how to add a constraint that is both linear and bilinear (sorry if there is specific jargon for such a problem) in R (or any other language, if easier, but I am using R). Specifically, how would I add a matrix of constraints of the form (for example)
xz + y - yz < c
where c is some constant. I think I could use McCormick relaxation to rewrite this as a linear program (right?), but I was wondering if Gurobi has easy syntax for such constraints.
My current understanding of the syntax for QCPs and bilinear programs is that you use a sparse matrix construction, and so you cannot refer to x, y, and z on their own.
Figured it out. In case anyone else comes across a similar issue: you create a quadcon list and add it to the model, as described here. For an illustration of using quadcon, it is quite similar to the quadratic constraints in this example, though that example is not explicitly of the type of constraint I asked about.
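For anyone reading this later who is open to the Python interface: recent Gurobi versions (9 and up) accept non-convex bilinear constraints written directly with addConstr, which avoids building the sparse matrices by hand. A rough sketch (the bounds, objective, and constant c are made up for illustration; note Gurobi only supports non-strict inequalities, so the strict < becomes <=):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("bilinear")
x = m.addVar(lb=0, ub=1, name="x")
y = m.addVar(lb=0, ub=1, name="y")
z = m.addVar(lb=0, ub=1, name="z")
c = 0.5

# xz + y - yz <= c, written as an ordinary (quadratic) constraint
m.addConstr(x * z + y - y * z <= c, name="bilin")

m.Params.NonConvex = 2  # tell Gurobi to handle non-convex quadratics
m.setObjective(x + y + z, GRB.MAXIMIZE)
m.optimize()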

Homogeneous eigenvalue sampling of a sparse unitary matrix

I work with Julia, but I think the question is more general. Suppose that one wants to find the spectrum of a very large (sparse) unitary matrix U numerically. As reported in many other threads, brute-force diagonalization using eigs terminates without the eigenvalues converging.
The trick would then be to work with simpler expressions, i.e. with
U_Re = real(U + U') * 0.5        # real part of (U + U')/2: real symmetric, spectrum in [-1, 1]
U_Im = real((U - U') * -0.5im)   # real part of (U - U')/(2im): likewise real symmetric
My question is: is there a way to obtain a uniform sampling when finding the eigenvalues? That is, I would like to obtain, say, 10e3 eigenvalues of U_Re and U_Im in the interval [-1, 1].
I am not entirely sure how uniform sampling of the eigenvalues would work, but I think you are looking for ARPACK. ARPACK uses matrix-vector products to find your eigenvalues, so I am not entirely sure the real/imaginary decomposition is even required in this case (hard to say without knowing a lot about U).
Also, you might want to look at the FEAST algorithm, which would benefit a lot from the given search contour.
I am not aware of existing Julia bindings to those libraries, but I don't think that is a problem, since Julia can call C functions.
These are just some brief ideas; Computational Science might be a better place to find the right crowd. However, a lot more detail about U (its sparsity and size) and about what "uniform sampling of eigenvalues in the interval" means would be required.
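To make the ARPACK suggestion concrete, here is a rough Python/SciPy sketch (my own illustration; the shift grid and counts are placeholders) of sampling the spectrum of a sparse Hermitian matrix H, such as U_Re above, by sweeping shift-invert ARPACK over the interval. Be aware that each shift requires factorizing H - sigma*I, which can be costly for very large matrices:

import numpy as np
import scipy.sparse.linalg as spla

def sample_spectrum(H, n_shifts=20, k_per_shift=10):
    # H: sparse Hermitian matrix (e.g. scipy.sparse.csc_matrix)
    # shift-invert ARPACK returns the eigenvalues closest to each shift sigma,
    # so sweeping sigma over [-1, 1] samples the spectrum across the interval
    vals = []
    for sigma in np.linspace(-0.99, 0.99, n_shifts):
        vals.extend(spla.eigsh(H, k=k_per_shift, sigma=sigma,
                               which='LM', return_eigenvectors=False))
    return np.unique(np.round(vals, 10))  # crude deduplication of repeats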

Solving a system of unknowns in terms of unknowns

I am trying to solve a 5x5 Cholesky decomposition (for a variance-covariance matrix) all in terms of unknowns (no constants).
A simplified version, for the sake of giving an example, would be a 2x2 decomposition:
[[a,0],[b,c]]*[[a,b],[0,c]]=[[U1,U2],[U2,U3]]
Is there software (I'm proficient in R, so if R can do it, that would be great) that could solve the above to express the left-hand variables in terms of the right-hand variables? I.e., this would be the final answer:
a = sqrt(U1)
b = U2/sqrt(U1)
c = sqrt(U3 - U2^2/U1)
Take a look at this Wikipedia section.
The (i,j)-th entry of the decomposition is defined recursively in terms of the entries above and to the left. You could implement these recursions using Matlab's Symbolic Math Toolbox and then apply them symbolically to obtain your formulas for the 5x5 case. Be warned that you'll probably end up with extremely complicated formulas for some of the unknowns, and, except in unusual circumstances, it will be fine to implement the decomposition iteratively even for a fixed-size 5x5 matrix.
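If Matlab isn't at hand, the same recursions are easy to run in Python with SymPy; here is a sketch (the helper name symbolic_cholesky is mine), which on the 2x2 example reproduces the formulas above:

from sympy import Matrix, sqrt, symbols, zeros

def symbolic_cholesky(A):
    # lower-triangular L with A = L*L^T, built by the standard recursion:
    # L[j][j] = sqrt(A[j][j] - sum_k L[j][k]^2)
    # L[i][j] = (A[i][j] - sum_k L[i][k]*L[j][k]) / L[j][j]  for i > j
    n = A.rows
    L = zeros(n, n)
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i, k] * L[j, k] for k in range(j))
            L[i, j] = sqrt(A[i, i] - s) if i == j else (A[i, j] - s) / L[j, j]
    return L

U1, U2, U3 = symbols('U1 U2 U3', positive=True)
print(symbolic_cholesky(Matrix([[U1, U2], [U2, U3]])))
# Matrix([[sqrt(U1), 0], [U2/sqrt(U1), sqrt(U3 - U2**2/U1)]])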

Advantages of a computer program over a math formula

I am not sure if this is the right place to ask this question; if not, tell me and I will delete it. My question is related to turning math techniques into a program:
If I were to use a program to solve the quadratic equation ax^2+bx+c=0 by using:
x_1 = (-b - sign(b)*sqrt(b^2 - 4*a*c)) / (2*a)
x_2 = c / (a*x_1)
What are the advantages of using this in a computer program over the common formula? I know it will reduce the error involved, but what else?
I am assuming you are asking about the difference between using the code
x1 = (-b+sqrt(b*b-4*a*c))/(2*a);
x2 = (-b-sqrt(b*b-4*a*c))/(2*a);
and the code
q = (-b-sign(b)*sqrt(b*b-4*a*c))/2;
x1 = q/a;
x2 = c/q;
The book Numerical Recipes in C: The Art of Scientific Computing, Second Edition, simply says the second version will give you more accurate roots. You can consult the book online at http://apps.nrbook.com/c/index.html; the formulae are on pages 183 and 184, in section 5.6, Quadratic and cubic equations.
Professor Higham's book Accuracy and Stability of Numerical Algorithms, 2nd Edition, has the introductory section 1.8, Solving a Quadratic Equation, which elaborates further on the second version. You may be able to read it through Google Books with the query higham 1.8 solving a quadratic equation; it seems to me he only discusses the accuracy and robustness of the second version without describing any additional advantage.
For a longer explanation (in the Scilab context) you can look at Scilab is not naive by Michael Baudin, available here:
http://forge.scilab.org/index.php/p/docscilabisnotnaive/downloads/get/scilabisnotnaive_v2.2.pdf
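To spell out why the second version is more accurate: when b^2 >> 4ac, sqrt(b^2 - 4ac) is nearly equal to |b|, so the textbook formula computes -b + sqrt(...) (for b > 0) as a difference of two nearly equal numbers and loses most of its significant digits to cancellation; the q formulation only ever adds quantities of the same sign. A small Python demonstration (the test coefficients are arbitrary):

import math

def naive_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def stable_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    q = -(b + math.copysign(d, b)) / 2  # same as (-b - sign(b)*d)/2
    return q / a, c / q

a, b, c = 1.0, 1e8, 1.0            # true roots: about -1e8 and -1e-8
print(naive_roots(a, b, c))        # the small root is badly wrong
print(stable_roots(a, b, c))       # both roots come out accurate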
A computer program is the only way I know to get the solutions almost instantly for millions of values of a, b, and c. Automating and speeding up repetitive calculation tasks is one of the reasons computers became popular.

Solving linear equations during inverse iteration

I am using OpenCL to calculate the eigenvectors of a matrix. AMD has an example of eigenvalue calculation so I decided to use inverse iteration to get the eigenvectors.
I was following the algorithm described here and I noticed that in order to solve step 4 I need to solve a system of linear equations (or calculate the inverse of a matrix).
What is the best way to do this on a GPU using OpenCL? Are there any examples/references that I should look into?
EDIT: I'm sorry, I should have mentioned that my matrix is symmetric tridiagonal. From what I have been reading, this could be important and may simplify the whole process a lot.
The fact that the matrix is tridiagonal is VERY important: it reduces the complexity of the problem from O(N^3) to O(N). You can probably get some speedup from the symmetry too, but that won't be as dramatic.
The method for solving a tridiagonal system is here: http://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm.
Also note that you don't need to store all N^2 elements of the matrix, since almost all of them are zero. You just need one vector of length N (for the diagonal) and two of length N-1 (for the sub- and superdiagonals). And since your matrix is symmetric, the sub- and superdiagonals are the same.
Hope that's helpful...
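To illustrate the algorithm from that page, here is a plain-Python sketch of the Thomas algorithm (my own transcription). Note it does no pivoting, so it assumes a well-behaved system; the shifted matrix in inverse iteration is nearly singular, so in practice you may want a partial-pivoting variant (e.g. LAPACK's gtsv routine):

def thomas_solve(a, b, c, d):
    # a: subdiagonal (length n-1), b: diagonal (length n),
    # c: superdiagonal (length n-1), d: right-hand side (length n)
    n = len(b)
    cp, dp = [0.0] * (n - 1), [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination, O(n)
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n                          # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

print(thomas_solve([1.0], [2.0, 2.0], [1.0], [3.0, 3.0]))  # -> [1.0, 1.0]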
I suggest using LU decomposition.
Here's an example. It's written in CUDA, but I think it's not hard to rewrite it in OpenCL.
