I work with Julia, but I think the question is more general. Suppose one wants to find the spectrum of a very large (sparse) unitary matrix U numerically. As reported in many places, diagonalizing by brute force using eigs ends without eigenvalue convergence.
The trick would then be to work with simpler expressions, i.e. with
U_Re = real(U + U')*0.5
U_Im = real((U - U')*-0.5im)
My question is: is there a way to obtain a uniform sampling when finding the eigenvalues? That is, I would like to obtain, say, 10e3 eigenvalues of U_Re and U_Im in the interval [-1,1].
I am not entirely sure how uniform sampling of the eigenvalues would work, but I think you are looking for ARPACK. ARPACK uses matrix-vector products to find your eigenvalues, so I am not sure the real/imaginary decomposition is even required in this case (hard to say without knowing a lot about U).
Also, you might want to look at the FEAST algorithm, which would benefit a lot from the given search contour.
I am not aware of existing Julia bindings for those libraries, but I don't think that is a problem, since Julia can call C functions.
These are just some brief ideas, and Computational Science might be a better place to find the right crowd. However, a lot more detail about U, its sparsity and size, and what "uniform sampling of eigenvalues in the interval" means would be required.
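To make the ARPACK idea concrete, here is a minimal sketch of a shift-and-invert sweep, using SciPy's ARPACK wrapper for illustration (Arpack.jl exposes the same sigma keyword in Julia). The grid of shifts, k, and the focus on the Hermitian part U_Re are all placeholders:

# Sketch: sample the spectrum of the (real symmetric) sparse matrix U_Re by
# sweeping a uniform grid of shifts across [-1, 1] in ARPACK's
# shift-and-invert mode, which converges to the eigenvalues nearest each shift.
import numpy as np
from scipy.sparse.linalg import eigsh

def sample_spectrum(U_Re, n_shifts=100, k=10):
    found = []
    # stay slightly inside [-1, 1]: a shift equal to an eigenvalue makes
    # the shifted matrix singular
    for sigma in np.linspace(-0.99, 0.99, n_shifts):
        # each call factorizes (U_Re - sigma*I) once, then iterates
        vals = eigsh(U_Re, k=k, sigma=sigma, return_eigenvectors=False)
        found.append(vals)
    # merge and deduplicate eigenvalues found from neighboring shifts
    return np.unique(np.concatenate(found).round(12))

Note that every shift costs a sparse factorization of (U_Re - sigma*I), which is exactly the kind of work FEAST amortizes over a contour; whether this is feasible depends on the sparsity pattern of U.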
I have a time-dependent complex matrix A(t), and I want to follow its eigenvalues over time. In other words, in the time-dependent list of eigenvalues a[1](t), ..., a[n](t), I want each entry to change continuously over time.
One approach is to find the eigendecomposition of A(t+ε) iteratively, using the eigendecomposition of A(t) as an initial guess. Since the guess is almost correct, the iteration should only change it slightly, giving the desired continuity.
I think the LOBPCG and SVD solvers in IterativeSolvers.jl can do this, because they let you store the iterator state. Unfortunately, they only work for matrices with real eigenvalues. (The SVD solver also requires real entries.) The solvers in ArnoldiMethod.jl can handle complex eigenvalues, but don't seem to allow an initial guess. Is there any available eigensolver that has both of the features I need?
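For concreteness, the fallback I can think of is to rediagonalize at each step and re-match the eigenpairs by eigenvector overlap, which gives the continuity but wastes the warm start. A minimal sketch (dense solver; track_step is a hypothetical helper name):

# Fallback sketch: diagonalize A(t + eps) from scratch, then permute the new
# eigenpairs so that entry i continues the branch whose eigenvector has the
# largest overlap with the previous step's i-th eigenvector.
import numpy as np
from scipy.linalg import eig
from scipy.optimize import linear_sum_assignment

def track_step(A_next, prev_vecs):
    vals, vecs = eig(A_next)
    overlap = np.abs(prev_vecs.conj().T @ vecs)  # |<v_i(t), w_j(t+eps)>|
    _, col = linear_sum_assignment(-overlap)     # maximize total overlap
    return vals[col], vecs[:, col]

This works when a dense eig per step is affordable; what I'm after is a solver that avoids the from-scratch diagonalization entirely.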
I'm watching a lecture about estimating the fundamental matrix for use in stereo vision with the 8-point algorithm. I understand that once we recover the fundamental matrix between two cameras, we can compute the epipolar line in one image given a point in the other. To my understanding, this epipolar line (after rectification) makes it easy to find feature correspondences, because we are simply matching features along a 1D line.
The confusion comes from the fact that the 8-point algorithm itself requires at least 8 feature correspondences to estimate the fundamental matrix.
So, we are finding point correspondences to recover a matrix that is used to find point correspondences?
This seems like a chicken-and-egg paradox, so I guess I'm misunderstanding something.
The fundamental matrix can be precomputed. This leads to two advantages:
You can use a nice environment in which features can be matched easily (like using a chessboard) to compute the fundamental matrix.
You can use more computationally expensive operations like a sequence of SIFT, FLANN and RANSAC across the entire image since you only need to do that once.
After getting the fundamental matrix, you can find correspondences in a noisy environment far more efficiently than by repeating the full matching pipeline you used to compute the fundamental matrix.
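Here is a sketch of the two phases using OpenCV (the image files, the pixel of interest, and the matcher settings are placeholders):

# Phase 1 (offline, expensive): estimate F once from >= 8 matches found with
# a full SIFT + brute-force matching + RANSAC pipeline.
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # calibration pair
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# Phase 2 (online, cheap): for a new point in image 1, search for its match
# only along the corresponding epipolar line in image 2.
p = np.float32([[100.0, 200.0]]).reshape(-1, 1, 2)      # some pixel of interest
a, b, c = cv2.computeCorrespondEpilines(p, 1, F)[0, 0]  # line: a*x + b*y + c = 0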
I need to solve a SMALL linear system of the type Ax=b thousands of times. Here A is a matrix that is no smaller than 3x3 and no larger than 8x8. I am aware of http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ so I don't think it is smart to invert the matrix, even if the matrices are small, right? So what is the most efficient way to do this? I am programming in Fortran, so probably I should use the LAPACK library, right? My matrices are full and in general non-symmetric.
Caveat: I didn't look into this extensively, but I have some experience I am happy to share.
In my experience, the fastest way to solve a 3x3 system is to basically use Cramer's rule. If you need to solve multiple systems with the same matrix A, it pays to pre-compute the inverse of A. This is only true for 2x2 and 3x3.
If you have to solve multiple 4x4 systems with the same matrix, then again using the inverse is noticeably faster than the forward and back substitution of LU. I seem to remember that it uses fewer operations, and in practice the difference is even larger (again, in my experience). As the matrix size grows, the difference shrinks, and asymptotically it disappears. If you are solving systems with different matrices each time, then I don't think there is an advantage in computing the inverse.
In all cases, solving the system with the inverse can be much less accurate than using the LU decomposition if A is fairly ill-conditioned. So if accuracy is an issue, then LU factorization is definitely the way to go.
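For illustration, here is a minimal sketch of the 3x3 Cramer's-rule approach in NumPy (in Fortran you would expand the determinants by hand rather than call a det routine):

# Cramer's rule for a 3x3 system: x_i = det(A_i) / det(A), where A_i is A
# with column i replaced by b.
import numpy as np

def solve3(A, b):
    det_a = np.linalg.det(A)
    x = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / det_a
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(solve3(A, b))           # matches np.linalg.solve(A, b)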
The LU factorization sounds like just the ticket for you, and the LAPACK routine dgetrf will compute it for you, after which you can use dgetrs to solve the linear system. LAPACK has been optimized to the gills over the years, so in all likelihood you are better off using it than writing any of this code yourself.
The computational cost of computing the matrix inverse and then multiplying it by the right-hand-side vector is at least as high as computing the LU factorization of the matrix and then forward- and back-solving to find your answer. Moreover, computing the inverse exhibits even more bizarre pathological behavior than computing the LU factorization, the stability of which is still a fairly subtle issue. It can be useful to know the inverse for small matrices, but it sounds like you don't need that for your purpose, so why do it?
Moreover, provided there are no loop-carried dependencies, you can parallelize this using OpenMP without too much trouble.
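To make the factor-once / solve-many pattern concrete, here is a sketch using SciPy's wrappers (lu_factor and lu_solve call the same dgetrf and dgetrs underneath, so the Fortran version is a direct translation; the matrix size and right-hand sides are placeholders):

# Factor once with dgetrf (O(n^3)), then solve many right-hand sides with
# dgetrs (O(n^2) each).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))      # one small, full, non-symmetric matrix

lu, piv = lu_factor(A)               # dgetrf: done once per matrix
for _ in range(1000):
    b = rng.standard_normal(8)       # thousands of right-hand sides
    x = lu_solve((lu, piv), b)       # dgetrs: cheap per solve

If every system comes with a fresh A, you simply do the dgetrf + dgetrs pair once per system; the factorization is still preferable to forming an inverse.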
I have an arbitrary curve (defined by a set of points) and I would like to generate a polynomial that fits that curve to an arbitrary precision. What is the best way to tackle this problem, or is there already a library or online service that performs this task?
If your "arbitrary curve" is described by a set of points (x_i,y_i) where each x_i is unique, and if you mean by "fits" the calculation of the best least-squares polynomial approximation of degree N, you can simply obtain the coefficients b of the polynomial using
b = polyfit(X,Y,N)
where X is the vector of x_i values and Y is the vector of y_i values. In this way you can increase N until you obtain the accuracy you require. Of course, you can achieve zero approximation error by calculating the interpolating polynomial.

However, data fitting often requires some thought beforehand: you need to consider what you want the approximation to achieve. There are a variety of mathematical ways of assessing approximation error (using different norms), the choice of which will depend on your requirements for the resulting approximation. There are also many potential pitfalls (such as overfitting), and blindly attempting to fit curves may result in an approximation that is theoretically sound but utterly useless to you in practical terms. I would suggest doing a little research on approximation theory if the above method does not meet your requirements, as has been suggested in the comments on your question.
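For illustration, the same degree-escalation idea with numpy.polyfit, the NumPy equivalent of the call above (the data and tolerance are placeholders; polyfit will warn about ill-conditioning when N gets too large, which is one symptom of the pitfalls mentioned):

# Increase the polynomial degree until the maximum residual on the data
# points drops below a tolerance.
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)          # stand-in for your (x_i, y_i) points

tol = 1e-6
for N in range(1, 30):
    b = np.polyfit(x, y, N)          # least-squares coefficients, degree N
    err = np.max(np.abs(np.polyval(b, x) - y))
    if err < tol:
        break
print(N, err)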
I'm trying to solve a 6th-order nonlinear PDE (1D) with fixed boundary values (extended Fisher-Kolmogorov, EFK). After failing with FTCS, my next attempt is MoL (either central differences in space or FEM) using e.g. LSODES.
How can this be implemented? I'm using Python/C + OpenMP so far, but I need some pointers to do this efficiently.
EFK with an additional 6th-order term:

u_t = d u_6x - g u_4x + u_xx + u - u^3

where d, g are real coefficients and u_6x, u_4x denote the 6th and 4th spatial derivatives. The initial and boundary conditions are

u(x,0) = exp(-x^2/16),
u_x = 0 on the boundary.

The domain is [0,300] with dx << 1, since I'm looking for pattern formation (subject to the values of d and g).
I hope this is sufficient information.
All PDE solutions like this will ultimately end up being expressed using linear algebra in your program, so the trick is to figure out how to get the PDE into that form before you start coding.
Finite element methods usually begin with a weighted residual method. Non-linear equations will require a linear approximation and iterative methods like Newton-Raphson. I would recommend that you start there.
Yours is a transient problem, so you'll have to do time stepping. You can either use an explicit method and live with the small time steps that stability limits will demand, or an implicit method, which will force you to solve a linear system at each step.
I'd do a Fourier (von Neumann) stability analysis of the linear piece first to get an idea of the stability requirements.
The only term in that equation that makes it non-linear is the last one: -u^3. Have you tried starting by leaving that term off and solving the linear equation that remains?
UPDATE: Some additional thoughts prompted by comments:
I understand how important the u^3 term is. Diffusion is a 2nd order derivative w.r.t. space, so I wouldn't be so certain that a 6th order equation will follow suit. My experience with PDEs comes from branches of physics that don't have 6th order equations, so I honestly don't know what the solution might look like. I'd solve the linear problem first to get a feel for it.
As for stability and explicit methods, it's dogma that the stability limits placed on time step size make them likely to fail, but the probability isn't 1.0. I think map-reduce and cloud computing might make an explicit solution more viable than it was even 10-20 years ago. Explicit dynamics have become a mainstream way to solve difficult statics problems, because they don't require a matrix inversion.
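To make the method-of-lines route concrete, here is a minimal sketch for the full EFK equation: central differences in space with an even (mirror) extension for the u_x = 0 boundaries, and SciPy's stiff BDF integrator in time. The grid, coefficients, final time, and tolerances are placeholders to experiment with; dropping the u**3 term in rhs gives exactly the linear problem I suggested starting with.

# Method of lines for u_t = d*u_6x - g*u_4x + u_xx + u - u^3 on [0, 300]
# with u_x = 0 at the boundaries (enforced by mirror padding).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse import diags

L, n = 300.0, 3000                   # dx ~ 0.1
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
d, g = 1.0, 1.0                      # placeholder values of the coefficients

def rhs(t, u):
    up = np.pad(u, 3, mode="reflect")    # even extension: u_x = 0 at both ends
    u2 = (up[2:-4] - 2*up[3:-3] + up[4:-2]) / dx**2
    u4 = (up[1:-5] - 4*up[2:-4] + 6*up[3:-3] - 4*up[4:-2] + up[5:-1]) / dx**4
    u6 = (up[:-6] - 6*up[1:-5] + 15*up[2:-4] - 20*up[3:-3]
          + 15*up[4:-2] - 6*up[5:-1] + up[6:]) / dx**6
    return d*u6 - g*u4 + u2 + u - u**3   # drop "- u**3" for the linear test

# 7-point stencil => banded Jacobian; telling BDF the sparsity keeps it fast
S = diags([1]*7, offsets=list(range(-3, 4)), shape=(n, n))
u0 = np.exp(-x**2 / 16.0)
sol = solve_ivp(rhs, (0.0, 10.0), u0, method="BDF",
                jac_sparsity=S, rtol=1e-6, atol=1e-8)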