I am looking for triangular solvers and have come across two: dtrsm in BLAS and dtrtrs in LAPACK. From the looks of it, both provide similar functionality, with dtrsm offering a little more (it can scale the right-hand side before solving the system).
I would like to know
1) How else do these functions differ?
2) When performing the same operation, which is faster?
3) If the answer to (2) is not obvious, when is dtrsm suggested over dtrtrs and vice versa?
Besides scaling, dtrsm can also solve systems in which the triangular matrix is right-multiplied into the unknown matrix (i.e., it can solve XA = B as well as AX = B). On the other hand, dtrsm can silently fail if A is singular, whereas dtrtrs checks for this condition and reports an error.
In a "typical" LAPACK distribution, dtrtrs is just a wrapper that checks for singularity and then calls dtrsm. dtrsm is therefore slightly faster, but that difference is insignificant for matrices of any reasonable size.
Introduction
I'm doing research in computational contact mechanics, in which I try to solve a PDE using a finite difference method. Long story short, I need to solve a linear system like Ax = b.
The suspects
In the problem, the matrix A is sparse, and so I defined it accordingly. On the other hand, both x and b are dense arrays.
In fact, x is defined as x = A\b, the potential solution of the problem.
So the least one might expect from this solution is that Ax is close to b in some sense. To my great surprise, I find that
julia> norm(A*x-b) # Frobenius or 2-norm
5018.901093242197
The vector x does not solve the system! I've tried a lot of tricks to discover what is going on, but no clues so far. My first suspicion is that I've found a bug; however, I need more evidence to make that assertion.
The hints
Here are some tests that I've done to try to pinpoint the error:
If you convert A to dense, the solution changes completely, and in fact it returns the correct solution.
I have repeated the process above in MATLAB, and it seems to work well with both sparse and dense matrices (that is, MATLAB's sparse solution does not agree with Julia's).
Not all sparse matrices cause a problem. I have tried other initial conditions and the solver seems to work quite well. I am not able to predict what property of the matrix could be causing this discrepancy. However:
A has a condition number of 120848.06, which is quite high, although MATLAB doesn't seem to complain. Also, the error of the computed solution relative to the true solution is huge.
How to reproduce this "bug"
Download the .csv files in the following link
Run the following code in the folder containing the files (install the packages if necessary):
using DelimitedFiles, LinearAlgebra, SparseArrays;
A = readdlm("A.csv", ',');
b = readdlm("b.csv", ',');
x = readdlm("x.csv", ',');
A_sparse = sparse(A);
println(norm(A_sparse\b - x)); # You should get something close to zero, x is the solution of the sparse implementation
println(norm(A_sparse*x - b)); # You should get something not close to zero, something is not working!
Final words
It might easily be the case that I'm missing something. Are there any other implementations apart from the usual A\b to test against?
To solve a sparse square system, Julia chooses to do a sparse LU decomposition. For the specific matrix A in the question, this decomposition is numerically ill-conditioned, as evidenced by cond(lu(A_sparse).U) == 2.879548971708896e64. This in turn causes the solve routine to make large numerical errors.
A quick solution is to use a QR decomposition instead, by running x = qr(A_sparse)\b.
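As a sketch of the check (reusing A_sparse and b from the code in the question, and assuming the default factorization choices: sparse LU via UMFPACK for the backslash, sparse QR via SPQR for qr):

using LinearAlgebra, SparseArrays

x_lu = A_sparse \ b        # default path: sparse LU for a square sparse matrix
x_qr = qr(A_sparse) \ b    # sparse QR, more robust for this particular A

println(norm(A_sparse * x_lu - b))   # large residual with the LU-based solve
println(norm(A_sparse * x_qr - b))   # should be close to zero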
The solve or LU routines might need to be fixed to handle this case, or at least the maintainers need to know about this issue, so opening an issue on the GitHub repo might be a good idea.
(this is a rewrite of my comment on the question)
This will be a strange question: I know what to do, and I am actually doing it, and it works, but I don't know how to write about it. Looking for solutions to a homogeneous matrix equation, say AX=0, I use the kernel of the parameter matrix A. But, the world being imperfect as it is, the matrix does not have a "perfect" kernel; it does have an "imperfect" one if you set a nonzero "tolerance" parameter. FWIW I'm using Scilab, the function is kernel(A,tol).
Now what are the correct terms for "imperfect kernel", or "tolerance" (of what?), how should this whole process be described in correct English and maths terminology? Should I say something like a "least-squares kernel"? "Approximate kernel"? Is tol the "tolerance of kernel-determination algorithm"? Sounds lame to me...
Depending on the method used (QR or SVD; a third flag lets you choose this in the Scilab implementation), the tolerance determines when pivots (QR case) or singular values (SVD case) are considered to be zero. The kernel is then taken to be the associated subspace.
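The question is about Scilab's kernel(A, tol), but the SVD variant of the idea is easy to sketch in Julia for illustration (numerical_kernel and its tolerance handling below are my own illustrative names, not Scilab's code):

using LinearAlgebra

# Illustrative sketch: singular values below tol are treated as zero, and the
# right singular vectors associated with them span the approximate kernel.
function numerical_kernel(A, tol)
    F = svd(A; full = true)
    r = count(>(tol), F.S)      # numerical rank: singular values above tol
    return F.V[:, r+1:end]      # basis of the numerical null space
end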
What is the point of the Symmetric type in the LinearAlgebra package of Julia? It seems like it is equivalent to the type Hermitian for real matrices (although: is this true?). If that is true, then the only case for which Symmetric is not redundant with Hermitian is for complex matrices, and it would be surprising to want to have a symmetric as opposed to Hermitian complex matrix (maybe I am mistaken on that though).
I ask this question in part because I sometimes find myself doing casework like this: if I have a real matrix, then use Symmetric; if complex, then Hermitian. It seems though that I could save work by just always using Hermitian. Will I be missing out on performance or otherwise if I do this?
(Also, bonus question that may be related: why is there no HermTridiagonal type in addition to SymTridiagonal? I could use the former. Plus, it seems more useful than SymTridiagonal in consideration of the above.)
To copy the answer from the linked Discourse thread (via @stevengj):
Always use Hermitian. For real elements, there is no penalty compared to Symmetric.
There aren’t any specialized routines for complex Symmetric matrices that I know of. My feeling is that it was probably a mistake to have a separate Symmetric type in LinearAlgebra, but it is hard to remove at this point.
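A quick way to convince yourself of the "no penalty" claim is a sketch like the following (illustrative only):

using LinearAlgebra

A = rand(5, 5); A = A + A'     # a real symmetric matrix
# For real elements the Hermitian and Symmetric wrappers end up in the same
# real-symmetric routines, so the results (and the performance) match.
eigen(Hermitian(A)).values ≈ eigen(Symmetric(A)).values   # true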
(I'm not sure whether I should post this problem on this site or on the math site. Please feel free to migrate this post if necessary.)
My problem at hand is that, given a value of k, I'd like to numerically compute a rational function of nonlinear polynomials in k which looks like the following (written in plain text, since I don't know how to typeset equations here):

f(k) = (a_0 + a_1*e^(i*u_1*k)*k + a_2*e^(i*u_2*k)*k^2 + ...) / (b_0 + b_1*e^(i*v_1*k)*k + b_2*e^(i*v_2*k)*k^2 + ...)

where {a_0, ..., a_N; b_0, ..., b_N} are complex constants, {u_0, ..., u_N, v_0, ..., v_N} are real constants and i is the imaginary unit. I learned from Numerical Recipes that there are a whole bunch of ways to compute polynomials quickly while keeping the rounding error small enough, if all coefficients are constant. But I do not think those ideas are useful in my case, since the exponential prefactors also depend on k.
Currently I calculate it in a brute force way in C with complex.h (this is just a pseudo code):
double complex function(double k)
{
return (a_0+a_1*cexp(I*u_1*k)*k+a_2*cexp(I*u_2*k)*k*k+...)/(b_0+b_1*cexp(I*v_1*k)*k+b_2*cexp(I*v_2*k)*k*k+...);
}
However, when the number of calls to this function increases (because this is just a part of my real calculation), it becomes very slow and inaccurate (only 6 valid digits). I'd appreciate any comments and/or suggestions.
I trust that this isn't a homework assignment!
Normally the trick is to use a loop: add the next coefficient to the running sum, then multiply by k (Horner's method). However, in your case, I think the "e" term in the coefficient is going to overwhelm any savings from factoring out k. You can still do it, but the savings will probably be small.
Is u_i a constant? Depending on how many times you need to run this formula, maybe you could premultiply u_i * k (unless k changes each run). It's been so many decades since I took a Numerical Analysis course that I have only vague recollections of the tricks of the trade. Let's see... is e^(i*u_i*k) the same as (e^(i*u_i))^k? I don't remember the rules on imaginary numbers, or whether you'll save anything since you've got a real^real (assuming k is real) anyway (internally done using e^power).
If you're getting only 6 digits, that suggests that your math, and maybe your library, is working in single precision (32 bit) reals. Check your library and check your declarations that you are using at least double precision (64 bit) reals everywhere.
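For what it's worth, here is a minimal sketch of the running-sum idea, in Julia rather than C purely for illustration; a, b, u, v stand in for the coefficient arrays from the question, the power k^n is accumulated incrementally, and everything stays in double precision.

# Illustrative only: f(k) = (sum_n a_n e^(i u_n k) k^n) / (sum_n b_n e^(i v_n k) k^n)
function f(k, a, b, u, v)
    num = zero(ComplexF64)
    den = zero(ComplexF64)
    kn = 1.0                               # running power k^n, updated once per term
    for n in eachindex(a)
        num += a[n] * cis(u[n] * k) * kn   # cis(x) == exp(im * x)
        den += b[n] * cis(v[n] * k) * kn
        kn *= k
    end
    return num / den
end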
Disclaimer
This is not strictly a programming question, but most programmers sooner or later have to deal with math (especially algebra), so I think that the answer could turn out to be useful to someone else in the future.
Now the problem
I'm trying to check if m vectors of dimension n are linearly independent. If m == n you can just build a matrix using the vectors and check if the determinant is != 0. But what if m < n?
Any hints?
See also this video lecture.
Construct a matrix of the vectors (one row per vector), and perform a Gaussian elimination on this matrix. If any of the matrix rows cancels out, they are not linearly independent.
The trivial case is when m > n: in that case, they cannot be linearly independent.
Construct a matrix M whose rows are the vectors and determine the rank of M. If the rank of M is less than m (the number of vectors) then there is a linear dependence. In the algorithm to determine the rank of M you can stop the procedure as soon as you obtain one row of zeros, but running the algorithm to completion has the added bonus of providing the dimension of the span of the vectors. Oh, and the algorithm to determine the rank of M is merely Gaussian elimination.
Take care for numerical instability. See the warning at the beginning of chapter two in Numerical Recipes.
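In a language with a built-in rank function the whole test is one call; here is an illustrative Julia sketch (Julia's rank counts the singular values above a default tolerance, which also addresses the stability warning above):

using LinearAlgebra

M = [1.0 2.0 0.0 1.0;      # two vectors of R^4 as rows; the second is twice the first
     2.0 4.0 0.0 2.0]
rank(M) == size(M, 1)      # false => the rows are linearly dependent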
If m < n, you will have to do some operation on them (there are multiple possibilities: Gaussian elimination, orthogonalization, etc.; almost any transformation which can be used for solving equations will do) and check the result (e.g. Gaussian elimination => zero row or column, orthogonalization => zero vector, SVD => zero singular value).
However, note that this question is a bad question for a programmer to ask, and this problem is a bad problem for a program to solve. That's because every linearly dependent set of m <= n vectors has a linearly independent set arbitrarily nearby (i.e. the problem is numerically unstable).
I have been working on this problem these days.
Previously, I found some algorithms for Gaussian or Gauss-Jordan elimination, but most of those algorithms only apply to square matrices, not general matrices.
For a general matrix, one of the best references might be this:
http://rosettacode.org/wiki/Reduced_row_echelon_form#MATLAB
You can find both pseudo-code and source code in various languages.
As for me, I translated the Python source code to C++, because the C++ code provided in the above link is rather complex and not well suited to my simulation.
Hope this will help you, and good luck ^^
If computing power is not a problem, probably the best way is to compute the singular values of the matrix. Equivalently, you can find the eigenvalues of M'*M and look at the ratio of the largest to the smallest. If the ratio is not very big, the vectors are independent.
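For illustration, a sketch of this test in Julia (the threshold on the ratio is up to you):

using LinearAlgebra

M = [1.0 2.0 0.0;
     0.0 1.0 1.0]
s = svdvals(M)
s[1] / s[end]   # modest ratio here => the rows are independent; a huge ratio (s[end] ≈ 0) would mean dependence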
Another way to check that m row vectors are linearly independent, when put in a matrix M of size mxn, is to compute
det(M * M^T)
i.e. the determinant of an m x m square matrix. It will be zero if and only if M has linearly dependent rows. However, Gaussian elimination should in general be faster.
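A quick illustration of this Gram-determinant check (Julia, made-up values):

using LinearAlgebra

M = [1.0 2.0 0.0;
     2.0 4.0 0.0]     # second row is twice the first
det(M * M')           # ≈ 0, so the rows are linearly dependent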
Sorry man, my mistake...
The source code provided in the above link turns out to be incorrect; at least, the Python code I have tested and the C++ code I have translated do not generate the right answer all the time (while for the example in the above link, the result is correct :) ).
To test the python code, simply replace the mtx with
[30,10,20,0],[60,20,40,0]
and the returned result would be like:
[1,0,0,0],[0,1,2,0]
which is clearly wrong: the second row is just twice the first, so the correct reduced row echelon form is [1,1/3,2/3,0],[0,0,0,0].
Nevertheless, I have found a way out of this. This time I translated the MATLAB source code of the rref function to C++. You can run MATLAB and use the command "type rref" to get the source code of rref.
Just note that if you are working with really large or really small values, make sure to use the long double datatype in C++. Otherwise, the result will be truncated and inconsistent with the MATLAB result.
I have been conducting large simulations in ns2, and all the observed results are sound.
Hope this will help you and anyone else who has encountered the problem...
A very simple way, though not the most computationally efficient, is to simply remove random rows until m = n and then apply the determinant trick.
m < n: remove rows (make the vectors shorter) until the matrix is square, and then
m = n: check if the determinant is 0 (as you said)
m > n (the number of vectors is greater than their length): they are linearly dependent (always).
The reason, in short, is that you are looking for a nontrivial solution of Av = 0, which is a system of n equations in m unknowns; when m > n there are more unknowns than equations, so a nontrivial solution always exists. For a better explanation, see Wikipedia, which explains it better than I can.