Applications of Dense Linear Algebra

What are the common real-world applications of Dense Linear Algebra?
Many problems can be easily described and efficiently computed using linear algebra as a common language between human and computer. More often than not, though, these problems require solving sparse systems, not dense ones. What are common applications that defy this rule?
I'm curious if the community should invest further time to improve DLA packages like LAPACK. Who uses LAPACK in a computationally constrained application? Who uses LAPACK to solve large problems requiring parallelism?
Specifically, what are problems that cannot be solved today due to insufficient dense linear algebra capabilities?

This depends on what you mean by real-world. Real-world for me is physics, so I'll describe the physics applications first and then branch out. In physics we often have to find the eigenvalues and eigenvectors of a matrix called the Hamiltonian (it basically contains information about the energy of a system). These matrices can be dense, at least in blocks, and those blocks can be quite large. This brings up another point: sparse matrices can be dense in blocks, and then it is best to use a dense linear algebra solver for each of the blocks.
There is also something called the density matrix of a system, which can be found using the eigenvectors of the Hamiltonian. In one algorithm that I use, we are often finding the eigenvectors/values of these density matrices, and the density matrices are dense, at least in blocks.
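Finding the eigenvalues and eigenvectors of a dense Hermitian block like this is exactly what a dense eigensolver does. Here is a minimal sketch using NumPy (which calls LAPACK's symmetric eigensolvers underneath) on a made-up 3x3 "Hamiltonian" block; the values are purely illustrative:

```python
import numpy as np

# Hypothetical dense symmetric "Hamiltonian" block (values made up for illustration)
H = np.array([[ 2.0, -1.0,  0.5],
              [-1.0,  3.0, -1.0],
              [ 0.5, -1.0,  2.5]])

# eigh exploits symmetry/Hermiticity and returns eigenvalues in ascending order
energies, states = np.linalg.eigh(H)

# Each column of `states` is an eigenvector; the eigenvalues are the energies.
# Sanity check: H v = E v for every eigenpair.
assert np.allclose(H @ states, states * energies)
```

For a block-sparse matrix, one would run this solver on each dense block independently.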
Dense linear algebra is used in material science and hydrodynamics as well, as mentioned in this article. This also relates to quantum chemistry, which is another area in which they are used.
Dense linear algebra routines have also been used to solve quantum scattering of charged particles (it doesn't say so in the linked article, but it was used) and to analyze the Cosmic Microwave Background. More broadly, it is used in solving an array of electromagnetic problems relating to real-world things like antenna design, medical equipment design, and determining/reducing the radar signature of a plane.
Another very real-world application is curve fitting. However, there are ways of doing it other than linear algebra, some with broader scope.
In summary, dense linear algebra is used in a variety of applications, most of which are science- or engineering-related.
As a side note, many people have previously and are presently putting a great deal of effort into dense linear algebra libraries including ones that use graphics cards to do the computations.

Many methods for linear regression require heavy lifting on big, dense data matrices. The most straightforward example I can think of is linear least squares using the Moore-Penrose pseudoinverse.
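As a sketch of that (with made-up data): fitting a straight line by linear least squares, once via `np.linalg.lstsq` and once via the explicit Moore-Penrose pseudoinverse. Both routes go through dense factorizations (SVD) of the design matrix:

```python
import numpy as np

# Fit y ≈ c0 + c1*x by linear least squares on a dense design matrix.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

A = np.column_stack([np.ones_like(x), x])   # dense design matrix

# lstsq minimizes ||A c - y||_2 via the SVD
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Equivalent, more explicit form using the pseudoinverse directly
coeffs_pinv = np.linalg.pinv(A) @ y
```

In practice `lstsq` is preferred over forming the pseudoinverse explicitly, for the same reasons one avoids forming an explicit inverse.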

Sparse solvers might be more useful in the long run, but dense linear algebra is crucial to the development of sparse solvers, and can't really be neglected:
Dense systems are often an easier domain in which to do algorithmic development, because there's one less thing to worry about.
The size at which sparse solvers become faster than the best dense solvers (even for very sparse matrices) is much larger than most people think it is.
The fastest sparse solvers are generally built on the fastest dense linear algebra operations.

In some sense a special case of Andrew Cone's example, but Kalman filters (e.g. here) typically have a dense state error covariance matrix, though the observation model matrix and transition matrices may be sparse.
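A toy sketch of why the covariance stays dense, using a minimal (purely illustrative) constant-velocity filter: even though F and H have sparse structure, the products F P Fᵀ and (I - K H) P fill in the off-diagonal entries of P:

```python
import numpy as np

# Toy 1-D constant-velocity Kalman filter; all names and values are illustrative.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # transition matrix (sparse-ish structure)
H = np.array([[1.0, 0.0]])      # we observe position only
Q = 0.01 * np.eye(2)            # process noise covariance
R = np.array([[0.25]])          # measurement noise covariance

x = np.zeros(2)                 # state estimate
P = np.eye(2)                   # state error covariance: generically dense

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q         # dense matrix products keep P dense
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kalman_step(x, P, np.array([1.2]))
# After a single step, P already has nonzero off-diagonal entries.
```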


Does Fourier Series have any application related to computer science?

We have Fourier Series and several other chapters like Fourier Integrals and Transforms, Ordinary Differential Equations, and Partial Differential Equations in my course.
I am pursuing a Bachelor's degree in Computer Science & Engineering. Never having been fond of mathematics, I am a little curious to know where this can be useful for me.
The Fourier transform is a brilliant algorithm, and it has quite a lot of use cases. Signal processing is the most significant among them.
Here are some use cases:
You can separate a song into its individual frequencies and boost the ones you care about
Used for compression (audio, for instance)
Used to predict earthquakes
Used to analyse DNA
Used to build apps like Shazam which identify what song is playing
Used in kinesiology to predict muscle fatigue by analysing muscle signals (in short, the signal frequency variations can be fed to a machine learning algorithm, which could predict the type of fatigue and so on)
I guess this will give you an idea of how important it is.
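As a small illustration of the first use case (all values made up), NumPy's FFT recovers the two tones hidden in a synthetic signal:

```python
import numpy as np

# Build a signal containing two tones, then recover their frequencies with an FFT.
fs = 1000                          # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)      # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))           # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)   # frequency of each bin

# The two largest peaks sit exactly at the tones we put in (50 Hz and 120 Hz)
peaks = freqs[np.argsort(spectrum)[-2:]]
```

To "boost the frequencies you care about," you would scale the corresponding bins and apply the inverse transform (`np.fft.irfft`).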

Graph Processing - Vertex Centric Model vs Matrix-Vector Multiplication

Vertex-centric and matrix-vector multiplication are two of the best-known models for processing graph-structured data. I am looking for a comparison between them: which one is better, and in what terms?
The comparison can be in terms of performance, expressiveness (the number of algorithms that can be implemented), scalability, and any other aspect I am missing to list here :)
I have been looking around but could not find a comparison between the two approaches.
Thanks in advance

Solve for Ax=B where A is a sparse matrix in openCL

Does anyone know of a library or example of OpenCL code which will solve Ax=B where A is large and sparse? I do not want to calculate the inverse of A, as it would be very large and dense. The A matrix is >90% sparse, and computing just x is likely far less memory- and computationally intensive.
The following post will help me on the CPU and looks like a good option but I really need the speedup of the GPU for this application.
C++ Memory Efficient Solution for Ax=b Linear Algebra System
What you are looking for is a Sparse Linear System Solver. For OpenCL take a look at ViennaCL: http://viennacl.sourceforge.net/
It has Conjugate Gradient, Stabilized Bi-Conjugate Gradient, Generalized Minimum Residual solvers.
However, if you want to solve it efficiently, you may need a multigrid method. Take a look at: http://www.paralution.com/
PARALUTION is a library that provides various sparse iterative solvers and preconditioners on multi-/many-core CPU and GPU devices.
There is also SpeedIT OpenCL:
This version of SpeedIT utilizes the power of the OpenCL framework, which allows it to use the computational power of suitable GPUs. The SpeedIT OpenCL library provides a set of accelerated solvers and functions for sparse linear systems of equations, which are:
• Preconditioned Conjugate Gradient solver
• Preconditioned Stabilized Bi-Conjugate Gradient solver
• Accelerated Sparse Matrix-Vector Multiplication
• Preconditioners:
◦ Jacobi diagonal
◦ Row scaling with norms l1, l2 and l∞
◦ ILU0 – Incomplete LU with zero fill-in
You can solve linear simultaneous equations of the form AX=B using Sequalator.
You can use either the OpenCL functionality or the multi-threaded CPU functionality, as per your hardware.
You can also analyse the solutions to get an understanding of the errors in the equations after substituting the solutions.

Most efficient way to solve SEVERAL linear systems Ax=b with SMALL A (minimum 3x3 maximum 8x8)

I need to solve, thousands of times, SMALL linear systems of the type Ax=b. Here A is a matrix no smaller than 3x3 and no larger than 8x8. I am aware of this http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ so I don't think it is smart to invert the matrix, even if the matrices are small, right? So what is the most efficient way to do that? I am programming in Fortran, so probably I should use the LAPACK library, right? My matrices are full and in general non-symmetric.
Thanks
A.
Caveat: I didn't look into this extensively, but I have some experience I am happy to share.
In my experience, the fastest way to solve a 3x3 system is to basically use Cramer's rule. If you need to solve multiple systems with the same matrix A, it pays to pre-compute the inverse of A. This is only true for 2x2 and 3x3.
If you have to solve multiple 4x4 systems with the same matrix, then again using the inverse is noticeably faster than the forward- and back-substitution of LU. I seem to remember that it uses fewer operations, and in practice the difference is even larger (again, in my experience). As the matrix size grows, the difference shrinks, and asymptotically it disappears. If you are solving systems with different matrices, then I don't think there is an advantage in computing the inverse.
In all cases, solving the system with the inverse can be much less accurate than using the LU decomposition if A is fairly ill-conditioned. So if accuracy is an issue, LU factorization is definitely the way to go.
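A sketch of the Cramer's-rule approach for a 3x3 system (the matrix here is made up, and `np.linalg.det` is used for clarity; in performance-critical code you would hand-expand the 3x3 determinants instead):

```python
import numpy as np

# Cramer's rule for a 3x3 system: x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by b.
def solve3_cramer(A, b):
    det = np.linalg.det(A)
    x = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b          # swap column i for the right-hand side
        x[i] = np.linalg.det(Ai) / det
    return x

A = np.array([[3.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = solve3_cramer(A, b)
```

As the answer notes, this only pays off for very small sizes (2x2, 3x3), and accuracy suffers for ill-conditioned A.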
The LU factorization sounds like just the ticket for you, and the lapack routine dgetrf will compute this for you, after which you can use dgetrs to solve that linear system. Lapack has been optimized to the gills over the years, so in all likelihood you are better using that than writing any of this code yourself.
The computational cost of computing the matrix inverse and then multiplying it by the right-hand-side vector is at least as high as computing the LU factorization of the matrix and then forward- and back-solving for your answer. Moreover, computing the inverse exhibits even more bizarre pathological behavior than computing the LU factorization, whose stability is itself a fairly subtle issue. It can be useful to know the inverse for small matrices, but it sounds like you don't need that for your purpose, so why do it?
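The factor-once, solve-many pattern described above might look like this in SciPy, which wraps the same LAPACK dgetrf/dgetrs routines mentioned in the answer (sizes and data are illustrative; in Fortran you would call dgetrf/dgetrs directly):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor A once (LAPACK dgetrf under the hood), then reuse the factors
# for many right-hand sides (dgetrs), instead of ever forming A^{-1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))     # dense, non-symmetric, 8x8

lu, piv = lu_factor(A)              # O(n^3) work, done once

for _ in range(1000):               # thousands of cheap O(n^2) solves
    b = rng.standard_normal(8)
    x = lu_solve((lu, piv), b)
```

If each system has a *different* matrix, the factorization cannot be amortized, but the LU route is still the numerically safer default.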
Moreover, provided there are no loop-carried dependencies, you can parallelize this using OpenMP without too much trouble.

Which particular software development tasks have you used math for? And which branch of math did you use?

I'm not looking for a general discussion on if math is important or not for programming.
Instead I'm looking for real world scenarios where you have actually used some branch of math to solve some particular problem during your career as a software developer.
In particular, I'm looking for concrete examples.
I frequently find myself using De Morgan's theorem, as well as general Boolean algebra, when trying to simplify conditionals.
I've also occasionally written out truth tables to verify changes, as in the example below (found during a recent code review)
(showAll and s.ShowToUser are both of type bool.)
// Before
(showAll ? (s.ShowToUser || s.ShowToUser == false) : s.ShowToUser)
// After!
showAll || s.ShowToUser
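The truth-table check itself is easy to mechanize. A small sketch that exhaustively verifies the simplification above (the two functions mirror the before/after conditionals):

```python
from itertools import product

# Mirror the original conditional: showAll ? (s.ShowToUser || s.ShowToUser == false) : s.ShowToUser
def before(show_all, show_to_user):
    return (show_to_user or show_to_user == False) if show_all else show_to_user

# The simplified form: showAll || s.ShowToUser
def after(show_all, show_to_user):
    return show_all or show_to_user

# Exhaustively compare all four rows of the truth table
for show_all, show_to_user in product([False, True], repeat=2):
    assert bool(before(show_all, show_to_user)) == bool(after(show_all, show_to_user))
```

For two booleans there are only four rows, so exhaustive checking is trivial; it scales as 2^n in the number of variables.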
I also used some basic right-angle trigonometry a few years ago when working on some simple graphics - I had to rotate and centre a text string along a line that could be at any angle.
Not revolutionary...but certainly maths.
Linear algebra for 3D rendering and also for financial tools.
Regression analysis for the same financial tools, like correlations between financial instruments and indices, and such.
Statistics: I had to write several methods to compute statistical values, like the F probability distribution and the Pearson product-moment coefficient, plus some linear algebra, correlations, interpolations and extrapolations for implementing the Arbitrage Pricing Theory for asset pricing and stocks.
Discrete math for everything, linear algebra for 3D, analysis for physics especially for calculating mass properties.
[Linear algebra for everything]
Projective geometry for camera calibration
Identification of time series / statistical filtering for sound & image processing
(I guess) basic mechanics and hence calculus for game programming
Computing sizes of caches to optimize performance. Not as simple as it sounds when this is your critical path, and you have to go back and work out the times saved by using the cache relative to its size.
I'm in medical imaging, and I use mostly linear algebra and basic geometry for anything related to 3D display, anatomical measurements, etc...
I also use numerical analysis for handling real-world noisy data, and a good deal of statistics to prove algorithms, design support tools for clinical trials, etc...
Games with trigonometry and AI with graph theory in my case.
Graph theory to create a weighted graph to represent all possible paths between two points and then find the shortest or most efficient path.
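A minimal sketch of that shortest-path computation: Dijkstra's algorithm on a made-up weighted road graph (node names and weights are illustrative):

```python
import heapq

# Dijkstra's shortest paths from `start` over a weighted adjacency-list graph.
def dijkstra(graph, start):
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip it
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

roads = {
    "A": [("B", 4), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
dist = dijkstra(roads, "A")   # e.g. shortest A→D goes A→C→B→D with cost 6
```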
Also statistics for plotting graphs and risk calculations. I used both normal distribution and cumulative normal distribution calculations. These are pretty commonly used functions in Excel, I would guess, but I actually had to write them myself since there is no built-in support in the .NET libraries. Sadly the built-in Math support in .NET seems pretty basic.
I've used trigonometry the most, and also a small amount of calculus, working on overlays for GIS (mapping) software, comparing objects in 3D space, and converting between coordinate systems.
A general mathematical understanding is very useful if you're using 3rd-party libraries to do calculations for you, as you often need to appreciate their limitations.
I often use math and programming together, but the goal of my work IS the math, so I use software to achieve that.
As for the math I use: mostly calculus (FFTs for analysing continuous and discrete signals), with a splash of linear algebra (CORDIC) to do trig on an MCU with no floating-point unit.
I used analytic geometry for a simple 3D engine in OpenGL in a hobby project in high school.
I also used some geometry computations for dynamically printed reports, where the layout was rotated by 90°.
A year ago I used some derivatives and integrals for store analysis (product item movement in a store).
But all of these computations can be found on the internet or in a high-school book.
Statistics (mean, standard deviation) for our analysts.
Linear algebra - particularly Gauss-Jordan elimination - and
Calculus - derivatives in the form of difference tables for generating polynomials from a table of (x, f(x)) values.
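A sketch of the difference-table idea: for polynomial data sampled at equally spaced points, the k-th forward differences are constant, which lets you extrapolate the next value without ever fitting coefficients (the data here is illustrative):

```python
# Build a forward-difference table: each row holds the pairwise differences
# of the row above it.
def difference_table(ys):
    table = [list(ys)]
    while len(table[-1]) > 1:
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    return table

ys = [x**2 for x in range(5)]       # f(x) = x^2 at x = 0..4 -> [0, 1, 4, 9, 16]
table = difference_table(ys)

# Second differences of a quadratic are constant, so the next value is
# the last value plus the last first-difference plus that constant.
next_y = ys[-1] + table[1][-1] + table[2][-1]   # 16 + 7 + 2 = 25 = 5^2
```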
Linear algebra and complex analysis in electronic engineering.
Statistics in analysing data and translating it into other units (different project).
I used probability and log odds (log of the ratio of two probabilities) to classify incoming emails into multiple categories. Most of the heavy lifting was done by my colleague Fidelis Assis.
Real world scenarios: better rostering of staff, more efficient scheduling of flights, shortest paths in road networks, optimal facility/resource locations.
Branch of maths: Operations Research. Vague definition: construct a mathematical model of a (normally complex) real-world business problem, and then use mathematical tools (e.g. optimisation, statistics/probability, queuing theory, graph theory) to interrogate this model to aid in making effective decisions (e.g. minimise cost, maximise efficiency, predict outcomes, etc.).
Statistics for scientific data analyses such as:
calculation of distributions, z-standardisation
Fisher's Z
Reliability (alpha, kappa, Cohen)
Discriminant analysis
scale aggregation, pooling, etc.
In actual software development I've only really used quite trivial linear algebra, geometry and trigonometry. Certainly nothing more advanced than the first college course in each subject.
I have however written lots of programs to solve really quite hard math problems, using some very advanced math. But I wouldn't call any of that software development since I wasn't actually developing software. By that I mean that the end result wasn't the program itself, it was an answer. Basically someone would ask me what is essentially a math question and I'd write a program that answered that question. Sure I’d keep the code around for when I get asked the question again, and sometimes I’d send the code to someone so that they could answer the question themselves, but that still doesn’t count as software development in my mind. Occasionally someone would take that code and re-implement it in an application, but then they're the ones doing the software development and I'm the one doing the math.
(Hopefully this new job I've started will actually let me do both, so we'll see how that works out.)
