How can I perform vector calculations in Lisp, such as the magnitude of a vector, the norm of a vector, the distance between two points, the dot product, the cross product, and so on?
Thanks.
There are several libraries of bindings to Fortran linear algebra packages like LAPACK and BLAS, such as LLA, the Lisp Linear Algebra library.
Take a look at GSLL (which includes an interface to BLAS), and the underlying grid system. On the other hand, I agree with the above comment in that if the things you mention are all you need, then it's probably faster/easier to write your own.
I think that Tamas Papp's LLA library might have what you want. He recently announced that he plans a rewrite.
All this stuff is incredibly straightforward maths. Calculate it the way you would normally.
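For example (sketched here in C++ for concreteness; the Vec3 type and function names are made up, and the same formulas translate line-for-line into Lisp):

```cpp
#include <cmath>

// Hypothetical 3-D vector type for illustration.
struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Magnitude (Euclidean norm): sqrt(v . v)
double magnitude(Vec3 v) { return std::sqrt(dot(v, v)); }

// Distance between two points: the norm of their difference
double distance(Vec3 a, Vec3 b) {
    Vec3 d{a.x - b.x, a.y - b.y, a.z - b.z};
    return magnitude(d);
}

// Cross product (3-D only)
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
```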
Related
As you probably know, functions can be represented as infinite series. For example, f(x) = cos x can be represented by its Taylor series, cos(x) = 1 - x^2/2! + x^4/4! - ... . My question is whether this is ever used practically in programming, for any kind of application. I know it can be used; I was just wondering whether it actually is in serious projects.
Aside from infinite series, there are other representations for functions which can be useful for computing approximations. Asymptotic series, identities involving other "elementary" functions, and interpolation in a table of values are all used in different contexts. Take a look at Abramowitz & Stegun, "Handbook of Mathematical Functions", to get an idea of the variety of possibilities. Also look at the source code of popular libraries or systems such as R, NumPy, SciPy, or Octave to see what approaches have been used by the authors of that software.
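As an illustration of the table-lookup idea, here's a minimal sketch (the function name, table size, and restriction to [0, pi/2] are all choices made up for the example):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Illustrative only: approximate sin(x) on [0, pi/2] by linear
// interpolation in a precomputed table of N+1 samples.
double sin_table_lookup(double x) {
    constexpr int N = 256;
    constexpr double half_pi = 1.5707963267948966;
    static std::array<double, N + 1> table = [] {
        std::array<double, N + 1> t{};
        for (int i = 0; i <= N; ++i) t[i] = std::sin(half_pi * i / N);
        return t;
    }();
    double t = x / half_pi * N;                    // position in table units
    int i = std::min(static_cast<int>(t), N - 1);  // lower sample index
    double frac = t - i;                           // fractional offset in [0,1]
    return table[i] * (1.0 - frac) + table[i + 1] * frac;
}
```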
Specifically about series approximations for trigonometric functions, I think that might be a reasonable thing to do, but only if the range of the argument is reduced (via identities) so that it is as small as possible.
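Here's a minimal sketch of that idea, assuming only a crude reduction to [-pi, pi]; a production implementation would reduce further (e.g., to [-pi/4, pi/4] via symmetries) so fewer terms are needed:

```cpp
#include <cmath>

// Sketch: reduce the argument to [-pi, pi] using periodicity, then
// sum a truncated Taylor series for cos.
double cos_series(double x, int terms = 12) {
    const double pi = 3.141592653589793;
    x = std::fmod(x, 2.0 * pi);          // cos has period 2*pi
    if (x >  pi) x -= 2.0 * pi;
    if (x < -pi) x += 2.0 * pi;

    // cos(x) = 1 - x^2/2! + x^4/4! - ...; each term is the previous
    // one times -x^2 / ((2n-1)(2n))
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < terms; ++n) {
        term *= -x * x / ((2.0 * n - 1.0) * (2.0 * n));
        sum  += term;
    }
    return sum;
}
```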
Approximation of functions is a great topic; good luck and have fun.
I'm currently working on a math library.
It currently supports several matrix operations:
- Plus
- Product
- Dot
- Get & Set
- Transpose
- Multiply
- Determinant
I always want to generalize everything I can.
I was thinking about a recursive way to implement the transpose of a matrix, but I just couldn't figure it out.
Can anybody help?
I would advise you against trying to write a recursive method to transpose a matrix.
The idea is easy:
transpose(A)(i,j) = A(j,i)
Recursion isn't hard in this case. You can see the stopping condition: a 1x1 matrix, with a single value, is its own transpose. Build it up from there for 2x2, and so on.
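To make the idea concrete, here's a deliberately naive sketch (in C++ for illustration; the helper name is invented, and b must be pre-sized to cols x rows) that recurses over one element at a time:

```cpp
#include <vector>
using Matrix = std::vector<std::vector<double>>;

// Naive recursion over the flattened element index k; the base case
// is "no elements left". Note the recursion depth is rows*cols.
void transposeRec(const Matrix& a, Matrix& b, std::size_t k = 0) {
    std::size_t rows = a.size(), cols = a[0].size();
    if (k == rows * cols) return;        // all elements placed
    std::size_t i = k / cols, j = k % cols;
    b[j][i] = a[i][j];                   // transpose(A)(j,i) = A(i,j)
    transposeRec(a, b, k + 1);
}
```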
The problem is that this will be terribly inefficient, both in terms of stack depth and memory, for any matrix beyond a trivial size. People who apply linear algebra to real problems can require tens of thousands, or even billions, of degrees of freedom.
You also don't address meaningful, practical cases like sparse or banded matrices.
You're better off doing it with a straightforward declarative approach, like the sketch below.
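Something like this (again in C++; a JavaScript version is a direct translation):

```cpp
#include <vector>
using Matrix = std::vector<std::vector<double>>;

// Straightforward double loop: allocate the result once and copy.
Matrix transpose(const Matrix& a) {
    std::size_t rows = a.size(), cols = a[0].size();
    Matrix b(cols, std::vector<double>(rows));
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            b[j][i] = a[i][j];
    return b;
}
```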
Haskell uses BLAS as its backing implementation, and it's a more functional language than JavaScript. Perhaps you could crib some ideas by looking at the source code.
I'd recommend that you do the simple thing first, get it all working, and then branch out from there.
Here's a question to ask yourself: Why would anyone want to do serious numerical work using JavaScript? What will your library offer that's an improvement on what's available?
If you want to learn how to reinvent wheels, by all means proceed. Just understand that you aren't the first.
I did some basic reading on Eigen and BLAS. Both libraries support matrix-matrix and matrix-vector multiplication. I don't understand which one I should use in which case; to me it seems both have almost the same performance. It would be nice if someone could point me to a resource, or just tell me what advantages one library has over the other, or how the two differ for matrix and vector manipulation. Thanks in advance.
Use Eigen; it's more complete and much easier to use. Then, if you wonder whether another fully optimized BLAS implementation could give you higher performance, just recompile your code with -DEIGEN_USE_BLAS, link to your favorite BLAS, and see for yourself.
Also, when using Eigen, don't forget to enable compiler optimizations, e.g. -O3, and the instruction sets your hardware supports, e.g. -mavx -mfma with recent versions of Eigen.
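For instance, a trivial snippet like the following runs unchanged either way; only the build flags differ (e.g. g++ -O3 -mavx -mfma, optionally adding -DEIGEN_USE_BLAS and linking your BLAS of choice):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // The same source uses Eigen's built-in kernels by default, or,
    // when compiled with -DEIGEN_USE_BLAS and linked against a BLAS
    // library, lets that BLAS do the heavy lifting.
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(1000, 1000);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(1000, 1000);
    Eigen::VectorXd v = Eigen::VectorXd::Random(1000);

    Eigen::MatrixXd C = A * B;   // matrix-matrix product
    Eigen::VectorXd w = A * v;   // matrix-vector product

    std::cout << w.norm() << "\n";
}
```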
So the answer to this question is in the Eigen FAQ:
http://eigen.tuxfamily.org/index.php?title=FAQ#How_does_Eigen_compare_to_BLAS.2FLAPACK.3F
More or less, I mostly use Eigen, because it has a comfortable interface. If you need speed and multicore parallelism, or have only a little, but time-consuming, linear algebra in your code, go for GotoBLAS2. It is usually fastest on Intel machines.
Are there any software tools for performing arithmetic on very large numbers in parallel? What I mean by parallel is that I want to use all available cores on my computer for this.
The constraints are wide open for me. I don't mind trying any language or tech.
Please and thanks.
It seems like you are either dividing really huge numbers, or using a suboptimal algorithm. Parallelizing across a fixed number of cores will only tweak the constants, but will have no effect on the asymptotic behavior of your operation. And if you're talking about hours for a single division, asymptotic behavior is what matters most. So I suggest you first make sure your asymptotic complexity is as good as it can be, and then start looking for ways to improve the constants, perhaps by parallelizing.
Wikipedia suggests Barrett division, and GMP has a variant of that. I'm not sure whether what you've tried so far is on a similar level, but unless you are sure that it is, I'd give GMP a try.
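If you want to try it, GMP's mpz layer makes this a few lines; the operands below are arbitrary stand-ins for your huge numbers:

```cpp
#include <gmp.h>
#include <cstdio>

int main() {
    mpz_t a, b, q;
    mpz_inits(a, b, q, NULL);

    // a = 7^200000, b = 3^100000 -- placeholders for "very large numbers"
    mpz_ui_pow_ui(a, 7, 200000);
    mpz_ui_pow_ui(b, 3, 100000);

    mpz_tdiv_q(q, a, b);          // truncated quotient a / b

    // Print the size of the result rather than the full number
    std::printf("quotient has %zu decimal digits\n",
                mpz_sizeinbase(q, 10));

    mpz_clears(a, b, q, NULL);
}
```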
See also Parallel Modular Multiplication on Multi-core Processors for recent research. Haven't read into that myself, though.
The only effort I am aware of is a CUDA library called CUMP. However, the library only provides support for addition, subtraction, and multiplication. Still, you can implement division in terms of multiplication (e.g., by Newton-Raphson iteration on the reciprocal) on the GPU, and check whether the accuracy of the result is sufficient for your particular problem.
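To sketch the multiplication-only division idea, here is the Newton-Raphson reciprocal iteration in plain doubles (the function name and iteration count are illustrative; a real bignum version would first scale d into [1, 2)):

```cpp
#include <cstdio>

// Newton-Raphson for the reciprocal of d: iterate
//   y <- y * (2 - d * y)
// which converges quadratically to 1/d; then a/d = a * (1/d).
// Only multiplications and subtractions are needed, which is why the
// scheme maps onto a multiply-only bignum library.
double divide_via_multiply(double a, double d) {
    double y = 0.5;                  // crude initial guess for 1/d, d in [1,2)
    for (int i = 0; i < 6; ++i)      // each step roughly doubles the
        y = y * (2.0 - d * y);       // number of correct digits
    return a * y;
}

int main() {
    std::printf("%f\n", divide_via_multiply(10.0, 1.6));  // ~6.25
}
```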
What are some of the better libraries for large sparse iterative (conjugate gradient, MINRES, GMRES, etc.) linear algebra system solving? I've often coded my own routines, but I'm interested to know which "off-the-shelf" packages people prefer. I've heard of PETSc, TAUCS, IML++, and a few others. I'm wondering how these stack up, and what else is out there. My preference is for ease of use, and freely available software.
Victor Eijkhout's Overview of Iterative Linear System Solver Packages would probably be a good place to start.
You may also wish to look at Trilinos (http://trilinos.sandia.gov/). It is designed by some great software craftsmen, using modern design techniques. Moreover, from within Trilinos you can call PETSc if you desire.
NIST has some sparse linear algebra software you can download here: http://math.nist.gov/sparselib++/ and here: http://math.nist.gov/spblas/
I haven't used those packages myself, but I've heard good things about them.
http://www.cise.ufl.edu/research/sparse/umfpack/
UMFPACK is a set of routines for solving unsymmetric sparse linear systems, Ax=b, using the Unsymmetric MultiFrontal method. Written in ANSI/ISO C, with a MATLAB (Version 6.0 and later) interface. Appears as a built-in routine (for lu, backslash, and forward slash) in MATLAB. Includes a MATLAB interface, a C-callable interface, and a Fortran-callable interface. Note that "UMFPACK" is pronounced in two syllables, "Umph Pack". It is not "You Em Ef Pack".
I'm using it for FEM code.
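For reference, here is the C-callable interface in action; this is essentially the 5x5 example from the UMFPACK QuickStart guide, with the matrix given in compressed-column form (Ap/Ai/Ax):

```cpp
#include <umfpack.h>
#include <stdio.h>

int main(void) {
    /* 5x5 system in compressed-column form, from the UMFPACK docs */
    int    n    = 5;
    int    Ap[] = {0, 2, 5, 9, 10, 12};
    int    Ai[] = {0, 1, 0, 2, 4, 1, 2, 3, 4, 2, 1, 4};
    double Ax[] = {2., 3., 3., -1., 4., 4., -3., 1., 2., 2., 6., 1.};
    double b[]  = {8., 45., -3., 3., 19.};
    double x[5];
    void *Symbolic, *Numeric;

    /* symbolic analysis, numeric factorization, then solve Ax = b */
    umfpack_di_symbolic(n, n, Ap, Ai, Ax, &Symbolic, NULL, NULL);
    umfpack_di_numeric(Ap, Ai, Ax, Symbolic, &Numeric, NULL, NULL);
    umfpack_di_free_symbolic(&Symbolic);
    umfpack_di_solve(UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, NULL, NULL);
    umfpack_di_free_numeric(&Numeric);

    for (int i = 0; i < n; i++)
        printf("x[%d] = %g\n", i, x[i]);   /* expected: 1 2 3 4 5 */
    return 0;
}
```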
I would check out Microsoft's Solver Foundation. It ranges from free to cheap for even pretty big problems. The unlimited version is industrial strength; it is based on Gurobi and, of course, isn't cheap.
http://code.msdn.microsoft.com/solverfoundation