OpenGL Matrices VS DirectX Matrices - math

I have only ever worked with DirectX matrices.
I have read articles saying that you cannot use DirectX matrix math libraries with OpenGL matrices,
but I have also read that if your math is consistent you can get the same results. That just confuses me even more.
Can anyone enlighten me? If you really cannot use DirectX math with OpenGL matrices, does anyone know a good OpenGL matrix library?
Any help in understanding the differences and the math behind them would be appreciated.

I have read articles saying that you cannot use DirectX matrix math libraries with OpenGL matrices.
This is wrong.
but I have also read that if your math is consistent you can get the same results.
The only difference lies in the default assumptions OpenGL and DirectX make about their normalized device coordinate (NDC) space: they use slightly different depth ranges and signs (OpenGL maps clip-space depth to [-1, 1], DirectX to [0, 1]). This only amounts to premultiplying one additional "adaptor" matrix onto your transformation stack, and you're done.
Index ordering may also matter, but modern OpenGL lets you choose which ordering is used via the "transpose" parameter of glUniformMatrix.
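The "adaptor" matrix idea above can be sketched in a few lines. This is a minimal illustration in Python (not OpenGL or DirectX API code), assuming the usual conventions that OpenGL clip-space depth spans [-1, 1] and Direct3D's spans [0, 1]:

```python
# Sketch: an "adaptor" matrix that remaps OpenGL-style clip-space depth
# (z in [-1, 1]) to Direct3D-style clip-space depth (z in [0, 1]).
# Premultiplying it onto an OpenGL projection matrix yields a matrix
# that follows the Direct3D depth convention (index ordering aside).

adaptor = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],   # z' = 0.5*z + 0.5*w
    [0.0, 0.0, 0.0, 1.0],
]

def mat_vec4(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# A point on the OpenGL near plane (z = -w) lands at z = 0, and one on
# the far plane (z = +w) lands at z = w, i.e. [0, 1] after the
# perspective divide.
near = mat_vec4(adaptor, [0.0, 0.0, -1.0, 1.0])
far  = mat_vec4(adaptor, [0.0, 0.0,  1.0, 1.0])
```

The rest of the pipeline (model, view, projection) is untouched; only this one extra matrix adapts between the two conventions.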

I'd like to add to datenwolf's great answer by making it clearer that the key perceived difference between OpenGL's matrices and DirectX's matrices is that they are stored in column-major and row-major order, respectively. (Column- and row-major refer to how you would write them out in 4x4 form: if the translation components appear in the fourth column, that is column-major, and vice versa for row-major.)
Mathematicians would tell you that column-major is the proper way to represent a matrix, primarily because it makes operations on them, visually (on paper), easier.
When it comes to OpenGL, however, the entire matrix is laid out contiguously in memory as a 16-element array, with the translation occupying the 13th, 14th, and 15th elements (indices 12-14), according to the specification. Thus, it's easy to adapt the data to either matrix format.
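That memory layout can be made concrete with a small sketch (plain Python, the 16-element array standing in for what you would hand to the GL):

```python
# Column-major layout as described above: the 16-element array stores
# the matrix column by column, so the translation (tx, ty, tz) sits at
# indices 12, 13, 14 (the 13th-15th elements).

tx, ty, tz = 10.0, 20.0, 30.0
m = [1.0, 0.0, 0.0, 0.0,   # column 0
     0.0, 1.0, 0.0, 0.0,   # column 1
     0.0, 0.0, 1.0, 0.0,   # column 2
     tx,  ty,  tz,  1.0]   # column 3: the translation

# Reinterpreting the same 16 floats as row-major is just a transpose:
rows = [[m[c * 4 + r] for c in range(4)] for r in range(4)]
# rows[0] is now [1.0, 0.0, 0.0, 10.0] -- translation in the fourth column
```

Since both APIs consume the same flat 16-element array, "converting" between the conventions is only a matter of how you index it (or of passing transpose = GL_TRUE to glUniformMatrix).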

8 point algorithm for estimating Fundamental Matrix

I'm watching a lecture about estimating the fundamental matrix for use in stereo vision using the 8 point algorithm. I understand that once we recover the fundamental matrix between two cameras we can compute the epipolar line on one camera given a point on the other. To my understanding this epipolar line (after it's been rectified) makes it easy to find feature correspondences, because we are simply matching features along a 1D line.
The confusion comes from the fact that the 8-point algorithm itself requires at least 8 feature correspondences to estimate the fundamental matrix.
So, we are finding point correspondences to recover a matrix that is used to find point correspondences?
This seems like a chicken-egg paradox so I guess I'm misunderstanding something.
The fundamental matrix can be precomputed. This leads to two advantages:
You can use a nice environment in which features can be matched easily (like using a chessboard) to compute the fundamental matrix.
You can use more computationally expensive operations like a sequence of SIFT, FLANN and RANSAC across the entire image since you only need to do that once.
After obtaining the fundamental matrix, you can find correspondences in a noisy environment far more efficiently than with the expensive method you used while computing the fundamental matrix in the first place.
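The payoff of precomputing F can be sketched in a few lines. This is an illustrative example, not a calibrated result: the hypothetical F below is the one you would get for a pure horizontal translation between two identical cameras (F = [t]_x with t = (1, 0, 0)), which makes the epipolar lines horizontal:

```python
# Sketch: using a precomputed fundamental matrix F to reduce matching
# to a 1-D search. F here is the hypothetical F = [t]_x for a pure
# horizontal translation between identical cameras (assumed for
# illustration only).

def mat_vec(F, x):
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

F = [[0, 0,  0],
     [0, 0, -1],
     [0, 1,  0]]

x1 = [120.0, 45.0, 1.0]      # point in the left image (homogeneous)
line = mat_vec(F, x1)        # epipolar line a*u + b*v + c = 0 on the right

# Any true correspondence x2 must satisfy x2 . (F x1) = 0. For this F
# the line is v = 45, so the match is searched along that single row.
x2 = [80.0, 45.0, 1.0]
residual = sum(l * c for l, c in zip(line, x2))
```

So the cheap per-feature work at runtime is one 3x3 matrix-vector product plus a 1-D search, while the expensive SIFT/FLANN/RANSAC pipeline runs only once, up front.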

How to transpose a high-degree tensor in FP way?

I'm currently working on a math library.
It's now supporting several matrix operations:
- Plus
- Product
- Dot
- Get & Set
- Transpose
- Multiply
- Determinant
I always want to generalize everything I can.
I was thinking about a recursive way to implement the transpose of a matrix, but I just couldn't figure it out.
Can anybody help?
I would advise you against trying to write a recursive method to transpose a matrix.
The idea is easy:
transpose(A)(i, j) = A(j, i)
Recursion isn't hard in this case. You can see the stopping condition: a 1x1 matrix with a single value is its own transpose. Build it up for 2x2, etc.
The problem is that this will be terribly inefficient, both in terms of stack depth and memory, for any matrix beyond a trivial size. People who apply linear algebra to real problems can require tens of thousands to billions of degrees of freedom.
You don't mention meaningful, practical cases like sparse or banded matrices, etc.
You're better off doing it using a straightforward declarative approach.
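The declarative approach is a one-liner. Sketched here in Python (the same comprehension-style idea carries over directly to JavaScript's map):

```python
# A straightforward declarative transpose: element (i, j) of the result
# is element (j, i) of the input. No recursion, no mutation.

def transpose(a):
    rows, cols = len(a), len(a[0])
    return [[a[j][i] for j in range(rows)] for i in range(cols)]

m = [[1, 2, 3],
     [4, 5, 6]]
t = transpose(m)   # [[1, 4], [2, 5], [3, 6]]
```

This runs in O(rows * cols) with no stack growth, and transposing twice gives back the original matrix, which makes it trivial to property-test.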
Haskell's numerical libraries (hmatrix, for example) use BLAS as their backing implementation, and Haskell is a more functional language than JavaScript. Perhaps you could crib some ideas by looking at the source code.
I'd recommend that you do the simple thing first, get it all working, and then branch out from there.
Here's a question to ask yourself: Why would anyone want to do serious numerical work using JavaScript? What will your library offer that's an improvement on what's available?
If you want to learn how to reinvent wheels, by all means proceed. Just understand that you aren't the first.

Floating point error in successive coordinate rotations

I have code (Python) that must perform some operations regarding distances between reflected segments of a curve.
In order to make the thinking and the code clearer, I apply two rotations (using matrix multiplication) before performing the actual calculation. I suppose it would be possible to perform the calculation without any rotation at all, but the code and the thinking would be much more awkward.
What I have to ask is: are the three rotations too much of a penalty in terms of lost precision because of rounding-off floating point errors? Is there a way to estimate the magnitude of this error?
Thanks for reading
As a rule of thumb in numerical calculations -- only take the first 12 digits seriously :)
Now, assuming 3D rotations, and that outcomes of trig functions are infinitely precise, a matrix multiplication will involve 3 multiplications and 2 additions per element in the rotated vector. Since you do two rotations, this amounts to 6 multiplications and 4 additions per element.
If you read this (which you should read front to back one day), or this, or this, you'll find that the individual arithmetic operations of IEEE 754 are guaranteed to be accurate to within half a ULP (ULP = unit in the last place).
Applied to your problem, that means that the 10 operations per element in the result vector will be accurate to within 5 ULPs.
In other words -- suppose you're rotating a unit vector. The elements of the rotated vector will be accurate to 0.000000000000005 -- I'd say that's nothing to worry about.
Including the errors in the trig functions, well, that's a bit more complicated...that really depends on your programming language and/or version of your compiler etc. But I guarantee it'll be comparable to the 5 ULPs.
If you do think this accuracy is not going to be enough, then I'd suggest you perform the two rotations in one go. Work out the matrix multiplication analytically, and implement the rotation as a single matrix multiplication. Alternatively: have a look at quaternions (although I suspect that's a bit overkill for your situation).
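The size of the error being discussed is easy to see empirically. A small sketch (2-D rather than 3-D, but the rounding behavior is the same in kind): rotate a vector by two successive rotations, then by the single combined rotation, and compare.

```python
import math

# Compare two successive rotations of a 2-D unit vector against one
# combined rotation, to see the few-ULP discrepancy the argument above
# predicts. Rotations compose: R(b) R(a) = R(a + b).

def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

a, b = 0.3, 1.1
v = [1.0, 0.0]

two_steps = rotate(rotate(v, a), b)   # two matrix applications
one_step  = rotate(v, a + b)          # single combined rotation

err = max(abs(p - q) for p, q in zip(two_steps, one_step))
# err comes out on the order of a few ULPs of 1.0 (~1e-16)
```

For a handful of chained rotations, the discrepancy stays many orders of magnitude below anything that matters for geometry; only very long chains of rotations (or ill-conditioned downstream operations) would call for the combined-matrix or quaternion approach.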
What you need to do is compute the condition number of your operations and determine whether it may incur loss of significance. That should allow you to estimate the error that could be introduced.

Qt: what class should I use for matrices instead of QMatrix?

The documentation for QMatrix says it's obsolete and that its use is strongly discouraged. OK, but what should I use instead to store a matrix?
I have even posted a bug report on the Qt Documentation Bug tracker but they didn't respond.
It depends on what you want to do. The replacement for QMatrix is QTransform, so you should use that if it will accomplish what you want. It's worth noting that neither QMatrix nor QTransform are really matrices in the mathematical sense.
If you're talking about ordinary mathematical matrices, you should look to any of the existing C++ matrix libraries (a quick Google search turns up a number of results), or write your own matrix class. I was recently working on a project where I needed to do multiplication of small (2x2) matrices, so I just designed the class myself. It was quite easy.
EDIT: By the way, that's not a bug, so you should try to remove the report, if it's possible.
QMatrix was specifically for 2D transformations and QTransform replaces it for that purpose. If you're looking for regular matrix classes for 3D work or linear algebra, then Qt has QMatrix4x4 and QGenericMatrix.

What arbitrary precision library should I use?

I need to program something that calculates a number to arbitrary precision...
but I need it to output the digits that are already "certain" (i.e., within some error bound) to a file, so that there are digits to work on while the program keeps running.
Also, most arbitrary-precision libraries seem to require a fixed precision, but what if I want dynamic precision, i.e., for it to just keep going and going...
Most algorithms that calculate a number to extended precision require that all intermediate calculations are done to a somewhat higher precision to guarantee accurate results. You normally specify your final desired precision and that's the result that you get. If you want to output the "known" accurate digits during the calculation, you'll generally need to implement the algorithm and track the accurate digits yourself.
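One common way to "track the accurate digits yourself" is to run the computation at two working precisions and emit only the leading digits on which both runs agree. A minimal sketch using Python's standard decimal module (sqrt(2) stands in for whatever number you actually want; the 10 guard digits are an arbitrary choice, not a guarantee for every algorithm):

```python
from decimal import Decimal, getcontext

def agreed_prefix(prec):
    # Compute sqrt(2) at two working precisions and keep only the
    # leading digits both results agree on -- those are the digits
    # that are safe to write out to the file.
    getcontext().prec = prec
    a = str(Decimal(2).sqrt())
    getcontext().prec = prec + 10          # guard digits (assumption)
    b = str(Decimal(2).sqrt())
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return a[:n]

digits = agreed_prefix(30)
```

To get the "dynamic precision" behavior, you would call this in a loop with an ever-increasing prec, appending only the newly agreed digits each iteration. Libraries like MPFR follow the same fixed-working-precision model, so the same restart-at-higher-precision loop applies there too.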
Without knowing what number you want to calculate, I can't offer any better suggestions.
GMP/MPIR only support very basic floating point calculations. MPFR, which requires either GMP or MPIR, provides a much broader set of floating point operations.
My advice is to use MPIR. It's a fork of GMP but with (in my opinion) a more helpful and developer-friendly crew.
