Hello, I've got a question which I cannot solve, so I need a bit of help.
In the picture above you can see an Oriented Bounding Box (OBB) specified by 4 points (A, B, C, D). There is also a point in space called P. If I cast a ray from P against the OBB, the ray will intersect the OBB at some point; this point of intersection is called Q in the picture. By the way, the ray is always x-axis aligned, which means its direction vector is either (1, 0) or (-1, 0) when normalized. My goal is to find the point of intersection Q. Is there a (preferably computationally inexpensive) way to do so?
Thanks in advance.
One way to do this is to consider each side of the bounding box to be a linear equation of the form y = ax + b, where a is the slope and b is the y-intercept. Then consider the ray from P to be an equation of the form y = c, where c is a constant. Then compare this equation to each of the four other equations to see where it intersects each one. One of these intersections will be our Q, if a Q exists; it's possible that the ray will miss the bounding box entirely. We will need to do a few checks:
Firstly, eliminate all potential Q's that are on the wrong side of P.
Secondly, check each of the four intersections to make sure they are within the bounds of the lines that they represent, and eliminate the ones that are not.
Finally, if any potential Q's remain, the one closest to P will be our Q. If no potential Q's remain, this means that the ray from P misses the bounding box entirely.
For example...
The line drawn from D to B would have a slope equal to (B.y - D.y) / (B.x - D.x) and a y-intercept equal to B.y - B.x * slope. Then the entire equation is y = (B.y - D.y) / (B.x - D.x) * x + B.y - B.x * (B.y - D.y) / (B.x - D.x). Set this equation equal to y = P.y and solve for x:
x = (P.y - B.y + B.x*(B.y - D.y)/(B.x - D.x))*(B.x - D.x)/(B.y - D.y)
The result of this equation will give you the x-value of the intersection. The y-value is P.y. Do this for each of the other 3 lines as well: A-B, A-C, C-D. I will refer to these intersections as Q(D-B), Q(A-B), Q(A-C), and Q(C-D) respectively.
Next, eliminate candidate-Q's that are on the wrong side of P. In our example, this eliminates Q(A-B), since it is way off to the right side of the screen. Mathematically, Q(A-B).x > P.x.
Then eliminate all candidate-Q's that are not within the bounds of the line segment they represent. We can do this check by finding the lowest and highest x-values and y-values given by the two endpoints of the segment. For example, to check that Q(A-C) is on the segment A-C, check that min(A.x, C.x) <= Q(A-C).x <= max(A.x, C.x) and min(A.y, C.y) <= Q(A-C).y <= max(A.y, C.y). Q(A-C) passes the test, as does Q(D-B). However, Q(C-D) does not pass, as it is way off to the left side of the screen, far from the box. Therefore, Q(C-D) is eliminated from candidacy.
Finally, of the two points that remain, Q(A-C) and Q(D-B), we choose Q(D-B) to be our winner, because it is closest to P.
We can now say that the ray from P hits the bounding box at Q(D-B).
Of course, when you implement this in code, you will need to account for divisions by zero. If a line is perfectly vertical, it has no slope-intercept equation, so you will need a separate formula for this case. If a line is perfectly horizontal, its candidate-Q should be automatically eliminated from candidacy, as the horizontal ray from P will never cross it.
Edit:
It would be more efficient to only run this process on lines whose two endpoints are on vertically opposite sides of point P. If both endpoints of a line are above P, or both are below P, that line can be eliminated from candidacy from the beginning.
Find the two sides that straddle p on Y. (Test of the form (Ya < Yp) != (Yb < Yp)).
Then compute the intersection points of the horizontal line through P with these two sides, and keep the first one to the left of P (or to the right, depending on the ray's direction).
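A minimal Python sketch of the approach described in these answers (straddle test on y, intersect only the straddling edges, keep the nearest hit on the side the ray points toward); the function name and the assumption that the corners are given in order around the box are mine:
def ray_hit_obb(p, corners, direction):
    """Intersect a horizontal ray from p = (px, py) with a convex quad.

    corners: four (x, y) points given in order around the box.
    direction: +1 for a ray toward +x, -1 for a ray toward -x.
    Returns the nearest intersection point, or None if the ray misses.
    """
    px, py = p
    best = None
    for i in range(4):
        (ax, ay), (bx, by) = corners[i], corners[(i + 1) % 4]
        # Only edges that straddle the ray's y level can be hit.
        if (ay < py) == (by < py):
            continue
        # x-coordinate where the horizontal line y = py crosses segment a-b.
        t = (py - ay) / (by - ay)
        x = ax + t * (bx - ax)
        # Keep only hits on the side the ray points toward, nearest first.
        if (x - px) * direction >= 0:
            if best is None or abs(x - px) < abs(best[0] - px):
                best = (x, py)
    return best

# Example: axis-aligned square, ray from (5, 1) pointing toward -x hits (2.0, 1).
print(ray_hit_obb((5, 1), [(-2, -2), (2, -2), (2, 2), (-2, 2)], -1))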
If the ray points to the left (right), then it must intersect an edge that connects to the point of the OBB with the maximum (minimum) x-value. We can determine which edge by simply comparing the y-value of the ray with the y-values of the max (min) point and its neighbors. We also need to consider OBBs that are actually axis-aligned, and thus have two points with equal max (min) x-value. Once we have the edge, it's simple to confirm that the ray does in fact intersect the OBB and to calculate the intersection's x-value.
Here's some Java code to illustrate (ideone):
static double nearestX(Point[] obb, int y, int dir)
{
    // Find min(max) point
    int n = 0;
    for(int i=1; i<4; i++)
        if((obb[n].x < obb[i].x) == (dir == -1)) n = i;
    // Determine next or prev edge
    int next = (n+1) % 4;
    int prev = (n+3) % 4;
    int nn;
    if((obb[n].x == obb[next].x) || (obb[n].y < y) == (obb[n].y < obb[next].y))
        nn = next;
    else
        nn = prev;
    // Check that the ray intersects the OBB
    if(Math.abs(y) > Math.abs(obb[nn].y)) return Double.NaN;
    // Standard calculation of x from y for line segment
    return obb[n].x + (y-obb[n].y)*(obb[nn].x-obb[n].x)/(obb[nn].y-obb[n].y);
}
Test:
public static void main(String[] args)
{
    test("Diamond", new Point[]{p(0, -2), p(2, 0), p(0, 2), p(-2,0)});
    test("Square", new Point[]{p(-2, -2), p(2, -2), p(2, 2), p(-2,2)});
}

static void test(String label, Point[] obb)
{
    System.out.println(label + ": " + Arrays.toString(obb));
    for(int dir : new int[] {-1, 1})
    {
        for(int y : new int[] {-3, -2, -1, 0, 1, 2, 3})
            System.out.printf("(% d, % d) = %.0f\n", y, dir, nearestX(obb, y, dir));
        System.out.println();
    }
}
Output:
Diamond: [(0,-2), (2,0), (0,2), (-2,0)]
(-3, -1) = NaN
(-2, -1) = 0
(-1, -1) = 1
( 0, -1) = 2
( 1, -1) = 1
( 2, -1) = 0
( 3, -1) = NaN
(-3, 1) = NaN
(-2, 1) = 0
(-1, 1) = -1
( 0, 1) = -2
( 1, 1) = -1
( 2, 1) = 0
( 3, 1) = NaN
Square: [(-2,-2), (2,-2), (2,2), (-2,2)]
(-3, -1) = NaN
(-2, -1) = 2
(-1, -1) = 2
( 0, -1) = 2
( 1, -1) = 2
( 2, -1) = 2
( 3, -1) = NaN
(-3, 1) = NaN
(-2, 1) = -2
(-1, 1) = -2
( 0, 1) = -2
( 1, 1) = -2
( 2, 1) = -2
( 3, 1) = NaN
The renewal function m(t) for the Weibull distribution (shape parameter 2), evaluated at t = 10, is given by the series
m(t) = Sum(k=1, k=Infinity) (-1)^(k-1) * A_k * t^(2k) / Γ(2k+1)
where A_1 = Γ(3) and A_k = Γ(2k+1)/k! - Sum(j=1, k-1) [Γ(2j+1)/j!] * A_(k-j).
I want to find the value of m(t). I wrote the following R code to compute it:
last_term = NULL
gamma_k = NULL
n = 50
for(k in 1:n){
  gamma_k[k] = gamma(2*k + 1)/factorial(k)
}
for(j in 1:(n-1)){
  prev = gamma_k[n-j]
  last_term[j] = gamma(2*j + 1)/factorial(j)*prev
}
final_term = NULL
find_value = function(n){
  for(i in 2:n){
    final_term[i] = gamma_k[i] - sum(last_term[1:(i-1)])
  }
  return(final_term)
}
all_k = find_value(n)
af_sum = NULL
m_t = function(t){
  for(k in 1:n){
    af_sum[k] = (-1)^(k-1) * all_k[k] * t^(2*k)/gamma(2*k + 1)
  }
  return(sum(na.omit(af_sum)))
}
m_t(20)
The output is m(t) = 2.670408e+93. Is my iterative procedure correct? Thanks.
I don't think it will work. First, let's move Γ(2k+1) from the denominator of m(t) into A_k. Thus, A_k will behave roughly as 1/k!.
In the numerator of the m(t) terms there is t^(2k), so roughly speaking you're computing a sum with terms
100^k / k!
From the Stirling formula
k! ~ k^k, making the terms
(100/k)^k
so yes, they will start to decrease and converge to something, but only after roughly the 100th term.
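A quick Python check (mine, not part of the answer) of where the terms 100^k/k! peak, working with logarithms to avoid overflow:
import math

# size of the k-th term, 100^k / k!, on a log scale
logs = [k * math.log(100) - math.lgamma(k + 1) for k in range(1, 301)]
k_peak = 1 + max(range(len(logs)), key=logs.__getitem__)
print(k_peak, math.exp(logs[k_peak - 1]))  # peak near k = 100, term size ~ 1e42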
Anyway, here is the code; you could try to improve it, but it breaks at k ~ 70.
N <- 20
A <- rep(0, N)
# compute A_k/gamma(2k+1) terms
ps <- 0.0 # previous sum
A[1] = 1.0
for(k in 2:N) {
  ps <- ps + A[k-1]*gamma(2*(k-1) + 1)/factorial(k-1)
  A[k] <- 1.0/factorial(k) - ps/gamma(2*k+1)
}
print(A)
t <- 10.0
t2 <- t*t
r <- 0.0
for(k in 1:N){
  r <- r + (-t2)^k*A[k]
}
print(-r)
UPDATE
OK, I calculated A_k as in your question and got the same answer. I want to estimate the terms A_k/Γ(2k+1) from m(t); I believe they will be pretty much dominated by the 1/k! term. To do that I made another array, k!*A_k/Γ(2k+1), which should be close to one.
Code
N <- 20
A <- rep(0.0, N)

psum <- function( pA, k ) {
  ps <- 0.0
  if (k >= 2) {
    jmax <- k - 1
    for(j in 1:jmax) {
      ps <- ps + (gamma(2*j+1)/factorial(j))*pA[k-j]
    }
  }
  ps
}

# compute A_k/gamma(2k+1) terms
A[1] = gamma(3)
for(k in 2:N) {
  A[k] <- gamma(2*k+1)/factorial(k) - psum(A, k)
}
print(A)

B <- rep(0.0, N)
for(k in 1:N) {
  B[k] <- (A[k]/gamma(2*k+1))*factorial(k)
}
print(B)
shows that:
I got the same A_k values as you did.
B_k is indeed very close to 1.
It means that the term A_k/Γ(2k+1) could be replaced by 1/k! to get a quick estimate of what we might get (with that replacement):
m(t) ~= - Sum(k=1, k=Infinity) (-1)^k (t^2)^k / k! = 1 - Sum(k=0, k=Infinity) (-t^2)^k / k!
This is actually a well-known sum, equal to exp() with a negative argument (well, you have to add the term for k=0):
m(t) ~= 1 - exp(-t^2)
Conclusions
The approximate value is positive, and it will probably stay positive after all; A_k/Γ(2k+1) is only a bit different from 1/k!.
We're talking about 1 - exp(-100), which is 1 - 3.72*10^-44! And we're trying to compute it precisely by summing and subtracting values on the order of 10^100 or even higher. Even with MPFR I don't think this is possible.
Another approach is needed
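To make the cancellation concrete, here is a small Python illustration (my own sketch, using the crude 1/k! replacement for A_k/Γ(2k+1)): the partial sums pass through terms of order 10^42, so double precision cannot recover a result of order 1.
import math

t = 10.0
term, total, largest = 1.0, 0.0, 0.0
for k in range(1, 400):
    term *= -t * t / k         # term = (-t^2)^k / k!
    total -= term              # running sum of (-1)^(k-1) * t^(2k) / k!
    largest = max(largest, abs(term))

print(largest)                 # largest intermediate term, ~1e42
print(total)                   # garbage: cancellation has destroyed the result
print(1.0 - math.exp(-t * t))  # the value the series actually converges to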
OK, so I ended up going down a pretty different road on this. I have implemented a simple discretization of the integral equation which defines the renewal function:
m(t) = F(t) + integrate (m(t - s)*f(s), s, 0, t)
The integral is approximated with the rectangle rule. Approximating the integral for different values of t gives a system of linear equations. I wrote a function to generate the equations and extract a matrix of coefficients from it. After looking at some examples, I guessed a rule to define the coefficients directly and used that to generate solutions for some examples. In particular I tried shape = 2, t = 10, as in OP's example, with step = 0.1 (so 101 equations).
I found that the result agrees pretty well with an approximate result which I found in a paper (Baxter et al., cited in the code). Since the renewal function is the expected number of events, for large t it is approximately equal to t/mu where mu is the mean time between events; this is a handy way to know if we're anywhere in the neighborhood.
I was working with Maxima (http://maxima.sourceforge.net), which is not efficient for numerical stuff, but which makes it very easy to experiment with different aspects. At this point it would be straightforward to port the final, numerical stuff to another language such as Python.
Thanks to OP for suggesting the problem, and S. Pappadeux for insightful discussions. Here is the plot I got comparing the discretized approximation (red) with the approximation for large t (blue). Trying some examples with different step sizes, I saw that the values tend to increase a little as step size gets smaller, so I think the red line is probably a little low, and the blue line might be more nearly correct.
Here is my Maxima code:
/* discretize weibull renewal function and formulate system of linear equations
* copyright 2020 by Robert Dodier
* I release this work under terms of the GNU General Public License
*
* This is a program for Maxima, a computer algebra system.
* http://maxima.sourceforge.net/
*/
"Definition of the renewal function m(t):" $
renewal_eq: m(t) = F(t) + 'integrate (m(t - s)*f(s), s, 0, t);
"Approximate integral equation with rectangle rule:" $
discretize_renewal (delta_t, k) :=
if equal(k, 0)
then m(0) = F(0)
else m(k*delta_t) = F(k*delta_t)
+ m(k*delta_t)*f(0)*(delta_t / 2)
+ sum (m((k - j)*delta_t)*f(j*delta_t)*delta_t, j, 1, k - 1)
+ m(0)*f(k*delta_t)*(delta_t / 2);
make_eqs (n, delta_t) :=
makelist (discretize_renewal (delta_t, k), k, 0, n);
make_vars (n, delta_t) :=
makelist (m(k*delta_t), k, 0, n);
"Discretized integral equation and variables for n = 4, delta_t = 1/2:" $
make_eqs (4, 1/2);
make_vars (4, 1/2);
make_eqs_vars (n, delta_t) :=
[make_eqs (n, delta_t), make_vars (n, delta_t)];
load (distrib);
subst_pdf_cdf (shape, scale, e) :=
subst ([f = lambda ([x], pdf_weibull (x, shape, scale)), F = lambda ([x], cdf_weibull (x, shape, scale))], e);
matrix_from (eqs, vars) :=
(augcoefmatrix (eqs, vars),
[submatrix (%%, length(%%) + 1), - col (%%, length(%%) + 1)]);
"Substitute Weibull pdf and cdf for shape = 2 into discretized equation:" $
apply (matrix_from, make_eqs_vars (4, 1/2));
subst_pdf_cdf (2, 1, %);
"Just the right-hand side matrix:" $
rhs_matrix_from (eqs, vars) :=
(map (rhs, eqs),
augcoefmatrix (%%, vars),
[submatrix (%%, length(%%) + 1), col (%%, length(%%) + 1)]);
"Generate the right-hand side matrix, instead of extracting it from equations:" $
generate_rhs_matrix (n, delta_t) :=
[delta_t * genmatrix (lambda ([i, j], if i = 1 and j = 1 then 0
elseif j > i then 0
elseif j = i then f(0)/2
elseif j = 1 then f(delta_t*(i - 1))/2
else f(delta_t*(i - j))), n + 1, n + 1),
transpose (makelist (F(k*delta_t), k, 0, n))];
"Generate numerical right-hand side matrix, skipping over formulas:" $
generate_rhs_matrix_numerical (shape, scale, n, delta_t) :=
block ([f, F, numer: true], local (f, F),
f: lambda ([x], pdf_weibull (x, shape, scale)),
F: lambda ([x], cdf_weibull (x, shape, scale)),
[genmatrix (lambda ([i, j], delta_t * if i = 1 and j = 1 then 0
elseif j > i then 0
elseif j = i then f(0)/2
elseif j = 1 then f(delta_t*(i - 1))/2
else f(delta_t*(i - j))), n + 1, n + 1),
transpose (makelist (F(k*delta_t), k, 0, n))]);
"Solve approximate integral equation (shape = 3, t = 1) via LU decomposition:" $
fpprintprec: 4 $
n: 20 $
t: 1;
[AA, bb]: generate_rhs_matrix_numerical (3, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Iterative solution of approximate integral equation (shape = 3, t = 1):" $
xx: bb;
for i thru 10 do xx: AA . xx + bb;
xx - (AA.xx + bb);
xx_iterative: xx;
"Should find iterative and LU give same result:" $
xx_diff: xx_iterative - xx_by_lu[1];
sqrt (transpose(xx_diff) . xx_diff);
"Try shape = 2, t = 10:" $
n: 100 $
t: 10 $
[AA, bb]: generate_rhs_matrix_numerical (2, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Baxter, et al., Eq. 3 (for large values of t) compared to discretization:" $
/* L.A. Baxter, E.M. Scheuer, D.J. McConalogue, W.R. Blischke.
* "On the Tabulation of the Renewal Function,"
 * Technometrics, vol. 24, no. 2 (May 1982).
* H(t) is their notation for the renewal function.
*/
H(t) := t/mu + sigma^2/(2*mu^2) - 1/2;
tx_points: makelist ([float (k/n*t), xx_by_lu[1][k, 1]], k, 1, n);
plot2d ([H(u), [discrete, tx_points]], [u, 0, t]), mu = mean_weibull(2, 1), sigma = std_weibull(2, 1);
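For reference, the same discretization can be written compactly in NumPy; this is my own sketch (the hand-written Weibull pdf/cdf and function names are mine), using shape = 2, scale = 1, t = 10 and 100 steps as above:
import numpy as np

def weibull_pdf(x, shape=2.0, scale=1.0):
    return (shape / scale) * (x / scale) ** (shape - 1) * np.exp(-(x / scale) ** shape)

def weibull_cdf(x, shape=2.0, scale=1.0):
    return 1.0 - np.exp(-(x / scale) ** shape)

def renewal_discretized(t=10.0, n=100, shape=2.0, scale=1.0):
    # Grid t_i = i*dt; discretize m(t) = F(t) + integral(m(t - s)*f(s), s, 0, t)
    # as (I - A) m = b, with half weights at the endpoints of each integral.
    dt = t / n
    ti = dt * np.arange(n + 1)
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        A[i, i] = weibull_pdf(0.0, shape, scale) * dt / 2
        A[i, 0] = weibull_pdf(ti[i], shape, scale) * dt / 2
        for j in range(1, i):
            A[i, i - j] = weibull_pdf(ti[j], shape, scale) * dt
    b = weibull_cdf(ti, shape, scale)
    return ti, np.linalg.solve(np.eye(n + 1) - A, b)

ti, m = renewal_discretized()
mu = 0.8862269254527580  # mean of Weibull(shape = 2, scale = 1), i.e. gamma(1.5)
print(m[-1], ti[-1] / mu)  # discretized m(10) versus the rough large-t estimate t/mu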
Mathematical background
Continued fractions are a way to represent numbers (rational or not), with a basic recursion formula to calculate them. Given a number r, we define r[0] = r and have:
for n in range(0..N):
    a[n] = floor(r[n])
    if r[n] == a[n]: break
    r[n+1] = 1 / (r[n] - a[n])
where the sequence a[0], a[1], ... is the continued fraction representation. We can also define a series of convergents by
h[-2], h[-1] = 0, 1
k[-2], k[-1] = 1, 0
h[n] = a[n]*h[n-1] + h[n-2]
k[n] = a[n]*k[n-1] + k[n-2]
where h[n]/k[n] converges to r.
Pell's equation is a problem of the form x^2 - D*y^2 = 1 where all numbers are integers and, in our case, D is not a perfect square. The solution for a given D that minimizes x is given by continued fractions: for the above equation, it is guaranteed that this (fundamental) solution is x = h[n] and y = k[n] for the lowest n which solves the equation in the continued fraction expansion of sqrt(D).
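For reference, the whole recursion can also be run in exact integer arithmetic, using the standard (m, d, a) recurrence for the continued fraction of sqrt(D); this sketch is mine, not the code under discussion, and it avoids the floating-point issue described below:
from math import isqrt

def pell(D):
    """Fundamental solution of x^2 - D*y^2 = 1 (D not a perfect square)."""
    a0 = isqrt(D)
    m, d, a = 0, 1, a0
    h1, h2 = a0, 1            # last two convergent numerators h[n], h[n-1]
    k1, k2 = 1, 0             # last two convergent denominators k[n], k[n-1]
    cf = [a0]
    while h1 * h1 - D * k1 * k1 != 1:
        # integer recurrence for the next partial quotient of sqrt(D)
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        cf.append(a)
        h1, h2 = a * h1 + h2, h1
        k1, k2 = a * k1 + k2, k1
    return h1, k1, cf

x, y, cf = pell(61)
print(x, y)   # 1766319049 226153980
print(cf)     # the partial quotients used along the way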
Problem
I am failing to get this simple algorithm to work for D=61. I first noticed it did not solve Pell's equation within 100 coefficients, so I compared it against Wolfram Alpha's convergents and continued fraction representation and noticed that the 20th element differs - the representation there is 3, compared to the 4 that I get - yielding different convergents: h[20]=335159612 on Wolfram compared to 425680601 for me.
I tested the code below, in two languages (though to be fair, Python is C under the hood, I guess), on two systems, and get the same result - a difference at loop 20. I'll note that the convergents are still accurate and do converge! Why am I getting different results compared to Wolfram Alpha, and is it possible to fix it?
For testing, here's a Python program to solve Pell's equation for D=61, printing first 20 convergents and the continued fraction representation cf (and some extra unneeded fluff):
from math import floor, sqrt  # Can use mpmath here as well.

def continued_fraction(D, count=100, thresh=1E-12, verbose=False):
    cf = []
    h = (0, 1)
    k = (1, 0)
    r = start = sqrt(D)
    initial_count = count
    x = (1+thresh+start)*start
    y = start
    while abs(x/y - start) > thresh and count:
        i = int(floor(r))
        cf.append(i)
        f = r - i
        x, y = i*h[-1] + h[-2], i*k[-1] + k[-2]
        if verbose is True or verbose == initial_count-count:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
        if x**2 - D*y**2 == 1:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
            print(cf)
            return
        count -= 1
        r = 1/f
        h = (h[1], x)
        k = (k[1], y)
    print(cf)
    raise OverflowError(f"Converged on {x} {y} with count {count} and diff {abs(start-x/y)}!")

continued_fraction(61, count=20, verbose=True, thresh=-1)  # We don't want to stop on account of thresh in this example
A C program doing the same:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

int main() {
    long D = 61;
    double start = sqrt(D);
    long h[] = {0, 1};
    long k[] = {1, 0};
    int count = 20;
    float thresh = 1E-12;
    double r = start;
    long x = (1+thresh+start)*start;
    long y = start;
    while(fabs(x/(double)y-start) > -1 && count) {
        long i = floor(r);
        double f = r - i;
        x = i * h[1] + h[0];
        y = i * k[1] + k[0];
        printf("%ld\u00B2-%ldx%ld\u00B2 = %ld\n", x, D, y, x*x-D*y*y);
        r = 1/f;
        --count;
        h[0] = h[1];
        h[1] = x;
        k[0] = k[1];
        k[1] = y;
    }
    return 0;
}
mpmath, Python's multiple-precision library, can be used. Just be careful that all the important numbers are in mp format.
In the code below, x, y and i are standard (arbitrary-precision) Python integers. r and f are multi-precision real numbers. Note that the initial count is set higher than 20.
from mpmath import mp, mpf

mp.dps = 50  # precision in number of decimal digits

def continued_fraction(D, count=22, thresh=mpf(1E-12), verbose=False):
    cf = []
    h = (0, 1)
    k = (1, 0)
    r = start = mp.sqrt(D)
    initial_count = count
    x = 0  # some dummy starting values, they will be overwritten early in the while loop
    y = 1
    while abs(x/y - start) > thresh and count > 0:
        i = int(mp.floor(r))
        cf.append(i)
        x, y = i*h[-1] + h[-2], i*k[-1] + k[-2]
        if verbose or initial_count == count:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
        if x**2 - D*y**2 == 1:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
            print(cf)
            return
        count -= 1
        f = r - i
        r = 1/f
        h = (h[1], x)
        k = (k[1], y)
    print(cf)
    raise OverflowError(f"Converged on {x} {y} with count {count} and diff {abs(start-x/y)}!")

continued_fraction(61, count=22, verbose=True, thresh=mpf(1e-100))
The output is similar to Wolfram's:
...
335159612²-61x42912791² = 3
1431159437²-61x183241189² = -12
1766319049²-61x226153980² = 1
[7, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1, 14, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1]
I have a quaternion that contains the rotation of the three axes (x, y, z) at the same time.
I want to convert this quaternion to a rotation matrix, but keeping only the rotation about the Y axis (or about any other single axis), not all three at the same time.
A possible route:
Transform unit vectors X=(1,0,0) and Z=(0,0,1) by the quaternion
Call these rotated vectors (x0,x1,x2) and (z0,z1,z2)
If the rotation would have been purely around Y, we would have:
(x0,x1,x2) = (cos(theta), 0, sin(theta))
(z0,z1,z2) = (-sin(theta), 0, cos(theta))
not used is (y0,y1,y2) = (0, 1, 0)
so, calculate
c = (x0+z2) / 2
and s = (x2-z0) / 2
then normalize to get c^2 + s^2 equal to 1
norm = sqrt(c * c + s * s)
if norm != 0:
c = c / norm
s = s / norm
(if the norm would be zero, there is not much we can do)
the angle would be atan2(s, c) (with the usual atan2(y, x) argument order)
the rotation matrix would be [[c,0,-s],[0,1,0],[s,0,c]]
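A small NumPy sketch of this route (my own illustration, not part of the answer; the quaternion is taken in (w, x, y, z) order and assumed to be normalized):
import numpy as np

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, u = q[0], np.asarray(q[1:], dtype=float)
    t = 2.0 * np.cross(u, v)
    return np.asarray(v, dtype=float) + w * t + np.cross(u, t)

def y_only_rotation(q):
    x_rot = rotate(q, (1.0, 0.0, 0.0))  # image of the X axis
    z_rot = rotate(q, (0.0, 0.0, 1.0))  # image of the Z axis
    c = (x_rot[0] + z_rot[2]) / 2.0
    s = (x_rot[2] - z_rot[0]) / 2.0
    norm = np.hypot(c, s)
    if norm == 0.0:                     # degenerate: no Y rotation recoverable
        return np.eye(3)
    c, s = c / norm, s / norm
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

# Example: 90 degrees about Y combined with 30 degrees about X.
qy = np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0])
qx = np.array([np.cos(np.pi / 12), np.sin(np.pi / 12), 0.0, 0.0])
w1, v1, w2, v2 = qy[0], qy[1:], qx[0], qx[1:]
q = np.concatenate(([w1 * w2 - v1 @ v2], w1 * v2 + w2 * v1 + np.cross(v1, v2)))  # qy * qx
print(np.round(y_only_rotation(q), 3))  # approximately [[0,0,1],[0,1,0],[-1,0,0]]: only the Y part remains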
I've gotten stuck getting my Euler angles out of my rotation matrix.
My conventions are:
Left-handed (x right, z back, y up)
YZX
Left handed angle rotation
My rotation matrix is built up from Euler angles like (from my code):
var xRotationMatrix = $M([
    [1, 0, 0, 0],
    [0, cx, -sx, 0],
    [0, sx, cx, 0],
    [0, 0, 0, 1]
]);
var yRotationMatrix = $M([
    [ cy, 0, sy, 0],
    [  0, 1,  0, 0],
    [-sy, 0, cy, 0],
    [  0, 0,  0, 1]
]);
var zRotationMatrix = $M([
    [cz, -sz, 0, 0],
    [sz, cz, 0, 0],
    [ 0, 0, 1, 0],
    [ 0, 0, 0, 1]
]);
Which results in a final rotation matrix as:
R(YZX) = | cy.cz, -cy.sz.cx + sy.sx, cy.sz.sx + sy.cx, 0|
| sz, cz.cx, -cz.sx, 0|
|-sy.cz, sy.sz.cx + cy.sx, -sy.sz.sx + cy.cx, 0|
| 0, 0, 0, 1|
I'm calculating my euler angles back from this matrix using this code:
this.anglesFromMatrix = function(m) {
    var y = 0, x = 0, z = 0;
    if (m.e(2, 1) > 0.999) {
        y = Math.atan2(m.e(1, 3), m.e(3, 3));
        z = Math.PI / 2;
        x = 0;
    } else if (m.e(2, 1) < -0.999) {
        y = Math.atan2(m.e(1, 3), m.e(3, 3));
        z = -Math.PI / 2;
        x = 0;
    } else {
        y = Math.atan2(-m.e(3, 1), -m.e(1, 1));
        x = Math.atan2(-m.e(2, 3), m.e(2, 2));
        z = Math.asin(m.e(2, 1));
    }
    return {theta: this.deg(x), phi: this.deg(y), psi: this.deg(z)};
};
I've done the maths backwards and forwards a few times, but I can't see what's wrong. Any help would hugely appreciated.
Your matrix and Euler angles aren't consistent. It looks like you should be using
y = Math.atan2(-m.e(3, 1), m.e(1, 1));
instead of
y = Math.atan2(-m.e(3, 1), -m.e(1, 1));
for the general case (the else branch).
I said "looks like" because -- what language is this? I'm assuming you have the indexing correct for this language. Are you sure about atan2? There is no single convention for atan2. In some programming languages the sine term is the first argument, in others, the cosine term is the first argument.
The last and most important branch of the anglesFromMatrix function has a small sign error but otherwise works correctly. Use
y = Math.atan2(-m.e(3, 1), m.e(1, 1))
since of m.e(1, 1) = cy.cz and m.e(3, 1) = -sy.cz, only m.e(3, 1) should be negated. I haven't checked the other branches for errors.
Beware that since sin(z) = m.e(2, 1) has two solutions for z, the angles (x, y, z) used to construct the matrix m might not be the same as the angles (rx, ry, rz) returned by anglesFromMatrix(m). Instead we can test that the matrix rm constructed from (rx, ry, rz) does indeed equal m.
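Here is a small Python sketch of the corrected extraction for this YZX layout (my own illustration; indices are 0-based, so m[1][0] corresponds to m.e(2, 1) in the question), which round-trips by rebuilding the matrix from the recovered angles:
import numpy as np

def yzx_matrix(x, y, z):
    """Build R = Y(y) . Z(z) . X(x) with the same element layout as in the question."""
    cx, sx, cy, sy, cz, sz = np.cos(x), np.sin(x), np.cos(y), np.sin(y), np.cos(z), np.sin(z)
    X = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Z = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Y @ Z @ X

def angles_from_yzx(m):
    if abs(m[1][0]) > 0.999:                  # gimbal lock: cz ~ 0
        z = np.copysign(np.pi / 2, m[1][0])
        y = np.arctan2(m[0][2], m[2][2])
        x = 0.0
    else:
        z = np.arcsin(m[1][0])                # sz = m.e(2, 1)
        y = np.arctan2(-m[2][0], m[0][0])     # note +m[0][0]: the corrected sign
        x = np.arctan2(-m[1][2], m[1][1])
    return x, y, z

angles = (0.3, -1.1, 0.7)
m = yzx_matrix(*angles)
rx, ry, rz = angles_from_yzx(m)
print(np.allclose(m, yzx_matrix(rx, ry, rz)))  # True: the rebuilt matrix matches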
I worked on this problem extensively to come up with the correct angles for a given matrix. The problem in the math comes from the inability to determine a precise value for the sine, since -sin(x) = sin(-x), and this will affect the other values of the matrix. The solution I came up with produces two equally valid solutions out of eight possible ones. I used a standard Z . Y . X matrix form, but it should be adaptable to any matrix. Start by finding the three angles from: X = atan(m32, m33), Y = -asin(m31), Z = atan(m21, m11). Then create the angles X' = -sign(X)*PI + X, Y' = sign(Y)*PI - Y, Z' = -sign(Z)*PI + Z. Using these angles, create eight sets of angle groups: XYZ, X'YZ, XYZ', X'YZ', X'Y'Z', XY'Z', X'Y'Z, XY'Z.
Use these sets to create the eight corresponding matrices. Then compute the sum of the differences between the unknown matrix and each test matrix (a sum, over each element, of the unknown's element minus the test matrix's element). After doing this, two of the sums will be zero, and those matrices represent the solution angles to the original matrix. This works for all possible angle combinations, including 0's. As 0's are introduced, more of the eight test matrices become valid. At 0,0,0 they all become identity matrices!
Hope this helps, it worked very well for my application.
Bruce
update
After finding problems with Y = -90 or 90 degrees in the solution above, I came up with this solution that seems to reproduce the matrix at all values!
X = if(or(m31=1,m31=-1),0,atan(m33+1e-24,m32))
Y = -asin(m31)
Z = if(or(m31=1,m31=-1),-atan2(m22,m12),atan2(m11+1e-24,m21))
I went the long way around to find this solution, but it was very enlightening :o)
Hope this helps!
Bruce