I would like to know the angular difference between the orientation of two 3D matrices (4x4). Two matrices that are both oriented in the same direction would be zero, and two matrices that are oriented in opposite directions would be 180º. By 'orientation' I am referring to the direction that an object transformed by the matrix would be facing. So I'm only concerned with rotation, not translation or scale.
Specifically, I am using instances of WebKitCSSMatrix which refer to the 16 3D matrix values as .m11 through .m44.
In that case, compare only one axis from each matrix.

Extract the direction vector from your matrix:

which axis it is depends on your mesh models
it is the one the object faces when moving forward
in my models it is usually the Z-axis
but I also see that other people often use the X-axis
look here: matrix vectors extraction
I am not familiar with your matrix library
but there is a chance that your matrices are transposed!!!
so if it does not work as it should, extract the transposed vectors ... (rows instead of columns)

Compute the difference:

just compute this: angle = acos( (V1.V2) / (|V1|.|V2|) )
where V1, V2 are the direction vectors
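A minimal sketch of that comparison in Python with NumPy (the function name is my own; this assumes column-vector convention, so the local Z direction is the third column of the upper-left 3x3 block -- take a row instead if your matrices are transposed):

```python
import numpy as np

def orientation_angle(m1, m2, axis=2):
    """Angle in radians between the chosen local axis of two 4x4 transforms.
    Assumes the rotation lives in the upper-left 3x3; axis=2 picks the Z column."""
    v1 = np.asarray(m1)[:3, axis]   # use [axis, :3] instead if your matrices are transposed
    v2 = np.asarray(m2)[:3, axis]
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards against rounding error
```

For a WebKitCSSMatrix you would first copy .m11 through .m44 into such an array; multiply the result by 180/π if you want degrees.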
For 2-dimensional sampled curves (an array of 2D points) there exists the Ramer-Douglas-Peucker algorithm, which keeps only the "important" points. It works by calculating the perpendicular distance of each point (or sample) to a line that connects the first and the last point of the curve. If the maximum distance is larger than a value epsilon, that point is kept and the array is split into 2 parts. The operation is then repeated on both parts (maximal perpendicular distance, if larger than epsilon, etc.). The smaller epsilon is, the more detail is kept.
I am trying to write a function that can also do this for arrays of higher-dimensional points. But I am unsure how to define distance, or whether this is actually a good idea.
I guess there exist lots of complicated and elegant algorithms that fit the curves to beziers and NURBS and what not. But are there also relatively simple ones?
I would prefer not to use beziers, but simply to identify "important" N-dimensional points.
You could extend your 2D algorithm using algebra and the L2 norm. Let's say you want to calculate the distance from a point X to a line segment PQ (where X, P and Q are defined as N-dimensional vectors).
First you calculate the projection vector "proj" of PX onto PQ:

proj = ( (PX . PQ) / (PQ . PQ) ) * PQ

Then, the distance is the magnitude of the vector V = PX - proj.
For this calculation you only need the dot product between vectors, and that is well defined for N-dimensional spaces.
Using this approach I have successfully used the Ramer-Douglas-Peucker algorithm in 3D.
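That N-dimensional distance can be sketched in Python with NumPy (the function name is my own):

```python
import numpy as np

def point_segment_distance(x, p, q):
    """Perpendicular distance from point x to the line through p and q,
    all given as N-dimensional NumPy arrays."""
    pq = q - p
    px = x - p
    denom = np.dot(pq, pq)
    if denom == 0.0:                         # p and q coincide
        return np.linalg.norm(px)
    proj = (np.dot(px, pq) / denom) * pq     # projection of px onto pq
    return np.linalg.norm(px - proj)         # |PX - proj|
```

Plugging this in wherever the 2D version computes its perpendicular distance is the only change the algorithm needs.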
I've fully implemented the algorithm and I'm a bit confused by how the rotation matrix works. You end up with a "structure" matrix which is 3xP, and the contents (if I'm correct) are P 3D points (so the rows are x, y, z).
The rotation matrix, however, is 2Fx3, F being the number of frames, since initially we stack 3 frames of tracked feature points into a matrix. It's 2F because the top half holds the x coordinates and the bottom half the y coordinates.
Anyway, the resulting matrix is 2Fx3, so it seems like you have two stacked rotation matrices, and I'm a bit confused how that corresponds to a normal 3x3 rotation matrix.
Here's a short overview of the algorithm
http://www.cs.huji.ac.il/~csip/sfm.pdf
I actually figured out the answer. So like I said the R matrix is of the size 2fx3 and I was confused how that corresponded to a normal 3x3 rotation matrix. So it turns out that since R is stacked such that you have
r1x
r2x
r3x
r1y
r2y
r3y
Each of those rows is a 1x3 vector that corresponds to a row of a normal rotation matrix. To get the rotation from the initial points to the new ones, you take the corresponding r rows for x and y and cross them to get z. So the rotation matrix for the 1st frame would be
(each of these is a 1x3 vector)
r1x
r1y
cross(r1x, r1y)
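A sketch of that assembly in Python with NumPy (function and parameter names are my own; `R_stacked` is the 2Fx3 matrix described above, x-rows first, then y-rows):

```python
import numpy as np

def frame_rotation(R_stacked, f, frame):
    """Build the 3x3 rotation for one frame from the stacked 2f x 3 matrix.
    Rows 0..f-1 hold the x-rows, rows f..2f-1 the y-rows."""
    rx = R_stacked[frame]        # 1x3 x-row for this frame
    ry = R_stacked[f + frame]    # 1x3 y-row for this frame
    rz = np.cross(rx, ry)        # third row from the cross product
    return np.vstack([rx, ry, rz])
```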
As far as I know, Direct3D works with an LH coordinate system right?
So how would I get position and x/y/z axis (local orientation axis) out of a LH 4x4 (world) matrix?
Thanks.
In case you don't know: LH stands for left-handed
If the 4x4 matrix is what I think it is (a homogeneous rigid body transformation matrix, same as an element of SE(3)) then it should be fairly easy to get what you want. Any rigid body transformation can be represented by a 4x4 matrix of the form
g_ab = [ R, p;
0, 1]
in block matrix notation. The ab subscript denotes that the transformation will take the coordinates of a point represented in frame b and will tell you what the coordinates are as represented in frame a. R here is a 3x3 rotation matrix and p is a vector that, when the rotation matrix is unity (no rotation) tells you the coordinates of the origin of b in frame a. Usually, however, a rotation is present, so you have to do as below.
The position of the coordinate system described by the matrix is given by applying the transformation to the point (0,0,0). This tells you what world coordinates that point is located at. The trick is that, when dealing with SE(3), you have to append a 1 to points and a 0 to vectors, which makes them vectors of length 4 instead of length 3, and hence operable on by the matrix!
So, to transform the point (0,0,0) in your local coordinate frame to the world frame, you right-multiply your matrix (let's call it g_SA) by the vector (0,0,0,1). To get the world coordinates of a vector (x,y,z) you multiply the matrix by (x,y,z,0). You can think of that as being because vectors are differences of points, so the 1 in the last element cancels away. For example, to find the representation of your local x-axis in world coordinates, you multiply g_SA*(1,0,0,0). To find the y-axis you do g_SA*(0,1,0,0), and so on.
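Those multiplications can be sketched in Python with NumPy (the function name is my own; this assumes column-vector convention, where g @ v applies the transform -- Direct3D often stores the transpose, so you may need g.T first):

```python
import numpy as np

def extract_frame(g):
    """Return (position, x_axis, y_axis, z_axis) of the local frame in world coords."""
    g = np.asarray(g)
    position = g @ np.array([0.0, 0.0, 0.0, 1.0])  # transform the local origin (a point, w = 1)
    x_axis   = g @ np.array([1.0, 0.0, 0.0, 0.0])  # transform basis vectors (w = 0)
    y_axis   = g @ np.array([0.0, 1.0, 0.0, 0.0])
    z_axis   = g @ np.array([0.0, 0.0, 1.0, 0.0])
    return position[:3], x_axis[:3], y_axis[:3], z_axis[:3]
```

Equivalently, the position is just the last column g[:3, 3] and the axes are the columns of the upper-left 3x3 block.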
The best place I've seen this discussed (and where I learned it from) is A Mathematical Introduction to Robotic Manipulation by Murray, Li and Sastry and the chapter you are interested in is 2.3.1.
sorry - I should know this but I don't.
I have computed the position of a reference frame (S1) with respect to a base reference frame (S0) through two different processes that give me two different 4x4 affine transformation matrices. I'd like to compute an error between the two but am not sure how to deal with the rotational component. Would love any advice.
thank you!
If R0 and R1 are the two rotation matrices which are supposed to be the same, then R0*R1' should be identity. The magnitude of the rotation vector corresponding to R0*R1' is the rotation (in radians, typically) from identity. Converting rotation matrices to rotation vectors is efficiently done via Rodrigues' formula.
To answer your question with a common use case, Python and OpenCV, the error is

import cv2
import numpy as np

r, _ = cv2.Rodrigues(R0.dot(R1.T))
rotation_error_from_identity = np.linalg.norm(r)
You are looking for the single axis rotation from frame S1 to frame S0 (or vice versa). The axis of the rotation isn't all that important here. You want the rotation angle.
Let R0 and R1 be the upper left 3x3 rotation matrices from your 4x4 matrices S0 and S1. Now compute E=R0*transpose(R1) (or transpose(R0)*R1; it doesn't really matter which.)
Now calculate
d(0) = E(1,2) - E(2,1)
d(1) = E(2,0) - E(0,2)
d(2) = E(0,1) - E(1,0)
dmag = sqrt(d(0)*d(0) + d(1)*d(1) + d(2)*d(2))
phi = asin (dmag/2)
I've left out some hairy details (and these details can bite you). In particular, the above is invalid for very large error angles (error > 90 degrees) and is imprecise for large error angles (angle > 45 degrees).
If you have a general-purpose function that extracts the single axis rotation from a matrix, use it. Or if you have a general-purpose function that extracts a quaternion from a matrix, use that. (Single axis rotation and quaternions are very closely related to one another).
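Those steps can be sketched in NumPy (valid, per the caveat above, only for modest error angles; the function name is my own):

```python
import numpy as np

def rotation_error_angle(R0, R1):
    """Angle (radians) of the single-axis rotation between rotation matrices R0 and R1.
    Accurate only for errors well below 90 degrees."""
    E = R0 @ R1.T
    d = np.array([E[1, 2] - E[2, 1],
                  E[2, 0] - E[0, 2],
                  E[0, 1] - E[1, 0]])
    dmag = np.linalg.norm(d)                      # equals 2*sin(phi)
    return np.arcsin(np.clip(dmag / 2.0, -1.0, 1.0))
```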
I'm experimenting with using axis-angle vectors for rotations in my hobby game engine. This is a 3-component vector along the axis of rotation with a length of the rotation in radians. I like them because:
Unlike quats or rotation matrices, I can actually see the numbers and visualize the rotation in my mind
They're a little less memory than quaternions or matrices.
I can represent values outside the range of -Pi to Pi (This is important if I store an angular velocity)
However, I have a tight loop that updates the rotation of all of my objects (tens of thousands) based on their angular velocity. Currently, the only way I know to combine two rotation axis vectors is to convert them to quaternions, multiply them, and then convert the result back to an axis/angle. Through profiling, I've identified this as a bottleneck. Does anyone know a more straightforward approach?
Your representation is equivalent to quaternion rotation, provided your rotation axes are unit length. If you don't want to use some canned quaternion data structure, you should simply ensure your rotation axes are of unit length, and then work out the equivalent quaternion multiplication / reciprocal computation to determine the aggregate rotation. You might be able to reduce the number of multiplications or additions.
If your angle is the only thing that is changing (i.e. the axis of rotation is constant), then you can simply use a linear scaling of the angle, and, if you'd like, mod it to be in the range [0, 2π). So, if you have a rotation rate of α radians per second, starting from an initial angle of θ0 at time t0, then the final rotation angle at time t is given by:
θ(t) = θ0+α(t-t0) mod 2π
You then just apply that rotation to your collection of vectors.
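Applying that fixed-axis rotation to a batch of vectors can be sketched with Rodrigues' rotation formula in NumPy (the function name is my own):

```python
import numpy as np

def rotate_about_axis(vectors, axis, theta):
    """Rotate an (n, 3) array of vectors by angle theta about an axis,
    using Rodrigues' rotation formula."""
    k = axis / np.linalg.norm(axis)        # unit rotation axis
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    cross = np.cross(k, vectors)           # k x v for each vector
    dot = vectors @ k                      # k . v for each vector
    return (vectors * cos_t
            + cross * sin_t
            + np.outer(dot, k) * (1.0 - cos_t))
```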
If none of this improves your performance, you should consider using a canned quaternion library, as such things are already optimized for the kinds of application you're discussing.
You can keep them as angle axis values.
Build a cross-product (anti-symmetric) matrix using the angle axis values (x,y,z) and weight the elements of this matrix by multiplying them by the angle value. Now sum up all of these cross-product matrices (one for each angle axis value) and find the final rotation matrix by using the matrix exponential.
If matrix A represents this cross-product matrix (built from Angle Axis value) then,
exp(A) is equivalent to the rotation matrix R (i.e., equivalent to your quaternion in matrix form).
Therefore,
exp(A1 + A2) = R1 * R2

(strictly speaking, this identity only holds when A1 and A2 commute, i.e. the rotation axes are parallel; for different axes it is only an approximation)
probably a more expensive calculation in the end...
You should use unit quaternions rather than scaled vectors to represent your rotations. It can be shown (not by me) that any representation of rotations using three parameters will run into problems (i.e. is singular) at some point. In your case it occurs where your vector has a length of 0 (i.e. the identity) and at lengths of 2pi, 4pi, etc. In these cases the representation becomes singular. Unit quaternions and rotation matrices do not have this problem.
From your description, it sounds like you are updating your rotation state as a result of numerical integration. In this case you can update your rotation state by converting your rotational rate (\omega) to a quaternion rate (q_dot). If we represent your quaternion as q = [q0 q1 q2 q3] where q0 is the scalar part then:
q_dot = (1/2) * E * \omega
where

    [ -q1  -q2  -q3 ]
E = [  q0  -q3   q2 ]
    [  q3   q0  -q1 ]
    [ -q2   q1   q0 ]
Then your update becomes
q(k+1) = q(k) + q_dot*dt
for simple Euler integration. You could choose a different integrator if you wish.
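That update can be sketched in NumPy (the function name is my own). It uses the conventional 1/2 factor in the quaternion rate and renormalizes after each step, since plain Euler integration lets the quaternion drift off unit length:

```python
import numpy as np

def integrate_quaternion(q, omega, dt):
    """One Euler step of q_dot = 0.5 * E(q) * omega, then renormalize.
    q = [q0, q1, q2, q3] with q0 the scalar part; omega is the angular rate vector."""
    q0, q1, q2, q3 = q
    E = np.array([[-q1, -q2, -q3],
                  [ q0, -q3,  q2],
                  [ q3,  q0, -q1],
                  [-q2,  q1,  q0]])
    q_dot = 0.5 * E @ omega
    q_new = q + q_dot * dt
    return q_new / np.linalg.norm(q_new)   # keep it a unit quaternion
```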
Old question, but another example of Stack Overflow answering questions the OP wasn't asking. The OP already listed his reasoning for not using quaternions to represent velocity. I was in the same boat.
That said, here is how you combine two angular velocities, each represented by a vector whose direction is the axis of rotation and whose magnitude is the amount of rotation:
Just add them together, component by component. Hope that helps some other soul out there.