I am struggling with some basic vector rotations in Monogame.
I have a 3D forward-facing unit vector (0, 0, -1) and simply want to rotate it 180 degrees around the Y axis (up). Here is the code:
[Fact]
public void Vector_Rotation()
{
    Vector3 forward = Vector3.Forward;   // (0, 0, -1)
    float angle = (float)Math.PI;        // 180 degrees in radians
    Vector3 dirQuat = Vector3.Transform(forward, Quaternion.CreateFromAxisAngle(Vector3.Up, angle));
    Vector3 dirMatrix = Vector3.Transform(forward, Matrix.CreateRotationY(angle));
}
Expected result
dirQuat = (0, 0, 1);
dirMatrix = (0, 0, 1);
Actual result
dirQuat = (8.742278E-08, 0, 1)
dirMatrix = (8.742278E-08, 0, 1)
I would expect a unit vector of the opposite of forward (0, 0, 1). I am new to the Monogame framework, have I missed something fundamental here?
Thanks in advance for any advice
As correctly pointed out by britter, the discrepancy is caused by accumulated floating-point error. This is well described elsewhere on this site, but I will give the short answer here.
Main takeaway points:
1. Computers work with a finite number of bits. Just as 1/3 (0.33333333...) cannot be represented exactly with a finite number of decimal digits, many values (π included) cannot be represented exactly in a finite number of binary bits.
2. Multiplying two 32-bit integers produces a 64-bit result, which is chopped down to the 32 most significant bits when stored. The same effect applied to floats is always worse (since not every bit of an FP32 value is significand data) and varies with the difference in exponents between the values (i.e. errors are minimized when both numbers are between 0 and 1).
3. The minimal absolute recognizable difference between two floats is given by float.Epsilon. In other words, subtracting two non-identical float values can produce a result as small as float.Epsilon = 1.401298E-45, so tiny nonzero residues are to be expected.
So why does your expected result of 0 come back as 8.742278E-08? The error is the sum of the errors introduced by the intermediate matrix multiplications and accumulation steps, the first being the truncation of PI: float angle = (float)Math.PI;. The impact of point 2 then permeates every aspect of the final result.
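To make this concrete, here is a minimal plain-C# sketch (not MonoGame-specific; the variable names are mine) showing that the stray component is essentially the rounding error of PI itself surfacing through the sine term of the rotation:
using System;

double piError = (double)(float)Math.PI - Math.PI;   // ~ +8.742278E-08: float rounds PI up slightly
double sinFloatPi = Math.Sin((float)Math.PI);        // ~ -8.742278E-08, because sin(PI + e) ~= -e
Console.WriteLine($"{piError:E6}  {sinFloatPi:E6}");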
The only mildly surprising thing about your result was that the quaternion transform does not introduce more error into the final product due to the additional matrix dimension — until I realized the two function calls produce the exact same transform matrix (the additional dimension was all zeroes). Much of the discrepancy comes about in these two calls.
The rest of the error is manifested in the last matrix multiply, Vector3.Transform(forward, ...).
That is not a surprising result given the number of multiplies occurring per step.
I'm currently implementing an algorithm for 3D pointcloud filtering following a scientific paper.
I run into some problems when computing the rotation matrix for specific values. The goal is to rotate points into the coordinate system defined by the direction of the normal vector (Z axis). Since the following query is rotationally symmetric in the X and Y axes, the orientation of those axes does not matter.
R is defined as follows:

    [ 1     1     -(nx+ny)/nz ]
R = [      (row1 x row3)'     ]
    [ nx    ny     nz         ]
n is normalized. The problem occurs when n_z becomes very small or zero, so I considered normalizing row 1 before computing the cross product for row 2.
Nevertheless, the determinant becomes -1. Will the rotation matrix still lead to correct results? R is orthogonal, but det(R) is not +1.
thanks for any suggestions
You always get that
det(a, a×b, b) = -det(a, b, a×b)
              = -dot(a×b, a×b)
which is always negative. Thus you need to fix this either by negating the second row or by re-arranging the overall order of the rows.
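As a hedged sketch (C# with System.Numerics; the helper name and the fallback choice for small nz are mine, not from the paper), the construction with the row order fixed so that det(R) = +1 could look like this:
using System;
using System.Numerics;

static Matrix4x4 BasisFromNormal(Vector3 n)   // n is assumed to be normalized
{
    Vector3 row1;
    if (MathF.Abs(n.Z) > 1e-6f)
        row1 = Vector3.Normalize(new Vector3(1f, 1f, -(n.X + n.Y) / n.Z)); // perpendicular to n by construction
    else
        row1 = Vector3.Normalize(Vector3.Cross(Vector3.UnitZ, n));         // fallback when nz is (near) zero
    Vector3 row2 = Vector3.Cross(n, row1);   // row3 x row1, so (row1, row2, row3) is right-handed
    return new Matrix4x4(
        row1.X, row1.Y, row1.Z, 0f,
        row2.X, row2.Y, row2.Z, 0f,
        n.X,    n.Y,    n.Z,    0f,
        0f,     0f,     0f,     1f);
}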
Are you interested in rotating points around an arbitrary axis? If so, quaternions may be a good solution.
You can check this if you want to convert a quaternion to a matrix before you actually use it.
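For example (C# with System.Numerics shown; MonoGame/XNA's Quaternion and Matrix types expose equivalent factory methods; the example axis and angle are mine):
using System;
using System.Numerics;

Vector3 axis = Vector3.Normalize(new Vector3(1f, 2f, 0.5f));        // arbitrary rotation axis
Quaternion q = Quaternion.CreateFromAxisAngle(axis, MathF.PI / 3f); // 60-degree rotation about it
Vector3 rotated = Vector3.Transform(new Vector3(0f, 0f, 1f), q);    // rotate a point directly with the quaternion
Matrix4x4 asMatrix = Matrix4x4.CreateFromQuaternion(q);             // or convert to a matrix first and reuse it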
I noticed today that Plane.Transform in XNA 4.0 does not seem to give the results I expected.
var s = Matrix.CreateScale(0.1f);
var p = new Plane(new Vector3(1.0f, 0.0f, 0.0f), 1.0f);
p = Plane.Transform(p, s);
I would have expected the plane to be scaled by 0.1, but instead it has a distance of 1 and a normal of length 10:
{Normal:{X:10 Y:0 Z:0} D:1}
Why does this happen?
I can't fully explain why, but the matrix that is passed to the transform method (your scale matrix) is inverted before being applied to the plane, so your scale of 0.1 became 10.
The 3x3 section of the matrix that holds scale and rotation data gets applied to the normal of the plane, which is why your normal got scaled.
The 4th row of the matrix gets applied to the D part of the plane; since your scale matrix had all zeros there (except for m44, which is 1), the D part of the plane remained unchanged.
Scaling a plane doesn't make much sense overall since a plane is essentially dimensionless except for that D part. The normal should always be kept at unit length for intersection test purposes so scaling a normal doesn't make sense. And if you want that D part scaled, it can be simply myPlane.D *= 0.1f; instead of trying to transform it with a matrix.
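If all you want is a uniformly scaled scene, here is a short sketch (XNA/MonoGame types; the variable names are mine) of that simpler route, plus Plane.Normalize as a way to clean up a plane whose normal has been scaled:
var original = new Plane(new Vector3(1.0f, 0.0f, 0.0f), 1.0f);

// Option 1: scale only the signed distance; the unit-length normal stays untouched.
var scaled = new Plane(original.Normal, original.D * 0.1f);

// Option 2: transform, then renormalize so Normal is unit length again and D is rescaled with it.
// With the behavior described above, this ends up matching Option 1.
var viaTransform = Plane.Normalize(Plane.Transform(original, Matrix.CreateScale(0.1f)));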
Speculation follows:
One possible reason for the matrix inversion is because there are two ways to think about the D part.
1.) the distance from the origin to the plane.
2.) the distance from the plane to the origin.
Both have the same value, but with opposite sign in terms of direction. MS chose to think of D as the distance from the plane to the origin, which makes its direction the opposite of the normal direction. Most likely there was good reason for this, but I have no idea what it was; it is probably what forces the matrix inversion in the Plane.Transform() method. See the graphic here: http://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k(MICROSOFT.XNA.FRAMEWORK.PLANE);k(DevLang-CSHARP)&rd=true
(In three dimensions) I'm looking for a way to compute the signed angle between two vectors, given no information other than those vectors. As answered in this question, it is simple enough to compute the signed angle given the normal of a plane in which the two vectors lie. But I can find no way to do this without that value. It's obvious that the cross product of two vectors produces such a normal, but I've run into the following contradiction using the answer above:
signed_angle(x_dir, y_dir) == 90
signed_angle(y_dir, x_dir) == 90
where I would expect the second result to be negative. This is because the cross product cross(x_dir, y_dir) points in the opposite direction of cross(y_dir, x_dir), given the following pseudocode with normalized input:
signed_angle(Va, Vb)
magnitude = acos(dot(Va, Vb))
axis = cross(Va, Vb)
dir = dot(Vb, cross(axis, Va))
if dir < 0 then
magnitude = -magnitude
endif
return magnitude
I don't believe dir will ever be negative above.
I've seen the same problem with the suggested atan2 solution.
I'm looking for a way to make:
signed_angle(a, b) == -signed_angle(b, a)
The relevant mathematical formulas:
dot_product(a,b) == length(a) * length(b) * cos(angle)
length(cross_product(a,b)) == length(a) * length(b) * sin(angle)
For a robust angle between 3-D vectors, your actual computation should be:
s = length(cross_product(a,b))
c = dot_product(a,b)
angle = atan2(s, c)
If you use acos(c) alone, you will get severe precision problems for cases when the angle is small. Computing s and using atan2() gives you a robust result for all possible cases.
Since s is always nonnegative, the resulting angle will range from 0 to pi. There will always be an equivalent negative angle (angle - 2*pi), but there is no geometric reason to prefer it.
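For reference, a compact sketch of that computation (C# with System.Numerics; the helper name is mine):
using System;
using System.Numerics;

static float Angle(Vector3 a, Vector3 b)
{
    float s = Vector3.Cross(a, b).Length();  // |a| * |b| * sin(angle)
    float c = Vector3.Dot(a, b);             // |a| * |b| * cos(angle)
    return MathF.Atan2(s, c);                // robust for tiny and near-PI angles; result in [0, PI]
}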
Signed angle between two vectors without a reference plane
angle = acos(dotproduct(normalized(a), normalized(b)));
signed_angle(a, b) == -signed_angle(b, a)
I think that's impossible without some kind of reference vector.
Thanks all. After reviewing the comments here and looking back at what I was trying to do, I realized that I can accomplish what I need to do with the given, standard formula for a signed angle. I just got hung up in the unit test for my signed angle function.
For reference, I'm feeding the resulting angle back into a rotate function. I had failed to account for the fact that this will naturally use the same axis as in signed_angle (the cross product of the input vectors), and the correct direction of rotation follows from whichever direction that axis is facing.
More simply put, both of these should just "do the right thing" and rotate in different directions:
rotate(cross(Va, Vb), signed_angle(Va, Vb), point)
rotate(cross(Vb, Va), signed_angle(Vb, Va), point)
Where the first argument is the axis of rotation and second is the amount to rotate.
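As a small sketch of that point (System.Numerics, with Quaternion.CreateFromAxisAngle standing in for the rotate function; the example vectors are mine), the same positive angle applied around the two opposite axes rotates the point in opposite directions:
using System;
using System.Numerics;

Vector3 va = Vector3.UnitX, vb = Vector3.UnitY, point = new Vector3(1f, 2f, 3f);
float angle = MathF.Atan2(Vector3.Cross(va, vb).Length(), Vector3.Dot(va, vb));  // 90 degrees either way

Vector3 axisAB = Vector3.Normalize(Vector3.Cross(va, vb));  // +Z here
Vector3 axisBA = Vector3.Normalize(Vector3.Cross(vb, va));  // -Z: the opposite axis

Vector3 r1 = Vector3.Transform(point, Quaternion.CreateFromAxisAngle(axisAB, angle));
Vector3 r2 = Vector3.Transform(point, Quaternion.CreateFromAxisAngle(axisBA, angle));  // rotates the other way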
If all you want is a consistent result, then any arbitrary way of choosing between a × b and b × a for your normal will do. Perhaps pick the one that's lexicographically smaller?
(But you might want to explain what problem you are actually trying to solve: maybe there's a solution that doesn't involve computing a consistent signed angle between arbitrary 3-vectors.)
sorry - I should know this but I don't.
I have computed the position of a reference frame (S1) with respect to a base reference frame (S0) through two different processes that give me two different 4x4 affine transformation matrices. I'd like to compute an error between the two but am not sure how to deal with the rotational component. Would love any advice.
thank you!
If R0 and R1 are the two rotation matrices which are supposed to be the same, then R0*R1' should be identity. The magnitude of the rotation vector corresponding to R0*R1' is the rotation (in radians, typically) from identity. Converting rotation matrices to rotation vectors is efficiently done via Rodrigues' formula.
To answer your question with a common use case, Python and OpenCV, the error is
import cv2
import numpy as np
r, _ = cv2.Rodrigues(R0.dot(R1.T))
rotation_error_from_identity = np.linalg.norm(r)
You are looking for the single axis rotation from frame S1 to frame S0 (or vice versa). The axis of the rotation isn't all that important here. You want the rotation angle.
Let R0 and R1 be the upper left 3x3 rotation matrices from your 4x4 matrices S0 and S1. Now compute E=R0*transpose(R1) (or transpose(R0)*R1; it doesn't really matter which.)
Now calculate
d(0) = E(1,2) - E(2,1)
d(1) = E(2,0) - E(0,2)
d(2) = E(0,1) - E(1,0)
dmag = sqrt(d(0)*d(0) + d(1)*d(1) + d(2)*d(2))
phi = asin (dmag/2)
I've left out some hairy details (and these details can bite you). In particular, the above is invalid for very large error angles (error > 90 degrees) and is imprecise for large error angles (angle > 45 degrees).
If you have a general-purpose function that extracts the single axis rotation from a matrix, use it. Or if you have a general-purpose function that extracts a quaternion from a matrix, use that. (Single axis rotation and quaternions are very closely related to one another).
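For example, a compact sketch of the computation above (C# with System.Numerics; it assumes the rotation sits in the upper-left 3x3 block of each 4x4 matrix, as in the question):
using System;
using System.Numerics;

static float RotationErrorRadians(Matrix4x4 s0, Matrix4x4 s1)
{
    Matrix4x4 e = s0 * Matrix4x4.Transpose(s1);   // upper-left 3x3 of E is R0 * transpose(R1)
    float d0 = e.M23 - e.M32;
    float d1 = e.M31 - e.M13;
    float d2 = e.M12 - e.M21;
    float dmag = MathF.Sqrt(d0 * d0 + d1 * d1 + d2 * d2);
    // MathF.Min guards against tiny float overshoot; large-angle caveats above still apply
    return MathF.Asin(MathF.Min(dmag / 2f, 1f));
}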
I'm experimenting with using axis-angle vectors for rotations in my hobby game engine. This is a 3-component vector along the axis of rotation with a length of the rotation in radians. I like them because:
Unlike quats or rotation matrices, I can actually see the numbers and visualize the rotation in my mind
They're a little less memory than quaternions or matrices.
I can represent values outside the range of -Pi to Pi (This is important if I store an angular velocity)
However, I have a tight loop that updates the rotation of all of my objects (tens of thousands) based on their angular velocity. Currently, the only way I know to combine two rotation axis vectors is to convert them to quaternions, multiply them, and then convert the result back to an axis/angle. Through profiling, I've identified this as a bottleneck. Does anyone know a more straightforward approach?
Your representation is equivalent to quaternion rotation, provided your rotation vectors are unit length. If you don't want to use some canned quaternion data structure, you should simply ensure your rotation vectors are of unit length, and then work out the equivalent quaternion multiplications / reciprocal computation to determine the aggregate rotation. You might be able to reduce the number of multiplications or additions.
If your angle is the only thing that is changing (i.e. the axis of rotation is constant), then you can simply use a linear scaling of the angle and, if you'd like, mod it to be in the range [0, 2π). So, if you have a rotation rate of α radians per second, starting from an initial angle of θ0 at time t0, then the rotation angle at time t is given by:
θ(t) = (θ0 + α(t - t0)) mod 2π
You then just apply that rotation to your collection of vectors.
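A tiny sketch of that update (plain C#; the symbol names follow the formula above, and fixedAxis is a hypothetical constant rotation axis):
using System;

float theta0 = 0.5f;    // initial angle in radians (example value)
float alpha  = 2.0f;    // rotation rate in radians per second (example value)
float Theta(float t, float t0) => (theta0 + alpha * (t - t0)) % (2f * MathF.PI);  // wrapped into [0, 2*PI)
// then rotate each vector with, e.g., Quaternion.CreateFromAxisAngle(fixedAxis, Theta(t, t0))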
If none of this improves your performance, you should consider using a canned quaternion library, as such things are already optimized for the kind of application you're discussing.
You can keep them as angle axis values.
Build a cross-product (anti-symmetric) matrix using the angle axis values (x,y,z) and weight the elements of this matrix by multiplying them by the angle value. Now sum up all of these cross-product matrices (one for each angle axis value) and find the final rotation matrix by using the matrix exponential.
If matrix A represents this cross-product matrix (built from Angle Axis value) then,
exp(A) is equivalent to the rotation matrix R (i.e., equivalent to your quaternion in matrix form).
Therefore,
exp(A1 + A2) = R1 * R2 (note that this identity holds exactly only when A1 and A2 commute, i.e. share the same rotation axis; otherwise it is only an approximation for small rotations)
probably a more expensive calculation in the end...
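For the single-rotation part, exp(A) has a well-known closed form (Rodrigues' formula). A hedged sketch in plain C# (the helper name is mine) that maps an angle-axis vector straight to the equivalent rotation matrix:
using System;

static float[,] ExpOfSkew(float vx, float vy, float vz)   // v = angle-axis vector, |v| = angle in radians
{
    float theta = MathF.Sqrt(vx * vx + vy * vy + vz * vz);
    if (theta < 1e-8f)
        return new float[,] { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };   // identity for (near-)zero rotation
    float x = vx / theta, y = vy / theta, z = vz / theta;                // unit axis
    float s = MathF.Sin(theta), c = MathF.Cos(theta), t = 1f - c;
    // R = I + sin(theta)*K + (1 - cos(theta))*K^2, where K is the skew matrix of the unit axis
    return new float[,]
    {
        { t * x * x + c,     t * x * y - s * z, t * x * z + s * y },
        { t * x * y + s * z, t * y * y + c,     t * y * z - s * x },
        { t * x * z - s * y, t * y * z + s * x, t * z * z + c     }
    };
}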
You should use unit quaternions rather than scaled vectors to represent your rotations. It can be shown (not by me) that any representation of rotations using three parameters will run into problems (i.e. is singular) at some point. In your case it occurs where your vector has a length of 0 (i.e. the identity) and at lengths of 2pi, 4pi, etc. In these cases the representation becomes singular. Unit quaternions and rotation matrices do not have this problem.
From your description, it sounds like you are updating your rotation state as a result of numerical integration. In this case you can update your rotation state by converting your rotational rate (\omega) to a quaternion rate (q_dot). If we represent your quaternion as q = [q0 q1 q2 q3] where q0 is the scalar part then:
q_dot = 0.5 * E * \omega
where
        [ -q1 -q2 -q3 ]
    E = [  q0 -q3  q2 ]
        [  q3  q0 -q1 ]
        [ -q2  q1  q0 ]
Then your update becomes
q(k+1) = q(k) + q_dot*dt
for simple Euler integration. You could use a different integrator if you wish.
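A short sketch of that update (C# with System.Numerics, where a quaternion is stored as (X, Y, Z, W) with W the scalar part q0, so the rows of E above are reordered accordingly; renormalizing after each step keeps the quaternion unit length):
using System;
using System.Numerics;

static Quaternion Integrate(Quaternion q, Vector3 omega, float dt)
{
    // q_dot = 0.5 * E * omega, written out component-wise (body-frame angular velocity assumed)
    var qDot = new Quaternion(
        0.5f * ( q.W * omega.X - q.Z * omega.Y + q.Y * omega.Z),
        0.5f * ( q.Z * omega.X + q.W * omega.Y - q.X * omega.Z),
        0.5f * (-q.Y * omega.X + q.X * omega.Y + q.W * omega.Z),
        0.5f * (-q.X * omega.X - q.Y * omega.Y - q.Z * omega.Z));
    var next = new Quaternion(q.X + qDot.X * dt, q.Y + qDot.Y * dt,
                              q.Z + qDot.Z * dt, q.W + qDot.W * dt);
    return Quaternion.Normalize(next);   // simple Euler step; renormalize to stay on the unit sphere
}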
Old question, but another example of stack overflow answering questions the OP wasn't asking. OP already listed out his reasoning for not using quaternions to represent velocity. I was in the same boat.
That said, here is the way you combine two angular velocities, each represented by a vector whose direction is the axis of rotation and whose magnitude is the amount of rotation:
Just add them together. Component-by-component. Hope that helps some other soul out there.