I have two vectors describing rotations; a start rotation A and a target rotation B. How would I best go about interpolating A by a factor F to approach B?
Using a simple lerp on the vectors fails to work when more than one dimension needs to be interpolated (i.e. produces undesirable rotations). Maybe building quaternions from the rotation vectors and using slerp is the way to go. But how, then, could I extract a vector describing the new rotation from the resulting quaternion?
Thanks in advance.
Since I'm not sure I fully understand your question, here is a little SLERP implementation in Python using numpy. I plotted the results using matplotlib (v0.99 or later for Axes3D).
I don't know whether you can use Python, but does this look like your SLERP implementation? It seems to give fine results...
from numpy import *
from numpy.linalg import norm

def slerp(p0, p1, t):
    # Spherical linear interpolation between the directions of p0 and p1.
    omega = arccos(dot(p0/norm(p0), p1/norm(p1)))
    so = sin(omega)
    return sin((1.0-t)*omega) / so * p0 + sin(t*omega)/so * p1

# test code
if __name__ == '__main__':
    pA = array([-2.0, 0.0, 2.0])
    pB = array([0.0, 2.0, -2.0])
    ps = array([slerp(pA, pB, t) for t in arange(0.0, 1.0, 0.01)])

    from pylab import *
    from mpl_toolkits.mplot3d import Axes3D
    f = figure()
    ax = Axes3D(f)
    ax.plot3D(ps[:, 0], ps[:, 1], ps[:, 2], '.')
    show()
A simple LERP (followed by renormalizing) only works well when the vectors are very close together; it produces unwanted rotations when the vectors are further apart.
There are two options:
Simple cross-products:
Determine the axis n that is orthogonal to both A and B using a cross product (take care when the vectors are aligned), and calculate the angle a between A and B using a dot product. Now you can approach B by letting an angle aNew run from 0 to a and applying a rotation by aNew about axis n to A (a sketch follows after these two options).
Quaternions:
Calculate the quaternion q that moves A to B, and interpolate q with the identity quaternion I using SLERP. The resulting quaternion qNew can then be applied to A.
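If it helps, here is a rough numpy sketch of the first option (the function names are my own, and A and B are assumed to be numpy arrays); it rebuilds the interpolated vector with Rodrigues' rotation formula:
import numpy as np

def rotate_about_axis(v, axis, angle):
    # Rodrigues' rotation formula: rotate v by 'angle' radians about the unit vector 'axis'.
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def approach(A, B, F):
    # Rotate A towards B's direction by a fraction F (0..1) of the angle between them.
    a, b = A / np.linalg.norm(A), B / np.linalg.norm(B)
    n = np.cross(a, b)                       # axis orthogonal to both
    if np.linalg.norm(n) < 1e-12:            # A and B (anti)parallel: axis undefined
        return A.copy()
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return rotate_about_axis(A, n, F * angle)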
Well, your slerp approach would work and is probably the most computationally efficient option (even though it's a bit tough to understand). To get back from the quaternion to a vector, you'll need to use a set of formulas you can find here.
There's also a bit of relevant code here, although I don't know if it corresponds to the way you have your data represented.
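If it helps to see the gist of those formulas as code, here is a hedged numpy sketch (the helper name is mine, not from any particular library) of applying a unit quaternion q = (w, x, y, z) to a vector, which is how you get the rotated vector back out:
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by the unit quaternion q = (w, x, y, z),
    # i.e. compute q * (0, v) * conj(q) without building the full products.
    w, x, y, z = q
    u = np.array([x, y, z], dtype=float)
    v = np.asarray(v, dtype=float)
    return (v * (w * w - np.dot(u, u))
            + 2.0 * np.dot(u, v) * u
            + 2.0 * w * np.cross(u, v))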
If you have decided to go with Quaternions (which will slerp very nicely), see my answer here on resources for implementing Quaternions:
Rotating in OpenGL relative to the viewport
You should find plenty of examples in the links in that post.
I'm trying to optimize my skeletal animation system by using tracks (curves) instead of keyframes. Each curve takes care of a specific component, and (for now) I linearly interpolate the values. This works fine for my bone positions, but I'm having a hard time getting rid of the "jaggedness" of the quaternion component interpolation...
Basically I have one curve for each component (X, Y and Z) of each bone's quaternion, and I use the following code to interpolate the X, Y and Z curves independently:
// Simple lerp... (f is always a value between 0.0f and 1.0f)
return ( curve->data_array[ currentframe ].value * ( 1.0f - f ) ) +
( curve->data_array[ nextframe ].value * f );
After I interpolate the quaternion's X, Y and Z, I use the following code to rebuild the W component of the quaternion before normalizing it and assigning it to my bone for drawing:
Quaternion QuaternionW( const Quaternion q )
{
    Quaternion t = { q.x, q.y, q.z };
    float l = 1.0f - ( q.x * q.x ) - ( q.y * q.y ) - ( q.z * q.z );
    t.w = ( l < 0.0f ) ? 0.0f : -sqrtf( l );
    return t;
}
The drawing looks fine, except that the bones become jerky from time to time. Could this be due to floating-point precision? Or to the recalculation of the W component? Or is there simply no way to linearly interpolate each component of a quaternion like this?
PS: On a side note, if in my curve interpolation function I replace the code above with:
return curve->data_array[ currentframe ].value;
instead of linearly interpolating, everything is fine... So the data is obviously correct... I'm puzzled...
[ EDIT ]
After more research I found that the problem comes from the frame data... For example, I get the following:
Frame0:
quat.x = 0.950497
Frame1:
quat.x = -0.952190
Frame2:
quat.x = 0.953192
This is what causes the inversion and jaggedness... I tried to detect this case and invert the sign of the data, but it still doesn't fully fix the problem, as some frames now simply look weird (visually, when drawing).
Any ideas how to properly fix the curves?
Your data are probably not wrong. Quaternion representations of orientation have the funny property of being doubly redundant: if you negate all four elements of a quaternion, you're left with the same orientation. It's easy to see this if you think in axis/angle terms: rotating by θ around axis a is the same as rotating by 2π − θ around axis −a, and the quaternion for the second rotation is the negation of the quaternion for the first.
So what should you do about it? As mentioned before, slerp is the right thing to do. Quaternion orientations live on the unit hypersphere, and if you linearly interpolate between points on a sphere, you leave the sphere. However, if the points are close to each other, it's often not a big deal (although you should still renormalize afterward). What you absolutely do need to make sure you do is check the inner product of your two quaternions before interpolating them, e.g.:
k=q0[0]*q1[0] + q0[1]*q1[1] + q0[2]*q1[2] + q0[3]*q1[3];
If k<0, negate one of the quaternions: for (ii=0;ii<4;++ii) q1[ii]=-q1[ii]; This makes sure that you're not trying to interpolate the long way around the circle. This does mean, however, that you have to treat the quaternions as a whole, not in parts. Completely throwing away one component is particularly problematic because you need its sign to keep the quaternion from being ambiguous.
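For example, here is a minimal numpy sketch (the function name is mine) of that check applied to straightforward component-wise interpolation of whole quaternions, followed by renormalization:
import numpy as np

def nlerp(q0, q1, f):
    # Component-wise interpolation of whole quaternions (4-element arrays),
    # flipping the sign of q1 when the dot product is negative so we take
    # the short way around, then renormalizing.
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    if np.dot(q0, q1) < 0.0:
        q1 = -q1
    q = (1.0 - f) * q0 + f * q1
    return q / np.linalg.norm(q)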
Naive considerations
Linear interpolation is fine for things that operate additively, i.e. that add something to something else every time you execute the corresponding operation. Quaternions, however, are multiplicative: you multiply them to chain them.
For this reason, I originally suggested computing the following:
pow(secondQuaternion, f)*pow(firstQuaternion, 1. - f)
Wikipedia has a section on computing powers of quaternions, among other things. As your comment below states that this does not work, the above is for reference only.
Proper interpolation
Since writing this post, I've read a bit more about slerp (spherical linear interpolation) and found that Wikipedia has a section on quaternion slerp. Your comment above suggests that the term is already familiar to you. The formula is a bit more complicated than what I wrote above, but it is still closely related through the way it uses powers. I guess you'd do best by adapting or porting an available implementation of that formula. This page, for example, comes with a bit of code.
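For reference, a quaternion slerp can be sketched in Python roughly as follows (treating quaternions as plain 4-element arrays; the function name is mine, and you would need to adapt it to however your data are represented):
import numpy as np

def quat_slerp(q0, q1, t, eps=1e-8):
    # Spherical linear interpolation between unit quaternions q0 and q1 (4-element arrays).
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    d = np.dot(q0, q1)
    if d < 0.0:                      # take the shorter arc
        q1, d = -q1, -d
    if d > 1.0 - eps:                # nearly parallel: fall back to normalized lerp
        q = (1.0 - t) * q0 + t * q1
        return q / np.linalg.norm(q)
    omega = np.arccos(d)
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / so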
Fixing data
As to your updated question
Any ideas how to properly fix the curves?
Fixing errors while maintaining correct data requires some idea of what kinds of errors do occur. So I'd start by trying to locate the source of that error, if at all possible. If that can be fixed to generate correct data, then good. If not, it should still give you a better idea of what to expect, and when.
I have an input device that gives me 3 angles -- rotation around x,y,z axes.
Now I need to use these angles to rotate the 3D space, without gimbal lock. I thought I could convert to Quaternions, but apparently since I'm getting the data as 3 angles this won't help?
If that's the case, just how can I correctly rotate the space, keeping in mind that my input data simply is x,y,z axes rotation angles, so I can't just "avoid" that. Similarly, moving around the order of axes rotations won't help -- all axes will be used anyway, so shuffling the order around won't accomplish anything. But surely there must be a way to do this?
If it helps, the problem can pretty much be reduced to implementing this function:
void generateVectorsFromAngles(double &lastXRotation,
                               double &lastYRotation,
                               double &lastZRotation,
                               JD::Vector &up,
                               JD::Vector &viewing) {
    JD::Vector yaxis = JD::Vector(0,0,1);
    JD::Vector zaxis = JD::Vector(0,1,0);
    JD::Vector xaxis = JD::Vector(1,0,0);

    up.rotate(xaxis, lastXRotation);
    up.rotate(yaxis, lastYRotation);
    up.rotate(zaxis, lastZRotation);

    viewing.rotate(xaxis, lastXRotation);
    viewing.rotate(yaxis, lastYRotation);
    viewing.rotate(zaxis, lastZRotation);
}
in a way that avoids gimbal lock.
If your device is giving you absolute X/Y/Z angles (which implies something like actual gimbals), it will have some specific sequence to describe what order the rotations occur in.
Since you say that "the order doesn't matter", this suggests your device is something like (almost certainly?) a 3-axis rate gyro, and you're getting differential angles. In this case, you want to combine your 3 differential angles into a rotation vector, and use this to update an orientation quaternion, as follows:
given differential angles (in radians):
dXrot, dYrot, dZrot
and current orientation quaternion Q such that:
{r=0, ijk=rot(v)} = Q {r=0, ijk=v} Q*
construct an update quaternion:
dQ = {r=1, i=dXrot/2, j=dYrot/2, k=dZrot/2}
and update your orientation:
Q' = normalize( quaternion_multiply(dQ, Q) )
Note that dQ is only a crude approximation of a unit quaternion (which makes the normalize() operation more important than usual). However, if your differential angles are not large, it is actually quite a good approximation. Even if your differential angles are large, this simple approximation makes less nonsense than many other things you could do. If you have problems with large differential angles, you might try adding a quadratic correction to improve your accuracy (as described in the third section).
However, a more likely problem is that any kind of repeated update like this tends to drift, simply from accumulated arithmetic error if nothing else. Also, your physical sensors will have bias -- e.g., your rate gyros will have offsets which, if not corrected for, will cause your orientation estimate Q to precess slowly. If this kind of drift matters to your application, you will need some way to detect/correct it if you want to maintain a stable system.
If you do have a problem with large differential angles, there is a trigonometric formula for computing an exact update quaternion dQ. The assumption is that the total rotation angle should be linearly proportional to the magnitude of the input vector; given this, you can compute an exact update quaternion as follows:
given differential half-angle vector (in radians):
dV = (dXrot, dYrot, dZrot)/2
compute the magnitude of the half-angle vector:
theta = |dV| = 0.5 * sqrt(dXrot^2 + dYrot^2 + dZrot^2)
then the update quaternion, as used above, is:
dQ = {r=cos(theta), ijk=dV*sin(theta)/theta}
= {r=cos(theta), ijk=normalize(dV)*sin(theta)}
Note that directly computing either sin(theta)/theta or normalize(dV) is singular near zero, but the limiting value of the vector part ijk near zero is simply ijk = dV = (dXrot, dYrot, dZrot)/2, as in the approximation from the first section. If you do compute your update quaternion this way, the straightforward method is to check for this and use the approximation for small theta (for which it is an extremely good approximation!).
Finally, another approach is to use a Taylor expansion for cos(theta) and sin(theta)/theta. This is an intermediate approach -- an improved approximation that increases the range of accuracy:
cos(x) ~ 1 - x^2/2 + x^4/24 - x^6/720 ...
sin(x)/x ~ 1 - x^2/6 + x^4/120 - x^6/5040 ...
So, the "quadratic correction" mentioned in the first section is:
dQ = {r=1-theta*theta*(1.0/2), ijk=dV*(1-theta*theta*(1.0/6))}
Q' = normalize( quaternion_multiply(dQ, Q) )
Additional terms will extend the accurate range of the approximation, but if you need more than +/-90 degrees per update, you should probably use the exact trig functions described in the second section. You could also use a Taylor expansion in combination with the exact trigonometric solution -- it may be helpful by allowing you to switch seamlessly between the approximation and the exact formula.
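To tie the pieces together, here is a rough Python/numpy sketch of the update (the function names are mine); it uses the exact trigonometric form of dQ and falls back to the small-angle approximation near zero:
import numpy as np

def quat_multiply(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def update_orientation(Q, dXrot, dYrot, dZrot, eps=1e-8):
    # Build the update quaternion dQ from differential angles (radians) and apply it.
    dV = 0.5 * np.array([dXrot, dYrot, dZrot])      # half-angle vector
    theta = np.linalg.norm(dV)
    if theta < eps:
        dQ = np.array([1.0, dV[0], dV[1], dV[2]])   # small-angle approximation
    else:
        dQ = np.concatenate(([np.cos(theta)], dV * (np.sin(theta) / theta)))
    Qnew = quat_multiply(dQ, Q)
    return Qnew / np.linalg.norm(Qnew)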
I think that the 'gimbal lock' is not a problem of computations/mathematics but rather a problem of some physical devices.
Given that you can represent any orientation with XYZ rotations, then even at the 'gimbal lock point' there is an XYZ representation for any imaginable orientation change. Your physical gimbal may not be able to rotate this way, but the mathematics still works :).
The only problem here is your input device - if it's a gimbal, then it can lock, but you didn't give any details on that.
EDIT: OK, so after you added the function I think I see what you need. The function is perfectly correct. But sadly, you just can't get a nice, easy, continuous way of editing an orientation using XYZ axis rotations. I haven't seen such a solution even in professional 3D packages.
The only thing that comes to mind is to treat your input like steering an aeroplane - you have some initial orientation and you can rotate it around the X, Y or Z axis by some amount. Then you store the new orientation and clear your inputs. Rotations in 3DMax/Maya/Blender are done the same way.
If you give us more info about the real-world usage you want to achieve, we may come up with better ideas.
I'm working in OpenCV but I don't think there is a function for this. I can find a function for finding affine transformations, but affine transformations include scaling, and I only want to consider rotation + translation.
Imagine I have two sets of points in 2d - let's say each set has exactly 50 points.
E.g. set A = {x1, y1, x2, y2, ... , x50, y50}
set B = {x1', y1', x2', y2', ... , x50', y50'}
I want to find the rotation and translation combination that gets closest to mapping set A onto set B. I guess I would define "closest" as minimises the average distance between points in A and corresponding points in B. I.e., minimises the average distance between (x1, y1) and (x1', y1'), etc.
I guess I could use brute force testing all possible translations and rotations but this would be extremely inefficient. Does anyone know a simpler way?
Thanks!
This problem has a very elegant solution in terms of the singular value decomposition of the cross-covariance matrix of the two (centred) point sets. The name of this is the orthogonal Procrustes problem, after the Greek legend about a fellow who offered travellers a bed that would fit anyone.
The solution comes from finding the nearest orthogonal matrix to a given (not necessarily orthogonal) matrix.
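For instance, here is a rough numpy sketch of that approach (often called the Kabsch algorithm; the function name is mine): center both point sets, take the SVD of the cross-covariance matrix, and read the rotation and translation off the factors. Note it minimizes the sum of squared distances, which is the usual formulation, rather than the average distance literally stated in the question:
import numpy as np

def best_rigid_transform(A, B):
    # A, B: (N, 2) arrays of corresponding 2D points.
    # Returns R (2x2 rotation matrix) and t (translation) minimizing
    # the sum of squared distances |R a + t - b|^2.
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)                 # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against picking a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cB - R @ cA
    return R, t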
The way I would do it in Excel is to make a couple of columns representing the points.
Add cells representing the rotation/translation of one set (no need to rotate and translate both of them).
Then add columns representing those same points after the rotation/translation.
Then another column for the distance between each rotated/translated point and its counterpart.
Then a cell with the sum of those distances.
Finally, use Solver to optimize the rotation and translation cells.
If you fix some rotation, you can get an answer using ternary search. Run the search in x and, for every tested x, run it in y to get the best value. This gives the correct answer because the function (the sum of corresponding distances) is convex: the restriction of the function to any line is a one-dimensional convex function, which follows from the standard fact that the sum of convex functions is convex.
Instead of brute force over the angle, I can propose a method based on ternary search. Choose some not very large step S. Compute the target function for every angle in (0, S, 2S, ...). Then, if S is small enough, we can exclude some segments (iS, (i + 1)S) from consideration, namely those where the function values at angles iS and (i + 1)S are relatively large. Implemented carefully, this can give an answer faster than brute force.
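For illustration, here is a rough sketch of the nested ternary search for a fixed rotation (Val(x, y) stands for the sum of corresponding distances once that rotation has been applied; the helper names and search bounds are placeholders of mine):
def ternary_min(f, lo, hi, iters=100):
    # Minimize a one-dimensional convex function f on [lo, hi].
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def best_translation(Val, lo=-1e6, hi=1e6):
    # Nested search: for each candidate x, the inner search finds the best y.
    def best_over_y(x):
        return Val(x, ternary_min(lambda y: Val(x, y), lo, hi))
    x = ternary_min(best_over_y, lo, hi)
    y = ternary_min(lambda y: Val(x, y), lo, hi)
    return x, y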
Given a point P on a 'canonical' ellipse defined by axes a, b, and an arc length s, how can I find a point Q, also on the ellipse, that is s clockwise along the elliptical curve from P — such that if I were to start at P and 'walk along' the elliptical curve for a distance of s, I would reach Q — programmatically and without breaking the computational bank?
I have heard that this can be computed through some sort of elliptical integration, but I need to do this a bunch, and quickly. What I'm looking for is an easy to use, computationally inexpensive, and fairly accurate approximation method. Or at least a method that is one or two of those things. I will be implementing this in python.
Edit: alternatively, I might be forced to create a lookup table of position values around ellipses (I might only need in the 10s of dissimilar ellipses). How should I do this, and what method can I use to fill it?
You'll need to integrate the ellipse equation. It's not difficult, actually.
Take a look at the equations here:
Link
Since you're using python, the Runge-Kutta for integration is implemented in Python here (I don't know the license, though):
http://doswa.com/blog/2009/04/21/improved-rk4-implementation/
In steps 3 and 4 of the mathforum solution you already have a value for ds (the arc length) and you want dx.
After finding dx, use step 6 to find y.
You could use scipy.special.ellipeinc to calculate the arclengths. (More details are given by Roger Stafford here.)
If that isn't fast enough, you could wrap the arclength calculation in a function and use a memoize decorator to cache the result of previous (arclength) function calls.
Or, as you've mentioned, you could pre-calculate the values you need, and store them in a dict.
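As a sketch of the scipy.special.ellipeinc route (the function names are mine, and it assumes the parameterization x = a*sin(t), y = b*cos(t) with a >= b, so that the arc length from t = 0 to t = phi is a*E(phi, m) with m = 1 - (b/a)^2); the inversion to step a distance s along the curve is done numerically with scipy.optimize.brentq:
import numpy as np
from scipy.special import ellipeinc
from scipy.optimize import brentq

def arclength(a, b, phi):
    # Arc length from t = 0 to t = phi along x = a*sin(t), y = b*cos(t), with a >= b.
    m = 1.0 - (b / a) ** 2
    return a * ellipeinc(phi, m)

def point_at_arclength(a, b, phi0, s):
    # Find the parameter phi1 such that the arc from phi0 to phi1 has length s
    # (assumes 0 < s < full perimeter, so the bracket below contains the root).
    target = arclength(a, b, phi0) + s
    phi1 = brentq(lambda phi: arclength(a, b, phi) - target,
                  phi0, phi0 + 2.0 * np.pi)
    return a * np.sin(phi1), b * np.cos(phi1)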
In order to solve the problem you need a conjecture: for the unit ellipse (a = 1) there is a circle, of some radius rp, that has the same perimeter as the ellipse. That perimeter is 2πrp. Your ellipse's perimeter is then P = 2πrp × a.
I'm writing a solution for the Usaco problem "Electric Fences".
In the problem you have to find the optimal location for a point among a large amount of linesegments, so the sum of point-linesegment distances is smallest possible.
I had an idea that it might be possible to do a hill climb, and it worked for all test cases. The given analysis used a similar method, but it did not explain why this would work.
Thus I'm still unable to either prove or disprove the existence of local optima in the given task. I had an idea that it could be done using induction, but I haven't been able to make it work. Can you help me?
Updated definition
Given a set of (x1,y1,x2,y2) line segments, find the (x,y) point P that minimizes the function:
def Val(x, y):
    d = 0
    for x1, y1, x2, y2 in LineSegments:
        if triangle (x1,y1,x2,y2,x,y) is not obtuse in (x1,y1) or (x2,y2):
            d += DistPointToLine(x, y, x1, y1, x2, y2)
        else:
            d += min(DistPointToPoint(x, y, x1, y1), DistPointToPoint(x, y, x2, y2))
    return d
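For completeness, here is a runnable numpy version of the same objective (the projection form of the point-to-segment distance replaces the triangle test above; the two are equivalent, and the helper name is just for illustration):
import numpy as np

def dist_point_to_segment(x, y, x1, y1, x2, y2):
    # Distance from point (x, y) to the segment (x1, y1)-(x2, y2) via projection.
    p, a, b = np.array([x, y]), np.array([x1, y1]), np.array([x2, y2])
    ab = b - a
    denom = np.dot(ab, ab)
    t = 0.0 if denom == 0.0 else np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def Val(x, y):
    return sum(dist_point_to_segment(x, y, x1, y1, x2, y2)
               for x1, y1, x2, y2 in LineSegments)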
For some reason the problem contains only one local optimum, and thus the following procedure can be used to solve it:
precision = ((-0.1, 0), (0.1, 0), (0, -0.1), (0, 0.1))

def Solve():
    # Greedy descent: keep stepping while some direction decreases the objective.
    x = 0; y = 0
    best = Val(x, y)
    while True:
        for dx, dy in precision:
            if Val(x + dx, y + dy) < best:
                x += dx; y += dy
                best = Val(x, y)
                break
        else:
            break
    return (x, y)
The question is: why does this not get stuck somewhere on the way to the global optimum? Why are there no local hilltops to bring this naive procedure to its knees?
It is easy to prove the algorithm's correctness if we notice that the distance function for a single line segment is a convex function. Convex in this case means that if we think of the distance function as a surface z=f(x,y), then if we filled in the volume above the surface, we'd have a convex solid. In the case of the distance from a single line segment, the solid would look like a triangular wedge with conical ends.
Since the sum of convex functions is also convex, then the sum of distances from multiple line segments will also be a convex function. Therefore, any local minimum you find must also be a global minimum by virtue of the function being convex.