How to calculate Tangent and Binormal? - math

This is about bump mapping, specular highlights and these kinds of things in OpenGL Shading Language (GLSL).
I have:
An array of vertices (e.g. {0.2,0.5,0.1, 0.2,0.4,0.5, ...})
An array of normals (e.g. {0.0,0.0,1.0, 0.0,1.0,0.0, ...})
The position of a point light in world space (e.g. {0.0,1.0,-5.0})
The position of the viewer in world space (e.g. {0.0,0.0,0.0}) (assume the viewer is in the center of the world)
Now, how can I calculate the binormal and tangent for each vertex? What is the formula for the binormal, and what do I have to use from the information above? And what about the tangent?
I'll construct the TBN matrix anyway, so if you know a formula to construct the matrix directly from that information, even better!
Oh, yes, I have the texture coordinates too, if needed.
And since I'm talking about GLSL, a per-vertex solution would be nice, i.e. one that doesn't need to access more than one vertex's information at a time.
---- Update -----
I found this solution:
vec3 tangent;
vec3 binormal;
vec3 c1 = cross(a_normal, vec3(0.0, 0.0, 1.0));
vec3 c2 = cross(a_normal, vec3(0.0, 1.0, 0.0));
if (length(c1) > length(c2))
{
    tangent = c1;
}
else
{
    tangent = c2;
}
tangent = normalize(tangent);
binormal = cross(a_normal, tangent);
binormal = normalize(binormal);
But I don't know if it is 100% correct.

The relevant input data to your problem are the texture coordinates. Tangent and Binormal are vectors locally parallel to the object's surface. And in the case of normal mapping they're describing the local orientation of the normal texture.
So you have to calculate the direction (in the model's space) in which the texturing vectors point. Say you have a triangle ABC, with texture coordinates HKL. This gives us vectors:
D = B-A
E = C-A
F = K-H
G = L-H
Now we want to express D and E in terms of tangent space T, U, i.e.
D = F.s * T + F.t * U
E = G.s * T + G.t * U
This is a system of linear equations with 6 unknowns and 6 equations; it can be written as
| D.x D.y D.z | | F.s F.t | | T.x T.y T.z |
| | = | | | |
| E.x E.y E.z | | G.s G.t | | U.x U.y U.z |
Inverting the FG matrix yields
| T.x T.y T.z | 1 | G.t -F.t | | D.x D.y D.z |
| | = ----------------- | | | |
| U.x U.y U.z | F.s G.t - F.t G.s | -G.s F.s | | E.x E.y E.z |
Together with the vertex normal, T and U form a local space basis, called the tangent space, described by the matrix
| T.x U.x N.x |
| T.y U.y N.y |
| T.z U.z N.z |
This matrix transforms from tangent space into object space. To do lighting calculations one needs its inverse. With a little bit of exercise one finds:
T' = T - (N·T) N
U' = U - (N·U) N - (T'·U) T'
Normalizing the vectors T' and U' and calling them tangent and binormal, we obtain the matrix that transforms from object into tangent space, where we do the lighting:
| T'.x T'.y T'.z |
| U'.x U'.y U'.z |
| N.x N.y N.z |
We store T' and U' together with the vertex normal as a part of the model's geometry (as vertex attributes), so that we can use them in the shader for lighting calculations. I repeat: you don't determine tangent and binormal in the shader; you precompute them and store them as part of the model's geometry (just like normals).
(The notation between the vertical bars above are all matrices, never determinants, which normally use vertical bars instead of brackets in their notation.)
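If it helps to see the formulas above spelled out, here is a minimal C++ sketch with GLM (the function name and structure are mine, not part of the original answer). It computes T and U for one triangle, assuming the UV deltas are not degenerate; per vertex you would accumulate these over all adjacent triangles and then apply the Gram-Schmidt step above against the vertex normal.
#include <glm/glm.hpp>

struct TangentFrame { glm::vec3 T, U; };

// One triangle ABC with texture coordinates H, K, L, exactly as in the derivation above.
TangentFrame triangleTangents(const glm::vec3& A, const glm::vec3& B, const glm::vec3& C,
                              const glm::vec2& H, const glm::vec2& K, const glm::vec2& L)
{
    const glm::vec3 D = B - A, E = C - A;   // position deltas
    const glm::vec2 F = K - H, G = L - H;   // texture coordinate deltas

    // 1 / (F.s G.t - F.t G.s), the determinant of the UV matrix
    const float r = 1.0f / (F.x * G.y - F.y * G.x);

    TangentFrame out;
    out.T = ( G.y * D - F.y * E) * r;       // tangent, follows the s direction
    out.U = (-G.x * D + F.x * E) * r;       // bitangent/binormal, follows the t direction
    return out;
}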

Generally, you have 2 ways of generating the TBN matrix: off-line and on-line.
On-line = right in the fragment shader using derivative instructions. Those derivatives give you a flat TBN basis for each point of a polygon. In order to get a smooth one we have to re-orthogonalize it against a given (smooth) vertex normal. This procedure is even heavier on the GPU than the initial TBN extraction.
// compute derivatives of the world position
vec3 p_dx = dFdx(pw_i);
vec3 p_dy = dFdy(pw_i);
// compute derivatives of the texture coordinate
vec2 tc_dx = dFdx(tc_i);
vec2 tc_dy = dFdy(tc_i);
// compute initial tangent and bi-tangent
vec3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
vec3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy ); // sign inversion
// get new tangent from a given mesh normal
vec3 n = normalize(n_obj_i);
vec3 x = cross(n, t);
t = cross(x, n);
t = normalize(t);
// get updated bi-tangent
x = cross(b, n);
b = cross(n, x);
b = normalize(b);
mat3 tbn = mat3(t, b, n);
Off-line = prepare the tangent as a vertex attribute. This is more difficult to get right because it does not just add another vertex attribute but also requires re-composing all the other attributes. Moreover, it will not necessarily give you better performance, as you get the additional cost of storing/passing/animating(!) a vector3 vertex attribute.
The math is described in many places (google it), including @datenwolf's post.
The problem here is that two vertices may have the same normal and texture coordinate but different tangents. That means you cannot just add a vertex attribute to an existing vertex; you'll need to split the vertex in two and specify different tangents for the clones.
The best way to get a unique tangent (and other attributes) per vertex is to do it as early as possible = in the exporter. There, at the stage of sorting pure vertices by attributes, you just need to add the tangent vector to the sorting key.
As a radical solution to the problem consider using quaternions. A single quaternion (vec4) can successfully represent a tangential space of pre-defined handedness. It's easy to keep orthonormal (including passing to the fragment shader), and to store and extract the normal if needed. More info on the KRI wiki.
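To make the quaternion idea concrete, here is a rough C++ sketch with GLM of how a TBN basis could be packed into a vec4 (a quaternion plus a handedness sign). The packing convention is an assumption of mine for illustration; KRI and other engines define their own.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// t and n are assumed to be unit length and perpendicular; b is the mesh's bitangent.
glm::vec4 packTbnAsQuat(const glm::vec3& t, const glm::vec3& b, const glm::vec3& n)
{
    // Detect the handedness of the original basis.
    float handedness = (glm::dot(glm::cross(n, t), b) < 0.0f) ? -1.0f : 1.0f;
    // Always convert a right-handed basis so quat_cast sees a proper rotation.
    glm::quat q = glm::quat_cast(glm::mat3(t, glm::cross(n, t), n));
    if (q.w < 0.0f) q = -q;                 // keep w >= 0 so its sign can carry the handedness
    return glm::vec4(q.x, q.y, q.z, q.w * handedness);
}
In the shader the basis can then be rebuilt by rotating (1,0,0) and (0,0,1) with the quaternion and taking the binormal as sign(w) * cross(n, t).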

Based on the answer from kvark, I would like to add more thoughts.
If you are in need of an orthonormalized tangent space matrix, you have to do some work anyway.
Even if you add tangent and binormal attributes, they will be interpolated during the shader stages,
and at the end they are neither normalized nor perpendicular to each other.
Let's assume that we have a normalized normal vector n, and that we have the tangent t and the binormal b, or that we can calculate them from the derivatives as follows:
// derivatives of the fragment position
vec3 pos_dx = dFdx( fragPos );
vec3 pos_dy = dFdy( fragPos );
// derivatives of the texture coordinate
vec2 texC_dx = dFdx( texCoord );
vec2 texC_dy = dFdy( texCoord );
// tangent vector and binormal vector
vec3 t = texC_dy.y * pos_dx - texC_dx.y * pos_dy;
vec3 b = texC_dx.x * pos_dy - texC_dy.x * pos_dx;
Of course an orthonormalized tangent space matrix can be calculated by using the cross product,
but this would only work for right-handed systems. If a matrix was mirrored (left-handed system) it will be turned into a right-handed system:
t = cross( cross( n, t ), t ); // orthonormalization of the tangent vector
b = cross( n, t ); // orthonormalization of the binormal vector
// may invert the binormal vector
mat3 tbn = mat3( normalize(t), normalize(b), n );
In the code snippet above the binormal vector is reversed if the tangent space is a left-handed system.
To avoid this, you have to go the hard way:
t = cross( cross( n, t ), t ); // orthonormalization of the tangent vector
b = cross( b, cross( b, n ) ); // orthonormalization of the binormal vectors to the normal vector
b = cross( cross( t, b ), t ); // orthonormalization of the binormal vectors to the tangent vector
mat3 tbn = mat3( normalize(t), normalize(b), n );
A common way to orthogonalize any matrix is the Gram–Schmidt process:
t = t - n * dot( t, n ); // orthonormalization of the tangent vector
b = b - n * dot( b, n ); // orthonormalization of the binormal vectors to the normal vector
b = b - t * dot( b, t ); // orthonormalization of the binormal vectors to the tangent vector
mat3 tbn = mat3( normalize(t), normalize(b), n );
Another possibility is to use the determinant of the 2*2 matrix formed by the derivatives of the texture coordinates texC_dx, texC_dy, to take the direction of the binormal vector into account. The idea is that the determinant of an orthogonal matrix is 1, while that of an orthogonal mirror matrix is -1.
The determinant can either be calculated by the GLSL function determinant( mat2( texC_dx, texC_dy ) ),
or by its formula texC_dx.x * texC_dy.y - texC_dy.x * texC_dx.y.
For the calculation of the orthonormalized tangent space matrix the binormal vector is no longer required, so the calculation of its unit vector
(normalize) can be avoided.
float texDet = texC_dx.x * texC_dy.y - texC_dy.x * texC_dx.y;
vec3 t = texC_dy.y * pos_dx - texC_dx.y * pos_dy;
t = normalize( t - n * dot( t, n ) );
vec3 b = cross( n, t ); // b is normalized because n and t are orthonormalized unit vectors
mat3 tbn = mat3( t, sign( texDet ) * b, n ); // take into account the direction of the binormal vector

There is a variety of ways to calculate tangents, and if the normal map baker doesn't do it the same way as the renderer you'll get subtle artifacts. Many bakers use the MikkTSpace algorithm, which isn't the same as the fragment derivatives trick.
Fortunately, if you have an indexed mesh from a program that uses MikkTSpace (and no texture coordinate triangles with opposite orientations share an index) the hard part of the algorithm is mostly done for you, and you can reconstruct the tangents like this:
#include <cmath>
#include <cstdint>
#include "glm/geometric.hpp"
#include "glm/vec2.hpp"
#include "glm/vec3.hpp"
#include "glm/vec4.hpp"
using glm::vec2;
using glm::vec3;
using glm::vec4;

void makeTangents(uint32_t nIndices, uint16_t* indices,
                  const vec3 *positions, const vec3 *normals,
                  const vec2 *texCoords, vec4 *tangents) {
  uint32_t inconsistentUvs = 0;
  for (uint32_t l = 0; l < nIndices; ++l) tangents[indices[l]] = vec4(0);
  for (uint32_t l = 0; l < nIndices; ++l) {
    uint32_t i = indices[l];
    uint32_t j = indices[(l + 1) % 3 + l / 3 * 3];
    uint32_t k = indices[(l + 2) % 3 + l / 3 * 3];
    vec3 n = normals[i];
    vec3 v1 = positions[j] - positions[i], v2 = positions[k] - positions[i];
    vec2 t1 = texCoords[j] - texCoords[i], t2 = texCoords[k] - texCoords[i];
    // Is the texture flipped?
    float uv2xArea = t1.x * t2.y - t1.y * t2.x;
    if (std::abs(uv2xArea) < 0x1p-20)
      continue;  // Smaller than 1/2 pixel at 1024x1024
    float flip = uv2xArea > 0 ? 1 : -1;
    // 'flip' or '-flip'; depends on the handedness of the space.
    if (tangents[i].w != 0 && tangents[i].w != -flip) ++inconsistentUvs;
    tangents[i].w = -flip;
    // Project triangle onto tangent plane
    v1 -= n * dot(v1, n);
    v2 -= n * dot(v2, n);
    // Tangent is object space direction of texture coordinates
    vec3 s = normalize((t2.y * v1 - t1.y * v2) * flip);
    // Use angle between projected v1 and v2 as weight
    float angle = std::acos(dot(v1, v2) / (length(v1) * length(v2)));
    tangents[i] += vec4(s * angle, 0);
  }
  for (uint32_t l = 0; l < nIndices; ++l) {
    vec4& t = tangents[indices[l]];
    t = vec4(normalize(vec3(t.x, t.y, t.z)), t.w);
  }
  // std::cerr << inconsistentUvs << " inconsistent UVs\n";
}
In the vertex shader, they are rotated into world space:
fragNormal = (model.model * vec4(inNormal, 0)).xyz;
fragTangent = vec4((model.model * vec4(inTangent.xyz, 0)).xyz, inTangent.w);
Then the binormal and world space normal are calculated like this (see http://mikktspace.com/):
vec3 binormal = fragTangent.w * cross(fragNormal, fragTangent.xyz);
vec3 worldNormal = normalize(normal.x * fragTangent.xyz +
normal.y * binormal +
normal.z * fragNormal);
(The binormal is usually calculated per pixel, but some bakers give you the option to calculate it per vertex and interpolate it. This page has information about specific programs.)

Related

How to calculate the intersection point between an infinite line and a line segment?

Basically, a function that fulfills this signature:
function getLineIntersection(vec2 p0, vec2 direction, vec2 p2, vec2 p3) {
// return a vec2
}
I have looked around at existing solutions, and they all seem to deal with how to find the intersection between two line segments, or between two infinite lines. Is there a solution for this problem where the line has an initial position, an angle, and needs to determine if it intersects with a line segment? Basically, something like this:
There should be one line segment that starts in a location and has a unit direction, and another line segment that is just a line connected by two points. Is this possible, and if so, is there a good way of calculating the intersection point, if it exists?
If you have an endless line, defined by a point P and a normalized direction R, and a second endless line, defined by a point Q and a direction S, then the intersection point X of the two endless lines is:
alpha ... angle between Q-P and R
beta ... angle between R and S
gamma = 180° - alpha - beta
h = | Q - P | * sin(alpha)
u = h / sin(beta)
t = | Q - P | * sin(gamma) / sin(beta)
t = dot(Q-P, (S.y, -S.x)) / dot(R, (S.y, -S.x)) = determinant(mat2(Q-P, S)) / determinant(mat2(R, S))
u = dot(Q-P, (R.y, -R.x)) / dot(R, (S.y, -S.x)) = determinant(mat2(Q-P, R)) / determinant(mat2(R, S))
X = P + R * t = Q + S * u
If you want to detect whether the intersection is on a line segment, you have to compare the distance of the intersection point with the length of the segment.
The intersection point (X) is on the line segment if t is in [0.0, 1.0] for X = p2 + (p3 - p2) * t
vec2 getLineIntersection(vec2 p0, vec2 direction, vec2 p2, vec2 p3)
{
    vec2 P = p2;
    vec2 R = p3 - p2;
    vec2 Q = p0;
    vec2 S = direction;
    vec2 N = vec2(S.y, -S.x);
    float t = dot(Q-P, N) / dot(R, N);
    if (t >= 0.0 && t <= 1.0)
        return P + R * t;
    return vec2(-1.0);
}
Start with the intersection of two infinite lines, expressed in parametric form (e.g., A + tp, where A is the "start point", p is the direction vector and t is a scalar parameter). Solve a system of two equations to get the two parameters of the intersection point.
Now if one of your lines is really a segment AB, and B = A + p (i.e., the direction vector goes from A to B), then if the parameter t is between 0 and 1, the intersection lies on the segment.
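As a concrete illustration of that parametric approach (the names and the parallel-test epsilon are my own), solving A + t·p = B + s·q with Cramer's rule gives:
#include <glm/glm.hpp>
#include <cmath>

static float cross2(const glm::vec2& a, const glm::vec2& b) { return a.x * b.y - a.y * b.x; }

// Infinite line: point A, direction p.  Segment: from B to B + q.
// Returns true and fills 'hit' when the line crosses the segment.
bool lineHitsSegment(glm::vec2 A, glm::vec2 p, glm::vec2 B, glm::vec2 q, glm::vec2& hit)
{
    float denom = cross2(p, q);
    if (std::abs(denom) < 1e-8f) return false;   // parallel or degenerate
    float t = cross2(B - A, q) / denom;          // parameter along the infinite line
    float s = cross2(B - A, p) / denom;          // parameter along the segment
    if (s < 0.0f || s > 1.0f) return false;      // intersection misses the segment
    hit = A + t * p;                             // equivalently B + s * q
    return true;
}
If the first line is really a ray rather than an infinite line, additionally require t >= 0.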

Efficient way to apply mirror effect on quaternion rotation?

Quaternions represent rotations - they don't include information about scaling or mirroring. However it is still possible to mirror the effect of a rotation.
Consider a mirroring on the x-y-plane (we can also call it a mirroring along the z-axis). A rotation around the x-axis mirrored on the x-y-plane would be negated. Likewise with a rotation around the y axis. However, a rotation around the z-axis would be left unchanged.
Another example: 90º rotation around axis (1,1,1) mirrored in the x-y plane would give -90º rotation around (1,1,-1). To aid the intuition, if you can visualize a depiction of the axis and a circular arrow indicating the rotation, then mirroring that visualization indicates what the new rotation should be.
I have found a way to calculate this mirroring of the rotation, like this:
Get the angle-axis representation of the quaternion.
For each of the axes x, y, and z.
If the scaling is negative (mirrored) along that axis:
Negate both angle and axis.
Get the updated quaternion from the modified angle and axis.
This only supports mirroring along the primary axes, x, y, and z, since that's all I need. It works for arbitrary rotations though.
However, the conversions from quaternion to angle-axis and back from angle-axis to quaternion are expensive. I'm wondering if there's a way to do the conversion directly on the quaternion itself, but my comprehension of quaternion math is not sufficient to get anywhere myself.
(Posted on StackOverflow rather than math-related forums due to the importance of a computationally efficient method.)
I just spent quite some time on figuring out a clear answer to this question, so I am posting it here for the record.
Introduction
As was noted in other answers, a mirror effect cannot be represented as a rotation. However, given a rotation R1to2 from a coordinate frame C1 to a coordinate frame C2, we may be interested in efficiently computing the equivalent rotation when applying the same mirror effect to C1 and C2 (e.g. I was facing the problem of converting an input quaternion, given in a left-handed coordinate frame, into the quaternion representing the same rotation but in a right-handed coordinate frame).
In terms of rotation matrices, this can be thought of as follows:
R_mirroredC1_to_mirroredC2 = M_mirrorC2 * R_C1_to_C2 * M_mirrorC1
Here, both R_C1_to_C2 and R_mirroredC1_to_mirroredC2 represent valid rotations, so when dealing with quaternions, how do you efficiently compute q_mirroredC1_to_mirroredC2 from q_C1_to_C2?
Solution
The following assumes that q_C1_to_C2=[w,x,y,z]:
if C1 and C2 are mirrored along the X-axis (i.e. M_mirrorC1=M_mirrorC2=diag_3x3(-1,1,1)) then q_mirroredC1_to_mirroredC2=[w,x,-y,-z]
if C1 and C2 are mirrored along the Y-axis (i.e. M_mirrorC1=M_mirrorC2=diag_3x3(1,-1,1)) then q_mirroredC1_to_mirroredC2=[w,-x,y,-z]
if C1 and C2 are mirrored along the Z-axis (i.e. M_mirrorC1=M_mirrorC2=diag_3x3(1,1,-1)) then q_mirroredC1_to_mirroredC2=[w,-x,-y,z]
When considering different mirrored axes for the C1 and C2, we have the following:
if C1 is mirrored along the X-axis and C2 along the Y-axis (i.e. M_mirrorC1=diag_3x3(-1,1,1) & M_mirrorC2=diag_3x3(1,-1,1)) then q_mirroredC1_to_mirroredC2=[z,y,x,w]
if C1 is mirrored along the X-axis and C2 along the Z-axis (i.e. M_mirrorC1=diag_3x3(-1,1,1) & M_mirrorC2=diag_3x3(1,1,-1)) then q_mirroredC1_to_mirroredC2=[-y,z,-w,x]
if C1 is mirrored along the Y-axis and C2 along the X-axis (i.e. M_mirrorC1=diag_3x3(1,-1,1) & M_mirrorC2=diag_3x3(-1,1,1)) then q_mirroredC1_to_mirroredC2=[z,-y,-x,w]
if C1 is mirrored along the Y-axis and C2 along the Z-axis (i.e. M_mirrorC1=diag_3x3(1,-1,1) & M_mirrorC2=diag_3x3(1,1,-1)) then q_mirroredC1_to_mirroredC2=[x,w,z,y]
if C1 is mirrored along the Z-axis and C2 along the X-axis (i.e. M_mirrorC1=diag_3x3(1,1,-1) & M_mirrorC2=diag_3x3(-1,1,1)) then q_mirroredC1_to_mirroredC2=[y,z,w,x]
if C1 is mirrored along the Z-axis and C2 along the Y-axis (i.e. M_mirrorC1=diag_3x3(1,1,-1) & M_mirrorC2=diag_3x3(1,-1,1)) then q_mirroredC1_to_mirroredC2=[x,w,-z,-y]
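For the common case where the same mirror is applied to both frames, the first table above boils down to a couple of sign flips; here is a small GLM sketch (the function name and the 0/1/2 axis encoding are mine):
#include <glm/gtc/quaternion.hpp>

// q is (w, x, y, z); axis: 0 = X, 1 = Y, 2 = Z (M_mirrorC1 == M_mirrorC2).
glm::quat mirrorSameAxis(const glm::quat& q, int axis)
{
    switch (axis) {
    case 0:  return glm::quat(q.w,  q.x, -q.y, -q.z); // mirrored along X
    case 1:  return glm::quat(q.w, -q.x,  q.y, -q.z); // mirrored along Y
    default: return glm::quat(q.w, -q.x, -q.y,  q.z); // mirrored along Z
    }
}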
Test program
Here is a small C++ program based on OpenCV to test all this:
#include <opencv2/opencv.hpp>
#define CST_PI 3.1415926535897932384626433832795
// Random rotation matrix uniformly sampled from SO3 (see "Fast random rotation matrices" by J.Arvo)
cv::Matx<double,3,3> get_random_rotmat()
{
double theta1 = 2*CST_PI*cv::randu<double>();
double theta2 = 2*CST_PI*cv::randu<double>();
double x3 = cv::randu<double>();
cv::Matx<double,3,3> R(std::cos(theta1),std::sin(theta1),0,-std::sin(theta1),std::cos(theta1),0,0,0,1);
cv::Matx<double,3,1> v(std::cos(theta2)*std::sqrt(x3),std::sin(theta2)*std::sqrt(x3),std::sqrt(1-x3));
return -1*(cv::Matx<double,3,3>::eye()-2*v*v.t())*R;
}
cv::Matx<double,4,1> rotmat2quatwxyz(const cv::Matx<double,3,3> &R)
{
// Implementation from Ceres 1.10
const double trace = R(0,0) + R(1,1) + R(2,2);
cv::Matx<double,4,1> quat_wxyz;
if (trace >= 0.0) {
double t = sqrt(trace + 1.0);
quat_wxyz(0) = 0.5 * t;
t = 0.5 / t;
quat_wxyz(1) = (R(2,1) - R(1,2)) * t;
quat_wxyz(2) = (R(0,2) - R(2,0)) * t;
quat_wxyz(3) = (R(1,0) - R(0,1)) * t;
} else {
int i = 0;
if (R(1, 1) > R(0, 0))
i = 1;
if (R(2, 2) > R(i, i))
i = 2;
const int j = (i + 1) % 3;
const int k = (j + 1) % 3;
double t = sqrt(R(i, i) - R(j, j) - R(k, k) + 1.0);
quat_wxyz(i + 1) = 0.5 * t;
t = 0.5 / t;
quat_wxyz(0) = (R(k,j) - R(j,k)) * t;
quat_wxyz(j + 1) = (R(j,i) + R(i,j)) * t;
quat_wxyz(k + 1) = (R(k,i) + R(i,k)) * t;
}
// Check that the w element is positive
if(quat_wxyz(0)<0)
quat_wxyz *= -1; // quat and -quat represent the same rotation, but to make quaternion comparison easier, we always use the one with positive w
return quat_wxyz;
}
cv::Matx<double,4,1> apply_quaternion_trick(const unsigned int item_permuts[4], const int sign_flips[4], const cv::Matx<double,4,1>& quat_wxyz)
{
// Flip the sign of the x and z components
cv::Matx<double,4,1> quat_flipped(sign_flips[0]*quat_wxyz(item_permuts[0]),sign_flips[1]*quat_wxyz(item_permuts[1]),sign_flips[2]*quat_wxyz(item_permuts[2]),sign_flips[3]*quat_wxyz(item_permuts[3]));
// Check that the w element is positive
if(quat_flipped(0)<0)
quat_flipped *= -1; // quat and -quat represent the same rotation, but to make quaternion comparison easier, we always use the one with positive w
return quat_flipped;
}
void detect_quaternion_trick(const cv::Matx<double,4,1> &quat_regular, const cv::Matx<double,4,1> &quat_flipped, unsigned int item_permuts[4], int sign_flips[4])
{
if(abs(quat_regular(0))==abs(quat_flipped(0))) {
item_permuts[0]=0;
sign_flips[0] = (quat_regular(0)/quat_flipped(0)>0 ? 1 : -1);
}
else if(abs(quat_regular(0))==abs(quat_flipped(1))) {
item_permuts[1]=0;
sign_flips[1] = (quat_regular(0)/quat_flipped(1)>0 ? 1 : -1);
}
else if(abs(quat_regular(0))==abs(quat_flipped(2))) {
item_permuts[2]=0;
sign_flips[2] = (quat_regular(0)/quat_flipped(2)>0 ? 1 : -1);
}
else if(abs(quat_regular(0))==abs(quat_flipped(3))) {
item_permuts[3]=0;
sign_flips[3] = (quat_regular(0)/quat_flipped(3)>0 ? 1 : -1);
}
if(abs(quat_regular(1))==abs(quat_flipped(0))) {
item_permuts[0]=1;
sign_flips[0] = (quat_regular(1)/quat_flipped(0)>0 ? 1 : -1);
}
else if(abs(quat_regular(1))==abs(quat_flipped(1))) {
item_permuts[1]=1;
sign_flips[1] = (quat_regular(1)/quat_flipped(1)>0 ? 1 : -1);
}
else if(abs(quat_regular(1))==abs(quat_flipped(2))) {
item_permuts[2]=1;
sign_flips[2] = (quat_regular(1)/quat_flipped(2)>0 ? 1 : -1);
}
else if(abs(quat_regular(1))==abs(quat_flipped(3))) {
item_permuts[3]=1;
sign_flips[3] = (quat_regular(1)/quat_flipped(3)>0 ? 1 : -1);
}
if(abs(quat_regular(2))==abs(quat_flipped(0))) {
item_permuts[0]=2;
sign_flips[0] = (quat_regular(2)/quat_flipped(0)>0 ? 1 : -1);
}
else if(abs(quat_regular(2))==abs(quat_flipped(1))) {
item_permuts[1]=2;
sign_flips[1] = (quat_regular(2)/quat_flipped(1)>0 ? 1 : -1);
}
else if(abs(quat_regular(2))==abs(quat_flipped(2))) {
item_permuts[2]=2;
sign_flips[2] = (quat_regular(2)/quat_flipped(2)>0 ? 1 : -1);
}
else if(abs(quat_regular(2))==abs(quat_flipped(3))) {
item_permuts[3]=2;
sign_flips[3] = (quat_regular(2)/quat_flipped(3)>0 ? 1 : -1);
}
if(abs(quat_regular(3))==abs(quat_flipped(0))) {
item_permuts[0]=3;
sign_flips[0] = (quat_regular(3)/quat_flipped(0)>0 ? 1 : -1);
}
else if(abs(quat_regular(3))==abs(quat_flipped(1))) {
item_permuts[1]=3;
sign_flips[1] = (quat_regular(3)/quat_flipped(1)>0 ? 1 : -1);
}
else if(abs(quat_regular(3))==abs(quat_flipped(2))) {
item_permuts[2]=3;
sign_flips[2] = (quat_regular(3)/quat_flipped(2)>0 ? 1 : -1);
}
else if(abs(quat_regular(3))==abs(quat_flipped(3))) {
item_permuts[3]=3;
sign_flips[3] = (quat_regular(3)/quat_flipped(3)>0 ? 1 : -1);
}
}
int main(int argc, char **argv)
{
cv::Matx<double,3,3> M_xflip(-1,0,0,0,1,0,0,0,1);
cv::Matx<double,3,3> M_yflip(1,0,0,0,-1,0,0,0,1);
cv::Matx<double,3,3> M_zflip(1,0,0,0,1,0,0,0,-1);
// Let the user choose the configuration
char im,om;
std::cout << "Enter the axis (x,y,z) along which input ref is flipped:" << std::endl;
std::cin >> im;
std::cout << "Enter the axis (x,y,z) along which output ref is flipped:" << std::endl;
std::cin >> om;
cv::Matx<double,3,3> M_iflip,M_oflip;
if(im=='x') M_iflip=M_xflip;
else if(im=='y') M_iflip=M_yflip;
else if(im=='z') M_iflip=M_zflip;
if(om=='x') M_oflip=M_xflip;
else if(om=='y') M_oflip=M_yflip;
else if(om=='z') M_oflip=M_zflip;
// Generate random quaternions until we find one where no two elements are equal
cv::Matx<double,3,3> R;
cv::Matx<double,4,1> quat_regular,quat_flipped;
do {
R = get_random_rotmat();
quat_regular = rotmat2quatwxyz(R);
} while(quat_regular(0)==quat_regular(1) || quat_regular(0)==quat_regular(2) || quat_regular(0)==quat_regular(3) ||
quat_regular(1)==quat_regular(2) || quat_regular(1)==quat_regular(3) ||
quat_regular(2)==quat_regular(3));
// Determine and display the appropriate quaternion trick
quat_flipped = rotmat2quatwxyz(M_oflip*R*M_iflip);
unsigned int item_permuts[4]={0,1,2,3};
int sign_flips[4]={1,1,1,1};
detect_quaternion_trick(quat_regular,quat_flipped,item_permuts,sign_flips);
char str_quat[4]={'w','x','y','z'};
std::cout << std::endl << "When iref is flipped along the " << im << "-axis and oref along the " << om << "-axis:" << std::endl;
std::cout << "resulting_quat=[" << (sign_flips[0]>0?"":"-") << str_quat[item_permuts[0]] << ","
<< (sign_flips[1]>0?"":"-") << str_quat[item_permuts[1]] << ","
<< (sign_flips[2]>0?"":"-") << str_quat[item_permuts[2]] << ","
<< (sign_flips[3]>0?"":"-") << str_quat[item_permuts[3]] << "], where initial_quat=[w,x,y,z]" << std::endl;
// Test this trick on several random rotation matrices
unsigned int n_errors = 0, n_tests = 10000;
std::cout << std::endl << "Performing " << n_tests << " tests on random rotation matrices:" << std::endl;
for(unsigned int i=0; i<n_tests; ++i) {
// Get a random rotation matrix and the corresponding quaternion
cv::Matx<double,3,3> R = get_random_rotmat();
cv::Matx<double,4,1> quat_regular = rotmat2quatwxyz(R);
// Get the quaternion corresponding to the flipped coordinate frames, via the sign trick and via computation on rotation matrices
cv::Matx<double,4,1> quat_tricked = apply_quaternion_trick(item_permuts,sign_flips,quat_regular);
cv::Matx<double,4,1> quat_flipped = rotmat2quatwxyz(M_oflip*R*M_iflip);
// Check that both results are identical
if(cv::norm(quat_tricked-quat_flipped,cv::NORM_INF)>1e-6) {
std::cout << "Error (idx=" << i << ")!"
<< "\n quat_regular=" << quat_regular.t()
<< "\n quat_tricked=" << quat_tricked.t()
<< "\n quat_flipped=" << quat_flipped.t() << std::endl;
++n_errors;
}
}
std::cout << n_errors << " errors on " << n_tests << " tests." << std::endl;
system("pause");
return 0;
}
There is a little bit easier and more programmer-oriented way to think about this. Assume that you want to reverse the z-axis (i.e. flip the z-axis to -z) in your coordinate system. Now think of the quaternion as an orientation expressed in terms of roll, pitch and yaw. When you flip the z-axis, notice that the sign of roll and pitch is inverted, but the sign of yaw remains the same.
Now you can find the net effect on the quaternion using the following code for converting Euler angles to a quaternion (I had put this code on Wikipedia):
static Quaterniond toQuaternion(double pitch, double roll, double yaw)
{
Quaterniond q;
double t0 = std::cos(yaw * 0.5f);
double t1 = std::sin(yaw * 0.5f);
double t2 = std::cos(roll * 0.5f);
double t3 = std::sin(roll * 0.5f);
double t4 = std::cos(pitch * 0.5f);
double t5 = std::sin(pitch * 0.5f);
q.w() = t0 * t2 * t4 + t1 * t3 * t5;
q.x() = t0 * t3 * t4 - t1 * t2 * t5;
q.y() = t0 * t2 * t5 + t1 * t3 * t4;
q.z() = t1 * t2 * t4 - t0 * t3 * t5;
return q;
}
Using basic trigonometry, sin(-x) = -sin(x) and cos(-x) = cos(x). Applying this to the above code, you can see that the signs of t3 and t5 will flip. This will cause the signs of x and y to flip.
So when you invert the z-axis,
Q'(w, x, y, z) = Q(w, -x, -y, z)
Similarly you can figure out any other combinations of axis reversal and find impact on quaternion.
PS: In case anyone is wondering why anyone would ever need this... I needed the above to transform quaternions coming from a MavLink/Pixhawk system which controls a drone. The source system uses a NED coordinate system, but usual 3D environments like Unreal use a NEU coordinate system, which requires transforming the z-axis to -z to use the quaternion correctly.
I did some further analysis, and it appears the effect of a quaternion (w, x, y, z) can be mirrored like this:
Mirror effect of rotation along x axis by flipping y and z elements of the quaternion.
Mirror effect of rotation along y axis by flipping x and z elements of the quaternion.
Mirror effect of rotation along z axis by flipping x and y elements of the quaternion.
The w element of the quaternion never needs to be touched.
Unfortunately I still don't understand quaternions well enough to be able to explain why this works, but I derived it from implementations of converting to and from axis-angle format, and after implementing this solution, it works just as well as my original one in all tests of it I have performed.
We can examine the set of all rotations and reflections in 3D; this is called the orthogonal group O(3). It can be thought of as the set of orthogonal matrices with determinant +1 or -1. All rotations have determinant +1 and pure reflections have determinant -1. There is another member of O(3), the inversion in a point (x,y,z) -> (-x,-y,-z); this has det -1 in 3D and we will come to it later. If we combine two transformations in the group, we multiply their determinants. Hence two rotations combined give another rotation (+1 * +1 = +1), a rotation combined with a reflection gives a reflection (+1 * -1 = -1), and two reflections combined give a rotation (-1 * -1 = +1).
We can restrict the O(3) to just those with determinant +1 to form the Special Orthogonal Group SO(3). This just contains the rotations.
Now the set of unit quaternions is the double cover of SO(3), which means that two unit quaternions correspond to each rotation. To be precise, if a+b i+c j+d k is a unit quaternion then -a-b i-c j-d k represents the same rotation; you can think of this as a rotation by ø around the vector (b,c,d) being the same as a rotation by 360° - ø around the vector (-b,-c,-d).
Note that all the unit quaternions have determinant +1, so there is none which correspond to a pure reflection. This is why you cannot use quaternions to represent reflections.
What you might be able to do is use the inversion. A reflection followed by an inversion is a rotation. For example, reflecting in x=0 and then inverting is the same as reflecting in y=0 and then in z=0, which is the same as a 180º rotation around the x-axis. You could do the same procedure for any reflection.
We can define a plane through the origin by its normal vector n = (a,b,c). A reflection of a vector v = (x,y,z) in that plane is given by
v - 2 (v . n ) / ( n . n) n
= (x,y,z) - 2 (a x+b y+c z) / (a^2+b^2+c^2) (a,b,c)
In particular the x-y plane has normal (0,0,1) so a reflection is
(x,y,z) - 2 z (0,0,1) = (x,y,-z)
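In code this reflection is a one-liner (GLM's built-in glm::reflect computes the same expression):
#include <glm/glm.hpp>

// Reflect v in the plane through the origin with unit normal n: v' = v - 2 (v.n) n.
glm::vec3 reflectInPlane(const glm::vec3& v, const glm::vec3& n)
{
    return v - 2.0f * glm::dot(v, n) * n;   // n = (0,0,1) gives (x, y, -z)
}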
The article Quaternions and spatial rotation has a nice formula for a quaternion built from the axis-angle representation:
p = cos(ø/2) + (x i + y j + z k) sin(ø/2)
This is a quaternion W + X i + Y j + Z k with W=cos(ø/2), X = x sin(ø/2), Y = y sin(ø/2), Z = z sin(ø/2)
Changing the direction of rotation flips the sign of the sine of the half angle but leaves the cosine unchanged, giving
p' = cos(ø/2) - (x i + y j + z k) sin(ø/2)
Now if we consider reflecting the corresponding vector in x-y plane giving
q = cos(ø/2) + (x i + y j - z k) sin(ø/2)
we might want to change the direction of rotation giving
q' = cos(ø/2) + (- x i - y j + z k) sin(ø/2)
= W - X i - Y j + Z k
which I think corresponds to your answer.
We can generalise this to reflection in a general plane with unit-length normal (a,b,c). Let d be the dot product (a,b,c).(x,y,z). The reflection of (x,y,z) is
(x,y,z) - 2 d (a,b,c) = (x - 2 d a, y - 2 d b, z - 2 d c)
the rotation quaternion for this is
q = cos(ø/2) - ((x - 2 d a) i + (y - 2 d b) j + (z - 2 d c) k) sin(ø/2)
  = cos(ø/2) - (x i + y j + z k) sin(ø/2) + 2 d sin(ø/2) (a i + b j + c k)
  = W - X i - Y j - Z k + 2 ((X,Y,Z).(a,b,c)) (a i + b j + c k)
Note that mirroring is not a rotation, so generally you can't bake it into a quaternion (I might very well have misunderstood your question, though). The 3x3 component of the mirroring transformation matrix is
M = I - 2 (n * n^T)
where I is the 3x3 identity matrix, n is the mirror plane's normal represented as a 3x1 matrix, and n^T is n as a 1x3 matrix (so n * n^T is a (3x1)(1x3) = 3x3 matrix).
Now, if the quaternion q you want to 'mirror' is the last transformation, the last transformation on the other side would be just M*q (again, this would be a general 3x3 matrix, not generally representable as a quaternion)
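For completeness, a tiny GLM sketch of that matrix (glm::outerProduct corresponds to GLSL's outerProduct; the function name is mine):
#include <glm/glm.hpp>

// M = I - 2 n n^T for a unit plane normal n.
glm::mat3 mirrorMatrix(const glm::vec3& n)
{
    return glm::mat3(1.0f) - 2.0f * glm::outerProduct(n, n);
}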
For anyone who gets here by a web-search and is looking for the math, then:
Reflection
To reflect a point 'p' through the plane ax+by+cz=0 using quaternions:
n = 0+(a,b,c)
p = 0+(x,y,z)
where 'n' is a unit bivector (or pure quaternion if you prefer)
p' = npn
then p' is the reflected point.
If you compose with a second reflection 'm':
p' = mnpnm = (mn)p(mn)^*
is a rotation.
Rotations and reflections compose as expected.
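A quick sketch of that sandwich product with GLM's quaternions, just to show the bookkeeping (names are mine); the result of n p n is again a pure quaternion whose vector part is the reflected point:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Reflect 'point' through the plane through the origin with unit normal 'normal'.
glm::vec3 reflectWithQuaternions(const glm::vec3& point, const glm::vec3& normal)
{
    glm::quat n(0.0f, normal);        // pure quaternion 0 + (a,b,c)
    glm::quat p(0.0f, point);         // pure quaternion 0 + (x,y,z)
    glm::quat r = n * p * n;          // p' = n p n
    return glm::vec3(r.x, r.y, r.z);  // equals point - 2*dot(point, normal)*normal
}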
Uniform scaling
Since scalar products commute and can be factored out: if we have either a rotation by a unit quaternion 'Q' or a reflection by a unit bivector 'b' (or any combination of these), multiplying either by some non-zero scale value 's' results in a uniform scaling by s^2. And since (sqrt(s0)*sqrt(s1))^2 = s0*s1, these uniform scaling values compose as expected.
However this point is probably of no interest since in code we want to be able to assume unit magnitude values to reduce the runtime complexity.

Perlin noise for terrain generation

I'm trying to implement 2D Perlin noise to create Minecraft-like terrain (Minecraft doesn't actually use 2D Perlin noise) without overhangs or caves and stuff.
The way I'm doing it is by creating a [50][20][50] array of cubes, where [20] is the maximum height of the array, and the height values will be determined with Perlin noise. I will then fill that array with cubes.
I've been reading this article and I don't understand: how do I compute the 4 gradient vectors and use them in my code? Does every adjacent 2D cell such as [2][3] and [2][4] have a different set of 4 gradient vectors?
Also, I've read that the general Perlin noise function also takes a numeric value that will be used as a seed; where do I put that in this case?
I'm going to explain Perlin noise using working code, and without relying on other explanations. First you need a way to generate a pseudo-random float at a 2D point. Each point should look random relative to the others, but the trick is that the same coordinates should always produce the same float. We can use any hash function to do that - not just the one that Ken Perlin used in his code. Here's one:
static float noise2(int x, int y) {
int n = x + y * 57;
n = (n << 13) ^ n;
return (float) (1.0-((n*(n*n*15731+789221)+1376312589)&0x7fffffff)/1073741824.0);
}
I use this to generate a "landscape" landscape[i][j] = noise2(i,j); (which I then convert to an image) and it always produces the same thing:
(image of the raw noise output)
But that looks too random - like the hills and valleys are too densely packed. We need a way of "stretching" each random point over, say, 5 points. And for the values between those "key" points, you want a smooth gradient:
static float stretchedNoise2(float x_float, float y_float, float stretch) {
    // stretch
    x_float /= stretch;
    y_float /= stretch;
    // the whole part of the coordinates
    int x = (int) Math.floor(x_float);
    int y = (int) Math.floor(y_float);
    // the decimal part - how far between the two points yours is
    float fractional_X = x_float - x;
    float fractional_Y = y_float - y;
    // we need to grab the 4x4 nearest points to do cubic interpolation
    double[] p = new double[4];
    for (int j = 0; j < 4; j++) {
        double[] p2 = new double[4];
        for (int i = 0; i < 4; i++) {
            p2[i] = noise2(x + i - 1, y + j - 1);
        }
        // interpolate each row
        p[j] = cubicInterp(p2, fractional_X);
    }
    // and interpolate the results of each row's interpolation
    return (float) cubicInterp(p, fractional_Y);
}
public static double cubicInterp(double[] p, double x) {
return cubicInterp(p[0],p[1],p[2],p[3], x);
}
public static double cubicInterp(double v0, double v1, double v2, double v3, double x) {
double P = (v3 - v2) - (v0 - v1);
double Q = (v0 - v1) - P;
double R = v2 - v0;
double S = v1;
return P * x * x * x + Q * x * x + R * x + S;
}
If you don't understand the details, that's ok - I don't know how Math.cos() is implemented, but I still know what it does. And this function gives us stretched, smooth noise.
(image of the stretched, smooth noise)
The stretchedNoise2 function generates a "landscape" at a certain scale (big or small) - a landscape of random points with smooth slopes between them. Now we can generate a sequence of landscapes on top of each other:
public static double perlin2(float xx, float yy) {
double noise = 0;
noise += stretchedNoise2(xx, yy, 5) * 1; // sample 1
noise += stretchedNoise2(xx, yy, 13) * 2; // twice as influential
// you can keep repeating different variants of the above lines
// some interesting variants are included below.
return noise / (1+2); // make sure you sum the multipliers above
}
To put it more accurately, we get the weighted average of the points from each sample.
(image: (sample 1 + 2 * sample 2) / 3 = the combined landscape)
When you stack a bunch of smooth noise together, usually about 5 samples of increasing "stretch", you get Perlin noise. (If you understand the last sentence, you understand Perlin noise.)
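To connect this back to the question's [50][20][50] cube array, here is a minimal sketch (in C++, while the code above is Java; perlin2 stands for a port of the stacked-noise function above, and offsetting the sample coordinates is used as a crude stand-in for a seed):
#include <cstdint>

double perlin2(float xx, float yy);      // assume a port of the Java function above, ~[-1, 1]

const int W = 50, H = 20, D = 50;
uint8_t blocks[W][H][D] = {};            // 0 = air, 1 = solid

void fillTerrain(float seedOffset)       // shifting the sample position acts as a crude seed
{
    for (int x = 0; x < W; ++x)
        for (int z = 0; z < D; ++z) {
            float n = (float)perlin2(x + seedOffset, z + seedOffset);
            float h01 = n * 0.5f + 0.5f;                    // map roughly to [0, 1]
            if (h01 < 0.0f) h01 = 0.0f;
            if (h01 > 1.0f) h01 = 1.0f;
            int height = (int)(h01 * (H - 1));              // column height in blocks
            for (int y = 0; y <= height; ++y)
                blocks[x][y][z] = 1;
        }
}
A more principled seed would be mixed into the hash inside noise2 instead; the offset trick is just the simplest thing that works.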
There are other implementations that are faster because they do the same thing in different ways, but because it is no longer 1983 and because you are getting started with writing a landscape generator, you don't need to know about all the special tricks and terminology they use to understand Perlin noise or do fun things with it. For example:
(example images 1, 2 and 3, produced by the three variants below)
// 1
float smearX = stretchedNoise2(xx, yy, 99) * 99;
float smearY = stretchedNoise2(xx, yy, 99) * 99;
ret += stretchedNoise2(xx + smearX, yy + smearY, 13)*1;
// 2
float smearX2 = stretchedNoise2(xx, yy, 9) * 19;
float smearY2 = stretchedNoise2(xx, yy, 9) * 19;
ret += stretchedNoise2(xx + smearX2, yy + smearY2, 13)*1;
// 3
ret += Math.cos( stretchedNoise2(xx , yy , 5)*4) *1;
About perlin noise
Perlin noise was developed to generate random continuous surfaces (actually, procedural textures). Its main feature is that the noise is always continuous over space.
From the article:
Perlin noise is a function for generating coherent noise over a space. Coherent noise means that for any two points in the space, the value of the noise function changes smoothly as you move from one point to the other -- that is, there are no discontinuities.
Simply put, Perlin noise looks like this:
_ _ __
\ __/ \__/ \__
\__/
But this certainly is not Perlin noise, because there are gaps:
_ _
\_ __/
___/ __/
Calculating the noise (or crushing gradients!)
As @markspace said, Perlin noise is mathematically hard. Let's simplify by generating 1D noise.
Imagine the following 1D space:
________________
Firstly, we define a grid (or points in 1D space):
1 2 3 4
________________
Then, we randomly choose a noise value for each grid point (this value is equivalent to the gradient in the 2D noise):
1 2 3 4
________________
-1 0 0.5 1 // random noise value
Now, calculating the noise value at a grid point is easy: just pick the value:
noise(3) => 0.5
But the noise value for an arbitrary point p needs to be calculated based on the closest grid points p1 and p2, using their values and influence:
// in 1D the influence is just the distance between the points
noise(p) => noise(p1) * influence(p1) + noise(p2) * influence(p2)
noise(2.5) => noise(2) * influence(2, 2.5) + noise(3) * influence(3, 2.5)
=> 0 * 0.5 + 0.5 * 0.5 => 0.25
The end! Now we are able to calculate 1D noise, just add one dimension for 2D. :-)
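Written out as code (a C++ sketch; gridValue plays the role of the "randomly chosen noise value per grid point", reusing the same kind of hash as noise2 in the other answer):
#include <cstdint>
#include <cmath>

static float gridValue(int i)            // per-grid-point pseudo-random value, roughly [-1, 1]
{
    uint32_t n = (uint32_t)i;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return 1.0f - (float)(n & 0x7fffffffu) / 1073741824.0f;
}

float noise1d(float p)
{
    int   p1 = (int)std::floor(p);       // left grid point
    int   p2 = p1 + 1;                   // right grid point
    float f  = p - (float)p1;            // how far p is between them
    // influence = 1 - distance to the grid point
    return gridValue(p1) * (1.0f - f) + gridValue(p2) * f;
}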
Hope it helps you understand! Now read @mk.'s answer for working code and have happy noises!
Edit:
Follow up question in the comments:
I read in wikipedia article that the gradient vector in 2d perlin should be length of 1 (unit circle) and random direction. since vector has X and Y, how do I do that exactly?
This can easily be lifted and adapted from the original Perlin noise code. Find below some pseudocode:
gradient.x = random()*2 - 1;
gradient.y = random()*2 - 1;
normalize_2d( gradient );
Where normalize_2d is:
// normalizes a 2d vector
function normalize_2d(v)
size = square_root( v.x * v.x + v.y * v.y );
v.x = v.x / size;
v.y = v.y / size;
// Compute Perlin noise at coordinates x, y
function perlin(float x, float y) {
    // Determine grid cell coordinates
    int x0 = (x > 0.0 ? (int)x : (int)x - 1);
    int x1 = x0 + 1;
    int y0 = (y > 0.0 ? (int)y : (int)y - 1);
    int y1 = y0 + 1;
    // Determine interpolation weights
    // Could also use higher order polynomial/s-curve here
    float sx = x - (double)x0;
    float sy = y - (double)y0;
    // Interpolate between grid point gradients
    float n0, n1, ix0, ix1, value;
    n0 = dotGridGradient(x0, y0, x, y);
    n1 = dotGridGradient(x1, y0, x, y);
    ix0 = lerp(n0, n1, sx);
    n0 = dotGridGradient(x0, y1, x, y);
    n1 = dotGridGradient(x1, y1, x, y);
    ix1 = lerp(n0, n1, sx);
    value = lerp(ix0, ix1, sy);
    return value;
}

How to convert direction vector to euler angles?

I'm looking for a way to convert a direction vector (X,Y,Z) into Euler angles (heading, pitch, bank). I know that a direction vector by itself is not enough to get the bank angle, so there's also another so-called up vector.
Having a direction vector (X,Y,Z) and an up vector (X,Y,Z), how do I convert that into Euler angles?
Let's see if I understand correctly. This is about the orientation of a rigid body in three-dimensional space, like an airplane during flight. The nose of that airplane points towards the direction vector
D=(XD,YD,ZD) .
Towards the roof is the up vector
U=(XU,YU,ZU) .
Then heading H would be the direction vector D projected onto the earth surface:
H=(XD,YD,0) ,
with an associated angle
angle_H=atan2(YD,XD) .
Pitch P would be the up/down angle of the nose with respect to the horizon, if the direction vector D is normalized you get it from
ZD=sin(angle_P)
resulting in
angle_P=asin(ZD) .
Finally, for the bank angle we consider the direction of the wings, assuming the wings are perpendicular to the body. If the plane flies straight towards D, the wings point perpendicular to D and parallel to the earth surface:
W0 = ( -YD, XD, 0 )
This would be a bank angle of 0. The expected Up Vector would be perpendicular to W0 and perpendicular to D
U0 = W0 × D
with × denoting the cross product. U equals U0 if the bank angle is zero, otherwise the angle between U and U0 is the bank angle angle_B, which can be calculated from
cos(angle_B) = Dot(U0,U) / abs(U0) / abs(U)
sin(angle_B) = Dot(W0,U) / abs(W0) / abs(U) .
Here 'abs' calculates the length of the vector. From that you get the bank angle as
angle_B = atan2( Dot(W0,U) / abs(W0), Dot(U0,U) / abs(U0) ) .
The normalization factors cancel each other if U and D are normalized.
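Put together as code, the three formulas above look roughly like this (a GLM sketch; D and U are assumed normalized, and the degenerate case D = ±(0,0,1), where W0 vanishes, is not handled):
#include <glm/glm.hpp>
#include <cmath>

// Returns heading, pitch and bank in radians for forward vector D and up vector U.
void directionToEuler(const glm::vec3& D, const glm::vec3& U,
                      float& heading, float& pitch, float& bank)
{
    heading = std::atan2(D.y, D.x);                        // angle_H
    pitch   = std::asin(D.z);                              // angle_P
    glm::vec3 W0 = glm::vec3(-D.y, D.x, 0.0f);             // wings at zero bank
    glm::vec3 U0 = glm::cross(W0, D);                      // expected up at zero bank
    bank = std::atan2(glm::dot(W0, U) / glm::length(W0),
                      glm::dot(U0, U) / glm::length(U0));  // angle_B
}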
We need three vectors: X1, Y1, Z1 of the local coordinate system (LCS) expressed in terms of the world coordinate system (WCS). The code below shows how to calculate the three Euler angles from these three vectors.
#include <math.h>
#include <float.h>
#define PI 3.141592653589793
/**
 * @param X1x
 * @param X1y
 * @param X1z X1 vector coordinates
 * @param Y1x
 * @param Y1y
 * @param Y1z Y1 vector coordinates
 * @param Z1x
 * @param Z1y
 * @param Z1z Z1 vector coordinates
 * @param pre precession rotation
 * @param nut nutation rotation
 * @param rot intrinsic rotation
 */
void lcs2Euler(
        double X1x, double X1y, double X1z,
        double Y1x, double Y1y, double Y1z,
        double Z1x, double Z1y, double Z1z,
        double *pre, double *nut, double *rot) {
    double Z1xy = sqrt(Z1x * Z1x + Z1y * Z1y);
    if (Z1xy > DBL_EPSILON) {
        *pre = atan2(Y1x * Z1y - Y1y * Z1x, X1x * Z1y - X1y * Z1x);
        *nut = atan2(Z1xy, Z1z);
        *rot = -atan2(-Z1x, Z1y);
    }
    else {
        *pre = 0.;
        *nut = (Z1z > 0.) ? 0. : PI;
        *rot = -atan2(X1y, X1x);
    }
}

Finding quaternion representing the rotation from one vector to another

I have two vectors u and v. Is there a way of finding a quaternion representing the rotation from u to v?
Quaternion q;
vector a = crossproduct(v1, v2);
q.xyz = a;
q.w = sqrt((v1.Length ^ 2) * (v2.Length ^ 2)) + dotproduct(v1, v2);
Don't forget to normalize q.
Richard is right about there not being a unique rotation, but the above should give the "shortest arc," which is probably what you need.
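The same formula with GLM types, for reference (a sketch only; it assumes v1 and v2 are not opposite, a case discussed further down):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <cmath>

glm::quat rotationBetween(const glm::vec3& v1, const glm::vec3& v2)
{
    glm::vec3 a = glm::cross(v1, v2);
    float w = std::sqrt(glm::dot(v1, v1) * glm::dot(v2, v2)) + glm::dot(v1, v2);
    return glm::normalize(glm::quat(w, a.x, a.y, a.z));
}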
Half-Way Vector Solution
I came up with the solution that I believe Imbrondir was trying to present (albeit with a minor mistake, which was probably why sinisterchipmunk had trouble verifying it).
Given that we can construct a quaternion representing a rotation around an axis like so:
q.w == cos(angle / 2)
q.x == sin(angle / 2) * axis.x
q.y == sin(angle / 2) * axis.y
q.z == sin(angle / 2) * axis.z
And that the dot and cross product of two normalized vectors are:
dot == cos(theta)
cross.x == sin(theta) * perpendicular.x
cross.y == sin(theta) * perpendicular.y
cross.z == sin(theta) * perpendicular.z
Seeing as a rotation from u to v can be achieved by rotating by theta (the angle between the vectors) around the perpendicular vector, it looks as though we can directly construct a quaternion representing such a rotation from the results of the dot and cross products; however, as it stands, theta = angle / 2, which means that doing so would result in twice the desired rotation.
One solution is to compute a vector half-way between u and v, and use the dot and cross product of u and the half-way vector to construct a quaternion representing a rotation of twice the angle between u and the half-way vector, which takes us all the way to v!
There is a special case, where u == -v and a unique half-way vector becomes impossible to calculate. This is expected, given the infinitely many "shortest arc" rotations which can take us from u to v, and we must simply rotate by 180 degrees around any vector orthogonal to u (or v) as our special-case solution. This is done by taking the normalized cross product of u with any other vector not parallel to u.
Pseudo code follows (obviously, in reality the special case would have to account for floating point inaccuracies -- probably by checking the dot products against some threshold rather than an absolute value).
Also note that there is no special case when u == v (the identity quaternion is produced -- check and see for yourself).
// N.B. the arguments are _not_ axis and angle, but rather the
// raw scalar-vector components.
Quaternion(float w, Vector3 xyz);
Quaternion get_rotation_between(Vector3 u, Vector3 v)
{
// It is important that the inputs are of equal length when
// calculating the half-way vector.
u = normalized(u);
v = normalized(v);
// Unfortunately, we have to check for when u == -v, as u + v
// in this case will be (0, 0, 0), which cannot be normalized.
if (u == -v)
{
// 180 degree rotation around any orthogonal vector
return Quaternion(0, normalized(orthogonal(u)));
}
Vector3 half = normalized(u + v);
return Quaternion(dot(u, half), cross(u, half));
}
The orthogonal function returns any vector orthogonal to the given vector. This implementation uses the cross product with the most orthogonal basis vector.
Vector3 orthogonal(Vector3 v)
{
float x = abs(v.x);
float y = abs(v.y);
float z = abs(v.z);
Vector3 other = x < y ? (x < z ? X_AXIS : Z_AXIS) : (y < z ? Y_AXIS : Z_AXIS);
return cross(v, other);
}
Half-Way Quaternion Solution
This is actually the solution presented in the accepted answer, and it seems to be marginally faster than the half-way vector solution (~20% faster by my measurements, though don't take my word for it). I'm adding it here in case others like myself are interested in an explanation.
Essentially, instead of calculating a quaternion using a half-way vector, you can calculate the quaternion which results in twice the required rotation (as detailed in the other solution), and find the quaternion half-way between that and zero degrees.
As I explained before, the quaternion for double the required rotation is:
q.w == dot(u, v)
q.xyz == cross(u, v)
And the quaternion for zero rotation is:
q.w == 1
q.xyz == (0, 0, 0)
Calculating the half-way quaternion is simply a matter of summing the quaternions and normalizing the result, just like with vectors. However, as is also the case with vectors, the quaternions must have the same magnitude, otherwise the result will be skewed towards the quaternion with the larger magnitude.
A quaternion constructed from the dot and cross product of two vectors will have the same magnitude as those products: length(u) * length(v). Rather than dividing all four components by this factor, we can instead scale up the identity quaternion. And if you were wondering why the accepted answer seemingly complicates matters by using sqrt(length(u) ^ 2 * length(v) ^ 2), it's because the squared length of a vector is quicker to calculate than the length, so we can save one sqrt calculation. The result is:
q.w = dot(u, v) + sqrt(length_2(u) * length_2(v))
q.xyz = cross(u, v)
And then normalize the result. Pseudo code follows:
Quaternion get_rotation_between(Vector3 u, Vector3 v)
{
float k_cos_theta = dot(u, v);
float k = sqrt(length_2(u) * length_2(v));
if (k_cos_theta / k == -1)
{
// 180 degree rotation around any orthogonal vector
return Quaternion(0, normalized(orthogonal(u)));
}
return normalized(Quaternion(k_cos_theta + k, cross(u, v)));
}
The problem as stated is not well-defined: there is not a unique rotation for a given pair of vectors. Consider the case, for example, where u = <1, 0, 0> and v = <0, 1, 0>. One rotation from u to v would be a pi / 2 rotation around the z-axis. Another rotation from u to v would be a pi rotation around the vector <1, 1, 0>.
I'm not much good with quaternions. However, I struggled for hours on this and could not make Polaris878's solution work. I tried pre-normalizing v1 and v2, normalizing q, and normalizing q.xyz. Yet it still didn't give me the right result.
In the end though I found a solution that did. If it helps anyone else, here's my working (python) code:
def diffVectors(v1, v2):
""" Get rotation Quaternion between 2 vectors """
v1.normalize(), v2.normalize()
v = v1+v2
v.normalize()
angle = v.dot(v2)
axis = v.cross(v2)
return Quaternion( angle, *axis )
A special case must be made if v1 and v2 are parallel, like v1 == v2 or v1 == -v2 (with some tolerance), where I believe the solutions should be Quaternion(1, 0,0,0) (no rotation) or Quaternion(0, *axis) for some axis orthogonal to v1 (a 180 degree rotation)
Why not represent the vector using pure quaternions? It's better if you normalize them first perhaps.
q1 = (0 ux uy uz)'
q2 = (0 vx vy vz)'
q1 qrot = q2
Pre-multiply with q1^-1:
qrot = q1^-1 q2
where q1^-1 = q1_conjugate / q1_norm
This can be thought of as "left division". Right division, which is not what you want, is:
qrot_right = q2^-1 q1
From an algorithmic point of view, the fastest solution looks like this in pseudocode:
Quaternion shortest_arc(const vector3& v1, const vector3& v2 )
{
// input vectors NOT unit
Quaternion q( cross(v1, v2), dot(v1, v2) );
// reducing to half angle
q.w += q.magnitude(); // 4 multiplications instead of 6, and more numerically stable
// handling close to 180 degree case
//... code skipped
return q.normalized(); // normalize if you need UNIT quaternion
}
Be sure that you need unit quaternions (usually, it is required for interpolation).
NOTE:
Non-unit quaternions can be used with some operations faster than unit ones.
Some of the answers don't seem to consider the possibility that the cross product could be 0. The snippet below uses the angle-axis representation:
//v1, v2 are assumed to be normalized
Vector3 axis = v1.cross(v2);
float ang = std::acos(v1.dot(v2)); // angle between v1 and v2
if (axis == Vector3::Zero())
    axis = up();
else
    axis = axis.normalized();
return toQuaternion(axis, ang);
The toQuaternion can be implemented as follows:
static Quaternion toQuaternion(const Vector3& axis, float angle)
{
auto s = std::sin(angle / 2);
auto u = axis.normalized();
return Quaternion(std::cos(angle / 2), u.x() * s, u.y() * s, u.z() * s);
}
If you are using Eigen library, you can also just do:
Quaternion::FromTwoVectors(from, to)
Working just with normalized quaternions, we can express Joseph Thompson's answer in the following terms.
Let q_v = (0, u_x, u_y, u_z) and q_w = (0, v_x, v_y, v_z) and consider
q = q_v * q_w = (-u dot v, u x v).
So representing q as q(q_0, q_1, q_2, q_3) we have
q_r = (1 - q_0, q_1, q_2, q_3).normalize()
According to the derivation of the quaternion rotation between two vectors, one can rotate a vector u to a vector v with
function fromVectors(u, v) {
d = dot(u, v)
w = cross(u, v)
return Quaternion(d + sqrt(d * d + dot(w, w)), w).normalize()
}
If it is known that the vectors u to vector v are unit vectors, the function reduces to
function fromUnitVectors(u, v) {
return Quaternion(1 + dot(u, v), cross(u, v)).normalize()
}
Depending on your use-case, handling the cases when the dot product is 1 (parallel vectors) and -1 (vectors pointing in opposite directions) may be needed.
The Generalized Solution
function align(Q, u, v)
U = quat(0, ux, uy, uz)
V = quat(0, vx, vy, vz)
return normalize(length(U*V)*Q - V*Q*U)
To find the quaternion of smallest rotation which rotate u to v, use
align(quat(1, 0, 0, 0), u, v)
Why This Generalization?
The result R = align(Q, u, v) is the quaternion closest to Q which will rotate u to v. More importantly, R is the quaternion closest to Q whose local u direction points in the same direction as v.
This can be used to give you all possible rotations which rotate from u to v, depending on the choice of Q. If you want the minimal rotation from u to v, as the other solutions give, use Q = quat(1, 0, 0, 0).
Most commonly, I find that the real operation you want to do is a general alignment of one axis with another.
// If you find yourself often doing something like
quatFromTo(toWorldSpace(Q, localFrom), worldTo)*Q
// you should instead consider doing
align(Q, localFrom, worldTo)
Example
Say you want the quaternion Y which only represents Q's yaw, the pure rotation about the y axis. We can compute Y with the following.
Y = align(quat(Qw, Qx, Qy, Qz), vec(0, 1, 0), vec(0, 1, 0))
// simplifies to
Y = normalize(quat(Qw, 0, Qy, 0))
Alignment as a 4x4 Projection Matrix
If you want to perform the same alignment operation repeatedly: because this operation is the same as the projection of a quaternion onto a 2D plane embedded in 4D space, we can represent it as multiplication with a 4x4 projection matrix, A*Q.
I = mat4(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1)
A = I - leftQ(V)*rightQ(U)/length(U*V)
// which expands to
A = mat4(
1 + ux*vx + uy*vy + uz*vz, uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx,
uy*vz - uz*vy, 1 + ux*vx - uy*vy - uz*vz, uy*vx + ux*vy, uz*vx + ux*vz,
uz*vx - ux*vz, uy*vx + ux*vy, 1 - ux*vx + uy*vy - uz*vz, uz*vy + uy*vz,
ux*vy - uy*vx, uz*vx + ux*vz, uz*vy + uy*vz, 1 - ux*vx - uy*vy + uz*vz)
// A can be applied to Q with the usual matrix-vector multiplication
R = normalize(A*Q)
//LeftQ is a 4x4 matrix which represents the multiplication on the left
//RightQ is a 4x4 matrix which represents the multiplication on the Right
LeftQ(w, x, y, z) = mat4(
w, -x, -y, -z,
x, w, -z, y,
y, z, w, -x,
z, -y, x, w)
RightQ(w, x, y, z) = mat4(
w, -x, -y, -z,
x, w, z, -y,
y, -z, w, x,
z, y, -x, w)
