How to convert a direction vector to Euler angles?

I'm looking for a way to convert a direction vector (X,Y,Z) into Euler angles (heading, pitch, bank). I know that a direction vector by itself is not enough to determine the bank angle, so there is also an additional, so-called up vector.
Given a direction vector (X,Y,Z) and an up vector (X,Y,Z), how do I convert that into Euler angles?

Let's see if I understand correctly. This is about the orientation of a rigid body in three-dimensional space, like an airplane during flight. The nose of that airplane points along the direction vector
D = (X_D, Y_D, Z_D).
Towards the roof points the up vector
U = (X_U, Y_U, Z_U).
Then the heading H would be the direction vector D projected onto the earth's surface:
H = (X_D, Y_D, 0),
with an associated angle
angle_H = atan2(Y_D, X_D).
Pitch P would be the up/down angle of the nose with respect to the horizon; if the direction vector D is normalized, you get it from
Z_D = sin(angle_P),
resulting in
angle_P = asin(Z_D).
Finally, for the bank angle we consider the direction of the wings, assuming the wings are perpendicular to the body. If the plane flies straight towards D, the wings point perpendicular to D and parallel to the earth's surface:
W0 = (-Y_D, X_D, 0).
This corresponds to a bank angle of 0. The expected up vector would be perpendicular to both W0 and D:
U0 = D × W0,
with × denoting the cross product. U equals U0 if the bank angle is zero; otherwise the angle between U and U0 is the bank angle angle_B, which can be calculated from
cos(angle_B) = Dot(U0, U) / abs(U0) / abs(U)
sin(angle_B) = Dot(W0, U) / abs(W0) / abs(U).
Here abs calculates the length of the vector. From that you get the bank angle as
angle_B = atan2( Dot(W0, U) / abs(W0), Dot(U0, U) / abs(U0) ).
The normalization factors cancel each other if U and D are normalized.
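Putting the pieces together, here is a minimal Python sketch of the whole conversion (assuming a right-handed coordinate system with z pointing up and both input vectors normalized; heading and bank degenerate when D points straight up or down):

import math

def euler_from_dir_up(d, u):
    # d = direction vector, u = up vector; both unit length, z is world-up
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    heading = math.atan2(d[1], d[0])
    pitch = math.asin(max(-1.0, min(1.0, d[2])))
    w0 = (-d[1], d[0], 0.0)                 # zero-bank wing direction
    u0 = (d[1] * w0[2] - d[2] * w0[1],      # u0 = d x w0, the zero-bank up vector
          d[2] * w0[0] - d[0] * w0[2],
          d[0] * w0[1] - d[1] * w0[0])
    # the normalization factors cancel in atan2 because |w0| = |u0| for unit d
    bank = math.atan2(dot(w0, u), dot(u0, u))
    return heading, pitch, bank

print(euler_from_dir_up((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 0.0)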

We need three vectors: X1, Y1, Z1 of the local coordinate system (LCS) expressed in terms of the world coordinate system (WCS). The code below shows how to calculate the three Euler angles from these vectors.
#include <math.h>
#include <float.h>

#define PI 3.141592653589793

/**
 * @param X1x
 * @param X1y
 * @param X1z X1 vector coordinates
 * @param Y1x
 * @param Y1y
 * @param Y1z Y1 vector coordinates
 * @param Z1x
 * @param Z1y
 * @param Z1z Z1 vector coordinates
 * @param pre precession rotation
 * @param nut nutation rotation
 * @param rot intrinsic rotation
 */
void lcs2Euler(
        double X1x, double X1y, double X1z,
        double Y1x, double Y1y, double Y1z,
        double Z1x, double Z1y, double Z1z,
        double *pre, double *nut, double *rot) {
    double Z1xy = sqrt(Z1x * Z1x + Z1y * Z1y);
    if (Z1xy > DBL_EPSILON) {
        *pre = atan2(Y1x * Z1y - Y1y * Z1x, X1x * Z1y - X1y * Z1x);
        *nut = atan2(Z1xy, Z1z);
        *rot = -atan2(-Z1x, Z1y);
    } else {
        *pre = 0.;
        *nut = (Z1z > 0.) ? 0. : PI;
        *rot = -atan2(X1y, X1x);
    }
}
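The convention these formulas assume is easiest to see with a round-trip check. A small Python sketch (my assumption, consistent with the formulas above: X1, Y1, Z1 are the rows of Rz(pre)·Rx(nut)·Rz(rot), a z-x-z convention):

import math

def axes_from_euler(pre, nut, rot):
    # Rows of Rz(pre) * Rx(nut) * Rz(rot): the LCS axes expressed in the WCS
    ca, sa = math.cos(pre), math.sin(pre)
    cb, sb = math.cos(nut), math.sin(nut)
    cg, sg = math.cos(rot), math.sin(rot)
    x1 = (ca * cg - sa * cb * sg, -ca * sg - sa * cb * cg, sa * sb)
    y1 = (sa * cg + ca * cb * sg, -sa * sg + ca * cb * cg, -ca * sb)
    z1 = (sb * sg, sb * cg, cb)
    return x1, y1, z1

x1, y1, z1 = axes_from_euler(0.3, 0.8, -0.5)
# Same formulas as lcs2Euler's regular branch:
pre = math.atan2(y1[0] * z1[1] - y1[1] * z1[0], x1[0] * z1[1] - x1[1] * z1[0])
nut = math.atan2(math.hypot(z1[0], z1[1]), z1[2])
rot = -math.atan2(-z1[0], z1[1])
print(pre, nut, rot)  # recovers 0.3, 0.8, -0.5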

Related

Orientation Matrix from Heading, Pitch and Roll

I have a problem converting Heading, Pitch and Roll to an Orientation Matrix
How would I go about improving this to ensure Y and Z are correctly calculated?
--[[
get_orientation - Returns the orientation matrix of an object based on its heading, pitch, and roll in degrees.
@param heading number - The heading of the object in degrees.
@param pitch number - The pitch of the object in degrees.
@param roll number - The roll of the object in degrees.
@return orientation table - The orientation matrix of the object, represented as a table with three unit vectors: x, y, and z.
]]
local function get_orientation(heading, pitch, roll)
    local orientation = {}
    -- Convert the heading, pitch, and roll from degrees to radians using math.rad
    heading = math.rad(heading)
    pitch = math.rad(pitch)
    roll = math.rad(roll)
    -- Calculate the x unit vector
    -- x is the Vec3 unit vector that points in the direction of the object's front
    orientation.x = {}
    orientation.x.x = math.cos(heading) * math.cos(pitch)
    orientation.x.y = math.sin(pitch)
    orientation.x.z = math.sin(heading) * math.cos(pitch)
    -- Calculate the y unit vector
    -- y is the Vec3 unit vector that points in the direction of the object's top
    orientation.y = {}
    orientation.y.x = -math.cos(heading) * math.sin(pitch)
    orientation.y.y = math.cos(pitch)
    orientation.y.z = -math.sin(heading) * math.sin(pitch)
    -- Calculate the z unit vector
    -- z is the Vec3 unit vector that points in the direction of the object's right side
    orientation.z = {}
    orientation.z.x = -math.sin(heading)
    orientation.z.y = math.sin(roll)
    orientation.z.z = -math.cos(heading) * math.cos(roll)
    -- Return the orientation matrix of the object
    return orientation
end
---------------------------------------------------
---------------------------------------------------
local lat = 41.610278
local lon = 41.599444
local heading = 90
local pitch = 25
local roll = 0
local alt = 100
local x, y = terrain.convertLatLonToMeters(lat, lon)
local orientation = get_orientation(heading, pitch, roll)
local position = {
    x = orientation.x,
    y = orientation.y,
    z = orientation.z,
    p = { x = x, y = alt, z = y },
}
Export.LoSetCameraPosition(position)
local actual = Export.LoGetCameraPosition()
return { position = position, actual = actual }
There is a little more information about the Orientation here
https://www.digitalcombatsimulator.com/en/support/faq/1256/#:~:text=level%2010%20m-,Orientation,-Object%20orientation%20is
I am attempting to build a function that takes lat, lon, alt, heading, pitch and roll and produces the Position matrix.
I have the example function so far, which does work, but everything goes funky after -81 pitch, and the roll does not work as intended.
Formulas for 3D rotation are not simple, but they can be simply deduced from three consecutive 2D rotations.
local function apply_rotation(a, b, angle)
    local ax, ay, az, bx, by, bz = a.x, a.y, a.z, b.x, b.y, b.z
    a.x = math.cos(angle) * ax + math.sin(angle) * bx
    a.y = math.cos(angle) * ay + math.sin(angle) * by
    a.z = math.cos(angle) * az + math.sin(angle) * bz
    b.x = math.cos(angle) * bx - math.sin(angle) * ax
    b.y = math.cos(angle) * by - math.sin(angle) * ay
    b.z = math.cos(angle) * bz - math.sin(angle) * az
end

local function get_orientation(heading, pitch, roll)
    -- Convert the heading, pitch, and roll from degrees to radians using math.rad
    heading = math.rad(heading)
    pitch = math.rad(pitch)
    roll = math.rad(roll)
    -- x is the Vec3 unit vector that points in the direction of the object's front
    -- y is the Vec3 unit vector that points in the direction of the object's top
    -- z is the Vec3 unit vector that points in the direction of the object's right side
    local o = {
        x = { x = 1, y = 0, z = 0 },
        y = { x = 0, y = 1, z = 0 },
        z = { x = 0, y = 0, z = 1 },
    }
    apply_rotation(o.x, o.z, heading)
    apply_rotation(o.x, o.y, pitch)
    apply_rotation(o.z, o.y, roll)
    -- Return the orientation matrix of the object
    return o
end
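For experimenting with the math outside the simulator, here is a Python transcription of the same three consecutive 2D rotations (the axis conventions are taken from the comments above; nothing beyond them is assumed):

import math

def apply_rotation(a, b, angle):
    # rotate basis vectors a and b within their common plane; returns new (a, b)
    c, s = math.cos(angle), math.sin(angle)
    return ([c * ax + s * bx for ax, bx in zip(a, b)],
            [c * bx - s * ax for ax, bx in zip(a, b)])

def get_orientation(heading, pitch, roll):
    x, y, z = [1, 0, 0], [0, 1, 0], [0, 0, 1]  # front, top, right
    x, z = apply_rotation(x, z, math.radians(heading))
    x, y = apply_rotation(x, y, math.radians(pitch))
    z, y = apply_rotation(z, y, math.radians(roll))
    return x, y, z

# heading 90, pitch 25: x matches the closed form (cos h cos p, sin p, sin h cos p)
print(get_orientation(90, 25, 0))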

Rotation About an Arbitrary Axis in 3 Dimensions Using Matrix

I came across a math problem in Interactive Computer Graphics.
I summarize and abstract this problem as follows:
I'm going to rotate a 3D coordinate P(x1,y1,z1) around a point O(x0,y0,z0),
and there are 2 vectors u and v which we already know:
u is the direction to O before the transformation,
v is the direction to O after the transformation.
I want to know how to carry out the calculation and get the coordinates of Q.
Thanks a lot.
Solution:
Rotation About an Arbitrary Axis in 3 Dimensions using the following matrix:
rotation axis vector (normalized): (u,v,w)
position coordinate of the rotation center: (a,b,c)
rotation angle: theta
Reference:
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxnbGVubm11cnJheXxneDoyMTJiZTZlNzVlMjFiZTFi
For just a single point no rotation is needed ... so the knowns are:
u, v, O, P
We know the distance is not changing:
|P-O| = |Q-O|
and the directions are parallel to u, v, so:
Q = O + v*(|P-O|/|v|)
But I suspect you want to construct a rotation (transform matrix) such that more points (a mesh perhaps) are transformed. If that is true then you need at least one more known to get this right, because there are infinitely many rotations transforming P -> Q, but the rest of the mesh will be different for each ... so you need to know at least 2 non-trivial point pairs P0,P1 -> Q0,Q1, or the axis of rotation, or a plane parallel to the rotation, or some other data ...
Anyway, in the current state you can use as rotation axis the vector perpendicular to u, v, with the angle obtained from the dot product:
axis = cross(u,v)
ang = +/-acos(dot(u,v))
You just need to find the sign of the angle, so try both and use the one for which the resulting Q is where it should be, i.e. for which dot(Q-O,v) is maximal. To rotate around an arbitrary axis and point use:
Rodrigues_rotation_formula
Also this might be helpful:
Understanding 4x4 homogenous transform matrices
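As a concrete sketch of this recipe (Rodrigues' rotation formula applied to P around O, trying both angle signs as suggested; u, v, O, P are made-up example values):

import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def rodrigues(w, k, ang):
    # rotate vector w about the unit axis k by ang
    c, s = math.cos(ang), math.sin(ang)
    kxw, kdw = cross(k, w), dot(k, w)
    return tuple(w[i] * c + kxw[i] * s + k[i] * kdw * (1.0 - c) for i in range(3))

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # unit directions before/after
O, P = (2.0, 1.0, 0.0), (5.0, 1.0, 0.0)   # rotation center and point

k = cross(u, v)                            # axis = cross(u, v)
n = math.sqrt(dot(k, k))
k = (k[0] / n, k[1] / n, k[2] / n)
ang = math.acos(max(-1.0, min(1.0, dot(u, v))))
w = (P[0] - O[0], P[1] - O[1], P[2] - O[2])
for a in (ang, -ang):                      # keep the sign maximizing dot(Q-O, v)
    q = rodrigues(w, k, a)
    Q = (O[0] + q[0], O[1] + q[1], O[2] + q[2])
    print(a, Q, dot(q, v))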
By computing the dot product between v and u you get the angle l between the vectors. The cross product of v and u (normalized) produces the rotation axis vector a. Let w be the vector from O to P (along u). To rotate point P into Q, apply the following actions (in pseudo code) with the axis a and angle l computed above:
float4 Rotate(float4 w, float l, float4 a)
{
    float4x4 Mr = IDENTITY;
    quat_t quat = IDENTITY;
    float xx, yy, zz, xy, xz, yz, wx, wy, wz;

    quat[X] = a[X] * sin(-l / 2.0f);
    quat[Y] = a[Y] * sin(-l / 2.0f);
    quat[Z] = a[Z] * sin(-l / 2.0f);
    quat[W] = cos(-l / 2.0f);

    xx = quat[X] * quat[X];
    yy = quat[Y] * quat[Y];
    zz = quat[Z] * quat[Z];
    xy = quat[X] * quat[Y];
    xz = quat[X] * quat[Z];
    yz = quat[Y] * quat[Z];
    wx = quat[W] * quat[X];
    wy = quat[W] * quat[Y];
    wz = quat[W] * quat[Z];

    Mr[0][0] = 1.0f - 2.0f * (yy + zz);
    Mr[0][1] = 2.0f * (xy + wz);
    Mr[0][2] = 2.0f * (xz - wy);
    Mr[0][3] = 0.0f;
    Mr[1][0] = 2.0f * (xy - wz);
    Mr[1][1] = 1.0f - 2.0f * (xx + zz);
    Mr[1][2] = 2.0f * (yz + wx);
    Mr[1][3] = 0.0f;
    Mr[2][0] = 2.0f * (xz + wy);
    Mr[2][1] = 2.0f * (yz - wx);
    Mr[2][2] = 1.0f - 2.0f * (xx + yy);
    Mr[2][3] = 0.0f;
    Mr[3][0] = 0.0f;
    Mr[3][1] = 0.0f;
    Mr[3][2] = 0.0f;
    Mr[3][3] = 1.0f;

    w = Mr * w;
    return w;
}
Point Q is at the end of the rotated vector w. The algorithm used in the pseudo code is quaternion rotation.
If you know u, v, P, and O, then I would suggest that you compute |OP|, which is preserved under rotations. Then multiply this length by the unit vector -v (I assumed u, v are unit vectors; if not, normalize them) and translate O by this -|OP|v vector. The negative sign in front of v comes from the description given in your question: "v is the direction to O after transformation".
P and Q are at the same distance R from O:
R = sqrt( (x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2 )
and OQ is collinear with v, so OQ = v * R / ||v||, where ||v|| is the norm of v:
||v|| = sqrt( xv^2 + yv^2 + zv^2 )
So the coordinates of Q(xq,yq,zq) are:
xq= xo + xv * R / ||v||
yq= yo + yv * R / ||v||
zq= zo + zv * R / ||v||
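Spelled out as a quick Python check (made-up example values for O, P and v):

import math

x0, y0, z0 = 0.0, 0.0, 0.0   # O
x1, y1, z1 = 3.0, 4.0, 0.0   # P
xv, yv, zv = 0.0, 0.0, 2.0   # v

R = math.sqrt((x1 - x0)**2 + (y1 - y0)**2 + (z1 - z0)**2)
nv = math.sqrt(xv**2 + yv**2 + zv**2)
xq = x0 + xv * R / nv
yq = y0 + yv * R / nv
zq = z0 + zv * R / nv
print(xq, yq, zq)  # (0.0, 0.0, 5.0): distance R = 5 from O, along v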

Tessellation of the circle in OpenGL

I'm having trouble understanding the math behind this function. I would like to hear the logic behind the formulas (especially what the tangential and radial factors are) used here to create points which later (when the vec3 array is sent to a function) form a circle in OpenGL.
void doTesselate(const Arc& arc, int slices, std::vector<glm::vec3>& vertices)
{
    double dang = (arc.endAngle() - arc.startAngle()) * Deg2Rad;
    double radius = arc.radius();
    double angIncr = dang / slices;
    double tangetial_factor = tan(angIncr);
    double radial_factor = 1 - cos(angIncr);
    double startAngle = arc.startAngle() * Deg2Rad;
    const glm::vec3& center = arc.center();
    double x = center.x - radius * cos(startAngle);
    double y = center.y - radius * sin(startAngle);
    ++slices;
    for (int ii = 0; ii < slices; ii++) {
        vertices.push_back(glm::vec3(x, y, center.z));
        double tx = center.y - y;
        double ty = x - center.x;
        x += tx * tangetial_factor;
        y += ty * tangetial_factor;
        double rx = center.x - x;
        double ry = center.y - y;
        x += rx * radial_factor;
        y += ry * radial_factor;
    }
}
The idea is the following:
Starting from the current point, you go a bit in the tangential direction and then back towards the center.
The vector (tx, ty) is the tangent at the current point, with length equal to the radius. In order to get to the new angle, you have to move tan(angle) * radius along the tangent. The radius is already incorporated in the tangent vector, and tan(angle) is the tangetial_factor (you get that directly from the tangent's definition).
After that, (rx, ry) is the vector towards the center. This vector has the length l:
cos(angle) = radius / l
l = radius / cos(angle)
We need to find a multiple m of this vector, such that the corrected point lies on the circle with the given radius again. If we just inspect the lengths, then we want to find:
target distance = current distance - m * length of (rx, ry)
radius = radius / cos(angle) - m * radius / cos(angle)
1 = (1 - m) / cos(angle)
cos(angle) = 1 - m
1 - cos(angle) = m
And this multiple is exactly the radial_factor (the amount which you need to move towards the center to get onto the circle).
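The scheme is easy to verify in a few lines of Python (a transcription of the loop above; it prints each point's distance from the center, which stays equal to the radius):

import math

def tesselate(cx, cy, radius, start_deg, end_deg, slices):
    ang_incr = math.radians(end_deg - start_deg) / slices
    tangential_factor = math.tan(ang_incr)
    radial_factor = 1 - math.cos(ang_incr)
    x = cx + radius * math.cos(math.radians(start_deg))
    y = cy + radius * math.sin(math.radians(start_deg))
    points = []
    for _ in range(slices + 1):
        points.append((x, y))
        tx, ty = cy - y, x - cx          # tangent, same length as the radius
        x += tx * tangential_factor      # step along the tangent
        y += ty * tangential_factor
        x += (cx - x) * radial_factor    # pull back onto the circle
        y += (cy - y) * radial_factor
    return points

for x, y in tesselate(0.0, 0.0, 1.0, 0, 90, 4):
    print(math.hypot(x, y), math.degrees(math.atan2(y, x)))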

Bearing (azimuth) and distance via 2 GPS coordinates (C#)

I'm trying to find the angle to north (bearing/azimuth) and the distance between 2 GPS coordinates, but obviously I have a mistake somewhere: it gives me wrong bearing and distance values. Please correct me where I'm wrong. I'm trying it in Unity 5 (C#).
Here is the code:
public float pointX;
public float pointY;
public float lat1 = 55.500817f;
public float lat2 = 55.380680f;
public float lon1 = 37.568342f;
public float lon2 = 37.822586f;
public float azimuth;

void Update () {
    float dlon = lon2 - lon1;
    float dlat = lat2 - lat1;
    pointX = Mathf.Sin(dlon * 0.01745329f) * Mathf.Cos(lat2 * 0.01745329f);
    pointY = Mathf.Cos(lat1 * 0.01745329f) * Mathf.Sin(lat2 * 0.01745329f)
           - Mathf.Sin(lat1 * 0.01745329f) * Mathf.Cos(lat2 * 0.01745329f) * Mathf.Cos(dlon * 0.01745329f);
    azimuth = Mathf.Atan2(pointX, pointY) * 57.29578f;
    double distance = Math.Pow(Math.Sin(dlat / 2 * 0.01745329), 2.0)
                    + Math.Cos(lat1 * 0.01745329) * Math.Cos(lat2 * 0.01745329) * Math.Pow(Math.Sin(dlon / 2 * 0.01745329), 2.0);
    distance = 2.0 * 6376500.0 * Math.Atan2(Math.Sqrt(distance), Math.Sqrt(1.0 - distance));
}
where * 0.01745329f is the conversion from degrees to radians and *57.29578f is the conversion from radians to degrees
Let's assume all the angles are already converted to radians, and use Re as the Earth's mean radius, and we'll assume a spherical Earth model. There are corrections for the ellipsoidal shape of the Earth but this will get you close. I'll use python-style coding since I know nothing about C#.
#
# North Distance of point 2 from point 1
#
dN = Re * dlat
#
# East Distance of point 2 from point 1
#
dE = Re * dlon * math.cos(0.5 * (lat1 + lat2))
#
# Distance between points
#
distance = math.sqrt(dN**2 + dE**2)
#
# Azimuth to point 2 from point 1 in radians
#
azimuth = math.atan2(dE, dN)
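Plugging in the question's coordinates (a quick check; assumes a mean Earth radius of 6371 km and that all angles are first converted to radians):

import math

Re = 6371000.0  # mean Earth radius in meters (assumed)
lat1, lon1 = math.radians(55.500817), math.radians(37.568342)
lat2, lon2 = math.radians(55.380680), math.radians(37.822586)

dN = Re * (lat2 - lat1)
dE = Re * (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
print(math.degrees(math.atan2(dE, dN)))  # ~129.8, close to the haversine result below
print(math.hypot(dE, dN))                # ~20.9 km, in meters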
I copied your code (to Java), 1:1.
There is no bug in your azimuth code.
Either the cause is the usage of Mathf (float) instead of the double variant,
or you are just looking at the wrong data or the wrong output.
As intermediate values I get:
pointx = 0.0025209875920285405,
pointy = -0.0020921620920549278,
azimuth = 129.68

How to calculate Tangent and Binormal?

Talking about bump mapping, specular highlights and these kinds of things in OpenGL Shading Language (GLSL).
I have:
An array of vertices (e.g. {0.2,0.5,0.1, 0.2,0.4,0.5, ...})
An array of normals (e.g. {0.0,0.0,1.0, 0.0,1.0,0.0, ...})
The position of a point light in world space (e.g. {0.0,1.0,-5.0})
The position of the viewer in world space (e.g. {0.0,0.0,0.0}) (assume the viewer is in the center of the world)
Now, how can I calculate the Binormal and Tangent for each vertex? I mean, what is the formula to calculate the binormals, and what do I have to use based on that information? And what about the tangent?
I'll construct the TBN matrix anyway, so if you know a formula to construct the matrix directly based on that information, that would be nice!
Oh, yes, I have the texture coordinates too, if needed.
And as I'm talking about GLSL, a per-vertex solution would be nice, i.e. one which doesn't need to access more than one vertex's information at a time.
---- Update -----
I found this solution:
vec3 tangent;
vec3 binormal;
vec3 c1 = cross(a_normal, vec3(0.0, 0.0, 1.0));
vec3 c2 = cross(a_normal, vec3(0.0, 1.0, 0.0));
if (length(c1) > length(c2)) {
    tangent = c1;
} else {
    tangent = c2;
}
tangent = normalize(tangent);
binormal = cross(a_normal, tangent);
binormal = normalize(binormal);
But I don't know if it is 100% correct.
The relevant input data to your problem are the texture coordinates. Tangent and Binormal are vectors locally parallel to the object's surface. And in the case of normal mapping they're describing the local orientation of the normal texture.
So you have to calculate the direction (in the model's space) in which the texturing vectors point. Say you have a triangle ABC, with texture coordinates HKL. This gives us vectors:
D = B-A
E = C-A
F = K-H
G = L-H
Now we want to express D and E in terms of tangent space T, U, i.e.
D = F.s * T + F.t * U
E = G.s * T + G.t * U
This is a system of linear equations with 6 unknowns and 6 equations, it can be written as
| D.x D.y D.z | | F.s F.t | | T.x T.y T.z |
| | = | | | |
| E.x E.y E.z | | G.s G.t | | U.x U.y U.z |
Inverting the FG matrix yields
| T.x T.y T.z | 1 | G.t -F.t | | D.x D.y D.z |
| | = ----------------- | | | |
| U.x U.y U.z | F.s G.t - F.t G.s | -G.s F.s | | E.x E.y E.z |
Together with the vertex normal, T and U form a local space basis, called the tangent space, described by the matrix
| T.x U.x N.x |
| T.y U.y N.y |
| T.z U.z N.z |
This matrix transforms from tangent space into object space. To do lighting calculations one needs its inverse. With a little bit of exercise one finds:
T' = T - (N·T) N
U' = U - (N·U) N - (T'·U) T'
Normalizing the vectors T' and U', calling them tangent and binormal we obtain the matrix transforming from object into tangent space, where we do the lighting:
| T'.x T'.y T'.z |
| U'.x U'.y U'.z |
| N.x N.y N.z |
We store T' and U' together with the vertex normal as a part of the model's geometry (as vertex attributes), so that we can use them in the shader for lighting calculations. I repeat: you don't determine tangent and binormal in the shader; you precompute them and store them as part of the model's geometry (just like normals).
(Everything between the vertical bars above denotes a matrix, never a determinant; determinants normally use vertical bars instead of brackets in their notation.)
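For illustration, a small Python sketch of the per-triangle solve above (a hypothetical helper, not part of any engine; assumed inputs: corner positions A, B, C and texture coordinates H, K, L as (s, t) pairs):

def triangle_tangent_space(A, B, C, H, K, L):
    # D = B - A, E = C - A (positions); F = K - H, G = L - H (texture coords)
    D = tuple(b - a for a, b in zip(A, B))
    E = tuple(c - a for a, c in zip(A, C))
    Fs, Ft = K[0] - H[0], K[1] - H[1]
    Gs, Gt = L[0] - H[0], L[1] - H[1]
    r = 1.0 / (Fs * Gt - Ft * Gs)   # inverse determinant of the FG matrix
    T = tuple(r * (Gt * d - Ft * e) for d, e in zip(D, E))
    U = tuple(r * (-Gs * d + Fs * e) for d, e in zip(D, E))
    return T, U

# Unit quad corner: T and U line up with the triangle edges
print(triangle_tangent_space((0, 0, 0), (1, 0, 0), (0, 1, 0),
                             (0, 0), (1, 0), (0, 1)))
# ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))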
Generally, you have 2 ways of generating the TBN matrix: off-line and on-line.
On-line = right in the fragment shader using derivative instructions. Those derivatives give you a flat TBN basis for each point of a polygon. In order to get a smooth one we have to re-orthogonalize it based on a given (smooth) vertex normal. This procedure is even heavier on the GPU than the initial TBN extraction.
// compute derivatives of the world position
vec3 p_dx = dFdx(pw_i);
vec3 p_dy = dFdy(pw_i);
// compute derivatives of the texture coordinate
vec2 tc_dx = dFdx(tc_i);
vec2 tc_dy = dFdy(tc_i);
// compute initial tangent and bi-tangent
vec3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
vec3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy ); // sign inversion
// get new tangent from a given mesh normal
vec3 n = normalize(n_obj_i);
vec3 x = cross(n, t);
t = cross(x, n);
t = normalize(t);
// get updated bi-tangent
x = cross(b, n);
b = cross(n, x);
b = normalize(b);
mat3 tbn = mat3(t, b, n);
Off-line = prepare the tangent as a vertex attribute. This is more difficult to get because it will not just add another vertex attribute but will also require re-composing all the other attributes. Moreover, it is not guaranteed to give you better performance, as you get the additional cost of storing/passing/animating(!) a vector3 vertex attribute.
The math is described in many places (google it), including the #datenwolf post.
The problem here is that 2 vertices may have the same normal and texture coordinate but different tangents. That means you cannot just add a vertex attribute to a vertex; you'll need to split the vertex into 2 and specify different tangents for the clones.
The best way to get unique tangent (and other attribs) per vertex is to do it as early as possible = in the exporter. There on the stage of sorting pure vertices by attributes you'll just need to add the tangent vector to the sorting key.
As a radical solution to the problem consider using quaternions. A single quaternion (vec4) can successfully represent a tangent space of a pre-defined handedness. It's easy to keep orthonormal (including passing it to the fragment shader), and to store and extract the normal if needed. More info on the KRI wiki.
Based on the answer from kvark, I would like to add some more thoughts.
If you are in need of an orthonormalized tangent space matrix, you have to do some work either way.
Even if you add tangent and binormal attributes, they will be interpolated during the shader stages,
and at the end they are neither normalized nor orthogonal to each other.
Let's assume that we have a normalized normal vector n, and the tangent t and the binormal b, or that we can calculate them from the derivatives as follows:
// derivatives of the fragment position
vec3 pos_dx = dFdx( fragPos );
vec3 pos_dy = dFdy( fragPos );
// derivatives of the texture coordinate
vec2 texC_dx = dFdx( texCoord );
vec2 texC_dy = dFdy( texCoord );
// tangent vector and binormal vector
vec3 t = texC_dy.y * pos_dx - texC_dx.y * pos_dy;
vec3 b = texC_dx.x * pos_dy - texC_dy.x * pos_dx;
Of course an orthonormalized tangent space matrix can be calculated using the cross product,
but this would only work for right-handed systems. If a matrix was mirrored (left-handed system) it will turn into a right-handed system:
t = cross( cross( n, t ), t ); // orthonormalization of the tangent vector
b = cross( n, t ); // orthonormalization of the binormal vector
// may invert the binormal vector
mat3 tbn = mat3( normalize(t), normalize(b), n );
In the code snippet above the binormal vector is reversed if the tangent space is a left-handed system.
To avoid this, the long way has to be taken:
t = cross( cross( n, t ), t ); // orthonormalization of the tangent vector
b = cross( b, cross( b, n ) ); // orthonormalization of the binormal vectors to the normal vector
b = cross( cross( t, b ), t ); // orthonormalization of the binormal vectors to the tangent vector
mat3 tbn = mat3( normalize(t), normalize(b), n );
A common way to orthogonalize any matrix is the Gram–Schmidt process:
t = t - n * dot( t, n ); // orthonormalization of the tangent vector
b = b - n * dot( b, n ); // orthonormalization of the binormal vectors to the normal vector
b = b - t * dot( b, t ); // orthonormalization of the binormal vectors to the tangent vector
mat3 tbn = mat3( normalize(t), normalize(b), n );
Another possibility is to use the determinant of the 2x2 matrix which results from the derivatives of the texture coordinates texC_dx, texC_dy, to take the direction of the binormal vector into account. The idea is that the determinant of an orthogonal matrix is 1, and the determinant of an orthogonal mirror matrix is -1.
The determinant can either be calculated by the GLSL function determinant( mat2( texC_dx, texC_dy ) ),
or by the formula texC_dx.x * texC_dy.y - texC_dy.x * texC_dx.y.
For the calculation of the orthonormalized tangent space matrix, the binormal vector is no longer required, and calculating the unit vector
(normalize) of the binormal vector can be avoided.
float texDet = texC_dx.x * texC_dy.y - texC_dy.x * texC_dx.y;
vec3 t = texC_dy.y * pos_dx - texC_dx.y * pos_dy;
t = normalize( t - n * dot( t, n ) );
vec3 b = cross( n, t ); // b is normalized because n and t are orthonormalized unit vectors
mat3 tbn = mat3( t, sign( texDet ) * b, n ); // take into account the direction of the binormal vector
There is a variety of ways to calculate tangents, and if the normal map baker doesn't do it the same way as the renderer you'll get subtle artifacts. Many bakers use the MikkTSpace algorithm, which isn't the same as the fragment derivatives trick.
Fortunately, if you have an indexed mesh from a program that uses MikkTSpace (and no texture coordinate triangles with opposite orientations share an index) the hard part of the algorithm is mostly done for you, and you can reconstruct the tangents like this:
#include <cmath>
#include <cstdint>
#include "glm/geometric.hpp"
#include "glm/vec2.hpp"
#include "glm/vec3.hpp"
#include "glm/vec4.hpp"

using glm::vec2;
using glm::vec3;
using glm::vec4;

void makeTangents(uint32_t nIndices, uint16_t* indices,
                  const vec3 *positions, const vec3 *normals,
                  const vec2 *texCoords, vec4 *tangents) {
    uint32_t inconsistentUvs = 0;
    for (uint32_t l = 0; l < nIndices; ++l) tangents[indices[l]] = vec4(0);
    for (uint32_t l = 0; l < nIndices; ++l) {
        uint32_t i = indices[l];
        uint32_t j = indices[(l + 1) % 3 + l / 3 * 3];
        uint32_t k = indices[(l + 2) % 3 + l / 3 * 3];
        vec3 n = normals[i];
        vec3 v1 = positions[j] - positions[i], v2 = positions[k] - positions[i];
        vec2 t1 = texCoords[j] - texCoords[i], t2 = texCoords[k] - texCoords[i];
        // Is the texture flipped?
        float uv2xArea = t1.x * t2.y - t1.y * t2.x;
        if (std::abs(uv2xArea) < 0x1p-20)
            continue;  // Smaller than 1/2 pixel at 1024x1024
        float flip = uv2xArea > 0 ? 1 : -1;
        // 'flip' or '-flip'; depends on the handedness of the space.
        if (tangents[i].w != 0 && tangents[i].w != -flip) ++inconsistentUvs;
        tangents[i].w = -flip;
        // Project triangle onto tangent plane
        v1 -= n * dot(v1, n);
        v2 -= n * dot(v2, n);
        // Tangent is object space direction of texture coordinates
        vec3 s = normalize((t2.y * v1 - t1.y * v2) * flip);
        // Use angle between projected v1 and v2 as weight
        float angle = std::acos(dot(v1, v2) / (length(v1) * length(v2)));
        tangents[i] += vec4(s * angle, 0);
    }
    for (uint32_t l = 0; l < nIndices; ++l) {
        vec4& t = tangents[indices[l]];
        t = vec4(normalize(vec3(t.x, t.y, t.z)), t.w);
    }
    // std::cerr << inconsistentUvs << " inconsistent UVs\n";
}
In the vertex shader, they are rotated into world space:
fragNormal = (model.model * vec4(inNormal, 0)).xyz;
fragTangent = vec4((model.model * vec4(inTangent.xyz, 0)).xyz, inTangent.w);
Then the binormal and world space normal are calculated like this (see http://mikktspace.com/):
vec3 binormal = fragTangent.w * cross(fragNormal, fragTangent.xyz);
vec3 worldNormal = normalize(normal.x * fragTangent.xyz +
normal.y * binormal +
normal.z * fragNormal);
(The binormal is usually calculated per pixel, but some bakers give you the option to calculate it per vertex and interpolate it. This page has information about specific programs.)
