Normal Mapping on procedural sphere - vector

I am a student in video games, and we are working on a raytracer in C++. We are using our teachers' library.
We create procedural objects (in our case a sphere); the camera sends a ray for each pixel of the screen, and the ray sends back information on what it hit.
Some of us decided to integrate normal maps. At first, we sent a ray at the object, looked at the value of the normal-map texel where we hit the sphere, converted it into a vector, normalized it, and returned it in place of the normal of the object. The result was pretty good, but of course it no longer took the orientation of the "face" into account (the sphere is procedural, so there is no face, but it gives the idea), so the render was flat.
We still don't really know how to "blend" the normal of the texture (in tangent space) and the normal of the object together. Here is our code:
// TGfxVec3 is part of our teachers library, and is a 3d vector like this:
// TGfxVec3( 12.7f, -13.4f, 52.0f )
// The sphere being at the origin and of radius 1, and tHit.m_tPosition being the
// exact position at the surface of the sphere where the ray hit, the normal of this
// point is the position hit by the ray.
TGfxVec3 tNormal = tHit.m_tPosition;
TGfxVec3 tTangent = Vec3CrossProduct( tNormal , m_tAxisZ );
TGfxVec3 tBiNormal = Vec3CrossProduct( tNormal , tTangent );
TGfxVec3 tTextureNorm = 2.0f*(TGfxVec3( pNorm[0], pNorm[1], pNorm[2] )/255.0f) - TGfxVec3( 1.0f, 1.0f, 1.0f );
// pNorm[0], pNorm[1], pNorm[2] are respectively the Red, Green,
// and Blue channels of the normal-map texture.
// We put them in a 3D vector, divide them by 255 so their values go from 0 to 1,
// multiply by 2, and then subtract a vector, so their range goes from -1 to +1.
tHit.m_tNorm = TGfxVec3( tTangent.x*tTextureNorm.x + tBiNormal.x*tTextureNorm.y +
tNormal.x*tTextureNorm.z, tTangent.y*tTextureNorm.x + tBiNormal.y*tTextureNorm.y +
tNormal.y*tTextureNorm.z, tTangent.z*tTextureNorm.x + tBiNormal.z*tTextureNorm.y +
tNormal.z*tTextureNorm.z ).Normalize();
// Here, after some research, I came across this: http://www.txutxi.com/?p=316 ,
// which allows us to convert the normal map from tangent space to object space.
The results are still not good. My main concern is the tangent and bi-normal. The axis taken as reference (here m_tAxisZ, the Z axis of the sphere) is not right. But I don't know what to use, or even whether what I am doing is sound. So I came here for help.

So, we finally did it. :D OK, I will try to be clear. For this, two images:
(1) : http://i.imgur.com/cHwrR9A.png
(2) : http://i.imgur.com/mGPH1RW.png
(My drawing skill has no equal, I know).
So, the main problem was to find the tangent "T" and the bi-tangent "B". We already have the normal "N". Our sphere always being at the origin with a radius of 1, a point on its surface is equal to the normal at that point (the black and red vector in the first image). So, we have to find the tangent at that point (in green). For this, we just have to rotate the vector by PI/2 radians:
With N( x, y ) :
T = ( -N.y , N.x )
However, we are in 3D, so the point will not always be at the equator. We can easily solve this problem by ignoring the Y component of our point and normalizing the vector using only the two other components. So, in the second image, we have P (we set its Y value to 0), and we normalize the new vector to get P'.
With P( x, y, z ) :
P' = ( P.x, 0, P.z).Normalize();
Then, we go back to my first message to find T. Next, we get B with a cross product between N and T. Finally, we compute the normal at that point by taking the normal map into account.
With the variable "Map" containing the three channels (RGB) of the normal map, each one remapped from [0,255] to [-1,1], and T, N and B all being 3D vectors:
( Map.R*T + Map.G*B + Map.B*N ).Normalize();
And that's it, you have the normal at the point, taking your normal map into account. :) Hope this will be useful for others.
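The whole procedure above can be sketched as a small standalone program. Note this is not the teachers' library: Vec3 and its helpers are minimal stand-ins for TGfxVec3, and the Map values are assumed to be already remapped to [-1, 1]:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s*v.x, s*v.y, s*v.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}

// N is the unit-sphere normal (== the surface position for a unit sphere
// at the origin); map is the normal-map texel remapped to [-1, 1].
Vec3 perturbedNormal(Vec3 N, Vec3 map) {
    // Project N onto the XZ plane and normalize (the P -> P' step).
    // Beware: this is undefined at the poles, where N.x == N.z == 0.
    Vec3 p = normalize({N.x, 0.0f, N.z});
    // Rotate by PI/2 in the XZ plane to get the tangent.
    Vec3 T = {-p.z, 0.0f, p.x};
    // Bi-tangent from the cross product of N and T.
    Vec3 B = cross(N, T);
    // Combine: Map.R*T + Map.G*B + Map.B*N, then normalize.
    return normalize(map.x*T + map.y*B + map.z*N);
}
```

With a "flat" normal-map texel (0, 0, 1) this returns the geometric normal unchanged, which is a quick sanity check for the basis construction.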

You are mostly right and completely wrong at the same time.
Tangent-space normal mapping uses a transformation matrix to convert the tangent-space normal from the texture to another space, such as object or world space, or to transform the light into tangent space so the lighting is computed with everything in the same space.
"Bi-normal" is a common misnomer; it should be called the bi-tangent.
It is sometimes possible to compute the TBN on the fly for simple geometry, e.g. on a height-map, where it is easy to deduce the tangent and bi-tangent on a regular grid. But on a sphere, the cross-product trick with a fixed axis results in a singularity at the poles, where the cross product gives a zero-length vector.
Lastly, even ignoring the pole singularity, the TBN vectors must be normalized before you apply the matrix to the tangent-space normal. You may also be missing a transpose: the inverse of a 3x3 orthonormal matrix is its transpose, and what you need is the inverse of the original TBN matrix if you go from tangent space to object space.
Because of all this, we most often store the TBN as extra information in the geometry, computed from the texture coordinates (the URL you referenced links to that computation) and interpolated at runtime along with the other vertex attributes.
Note: a rough simplification is to use the geometry normal as the TBN normal, but there is no reason in the first place that they match.
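The per-triangle computation from texture coordinates mentioned above can be sketched like this (a standard derivation, not a specific library; Vec2/Vec3 and the helper names are assumptions):

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 mad(float s, Vec3 a, float t, Vec3 b) { // s*a + t*b
    return {s*a.x + t*b.x, s*a.y + t*b.y, s*a.z + t*b.z};
}

// Tangent and bi-tangent for one triangle (p0,p1,p2) with UVs (uv0,uv1,uv2).
// Solves the 2x2 system relating position edges to UV edges.
void triangleTB(Vec3 p0, Vec3 p1, Vec3 p2,
                Vec2 uv0, Vec2 uv1, Vec2 uv2,
                Vec3& T, Vec3& B) {
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = uv1.x - uv0.x, dv1 = uv1.y - uv0.y;
    float du2 = uv2.x - uv0.x, dv2 = uv2.y - uv0.y;
    float f = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate UVs
    T = mad( f * dv2, e1, -f * dv1, e2);      // follows the U direction
    B = mad(-f * du2, e1,  f * du1, e2);      // follows the V direction
}
```

In practice T and B are normalized, accumulated per vertex across adjacent triangles, and re-orthogonalized against the vertex normal before interpolation.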

Related

Calculating camera matrix, given position, angle and FOV

I've recently been venturing into conversion of 3D points in space to a 2D pixel position on a screen, and almost every single answer I've found has been something like "do X with your world-to-camera matrix, and multiply by your viewport height to get it in pixels".
Now, that's all fine and good, but oftentimes these questions were about programming for video game engines, where a function to get a camera's view matrix is built into a library and called on demand. In my case I can't do that: I need to know how, given an FOV (say, 78 degrees), a position, and the angles it's facing (of the form pitch = x, yaw = y, roll = z), to calculate the view matrix of a virtual camera.
Does anybody know what I need to do? I'm working with Lua (with built-in userdata for things like 3D vectors, angles, and 4x4 matrices exposed via the C interface), if that helps.
I am using gluPerspective,
where:
fovw, fovh // are the FOV in screen-width and screen-height angles [rad]
zn, zf // are the znear, zfar distances from the focal point of the camera
When using the FOVy notation from OpenGL:
aspect = width/height
fovh = FOVy
fovw = FOVx = FOVy*aspect
So just feed your 4x4 matrix with the values in the order defined by the notation you use (column- or row-major).
I get the feeling you are doing a SW render on your own, so do not forget to do the perspective divide! Also take a look at:
3D graphic pipeline
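As a sketch of what gluPerspective builds under the hood (column-major OpenGL layout assumed; fovy in radians, zn/zf as in the answer):

```cpp
#include <cmath>

// Fill m (column-major, 16 floats) with a gluPerspective-equivalent
// projection matrix.
void perspective(float fovy, float aspect, float zn, float zf, float m[16]) {
    float f = 1.0f / std::tan(fovy / 2.0f);  // cotangent of half the FOV
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;                      // x scale
    m[5]  = f;                               // y scale
    m[10] = (zf + zn) / (zn - zf);           // depth remap
    m[11] = -1.0f;                           // w = -z, the perspective divide term
    m[14] = 2.0f * zf * zn / (zn - zf);      // depth offset
}
```

The view matrix from pitch/yaw/roll is then the inverse of the camera's rotation-and-translation matrix; multiply it with this projection to get the full transform.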

OpenGL : equation of the line going through a point defined by a 4x4 matrix ? (camera for example)

I would like to know the set of 3 equations (in world coordinates) of the line going through my camera (perpendicular to the camera screen), the position and rotation of my camera in world coordinates being defined by a 4x4 matrix.
Any idea?
A parametric line is simple: just extract the Z-axis direction vector and the origin point O from the direct camera matrix (see the link below for how to do that). Then any point P on your line is defined as:
P(t) = O + t*Z
where t is your parameter. The camera view direction is usually -Z for an OpenGL perspective, in which case:
t = (-inf,0>
Depending on your projection you might want to use:
t = <-z_far,-z_near>
The problem is there are many combinations of conventions, so you need to know whether your matrix is in row- or column-major order (so you know whether the direction vectors and origin are in rows or columns). Also, the camera matrix in gfx is usually the inverse one, so you may need to invert it first. For more info about this see:
Understanding 4x4 homogenous transform matrices
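A minimal sketch of the extraction, assuming a column-major, direct (camera-to-world) matrix; the Vec3 type and function names are my own:

```cpp
struct Vec3 { float x, y, z; };

// Column-major 4x4: column 2 (elements 8..10) holds the Z axis,
// column 3 (elements 12..14) holds the origin.
Vec3 axisZ(const float m[16])  { return { m[8],  m[9],  m[10] }; }
Vec3 origin(const float m[16]) { return { m[12], m[13], m[14] }; }

// Evaluate the parametric line P(t) = O + t*Z.
Vec3 pointOnRay(const float m[16], float t) {
    Vec3 O = origin(m), Z = axisZ(m);
    return { O.x + t*Z.x, O.y + t*Z.y, O.z + t*Z.z };
}

// Example matrix for testing: the identity (camera at origin, no rotation).
const float IDENTITY[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
```

With an OpenGL-style camera looking along -Z, pass negative t (or negate Z first); for a row-major matrix, read rows instead of columns.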

How to rotate a Vector3 using Vector2?

I want to simulate particles driven by wind on a three.js globe. The data I have is a Vector3 for the position of a particle and a Vector2 indicating wind speed and direction, think North/East. How do I get the new Vector3?
I've consulted numerous examples and read the documentation, and I believe the solution involves quaternions, but the axis of rotation is not given. Also, there are thousands of particles, so it should be fast; however, real-time is not required.
The radius of the sphere is 1.
I would recommend you have a look at the Spherical class provided by three.js. Instead of cartesian coordinates (x,y,z), a point is represented in terms of a spherical coordinate-system (θ (theta), φ (phi), r).
The value of theta is the longitude and phi is the latitude for your globe (r - sphereRadius would be the height above the surface). Your wind-vectors can then be interpreted as changes to these two values. So what I would try is basically this:
// a) convert particle-location to spherical
const sphericalPosition = new THREE.Spherical()
.setFromVector3(particle.position);
// b) update theta/phi (note that windSpeed is assumed to
// be given in radians/time, but for a sphere of size 1 that
// shouldn't make a difference)
sphericalPosition.theta += windSpeed.x; // east-direction
sphericalPosition.phi += windSpeed.y; // north-direction
// c) write back to particle-position
particle.position.setFromSpherical(sphericalPosition);
Performance-wise this shouldn't be a problem at all (though maybe don't create a new Spherical instance for every particle like I did above). The conversions involve a bit of trigonometry, but we're talking about just thousands of points, not millions.
Hope that helps!
If you just want to rotate a vector based on an angle, you can perform a simple rotation of values in the specified plane yourself using trig, e.g. for a rotation in the xz plane:
var x = cos(theta)*vec_to_rotate.x - sin(theta)*vec_to_rotate.z;
var z = sin(theta)*vec_to_rotate.x + cos(theta)*vec_to_rotate.z;
rotated_vector = new THREE.Vector3(x,vec_to_rotate.y,z);
But to move particles with wind, you're not really rotating a vector; you should be adding a velocity vector, and the particle 'rotates' its own heading based on a combination of initial velocity, inertia, air friction, and additional competing forces, like so:
init(){
    position = new THREE.Vector3(0,0,0);
    velocity = new THREE.Vector3(1,0,0);
    wind_vector = new THREE.Vector3(0,0,1);
}
update(){
    velocity.add(wind_vector);
    position.add(velocity);
    velocity.multiplyScalar(.95);
}
This model is truer to how wind influences a particle. This particle starts off heading along the x axis and then eventually 'turns' to go in the direction of the wind, without any rotation of vectors: it has a mass and a velocity in a direction, a force acts on it, and it turns.
You can see that because the whole velocity is subject to friction (the multiplyScalar), the initial velocity diminishes as the wind vector accumulates, which causes a turn without performing any rotations. Thought I'd throw this out just in case you're unfamiliar with working with particle systems and maybe were just thinking about it wrong.

How to find view point coordinates?

I have the azimuth, elevation, and direction vector of the sun. I want to place a view point along the sun-ray direction at some distance. Can anyone describe, or provide a link to, a resource that will help me understand and implement the required steps?
I used a cartesian coordinate system to find the direction vector from azimuth and elevation, and then to find the viewport origin (see the image linked in the question):
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
The azimuthal coordinate system references the NEH (geometric North, East, High/Up) reference frame!
In your linked image it references the -Y axis instead, which is not right unless you are not rendering the world but doing some nonlinear graph-plot projection. So which one is it?
BTW, here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84.
As far as I can see you have a bad conversion between coordinates, so just to be clear, this is how it looks:
On the left is the global Earth view with one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view, and on the right a surface-aligned top view. Blue, magenta and green are the input azimuthal coordinates; brown are the x,y,z cartesian projections (where the coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
If you use a different reference frame or axis orientation, change it accordingly ...
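The four equations above can be sketched as one conversion function (the Vec3 type and function name are my own; angles in radians, NEH axis convention as in the answer):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Convert azimuth/elevation/distance to cartesian x,y,z
// following the equations in the answer.
Vec3 azElToCartesian(float azimuth, float elevation, float dist) {
    float distXY = dist * std::cos(elevation);   // Dist' in the answer
    return {
         distXY * std::cos(azimuth),             // x = Dist'*cos(Azimut)
        -distXY * std::sin(azimuth),             // y = -Dist'*sin(Azimut)
         dist   * std::sin(elevation)            // z = Dist*sin(Elev)
    };
}
```

Note the projections use sin/cos, not the tan of the question's attempt, which is where the computation went wrong.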
I suspect you use 4x4 homogeneous transform matrices
to represent coordinate systems and also to hold your viewport, so look here:
transform matrix anatomy
constructing the view-port
You need the X,Y,Z axis vectors and the origin position O. O you already have (at least you think), and the Z axis is the ray direction, so you should have it too. Now just compute X,Y as alignment to something (otherwise the view will rotate around the ray); I use NEH for that, so:
view.Z=Ray.Dir // ray direction
view.Y=NEH.Z // NEH up vector
view.X=view.Y x view.Z // cross product makes view.X perpendicular to Y and Z
view.Y=view.Z x view.X // just to make all three axes perpendicular to each other
view.O=ground position - (distance*Ray.Dir);
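The axis construction above can be sketched in C++ (the Vec3/Basis types and helpers are my own stand-ins, not a specific library):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/l, v.y/l, v.z/l};
}

struct Basis { Vec3 X, Y, Z; };

// Build mutually perpendicular view axes from the ray direction
// and an up vector (e.g. the NEH Z axis).
Basis viewBasis(Vec3 rayDir, Vec3 up) {
    Basis b;
    b.Z = normalize(rayDir);           // view.Z = Ray.Dir
    b.Y = up;                          // view.Y = NEH.Z
    b.X = normalize(cross(b.Y, b.Z));  // perpendicular to Y and Z
    b.Y = cross(b.Z, b.X);             // re-orthogonalize Y
    return b;
}
```

The second cross product fixes Y in case the up vector was not exactly perpendicular to the ray; the origin and the inverse/projection steps follow as in the answer.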
To make it a valid viewport you have to:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that
If you want the whole thing,
then you also want to add the Sun/Earth position computation; in that case look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it is clear what is behind this, you just need to set the distance. If you want to place it at the Sun, distance = 1.0 AU (astronomical unit), but that is a huge distance and, if you have perspective, your Earth will be very small. Instead use some closer distance to match your view size; look here:
How to position the camera so that the object always has the same size

Finding absolute coordinates from relative coordinates in 3D space

My question is fairly difficult to explain, so please bear with me. I have a random object with Forward, Right, and Up vectors. Now, imagine this particular object is rotated randomly across all three axes. How would I go about finding the REAL coordinates of a point relative to the newly rotated object?
Example:
How would I, for instance, find the forward-most corner of the cube given its Forward, Right, and Up vectors (as well as its coordinates, obviously) assuming that the colored axis is the 'real' axis.
The best I could come up with is:
x=cube.x+pointToFind.x*(forward.x+right.x+up.x)
y=cube.y+pointToFind.y*(forward.y+right.y+up.y)
z=cube.z+pointToFind.z*(forward.z+right.z+up.z)
This worked sometimes, but failed when one of the coordinates for the point was 0 for obvious reasons.
In short, I don't know what do to, or really how to accurately describe what I'm trying to do... This is less of a programming questions and more of a general math question.
In general, you would have to project all corners of the object, one after the other, onto the target direction (i.e., compute the scalar or dot product of the two vectors) and remember the point delivering the maximum value.
Because of the special structure of the cube, several simplifications are possible. You can rotate the target direction vector into the local frame. Then the determination of the maximal projection can be read off the signs of its local coordinates. If the sign of the coordinate is positive the scalar product is maximized by maximizing the cube coordinate to 1. If the sign is negative, then the scalar product is maximized by minimizing the cube coordinate to 0.
Inverse rotation is the same as forming dot products with the columns of the rotation matrix (forward, right, up), so
result = zero_vector; // zero corner of the cube
if( dot( target, forward ) > 0 )
    result += forward;
if( dot( target, up ) > 0 )
    result += up;
if( dot( target, right ) > 0 )
    result += right;
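As a compilable sketch of the sign test above (the Vec3 type and function name are my own; local corner coordinates are in {0,1} as described):

```cpp
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 add(Vec3 a, Vec3 b)  { return {a.x+b.x, a.y+b.y, a.z+b.z}; }

// Return the cube corner (in world-relative offsets) that maximizes
// the projection onto the target direction.
Vec3 farthestCorner(Vec3 target, Vec3 forward, Vec3 right, Vec3 up) {
    Vec3 result = {0.0f, 0.0f, 0.0f}; // zero corner of the cube
    if (dot(target, forward) > 0) result = add(result, forward);
    if (dot(target, up)      > 0) result = add(result, up);
    if (dot(target, right)   > 0) result = add(result, right);
    return result; // add the cube's origin to get absolute coordinates
}
```

Each axis is included exactly when it has a positive projection on the target, which is what makes this equivalent to testing all eight corners.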
