Texture projection + perspective correction, getting the math right

I render animated geometry. In each frame, I want to texture-map the geometry with a screen-space texture from the previous frame (projected onto the geometry as it was in the previous frame). So the result should look as if the screen-space texture was projected onto the geometry one frame ago and then transformed by the geometry animation to the current frame.
Calculating the proper texture coordinates per vertex is not difficult. In GLSL that's simply:
void main(void)
{
    vPos = currentMVP * vec4(position,1);
    gl_Position = vPos;
    vec4 oldPos = previousMVP * vec4(position,1);
    vec2 UV = vec2(((oldPos.x/oldPos.w)+1)*0.5f, ((oldPos.y/oldPos.w)+1)*0.5f);
    ...
}
But getting the texture coordinates to interpolate correctly over the geometry is trickier than I thought. Normally, texture coordinates for projection should be interpolated linearly in screen space, so to achieve this one would multiply them by vPos.w in the vertex shader and divide them again by vPos.w in the fragment shader. However, that's only correct if the texture is projected from the camera view. In this case I need something else: an interpolation that accounts for forward perspective-correct interpolation in the previous frame and backward perspective-correct interpolation in the current frame.
This graphic illustrates three different cases:
- Case A is simple: here I could keep the normal perspective-corrected interpolation of the texture coordinates (as performed by default by the rasterizer).
- In Case B, however, I would need linear interpolation of the texture coordinates to get the proper result (either by multiplying with vPos.w in the vertex shader and dividing by vPos.w in the fragment shader, or in newer GLSL versions by using the "noperspective" interpolation qualifier).
- And in Case C I would need perspective-corrected interpolation, but according to the oldPos.w value. So I would have to linearize the interpolation of u'=(u/oldPos.w) and v'=(v/oldPos.w) by multiplying u' with currentPos.w in the vertex shader and dividing the interpolated value by currentPos.w in the fragment shader. I would also need to interpolate w'=(1/oldPos.w) linearly in the same way and then calculate the final u'' in the fragment shader by dividing the interpolated u' by the interpolated w' (and the same for v'' respectively).
So the question now is: what's the proper math to yield the correct result in either case?
Again, calculating the correct UVs for the vertices is not the problem; it's about achieving the correct interpolation over the triangles.
Maybe relevant: in the same pass I also want to do some regular, non-projective, perspective-corrected texturing of the object. This means I must not alter the gl_Position.w value.

vec2 UV = vec2(((oldPos.x/oldPos.w)+1)*0.5f, ((oldPos.y/oldPos.w)+1)*0.5f);
Wrong. You need the W; you don't want to divide yet. What you want is this:
vec4 oldPos = previousMVP * vec4(position,1);
oldPos = clipToTexture * oldPos;
vec3 UV = oldPos.xyw;
The clipToTexture matrix is a 4x4 matrix that does the scale and translation needed to go from clip space to [0, 1] texture space. That's what your adding of 1.0 and scaling by 0.5 were doing. Here, it's in matrix form; normally, you'd just left-multiply "previousMVP" with this, so it would all be a single matrix multiply.
In your fragment shader, you need to do projective texture lookups. I don't remember the GLSL 1.20 function, but I know the 1.30+ function:
vec4 color = textureProj(samplerName, UV.stp);
It is this function which will do the necessary division-by-W step.
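Putting the pieces together, a minimal sketch could look like the following (the literal bias matrix, the uniform/sampler names and the GLSL 1.30-style in/out declarations are assumptions of this sketch, not from the original post):

// --- vertex shader ---
in vec3 position;
uniform mat4 currentMVP;
uniform mat4 previousMVP;
out vec4 vProjUV;   // previous-frame clip position, biased towards [0,1] texture space

void main(void)
{
    gl_Position = currentMVP * vec4(position, 1.0);

    // clipToTexture: scale by 0.5 and offset by 0.5 so that xy/w ends up in [0,1].
    // (GLSL mat4 constructors are column-major; in practice you would bake this
    // into previousMVP on the CPU.)
    mat4 clipToTexture = mat4(
        0.5, 0.0, 0.0, 0.0,
        0.0, 0.5, 0.0, 0.0,
        0.0, 0.0, 0.5, 0.0,
        0.5, 0.5, 0.5, 1.0);

    vProjUV = clipToTexture * (previousMVP * vec4(position, 1.0));
}

// --- fragment shader ---
uniform sampler2D previousFrameTex;   // the screen-space texture from the last frame
in vec4 vProjUV;
out vec4 fragColor;

void main(void)
{
    // textureProj divides .st by the last component, i.e. by the previous frame's w,
    // after the ordinary perspective-correct interpolation of the varying. That is
    // exactly the two-step correction Case C asks for.
    fragColor = textureProj(previousFrameTex, vProjUV.xyw);
}

Since gl_Position only receives the usual currentMVP transform, the regular perspective-correct texturing mentioned in the question keeps working in the same pass.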

Related

Normalized Device Coordinates in Metal, coming from OpenGL

Alright, so I know there are a lot of questions referring to normalized device coordinates here on SO, but none of them address my particular issue.
So, everything I draw is specified in 2D screen coordinates where the top left is (0,0) and the bottom right is (screenWidth, screenHeight); then in my vertex shader I do this calculation to get the NDC (basically, I'm rendering UI elements):
float ndcX = (screenX - ScreenHalfWidth) / ScreenHalfWidth;
float ndcY = 1.0 - (screenY / ScreenHalfHeight);
where screenX/screenY are pixel coordinates, for example (600, 700), and ScreenHalf_____ is half of the screen width/height.
And the final position that I return from the vertex shader for the rasterization state is:
gl_Position = vec4(ndcX, ndcY, Depth, 1.0);
Which works perfectly fine in OpenGL ES.
Now the problem is that when I try it just like this in Metal 2, it doesn't work.
I know Metal's NDC volume is 2x2x1 and OpenGL's is 2x2x2, but I thought depth didn't play an important part in this equation since I am passing it in myself per vertex.
I tried this link and this SO question but was confused, and the links weren't that helpful, since I am trying to avoid matrix calculations in the vertex shader; I am rendering everything in 2D for now.
So my questions... What is the formula to transform pixel coordinates to NDC in Metal? Is it possible without using an orthographic projection matrix? And why doesn't my equation work for Metal?
It is of course possible without a projection matrix. Matrices are just a useful convenience for applying transformations. But it's important to understand how they work when situations like this arise, since using a general orthographic projection matrix would perform unnecessary operations to arrive at the same results.
Here are the formulae I might use to do this:
float xScale = 2.0f / drawableSize.x;
float yScale = -2.0f / drawableSize.y;
float xBias = -1.0f;
float yBias = 1.0f;
float clipX = position.x * xScale + xBias;
float clipY = position.y * yScale + yBias;
Where drawableSize is the dimension (in pixels) of the renderbuffer, which can be passed in a buffer to the vertex shader. You can also precompute the scale factors and pass those in instead of the screen dimensions, to save some computation on the GPU.

How to rotate a Vector3 using Vector2?

I want to simulate particles driven by wind on a three.js globe. The data I have is a Vector3 for the position of a particle and a Vector2 indicating wind speed and direction, think North/East. How do I get the new Vector3?
I've consulted numerous examples and read the documentation, and I believe the solution involves quaternions, but the axis of rotation is not given. Also, there are thousands of particles, so it should be fast; however, real-time is not required.
The radius of the sphere is 1.
I would recommend you have a look at the Spherical class provided by three.js. Instead of cartesian coordinates (x,y,z), a point is represented in terms of a spherical coordinate-system (θ (theta), φ (phi), r).
The value of theta is the longitude and phi is the latitude for your globe (r - sphereRadius would be the height above the surface). Your wind-vectors can then be interpreted as changes to these two values. So what I would try is basically this:
// a) convert particle-location to spherical
const sphericalPosition = new THREE.Spherical()
.setFromVector3(particle.position);
// b) update theta/phi (note that windSpeed is assumed to
// be given in radians/time, but for a sphere of size 1 that
// shouldn't make a difference)
sphericalPosition.theta += windSpeed.x; // east-direction
sphericalPosition.phi += windSpeed.y; // north-direction
// c) write back to particle-position
particle.position.setFromSpherical(sphericalPosition);
Performance-wise this shouldn't be a problem at all (maybe don't create a new Spherical instance for every particle like I did above). The conversions involve a bit of trigonometry, but we're talking about just thousands of points, not millions.
Hope that helps!
If you just want to rotate a vector based on an angle, you can perform a simple rotation of values on the specified plane yourself using trig, as per this page, e.g. for a rotation on the xz plane:
var x = cos(theta)*vec_to_rotate.x - sin(theta)*vec_to_rotate.z;
var z = sin(theta)*vec_to_rotate.x + cos(theta)*vec_to_rotate.z;
rotated_vector = new THREE.Vector3(x,vec_to_rotate.y,z);
But to move particles with wind, you're not really rotating a vector; you should be adding a velocity vector, and it 'rotates' its own heading based on a combination of initial velocity, inertia, air friction, and additional competing forces, a la:
init(){
    position = new THREE.Vector3(0,0,0);
    velocity = new THREE.Vector3(1,0,0);
    wind_vector = new THREE.Vector3(0,0,1);
}
update(){
    velocity.add(wind_vector);
    position.add(velocity);
    velocity.multiplyScalar(.95);
}
This model is truer to how wind influences a particle. This particle will start off heading along the x axis, and then eventually 'turn' to go in the direction of the wind, without any rotation of vectors. It has a mass and a velocity in a direction, a force is acting on it, so it turns.
You can see that because the whole velocity is subject to friction (the multiplyScalar), our initial velocity diminishes as the wind vector accumulates, which causes a turn without performing any rotations. Thought I'd throw this out just in case you're unfamiliar with working with particle systems and maybe were just thinking about it wrong.

Normals of height map don't work

I am trying to implement normals for my height map, but they don't seem to work.
Look at these:
Note that the pattern occurs along the edges. Why?
Vertices are shared (indexed rendering), and the normal for each vertex is the average of the normals of all triangles that the vertex is part of.
The algorithm for the normals looks like this:
float size=Size;
int WGidY=int(gl_WorkGroupID.y);
int WGidX=int(gl_WorkGroupID.x);
vec4 tempVertices[3];
tempVertices[0]=imageLoad(HeightMap, ivec2(WGidX, WGidY));
tempVertices[1]=imageLoad(HeightMap, ivec2(WGidX, WGidY+1));
tempVertices[2]=imageLoad(HeightMap, ivec2(WGidX+1, WGidY));
vec4 LoadedNormal=imageLoad(NormalMap, ivec2(WGidX, WGidY));
vec4 Normal=vec4(0.0f);
Normal.xyz=cross((tempVertices[0].xyz-tempVertices[1].xyz), (tempVertices[0].xyz-tempVertices[2].xyz));
Normal.w=1;
imageStore(NormalMap, ivec2(WGidX,WGidY), Normal+LoadedNormal);
No need to do averaging like that. You can compute it directly in one step as follows:
vec3 v[4] = vec3[4](
    imageLoad(HeightMap, ivec2(WGidX-1, WGidY)).xyz,
    imageLoad(HeightMap, ivec2(WGidX+1, WGidY)).xyz,
    imageLoad(HeightMap, ivec2(WGidX, WGidY-1)).xyz,
    imageLoad(HeightMap, ivec2(WGidX, WGidY+1)).xyz
);
vec3 Normal = normalize(cross(v[1] - v[0], v[3] - v[2]));
imageStore(NormalMap, ivec2(WGidX, WGidY), vec4(Normal, 1));
Also, you don't even need to store the height-map mesh explicitly. Instead you can send the same low-resolution quad to the GPU, tessellate it with a tessellation shader, apply the height map to the generated vertices by sampling a one-channel texture, and compute the normals on the fly as above.
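As a rough illustration of that idea, here is a sketch of a GLSL 4.0 tessellation evaluation shader (a matching tessellation control shader still has to set the tessellation levels; the uniform names, the corner ordering of the quad patch and the assumption of square texels on a unit-sized patch are all assumptions of this sketch):

#version 400 core
layout(quads, equal_spacing, ccw) in;

uniform sampler2D uHeightMap;     // one-channel height texture
uniform vec2  uTexelSize;         // 1.0 / texture resolution
uniform float uHeightScale;
uniform mat4  uMVP;

out vec3 teNormal;

float h(vec2 uv) { return textureLod(uHeightMap, uv, 0.0).r * uHeightScale; }

void main()
{
    vec2 uv = gl_TessCoord.xy;

    // bilinearly interpolate the four corners of the low-resolution quad patch
    // (assumes the control points are ordered 0-1-2-3 around the quad)
    vec4 a = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, uv.x);
    vec4 b = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, uv.x);
    vec4 p = mix(a, b, uv.y);

    p.y += h(uv);   // displace along the up axis

    // central differences give the normal directly, no averaging pass needed
    float hL = h(uv - vec2(uTexelSize.x, 0.0));
    float hR = h(uv + vec2(uTexelSize.x, 0.0));
    float hD = h(uv - vec2(0.0, uTexelSize.y));
    float hU = h(uv + vec2(0.0, uTexelSize.y));
    teNormal = normalize(vec3(hL - hR, 2.0 * uTexelSize.x, hD - hU));

    gl_Position = uMVP * p;
}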
OK guys, I found the problem. This is a symptom of "greedy triangulation": the normals inside a triangle are interpolated barycentrically, but the edges are interpolated linearly to prevent color differences between adjacent triangles. Thank you, again, Paul Bourke:
http://paulbourke.net/texture_colour/interpolation/
If you don't have enough triangles, don't use Phong shading (maybe normal mapping?).
After tweaks:
http://prntscr.com/dadrue
http://prntscr.com/dadtum
http://prntscr.com/dadugf

Normal Mapping on procedural sphere

I am a student in video games, and we are working on a raytracer in C++. We are using our teachers' library.
We create procedural objects (in our case a sphere); the camera sends a ray for each pixel of the screen, and the ray sends back information about what it hit.
Some of us decided to integrate normal maps. So, at first, we cast a ray at the object, looked at the value of the normal-map texel where we hit the sphere, converted it into a vector, normalized it and used it in place of the normal of the object. The result was pretty good, but of course it didn't take the orientation of the "face" (it's procedural, so there is no face, but it gives the idea) into account anymore, so the render was flat.
We still don't really know how to "blend" the normal of the texture (in tangent space) and the normal of the object together. Here is our code:
// TGfxVec3 is part of our teacher's library; it is a 3D vector like this:
// TGfxVec3( 12.7f, -13.4f, 52.0f )
// The sphere being at the origin and of radius 1, and tHit.m_tPosition being the
// exact position at the surface of the sphere where the ray hit, the normal of this
// point is the position hit by the ray.
TGfxVec3 tNormal = tHit.m_tPosition;
TGfxVec3 tTangent = Vec3CrossProduct( tNormal , m_tAxisZ );
TGfxVec3 tBiNormal = Vec3CrossProduct( tNormal , tTangent );
TGfxVec3 tTextureNorm = 2*(TGfxVec3( pNorm[0], pNorm[1], pNorm[2] )/255)-TGfxVec3( 1.0f, 1.0f, 1.0f );
// pNorm[0], pNorm[1], pNorm[2] are respectively the Red, Green,
// and Blue channels of the Normal Map texture.
// We put them in a 3D vector, divide them by 255 so their values go from 0 to 1,
// multiply them by 2, and then subtract a vector, so their range goes from -1 to +1.
tHit.m_tNorm = TGfxVec3( tTangent.x*tTextureNorm.x + tBiNormal.x*tTextureNorm.x +
    tNormal.x*tTextureNorm.x, tTangent.y*tTextureNorm.y + tBiNormal.y*tTextureNorm.y +
    tNormal.y*tTextureNorm.y, tTangent.z*tTextureNorm.z + tBiNormal.z*tTextureNorm.z +
    tNormal.z*tTextureNorm.z ).Normalize();
// Here, after some research, I came across this: http://www.txutxi.com/?p=316 ,
// which allows us to convert the normal map from tangent space to object space.
The results are still not good. My main concern is the tangent and bi-normal: the axis taken as reference (here m_tAxisZ, the Z axis of the sphere) is not right. But I don't know what to take instead, or even if what I am doing is really sound. So I came here for help.
So, we finally did it. :D OK, I will try to be clear. For this, two images:
(1) : http://i.imgur.com/cHwrR9A.png
(2) : http://i.imgur.com/mGPH1RW.png
(My drawing skill has no equal, I know).
So, the main problem was to find the tangent "T" and the bi-tangent "B". We already have the normal "N". Our sphere always being at the origin with a radius of 1, a point on its surface is equal to the normal at that point (the black and red vector on the first image). So, we have to find the tangent at that point (in green). For this, we just have to rotate the vector by PI/2 rad:
With N( x, y ) :
T = ( -N.y , N.x )
However, we are in 3D, so the point will not always be at the equator. We can easily solve this problem by ignoring the Y position of our point and normalizing the vector with only the two other components. So, in the second image, we have P (we set its Y value to 0), and we normalize the new vector to get P'.
With P( x, y, z ) :
P' = ( P.x, 0, P.z).Normalize();
Then, we apply the rotation formula above to find T. Next, we get B with a cross product between N and T. Finally, we calculate the normal at that point by taking the normal map into account.
With the variable "Map" containing the three channels (RGB) of the normal map, each one remapped to the range -1 to 1, and T, N and B all being 3D vectors:
( Map.R*T + Map.G*B + Map.B*N ).Normalize();
And that's it: you have the normal at that point, taking your normal map into account. :) Hope this will be useful for others.
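Condensed into code, the recipe above might look roughly like this (a GLSL-style sketch with placeholder names; the same math maps directly onto TGfxVec3 and Vec3CrossProduct in the teacher's library):

vec3 sphereNormalFromMap(vec3 hitPos, vec3 mapRGB)   // mapRGB already remapped to [-1, 1]
{
    vec3 N = normalize(hitPos);                  // unit sphere at the origin: position == normal
    vec3 P = normalize(vec3(N.x, 0.0, N.z));     // P': drop the Y component
    vec3 T = vec3(-P.z, 0.0, P.x);               // rotate P' by PI/2 in the xz plane
    vec3 B = cross(N, T);                        // bi-tangent
    // note: degenerate at the poles, where N.x == N.z == 0
    return normalize(mapRGB.r * T + mapRGB.g * B + mapRGB.b * N);
}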
You are mostly right and completely wrong at the same time.
Tangent-space normal mapping uses a transformation matrix to convert the tangent-space normal from the texture into another space, like object or world space, or to transform the light into tangent space so that the lighting is computed with everything in the same space.
"Bi-normal" is a common misnomer; it should be called the bi-tangent.
It is sometimes possible to compute the TBN on the fly for simple geometry, e.g. on a height map, where it is easy to deduce the tangent and the bi-tangent from the regular grid. But on a sphere, the cross-product trick with a fixed axis results in a singularity at the poles, where the cross product gives a zero-length vector.
Last, even if we ignore the pole singularity, the TBN must be normalized before you apply the matrix to the tangent-space normal. You may also be missing a transpose: the inverse of a 3x3 orthonormal matrix is its transpose, and what you need is the inverse of the original TBN matrix if you go from tangent space to object space.
Because of all this, we most often store the TBN as extra information in the geometry, computed from the texture coordinates (the URL you referenced links to a description of that computation) and interpolated at runtime along with the other values.
Note: a rough simplification is to use the geometry normal as the TBN normal, but there is no reason in the first place that they should match.
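For reference, the per-triangle tangent computed from positions and texture coordinates (the computation referred to above) usually looks something like the sketch below; the function name and the note about averaging per vertex afterwards are assumptions of this sketch:

vec3 triangleTangent(vec3 p0, vec3 p1, vec3 p2,
                     vec2 uv0, vec2 uv1, vec2 uv2)
{
    vec3 e1 = p1 - p0;                            // position deltas
    vec3 e2 = p2 - p0;
    vec2 d1 = uv1 - uv0;                          // texture-coordinate deltas
    vec2 d2 = uv2 - uv0;
    float r = 1.0 / (d1.x * d2.y - d2.x * d1.y);  // breaks down if the UVs are degenerate
    return normalize((e1 * d2.y - e2 * d1.y) * r);
}
// The per-vertex tangent is then the normalized average over the triangles sharing
// the vertex, and the bi-tangent is cross(normal, tangent).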

GLSL - Vertex normals + normal mapping?

I'm trying to create a simple shader for my lighting system, and right now I'm working on adding support for normal mapping. Without the normal map, the lighting works perfectly: I'm using the normals forwarded from the vertex shader, and they work fine. I'm also reading the normals from the normal map correctly. I've tried adding the vertex normal and the normal map's normal, and that doesn't work; I also tried multiplying. Here's how I'm reading the normal map:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normals = normalize((normalHeight.xyz * vec3(2.0) - vec3(1.0)));
So I have the correct vertex normals, and the normals from the normal map. How should I combine these to get the correct normals?
It depends on how you store your normal maps. If they are in world space to begin with (this is rather rare) and your scene never changes, you can look them up the way you have them. Typically, however, they are in tangent space. Tangent space is a vector space that uses the object's normal, and the rate of change in the (s,t) texture coordinates to properly transform the normals on a surface with arbitrary orientation.
Tangent space normal maps usually appear bluish to the naked eye, whereas world space normal maps are every color of the rainbow (and need to be biased and scaled because half of the colorspace is supposed to represent negative vectors) :)
If you want to understand tangent space better, complete with implementation on deriving the basis vectors, see this link.
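In practice, "combining" them means building a TBN basis and rotating the sampled normal with it. Here is a minimal fragment-shader sketch, assuming a per-vertex tangent is forwarded from the vertex shader alongside the normal (the varying names are placeholders; only m_NormalMap and texCoord come from the question):

uniform sampler2D m_NormalMap;
varying vec2 texCoord;
varying vec3 vNormal;     // interpolated vertex normal (world or view space)
varying vec3 vTangent;    // interpolated vertex tangent, in the same space as vNormal

void main()
{
    vec3 N = normalize(vNormal);
    vec3 T = normalize(vTangent - N * dot(N, vTangent));   // re-orthogonalize after interpolation
    vec3 B = cross(N, T);
    mat3 TBN = mat3(T, B, N);            // columns map tangent space -> world/view space

    vec3 mapN = texture2D(m_NormalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 normal = normalize(TBN * mapN); // use this in place of the vertex normal for lighting

    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);   // visualization only; plug into your lighting instead
}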
Does your normal map not contain the adjusted normals? If yes, then you just need to read the texture in the fragment shader and you should have your normal, like so:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normal = normalize(normalHeight.xyz);
If you're trying to account for negative values, then you should not be multiplying by the vector but rather by the scalar.
vec3 normal = normalize( (normalHeight.xyz * 2.0) - 1.0 );
