I'm trying to create a simple shader for my lighting system, and right now I'm working on adding support for normal mapping. Without normal mapping, the lighting works fine: I'm using the normals forwarded from the vertex shader, and they work perfectly. I'm also reading the normals from the normal map correctly. I've tried adding the vertex normal and the normal map's normal, and that doesn't work; I also tried multiplying them. Here's how I'm reading the normal map:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normals = normalize((normalHeight.xyz * vec3(2.0) - vec3(1.0)));
So I have the correct vertex normals, and the normals from the normal map. How should I combine these to get the correct normals?
It depends on how you store your normal maps. If they are in world space to begin with (this is rather rare) and your scene never changes, you can look them up the way you have them. Typically, however, they are in tangent space. Tangent space is a vector space that uses the object's normal and the rate of change in the (s,t) texture coordinates to properly transform normals on a surface of arbitrary orientation.
Tangent space normal maps usually appear bluish to the naked eye, whereas world space normal maps are every color of the rainbow (and need to be biased and scaled because half of the colorspace is supposed to represent negative vectors) :)
If you want to understand tangent space better, complete with an implementation for deriving the basis vectors, see this link.
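So rather than adding or multiplying the two normals, the usual approach is to build a TBN (tangent, bitangent, normal) basis and rotate the sampled normal with it. Below is a minimal fragment-shader sketch of the idea; it assumes the vertex shader also forwards a tangent and bitangent, and the wNormal/wTangent/wBitangent varying names are placeholders rather than anything from your code:
varying vec3 wNormal;     // interpolated vertex normal (world or view space)
varying vec3 wTangent;    // interpolated tangent (follows the texture's s axis)
varying vec3 wBitangent;  // interpolated bitangent (follows the texture's t axis)
varying vec2 texCoord;
uniform sampler2D m_NormalMap;

void main()
{
    // Sample the tangent-space normal and expand it from [0,1] to [-1,1].
    vec3 tNormal = texture2D(m_NormalMap, texCoord).xyz * 2.0 - 1.0;

    // Re-orthogonalize the interpolated basis and build the TBN matrix.
    vec3 N = normalize(wNormal);
    vec3 T = normalize(wTangent - N * dot(N, wTangent));
    vec3 B = normalize(wBitangent);
    mat3 TBN = mat3(T, B, N);

    // Rotate the sampled normal out of tangent space and use it for lighting.
    vec3 normal = normalize(TBN * tNormal);
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0); // visualize the result for debugging
}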
Does your normal map already contain the adjusted normals? If it does, then you just need to read the texture in the fragment shader and you should have your normal, like so:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normal = normalize(normalHeight.xyz);
If you're trying to account for negative values, then you should not be multiplying by the vector but rather by the scalar.
vec3 normal = normalize( (normalHeight.xyz * 2.0) - 1.0 );
I'm making a procedurally generated Minecraft-like voxel terrain in Unity. Mesh generation and albedo channel texturing are flawless; however, I need to apply different normal-map textures to different cube faces depending on whether they neighbor another cube or not. The Material accepts only a single normal map file and doesn't provide sprite-sheet-editor-style functionality for normal maps, so I have no idea how to use selected slices of the normal map file as if they were albedo textures. I couldn't find any related resources about the problem. Any help will be appreciated. Thanks...
First of all, I'm not an expert in this area, though I am going to try to help you based on my limited and incomplete understanding of parts of Unity.
If there is a finite number of possible "normal face maps", I suggest something like you indicated (a "sprite sheet"): create a single texture (also sometimes called a texture atlas) that contains all these normal maps.
The next step, which I'm not sure the Standard material shader will be able to handle for your situation, is to generate UV/texture coordinates for the normal map and pass those, along with your vertex xyz positions, to the shader. The UV coordinates need to be specified for each vertex of each face; they are a 2-D (U, V) offset into your atlas of normal maps, given as floating-point values in the range [0.0, 1.0] that map to the full X and Y extent of the actual normal texture. For instance, if you had an atlas with a grid of textures in 4 rows and 4 columns, a face that should use the top-left texture would have UV coords of [(0,0), (0.25,0), (0.25,0.25), (0, 0.25)].
The difficulty here may depend on whether you are already using UV coordinates for other texture mapping (e.g. for the Albedo or anything else). If so, I think the Unity Standard Shader permits two sets of texture coordinates, and if you need more, you might have to roll your own shader or find a Shader asset elsewhere that allows more UV sets. This is where my understanding gets shaky: I'm not exactly sure how the shader uses these two UV coordinate sets, or whether there is an existing convention for how they are used. The Standard shader supports secondary/detail maps, which may mean you have to share the UV0 set among all non-detail maps, so albedo, normal, height, occlusion, etc.
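If you do end up rolling your own shader and sampling the atlas there instead of baking the offsets into the mesh UVs, the lookup itself is just a per-tile scale and offset. Here is a rough GLSL-style sketch of that idea (Unity shaders are written in HLSL, so this is only illustrative), assuming a 4x4 atlas as in the example above; normalAtlas, tileIndex and faceUV are hypothetical names, not part of Unity's Standard shader:
uniform sampler2D normalAtlas;  // the combined normal-map atlas (hypothetical name)
const float tilesPerRow = 4.0;  // 4x4 grid, as in the example above

vec3 sampleAtlasNormal(float tileIndex, vec2 faceUV)
{
    // Convert the flat tile index into the (column, row) origin of that tile.
    vec2 tileOrigin = vec2(mod(tileIndex, tilesPerRow),
                           floor(tileIndex / tilesPerRow)) / tilesPerRow;

    // faceUV is the local [0,1] coordinate on the cube face; squeeze it into the tile.
    vec2 atlasUV = tileOrigin + faceUV / tilesPerRow;

    // Expand the stored value from [0,1] to [-1,1], as usual for tangent-space maps.
    return texture2D(normalAtlas, atlasUV).xyz * 2.0 - 1.0;
}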
Alright, so I know there are a lot of questions referring to normalized device coordinates here on SO, but none of them address my particular issue.
So, everything I draw is specified in 2D screen coordinates, where the top left is (0,0) and the bottom right is (screenWidth, screenHeight). Then, in my vertex shader, I do this calculation to get NDC (basically, I'm rendering UI elements):
float ndcX = (screenX - ScreenHalfWidth) / ScreenHalfWidth;
float ndcY = 1.0 - (screenY / ScreenHalfHeight);
where screenX/screenY are pixel coordinates, for example (600, 700), and ScreenHalfWidth/ScreenHalfHeight are half of the screen width/height.
And the final position that I return from the vertex shader for the rasterization state is:
gl_Position = vec4(ndcX, ndcY, Depth, 1.0);
This works perfectly fine in OpenGL ES.
Now the problem is that when I try it just like this in Metal 2, it doesn't work.
I know Metal's NDC volume is 2x2x1 and OpenGL's NDC volume is 2x2x2, but I thought depth didn't play an important part in this equation since I am passing it in myself per vertex.
I tried this link and this SO question, but I was confused and the links weren't that helpful, since I am trying to avoid matrix calculations in the vertex shader because I am rendering everything in 2D for now.
So my questions...What is the formula to transform pixel coordinates to NDC in Metal? Is it possible without using an orthographic projection matrix? Why doesn't my equation work for Metal?
It is of course possible without a projection matrix. Matrices are just a useful convenience for applying transformations. But it's important to understand how they work when situations like this arise, since using a general orthographic projection matrix would perform unnecessary operations to arrive at the same results.
Here are the formulae I might use to do this:
float xScale = 2.0f / drawableSize.x;
float yScale = -2.0f / drawableSize.y;
float xBias = -1.0f;
float yBias = 1.0f;
float clipX = position.x * xScale + xBias;
float clipY = position.y * yScale + yBias;
Where drawableSize is the dimension (in pixels) of the renderbuffer, which can be passed in a buffer to the vertex shader. You can also precompute the scale factors and pass those in instead of the screen dimensions, to save some computation on the GPU.
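One more thing worth checking, since the question passes Depth through unchanged: Metal's NDC depth range is [0, 1] rather than OpenGL's [-1, 1], so a depth value that was computed for the OpenGL convention needs the same kind of scale-and-bias. A one-line sketch (shader-style syntax, assuming Depth was meant for the OpenGL range):
float clipZ = Depth * 0.5 + 0.5;  // remap [-1, 1] depth to Metal's [0, 1]
// ...then output (clipX, clipY, clipZ, 1.0) as the position from the Metal vertex function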
I am trying to implement normals for my height map, but they don't seem to work.
Look at these:
Note that the pattern occurs along the edges. Why?
Vertices are shared (indexed), and each vertex normal is the average over all triangles that the vertex is part of.
The algorithm for the normals looks like this:
float size=Size;
int WGidY=int(gl_WorkGroupID.y);
int WGidX=int(gl_WorkGroupID.x);
vec4 tempVertices[3];
tempVertices[0]=imageLoad(HeightMap, ivec2(WGidX, WGidY));
tempVertices[1]=imageLoad(HeightMap, ivec2(WGidX, WGidY+1));
tempVertices[2]=imageLoad(HeightMap, ivec2(WGidX+1, WGidY));
vec4 LoadedNormal=imageLoad(NormalMap, ivec2(WGidX, WGidY));
vec4 Normal=vec4(0.0f);
Normal.xyz=cross((tempVertices[0].xyz-tempVertices[1].xyz), (tempVertices[0].xyz-tempVertices[2].xyz));
Normal.w=1;
imageStore(NormalMap, ivec2(WGidX,WGidY), Normal+LoadedNormal);
No need to do averaging like that. You can compute it directly in one step as follows:
vec3 v[4] = {
imageLoad(HeightMap, ivec2(WGidX-1, WGidY)).xyz,
imageLoad(HeightMap, ivec2(WGidX+1, WGidY)).xyz,
imageLoad(HeightMap, ivec2(WGidX, WGidY-1)).xyz,
imageLoad(HeightMap, ivec2(WGidX, WGidY+1)).xyz,
};
vec3 Normal = normalize(cross(v[1] - v[0], v[3] - v[2]));
imageStore(NormalMap, ivec2(WGidX,WGidY), vec4(Normal, 1));
Also you don't even need to store the HeightMap mesh explicitly. Instead you can send the same low-resolution quad to the GPU, tessellate it with a tessellation shader, apply the height map to the generated vertices by sampling from a one-channel texture, and compute the normals on-the-fly as above.
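For reference, here is a rough sketch of what the tessellation-evaluation stage of that approach could look like, assuming a single quad patch with its four control points ordered around the quad and a one-channel height texture; HeightTex, HeightScale and MVP are placeholder names:
#version 430
layout(quads, fractional_even_spacing, ccw) in;

uniform sampler2D HeightTex;   // one-channel height texture
uniform mat4 MVP;
uniform float HeightScale;

void main()
{
    // gl_TessCoord.xy parameterizes the generated vertex within the patch; use it as the UV.
    vec2 uv = gl_TessCoord.xy;

    // Bilinearly interpolate the four corners of the low-resolution quad.
    vec4 p0  = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, uv.x);
    vec4 p1  = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, uv.x);
    vec4 pos = mix(p0, p1, uv.y);

    // Displace the vertex along +Y by the sampled height.
    pos.y += texture(HeightTex, uv).r * HeightScale;

    gl_Position = MVP * pos;
}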
OK guys, I found the problem. This is a symptom of "greedy triangulation". The normals inside a triangle are interpolated by the barycentric algorithm, but the edges are interpolated linearly to prevent color differences between adjacent triangles. Thank you, again, Paul Bourke:
http://paulbourke.net/texture_colour/interpolation/
If you don't have enough triangles, don't use Phong shading (maybe use normal mapping instead?).
After tweaks:
http://prntscr.com/dadrue
http://prntscr.com/dadtum
http://prntscr.com/dadugf
I'm trying to calculate the modelview matrix of my 2D camera, but I can't get the formula right. I use the Affine3f transform class so that the matrix is compatible with OpenGL. This is the closest I got by trial and error. This code rotates and scales the camera OK, but if I apply translation and rotation at the same time the camera movement gets messed up: the camera moves in a rotated fashion, which is not what I want. (This is probably due to the fact that I first apply the rotation matrix and then the translation.)
Eigen::Affine3f modelview;
modelview.setIdentity();
modelview.translate(Eigen::Vector3f(camera_offset_x, camera_offset_y, 0.0f));
modelview.scale(Eigen::Vector3f(camera_zoom_x, camera_zoom_y, 0.0f));
modelview.rotate(Eigen::AngleAxisf(camera_angle, Eigen::Vector3f::UnitZ()));
modelview.translate(Eigen::Vector3f(camera_x, camera_y, 0.0f));
[loadmatrix_to_gl]
What I want is for the camera to rotate and scale around an offset position in screen space {(0,0) is the middle of the screen in this case} and then be positioned along the global xy-axes in world space {(0,0) is also initially at the middle of the screen} to the final position. How would I do this?
Note that I have also set up an orthographic projection matrix, which may affect this problem.
If you want a 2D image, rendered in the XY plane with OpenGL, to (1) rotate counter-clockwise by a around point P, (2) scale by S, and then (3) translate so that pixels at C (in the newly scaled and rotated image) are at the origin, you would use this transformation:
translate by -P (this moves the pixels at P to the origin)
rotate by a
translate by P (this moves the origin back to where it was)
scale by S (if you did this earlier, your rotation would be messed up)
translate by -C
If the 2D image were being rendered at the origin, you'd also need to end by translating some amount along the negative z-axis to be able to see it.
Normally, you'd just do this with the OpenGL basics (glTranslatef, glScalef, glRotatef, etc.), and you would do them in the reverse of the order I've listed them. Since you want to use glLoadMatrix, you'd do things in the order I described with Eigen. It's important to remember that OpenGL expects a column-major matrix (but that seems to be the default for Eigen, so that's probably not a problem).
JCooper did great explaining the steps to construct the initial matrix.
However, I eventually solved the problem a bit differently. There were a few additional things and steps that were not obvious to me at the time; see the comments on JCooper's answer. The first is to realize that all matrix operations are relative.
Thus, if you want to position or move the camera along absolute xy-axes, you must first decompose the matrix to extract its absolute position with unchanged axes. Then you translate the matrix by the difference between the old and new positions.
Here is way to do this with Eigen:
First, compute the scalar determinant D of the Affine2f matrix cmat. With Eigen this is done with D = cmat.linear().determinant();. Next, compute the 'reverse' matrix matrev of the current rotation+scale matrix RS using D: matrev = (RS.array() / (1.0f / D)).matrix(); where RS is cmat.matrix().topLeftCorner(2,2).
The absolute camera position P is then given by P = matrev * -C, where C is cmat.matrix().col(2).head<2>().
Now we can reposition the camera anywhere along the absolute axes while keeping the rotation+scaling the same: V = RS * (T - P), where RS is the same as before, T is the new position vector, and P is the decomposed position vector.
cmat is then simply translated by V to move the camera: cmat.pretranslate(V).
I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates, and that doesn't seem to be working correctly. I am using all the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is the depth coordinate of a vertex, c is the texture coordinate of a vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
In this image you can see the error in the texture coordinates, evident along the diagonal; this is the result after doing the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
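In code form, the interpolation described above is roughly this (GLSL-style syntax used purely for illustration; in a software renderer it would be ordinary C-like code, and the names are placeholders):
// Perspective-correct interpolation of one attribute at a pixel.
// b holds the barycentric weights, w the clip-space w (depth) of the three
// vertices, and c the attribute value (e.g. a texture coordinate) at each vertex.
float interpolatePerspective(vec3 b, vec3 w, vec3 c)
{
    float oneOverW = b.x / w.x + b.y / w.y + b.z / w.z;                   // 1/w at the pixel
    float cOverW   = b.x * c.x / w.x + b.y * c.y / w.y + b.z * c.z / w.z; // c/w at the pixel
    return cOverW / oneOverW;                                             // recovered attribute c
}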
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, 0.0f-1.0f) by the Z component (the world-space depth of the XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
ct
The only correct transformation from UV coordinates to a 3D plane is an homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come in the form of an homographic transform.
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render. This really looks like you went to the trouble of calculating the perspective-correct value, and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST)
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (like for example the PS1) the standard solution is just subdividing in smaller polygons to make the defect less noticeable.