Normals of height map don't work

I am trying to implement normals for my height map, but they don't seem to work.
Look at these:
Note that the pattern occurs along the edges. Why?
Vertices are shared (indexed), and each vertex normal is the average of the face normals of all triangles that the vertex is part of.
The algorithm for the normals looks like this:
float size = Size;
int WGidY = int(gl_WorkGroupID.y);
int WGidX = int(gl_WorkGroupID.x);

// Load this vertex and its +Y and +X neighbours from the height map.
vec4 tempVertices[3];
tempVertices[0] = imageLoad(HeightMap, ivec2(WGidX, WGidY));
tempVertices[1] = imageLoad(HeightMap, ivec2(WGidX, WGidY + 1));
tempVertices[2] = imageLoad(HeightMap, ivec2(WGidX + 1, WGidY));

// Accumulate this triangle's face normal into the normal map texel.
vec4 LoadedNormal = imageLoad(NormalMap, ivec2(WGidX, WGidY));
vec4 Normal = vec4(0.0f);
Normal.xyz = cross(tempVertices[0].xyz - tempVertices[1].xyz,
                   tempVertices[0].xyz - tempVertices[2].xyz);
Normal.w = 1;
imageStore(NormalMap, ivec2(WGidX, WGidY), Normal + LoadedNormal);
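To finish the averaging, the accumulated sum still has to be normalized before it is used for lighting; a minimal sketch of that step (assuming a second pass over the same NormalMap image) is:
vec4 N = imageLoad(NormalMap, ivec2(WGidX, WGidY));
// Turn the summed face normals into an averaged unit normal.
imageStore(NormalMap, ivec2(WGidX, WGidY), vec4(normalize(N.xyz), 1.0));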

No need to do averaging like that. You can compute it directly in one step as follows:
// Central differences: sample the four axis neighbours of this texel.
vec3 v[4] = {
    imageLoad(HeightMap, ivec2(WGidX - 1, WGidY)).xyz,
    imageLoad(HeightMap, ivec2(WGidX + 1, WGidY)).xyz,
    imageLoad(HeightMap, ivec2(WGidX, WGidY - 1)).xyz,
    imageLoad(HeightMap, ivec2(WGidX, WGidY + 1)).xyz,
};
// The cross product of the two tangent directions is the vertex normal.
vec3 Normal = normalize(cross(v[1] - v[0], v[3] - v[2]));
imageStore(NormalMap, ivec2(WGidX, WGidY), vec4(Normal, 1));
Also you don't even need to store the HeightMap mesh explicitly. Instead you can send the same low-resolution quad to the GPU, tessellate it with a tessellation shader, apply the height map to the generated vertices by sampling from a one-channel texture, and compute the normals on-the-fly as above.
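For illustration, a minimal tessellation evaluation shader along those lines might look like the sketch below. Names such as HeightTex, HeightScale and TexelSize, and the assumption that the patch lies in the XZ plane with its corners ordered counter-clockwise, are mine rather than anything from the original post:
#version 430 core
layout(quads, fractional_even_spacing, ccw) in;

uniform sampler2D HeightTex;  // one-channel height texture (assumed name)
uniform mat4 MVP;             // combined model-view-projection matrix (assumed)
uniform float HeightScale;    // vertical scale of the terrain (assumed)
uniform vec2 TexelSize;       // 1.0 / height texture resolution (assumed)

out vec3 vNormal;

float height(vec2 uv) { return texture(HeightTex, uv).r * HeightScale; }

void main()
{
    // Bilinearly interpolate the patch corners to get this vertex's UV
    // (patch assumed to lie in the XZ plane).
    vec2 uv = mix(mix(gl_in[0].gl_Position.xz, gl_in[1].gl_Position.xz, gl_TessCoord.x),
                  mix(gl_in[3].gl_Position.xz, gl_in[2].gl_Position.xz, gl_TessCoord.x),
                  gl_TessCoord.y);

    // Displace the generated vertex by the sampled height.
    vec3 pos = vec3(uv.x, height(uv), uv.y);

    // Central differences on the height texture give the normal on the fly,
    // as in the compute-shader version above. The 2.0 * TexelSize.x term
    // assumes the horizontal grid spacing matches the UV spacing.
    float hl = height(uv - vec2(TexelSize.x, 0.0));
    float hr = height(uv + vec2(TexelSize.x, 0.0));
    float hd = height(uv - vec2(0.0, TexelSize.y));
    float hu = height(uv + vec2(0.0, TexelSize.y));
    vNormal = normalize(vec3(hl - hr, 2.0 * TexelSize.x, hd - hu));

    gl_Position = MVP * vec4(pos, 1.0);
}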

OK guys, I found the problem. This is a symptom of "greedy triangulation". The normals inside a triangle are interpolated with barycentric coordinates, but along the edges they are interpolated linearly to prevent color differences between adjacent triangles. Thank you, again, Paul Bourke:
http://paulbourke.net/texture_colour/interpolation/
If you don't have enough triangles, don't use Phong shading (maybe normal mapping?).
After tweaks:
http://prntscr.com/dadrue
http://prntscr.com/dadtum
http://prntscr.com/dadugf

Related

Normalized Device Coordinate Metal coming from OpenGL

Alright, so I know there are a lot of questions referring to normalized device coordinates here on SO, but none of them address my particular issue.
Everything I draw is specified in 2D screen coordinates, where the top-left is (0,0) and the bottom-right is (screenWidth, screenHeight); then in my vertex shader I do this calculation to get to NDC (basically, I'm rendering UI elements):
float ndcX = (screenX - ScreenHalfWidth) / ScreenHalfWidth;
float ndcY = 1.0 - (screenY / ScreenHalfHeight);
where screenX/screenY are pixel coordinates, for example (600, 700), and ScreenHalfWidth/ScreenHalfHeight are half of the screen width/height.
And the final position that I return from the vertex shader for the rasterization state is:
gl_Position = vec4(ndcX, ndcY, Depth, 1.0);
This works perfectly fine in OpenGL ES.
Now the problem is that when I try it just like this in Metal 2, it doesn't work.
I know Metal's NDC box is 2x2x1 and OpenGL's is 2x2x2, but I thought depth didn't play an important part in this equation since I am passing it in myself per vertex.
I tried this link and this SO question, but I was confused and the links weren't that helpful, since I am trying to avoid matrix calculations in the vertex shader because I am rendering everything in 2D for now.
So my questions: What is the formula to transform pixel coordinates to NDC in Metal? Is it possible without using an orthographic projection matrix? Why doesn't my equation work for Metal?
It is of course possible without a projection matrix. Matrices are just a useful convenience for applying transformations. But it's important to understand how they work when situations like this arise, since using a general orthographic projection matrix would perform unnecessary operations to arrive at the same results.
Here are the formulae I might use to do this:
float xScale = 2.0f / drawableSize.x;
float yScale = -2.0f / drawableSize.y;
float xBias = -1.0f;
float yBias = 1.0f;
float clipX = position.x * xScale + xBias;
float clipY = position.y * yScale + yBias;
Where drawableSize is the dimension (in pixels) of the renderbuffer, which can be passed in a buffer to the vertex shader. You can also precompute the scale factors and pass those in instead of the screen dimensions, to save some computation on the GPU.
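If it helps to see it in shader form, here is a sketch of the same scale-and-bias written as a GLSL ES vertex shader with the factors precomputed on the CPU; uScaleBias, aScreenPos and aDepth are names I made up, and the Metal version would be the same math expressed in MSL:
attribute vec2 aScreenPos;   // pixel coordinates, (0,0) at the top-left
attribute float aDepth;      // per-vertex depth, as in the question
uniform vec4 uScaleBias;     // ( 2/width, -2/height, -1, 1 ), computed once per resize

void main()
{
    vec2 clip = aScreenPos * uScaleBias.xy + uScaleBias.zw;
    gl_Position = vec4(clip, aDepth, 1.0);
}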

What is the use of the getNormals() method in TriangleMesh (JavaFX)?

I am currently working on a JavaFX 3D application and came across the getNormals() method of the TriangleMesh class.
The TriangleMesh class is used to create a user-defined JavaFX 3D object; in it, getPoints() is used to add points, getFaces() is used to add faces, and getTexCoords() is used to manage the texture of the 3D object, but I am not sure what the use of the getNormals() method is.
In the TriangleMesh class we can set the vertex format to VertexFormat.POINT_TEXCOORD or VertexFormat.POINT_NORMAL_TEXCOORD. But if we set the vertex format to VertexFormat.POINT_NORMAL_TEXCOORD, then we need to add the indices of the normals into the faces, like below: [
p0, n0, t0, p1, n1, t1, p3, n3, t3, // First triangle of a textured rectangle
p1, n1, t1, p2, n2, t2, p3, n3, t3 // Second triangle of a textured rectangle
]
as described in https://docs.oracle.com/javase/8/javafx/api/javafx/scene/shape/TriangleMesh.html
I didn't find any difference in the 3D shape whether I used POINT_TEXCOORD or POINT_NORMAL_TEXCOORD as the vertex format.
So what is the use of getNormals() method in TriangleMesh JavaFX?
Thanks in advance.
Use of normals in computer graphics:
The normal is often used in computer graphics to determine a surface's orientation toward a light source for flat shading, or the orientation of each of the corners (vertices) to mimic a curved surface with Phong shading.
The normals affect the shading applied to a face.
The standard shading mechanism for JavaFX 8 is Phong Shading and a Phong Reflection Model. By default, Phong Shading assumes a smoothly varying (linearly interpolated) surface normal vector. This allows you to have a sphere rendered by shading with limited vertex geometry supplied. By default the normal vectors will be calculated as being perpendicular to the faces.
What JavaFX allows is for you to supply your own normals rather than rely on the default calculated ones. The Phong shading algorithm implementation in JavaFX will then interpolate between the normals that you supply rather than the normals it calculates. Changing the direction of surface normals will change the shading model by altering how the model represents light bouncing off of it, essentially the light will bounce in a different direction with a modified normal.
This example from Wikipedia shows a Phong-shaded sphere on the right. Both spheres actually have the same geometry. The distribution of the normals that contribute to the Phong shading equation is the default, smoothly interpolated one, based upon a standard normal calculation for each face (so no user normals supplied). The equation used for calculating the shading is described in the PhongMaterial javadoc, and you can see there the normal's contribution to the shading algorithm, in terms of both the diffuse color calculation and the specular highlights.
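To make the normal's role concrete, here is an illustrative GLSL-style sketch of those two terms (this is not JavaFX's native implementation, just the standard Phong form it is based on):
// Both terms depend directly on the interpolated normal N, which is why
// supplying different normals changes the shading even for identical geometry.
vec3 phong(vec3 N, vec3 L, vec3 V,
           vec3 diffuseColor, vec3 specularColor, float specularPower)
{
    N = normalize(N);                        // interpolated vertex normal
    vec3 R = reflect(-L, N);                 // light direction reflected about the normal
    float diffuse  = max(dot(N, L), 0.0);
    float specular = pow(max(dot(R, V), 0.0), specularPower);
    return diffuseColor * diffuse + specularColor * specular;
}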
Standard 3D models, such as obj files can optionally allow for providing normals:
vn i j k
Polygonal and free-form geometry statement.
Specifies a normal vector with components i, j, and k.
Vertex normals affect the smooth-shading and rendering of geometry.
For polygons, vertex normals are used in place of the actual facet
normals. For surfaces, vertex normals are interpolated over the
entire surface and replace the actual analytic surface normal.
When vertex normals are present, they supersede smoothing groups.
i j k are the i, j, and k coordinates for the vertex normal. They
are floating point numbers
So, why would you want it?
The easiest way to explain might be to look at something known as smoothing groups (please click on the link, I won't embed here due to copyright). As can be seen by the linked image, when the smoothing group is applied to a collection of faces it is possible to get a sharp delineation (e.g. a crease or a corner) between the grouped faces. Specifying normals allows you to accomplish a similar thing to a smoothing group, just with more control because you can specify individual normals for each vertex rather than an overall group of related faces. Note JavaFX allows you to specify smoothing groups via getFaceSmoothingGroups() for instances where you don't want to go to the trouble of defining full normal geometry via getNormals().
Another, similar idea is a normal map (or bump map). Such a map stores normal information in an image rather than as vector data (as the getNormals() method does), so it is a slightly different thing, but you can see a similar interaction with the reflection model algorithm:
Background Reading - How to understand Phong Materials (and other things)

GLSL - Vertex normals + normal mapping?

I'm trying to create a simple shader for my lighting system, and right now I'm working on adding support for normal mapping. Without the normal map, the lighting works perfectly: I'm using the normals forwarded from the vertex shader and they work fine, and I'm also reading the normals from the normal map correctly. I've tried adding the vertex normal and the normal map's normal together, and that doesn't work; I also tried multiplying them. Here's how I'm reading the normal map:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normals = normalize((normalHeight.xyz * vec3(2.0) - vec3(1.0)));
So I have the correct vertex normals, and the normals from the normal map. How should I combine these to get the correct normals?
It depends on how you store your normal maps. If they are in world space to begin with (this is rather rare) and your scene never changes, you can look them up the way you have them. Typically, however, they are in tangent space. Tangent space is a vector space that uses the object's normal, and the rate of change in the (s,t) texture coordinates to properly transform the normals on a surface with arbitrary orientation.
Tangent space normal maps usually appear bluish to the naked eye, whereas world space normal maps are every color of the rainbow (and need to be biased and scaled because half of the colorspace is supposed to represent negative vectors) :)
If you want to understand tangent space better, complete with implementation on deriving the basis vectors, see this link.
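For the common tangent-space case, a minimal fragment-shader sketch looks like the following; it assumes the vertex shader forwards an interpolated normal and tangent, and those varyings are my assumption rather than something in your posted code:
uniform sampler2D m_NormalMap;
varying vec3 vNormal;    // interpolated vertex normal (assumed varying)
varying vec3 vTangent;   // interpolated vertex tangent (assumed varying)
varying vec2 texCoord;

void main()
{
    // Rebuild an orthonormal tangent-space basis per fragment.
    vec3 N = normalize(vNormal);
    vec3 T = normalize(vTangent - N * dot(vTangent, N));
    vec3 B = cross(N, T);

    // Unpack the tangent-space normal from [0,1] to [-1,1].
    vec3 nTex = texture2D(m_NormalMap, texCoord).xyz * 2.0 - 1.0;

    // Transform it into the space of the vertex normal (the TBN matrix),
    // and use the result in place of the vertex normal in the lighting code.
    vec3 normal = normalize(nTex.x * T + nTex.y * B + nTex.z * N);

    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);  // visualized here; feed it to your lighting instead
}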
Does your normal map already contain the adjusted (final) normals? If so, then you just need to read the texture in the fragment shader and you have your normal, like so:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normal = normalize(normalHeight.xyz);
If you're trying to account for negative values, then you should not be multiplying by a vector but rather by a scalar:
vec3 normal = normalize( (normalHeight.xyz * 2.0) - 1.0 );

How do I calculate pixel shader depth to render a circle drawn on a point sprite as a sphere that will intersect with other objects?

I am writing a shader to render spheres on point sprites, by drawing shaded circles, and need to write a depth component as well as colour in order that spheres near each other will intersect correctly.
I am using code similar to that written by Johna Holwerda:
void PS_ShowDepth(VS_OUTPUT input, out float4 color : COLOR0, out float depth : DEPTH)
{
    float dist = length(input.uv - float2(0.5f, 0.5f)); // get the distance from the center of the point sprite
    float alpha = saturate(sign(0.5f - dist));
    float sphereDepth = cos(dist * 3.14159) * sphereThickness * particleSize; // calculate how thick the sphere should be; sphereThickness is a variable.
    depth = saturate(sphereDepth + input.color.w); // input.color.w represents the depth value of the pixel on the point sprite
    color = float4(depth.xxx, alpha); // or anything else you might need in future passes
}
The video at that link gives a good idea of the effect I'm after: those spheres drawn on point sprites intersect correctly. I've added images below to illustrate too.
I can calculate the depth of the point sprite itself fine. However, I am not sure how to calculate the thickness of the sphere at a pixel in order to add it to the sprite's depth, to give a final depth value. (The above code uses a variable rather than calculating it.)
I've been working on this on and off for several weeks but haven't figured it out - I'm sure it's simple, but it's something my brain hasn't twigged.
Direct3D 9's point sprite sizes are calculated in pixels, and my sprites have several sizes - both by falloff due to distance (I implemented the same algorithm the old fixed-function pipeline used for point size computations in my vertex shader) and also due to what the sprite represents.
How do I go from the data I have in a pixel shader (sprite location, sprite depth, original world-space radius, radius in pixels onscreen, normalised distance of the pixel in question from the centre of the sprite) to a depth value? A partial solution that simply maps sprite size to sphere thickness in depth coordinates would be fine; that can be scaled by the normalised distance from the centre to get the thickness of the sphere at a pixel.
I am using Direct3D 9 and HLSL with shader model 3 as the upper SM limit.
In pictures
To demonstrate the technique, and the point at which I'm having trouble:
Start with two point sprites, and in the pixel shader draw a circle on each, using clip to remove fragments outside the circle's boundary:
One will render above the other, since after all they are flat surfaces.
Now, make the shader more advanced, and draw the circle as though it was a sphere, with lighting. Note that even though the flat sprites look 3D, they still draw with one fully in front of the other since it's an illusion: they are still flat.
(The above is easy; it's the final step I am having trouble with and am asking how to achieve.)
Now, instead of the pixel shader writing only colour values, it should write the depth as well:
void SpherePS (...any parameters...
out float4 oBackBuffer : COLOR0,
out float oDepth : DEPTH0 // <- now also writing depth
)
{
Note that now the spheres intersect when the distance between them is smaller than the sum of their radii:
How do I calculate the correct depth value in order to achieve this final step?
Edit / Notes
Several people have commented that a real sphere will distort due to perspective, which may be especially visible at the edges of the screen, and so I should use a different technique. First, thanks for pointing that out, it's not necessarily obvious and is good for future readers! Second, my aim is not to render a perspective-correct sphere, but to render millions of data points fast, and visually I think a sphere-like object looks nicer than a flat sprite, and shows the spatial position better too. Slight distortion or lack of distortion does not matter. If you watch the demo video, you can see how it is a useful visual tool. I don't want to render actual sphere meshes because of the large number of triangles compared to a simple hardware-generated point sprite. I really do want to use the technique of point sprites, and I simply want to extend the extant demo technique in order to calculate the correct depth value, which in the demo was passed in as a variable with no source for how it was derived.
I came up with a solution yesterday which works well and produces the desired result of a sphere drawn on the sprite, with a correct depth value that intersects with other objects and spheres in the scene. It may be less efficient than it needs to be (it calculates and projects two vertices per sprite, for example) and is probably not fully correct mathematically (it takes shortcuts), but it produces visually good results.
The technique
In order to write out the depth of the 'sphere', you need to calculate the radius of the sphere in depth coordinates - i.e., how thick half the sphere is. This amount can then be scaled as you write out each pixel on the sphere by how far from the centre of the sphere you are.
To calculate the radius in depth coordinates:
Vertex shader: in unprojected scene coordinates cast a ray from the eye through the sphere centre (that is, the vertex that represents the point sprite) and add the radius of the sphere. This gives you a point lying on the surface of the sphere. Project both the sprite vertex and your new sphere surface vertex, and calculate depth (z/w) for each. The difference is the depth value you need.
Pixel shader: to draw a circle you already calculate a normalised distance from the centre of the sprite, using clip to not draw pixels outside the circle. Since it's normalised (0-1), multiply this by the sphere depth (which is the depth value of the radius, i.e. the pixel at the centre of the sphere) and add it to the depth of the flat sprite itself. This gives a depth that is thickest at the sphere centre and falls to 0 at the edge, following the surface of the sphere. (Depending on how accurate you need it, use a cosine to get a curved thickness. I found linear gave perfectly fine-looking results.)
Code
This is not full code since my effects are for my company, but the code here is rewritten from my actual effect file omitting unnecessary / proprietary stuff, and should be complete enough to demonstrate the technique.
Vertex shader
void SphereVS(float4 vPos, // Input vertex
    float fPointRadius, // Radius of circle / sphere in world coords
    out float fDXScale, // Result of DirectX algorithm to scale the sprite size
    out float fDepth, // Flat sprite depth
    out float4 oPos : POSITION0, // Projected sprite position
    out float fDiameter : PSIZE, // Sprite size in pixels (DX point sprites are sized in px)
    out float fSphereRadiusDepth : TEXCOORDn) // Radius of the sphere in depth coords
{
...
// Normal projection
oPos = mul(vPos, g_mWorldViewProj);
// DX depth (of the flat billboarded point sprite)
fDepth = oPos.z / oPos.w;
// Also scale the sprite size - DX specifies a point sprite's size in pixels.
// One (old) algorithm is in http://msdn.microsoft.com/en-us/library/windows/desktop/bb147281(v=vs.85).aspx
fDXScale = ...;
fDiameter = fDXScale * fPointRadius;
// Finally, the key: what's the depth coord to use for the thickness of the sphere?
fSphereRadiusDepth = CalculateSphereDepth(vPos, fPointRadius, fDepth, fDXScale);
...
}
All standard stuff, but I include it to show how it's used.
The key method and the answer to the question is:
float CalculateSphereDepth(float4 vPos, float fPointRadius, float fSphereCenterDepth, float fDXScale) {
// Calculate sphere depth. Do this by calculating a point on the
// far side of the sphere, ie cast a ray from the eye, through the
// point sprite vertex (the sphere center) and extend it by the radius
// of the sphere
// The difference in depths between the sphere center and the sphere
// edge is then used to write out sphere 'depth' on the sprite.
float4 vRayDir = vPos - g_vecEyePos;
float fLength = length(vRayDir);
vRayDir = normalize(vRayDir);
fLength = fLength + fPointRadius; // Distance from eye through sphere center to edge of sphere
float4 oSphereEdgePos = g_vecEyePos + (fLength * vRayDir); // Point on the edge of the sphere
oSphereEdgePos.w = 1.0;
oSphereEdgePos = mul(oSphereEdgePos, g_mWorldViewProj); // Project it
// DX depth calculation of the projected sphere-edge point
const float fSphereEdgeDepth = oSphereEdgePos.z / oSphereEdgePos.w;
float fSphereRadiusDepth = fSphereCenterDepth - fSphereEdgeDepth; // Difference between center and edge of sphere
fSphereRadiusDepth *= fDXScale; // Account for sphere scaling
return fSphereRadiusDepth;
}
Pixel shader
void SpherePS(
...
float fSpriteDepth : TEXCOORD0,
float fSphereRadiusDepth : TEXCOORD1,
out float4 oFragment : COLOR0,
out float fSphereDepth : DEPTH0
)
{
float fCircleDist = ...; // See example code in the question
// 0-1 value from the center of the sprite, use clip to form the sprite into a circle
clip(fCircleDist);
fSphereDepth = fSpriteDepth + (fCircleDist * fSphereRadiusDepth);
// And calculate a pixel color
oFragment = ...; // Add lighting etc here
}
This code omits lighting etc. To calculate how far the pixel is from the centre of the sprite (to get fCircleDist) see the example code in the question (calculates 'float dist = ...') which already drew a circle.
The end result is...
Result
Voila, point sprites drawing spheres.
Notes
The scaling algorithm for the sprites may require the depth to be scaled, too. I am not sure that line is correct.
It is not fully mathematically correct (it takes shortcuts), but as you can see the result is visually correct.
When using millions of sprites, I still get a good rendering speed (<10ms per frame for 3 million sprites, on a VMWare Fusion emulated Direct3D device)
The first big mistake is that a real 3D sphere will not project to a circle under perspective 3D projection.
This is very non-intuitive, but look at some pictures, especially with a large field of view and off-center spheres.
Second, I would recommend against using point sprites in the beginning; it might make things harder than necessary, especially considering the first point. Just draw a generous bounding quad around your sphere and go from there.
In your shader you should have the screen space position as an input. From that, the view transform, and your projection matrix you can get to a line in eye space. You need to intersect this line with the sphere in eye space (raytracing), get the eye space intersection point, and transform that back to screen space. Then output 1/w as depth. I am not doing the math for you here because I am a bit drunk and lazy and I don't think that's what you really want to do anyway. It's a great exercise in linear algebra though, so maybe you should try it. :)
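For what it's worth, here is a rough sketch of that eye-space ray/sphere intersection written in GLSL; all names are mine, and it writes the conventional z/w window depth (assuming the default [0,1] depth range) rather than 1/w:
#version 330 core
in vec3 vSphereCenterEye;  // sphere centre in eye space, from the vertex shader (assumed varying)
in vec3 vRayDirEye;        // eye-space position of this fragment on the quad; doubles as the ray direction
uniform float uRadius;     // sphere radius (assumed uniform)
uniform mat4 uProj;        // projection matrix (assumed uniform)
out vec4 fragColor;

void main()
{
    // Ray from the eye (the origin in eye space) through the fragment.
    vec3 d = normalize(vRayDirEye);
    float b = dot(d, vSphereCenterEye);
    float c = dot(vSphereCenterEye, vSphereCenterEye) - uRadius * uRadius;
    float disc = b * b - c;
    if (disc < 0.0) discard;                 // the ray misses the sphere

    float t = b - sqrt(disc);                // nearest intersection along the ray
    vec3 pEye = t * d;                       // eye-space point on the sphere surface

    vec4 pClip = uProj * vec4(pEye, 1.0);
    gl_FragDepth = pClip.z / pClip.w * 0.5 + 0.5;   // NDC z remapped to window depth

    fragColor = vec4(1.0);                   // lighting / colour omitted
}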
The effect you are probably trying to do is called Depth Sprites and is usually used only with an orthographic projection, with the depth of a sprite stored in a texture. Just store the depth along with your color, for example in the alpha channel, and output
eye.z+(storeddepth-.5)*depthofsprite.
A sphere will not project to a circle in the general case. Here is the solution.
This technique is called spherical billboards. An in-depth description can be found in this paper:
Spherical Billboards and their Application to Rendering Explosions
You draw point sprites as quads and then sample a depth texture in order to find the distance between per-pixel Z-value and your current Z-coordinate. The distance between the sampled Z-value and current Z affects the opacity of the pixel to make it look like a sphere while intersecting underlying geometry. Authors of the paper suggest the following code to compute opacity:
float Opacity(float3 P, float3 Q, float r, float2 scr)
{
float alpha = 0;
float d = length(P.xy - Q.xy);
if(d < r) {
float w = sqrt(r*r - d*d);
float F = P.z - w;
float B = P.z + w;
float Zs = tex2D(Depth, scr);
float ds = min(Zs, B) - max(f, F);
alpha = 1 - exp(-tau * (1-d/r) * ds);
}
return alpha;
}
This will prevent sharp intersections of your billboards with the scene geometry.
If the point-sprite pipeline is difficult to control (I can only speak for OpenGL, not DirectX), it is better to use GPU-accelerated billboarding: you supply 4 identical 3D vertices that match the center of the particle, then move them into the appropriate billboard corners in a vertex shader, i.e.:
if ( idx == 0 ) ParticlePos += (-X - Y);
if ( idx == 1 ) ParticlePos += (+X - Y);
if ( idx == 2 ) ParticlePos += (+X + Y);
if ( idx == 3 ) ParticlePos += (-X + Y);
This is more oriented to the modern GPU pipeline and of course will work with any non-degenerate perspective projection.
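A sketch of that expansion as a complete GLSL vertex shader follows; the uniform and attribute names are my own, and gl_VertexID is used here instead of an explicit idx attribute, assuming four sequential (non-indexed) vertices are issued per particle:
#version 330 core
layout(location = 0) in vec3 aCenter;   // particle centre, duplicated for all 4 vertices

uniform mat4 uView;
uniform mat4 uProj;
uniform float uRadius;                  // half-size of the billboard

void main()
{
    // Camera right (X) and up (Y) axes in world space, taken from the view matrix rows.
    vec3 X = vec3(uView[0][0], uView[1][0], uView[2][0]) * uRadius;
    vec3 Y = vec3(uView[0][1], uView[1][1], uView[2][1]) * uRadius;

    int idx = gl_VertexID & 3;          // which corner this vertex becomes
    vec3 pos = aCenter;
    if (idx == 0) pos += (-X - Y);
    if (idx == 1) pos += ( X - Y);
    if (idx == 2) pos += ( X + Y);
    if (idx == 3) pos += (-X + Y);

    gl_Position = uProj * uView * vec4(pos, 1.0);
}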

texture projection + perspective correction, getting the math right

I render animated geometry. In each frame, I want to texture-map the geometry with a screen-space texture from the previous frame (projected onto the geometry as it was in the previous frame), so the result should be as if the screen-space texture had been projected onto the geometry one frame ago and then carried along by the geometry animation to the current frame.
Calculating the proper texture coordinates per vertex is not difficult. In GLSL that's simply:
void main(void)
{
vPos = currentMVP * vec4(position,1);
gl_Position = vPos;
vec4 oldPos = previousMVP * vec4(position,1);
vec2 UV = vec2(((oldPos.x/oldPos.w)+1)*0.5f, ((oldPos.y/oldPos.w)+1)*0.5f);
...
}
But getting the texture coordinates to interpolate correctly over the geometry is trickier than I thought. Normally, texture coordinates for projection should be interpolated linearly in screen space, so to achieve this one would multiply them by vPos.w in the vertex shader and divide them again by vPos.w in the fragment shader. However, that's only correct if the texture is projected from the camera's view. In this case I need something else. I need an interpolation that accounts for forward perspective-correct interpolation in the previous frame and backward perspective-correct interpolation in the current frame.
This graphic illustrates three different cases:
- Case A is simple: here I could leave the normal perspective-corrected interpolation of the texture coordinates (as performed by default by the rasterizer).
- In case B, however, I would need linear interpolation of the texture coordinates to get the proper result (either by multiplying with vPos.w in the vertex shader and dividing by vPos.w in the fragment shader, or in newer GLSL versions by using the "noperspective" interpolation qualifier).
- In case C I would need perspective-corrected interpolation, but according to the oldPos.w value. So I would have to linearize the interpolation of u'=(u/oldPos.w) and v'=(v/oldPos.w) by multiplying u' with currentPos.w in the vertex shader and dividing the interpolated value by currentPos.w in the fragment shader. I would also need to linearly interpolate w'=(1/oldPos.w) in the same way and then calculate the final u'' in the fragment shader by dividing the interpolated u' by the interpolated w' (and likewise for v'').
So the question now is: what's the proper math to yield the correct result in each case?
Again, calculating the correct UVs for the vertices is not the problem; it's about achieving the correct interpolation over the triangles.
// maybe relevant: in the same pass I also want to do some regular texturing of the object using non-projective, perspective-corrected texturing. This means I must not alter the gl_Position.w value.
vec2 UV = vec2(((oldPos.x/oldPos.w)+1)*0.5f, ((oldPos.y/oldPos.w)+1)*0.5f);
Wrong. You need the W; you don't want to divide yet. What you want is this:
vec4 oldPos = previousMVP * vec4(position,1);
oldPos = clipToTexture * oldPos;
vec3 UV = oldPos.xyw;
The clipToTexture matrix is a 4x4 matrix that does the scale and translation needed to go from clip space to texture space. That's what your scale of 0.5 and adding 1.0 were doing. Here, it's in matrix form; normally, you'd just left-multiply "previousMVP" with this, so it would all be a single matrix multiply.
In your fragment shader, you need to do projective texture lookups. I don't remember the GLSL 1.20 function, but I know the 1.30+ function:
vec4 color = textureProj(samplerName, UV.stp);
It is this function which will do the necessary division-by-W step.
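Putting it together, a minimal sketch might look like this; the matrix literal encodes the 0.5 scale and bias described above, and the varying name vProjUV is my own:
#version 330
// clipToTexture maps clip space to [0,1] texture space without dividing by w:
// x' = 0.5*x + 0.5*w, y' = 0.5*y + 0.5*w (constructor arguments are column-major).
const mat4 clipToTexture = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.5, 0.5, 0.0, 1.0);

uniform mat4 currentMVP;
uniform mat4 previousMVP;
in vec3 position;
out vec3 vProjUV;     // projective texture coordinate for the previous frame

void main()
{
    gl_Position = currentMVP * vec4(position, 1.0);
    vec4 oldPos = clipToTexture * (previousMVP * vec4(position, 1.0));
    vProjUV = oldPos.xyw;   // keep w; textureProj performs the divide per fragment
}

// Fragment shader: the projective lookup divides s and t by the last component.
//     vec4 color = textureProj(previousFrameTexture, vProjUV);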
