Normal Map Implementation... am I missing something here?

So, I have a light direction in world space, and I calculate my normals per vertex. However, I'm a little confused about my normal map implementation. Right now I'm doing this:
// Normal map: unpack from [0,1] to [-1,1]
const float3 normalmap = (2.0f * gTextures1024.Sample(gLinearSam, float3(_in.Tex, gNormalMapIndex)).rgb) - 1.0f;
const float3 NormalW = _in.Norm;
// Gram-Schmidt: re-orthogonalize the tangent against the normal
const float3 TangentW = normalize(_in.TangentW.xyz - dot(_in.TangentW.xyz, _in.Norm) * _in.Norm);
const float3 BitangentW = cross(NormalW, TangentW) * _in.TangentW.w;
const float3x3 TBN = float3x3(TangentW, BitangentW, NormalW);
float3 normal = normalize(mul(TBN, normalmap));

// Lighting calculations
//float4 normal = normalize(float4(_in.Norm, 0.0f));
float3 hvector = normalize(mul(-gDirLight.Direction.xyz, TBN) + gEyePos).xyz;
//hvector = mul(hvector.xyz, TBN);
float4 ambient = gDirLight.Ambient * gMaterial.Ambient;
float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
float4 specular = float4(0.0f, 0.0f, 0.0f, 0.0f);
float4 texColor = float4(1.0f, 1.0f, 1.0f, 1.0f);

[branch]
if (gUseTextures)
    texColor = gTextures1024.Sample(gLinearSam, float3(_in.Tex, gDiffuseMapIndex));

// Diffuse factor
float diffuseFactor = saturate(dot(normal, -gDirLight.Direction.xyz));

[branch]
if (diffuseFactor > 0.0f)
{
    diffuse = diffuseFactor * gDirLight.Diffuse * gMaterial.Diffuse;

    // Specular factor & color
    float HdotN = saturate(dot(hvector, normal));
    specular = gDirLight.Specular * pow(HdotN, gMaterial.Specular.w);
}

// Modulate with late add
return (texColor * (ambient + diffuse)) + specular;
Am I doing something wrong here? As far as I can tell, I'm implementing the normal map calculation in world space, and everything should be working just fine... Am I missing something?

TBN, as you have constructed it, is a matrix that transforms vectors from world space to tangent space. Therefore, you should do your lighting calculations in tangent space.
The normal that you acquire from the normal map is (presumably) already in tangent space, so you need to transform the light direction and eye position to tangent space and continue the calculation as usual.
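For illustration, here is a minimal CPU-side sketch of that transform, under the assumption that the rows of TBN are the world-space tangent, bitangent, and normal (the Vec3 type and the toTangentSpace helper are mine, not part of the shader above). Because the frame is orthonormal, dotting a world-space vector against each row projects it into tangent space:

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Transform a world-space vector into tangent space by projecting it
// onto the rows of the TBN matrix (the transpose of tangent-to-world,
// which equals its inverse because the frame is orthonormal).
Vec3 toTangentSpace(const Vec3& v, const Vec3& T, const Vec3& B, const Vec3& N)
{
    return { dot(v, T), dot(v, B), dot(v, N) };
}

In the shader this is exactly what mul(TBN, v) computes for a world-space v, so applying it to the light direction and the view vector (rather than to the sampled normal) keeps the whole lighting calculation in tangent space.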

Nico, you were right. But I had many, many more issues that were creeping up on me.
Issue #1: I wasn't calculating my normals properly. I was using a plain per-vertex average and wasn't even aware that there was such a technique as the weighted average (see the sketch after this list). This was a 2000% improvement on all my lighting.
Issue #2: The tangent and bitangent calculations weren't being done correctly either. I might still improve on that area to see if I can also do a weighted average of them.
Issue #3: I wasn't doing my lighting calculations correctly, and after being on Wikipedia for about two days I finally did it right, and actually understand it now with complete clarity.
Issue #4: I just wanted to hurry up and do it without understanding 100% of what I was doing (never making that mistake again).
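For anyone curious about Issue #1, here is a minimal sketch of the area-weighted approach, assuming an indexed triangle mesh (the Vec3 type and mesh layout are my illustration, not the poster's code). The un-normalized cross product of two triangle edges has a length proportional to the triangle's area, so accumulating it per vertex and normalizing only at the end weights each face by its area:

#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Area-weighted vertex normals for an indexed triangle mesh.
std::vector<Vec3> computeNormals(const std::vector<Vec3>& positions,
                                 const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(positions.size());
    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        // |cross| = 2 * triangle area, so this sum is implicitly area-weighted.
        Vec3 faceNormal = cross(sub(positions[i1], positions[i0]),
                                sub(positions[i2], positions[i0]));
        normals[i0] = add(normals[i0], faceNormal);
        normals[i1] = add(normals[i1], faceNormal);
        normals[i2] = add(normals[i2], faceNormal);
    }
    for (Vec3& n : normals)
        n = normalize(n);
    return normals;
}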

Related

Calculating a 3D point in front of a position and rotation

I'm currently working on a game project and need to render a point in front of the current player's vision; the game is written in a custom C++ engine. I have the current position (x, y, z) and the current rotation (pitch, yaw, roll), and I need to extend the point forward along the known angle at a set distance.
edit:
What I used as a solution (it's slightly off, but that's OK for me):
Vec3 LocalPos = { 0, 0, 0 };
Vec3 CurrentLocalAngle = { 0, 0, 0 };
float len = 0.1f;
// Convert Euler angles from degrees to radians
float pitch = CurrentLocalAngle.x * (M_PI / 180);
float yaw = CurrentLocalAngle.y * (M_PI / 180);
float sp = sinf(pitch);
float cp = cosf(pitch);
float sy = sinf(yaw);
float cy = cosf(yaw);
// Unit forward vector from pitch/yaw (roll does not affect the direction)
Vec3 dir = { cp * cy, cp * sy, -sp };
LocalPos = { LocalPos.x + dir.x * len, LocalPos.y + dir.y * len, LocalPos.z + dir.z * len };
You can get the player's forward vector from column 3 of the transform matrix if it is column-major. Multiply that normalized vector by the distance you want, then add the result to the player's position to get the point you need.
Convert the angle to a direction vector, or just get the "forward vector" from the player if it's available in the engine you're using (it should be the same thing).
Direction vectors are normalized by nature (they have length 1), so you can just multiply one by the desired distance to get the desired offset. Add that offset to the reference point (the player's camera position, I presume) and you get the point in the world where this point belongs.
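As a sketch of the matrix route from the first answer: in a column-major engine, the third column of the rotation part of the player's transform is typically the forward axis (whether that is +Z or -Z depends on the engine's conventions, which is an assumption here, as are the Mat4 layout and the pointInFront name):

#include <cmath>

struct Vec3 { float x, y, z; };

// 4x4 column-major transform: m[column][row]
struct Mat4 { float m[4][4]; };

// Pull the forward axis out of the transform, normalize it,
// and step 'distance' units ahead of 'position'.
Vec3 pointInFront(const Mat4& transform, const Vec3& position, float distance)
{
    Vec3 f = { transform.m[2][0], transform.m[2][1], transform.m[2][2] };
    float len = std::sqrt(f.x * f.x + f.y * f.y + f.z * f.z);
    return { position.x + f.x / len * distance,
             position.y + f.y / len * distance,
             position.z + f.z / len * distance };
}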

How to calculate azimuth from X Y Z values from magnetometer?

I want to make a compass with an Arduino and a QMC5883. The magnetometer outputs only X, Y, and Z values, and I have to calculate the rest myself. So far I've used this:
float azimuth = atan2(x, y) * 180.0/PI;
But it's pretty buggy and vulnerable to tilting in any direction. Is there a better algorithm, for example the one phone manufacturers use? I could use an accelerometer for help if needed.
The BBC micro:bit's device abstraction layer (DAL) includes this code to do tilt adjustment based on angles derived from accelerometer data. From https://github.com/lancaster-university/microbit-dal/blob/master/source/drivers/MicroBitCompass.cpp
/**
 * Calculates a tilt compensated bearing of the device, using the accelerometer.
 */
int MicroBitCompass::tiltCompensatedBearing()
{
    // Precompute the tilt compensation parameters to improve readability.
    float phi = accelerometer->getRollRadians();
    float theta = accelerometer->getPitchRadians();

    // Convert to floating point to reduce rounding errors
    Sample3D cs = this->getSample(NORTH_EAST_DOWN);
    float x = (float) cs.x;
    float y = (float) cs.y;
    float z = (float) cs.z;

    // Precompute cos and sin of pitch and roll angles to make the calculation a little more efficient.
    float sinPhi = sin(phi);
    float cosPhi = cos(phi);
    float sinTheta = sin(theta);
    float cosTheta = cos(theta);

    // Calculate the tilt compensated bearing, and convert to degrees.
    float bearing = (360*atan2(x*cosTheta + y*sinTheta*sinPhi + z*sinTheta*cosPhi, z*sinPhi - y*cosPhi)) / (2*PI);

    // Handle the 90 degree offset caused by the NORTH_EAST_DOWN based calculation.
    bearing = 90 - bearing;

    // Ensure the calculated bearing is in the 0..359 degree range.
    if (bearing < 0)
        bearing += 360.0f;

    return (int) (bearing);
}
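Adapting that to a bare Arduino + QMC5883 setup might look like the sketch below. The projection formula is lifted straight from the micro:bit code; the roll/pitch estimates from the raw accelerometer and the north-east-down axis convention are assumptions you would have to match to how your sensors are actually mounted:

#include <math.h>

// Tilt-compensated heading in degrees, from raw accelerometer (ax, ay, az)
// and magnetometer (mx, my, mz) readings in a north-east-down frame.
float tiltCompensatedAzimuth(float ax, float ay, float az,
                             float mx, float my, float mz)
{
    // Roll and pitch estimated from gravity (assumes the device is not accelerating).
    float phi   = atan2f(ay, az);                        // roll
    float theta = atan2f(-ax, sqrtf(ay * ay + az * az)); // pitch

    float sinPhi = sinf(phi), cosPhi = cosf(phi);
    float sinTheta = sinf(theta), cosTheta = cosf(theta);

    // Same projection as in the micro:bit DAL code above.
    float bearing = atan2f(mx * cosTheta + my * sinTheta * sinPhi + mz * sinTheta * cosPhi,
                           mz * sinPhi - my * cosPhi) * 180.0f / (float)M_PI;

    bearing = 90.0f - bearing;  // undo the north-east-down offset
    if (bearing < 0.0f)
        bearing += 360.0f;
    return bearing;
}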

Drawing a rotating sphere by using a pixel shader in Direct3D

I would like to draw a textured circle in Direct3D which looks like a real 3D sphere. For this purpose, I took a texture of a billiard ball and tried to write a pixel shader in HLSL which maps it onto a simple pre-transformed quad in such a way that it looks like a 3-dimensional sphere (apart from the lighting, of course).
This is what I've got so far:
struct PS_INPUT
{
    float2 Texture : TEXCOORD0;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

sampler2D Tex0;

// main function
PS_OUTPUT ps_main( PS_INPUT In )
{
    // default color for points outside the sphere (alpha=0, i.e. invisible)
    PS_OUTPUT Out;
    Out.Color = float4(0, 0, 0, 0);

    float pi = acos(-1);

    // map texel coordinates to [-1, 1]
    float x = 2.0 * (In.Texture.x - 0.5);
    float y = 2.0 * (In.Texture.y - 0.5);
    float r = sqrt(x * x + y * y);

    // if the texel is not inside the sphere
    if(r > 1.0f)
        return Out;

    // 3D position on the front half of the sphere
    float p[3] = {x, y, sqrt(1 - x*x + y*y)};

    // calculate UV mapping
    float u = 0.5 + atan2(p[2], p[0]) / (2.0*pi);
    float v = 0.5 - asin(p[1]) / pi;

    // do some simple antialiasing
    float alpha = saturate((1-r) * 32); // scale by half quad width

    Out.Color = tex2D(Tex0, float2(u, v));
    Out.Color.a = alpha;
    return Out;
}
The texture coordinates of my quad range from 0 to 1, so I first map them to [-1, 1]. After that I followed the formula in this article to calculate the correct texture coordinates for the current point.
At first, the outcome looked ok, but I'd like to be able to rotate this illusion of a sphere arbitrarily. So I gradually increased u in the hope of rotating the sphere around the vertical axis. This is the result:
As you can see, the imprint of the ball looks unnaturally deformed when it reaches the edge. Can anyone see any reason for this? And additionally, how could I implement rotations around an arbitrary axis?
Thanks in advance!
I finally found the mistake by myself: the calculation of the z value corresponding to the current point (x, y) on the front half of the sphere was wrong. It must of course be:
z = sqrt(1 - x*x - y*y)
That's all; it works as expected now. Furthermore, I figured out how to rotate the sphere: you just have to rotate the point p before calculating u and v, by multiplying it with a standard 3D rotation matrix.
The result looks like the following:
If anyone has any advice on how I could smooth the texture a little bit, please leave a comment.
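To make the rotation concrete, here is a small sketch of rotating p about the vertical (y) axis before the UV lookup; rotations about other axes compose the same way. This is my illustration of the fix described above, not the poster's exact code:

#include <cmath>

// Rotate a point on the unit sphere about the y axis by 'angle' radians,
// then compute the spherical UV coordinates used in the shader above.
void rotatedSphereUV(const float p[3], float angle, float& u, float& v)
{
    const float pi = std::acos(-1.0f);
    float c = std::cos(angle), s = std::sin(angle);

    // y-axis rotation matrix applied to p = (x, y, z)
    float rx =  c * p[0] + s * p[2];
    float ry =  p[1];
    float rz = -s * p[0] + c * p[2];

    u = 0.5f + std::atan2(rz, rx) / (2.0f * pi);
    v = 0.5f - std::asin(ry) / pi;
}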

Sum Vector Components in OpenCL (SSE-like)

Is there a single instruction to calculate the sum of all components of a float4, e.g., in OpenCL?
float4 v;
float desiredResult = v.x + v.y + v.z + v.w;
float4 v;
float desiredResult = dot(v, (float4)(1.0f, 1.0f, 1.0f, 1.0f));
It's a little more work, because you're multiplying each component by one before adding them, but some GPUs have a dot-product instruction built in, so it might be faster or slower than a plain sum; it depends on your hardware.
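Since the title mentions SSE: the same dot-with-ones trick works on the CPU with SSE4.1's _mm_dp_ps, as a rough point of comparison (a horizontal-sum sketch, not OpenCL):

#include <immintrin.h>

// Sum the four lanes of an SSE register via a dot product with (1,1,1,1).
// Mask 0xF1: multiply all four lanes, store the sum in lane 0.
float horizontalSum(__m128 v)
{
    __m128 ones = _mm_set1_ps(1.0f);
    return _mm_cvtss_f32(_mm_dp_ps(v, ones, 0xF1));
}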

Average QRgb values

For a posterization algorithm I'm going to need to average the color values (QRgb) present in my std::vector.
How would you suggest doing it? Sum the three components separately and then average them? Or something else?
Since QRgb is just a 32-bit unsigned int in ARGB format, it doesn't suffice for adding colors: the per-channel sums will most likely overflow. QColor doesn't suffice either, since it stores its components as fixed-point 16-bit integers and therefore cannot cope with colors outside the valid [0, 1] range. So you cannot use QRgb or QColor for the accumulation, as they clamp each partial sum to the valid range. Neither can you pre-divide the colors before adding them, because of their limited precision.
So your best bet would really just be to sum up the individual components using floating point numbers and then divide them by the vector size:
std::vector<QRgb> rgbValues;
float r = 0.0f, g = 0.0f, b = 0.0f, a = 0.0f;
for(std::vector<QRgb>::const_iterator iter = rgbValues.begin();
    iter != rgbValues.end(); ++iter)
{
    // fromRgba keeps the alpha channel (the QColor(QRgb) constructor would discard it)
    QColor color = QColor::fromRgba(*iter);
    r += color.redF();
    g += color.greenF();
    b += color.blueF();
    a += color.alphaF();
}
float scale = 1.0f / float(rgbValues.size());
QRgb average = QColor::fromRgbF(r*scale, g*scale, b*scale, a*scale).rgba();
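Wrapped up as a hypothetical helper (averageColor is my name, not from the answer above), usage could look like this:

#include <QColor>
#include <vector>

// Average a list of QRgb values in floating point, per the answer above.
QRgb averageColor(const std::vector<QRgb>& rgbValues)
{
    float r = 0.0f, g = 0.0f, b = 0.0f, a = 0.0f;
    for (QRgb value : rgbValues)
    {
        QColor color = QColor::fromRgba(value);
        r += color.redF();
        g += color.greenF();
        b += color.blueF();
        a += color.alphaF();
    }
    float scale = 1.0f / float(rgbValues.size());
    return QColor::fromRgbF(r * scale, g * scale, b * scale, a * scale).rgba();
}

int main()
{
    // Pure red, green, and blue average to mid gray: roughly (85, 85, 85, 255).
    std::vector<QRgb> values = { qRgba(255, 0, 0, 255),
                                 qRgba(0, 255, 0, 255),
                                 qRgba(0, 0, 255, 255) };
    QRgb avg = averageColor(values);
    (void)avg;
}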
