Sum Vector Components in OpenCL (SSE-like)

Is there a single instruction in OpenCL (SSE-like) to calculate the sum of all components of a float4? E.g.:
float4 v;
float desiredResult = v.x + v.y + v.z + v.w;

float4 v;
float desiredResult = dot(v, (float4)(1.0f, 1.0f, 1.0f, 1.0f));
It's a little more work, because you're multiplying each component by one before adding them, but some GPUs have a built-in dot-product instruction. So it might be faster; it might be slower. It depends on your hardware.


Calculating a 3D point in front of a position and rotation

I'm currently working on a game project and need to render a point in front of the current player's vision; the game is written in a custom C++ engine. I have the current position (x, y, z) and the current rotation (pitch, yaw, roll). I need to extend the point forward along the known angle at a set distance.
Edit: what I used as a solution (it's slightly off, but that's OK for me):
Vec3 LocalPos = {0,0,0};
Vec3 CurrentLocalAngle = {0,0,0};
float len = 0.1f;
float pitch = CurrentLocalAngle.x * (M_PI / 180);
float yaw = CurrentLocalAngle.y * (M_PI / 180);
float sp = sinf(pitch);
float cp = cosf(pitch);
float sy = sinf(yaw);
float cy = cosf(yaw);
Vec3 dir = { cp * cy, cp * sy, -sp };
LocalPos = { LocalPos.x + dir.x * len, LocalPos.y + dir.y * len, LocalPos.z + dir.z * len };
You can get the player's forward vector from column 3 of its transform matrix (if the matrix is column-based). Normalize it, multiply it by the distance you want, and add the result to the player's position; that gives you the point you need.
Convert the angle to a direction vector, or just get the "forward vector" from the player if it's available in the engine you're using (it should be the same thing).
Direction vectors are normalized by nature (they have length 1), so you can simply scale one by the desired distance to get the desired offset. Multiply this vector by the distance you want the point to be from the reference point (the player's camera, I presume), then add the two together to get the point in the world where this point belongs.

How to calculate azimuth from X Y Z values from magnetometer?

I want to make a compass with an Arduino and a QMC5883. The magnetometer outputs only X, Y, and Z values, and I have to calculate the rest myself. So far, I've used this:
float azimuth = atan2(x, y) * 180.0/PI;
But it's pretty buggy and vulnerable to tilting in any direction. Is there a better algorithm that, for example, phone manufacturers use? I could use an accelerometer for help if needed.
The BBC micro:bit's device abstraction layer (DAL) includes this code to do tilt adjustment based on angles derived from accelerometer data. From https://github.com/lancaster-university/microbit-dal/blob/master/source/drivers/MicroBitCompass.cpp
/**
* Calculates a tilt compensated bearing of the device, using the accelerometer.
*/
int MicroBitCompass::tiltCompensatedBearing()
{
    // Precompute the tilt compensation parameters to improve readability.
    float phi = accelerometer->getRollRadians();
    float theta = accelerometer->getPitchRadians();

    // Convert to floating point to reduce rounding errors
    Sample3D cs = this->getSample(NORTH_EAST_DOWN);
    float x = (float) cs.x;
    float y = (float) cs.y;
    float z = (float) cs.z;

    // Precompute cos and sin of pitch and roll angles to make the calculation a little more efficient.
    float sinPhi = sin(phi);
    float cosPhi = cos(phi);
    float sinTheta = sin(theta);
    float cosTheta = cos(theta);

    // Calculate the tilt compensated bearing, and convert to degrees.
    float bearing = (360*atan2(x*cosTheta + y*sinTheta*sinPhi + z*sinTheta*cosPhi, z*sinPhi - y*cosPhi)) / (2*PI);

    // Handle the 90 degree offset caused by the NORTH_EAST_DOWN based calculation.
    bearing = 90 - bearing;

    // Ensure the calculated bearing is in the 0..359 degree range.
    if (bearing < 0)
        bearing += 360.0f;

    return (int) (bearing);
}

Normal Map Implementation.... am I missing something here?

So, I have a light direction in world space, and I calculate my normals per vertex. I am, however, a little confused about my normal map implementation. Right now I'm doing this:
// Normal Map
const float3 normalmap = (2.0f*gTextures1024.Sample(gLinearSam, float3(_in.Tex, gNormalMapIndex)).rgb) - 1.0f;
const float3 NormalW = _in.Norm;
const float3 TangentW = normalize(_in.TangentW.xyz - dot(_in.TangentW.xyz, _in.Norm)* _in.Norm);
const float3 BitangentW = cross(NormalW, TangentW) * _in.TangentW.w;
const float3x3 TBN = float3x3(TangentW, BitangentW, NormalW);
float3 normal = normalize(mul(TBN, normalmap));

// Lighting Calculations
//float4 normal = normalize(float4(_in.Norm, 0.0f));
float3 hvector = normalize(mul(-gDirLight.Direction.xyz, TBN) + gEyePos).xyz;
//hvector = mul(hvector.xyz, TBN);

float4 ambient = gDirLight.Ambient * gMaterial.Ambient;
float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
float4 specular = float4(0.0f, 0.0f, 0.0f, 0.0f);
float4 texColor = float4(1.0f, 1.0f, 1.0f, 1.0f);

[branch]
if(gUseTextures)
    texColor = gTextures1024.Sample(gLinearSam, float3(_in.Tex, gDiffuseMapIndex));

// diffuse factor
float diffuseFactor = saturate(dot(normal, -gDirLight.Direction.xyz));

[branch]
if(diffuseFactor > 0.0f)
{
    diffuse = diffuseFactor * gDirLight.Diffuse * gMaterial.Diffuse;

    // Specular factor & color
    float HdotN = saturate(dot(hvector, normal));
    specular = gDirLight.Specular * pow(HdotN, gMaterial.Specular.w);
}

// Modulate with late add
return (texColor * (ambient + diffuse)) + specular;
Am I doing something wrong here? As far as I can tell, I am implementing the normal map calculation in world space, so everything should be working just fine... Am I missing something?
TBN is a matrix that transforms vectors from world space to tangent space. Therefore, you should do the lighting calculations in tangent space.
The normal that you acquire from the normal map is (presumably) already in tangent space. So you need to transform the light direction and eye position to tangent space and continue the calculation as usual.
Nico, you were right. But I had MANY more issues that were creeping up on me.
Issue #1: I wasn't calculating my normals properly. I was using the per-vertex average, but wasn't even aware that there was such a technique as the weighted average. This was a 2000% improvement on all my lighting.
Issue #2: The tangent and bitangent calculations weren't being done correctly either. I might still improve on that area to see if I can also do a weighted average of them.
Issue #3: I wasn't doing my lighting calculations correctly, and after being on Wikipedia for about two days, I finally did it right, and actually understand it now with complete clarity.
Issue #4: I just wanted to hurry up and get it done without understanding 100% of what I was doing (never making that mistake AGAIN).

mapping polar angle to 0..1

Given a cartesian position, how can you map the angle from the origin into the range 0 .. 1?
I have tried:
sweep = (atan(pos.y,pos.x) + PI) / (2.*PI);
(where sweep should be between 0 and 1)
This is GLSL, so the atan function is happy with two parameters (y then x) and returns -PI ... PI
This gives 1 in the top-left quadrant, a nice gradient in the top-right going round to the bottom-right quadrant, and then 0 in the bottom-left quadrant.
How do I get a nice single gradient sweep instead? I want the maximum sweep somewhere, and the minimum adjacent to it anti-clockwise.
Here's my GLSL shader code:
Vertex shader:
uniform mat4 MVP_MATRIX;
attribute vec2 VERTEX;
varying vec2 pos;
void main() {
gl_Position = MVP_MATRIX * vec4(VERTEX,-2,1.);
pos = gl_Position.xy;
}
Fragment shader:
uniform vec4 COLOUR;
varying vec2 pos;
void main() {
float PI = 3.14159265358979323846264;
float sweep = (atan(pos.y,pos.x) + PI) / (2.*PI);
gl_FragColor = vec4(COLOUR.rgb * sweep,COLOUR.a);
}
Most programming languages have a two-parameter version of atan, often called atan2. This will usually give a result in the range (-PI, PI]. To convert that to the range 0..1 you can use:
(atan2(y,x) + PI) / (2*PI)
Since your language's atan function takes two arguments, it probably does the same thing as atan2.
You appear to be using atan2, which returns an angle in (-pi, pi). Make it into:
(atan2(pos.y,pos.x) + PI) / (2*PI)

Average QRgb values

For a posterization algorithm, I'm going to need to average the color values (QRgb) present in my std::vector.
How would you suggest to do it? Sum the 3 components separately then average them? Otherwise?
Since QRgb is just a 32-bit unsigned int in ARGB format, it doesn't suffice for adding colors: the partial sums will most likely overflow. QColor doesn't suffice either, as it uses fixed-point 16-bit integers for the color components and therefore cannot cope with colors outside the valid [0, 1] range. So you cannot use QRgb or QColor for this, as they clamp each partial sum to the valid range. Neither can you pre-divide the colors before adding them, because of their limited precision.
So your best bet would really just be to sum up the individual components using floating point numbers and then divide them by the vector size:
std::vector<QRgb> rgbValues;
float r = 0.0f, g = 0.0f, b = 0.0f, a = 0.0f;
for(std::vector<QRgb>::const_iterator iter = rgbValues.begin();
    iter != rgbValues.end(); ++iter)
{
    QColor color(*iter);
    r += color.redF();
    g += color.greenF();
    b += color.blueF();
    a += color.alphaF();
}
float scale = 1.0f / float(rgbValues.size());
QRgb average = QColor::fromRgbF(r*scale, g*scale, b*scale, a*scale).rgba();
