I have noticed that in most of the PCL examples, they choose 1024 for random point generation. Is there a special reason for choosing 1024?
cloud->points[i].x = 1024 * rand () / (RAND_MAX + 1.0f);
cloud->points[i].y = 1024 * rand () / (RAND_MAX + 1.0f);
cloud->points[i].z = 1024 * rand () / (RAND_MAX + 1.0f);
I don't think there is a reason at all; I guess they just chose a random value.
It is a matter of normalization. rand() returns an integer between 0 and RAND_MAX, so rand() / (RAND_MAX + 1.0f) yields a normalized value in [0, 1). If you multiply by 1024, you'll have values in [0, 1024) = [0, 2^10). The only reason I can see is that if your boundaries are delimited in a [0, 2^N] range, a lot of tasks become easier (e.g. working with hierarchical structures).
Anyway, mind that you can normalize to whatever range you want, depending on your needs.
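As a quick illustration, here's the same scaling idea as a Python sketch (the function name and point count are mine, not from PCL):

```python
import random

# random.random() returns a value in [0, 1); multiplying by 1024
# scales it into [0, 1024) = [0, 2^10). The bound is arbitrary.
def random_point(scale=1024.0):
    return tuple(scale * random.random() for _ in range(3))

cloud = [random_point() for _ in range(100)]
```

Any other scale works just as well if your scene needs a different range.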
Related
I'm currently working on a game project and need to render a point in front of the current player's vision; the game is written in a custom C++ engine. I have the current position (x, y, z) and the current rotation (pitch, yaw, roll). I need to extend the point forward along the known angle at a set distance.
edit:
What I Used As A Solution (It's slightly off, but that's OK for me)
Vec3 LocalPos = {0, 0, 0};
Vec3 CurrentLocalAngle = {0, 0, 0};
float len = 0.1f;
// convert the angles from degrees to radians
float pitch = CurrentLocalAngle.x * (M_PI / 180);
float yaw = CurrentLocalAngle.y * (M_PI / 180);
float sp = sinf(pitch);
float cp = cosf(pitch);
float sy = sinf(yaw);
float cy = cosf(yaw);
// unit forward vector built from pitch and yaw
Vec3 dir = { cp * cy, cp * sy, -sp };
// step the position along the forward vector by len
LocalPos = { LocalPos.x + dir.x * len, LocalPos.y + dir.y * len, LocalPos.z + dir.z * len };
You can get the player's forward vector from column 3 of the transformation matrix if it is column-major. Multiply that normalized vector by the distance you want, then add the result to the player's position, and you will get the point you need.
Convert the angle to a directional vector or just get the "forward vector" from the player if it's available in the engine you're using (it should be the same thing).
Directional vectors are normalized by nature (they have length 1), so you can just multiply one by the desired distance to get an offset. Multiply this vector by the distance you want the point to be from the reference point (the player's camera position, I presume), then add the two together to get the point in the world where this point belongs.
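To make the recipe concrete, here is a hedged Python sketch of the whole procedure (the function name and axis conventions are mine; z up and negated pitch mirror the asker's snippet and may differ from your engine):

```python
import math

# Sketch only: build a unit forward vector from pitch and yaw (degrees),
# scale it by the distance, and offset it from the current position.
def point_in_front(pos, pitch_deg, yaw_deg, distance):
    pitch = math.radians(pitch_deg)
    yaw = math.radians(yaw_deg)
    # unit forward vector built from the two angles
    forward = (math.cos(pitch) * math.cos(yaw),
               math.cos(pitch) * math.sin(yaw),
               -math.sin(pitch))
    # scale by the distance and offset from the current position
    return tuple(p + f * distance for p, f in zip(pos, forward))
```

With zero rotation this returns a point `distance` units straight ahead along +x.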
I implemented a day/night shader built on the basis that only pixels on the side of an object that is facing the directional light source are illuminated. I calculate this based on the unit vectors between the directional light's position and the position of the pixel in 3D space:
float3 direction = normalize(Light.Position - Object.Position);
float theta = abs(acos(dot(normalize(Object.Position), direction))) * 180 / 3.14f;
if (theta < 180)
    color = float3(1.0f);
else
    color = float3(0.2f);
return float4(color, 1.0f);
This works well, but since I am brushing up on my math lately, it got me thinking that I should make sure I understand what acos is returning.
Mathematically, I know that the arccosine should give me an angle in radians from a value between -1 and 1, while cosine should give me a value between -1 and 1 from an angle in radians.
The documentation states that the input value should be between -1 and 1 for acos, which follows that idea, but it doesn't tell me whether the return value is in 0 to π, -π to π, 0 to 2π, or a similar range.
Return Value
    The arccosine of the x parameter.

Type Description
    Name   Template Type              Component Type   Size
    x      scalar, vector, or matrix  float            any
    ret    same as input x            float            same dimension(s) as input x
HLSL doesn't really give me a way to test this very easily, so I'm wondering if anyone has any documentation on this.
What is the return range for the HLSL function acos?
I went through some testing on this topic and have discovered that the HLSL version of acos returns a value between 0 and π. I proved this to be true with the following:
n = 0..3
d = [0, 90, 180, 181]
r = π / 180 * d[n]
c = cos(r)
a = acos(c)
The following is the result of the evaluations for d[n]:
d[0] returns a = 0.
d[1] returns a = π/2.
d[2] returns a = π.
d[3] returns a ~= 3.12....
This tells us that the return value for acos stays true to the range of usual principal values for arccosine:
0 ≤ acos(x) ≤ π
It also remains consistent with the definition (valid for θ in [0, π]):
acos(cos(θ)) = θ
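The same principal-value behaviour is easy to confirm outside HLSL; for instance, Python's math.acos follows the identical convention:

```python
import math

# math.acos uses the same principal-value convention as HLSL's acos:
# sweep inputs covering every quadrant and the result stays in [0, pi].
for deg in range(-360, 361):
    a = math.acos(math.cos(math.radians(deg)))
    assert 0.0 <= a <= math.pi

# acos(cos(theta)) == theta only on the principal range; 181 deg folds to 179:
assert math.isclose(math.acos(math.cos(math.radians(181.0))), math.radians(179.0))
```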
I have provided feedback to Microsoft with regards to the lack of detailed documentation on HLSL intrinsic functions in comparison to more common languages.
I'm making a game using Go, since it seems to work quite well for games. I made the player always face the mouse, but I wanted a turn rate to make certain characters turn more slowly than others. Here is how it calculates the turn circle:
func (p *player) handleTurn(win pixelgl.Window, dt float64) {
    // the angle the player needs to turn to face the mouse
    mouseRad := math.Atan2(p.pos.Y-win.MousePosition().Y, win.MousePosition().X-p.pos.X)
    if mouseRad > p.rotateRad-(p.turnSpeed*dt) {
        p.rotateRad += p.turnSpeed * dt
    } else if mouseRad < p.rotateRad+(p.turnSpeed*dt) {
        p.rotateRad -= p.turnSpeed * dt
    }
}
mouseRad is the angle in radians needed to face the mouse, and I'm just adding the turn rate [in this case, 2].
What's happening is when the mouse reaches the left side and crosses the center y axis, the radian angle goes from -pi to pi or vice-versa. This causes the player to do a full 360.
What is a proper way to fix this? I've tried making the angle an absolute value and it only made it occur at pi and 0 [left and right side of the square at the center y axis].
I have attached a gif of the problem to give better visualization.
Basic summarization:
Player slowly rotates to follow the mouse, but when the angle reaches pi, it changes polarity, which causes the player to do a 360 [it rotates all the way back around to the opposite-polarity angle].
Edit:
dt is delta time, for proper frame-rate-independent changes in movement.
p.rotateRad starts at 0 and is a float64.
Github repo temporarily: here
You need this library to build it! [go get it]
Note beforehand: I downloaded your example repo and applied my change on it, and it worked flawlessly. Here's a recording of it:
(for reference, GIF recorded with byzanz)
An easy and simple solution would be to not compare the angles (mouseRad and the changed p.rotateRad), but rather calculate and "normalize" the difference so it's in the range of -Pi..Pi. And then you can decide which way to turn based on the sign of the difference (negative or positive).
"Normalizing" an angle can be achieved by adding / subtracting 2*Pi until it falls in the -Pi..Pi range. Adding / subtracting 2*Pi won't change the angle, as 2*Pi is exactly a full circle.
This is a simple normalizer function:
func normalize(x float64) float64 {
    for ; x < -math.Pi; x += 2 * math.Pi {
    }
    for ; x > math.Pi; x -= 2 * math.Pi {
    }
    return x
}
And use it in your handleTurn() like this:
func (p *player) handleTurn(win pixelgl.Window, dt float64) {
    // the angle the player needs to turn to face the mouse:
    mouseRad := math.Atan2(p.pos.Y-win.MousePosition().Y,
        win.MousePosition().X-p.pos.X)
    if normalize(mouseRad-p.rotateRad-(p.turnSpeed*dt)) > 0 {
        p.rotateRad += p.turnSpeed * dt
    } else if normalize(mouseRad-p.rotateRad+(p.turnSpeed*dt)) < 0 {
        p.rotateRad -= p.turnSpeed * dt
    }
}
You can play with it in this working Go Playground demo.
Note that if you store your angles normalized (in the range -Pi..Pi), the loops in the normalize() function will have at most 1 iteration, so it's going to be really fast. Obviously you don't want to store angles like 100*Pi + 0.1, as that is identical to 0.1. normalize() would produce a correct result with both of these input angles, but the loops would run 50 iterations for the former and 0 iterations for the latter.
Also note that normalize() could be optimized for "big" angles by using floating operations analogue to integer division and remainder, but if you stick to normalized or "small" angles, this version is actually faster.
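For completeness, here is a sketch of that floating-remainder optimization in Python (Go's math.Remainder behaves the same way); the function name matches the one above, but the implementation is mine:

```python
import math

# math.remainder(x, 2*pi) is the IEEE 754 remainder: it reduces x into
# [-pi, pi] in one step, with no loops, no matter how large x is.
def normalize(x):
    return math.remainder(x, 2 * math.pi)
```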
Preface: this answer assumes some knowledge of linear algebra, trigonometry, and rotations/transformations.
Your problem stems from the usage of rotation angles. Due to the discontinuous nature of the inverse trigonometric functions, it is quite difficult (if not outright impossible) to eliminate "jumps" in the value of the functions for relatively close inputs. Specifically, when x < 0, atan2(+0, x) = +pi (where +0 is a positive number very close to zero), but atan2(-0, x) = -pi. This is exactly why you experience the difference of 2 * pi which causes your problem.
Because of this, it is often better to work directly with vectors, rotation matrices and/or quaternions. They use angles as arguments to trigonometric functions, which are continuous and eliminate any discontinuities. In our case, spherical linear interpolation (slerp) should do the trick.
Since your code measures the angle formed by the relative position of the mouse to the absolute rotation of the object, our goal boils down to rotating the object such that the local axis (1, 0) (= (cos rotateRad, sin rotateRad) in world space) points towards the mouse. In effect, we have to rotate the object such that (cos p.rotateRad, sin p.rotateRad) equals (win.MousePosition().Y - p.pos.Y, win.MousePosition().X - p.pos.X).normalized.
How does slerp come into play here? Considering the above statement, we simply have to slerp geometrically from (cos p.rotateRad, sin p.rotateRad) (represented by current) to (win.MousePosition().Y - p.pos.Y, win.MousePosition().X - p.pos.X).normalized (represented by target) by an appropriate parameter which will be determined by the rotation speed.
Now that we have laid out the groundwork, we can move on to actually calculating the new rotation. According to the slerp formula,
slerp(p0, p1; t) = p0 * sin(A * (1−t)) / sin A + p1 * sin(A * t) / sin A
Where A is the angle between unit vectors p0 and p1, or cos A = dot(p0, p1).
In our case, p0 == current and p1 == target. The only thing that remains is the calculation of the parameter t, which can also be considered as the fraction of the angle to slerp through. Since we know that we are going to rotate by an angle p.turnSpeed * dt at every time step, t = p.turnSpeed * dt / A. After substituting the value of t, our slerp formula becomes
p0 * sin(A - p.turnSpeed * dt) / sin A + p1 * sin (p.turnSpeed * dt) / sin A
To avoid having to calculate A using acos, we can use the compound angle formula for sin to simplify this further. Note that the result of the slerp operation is stored in result.
result = p0 * (cos(p.turnSpeed * dt) - sin(p.turnSpeed * dt) * cos A / sin A) + p1 * sin(p.turnSpeed * dt) / sin A
We now have everything we need to calculate result. As noted before, cos A = dot(p0, p1). Similarly, sin A = abs(cross(p0, p1)), where cross(a, b) = a.X * b.Y - a.Y * b.X.
Now comes the problem of actually finding the rotation from result. Note that result = (cos newRotation, sin newRotation). There are two possibilities:
Directly calculate rotateRad by p.rotateRad = atan2(result.Y, result.X), or
If you have access to the 2D rotation matrix, simply replace the rotation matrix with the matrix
|result.X -result.Y|
|result.Y result.X|
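Here is a hedged Python sketch of the slerp update described above (all names are mine; p0 is the current facing direction, p1 the normalized direction to the mouse, and step = turnSpeed * dt):

```python
import math

# Sketch of the slerp-based turn. Returns the new facing direction as a
# unit vector; atan2 on the result recovers the new rotation if needed.
def turn_towards(p0, p1, step):
    cos_a = p0[0] * p1[0] + p0[1] * p1[1]      # dot(p0, p1)   = cos A
    cross = p0[0] * p1[1] - p0[1] * p1[0]
    sin_a = abs(cross)                         # |cross(p0, p1)| = sin A
    a = math.atan2(sin_a, cos_a)               # angle A between p0 and p1
    if a <= step:                              # close enough: snap to target
        return p1
    s, c = math.sin(step), math.cos(step)
    if sin_a < 1e-9:                           # antiparallel: pick a direction
        return (c * p0[0] - s * p0[1], s * p0[0] + c * p0[1])
    # compound-angle form of the slerp formula, avoiding an explicit acos
    k0 = c - s * cos_a / sin_a
    k1 = s / sin_a
    return (k0 * p0[0] + k1 * p1[0], k0 * p0[1] + k1 * p1[1])
```

The snap-to-target branch also prevents oscillation around the goal when the remaining angle is smaller than one frame's turn.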
I'm trying to make an SVG (and other 2D vector graphics) renderer in WebGL.
So far, I've figured out how to draw Quadratic Bezier with a triangle.
Here is the code.
var createProgram = function ( vsSource, fsSource ) {
    var vs = gl.createShader( gl.VERTEX_SHADER );
    gl.shaderSource( vs, vsSource );
    gl.compileShader( vs );
    var fs = gl.createShader( gl.FRAGMENT_SHADER );
    gl.shaderSource( fs, fsSource );
    gl.compileShader( fs );
    var program = gl.createProgram();
    gl.attachShader( program, vs );
    gl.attachShader( program, fs );
    gl.linkProgram( program );
    return program;
}

var vsSource = `
precision mediump float;
attribute vec2 vertex;
attribute vec2 attrib;
varying vec2 p;
void main(void) {
    gl_Position = vec4(vertex, 0.0, 1.0);
    p = attrib;
}
`;

var fsSource = `
precision mediump float;
varying vec2 p;
void main(void) {
    if (p.x*p.x - p.y > 0.0) {
        // discard;
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
}
`;
var canvas = document.querySelector( 'canvas' );
var gl = canvas.getContext( 'webgl' ) ||
         canvas.getContext( 'experimental-webgl' );
gl.clearColor( 0.5, 0.5, 0.5, 1.0 );

var shapeData = [
    -0.5, 0,
     0.5, 0,
     0,   1
];
var curveAttr = [
    0,   0,
    1,   1,
    0.5, 0
];

var program = createProgram( vsSource, fsSource );
gl.useProgram( program );
var vertexLoc1 = gl.getAttribLocation( program, 'vertex' );
var attribLoc1 = gl.getAttribLocation( program, 'attrib' );

gl.clear( gl.COLOR_BUFFER_BIT );
gl.enableVertexAttribArray( vertexLoc1 );
gl.enableVertexAttribArray( attribLoc1 );

var vertexBuffer1 = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, vertexBuffer1 );
gl.bufferData( gl.ARRAY_BUFFER, new Float32Array( shapeData ), gl.STATIC_DRAW );
gl.vertexAttribPointer( vertexLoc1, 2, gl.FLOAT, false, 0, 0 );

var vertexBuffer2 = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, vertexBuffer2 );
gl.bufferData( gl.ARRAY_BUFFER, new Float32Array( curveAttr ), gl.STATIC_DRAW );
gl.vertexAttribPointer( attribLoc1, 2, gl.FLOAT, false, 0, 0 );

gl.drawArrays( gl.TRIANGLES, 0, shapeData.length / 2 );
<canvas></canvas>
My question is how to draw Cubic Bezier, like above.
I guess it should be done with two or a few triangles.
Also, I understand there is no way to convert a cubic Bézier to a quadratic one.
Why quadratic works
Let's first understand why this works for quadratic. As you know, a quadratic Bézier curve is described as
(1−t)²∙A + 2t(1−t)∙B + t²∙C.
Now if you plug the curve attributes into this formula, you get
(1−t)²∙(0, 0) + 2(1−t)t∙(1/2, 0) + t²∙(1, 1) =
(0, 0) + (t−t², 0) + (t², t²) =
(t, t²)
So by squaring the first coordinate and subtracting the second, you always get 0 for a point on the curve.
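A quick numeric spot check of this derivation (a Python sketch, not part of the WebGL code):

```python
# Interpolating the curve attributes (0,0), (1/2,0), (1,1) along the
# curve gives (t, t^2), so u*u - v vanishes for every point on the curve.
A, B, C = (0.0, 0.0), (0.5, 0.0), (1.0, 1.0)

def curve_attr(t):
    s = 1.0 - t
    return tuple(s * s * a + 2 * s * t * b + t * t * c
                 for a, b, c in zip(A, B, C))

for i in range(11):
    u, v = curve_attr(i / 10.0)
    assert abs(u * u - v) < 1e-12
```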
Cubic is more difficult
Triangles are particularly easy. If you have a triangle with corners A, B and C, then for any point P inside the triangle (or in fact anywhere in the plane) there is a unique way to write P as αA+βB+γC with α+β+γ=1. This is essentially just a transformation between different 2D coordinate systems.
With cubic Bézier curves you have four defining points. The convex hull of these is a quadrilateral. While the parametrized representation of the curve still defines it in terms of linear combinations of these four points, this process is no longer easily reversible: you can't take a point in the plane and decompose it uniquely into the coefficients of the linear combination. Even if you take homogeneous coordinates (i.e. projective interpolation for your parameters), you still have to have your corners in a plane if you want to avoid seams at the inner triangle boundaries. Since you can get cubic Bézier curves to self-intersect, there can even be points on the Bézier curve which correspond to more than one value of t.
One way to tackle cubic
What you can do is have a closer look at the implicit representation. When you have
Px = (1−t)³∙Ax + 3(1−t)²t∙Bx + 3(1−t)t²∙Cx + t³∙Dx
Py = (1−t)³∙Ay + 3(1−t)²t∙By + 3(1−t)t²∙Cy + t³∙Dy
you can use a computer algebra system (or manual resultant computation) to eliminate t from these equations, resulting in a sixth-degree equation in all the other variables which characterizes the fact that the point (Px, Py) lies on the curve. To simplify things, you can choose an affine coordinate system such that
Ax = Ay = Bx = Dy = 0,
By = Dx = 1
in other words you use A as the origin, AD as the x unit vector and AB as the y unit vector. Then with respect to this coordinate system, point C has some specific coordinates (Cx, Cy) which you'll have to compute. If you use these coordinates as attributes for the vertices, then the linear interpolation of that attribute results in (Px, Py), which are the coordinates of the current point with respect to that coordinate system.
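Computing (Cx, Cy) is a small change-of-basis problem; here is a hedged Python sketch (the function name is mine; it expresses C − A in the basis (D − A, B − A) by Cramer's rule, and the degenerate configurations discussed later in this answer are not handled):

```python
# Coordinates of control point C in the affine system with origin A,
# x unit vector AD, y unit vector AB. Points are (x, y) tuples.
def curve_space_c(A, B, C, D):
    ux, uy = D[0] - A[0], D[1] - A[1]   # x basis vector AD
    vx, vy = B[0] - A[0], B[1] - A[1]   # y basis vector AB
    px, py = C[0] - A[0], C[1] - A[1]
    det = ux * vy - uy * vx             # zero when A, B, D are collinear
    return ((px * vy - py * vx) / det, (ux * py - uy * px) / det)
```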
Using these coordinates, the condition for the point to lie on the curve is, according to my Sage computation, as follows:
0 = (-27*Cy^3 + 81*Cy^2 - 81*Cy + 27)*Px^3
+ (81*Cx*Cy^2 - 162*Cx*Cy - 27*Cy^2 + 81*Cx + 54*Cy - 27)*Px^2*Py
+ (-81*Cx^2*Cy + 81*Cx^2 + 54*Cx*Cy - 54*Cx - 9*Cy + 9)*Px*Py^2
+ (27*Cx^3 - 27*Cx^2 + 9*Cx - 1)*Py^3
+ (27*Cy^3 + 81*Cx*Cy - 81*Cy^2 + 81*Cy - 54)*Px^2
+ (-54*Cx*Cy^2 - 81*Cx^2 + 81*Cx*Cy + 81*Cx + 27*Cy - 54)*Px*Py
+ (27*Cx^2*Cy - 9*Cx)*Py^2
+ (-81*Cx*Cy + 27)*Px
The things in parentheses only depend on the coordinates of the control points, so they could become uniforms or per-face attributes in your shader code. In the fragment shader you'd plug in Px and Py from the interpolated position attribute, and use the sign of the result to decide what color to use.
There is a lot of room for improvements. It might be that a more clever way of choosing the coordinate system leads to a simpler formula. It might be that such a simpler formula, or perhaps even the formula above, could be simplified a lot by using the distributive law in a clever way. But I don't have time now to hunt for better formulations, and the above should be enough to get you going.
There are also some problems with my choice of coordinate system in specific situations. If B lies on the line AD, you may want to swap the roles of A and D and of B and C. If both B and C lie on that line, then the Bézier curve is itself a line, which is another special case although it's easy to implement. If A and D are the same point, you could write a different equation using AB and AC as the basis vectors. Distinguishing all these special cases, with some leeway for numeric errors, can be quite painful. You could avoid that by e.g. just making A the origin, essentially just translating your coordinate system. The resulting equation would be more complicated, but also more general since it would cover all the special cases simultaneously.
I was in need of a little math help that I can't seem to find the answer to, any links to documentation would be greatly appreciated.
Here's my situation: I have no idea where I am in this maze, but I need to move around and find my way back to the start. I was thinking of implementing a waypoint list of places I've been, offset from my start at (0, 0). This is a 2D Cartesian plane.
I've been given 2 properties: my translation speed from 0 to 1 and my rotation speed from -1 to 1, where -1 is hard left and +1 is hard right. These are speeds, not angles, so that's where my problem lies. If I'm given 0 as a translation speed and 0.2 as a rotation speed, I will continually turn to my right at a slow speed.
How do I figure out the offsets given these 2 variables? I can store them every time I take a 'step'.
I just need to figure out the offsets in x and y terms given the translation and rotation speeds, and the rotation needed to get to those points.
Any help is appreciated.
Your question is unclear on a couple of points, so I have to make some assumptions:
During each time interval, translation speed and rotational velocity are constant.
You know the values of these variables in every time interval (and you know rotational velocity in usable units, like radians per second, not just "very left").
You know initial heading.
You can maintain enough precision that roundoff error is not a problem.
Given that, there is an exact solution. First the easy part:
delta_angle = omega * delta_t
Where omega is the angular velocity. The distance traveled (maybe along a curve) is
dist = speed * delta_t
and the radius of the curve is
radius = dist / delta_angle
(This gets huge when angular velocity is near zero; we'll deal with that in a moment.) If the angle (at the beginning of the interval) is zero, defined as pointing in the +x direction, then the translation in the interval is easy, and we'll call it delta_x_0 and delta_y_0:
delta_x_0 = radius * sin(delta_angle)
delta_y_0 = radius * (1 - cos(delta_angle))
Since we want to be able to deal with very small delta_angle and very large radius, we'll expand sin and cos, and use this only when angular velocity is close to zero:
dx0 = r * sin(da) = (dist/da) * [ da - da^3/3! + da^5/5! - ...]
= dist * [ 1 - da^2/3! + da^4/5! - ...]
dy0 = r * (1-cos(da)) = (dist/da) * [ da^2/2! - da^4/4! + da^6/6! - ...]
= dist * [ da/2! - da^3/4! + da^5/6! - ...]
But angle generally isn't equal to zero, so we have to rotate these displacements:
dx = cos(angle) * dx0 - sin(angle) * dy0
dy = sin(angle) * dx0 + cos(angle) * dy0
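Putting the exact update together as a Python sketch (the names are mine; the small-angle branch stands in for the series expansions above):

```python
import math

# Advance pose (x, y, angle) through one interval of constant speed and
# angular velocity omega. Exact arc motion, with a straight-line fallback
# when omega * dt is tiny and the radius would blow up.
def step(x, y, angle, speed, omega, dt):
    dist = speed * dt
    da = omega * dt
    if abs(da) < 1e-9:
        dx0, dy0 = dist, 0.0            # straight-line limit of the arc
    else:
        radius = dist / da
        dx0 = radius * math.sin(da)
        dy0 = radius * (1.0 - math.cos(da))
    # rotate the local displacement by the heading at the interval start
    dx = math.cos(angle) * dx0 - math.sin(angle) * dy0
    dy = math.sin(angle) * dx0 + math.cos(angle) * dy0
    return x + dx, y + dy, angle + da
```

Repeated calls accumulate the waypoint offsets relative to the (0, 0) start.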
You could do it in two stages: first work out the change of direction to get a new direction vector, and then work out the new position using this new direction. Something like:
angle = angle + omega * delta_t;
const double d_x = cos( angle );
const double d_y = sin( angle );
x = x + d_x * delta_t * v;
y = y + d_y * delta_t * v;
where you store your current angle out at each step. ( d_x, d_y ) is the current direction vector and omega is the rotation speed that you have. delta_t is obviously your timestep and v is your speed.
This may be too naive, splitting it up into two distinct stages. I'm not sure; I haven't really thought it through too much and haven't tested it, but if it works let me know!