3D graphics - vector math, I seem to get something wrong

The goal is to project a point in 3D space onto the camera's screen (the final goal is to produce a point cloud)
This is the setup:
Camera
position: px,py,pz
up direction: ux,uy,uz
look_at_direction: lx,ly,lz
screen_width
screen_height
screen_dist
Point in space:
p = (x,y,z)
And
Init u, v, w:
w = normalize(position - look_at_direction) = normalize((px,py,pz) - (lx,ly,lz))
u = normalize((ux,uy,uz) cross-product w)
v = normalize(w cross-product u)
This gives me the u, v, w axes of the camera.
These are the steps I'm supposed to implement, as I understand them. I believe something was lost in translation.
(1) Find the ray from the camera to the point
I subtract the camera's position from the point:
ray_to_point = p - position
(2) Calculate the dot product between the ray to the point and the normalized ray
to the center of the screen (the camera direction). Divide the result by sc_dist.
This will give you a ratio between the distance of the point and the distance to the camera's screen.
ratio = dot(ray_to_point, w) / screen_dist
Here I'm not sure whether I should use the camera's original look_at value or the w vector, which is the unit vector in the camera's space.
(3) Divide the ray to the point by the ratio found in step 2, and add it to the camera position. This will give you the projection of the point on the camera screen.
point_on_camera_screen = ray_to_point / ratio
(4) Find a vector between the center of the camera's screen and the projected point.
Am I supposed to find the center pixel of the screen? How can I do that?
Thanks.

The first step is to calculate the difference vector from the camera to p (call it diff). That's correct.
The next step is to express the difference vector in the camera's coordinate system. Since its axes are orthogonal, this can be done with the dot product. Be sure that the axes are unit vectors:
diffInCameraSpace = (dot(diff, u), dot(diff, v), dot(diff, w))
Now comes the scaling part so that the resulting difference vector has a z-component of screen_dist. So:
diffInCameraSpace *= screenDist / diffInCameraSpace.z
You don't want to transform it back to world space. You might need some further information on how camera units are mapped to pixels, but at this step you are basically done. You might want to shift the resulting diffInCameraSpace by (screenWidth / 2, screenHeight / 2, 0). Forget the z-component and you have the position on the screen.
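For concreteness, here is a minimal numpy sketch of those steps (function and variable names are mine; it assumes u, v, w are already an orthonormal basis with w pointing from the camera toward the scene, and that one camera unit maps to one pixel):
import numpy as np
def project_point(p, cam_pos, u, v, w, screen_dist, screen_w, screen_h):
    # Difference vector from the camera to the point.
    diff = np.asarray(p, dtype=float) - np.asarray(cam_pos, dtype=float)
    # Express it in camera space via dot products with the basis vectors.
    diff_cam = np.array([np.dot(diff, u), np.dot(diff, v), np.dot(diff, w)])
    if diff_cam[2] <= 0:
        return None  # point is behind the camera
    # Scale so the z-component equals the screen distance.
    diff_cam *= screen_dist / diff_cam[2]
    # Shift so (0, 0) is a screen corner instead of the screen center;
    # the exact camera-unit-to-pixel mapping is application-specific.
    return diff_cam[0] + screen_w / 2.0, diff_cam[1] + screen_h / 2.0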

Related

Calculating the angle tilt using only accelerometers

I would like to know how to calculate, from my Android phone's accelerometer readings, the angle between the two segments connecting the accelerometer to the bottom of the tree (B) and the accelerometer to the top of the tree (T).
The accelerometer reports an acceleration value on 3 axes every second, so I averaged the readings and have:
For the phone pointed towards B: Ay1 = -9.69 m.s^-2 and Az1 = 0.71 m.s^-2
For the phone pointed towards T: Ay2 = -9.71 m.s^-2 and Az2 = 0.71 m.s^-2
I am located at a distance D = 20m from the tree.
In the end I would like to know the value of H (the height of the tree), so I would like to know how to calculate the angle and then find the height.
Thanks for your help
The angles we need are the angles between world-up and device-up. Since the gravity vector points towards world-down, this is simply (assuming you are pointing with the device's y-axis):
cos(angle) = -a.y / sqrt(a.x^2 + a.y^2 + a.z^2)
The two angles we get from your readings are:
angle1 = 4.19065°
angle2 = 4.18205°
You can already see that the angles are very close as the two acceleration values are also extremely close. Btw, I wonder if you are really pointing with the y-axis because the gravity values suggest that you are holding the phone almost upright in both cases.
Anyway, if we assume that the two angles are correct, we can now calculate the height of the respective triangles assuming a length to target l. Then:
tan (90° - angle) = h / l
Assuming l=20 m, this gives us two height values:
h1 = 272.958 m
h2 = 273.521 m
These are heights above the height of the phone. In theory, one should be positive and the other should be negative. The height of the tree would be the difference of the two heights:
treeH = h2 - h1
treeH = 0.56338 m
As you have seen throughout the example, your readings must be pretty off. Nevertheless, this is how you would calculate the tree height.
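In Python, the whole calculation looks roughly like this (I'm assuming a.x = 0, since only the y and z readings were given):
import math
def tilt_angle(ax, ay, az):
    # Angle between world-up and the device's y-axis, from the gravity reading.
    return math.acos(-ay / math.sqrt(ax * ax + ay * ay + az * az))
l = 20.0                                   # distance to the tree, in metres
angle_b = tilt_angle(0.0, -9.69, 0.71)     # phone aimed at the bottom of the tree
angle_t = tilt_angle(0.0, -9.71, 0.71)     # phone aimed at the top of the tree
h_b = l * math.tan(math.pi / 2 - angle_b)  # tan(90 deg - angle) = h / l
h_t = l * math.tan(math.pi / 2 - angle_t)
tree_height = h_t - h_b                    # about 0.56 m with these readings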

How can I compute normal on the surface of a cylinder?

I am working on a ray tracer and I got around to adding cylinders to the scene. The point I am stuck at is finding the surface normal vector in the point the ray hits. I need this to be able to do the diffuse lighting. What I have at this point is the 3d point where the camera ray hits the cylinder and the actual cylinder which is defined with a point on the central axis, the vector representing the direction of the axis and the radius. So to sum up my question, how do I find the normal vector in a point having the cylinder hit point, the radius, a point on its axis and the direction vector of the axis?
The cylinder's normal vector starts on the centerline of the cylinder, at the same height (along the axis) as the point where the ray intersects the cylinder, and ends at the radial point of intersection. Normalize it and you have your unit normal vector.
If the cylinder centerline is not along the global z-direction of the scene you'll have to transform to cylinder coordinates, calculate the normal vector, and transform that back to global coordinates.
There are three possible situations:
the hit_pt is on the TOP CAP of the cylinder:
if (length(hit_pt - cy.top_center) < cy.radius)
surface_normal = cy.ori;
the hit_pt is on the BOTTOM CAP of the cylinder:
if (length(hit_pt - cy.bottom_center) < cy.radius)
surface_normal = -1 * cy.ori;
the hit_pt is on the SIDE of the cylinder. We can use the dot product to find the point pt on the centerline of the cylinder, such that the vector (hit_pt - pt) is orthogonal to the cylinder's orientation.
t = dot((hit_pt - cy.bottom_center), cy.ori); // cy.ori should be normalized and so has the length of 1.
pt = cy.bottom_center + t * cy.ori;
surface_normal = normalize(hit_pt - pt);
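Put together, the whole test might look like this (a Python/numpy sketch; cy.ori from the snippets above is called axis_dir here and is assumed to be normalized):
import numpy as np
def cylinder_normal(hit_pt, bottom_center, top_center, axis_dir, radius):
    hit_pt, bottom_center, top_center, axis_dir = (
        np.asarray(a, dtype=float)
        for a in (hit_pt, bottom_center, top_center, axis_dir))
    # Top cap: cap points (excluding the rim) are closer than `radius` to the cap center.
    if np.linalg.norm(hit_pt - top_center) < radius:
        return axis_dir
    # Bottom cap.
    if np.linalg.norm(hit_pt - bottom_center) < radius:
        return -axis_dir
    # Side: project the hit point onto the axis, then point radially outward.
    t = np.dot(hit_pt - bottom_center, axis_dir)   # axis_dir has length 1
    pt = bottom_center + t * axis_dir
    n = hit_pt - pt
    return n / np.linalg.norm(n)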

How to calculate azimuth & elevation relative to a camera direction of view in 3D ...?

I'm rusty a bit here.
I have a vector (camDirectionX, camDirectionY, camDirectionZ) that represents my camera direction of view.
I have a (camX, camY, camZ) that is my camera position.
Then, I have an object placed at (objectX, objectY, objectZ)
How can I calculate, from the camera's point of view, the azimuth & elevation of my object?
The first thing I would do, to simplify the problem, is transform the coordinate space so the camera is at (0, 0, 0) and pointing straight down one of the axes (so the direction is say (0, 0, 1)). Translating so the camera is at (0, 0, 0) is pretty trivial, so I won't go into that. Rotating so that the camera direction is (0, 0, 1) is a little trickier...
One way of doing it is to construct the full orthonormal basis of the camera, then stick that in a rotation matrix and apply it. The "orthonormal basis" of the camera is a fancy way of saying the three vectors that point forward, up, and right from the camera. They should all be at 90 degrees to each other (which is what the ortho bit means), and they should all be of length 1 (which is what the normal bit means).
You can get these vectors with a bit of cross-product trickery: the cross product of two vectors is perpendicular (at 90 degrees) to both.
To get the right-facing vector, we can just cross-product the camera direction vector with (0, 1, 0) (a vector pointing straight up). You'll need to normalise the vector you get out of the cross-product.
To get the up vector of the camera, we can cross product the camera direction vector with the right-facing vector we just calculated. Assuming both input vectors are normalised, this shouldn't need normalising.
We now have the orthonormal basis of the camera. If we stick these vectors into the rows of a 3x3 matrix, we get a rotation matrix that will transform our coordinate space so the camera is pointing straight down one of the axes (which one depends on the order you stick the vectors in).
It's now fairly easy to calculate the azimuth and elevation of the object.
To get the azimuth, just do an atan2 on the x/z coordinates of the object.
To get the elevation, project the object coordinates onto the x/z plane (just set the y coordinate to 0), then do:
acos(dot(normalise(object coordinates), normalise(projected coordinates)))
This will always give a positive angle -- you probably want to negate it if the object's y coordinate is less than 0.
The code for all of this will look something like:
import numpy as np
def normalise(v):
    return v / np.linalg.norm(v)
fwd = np.array([camDirectionX, camDirectionY, camDirectionZ], dtype=float)
cam = np.array([camX, camY, camZ], dtype=float)
obj = np.array([objectX, objectY, objectZ], dtype=float)
# if fwd is already normalised you can skip this
fwd = normalise(fwd)
# translate so the camera is at (0, 0, 0)
obj -= cam
# calculate the orthonormal basis of the camera
right = normalise(np.cross(fwd, [0.0, 1.0, 0.0]))
up = np.cross(right, fwd)
# rotate so the camera is pointing straight down the z axis
# (this is essentially a matrix multiplication)
obj = np.array([np.dot(obj, right), np.dot(obj, up), np.dot(obj, fwd)])
azimuth = np.arctan2(obj[0], obj[2])
# project onto the x/z plane for the elevation
proj = np.array([obj[0], 0.0, obj[2]])
elevation = np.arccos(np.dot(normalise(obj), normalise(proj)))
if obj[1] < 0:
    elevation = -elevation
One thing to watch out for is that the cross-product of your original camera vector with (0, 1, 0) will return a zero-length vector when your camera is facing straight up or straight down. To fully define the orientation of the camera, I've assumed that it's always "straight", but that doesn't mean anything when it's facing straight up or down -- you need another rule.
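One common rule (my suggestion, not part of the answer above) is to fall back to a different reference axis when the camera looks straight up or down:
import numpy as np
def camera_right(fwd, eps=1e-6):
    # Usual case: right = fwd x world-up.
    right = np.cross(fwd, [0.0, 1.0, 0.0])
    if np.linalg.norm(right) < eps:
        # Camera points straight up or down; use the world z-axis as the
        # reference instead. Any consistent rule works here.
        right = np.cross(fwd, [0.0, 0.0, 1.0])
    return right / np.linalg.norm(right)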

Radius of projected Sphere

I want to refine a previous question:
How do i project a sphere onto the screen?
(2) gives a simple solution:
approximate radius on screen[CLIP SPACE] = world radius * cot(fov / 2) / Z
with:
fov = field of view angle
Z = z distance from camera to sphere
result is in clipspace, multiply by viewport size to get size in pixels
Now my problem is that I don't have the FOV; only the view and projection matrices are known (and the viewport size, if that helps).
Anyone knows how to extract the FOV from the projection matrix?
Update:
This approximation works better in my case:
// angular radius of the sphere as seen from the camera, in radians
float angularRadius = glm::atan(radius / distance);
// scale by pixels-per-radian along the larger viewport axis (fov in degrees)
float radiusPixels = angularRadius * glm::max(viewPort.width, viewPort.height) / glm::radians(fov);
I'm a bit late to this party. But I came across this thread when I was looking into the same problem. I spent a day looking into this and worked through some excellent articles I found here:
http://www.antongerdelan.net/opengl/virtualcamera.html
I ended up starting with the projection matrix and working backwards. I got the same formula you mention in your post above (where cot(x) = 1/tan(x)).
radius_pixels = (radius_worldspace / {tan(fovy/2) * D}) * (screen_height_pixels / 2)
(where D is the distance from camera to the target's bounding sphere)
I'm using this approach to determine the radius of an imaginary trackball that I use to rotate my object.
Btw Florian, you can extract the fovy from the Projection matrix as follows:
If you take the Sy component from the Projection matrix as shown here:
Sx 0 0 0
0 Sy 0 0
0 0 Sz Pz
0 0 -1 0
where Sy = near / range
and where range = tan(fovy/2) x near
(you can find these definitions at the page I linked above)
if you substitute range in the Sy eqn above you get:
Sy = 1 / tan(fovy/2) = cot(fovy/2)
rearranging:
tan(fovy/2) = 1 / Sy
taking arctan (the inverse of tan) of both sides we get:
fovy/2 = arctan(1/Sy)
so,
fovy = 2 x arctan(1/Sy)
Not sure if you still care - it's been a while! - but maybe this will help someone else.
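In code, the extraction and the radius formula together look something like this (a Python sketch with my own function names; Sy sits on the diagonal at row 1, column 1 with 0-based indexing, so row- vs column-major storage doesn't matter for this entry):
import math
def fovy_from_projection(P):
    # tan(fovy/2) = 1 / Sy, so fovy = 2 * arctan(1 / Sy)
    return 2.0 * math.atan(1.0 / P[1][1])
def projected_radius_pixels(radius_world, dist, P, screen_height_px):
    fovy = fovy_from_projection(P)
    return (radius_world / (math.tan(fovy / 2.0) * dist)) * (screen_height_px / 2.0)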
Update: see below.
Since you have the view and projection matrices, here's one way to do it, though it's probably not the shortest:
transform the sphere's center into view space using the view matrix: call the result point C
transform a point on the surface of the sphere, e.g. C+(r, 0, 0) in world coordinates where r is the sphere's world radius, into view space; call the result point S
compute rv = distance from C to S (in view space)
let point S1 in view coordinates be C + (rv, 0, 0) - i.e. another point on the surface of the sphere in view space, for which the line C -> S1 is perpendicular to the "look" vector
project C and S1 into screen coords using the projection matrix as Cs and S1s
compute screen radius = distance between Cs and S1s
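A rough numpy sketch of those steps (function and parameter names are mine; it assumes the matrices follow the column-vector convention, i.e. points transform as M @ p):
import numpy as np
def transform(m, p):
    q = m @ np.append(np.asarray(p, dtype=float), 1.0)
    return q[:3] / q[3]                       # perspective divide
def sphere_screen_radius(center_world, radius_world, view, proj, viewport_w, viewport_h):
    # Sphere center and a surface point in view space.
    c = transform(view, center_world)
    s = transform(view, np.asarray(center_world, dtype=float) + [radius_world, 0.0, 0.0])
    rv = np.linalg.norm(s - c)                # view-space radius
    # Surface point whose offset from the center is perpendicular to the look vector.
    s1 = c + np.array([rv, 0.0, 0.0])
    # Project both into normalized device coordinates, then measure in pixels.
    c_ndc, s1_ndc = transform(proj, c), transform(proj, s1)
    dx = (s1_ndc[0] - c_ndc[0]) * viewport_w / 2.0
    dy = (s1_ndc[1] - c_ndc[1]) * viewport_h / 2.0
    return float(np.hypot(dx, dy))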
But yeah, like Brandorf said, if you can preserve the camera variables, like FOVy, it would be a lot easier. :-)
Update:
Here's a more efficient variant on the above: make an inverse of the projection matrix. Use it to transform the viewport edges back into view space. Then you won't have to project every box into screen coordinates.
Even better, do the same with the view matrix and transform the camera frustum back into world space. That would be more efficient for comparing many boxes against; but harder to figure out the math.
The answer posted at your link, radiusClipSpace = radius * cot(fov / 2) / Z (where fov is the field-of-view angle and Z is the z-distance to the sphere), definitely works. However, keep in mind that radiusClipSpace must be multiplied by the viewport's width to get a pixel measure. The value measured in radiusClipSpace will be between 0 and 1 if the object fits on the screen.
An alternative solution may be to use the solid angle of the sphere. The solid angle subtended by a sphere in a sky is basically the area it covers when projected to the unit sphere.
The formulae are given at this link but roughly what I'm doing is:
if( (!radius && !distance) || fabsf(radius) > fabsf(distance) )
  ; // NaN conditions. do something special.
theta = asinf( radius/distance ) ;
sphereSolidAngle = ( 1 - cosf( theta ) ) ; // not multiplying by 2*PI since only the ratio below is used
frustumSolidAngle = ( 1 - cosf( fovy / 2 ) ) / M_PI ; // I cheated here. I assumed
// the solid angle of a frustum is conical, then divided by PI
// to turn it into a square (area of unit square = area of unit circle / PI)
numPxCovered = 768.f*768.f * sphereSolidAngle / frustumSolidAngle ; // 768x768 screen
radiusEstimate = sqrtf( numPxCovered/M_PI ) ; // area = pi*r*r
This works out to roughly the same numbers as radius * cot(fov / 2) / Z. If you only want an estimate of the area covered by the sphere's projection in px, this may be an easy way to go.
I'm not sure if a better estimate of the solid angle of the frustum could be found easily. This method involves more computation than radius * cot(fov / 2) / Z.
The FOV is not directly stored in the projection matrix, but rather used when you call gluPerspective to build the resulting matrix.
The best approach would be to simply keep all of your camera variables in their own class, such as a frustum class, whose member variables are used when you call gluPerspective or similar.
It may be possible to get the FOVy back out of the matrix, but the math required eludes me.

Triangle mathematics for game development

I'm trying to make a triangle (an isosceles triangle) move around the screen, and at the same time rotate it slightly when the user presses a directional key (like right or left).
I would like the nose (top point) of the triangle to lead the triangle at all times (like the old Asteroids game).
My problem is with the maths behind this. At every X time interval, I want the triangle to move in "some direction", I need help finding this direction (x and y increments/decrements).
I can find the center point (centroid) of the triangle, and I have the topmost x and y point, so I have a line vector to work with, but not a clue as to "how" to work with it.
I think it has something to do with the old Sin and Cos methods and the amount (angle) that the triangle has been rotated, but I'm a bit rusty on that stuff.
Any help is greatly appreciated.
The arctangent (inverse tangent) of vy/vx, where vx and vy are the components of your (centroid->tip) vector, gives you the angle the vector is facing.
The classical arctangent gives you an angle normalized to -90° < r < +90°, however, so you have to add or subtract 180 degrees from the result, depending on the signs of vx and vy.
Luckily, your standard library should provide an atan2() function that takes vx and vy separately as parameters and returns an angle between -180° and +180° (or 0° and 360°, depending on the library). It will also deal with the special case where vx = 0, which would result in a division by zero if you were not careful.
See http://www.arctangent.net/atan.html or just search for "arctangent".
Edit: I've used degrees in my post for clarity, but Java and many other languages/libraries work in radians where 180° = π.
You can also just add vx and vy to the triangle's points to make it move in the "forward" direction, but make sure that the vector is normalized (vx² + vy² = 1), else the speed will depend on your triangle's size.
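As a small worked example of both points (Python, with hypothetical coordinates):
import math
# Hypothetical triangle: centroid, tip, vertices, and a movement speed.
cent_x, cent_y = 10.0, 10.0
tip_x, tip_y = 11.0, 11.0
points = [(11.0, 11.0), (9.5, 9.0), (10.5, 8.5)]
speed = 2.0
# Facing angle from the centroid->tip vector; atan2 handles vx == 0.
vx, vy = tip_x - cent_x, tip_y - cent_y
angle = math.degrees(math.atan2(vy, vx))      # 45 degrees here
# Move "forward": normalize the vector so the step size is independent
# of the triangle's size.
length = math.hypot(vx, vy)
points = [(px + speed * vx / length, py + speed * vy / length)
          for (px, py) in points]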
#Mark:
I've tried writing a primer on vectors, coordinates, points and angles in this answer box twice, but changed my mind on both occasions because it would take too long and I'm sure there are many tutorials out there explaining stuff better than I ever can.
Your centroid and "tip" coordinates are not vectors; that is to say, there is nothing to be gained from thinking of them as vectors.
The vector you want, vForward = pTip - pCentroid, can be calculated by subtracting the coordinates of the "tip" corner from the centroid point. The atan2() of this vector, i.e. atan2(tipY-centY, tipX-centX), gives you the angle your triangle is "facing".
As for what it's relative to, it doesn't matter. Your library will probably use the convention that the increasing X axis (---> the right/east direction on presumably all the 2D graphs you've seen) is 0° or 0 radians. The increasing Y (top, north) direction will correspond to 90° or (1/2)π radians.
It seems to me that you need to store the rotation angle of the triangle and possibly its current speed.
x' = x + speed * cos(angle)
y' = y + speed * sin(angle)
Note that angle is in radians, not degrees!
Radians = Degrees * RadiansInACircle / DegreesInACircle
RadiansInACircle = 2 * Pi
DegreesInACircle = 360
For the locations of the vertices, each is located at a certain distance and angle from the center. Add the current rotation angle before doing this calculation. It's the same math as for figuring the movement.
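For example, storing each vertex as an (angle, distance) pair relative to the centroid, the positions could be computed like this (a Python sketch with made-up values, in plain math coordinates with Y pointing up):
import math
# Each vertex as (angle from center, distance from center); values are made up.
shape = [(math.radians(90.0), 6.0),    # the nose
         (math.radians(210.0), 4.0),
         (math.radians(330.0), 4.0)]
def vertex_positions(center_x, center_y, rotation_angle):
    # Add the current rotation angle before converting back to x/y.
    return [(center_x + dist * math.cos(rotation_angle + ang),
             center_y + dist * math.sin(rotation_angle + ang))
            for ang, dist in shape]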
Here's some more:
Vectors represent displacement. Displacement, translation, movement or whatever you want to call it, is meaningless without a starting point, that's why I referred to the "forward" vector above as "from the centroid," and that's why the "centroid vector," the vector with the x/y components of the centroid point doesn't make sense. Those components give you the displacement of the centroid point from the origin. In other words, pOrigin + vCentroid = pCentroid. If you start from the 0 point, then add a vector representing the centroid point's displacement, you get the centroid point.
Note that:
vector + vector = vector
(addition of two displacements gives you a third, different displacement)
point + vector = point
(moving/displacing a point gives you another point)
point + point = ???
(adding two points doesn't make sense; however:)
point - point = vector
(the difference of two points is the displacement between them)
Now, these displacements can be thought of in (at least) two different ways. The one you're already familiar with is the rectangular (x, y) system, where the two components of a vector represent the displacement in the x and y directions, respectively. However, you can also use polar coordinates, (r, Θ). Here, Θ represents the direction of the displacement (an angle relative to an arbitrary zero angle) and r, the distance.
Take the (1, 1) vector, for example. It represents a movement one unit to the right and one unit upwards in the coordinate system we're all used to seeing. The polar equivalent of this vector would be (1.414, 45°); the same movement, but represented as a "displacement of 1.414 units in the 45°-angle direction". (Again, using a convenient polar coordinate system where the East direction is 0° and angles increase counter-clockwise.)
The relationship between polar and rectangular coordinates are:
Θ = atan2(y, x)
r = sqrt(x²+y²) (now do you see where the right triangle comes in?)
and conversely,
x = r * cos(Θ)
y = r * sin(Θ)
Now, since a line segment drawn from your triangle's centroid to the "tip" corner would represent the direction your triangle is "facing," if we were to obtain a vector parallel to that line (e.g. vForward = pTip - pCentroid), that vector's Θ-coordinate would correspond to the angle that your triangle is facing.
Take the (1, 1) vector again. If this was vForward, then that would have meant that your "tip" point's x and y coordinates were both 1 more than those of your centroid. Let's say the centroid is on (10, 10). That puts the "tip" corner over at (11, 11). (Remember, pTip = pCentroid + vForward by adding "+ pCentroid" to both sides of the previous equation.) Now in which direction is this triangle facing? 45°, right? That's the Θ-coordinate of our (1, 1) vector!
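The same example as a few lines of Python, just to make the round trip explicit:
import math
cent = (10.0, 10.0)
tip = (11.0, 11.0)
# vForward = pTip - pCentroid
vx, vy = tip[0] - cent[0], tip[1] - cent[1]
theta = math.degrees(math.atan2(vy, vx))   # 45.0 -> the direction the triangle faces
r = math.hypot(vx, vy)                     # 1.414...
# ...and back from polar to rectangular:
x = r * math.cos(math.radians(theta))      # 1.0
y = r * math.sin(math.radians(theta))      # 1.0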
Keep the centroid at the origin. Use the vector from the centroid to the nose as the direction vector. http://en.wikipedia.org/wiki/Coordinate_rotation#Two_dimensions will rotate this vector. Construct the other two points from this vector. Translate the three points to where they are on the screen and draw.
double v;     // velocity
double theta; // direction of travel (angle, in radians)
double dt;    // time elapsed
// To compute the position increments
double dx = v * dt * cos(theta);
double dy = v * dt * sin(theta);
// To compute the position of the top of the triangle,
// where (x, y) is the centroid's current position
double size;  // distance between centroid and top
double top_x = x + size * cos(theta);
double top_y = y + size * sin(theta);
I can see that I need to apply the common 2D rotation formulas to my triangle to get my result; I'm just having a little bit of trouble with the relationships between the different components here.
aib stated that:
"The arctangent (inverse tangent) of vy/vx, where vx and vy are the components of your (centroid->tip) vector, gives you the angle the vector is facing."
Are vx and vy the x and y coords of the centroid or the tip? I think I'm getting confused by the terminology of a "vector" here. I was under the impression that a vector was just a point in 2D (in this case) space that represented a direction.
So in this case, how is the centroid->tip vector calculated? Is it just the centroid?
meyahoocomlorenpechtel stated:
"It seems to me that you need to store the rotation angle of the triangle and possibly its current speed."
What is the rotation angle relative to? The origin of the triangle, or the game window itself? Also, for future rotations, is the angle the angle from the last rotation or the original position of the triangle?
Thanks all for the help so far, I really appreciate it!
You will want the topmost vertex to be the centroid in order to achieve the desired effect.
First, I would start with the centroid rather than calculate it. You know the position of the centroid and the angle of rotation of the triangle; I would use this to calculate the locations of the vertices. (I apologize in advance for any syntax errors, I have just started to dabble in Java.)
//starting point
double tip_x = 10;
double tip_y = 10;
should be
double center_x = 10;
double center_y = 10;
//triangle details
int width = 6; //base
int height = 9;
should be an array of 3 angle, distance pairs.
angle = rotation_angle + vertex[1].angle;
dist = vertex[1].distance;
p1_x = center_x + Math.cos(angle) * dist;
p1_y = center_y - Math.sin(angle) * dist;
// and the same for the other two points
Note that I am subtracting the Y distance. You're being tripped up by the fact that screen space is inverted. In our minds Y increases as you go up--but screen coordinates don't work that way.
The math is a lot simpler if you track things as position and rotation angle rather than deriving the rotation angle.
Also, in your final piece of code you're modifying the location by the rotation angle. The result will be that your ship turns by the rotation angle every update cycle. I think the objective is something like Asteroids, not a cat chasing its tail!
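Putting the pieces from this thread together, a minimal update loop might look like this (a Python sketch, not a drop-in for the Java above; the shape values are made up, and the Y flip handles inverted screen coordinates as described):
import math
class Ship:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.heading = math.radians(90.0)    # start pointing "up"
        self.speed = 0.0
        # (angle, distance) of each vertex relative to the centroid
        self.shape = [(0.0, 9.0),
                      (math.radians(140.0), 5.0),
                      (math.radians(220.0), 5.0)]
    def update(self, dt, turn_rate):
        self.heading += turn_rate * dt                        # rotate a little per frame
        self.x += self.speed * dt * math.cos(self.heading)
        self.y -= self.speed * dt * math.sin(self.heading)    # minus: screen Y grows downward
    def vertices(self):
        return [(self.x + d * math.cos(self.heading + a),
                 self.y - d * math.sin(self.heading + a))     # same Y flip as above
                for a, d in self.shape]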
