Projection matrix point to sphere surface - math

I need to project a 3D object onto a sphere's surface (uhm.. like casting a shadow).
AFAIR this should be possible with a projection matrix.
If the "shadow receiver" were a plane, then my projection matrix would be a 3D-to-2D plane projection, but my receiver in this case is a 3D spherical surface.
So given sphere1(centerpoint, radius), sphere2(othercenter, otherradius) and an eyepoint, how can I compute a matrix that projects all points from sphere2 onto sphere1 (like casting a shadow)?

Do you mean that given a vertex v you want the following projection:
v'= centerpoint + (v - centerpoint) * (radius / |v - centerpoint|)
This is not possible with a projection matrix. You could easily do it in a shader though.

Matrices are commonly used to represent linear operations, like projection onto a plane.
In your case, the resulting vertices aren't deduced from input using a linear function, so this projection is not possible using a matrix.

If sphere1 is sphere((0,0,0),1), that is, the sphere of radius 1 centered at the origin, then you're in effect asking for a way to convert any location (x,y,z) in 3D to a corresponding location (x', y', z') on the unit sphere. This is equivalent to vector renormalization: (x',y',z') = (x,y,z)/sqrt(x^2+y^2+z^2).
If sphere1 is not the unit sphere but is, say, sphere((a,b,c),R), you can do mostly the same thing:
(x',y',z') = R*(x-a,y-b,z-c) / sqrt((x-a)^2+(y-b)^2+(z-c)^2) + (a,b,c). This is equivalent to changing coordinates so the first sphere is the unit sphere, solving the problem, then changing coordinates back.
As people have pointed out, these functions are nonlinear, so the projection cannot be called a "matrix." But if you prefer for some reason to start with a projection matrix, you could project first from 3D to a plane, then from a plane to the sphere. I'm not sure if that would be any better though.
Finally, let me point out that linear maps don't produce division-by-zero errors, but if you look closely at the formulas above, you'll see that this map can. Geometrically, that's because it's hard to project the center point of a sphere to its boundary.
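For illustration, here is a minimal Java sketch of this nonlinear projection (the method name is mine; center and radius describe sphere1), including a guard for the degenerate center point just mentioned:

// Projects point p onto the surface of the sphere given by center and radius.
// Returns null for the degenerate case p == center, where the projection is undefined.
static double[] projectOntoSphere(double[] p, double[] center, double radius) {
    double dx = p[0] - center[0];
    double dy = p[1] - center[1];
    double dz = p[2] - center[2];
    double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
    if (len == 0.0) return null;              // cannot project the center point
    double s = radius / len;
    return new double[] { center[0] + dx * s, center[1] + dy * s, center[2] + dz * s };
}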

Related

OpenGL: equation of the line going through a point defined by a 4x4 matrix? (camera, for example)

I would like to know the set of 3 equations (in world coordinates) of the line going through my camera (perpendicular to the camera screen). The position and rotation of my camera in world coordinates are defined by a 4x4 matrix.
Any idea?
A parametric line is simple: just extract the Z-axis direction vector Z and the origin point O from the direct camera matrix (see the link below for how to do it). Then any point P on your line is defined as:
P(t) = O + t*Z
where t is your parameter. The camera view direction is usually -Z for an OpenGL perspective, in which case:
t ∈ (-inf, 0]
Depending on your projection you might want to use:
t ∈ [-z_far, -z_near]
The problem is that there are many combinations of conventions, so you need to know whether your matrix is in row-major or column-major order (so you know whether the direction vectors and origin are in rows or columns). Also, the camera matrix in graphics is usually the inverse one, so you need to invert it first. For more info about this see:
Understanding 4x4 homogenous transform matrices
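As a sketch (assuming a column-major, OpenGL-style camera-to-world matrix stored as a flat array of 16 values; the naming is mine):

// m is the camera-to-world (already inverted) matrix in column-major order:
// column 2 holds the Z axis, column 3 the origin.
static double[] pointOnViewLine(double[] m, double t) {
    double[] O = { m[12], m[13], m[14] };   // origin (translation column)
    double[] Z = { m[8],  m[9],  m[10] };   // Z-axis direction vector
    // P(t) = O + t*Z; with the usual OpenGL convention the view direction is -Z,
    // so points in front of the camera correspond to negative t.
    return new double[] { O[0] + t * Z[0], O[1] + t * Z[1], O[2] + t * Z[2] };
}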

Picking in true 3D isometric view

To view my 3D environment, I use the "true" 3D isometric projection (flat square on the XZ plane, Y is "always" 0). I used the explanation on Wikipedia: http://en.wikipedia.org/wiki/Isometric_projection to work out how to do this transformation:
The projection matrix is an orthographic projection matrix between some minimum and maximum coordinate.
The view matrix is two rotations: one around the Y-axis (n * 45 degrees) and one around the X-axis (arctan(sin(45 degrees))).
The result looks ok, so I think I have done it correctly.
But now I want to be able to pick a coordinate with the mouse. I have successfully implemented this by rendering coordinates to an invisible framebuffer and then reading the pixel under the mouse cursor to get the coordinate. Although this works fine, I would really like to see a mathematical solution, because I will need it to calculate bounding boxes, frustums of the area on the screen, and stuff like that.
My instincts tell me to:
- go from screen coordinates to 2D projection coordinates (or however you say this; I mean transforming screen coordinates to a coordinate between -1 and +1 for both axes, with y inverted)
- untransform the coordinate with the inverse of the view-matrix.
- yeah... untransform this coordinate with the inverse of the projection matrix, but as my instincts tell me, this won't work, as everything will have the same Z-coordinate.
This, while all the information is perfectly available in the isometric view (I know that the Y value is always 0). So I should be able to convert the isometric 2D (x, y) coordinate to a calculated 3D (x, 0, z) coordinate without using scans or anything like that.
My math isn't bad, but this is something I can't seem to grasp.
Edit: IMO, every different (x, 0, z) coordinate corresponds to a different (x2, y2) coordinate in the isometric view. So I should be able to simply calculate a way from (x2, y2) to (x, 0, z). But how?
Anyone?
There is something called project and unproject to transform screen coordinates to world coordinates and vice versa...
You seem to be missing some core concepts here (it’s been a while since I did this stuff, so minor errors included):
There are 3 kinds of coordinates involved here (there are more, these are the relevant ones): Scene, Projection and Window
Scene (3D) are the coordinates in your world
Projection (3D) are those coordinates after being transformed by camera position and projection
Window (2D) are the coordinates in your window. They are generated from projection by scaling x and y appropriately and discarding z (z is still used for “who’s in front?” calculations)
You cannot transform from window to scene with a matrix, as every point in window corresponds to a whole line in scene. If you want (x, 0, z) coordinates, you can generate this line and intersect it with the y-plane.
If you want to do this by hand, generate two points in projection with the same (x,y) and different (arbitrary) z coordinates and transform them to scene by multiplying with the inverse of your projection transformation. Now intersect the line through those two points with your y-plane and you’re done.
Note that there should be a “static” solution (a single formula) to this problem – if you solve this all on paper, you should get to it.
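A sketch of that procedure in Java; unprojectToWorld is a placeholder for whatever inverse view-projection transform your math library provides, and the rest is just the line/plane intersection:

// ndcX, ndcY: mouse position mapped to [-1, 1] on both axes, with y inverted.
// unprojectToWorld is assumed to apply the inverse of (projection * view) and divide by w.
static double[] pickOnGroundPlane(double ndcX, double ndcY) {
    double[] near = unprojectToWorld(ndcX, ndcY, -1.0);   // two points on the picking ray
    double[] far  = unprojectToWorld(ndcX, ndcY,  1.0);
    double[] dir  = { far[0] - near[0], far[1] - near[1], far[2] - near[2] };
    double s = -near[1] / dir[1];                         // intersect near + s*dir with the plane y = 0
    return new double[] { near[0] + s * dir[0], 0.0, near[2] + s * dir[2] };
}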

In a TBN Matrix are the normal, tangent, and bitangent vectors always perpendicular?

This is related to a problem described in another question (images there):
Opengl shader problems - weird light reflection artifacts
I have a .obj importer that creates a data structure and calculates the tangents and bitangents. Here is the data for the first triangle in my object:
My understanding of tangent space is that the normal points outward from the vertex, the tangent is perpendicular (orthogonal?) to the normal vector and points in the direction of positive S in the texture, and the bitangent is perpendicular to both. I'm not sure what you call it but I thought that these 3 vectors formed what would look like a rotated or transformed x,y,z axis. They wouldn't be 3 randomly oriented vectors, right?
Also my understanding: The normals in a normal map provide a new normal vector. But in tangent space texture maps there is no built in orientation between the rgb encoded normal and the per vertex normal. So you use a TBN matrix to bridge the gap and get them in the same space (or get the lighting in the right space).
But then I saw the object data... My structure has 270 vertices and all of them have a 0 for the Tangent Y. Is that correct for tangent data? Are these tangents in like a vertex normal space or something? Or do they just look completely wrong? Or am I confused about how this works and my data is right?
To get closer to solving my problem in the other question, I need to make sure my data is right and that my understanding of how tangent-space lighting math works is correct.
The tangent and bitangent vectors point in the direction of the S and T components of the texture coordinate (U and V for people not used to OpenGL terms). So the tangent vector points along S and the bitangent points along T.
So yes, these do not have to be orthogonal to either the normal or each other. They follow the direction of the texture mapping. Indeed, that's their purpose: to allow you to transform normals from model space into the texture's space. They define a mapping from model space into the space of the texture.
The tangent and bitangent will only be orthogonal to each other if the S and T components at that vertex are orthogonal. That is, if the texture mapping has no shearing. And while most texture mapping algorithms will try to minimize shearing, they can't eliminate it. So if you want an accurate matrix, you need a non-orthogonal tangent and bitangent.
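For reference, one common way to derive a per-triangle tangent and bitangent from positions and texture coordinates looks roughly like this (a sketch only; p0..p2 are vertex positions, uv0..uv2 the corresponding (S,T) coordinates):

// Solves edge1 = dU1*T + dV1*B and edge2 = dU2*T + dV2*B for the tangent T and bitangent B.
static double[][] tangentBitangent(double[] p0, double[] p1, double[] p2,
                                   double[] uv0, double[] uv1, double[] uv2) {
    double[] e1 = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
    double[] e2 = { p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2] };
    double dU1 = uv1[0] - uv0[0], dV1 = uv1[1] - uv0[1];
    double dU2 = uv2[0] - uv0[0], dV2 = uv2[1] - uv0[1];
    double f = 1.0 / (dU1 * dV2 - dU2 * dV1);             // degenerate if the UVs are collinear
    double[] tangent = new double[3], bitangent = new double[3];
    for (int i = 0; i < 3; i++) {
        tangent[i]   = f * ( dV2 * e1[i] - dV1 * e2[i]);
        bitangent[i] = f * (-dU2 * e1[i] + dU1 * e2[i]);
    }
    return new double[][] { tangent, bitangent };
}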

What are barycentric calculations used for?

I've been looking at XNA's Barycentric method and the descriptions I find online are pretty opaque to me. An example would be nice. Just an explanation in English would be great... what is the purpose and how could it be used?
From Wikipedia:
In geometry, the barycentric coordinate system is a coordinate system in which the location of a point is specified as the center of mass, or barycenter, of masses placed at the vertices of a simplex (a triangle, tetrahedron, etc).
They are used, I believe, for raytracing in game development.
When a ray intersects a triangle in a normal mesh, you just record it as either a hit or a miss. But if you want to implement a subsurf modifier (image below), which makes meshes much smoother, you will need the distance the ray hit from the center of the triangle (which is much easier to work with in Barycentric coordinates).
Subsurf modifiers are not that hard to visualize:
The cube is the original shape, and the smooth mesh inside is the "subsurfed" cube, I think with a recursion depth of three or four.
Actually, that might not be correct. Don't take my exact word for it, but I do know that they are used for texture mapping on geometric shapes.
Here's a little set of slides you can look at: http://www8.cs.umu.se/kurser/TDBC07/HT04/handouts/HO-lecture11.pdf
In practice, the barycentric coordinates of a point P with respect to a triangle ABC are just its weights (u,v,w) according to the triangle's vertices, such that P = u*A + v*B + w*C. If the point lies within the triangle, you have u,v,w in [0,1] and u+v+w = 1.
They are used for any task involving knowledge of a point's location with respect to the vertices of a triangle, e.g. interpolation of attributes across a triangle. For example, in raytracing you have a hit point inside the triangle. When you want to know that point's normal or other attributes, you compute its barycentric coordinates within the triangle. Then you can use these weights to sum up the attributes of the triangle's vertices, and you get the interpolated attribute.
To compute a point P's barycentric coordinates (u,v,w) within a triangle ABC you can use:
u = [PBC] / [ABC]
v = [APC] / [ABC]
w = [ABP] / [ABC]
where [ABC] denotes the area of the triangle ABC.
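A small sketch of that computation for points in the plane, where the signed areas reduce to 2D cross products (for a 3D triangle you would use cross-product magnitudes of the edge vectors instead):

// Twice the signed area of triangle ABC (the factor of 2 cancels in the ratios).
static double cross(double[] a, double[] b, double[] c) {
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

// Returns {u, v, w} such that P = u*A + v*B + w*C and u + v + w = 1.
static double[] barycentric(double[] p, double[] a, double[] b, double[] c) {
    double area = cross(a, b, c);                 // zero for a degenerate triangle
    double u = cross(p, b, c) / area;             // [PBC] / [ABC]
    double v = cross(a, p, c) / area;             // [APC] / [ABC]
    double w = cross(a, b, p) / area;             // [ABP] / [ABC]
    return new double[] { u, v, w };
}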

Show lat/lon points on screen, in 3d

It's been a while since my math in university, and now I've come to need it like I never thought I would.
So, this is what I want to achieve:
Having a set of 3D points (geographical points, latitude and longitude; altitude doesn't matter), I want to display them on a screen, taking the viewing direction into account.
This is going to be used along with a camera and a compass, so when I point the camera to the North, I want to display on my computer the points that the camera should "see". It's a kind of Augmented Reality.
Basically, what (I think) I need is a way of transforming the 3D points viewed from above (like viewing the points on Google Maps) into a set of 3D points viewed from the side.
The conversion of Latitude and longitude to 3-D cartesian (x,y,z) coordinates can be accomplished with the following (Java) code snippet. Hopefully it's easily converted to your language of choice. lat and lng are initially the latitude and longitude in degrees:
lat *= Math.PI / 180.0;                       // degrees to radians
lng *= Math.PI / 180.0;
double z = Math.sin(-lat);
double x = Math.cos(lat) * Math.sin(-lng);
double y = Math.cos(lat) * Math.cos(-lng);
The vector (x,y,z) will always lie on a sphere of radius 1 (i.e. the Earth's radius has been scaled to 1).
From there, a 3D perspective projection is required to convert the (x,y,z) into (X,Y) screen coordinates, given a camera position and angle. See, for example, http://en.wikipedia.org/wiki/3D_projection
It really depends on the degree of precision you require. If you're working on a high-precision, close-in view of points anywhere on the globe, you will need to take the ellipsoidal shape of the earth into account. This is usually done using an algorithm similar to the one described here, on page 38 under 'Conversion between Geographical and Cartesian Coordinates':
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
If you don't need high precision the techniques mentioned above work just fine.
Could anyone explain exactly what these params mean?
I've tried, and the results were very weird, so I guess I am misunderstanding some of the params for the perspective projection.
* a_{x,y,z} - the point in 3D space that is to be projected.
* c_{x,y,z} - the location of the camera.
* θ_{x,y,z} - the rotation of the camera. When c_{x,y,z} = <0,0,0> and θ_{x,y,z} = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.
* e_{x,y,z} - the viewer's position relative to the display surface. [1]
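To make those parameters a bit more concrete, here is a rough sketch of that projection for the simplest case where the camera rotation θ is zero (with rotation, you would first rotate the difference a - c by the θ angles before the divide); the method name is mine:

// Projects world point a to 2D screen coordinates for a camera at c with no rotation,
// where e is the viewer's position relative to the display surface.
static double[] projectPoint(double[] a, double[] c, double[] e) {
    double dx = a[0] - c[0];
    double dy = a[1] - c[1];
    double dz = a[2] - c[2];                  // with rotation: d = R(theta) * (a - c)
    double bx = (e[2] / dz) * dx + e[0];      // perspective divide by the depth dz
    double by = (e[2] / dz) * dy + e[1];
    return new double[] { bx, by };           // undefined when dz == 0 (point in the camera plane)
}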
Well, you'll want some 3D vector arithmetic to move your origin, and probably some quaternion-based rotation functions to rotate the vectors to match your direction. There are any number of good tutorials on using quaternions to rotate 3D vectors (since they're used a lot for rendering and such), and the 3D vector stuff is pretty simple if you can remember how vectors are represented.
Well, just a piece of advice: you can plot these points in a 3D space (you can do this easily using OpenGL).
You have to transform the lat/long into another system, for example polar or Cartesian.
So, starting from lat/long, you put the origin of your space at the center of the earth, then you transform your data into Cartesian coordinates:
z = R * sin(lat)
x = R * cos(lat) * sin(long)
y = R * cos(lat) * cos(long)
R is the radius of the world; you can set it to 1 if you only need to catch the direction between your point of view and the points you need "to see".
Then put the virtual camera at a point in the space you've created, and link the data from your real camera (simply a vector) to the data of the virtual one.
The next step towards what you want to do is to plot the images from your camera overlapped with your "virtual space"; ultimately, your real camera should act as a control to move the virtual one in the virtual space.

Resources