Rotation of Tetrahedra for 3D Tessellation - math

I'm trying to render some 3D graphics with a bunch of tetrahedra, and I'm trying to figure out how to rotate one tetrahedron so that it sits perfectly face-to-face with another tetrahedron. If this is confusing, multiple tetrahedra touching face to face would look like this.
I'm using OpenGL to programmatically rotate objects, so I can only rotate about one of the three axes at a time. For example, I can rotate clockwise 20 degrees on X, then counterclockwise 45 degrees on Z, etc.
I understand the programming aspect of this problem (using OpenGL's glRotatef() function to rotate about one axis at a time), but I'm more interested in the specific angles needed on each axis to achieve the 3D tessellation.
Thanks for any help, let me know if you need more clarification.

If they need to be perfectly face to face, I would not try to find a rotation at all.
Instead, I would start with one tetrahedron. Decide which face is shared with the next one.
Take the cross product of two edges on this face (there is a 50% chance that it points in the direction of the 4th point; in that case, invert the vector). Normalize it. Multiply by sqrt(6)/3 * edge_length (this is a constant, so precompute it!).
You now have a vector pointing in the direction of the new tetrahedron's 4th vertex (the other 3 you already know, they're the same as the ones on the face!), with the length of the new tetrahedron's height.
All you now need is an origin for your vector: Take the arithmetic mean of the coordinates of the 3 shared vertices, that will give the center point of that face.
Add the vector to that point, giving you the final point.
You now have two tetrahedra sharing one face (regardless of orientation), with no rotation math needed.
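In case it helps, here is a minimal plain-Java sketch of that construction (all names are mine, and it assumes regular tetrahedra with a known edge length):

static double[] sub(double[] a, double[] b) {
    return new double[] { a[0]-b[0], a[1]-b[1], a[2]-b[2] };
}
static double[] cross(double[] a, double[] b) {
    return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
}
static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// a, b, c: the three shared face vertices; oldApex: the 4th vertex of the
// existing tetrahedron. Returns the 4th vertex of the face-to-face neighbor.
static double[] neighborApex(double[] a, double[] b, double[] c,
                             double[] oldApex, double edgeLength) {
    double[] n = cross(sub(b, a), sub(c, a));      // normal of the shared face
    double len = Math.sqrt(dot(n, n));
    for (int i = 0; i < 3; i++) n[i] /= len;       // normalize
    double[] center = new double[3];               // face center = mean of a, b, c
    for (int i = 0; i < 3; i++) center[i] = (a[i] + b[i] + c[i]) / 3.0;
    if (dot(n, sub(oldApex, center)) > 0)          // the 50% case: flip away from the old apex
        for (int i = 0; i < 3; i++) n[i] = -n[i];
    double h = Math.sqrt(6.0) / 3.0 * edgeLength;  // tetrahedron height (precompute!)
    double[] apex = new double[3];
    for (int i = 0; i < 3; i++) apex[i] = center[i] + h * n[i];
    return apex;
}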

Related

How to find the appropriate rotation of a pentagon for fitting into a LibGDX hex-tessellated sphere?

I've got a tricky question today that involves a lot of vectors, and I'm trying to keep them all straight. What I have is this shape (mostly hexagons with 12 pentagons): http://i.imgur.com/WDSWEcF.jpg
And I want to place 12 pentagon meshes into their 12 spots. I start by creating the 12 meshes at the origin (the center of this shape) and then using the following code to rotate and move them into position.
for (int i = 0; i < 12; ++i) {
    Vector3 pentPoint = pentPoints.get(i); // The center of each pentagon.
    ModelInstance pent = pents.get(i);
    // Direction from the origin (the center of the shape) to the pentagon's center.
    Vector3 direction = pentPoint.cpy().sub(new Vector3(0, 0, 0)).nor();
    pent.transform.setToRotation(Vector3.Y, direction);
    pent.transform.setTranslation(pentPoint);
}
Now, this is almost what I need. It results in this: http://i.imgur.com/Ch5Jhb8.jpg. Forgetting about the scaling for now, you can see that the pentagon is rotated improperly: it doesn't line up with its slot. I know that I can fix this rotation using pent.transform.rotate(Vector3.Y, *value*); based on some value for each pentagon. The problem is, I have no idea how to calculate what this value should be.
Can anyone help or point me to some resources? Alternatively, I could use the fact that I know the coordinates of every vertex in the shape to fill in these pentagons by drawing triangles using LibGDX's ModelBuilder, but I think this would be less performant than positioning the .objs. Thoughts?
I don't know anything about the library you're using, but maybe I can help with the geometry. One approach would be to draw a mesh for 1/5 of one of the pentagons. I suggest you do that in place, rather than at the origin. You need to know two adjacent vertices of a pentagon. From that, you can easily calculate the center of the pentagon (I can supply formulas if you wish). The three points you now have determine a triangle which is a "fundamental domain" for the rotation group of the dodecahedron. If you have a mesh on the fundamental domain, it can be propagated to the other 4/5 of the pentagon you chose by repeating a 72 degree rotation about the axis through the origin determined by the center of the pentagon. Call that rotation A. You can represent it by axis angle, quaternion, whatever.
To propagate the mesh to other pentagons in the figure, you just need one more rotation: a 180 degree rotation which takes your chosen pentagon to another nearby pentagon. Again, I could give a formula for the axis if you like, but if you can find the center of a second pentagon with the information you already have, the axis is determined by the midpoint of the segment connecting the two centers. (You may have to normalize the point determining the axis, depending on how you represent rotations.) Call the 180 degree rotation about that axis rotation B.
Rotations A and B together generate the entire 60-element rotation group of the icosahedron, which will allow you to propagate the mesh on the fundamental domain to every other pentagon in the figure. If you're not careful, however, you may hit some parts twice and others not at all. I think you can do it in this order: start with a fundamental domain. 4 A's fill in the first pentagonal face (let's call it the north pole). Then a B will map that pentagon to an adjacent pentagon. 4 more A's will fill in a meridian of pentagons. Another B will take a pentagon on the meridian to the other meridian. 4 more A's will fill in the second meridian. Finally, another B will map a pentagon on the second meridian to the south pole.
The orientations of all the pentagons will be correct in this procedure.
Does that help?
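If it's useful, here is a rough LibGDX sketch of rotations A and B (c1 and c2 are assumed to be the centers of two adjacent pentagons; the method names are mine):

import com.badlogic.gdx.math.Quaternion;
import com.badlogic.gdx.math.Vector3;

// Rotation A: 72 degrees about the axis through the origin and the
// chosen pentagon's center.
static Quaternion rotationA(Vector3 c1) {
    return new Quaternion(c1.cpy().nor(), 72f);
}
// Rotation B: 180 degrees about the axis through the (normalized)
// midpoint of the two pentagon centers; it swaps the two pentagons.
static Quaternion rotationB(Vector3 c1, Vector3 c2) {
    Vector3 axis = c1.cpy().add(c2).scl(0.5f).nor();
    return new Quaternion(axis, 180f);
}
// Usage: rotationA(c1).transform(vertex) rotates a mesh vertex in place;
// four A's fill in a pentagon, a B hops to a neighbor, and so on as above.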

How to get new camera direction vector when moving an arbitrary relative angle

I am implementing a camera class and am getting stuck on some things.
Let's suppose the camera is at point (0,0,0) looking in a certain direction, with its corresponding UP and RIGHT vectors.
I have a joystick control which allows you to go forward-backwards, or change orientation by moving (left-right) or (up-down), according to the above mentioned vectors.
Given these 3 vectors, how can I compute the resulting direction vector if, for instance, I want to turn N degrees to the right?
If you are talking about rotating your camera, here is how it is done: every rotation is a matrix that transforms coordinates, so all you have to do is calculate the matrix of your rotation and then apply it to the Dir, Up and Right vectors of your camera to get the new ones after the rotation is done.
Here is a little reading about rotation matrices (see the section on 3D rotations):
http://mathworld.wolfram.com/RotationMatrix.html
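As a concrete plain-Java sketch of applying such a rotation without a matrix library (names are my own), Rodrigues' rotation formula does the same job as the matrix:

// Rotate v by 'degrees' about a unit-length axis (Rodrigues' formula).
static double[] rotateAbout(double[] v, double[] axis, double degrees) {
    double t = Math.toRadians(degrees);
    double c = Math.cos(t), s = Math.sin(t);
    double d = axis[0]*v[0] + axis[1]*v[1] + axis[2]*v[2];   // axis . v
    double[] x = {                                           // axis x v
        axis[1]*v[2] - axis[2]*v[1],
        axis[2]*v[0] - axis[0]*v[2],
        axis[0]*v[1] - axis[1]*v[0]
    };
    double[] out = new double[3];
    for (int i = 0; i < 3; i++)
        out[i] = v[i]*c + x[i]*s + axis[i]*d*(1 - c);
    return out;
}
// Turning N degrees to the right: rotate both Dir and Right about Up; the
// sign of the angle depends on your handedness convention.
// dir = rotateAbout(dir, up, -N);  right = rotateAbout(right, up, -N);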

Where is the triangle normal pointing so I can map correctly?

Here is what I have so far.
I have a 3D model, and I made a triangle mesh for it. I've calculated and applied normals to the model too.
I want to apply different textures to the triangles. I also have the direction vectors of all the textures I need.
For mapping, I do this:
I just calculate the dot product of each triangle normal with each texture's direction vector, and compare the results to see which texture would be suitable based on the dot product.
But I realised that it is not as straightforward as I thought, because two or more different triangles could lie in almost the same plane in 3D space, with one facing towards me and the other facing the opposite direction (parallel but pointing opposite ways).
I think a better question is: how do I use the calculated dot product to distinguish which way a triangle faces, so I know which image/texture should be used?
If the triangles are facing in opposite directions, the normals will also face in opposite directions, and the dot products will have opposite signs. Therefore the dot product gives you enough information to distinguish between the opposite faces. I can't think of a simple test which would give better results than the dot product.
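A small sketch of that selection loop (plain Java, my own names); the key point is to compare the signed dot products rather than their absolute values:

static int bestTexture(double[] normal, double[][] textureDirs) {
    int best = -1;
    double bestDot = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < textureDirs.length; i++) {
        // Signed dot product: opposite-facing triangles get opposite signs.
        double d = normal[0]*textureDirs[i][0]
                 + normal[1]*textureDirs[i][1]
                 + normal[2]*textureDirs[i][2];
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return best;
}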

Calculating 2D angles for 3D objects in perspective

Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle with 90-degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge towards the horizon.
Given such a rectangle, how do you calculate, in 2D, the angle of the edge vectors of a face that is at right angles to it?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is that each corner is a right angle - and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?
It's quite easy: you should use basic two-point perspective rules.
First of all you need 2 vanishing points, one to the left and one to the right of your object. They both lie on the same horizon line.
http://img62.imageshack.us/img62/9669/perspectiveh.png
After placing the horizon (which sets the eye height) and the vanishing points (their positions change the field of view), you can easily calculate where your lines go (of course, you need to be able to calculate the line that passes through two points; I think you can do that).
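To make that concrete, here is a minimal plain-Java sketch (my own naming): a vanishing point is the intersection of two photographed edges that are parallel in 3D, and each red edge then lies on the line from its corner of the blue face towards the other face's vanishing point.

// Each line is given by two 2D image points; returns null if (nearly) parallel.
static double[] intersect(double[] p1, double[] p2, double[] q1, double[] q2) {
    double rx = p2[0]-p1[0], ry = p2[1]-p1[1];   // direction of the first line
    double sx = q2[0]-q1[0], sy = q2[1]-q1[1];   // direction of the second line
    double denom = rx*sy - ry*sx;
    if (Math.abs(denom) < 1e-12) return null;    // parallel in the image too
    double t = ((q1[0]-p1[0])*sy - (q1[1]-p1[1])*sx) / denom;
    return new double[] { p1[0] + t*rx, p1[1] + t*ry };
}
// E.g. feed it the top and bottom edges of the blue face to get one
// vanishing point, then draw from a shared corner towards the other one.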
Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a Bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When looping is complete, your box is defined with (MinX, MinY) as the upper left, (MaxX, MinY) as the upper right, and so on...
Response to your comment:
If you want your box after a projection, what you need is the "transformed" points; then apply the bounding-box loop as stated above.
"Transformed" usually means 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on any plane that you projected onto.
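A minimal sketch of that loop (plain Java, my own names), run over the transformed/projected 2D points:

// Returns { minX, minY, maxX, maxY } for a set of projected 2D points.
static double[] boundingBox(double[][] points2d) {
    double minX = Double.POSITIVE_INFINITY, maxX = Double.NEGATIVE_INFINITY;
    double minY = Double.POSITIVE_INFINITY, maxY = Double.NEGATIVE_INFINITY;
    for (double[] p : points2d) {
        minX = Math.min(minX, p[0]);  maxX = Math.max(maxX, p[0]);
        minY = Math.min(minY, p[1]);  maxY = Math.max(maxY, p[1]);
    }
    // Corners in screen coordinates: (minX, minY) upper left,
    // (maxX, minY) upper right, (minX, maxY) lower left, (maxX, maxY) lower right.
    return new double[] { minX, minY, maxX, maxY };
}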
A possible algorithm would be:
1. Find the normal, which you can do by taking the cross product of vectors connecting two pairs of different corners.
2. Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis).
3. Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space, this is a considerably simpler problem).
4. Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
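A rough sketch of steps 1-2 (plain Java, helpers and names are mine): build an orthonormal basis whose third row is the unit polygon normal; its rows form the rotation matrix, and since the matrix is orthonormal, the inverse needed in step 4 is just the transpose.

static double[] cross(double[] a, double[] b) {
    return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
}
static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// n must be the unit normal of the polygon.
static double[][] toPlaneBasis(double[] n) {
    // Any vector not parallel to n works as a seed.
    double[] seed = Math.abs(n[0]) < 0.9 ? new double[]{1, 0, 0} : new double[]{0, 1, 0};
    double[] u = cross(seed, n);                 // first in-plane axis
    double len = Math.sqrt(dot(u, u));
    for (int i = 0; i < 3; i++) u[i] /= len;
    double[] v = cross(n, u);                    // second in-plane axis
    return new double[][] { u, v, n };           // rows = rotation matrix
}
// The 2D coordinates of a 3D point p are then (dot(u, p), dot(v, p)).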
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
1. Get the minimum/maximum value along each three-dimensional axis over every point of your 2D polygon. A single pass with a loop/iterator over the list of points will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
2. Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of the axis examined or the orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
3. Then get the distance of each "2D polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, and certainly crummy, but quick. You could probably capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
If you can assume that there are some constraints on the shapes, then you might be able to get away with less information. For example, if your shape is the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass": the largest distance would identify the tip of the triangle, the second largest would be the two points farthest from the tip, and so on. If there is some order to the list, e.g. points are entered in counter-clockwise order (about the normal), you could identify all the points.
This sounds like a fair bit of computation, so it might be reasonable to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This gives you an "up" vector that you can cross with the normal to define body coordinates, for example. Also, the normal can be defined by an ordering of the point list. If you can't assume anything about the shapes (or even if the shapes are merely symmetrical, for example), then you will need more data. It depends on your constraints.
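As a small sketch of the center-of-mass idea (plain Java, my own names):

static double[] centerOfMass(double[][] pts) {
    double[] com = new double[3];
    for (double[] p : pts)
        for (int i = 0; i < 3; i++) com[i] += p[i] / pts.length;
    return com;
}
static double dist(double[] a, double[] b) {
    double s = 0;
    for (int i = 0; i < 3; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return Math.sqrt(s);
}
// Ranking points by dist(p, centerOfMass(pts)) then identifies them for a
// known symmetrical shape: the farthest point is the tip, the next two are
// the points farthest from the tip, and so on.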
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located) - but this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin) - and this is where the trouble starts.
I would recommend using the Y axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up" - but then you are in trouble in case your plane is orthogonal to the Y axis (in that case you might want to use the projected Z axis as "up" instead).
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system - you can find all of it by googling for raytracer algorithms for polygons.
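A minimal sketch of that recommendation (plain Java, my own names): project the world Y axis onto the polygon's plane to define "up", falling back to the Z axis when the plane is (nearly) orthogonal to Y.

// n must be the unit normal of the polygon's plane.
static double[] upInPlane(double[] n) {
    double[] cand = Math.abs(n[1]) < 0.99 ? new double[]{0, 1, 0} : new double[]{0, 0, 1};
    double d = cand[0]*n[0] + cand[1]*n[1] + cand[2]*n[2];       // cand . n
    double[] up = new double[3];
    for (int i = 0; i < 3; i++) up[i] = cand[i] - d * n[i];      // drop the normal component
    double len = Math.sqrt(up[0]*up[0] + up[1]*up[1] + up[2]*up[2]);
    for (int i = 0; i < 3; i++) up[i] /= len;
    return up;
}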
