Can rotating a polygon 180 degrees produce its reflection?

I have written code for reflection in Java in the traditional way, so I know how to produce the reflection of a polygon.
My question is about trying something new: if I rotate a polygon by 180 degrees, will it produce the reflection of the polygon? I think it will, but would that be mathematically acceptable?

It depends on the shape. Some polygons have rotational symmetry, some have reflectional symmetry. If you rotate a rectangle by 180 degrees, or reflect it in a suitable axis, you get back the same shape.
But in general, no: they are different operations. If you rotate a shape by 180 degrees, you get an upside-down shape, not a reflected shape.

Think of a triangle with a 90° angle:
|\
|_\
Apply a 180° rotation:
 _
\ |
 \|
That's not the reflection of the triangle.
For some shapes, like squares or rectangles, rotation happens to give the same result as a reflection, but usually it won't.
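To make the difference concrete, here is a minimal Java sketch (class and method names are my own, just for illustration) comparing a 180° rotation about the origin with a reflection across the y-axis, applied to the same vertex:

```java
public class RotationVsReflection {
    // 180-degree rotation about the origin: (x, y) -> (-x, -y)
    static double[] rotate180(double x, double y) {
        return new double[] { -x, -y };
    }

    // Reflection across the y-axis: (x, y) -> (-x, y)
    static double[] reflectAcrossY(double x, double y) {
        return new double[] { -x, y };
    }

    public static void main(String[] args) {
        // Same vertex, two different results: the rotation also flips the sign of y.
        double[] rot = rotate180(2, 1);      // (-2, -1)
        double[] ref = reflectAcrossY(2, 1); // (-2,  1)
        System.out.println(rot[0] + "," + rot[1] + " vs " + ref[0] + "," + ref[1]);
    }
}
```

The two results only coincide for vertices with y = 0, which is why the trick appears to work for shapes that happen to be symmetric but fails in general.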

Related

How to orient a 3D object in relation to its direction and the ground?

In my game I have characters walking around a 3D terrain. The characters treat the terrain as a 2D game map, so each character has a direction and a rotation on a 2D plane.
I want to rotate the characters as they walk on the terrain, so that they are oriented to stand in relation to the terrain rather than always being oriented as if they were walking on flat ground, while keeping the original direction of the characters.
For each arbitrary x/z (width/depth) point on the game map I have:
the (x, y, z) vector of the point on the terrain
the normal of the specific terrain face related to the point
Using this, how do I set the rotation of the characters to achieve this?
Depending on which axis you want to rotate the object about, the dot product of the face's normal with that axis gives you the cosine of the angle between the two vectors. That is the angle by which you have to rotate your object.
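A minimal Java sketch of that idea (names are illustrative; it assumes the normal is unit length and uses the world up axis (0, 1, 0) as the reference):

```java
public class TiltAngle {
    // The cosine of the angle between two unit vectors is their dot product;
    // since the second vector here is the up axis (0, 1, 0), the dot product is just ny.
    static double tiltAngle(double nx, double ny, double nz) {
        double dot = ny;                          // dot(normal, up)
        dot = Math.max(-1.0, Math.min(1.0, dot)); // clamp against rounding errors
        return Math.acos(dot);                    // angle in radians
    }

    public static void main(String[] args) {
        System.out.println(tiltAngle(0, 1, 0));   // flat ground: angle 0
        System.out.println(tiltAngle(1, 0, 0));   // vertical face: pi/2
    }
}
```

The clamp matters in practice: normals that are "unit length" after floating-point math can produce dot products slightly outside [-1, 1], and Math.acos returns NaN for those.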

In a TBN Matrix are the normal, tangent, and bitangent vectors always perpendicular?

This is related to a problem described in another question (images there):
Opengl shader problems - weird light reflection artifacts
I have a .obj importer that creates a data structure and calculates the tangents and bitangents. Here is the data for the first triangle in my object:
My understanding of tangent space is that the normal points outward from the vertex, the tangent is perpendicular (orthogonal) to the normal vector and points in the direction of positive S in the texture, and the bitangent is perpendicular to both. I'm not sure what it's called, but I thought these 3 vectors formed something like a rotated or transformed x, y, z axis. They wouldn't be 3 randomly oriented vectors, right?
Also my understanding: The normals in a normal map provide a new normal vector. But in tangent space texture maps there is no built in orientation between the rgb encoded normal and the per vertex normal. So you use a TBN matrix to bridge the gap and get them in the same space (or get the lighting in the right space).
But then I saw the object data... My structure has 270 vertices and all of them have a 0 for the Tangent Y. Is that correct for tangent data? Are these tangents in like a vertex normal space or something? Or do they just look completely wrong? Or am I confused about how this works and my data is right?
To get closer to solving my problem in the other question, I need to make sure my data is right and that my understanding of how tangent-space lighting math works is correct.
The tangent and bitangent vectors point in the direction of the S and T components of the texture coordinate (U and V for people not used to OpenGL terms). So the tangent vector points along S and the bitangent points along T.
So yes, these do not have to be orthogonal to either the normal or each other. They follow the direction of the texture mapping. Indeed, that's their purpose: to allow you to transform normals from model space into the texture's space. They define a mapping from model space into the space of the texture.
The tangent and bitangent will only be orthogonal to each other if the S and T directions at that vertex are orthogonal. That is, if the texture mapping has no shearing. And while most texture-mapping algorithms will try to minimize shearing, they can't eliminate it. So if you want an accurate matrix, you need a non-orthogonal tangent and bitangent.
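For reference, the usual way to compute a per-triangle tangent and bitangent is from the triangle's position edges and UV edges. A minimal Java sketch (names are my own; it assumes the UV mapping of the triangle is not degenerate, i.e. the UV-edge determinant is nonzero):

```java
public class TangentSpace {
    // Per-triangle tangent and bitangent from vertex positions and UVs.
    // Returns {tx, ty, tz, bx, by, bz}.
    static double[] tangentBitangent(double[] p0, double[] p1, double[] p2,
                                     double[] uv0, double[] uv1, double[] uv2) {
        double[] e1 = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
        double[] e2 = { p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2] };
        double du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1];
        double du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];
        double f = 1.0 / (du1 * dv2 - du2 * dv1); // inverse UV-edge determinant
        double[] out = new double[6];
        for (int i = 0; i < 3; i++) {
            out[i]     = f * ( dv2 * e1[i] - dv1 * e2[i]); // tangent: follows +S
            out[3 + i] = f * (-du2 * e1[i] + du1 * e2[i]); // bitangent: follows +T
        }
        return out;
    }

    public static void main(String[] args) {
        // Axis-aligned UVs produce an orthogonal T/B pair; sheared UVs would not.
        double[] tb = tangentBitangent(
            new double[]{0, 0, 0}, new double[]{1, 0, 0}, new double[]{0, 1, 0},
            new double[]{0, 0},    new double[]{1, 0},    new double[]{0, 1});
        System.out.println(java.util.Arrays.toString(tb));
    }
}
```

With a sheared mapping (e.g. uv2 = (0.5, 1)), the resulting tangent and bitangent come out non-orthogonal, which is exactly the point made above.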

Planar 3D triangle mesh to 2D

I have a set of triangles forming a mesh that lies in a single plane (think of a wall of a room, with triangles defining its geometry). I need to show a 2D representation of the mesh so that every point (x, y, z) of each triangle is transformed to (x, y). Each triangle must keep exactly the same shape/area and be placed at the same location relative to the other triangles.
There are already answers that explain how to transform a single 3D triangle into 2D, like this one:
Flattening a 3d triangle
but they set one vertex of the triangle as the origin. How can I apply the same idea so I don't need to place each triangle at the right position relative to the other triangles?
You can use the same approach. Just pick one point (the first vertex of the first triangle is as good as any) as the origin and use that same value for all the points in your mesh.
This should transform them in a consistent manner.
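A minimal Java sketch of this approach (helper names are my own): pick one shared origin and one shared pair of in-plane axes derived from the plane normal, then project every vertex of the mesh onto those axes. Because the axes are orthonormal, distances and areas are preserved.

```java
public class FlattenPlanarMesh {
    static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[]{ a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }
    static double[] normalize(double[] a) {
        double len = Math.sqrt(dot(a, a));
        return new double[]{ a[0]/len, a[1]/len, a[2]/len };
    }

    // Maps every 3D point of a planar mesh to 2D using ONE shared origin and
    // ONE shared in-plane basis, so all triangles keep their relative positions.
    // 'origin' is any point on the plane (e.g. the first vertex of the first triangle).
    static double[][] flatten(double[][] points, double[] origin, double[] normal) {
        double[] n = normalize(normal);
        // Any vector not parallel to n works as a helper for building the basis.
        double[] helper = Math.abs(n[0]) < 0.9 ? new double[]{1, 0, 0} : new double[]{0, 1, 0};
        double[] u = normalize(cross(n, helper)); // first in-plane axis
        double[] v = cross(n, u);                 // second in-plane axis
        double[][] out = new double[points.length][2];
        for (int i = 0; i < points.length; i++) {
            double[] d = sub(points[i], origin);
            out[i][0] = dot(d, u);
            out[i][1] = dot(d, v);
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] pts = { {0, 0, 0}, {3, 4, 0}, {3, 0, 0} };
        System.out.println(java.util.Arrays.deepToString(flatten(pts, pts[0], new double[]{0, 0, 1})));
    }
}
```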

Understanding cos(theta) and sin(theta)

Is there any detailed document that describes the math functions cos(theta) and sin(theta) with respect to image rotation?
I am trying to visualize how these functions are used to calculate the location of a shifted point/rect when an object is rotated, but I have been unable to do so.
Can anybody give me a link/document for this?
It's Geometry/Trigonometry... Theoretically you have (or will) learn about them in Math. But to put a long story short, you will want to translate your Canvas (or other drawing surface) a distance calculated by the cosine and sine functions. Hypotenuse * cosine(angle in radians) will give you the horizontal displacement, and Hypotenuse * sine(angle in radians) will give you the vertical displacement. After you translate, you will then want to rotate your Canvas by the angle.
I'm not sure if reversing the order works the same or not. I believe if you rotate your Canvas first, then all you have to do is translate horizontally the distance you want. But I could be wrong on that part (since I have never done it in this order). Personally, I use the first approach.
If you want to learn more about sine, cosine, or tangent, just google "trigonometric ratios".
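The displacement formulas above can be written down directly; here is a minimal Java sketch (names are my own):

```java
public class PolarOffset {
    // Horizontal and vertical displacement of a point that lies a distance
    // 'hypotenuse' away from the origin, at angle 'radians' from the x-axis.
    static double[] offset(double hypotenuse, double radians) {
        return new double[] {
            hypotenuse * Math.cos(radians), // horizontal displacement
            hypotenuse * Math.sin(radians)  // vertical displacement
        };
    }

    public static void main(String[] args) {
        double[] d = offset(10, Math.toRadians(30)); // 30-degree rotation, radius 10
        System.out.println(d[0] + ", " + d[1]);
    }
}
```

Note that Java's Math.cos and Math.sin take radians, so degrees must go through Math.toRadians first; forgetting this is a very common source of "my rotation is wrong" bugs.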

What are barycentric calculations used for?

I've been looking at XNA's Barycentric method and the descriptions I find online are pretty opaque to me. An example would be nice. Just an explanation in English would be great... what is the purpose and how could it be used?
From Wikipedia:
In geometry, the barycentric coordinate system is a coordinate system in which the location of a point is specified as the center of mass, or barycenter, of masses placed at the vertices of a simplex (a triangle, tetrahedron, etc).
They are used, I believe, for raytracing in game development.
When a ray intersects a triangle in a normal mesh, you just record it as either a hit or a miss. But if you want to implement a subsurf modifier (image below), which makes meshes much smoother, you will need the distance the ray hit from the center of the triangle (which is much easier to work with in Barycentric coordinates).
Subsurf modifiers are not that hard to visualize:
The cube is the original shape, and the smooth mesh inside is the "subsurfed" cube, I think with a recursion depth of three or four.
Actually, that might not be correct. Don't take my exact word for it, but I do know that they are used for texture mapping on geometric shapes.
Here's a little set of slides you can look at: http://www8.cs.umu.se/kurser/TDBC07/HT04/handouts/HO-lecture11.pdf
In practice, the barycentric coordinates of a point P with respect to a triangle ABC are just its weights (u, v, w) according to the triangle's vertices, such that P = u*A + v*B + w*C. If the point lies within the triangle, then u, v, w are in [0, 1] and u + v + w = 1.
They are used for any task involving knowledge of a point's location with respect to the vertices of a triangle, e.g. interpolation of attributes across a triangle. For example, in raytracing you get a hit point inside a triangle. When you want to know that point's normal or other attributes, you compute its barycentric coordinates within the triangle. Then you can use these weights to sum up the attributes of the triangle's vertices, and you get the interpolated attribute.
To compute a point P's barycentric coordinates (u,v,w) within a triangle ABC you can use:
u = [PBC] / [ABC]
v = [APC] / [ABC]
w = [ABP] / [ABC]
where [ABC] denotes the area of the triangle ABC.
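The area-ratio formulas above translate directly into code. A minimal Java sketch for the 2D case (names are my own; it uses signed areas, so u, v, w still sum to 1 even for points outside the triangle, where some weights go negative):

```java
public class Barycentric {
    // Signed area of triangle ABC (positive if the vertices are counter-clockwise).
    static double area(double ax, double ay, double bx, double by, double cx, double cy) {
        return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
    }

    // Barycentric coordinates (u, v, w) of P in triangle ABC:
    // u = [PBC]/[ABC], v = [APC]/[ABC], w = [ABP]/[ABC], so u + v + w = 1.
    static double[] barycentric(double px, double py,
                                double ax, double ay,
                                double bx, double by,
                                double cx, double cy) {
        double abc = area(ax, ay, bx, by, cx, cy);
        double u = area(px, py, bx, by, cx, cy) / abc;
        double v = area(ax, ay, px, py, cx, cy) / abc;
        double w = area(ax, ay, bx, by, px, py) / abc;
        return new double[] { u, v, w };
    }

    public static void main(String[] args) {
        // The centroid of a triangle has weights (1/3, 1/3, 1/3).
        double[] c = barycentric(1.0 / 3, 1.0 / 3, 0, 0, 1, 0, 0, 1);
        System.out.println(java.util.Arrays.toString(c));
    }
}
```

Once you have (u, v, w), interpolating any per-vertex attribute is just u*attrA + v*attrB + w*attrC, which is exactly the raytracing use case described above.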
