Where is the triangle normal pointing so I can map correctly?

Here is what I have so far.
I have a 3D model from which I built a triangle mesh, and I have calculated and applied normals to the model as well.
I want to apply different textures to the triangles, and I have a direction vector for each texture I need.
For mapping, I do this:
I calculate the dot product of each triangle normal with each texture's direction vector, then compare the results to decide which texture is most suitable based on those dot products.
But I realised it is not as straightforward as I thought, because two or more different triangles could lie in almost the same orientation in 3D space while facing opposite ways: one could be facing towards me and the other facing away (parallel planes, but opposite normals).
I think the better question is: how do I use the calculated dot product to distinguish which way a triangle faces, so that I know which image/texture should be used?

If the triangles are facing in opposite directions, the normals will also face in opposite directions, and the dot products will have opposite signs. Therefore the dot product gives you enough information to distinguish between the opposite faces. I can't think of a simple test which would give better results than the dot product.
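For example, here is a minimal Python sketch (the array names are placeholders) of picking, for each triangle, the texture whose direction vector gives the largest signed dot product with the triangle's normal; keeping the sign rather than its absolute value is exactly what separates a triangle facing along a direction from one facing away from it:

```python
import numpy as np

def pick_textures(triangle_normals, texture_directions):
    """For each (unit) triangle normal, return the index of the texture
    whose (unit) direction vector gives the largest *signed* dot product."""
    normals = np.asarray(triangle_normals, dtype=float)       # shape (T, 3)
    directions = np.asarray(texture_directions, dtype=float)  # shape (K, 3)

    # dots[t, k] = n_t . d_k, ranging from -1 (opposite) to +1 (same direction)
    dots = normals @ directions.T
    return np.argmax(dots, axis=1)

# Hypothetical usage: two parallel triangles facing opposite ways get
# different textures because their dot products have opposite signs.
normals = [[0, 0, 1], [0, 0, -1]]
directions = [[0, 0, 1], [0, 0, -1]]
print(pick_textures(normals, directions))  # -> [0 1]
```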

Related

Finding the normal between a cylinder and a triangle

I have a pretty rudimentary physics engine in the game I'm working on, handling collisions between moving cylindrical characters and static meshes made of triangles. The intended behavior is for characters to slide across surfaces, and in most cases it works fine. But the engine doesn't discriminate between a head-on collision and a glancing collision.
I'm not entirely sure what information I could give that would be helpful. I'm looking for a mathematical solution, at any rate: a method to determine the 'angle of contact' between an arbitrary cylinder and triangle. My instinct is that I need to find the point of contact between the triangle and the cylinder, then determine whether that point is within the triangle (using the triangle's regular normal) or along one of its edges (using the angle between the point of contact and some point on the cylinder, I'm not sure which), but I'm sure there's a better solution.
As requested, here are a couple of examples. In this first image, a cylinder travels downwards towards a triangle (in this example the triangle is vertical, simplified to a line). I project the velocity vector onto the plane of the triangle, using the formula Vf = V - N * dot(V, N). This is the intended behavior for this type of collision.
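For reference, here is a minimal Python sketch of that projection (assuming `v` is the velocity vector and `n` the triangle's normal; the normal is re-normalized in case it isn't unit length):

```python
import numpy as np

def slide_velocity(v, n):
    """Project velocity v onto the plane with unit normal n:
    Vf = V - N * dot(V, N). The component of v along n is removed,
    leaving only motion parallel to the surface."""
    v = np.asarray(v, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)          # guard against a non-unit normal
    return v - n * np.dot(v, n)

# Example: moving straight down onto a 45-degree slope slides along the slope.
print(slide_velocity([0.0, -1.0, 0.0], [1.0, 1.0, 0.0]))  # -> [ 0.5 -0.5  0. ]
```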
In this image, the cylinder's axis is parallel with the normal of the triangle. Under the current implementation, Vf is still determined using the triangle's natural normal, which would cause the cylinder to begin moving vertically. Under intended behavior, N would be perpendicular to the colliding edge of the triangle.
But these are just the two extremes of collision. There are going to be a bunch of in-betweens, so I need a more arbitrary solution.
This is my attempt at a more 3D example. I apologize for the poor perspective. The bottom-most vertex in this triangle is closer to the 'camera'. The point of collision between the cylinder and the triangle is marked by the red X. Under intended behavior, if the cylinder was moving directly away from the camera, it would slide to the left, along the length of the triangle's edge. No vertical movement would be imparted, as the point of contact is along the cylinder's, uh, tube section, rather than the caps.
Under current behavior, the triangle's normal is used. The cylinder would be pushed upwards, as though sliding across the face of the triangle, while doing little to prevent movement into the triangle.
I understand that this is a difficult request, so I appreciate the suggestions made to help refine my question.
What you're looking for is probably an edge collision detector. In rigid body collision systems there are usually two types of collisions: surface collisions (for colliding with things that have a regular surface normal, where the reaction normal can be computed easily, as you pointed out, by processing body A's velocity against body B's surface normal), and edge collisions (where body A hits an edge of body B, be it a box, a triangle or anything else). In that case the matter is more complicated because an edge is not a surface, so you can't compute its normal at all. Usually it's approximated one way or another; for a triangle mesh, for example, you can take the edge normal to be the average of the normals of the two triangles sharing the edge. There are other methods as well, some discussed here:
https://code.google.com/p/bullet/downloads/detail?name=CEDEC2011_ErwinCoumans.pdf&can=2&q=
Usually there's an edge-processing threshold value: if a collision occurs within that distance of an edge, it's considered an edge collision and processed differently.
See the examples here:
http://www.wildbunny.co.uk/blog/2012/10/31/2d-polygonal-collision-detection-and-internal-edges/
Googling "internal edge collision" and learning about rigid body collisions/dynamics in general will help you understand and solve this problem by yourself.

How do I calculate a normal vector based on multiple triangles sharing a vertex?

If I have a mesh of triangles, how does one go about calculating the normals at each given vertex?
I understand how to find the normal of a single triangle. If I have triangles sharing vertices, I can partially find the answer by finding each triangle's respective normal, normalizing it, adding it to the total, and then normalizing the end result. However, this obviously does not take into account proper weighting of each normal (many tiny triangles can throw off the answer when linked with a large triangle, for example).
I think a good method is a weighted average that uses angles instead of areas as the weights. In my opinion this is a better answer because the normal you are computing is a "local" feature, so you don't really care how big the contributing triangle is; you need a "local" measure of the contribution, and the angle between the two sides of the triangle at the specified vertex is exactly such a local measure.
With this approach, a lot of small (thin) triangles won't give you an unbalanced answer.
Using angles is the same as using an area-weighted average if you localize the computation to the intersection of the triangles with a small sphere centered at the vertex.
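For concreteness, a minimal sketch of the angle-weighted average (Python; `vertices` is assumed to be an (N, 3) array and `faces` an (M, 3) array of vertex indices with consistent winding):

```python
import numpy as np

def angle_weighted_vertex_normals(vertices, faces):
    """Vertex normals as the normalized sum of face normals, each weighted
    by the angle the face subtends at that vertex."""
    vertices = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(vertices)

    for tri in faces:
        for k in range(3):
            i = tri[k]
            j = tri[(k + 1) % 3]
            m = tri[(k + 2) % 3]
            e1 = vertices[j] - vertices[i]
            e2 = vertices[m] - vertices[i]
            face_n = np.cross(e1, e2)
            face_n = face_n / np.linalg.norm(face_n)
            # Angle between the two edges meeting at vertex i (the "local" weight).
            cos_a = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            normals[i] += np.arccos(np.clip(cos_a, -1.0, 1.0)) * face_n

    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0.0, 1.0, lengths)
```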
The weighted average appears to be the best approach.
But be aware that, depending on your application, sharp corners could still give you problems. In that case, you can compute multiple vertex normals by averaging surface normals whose cross product is less than some threshold (i.e., closer to being parallel).
Search for "Offset Triangular Mesh Using the Multiple Normal Vectors of a Vertex" by S. J. Kim et al. for more details about this method.
This blog post outlines three different methods and gives a visual example of why the standard and simple method (area weighted average of the normals of all the faces joining at the vertex) might sometimes give poor results.
You can give more weight to big triangles by multiplying the normal by the area of the triangle.
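A convenient shortcut for that: the cross product of two edges of a triangle has a magnitude equal to twice the triangle's area, so summing the unnormalized cross products already gives an area-weighted average. A minimal sketch (same assumed `vertices`/`faces` layout as above):

```python
import numpy as np

def area_weighted_vertex_normals(vertices, faces):
    """Vertex normals as the normalized sum of unnormalized face cross
    products; |cross| = 2 * area, so larger faces get more weight."""
    vertices = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(vertices)
    for a, b, c in faces:
        face_n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[a] += face_n
        normals[b] += face_n
        normals[c] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0.0, 1.0, lengths)
```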
Check out this paper: Discrete Differential-Geometry Operators for Triangulated 2-Manifolds.
In particular, the "Discrete Mean Curvature Normal Operator" (Section 3.5, Equation 7) gives a robust normal that is independent of tessellation, unlike the methods in the blog post cited by another answer here.
Obviously you need to use a weighted average to get a correct normal, but using the triangle's area won't give you what you need, since the area of each triangle has no relationship to the share of the weight that triangle's normal should represent at a given vertex.
If you instead base the weight on the angle between the two sides meeting at the vertex, you get a sensible weight for every triangle touching it. It might be convenient to convert the problem to 2D so the weights sum over a 360-degree fan, but in practice just using the angle itself as the weight in 3D, adding up all the weighted normals, and normalizing the final result should produce the correct answer.

Calculating 2D angles for 3D objects in perspective

Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle with 90-degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge towards the horizon.
Given such a rectangle, how do you calculate the 2D angles of the edge vectors of a second face that is at right angles to it?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
example http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is that each corner is a right angle - and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?
It's quite easy: you should use basic two-point perspective rules.
First of all you need 2 vanishing points, one to the left and one to the right of your object. They'll both stay on the same horizon line.
alt text http://img62.imageshack.us/img62/9669/perspectiveh.png
After having placed the horizon (which sets the eye height) and the vanishing points (their positions determine the field of view), you can easily calculate where your lines go. Of course, you need to be able to compute the line that passes through two points; I think you can do that.
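For the line calculations this involves (for example, extending the two converging edges of the marked face until they meet at their vanishing point), here is a minimal Python sketch of intersecting two 2D lines, each given by two image points, using homogeneous coordinates:

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4.
    A vanishing point is just the intersection of the images of two parallel
    edges. Returns None if the lines are parallel in the image."""
    def homogeneous(p):
        return np.array([p[0], p[1], 1.0])

    l1 = np.cross(homogeneous(p1), homogeneous(p2))  # line through p1, p2
    l2 = np.cross(homogeneous(p3), homogeneous(p4))  # line through p3, p4
    x = np.cross(l1, l2)                             # their intersection
    if abs(x[2]) < 1e-12:
        return None
    return x[:2] / x[2]

# Hypothetical usage: top and bottom edges of the marked face converge
# to a vanishing point far to the right of the image.
vp = line_intersection((10, 40), (200, 60), (10, 300), (200, 250))
print(vp)
```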
Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.

Detect Shapes in an array of points

I have an array of points. I want to know if this array of points represents a circle, a square or a triangle.
Where should I begin? (I use C#)
Depending on your problem, a good approach may be to use the Hough transform and the algorithms derived from it.
It consists of a transformation from image space to another space whose coordinates represent the object's parameters (angle and a point for a line, centre coordinates and radius for a circle).
The algorithm transforms each point of your array into points in the other space. Then you search the new space for accumulations of points; from those you get the parameters of your object.
Of course, you need to run it once to recognize the lines (so you will know how many lines are in your bitmap and where they are) and again to recognize the circles (it is not exactly the same algorithm).
You may have a look at this lecture (for the Hough circle transform), but you can easily find the algorithm for lines.
EDIT: you can also have a look at these answers:
Shape recognition algorithm(s)
Detecting an object on the image based on geometrical form
Imagine it is each of these shapes one by one and try to fit each of them to the data. For a square, you could find the four extreme points and try charting out a square that passes through all of them.
Once you have a candidate shape in place, measure the distance between each point and the part of the shape nearest to it, then square these distances and add them up. The shape with the smallest sum of squares is probably your best bet.
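As a sketch of that fit-and-compare idea, here is the circle case using a simple algebraic least-squares fit; the square and triangle cases would follow the same pattern with point-to-edge distances (Python rather than C#, and the function names are just placeholders):

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: solve for (cx, cy) and r from
    x^2 + y^2 + D*x + E*y + F = 0."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(pts))])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

def circle_sum_of_squares(points, center, radius):
    """Sum of squared distances from each point to the fitted circle."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(center), axis=1) - radius
    return float(np.sum(d**2))

# Hypothetical usage: points on a unit circle give a near-zero residual,
# so the circle hypothesis would win against square/triangle fits.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
center, r = fit_circle(pts)
print(center, r, circle_sum_of_squares(pts, center, r))
```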
Use the Hough Transform.
I'm going to take a wild stab and say if you have 3 points the shape represents a triangle, 4 points is some kind of quadrilateral, any more than that is a circle.
Perhaps there's more information to your problem you could provide.

Find X/Y/Z rotation angles from one position to another

I am using a 3D engine called Electro which is programmed using Lua. It's not a very good 3D engine, but I don't have any choice in the matter.
Anyway, I'm trying to take a flat quadrilateral and transform it to be in a specific location and orientation. I know exactly where it is supposed to go (i.e. I know the exact vertices where the corners should end up), but I'm hitting a snag in getting it rotated to the right place.
Electro does not allow you to apply transformation matrices. Instead, you must transform models by using built-in scale, position (that is, translate), and rotation functions. The rotation function takes an object and 3 angles (in degrees):
E.set_entity_rotation(entity, xangle, yangle, zangle)
The documentation does not specify this, but after looking through Electro's source, I'm reasonably certain that the rotation is applied in the order X rotation -> Y rotation -> Z rotation.
My question is this: If my starting object is a flat quadrilateral lying on the X-Z plane centered at the origin, and the destination position is in a different location and orientation where the destination vertices are known, how could I use Electro's rotation function to rotate it into the correct orientation before I move it to the correct place?
I've been racking my brain for two days trying to figure this out, looking at math that I don't understand dealing with Euler angles and such, but I'm still lost. Can anyone help me out?
Can you tell us more about the problem? It sounds odd phrased in this way. What else do you know about the final orientation you have to hit? Is it completely arbitrary or user-specified or can you use more knowledge to help solve the problem? Is there any other Electro API you could use to help?
If you really must solve this general problem, then too bad, it's hard, and underspecified. Here's some guy's code that may work, from euclideanspace.com.
First do the translation to bring one corner of the quadrilateral to the point you'd like it to be, then apply the three rotational transformations in succession.
If you know where the quad is, and you know exactly where it needs to go, and you're certain that there are no distortions of the quad to fit it into the place where it needs to go, then you should be able to figure out the angles using the vector scalar product.
If you have two vectors, the angle between them can be calculated from the dot product: cos(theta) = (a · b) / (|a| |b|).
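To make that concrete, here is a minimal sketch (Python rather than Lua, and assuming the X -> Y -> Z order mentioned in the question, i.e. R = Rz · Ry · Rx applied to column vectors) that builds the rotation taking the quad's starting frame to its destination frame and then extracts the three angles:

```python
import numpy as np

def rotation_between_frames(src_basis, dst_basis):
    """Rotation matrix mapping an orthonormal source frame to an orthonormal
    destination frame (basis vectors as columns): R @ src = dst."""
    return np.asarray(dst_basis, dtype=float) @ np.asarray(src_basis, dtype=float).T

def xyz_angles_degrees(R):
    """Extract angles (in degrees) assuming R = Rz(z) @ Ry(y) @ Rx(x),
    i.e. the X rotation is applied first. The gimbal-lock case
    |R[2, 0]| == 1 is not handled here."""
    y = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    x = np.arctan2(R[2, 1], R[2, 2])
    z = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([x, y, z])

# Hypothetical usage: the destination frame is a 90-degree rotation about X,
# which tips a quad lying in the X-Z plane into the X-Y plane. In practice the
# destination basis would be built from two edge directions of the target
# vertices plus their cross product.
src = np.eye(3)
dst = np.column_stack([[1, 0, 0], [0, 0, 1], [0, -1, 0]])
R = rotation_between_frames(src, dst)
xa, ya, za = xyz_angles_degrees(R)
print(xa, ya, za)  # -> 90.0 0.0 0.0, then E.set_entity_rotation(entity, xa, ya, za)
```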
