How do you calculate the Angle of Incidence? - math

I'm working on a raytracer for a large side project, with the goal being to produce realistic renders without worrying about CPU time. Basically pre-rendering, so I'm going for accuracy over speed.
I'm having some trouble wrapping my head around some of the more advanced math going on in the lighting aspects of things. Basically, I have a point for my light. Assuming no distance falloff, I should be able to use the point on the polygon I've found, and compare the normal at that point to the angle of incidence on the light to figure out my illumination value. So given a point on a plane, the normal for that plane, and the point light, how would I go about figuring out that angle?
The reason I ask is that I can't seem to find any reference on finding the angle of incidence. I can find lots of references detailing what to do once you've got it, but nothing telling me how to get it in the first place. I imagine it's something simple, but I just can't logic it out.
Thanks

The dot product of the surface normal vector and the incident light vector will give you the cosine of the angle of incidence, if you've normalised your vectors.

It sounds to me like you are trying to calculate diffuse illumination. Assuming you have the surface point $\vec{p_o}$, the light position $\vec{p_L}$, and the surface normal $\vec{n}$, you can calculate the diffuse illumination like this:
$$\vec{L} = \vec{p_L} - \vec{p_o}, \qquad I_d = k \cdot \frac{\vec{L} \cdot \vec{n}}{\|\vec{L}\| \, \|\vec{n}\|}$$
You technically don't need to calculate the actual angle of incidence, because you only need its cosine, which the dot product conveniently gives you.
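As a minimal C# sketch of that diffuse term (using System.Numerics; the class and method names and the max-with-zero clamp for lights behind the surface are my additions, while k is the reflectance constant from the formula above):

    using System;
    using System.Numerics;

    static class Diffuse
    {
        // Lambertian diffuse term for a point light with no distance falloff.
        // surfacePoint, lightPosition and normal are assumed to be in the same space;
        // k is the material's diffuse reflectance.
        public static float Illumination(Vector3 surfacePoint, Vector3 lightPosition,
                                         Vector3 normal, float k)
        {
            Vector3 toLight = Vector3.Normalize(lightPosition - surfacePoint);
            Vector3 n = Vector3.Normalize(normal);

            // Dot product of two unit vectors = cosine of the angle of incidence.
            float cosTheta = Vector3.Dot(toLight, n);

            // A light behind the surface contributes nothing.
            return k * MathF.Max(cosTheta, 0f);
        }
    }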

NOTE: From where I'm sitting right now, I can't upload a picture for you. I'll try to lay it out for you in words, though.
Here's how you can imagine this process:
Define $\hat{n}$ as your normalized normal (the vector that comes out of your planar polygon, perpendicular to it and of unit length, which makes the math easier).
Define $p_0$ as your eyeball point.
Define $p_1$ as the impact point of your "eyeball ray" on the polygon.
Define $\hat{v}$ as the normalized vector pointing from $p_1$ back to $p_0$. You can write this like so:
$$\hat{v} = \frac{p_0 - p_1}{\|p_0 - p_1\|}$$
So, you have created a vector that points from $p_1$ to $p_0$ and then divided that vector by its own length, giving you a vector of length 1 that points from $p_1$ to $p_0$.
The reason we went to all this trouble is that we really want the angle $\theta$ between the normal $\hat{n}$ and the vector $\hat{v}$ you just created. Another name for $\theta$ is the angle of incidence.
An easy way to calculate this angle is with the dot product. Using the terms defined above, you take the x, y and z components of each of those unit-length vectors, multiply them component by component, and add the products to get the dot product:
$$\hat{n} \cdot \hat{v} = \cos\theta = n_x v_x + n_y v_y + n_z v_z$$
To calculate $\theta$, therefore, you simply take the inverse cosine of the dot product:
$$\theta = \arccos(\hat{n} \cdot \hat{v})$$
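In code, assuming n and v are already unit-length System.Numerics.Vector3 values (hypothetical variable names), that last step is just a dot product followed by an inverse cosine; the clamp only guards against tiny floating-point overshoot:

    float cosTheta = Vector3.Dot(n, v);                       // cos(theta)
    float theta = MathF.Acos(Math.Clamp(cosTheta, -1f, 1f));  // angle of incidence, in radians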

Related

Area of polygon - clockwise

From the thread Determine the centroid of multiple points I came to know that the area of a polygon can also be negative if we start in the clockwise direction. Why can it be negative?
It is a product of the maths. You can use the sign if you wish to, or use an absolute value for the area.
You often get a similar effect with dot products and cross products. This can be effective, for example determining the orientation of a polygon in 3d (does the 'outside' side of the polygon face towards me or away from me?)
The sign tells you some useful information, that you can either use or discard. For example, what is the area below the curve sin(x) and above the x axis, for x over the interval [0,pi]. Yes, this is simply a definite integral. In MATLAB, I'd do it as:
>> quad(@sin,0,pi)
ans =
2
But suppose I computed that same definite integral, with limits of integration [pi,0]? Clearly, we would get -2.
>> quad(@sin,pi,0)
ans =
-2
And of course this makes sense. In either case, we can assure that we get the positive area by ignoring the sign. But the sign tells us something in that integral.
The sign computed for the area of a polygon is indeed useful in some problems. In the case of a triangle, the cross product of two edge vectors yields a vector orthogonal to the plane of the triangle, and the magnitude of that vector is twice the area of the triangle. Note that this vector can point in either of the two directions orthogonal to a given plane; which one is determined by the right-hand rule. You can think of the sign of the area as indicating which way that vector points.
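For reference, here is a small C# sketch of the signed-area computation being discussed (the shoelace formula; the class and method names are mine):

    using System.Numerics;

    static class PolygonArea
    {
        // Signed area of a simple 2D polygon (shoelace formula).
        // Positive for counter-clockwise winding, negative for clockwise;
        // take the absolute value if you only care about the size.
        public static float Signed(Vector2[] pts)
        {
            float sum = 0f;
            for (int i = 0; i < pts.Length; i++)
            {
                Vector2 a = pts[i];
                Vector2 b = pts[(i + 1) % pts.Length];
                sum += a.X * b.Y - a.Y * b.X;
            }
            return 0.5f * sum;
        }
    }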

Calculating 2D angles for 3D objects in perspective

Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle, with 90 degree corners. However, because it's a photo, perspective will be involved and the parallel edges of the face will converge on the horizon.
With such a rectangle, how do you calculate the angle in 2D of the vectors of the edges of a face that is at right angles to it?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
(example image: http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg)
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is that each corner is a right angle - and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?
It's quite easy; you should use basic two-point perspective rules.
First of all you need two vanishing points, one to the left and one to the right of your object. They'll both sit on the same horizon line.
(illustration: http://img62.imageshack.us/img62/9669/perspectiveh.png)
After having placed the horizon (which sets the sight height) and the vanishing points (whose positions determine the field of view), you can easily calculate where your lines go (of course, you need to be able to calculate the line that passes through two points; I think you can do that).
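If you want to do this numerically rather than by construction, here is a rough C# sketch (all names are mine): intersect the two "horizontal" edges of the marked face to find a vanishing point, and the red edge through a corner is then simply the line from that corner toward the other vanishing point on the horizon.

    using System.Numerics;

    static class Perspective
    {
        // Intersection of the infinite lines through A-B and C-D (2D).
        // Assumes the lines are not parallel (cross != 0).
        public static Vector2 Intersect(Vector2 a, Vector2 b, Vector2 c, Vector2 d)
        {
            Vector2 r = b - a, s = d - c;
            float cross = r.X * s.Y - r.Y * s.X;
            float t = ((c.X - a.X) * s.Y - (c.Y - a.Y) * s.X) / cross;
            return a + t * r;
        }

        // 2D direction of a receding edge: from a corner toward a vanishing point.
        public static Vector2 EdgeDirection(Vector2 corner, Vector2 vanishingPoint)
            => Vector2.Normalize(vanishingPoint - corner);
    }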
Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.

Detect Shapes in an array of points

I have an array of points. I want to know if this array of point represents a circle, a square or a triangle.
Where should I begin? (I use C#)
Thanks
Jon
Depending on your problem, a good approach may be to use the Hough transform and its derived algorithms.
It consists of a transformation from the image space to another space in which the coordinates represent the object's parameters (angle and initial point for a line; centre coordinates and radius for a circle).
The algorithm transforms each point of your array of points into points in that other space. Then you search the new space for points where the votes accumulate; from those points you read off the parameters of your object.
Of course, you need to run it once to recognize the lines (so you will know how many lines are in your bitmap and where they are) and once to recognize the circles (it is not exactly the same algorithm).
You may have a look at this lecture (for the Hough circle transform), but you can easily find the algorithm for lines.
EDIT: you can also have a look at these answers
Shape recognition algorithm(s)
Detecting an object on the image based on geometrical form
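To make the Hough idea concrete, here is a minimal C# sketch of the accumulator for circles of one fixed radius (in practice you add the radius as a third accumulator dimension; all names here are mine):

    using System;
    using System.Collections.Generic;

    static class Hough
    {
        // Each edge point votes for every centre that would place it on a circle
        // of the given radius; peaks in the accumulator are likely circle centres.
        public static int[,] CircleAccumulator(IEnumerable<(int x, int y)> edgePoints,
                                               int width, int height, int radius)
        {
            var acc = new int[width, height];
            foreach (var (x, y) in edgePoints)
            {
                for (int deg = 0; deg < 360; deg++)
                {
                    double a = deg * Math.PI / 180.0;
                    int cx = (int)Math.Round(x - radius * Math.Cos(a));
                    int cy = (int)Math.Round(y - radius * Math.Sin(a));
                    if (cx >= 0 && cx < width && cy >= 0 && cy < height)
                        acc[cx, cy]++;
                }
            }
            return acc;
        }
    }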
Imagine it is each of these shapes one by one and try to fit each shape to the data. For a square, you could find the four extreme points and try charting out a square that goes through all of them.
Once you have a candidate shape in place, you could measure the distance between each of the points and the part of the shape nearest to it, then square these distances and add them up. The shape with the smallest sum of squares is probably your best bet.
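A small C# sketch of that scoring step for one candidate shape, a circle (the helper name and the distance-to-boundary approach are mine); score the candidate square and triangle the same way and keep the shape with the lowest score:

    using System;
    using System.Numerics;

    static class ShapeFit
    {
        // Sum of squared distances from each point to the boundary of a candidate circle.
        public static float CircleScore(Vector2[] points, Vector2 center, float radius)
        {
            float score = 0f;
            foreach (var p in points)
            {
                float d = Vector2.Distance(p, center) - radius; // distance to the boundary
                score += d * d;
            }
            return score;
        }
    }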
Use the Hough Transform.
I'm going to take a wild stab and say if you have 3 points the shape represents a triangle, 4 points is some kind of quadrilateral, any more than that is a circle.
Perhaps there's more information to your problem you could provide.

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a Bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When the loop is complete, your box is defined by MinX, MinY as one corner (say the upper left), MaxX, MinY as the upper right, and so on...
Response to your comment:
If you want your box after a projection, what you need is to get the "transformed" points. Then apply bounding box loop as stated above.
Transformed usually means 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on whatever plane you projected onto.
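A minimal C# sketch of that loop over the already-transformed 2D points (names are mine):

    using System;
    using System.Numerics;

    static class Bounds
    {
        // Axis-aligned bounding box of a set of (already projected/transformed) 2D points.
        public static (Vector2 min, Vector2 max) Compute(Vector2[] pts)
        {
            float minX = float.MaxValue, minY = float.MaxValue;
            float maxX = float.MinValue, maxY = float.MinValue;
            foreach (var p in pts)
            {
                minX = MathF.Min(minX, p.X); maxX = MathF.Max(maxX, p.X);
                minY = MathF.Min(minY, p.Y); maxY = MathF.Max(maxY, p.Y);
            }
            return (new Vector2(minX, minY), new Vector2(maxX, maxY));
        }
    }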
A possible algorithm would be (see the sketch after this list):
Find the normal, which you can do by taking the cross product of vectors connecting two pairs of different corners.
Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis).
Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space, this is a considerably simpler problem).
Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
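Here is one way those four steps could look in C# with System.Numerics (a sketch assuming the polygon is planar and its first three points are not collinear; all names are mine):

    using System;
    using System.Numerics;

    static class PolygonCorners
    {
        // Rotate a planar 3D polygon so its normal lines up with +Z, take the 2D
        // bounding box there, then rotate the box corners back into 3D space.
        public static Vector3[] BoundingCorners(Vector3[] pts)
        {
            // Step 1: normal from the cross product of two edge vectors.
            Vector3 normal = Vector3.Normalize(Vector3.Cross(pts[1] - pts[0], pts[2] - pts[0]));

            // Step 2: rotation that takes the normal onto the Z axis.
            Vector3 axis = Vector3.Cross(normal, Vector3.UnitZ);
            float angle = MathF.Acos(Math.Clamp(Vector3.Dot(normal, Vector3.UnitZ), -1f, 1f));
            Quaternion toXY = axis.LengthSquared() < 1e-12f
                ? Quaternion.Identity   // normal already (anti-)parallel to Z
                : Quaternion.CreateFromAxisAngle(Vector3.Normalize(axis), angle);

            // Step 3: bounding box in the rotated (now effectively 2D) space.
            float minX = float.MaxValue, minY = float.MaxValue;
            float maxX = float.MinValue, maxY = float.MinValue, z = 0f;
            foreach (var p in pts)
            {
                Vector3 q = Vector3.Transform(p, toXY);
                minX = MathF.Min(minX, q.X); maxX = MathF.Max(maxX, q.X);
                minY = MathF.Min(minY, q.Y); maxY = MathF.Max(maxY, q.Y);
                z = q.Z; // every point shares (approximately) the same Z
            }

            // Step 4: rotate the box corners back into the original 3D space.
            Quaternion back = Quaternion.Inverse(toXY);
            return new[]
            {
                Vector3.Transform(new Vector3(minX, minY, z), back),
                Vector3.Transform(new Vector3(maxX, minY, z), back),
                Vector3.Transform(new Vector3(maxX, maxY, z), back),
                Vector3.Transform(new Vector3(minX, maxY, z), back),
            };
        }
    }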
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
Get the minimum/maximum value on each of the three axes across every point of your 2D polygon. A single pass with a loop/iterator over the list of points will suffice, simply replacing the minimum and maximum values as you go. The end result is a list holding the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of the axis examined or the orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
Then get the distance of each "2D polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably post-capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
If you can assume that there are some constraints on the shapes, then you might be able to get away with knowing less information. For example, if your shape is the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each point in the list to the "center of mass." The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip of the cone, and so on. If there were some order to the list, such as points being entered in counter-clockwise order (about the normal), you could identify all the points.
This sounds like a bit of computation, so it might be reasonable to try to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This gives you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by the ordering of the point list.
If you can't assume anything about the shapes (or even if the shapes were symmetrical, for example), then you will need more data. It depends on your constraints.
If you know that the polygon in 3D is "flat" you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located) - but this still leaves you with defining the origin of this coordinate system (though this doesn't really matter for your problem) and with the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin) - and this is where the trouble starts.
I would recommend using the Y axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up" - but then you are in trouble in case your plane is orthogonal to the Y axis (in that case you might want to use the projected Z axis as "up" instead).
The math is rather simple (you can use the inner product, a.k.a. scalar product, for the projection onto your plane and some matrix work to convert to the 2D coordinate system) - you can find all of it by googling for raytracer algorithms for polygons.
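As a sketch of that projection step in C# (using System.Numerics; the fallback threshold and all names are my own choices): project the chosen world axis onto the plane to get "up", then cross it with the normal to get the second in-plane axis.

    using System;
    using System.Numerics;

    static class PlaneBasis
    {
        // Orthonormal "right"/"up" axes lying in the polygon's plane. "Up" is the world
        // Y axis projected onto the plane, falling back to Z when the plane is
        // (nearly) orthogonal to Y.
        public static (Vector3 right, Vector3 up) Build(Vector3 normal)
        {
            Vector3 n = Vector3.Normalize(normal);
            Vector3 reference = MathF.Abs(Vector3.Dot(n, Vector3.UnitY)) > 0.99f
                ? Vector3.UnitZ
                : Vector3.UnitY;

            // Project the reference axis onto the plane: v - (v . n) n.
            Vector3 up = Vector3.Normalize(reference - Vector3.Dot(reference, n) * n);
            Vector3 right = Vector3.Cross(up, n); // unit length, since up and n are orthonormal
            return (right, up);
        }
    }

The 2D coordinates of a vertex p with respect to a chosen origin o are then simply (Dot(p - o, right), Dot(p - o, up)).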

Calculating rotation along a path

I am trying to animate an object, let's say it's a car. I want it to go from point (x1, y1, z1) to point (x2, y2, z2). It moves to those points, but it appears to be drifting rather than pointing in the direction of motion. So my question is: how can I solve this issue in my updateframe() event? Could you point me in the direction of some good resources?
Thanks.
First off, how do you represent the road?
I recently did exactly this and I used Catmull-Rom splines for the road. To orient an object and make it follow the spline path, you interpolate the current x, y, z position from a parameter t that walks along the spline, then orient the object along the Frenet frame (Frenet coordinate system) for that particular position.
Basically, for each point you need 3 vectors: the tangent, the normal, and the binormal. The tangent is the direction you would like your object (the car) to point in.
I chose Catmull-Rom because it is easy to derive the tangent at any point: just take the (vector) difference between two nearby points around the current one (say you are at t; pick t - epsilon and t + epsilon, with epsilon being a small enough constant).
For the other 2 vectors, you can use an iterative method: start with a known set of vectors at one end, and work out a new set based on the previous one each updateframe().
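A small C# sketch of the Catmull-Rom evaluation and the finite-difference tangent described above (segment between p1 and p2, with p0 and p3 as the neighbouring control points; clamp t ± epsilon to [0, 1] in real code; names are mine):

    using System.Numerics;

    static class CatmullRom
    {
        // Catmull-Rom interpolation between p1 and p2 for t in [0, 1].
        public static Vector3 Point(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
        {
            float t2 = t * t, t3 = t2 * t;
            return 0.5f * (2f * p1
                           + (-p0 + p2) * t
                           + (2f * p0 - 5f * p1 + 4f * p2 - p3) * t2
                           + (-p0 + 3f * p1 - 3f * p2 + p3) * t3);
        }

        // Tangent (the direction the car should face) by central difference.
        public static Vector3 Tangent(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3,
                                      float t, float eps = 0.001f)
        {
            return Vector3.Normalize(Point(p0, p1, p2, p3, t + eps)
                                     - Point(p0, p1, p2, p3, t - eps));
        }
    }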
You need to work out the initial orientation of the car, and the final orientation of the car at its destination, then interpolate between them to determine the orientation in between for the current timestep.
This article describes the mathematics behind doing the interpolation, as well as some other things to do with rotating objects that may be of use to you. gamasutra.com in general is an excellent resource for this sort of thing.
I think the interpolation is what's giving you the drift you are seeing.
You need to model the way steering works: your update function should 1) always move the car in the direction it is currently pointing and 2) turn the car toward the current target. One should not affect the other, so that the turning can happen and complete more rapidly than the arrival.
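A minimal sketch of that update in C# (System.Numerics; the speed and turn-rate numbers, the normalised-lerp blend, and all names are illustrative):

    using System;
    using System.Numerics;

    class Car
    {
        public Vector3 Position;
        public Vector3 Heading = Vector3.UnitZ; // unit vector the car currently points along
        public float Speed = 10f;               // units per second
        public float TurnRate = 2.5f;           // radians per second (turning outpaces arriving)

        // Called once per frame: turn toward the target, then move along the heading.
        public void UpdateFrame(Vector3 target, float dt)
        {
            Vector3 toTarget = Vector3.Normalize(target - Position);

            // Limit how far the heading may rotate this frame.
            float angle = MathF.Acos(Math.Clamp(Vector3.Dot(Heading, toTarget), -1f, 1f));
            float blend = angle < 1e-5f ? 1f : MathF.Min(1f, TurnRate * dt / angle);

            // Normalised lerp is a reasonable approximation for small per-frame angles.
            Heading = Vector3.Normalize(Vector3.Lerp(Heading, toTarget, blend));

            // The car always moves in the direction it is pointing (1), independent of (2).
            Position += Heading * Speed * dt;
        }
    }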
In general terms, the direction the car is pointing is along its velocity vector, which is the first derivative of its position vector.
For example, if the car is going in a circle (of radius r) around the origin every n seconds then the x component of the car's position is given by:
x = r.sin(2πt/n)
and the x component of its velocity vector will be:
vx = dx/dt = r.(2π/n)cos(2πt/n)
Do this for all of the x, y and z components, normalize the resulting vector and you have your direction.
Always pointing the car toward the destination point is simple and cheap, but it won't work if the car is following a curved path. In which case you need to point the car along the tangent line at its current location (see other answers, above).
Going from one position to another gives an object a velocity; a velocity is a vector, and normalising that vector gives you the direction vector of the motion, which you can plug into a "look at" matrix. Take the cross product of the up vector with this direction vector to get the side vector, and hey presto, you have a full matrix for controlling the orientation of the object in motion.
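A sketch of that construction in C# (System.Numerics, row-major Matrix4x4; the world Y axis is used as "up", which breaks down if the velocity is vertical, and all names are mine):

    using System.Numerics;

    static class Orientation
    {
        // Orientation matrix from a velocity vector: forward = normalised velocity,
        // side = up x forward, then up is recomputed to keep the basis orthogonal.
        public static Matrix4x4 FromVelocity(Vector3 velocity, Vector3 position)
        {
            Vector3 forward = Vector3.Normalize(velocity);
            Vector3 side = Vector3.Normalize(Vector3.Cross(Vector3.UnitY, forward));
            Vector3 up = Vector3.Cross(forward, side);

            return new Matrix4x4(
                side.X,     side.Y,     side.Z,     0f,
                up.X,       up.Y,       up.Z,       0f,
                forward.X,  forward.Y,  forward.Z,  0f,
                position.X, position.Y, position.Z, 1f);
        }
    }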
