Calculating 2D angles for 3D objects in perspective - math

Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle, with 90 degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge towards the horizon.
Given such a rectangle, how do you calculate the 2D angle of the edges of a second face that meets it at a right angle?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
example http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is, each corner is a right angle - and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?

It's quite easy: you should use basic two-point perspective rules.
First of all you need 2 vanishing points, one to the left and one to the right of your object. They'll both stay on the same horizon line.
alt text http://img62.imageshack.us/img62/9669/perspectiveh.png
After placing the horizon (which sets the eye height) and the vanishing points (their positions determine the field of view), you can easily calculate where your lines go. Of course, you need to be able to calculate the line that passes through two points; I think you can do that.
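For concreteness, here is a minimal Python sketch of that construction, assuming you already have pixel coordinates for the blue face's corners and a level camera (so the horizon is the horizontal line through the first vanishing point). The corner values and the second vanishing point's x-coordinate are made-up placeholders, not values from the photo.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines as a 2D point."""
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])

# Blue face corners in image pixels (hypothetical values):
# top-left, top-right, bottom-right, bottom-left
tl, tr, br, bl = map(np.array, [(120.0, 80.0), (420.0, 110.0),
                                (420.0, 380.0), (120.0, 420.0)])

# The top and bottom edges of the blue face converge at the first vanishing point.
vp1 = intersect(line_through(tl, tr), line_through(bl, br))

# Assume a level camera: the horizon is the horizontal line through vp1,
# and the second vanishing point lies somewhere on it (its x is a guess here;
# in a real photo it depends on the field of view).
vp2 = np.array([-900.0, vp1[1]])

# Each red edge runs from a corner on the shared vertical edge toward vp2.
for corner in (tl, bl):
    direction = vp2 - corner
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    print(f"red edge at corner {corner}: 2D angle {angle:.1f} deg")
```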

Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.
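If Matlab isn't handy, a rough Python/OpenCV equivalent of the same idea might look like the sketch below; the file name and threshold values are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Standard Hough transform: each detected line comes back as (rho, theta),
# so the line's angle really is available "for free".
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"line at distance {rho:.0f}px, angle {np.degrees(theta):.1f} deg")
```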

Related

How to find the sides of a rectangle if you know the sides of a quadrilateral inside the rectangle?

I'm working on an application that uses an accelerometer to measure the sides of a room. I know the measurements will not be exact, but that's fine.
Ideally I would like the program to be able to calculate the sides of any room shape, not only rectangles and squares (including rooms with more than 4 corners), but I'm starting with something simpler (rectangle-shaped rooms).
My problem is not with the accelerometer but more with the math aspect of the code. Because I measured the room by placing the phone on a wall and then going to the connected wall, I will get the measurements of a quadrilateral inside the rectangle. From there, if it's possible, I will get the measurements of the sides of the rectangle, but I don't really know how.
What I've tried so far:
I divided the quadrilateral inside the rectangle in half to make 2 triangles. Then I calculated the diagonal using the Pythagorean theorem. Then I used the law of cosines to calculate one of the angles, and did the same again to find another. Then I found the 3rd angle from the other two (c = 180 - a - b). I did this for both triangles.
I don't know if this is the right approach and whether I have missed something simple, or if I simply don't have enough information to solve for the sides of the rectangle. I have looked into some geometry and trigonometry online and haven't found anything that gives me a solution. But like I said, maybe I missed something simple.
Any push in the right direction would be helpful.
The rectangle and the quadrilateral
The problem lacks a unique solution. Imagine placing a pair of calipers around the quadrilateral. You'll be able to rotate the calipers around it, and at each angle the calipers will be able to close to a different width. Each of those widths is a different possible room dimension.
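A small numeric sketch of that caliper argument shows how each orientation yields a different candidate room size; the quadrilateral's coordinates below are made up.

```python
import numpy as np

# Four measured corner points, one on each wall (made-up values).
quad = np.array([(0.0, 1.2), (3.1, 0.0), (4.0, 2.6), (1.0, 3.5)])

for deg in (0, 15, 30, 45, 60, 75):
    t = np.radians(deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    rotated = quad @ rot.T
    # Width/height of the tightest rectangle at this orientation: every one
    # of these is a possible room size, so the answer is not unique.
    w, h = rotated.max(axis=0) - rotated.min(axis=0)
    print(f"calipers at {deg:2d} deg -> room {w:.2f} x {h:.2f}")
```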
You'll also never get an accurate position measurement using the inertial sensors in a phone to begin with. The accelerometers and gyros aren't even close to accurate enough. GPS is, but only outdoors, away from structures that cause multipath artifacts. Quick and sloppy with a tape measure will win every time.

Weird phenomenon with three.js plane

This is the first question I've ever asked on here! Apologies in advance if I've done it wrong somehow.
I have written a program which stacks up spheres in three.js.
Each sphere starts with randomly generated (within certain bounds) x and z co-ordinates, and a y co-ordinate high above the ground plane. It casts rays from each of the sphere's vertices to see how far down the sphere can fall before it intersects with an existing mesh.
For each sphere, I test it in 80 different random xz positions, see where it can fall the furthest, and then 'drop' it into that position.
This is intended to create bubble towers like this one:
However, I have noticed that when I make the bubble radius very small and the base dimensions of the tower large, this happens:
If I turn the number of random test positions down from 80, this effect is less apparent. For some reason, three.js seems to think that the spheres can fall further at the corners of the base square. The origin is exactly at the center of the base square - perhaps this is relevant.
When I console log all the fall-distances I'm receiving from the raycaster, they are indeed larger the further away you get from the center of the square... but only at the 11th or 12th decimal place.
This is not so much a problem I am trying to solve (I could just round fall distances to the nearest 10th decimal place before I pick the largest one), but something I am very curious about. Does anyone know why this is happening? Has anybody come across something similar to this before?
EDIT:
I edited my code to shift everything so that the origin is no longer at the center of the base square:
So am I correct in thinking... this phenomenon is something to do with distance from the origin, rather than anything relating to the surface onto which the balls are falling?
Indeed, the pattern you are seeing appears exactly because the corners and edges of the bottom of your tower are furthest from the origin where you are dropping the balls. You are creating a right triangle (see image below) in which the vertical leg is the line from the origin, where you are dropping the balls, down to the point directly below it on the mesh floor (meeting the floor at a right angle - thus the name, right triangle). The hypotenuse is always the longest side of a right triangle, and the further out your rays land from the point just below the origin, the longer the hypotenuse will be, and the more your algorithm will favor that longer distance (no matter how fractional the difference).
Increasing the size of the tower base exaggerates this effect, as the hypotenuse measurements can now grow even larger. Reducing the size of the balls also favors the pattern you are seeing: each ball now takes up less space, so the distant measurements to the corners won't fill in as quickly as they would with larger balls, and more balls congregate at the edges before the rest of the space fills in.
Moving your drop origin to one side or another creates longer distances (hypotenuses) to the opposite sides and corners, so the balls will fill in those distant locations first.
The reason you see less of an effect when you reduce the sample size from 80 to, say, 20 is that there are simply fewer chances to detect these more distant locations to which the balls could fall (an odds game).
A right triangle:
A back-of-the-napkin sketch:
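For a quick numeric version of the right-triangle argument above (using a made-up drop height of 100 units):

```python
import math

drop_height = 100.0
for horizontal_offset in (0.0, 1.0, 5.0, 10.0):
    # Vertical leg = drop height, horizontal leg = distance from the point
    # directly below the origin; the hypotenuse only ever gets longer.
    hypotenuse = math.hypot(drop_height, horizontal_offset)
    print(f"offset {horizontal_offset:5.1f} -> distance {hypotenuse:.6f}")
```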

Find point in 3D plane

I have four points in a 3D space, example:
(0,0,1)
(1,0,1)
(1,0,2)
(0,0,2)
Then I have a 2D position on that square plane:
x = 0.5
y = 0.5
I need to find out the 3D space point of that position in the plane. In this example it's easy: (0.5,0,1.5), because Y is zero. But imagine that Y was not zero (and not all the same), that the plane is leaning in some direction. How would I calculate the point in that case?
I imagine this should be a pretty easy thing to solve, but I can't figure it out. Please answer in programming terms and not in straight math terms, if possible.
Update with image: The gray plane (made out of two triangles) is the real one that actually exists. I create a non-existing plane on top of it; the ABCD corners are exactly the same, but it doesn't slope. What I need to do is project a pixel (the blue one in the example) from the non-existing plane onto the existing plane. It will be in the exact same location, except that it gains a Y value from the sloping plane.
(couldn't actually make the image appear because I need 10 reputation to show it, wtf?)
What I've been able to work out so far on my own is which of the two triangles in the gray plane to use, and the normal of that triangle. I basically just need to figure out how to project the pixel.
Figured it out mostly thanks to http://gamedeveloperjourney.blogspot.com/2009/04/point-plane-collision-detection.html
It made me realize I had to check the normal a bit more closely; it turns out my plane's grid was being rendered a little differently than the actual coordinates of the vertices. No wonder this was so hard to get right! The pixel was projected correctly but rendered incorrectly.
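In programming terms, a minimal sketch of that vertical projection looks like the following, assuming the three vertices of the relevant triangle are known (the values below are made up).

```python
import numpy as np

def project_onto_plane(x, z, tri):
    """Project the point (x, ?, z) straight up/down onto the triangle's plane."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    n = np.cross(b - a, c - a)          # plane normal (requires n[1] != 0)
    # Plane equation: n . (p - a) = 0; solve it for the missing y component.
    y = a[1] - (n[0] * (x - a[0]) + n[2] * (z - a[2])) / n[1]
    return np.array([x, y, z])

# Example: a sloped triangle, and the pixel at (0.5, 0.5) on the flat quad,
# which corresponds to x = 0.5, z = 1.5 in the question's coordinates.
triangle = [(0, 0, 1), (1, 0.4, 1), (1, 0.7, 2)]
print(project_onto_plane(0.5, 1.5, triangle))
```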

Finding the normal between a cylinder and a triangle

I have a pretty rudimentary physics engine in the game I'm working on, handling collisions between moving cylindrical characters and static meshes made of triangles. The intended behavior is for characters to slide across surfaces, and in most cases it works fine. But the engine doesn't discriminate between a head-on collision and a glancing collision.
I'm not entirely sure what information I could give that would be helpful. I'm looking for a mathematical solution, at any rate, a method to determine the 'angle of contact' between an arbitrary cylinder and triangle. My instincts, or whatever, tell me that I need to find the point of contact between the triangle and the cylinder, then determine whether that point is within the triangle (Using the triangle's regular normal) or along one of its edges (Using the angle between the point of contact and some point on the cylinder, I'm not sure which.), but I'm sure there's a better solution.
As requested, here's a couple of examples. In this first image, a cylinder travels downwards towards a triangle (In this example, the triangle is vertical, simplified to a line.) I project the velocity vector onto the plane of the triangle, using the formula Vf = V - N * (dot(V,N)). This is the intended behavior for this type of collision.
In this image, the cylinder's axis is parallel with the normal of the triangle. Under the current implementation, Vf is still determined using the triangle's natural normal, which would cause the cylinder to begin moving vertically. Under intended behavior, N would be perpendicular to the colliding edge of the triangle.
But these are just the two extremes of collision. There are going to be a bunch of in-betweens, so I need a more arbitrary solution.
This is my attempt at a more 3D example. I apologize for the poor perspective. The bottom-most vertex in this triangle is closer to the 'camera'. The point of collision between the cylinder and the triangle is marked by the red X. Under intended behavior, if the cylinder was moving directly away from the camera, it would slide to the left, along the length of the triangle's edge. No vertical movement would be imparted, as the point of contact is along the cylinder's, uh, tube section, rather than the caps.
Under current behavior, the triangle's normal is used. The cylinder would be pushed upwards, as though sliding across the face of the triangle, while doing little to prevent movement into the triangle.
I understand that this is a difficult request, so I appreciate the suggestions made to help refine my question.
What you're looking for is probably an edge collision detector. In rigid body collision systems, there are usually two types of collisions: surface collisions (for colliding with things that have a regular surface normal, where the reaction normal can be computed easily, as you pointed out, by processing A's velocity against B's surface normal), and edge collisions (where body A hits an edge of body B, be it a box, a triangle or anything else). In that case the matter is more complicated because, obviously, an edge is not a surface, and thus you can't calculate its normal at all. Usually it's approximated one way or another - for a triangle mesh, for example, you can take the edge normal to be the average of the normals of the two triangles sharing that edge. There are also other methods to deal with it, some discussed here:
https://code.google.com/p/bullet/downloads/detail?name=CEDEC2011_ErwinCoumans.pdf&can=2&q=
Usually there's an edge processing threshold value: if a collision occurs within this radius of an edge, it's considered an edge collision and processed differently.
See the examples here:
http://www.wildbunny.co.uk/blog/2012/10/31/2d-polygonal-collision-detection-and-internal-edges/
Googling "internal edge collision" and learning about rigid body collisions/dynamics in general will help you understand and solve this problem by yourself.

Where is the triangle normal pointing so I can map correctly?

Here is what I have so far.
I have a 3D model and I made a triangle mesh. Calculated and applied normals to the model too.
I want to apply different textures to the triangles. I also have the direction vectors of all the textures I need.
For mapping, I do this:
I just calculate the dot product of each triangle's normal with the direction vector of each texture, and start comparing to see which texture could be suitable based upon the dot product.
But I realised that it is not as straightforward as I thought, because two or more different triangles could lie in almost the same plane in 3D space, yet one could be facing towards me and another could be facing in the opposite direction (roughly parallel, but with opposite normals).
I think a better question is: how do I use the calculated dot product to distinguish which way a triangle faces, so I know which image/texture should be used?
If the triangles are facing in opposite directions, the normals will also face in opposite directions, and the dot products will have opposite signs. Therefore the dot product gives you enough information to distinguish between the opposite faces. I can't think of a simple test which would give better results than the dot product.
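A tiny sketch of that sign test (the vectors below are made up):

```python
import numpy as np

texture_direction = np.array([0.0, 0.0, 1.0])   # direction a texture should face

facing_toward = np.array([0.0, 0.0, 1.0])       # triangle normal facing the viewer
facing_away   = np.array([0.0, 0.0, -1.0])      # the opposite-facing triangle

for name, n in (("toward", facing_toward), ("away", facing_away)):
    d = np.dot(n, texture_direction)
    # Positive dot product: the triangle faces the texture direction;
    # negative: it faces the opposite way, so choose a different texture.
    print(f"{name}: dot = {d:+.1f} -> {'use texture' if d > 0 else 'use other texture'}")
```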
