I have a fairly rudimentary physics engine in the game I'm working on, handling collisions between moving cylindrical characters and static meshes made of triangles. The intended behavior is for characters to slide across surfaces, and in most cases it works fine. But the engine doesn't discriminate between a head-on collision and a glancing collision.
I'm not entirely sure what information would be helpful here. I'm looking for a mathematical solution, at any rate: a method to determine the 'angle of contact' between an arbitrary cylinder and triangle. My instinct is that I need to find the point of contact between the triangle and the cylinder, then determine whether that point lies within the triangle (using the triangle's regular normal) or along one of its edges (using the angle between the point of contact and some point on the cylinder, though I'm not sure which), but I'm sure there's a better solution.
As requested, here are a couple of examples. In this first image, a cylinder travels downwards towards a triangle (in this example the triangle is vertical, simplified to a line). I project the velocity vector onto the plane of the triangle using the formula Vf = V - N * dot(V, N). This is the intended behavior for this type of collision.
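In code, the projection looks something like this (a minimal TypeScript sketch; the Vec3 type and helper names are illustrative, not from my engine):

```typescript
// Minimal sketch of the slide projection above; Vec3 and the helpers
// are a hypothetical bare-bones vector type, not any particular engine.
type Vec3 = { x: number; y: number; z: number };

const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const scale = (v: Vec3, s: number): Vec3 => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });

// Vf = V - N * dot(V, N); N must be unit length.
function slideVelocity(v: Vec3, n: Vec3): Vec3 {
  return sub(v, scale(n, dot(v, n)));
}
```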
In this image, the cylinder's axis is parallel to the normal of the triangle. Under the current implementation, Vf is still determined using the triangle's natural normal, which would cause the cylinder to begin moving vertically. Under intended behavior, N would be perpendicular to the colliding edge of the triangle.
But these are just the two extremes of collision. There are going to be a bunch of in-betweens, so I need a more arbitrary solution.
This is my attempt at a more 3D example; I apologize for the poor perspective. The bottom-most vertex in this triangle is closer to the 'camera'. The point of collision between the cylinder and the triangle is marked by the red X. Under intended behavior, if the cylinder were moving directly away from the camera, it would slide to the left, along the length of the triangle's edge. No vertical movement would be imparted, as the point of contact is along the cylinder's tube section rather than its caps.
Under current behavior, the triangle's normal is used. The cylinder would be pushed upwards, as though sliding across the face of the triangle, while doing little to prevent movement into the triangle.
I understand that this is a difficult request, so I appreciate the suggestions made to help refine my question.
What you're looking for is probably an edge collision detector. In rigid body collision systems, there are usually two types of collisions: surface collisions (for colliding with things that have a regular surface normal, where the reaction normal can be computed easily, as you pointed out, from body A's velocity and body B's surface normal), and edge collisions (where body A hits an edge of body B, be it a box, a triangle, or anything else). The latter case is more complicated because an edge is not a surface, so you can't calculate its normal at all. Usually it's approximated one way or another: for a triangle mesh, for example, you can take the edge normal to be the average of the normals of the two triangles sharing that edge. There are also other methods to deal with it, some discussed here:
https://code.google.com/p/bullet/downloads/detail?name=CEDEC2011_ErwinCoumans.pdf&can=2&q=
Usually there's an edge-processing threshold value: if a collision occurred within this radius of an edge, it's considered an edge collision and processed differently.
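As a sketch, the averaged-normal idea with such a threshold might look like this (TypeScript; the threshold value and names are illustrative, and real engines vary):

```typescript
// Sketch of the threshold test with the averaged-normal approximation.
type Vec3 = { x: number; y: number; z: number };
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const normalize = (v: Vec3): Vec3 => {
  const len = Math.sqrt(dot(v, v));
  return { x: v.x / len, y: v.y / len, z: v.z / len };
};

const EDGE_THRESHOLD = 0.01; // tuning value, not from the source

// nA: normal of the triangle that was hit; nB: normal of its neighbor
// across the edge; distToEdge: contact point's distance to that edge.
function reactionNormal(nA: Vec3, nB: Vec3, distToEdge: number): Vec3 {
  return distToEdge < EDGE_THRESHOLD ? normalize(add(nA, nB)) : nA;
}
```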
See the examples here:
http://www.wildbunny.co.uk/blog/2012/10/31/2d-polygonal-collision-detection-and-internal-edges/
Googling "internal edge collision" and learning about rigid body collisions/dynamics in general will help you understand and solve this problem by yourself.
I'm working on an application that uses an accelerometer to measure the sides of a room. I know the measurements won't be exact, but that's fine.
In reality I would like the program to be able to calculate the sides of any room shape, not only rectangles and squares (including rooms with more than 4 corners), but I'm starting with something simpler (rectangle-shaped rooms).
My problem is not with the accelerometer but with the math side of the code. Because I measured the room by placing the phone on a wall and then moving to the connected wall, I get the measurements of a quadrilateral inside the rectangle. From there, if it's possible, I want to work out the side lengths of the rectangle, but I don't really know how.
What I've tried so far:
Divided the quadrilateral inside the rectangle in half to make 2 triangles. Then I calculated the diagonal using the Pythagorean theorem. Then I used the law of cosines to calculate one of the angles, and did the same again to find another. Then I found the 3rd angle from the other two (c = 180 - (a + b)). I did this for both triangles.
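To make the steps concrete, here's a sketch of the calculation I described (TypeScript; the example side lengths and the right-angle assumption behind the diagonal are just for illustration):

```typescript
// Sketch of the triangle math described above; angles are in degrees.
const toDeg = (rad: number): number => (rad * 180) / Math.PI;

// Law of cosines: the angle opposite side c, given all three sides.
// c^2 = a^2 + b^2 - 2ab*cos(C)  =>  C = acos((a^2 + b^2 - c^2) / (2ab))
function angleOpposite(c: number, a: number, b: number): number {
  return toDeg(Math.acos((a * a + b * b - c * c) / (2 * a * b)));
}

const a = 3, b = 4;                  // two measured sides (illustrative)
const d = Math.sqrt(a * a + b * b);  // diagonal via the Pythagorean theorem
const A = angleOpposite(a, b, d);    // ~36.87 degrees
const B = angleOpposite(b, a, d);    // ~53.13 degrees
const C = 180 - (A + B);             // third angle; angles sum to 180
```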
I don't know if this is the right approach, whether I've missed something simple, or whether I simply don't have enough information to solve for the sides of the rectangle. I have looked into some geometry and trigonometry online and haven't found anything that gives me a solution. But like I said, maybe I missed something simple.
Any push in the right direction would be helpful.
The rectangle and the quadrilateral
The problem lacks a unique solution. Imagine placing a pair of calipers around the quadrilateral. You'll be able to rotate the calipers around it, and at each angle the calipers will be able to close to a different width. Each of those widths is a different possible room dimension.
You'll also never get an accurate position measurement using the inertial sensors in a phone to begin with. The accels and gyros aren't even close to accurate enough. GPS is, but only outdoors away from structures that cause multipathing artifacts. Quick and sloppy with a tape measure will win every time.
This is the first question I've ever asked on here! Apologies in advance if I've done it wrong somehow.
I have written a program which stacks up spheres in three.js.
Each sphere starts with randomly generated (within certain bounds) x and z co-ordinates, and a y co-ordinate high above the ground plane. I cast rays from each of the sphere's vertices to see how far down it can fall before it intersects with an existing mesh.
For each sphere, I test it in 80 different random xz positions, see where it can fall the furthest, and then 'drop' it into that position.
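The drop test looks roughly like this (a hedged TypeScript sketch, assuming BufferGeometry; the function and variable names are illustrative rather than my exact code):

```typescript
import * as THREE from 'three';

// Cast a ray straight down from each vertex of the sphere and take the
// smallest hit distance: how far the sphere can fall at its current xz.
const DOWN = new THREE.Vector3(0, -1, 0);

function fallDistance(sphere: THREE.Mesh, obstacles: THREE.Object3D[]): number {
  const position = sphere.geometry.getAttribute('position');
  const raycaster = new THREE.Raycaster();
  const vertex = new THREE.Vector3();
  let minDist = Infinity;
  for (let i = 0; i < position.count; i++) {
    // vertex in world space, then cast straight down
    vertex.fromBufferAttribute(position as THREE.BufferAttribute, i)
          .applyMatrix4(sphere.matrixWorld);
    raycaster.set(vertex, DOWN);
    const hits = raycaster.intersectObjects(obstacles, true);
    if (hits.length > 0) minDist = Math.min(minDist, hits[0].distance);
  }
  return minDist;
}
```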
This is intended to create bubble towers like this one:
However, I have noticed that when I make the bubble radius very small and the base dimensions of the tower large, this happens:
If I turn the number of sample positions down from 80, this effect is less apparent. For some reason, three.js seems to think that the spheres can fall further at the corners of the base square. The origin is exactly at the center of the base square - perhaps this is relevant.
When I console log all the fall-distances I'm receiving from the raycaster, they are indeed larger the further away you get from the center of the square... but only at the 11th or 12th decimal place.
This is not so much a problem I am trying to solve (I could just round fall distances to the nearest 10th decimal place before I pick the largest one), but something I am very curious about. Does anyone know why this is happening? Has anybody come across something similar to this before?
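For what it's worth, the rounding idea would be something like this (TypeScript; the number of decimal places is arbitrary):

```typescript
// Quantize fall distances before comparing candidates, so 11th-decimal
// noise can't systematically favor the corners.
const quantize = (d: number, places = 10): number => {
  const f = 10 ** places;
  return Math.round(d * f) / f;
};

// e.g. pick the winning candidate by quantized distance:
// best = candidates.reduce((p, c) => quantize(c.dist) > quantize(p.dist) ? c : p);
```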
EDIT:
I edited my code to shift everything so that the origin is no longer at the center of the base square:
So am I correct in thinking... this phenomenon is something to do with distance from the origin, rather than anything relating to the surface onto which the balls are falling?
Indeed, the pattern you are seeing arises exactly because the corners and edges of the bottom of your tower are furthest from the origin where you are dropping the balls. You are creating a right triangle (see image below) in which the vertical leg is the line from the origin down to the point directly below it on the mesh floor (meeting the floor at a right angle, hence the name, right triangle). The hypotenuse is always the longest side of a right triangle, and the further out your rays land from the point just below the origin, the longer the hypotenuse will be, and the more your algorithm will favor that longer distance (no matter how fractional).
Increasing the size of the tower base exaggerates this effect, as the hypotenuse measurements can grow even larger. Reducing the size of the balls also favors the pattern you are seeing: each ball takes up less space, so the distant measurements to the corners won't fill in as quickly as they would with larger balls, and more balls congregate at the edges before filling in the rest of the space.
Moving your dropping origin to one side or another creates longer distances (hypotenuses) to the opposite sides and corners, so the balls will fill in those distant locations first.
The reason you see less of an effect when you reduce the sample size from 80 to, say, 20 is that there are simply fewer chances to detect these more distant locations to which the balls could fall (an odds game).
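In symbols, the claim above is that for drop height h and horizontal offset r from the point below the origin, the measured distance is the hypotenuse:

```latex
d(r) \;=\; \sqrt{h^{2} + r^{2}} \;\approx\; h + \frac{r^{2}}{2h} \qquad (r \ll h)
```

The approximation (a first-order expansion, my addition) shows the excess grows with r but stays tiny while h dwarfs r, which is consistent with the differences only appearing around the 11th decimal place.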
A right triangle:
A back-of-the-napkin sketch:
Let me explain my problem:
I have a black vector shape (let's say it's a series of joined, straight lines for now, but it'd be nice if I could also support quadratic curves).
I also have a rectangle of a predefined width and height. I'm going to place it on top of the black shape, and then take the union of the two.
My first issue is that I don't know how to quickly compute vector unions, but I think there is a well-defined formula I can work out for myself.
My second, and trickier, issue is how to efficiently find the position the rectangle needs to be in (i.e., what translation and rotation the matrices need) in order to maximize the black remaining after the union (see figure below).
The red outlined shape below is ~33% black; the green is something like 85%; and there are positions for this shape & rectangle wherein either could have 100% coverage.
Obviously, I can brute-force this by trying every translation and rotation value for every point where at least part of the rectangle is touching the black shape, then keep track of the one with the most black coverage. The problem is, I can only try a finite number of positions (and may therefore miss the maximum). Apart from that, it feels very inefficient!
Can you think of a more efficient way of tackling this problem?
Something from my Uni days tells me that a Fourier transform might improve the efficiency here, but I can't figure out how I'd do that with a vector shape!
Three ideas that have promise of being faster and/or more precise than brute force search:
Suppose you have a 3D physics engine. Define a "cone-shaped" surface where the apex is at, say, (0,0,-1), the black polygon boundary lies on the z=0 plane with its centroid at the origin, and the cone surface is formed by connecting the apex with semi-infinite rays through the polygon boundary. Think of a party hat turned upside down and crumpled to the shape of the black polygon.

Now constrain the rectangle to be parallel to the z=0 plane and initially so high above the cone (large z value) that it's easy to find a place where it's definitely "inside". Then let the rectangle fall downward under gravity, twisting about z and translating in x-y only as it touches the cone, staying inside all the way down until it settles and can't move any farther. The collision detection and force resolution of the physics engine take care of the complexities. When it settles, it will be in a position of maximal coverage of the black polygon in a local sense. (If it settles with z<0, then coverage is 100%.) For the convex case it's probably a global maximum.

To probabilistically improve the result for non-convex cases (like your example), you'd randomize the starting position, dropping the polygon many times and taking the best result. Note you don't really need a full-blown physics engine (though they certainly exist in open source). It's enough to use collision resolution to tell you how to rotate and translate the rectangle in a pseudo-physical way as it twists and slides uniformly down the z axis as far as possible.
Different physics model. Suppose the black area is an attractive field generator in 2D, following the usual inverse-square rule like gravity and magnetism. Now let the rectangle drift in a damping medium responding to this field. It ought to settle with a maximal area overlapping the black area. There are problems with "nulls", like at the center of a donut, but I don't think these can ever be stable equilibria. Can they?

The simulation could easily be done by modeling both shapes as particle swarms. Or, since the rectangle is a simple shape and you are a physicist, you could come up with a closed form for the integral of attractive force between a point and the rectangle; this way only the black shape needs representation as particles. Come to think of it, if you can come up with a closed form for the torque and linear attraction due to two triangles, then you can decompose both shapes with a (e.g. Delaunay) triangulation and get a precise answer. Unfortunately this discussion implies it can't be done analytically, so particle clouds may be the final solution. The good news is that modern processors, particularly GPUs, do very large particle computations with amazing speed.

Edit: I implemented this quick and dirty. It works great for convex shapes, but concavities create stable points that aren't what you want; running it on the example shape from the question shows this.
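A rough sketch of what that particle version can look like (TypeScript; the point-cloud sampling, softening constant, and damped integrator are assumptions, not the original quick-and-dirty code):

```typescript
// Both shapes sampled as 2D point clouds; the rectangle drifts and twists
// in a damping medium under inverse-square attraction to the black shape.
type P2 = { x: number; y: number };
type Pose = { x: number; y: number; angle: number };

function step(black: P2[], rect: P2[], pose: Pose, dt = 0.001): void {
  const cos = Math.cos(pose.angle), sin = Math.sin(pose.angle);
  let fx = 0, fy = 0, torque = 0;
  for (const p of rect) {
    // rectangle sample point in world space
    const wx = pose.x + p.x * cos - p.y * sin;
    const wy = pose.y + p.x * sin + p.y * cos;
    for (const q of black) {
      const dx = q.x - wx, dy = q.y - wy;
      const d2 = dx * dx + dy * dy + 1e-6; // softened to avoid blow-up at contact
      const f = 1 / (d2 * Math.sqrt(d2));  // inverse-square magnitude over distance
      fx += f * dx;
      fy += f * dy;
      // torque about the rectangle's center: r x F
      torque += (wx - pose.x) * f * dy - (wy - pose.y) * f * dx;
    }
  }
  // damping medium: velocity proportional to force, no momentum
  pose.x += fx * dt;
  pose.y += fy * dt;
  pose.angle += torque * dt;
}
```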
This problem is related to robot path planning, and looking at that literature may turn up some ideas. In RPP you have obstacles and a robot, and you want to find a path the robot can travel while avoiding and/or sliding along them. If the robot is asymmetric and can rotate, then 2D planning is done in a 3D (toroidal) configuration space (C-space), where one dimension is rotation (and so closes on itself). The idea is to "grow" the obstacles in C-space while shrinking the robot to a point. Growing the obstacles is achieved by computing Minkowski differences. If you decompose all polygons into convex shapes, there is a simple "edge merge" algorithm for computing the MD.

When the C-space representation is complete, any 1D path that does not pierce the "grown" obstacles corresponds to a continuous translation/rotation of the robot in world space that avoids the original obstacles. For your problem, the white area is the obstacle and the rectangle is the robot, and you're looking for any open point at all; that would correspond to 100% coverage. For the less-than-100% case, the C-space would have to be a function on 3D reflecting how "bad" the intersection of the robot with the obstacle is, rather than just a binary value; you're then looking for the least bad point. C-space representation is an open research topic. An octree might work here.
Lots of details to think through in both cases, and they may not pan out at all, but at least these are frameworks to think more about the problem. The physics idea is a bit like using simulated spring systems to do graph layout, which has been very successful.
I don't believe it is possible to find the precise maximum for this problem, so you will need to make do with an approximation.
You could potentially render the vector image into a bitmap and use Haar features for this - they provide a very quick O(1) way of calculating the average colour of a rectangular region.
You'd still need to perform this multiple times for different rotations and positions, but it would bring the algorithmic complexity down from a naive O(n^5) to O(n^3), which may be acceptably fast (with n here being the size of each degree of freedom you are scanning).
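The O(1) average comes from the summed-area (integral) image underneath Haar features; a minimal sketch, assuming a grayscale bitmap img[y][x] (TypeScript):

```typescript
// Precompute the summed-area table once; then any axis-aligned rectangle
// sum costs four lookups. I is padded with a zero row and column.
function integralImage(img: number[][], w: number, h: number): number[][] {
  const I = Array.from({ length: h + 1 }, () => new Array(w + 1).fill(0));
  for (let y = 1; y <= h; y++)
    for (let x = 1; x <= w; x++)
      I[y][x] = img[y - 1][x - 1] + I[y - 1][x] + I[y][x - 1] - I[y - 1][x - 1];
  return I;
}

// Sum of img over the half-open box [x0, x1) x [y0, y1), in O(1):
const rectSum = (I: number[][], x0: number, y0: number, x1: number, y1: number): number =>
  I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0];
```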
Have you thought about keeping track of the remaining white space inside the blocks, with something like if (whitespace !== 0)?
Here is what I have so far.
I have a 3D model and I made a triangle mesh for it. I've calculated and applied normals to the model too.
I want to apply different textures to the triangles. I also have the direction vector for each texture I need.
For mapping, I do this:
I just calculate the dot product of each triangle's normal with each texture's direction vector, and compare the results to see which texture is most suitable based on the dot product.
But I realised that it is not as straightforward as I thought, because two or more different triangles could have almost the same orientation in 3D space while facing opposite ways: one could be facing towards me and the other could be facing the opposite direction (parallel, but pointing the other way).
I think a better question is: how do I use the calculated dot product to distinguish which way the face of a triangle points, so that I know which image/texture should be used?
If the triangles are facing in opposite directions, their normals will also face in opposite directions, and the dot products will have opposite signs. The dot product therefore gives you enough information to distinguish between the opposite faces. I can't think of a simple test that would give better results than the dot product.
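A minimal sketch of selection by signed dot product (TypeScript; it assumes unit-length normals and direction vectors, and the names are illustrative):

```typescript
type Vec3 = { x: number; y: number; z: number };
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

function pickTexture(faceNormal: Vec3, textureDirs: Vec3[]): number {
  let best = -1;
  let bestDot = -Infinity;
  for (let i = 0; i < textureDirs.length; i++) {
    // Signed: a triangle facing away from a direction scores negatively,
    // so opposite-facing triangles can't pick the same texture.
    const d = dot(faceNormal, textureDirs[i]);
    if (d > bestDot) { bestDot = d; best = i; }
  }
  return best; // index of the best-aligned texture
}
```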
Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle with 90 degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge towards the horizon.
Given such a rectangle, how do you calculate the 2D angles of the edge vectors of a second face that is at right angles to the first?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
(example image: http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg)
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is, each corner is a right angle - and that they are at right angles to each other. So how do you determine the vectors of the second face using only knowledge of the position of the first?
It's quite easy: you should use basic two-point perspective rules.
First of all, you need two vanishing points, one to the left and one to the right of your object. They'll both sit on the same horizon line.
(diagram: http://img62.imageshack.us/img62/9669/perspectiveh.png)
After having placed the horizon (which sets the eye height) and the vanishing points (their positions set the field of view), you can easily calculate where your lines go. Of course, you need to be able to compute the line that passes through two points; I think you can do it.
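If it helps, here's a sketch of the underlying construction (TypeScript): a generic line-line intersection recovers a vanishing point from two marked edges, and the red edges are then the lines from the shared corners towards the other vanishing point.

```typescript
type P2 = { x: number; y: number };

// Intersection of the infinite line through (a1, a2) with the line
// through (b1, b2), or null if they are parallel.
function intersect(a1: P2, a2: P2, b1: P2, b2: P2): P2 | null {
  const d1 = { x: a2.x - a1.x, y: a2.y - a1.y };
  const d2 = { x: b2.x - b1.x, y: b2.y - b1.y };
  const den = d1.x * d2.y - d1.y * d2.x;
  if (Math.abs(den) < 1e-12) return null; // parallel: no vanishing point
  const t = ((b1.x - a1.x) * d2.y - (b1.y - a1.y) * d2.x) / den;
  return { x: a1.x + t * d1.x, y: a1.y + t * d1.y };
}

// e.g. the top and bottom edges of the marked (blue) face meet at one
// vanishing point; the red edges run from the corners towards the second
// vanishing point on the same horizon line.
```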
Honestly, what I'd do is a Hough transform on the image and work out a way to identify the red lines from it. To find the red lines, I'd look for any lines in the transform that touch your blue ones. The good thing about the transform is that you get the angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.
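For illustration, a minimal Hough accumulator sketch (TypeScript; it assumes the photo has already been binarized into an edge map, which is an assumption on my part - Matlab's hough, houghpeaks, and houghlines wrap the same idea):

```typescript
// Peaks in the accumulator are (theta, rho) line candidates, which is
// where the "angle for free" comes from; edges[y][x] is 0 or 1.
function hough(edges: number[][], w: number, h: number, nTheta = 180): number[][] {
  const maxRho = Math.ceil(Math.hypot(w, h));
  const acc: number[][] =
    Array.from({ length: nTheta }, () => new Array(2 * maxRho + 1).fill(0));
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (!edges[y][x]) continue;
      for (let t = 0; t < nTheta; t++) {
        const theta = (t * Math.PI) / nTheta;
        const rho = Math.round(x * Math.cos(theta) + y * Math.sin(theta));
        acc[t][rho + maxRho]++; // vote for the line (theta, rho) through (x, y)
      }
    }
  }
  return acc;
}
```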