I recently came across a problem where I had four circles (midpoints and radii) and had to calculate the area of the union of these circles.
Example image:
For two circles it's quite easy: I can just calculate the fraction of each circle's area that is not within the triangles, and then calculate the area of the triangles.
But is there a clever algorithm I can use when there is more than two circles?
Find all circle intersections on the outer perimeter (e.g. B,D,F,H on the following diagram). Connect them together with the centres of the corresponding circles to form a polygon. The area of the union of the circles is the area of the polygon + the area of the circle slices defined by consecutive intersection points and the circle center in between them. You'll need to also account for any holes.
I'm sure there is a clever algorithm, but here's a dumb one to save having to look for it;
put a bounding box around the circles;
generate random points within the bounding box;
figure out whether the random point is inside one of the circles;
compute the area by some simple addition and division (proportion_of_points_inside*area_of_bounding_box).
Sure it's dumb, but:
you can get as accurate an answer as you want, just generate more points;
it will work for any shapes for which you can calculate the inside/outside distinction;
it will parallelise beautifully so you can use all your cores.
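For concreteness, here's a minimal sketch of that dumb-but-effective estimator in JavaScript, assuming each circle is a plain {x, y, r} object (the function name and the samples parameter are just illustrative):

// Monte Carlo estimate of the union area (illustrative sketch).
function unionAreaMonteCarlo(circles, samples) {
  // Bounding box around all the circles
  var minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  circles.forEach(function (c) {
    minX = Math.min(minX, c.x - c.r); maxX = Math.max(maxX, c.x + c.r);
    minY = Math.min(minY, c.y - c.r); maxY = Math.max(maxY, c.y + c.r);
  });
  var boxArea = (maxX - minX) * (maxY - minY);

  var hits = 0;
  for (var i = 0; i < samples; i++) {
    var x = minX + Math.random() * (maxX - minX);
    var y = minY + Math.random() * (maxY - minY);
    // Inside at least one circle?
    var inside = circles.some(function (c) {
      return (x - c.x) * (x - c.x) + (y - c.y) * (y - c.y) <= c.r * c.r;
    });
    if (inside) hits++;
  }
  return boxArea * hits / samples;
}

The error shrinks roughly as 1/sqrt(samples), so each extra digit of accuracy costs about 100x more points.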
Ants Aasma's answer gave the basic idea, but I wanted to make it a little more concrete. Take a look at the five circles below and the way they've been decomposed.
The blue dots are circle centers.
The red dots are circle boundary intersections.
The red dots with white interior are circle boundary intersections that are not contained in any other circles.
Identifying these 3 types of dots is easy. Now construct a graph data structure where the nodes are the blue dots and the red dots with white interior. For every circle, put an edge between the circle's center (blue dot) and each of its intersections (red dots with white interior) on its boundary.
This decomposes the circle union into a set of polygons (shaded blue) and circular pie pieces (shaded green) that are pairwise disjoint and cover the original union (that is, a partition). Since each piece here is something that's easy to compute the area of, you can compute the area of the union by summing the pieces' areas.
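As a rough sketch (not a full implementation) of the building blocks this decomposition needs, assuming circles are {x, y, r} objects and points are {x, y}; the function names here are just illustrative:

// Boundary intersection points of two circles (0 or 2 points).
function circleIntersections(a, b) {
  var dx = b.x - a.x, dy = b.y - a.y;
  var d = Math.sqrt(dx * dx + dy * dy);
  if (d === 0 || d > a.r + b.r || d < Math.abs(a.r - b.r)) return [];
  var t = (a.r * a.r - b.r * b.r + d * d) / (2 * d); // distance from a's centre to the chord
  var h = Math.sqrt(Math.max(a.r * a.r - t * t, 0));
  var mx = a.x + t * dx / d, my = a.y + t * dy / d;
  return [
    { x: mx + h * dy / d, y: my - h * dx / d },
    { x: mx - h * dy / d, y: my + h * dx / d }
  ];
}

// A "red dot with white interior" is an intersection of circles i and j that
// is not strictly inside any other circle.
function insideSomeOtherCircle(p, circles, i, j) {
  return circles.some(function (c, k) {
    if (k === i || k === j) return false;
    var dx = p.x - c.x, dy = p.y - c.y;
    return dx * dx + dy * dy < c.r * c.r;
  });
}

// Area of a circular pie piece (green) spanning `angle` radians.
function sectorArea(r, angle) {
  return 0.5 * r * r * angle;
}

// Area of a simple polygon (blue piece) by the shoelace formula.
function polygonArea(pts) {
  var s = 0;
  for (var i = 0; i < pts.length; i++) {
    var p = pts[i], q = pts[(i + 1) % pts.length];
    s += p.x * q.y - q.x * p.y;
  }
  return Math.abs(s) / 2;
}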
As a different solution from the previous ones, you could produce an estimate with arbitrary precision using a quadtree.
This also works for any shape union if you can tell if a square is inside or outside or intersects the shape.
Each cell has one of three states: empty, full, or partial.
The algorithm consists of "drawing" the circles into the quadtree, starting at a low resolution (for instance 4 cells, all marked empty). Each cell is either:
inside at least one circle, then mark the cell as full,
outside all circles, mark the cell as empty,
else mark the cell as partial.
When it's done, you can compute an estimate of the area: the full cells give a lower bound, the cells that are not empty (full plus partial) give an upper bound, and the partial cells give the maximum error.
If the error is too big for you, you refine the partial cells until you get the right precision.
I think this will be easier to implement than the geometric method, which may require handling a lot of special cases.
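The core of the quadtree version is classifying one square cell against the circles. A sketch, assuming {x, y, r} circles and axis-aligned cells given by their minimum corner and side length (names are illustrative):

// Returns 'full', 'empty' or 'partial' for one cell.
function classifyCell(cx0, cy0, size, circles) {
  var corners = [
    [cx0, cy0], [cx0 + size, cy0],
    [cx0, cy0 + size], [cx0 + size, cy0 + size]
  ];
  // Full: some circle contains all four corners (discs are convex, so it then
  // contains the whole cell).
  var full = circles.some(function (c) {
    return corners.every(function (p) {
      var dx = p[0] - c.x, dy = p[1] - c.y;
      return dx * dx + dy * dy <= c.r * c.r;
    });
  });
  if (full) return 'full';
  // Empty: every circle misses the cell (the closest point of the cell to the
  // circle's centre is farther away than the radius).
  var empty = circles.every(function (c) {
    var nx = Math.max(cx0, Math.min(c.x, cx0 + size));
    var ny = Math.max(cy0, Math.min(c.y, cy0 + size));
    var dx = nx - c.x, dy = ny - c.y;
    return dx * dx + dy * dy > c.r * c.r;
  });
  return empty ? 'empty' : 'partial';
}

The lower bound is the summed area of the full cells, the upper bound adds the partial cells, and refining just re-runs this on the four children of each partial cell.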
I love the approach to the case of 2 intersecting circles -- here's how I'd use a slight variation of the same approach for the more complex example.
It might give better insight into generalising the algorithm for larger numbers of semi-overlapping circles.
The difference here is that I start by linking the centres (so there's a line segment between the centres of the circles, rather than between the places where the circles intersect). I think this lets it generalise better.
(in practice, maybe the Monte Carlo method is worthwhile)
(source: secretGeek.net)
If you want a discrete (as opposed to a continuous) answer, you could do something similar to a pixel painting algorithm.
Draw the circles on a grid, and then color each cell of the grid if it's mostly contained within a circle (i.e., at least 50% of its area is inside one of the circles). Do this for the entire grid (where the grid spans all of the area covered by the circles), then count the number of colored cells in the grid.
Hmm, very interesting problem. My approach would probably be something along the lines of the following:
Work out a way of computing the area of intersection between an arbitrary number of circles, i.e. if I have 3 circles, I need to be able to work out the area of the intersection of those circles. The "Monte Carlo" method would be a good way of approximating this (http://local.wasp.uwa.edu.au/~pbourke/geometry/circlearea/).
Eliminate any circles that are contained entirely within another, larger circle (compare the radii and the modulus of the distance between the two centres). I don't think this is mandatory.
Choose 2 circles (call them A and B) and work out the total area using this formula:
(this is true for any shape, be it circle or otherwise)
area(A∪B) = area(A) + area(B) - area(A∩B)
Where A ∪ B means A union B and A ∩ B means A intersect B (you can work this out from the first step).
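The only non-obvious ingredient is area(A∩B) for a pair of circles, which has a closed form (the lens is the sum of two circular segments). A sketch, assuming {x, y, r} circles:

// Exact area of the intersection of two circles.
function circleIntersectionArea(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y;
  var d = Math.sqrt(dx * dx + dy * dy);
  if (d >= a.r + b.r) return 0;                   // disjoint
  if (d <= Math.abs(a.r - b.r)) {                 // one inside the other
    var r = Math.min(a.r, b.r);
    return Math.PI * r * r;
  }
  // Angles subtended by the common chord at each centre
  var alpha = 2 * Math.acos((d * d + a.r * a.r - b.r * b.r) / (2 * d * a.r));
  var beta  = 2 * Math.acos((d * d + b.r * b.r - a.r * a.r) / (2 * d * b.r));
  // Lens = circular segment of A + circular segment of B
  return 0.5 * a.r * a.r * (alpha - Math.sin(alpha))
       + 0.5 * b.r * b.r * (beta  - Math.sin(beta));
}

With that, area(A∪B) = area(A) + area(B) - circleIntersectionArea(A, B).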
Now keep on adding circles and keep on working out the area added as a sum / subtraction of areas of circles and areas of intersections between circles. For example for 3 circles (call the extra circle C) we work out the area using this formula:
(This is the same as above where A has been replaced with A∪B)
area((A∪B)∪C) = area(A∪B) + area(C) - area((A∪B)∩C)
Where area(A∪B) we just worked out, and area((A∪B)∩C) can be found:
area((A∪B)∩C) = area((A∩C)∪(B∩C)) = area(A∩C) + area(B∩C) - area((A∩C)∩(B∩C)) = area(A∩C) + area(B∩C) - area(A∩B∩C)
Where again you can find area(A∩B∩C) from above.
The tricky bit is the last step - the more circles get added the more complex it becomes. I believe there is an expansion for working out the area of an intersection with a finite union, or alternatively you may be able to recursively work it out.
Also, with regard to using Monte Carlo to approximate the area of intersection, I believe it's possible to reduce the intersection of an arbitrary number of circles to the intersection of 4 of those circles, which can be calculated exactly (no idea how to do this, however).
There is probably a better way of doing this btw - the complexity increases significantly (possibly exponentially, but I'm not sure) for each extra circle added.
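For what it's worth, the expansion mentioned above is just the inclusion-exclusion principle over every non-empty subset of circles. A sketch of the bookkeeping (exponential in the number of circles), where intersectionArea(subset) is assumed to come from step 1, e.g. a Monte Carlo estimate:

// Inclusion-exclusion: add areas of odd-sized intersections, subtract even-sized ones.
function unionAreaInclusionExclusion(circles, intersectionArea) {
  var total = 0;
  var n = circles.length;
  for (var mask = 1; mask < (1 << n); mask++) {
    var subset = [];
    for (var i = 0; i < n; i++) {
      if (mask & (1 << i)) subset.push(circles[i]);
    }
    var sign = (subset.length % 2 === 1) ? 1 : -1;
    total += sign * intersectionArea(subset);
  }
  return total;
}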
There are efficient solutions to this problem using what are known as power diagrams. This is really heavy math, though, and not something that I would want to tackle offhand. For an "easy" solution, look up line-sweep algorithms. The basic principle here is that you divide the figure up into strips, where calculating the area in each strip is relatively easy.
So, on the figure containing all of the circles with nothing rubbed out, draw a horizontal line at each position which is either the top of a circle, the bottom of a circle or the intersection of 2 circles. Notice that inside these strips, all of the areas you need to calculate look the same: a "trapezium" with two sides replaced by circular segments. So if you can work out how to calculate such a shape, you just do it for all the individual shapes and add them together. The complexity of this naive approach is O(N^3), where N is the number of circles in the figure. With some clever data structure use, you could improve this line-sweep method to O(N^2 * log(N)), but unless you really need to, it's probably not worth the trouble.
The pixel-painting approach (as suggested by @Loadmaster) is superior to the mathematical solution in a variety of ways:
Implementation is much simpler. The above problem can be solved in less than 100 lines of code, as this JSFiddle solution demonstrates (mostly because it’s conceptually much simpler, and has no edge cases or exceptions to deal with).
It adapts easily to more general problems. It works with any shape, regardless of morphology, as long as it’s renderable with 2D drawing libraries (i.e., “all of them!”) — circles, ellipses, splines, polygons, you name it. Heck, even bitmap images.
The complexity of the pixel-painting solution is roughly O(n), compared to roughly O(n^2) for the mathematical solution. This means it will perform better as the number of shapes increases.
And speaking of performance, you’ll often get hardware acceleration for free, as most modern 2D libraries (like HTML5’s canvas, I believe) will offload rendering work to graphics accelerators.
The one downside to pixel-painting is the finite accuracy of the solution. But that is tunable by simply rendering to larger or smaller canvases as the situation demands. Note, too, that anti-aliasing in the 2D rendering code (often turned on by default) will yield better-than-pixel-level accuracy. So, for example, rendering a 100x100 figure into a canvas of the same dimensions should, I think, yield accuracy on the order of 1 / (100 x 100 x 255) = .000039% ... which is probably “good enough” for all but the most demanding problems.
<p>Area computation of arbitrary figures as done thru pixel-painting, in which a complex shape is drawn into an HTML5 canvas and the area determined by comparing the number of white pixels found in the resulting bitmap. See javascript source for details.</p>
<canvas id="canvas" width="80" height="100"></canvas>
<p>Area = <span id="result"></span></p>
// Get HTML canvas element (and context) to draw into
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
// Lil' circle drawing utility
function circle(x,y,r) {
ctx.beginPath();
ctx.arc(x, y, r, 0, Math.PI*2);
ctx.fill();
}
// Clear canvas (to black)
ctx.fillStyle = 'black';
ctx.fillRect(0, 0, canvas.width, canvas.height);
// Fill shape (in white)
ctx.fillStyle = 'white';
circle(40, 50, 40);
circle(40, 10, 10);
circle(25, 15, 12);
circle(35, 90, 10);
// Get bitmap data
var id = ctx.getImageData(0, 0, canvas.width, canvas.height);
var pixels = id.data; // Flat array of RGBA bytes
// Determine area by counting the white pixels
for (var i = 0, area = 0; i < pixels.length; i += 4) {
area += pixels[i]; // Red channel (same as green and blue channels)
}
// Normalize by the max white value of 255
area /= 255;
// Output result
document.getElementById('result').innerHTML = area.toFixed(2);
I have been working on a problem of simulating overlapping star fields, attempting to estimate the true star counts from the actual disk areas in dense fields, where the larger bright stars can mask fainter ones. I too had hoped to be able to do this by rigorous formal analysis, but was unable to find an algorithm for the task. I solved it by generating the star fields on a blue background as green disks, whose diameter was determined by a probability algorithm. A simple routine can pair them to see if there's an overlap (turning the star pair yellow); then a pixel count of the colours generates the observed area to compare to the theoretical area. This then generates a probability curve for the true counts. Brute force maybe, but it seems to work OK.
(source: 2from.com)
Here's an algorithm that should be easy to implement in practice, and could be adjusted to produce arbitrarily small error:
Approximate each circle by a regular polygon centered at the same point
Calculate the polygon which is the union of the approximated circles
Calculate the area of the merged polygon
Steps 2 and 3 can be carried out using standard, easy-to-find algorithms from computational geometry.
Obviously, the more sides you use for each approximating polygon, the closer to exact your answer would be. You could approximate using inscribed and circumscribed polygons to get bounds on the exact answer.
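Step 1 might look like the sketch below (an inscribed regular n-gon, so on its own it gives a lower bound); step 2 would use a polygon clipping library, and step 3 is the shoelace formula sketched in an earlier answer. The function name and the choice of 64 sides are just illustrative.

// Approximate a circle {x, y, r} by an inscribed regular polygon with n sides.
function circleToPolygon(c, n) {
  n = n || 64;
  var pts = [];
  for (var i = 0; i < n; i++) {
    var t = 2 * Math.PI * i / n;
    pts.push({ x: c.x + c.r * Math.cos(t), y: c.y + c.r * Math.sin(t) });
  }
  return pts;
}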
I found this link which may be useful. There does not seem to be a definitive answer though.
Google answers. Another reference for three circles is Haruki's theorem. There is a paper there as well.
Depending on what problem you are trying to solve, it could be sufficient to get an upper and lower bound. An upper bound is easy: just the sum of all the circles' areas. For a lower bound you can pick a single radius such that none of the circles overlap. To improve on that, find the largest radius (up to the actual radius) for each circle so that it doesn't overlap any other. It should also be pretty trivial to remove any completely overlapped circles (circle B is entirely inside circle A when |P_a - P_b| + r_b <= r_a, where P_a is the center of circle A, P_b is the center of circle B, and r_a, r_b are the radii), and this improves both the upper and lower bound. You could also get a better upper bound if you use your pair formula on arbitrary pairs instead of just the sum of all the circles. There might be a good way to pick the "best" pairs (the pairs that result in the minimal total area).
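A sketch of those bounds, assuming {x, y, r} circles: the upper bound is the plain sum of areas, and the lower bound shrinks each radius to at most half the distance to the nearest other centre, so the shrunk discs cannot overlap.

function unionAreaBounds(circles) {
  var upper = 0, lower = 0;
  for (var i = 0; i < circles.length; i++) {
    upper += Math.PI * circles[i].r * circles[i].r;
    // Shrink circle i so it can't overlap any other shrunk circle
    var shrunk = circles[i].r;
    for (var j = 0; j < circles.length; j++) {
      if (i === j) continue;
      var dx = circles[i].x - circles[j].x, dy = circles[i].y - circles[j].y;
      shrunk = Math.min(shrunk, Math.sqrt(dx * dx + dy * dy) / 2);
    }
    lower += Math.PI * shrunk * shrunk;
  }
  return { lower: lower, upper: upper };
}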
Given an upper and lower bound you might be able to better tune a Monte-carlo approach, but nothing specific comes to mind. Another option (again depending on your application) is to rasterize the circles and count pixels. It is basically the Monte-carlo approach with a fixed distribution.
I've got a way to get an approximate answer if you know that all your circles are going to be within a particular region, i.e. every point of every circle is inside a box whose dimensions you know. This assumption would be valid, for example, if all the circles are in an image of known size. If you can make this assumption, divide the region which contains your image into 'pixels'. For each pixel, compute whether it is inside at least one of the circles. If it is, increment a running total by one. Once you are done, you know how many pixels are inside at least one circle, and you also know the area of each pixel, so you can calculate the total area of all the overlapping circles.
By increasing the 'resolution' of your region (the number of pixels), you can improve your approximation.
Additionally, if the size of the region containing your circles is bounded, and you keep the resolution (number of pixels) constant, the algorithm runs in O(n) time (n is the number of circles). This is because for each pixel, you have to check whether it is inside each one of your n circles, and the total number of pixels is bounded.
This can be solved using Green's theorem, with a complexity of O(n^2 log n).
If you're not familiar with Green's theorem and want to know more, here is the video and notes from Khan Academy. But for the sake of our problem, I think my description will be enough.
Green's theorem relates a closed line integral to an area integral:

∮_C (L dx + M dy) = ∬_R (∂M/∂x - ∂L/∂y) dA

If I put L and M such that

∂M/∂x - ∂L/∂y = 1

(for example L = 0 and M = x), then the RHS is simply the area of the region R, and it can be obtained by solving the closed integral on the LHS. This is exactly what we're going to do.
So integrating along the path in the anticlockwise direction gives us the area of the region, and integrating along the clockwise direction gives us the negative of the area. So
AreaOfUnion = (Integration along red arcs in anticlockwise direction + Integration along blue arcs in clockwise direction)
But the cool trick is: if, for each circle, we integrate only the arcs which are not inside any other circle, we get our required area, i.e. we get integration in the anticlockwise direction along all red arcs and integration along all blue arcs in the clockwise direction. JOB DONE!!!
Even the case where a circle doesn't intersect any other circle is taken care of.
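For illustration, here's a sketch of the per-arc contribution (this is not the linked C++ code). It takes L = 0 and M = x, so the closed integral of x dy gives the area; each arc of a circle with centre x-coordinate cx and radius r runs anticlockwise from angle theta1 to theta2 (only cx matters, because we integrate x dy).

// Contribution of one arc to the contour integral of x dy.
// x = cx + r*cos(t), dy = r*cos(t) dt, so we integrate (cx + r*cos t) * r*cos t dt.
function arcAreaContribution(cx, r, theta1, theta2) {
  function antiderivative(t) {
    return cx * r * Math.sin(t) + r * r * (t / 2 + Math.sin(2 * t) / 4);
  }
  return antiderivative(theta2) - antiderivative(theta1);
}
// Summing this over every arc that is not inside any other circle gives the
// union area; a lone circle contributes its full boundary, i.e. pi * r * r.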
Here is the GitHub link to my C++ Code
So I am using Kinect with Unity.
With the Kinect, we detect a hand gesture, and while it is active we draw a line on the screen that follows wherever the hand goes. Every update, the location is stored as the newest (and last) point in the line. However, the lines can often look very choppy.
Here is a general picture that shows what I want to achieve:
The red is the original line, and the purple is the new smoothed line. If the user suddenly stops and changes direction, we think we don't want the line to do exactly that, but instead to make a rapid turn or a loop.
My current solution uses cubic Bézier curves, and only uses points that are at least X distance away from each other (with Y points being placed between those two points using the cubic Bézier). However, there are two problems with this, amongst others:
1) It often doesn't preserve the curve out to the distance the user drew it; for example, if the user suddenly stops a line and reverses direction, there is a pretty good chance the line won't extend to the point where the user reversed direction.
2) There is also a chance that the selected "good" point is actually a "bad" random jump point.
So I've thought about other solutions. One involves limiting the maximum angle between points (with 0 degrees being a straight line). However, if a point's angle is beyond the limit, the math behind lowering the angle while still following the drawn line as closely as possible seems complicated. But maybe it's not. Either way, I'm not sure what to do and am looking for help.
Keep in mind this needs to be done in real time as the user is drawing the line.
You can try the Ramer-Douglas-Peucker algorithm to simplify your curve:
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
It's a simple algorithm, and parameterization is reasonably intuitive. You may use it as a preprocessing step or maybe after one or more other algorithms. In any case it's a good algorithm to have in your toolbox.
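For reference, a compact sketch of Ramer-Douglas-Peucker, assuming points are {x, y} objects and epsilon is the largest deviation you're willing to tolerate:

// Perpendicular distance from point p to the infinite line through a and b.
function perpendicularDistance(p, a, b) {
  var dx = b.x - a.x, dy = b.y - a.y;
  var len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) return Math.sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Ramer-Douglas-Peucker simplification.
function rdp(points, epsilon) {
  if (points.length < 3) return points.slice();
  var first = points[0], last = points[points.length - 1];
  var maxDist = 0, index = 0;
  for (var i = 1; i < points.length - 1; i++) {
    var d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist > epsilon) {
    var left = rdp(points.slice(0, index + 1), epsilon);
    var right = rdp(points.slice(index), epsilon);
    return left.slice(0, -1).concat(right); // drop the duplicated split point
  }
  return [first, last];                     // everything in between is within epsilon
}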
Using angles to reject "jump" points may be tricky, as you've seen. One option is to compare the total length of N line segments to the straight-line distance between the extreme end points of that chain of N line segments. You can threshold the ratio of (totalLength/straightLineLength) to identify line segments to be rejected. This would be a quick calculation, and it's easy to understand.
If you want to take line segment lengths and segment-to-segment angles into consideration, you could treat the line segments as vectors and compute the cross product. If you imagine the two vectors as defining a parallelogram, and the area of that parallelogram is a reasonable way to accept/reject a point, then the cross product is another simple and quick calculation.
https://www.math.ucdavis.edu/~daddel/linear_algebra_appl/Applications/Determinant/Determinant/node4.html
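A sketch of that parallelogram test, assuming {x, y} points; the function name and areaThreshold are made up for illustration:

// Treat the two consecutive segments prev->curr and curr->next as vectors;
// the 2D cross product is the signed area of the parallelogram they span.
function isSuspectPoint(prev, curr, next, areaThreshold) {
  var v1x = curr.x - prev.x, v1y = curr.y - prev.y;
  var v2x = next.x - curr.x, v2y = next.y - curr.y;
  var cross = v1x * v2y - v1y * v2x;
  return Math.abs(cross) > areaThreshold;
}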
If you only have a few dozen points, you could randomly eliminate one point at a time, generate your spline fits, and then calculate the point-to-spline distances for all the original points. Given all those point-to-spline distances you can generate a metric (e.g. mean distance) that you'd like to minimize: the best fit would result from eliminating points (Pn, Pn+k, ...) resulting in a spline fit quality S. This technique wouldn't scale well with more points, but it might be worth a try if you break each chain of line segments into groups of maybe half a dozen segments each.
Although it's overkill for this problem, I'll mention that Euler curves can be good fits to "natural" curves. What's nice about Euler curves is that you can generate an Euler curve fit by two points in space and the tangents at those two points in space. The code gets hairy, but Euler curves (a.k.a. aesthetic curves, if I remember correctly) can generate better and/or more useful fits to natural curves than Bezier nth degree splines.
https://en.wikipedia.org/wiki/Euler_spiral
I was wondering if anyone here knows how to smooth a polygon in Maya? I've tried two methods which I found online. One of them is 'Vertice Averaging' and the other 'Smooth', which are both under the 'Mesh' option.
Vertice Averaging caused my polygons to have 'gaps' or 'holes' between the triangles, which I do not want to happen.
Smooth, meanwhile, causes the polygon's faces to have 4 vertices instead of the original 3, which I do not want either, as I need a polygon with triangle faces.
http://img.photobucket.com/albums/v483/dragonlancer/PolygonAveragingampSmoothing.jpg
And to whoever told me that it is a bug: I tried, but it doesn't work =[
You said you wanted to maintain your tris, so you could switch the smooth option 'Add Divisions' from exponential to linear.
If you're getting gaps, it's because the original mesh has verts which are not welded together. Try Edit Mesh > Merge with a small tolerance value before running Average or Smooth.
In general you'll get more pleasant results if you smooth a quadrangular mesh instead of a trimesh - when you subdivide quads, the results are very similar to NURBS curves, whereas smoothed triangles always tend to look like old-fashioned 1990s game graphics.
Let me explain my problem:
I have a black vector shape (let's say it's a series of joined, straight lines for now, but it'd be nice if I could also support quadratic curves).
I also have a rectangle of a predefined width and height. I'm going to place it on top of the black shape, and then take the union of the two.
My first issue is that I don't know how to quickly extract vector unions, but I think there is a well-defined formula I can figure out for myself.
My second, and more tricky issue is how to efficiently detect the position the rectangle needs to be in (i.e., what translation and rotation are needed by the matrices), in order to maximize the black, remaining after the union (see figure, below).
The red outlined shape below is ~33% black; the green is something like 85%; and there are positions for this shape & rectangle wherein either could have 100% coverage.
Obviously, I can brute-force this by trying every translation and rotation value for every point where at least part of the rectangle is touching the black shape, then keep track of the one with the most black coverage. The problem is, I can only try a finite number of positions (and may therefore miss the maximum). Apart from that, it feels very inefficient!
Can you think of a more efficient way of tackling this problem?
Something from my Uni days tells me that a Fourier transform might improve the efficiency here, but I can't figure out how I'd do that with a vector shape!
Three ideas that have promise of being faster and/or more precise than brute force search:
Suppose you have a 3d physics engine. Define a "cone-shaped" surface where the apex is at say (0,0,-1), the black polygon boundary lies on the z=0 plane with its centroid at the origin, and the cone surface is formed by connecting the apex with semi-infinite rays through the polygon boundary. Think of a party hat turned upside down and crumpled to the shape of the black polygon.

Now constrain the rectangle to be parallel to the z=0 plane and initially so high above the cone (large z value) that it's easy to find a place where it's definitely "inside". Then let the rectangle fall downward under gravity, twisting about z and translating in x-y only as it touches the cone, staying inside all the way down until it settles and can't move any farther. The collision detection and force resolution of the physics engine take care of the complexities. When it settles, it will be in a position of maximal coverage of the black polygon in a local sense. (If it settles with z<0, then coverage is 100%.) For the convex case it's probably a global maximum. To probabilistically improve the result for non-convex cases (like your example), you'd randomize the starting position, dropping the polygon many times and taking the best result.

Note you don't really need a full-blown physics engine (though they certainly exist in open source). It's enough to use collision resolution to tell you how to rotate and translate the rectangle in a pseudo-physical way as it twists and slides uniformly down the z axis as far as possible.
Different physics model. Suppose the black area is an attractive field generator in 2d, following the usual inverse-square rule like gravity and magnetism. Now let the rectangle drift in a damping medium responding to this field. It ought to settle with a maximal area overlapping the black area. There are problems with "nulls" like at the center of a donut, but I don't think these can ever be stable equilibria. Can they?

The simulation could be easily done by modeling both shapes as particle swarms. Or, since the rectangle is a simple shape and you are a physicist, you could come up with a closed form for the integral of attractive force between a point and the rectangle. This way only the black shape needs representation as particles. Come to think of it, if you can come up with a closed form for torque and linear attraction due to two triangles, then you can decompose both shapes with a (e.g. Delaunay) triangulation and get a precise answer. Unfortunately this discussion implies it can't be done analytically. So particle clouds may be the final solution. The good news is that modern processors, particularly GPUs, do very large particle computations with amazing speed.

Edit: I implemented this quick and dirty. It works great for convex shapes, but concavities create stable points that aren't what you want. Using the example:
This problem is related to robot path planning, so looking at that literature may turn up some ideas. In RPP you have obstacles and a robot, and you want to find a path the robot can travel while avoiding and/or sliding along them. If the robot is asymmetric and can rotate, then 2d planning is done in a 3d (toroidal) configuration space (C-space) where one dimension is rotation (so it closes on itself). The idea is to "grow" the obstacles in C-space while shrinking the robot to a point. Growing the obstacles is achieved by computing Minkowski differences. If you decompose all polygons into convex shapes, then there is a simple "edge merge" algorithm for computing the MD.

When the C-space representation is complete, any 1d path that does not pierce the "grown" obstacles corresponds to a continuous translation/rotation of the robot in world space that avoids the original obstacles. For your problem the white area is the obstacle and the rectangle is the robot. You're looking for any open point at all; this would correspond to 100% coverage. For the less-than-100% case, the C-space would have to be a function on 3d that reflects how "bad" the intersection of the robot is with the obstacle, rather than just a binary value. You're looking for the least bad point. C-space representation is an open research topic. An octree might work here.
Lots of details to think through in both cases, and they may not pan out at all, but at least these are frameworks to think more about the problem. The physics idea is a bit like using simulated spring systems to do graph layout, which has been very successful.
I don't believe it is possible to find the precise maximum for this problem, so you will need to make do with an approximation.
You could potentially render the vector image into a bitmap and use Haar features for this - they provide a very quick O(1) way of calculating the average colour of a rectangular region.
You'd still need to perform this multiple times for different rotations and positions, but it would bring the algorithmic complexity down from a naive O(n^5) to O(n^3), which may be acceptably fast (with n here being the size of each degree of freedom you are scanning).
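The O(1) rectangle sums behind Haar features come from a summed-area table (integral image). A sketch for axis-aligned rectangles, assuming a grayscale bitmap stored row-major as width * height values; handling rotated rectangles would mean rotating the bitmap (or using rotated tables).

// Build the summed-area table: sat[y*width + x] = sum of all pixels with coords <= (x, y).
function buildIntegralImage(pixels, width, height) {
  var sat = new Float64Array(width * height);
  for (var y = 0; y < height; y++) {
    var rowSum = 0;
    for (var x = 0; x < width; x++) {
      rowSum += pixels[y * width + x];
      sat[y * width + x] = rowSum + (y > 0 ? sat[(y - 1) * width + x] : 0);
    }
  }
  return sat;
}

// Sum of pixels in the inclusive rectangle [x0..x1] x [y0..y1], in O(1).
function rectSum(sat, width, x0, y0, x1, y1) {
  function at(x, y) { return (x < 0 || y < 0) ? 0 : sat[y * width + x]; }
  return at(x1, y1) - at(x0 - 1, y1) - at(x1, y0 - 1) + at(x0 - 1, y0 - 1);
}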
Have you thought about keeping track of the remaining white space inside the blocks, with something like if (whitespace !== 0)?
Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle with 90 degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge towards the horizon.
Given such a rectangle, how do you calculate the 2D angle of the edge vectors of another face that is at right angles to it?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
example http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is that each corner is a right angle - and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?
It's quite easy; you should use basic two-point perspective rules.
First of all you need 2 vanishing points, one to the left and one to the right of your object. They'll both stay on the same horizon line.
alt text http://img62.imageshack.us/img62/9669/perspectiveh.png
After having placed the horizon (which sets the viewing height) and the vanishing points (their positions determine the field of view), you can easily calculate where your lines go. Of course, you need to be able to compute the line that passes through two points: I think you can do that.
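In code, each vanishing point is just the intersection of two image lines that are parallel on the building (e.g. the top and bottom edges of the marked blue face). A sketch, assuming {x, y} points:

// Intersection of the line through p1, p2 with the line through p3, p4.
// Returns null if the lines are parallel in the image.
function lineIntersection(p1, p2, p3, p4) {
  var d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
  if (Math.abs(d) < 1e-12) return null;
  var a = p1.x * p2.y - p1.y * p2.x;
  var b = p3.x * p4.y - p3.y * p4.x;
  return {
    x: (a * (p3.x - p4.x) - (p1.x - p2.x) * b) / d,
    y: (a * (p3.y - p4.y) - (p1.y - p2.y) * b) / d
  };
}
// Each red edge then runs from a visible corner of the blue face towards the
// other vanishing point.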
Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.