When triangulating a huge set of points (say 10 million), you need to triangulate one chunk at a time after subdividing the problem using a quad-tree or an oct-tree.
So far so good; we are now looking for a smart approach to fill the small straight gaps between the individual meshes. Do you know a good one?
Thanks.
Rather than finishing by welding together the separate parts of the mesh, why not start by decomposing the point set into overlapping chunks? This way your problem becomes one of removing unwanted edges rather than finding missing ones, at the expense of duplicating the computation of the mesh along the borders. This might be easier, though I suspect its computational complexity is no different.
I believe that most standard approaches to triangulation cannot be expected to produce the same mesh across the boundary for the two overlapping chunks. However, I also believe (without proof) that the computation of the mesh across the boundary between (the interiors of neighbouring) chunks is increasingly likely to produce the same triangulation as the depth of the overlap increases.
Think of an existing triangulation of a set of points, and add a new point outside the hull of the existing points. Triangulating the extended set of points will, in most cases, require only local (in some vague sense) adjustment of the existing mesh. Similarly, deleting a point at the edge of an existing mesh will rarely affect the triangulation at the centre of the mesh.
If this ad-hoc approach doesn't appeal to you, use your favourite search engine and look for "parallel Delaunay triangulation".
If the mesh is connected using linear elements (straight sides), the only way you can have gaps is if endpoints on adjacent edges aren't coincident.
You can check within some tolerance sphere whether two points should be made one, but the tolerance has to be smaller than the shortest edge in your mesh or you'll collapse elements.
The smartest thing I can think of is to parallelize the job. Break the mesh into one chunk per thread/process and do the tolerance check on each one.
It might be a good map-reduce job. Or perhaps GPUs and CUDA would be a good way to go.
When you calculate the distance between two points you can forgo the expensive square root and just compare the squared distance against the squared tolerance.
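For illustration, a minimal sketch of that check in Python (the function name and the tuple representation are just assumptions for the example); remember to keep the tolerance below the shortest edge length, as noted above:

```python
def within_tolerance(p, q, tol):
    """True if points p and q are close enough to be welded into one.

    Compares the squared distance against the squared tolerance,
    so no square root is needed.
    """
    dx = p[0] - q[0]
    dy = p[1] - q[1]
    dz = p[2] - q[2]
    return dx * dx + dy * dy + dz * dz <= tol * tol
```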
I draw a vector geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they are no longer in their original positions because of print errors or bad printer calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice I need to warp the geometry in order to obtain the real geometry printed on the paper with the same print error that I have on the calibration points.
The distortion is given by the physical distortion of the material (not paper but cloth) during the print process. I can't know how much the material will distort during the print.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometrical transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise local linear / local weighted mean transformations, which need at least 4-6 control points. The more control points you have, the better, in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them if you specify a class (affine or piecewise local linear). See for example fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess it is some kind of optimization. It should be easy to find implementations for other programming languages (Python, C, Java).
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should do well for your images. They give you a good initial estimate for the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and allows refining.
So I advise you to first just try to register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language, you will find many implementations that do the job.
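If you work in Python rather than Matlab, here is a rough sketch with scikit-image (the point coordinates are made up for the example; 'piecewise-affine' can be passed instead of 'affine' for local warps):

```python
import numpy as np
from skimage import transform

# Paired control points: row i of src corresponds to row i of dst.
src = np.array([[10.0, 10.0], [200.0, 15.0], [15.0, 300.0], [210.0, 310.0]])
dst = np.array([[12.0, 11.0], [198.0, 20.0], [20.0, 295.0], [205.0, 315.0]])

# Least-squares fit of an affine transformation to the control point pairs.
tform = transform.estimate_transform('affine', src, dst)
print(tform.params)          # 3x3 homogeneous transformation matrix

# Apply the fitted transformation to the vertices of the original geometry
# to estimate where they actually ended up on the printed material.
geometry = np.array([[50.0, 60.0], [120.0, 80.0], [90.0, 200.0]])
print(tform(geometry))
```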
I am trying to solve a programming interview question that requires one to find the maximum number of points that lie on the same straight line in a 2D plane. I have looked up the solutions on the web. All of them discuss an O(N^2) solution using hashing, such as the one at this link: Here
I understand the part where the common gradient is used to check for collinear points, since that is a common mathematical technique. However, the solution points out that one must beware of vertical lines and overlapping points. I am not sure how these cases cause problems. Can't I just store the gradient of vertical lines as infinity (a large number)?
Hint:
Three distinct points are collinear if
x_1*(y_2-y_3)+x_2*(y_3-y_1)+x_3*(y_1-y_2) = 0
No need to check for slopes or anything else. You do need to eliminate duplicate points from the set before the search begins, though.
So pick a pair of points and find all other points that are collinear with them, storing each such set in a list of lines. Then do the same for the remaining points, and finally compare which lines have the most points.
The first time around you have n-2 tests. The second time around you have n-4 tests, because there is no point in revisiting the first two points. The next time n-6, and so on, for a total of n/2 rounds. In the worst case this results in (n/2)*(n/2-1) operations, which is O(n^2) complexity.
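A direct Python sketch of the determinant test (written as a plain triple loop for clarity, so it is O(n^3) rather than the pair-bookkeeping scheme sketched above; with integer coordinates the test is exact, with no floating-point slope issues):

```python
def collinear(p, q, r):
    # Zero determinant <=> the three points lie on one line.
    return (p[0] * (q[1] - r[1])
            + q[0] * (r[1] - p[1])
            + r[0] * (p[1] - q[1])) == 0

def max_points_on_line(points):
    """points: iterable of (x, y) tuples."""
    pts = list(set(points))          # eliminate duplicates first
    n = len(pts)
    if n <= 2:
        return n
    best = 2
    for i in range(n - 1):
        for j in range(i + 1, n):
            count = 2
            for k in range(n):
                if k != i and k != j and collinear(pts[i], pts[j], pts[k]):
                    count += 1
            best = max(best, count)
    return best
```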
PS. Whoever decided the canonical answer uses slopes knows very little about planar geometry. People invented homogeneous coordinates for points and lines in the plane precisely in order to represent vertical lines and other degenerate situations.
I'm writing a data analysis program and part of it requires finding the volume of a shape. The shape information comes in the form of a list of points, giving the radius and the angular coordinates of each point.
If the data points were uniformly distributed in coordinate space I would be able to perform the integral, but unfortunately the data points are basically randomly distributed.
My inefficient approach would be to find the nearest neighbours to each point and stitch the shape together like that, finding the volume of the stitched together parts.
Does anyone have a better approach to take?
Thanks.
If those are surface points, one good way to do it would be to discretize the surface as triangles and convert the volume integral to a surface integral using the divergence theorem. Then you can use simple Gauss quadrature over the triangles.
Ok, here it is, along duffymo's lines I think.
First, triangulate the surface and make sure you have a consistent orientation of the triangles, meaning that neighbouring triangles are oriented so that their common edge is traversed in opposite directions.
Second, for each triangle ABC compute the expression H*cross2D(B-A, C-A), where cross2D computes the cross product using the X and Y coordinates only, ignoring Z, and H is the Z-coordinate of any convenient point in the triangle (using the barycentre, i.e. the average of the three vertices' Z-coordinates, makes the result exact for flat triangles).
Third, sum up all the above expressions and divide by two (each cross2D is twice a projected area). The result is the signed volume inside the surface (plus or minus depending on the choice of orientation).
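A small Python sketch of the three steps, assuming the mesh is given as a vertex array plus consistently wound index triples (the names are illustrative):

```python
import numpy as np

def signed_volume(vertices, triangles):
    """Signed volume enclosed by a consistently oriented triangle mesh."""
    total = 0.0
    for ia, ib, ic in triangles:
        a, b, c = vertices[ia], vertices[ib], vertices[ic]
        # cross2D of the projected edges: twice the signed projected area
        cross2d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        h = (a[2] + b[2] + c[2]) / 3.0      # barycentre Z of the triangle
        total += h * cross2d
    return total / 2.0                      # each cross2D is twice an area
```

As a quick sanity check, the tetrahedron (0,0,0), (1,0,0), (0,1,0), (0,0,1) with outward-facing winding gives 1/6, as expected.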
Sounds like you want the convex hull of a point cloud. Fortunately, there are efficient ways of getting you there. Check out scipy.spatial.ConvexHull.
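A minimal usage sketch (the random points just stand in for your measured cloud); note that the hull bridges over any concavities, so this gives the true volume only if the shape is convex:

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(1000, 3)   # stand-in for your (x, y, z) point cloud
hull = ConvexHull(points)
print(hull.volume)                 # volume enclosed by the hull
print(hull.area)                   # surface area of the hull
```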
Let me explain my problem:
I have a black vector shape (let's say it's a series of joined, straight lines for now, but it'd be nice if I could also support quadratic curves).
I also have a rectangle of a predefined width and height. I'm going to place it on top of the black shape, and then take the union of the two.
My first issue is that I don't know how to quickly extract vector unions, but I think there is a well-defined formula I can figure out for myself.
My second, and trickier, issue is how to efficiently detect the position the rectangle needs to be in (i.e., what translation and rotation are needed in the transformation matrices) in order to maximize the black remaining after the union (see figure below).
The red outlined shape below is ~33% black; the green is something like 85%; and there are positions for this shape & rectangle wherein either could have 100% coverage.
Obviously, I can brute-force this by trying every translation and rotation value for every point where at least part of the rectangle is touching the black shape, then keep track of the one with the most black coverage. The problem is, I can only try a finite number of positions (and may therefore miss the maximum). Apart from that, it feels very inefficient!
Can you think of a more efficient way of tackling this problem?
Something from my Uni days tells me that a Fourier transform might improve the efficiency here, but I can't figure out how I'd do that with a vector shape!
Three ideas that have promise of being faster and/or more precise than brute force search:
Suppose you have a 3d physics engine. Define a "cone-shaped" surface where the apex is at, say, (0,0,-1), the black polygon boundary lies on the z=0 plane with its centroid at the origin, and the cone surface is formed by connecting the apex with semi-infinite rays through the polygon boundary. Think of a party hat turned upside down and crumpled to the shape of the black polygon.

Now constrain the rectangle to be parallel to the z=0 plane and initially so high above the cone (large z value) that it's easy to find a place where it's definitely "inside". Then let the rectangle fall downward under gravity, twisting about z and translating in x-y only as it touches the cone, staying inside all the way down until it settles and can't move any farther. The collision detection and force resolution of the physics engine take care of the complexities. When it settles, it will be in a position of maximal coverage of the black polygon in a local sense. (If it settles with z<0, then coverage is 100%.) For the convex case it's probably a global maximum.

To probabilistically improve the result for non-convex cases (like your example), you'd randomize the starting position, dropping the polygon many times and taking the best result. Note you don't really need a full-blown physics engine (though they certainly exist in open source). It's enough to use collision resolution to tell you how to rotate and translate the rectangle in a pseudo-physical way as it twists and slides uniformly down the z axis as far as possible.
Different physics model. Suppose the black area is an attractive field generator in 2d, following the usual inverse-square rule like gravity and magnetism. Now let the rectangle drift in a damping medium, responding to this field. It ought to settle with a maximal area overlapping the black area. There are problems with "nulls", like at the center of a donut, but I don't think these can ever be stable equilibria. Can they?

The simulation could easily be done by modeling both shapes as particle swarms. Or, since the rectangle is a simple shape and you are a physicist, you could come up with a closed form for the integral of attractive force between a point and the rectangle. This way only the black shape needs representation as particles. Come to think of it, if you can come up with a closed form for the torque and linear attraction due to two triangles, then you can decompose both shapes with a (e.g. Delaunay) triangulation and get a precise answer. Unfortunately this discussion implies it can't be done analytically, so particle clouds may be the final solution. The good news is that modern processors, particularly GPUs, do very large particle computations with amazing speed.

Edit: I implemented this quick and dirty. It works great for convex shapes, but concavities create stable points that aren't what you want, as the example shape from the question demonstrates.
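Not the implementation mentioned in the edit above, but a minimal Python sketch of what the particle-swarm version might look like, assuming both shapes have been sampled into 2-D point arrays (all names and parameters are illustrative):

```python
import numpy as np

def attract_step(rect_pts, black_pts, vel, omega, dt=0.01, damping=0.9):
    """One damped time step: the rectangle's sample particles drift and
    twist under inverse-square attraction toward the black shape's particles."""
    centre = rect_pts.mean(axis=0)
    force = np.zeros(2)
    torque = 0.0
    for p in rect_pts:
        d = black_pts - p                         # vectors to black particles
        r2 = (d ** 2).sum(axis=1) + 1e-6          # softened squared distances
        f = (d / r2[:, None] ** 1.5).sum(axis=0)  # net inverse-square pull on p
        force += f
        arm = p - centre
        torque += arm[0] * f[1] - arm[1] * f[0]   # 2-D "cross product"
    vel = damping * (vel + dt * force)
    omega = damping * (omega + dt * torque)
    # rotate the rectangle about its centre, then translate it
    c, s = np.cos(dt * omega), np.sin(dt * omega)
    rot = np.array([[c, -s], [s, c]])
    rect_pts = (rect_pts - centre) @ rot.T + centre + dt * vel
    return rect_pts, vel, omega
```

Iterate until the motion dies out; restarting from several random poses and keeping the best result addresses the concavity problem described above.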
This problem is related to robot path planning, and looking at that literature may turn up some ideas. In RPP you have obstacles and a robot, and you want to find a path the robot can travel while avoiding and/or sliding along them. If the robot is asymmetric and can rotate, then 2d planning is done in a 3d (toroidal) configuration space (C-space), where one dimension is rotation (and so closes on itself). The idea is to "grow" the obstacles in C-space while shrinking the robot to a point. Growing the obstacles is achieved by computing Minkowski differences. If you decompose all polygons to convex shapes, then there is a simple "edge merge" algorithm for computing the MD.

When the C-space representation is complete, any 1d path that does not pierce the "grown" obstacles corresponds to a continuous translation/rotation of the robot in world space that avoids the original obstacles. For your problem the white area is the obstacle and the rectangle is the robot. You're looking for any open point at all; this would correspond to 100% coverage. For the less-than-100% case, the C-space would have to be a function on 3d that reflects how "bad" the intersection of the robot with the obstacle is, rather than just a binary value; you're looking for the least bad point. C-space representation is an open research topic, but an octree might work here.
Lots of details to think through in both cases, and they may not pan out at all, but at least these are frameworks to think more about the problem. The physics idea is a bit like using simulated spring systems to do graph layout, which has been very successful.
I don't believe it is possible to find the precise maximum for this problem, so you will need to make do with an approximation.
You could potentially render the vector image into a bitmap and use Haar-like features for this - via an integral image they provide a very quick O(1) way of calculating the average colour of a rectangular region.
You'd still need to perform this multiple times for different rotations and positions, but it would bring the algorithmic complexity down from a naive O(n^5) to O(n^3), which may be acceptably fast (with n here being the size of each of the degrees of freedom you are scanning).
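For what it's worth, a minimal sketch of the underlying trick in Python, assuming the vector image has been rasterized into a 2-D array where black pixels are 1 (this handles axis-aligned query rectangles only; rotations would need a re-rotated raster per angle step):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_mean(ii, top, left, bottom, right):
    """Mean pixel value of img[top:bottom, left:right], O(1) per query."""
    total = (ii[bottom, right] - ii[top, right]
             - ii[bottom, left] + ii[top, left])
    return total / ((bottom - top) * (right - left))
```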
Have you thought of keeping track of the remaining white space inside the blocks, with something like if whitespace !== 0?
I have a 3D closed mesh car object with a surface made up of triangles. I want to calculate its volume, center of volume, and inertia tensor.
Could you help me?
Regards.
George
For volume...
For each triangular facet, look up its corner points. Call 'em P, Q, R.
Compute this quantity (I call it "partial volume"):
pv = Px*Qy*Rz + Py*Qz*Rx + Pz*Qx*Ry - Px*Qz*Ry - Py*Qx*Rz - Pz*Qy*Rx
(This is the scalar triple product P . (Q x R).)
Add these together for all facets and divide by 6.
Important! The P,Q,R for each facet must be arranged clockwise as seen from outside. (Or all counter-clockwise, as long as it's consistent for all facets.)
If the mesh has any quadrilaterals, just temporarily hallucinate a diagonal joining one pair of opposite corners. That makes it into two triangles.
Practical computational improvement: Before doing math with P, Q and R, subtract the coordinates of some "center" point C. This can be the center of mass, a midpoint between the min/max x, y and z, or any convenient point inside or near the mesh. This helps minimize truncation errors for more accurate volumes.
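Putting the recipe together, a small Python sketch (the function and argument names are just for the example):

```python
import numpy as np

def mesh_volume(vertices, facets):
    """Volume of a closed, consistently wound triangle mesh.

    vertices: (N, 3) array; facets: iterable of (i, j, k) index triples.
    """
    # subtract a nearby "center" point first to reduce truncation error
    centred = vertices - vertices.mean(axis=0)
    total = 0.0
    for i, j, k in facets:
        p, q, r = centred[i], centred[j], centred[k]
        # pv = P . (Q x R): expands to the six-term expression above
        total += np.dot(p, np.cross(q, r))
    # consistent clockwise winding gives one sign, counter-clockwise the
    # other; take the absolute value and divide by 6
    return abs(total) / 6.0
```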
From a numerical point of view, what you are trying to achieve is quite simple and can be reduced to calculating a few quadratures. Wikipedia will provide the needed information about the maths behind it.
If you are looking for out-of-the-box volume calculation, take a look at this entry.
As for inertia: the shape is not enough, as you also need the distribution of mass.
Well, there isn't much information on the car provided here - you should be able to break the car down into simpler shapes; the higher the degree of approximation you require, the more simple shapes you'll need to break it into. (This could be difficult if the car is somehow dynamically generated and completely different every time... but I don't see that situation making any sense.)
This should help with finding the inertia tensor of various simpler shapes: http://www.gamedev.net/community/forums/topic.asp?topic_id=57001 . Finding the volumes and the like for things such as spheres and cubes is pretty common knowledge, so I won't bother linking that.
I think it was Archimedes who discovered that if you submerge the car in a volume of liquid, the displaced liquid will have the same volume as the car.
I'm not sure how this would help you in this case, though. Having a liquid simulation running in the background and submerging the mesh into it sounds a bit over the top. Although I think it does work, and it therefore qualifies as a (nonetheless somewhat useless) answer. ;^)