Voronoi diagram using custom (great circle) distance - math

I want to create a Voronoi diagram on several pairs of
latitudes/longitudes, but want to use the great circle distance
between them, not the (inaccurate) Pythagorean distance.
Can I make qhull/qvoronoi or some other Linux program do this?
I considered mapping the points to 3D, having qvoronoi create a 3D
Voronoi diagram[1], and intersecting the result with the unit sphere, but
I'm not sure that's easy.
[1] I realize the 3D distance between two latitudes/longitudes (the
"through the Earth" path) isn't the same as the great circle distance,
but it's easy to prove that this transformation preserves relative
distances, which is all that matters for a Voronoi diagram.

I assume you've found this article. From that, it seems like you have the right idea by using a 3D embedding. Your question is then how to intersect the result with the sphere.
First of all, you need to consider how you're going to represent the Voronoi diagram. If you want to work in lat/long coordinates in a 2D plane, then your Voronoi diagram will contain curved edges, so maybe it is best to just use a 3D representation.
If you use a program like qvoronoi, you should in theory only need the infinite hyperplane data (generated by the Fo option). This gives you the equation of each separating plane and the two input points it corresponds to. Usually you only need the Voronoi diagram to test for inclusion within regions, and the hyperplanes should be enough for that.
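In case it helps, here is a minimal sketch of that 3D-embedding route (my own example, assuming scipy >= 0.18 and numpy are available). Since the through-the-Earth chord 2R*sin(d/2R) is a monotone function of the great-circle distance d, the cells you get from the embedded points are exactly the great-circle ones, and scipy.spatial.SphericalVoronoi does the sphere intersection for you:

```python
# Sketch (not from the answer above): spherical Voronoi via a 3D embedding,
# assuming scipy >= 0.18. Latitudes/longitudes are in degrees.
import numpy as np
from scipy.spatial import SphericalVoronoi

def latlon_to_unit_xyz(lat_deg, lon_deg):
    """Embed lat/lon pairs as points on the unit sphere."""
    lat = np.radians(np.asarray(lat_deg))
    lon = np.radians(np.asarray(lon_deg))
    return np.column_stack((np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)))

sites = latlon_to_unit_xyz([48.9, 40.7, -33.9, 35.7, -1.3, 55.8],
                           [ 2.3, -74.0, 151.2, 139.7, 36.8, 37.6])
sv = SphericalVoronoi(sites, radius=1.0)
sv.sort_vertices_of_regions()
# sv.vertices are the Voronoi vertices on the sphere; sv.regions[i] lists the
# vertex indices bounding the cell of sites[i] (edges are great-circle arcs).
```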

See also this question: Algorithm to compute a Voronoi diagram on a sphere?

Related

How to use Delaunay triangulation on 3D points?

I understand how to use Delaunay triangulation on 2D points, but how do I use it on 3D points?
I want to generate a surface triangle mesh, not a tetrahedral mesh, so how can I use Delaunay triangulation to generate a 3D surface mesh?
Please give me some hints.
To triangulate a 3D point cloud you need the BallPivoting algorithm: https://vgc.poly.edu/~csilva/papers/tvcg99.pdf
There are two meanings of a 3D triangulation. One is when the whole space is filled, typically with tetrahedra (hexahedra and other cells may also be used). The other is called 2.5D, typically used for terrains, where z is just an attribute (like a color) that doesn't influence the resulting triangulation.
For the 2.5D case, Shewchuk's Triangle can give you the result.
If you are curious enough, you'll be able to select those tetrahedra that have one face not shared with other tetrahedra. These are the same tetrahedra "joined" with infinite/enclosing points. Extract those faces and you have your 3D surface triangulation.
If you want "direct" surface reconstruction then you undoubtly need to know in advance which vertices among the total given are in the surface. If you don't know them, perhaps the "maxima method" allows to find them out.
One your points cloud consists only of surface vertices, the triangulation method can be any one you like, from (adapted) incremental Chew's, Ruppert, etc to "ball-pivoting" method and "marching cubes" method.
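As a rough illustration of the face-extraction step mentioned above (my own sketch, assuming scipy and numpy; note that for an unfiltered Delaunay tetrahedrization the unshared faces are just the convex hull of the cloud):

```python
# Sketch: extract the faces that belong to exactly one tetrahedron of a
# Delaunay tetrahedrization. A face shared by two tetrahedra is interior;
# a face seen only once lies on the boundary surface.
from collections import Counter
from itertools import combinations
import numpy as np
from scipy.spatial import Delaunay

def boundary_faces(points):
    tets = Delaunay(points).simplices          # (n_tets, 4) vertex indices
    counts = Counter()
    for tet in tets:
        for face in combinations(sorted(tet), 3):
            counts[face] += 1
    return [face for face, n in counts.items() if n == 1]

pts = np.random.rand(200, 3)
surface = boundary_faces(pts)   # list of (i, j, k) vertex-index triangles
```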
The Delaunay tetrahedrization doesn't fit, for two reasons:
it fills a volume with tetrahedra, instead of defining a surface,
it fills the convex hull of the points, which is probably not what you expect.
To address the second problem, you need to accept concavities, and this implies that you need to specify a reference scale that tells what level of detail you want. This leads to the concept of Alpha Shapes, which are obtained as a subset of the faces.
Lookup "Alpha Shape" in an image search engine.

How to compute Voronoi tessellation based on Manhattan distance in R

I am trying to compute a Voronoi tessellation in 2D with the Manhattan distance in R.
Ideally this would be a function that takes a set of two-dimensional points and outputs a list of polygons that partition the space. I am not certain what representations of Voronoi tessellations are standard.
There are of course many ways to do this with the Euclidean metric (packages like deldir and qhull make this very easy), but I haven't found a way to do this for the Manhattan distance. A search using sos's findFn('voronoi') also yielded no results.
Info: taxicabgeometry.net
Interactive: Manhattan-metric Voronoi diagram (Click version)
I've been rolling my own in python, and can sum up the basics here:
Between neighboring centroids there is a perpendicular bisector in the Manhattan metric - most likely two rays joined by a 45-degree diagonal if the centroids are randomly generated, though a straight horizontal, vertical, or 45-degree diagonal line can also occur. Given the set of such bisectors for every centroid pair, the edges separating the regions are among them. Collect the intersection points of each pair of bisectors that are equidistant (within an epsilon), in the Manhattan metric, to their 3 nearest centroids. Also collect the two mid points of each 45-degree diagonal that are similarly equidistant to their nearest two centroids. The outer polygons won't be closed; how to deal with them depends on what you need. The polygon borders and border vertices will need sorting so your polygons aren't a zigzagged mess, and the winding order can be set to clockwise or counter-clockwise as needed. More can be done; it just depends on what you need.
Unfortunately, this gets much slower the more points are involved: intersecting every bisector with every other bisector is the bottleneck. I've been attempting an insertion method, with some success. Now I'm thinking of first creating a nearest-neighbor linkage between the centroids; if the neighbors are known, the bisectors to intersect will be minimal, and many centroids can be computed quickly.
Anyway, the brute-force approach does work:
The point near the cursor is actually 2 points of a tiny diagonal. It's a precise method, but more complicated than it first seems. The java code from the interactive link above may be faster, but was difficult to get solid and precise geometry from.
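If an approximate picture is enough, a much simpler alternative to the exact bisector construction is to rasterize: label every cell of a fine grid with its nearest site under the L1 metric. A small numpy sketch (Python again, not R; names are my own):

```python
# Sketch: approximate Manhattan-metric Voronoi regions by labelling every
# cell of a regular grid with its nearest site under the L1 distance.
# This sidesteps the bisector bookkeeping entirely, at raster resolution.
import numpy as np

def manhattan_voronoi_labels(sites, xmin, xmax, ymin, ymax, n=500):
    xs = np.linspace(xmin, xmax, n)
    ys = np.linspace(ymin, ymax, n)
    gx, gy = np.meshgrid(xs, ys)
    # |x - sx| + |y - sy| for every grid cell and every site
    d = (np.abs(gx[..., None] - sites[:, 0]) +
         np.abs(gy[..., None] - sites[:, 1]))
    return d.argmin(axis=-1)        # (n, n) array of site indices

sites = np.random.rand(20, 2)
labels = manhattan_voronoi_labels(sites, 0, 1, 0, 1)
# Plot with e.g. matplotlib's imshow(labels) to see the regions.
```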
Sorry, I don't know R.
Maybe the question is about finding the maximum-area square that fits inside a circumcircle (of a triangle). The equation for such a square is abs(x)+abs(y)=r (www.mathematische-basteleien.de/taxicabgeometry.htm). When you have a mesh of triangles, the Voronoi diagram is its dual.

2d integration over non-uniform grid

I'm writing a data analysis program and part of it requires finding the volume of a shape. The shape information comes in the form of a list of points, giving the radius and the angular coordinates of each point.
If the data points were uniformly distributed in coordinate space I would be able to perform the integral, but unfortunately the data points are basically randomly distributed.
My inefficient approach would be to find the nearest neighbours to each point and stitch the shape together like that, finding the volume of the stitched together parts.
Does anyone have a better approach to take?
Thanks.
If those are surface points, one good way to do it would be to discretize the surface as triangles and convert the volume integral to a surface integral using Green's Theorem. Then you can use simple Gauss quadrature over the triangles.
Ok, here it is, along duffymo's lines I think.
First, triangulate the surface, and make sure you have consistent orientation of the triangles. Meaning that orientation of neighbouring triangle is such that the common edge is traversed in opposite directions.
Second, for each triangle ABC compute H * cross2D(B-A, C-A) / 2, where cross2D computes the cross product using the X and Y coordinates only (ignoring Z), so it is twice the signed area of the projected triangle, and H is the Z-coordinate of a convenient point in the triangle (using the barycentre makes each term exact).
Third, sum up all the above expressions. The result would be the signed volume inside the surface (plus or minus depending on the choice of orientation).
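A minimal numpy sketch of that recipe (triangle vertices as an (n, 3, 3) array with consistent outward orientation; the names are mine, not from the answer above):

```python
# Sketch: signed volume enclosed by a consistently oriented triangulated
# surface. The 2D cross product is twice the projected area, hence the 0.5.
import numpy as np

def signed_volume(tris):
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    cross2d = ((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1]) -
               (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))
    h = (a[:, 2] + b[:, 2] + c[:, 2]) / 3.0     # barycentre height
    return 0.5 * np.sum(h * cross2d)

# A consistently oriented triangulated unit cube gives +/- 1.0 depending on
# whether the triangles wind outward or inward.
```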
Sounds like you want the convex hull of a point cloud. Fortunately, there are efficient ways of getting you there. Check out scipy.spatial.ConvexHull.
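A minimal sketch of that, assuming scipy is available and the convex hull really is what you want:

```python
# Sketch: volume (and surface area) of the convex hull of a point cloud.
import numpy as np
from scipy.spatial import ConvexHull

pts = np.random.rand(1000, 3)   # N x 3 Cartesian points (convert from radius/angles first)
hull = ConvexHull(pts)
print(hull.volume, hull.area)   # enclosed volume and surface area of the hull
```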

Depth interpolation for surface removal with perspective projection

This seems like a question for which an answer should be readily available on the web or in books, but my quest for an answer has so far led me only to dead ends.
I'm trying to draw 3D lines in real-time with hidden surface removal (the lines are edges of solid objects).
So I have two 3D points that were projected to 2D points using perspective projection. For each point I have computed the depth of the point. Now I want to draw the line segment that joins the 2 points, and for hidden surface removal to work I have to compute, for each intermediary 2D point on the 2D line (that results from the projection) the depth of the corresponding 3D point (the 3D point that is projected on that intermediary 2D point).
My problem is that, since the depth function isn't linear when you do perspective projection, I can't interpolate the depth of the 2 original 3D points to compute the depth of the intermediary point.
So how do I compute the depth of each point on the line with a method that's compatible with the constraints of real-time rendering?
Thanks in advance for any help.
Use homogeneous coordinates, which can be linearly interpolated in screen space: http://www.cs.unc.edu/~olano/papers/2dh-tri/
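A tiny sketch of the usual trick (my own example, not from the linked paper): 1/z is linear in screen space, so interpolate the reciprocal depths of the endpoints and invert.

```python
# Sketch of perspective-correct depth interpolation along a projected segment.
def depth_along_segment(z0, z1, t):
    """View-space depth at screen-space fraction t (0..1) along the segment,
    where z0 and z1 are the view-space depths of the projected endpoints."""
    inv_z = (1.0 - t) * (1.0 / z0) + t * (1.0 / z1)
    return 1.0 / inv_z

# Example: endpoints at depths 2 and 10; the screen-space midpoint is NOT at
# depth 6 but at 1 / ((0.5/2) + (0.5/10)) = 3.33...
print(depth_along_segment(2.0, 10.0, 0.5))
```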

3D mesh to particle cloud conversion

I need to convert an arbitrary triangulated 3D mesh to a cloud of particles that are uniformly spaced.
My first thought was to find a way to fill one 3D triangle, then fill each triangle of the mesh, removing duplicated particles on the edges, but that's hard and a lot of work. I was hoping for a more mathematical approach.
Can anyone point me to an algorithm which can help me do this correctly... well, at least approximately?
Thanks
There are two main options:
Voxelization of the mesh. The conversion of a mesh to voxels is easy to implement, but it's inaccurate since uniform spacing cannot be achieved: the distance between cubes can be x, x*sqrt(2) or x*sqrt(3) depending on whether neighboring cubes share a face, an edge or only a corner.
Poisson disk sampling on the surface. Harder to implement, with less research material and code available, but mathematically very sound. Some links:
http://research.microsoft.com/apps/pubs/default.aspx?id=135760
http://web.mysites.ntu.edu.sg/cwfu/public/Shared%20Documents/dualtiling/index.html
You could convert the TIN to raster using a GIS package or software such as R, then retrieve one point at the center of each pixel representing the value. (Example in ArcGIS)
EDIT: If the irregular 3D mesh has multiple heights per {x, y} a similar approach would be to sample the mesh using a voxel "grid" and keep one value per voxel. GRASS GIS has the functionality to take the vertices of the TIN (3d mesh) and convert them to voxels, then back to a regular 3d cloud.
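As a rough sketch of that voxel-sampling idea (my own code, assuming numpy; tris is an (n, 3, 3) array of triangle vertices and h is the voxel edge length): densely sample each triangle, then keep one sample per occupied voxel, which gives roughly h-spaced particles that still lie on the surface.

```python
# Sketch: mesh -> roughly uniform particle cloud via dense sampling + voxel binning.
import numpy as np

def mesh_to_particles(tris, h, samples_per_unit_area=400):
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    counts = np.maximum(1, areas * samples_per_unit_area).astype(int)
    pts = []
    for i, n in enumerate(counts):
        # Uniform random barycentric samples on triangle i
        u, v = np.random.rand(n), np.random.rand(n)
        flip = u + v > 1
        u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
        pts.append(a[i] + u[:, None] * (b[i] - a[i]) + v[:, None] * (c[i] - a[i]))
    pts = np.vstack(pts)
    # Bin into a voxel grid and keep the first sample in each occupied voxel.
    voxels = np.floor(pts / h).astype(int)
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return pts[keep]

# particles = mesh_to_particles(tris, h=0.05)
```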
