3D mesh to particle cloud conversion - math

I need to convert an arbitrary triangulated 3D mesh into a cloud of uniformly spaced particles.
My first thought was to find a way to fill a single 3D triangle, then fill each triangle of the mesh and remove the duplicated particles on shared edges, but that is hard and a lot of work. I was hoping for a more mathematical approach.
Can anyone point me to an algorithm that solves this task correctly, or at least approximately?
Thanks

There are two main options:
Voxelization of the mesh. Converting a mesh to voxels is easy to implement, but it is inaccurate, since truly uniform spacing cannot be achieved: the distance between neighboring voxel centers is x, x*sqrt(2) or x*sqrt(3), depending on whether the neighboring cubes share a face, an edge or only a corner (a rough sketch of this approach follows the links below).
Poisson disk sampling on the surface. Harder to implement, and there is little research material and code available, but it is mathematically very correct. Some links:
http://research.microsoft.com/apps/pubs/default.aspx?id=135760
http://web.mysites.ntu.edu.sg/cwfu/public/Shared%20Documents/dualtiling/index.html
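For illustration, here is a rough NumPy sketch of the voxel-based option mentioned above; the spacing x becomes the spacing parameter, and samples_per_area is an assumed oversampling density, not something from the original question. It densely samples the surface with area-weighted random points and then keeps one point per occupied voxel.

    import numpy as np

    def mesh_to_particles_voxel(vertices, faces, spacing, samples_per_area=50.0):
        """Rough voxel-based conversion: densely sample the surface, then keep
        one point per occupied voxel of size `spacing`.

        vertices: (V, 3) float array, faces: (F, 3) int array of triangle indices.
        """
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        # Triangle areas, used to spread random samples evenly over the surface.
        areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
        n_samples = max(1, int(areas.sum() * samples_per_area))
        tri = np.random.choice(len(faces), size=n_samples, p=areas / areas.sum())

        # Uniform random barycentric coordinates on each chosen triangle.
        r1, r2 = np.random.rand(2, n_samples)
        s = np.sqrt(r1)
        points = (1 - s)[:, None] * v0[tri] + (s * (1 - r2))[:, None] * v1[tri] \
                 + (s * r2)[:, None] * v2[tri]

        # Snap to a voxel grid and keep one representative point per voxel.
        keys = np.floor(points / spacing).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[idx]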

You could convert the TIN to a raster using a GIS package or software such as R, then retrieve one point at the center of each pixel, carrying the pixel's value. (Example in ArcGIS)
EDIT: If the irregular 3D mesh has multiple heights per {x, y}, a similar approach would be to sample the mesh using a voxel "grid" and keep one value per voxel. GRASS GIS has the functionality to take the vertices of the TIN (3D mesh), convert them to voxels, and then convert those back to a regular 3D cloud.
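If a GIS package is not at hand, a rough Python/scipy equivalent of the raster approach (assuming a single height per (x, y)) could look like this:

    import numpy as np
    from scipy.interpolate import griddata

    def tin_to_grid_points(xyz, cell_size):
        """Rasterize scattered TIN vertices (N, 3) to a regular grid and return
        one 3D point per pixel center (single height per (x, y) assumed)."""
        xy, z = xyz[:, :2], xyz[:, 2]
        xmin, ymin = xy.min(axis=0)
        xmax, ymax = xy.max(axis=0)
        # Pixel-center coordinates of the output raster.
        xs = np.arange(xmin + cell_size / 2, xmax, cell_size)
        ys = np.arange(ymin + cell_size / 2, ymax, cell_size)
        gx, gy = np.meshgrid(xs, ys)
        # Linear interpolation over the Delaunay triangulation of the input points.
        gz = griddata(xy, z, (gx, gy), method="linear")
        keep = ~np.isnan(gz)          # drop pixels outside the convex hull
        return np.column_stack([gx[keep], gy[keep], gz[keep]])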

Related

How to use Delaunay triangulation on 3D points?

I understand how to use Delaunay triangulation on 2D points.
But how do I use Delaunay triangulation on 3D points?
I mean, I want to generate a surface triangle mesh, not a tetrahedron mesh, so how can I use Delaunay triangulation to generate a 3D surface mesh?
Please give me some hints.
To triangulate a 3D point cloud you can use the Ball Pivoting algorithm: https://vgc.poly.edu/~csilva/papers/tvcg99.pdf
There are two meanings of a 3D triangulation. One is when the whole space is filled, most often with tetrahedra (hexahedra and other cells may also be used). The other is called 2.5D, typically used for terrains, where z is just a property, like a color, that doesn't influence the resulting triangulation.
If you use Shewchuk's Triangle you can get the 2.5D result.
If you are curious enough, you'll be able to select those tetrahedra that have a face not shared with any other tetrahedron. These are the same tetrahedra that are "joined" with the infinite/enclosing points. Extract those faces and you have your 3D surface triangulation (a scipy sketch follows at the end of this answer).
If you want "direct" surface reconstruction then you undoubtly need to know in advance which vertices among the total given are in the surface. If you don't know them, perhaps the "maxima method" allows to find them out.
One your points cloud consists only of surface vertices, the triangulation method can be any one you like, from (adapted) incremental Chew's, Ruppert, etc to "ball-pivoting" method and "marching cubes" method.
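As a sketch of the boundary-face extraction described above, assuming scipy is available (note that for a plain Delaunay tetrahedralization these boundary faces are exactly the convex hull; for non-convex surfaces see the alpha-shape discussion below):

    import numpy as np
    from scipy.spatial import Delaunay

    def boundary_faces(points):
        """Tetrahedralize a 3D point set and extract the faces that belong to
        exactly one tetrahedron, i.e. the outer surface of the tetrahedralization."""
        tet = Delaunay(points).simplices          # (T, 4) vertex indices
        # Each tetrahedron contributes its four triangular faces.
        faces = np.vstack([tet[:, [0, 1, 2]],
                           tet[:, [0, 1, 3]],
                           tet[:, [0, 2, 3]],
                           tet[:, [1, 2, 3]]])
        faces.sort(axis=1)                        # canonical vertex order per face
        uniq, counts = np.unique(faces, axis=0, return_counts=True)
        return uniq[counts == 1]                  # faces not shared by two tetrahedra

    # Example: the outer surface of a random point cloud (its convex hull).
    pts = np.random.rand(200, 3)
    print(boundary_faces(pts).shape)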
The Delaunay tetrahedrization doesn't fit, for two reasons:
it fills a volume with tetrahedra, instead of defining a surface,
it fills the convex hull of the points, which is probably not what you expect.
To address the second problem, you need to accept concavities, and this implies specifying a reference scale that says what level of detail you want. This leads to the concept of alpha shapes, which are obtained as a subset of the Delaunay faces.
Look up "Alpha Shape" in an image search engine.

2d integration over non-uniform grid

I'm writing a data analysis program, and part of it requires finding the volume of a shape. The shape information comes in the form of a list of points, giving the radius and the angular coordinates of each point.
If the data points were uniformly distributed in coordinate space I would be able to perform the integral, but unfortunately the data points are essentially randomly distributed.
My inefficient approach would be to find the nearest neighbours of each point, stitch the shape together that way, and then find the volume of the stitched-together parts.
Does anyone have a better approach to take?
Thanks.
If those are surface points, one good way to do it would be to discretize the surface as triangles and convert the volume integral to a surface integral using the divergence theorem (the 3D counterpart of Green's theorem). Then you can use simple Gauss quadrature over the triangles.
OK, here it is, along duffymo's lines I think.
First, triangulate the surface, and make sure you have a consistent orientation of the triangles, meaning that the orientation of any neighbouring triangle is such that the common edge is traversed in opposite directions.
Second, for each triangle ABC compute the expression H * cross2D(B-A, C-A) / 2, where cross2D computes the cross product using the X and Y coordinates only, ignoring Z, and H is the Z coordinate of any convenient point in the triangle (using the barycentre, H = (Az + Bz + Cz)/3, makes each term exact for a planar triangle). The factor 1/2 is needed because cross2D(B-A, C-A) is twice the signed area of the projected triangle.
Third, sum up all the above expressions. The result is the signed volume enclosed by the surface (plus or minus, depending on the choice of orientation).
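A minimal NumPy version of the three steps above, assuming vertices is an (N, 3) array and faces an (F, 3) array of consistently oriented triangle indices:

    import numpy as np

    def signed_volume(vertices, faces):
        """Signed volume enclosed by a closed, consistently oriented triangle mesh,
        using the prism formula above: sum over triangles of
        H * cross2D(B - A, C - A) / 2, with H the barycentric Z of the triangle."""
        A, B, C = (vertices[faces[:, i]] for i in range(3))
        # cross2D(B - A, C - A): twice the signed area of the XY-projected triangle.
        cross2d = (B[:, 0] - A[:, 0]) * (C[:, 1] - A[:, 1]) \
                - (B[:, 1] - A[:, 1]) * (C[:, 0] - A[:, 0])
        H = (A[:, 2] + B[:, 2] + C[:, 2]) / 3.0   # barycentric height of each triangle
        return float(np.sum(H * cross2d / 2.0))

With outward-oriented triangles the result comes out positive; flipping the orientation flips the sign.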
Sounds like you want the convex hull of a point cloud. Fortunately, there are efficient ways of getting you there. Check out scipy.spatial.ConvexHull.
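If the shape really is (approximately) convex, scipy gives the volume directly; a short sketch, assuming the radial/angular data has already been converted to Cartesian coordinates:

    import numpy as np
    from scipy.spatial import ConvexHull

    points = np.random.rand(500, 3)   # stand-in for your data, already in Cartesian form
    hull = ConvexHull(points)
    print(hull.volume)                # enclosed volume of the hull
    print(hull.area)                  # its surface area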

Map 3D point cloud onto surface then flatten

Mapping a point cloud onto a 3D "fabric" then flattening.
So I have a scientific dataset consisting of a 3D point cloud; the points lie on a curved surface. In order to perform quantitative analysis I need to map these point clouds onto a surface I can then flatten. I thought about using mapping tools, sort of like the 3D world being flattened onto a map, but I'm not sure how to even begin, as I have no experience in cartography, and maybe I'm trying to solve an easy problem with the wrong tools.
To briefly describe the dataset: imagine entirely transparent curtains on a window with small dots on them. If I could use that dot pattern to fit the material the dots are on, I could then "straighten" it and do meaningful analysis of the spread of the dots. I'm guessing the procedure would be to first manually fit the "sheet" onto the point-cloud data using contours or something along those lines, then flatten the sheet, putting the points into a 2D array. Ultimately I'll probably also reduce that to 1D, but I assume I need the intermediate 2D step, since the length of the second dimension is variable (i.e. one end of the sheet is shorter than the other but still corresponds to the same position in terms of contours). I'm using Matlab and Amira, though I'm always happy to learn new tools!
Any advice or hints on how to approach this are much appreciated!
You can use a space-filling curve to reduce the 3D complexity to a 1D complexity. I use a Hilbert curve to index lat/lng pairs on a 2D map. You can do the same with a 3D space, but it's easier to start with a simpler curve, for example a Z-order (Morton) curve. Space-filling curves are often used in mapping applications, and a space-filling curve also adds some proximity information and a new sort order to the 3D points.
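A minimal sketch of the Z-order (Morton) idea in Python/NumPy; the bit depth and the quantization to a regular grid are assumptions, not part of the original answer:

    import numpy as np

    def morton_order(points, bits=10):
        """Sort an (N, 3) point cloud along a Z-order (Morton) curve.
        Coordinates are quantized onto a 2**bits grid per axis first."""
        p = np.asarray(points, dtype=float)
        lo, hi = p.min(axis=0), p.max(axis=0)
        q = ((p - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(np.int64)
        keys = np.zeros(len(p), dtype=np.int64)
        for b in range(bits):                      # interleave the bits of x, y, z
            keys |= ((q[:, 0] >> b) & 1) << (3 * b)
            keys |= ((q[:, 1] >> b) & 1) << (3 * b + 1)
            keys |= ((q[:, 2] >> b) & 1) << (3 * b + 2)
        return p[np.argsort(keys)]

Sorting by the Morton key gives a 1D ordering that roughly preserves 3D proximity.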
You can try to build a surface that approximates your dataset, then unfold that surface together with the points you want. Solid3dtech.com has a tool for unfolding surfaces along with curves or points.

Getting scan lines of arbitrary 2d triangle

How would one go about retrieving scan lines for all the lines in a 2D triangle?
I'm attempting to implement the most basic feature of a 2D software renderer: texture mapping triangles. I've done this more times than I can count using OpenGL, but I find myself limping when trying to do it myself.
I see a number of articles saying that in order to fill a triangle (whose three vertices each have texture coordinates clamped to [0, 1]), I need to linearly interpolate between the three points. What? I thought interpolation was between two n-dimensional values.
NOTE: This is not for 3D; it's strictly 2D, and all the triangles are arbitrary (not axis-aligned in any way). I just need to fill the screen with their textures the way OpenGL would. I cannot use OpenGL as a solution.
An excellent answer and description can be found here: http://sol.gfxile.net/tri/index.html
You can use the Bresenham algorithm to draw/find the sides.
One way to handle it is to interpolate in two steps if you use a scanline algorithm: first you interpolate the values along the edges of the triangle, and then, when you draw each scanline, you interpolate between the start and end values of that scanline.
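A minimal Python sketch of that two-step scanline fill; set_pixel is a hypothetical callback (it would sample the texture and write the pixel), and fill-rule and rounding subtleties are glossed over:

    def draw_textured_triangle(verts, uvs, set_pixel):
        """Scanline fill of a 2D triangle with linear UV interpolation:
        interpolate along the edges first, then across each horizontal span."""
        (x0, y0), (x1, y1), (x2, y2) = verts
        ymin = int(round(min(y0, y1, y2)))
        ymax = int(round(max(y0, y1, y2)))
        edges = [((x0, y0), uvs[0], (x1, y1), uvs[1]),
                 ((x1, y1), uvs[1], (x2, y2), uvs[2]),
                 ((x2, y2), uvs[2], (x0, y0), uvs[0])]

        for y in range(ymin, ymax + 1):
            hits = []
            for (ax, ay), (au, av), (bx, by), (bu, bv) in edges:
                if ay == by:
                    continue                       # skip horizontal edges
                t = (y - ay) / (by - ay)
                if 0.0 <= t <= 1.0:                # step 1: interpolate along the edge
                    hits.append((ax + t * (bx - ax),
                                 au + t * (bu - au),
                                 av + t * (bv - av)))
            if len(hits) < 2:
                continue
            hits.sort()                            # left-to-right
            (xl, ul, vl), (xr, ur, vr) = hits[0], hits[-1]
            if xr == xl:
                set_pixel(int(round(xl)), y, ul, vl)
                continue
            for x in range(int(round(xl)), int(round(xr)) + 1):
                s = (x - xl) / (xr - xl)           # step 2: interpolate across the span
                set_pixel(x, y, ul + s * (ur - ul), vl + s * (vr - vl))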
Since you are working in 2D, you can also use a matrix transformation to map a screen coordinate to a texture coordinate. Yesterday I answered a similar question here. The technique is called a change of basis in mathematics.
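A small NumPy sketch of that change of basis, computing the affine matrix that maps the screen coordinates of a triangle to its texture coordinates:

    import numpy as np

    def screen_to_uv_affine(screen_tri, uv_tri):
        """Affine change of basis mapping a 2D screen triangle onto its UV
        triangle. Returns a 3x3 matrix M such that M @ (x, y, 1) = (u, v, 1)."""
        S = np.column_stack([np.asarray(screen_tri, float), np.ones(3)])  # (3, 3)
        U = np.column_stack([np.asarray(uv_tri, float), np.ones(3)])
        # Solve S @ X = U for X; the triangle must not be degenerate.
        return np.linalg.solve(S, U).T

    # Usage: uvw = screen_to_uv_affine(tri, uvs) @ np.array([px, py, 1.0])
    # gives the texture coordinate (uvw[0], uvw[1]) of screen pixel (px, py).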

Voronoi diagram using custom (great circle) distance

I want to create a Voronoi diagram on several pairs of latitudes/longitudes, but I want to use the great-circle distance between them, not the (inaccurate) Pythagorean distance. Can I make qhull/qvoronoi or some other Linux program do this?
I considered mapping the points to 3D, having qvoronoi create a 3D Voronoi diagram[1], and intersecting the result with the unit sphere, but I'm not sure that's easy.
[1] I realize the 3D distance between two latitudes/longitudes (the "through the Earth" path) isn't the same as the great-circle distance, but it's easy to prove that this transformation preserves relative distances, which is all that matters for a Voronoi diagram.
I assume you've found this article. From that, it seems like you have the right idea by using a 3D embedding. Your question is then how to intersect the result with the sphere.
First of all, you need to consider how you're going to represent the Voronoi diagram. If you want to work in lat/long coordinates in a 2D plane, then your Voronoi diagram will contain curved edges, so maybe it is best to just use a 3D representation.
If you use a program like qvoronoi, in theory you only need the infinite hyperplane data (generated by the Fo option). This gives you the equation of each plane and the two input points it separates. Usually you only need the Voronoi diagram to test for inclusion within regions, and the hyperplanes should be enough for that.
See also this question: Algorithm to compute a Voronoi diagram on a sphere?
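If Python/scipy is an option instead of calling qvoronoi directly, scipy.spatial.SphericalVoronoi implements this embed-in-3D-and-intersect-with-the-sphere construction; a short sketch with made-up sample coordinates:

    import numpy as np
    from scipy.spatial import SphericalVoronoi

    def lonlat_to_unit_vectors(lon_deg, lat_deg):
        """Embed lon/lat pairs (degrees) as unit vectors on the sphere."""
        lon, lat = np.radians(lon_deg), np.radians(lat_deg)
        return np.column_stack([np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)])

    lon = np.array([0.0, 90.0, -90.0, 180.0, 10.0])
    lat = np.array([0.0, 45.0, -45.0, 10.0, 80.0])
    sv = SphericalVoronoi(lonlat_to_unit_vectors(lon, lat), radius=1.0)
    sv.sort_vertices_of_regions()
    print(sv.vertices)   # Voronoi vertices on the unit sphere
    print(sv.regions)    # per-site lists of vertex indices (cells bounded by great-circle arcs)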
