Depth interpolation for surface removal with perspective projection - math

This seems like a question for which an answer should be readily available on the web or in books, but my search has so far led only to dead ends.
I'm trying to draw 3D lines in real-time with hidden surface removal (the lines are edges of solid objects).
So I have two 3D points that were projected to 2D points using perspective projection, and for each point I have computed its depth. Now I want to draw the line segment that joins the two points, and for hidden surface removal to work I have to compute, for each intermediate 2D point on the projected line, the depth of the corresponding 3D point (the 3D point that projects onto that intermediate 2D point).
My problem is that, since the depth function isn't linear under perspective projection, I can't linearly interpolate between the depths of the two original 3D points to get the depth of an intermediate point.
So how do I compute the depth of each point on the line with a method that's compatible with the constraints of real-time rendering?
Thanks in advance for any help.

Use homogeneous coordinates, which can be linearly interpolated in screen space: http://www.cs.unc.edu/~olano/papers/2dh-tri/
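The practical consequence for a line rasterizer: depth itself is not linear in screen space, but its reciprocal is, so you can interpolate 1/z along the 2D segment and invert it at each pixel. A minimal sketch (my own illustration, not the linked paper's method), where z0 and z1 are the camera-space depths of the two projected endpoints and t is the screen-space interpolation parameter:

    def depth_along_segment(z0, z1, t):
        """Depth at screen-space parameter t in [0, 1] along the projected line."""
        # 1/z varies linearly in screen space under perspective projection.
        inv_z = (1.0 - t) * (1.0 / z0) + t * (1.0 / z1)
        return 1.0 / inv_z

    # Example: endpoints at depths 2 and 10. At the screen-space midpoint the
    # correct depth is 1 / (0.5/2 + 0.5/10) = 10/3, not the naive average 6.
    print(depth_along_segment(2.0, 10.0, 0.5))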

Related

How to use Delaunay triangulation on 3D points?

I understand how to use Delaunay triangulation on 2D points, but how do I use it on 3D points?
I want to generate a surface triangle mesh, not a tetrahedron mesh, so how can I use Delaunay triangulation to generate a 3D surface mesh?
Please give me some hint.
To triangulate a 3D point cloud you can use the Ball Pivoting algorithm: https://vgc.poly.edu/~csilva/papers/tvcg99.pdf
There are two meanings of a 3D triangulation. One is when the whole space is filled, typically with tetrahedra (hexahedra and other cells may also be used). The other is called 2.5D, typically used for terrains, where z is treated as an attribute, like color, that doesn't influence the resulting triangulation.
If you use Shewchuk's Triangle you can get that result.
If you are curious enough, you'll be able to select those tetrahedra that have one face not shared with any other tetrahedron; these are the same tetrahedra that are "joined" with the infinite/enclosing points. Extract those faces and you have your 3D surface triangulation.
If you want "direct" surface reconstruction then you undoubtedly need to know in advance which of the given vertices lie on the surface. If you don't know them, perhaps the "maxima method" can help find them.
Once your point cloud consists only of surface vertices, the triangulation method can be any one you like, from (adapted) incremental Chew's, Ruppert's, etc. to the "ball pivoting" and "marching cubes" methods.
The Delaunay tetrahedrization doesn't fit, for two reasons:
it fills a volume with tetrahedra, instead of defining a surface,
it fills the convex hull of the points, which is probably not what you expect.
To address the second problem, you need to accept concavities, and this implies that you need to specify a reference scale that tells what level of detail you want. This leads to the concept of Alpha Shapes, which are obtained as a subset of the faces.
Lookup "Alpha Shape" in an image search engine.

Projection of points on plane and the inverse transformation

I'm working on a project where I have a cloud of points in space as input data; my goal is to create a surface.
I started by computing a regression plane for the cloud, then I projected my points onto the plane using dot products:
My plane is represented by a point and a normal. I construct the axes of the plane's space using cross products, then project each point onto these axes.
Then I triangulate in 2D (that's the point of the whole operation).
My problem is that my points are now in the plane's space, and I want to get them back to their initial positions (invert the transformation) so that my surface sits ON my points.
thank you :)
The best way is to keep the original positions and have the triangulation give you indices rather than positions. I hope it will help!
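A sketch of that index-based approach, assuming the plane is given by a point origin and a unit normal n (the variable names are mine): the 2D triangulation returns indices, so the resulting triangles apply directly to the original 3D points and no inverse transformation is needed.

    import numpy as np
    from scipy.spatial import Delaunay

    def triangulate_on_plane(points, origin, n):
        n = n / np.linalg.norm(n)
        # Build two in-plane axes with cross products.
        helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(n, helper)
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        # Project onto the plane axes with dot products.
        rel = points - origin
        uv = np.column_stack((rel @ u, rel @ v))
        # The simplices are INDICES into `points`, so they describe triangles
        # over the original 3D positions directly.
        return Delaunay(uv).simplices

    pts = np.random.rand(100, 3)
    tris = triangulate_on_plane(pts, pts.mean(axis=0), np.array([0.0, 0.0, 1.0]))
    print(tris.shape)   # (n_triangles, 3)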

2d integration over non-uniform grid

I'm writing a data analysis program and part of it requires finding the volume of a shape. The shape information comes in the form of a list of points, giving the radius and the angular coordinates of each point.
If the data points were uniformly distributed in coordinate space I would be able to perform the integral, but unfortunately the data points are basically randomly distributed.
My inefficient approach would be to find the nearest neighbours to each point and stitch the shape together like that, finding the volume of the stitched together parts.
Does anyone have a better approach to take?
Thanks.
IF those are surface points, one good way to do it would be to discretize the surface as triangles and convert the volume integral to a surface integral using Green's Theorem. Then you can use simple Gauss quadrature over the triangles.
Ok, here it is, along duffymo's lines I think.
First, triangulate the surface and make sure the triangles are consistently oriented, meaning that neighbouring triangles are oriented so that their common edge is traversed in opposite directions.
Second, for each triangle ABC compute the expression H * cross2D(B-A, C-A) / 2, where cross2D computes the cross product using the X and Y coordinates only, ignoring Z (so it equals twice the signed area of the triangle's projection onto the XY plane, hence the division by 2), and H is the Z coordinate of some point in the triangle (using the barycentre makes the contribution exact).
Third, sum up all the above expressions. The result would be the signed volume inside the surface (plus or minus depending on the choice of orientation).
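A sketch of those three steps, assuming tris is an (n, 3, 3) array of consistently oriented triangles (one row of vertices A, B, C per triangle):

    import numpy as np

    def signed_volume(tris):
        A, B, C = tris[:, 0], tris[:, 1], tris[:, 2]
        # cross2D(B-A, C-A): z-component of the cross product, i.e. twice the
        # signed area of each triangle's projection onto the xy-plane.
        cross2d = ((B[:, 0] - A[:, 0]) * (C[:, 1] - A[:, 1])
                   - (B[:, 1] - A[:, 1]) * (C[:, 0] - A[:, 0]))
        # H: z of the barycentre, which makes each prism contribution exact.
        H = (A[:, 2] + B[:, 2] + C[:, 2]) / 3.0
        return np.sum(H * cross2d / 2.0)

    # Unit right tetrahedron with outward-oriented faces: volume is 1/6.
    tet = np.array([[[0, 0, 0], [0, 1, 0], [1, 0, 0]],
                    [[0, 0, 0], [1, 0, 0], [0, 0, 1]],
                    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
                    [[1, 0, 0], [0, 1, 0], [0, 0, 1]]], dtype=float)
    print(signed_volume(tet))   # 0.1666...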
Sounds like you want the convex hull of a point cloud. Fortunately, there are efficient ways of getting you there. Check out scipy.spatial.ConvexHull.
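If the shape really is convex (an assumption), scipy gives the volume directly once the points have been converted from radius/angle form to Cartesian coordinates:

    import numpy as np
    from scipy.spatial import ConvexHull

    xyz = np.random.rand(500, 3)      # replace with your points converted to (x, y, z)
    print(ConvexHull(xyz).volume)     # volume enclosed by the convex hull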

3D mesh to particle cloud conversion

I need to convert an arbitrary triangulated 3D mesh to a cloud of uniformly spaced particles.
My first thought was to find a way to fill one 3D triangle, then fill each triangle of the mesh and remove duplicated particles on the edges, but that's hard and a lot of work. I was hoping for a more mathematical approach.
Can anyone point me to an algorithm that can help me do this correctly... well, at least approximately?
Thanks
There are two main options:
Voxelization of the mesh. The conversion from mesh to voxels is easy to implement, but it's inaccurate since uniform spacing cannot be achieved: the distance between cubes can be x, x*sqrt(2) or x*sqrt(3), depending on whether neighboring cubes share a face, an edge, or only a corner.
Poisson disk sampling on the surface. Harder to implement, with little research material and code available, but mathematically very correct. Some links (a uniform surface-sampling sketch follows them):
http://research.microsoft.com/apps/pubs/default.aspx?id=135760
http://web.mysites.ntu.edu.sg/cwfu/public/Shared%20Documents/dualtiling/index.html
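As a starting point for the second option, here is a sketch of uniform, area-weighted sampling on a triangle mesh; dart-throwing Poisson disk samplers typically draw candidates this way and reject any candidate closer than the target spacing to an already accepted particle. The function name is mine.

    import numpy as np

    def sample_mesh_uniform(vertices, faces, n_samples, rng=None):
        """Draw n_samples points uniformly (by area) on a triangle mesh."""
        rng = np.random.default_rng() if rng is None else rng
        tris = vertices[faces]                         # (n_faces, 3, 3)
        a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
        areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
        # Pick triangles proportionally to area, then a uniform point in each
        # using the standard sqrt-based barycentric sampling.
        idx = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
        r1, r2 = rng.random(n_samples), rng.random(n_samples)
        u = 1.0 - np.sqrt(r1)
        v = np.sqrt(r1) * (1.0 - r2)
        w = np.sqrt(r1) * r2
        return u[:, None] * a[idx] + v[:, None] * b[idx] + w[:, None] * c[idx]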
You could convert the TIN to raster using a GIS package or software such as R, then retrieve one point at the center of each pixel representing the value. (Example in ArcGIS)
EDIT: If the irregular 3D mesh has multiple heights per {x, y}, a similar approach would be to sample the mesh using a voxel "grid" and keep one value per voxel. GRASS GIS has the functionality to take the vertices of the TIN (3D mesh), convert them to voxels, and then back to a regular 3D cloud.
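A minimal sketch of that "one value per voxel" step, assuming the mesh has already been sampled into an (n, 3) array of points (the helper name is mine):

    import numpy as np

    def voxel_downsample(points, voxel_size):
        keys = np.floor(points / voxel_size).astype(np.int64)
        # Keep one representative point per occupied voxel.
        _, first = np.unique(keys, axis=0, return_index=True)
        return points[first]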

Voronoi diagram using custom (great circle) distance

I want to create a Voronoi diagram on several pairs of
latitudes/longitudes, but want to use the great circle distance
between them, not the (inaccurate) Pythagorean distance.
Can I make qhull/qvoronoi or some other Linux program do this?
I considered mapping the points to 3D, having qvoronoi create a 3D
Voronoi diagram[1], and intersecting the result with the unit sphere, but
I'm not sure that's easy.
[1] I realize the 3D distance between two latitudes/longitudes (the
"through the Earth" path) isn't the same as the great circle distance,
but it's easy to prove that this transformation preserves relative
distances, which is all that matters for a Voronoi diagram.
I assume you've found this article. From that, it seems like you have the right idea by using a 3D embedding. Your question is then how to intersect the result with the sphere.
First of all you need to consider how you're going to represent the Voronoi diagram. If you want to work in lat/long coordinates in a 2D plane, then your Voronoi diagram will contain curved edges, so maybe it is best to just use a 3D representation.
If you use a program like qvoronoi, you should in theory only need the infinite hyperplane data (generated by Fo). This gives you the equation of the plane and the two points it corresponds to. Usually you only need to use the Voronoi diagram to test for inclusion within regions, and the hyperplanes should be enough for that.
See also this question: Algorithm to compute a Voronoi diagram on a sphere?
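If a ready-made tool is acceptable instead of driving qvoronoi directly, scipy's SphericalVoronoi implements exactly this 3D-embedding approach. A sketch, with placeholder latitude/longitude values:

    import numpy as np
    from scipy.spatial import SphericalVoronoi

    def latlon_to_unit_xyz(lat_deg, lon_deg):
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        return np.column_stack((np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)))

    lats = np.array([48.85, 40.71, -33.87, 35.68])    # placeholder sites
    lons = np.array([2.35, -74.01, 151.21, 139.69])
    sv = SphericalVoronoi(latlon_to_unit_xyz(lats, lons), radius=1.0)
    sv.sort_vertices_of_regions()
    print(sv.vertices)   # Voronoi vertices on the sphere (great-circle metric)
    print(sv.regions)    # indices of the vertices bounding each site's region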
