Calculate volume from cross-sections - math

I have an irregularly shaped 3D object. For this object I know the areas of its cross-sections at regular intervals. How can I calculate the volume of this object?

You can only approximate the volume. Just add up all the areas and then multiply by the distance between intervals.
Obviously the smaller the distance between intervals, the more accurate the volume. It is just integration (calculus).
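As a minimal sketch of that sum (names are illustrative; `areas` holds the measured cross-section areas and `spacing` the distance between consecutive cross-sections):

```python
# Riemann-sum estimate: total of the cross-section areas times the spacing.
def volume_riemann(areas, spacing):
    return sum(areas) * spacing

# e.g. four measured areas, 0.5 units apart:
print(volume_riemann([1.0, 1.4, 1.1, 0.8], 0.5))  # -> 2.15
```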

Discretize it using tetrahedra or bricks and add up their volumes, a la finite element methods. Integrate using Gaussian quadrature and sum.

You're estimating a Riemann integral. There are many methods to do this, of varying complexity. Simpson's rule is reasonably straightforward and will be pretty accurate as long as the cross-sectional area varies in a smooth enough fashion; however, it requires that the number of intervals be even.

Ed Heal's answer is a Riemann sum that approaches the (volume) integral in the limit. Depending on where the cross-sections are located with respect to the extent of the object, it might be viewed as an application of the midpoint rule.
Assuming the cross-section area varies smoothly with distance (twice continuously differentiable along the axis perpendicular to the cross-sections), the midpoint rule and trapezoid rule have accuracy that improves with the square of the interval width (here assumed regular). Taking the weighted average of the two, (2 x midpoint + trapezoid)/3, amounts to an application of Simpson's rule, outlined in Peter Milley's answer, with higher-order accuracy (improving with the fourth power of the interval width) provided the integrand is sufficiently smooth (continuous 4th derivative of cross-section area with respect to distance).
Of course many real world figures will not have such smoothness (too many corners, holes, etc.), so it is prudent not to expect exceptional accuracy from making more sophisticated approximations.
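A hedged sketch of the trapezoid and Simpson estimates from the same equally spaced areas (function and variable names are illustrative; `h` is the interval width):

```python
def volume_trapezoid(areas, h):
    # Endpoints count half as much as interior samples.
    return h * (sum(areas) - 0.5 * (areas[0] + areas[-1]))

def volume_simpson(areas, h):
    # Requires an even number of intervals, i.e. an odd number of areas.
    n = len(areas) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    s = areas[0] + areas[-1] + 4 * sum(areas[1:-1:2]) + 2 * sum(areas[2:-1:2])
    return h * s / 3.0
```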

Related

How to smooth hand drawn lines?

So I am using Kinect with Unity.
With the Kinect, we detect a hand gesture and, while it is active, draw a line on the screen that follows wherever the hand goes. On every update the hand's location is stored as the newest (and last) point in the line. However, the lines can often look very choppy.
Here is a general picture that shows what I want to achieve:
With the red being the original line and the purple being the new smoothed line. If the user suddenly stops and changes direction, we don't want the line to follow that exactly; instead it should make a rapid turn or a loop.
My current solution uses cubic Bezier curves, keeping only points that are at least X distance from each other (with Y points placed between each pair using the cubic Bezier). However, there are two problems with this, amongst others:
1) It often doesn't preserve the curve out to the full distance the user drew it; for example, if the user suddenly stops a line and reverses direction, there is a pretty good chance the line won't extend to the point where the user reversed direction.
2) There is also a chance that the selected "good" point is actually a "bad" random jump point.
So I've thought about other solutions. One involves limiting the maximum angle between points (with 0 degrees being a straight line). However, if a point exceeds the angle limit, the math behind reducing the angle while still following the drawn line as closely as possible seems complicated. But maybe it's not. Either way I'm not sure what to do and am looking for help.
Keep in mind this needs to be done in real time as the user is drawing the line.
You can try the Ramer-Douglas-Peucker algorithm to simplify your curve:
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
It's a simple algorithm, and parameterization is reasonably intuitive. You may use it as a preprocessing step or maybe after one or more other algorithms. In any case it's a good algorithm to have in your toolbox.
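A compact recursive sketch (illustrative names; closed loops with identical endpoints would need extra handling):

```python
import math

# Recursive Ramer-Douglas-Peucker; keeps points farther than `epsilon` from
# the chord between the current endpoints.
def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] <= epsilon:
        return [points[0], points[-1]]
    return rdp(points[:imax + 1], epsilon)[:-1] + rdp(points[imax:], epsilon)
```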
Using angles to reject "jump" points may be tricky, as you've seen. One option is to compare the total length of N line segments to the straight-line distance between the extreme end points of that chain of N line segments. You can threshold the ratio of (totalLength/straightLineLength) to identify line segments to be rejected. This would be a quick calculation, and it's easy to understand.
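For instance (hypothetical names and threshold):

```python
import math

# Flag windows of `window` segments whose path length is much longer than the
# straight-line distance between the window's endpoints.
def jumpy_windows(points, window=4, ratio_limit=2.0):
    flagged = []
    for i in range(len(points) - window):
        chain = points[i:i + window + 1]
        total = sum(math.dist(a, b) for a, b in zip(chain, chain[1:]))
        straight = math.dist(chain[0], chain[-1]) or 1e-9
        if total / straight > ratio_limit:
            flagged.append(i)  # window starting at point i looks like a jump
    return flagged
```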
If you want to take line segment lengths and segment-to-segment angles into consideration, you could treat the line segments as vectors and compute their cross product. If you imagine the two vectors as defining a parallelogram, and the area of that parallelogram is your criterion to accept/reject a point, then the cross product is another simple and quick calculation.
https://www.math.ucdavis.edu/~daddel/linear_algebra_appl/Applications/Determinant/Determinant/node4.html
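A 2D sketch of that test (illustrative names; the threshold on the returned area is up to you):

```python
# Area of the parallelogram spanned by segments A->B and B->C. Near-zero
# means nearly collinear; a large value flags a sharp kink.
def parallelogram_area(a, b, c):
    ux, uy = b[0] - a[0], b[1] - a[1]
    vx, vy = c[0] - b[0], c[1] - b[1]
    return abs(ux * vy - uy * vx)  # |2D cross product|
```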
If you only have a few dozen points, you could randomly eliminate one point at a time, generate your spline fits, and then calculate the point-to-spline distances for all the original points. From those distances you can compute a metric (e.g. mean distance) to minimize: the best fit results from eliminating whichever points (Pn, Pn+k, ...) yield the best spline-fit quality. This technique wouldn't scale well with more points, but it might be worth a try if you break each chain of line segments into groups of maybe half a dozen segments each.
Although it's overkill for this problem, I'll mention that Euler curves can be good fits to "natural" curves. What's nice about Euler curves is that you can generate an Euler curve fit from two points in space and the tangents at those two points. The code gets hairy, but Euler curves (a.k.a. aesthetic curves, if I remember correctly) can generate better and/or more useful fits to natural curves than nth-degree Bezier splines.
https://en.wikipedia.org/wiki/Euler_spiral

Finding sections of a NURBS curve that has a curvature over a predefined value

I am trying to find the sharp corners of a NURBS curve. For this problem I define a limit curvature and am trying to find the sections of the curve whose curvature is higher than this value. One option is to sample along the curve and calculate the curvature at every sample, but this may take time and some sharp points are likely to be missed. Any ideas about how to find these sections in an efficient way?
If you compute the derivative of the curvature analytically, I guess you will find a (terrible) expression with a polynomial in the numerator. A good polynomial solver will let you find its roots, hence the curvature extrema, to split the curve into sections of monotonic curvature, and from there find the precise solutions of k = c by regula falsi or similar.
A much simpler approach is by flattening the curve (convert to a smooth polyline) and estimating the local curvature on all triples of consecutive points (using their circumscribed circle). High curvature sections will probably also be detectable by anomalies in the point density while flattening.
The benefit of flattening over uniform sampling is that it auto-adjusts the point density.
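A sketch of that estimate on a flattened polyline, using the Menger curvature of each consecutive triple (k = 4 x area / product of the three side lengths; names are illustrative):

```python
import math

# Curvature of the circumscribed circle through three consecutive points.
def menger_curvature(a, b, c):
    ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
    # 2 * signed triangle area from the 2D cross product.
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    denom = ab * bc * ca
    return 0.0 if denom == 0.0 else 2.0 * abs(cross) / denom

def sharp_indices(polyline, k_limit):
    return [i for i in range(1, len(polyline) - 1)
            if menger_curvature(polyline[i - 1], polyline[i],
                                polyline[i + 1]) > k_limit]
```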
Another idea is to resort to a method of approximation of curves by circular arcs (this can be compared to a second order flattening operation). You will find a few papers on the topic (do not confuse with circle approximation by curves), but usually these methods are complex.
Maybe it is also possible to devise an analytic formula for a lower bound on the NURBS curvature in a given interval and use that to implement a bisection approach.

Path tracing: why is there no cosine term when calculating perfect mirror reflection?

I've been looking at Kevin Beason's path tracer "smallpt" (http://www.kevinbeason.com/smallpt/) and have a question regarding the mirror reflection calculation (line 62).
My understanding of the rendering equation (http://en.wikipedia.org/wiki/Rendering_equation) is that to calculate the outgoing radiance for a differential area, you integrate the incoming radiance over each differential solid angle in a hemisphere above the differential area, weighted by the BRDF and a cosine factor. The purpose of the cosine factor is to reduce the contribution to the differential irradiance landing on the area for incoming light at more grazing angles (as light at these angles will be spread across a larger area, meaning that the differential area in question will receive less of it).
But in the smallpt code this cosine factor is not part of the calculation for mirror reflection on line 62. (It is also omitted from the diffuse calculation, but I believe that to be because the diffuse ray is chosen with cosine-weighted importance sampling, meaning that an explicit multiplication by the cosine factor is not needed).
My question is why doesn't the mirror reflection calculation require the cosine factor? If the incoming radiance is the same, but the angle of incidence becomes more grazing, then won't the irradiance landing on the differential area be decreased, regardless of whether diffuse or mirror reflection is being considered?
This is a question that I have raised recently: Why the BRDF of specular reflection is infinite in the reflection direction?
For perfect specular reflection, the BRDF is infinite in the reflection direction, so we can't integrate the rendering equation directly.
But by energy conservation we can set the reflected radiance equal to the incident radiance.
The diffuse light paths are, as you suspect, chosen such that the cosine term is balanced out, by picking rays proportionally more often in the directions where the cosine would have been higher (i.e. closer to the direction of the surface normal); a good explanation can be found here. This makes the simple division by the number of samples enough to accurately model diffuse reflection.
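As an unofficial sketch of that idea (not smallpt's actual code), cosine-weighted sampling in a local frame where z is the surface normal:

```python
import math
import random

# Cosine-weighted hemisphere sample. With pdf(omega) = cos(theta) / pi and a
# Lambertian BRDF of albedo / pi, the per-sample weight
# f_r * cos(theta) / pdf collapses to just the albedo, which is why no
# explicit cosine shows up in the diffuse branch.
def sample_cosine_hemisphere():
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # equals cos(theta)
    return (x, y, z)
```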
In the rendering equation, which is the basis for path tracing, there is a term for the reflected light:
L_o(x, omega_o) = integral over the hemisphere Omega of f_r(x, omega_i, omega_o) L_i(x, omega_i) dot(omega_i, n) d omega_i
Here f_r(x, omega_i, omega_o) represents the BRDF of the material. For a perfect reflector this BRDF would be zero in every direction except for the reflecting direction. It then makes little sense to sample any other direction than the reflected ray path. Even so, the dot product at the end would not be omitted.
But in the smallpt code this cosine factor is not part of the calculation for mirror reflection on line 62.
By the definitions stated above, my conclusion is that it should be part of it, since this would make it needless to specify special cases for one material or another.
That's a very good question. I don't understand it fully, but let me attempt to give an answer.
In the diffuse calculation, the cosine factor is included via the sampling. Out of the possible hemisphere of incidence rays, it is more likely a priori that one came directly from above than directly from the horizon.
In the mirror calculation, the cosine factor is included via the sampling. Out of the possible single direction that an incidence ray could have come from, it is more likely a priori - you see where I'm going.
If you sampled coarse reflection via a cone of incoming rays (as for a matte surface) you would again need to account for cosine weighting. However, for the trivial case of a single possible incidence direction, the sampling reduces to if (true).
From a formal perspective, the cosine factor in the integral cancels out with the cosine in the denominator of the specular BRDF (f_r = delta(omega_i, omega_o) / dot(omega_i, n)).
In different literature, the ideal mirror BRDF is defined by
- a specular albedo,
- Dirac deltas (infinite in the direction of perfect reflectance, zero everywhere else), and
- a factor 1/cos(theta_i) canceling out the cosine term in the rendering equation.
See e.g.: http://resources.mpi-inf.mpg.de/departments/d4/teaching/ws200708/cg/slides/CG07-Brdf+Texture.pdf, Slide 12
For an intuition of the third point, consider that the differential footprint of the surface covered by a viewing ray from direction omega_r is the same as the footprint of the surface covered by the incident ray from direction omega_i. Thus, all incident radiance is reflected towards omega_r, independent of the angle of incidence.
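Sketching that cancellation in the notation of the cited slides (rho_s is the specular albedo, omega_r the mirror direction):

```latex
f_r(\omega_i, \omega_o) = \rho_s\,\frac{\delta(\omega_i - \omega_r)}{\cos\theta_i}
\;\Longrightarrow\;
L_o(\omega_o)
  = \int_\Omega \rho_s\,\frac{\delta(\omega_i - \omega_r)}{\cos\theta_i}\,
    L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i
  = \rho_s\,L_i(\omega_r)
```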

How do I calculate a normal vector based on multiple triangles sharing a vertex?

If I have a mesh of triangles, how does one go about calculating the normals at each given vertex?
I understand how to find the normal of a single triangle. If I have triangles sharing vertices, I can partially find the answer by finding each triangle's respective normal, normalizing it, adding it to the total, and then normalizing the end result. However, this obviously does not take into account proper weighting of each normal (many tiny triangles can throw off the answer when linked with a large triangle, for example).
I think a good method is a weighted average using angles instead of areas as weights. In my opinion this is a better answer because the normal you are computing is a "local" feature, so you don't really care how big the contributing triangle is... you need a "local" measure of the contribution, and the angle between the two sides of the triangle at the specified vertex is such a local measure.
With this approach, a lot of small (thin) triangles don't give you an unbalanced answer.
Using angles is the same as using an area-weighted average if you localize the computation by using the intersection of the triangles with a small sphere centered at the vertex.
The weighted average appears to be the best approach.
But be aware that, depending on your application, sharp corners could still give you problems. In that case, you can compute multiple vertex normals by averaging only those surface normals whose cross product magnitude is less than some threshold (i.e., which are closer to being parallel).
Search for "Offset triangular mesh using the multiple normal vectors of a vertex" by S.J. Kim et al. for more details about this method.
This blog post outlines three different methods and gives a visual example of why the standard and simple method (area weighted average of the normals of all the faces joining at the vertex) might sometimes give poor results.
You can give more weight to big triangles by multiplying each normal by the area of its triangle. Conveniently, the unnormalized cross product of two edge vectors already has magnitude equal to twice the triangle's area, so summing the unnormalized face normals gives you an area-weighted average for free.
Check out this paper: Discrete Differential-Geometry Operators for Triangulated 2-Manifolds.
In particular, the "Discrete Mean Curvature Normal Operator" (Section 3.5, Equation 7) gives a robust normal that is independent of tessellation, unlike the methods in the blog post cited by another answer here.
Obviously you need to use a weighted average to get a correct normal, but using the triangle's area won't give you what you need, since the area of each triangle has no relationship with the percentage weight that triangle's normal represents for a given vertex.
If you base it on the angle between the two sides coming into the vertex, you should get the correct weight for every triangle coming into it. It might be convenient if you could convert it to 2D somehow so you could go off a 360-degree base for your weights, but most likely just using the angle itself as your weight multiplier in 3D space, then adding up all the normals produced that way and normalizing the final result, should produce the correct answer.
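A minimal sketch of that angle-weighted accumulation (pure Python, illustrative names; assumes consistently wound index triples):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def normalize(u):
    n = math.sqrt(dot(u, u))
    return (u[0] / n, u[1] / n, u[2] / n) if n > 0 else (0.0, 0.0, 0.0)

# Accumulate each face normal at each of its vertices, weighted by the
# corner angle there, then normalize the sums.
def angle_weighted_normals(vertices, triangles):
    normals = [(0.0, 0.0, 0.0)] * len(vertices)
    for tri in triangles:
        for k in range(3):
            i, j, l = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            e1 = normalize(sub(vertices[j], vertices[i]))
            e2 = normalize(sub(vertices[l], vertices[i]))
            # Corner angle at vertex i, clamped against rounding error.
            angle = math.acos(max(-1.0, min(1.0, dot(e1, e2))))
            fn = normalize(cross(sub(vertices[j], vertices[i]),
                                 sub(vertices[l], vertices[i])))
            nx, ny, nz = normals[i]
            normals[i] = (nx + angle * fn[0],
                          ny + angle * fn[1],
                          nz + angle * fn[2])
    return [normalize(n) for n in normals]
```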

Point Sequence Interpolation

Given an arbitrary sequence of points in space, how would you produce a smooth continuous interpolation between them?
2D and 3D solutions are welcome. Solutions that produce a list of points at arbitrary granularity and solutions that produce control points for bezier curves are also appreciated.
Also, it would be cool to see an iterative solution that could approximate early sections of the curve as it received the points, so you could draw with it.
The Catmull-Rom spline is guaranteed to pass through all the control points. I find this to be handier than trying to adjust intermediate control points for other types of splines.
This PDF by Christopher Twigg has a nice brief introduction to the mathematics of the spline. The best summary sentence is:
Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.
Said another way, if the points indicate a sharp bend to the right, the spline will bank left before turning to the right (there's an example picture in that document). The tightness of those turns is controllable, in this case using his tau parameter in the example matrix.
Here is another example with some downloadable DirectX code.
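For reference, a small sketch of uniform Catmull-Rom evaluation between p1 and p2 (tau fixed at 0.5, the standard matrix; names are illustrative):

```python
# Catmull-Rom segment between p1 and p2 (p0, p3 are the neighbors,
# t in [0, 1]); works per coordinate, so 2D and 3D points both work.
def catmull_rom(p0, p1, p2, p3, t):
    return tuple(
        0.5 * (2 * b
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t * t * t)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Sample a full path by sweeping t over each interior segment:
pts = [(0, 0), (1, 2), (3, 3), (4, 1), (6, 0)]
curve = [catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t / 10.0)
         for i in range(1, len(pts) - 2) for t in range(11)]
```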
One way is the Lagrange polynomial, which is a method for producing a polynomial that will go through all given data points.
During my first year at university, I wrote a little tool to do this in 2D, and you can find it on this page; it is called Lagrange solver. Wikipedia's page also has a sample implementation.
How it works is thus: you have a polynomial of degree n-1, p(x), where n is the number of points you have. It has the form a_(n-1) x^(n-1) + a_(n-2) x^(n-2) + ... + a_0, where _ is subscript, ^ is power. You then turn this into a set of simultaneous equations:
p(x_1) = y_1
p(x_2) = y_2
...
p(x_n) = y_n
You convert the above into an augmented matrix and solve for the coefficients a_0 ... a_(n-1). Then you have a polynomial which goes through all the points, and you can now interpolate between the points.
Note however, this may not suit your purpose as it offers no way to adjust the curvature etc - you are stuck with a single solution that can not be changed.
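A compact sketch of that procedure as a Vandermonde system (numpy assumed; names are illustrative):

```python
import numpy as np

# Solve V a = y for the n coefficients a_0 ... a_(n-1) of the degree n-1
# polynomial through the n points (xs[i], ys[i]).
def lagrange_coeffs(xs, ys):
    V = np.vander(xs, increasing=True)  # rows are [1, x, x^2, ...]
    return np.linalg.solve(V, ys)

def poly_eval(coeffs, x):
    return sum(a * x ** i for i, a in enumerate(coeffs))

# Points (0,1), (1,2), (2,5) give x^2 + 1:
print(lagrange_coeffs([0.0, 1.0, 2.0], [1.0, 2.0, 5.0]))  # -> [1. 0. 1.]
```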
You should take a look at B-splines. Their advantage over Bezier curves is that each part is only dependent on local points. So moving a point has no effect on parts of the curve that are far away, where "far away" is determined by a parameter of the spline.
The problem with the Lagrange polynomial is that adding a point can have extreme effects on seemingly arbitrary parts of the curve; there's no "localness" like described above.
Have you looked at the Unix spline command? Can that be coerced into doing what you want?
There are several algorithms for interpolating (and extrapolating) between an arbitrary (but finite) set of points. You should check out Numerical Recipes; it also includes C++ implementations of those algorithms.
Unfortunately the Lagrange or other forms of polynomial interpolation will not work on an arbitrary set of points. They only work on a set of points that is strictly ordered in one dimension, e.g. x_i < x_(i+1).
For an arbitrary set of points, e.g. an aeroplane flight path where each point is a (longitude, latitude) pair, you will be better off simply modelling the aeroplane's journey with a current longitude & latitude and velocity. By adjusting the rate at which the aeroplane can turn (its angular velocity) depending on how close it is to the next waypoint, you can achieve a smooth curve.
The resulting curve would not be mathematically significant, nor would it give you Bezier control points. However, the algorithm is computationally simple regardless of the number of waypoints and can produce an interpolated list of points at arbitrary granularity. It also does not require you to provide the complete set of points up front; you can simply add waypoints to the end of the set as required.
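A rough sketch of that steering loop in 2D (all names and constants are illustrative):

```python
import math

# Chase each waypoint in turn: steer the heading toward it by at most
# `max_turn` radians per step, then move `speed` forward.
def follow_waypoints(start, heading, waypoints, speed=0.1, max_turn=0.05):
    x, y = start
    path = [(x, y)]
    for wx, wy in waypoints:
        for _ in range(10000):  # step cap guards against orbiting a waypoint
            if math.hypot(wx - x, wy - y) <= speed:
                break
            desired = math.atan2(wy - y, wx - x)
            # Signed angle difference wrapped into (-pi, pi].
            delta = (desired - heading + math.pi) % (2 * math.pi) - math.pi
            heading += max(-max_turn, min(max_turn, delta))
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            path.append((x, y))
    return path
```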
I ran into the same problem and implemented a solution with some friends the other day. I'd like to share the example project on GitHub.
https://github.com/johnjohndoe/PathInterpolation
Feel free to fork it.
Google "orthogonal regression".
Whereas least-squares techniques try to minimize the vertical distance between the fitted line and each data point, orthogonal regression minimizes the perpendicular distances.
Addendum
In the presence of noisy data, the venerable RANSAC algorithm is worth checking out too.
In the 3D graphics world, NURBS are popular. Further info is easily googled.
