I was working on a method to approximate the normal to a surface of a 3d voxel image.
The method suggested in this article (the only algorithm I found via Google) seems to work. The paper's approach is to find the direction in which the surface varies the most, choose two points on the tangent plane using some procedure, and then take the cross product. Some Pascal code by the article's author, commented in Portuguese, implements this method.
However, using the gradient of f (each partial derivative as a component of the vector) as the normal seems to work pretty well; I tested this along several circles on a voxelated sphere and got results that look correct in most spots (there are a few outliers that are off by about 30 degrees). This is very different from the method used in the paper, but it still works. What I don't understand is why the gradient of f = 1/dist, evaluated along the surface of an object, should produce the normal.
Why does this procedure work? Is it just the fact that the sphere test was too much of a special case? Could you suggest a simpler method, or explain any of these methods?
Using the gradient of the volume as a normal for lighting is a standard technique in volume rendering.
If you interpret the value of a voxel as the opacity, the gradient will give you the direction of the greatest change in the opacity, which is similar to a surface normal.
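For illustration, here is a minimal sketch of that idea: a central-difference gradient over a voxel grid, normalized for use as a shading normal. The Volume layout and helper names are made up for the example, not taken from the article.

```cpp
#include <cmath>
#include <vector>

// Central-difference gradient of a scalar voxel volume, normalized to serve
// as a shading normal. Assumes a dense grid stored as a flat array, x fastest.
struct Volume {
    int nx, ny, nz;
    std::vector<float> data;            // size nx*ny*nz

    float at(int x, int y, int z) const {
        return data[(z * ny + y) * nx + x];
    }
};

struct Vec3 { float x, y, z; };

// Gradient at an interior voxel (1 <= x < nx-1, and likewise for y, z).
Vec3 gradientNormal(const Volume& v, int x, int y, int z) {
    Vec3 g {
        (v.at(x + 1, y, z) - v.at(x - 1, y, z)) * 0.5f,
        (v.at(x, y + 1, z) - v.at(x, y - 1, z)) * 0.5f,
        (v.at(x, y, z + 1) - v.at(x, y, z - 1)) * 0.5f
    };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 1e-8f) { g.x /= len; g.y /= len; g.z /= len; }
    // Points from low values to high values; negate it if you want the normal
    // to point outward from a dense (opaque) object.
    return g;
}
```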
I draw a vector geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they are no longer in their original positions because of print errors or poor printer calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice I need to warp the geometry in order to obtain the real geometry printed on the paper with the same print error that I have on the calibration points.
The distortion comes from the physical deformation of the material (not paper but cloth) during the printing process. I can't know in advance how much the material will distort.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometric transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise local linear / local weighted mean transformations, which need at least 4-6 control points. In general, the more control points you have, the better.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them once you specify a class (affine or piecewise local linear). See for example fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find implementations for other programming languages (Python, C, Java).
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should work well for your images. They give you a good initial estimate of the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and the transformation can then be refined.
So I advise you to first try registering the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation, using either an affine or a piecewise/local transformation. Finally, apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language, you will find many implementations that do the job.
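To make the affine case concrete, here is a minimal sketch (not from any of the toolboxes mentioned above) that recovers a 2D affine transform from exactly three control-point correspondences and then applies it to the geometry; with more control points you would set up the same equations and solve them in a least-squares sense. All names are illustrative.

```cpp
#include <array>

// x' = a*x + b*y + c,  y' = d*x + e*y + f
struct Pt { double x, y; };
struct Affine { double a, b, c, d, e, f; };

static double det3(const std::array<std::array<double, 3>, 3>& m) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Recover the affine transform mapping src[i] onto dst[i] for three pairs,
// by solving two 3x3 linear systems with Cramer's rule.
bool affineFrom3Points(const Pt src[3], const Pt dst[3], Affine& out) {
    std::array<std::array<double, 3>, 3> M = {{
        { src[0].x, src[0].y, 1.0 },
        { src[1].x, src[1].y, 1.0 },
        { src[2].x, src[2].y, 1.0 }
    }};
    double D = det3(M);
    if (D == 0.0) return false;          // control points are collinear

    double coeff[2][3];                  // row 0 -> x' coefficients, row 1 -> y'
    for (int row = 0; row < 2; ++row) {
        for (int col = 0; col < 3; ++col) {
            auto Mc = M;                 // replace one column with the right-hand side
            for (int i = 0; i < 3; ++i)
                Mc[i][col] = (row == 0) ? dst[i].x : dst[i].y;
            coeff[row][col] = det3(Mc) / D;
        }
    }
    out = { coeff[0][0], coeff[0][1], coeff[0][2],
            coeff[1][0], coeff[1][1], coeff[1][2] };
    return true;
}

// Apply the recovered transform to warp a point of the original geometry.
Pt apply(const Affine& T, Pt p) {
    return { T.a * p.x + T.b * p.y + T.c,
             T.d * p.x + T.e * p.y + T.f };
}
```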
I have a general question about what method to use for smoothing a 3D (xyz) grid.
My program has large matrices of 3D points obtained with a stereovision method. The shape of the result is always something like a hemisphere, but it has a roughness, due to stereovision errors, that I want to eliminate.
The question is, how to do it? Right now I have half-developed a smoothing method, but I think there may be a better one.
My current idea is to use the Hermite method. The idea is to:
Take all XY and smooth in two directions -> XYnew and XnewY
Convert the Hermite lines into Bezier lines and find the crossing point between XYnew and XnewY, giving the new point. (Repeat for all points, normally 2000.)
Use Hermite XYZ smoothing to obtain XYZnew.
Right now I have the Hermite surface smoothing and Hermite line smoothing implemented in C++, but the middle part is not as easy as expected.
Anyway, my question is, is this a correct method or is there another one which may be better?
Of course the idea is to eliminate the error generated by the stereovision method; this is not a computer graphics problem, it is more a data treatment problem.
Appendix:
At first I thought that Z smoothing alone would be sufficient, but clearly it is not; there is also a lot of XY error. In the images below you can see the Z fitting working, but the result is still very rough, as can be seen in the second image. (The colours are deformations and should be quite continuous.)
Unless you have better priors, it's hard to beat the classic Taubin algorithm: http://mesh.brown.edu/taubin/pdfs/taubin-iccv95a.pdf
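If it helps, here is a minimal sketch of the Taubin lambda/mu smoothing idea applied to a point set with a precomputed neighbor list (one-ring neighbors on a mesh, or nearest neighbors on your grid). The data structures and parameter values are illustrative, not taken from the paper; the only requirement is that |mu| is slightly larger than lambda.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Taubin smoothing: alternate a shrinking Laplacian step (lambda > 0) with an
// inflating step (mu < 0), which limits the shrinkage of plain Laplacian
// smoothing while still removing high-frequency noise.
void taubinSmooth(std::vector<Vec3>& pts,
                  const std::vector<std::vector<int>>& neighbors,
                  int iterations = 20,
                  double lambda = 0.5,
                  double mu = -0.53)
{
    auto laplacianStep = [&](double factor) {
        std::vector<Vec3> out = pts;
        for (std::size_t i = 0; i < pts.size(); ++i) {
            const auto& nb = neighbors[i];
            if (nb.empty()) continue;
            Vec3 avg {0, 0, 0};
            for (int j : nb) { avg.x += pts[j].x; avg.y += pts[j].y; avg.z += pts[j].z; }
            avg.x /= nb.size(); avg.y /= nb.size(); avg.z /= nb.size();
            // Move each point a fraction of the way toward its neighbor average.
            out[i].x = pts[i].x + factor * (avg.x - pts[i].x);
            out[i].y = pts[i].y + factor * (avg.y - pts[i].y);
            out[i].z = pts[i].z + factor * (avg.z - pts[i].z);
        }
        pts.swap(out);
    };

    for (int it = 0; it < iterations; ++it) {
        laplacianStep(lambda);   // smooth (shrinks)
        laplacianStep(mu);       // inflate back
    }
}
```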
I'm writing a software renderer which is currently working well, but I'm trying to get perspective-correct texture coordinates and that doesn't seem to be working. I am using the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices into clip coordinates using the modelview and projection matrices.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is depth coordinate of vertex, c is texture coordinate of vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
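In code, the interpolation above looks something like this for a single attribute at one pixel (a minimal sketch; the struct and names are just for illustration, and w is the clip-space w of each vertex):

```cpp
struct Vert { float w; float u; };   // w: clip-space w, u: one texture coordinate

// b[0..2] are the barycentric weights of the pixel for the three vertices.
float perspectiveCorrect(const Vert v[3], const float b[3]) {
    float invW   = b[0] / v[0].w + b[1] / v[1].w + b[2] / v[2].w;   // 1/w
    float uOverW = b[0] * v[0].u / v[0].w
                 + b[1] * v[1].u / v[1].w
                 + b[2] * v[2].u / v[2].w;                           // u/w
    return uOverW / invW;                                            // u
}
```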
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
In this image you can see the error in the texture coordinates, evident along the diagonal; this is the result after doing the division by the depth coordinate.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in the range 0.0-1.0) by the Z component (the world-space depth of the XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
ct
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write the projection of any pixel of the texture (the same as for the vertex) and invert them to get texture coordinates from screen coordinates.
It will come in the form of a homographic transform.
Yeah, that looks like the traditional broken-perspective dent. Your algorithm looks right, though, so I'm really not sure what could be wrong. Have you checked that you're actually using the newly calculated value later on when you render? This really looks like you went to the trouble of calculating the perspective-correct value and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST)
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (like for example the PS1) the standard solution is just subdividing in smaller polygons to make the defect less noticeable.
For use in a rigid body simulation, I want to compute the mass and inertia tensor (moment of inertia), given a triangle mesh representing the boundary of the (not necessarily convex) object, and assuming constant density in the interior.
Assuming your trimesh is closed (whether convex or not) there is a way!
As dmckee points out, the general approach is building tetrahedrons from each surface triangle, then applying the obvious math to total up the mass and moment contributions from each tet. The trick comes in when the surface of the body has concavities that make internal pockets when viewed from whatever your reference point is.
So, to get started, pick some reference point (the origin in model coordinates will work fine), it doesn't even need to be inside of the body. For every triangle, connect the three points of that triangle to the reference point to form a tetrahedron. Here's the trick: use the triangle's surface normal to figure out if the triangle is facing towards or away from the reference point (which you can find by looking at the sign of the dot product of the normal and a vector pointing at the centroid of the triangle). If the triangle is facing away from the reference point, treat its mass and moment normally, but if it is facing towards the reference point (suggesting that there is open space between the reference point and the solid body), negate your results for that tet.
Effectively what this does is over-count chunks of volume and then correct once those areas are shown to be not part of the solid body. If a body has lots of blubbery flanges and grotesque folds (got that image?), a particular piece of volume may be over-counted by a hefty factor, but it will be subtracted off just enough times to cancel it out if your mesh is closed. Working this way you can even handle internal bubbles of space in your objects (assuming the normals are set correctly). On top of that, each triangle can be handled independently so you can parallelize at will. Enjoy!
Afterthought: You might wonder what happens when that dot product gives you a value at or near zero. This only happens when the triangle face is parallel (its normal is perpendicular) to the direction to the reference point -- which only happens for degenerate tets with small or zero area anyway. That is to say, the decision to add or subtract a tet's contribution is only questionable when the tet wasn't going to contribute anything anyway.
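As a minimal sketch of the accumulation described above (volume/mass and center of mass only; the inertia tensor follows the same per-tetrahedron pattern), assuming a closed mesh with consistently outward-wound triangles -- with that winding, the sign of the scalar triple product is equivalent to the dot-product facing test:

```cpp
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};
struct Tri { Vec3 a, b, c; };

static double dot(const Vec3& u, const Vec3& v) { return u.x * v.x + u.y * v.y + u.z * v.z; }
static Vec3 cross(const Vec3& u, const Vec3& v) {
    return { u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x };
}

// Accumulate signed tetrahedra (reference point + each surface triangle).
// Over-counted pockets cancel out exactly when the mesh is closed.
void massProperties(const std::vector<Tri>& mesh, double density,
                    double& mass, Vec3& centerOfMass)
{
    const Vec3 ref {0, 0, 0};                   // any reference point works
    double volume = 0.0;
    Vec3 weighted {0, 0, 0};

    for (const Tri& t : mesh) {
        // Signed volume of the tetrahedron (ref, a, b, c).
        double vol = dot(t.a - ref, cross(t.b - ref, t.c - ref)) / 6.0;
        // Centroid of that tetrahedron.
        Vec3 centroid { (ref.x + t.a.x + t.b.x + t.c.x) / 4.0,
                        (ref.y + t.a.y + t.b.y + t.c.y) / 4.0,
                        (ref.z + t.a.z + t.b.z + t.c.z) / 4.0 };
        volume     += vol;
        weighted.x += vol * centroid.x;
        weighted.y += vol * centroid.y;
        weighted.z += vol * centroid.z;
    }

    mass = density * volume;
    centerOfMass = { weighted.x / volume, weighted.y / volume, weighted.z / volume };
}
```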
Decompose your object into a set of tetrahedra around the selected interior point. (That is, solids formed from each triangular face element and the chosen center.)
You should be able to look up the volume of each element. The moment of inertia should also be available.
It gets to be rather more trouble if the surface is non-convex.
I seem to have misremembered my nomenclature: skew is not the adjective I wanted. I mean non-regular.
This is covered in the book "Game Physics, Second Edition" by D. Eberly. Chapter 2.5.5 and sample code are available online. (Just found it, haven't tried it out yet.)
Also note that the polyhedron doesn't have to be convex for the formulas to work, it just has to be simple.
I'd take a look at vtkMassProperties. This is a fairly robust algorithm for computing this, given a surface enclosing a volume.
If your polyhedron is complicated, consider using Monte Carlo integration, which is often used for multidimensional integrals. You will need an enclosing hypercube, and you will need to be able to test whether a given point is inside or outside the polyhedron. And you will need to be patient, as Monte Carlo integration is slow.
Start as usual at Wikipedia, and then follow the external links pages for further reading.
For those unfamiliar with Monte Carlo integration, here's how to compute a mass. Pick a point in the containing hypercube. Add one to the points_total counter. Is it in the polyhedron? If yes, add one to the points_internal counter. Do this many times (see the convergence and error bound estimates). Then
mass_polyhedron / mass_hypercube ≈ points_internal / points_total.
For a moment of inertia, you weight each count by the square of the distance of the point to the reference axis.
The tricky part is testing whether a point is inside or outside your polyhedron. I'm sure that there are computational geometry algorithms for that.
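Here is a minimal sketch of that procedure. The bounding box, the density, and the point-in-polyhedron test are placeholders; a real implementation would test against the mesh (for example by ray casting).

```cpp
#include <cstdint>
#include <random>

struct Vec3 { double x, y, z; };

// Placeholder inside test: a unit sphere stands in for the polyhedron so the
// sketch runs. Replace with a real point-in-polyhedron test.
bool isInside(const Vec3& p) {
    return p.x * p.x + p.y * p.y + p.z * p.z <= 1.0;
}

double monteCarloMass(const Vec3& boxMin, const Vec3& boxMax,
                      double density, std::uint64_t samples)
{
    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> ux(boxMin.x, boxMax.x),
                                           uy(boxMin.y, boxMax.y),
                                           uz(boxMin.z, boxMax.z);
    std::uint64_t points_internal = 0;
    for (std::uint64_t points_total = 0; points_total < samples; ++points_total) {
        Vec3 p { ux(rng), uy(rng), uz(rng) };
        if (isInside(p)) ++points_internal;
    }
    double boxVolume = (boxMax.x - boxMin.x) * (boxMax.y - boxMin.y) * (boxMax.z - boxMin.z);
    // mass_polyhedron / mass_box ≈ points_internal / points_total.
    // (For a moment of inertia, accumulate the squared distance to the
    // reference axis for each interior point instead of just counting it.)
    return density * boxVolume * double(points_internal) / double(samples);
}
```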
Given an arbitrary sequence of points in space, how would you produce a smooth continuous interpolation between them?
2D and 3D solutions are welcome. Solutions that produce a list of points at arbitrary granularity and solutions that produce control points for bezier curves are also appreciated.
Also, it would be cool to see an iterative solution that could approximate early sections of the curve as it received the points, so you could draw with it.
The Catmull-Rom spline is guaranteed to pass through all the control points. I find this to be handier than trying to adjust intermediate control points for other types of splines.
This PDF by Christopher Twigg has a nice brief introduction to the mathematics of the spline. The best summary sentence is:
Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.
Said another way, if the points indicate a sharp bend to the right, the spline will bank left before turning to the right (there's an example picture in that document). The tightness of those turns is controllable, in this case using his tau parameter in the example matrix.
Here is another example with some downloadable DirectX code.
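For reference, evaluating one Catmull-Rom segment is only a few lines. A minimal sketch in 2D, using the tau tension parameter mentioned above (tau = 0.5 gives the standard Catmull-Rom spline); the struct and names are illustrative:

```cpp
struct Vec2 { float x, y; };

// Evaluate the segment between p1 and p2, given their neighbors p0 and p3,
// at parameter t in [0,1]. The curve passes through p1 (t=0) and p2 (t=1).
Vec2 catmullRom(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t, float tau = 0.5f) {
    float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return b
             + t  * (-tau * a + tau * c)
             + t2 * (2 * tau * a + (tau - 3) * b + (3 - 2 * tau) * c - tau * d)
             + t3 * (-tau * a + (2 - tau) * b + (tau - 2) * c + tau * d);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y) };
}
```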
One way is the Lagrange polynomial, a method for producing a polynomial that passes through all the given data points.
During my first year at university, I wrote a little tool to do this in 2D; you can find it on this page, it is called Lagrange solver. Wikipedia's page also has a sample implementation.
It works like this: you have a polynomial p(x) of degree n-1, where n is the number of points you have. It has the form a_(n-1) x^(n-1) + a_(n-2) x^(n-2) + ... + a_0, where _ is subscript and ^ is power. You then turn this into a set of simultaneous equations:
p(x_1) = y_1
p(x_2) = y_2
...
p(x_n) = y_n
You convert the above into an augmented matrix and solve for the coefficients a_0 ... a_(n-1). Then you have a polynomial that goes through all the points, and you can interpolate between them.
Note, however, that this may not suit your purpose, as it offers no way to adjust the curvature etc. - you are stuck with a single solution that cannot be changed.
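As a minimal sketch: the same interpolating polynomial can also be evaluated directly in the Lagrange basis, which avoids building and solving the augmented matrix (useful if you only need values, not the coefficients):

```cpp
#include <cstddef>
#include <vector>

// Evaluate the unique polynomial through the points (xs[i], ys[i]) at x,
// using the Lagrange basis L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j).
double lagrangeInterpolate(const std::vector<double>& xs,
                           const std::vector<double>& ys,
                           double x)
{
    double result = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double basis = 1.0;
        for (std::size_t j = 0; j < xs.size(); ++j)
            if (j != i) basis *= (x - xs[j]) / (xs[i] - xs[j]);
        result += ys[i] * basis;
    }
    return result;
}
```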
You should take a look at B-splines. Their advantage over Bezier curves is that each part is only dependent on local points. So moving a point has no effect on parts of the curve that are far away, where "far away" is determined by a parameter of the spline.
The problem with the Lagrange polynomial is that adding a point can have extreme effects on seemingly arbitrary parts of the curve; there's no "localness" as described above.
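For illustration, a minimal sketch of evaluating one segment of a uniform cubic B-spline from four consecutive control points; moving one control point only affects the few segments that reference it, which is the local control described above. Names are illustrative:

```cpp
struct Vec2 { float x, y; };

// One segment of a uniform cubic B-spline, t in [0,1]. The curve stays inside
// the convex hull of p0..p3 but does not generally pass through them.
Vec2 cubicBSpline(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
    float t2 = t * t, t3 = t2 * t;
    float b0 = (1 - t) * (1 - t) * (1 - t) / 6.0f;
    float b1 = (3 * t3 - 6 * t2 + 4) / 6.0f;
    float b2 = (-3 * t3 + 3 * t2 + 3 * t + 1) / 6.0f;
    float b3 = t3 / 6.0f;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}
```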
Have you looked at the Unix spline command? Can that be coerced into doing what you want?
There are several algorithms for interpolating (and extrapolating) between an arbitrary (but finite) set of points. You should check out Numerical Recipes; it also includes C++ implementations of those algorithms.
Unfortunately, Lagrange or other forms of polynomial interpolation will not work on an arbitrary set of points. They only work on sets that are ordered in one dimension, e.g. x:
x_i < x_(i+1)
For an arbitrary set of points, e.g. an aeroplane flight path where each point is a (longitude, latitude) pair, you are better off simply modelling the aeroplane's journey with its current longitude, latitude, and velocity. By adjusting the rate at which the aeroplane can turn (its angular velocity) depending on how close it is to the next waypoint, you can achieve a smooth curve.
The resulting curve would not be mathematically significant nor give you bezier control points. However, the algorithm is computationally simple regardless of the number of waypoints and can produce an interpolated list of points at arbitrary granularity. It also does not require you to provide the complete set of points up front; you can simply add waypoints to the end of the set as required.
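A minimal sketch of that idea in 2D (constant speed, capped turn rate per step; all units, parameter values, and names are made up for the example):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Turn the heading toward the current waypoint by at most maxTurn radians per
// step and advance at constant speed; the emitted samples form the smooth path.
std::vector<Pt> steerThroughWaypoints(const std::vector<Pt>& waypoints,
                                      double speed = 0.1,
                                      double maxTurn = 0.05,
                                      double reachRadius = 0.2)
{
    const double pi = 3.14159265358979323846;
    std::vector<Pt> path;
    if (waypoints.size() < 2) return path;

    Pt pos = waypoints[0];
    double heading = std::atan2(waypoints[1].y - pos.y, waypoints[1].x - pos.x);

    std::size_t target = 1;
    for (int step = 0; target < waypoints.size() && step < 100000; ++step) {
        double dx = waypoints[target].x - pos.x;
        double dy = waypoints[target].y - pos.y;
        if (std::hypot(dx, dy) < reachRadius) { ++target; continue; }

        // Smallest signed angle to the target direction, clamped to the turn rate.
        double turn = std::remainder(std::atan2(dy, dx) - heading, 2.0 * pi);
        if (turn >  maxTurn) turn =  maxTurn;
        if (turn < -maxTurn) turn = -maxTurn;

        heading += turn;
        pos.x += speed * std::cos(heading);
        pos.y += speed * std::sin(heading);
        path.push_back(pos);
    }
    return path;
}
```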
I ran into the same problem and implemented a solution with some friends the other day. I'd like to share the example project on GitHub.
https://github.com/johnjohndoe/PathInterpolation
Feel free to fork it.
Google "orthogonal regression".
Whereas least-squares techniques try to minimize the vertical distance between the fitted line and each data point f(x), orthogonal regression minimizes the perpendicular distances.
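In 2D, orthogonal regression has a simple closed form: the best-fit line passes through the centroid and runs along the principal direction of the covariance matrix of the points. A minimal sketch (names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Line { Pt point; Pt dir; };   // a point on the line and a unit direction

Line orthogonalRegression(const std::vector<Pt>& pts) {
    // Centroid of the points.
    double mx = 0, my = 0;
    for (const Pt& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size(); my /= pts.size();

    // Entries of the 2x2 covariance (scatter) matrix.
    double sxx = 0, sxy = 0, syy = 0;
    for (const Pt& p : pts) {
        double dx = p.x - mx, dy = p.y - my;
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
    }

    // Direction of the eigenvector belonging to the largest eigenvalue of
    // [[sxx, sxy], [sxy, syy]], i.e. the principal axis of the point cloud.
    double theta = 0.5 * std::atan2(2.0 * sxy, sxx - syy);
    return { { mx, my }, { std::cos(theta), std::sin(theta) } };
}
```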
Addendum
In the presence of noisy data, the venerable RANSAC algorithm is worth checking out too.
In the 3D graphics world, NURBS are popular. Further info is easily googled.