I am trying to understand how to manually generate objects.
I have a mesh, part of which I delete, and I create new geometry in its place. I have the normals of the deleted vertices, and from them I need to build new faces (of a different size and count) facing the same direction.
But I don't understand how to choose the correct winding. It sounds easy when tutorials talk about CCW winding in screen space, but what if I have a bunch of almost chaotic points in model space? How do I determine CCW then, and around which axis? I suspect that the nearest old normals might help, but what is the cheapest method to determine the correct order?
It turned out to be easier than I thought. Take the cross product of two edge vectors of the triangle, then take the dot product of the result with the reference normal; if the dot product is negative, reverse the vertex order during generation.
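For reference, here is a minimal numpy sketch of that check (the function name and the idea of passing a nearby old normal as the reference are just illustrative):

import numpy as np

def wind_triangle(a, b, c, reference_normal):
    # Cross product of two edge vectors gives the face normal of (a, b, c).
    face_normal = np.cross(b - a, c - a)
    # If it points against the reference normal, flip the winding.
    if np.dot(face_normal, reference_normal) < 0:
        return a, c, b
    return a, b, c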
Related
I draw a vectorial geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they aren't in their original positions anymore because of some print error or bad print calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice I need to warp the geometry in order to obtain the real geometry printed on the paper with the same print error that I have on the calibration points.
The distortion comes from the physical deformation of the material (cloth, not paper) during the print process. I can't know in advance how much the material will distort during printing.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometric transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise local linear / local weighted mean transformations, which need at least 4-6 control points. The more control points you have, the better in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them if you specify a class (affine or piecewise local linear). See for example fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find implementations for other programming languages (Python, C, Java).
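Outside Matlab, the affine case can be sketched as an ordinary least-squares fit over the control point correspondences; this is only an illustration of the idea, not necessarily how fitgeotrans works internally:

import numpy as np

def fit_affine_2d(src, dst):
    # src, dst: (N, 2) arrays of corresponding control points, N >= 3.
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])            # design matrix [x, y, 1]
    params, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                                  # 2x3 matrix [A | t]

def apply_affine_2d(M, pts):
    # Apply the 2x3 affine matrix M to an (N, 2) array of points.
    return pts @ M[:, :2].T + M[:, 2]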
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should do well for your images. They give you a good initial estimate of the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest), and they allow further refinement.
So I advise you to first just register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise/local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language you will find many implementations that do the job.
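For the "always take the nearest" pairing step, a hedged sketch (assuming the initial registration already brings the two point sets roughly on top of each other, and using scipy rather than Matlab):

import numpy as np
from scipy.spatial import cKDTree

def match_nearest(reference_pts, scanned_pts):
    # Pair each scanned point with the index of its nearest reference point.
    tree = cKDTree(np.asarray(reference_pts))
    _, nearest_idx = tree.query(np.asarray(scanned_pts))
    return list(enumerate(nearest_idx))              # (scanned_i, reference_j) pairs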
I'm working on a problem to eliminate common line segments in a collection of paths. Many of these paths share the same segments.
It seems that a 2D line segment should have some way to uniquely identify itself, like a key.
So a Line [(A,B), (C,D)] is the same as [(C,D), (A,B)]
The only solution I could come up with is to sort the points.
This seems like it would be a common problem in math or graphics, but the solution escapes me.
From a mathematical point of view, this looks like a matter of an undirected graph (as opposed to a directed graph).
Sorting the points is one way to handle this: it's a straightforward way to represent an unordered edge with a single, unambiguously selected value (it shouldn't matter what ordering you choose, as long as it is consistent for all possible segments). You do need to ensure that you maintain this ordering as an invariant: accidentally slipping in a mis-ordered edge could cause problems for anything that depends on the ordering.
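For example, a canonical key can be produced by sorting the two endpoints; a short Python sketch (the paths variable, a list of point lists, is hypothetical):

def segment_key(p1, p2):
    # Order-independent key for the segment p1-p2.
    return (p1, p2) if p1 <= p2 else (p2, p1)

# Both orderings map to the same key:
assert segment_key((0, 1), (2, 3)) == segment_key((2, 3), (0, 1))

# Deduplicate segments shared by several paths:
unique_segments = {segment_key(a, b)
                   for path in paths
                   for a, b in zip(path, path[1:])}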
However, mathematically speaking, undirected graphs are often defined as directed graphs with a symmetry property: if (A,B) is an edge, then so is (B,A). This suggests another way: ensure that you always store both (A,B) and (B,A). Perhaps both segment orderings could have a link to any common data, and possibly a fast way to access one from the other. (As with the sorted point method, you need to maintain this symmetry as an invariant.)
The best choice depends on your application. If you're using your segments as a key, the sorting method might be best. However, some applications are a better match for the symmetric method. For example, the doubly connected edge list is a data structure which represents each edge as two linked "half-edges", one in each direction.
Since you mention graphics, note that the doubly connected edge list is often used to represent the edges of 3-D polytopes.
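A minimal sketch of the symmetric idea, storing both orderings as linked half-edges (just the core of the structure, not a full doubly connected edge list):

class HalfEdge:
    # One directed half of an undirected segment.
    def __init__(self, origin, destination):
        self.origin = origin
        self.destination = destination
        self.twin = None        # the opposite-direction half-edge
        self.data = None        # shared per-segment data, if any

def make_edge(a, b, shared_data=None):
    # Create both (a, b) and (b, a) and link them to each other.
    ab, ba = HalfEdge(a, b), HalfEdge(b, a)
    ab.twin, ba.twin = ba, ab
    ab.data = ba.data = shared_data
    return ab, ba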
Also, note the similarity to oriented triangles: there are good, practical reasons for computer graphics to treat triangles as "one-sided", such that drawing a triangle visible from one side is distinct from drawing the same triangle visible from the other. Like half-edges, this distinction is determined by the order of the vertices: clockwise for one side, counterclockwise for the other.
I'm very new to PCL.
I'm trying to detect the floor under an object to check whether the object has toppled over or is positioned horizontally.
I've checked API and found the method: pcl::PointCloud< T >::at.
It seems like I could get the Z-value of a point using at. Is that correct?
If so, I'm confused about how it should work. Mathematically a point is infinitely small. On my scans I see that the point density gets smaller the more distant the points are in the Z direction.
Will at always return a point? Is the value the mean of the nearest physical points?
As referenced in the documentation, pcl::PointCloud< T >::at returns the information of a single point (the coordinates plus other data depending on the point format) given column and row information (roughly the X,Y in the depth image). For this reason, this method only works on organized clouds.
Unfortunately, not every point is a valid point. Unless you filter the point cloud, you could find invalid measurements (points which have NaN components). This is pretty normal; just discard those points using a filter. Your intuition is right: the point density gets smaller the further away you go from the sensor.
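PCL has a helper for this (pcl::removeNaNFromPointCloud); purely as an illustration of the idea, the same filter over a plain N x 3 array of coordinates looks like this:

import numpy as np

def drop_invalid_points(points):
    # Keep only rows whose x, y, z components contain no NaN.
    points = np.asarray(points)
    valid = ~np.isnan(points).any(axis=1)
    return points[valid]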
As for what you're trying to achieve, you should take a look at the planar segmentation tutorial on the PCL website and at the Table Object Detector software by Nicolas Burrus. The latter extracts a plane, and the clusters of objects on top of it.
I'm working on a PyMEL script that allows the user to duplicate a selected object multiple times, using a CV curve and its points' coordinates to transform and rotate each copy to a certain point in space.
In order to achieve this, I'm using the 2 points adjacent to each CV (control vertex) to determine the rotation for the object.
I have managed to retrieve the coordinates of the curve's CVs:
#Add all points of the curve to the cvDict dictionary
import pymel.core as pm

i = 0  # avoid shadowing the built-in 'int'
cvDict = {}
while i < selSize:  # selSize: number of CVs on the curve, obj: the curve's name
    pointName = 'point%s' % i
    coords = pm.pointPosition('%s.cv[%s]' % (obj, i), w=1)
    #Setup the key for the current point
    cvDict[pointName] = {}
    #add coords to x,y,z subkeys of the dict
    cvDict[pointName]['x'] = coords[0]
    cvDict[pointName]['y'] = coords[1]
    cvDict[pointName]['z'] = coords[2]
    i += 1
Now the problem I'm having is figuring out how to get the angle for each CV.
I stumbled upon the angleBetween() function:
http://download.autodesk.com/us/maya/2010help/CommandsPython/angleBetween.html
In theory, this should be my solution, since I could find the "middle vector" (not sure if that's the mathematical term) at each of the curve's CVs (using the adjacent CVs' coordinates to find a fourth point) and use the above-mentioned function to determine how much I'd have to rotate the object relative to a reference vector, for example on the z axis.
At least in theory. The issue is that the function only takes one set of coordinates per vector, and I have absolutely no idea how to convert my point coordinates to that format (since I always have at least two sets of coordinates, one for each point).
Thanks.
If you wanna go the long way and not grab the world transforms of the curve, definitely make use of pymel's datatypes module. It has everything that Python's native math module does, plus a few others that are Maya-specific. Also, the math you would need to do this based on CVs can be found here.
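For example, a minimal sketch of the direction/angle part, reusing the cvDict built in the question (the helper names are made up, and it assumes interior CVs so that i - 1 and i + 1 both exist):

import pymel.core as pm
from pymel.core import datatypes as dt

def cv_point(cvDict, i):
    # Rebuild a datatypes.Vector from the x/y/z subkeys stored earlier.
    p = cvDict['point%s' % i]
    return dt.Vector(p['x'], p['y'], p['z'])

def rotation_at_cv(cvDict, i, aim_axis=(0, 0, 1)):
    # Direction running from the previous CV to the next one, then the
    # euler rotation (degrees) that turns aim_axis toward that direction.
    direction = cv_point(cvDict, i + 1) - cv_point(cvDict, i - 1)
    return pm.angleBetween(euler=True, v1=list(aim_axis), v2=list(direction))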
Hope that puts you in the right direction.
If you're going to skip the math, maybe you should just create a locator, path-animate it along the curve, and then sample the result. That would allow you to get completely continuous orientations along the curve. The midpoint-constraint method you've outlined above is limited to 1 valid sample per curve segment -- if you wanted 1/4 of the way or 3/4 of the way between two CVs, your orientation would be off. Plus you don't have to reinvent all of the many different options for deciding on the secondary axis of rotation, reading curves with funky parameterization, and so forth.
I'm making a program that selects an area within a canvas by clicking a sequence of points. The points clicked are linked by some lines this way: every new point is linked with the first and the last ones. I'm looking for an algorithm that computes the area of the resulting polygon.
Intersections are allowed, and that is where the complexity lies: the algorithm must handle this case by finding the polygon according to the ordered sequence of clicked points and calculating its area.
After many searches, the best I've found is this http://sigbjorn.vik.name/projects/Triangulation.pdf, but I would need something easier to implement in Processing.js.
First cut the line segments where they intersect. If the input set is small, you can simply check every pair; otherwise use an R-tree. Then compute a constrained (Delaunay) triangulation. Then determine the inner triangles using ray shooting and sum up their areas.
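The cutting step only needs a segment-segment intersection test; a minimal sketch in Python (for the constrained Delaunay triangulation itself you would normally use an existing library rather than writing your own):

def segment_intersection(p1, p2, p3, p4, eps=1e-9):
    # Intersection point of segments p1-p2 and p3-p4, or None if they
    # don't cross. Points are (x, y) tuples.
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < eps:                 # parallel or collinear
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None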
hth