Point cloud registration using pcl - point-cloud-library

OK, so here is the problem. I followed the link given here for registering a pair of point clouds.
I have a couple of queries:
1) Since the second point cloud is aligned to the frame of the first one, the coordinates of the points in the first point cloud should remain unchanged in the final point cloud, right?
2) Is there a way to map the target points to the aligned points in the final cloud? In other words, say I have two point clouds pc1 and pc2. pc1 has 3 points A, B, and C, and pc2 has 4 points W, X, Y, and Z. After registration, the final point cloud contains points A, B, C (since they should remain unchanged) and W', X', Y', and Z'. My question is: is there a way to know that W' corresponds to W in the target cloud, X' to X, and so on? And is there a way to go the other way around, i.e., given W, how do I know what it corresponds to (i.e. W')?
Thanks in advance.

In PCL, registration by itself does not change either cloud. The result of registration is the transformation from the frame of the source cloud to the frame of the target cloud.
In your link the registration is done in pairAlign(). It incrementally runs
points_with_normals_src = reg_result;
reg.align(reg_result);
each time getting the transformed points_with_normals_src cloud as reg_result, and
//accumulate transformation between each Iteration
Ti = reg.getFinalTransformation () * Ti;
accumulating transformation from previous steps.
Applying the transformation (for some reason they apply the inverse transformation to the target to bring it into the source frame) and merging the aligned clouds happens only after the loop of registrations:
// Transform target back in source frame
pcl::transformPointCloud (*cloud_tgt, *output, targetToSource);
...
//add the source to the transformed target
*output += *cloud_src;
If I understand you right, your question is: given a point A in cloud_tgt, find its image in the merged output cloud, right?
If all you need is to find the transformed coordinates of the point A, then it's simple (note that a raw Eigen 4x4 matrix cannot be multiplied by a pcl::PointXYZ directly; pcl::transformPoint from pcl/common/transforms.h does exactly this):
pcl::PointXYZ transformed_a = pcl::transformPoint (a, Eigen::Affine3f (targetToSource));
If you want to find the index of the transformed A in the output cloud, it's a bit more complicated. You could simply enumerate all points in the output cloud and compare their coordinates with transformed_a, but this becomes a noticeable performance issue if you need to find correspondences for many points from your target cloud. In that case you'd rather use pcl::search::KdTree (see http://docs.pointclouds.org/trunk/a02948.html).
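For illustration only, here is a rough sketch of that nearest-neighbour lookup written in Python with numpy and scipy (the array names and shapes are assumptions; in PCL itself you would build the pcl::search::KdTree over the output cloud and query it with each transformed target point):

import numpy as np
from scipy.spatial import cKDTree

def target_to_output_indices(output_pts, target_pts, target_to_source):
    # output_pts: (N, 3) points of the merged output cloud
    # target_pts: (M, 3) points of the original target cloud
    # target_to_source: 4x4 homogeneous transform from the tutorial
    tgt = np.asarray(target_pts, dtype=float)
    homog = np.hstack([tgt, np.ones((len(tgt), 1))])
    transformed = (target_to_source @ homog.T).T[:, :3]
    # Nearest neighbour of each transformed target point in the output cloud.
    tree = cKDTree(np.asarray(output_pts, dtype=float))
    dists, indices = tree.query(transformed)
    # For exact matches dists should be ~0; larger values mean no correspondence.
    return indices, dists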

Related

Generate a network graph inside a cylinder

I need to solve a problem and I'm really stuck with it, so I want to summon the power of the community to see if anyone has an idea on how to handle it.
I need to create a porous material from a given surface. So, I have a point cloud representing the surface of a cylinder, like the one in the figure, and I need to generate a graph inside it, starting from the points on the surface, that will later be filled with volume. It is mandatory that all the points of the surface are used (more can be added if necessary), the length of the edges must be an input parameter of the function, and the angle between two nodes must always be greater than 45º with respect to the horizontal plane.
My initial idea was to use a while loop that, in each iteration, creates a random point cloud inside the cylinder between the current z (the maximum z value of the previous iteration) and the current z plus the given edge length. Once this point cloud is created, it joins the surface nodes of the last iteration to the points of this new cloud that satisfy the edge-length and angle conditions. It continues until the current z is greater than the maximum z of the cylinder surface.
The problem with this idea is that it is not consistent and the results are disastrous. So I would like to ask if anyone has a better idea, or if anyone knows of any Python libraries that could help me. I am currently using networkx and numpy-stl, but those are not meant to do what I want. ChatGPT is unable to understand me either :(.
Thank you so much for your time!!

Generate random point on a 2D disk in 3D space given normal vector

Is there a simple and efficient method to generate a random (uniformly distributed) point on a disk "hanging" in 3-dimensional space? The disk is defined by its normal.
Ideally, I would like to avoid rotation matrices, as I do not fully understand them, and I know they have issues.
So far, I've tried generating a 3D unit vector and projecting it onto the plane of the disk, which does ensure that the point is within the disk, but not that it's uniformly distributed.
I also tried scaling the generated vector according to some function of its length, but I couldn't get a uniform distribution back regardless.
I had an idea that involved creating 2 vectors perpendicular to each other and the normal, to define a local coordinate system. Then I could generate a point on the unit disk as in 2D and convert the result back to the global coordinate system. This seems like it would be quite efficient, as it involves some precomputation (which I'm completely fine with) and only simple calculations afterwards (this is for a raytracer, so it'll happen a lot). The problem is, I don't know how to reliably calculate the local coordinate system's basis vectors while avoiding possible issues like collinearity.
Any help is much appreciated.
An easy way to calculate orthogonal basis vectors u, v for a plane with normal n = (a,b,c) is to find the component with the least absolute value and make u orthogonal to that component; the rest pretty much follows. For example, if the first component is the one with minimal absolute value, you can pick these basis vectors:
u = (0, -c, b) // n·u = -bc+cb = 0
v = (b²+c², -ab, -ac) // n·v = ab²+ac²-ab²-ac² = 0, u·v = abc-abc = 0
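Putting it together, here is a hedged sketch (the function name and parameters are mine) of using such a basis to generate a uniformly distributed point on the disk; the square root on the radius is what keeps the distribution uniform in area:

import numpy as np

def random_point_on_disk(center, normal, radius, rng=np.random.default_rng()):
    n = np.array(normal, dtype=float)
    n /= np.linalg.norm(n)
    a, b, c = n
    # Pick u orthogonal to the component of n with the smallest absolute value.
    if abs(a) <= abs(b) and abs(a) <= abs(c):
        u = np.array([0.0, -c, b])     # n.u = -bc + cb = 0
    elif abs(b) <= abs(c):
        u = np.array([-c, 0.0, a])     # n.u = -ac + ca = 0
    else:
        u = np.array([-b, a, 0.0])     # n.u = -ab + ba = 0
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                 # second basis vector, orthogonal to n and u
    # Uniform sample on the unit disk: sqrt for the radius, uniform angle.
    r = radius * np.sqrt(rng.uniform())
    theta = rng.uniform(0.0, 2.0 * np.pi)
    return np.asarray(center, dtype=float) + r * np.cos(theta) * u + r * np.sin(theta) * v

Note that the explicit v given in the answer above, (b²+c², -ab, -ac), is exactly the cross product n × u for the first case, so either form works.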

Converting 2 point coords to vector coords for angleBetween()?

I'm working on a PyMEL script that allows the user to duplicate a selected object multiple times, using a CV curve and its points' coordinates to transform and rotate each copy to a certain point in space.
In order to achieve this, I'm using the 2 adjacent points of each CV (control vertex) to determine the rotation for the object.
I have managed to retrieve the coordinates of the curve's CVs
from pymel.core import pointPosition

#Add all points of the curve to the cvDict dictionary
#(selSize is the number of CVs and obj is the curve's name, both defined earlier)
i = 0
cvDict = {}
while i < selSize:
    pointName = 'point%s' % i
    coords = pointPosition('%s.cv[%s]' % (obj, i), w=1)
    #Set up the key for the current point
    cvDict[pointName] = {}
    #Add coords to x, y, z subkeys of the dict
    cvDict[pointName]['x'] = coords[0]
    cvDict[pointName]['y'] = coords[1]
    cvDict[pointName]['z'] = coords[2]
    i += 1
Now the problem I'm having is figuring out how to get the angle for each CV.
I stumbled upon the angleBetween() function:
http://download.autodesk.com/us/maya/2010help/CommandsPython/angleBetween.html
In theory, this should be my solution, since I could find the "middle vector" (not sure if that's the mathematical term) of each of the curve's CVs (using the adjacent CVs' coordinates to find a fourth point) and use the above mentioned function to determine how much I'd have to rotate the object using a reference vector, for example on the z axis.
At least theoretically - the issue is that the function only takes one set of coords for each vector, and I have absolutely no idea how to convert my point coords to that format (since I always have at least 2 sets of coordinates, one for each point).
Thanks.
If you wanna go the long way and not grab the world transforms of the curve, definitely make use of pymel's datatypes module. It has everything that python's native math module does and a few others that are Maya specific. Also the math you would require to do this based on CVs can be found here.
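For instance, a minimal sketch of that conversion (the CV indices and the z-axis reference vector are just assumptions; the point is that a direction vector is simply the difference of two point positions):

import pymel.core as pm
import pymel.core.datatypes as dt

# Using the cvDict built in the question; the CVs adjacent to point1 are assumed
# to be point0 and point2.
prev_cv = dt.Vector(cvDict['point0']['x'], cvDict['point0']['y'], cvDict['point0']['z'])
next_cv = dt.Vector(cvDict['point2']['x'], cvDict['point2']['y'], cvDict['point2']['z'])

# The direction through the middle CV is just the difference of the two points.
direction = (next_cv - prev_cv).normal()

# Compare it against a reference axis (here +Z) to get Euler rotation values.
rotation = pm.angleBetween(euler=True, v1=(0, 0, 1), v2=tuple(direction))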
Hope that puts you in the right direction.
If you're going to skip the math, maybe you should just create a locator, path-animate it along the curve, and then sample the result. That would let you get completely continuous orientations along the curve. The midpoint-constraint method you've outlined above is limited to 1 valid sample per curve segment -- if you wanted 1/4 of the way or 3/4 of the way between two CVs, your orientation would be off. Plus you don't have to reinvent all of the many different options for deciding on the secondary axis of rotation, reading curves with funky parameterization, and so forth.

Determining if a list of points fit a "formation"?

I have, as input, an arbitrary "formation", which is a list of rectangles, F:
And as another input, an unordered list of 2D points, P:
In this example, I consider P to match the formation F, because if P were to be rotated 45° counter-clockwise, each rectangle in F will be satisfied by containing a point. It would also be considered a match if there were an extraneous point in P which did not fall into a rectangle.
Neither the formation nor the point inputs have any particular origin, and the scale between the two is not required to be the same; e.g., the formation could describe an area of a kilometer, and the input points could describe an area of a centimeter. And lastly, I need to know which point ended up in which node of the formation.
I'm trying to develop a general-purpose algorithm that satisfies all of these constraints. It will be executed millions of times per second against a large database of location information, so I'm trying to "fail out" as soon as I can.
I've considered taking the angles between all points in both inputs and comparing them, or calculating and comparing hulls, but every approach seems to fall apart with one of the constraints.
Points in the formation could also easily be represented as circles with an x,y origin and tolerance radius, and that seems to simplify the approaches I've tried so far. I'd appreciate any solid plan-of-attack or A-Ha! insights.
I've had another thought - using polar coordinates this time.
The description was getting complex/ambiguous, so here is some code that hopefully illustrates the idea.
The gist is to express the formations and points in terms of polar coordinates, with the origin in the center of the formation/point set. It then becomes a lot easier to find the rotation and scaling factors of the transform between points and formations. The translation component is trivially found by comparing the average of the point set and of the formation zone set.
Note that this approach will treat your formation zones not as squares or circles, but as sections of circle segments. Hopefully this is a fudge that you can live with.
It will also not return the exact scaling and rotation terms of a valid mapping transform. It will give you a mapping between formation zones and points, and a good approximation of the final rotation and scaling factors. This approximation could be very quickly refined into a valid solution via a simple relaxation scheme. It will also quickly disregard invalid point sets.
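A rough Python sketch of that idea (the names are assumptions: zone_centers and points are (N, 2) arrays, each formation zone is reduced to its center point, and the point counts are assumed equal):

import numpy as np

def polar_signature(points):
    # Polar coordinates about the centroid, sorted by angle so that two
    # sets can be compared cyclic shift by cyclic shift.
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    r = np.linalg.norm(centered, axis=1)
    theta = np.arctan2(centered[:, 1], centered[:, 0])
    order = np.argsort(theta)
    return r[order], theta[order], order

def match_formation(zone_centers, points, scale_tol=0.05, angle_tol=0.05):
    # Returns {zone index: point index} if the points can be rotated and
    # scaled onto the formation, else None. Assumes no zone sits exactly
    # on the centroid.
    rf, tf, zone_idx = polar_signature(zone_centers)
    rp, tp, pt_idx = polar_signature(points)
    if len(rf) != len(rp) or len(rf) == 0:
        return None
    for shift in range(len(rp)):
        r_s = np.roll(rp, shift)
        t_s = np.roll(tp, shift)
        i_s = np.roll(pt_idx, shift)
        scales = r_s / rf                                 # per-pair scale estimates
        rots = np.angle(np.exp(1j * (t_s - tf)))          # per-pair rotations, wrapped to (-pi, pi]
        spread = np.angle(np.exp(1j * (rots - rots[0])))  # rotation spread relative to the first pair
        if np.abs(scales / scales.mean() - 1.0).max() < scale_tol and np.abs(spread).max() < angle_tol:
            return dict(zip(zone_idx.tolist(), i_s.tolist()))
    return None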
One approach would be to express the point sets and formations in relative coordinate systems.
For each point set and formation:
Identify the most mutually-distant pair of points, call them A and B
Identify the point farthest from the line through A and B, call it C. Ensure that C is on the left of the line AB - you may need to swap A and B to make this so.
Express the rest of the points in terms of A, B and C. This is a simple matter of finding the closest point D on the line through AB for each point, and scaling such that all distances are in terms of the distance between A and B. The distance from A to D is your relative x coordinate, and the distance from D to the point is the y.
For example, if you find that A and B are ten units apart, and that C is 5 units distant from the midpoint of AB, then the relative coordinates would be:
A: (0,0)
B: (1,0)
C: (0.5,0.5)
You can then compare the point sets and formations independently of the global coordinate system. Note that the distance tolerances to find a match also have to be scaled in terms of AB.
I can easily imagine problem formations for this approach, where the choices of A, B and C are difficult to make unambiguously, but it's a start.
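Here is a hedged Python sketch of steps 1-3 above (the names are mine; points is an (N, 2) array, and ambiguous choices of A, B and C are not handled):

import numpy as np

def relative_coords(points):
    pts = np.asarray(points, dtype=float)
    # 1) Most mutually distant pair A, B.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    ai, bi = np.unravel_index(np.argmax(d), d.shape)
    A, B = pts[ai], pts[bi]
    ab = B - A
    ab_len = np.linalg.norm(ab)
    # 2) Point C farthest from the line AB; swap A and B if C is on the right.
    left_normal = np.array([-ab[1], ab[0]]) / ab_len
    side = (pts - A) @ left_normal
    ci = np.argmax(np.abs(side))
    if side[ci] < 0:
        A, B, ab, left_normal = B, A, -ab, -left_normal
    # 3) Relative coordinates: x along AB, y along the left normal, scaled by |AB|,
    #    so that A maps to (0, 0) and B maps to (1, 0).
    x = (pts - A) @ (ab / ab_len) / ab_len
    y = (pts - A) @ left_normal / ab_len
    return np.column_stack([x, y])

Applying the same function to both the formation centers and the point set gives two relative coordinate lists that can be compared directly, with the match tolerances scaled in terms of AB as noted above.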

Calculating rotation along a path

I am trying to animate an object, let's say it's a car. I want it to go from point (x1, y1, z1) to point (x2, y2, z2). It moves to those points, but it appears to be drifting rather than pointing in the direction of motion. So my question is: how can I solve this issue in my updateframe() event? Could you point me in the direction of some good resources?
Thanks.
First off, how do you represent the road?
I recently did exactly this and I used Catmull-Rom splines for the road. To orient an object and make it follow the spline path, you need to interpolate the current x,y,z position from a parameter t that walks along the spline, then orient it along the Frenet frame (Frenet coordinate system) for that particular position.
Basically, for each point you need 3 vectors: the Tangent, the Normal, and the Binormal. The Tangent will be the actual direction you would like your object (car) to point in.
I chose Catmull-Rom because it is easy to derive the tangent at any point: just take the (vector) difference between two points near the current one. (Say you are at t; pick t-epsilon and t+epsilon, with epsilon being a small enough constant.)
For the other 2 vectors, you can use an iterative method: start with a known set of vectors at one end, and work out a new set based on the previous one each updateframe().
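A small sketch of that, assuming a position(t) function that evaluates the spline; re-orthogonalising the previous normal against the new tangent is one simple way to do the iterative update described above:

import numpy as np

def frame_at(position, t, prev_normal, eps=1e-3):
    # Tangent by central difference between two nearby points on the spline.
    tangent = position(t + eps) - position(t - eps)
    tangent /= np.linalg.norm(tangent)
    # Carry the previous normal along, re-orthogonalised against the new tangent.
    normal = prev_normal - np.dot(prev_normal, tangent) * tangent
    normal /= np.linalg.norm(normal)
    binormal = np.cross(tangent, normal)
    return tangent, normal, binormal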
You need to work out the initial orientation of the car, and the final orientation of the car at its destination, then interpolate between them to determine the orientation in between for the current timestep.
This article describes the mathematics behind doing the interpolation, as well as some other things to do with rotating objects that may be of use to you. gamasutra.com in general is an excellent resource for this sort of thing.
I think interpolating is giving the drift you are seeing.
You need to model the way steering works. Your update function should 1) always move the car in the direction it is pointing, and 2) turn the car toward the current target. One should not affect the other, so that the turning happens and completes more rapidly than the arriving.
In general terms, the direction the car is pointing is along its velocity vector, which is the first derivative of its position vector.
For example, if the car is going in a circle (of radius r) around the origin every n seconds then the x component of the car's position is given by:
x = r.sin(2πt/n)
and the x component of its velocity vector will be:
vx = dx/dt = r.(2π/n)cos(2πt/n)
Do this for all of the x, y and z components, normalize the resulting vector and you have your direction.
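As a small sketch of that (assuming the other in-plane component is y = r.cos(2πt/n), which is not given above):

import numpy as np

def circle_direction(r, n, t):
    w = 2.0 * np.pi / n
    # x = r*sin(w*t)  ->  vx = r*w*cos(w*t)
    # y = r*cos(w*t)  ->  vy = -r*w*sin(w*t)   (assumed parametrisation)
    v = np.array([r * w * np.cos(w * t), -r * w * np.sin(w * t), 0.0])
    return v / np.linalg.norm(v)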
Always pointing the car toward the destination point is simple and cheap, but it won't work if the car is following a curved path. In which case you need to point the car along the tangent line at its current location (see other answers, above).
Going from one position to another gives an object a velocity; a velocity is a vector, and normalising that vector gives you the direction vector of the motion, which you can plug into a "look at" matrix. Take the cross product of the up vector with this direction to get the side vector, and hey presto, you have a full matrix for controlling the orientation of the object in motion.
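A minimal sketch of that construction (axis conventions vary between engines; here the matrix columns are side, up, and forward):

import numpy as np

def look_at(velocity, up=(0.0, 1.0, 0.0)):
    forward = np.array(velocity, dtype=float)
    forward /= np.linalg.norm(forward)
    side = np.cross(up, forward)              # cross of up with the direction of motion
    side /= np.linalg.norm(side)
    true_up = np.cross(forward, side)         # re-orthogonalised up vector
    return np.column_stack([side, true_up, forward])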

Resources