I have two point clouds. To match them, I am trying to register them with ICP. The point clouds are not very similar, but I at least want to get them close together.
When using IterativeClosestPoint from the PCL library, this works when I use point cloud A as the source and point cloud B as the target. But it doesn't work when I use B as the source and A as the target; in the latter case it even increases the distance between the two clouds.
Does anyone know what I am doing wrong? Why should there be a difference in performance when the source and target are swapped?
This is my code:
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputSource(A);
icp.setInputTarget(B);
icp.setMaximumIterations(50);
icp.setTransformationEpsilon(1e-8);
icp.setEuclideanFitnessEpsilon(1);
icp.setMaxCorrespondenceDistance(0.5); // 50cm
icp.setRANSACOutlierRejectionThreshold(0.03);
icp.align(aligned_model_cloud);
I am happy for any ideas and input.
Edit: here are the two clouds
Cloud A
Cloud B
Update:
I tried my code using cloud A as the source and cloud A* as the target, where cloud A* is a copy of cloud A with just a translation along the x-axis. I did the same experiment with cloud B, and in both cases ICP converged successfully.
But as soon as I use cloud B as the source and cloud A as the target, it doesn't work anymore: it converges after moving the cloud only a tiny bit (even in the wrong direction). I checked the convergence criteria and found that it is CONVERGENCE_CRITERIA_REL_MSE (when transformationEpsilon is almost zero). I tried reducing the relative MSE with
icp.getConvergeCriteria()->setRelativeMSE(1e-15), but this didn't help. When checking the value of the relative MSE after converging, I get something like -124034642, which doesn't make any sense to me at all.
Update 2: I moved the clouds quite close together first, without ICP. When I do this, ICP works fine.
Update 3: I am doing an FPFH-based initial alignment first and ICP afterwards. Doing it like this works too.
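A rough sketch of such a coarse-to-fine pipeline (FPFH features plus SAC-IA for the initial estimate, then ICP for the refinement) might look like the following. The class and header names come from PCL's features/registration modules, but all radii and thresholds are placeholder values that need tuning for the actual clouds:

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>

using PointT = pcl::PointXYZ;
using CloudT = pcl::PointCloud<PointT>;

// Coarse alignment with FPFH + SAC-IA, followed by ICP refinement.
CloudT::Ptr coarseToFineAlign(const CloudT::Ptr& source, const CloudT::Ptr& target)
{
    pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);

    // 1. Normals (required by FPFH); the 5 cm radius is a placeholder.
    pcl::NormalEstimation<PointT, pcl::Normal> ne;
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.05);
    pcl::PointCloud<pcl::Normal>::Ptr srcNormals(new pcl::PointCloud<pcl::Normal>);
    ne.setInputCloud(source);
    ne.compute(*srcNormals);
    pcl::PointCloud<pcl::Normal>::Ptr tgtNormals(new pcl::PointCloud<pcl::Normal>);
    ne.setInputCloud(target);
    ne.compute(*tgtNormals);

    // 2. FPFH descriptors for both clouds; the radius must exceed the normal radius.
    pcl::FPFHEstimation<PointT, pcl::Normal, pcl::FPFHSignature33> fpfh;
    fpfh.setSearchMethod(tree);
    fpfh.setRadiusSearch(0.10);
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr srcFeatures(new pcl::PointCloud<pcl::FPFHSignature33>);
    fpfh.setInputCloud(source);
    fpfh.setInputNormals(srcNormals);
    fpfh.compute(*srcFeatures);
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr tgtFeatures(new pcl::PointCloud<pcl::FPFHSignature33>);
    fpfh.setInputCloud(target);
    fpfh.setInputNormals(tgtNormals);
    fpfh.compute(*tgtFeatures);

    // 3. Coarse alignment with SAC-IA to bring the clouds roughly on top of each other.
    pcl::SampleConsensusInitialAlignment<PointT, PointT, pcl::FPFHSignature33> sacia;
    sacia.setInputSource(source);
    sacia.setSourceFeatures(srcFeatures);
    sacia.setInputTarget(target);
    sacia.setTargetFeatures(tgtFeatures);
    sacia.setMinSampleDistance(0.05);
    sacia.setMaxCorrespondenceDistance(1.0);
    CloudT::Ptr coarse(new CloudT);
    sacia.align(*coarse);

    // 4. Fine alignment with ICP, starting from the coarse result.
    pcl::IterativeClosestPoint<PointT, PointT> icp;
    icp.setInputSource(coarse);
    icp.setInputTarget(target);
    icp.setMaxCorrespondenceDistance(0.5);
    CloudT::Ptr refined(new CloudT);
    icp.align(*refined);
    return refined;
}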
This question is old and OP has already found a solution, but I'll explain in case OP or someone else finds it useful.
First, ICP works by iteratively estimating correspondences between the two clouds and then minimizing the overall distance between them. ICP estimates correspondences using closest-point data association (hence the name Iterative Closest Point).
As you may know, the closest-neighbor graph is directed. That is to say, if point A has B as its closest neighbor, B might not have A as its closest neighbor, because some other point C may be closer to B than A is!
Since ICP uses closest-point data association to estimate correspondences between the two clouds, specifying A as the source yields a different correspondence set than specifying B. That explains the difference you observed.
Usually the difference is small and you may not notice it after ICP. But in your case, the two clouds you provided are quite different (one is much larger than the other), so the relation becomes very asymmetric.
If you want to ensure the result is symmetric, you can change the data association step (PCL provides an option for this; see the sketch below) to require that closest-point correspondences hold in both directions. This is just a variant of classic ICP; for more information you can see my other answer.
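For instance, pcl::IterativeClosestPoint exposes a flag for exactly this kind of reciprocal data association; a minimal sketch, worth double-checking against your PCL version:

// Only pair a source point and a target point if each is the other's nearest neighbour.
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputSource(A);
icp.setInputTarget(B);
icp.setUseReciprocalCorrespondences(true);  // symmetric data association
icp.align(aligned_model_cloud);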
// Compute the normals
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setInputCloud(source_cloud);
normalEstimation.setSearchMethod(tree);
Hello everyone,
I am a beginner learning PCL.
I don't understand the line "normalEstimation.setSearchMethod(tree);".
What does this part mean?
Does it mean there are several search methods we have to choose from?
Sometimes I see code like this:
// Normal estimation
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> n;
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud(cloud_smoothed); [I don't understand this part either]
n.setInputCloud(cloud_smoothed);
n.setSearchMethod(tree);
Thank you guys,
cheers
You can find some information on how normals are computed here: http://pointclouds.org/documentation/tutorials/normal_estimation.php.
Basically, to compute the normal at a specific point, you need to "analyze" its neighbourhood, i.e. the points around it. Usually, you take either the N closest points or all the points within a certain radius around the target point.
Finding these neighbouring points is not trivial if the point cloud is completely unstructured. KD-trees are there to speed up this search: they are an optimized data structure that allows, for example, very fast nearest-neighbour queries. You can find more information on KD-trees here: http://pointclouds.org/documentation/tutorials/kdtree_search.php.
So, the line normalEstimation.setSearchMethod(tree); just provides an empty KD-tree that the normal estimation algorithm will use for its neighbour searches.
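Putting it together, a minimal sketch of the usual pattern might look like this (the 3 cm radius is just an illustrative value):

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// ... fill or load `cloud` here ...

// The KD-tree is only a search structure; NormalEstimation fills it with the
// input cloud internally and uses it to find each point's neighbours.
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setSearchMethod(tree);
normalEstimation.setRadiusSearch(0.03);   // use all neighbours within 3 cm
// Alternative: normalEstimation.setKSearch(10);  // use the 10 nearest neighbours instead

pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
normalEstimation.compute(*normals);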
Problem
I want to find
The first root
The first local minimum/maximum
of a black-box function in a given range.
The function has following properties:
It's continuous and differentiable.
It's a combination of constant and periodic functions. All periods are known.
(It's better if it can be done with weaker assumptions)
What is the fastest way to get the root and the extremum?
Do I need more assumptions or bounds of the function?
What I've tried
I know I can use root-finding algorithm. What I don't know is how to find the first root efficiently.
It needs to be fast enough that it can run within a few milliseconds with a precision of 1.0 over a range of 1.0e+8, which is the problem.
Since the range could be quite large and it should be precise enough, I can't brute-force it by checking all the possible subranges.
I considered the bisection method, but it's too slow for finding the first root if the function has only a single root somewhere in the large range, since every subrange would have to be checked.
It's preferable if the solution is in java, but any similar language is fine.
Background
I want to calculate when an arbitrary celestial object reaches a certain height.
It's a configuration-defined virtual object, so I can't assume anything about the object.
It's not easy to get either an analytical solution or a simple approximation because various coordinates are involved.
I decided to find a numerical solution for this.
For a general black-box function, this can't really be done. No root-finding algorithm on a black-box function can guarantee that it has found all the roots, or any particular root, even if the function is continuous and differentiable.
The property of being periodic gives a bit more hope, but you can still have periodic functions with infinitely many roots in a bounded domain. Given that your function relates to celestial objects, this isn't likely to happen. Assuming your periodic functions are sinusoidal, I believe you can get away with checking subranges on the order of one-quarter of the shortest period (out of all the periodic components).
Maybe try Brent's Method on the shortest quarter period subranges?
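A sketch of that idea, assuming you can bound the shortest period: step through the range in quarter-period chunks and, as soon as a sign change is bracketed, refine it (plain bisection here for brevity; Brent's method would converge faster). The function name f and all helper names are mine, not from any library:

#include <algorithm>
#include <functional>
#include <optional>

// Find the first root of f in [a, b]: scan left to right in quarter-period
// steps, then refine the first bracketed sign change by bisection down to tol.
std::optional<double> firstRoot(const std::function<double(double)>& f,
                                double a, double b,
                                double shortestPeriod, double tol)
{
    const double step = shortestPeriod / 4.0;
    double lo = a, flo = f(lo);
    while (lo < b) {
        double hi = std::min(lo + step, b);
        double fhi = f(hi);
        if (flo == 0.0) return lo;              // exact hit on the scan grid
        if (flo * fhi < 0.0) {                  // sign change => root in (lo, hi)
            while (hi - lo > tol) {             // bisection refinement
                double mid = 0.5 * (lo + hi);
                double fmid = f(mid);
                if (flo * fmid <= 0.0) { hi = mid; }
                else                   { lo = mid; flo = fmid; }
            }
            return 0.5 * (lo + hi);
        }
        lo = hi;
        flo = fhi;
    }
    return std::nullopt;                        // no sign change found in [a, b]
}

Because the scan moves left to right, the first bracketed root is the first root overall (up to roots the quarter-period step may still skip). The same scan applied to a numerical derivative of f gives the first local minimum/maximum.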
Another approach would be to apply your root finding algorithm iteratively. If your range is (a, b), then apply your algorithm to that range to find a root at say c < b. Then apply your algorithm to the range (a, c) to find a root in that range. Continue until no more roots are found. The last root you found is a good candidate for your minimum root.
A black-box function over an arbitrary range? You cannot even be sure it is defined and continuous over that whole range. What kind of solutions are you looking for: natural numbers, integers, real numbers, complex? These are all questions that greatly impact the answer.
So the first thing should be determining what kind of number you accept as the result.
The second is having some kind of protection against limits of the function, where the values blow up towards plus or minus infinity and wreck your calculations.
Since we are on the topic of limits: the function could approach zero and look like it has a root, yet never actually touch 0 and become a solution. Whether you accept this depends on your margin of error, i.e. how close something has to be to zero to be considered good enough.
I think your simplest-to-implement bet for real-number solutions (I assume that's what you want) is to take an interval and apply this divide-and-conquer algorithm:
Take the lower and upper borders and the middle value (or an approximate middle value if the borders have infinitely many decimals).
Try to calculate the function at all 3 points, with some kind of protection against infinities.
Remember all 3 values in an array together with their results (3 pairs of value and result).
Remember the current best value (the one closest to a solution) in a separate variable (a pair of value and result).
STEP FORWARD - repeat the above with the 1st-2nd value range and the 2nd-3rd value range.
Obtain a new value-result pair that is closest to a solution.
Clear the old value-result pairs and replace them with the new ones from this iteration, while remembering the best value-result pair overall.
Repeat the above until you reach the precision you want, and watch the memory explode with each iteration; keep in mind you are going to have exponential growth of values there. It can be improved if you, say, take one interval and go as deep as you want, remember the best value-result pair, then delete all other memory and move on to the next interval and dig deep there.
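A compact sketch of the depth-first variant described at the end (real-valued input and output assumed; the names are mine): split the interval at the midpoint, keep the sample whose function value is closest to zero, and recurse into both halves until the interval is narrower than the tolerance. Memory stays constant, but it still evaluates the whole range down to that tolerance, which is the cost explosion warned about above:

#include <cmath>
#include <functional>
#include <utility>

// Returns the (x, f(x)) pair with the smallest |f(x)| found in [lo, hi].
std::pair<double, double> divideAndConquer(const std::function<double(double)>& f,
                                           double lo, double hi, double tol)
{
    const double mid = 0.5 * (lo + hi);
    std::pair<double, double> best{mid, f(mid)};
    auto consider = [&](std::pair<double, double> cand) {
        if (std::fabs(cand.second) < std::fabs(best.second)) best = cand;
    };
    consider({lo, f(lo)});
    consider({hi, f(hi)});
    if (hi - lo > tol) {
        consider(divideAndConquer(f, lo, mid, tol));  // dig deep into the left half first
        consider(divideAndConquer(f, mid, hi, tol));  // then the right half
    }
    return best;
}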
So let's say I have a matrix with two types of cells, 0 and 1, where 1 is not passable.
I want to find a point from which I can run paths (say, A*) to a bunch of destinations (I don't expect more than 4). And I want the lengths of these paths to satisfy l1/l2/l3/l4 = 1, or as close to 1 as possible.
For two destinations it's simple: run a path between them and take the midpoint. For more destinations, I imagine I can run paths between each pair, which will create a sort of polygon, and I could grab the centroid (or the average of all path point coordinates)? Or would it be better to take all the midpoints of the paths between each pair and then use them as vertices of a polygon which will contain my desired point?
It seems you want to find the point with best access to multiple endpoints. For other readers, this is like trying to found an ideal settlement to trade with nearby cities; you want them to be as accessible as possible. It appears to be a variant of the Weber Problem applied to pathfinding.
The best solution, as you can no longer rely on exploiting geometry (imagine a mountain path or two blocking the way), is going to be an iterative approach. I don't imagine it will be easy to find an optimal solution because you'll need to check every square; you can't guess by pathing between endpoints anymore. In nearly any large problem space, you will need to path from each possible centroid to all endpoints. A suboptimal solution will be fairly fast. I recommend these steps:
Try to estimate the centroid using geometry, forming a search area.
Use a modified A* algorithm from each point S in the search area to all your target points T to generate a perfect path from S to each T.
Add the length of each path S -> T together to get Cost (probably stored in a matrix for all sample points)
Select the lowest Cost from all your samples in the matrix (or the entire population if you didn't cull the search space).
The algorithm above can also work without estimating a centroid and limiting solutions. If you choose to search the entire space, the search will be much longer, but you can find a perfect solution even in a labyrinth. If you estimate the centroid and start the search near it, you'll find good answers faster.
I mentioned earlier that you should use a modified A* algorithm... Rather than repeating a generic A* search S->Tn for every T, code A* so that it seeks multiple target locations, storing the paths to each one and stopping when it has found them all.
If you really want a perfect solution to the problem, you'll be waiting a long time, so I recommend that you use any exploit you can to reduce wasteful calculations. Even go so far as to store found paths in a lookup table for each T, and see if a point already exists along any of those paths.
To put it simply, finding the point is easy. Finding it fast enough might take lots of clever heuristics (cost-saving measures) and stored data.
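As a concrete sketch of the cost computation in steps 2-4: when every grid step has the same cost, instead of running A* from every candidate S you can flood-fill (breadth-first search) outward from each target once and sum the resulting distance fields; the free cell with the smallest sum is the best meeting point. All names here are my own, and the grid is assumed 4-connected:

#include <climits>
#include <queue>
#include <utility>
#include <vector>

// grid[r][c] == 1 means blocked. Exact step distance from every free cell to
// `start` (INT_MAX where unreachable), computed by breadth-first search.
std::vector<std::vector<int>> distanceField(const std::vector<std::vector<int>>& grid,
                                            std::pair<int, int> start)
{
    const int rows = grid.size(), cols = grid[0].size();
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, INT_MAX));
    std::queue<std::pair<int, int>> q;
    dist[start.first][start.second] = 0;
    q.push(start);
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front();
        q.pop();
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (grid[nr][nc] == 1 || dist[nr][nc] != INT_MAX) continue;
            dist[nr][nc] = dist[r][c] + 1;
            q.push({nr, nc});
        }
    }
    return dist;
}

// Free cell with the smallest total distance to all targets.
std::pair<int, int> bestMeetingPoint(const std::vector<std::vector<int>>& grid,
                                     const std::vector<std::pair<int, int>>& targets)
{
    std::vector<std::vector<std::vector<int>>> fields;
    for (const auto& t : targets) fields.push_back(distanceField(grid, t));

    std::pair<int, int> best{-1, -1};
    long long bestCost = LLONG_MAX;
    for (int r = 0; r < (int)grid.size(); ++r)
        for (int c = 0; c < (int)grid[0].size(); ++c) {
            if (grid[r][c] == 1) continue;
            long long cost = 0;
            bool reachable = true;
            for (const auto& f : fields) {
                if (f[r][c] == INT_MAX) { reachable = false; break; }
                cost += f[r][c];
            }
            if (reachable && cost < bestCost) { bestCost = cost; best = {r, c}; }
        }
    return best;  // {-1, -1} if no cell reaches all targets
}

If the goal is really to make the path lengths as equal as possible (the l1/l2/l3/l4 = 1 requirement from the question), swap the summed cost for the spread (max minus min) or the variance of the per-target distances; the rest of the search stays the same.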
I have a logic question; choose whichever of these two explanations you prefer:
Mathematical:
I have an undirected, weighted, complete graph over 2-14 nodes. The nodes always come in pairs (start point and endpoint). For this I already have the minimum spanning tree, which takes into account that each pair's start point always comes before its endpoint. Now I want to add another pair of nodes.
Real life explanation:
I already have an optimal taxi route for 1-7 people. Each person joins (start point) and leaves (endpoint) at a different place. Now I want to find the optimal route when I add another person to the taxi. I already have the calculated subpaths from each point to each point in my database (hence the weighted graph). All calculated path costs are real values, not heuristic estimates.
Now I'm trying to find the most performant way to solve this. My current idea:
Find the point nearest to the new startpoint. Add it a) before and b) after this point. Choose the faster one.
Find the point nearest to the new endpoint. Add it a) before and b) after this point. Choose the faster one.
Ignoring the case where the new endpoint comes before the new start point, this seems feasible.
I expect the taxi's general direction of travel to be one way, which eliminates that edge case.
Is there any case I'm missing in which this algorithm wouldn't calculate the optimal solution?
There are definitely many cases where this algorithm (which is a First Fit construction heuristic) won't find the optimal solution. Given a reasonably sized dataset, in my experience, I would expect improvements of 10-20% from simply taking that result and applying metaheuristics (or other optimization algorithms) on top.
Explanation:
If you have multiple taxis with a limited passenger capacity, the problem has an inherent bin packing aspect, which is NP-complete (and which all known polynomial-time construction heuristics provably solve only suboptimally).
But even if you have just 1 taxi, it is similar to TSP: if you have the optimal solution for 10 locations and add 1 location, it can create a snowball effect that makes the new optimal solution look completely different. (Sorry, no visual illustration of this yet.)
And if you need to add any additional constraints on top of that later on, you need to be aware of these false assumptions.
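To make the gap concrete: a slightly stronger construction heuristic than the nearest-point idea in the question is best insertion, which tries every valid pair of insertion positions for the new start and end point (keeping the start before the end) and keeps the cheapest route. The names below are made up, and cost(a, b) stands for the precomputed travel cost between two stops. Note that it still keeps the existing stops in their original order, so it can miss the truly optimal route for exactly the reasons described above:

#include <functional>
#include <limits>
#include <vector>

// route: current ordered stop ids; cost(a, b): precomputed travel cost a -> b.
std::vector<int> bestInsertion(const std::vector<int>& route,
                               int newStart, int newEnd,
                               const std::function<double(int, int)>& cost)
{
    auto routeCost = [&](const std::vector<int>& r) {
        double total = 0.0;
        for (std::size_t k = 0; k + 1 < r.size(); ++k) total += cost(r[k], r[k + 1]);
        return total;
    };

    std::vector<int> best;
    double bestCost = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i <= route.size(); ++i) {       // insertion slot for the start
        for (std::size_t j = i; j <= route.size(); ++j) {    // slot for the end, never before the start
            std::vector<int> candidate = route;
            candidate.insert(candidate.begin() + i, newStart);
            candidate.insert(candidate.begin() + j + 1, newEnd);  // +1: start already inserted
            double c = routeCost(candidate);
            if (c < bestCost) { bestCost = c; best = candidate; }
        }
    }
    return best;
}

With at most 16 stops this enumeration is trivially cheap, so for a problem of this size it would even be affordable to re-optimize the whole route instead of only inserting the new pair.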
Is there any library or tool that allows for knowing the maximum accumulated error in arithmetic operations?
For example if I make some iterative calculation ...
myVars = initialValues;
while (notEnded) {
myVars = updateMyVars(myVars)
}
... I want to know at the end not only the calculated values, but also the potential error (the range of possible values if the result of each individual operation took the range limits of its operands).
I have already written a Java class called EADouble.java (EA for Error Accounting) which holds and updates the maximum positive and negative errors along with the calculated value, for some basic operations, but I'm afraid I might be reinventing a square wheel.
Any libraries/whatever in Java/whatever? Any suggestions?
Updated on July 11th: Examined existing libraries and added link to sample code.
As commented by fellow users, there is the concept of interval arithmetic, and there was a previous question ( A good uncertainty (interval) arithmetic library? ) on the topic. There are just a couple of small issues regarding my intent:
I care more about the "main" value than about the upper and lower bounds. However, adding that extra value to an open-source library should be straightforward.
Tracking the error as an independent floating-point value might allow for finer accuracy (e.g. for addition, the upper bound would be incremented by just half a ULP instead of a whole ULP).
Libraries I had a look at:
ia_math (Java. Just would have to add the main value. My favourite so far)
Boost/numeric/Interval (C++, Very complex/complete)
ErrorProp (Java, accounts value, and error as standard deviation)
The sample code (TestEADouble.java) runs a ballistic simulation and a calculation of the number e without problems. However, those are not very demanding scenarios.
Probably way too late, but look at BIAS/Profil: http://www.ti3.tuhh.de/keil/profil/index_e.html
It's pretty complete and simple, accounts for machine rounding error, and if your errors are centered it gives easy access to your nominal value (through Mid(...)).
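For anyone who just wants to see the shape of such error accounting without pulling in a library, here is a bare-bones sketch (the class name and the one-ULP padding are my own choices, not taken from any of the libraries above). It tracks a [lo, hi] interval and widens each result outward by one ULP to cover the rounding of that operation, which is coarser than the half-ULP bookkeeping mentioned in the question but simple:

#include <cmath>
#include <cstdio>
#include <limits>

// Minimal interval type: every operation returns exact-arithmetic bounds
// padded outward by one ULP to account for floating-point rounding.
struct ErrInterval {
    double lo, hi;

    ErrInterval(double l, double h) : lo(l), hi(h) {}

    static ErrInterval pad(double l, double h) {
        return {std::nextafter(l, -std::numeric_limits<double>::infinity()),
                std::nextafter(h,  std::numeric_limits<double>::infinity())};
    }
    ErrInterval operator+(const ErrInterval& o) const { return pad(lo + o.lo, hi + o.hi); }
    ErrInterval operator-(const ErrInterval& o) const { return pad(lo - o.hi, hi - o.lo); }
    ErrInterval operator*(const ErrInterval& o) const {
        const double c[] = {lo * o.lo, lo * o.hi, hi * o.lo, hi * o.hi};
        double l = c[0], h = c[0];
        for (double v : c) { if (v < l) l = v; if (v > h) h = v; }
        return pad(l, h);
    }
    double mid() const      { return 0.5 * (hi + lo); }  // the "main" value
    double maxError() const { return 0.5 * (hi - lo); }  // half the interval width
};

int main() {
    // pad() also covers the rounding of the literal 1/3 into a double.
    ErrInterval x = ErrInterval::pad(1.0 / 3.0, 1.0 / 3.0);
    ErrInterval y = x * x + x;   // bounds widen with every operation
    std::printf("value ~ %.17g, +/- %.3g\n", y.mid(), y.maxError());
}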