PCL: PFH with ISS keypoints

I am currently trying to compute PFH descriptors for ISS keypoints. I perform the following steps:
(1) Detect keypoints with pcl::ISSKeypoint3D
(2) Estimate normals of the new keypoint cloud from (1) with pcl::NormalEstimation
(3) Estimate PFH for the keypoints and the normals from (2) with pcl::PFHEstimation
To my understanding, the PFH estimation takes k neighbors into account and therefore has a complexity of O(n*k^2), where n denotes the number of keypoints. However, in (3) I only pass a cloud consisting of the keypoints themselves to the estimator.
So my question is: how can I retrieve the k neighbors for each ISS keypoint?

You have to pass the original normals as input. So there are three things that should be set:
setInputCloud(filtered cloud)         // the ISS keypoints
setInputNormals(non-filtered normals) // normals computed from the original vertices
setSearchSurface(non-filtered cloud)  // the original vertices
You cannot use the normals of the keypoints together with the original vertices as the search surface; PCL will give you an error. The following cases will produce an error:
1.
setInputCloud(filtered cloud)         // the ISS keypoints
setInputNormals(filtered normals)     // normals recomputed from the keypoint (filtered) cloud
setSearchSurface(non-filtered cloud)  // the original vertices
2.
setInputCloud(filtered cloud)         // the ISS keypoints
setSearchSurface(non-filtered cloud)  // the original vertices
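
To make the correct configuration concrete, here is a minimal sketch of the whole pipeline (my own illustration, not the original poster's code; the file name and all radius values are placeholders that need tuning for your data):

```cpp
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/search/kdtree.h>
#include <pcl/keypoints/iss_3d.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/pfh.h>

int main() {
    // "scene.pcd" and all radii below are placeholders; tune them for your data.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("scene.pcd", *cloud);

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

    // (1) ISS keypoints detected on the full cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr keypoints(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ISSKeypoint3D<pcl::PointXYZ, pcl::PointXYZ> iss;
    iss.setInputCloud(cloud);
    iss.setSearchMethod(tree);
    iss.setSalientRadius(0.05);
    iss.setNonMaxRadius(0.02);
    iss.compute(*keypoints);

    // (2) Normals estimated on the ORIGINAL cloud, not on the keypoint cloud
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.03);
    ne.compute(*normals);

    // (3) PFH: keypoints as input, original cloud as search surface,
    //     original normals as input normals (surface and normals sizes match)
    pcl::PointCloud<pcl::PFHSignature125>::Ptr descriptors(new pcl::PointCloud<pcl::PFHSignature125>);
    pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
    pfh.setInputCloud(keypoints);
    pfh.setInputNormals(normals);
    pfh.setSearchSurface(cloud);
    pfh.setSearchMethod(tree);
    pfh.setRadiusSearch(0.05);   // should be larger than the normal-estimation radius
    pfh.compute(*descriptors);   // one 125-bin histogram per keypoint
    return 0;
}
```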

Related

Explanation of network indices normalization

Could someone explain in pretty simple words why the normalization of many network analysis indices (graph theory) is n(n - 1), where n is the graph size? Why do we need to take (n - 1) into account?
Most network measures that focus on counting edges (e.g. the clustering coefficient) are normalized by the total number of possible edges. Since every edge connects a pair of vertices, we need to know how many possible pairs of vertices we can make. There are n possible vertices we could choose as the source of our edge, and therefore n - 1 possible vertices that could be the target of our edge (assuming no self-loops; if the graph is undirected, divide by 2 because source and target are interchangeable). Hence, you frequently encounter $n(n-1)$ or $\binom{n}{2}$.
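
As a quick worked example (my own, not part of the original answer), take a graph with n = 4 vertices and no self-loops:

```latex
% Directed:   n(n-1) ordered (source, target) pairs.
% Undirected: each pair is counted twice, so divide by 2.
\[
  n(n-1) = 4 \cdot 3 = 12, \qquad
  \binom{n}{2} = \frac{n(n-1)}{2} = \frac{12}{2} = 6 .
\]
```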

Shortest Path Function (Dijkstra's Algorithm)

I have a data frame composed of a latitude, longitude, node ID, from NodeID, to Node_ID, and length. The from and to node columns are my edges. I can only travel on my edges when trying to find the shortest path. I want to be able to go from one node to another while minimizing my total length traveled. The output should return every node I have to travel through to get to my destination. I have tried several built-in packages like cppRouting and igraph, but I cannot get anything to work correctly. Any ideas on how to either create a function or use any existing functions to accomplish this? Thank you.
Below are the detailed steps used in Dijkstra’s algorithm to find the shortest path from a single source vertex to all other vertices in the given graph.
Algorithm:
1) Create a set sptSet (shortest-path-tree set) that keeps track of the vertices included in the shortest path tree, i.e., those whose minimum distance from the source has been calculated and finalized. Initially, this set is empty.
2) Assign a distance value to every vertex in the input graph. Initialize all distance values as INFINITE. Assign a distance value of 0 to the source vertex so that it is picked first.
3) While sptSet does not include all vertices:
   a) Pick a vertex u that is not in sptSet and has the minimum distance value.
   b) Include u in sptSet.
   c) Update the distance values of all vertices adjacent to u: for every adjacent vertex v, if the sum of the distance value of u (from the source) and the weight of edge u-v is less than the distance value of v, update the distance value of v.
Go through the following link: Printing Paths in Dijkstra’s Shortest Path Algorithm
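
As a concrete illustration of these steps, here is a minimal C++ sketch (my own example rather than the asker's R data frame; the four-node graph and its lengths are made up) that runs Dijkstra with a priority queue and follows a predecessor array to return every node on the path:

```cpp
#include <cstdio>
#include <queue>
#include <vector>
#include <limits>
#include <algorithm>
#include <functional>

// Adjacency list of (neighbor, weight) pairs; returns the node sequence
// from src to dst (empty if dst is unreachable).
std::vector<int> dijkstraPath(const std::vector<std::vector<std::pair<int, double>>>& adj,
                              int src, int dst) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), INF);
    std::vector<int> prev(adj.size(), -1);
    using State = std::pair<double, int>;              // (distance, node)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[src] = 0.0;
    pq.push({0.0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;                      // stale queue entry
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {                // relax edge u-v
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }
    std::vector<int> path;
    if (dist[dst] == INF) return path;                  // unreachable
    for (int v = dst; v != -1; v = prev[v]) path.push_back(v);
    std::reverse(path.begin(), path.end());
    return path;
}

int main() {
    // Hypothetical 4-node graph; edges stand in for the from/to/length columns.
    std::vector<std::vector<std::pair<int, double>>> adj(4);
    auto addEdge = [&](int a, int b, double w) {        // undirected edge
        adj[a].push_back({b, w});
        adj[b].push_back({a, w});
    };
    addEdge(0, 1, 2.0); addEdge(1, 2, 4.0); addEdge(0, 2, 7.0); addEdge(2, 3, 1.0);
    for (int v : dijkstraPath(adj, 0, 3)) std::printf("%d ", v);  // prints: 0 1 2 3
    std::printf("\n");
    return 0;
}
```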

Single source shortest path using BFS for an undirected weighted graph

I was trying to come up with a solution for finding single-source shortest paths in an undirected weighted graph using BFS.
My idea is to convert every edge of weight x into a chain of x edges of weight 1 (inserting new intermediate vertices) and then run BFS. I would get a new BFS tree, and since it is a tree, there exists only one path from the root node to every other vertex.
The problem I am having is with the analysis of this algorithm. Every edge needs to be visited once and then split into the corresponding number of edges according to its weight. Then we need to run BFS on the new graph.
The cost of visiting every edge is O(m), where m is the number of edges, since every edge is visited once to split it. Suppose the new graph has km edges (call it m').
The time complexity of BFS is O(n + m') = O(n + km) = O(n + m), i.e., the time complexity remains unchanged.
Is the given proof correct?
I'm aware that I could use Dijkstra's algorithm here, but I'm specifically interested in analyzing this BFS-based algorithm.
The analysis you have included here is close but not correct. If you assume that every edge's cost is at most k, then your new graph will have O(kn) nodes (there are extra nodes added per edge) and O(km) edges, so the runtime would be O(kn + km). However, you can't assume that k is a constant here. After all, if I increase the weight on the edges, I will indeed increase the amount of time that your code takes to run. So overall, you could give a runtime of O(kn + km).
Note that k here is a separate parameter to the runtime, the same way that m and n are. And that makes sense - larger weights give you larger runtimes.
(As a note, this is not considered a polynomial-time algorithm. Rather, it's a pseudopolynomial-time algorithm because the number of bits required to write out the weight k is O(log k).)
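
To make the construction being analyzed concrete, here is a small sketch (my own toy graph with made-up integer weights): each weighted edge is expanded into a chain of unit edges through dummy vertices, after which a plain BFS yields the weighted shortest distances.

```cpp
#include <cstdio>
#include <queue>
#include <vector>
#include <array>

int main() {
    int n = 3;                                         // original vertices 0..2
    // (u, v, w) edges with small integer weights (toy example)
    std::vector<std::array<int, 3>> edges = {{0, 1, 3}, {1, 2, 2}, {0, 2, 6}};

    // Expand each edge of weight w into a chain of w unit edges through
    // w-1 dummy vertices; node count grows to O(n + km), edge count to O(km).
    std::vector<std::vector<int>> adj(n);
    auto newNode = [&]() { adj.push_back({}); return (int)adj.size() - 1; };
    auto link = [&](int a, int b) { adj[a].push_back(b); adj[b].push_back(a); };
    for (auto [u, v, w] : edges) {
        int prev = u;
        for (int i = 1; i < w; ++i) {                  // insert w-1 dummy nodes
            int d = newNode();
            link(prev, d);
            prev = d;
        }
        link(prev, v);
    }

    // Standard BFS from node 0 over the expanded unit-weight graph.
    std::vector<int> dist(adj.size(), -1);
    std::queue<int> q;
    dist[0] = 0; q.push(0);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
    }
    for (int v = 0; v < n; ++v)
        std::printf("dist(0, %d) = %d\n", v, dist[v]); // prints 0, 3, 5
    return 0;
}
```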

How can I select the optimal radius value in order to obtain the best normal estimation results

I'm running a model-scene match between a set of point clouds in order to test the matching results.
The match is based on 3D features such as normals and point feature histogram.
I'm using the normal estimation of the Point Cloud Library (PCL) to compute the histogram after resampling the point clouds of both model and scene.
My question is: how can I test the accuracy of different radius values in the nearest-neighbor estimation step?
I need to use those values for normal estimation, resampling, and the histogram on objects such as a cup/knife/hammer etc.
I tried to visualize those objects with the PCL visualizer using different radius values and to choose the one that gives correct normals (in terms of how perpendicular the normal orientations are to the surfaces).
But I think this visual testing is not enough, and I would like to know if there are empirical ways to estimate the optimal radius value.
I would appreciate any suggestion or help; please share your thoughts :)
Thank you.
I think you should start from a ground-truth test: create a point cloud from a mesh using the mesh normals (with CloudCompare, for example), then load it twice: once with full data (including normals) and once without normals.
Rebuild the normals using the search radius to be tested; then you can directly compare the obtained normals with the ones extracted from the mesh.
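
Here is a minimal sketch of that ground-truth comparison (my own illustration; the file name gt.pcd, the candidate radii, and the mean-angular-error metric are assumptions, not part of the original answer):

```cpp
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/common/io.h>
#include <pcl/features/normal_3d.h>
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    // "gt.pcd" is assumed to hold points plus the mesh normals exported
    // from CloudCompare; it serves as the ground truth.
    pcl::PointCloud<pcl::PointNormal>::Ptr gt(new pcl::PointCloud<pcl::PointNormal>);
    pcl::io::loadPCDFile("gt.pcd", *gt);

    pcl::PointCloud<pcl::PointXYZ>::Ptr points(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::copyPointCloud(*gt, *points);                 // same points, normals dropped

    for (double radius : {0.01, 0.02, 0.05}) {         // candidate search radii
        pcl::PointCloud<pcl::Normal>::Ptr est(new pcl::PointCloud<pcl::Normal>);
        pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
        ne.setInputCloud(points);
        ne.setRadiusSearch(radius);
        ne.compute(*est);

        double sum = 0.0;
        std::size_t count = 0;
        for (std::size_t i = 0; i < gt->size(); ++i) {
            const auto& g = gt->points[i];
            const auto& e = est->points[i];
            // Sign-agnostic angle between estimated and ground-truth normal.
            double dot = std::fabs(g.normal_x * e.normal_x +
                                   g.normal_y * e.normal_y +
                                   g.normal_z * e.normal_z);
            if (!std::isfinite(dot)) continue;          // estimation failed (radius too small)
            sum += std::acos(std::min(1.0, dot));
            ++count;
        }
        std::printf("radius %.3f -> mean angular error %.4f rad over %zu points\n",
                    radius, count ? sum / count : 0.0, count);
    }
    return 0;
}
```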

Algorithm to modify the weights of the edges of a graph, given a shortest path

Given a graph with edges having positive weights, a pair of nodes, and a path between the nodes, what's the best algorithm that will tell me how to modify the edge weights of the graph to the minimum extent possible such that the specified path becomes the shortest path between the nodes (as computed by A*)? (Of course, had I specified the shortest path as input, the output would be "make no changes").
Note: Minimum extent refers to the total changes made to edge weights. For example, the other extreme (the most disruptive change) would be to change the weights of all edges not along the specified path to infinity and those along the path to zero.
You could use the Floyd-Warshall algorithm to compute the distances for all the paths, and then modify the desired path so that it becomes the shortest path. For example, imagine a graph of 3 nodes a, b, c.
Let the path be a -> b -> c. The Floyd-Warshall algorithm computes the full distance matrix, which contains the distances of a -> b (2) and b -> c (4) as well as the shortest distance between a and c (3). Since 2 + 4 = 6 ≠ 3, you know that the path's cost must be reduced by 3 to become the minimum path.
The reason I suggest this approach, as opposed to just calculating the distance of the shortest path and adjusting the desired path accordingly, is that this method lets you see the distances between any two nodes, so you can adjust the weights of the edges as you desire.
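
For reference, a minimal Floyd-Warshall sketch (the three-node graph with weights a-b = 2, b-c = 4 and a-c = 3 is my reconstruction of the example above, assumed undirected):

```cpp
#include <cstdio>
#include <vector>

int main() {
    const double INF = 1e18;
    // Reconstructed example: nodes a=0, b=1, c=2 with edge weights
    // a-b = 2, b-c = 4, a-c = 3 (assumed undirected).
    int n = 3;
    std::vector<std::vector<double>> dist(n, std::vector<double>(n, INF));
    for (int i = 0; i < n; ++i) dist[i][i] = 0;
    auto edge = [&](int u, int v, double w) { dist[u][v] = dist[v][u] = w; };
    edge(0, 1, 2); edge(1, 2, 4); edge(0, 2, 3);

    // Floyd-Warshall: allow intermediate vertex k on every pair (i, j).
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];

    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) std::printf("%4.0f", dist[i][j]);
        std::printf("\n");
    }
    // The desired path a -> b -> c costs 2 + 4 = 6, while dist[a][c] = 3,
    // so its edge weights must be reduced by 3 in total to tie the shortest path.
    return 0;
}
```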
This reminds me vaguely of a back-propagation strategy as is often found in neural network training. I'll sketch two strategies, the first of which is going to be flawed:
Compute the cost of your candidate path P, which we will call c(P).
Compute the cost of the shortest path S, which we will call c(S).
Reduce every edge weight w(p) ∈ P by (c(P) - c(S) - epsilon) / |P|, where epsilon is some vanishingly small constant by which you would like your path to be less than c(S), and |P| is the number of edges in P.
Of course, the problem with this is that you might well reduce the cost of path S (or some other path) by more than you reduce the cost of P! This suggests to me that this is going to require an iterative approach, whereby you start forwards and reduce the cost of a given weight relative to the shortest path cost which you recompute at each step. This is hugely more expensive, but thankfully shortest path algorithms tend to have nice dynamic programming solutions!
So the modified algorithm looks something like this (assume i = 0 to start):
Compute the cost of the first i steps of your candidate path P, which we will call c(p_0...p_i).
Compute the cost of the shortest path S, which we will call c(S), and the cost of its first i components, which we will denote by c(s_0...s_i).
Reduce edge weight w(p_n) by c(p_0...p_i) - c(s_0...s_i) - epsilon, where epsilon is some vanishingly small constant by which you would like your path to be less than c(S).
Repeat from step 1, increasing i by 1.
Where P = S to begin with, if epsilon is 0, you should leave the original path untouched. Otherwise you should reduce by no more than epsilon * |P| beyond the ideal update.
Optimizing this algorithm will require that you figure out how to compute c(s_0...s_i+1) from c(s_0...s_i) in an efficient manner, but that is left as an exercise to the reader ;-)
