How do you actually get the (a, b, c, d) plane model after running RANSAC in PCL?
PCL: Get the plane model from pcl::SampleConsensusModelPlane / pcl::RandomSampleConsensus
The examples only demonstrate how to extract the list of inliers. I spent some time reading header files and the Doxygen reference, and couldn't figure it out. Intuitively I would have expected RandomSampleConsensus to return a SampleConsensusModelPlane containing the plane parameters, but that class doesn't seem to contain any obvious data members, let alone an accessor for getting at them without unnecessarily recomputing anything.
I feel like I'm missing something really obvious.
ugh. I found it.
pcl::RandomSampleConsensus inherits getModelCoefficients from pcl::SampleConsensus.
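For anyone else who lands here, a minimal sketch (untested; it assumes `cloud` is an already populated pcl::PointCloud<pcl::PointXYZ>::Ptr):

#include <pcl/point_types.h>
#include <pcl/sample_consensus/ransac.h>
#include <pcl/sample_consensus/sac_model_plane.h>

// Fit a plane model to the cloud with RANSAC.
pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr model(
    new pcl::SampleConsensusModelPlane<pcl::PointXYZ>(cloud));
pcl::RandomSampleConsensus<pcl::PointXYZ> ransac(model);
ransac.setDistanceThreshold(0.01);
ransac.computeModel();

// Inherited from pcl::SampleConsensus: fills a 4-vector (a, b, c, d)
// describing the plane ax + by + cz + d = 0.
Eigen::VectorXf coefficients;
ransac.getModelCoefficients(coefficients);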
In traditional simplex algorithm notation, the solution at the current basis selection B is

x_B = A_B^{-1} b - A_B^{-1} A_N x_N

How can I compute the A_B^{-1} A_N term inside a separator in SCIP, or at least iterate over its columns?
I see three helpful methods: getLPColsData, getLPRowsData, getLPBasisInd. I'm just not sure exactly what data those methods represent, particularly the last one, with its negative row indexes. How do I use those to get the value I want?
Do those methods return the same data no matter what LP algorithm is used? Or do I need to account for dual vs primal? How does the use of the "revised" algorithm play into my calculation?
Update: I discovered getLPBInvARow and getLPBInvRow. Those seem much closer to what I'm after, but I don't yet understand their results; they seem to include more or fewer dimensions than I expected. I'm still trying to work out how to use them to get the rays away from the corner.
You are correct that getLPBInvRow and getLPBInvARow are the methods you want. getLPBInvARow directly returns a row of the simplex tableau, but it is no more efficient than calling getLPBInvRow and doing the multiplication yourself, since the LP solver has to compute the actual tableau row first in either case.
I suggest you look into either sepa_gomory.c or sepa_gmi.c for examples of how to use these methods. In what way do they include fewer dimensions than expected? They both return sparse vectors.
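As a rough sketch of the pattern from sepa_gomory.c (untested and simplified; it assumes you are inside a separator callback where `scip` has a solved LP, and you should check the SCIP headers for the authoritative signatures):

int nrows = SCIPgetNLPRows(scip);
int ncols = SCIPgetNLPCols(scip);
SCIP_Real* binvrow;   /* dense row of B^-1, length nrows */
SCIP_Real* binvarow;  /* dense row of B^-1 A, length ncols */
int* basisind;        /* which variable is basic in each tableau row */
int* inds;            /* nonzero positions, for the sparse variants */
int ninds;
int i;

SCIP_CALL( SCIPallocBufferArray(scip, &basisind, nrows) );
SCIP_CALL( SCIPallocBufferArray(scip, &binvrow, nrows) );
SCIP_CALL( SCIPallocBufferArray(scip, &binvarow, ncols) );
SCIP_CALL( SCIPallocBufferArray(scip, &inds, nrows) );

/* basisind[i] >= 0 means column basisind[i] is basic in row i;
 * basisind[i] < 0 means the slack of row -basisind[i]-1 is basic there,
 * which is where the negative row indexes come from. */
SCIP_CALL( SCIPgetLPBasisInd(scip, basisind) );

for( i = 0; i < nrows; ++i )
{
   /* row i of B^-1 (inds/ninds receive the nonzero pattern) */
   SCIP_CALL( SCIPgetLPBInvRow(scip, i, binvrow, inds, &ninds) );
   /* row i of B^-1 A; passing binvrow in avoids recomputing it */
   SCIP_CALL( SCIPgetLPBInvARow(scip, i, binvrow, binvarow, inds, &ninds) );
}

SCIPfreeBufferArray(scip, &inds);
SCIPfreeBufferArray(scip, &binvarow);
SCIPfreeBufferArray(scip, &binvrow);
SCIPfreeBufferArray(scip, &basisind);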
// Compute the normals
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setInputCloud(source_cloud);
normalEstimation.setSearchMethod(tree);
Hello everyone,
I am a beginner learning PCL.
I don't understand the line normalEstimation.setSearchMethod(tree);
What does this part mean?
Does it mean there are several search methods we have to choose from?
Sometimes I see the code is like this
// Normal estimation
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> n;
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud(cloud_smoothed); [I don't understand this part either]
n.setInputCloud(cloud_smoothed);
n.setSearchMethod(tree);
Thank you guys,
cheers
You can find some information on how normals are computed here: http://pointclouds.org/documentation/tutorials/normal_estimation.php.
Basically, to compute a normal at a specific point you need to "analyze" its neighbourhood, i.e. the points around it. Usually you take either the N closest points, or all the points within a certain radius of the target point.
Finding those neighbour points is not trivial if the point cloud is completely unstructured. KD-trees exist to speed up this search: they are an optimized data structure that allows very fast nearest-neighbour lookups, among other things. You can find more information on KD-trees here: http://pointclouds.org/documentation/tutorials/kdtree_search.php.
So the line normalEstimation.setSearchMethod(tree); just hands the normal estimation algorithm an (initially empty) KD-tree to use for those neighbour searches.
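To make that concrete, here is a minimal sketch of the whole pipeline (untested; it assumes `cloud` is a populated pcl::PointCloud<pcl::PointXYZ>::Ptr, and the choice between setKSearch and setRadiusSearch is the N-closest-points vs. radius trade-off described above):

#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

normalEstimation.setInputCloud(cloud);
normalEstimation.setSearchMethod(tree); // tree is filled from the input cloud when compute() runs

// Pick ONE neighbourhood definition:
normalEstimation.setKSearch(10);           // the 10 nearest neighbours ...
// normalEstimation.setRadiusSearch(0.03); // ... or everything within this radius

pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
normalEstimation.compute(*normals);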
Here is a piece of R code that writes to each element of a matrix in a reference class. It runs incredibly slowly, and I’m wondering if I’ve missed a simple trick that will speed this up.
nx = 2000
ny = 10
ref_matrix <- setRefClass(
  "ref_matrix",
  fields = list(data = "matrix")
)
out <- ref_matrix(data = matrix(0.0, nx, ny))
#tracemem(out$data)
for (iy in 1:ny) {
  for (ix in 1:nx) {
    out$data[ix, iy] <- ix + iy
  }
}
It seems that each write to an element of the matrix triggers a check that involves a copy of the entire matrix. (Uncommenting the tracemem() call shows this.) Now, I've found a discussion that seems to confirm this:
https://r-devel.r-project.narkive.com/8KtYICjV/rd-copy-on-assignment-to-large-field-of-reference-class
and this also seems to be covered by Speeding up field access in R reference classes
but in both of those threads the behaviour can be bypassed by not declaring a class for the field. That works for the example in the first link, which uses a 1D vector b that can simply be set as b <<- 1:10000, but I've not found an equivalent way of creating a 2D array without using an explicit "matrix" instance.
Am I just missing something simple, or is this actually not possible?
Let me add a couple of things. First, I’m very new to R, so could easily have missed something. Second, I’m really just curious about the way reference classes work in this case and whether there’s a simple way to use them efficiently; I’m not looking for a really fast way to set the elements of a matrix - I can do that by not having the matrix in a reference class at all, and if I really care about speed I can write a C routine to do it and call it from R.
Here’s some background that might explain why I’m interested in this, which you’re welcome to ignore.
I got here by wanting to see how different languages, and even different compiler options and different ways of coding the same operation, compared for efficiency when accessing 2D rectangular arrays. I’ve been playing with a test program that creates two 2D arrays of the same size, and calls a subroutine that sets the first to the elements of the second plus their index values. (Almost any operation would do, but this one isn’t completely trivial to optimise.) I have this in a number of languages now, C, C++, Julia, Tcl, Fortran, Swift, etc., even hand-coded assembler (spoiler alert: assembler isn’t worth the effort any more) and thought I’d try R. The obvious implementation in R passes the two arrays to a subroutine that does the work, but because R doesn’t normally pass by reference, that routine has to make a copy of the modified array and return that as the function value. I thought using a reference class would avoid the relatively minor overhead of that copy, so I tried that and was surprised to discover that, far from speeding things up, it slowed them down enormously.
Use outer, which builds the whole matrix in one call and assigns the field once (one copy instead of one per element):
out$data <- outer(1:nx, 1:ny, `+`)
Also, don't use reference classes (or R6 classes) unless you actually need reference semantics. KISS and all that.
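If you do want to keep the reference class, here is a sketch of the usual workaround (fill() is a hypothetical helper; it assumes the ref_matrix class and nx/ny from the question): fill an ordinary local matrix, then assign the field once, so the class machinery copies and checks the data a single time instead of once per element.

fill <- function(obj, nx, ny) {
  m <- matrix(0.0, nx, ny)   # plain local matrix: element writes are cheap
  for (iy in 1:ny) {
    for (ix in 1:nx) {
      m[ix, iy] <- ix + iy
    }
  }
  obj$data <- m              # one field assignment, hence one copy/check
}
fill(out, nx, ny)
stopifnot(all(out$data == outer(1:nx, 1:ny, `+`)))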
Being unable to reproduce a given result (either because it's wrong or because I was doing something wrong), I was asking myself whether it would be easy to write a small program which takes all the constants and the given number and combines them with the possible operators (*, /, -, +, exp(...), etc.) until the result is found.
The number of arrangements of r items drawn from n distinct objects with repetition allowed is n^r, so at least as long as r is small I think this should be feasible. I wonder if anybody has done something similar here.
Yes, it has been done here: Code Golf: All +-*/ Combinations for 3 integers
However, the fact that a formula gives the desired result doesn't guarantee that it's the correct formula. Also, you don't learn anything by just guessing your way to the desired result.
If you're trying to fit some data with a function whose form is uncertain, you can try using Eureqa.
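For what it's worth, a minimal sketch of the brute-force idea (the constants and target below are made-up examples, and it only tries one left-to-right ordering of the constants):

from itertools import product
import math

consts = [2.0, 3.0, 5.0]   # the given constants (example values)
target = 11.0              # the result we are trying to reproduce

ops = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b if b != 0 else math.nan,
}

# len(ops) ** (len(consts) - 1) arrangements: the n^r count from the question.
for combo in product(ops, repeat=len(consts) - 1):
    value = consts[0]
    for op, c in zip(combo, consts[1:]):
        value = ops[op](value, c)   # evaluated left to right, ignoring precedence
    if math.isclose(value, target):
        expr = str(consts[0])
        for op, c in zip(combo, consts[1:]):
            expr += f' {op} {c}'
        print(f'{expr} = {target}  (left-to-right)')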
I earlier asked a question about arrays in Scheme (it turns out they're called vectors, but are otherwise basically what you'd expect).
Is there an easy way to do multidimensional vectors in PLT Scheme, though? For my purposes I'd like to have a procedure called make-multid-vector or something.
By the way, if this doesn't already exist, I don't need a full code example of how to implement it. If I have to roll this myself I'd appreciate some general direction, though. The way I'd probably do it is to iterate through each element of the currently highest dimension of the vector to add another dimension, but I can see that being a bit ugly given Scheme's recursive setup.
Also, this seems like something I should have been able to find myself, so please know that I did actually google it and nothing came up.
The two common approaches are the same as in many languages: either use a vector of vectors, or (more efficiently) use a single vector of size X*Y and compute the location of each reference yourself. But there is a library that does that for you -- look in the docs for srfi/25, which you can get with (require srfi/25).
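A rough sketch of both options (untested; the vector-2d-* helper names are made up, and you should see the srfi/25 docs for the full array API):

(require srfi/25)

;; 1. Roll your own: one flat vector of x*y cells, index computed by hand.
(define (make-vector-2d x y) (make-vector (* x y) 0))
(define (vector-2d-ref v x-size x y) (vector-ref v (+ x (* y x-size))))
(define (vector-2d-set! v x-size x y val) (vector-set! v (+ x (* y x-size)) val))

;; 2. srfi/25 arrays: shape takes a lower/upper bound pair per dimension.
(define a (make-array (shape 0 3 0 4) 0)) ; a 3x4 array of zeros
(array-set! a 1 2 'hello)
(array-ref a 1 2) ; => hello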