PCL RANSAC model fitting: How can I initialise the model parameters? - point-cloud-library

I'm reading the PCL tutorial on plane segmentation, because I want to find 3D circles in a very large and dense point cloud I have.
I already know the approximate center, radius and orientation of the circle, but so far I have found no way to pass this information to the SACSegmentation object. I could also name three inliers from which to compute initial values, but I can't find a way to do that either.
My point cloud is extremely large (10-20M points), so purely random sampling will likely be prohibitively slow, especially since I already know roughly what the parameter values should be and only want to refine them.
Question: How can I set the starting point of the Sample Consensus optimization procedure?

To segment and optimize the model:
Set SACSegmentation::setOptimizeCoefficients(true).
Use SACSegmentation::segment, which takes an initial guess (or, if optimizeCoefficients is set to false, the final model to segment with).
You can provide your guess there. Depending on the optimization method used, this can reduce the computational load.

Related

Min Max component, objective function

I would like to perform some optimizations within Dymos by minimizing the maximum of a specific path variable, or the maximum of the absolute value of such a variable.
In linear programming methods, this can be done by introducing slack variables.
Do you know if this has been attempted before with Dymos, or if there was a reason not to include it?
I understand gradient-based methods are not entirely suitable for these problems, though I think some "functions" can be introduced to mitigate this.
For example,
For the space shuttle reentry problem from [Betts][1], used as a [test example][2] in Dymos, the original source contains a variant in which the maximum heat flux is minimized. Such functionality could be implemented with a "loc" argument as:
phase.add_objective('q_c', loc='max')
[1]: J. Betts. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming. Society for Industrial and Applied Mathematics, second edition, 2010. URL: https://epubs.siam.org/doi/abs/10.1137/1.9780898718577, doi:10.1137/1.9780898718577.
[2]: https://openmdao.github.io/dymos/examples/reentry/reentry.html
This has been done with pseudospectral methods before. Dymos currently doesn't have any direct way of implementing this, for a few reasons:
As you said, doing this naively can introduce discontinuous gradients that confuse the optimizer. When the node at which the maximum occurs switches, this tends to cause a sharp edge discontinuity in the gradient.
Since the pseudospectral methods are discrete, you cannot guarantee that the maximum will occur at a node. It's often fine to assume it does, but sometimes your requirements might demand more precision.
There are two possible ways to get around this.
The KSComp in OpenMDAO can be used as a "differentiable maximum". Add one after the trajectory, feed it the timeseries data for the output of interest, and set it up such that it returns a smooth approximation to the maximum. The KS function is a bit conservative, so it won't pick out the precise maximum, but depending on the value of the rho option it can be tuned to get pretty close.
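A rough OpenMDAO sketch of that wiring (a minimal sketch only; the timeseries path, the node count, and the connection details are placeholder assumptions, not taken from the answer):

import openmdao.api as om

num_nodes = 30                       # assumed number of timeseries nodes
p = om.Problem()

# KSComp gives a smooth, conservative over-estimate of max(g); smaller rho is
# smoother, larger rho tracks the true maximum more closely.
p.model.add_subsystem('ks_max', om.KSComp(width=num_nodes, rho=50.0))

# Feed the timeseries of interest into the KS input, e.g.
# p.model.connect('traj.phase0.timeseries.q_c', 'ks_max.g', src_indices=om.slicer[:, 0])

# Minimizing the KS output then approximately minimizes the maximum of q_c.
p.model.add_objective('ks_max.KS', index=0)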
When a more precise value of a maximum is needed, it's pretty common to set up a trajectory such that a phase ends when the maximum or minimum is reached.
If the variable whose maximum is being sought is a state, this can be done by adding a boundary constraint on the rate source for that state.
This ensures that the maximum occurs at the first or last node in the phase (depending on whether it's an initial or final boundary constraint). That lets you more accurately capture its value.
If the variable being sought is not a state, it's possible to use the polynomials that are used for fitting states and controls in a phase to interpolate the variable of interest. By then taking the time derivative of that polynomial we can get a reasonably good approximation of its rate. The master branch of dymos has a method add_timeseries_rate_output that does this. And soon, within a few weeks hopefully, we'll add add_boundary_rate_constraint so that these interpolated rates can be easily used as boundary constraints.
In the meantime, you should be able to achieve this by adding the timeseries rate output and then manually applying the OpenMDAO method 'add_constraint' to the resulting timeseries output, using either indices=[0] or indices=[-1] to treat it as an initial or final constraint.
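A rough sketch of that interim workaround (the variable name q_c, the resulting timeseries path, and the exact call signatures are my assumptions, not documented Dymos API):

# Interpolated time derivative of q_c, added to the phase timeseries.
phase.add_timeseries_rate_output('q_c')

# Pin that rate to zero at the last node so the maximum of q_c lands on the
# final node, using plain OpenMDAO add_constraint with indices=[-1]
# (use indices=[0] for an initial-boundary version).
phase.add_constraint('timeseries.q_c_rate', equals=0.0, indices=[-1])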
This is a common enough request that we'll add some documentation on how to achieve this behavior using both the KSComp approach and the boundary constraint approach.
Personally I'm not as much of a fan of KSComp, because I've had trouble getting those types of objectives to converge in the past. I've used the slack-variable approach and that has worked well. In the following example, we take a guess at the rotor power in the static analysis, and then we run a trajectory and get the actual rotor power during the mission. The objective was to minimize aircraft weight, so if you have a large amount of power in statics, that costs more weight. The constraint shown below prevents us from decreasing our updated guess of rotor power in statics below the maximum power required during the trajectory.
# Power_ODE: rotor power along the trajectory timeseries
# Power_statics: the static (slack) rotor power guess
p.model.add_subsystem(
    'static_power_check',
    om.ExecComp('Power_check = Power_ODE - Power_statics',
                Power_check={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_ODE={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_statics={'value': 0.0, 'units': 'kW'}),
    promotes_inputs=[('Power_ODE', 'hop0.main_phase.timeseries.Power_R'),
                     ('Power_statics', 'Power_{rotor,slack}')],
    promotes_outputs=['Power_check'])

# Power_check <= 0 forces the static power guess to stay at or above the
# maximum power seen anywhere in the trajectory.
p.model.add_constraint('Power_check', upper=0, ref=1)
The constraint on the slack variable effectively helped us ensure that our slack rotor power matched the maximum rotor power during the mission. This allowed us to get the right sizes for the rotor parts (i.e. motors).

(Differentiable Image Sampling) Custom Integer Sampling Kernel, Spatial Transformer Network

I was looking through the Spatial Transformer Network paper, and I am trying to implement a custom grid_sample function (inheriting the autograd.Function class) in PyTorch for the Integer Sampling Kernel.
While defining the backward function, I have come across the following conundrum.
Given that the integer sampling works as in the paper:
V_i^c = sum_n sum_m U_nm^c * delta(floor(x_i^s + 0.5) - m) * delta(floor(y_i^s + 0.5) - n)
I think that the gradients w.r.t. the input map and the transformed grid (x_i^s, y_i^s) should be the following:
Gradient w.r.t. the input map: dV_i^c / dU_nm^c = delta(floor(x_i^s + 0.5) - m) * delta(floor(y_i^s + 0.5) - n)
Gradient w.r.t. the transformed grid (x_i^s): dV_i^c / dx_i^s = 0
Gradient w.r.t. the transformed grid (y_i^s): dV_i^c / dy_i^s = 0
as the derivative of the Kronecker delta function is zero (I'm unsure about this!! - HELP)
Derivative of the Kronecker delta?
Thus I reach the conclusion that the gradient w.r.t. the input should be a tensor of the same size as the input, filled with ones where a pixel was sampled and zeros where it wasn't, and that the gradient w.r.t. the transformed grid should be a tensor full of zeros.
However, if the gradient of the transformed grid is 0, then by the chain rule no information will be passed on to the layers before the integer sampler. Therefore I think the derivative with respect to the grid should be something else. Could anybody point out what I'm doing wrong?
Many thanks in advance!
For future reference, and for those who might have had similar questions to the one I posted.
I've emailed Dr Jaderberg (one of the authors of 'Spatial Transformer Networks') about this question and he has confirmed "that the gradient wrt the coordinates for integer sampling is 0." So I wasn't doing anything wrong, and it was right all along!
He was very kind in his response and explained that integer sampling was mentioned in the paper mainly to introduce the bilinear sampling scheme, and he gave some insight into how one might implement integer sampling if I really wanted to:
"you could think about using some numerical differentiation techniques (e.g. look at difference of x to its neighbours). This would assume smoothness in the image wrt coordinates."
So with great thanks to Dr Jaderberg, I'm happy to close this question.
I guess thinking about how I'd use numerical methods to implement the integer kernel for the sampling function is another challenge for myself, but until then I guess the bilinear sampler is my friend! :)
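For future readers, here is a minimal sketch of what such a custom Function could look like. It assumes unnormalised pixel coordinates and the shapes noted in the comments; the class name and indexing details are mine, not reference code from the paper.

import torch
from torch.autograd import Function

class IntegerGridSample(Function):
    """Integer-kernel (nearest-pixel) sampling with a hand-written backward.
    input: (N, C, H, W); grid: unnormalised pixel coords (x_s, y_s), shape (N, H_out, W_out, 2)."""

    @staticmethod
    def forward(ctx, input, grid):
        n, c, h, w = input.shape
        x = grid[..., 0].round().long().clamp(0, w - 1)   # nearest source column
        y = grid[..., 1].round().long().clamp(0, h - 1)   # nearest source row
        flat = (y * w + x).view(n, 1, -1).expand(n, c, -1)
        ctx.save_for_backward(flat)
        ctx.shapes = (input.shape, grid.shape)
        out = input.reshape(n, c, h * w).gather(2, flat)
        return out.view(n, c, *grid.shape[1:3])

    @staticmethod
    def backward(ctx, grad_output):
        (flat,) = ctx.saved_tensors
        input_shape, grid_shape = ctx.shapes
        n, c, h, w = input_shape
        # Gradient w.r.t. the input map: scatter the output gradient back onto
        # the pixels that were actually sampled; everything else stays zero.
        grad_input = grad_output.new_zeros(n, c, h * w)
        grad_input.scatter_add_(2, flat, grad_output.reshape(n, c, -1))
        # Gradient w.r.t. the grid: the rounding kernel is piecewise constant
        # in the coordinates, so it is zero (almost) everywhere -- exactly the
        # conclusion confirmed in the answer above.
        grad_grid = grad_output.new_zeros(grid_shape)
        return grad_input.view(input_shape), grad_grid

# usage: out = IntegerGridSample.apply(feature_map, pixel_grid)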

8 point algorithm for estimating Fundamental Matrix

I'm watching a lecture about estimating the fundamental matrix for use in stereo vision using the 8 point algorithm. I understand that once we recover the fundamental matrix between two cameras we can compute the epipolar line on one camera given a point on the other. To my understanding this epipolar line (after it's been rectified) makes it easy to find feature correspondences, because we are simply matching features along a 1D line.
The confusion comes from the fact that the 8-point algorithm itself requires at least 8 feature correspondences to estimate the Fundamental Matrix.
So, we are finding point correspondences to recover a matrix that is used to find point correspondences?
This seems like a chicken-egg paradox so I guess I'm misunderstanding something.
The fundamental matrix can be precomputed. This leads to two advantages:
You can use a nice environment in which features can be matched easily (like using a chessboard) to compute the fundamental matrix.
You can use more computationally expensive operations like a sequence of SIFT, FLANN and RANSAC across the entire image since you only need to do that once.
After getting the fundamental matrix, you can find correspondences in a noisy environment more efficiently than you could with the method used to compute the fundamental matrix in the first place.
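As a rough illustration of the "precompute once, reuse afterwards" idea with OpenCV in Python (the file names and the ratio-test threshold are placeholders; the calls themselves are standard OpenCV API):

import cv2
import numpy as np

# Offline, in a feature-rich scene: match SIFT features with FLANN, then let
# RANSAC reject outliers while estimating the fundamental matrix.
img1 = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.FlannBasedMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

# Online: for points in image 1, their epipolar lines in image 2 are l' = F x,
# so the correspondence search collapses to a 1-D search along each line.
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)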

What is the purpose in this part of the Monte Carlo path tracing algorithm?

In all of the simple path-tracing algorithms that use lots of Monte Carlo samples (for example in the slides here), the path-tracing part of the algorithm randomly chooses between returning the emitted value of the current surface and continuing by tracing another ray from that surface's hemisphere. Like so:
TracePath(p, d) returns (r, g, b) [and calls itself recursively]:
    Trace ray (p, d) to find nearest intersection p'
    Select with probability (say) 50%:
        Emitted:
            return 2 * (Le_red, Le_green, Le_blue)   // 2 = 1/(50%)
        Reflected:
            generate ray in random direction d'
            return 2 * fr(d -> d') * (n dot d') * TracePath(p', d')
Is this just a way of using russian roulette to terminate a path while remaining unbiased? Surely it would make more sense to count the emissive and reflective properties for all ray paths together and use russian roulette just to decide whether to continue tracing or not.
And here's a follow-up question: why do some of these algorithms I'm seeing (like in the book 'Physically Based Rendering Techniques') only compute emission once, instead of taking into account all the emissive properties of an object? The rendering equation is basically
L_o = L_e + integral of (light exiting other surfaces into the hemisphere of this surface)
which seems to count the emissive properties both in this L_o and in the integral of all the other L_o's, so the algorithms should follow suit.
In reality, the single emission-vs.-reflection choice is a bit too simplistic. To answer the first question: the coin flip is indeed used to terminate the ray while keeping the estimator unbiased, but it produces much higher variance (noise). The second question is a bit more complex....
In the abstract of Shirley, Wang and Zimmerman TOG 94, the authors briefly summarize the benefits and complexities of Monte Carlo sampling:
In a distribution ray tracer, the crucial part of the direct lighting
calculation is the sampling strategy for shadow ray testing. Monte
Carlo integration with importance sampling is used to carry out this
calculation. Importance sampling involves the design of
integrand-specific probability density functions which are used to
generate sample points for the numerical quadrature. Probability
density functions are presented that aid in the direct lighting
calculation from luminaires of various simple shapes. A method for
defining a probability density function over a set of luminaires is
presented that allows the direct lighting calculation to be carried
out with one sample, regardless of the number of luminaires.
If we start dissecting that abstract, here are some of the important points:
Lights aren't points: in real life, we're almost never dealing with a point light source (e.g., a single LED).
Shadows are usually soft: this is a consequence of the non-point lights. It's very rare to see a truly hard-edged shadow in real life.
Noise (especially bright sampling artifacts) is disproportionately distracting: humans have a lot of intuition about how things should look. Look at slide 5 (the glass sphere on a table) in the OP's linked presentation. Note the bright specks in the shadow.
When rendering for more visual realism, both the reflected visibility rays and the lighting-calculation rays must be sampled and weighted according to the surface's bidirectional reflectance distribution function (BRDF).
Note that this is a guided sampling method that's distinctly different from the original question's "generate ray in random direction" method in that it is both:
More accurate: the images in the linked PDF suffer a bit from the PDF conversion, but Figure 10 is a reasonable representation of the original - note the lack of the bright speckle artifacts that you will sometimes see otherwise (as in figure 5 of the original presentation).
Significantly faster: as the original presentation notes, unguided Monte Carlo sampling can take quite a while to converge. More sampling rays = much more computation = more time.
After reading the slides (thank you for posting), I'll amend my answer as best I can.
Is this just a way of using russian roulette to terminate a path
while remaining unbiased? Surely it would make more sense to count
the emissive and reflective properties for all ray paths together
and use russian roulette just to decide whether to continue tracing
or not.
Perhaps the emitted and reflected properties are treated differently because the reflected path depends on the incident path in a way that emitted paths do not (at least for a specular surface). Does the algorithm take a Bayesian approach and use prior information about the incidence angle as a prior for predicting the reflection angle? Or is this a Feynman-style integration over all paths to come up with a probability? It's hard to tell without digging deeper into the details of the theory.
My earlier black body comment is quite incorrect. I see that the slides talk about (R, G, B) components; black body emissivities are integrated over all wavelengths.
And here's a follow-up question: why do some of these algorithms I'm
seeing (like in the book 'Physically Based Rendering Techniques')
only compute emission once, instead of taking into account all the
emissive properties of an object? The rendering equation is
basically
L_o = L_e + integral of (light exiting other surfaces into the
hemisphere of this surface)
A single emissivity for the surface would assume that there's no dependence on wavelength or direction. I don't know how significant that is for rendering photo-realistic images.
The ones that are posted are certainly impressive. I wonder how different they would look if the complexities that you have in mind were included?
Thank you for posting a nice question - I'm voting it up. It's been a long time since I've thought about this kind of problem. I wish I could be more helpful.
Yes, that is a very basic implementation of Russian roulette, though normally the termination probability would take the light intensity into account (i.e. less light means the value contributes less to the final sum, so use a higher probability of terminating).
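To make the variant the question proposes concrete, here is a rough Python sketch (my own paraphrase, not code from the slides; scene, hit, and sample_hemisphere are placeholder helpers) in which emission is always accumulated and Russian roulette only decides whether to continue:

import numpy as np

def sample_hemisphere(normal, rng):
    # Placeholder: uniformly sample a direction in the hemisphere around `normal`.
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if np.dot(v, normal) > 0.0 else -v

def trace_path(p, d, scene, rng):
    hit = scene.intersect(p, d)
    if hit is None:
        return np.zeros(3)

    radiance = np.asarray(hit.Le, dtype=float)        # count emission exactly once

    p_continue = 0.5                                   # could also depend on albedo/throughput
    if rng.random() >= p_continue:
        return radiance                                # terminate; estimator stays unbiased

    d_new = sample_hemisphere(hit.normal, rng)
    cos_theta = max(np.dot(hit.normal, d_new), 0.0)
    fr = np.asarray(hit.brdf(d, d_new), dtype=float)   # RGB BRDF value for this direction pair
    # Dividing by p_continue compensates for the terminated paths (the same role
    # the 2x factor plays in the slides); the hemisphere pdf factor is omitted
    # here, just as it is in the slides' sketch.
    radiance += fr * cos_theta * trace_path(hit.position, d_new, scene, rng) / p_continue
    return radiance

# usage: trace_path(eye, direction, scene, np.random.default_rng())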

fitness function and Selection for a Genetic Algorithm

I'm trying to design a nonlinear fitness function where I maximize variable A and minimize the variable B. The issue is that maximizing A is much more important at single digit values, almost logarithmic. B needs to be minimized and in contrast to A, it becomes less important when small (less than one) and more important when it's larger (>1), so exponential decay.
The main goal is to optimize A, so I guess an analog is A=profits, B=costs
Should I aim to keep everything positive so that I can use roulette-wheel selection, or would it be better to use a rank/tournament kind of system? The purpose of my algorithm is shape optimization.
Thanks
When considering a multi-objective problem the goal is usually to identify all solutions that lie on the Pareto curve - the Pareto optimal set. Have a look here for a 2-dimensional visual example. When the algorithm completes you want a set of solutions that are not dominated by any other solution. You therefore need to define a pareto ranking mechanism to take into account both objectives - for a more in depth explanation, as well as links to even more reading, go here
With this in mind, in order to effectively explore all solutions along the Pareto front you do not want an implementation that encourages premature convergence, otherwise your algorithm will only explore the search space in one specific area of the Pareto curve. I would implement a selection operator that keeps all members of each iteration's optimal set of solutions, that is, all solutions not dominated by any other, plus a parameter-controlled percentage of other solutions. This way you encourage exploration all along the Pareto curve.
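For illustration, here is a small sketch of the dominance bookkeeping that implies, assuming each solution is scored as a pair (A, B) with A maximized and B minimized:

def dominates(p, q):
    # True if p is at least as good as q in both objectives and strictly better in one.
    a_p, b_p = p
    a_q, b_q = q
    return (a_p >= a_q and b_p <= b_q) and (a_p > a_q or b_p < b_q)

def non_dominated(solutions):
    # The current Pareto-optimal set: solutions that no other solution dominates.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]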
You also need to ensure your mutation and crossover operators are tuned correctly too. With any novel application of Evolutionary Algorithms, part of the problem is trying to identify an optimal parameter set for the problem domain... this is where it gets really interesting!!
The description is very vague, but assuming that you actually have an idea of what the function should look like and you're just wondering whether you need to modify it so that proportional selection can be used easily, then no. Regardless of fitness function, you should probably default to using something like tournament selection. Controlling selection pressure is one of the most important things you have to do in order to get consistently good results, and roulette wheel selection doesn't allow you that control. You typically get enormous pressure very early, which drives premature convergence. That might be preferable in a few cases, but it's not where I'd start my investigations.
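As a small illustration of why tournament selection gives you that control (a sketch with placeholder names; it assumes higher fitness is better), the tournament size k is the pressure knob:

import random

def tournament_select(population, fitness, k=3, rng=random):
    # Sample k individuals uniformly and keep the fittest; larger k means
    # higher selection pressure, while k=1 degenerates to uniform random selection.
    contenders = rng.sample(range(len(population)), k)
    winner = max(contenders, key=lambda i: fitness[i])
    return population[winner]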
