Mapping function for RBF - math

I am reading about the feature map of the Gaussian kernel and wondering how it is derived. For the 1-D scalar case there is a clear explanation here:
http://www.csie.ntu.edu.tw/~cjlin/talks/kuleuven_svm.pdf
(slide 11)
but how would it work in the 2-D case? In the expansion, the exponent now contains a dot product, and even after the Taylor expansion I don't see why it can simply be separated into two feature mappings.
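For what it's worth, here is a sketch of how the separation works in the general d-dimensional case (my own derivation, following the same idea as the 1-D slide; sigma = 1 for brevity). Factor the kernel as

K(x, y) = \exp\left(-\tfrac{1}{2}\|x - y\|^2\right)
        = \exp\left(-\tfrac{1}{2}\|x\|^2\right) \exp\left(-\tfrac{1}{2}\|y\|^2\right) \exp(x \cdot y),

then Taylor-expand the last factor and apply the multinomial theorem:

\exp(x \cdot y) = \sum_{n=0}^{\infty} \frac{(x \cdot y)^n}{n!}
              = \sum_{n=0}^{\infty} \sum_{|\alpha| = n} \frac{x^{\alpha} y^{\alpha}}{\alpha!},

where \alpha = (\alpha_1, \dots, \alpha_d) is a multi-index, x^{\alpha} = \prod_j x_j^{\alpha_j} and \alpha! = \prod_j \alpha_j!. Every term factors into a function of x times the same function of y, so

K(x, y) = \sum_{\alpha} \phi_{\alpha}(x)\, \phi_{\alpha}(y),
\qquad
\phi_{\alpha}(x) = \exp\left(-\tfrac{1}{2}\|x\|^2\right) \frac{x^{\alpha}}{\sqrt{\alpha!}}.

The 1-D case on slide 11 is recovered when d = 1 and the multi-indices are just the integers n.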

PCL RANSAC model fitting: How can I initialise the model parameters?

I'm reading the PCL tutorial on plane segmentation, because I want to find 3D circles in a very large and dense point cloud I have.
I already know approximate values for the circle's center, radius and orientation, but so far I have found no way to pass this information to the SACSegmentation object. I could also name three inliers from which to compute initial values, but I don't see a way to do that either.
My point cloud is extremely large (10-20M points), so purely random sampling will likely be prohibitively expensive, especially since I already know more or less what the parameter values should be and only want to refine them.
Question: How can I set the starting point of the Sample Consensus optimization procedure?
To segment and optimize the model:
Call SACSegmentation::setOptimizeCoefficients(true).
Use SACSegmentation::segment, which takes an initial guess (or the final model to segment with, if optimize coefficients is set to false).
You can provide your guess there. Depending on the optimization method used, this can reduce the computational load.
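A minimal C++ sketch of that setup for a 3D circle model. The SACSegmentation calls below are standard PCL; whether the coefficients argument of segment is actually used as an initial guess is taken from the answer above, so verify it against your PCL version. The cloud loading and the radius value r_guess are placeholders.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>

int main()
{
  // Placeholder: in practice, load your 10-20M point cloud here.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  const double r_guess = 0.25;  // hypothetical known approximate radius

  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_CIRCLE3D);           // fit a 3D circle
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setOptimizeCoefficients(true);                  // refine the coefficients after consensus
  seg.setDistanceThreshold(0.01);                     // inlier tolerance, in the cloud's units
  seg.setRadiusLimits(0.9 * r_guess, 1.1 * r_guess);  // constrain the radius near the known value
  seg.setMaxIterations(1000);
  seg.setInputCloud(cloud);

  pcl::PointIndices inliers;
  pcl::ModelCoefficients coefficients;
  // Per the answer above, coefficients can be pre-filled with the approximate
  // center / orientation / radius as the starting guess when optimization is on.
  seg.segment(inliers, coefficients);
  return 0;
}

Constraining the radius with setRadiusLimits is another way to exploit the prior knowledge even if the initial-guess route turns out not to be supported in your version.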

About the fft function

Does anyone know which algorithm is used in Julia to perform the fast Fourier transform? The documentation only says:
...
A one-dimensional FFT computes the one-dimensional discrete Fourier transform (DFT) as defined by
\operatorname{DFT}(A)[k] = \sum_{n=1}^{\operatorname{length}(A)} \exp\left(-i\frac{2\pi (n-1)(k-1)}{\operatorname{length}(A)}\right) A[n].
...
In particular, I see a discrepancy in my transformed data: it appears to be "shifted" by a phase of (I think) pi. Is there a convention that fixes this global phase?
EDIT:
Perhaps it's worth saying that if I perform the inverse fft, then the discrepancy in the phase is corrected.
I believe Julia uses the FFTW library, which implements several variants of the Cooley-Tukey algorithm, as described in the paper below:
http://www.fftw.org/fftw-paper-ieee.pdf
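As for the phase: one common source of an apparent phase of pi (a guess at the cause, not something the FFTW paper addresses) is that the data is circularly shifted relative to the convention quoted above. With that 1-based definition, shifting the input by m samples only multiplies each bin by a phase factor,

B[n] = A[((n - 1 + m) \bmod \operatorname{length}(A)) + 1]
\;\Longrightarrow\;
\operatorname{DFT}(B)[k] = \exp\left(i\frac{2\pi m (k-1)}{\operatorname{length}(A)}\right) \operatorname{DFT}(A)[k],

and for m = length(A)/2 that factor is (-1)^{k-1}: every other bin flips sign, which looks like a phase of pi. Applying the inverse fft simply undoes it, which is consistent with the EDIT.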

Labeling matlab plots with input parameters

I am integrating a system of ODEs using the MATLAB routine ode45. I do not have a reliable way to label plots with the parameters used to produce the plotted results. It would be easy if there were an approved substitute for global variables. I could write a script that automatically edits the derivative function for each case to hard-wire the constants, but there must be a better way.
To specify constants, simply add an equation for each constant and give 0 as its derivative. This adds a column to the result matrix, but the constant's value is then available for use in calculating the other derivatives (and for labeling the plot afterwards, since it is returned with the solution).
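In equation form, the trick above augments the state vector with the parameters and assigns them zero derivatives (a sketch; y is the original state, p the vector of constants):

\frac{d}{dt} \begin{bmatrix} y \\ p \end{bmatrix}
= \begin{bmatrix} f(t, y, p) \\ 0 \end{bmatrix},
\qquad
\begin{bmatrix} y(t_0) \\ p(t_0) \end{bmatrix}
= \begin{bmatrix} y_0 \\ p_0 \end{bmatrix}.

Because p is carried along in the solution that ode45 returns, the parameter values are available afterwards when titling or annotating the plot. (In current MATLAB, passing parameters through an anonymous function handle, e.g. ode45(@(t, y) f(t, y, p), tspan, y0), avoids the extra columns entirely.)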

In what situation would a Taylor series for a polynomial be necessary?

I'm having a hard time understanding why it would be useful to use the Taylor series of a function to obtain an approximation of it, instead of just using the function itself when programming. If I can tell my computer to compute e^(.1) and it will give me an exact value, why would I use an approximation instead?
Taylor series are generally not used to approximate functions. Usually, some form of minimax polynomial is used.
Taylor series converge slowly (it takes many terms to get the accuracy desired) and are inefficient (they are more accurate near the point around which they are centered and less accurate away from it). The largest use of Taylor series is likely in mathematics classes and papers, where they are useful for examining the properties of functions and for learning about calculus.
To approximate functions, minimax polynomials are often used. A minimax polynomial has the minimum possible maximum error for a particular situation (interval over which a function is to be approximated, degree available for the polynomial). There is usually no analytical solution to finding a minimax polynomial. They are found numerically, using the Remez algorithm. Minimax polynomials can be tailored to suit particular needs, such as minimizing relative error or absolute error, approximating a function over a particular interval, and so on. Minimax polynomials need fewer terms than Taylor series to get acceptable results, and they “spread” the error over the interval instead of being better in the center and worse at the ends.
When you call the exp function to compute e^x, you are likely using a minimax polynomial, because somebody has done the work for you and constructed a library routine that evaluates the polynomial. For the most part, the only arithmetic computer processors can do is addition, subtraction, multiplication, and division. So other functions have to be constructed from those operations. The first three give you polynomials, and polynomials are sufficient to approximate many functions, such as sine, cosine, logarithm, and exponentiation (with some additional operations of moving things into and out of the exponent field of floating-point values). Division adds rational functions, which is useful for functions like arctangent.
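As an illustration of that "range reduction plus polynomial" structure, here is a C++ sketch (the name my_exp is made up; a real library would use minimax coefficients and much more careful error control, whereas this one uses a plain Taylor polynomial on the reduced interval just to show the shape of the computation, and is only accurate to roughly 1e-7):

#include <cmath>
#include <cstdio>

// Sketch: exp(x) built from +, -, *, / and exponent manipulation.
// Write x = k*ln(2) + r with |r| <= ln(2)/2, approximate exp(r) by a
// polynomial (Taylor here; a production library would use minimax
// coefficients), then scale by 2^k with ldexp.
static double my_exp(double x)
{
    const double ln2 = 0.6931471805599453;
    const double k = std::nearbyint(x / ln2);   // nearest integer to x / ln(2)
    const double r = x - k * ln2;               // reduced argument, |r| <= ln(2)/2

    // Degree-6 Taylor polynomial for exp(r), evaluated in Horner form.
    double p = 1.0 / 720.0;
    p = p * r + 1.0 / 120.0;
    p = p * r + 1.0 / 24.0;
    p = p * r + 1.0 / 6.0;
    p = p * r + 0.5;
    p = p * r + 1.0;
    p = p * r + 1.0;

    return std::ldexp(p, static_cast<int>(k));  // p * 2^k
}

int main()
{
    std::printf("my_exp(0.1)   = %.12f\n", my_exp(0.1));
    std::printf("std::exp(0.1) = %.12f\n", std::exp(0.1));
    return 0;
}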
For two reasons. First and foremost, most processors do not have hardware implementations of complex operations like exponentials, logarithms, etc. In such cases the programming language may provide a library function for computing them; in other words, someone has applied a Taylor series or some other approximation for you.
Second, you may have a function that not even the language supports.
I recently wanted to use lookup tables with interpolation to get an angle and then compute the sin() and cos() of that angle. The trouble is that it's a DSP with no floating point and no trigonometric instructions, so those two functions are really slow (software implementations). Instead, I put sin(x) in the table instead of x, and then used the Taylor series for y = sqrt(1 - x*x) to compute cos(x) from it. That series is accurate over the range I needed with only 5 terms (the denominators are all powers of two!), can be implemented in fixed point using plain C, and generates code faster than any other approach I could think of.
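For reference, the five-term series being described is presumably the expansion of sqrt(1 - x^2) about x = 0 (my reading; the answer doesn't write it out):

\sqrt{1 - x^2} \approx 1 - \frac{x^2}{2} - \frac{x^4}{8} - \frac{x^6}{16} - \frac{5x^8}{128}.

All of the denominators are powers of two, so in fixed point each division reduces to a shift.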

What is the purpose of this part of the Monte Carlo path tracing algorithm?

In all of the simple path-tracing algorithms that use lots of Monte Carlo samples, the path-tracing part of the algorithm randomly chooses between returning the emitted value of the current surface and continuing by tracing another ray from that surface's hemisphere (for example, in the slides here). Like so:
TracePath(p, d) returns (r, g, b) [and calls itself recursively]:
    Trace ray (p, d) to find nearest intersection p'
    Select with probability (say) 50%:
        Emitted:
            return 2 * (Le_red, Le_green, Le_blue)  // 2 = 1/(50%)
        Reflected:
            generate ray in random direction d'
            return 2 * fr(d -> d') * (n dot d') * TracePath(p', d')
Is this just a way of using Russian roulette to terminate a path while remaining unbiased? Surely it would make more sense to count the emissive and reflective properties for all ray paths together and use Russian roulette just to decide whether to continue tracing or not.
And here's a follow-up question: why do some of these algorithms I'm seeing (like in the book 'Physically Based Rendering Techniques') only compute emission once, instead of taking into account all the emissive properties of an object? The rendering equation is basically
L_o = L_e + integral of (light exiting other surfaces into the hemisphere of this surface)
which seems to count the emissive properties both in this L_o and in the integral over all the other L_o's, so the algorithms should do the same.
In reality, the single emission-vs.-reflection choice is a bit too simplistic. To answer the first question: the coin flip is used to terminate the path, but it comes at the cost of considerably more noise (variance). The second question is a bit more complex.
In the abstract of Shirley, Wang and Zimmerman TOG 94, the authors briefly summarize the benefits and complexities of Monte Carlo sampling:
In a distribution ray tracer, the crucial part of the direct lighting
calculation is the sampling strategy for shadow ray testing. Monte
Carlo integration with importance sampling is used to carry out this
calculation. Importance sampling involves the design of
integrand-specific probability density functions which are used to
generate sample points for the numerical quadrature. Probability
density functions are presented that aid in the direct lighting
calculation from luminaires of various simple shapes. A method for
defining a probability density function over a set of luminaires is
presented that allows the direct lighting calculation to be carried
out with one sample, regardless of the number of luminaires.
If we start dissecting that abstract, here are some of the important points:
Lights aren't points: in real life, we're almost never dealing with a point light source (e.g., a single LED).
Shadows are usually soft: this is a consequence of the non-point lights. It's very rare to see a truly hard-edged shadow in real life.
Noise (especially bright sampling artifacts) is disproportionately distracting: humans have a lot of intuition about how things should look. Look at slide 5 (the glass sphere on a table) in the OP's linked presentation and note the bright specks in the shadow.
When rendering for greater visual realism, both the reflected visibility rays and the direct lighting rays must be sampled and weighted according to the surface's bidirectional reflectance distribution function (BRDF).
Note that this is a guided sampling method that's distinctly different from the original question's "generate ray in random direction" method in that it is both:
More accurate: the images in the linked PDF suffer a bit from the PDF conversion, but Figure 10 is a reasonable representation of the original; note the lack of the bright speckle artifacts that you will sometimes see (as in figure 5 of the original presentation).
Significantly faster: as the original presentation notes, unguided Monte Carlo sampling can take quite a while to converge. More sampling rays = much more computation = more time.
After reading the slides (thank you for posting), I'll amend my answer as best I can.
Is this just a way of using russian roulette to terminate a path
while remaining unbiased? Surely it would make more sense to count
the emissive and reflective properties for all ray paths together
and use russian roulette just to decide whether to continue tracing
or not.
Perhaps the emitted and reflected properties are treated differently because the reflected path depends on the incident path in a way that emitted paths do not (at least for a specular surface). Does the algorithm take a Bayesian approach and use the incidence angle as a prior for predicting the reflection angle? Or is this a Feynman-style integration over all paths to arrive at a probability? It's hard to tell without digging deeper into the details of the theory.
My earlier black body comment is quite incorrect. I see that the slides talk about (R, G, B) components; black body emissivities are integrated over all wavelengths.
And here's a follow-up question: why do some of these algorithms I'm
seeing (like in the book 'Physically Based Rendering Techniques')
only compute emission once, instead of taking into account all the
emissive properties of an object? The rendering equation is
basically
L_o = L_e + integral of (light exiting other surfaces into the
hemisphere of this surface)
A single emissivity for the surface assumes that there is no functional dependence on wavelength or direction. I don't know how significant that is for rendering photo-realistic images.
The ones that are posted are certainly impressive. I wonder how different they would look if the complexities that you have in mind were included?
Thank you for posting a nice question - I'm voting it up. It's been a long time since I've thought about this kind of problem. I wish I could be more helpful.
Yes, that is a very basic implementation of Russian roulette, though normally the termination probability would take the light intensity into account (i.e., when a path carries less light it contributes less to the final sum, so you use a higher probability of terminating it).
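A small C++ sketch of the variant that answer describes, with the path's accumulated throughput driving the termination probability (the Color type, the rand01 helper and the 0.95 cap are made up for illustration, not taken from any particular renderer):

#include <algorithm>
#include <cstdlib>

struct Color { double r, g, b; };   // minimal RGB throughput for the sketch

static double rand01() { return std::rand() / (RAND_MAX + 1.0); }

// Russian roulette driven by the path throughput: dim paths contribute little
// to the final sum, so they are terminated with higher probability; survivors
// are divided by the survival probability, which keeps the estimator unbiased.
static bool continuePath(Color& throughput)
{
    double p = std::max(throughput.r, std::max(throughput.g, throughput.b));
    p = std::min(p, 0.95);          // always leave some chance of terminating

    if (rand01() >= p)
        return false;               // terminate the path

    throughput.r /= p;              // compensate so the expected value is unchanged
    throughput.g /= p;
    throughput.b /= p;
    return true;                    // keep tracing
}

int main()
{
    Color throughput{0.8, 0.6, 0.4};
    int bounces = 0;
    while (bounces < 64 && continuePath(throughput))
        ++bounces;                  // stand-in for "add emission, apply BRDF, trace next ray"
    return 0;
}

In a tracer structured like the pseudocode in the question, you would add the surface's emission at every bounce, multiply the throughput by the BRDF and cosine terms, and then call this to decide whether to trace the next ray.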
