Fastest numerical solution of a real cubic polynomial? - r

R question: Looking for the fastest way to NUMERICALLY solve a bunch of arbitrary cubics known to have real coeffs and three real roots. The polyroot function in R is reported to use Jenkins-Traub's algorithm 419 for complex polynomials, but for real polynomials the authors refer to their earlier work. What are the faster options for a real cubic, or more generally for a real polynomial?

The standard numerical approach for doing this many times in a reliable, stable manner is: (1) form the companion matrix, (2) find the eigenvalues of the companion matrix.
You may think this is a harder problem to solve than the original one, but this is how the solution is implemented in most production code (say, Matlab).
For the polynomial:
p(t) = c0 + c1 * t + c2 * t^2 + t^3
the companion matrix is:
[ 0  0  -c0 ]
[ 1  0  -c1 ]
[ 0  1  -c2 ]
Find the eigenvalues of such matrix; they correspond to the roots of the original polynomial.
For doing this very fast, download the eigenvalue subroutines from LAPACK, compile them, and link them to your code. Do this in parallel if you have very many (say, about a million) sets of coefficients.
Notice that the coefficient of t^3 is one; if this is not the case in your polynomials, divide the whole polynomial by that coefficient first and then proceed.
Good luck.
Edit: NumPy and Octave also rely on this methodology for computing the roots of polynomials. See, for instance, this link.
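For illustration, here is a minimal NumPy sketch of the same idea (the helper name and the test polynomial are made up for this example); numpy.roots does essentially the same thing internally:

import numpy as np

def cubic_roots(c0, c1, c2):
    """Roots of p(t) = c0 + c1*t + c2*t^2 + t^3 via companion-matrix eigenvalues."""
    companion = np.array([[0.0, 0.0, -c0],
                          [1.0, 0.0, -c1],
                          [0.0, 1.0, -c2]])
    return np.linalg.eigvals(companion)

# p(t) = (t - 1)(t - 2)(t - 3) = -6 + 11 t - 6 t^2 + t^3
print(cubic_roots(-6.0, 11.0, -6.0))   # roots near 1, 2, 3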

The fastest known way (that I'm aware of) to find the real solutions of a system of arbitrary polynomials in n variables is polyhedral homotopy. A detailed explanation is probably beyond a StackOverflow answer, but essentially it's a path-following algorithm that exploits the structure of each equation using toric geometry. Google will give you a number of papers.
Perhaps this question is better suited for mathoverflow?

Fleshing out Arietta's answer above:
> a <- c(1,3,-4)
> m <- matrix(c(0,0,-a[1],1,0,-a[2],0,1,-a[3]), byrow=T, nrow=3)
> roots <- eigen(m, symm=F, only.values=T)$values
Whether this is faster or slower than using the cubic solver in the GSL package (as suggested by knguyen above) is a matter of benchmarking it on your system.

Do you need all 3 roots or just one? If just one, I would think Newton's Method would work ok. If all 3 then it might be problematic in circumstances where two are close together.

1) Solve the derivative polynomial P' = 0 to bracket your three roots; P' is a quadratic, so this is just the quadratic formula. Call its roots a and b (with a < b).
2) For the middle root, use a few steps of bisection between a and b, and when you're close enough, finish with Newton's method.
3) For the min and max root, "hunt" the solution. For the max root:
Start with x0 = b, x1 = b + (b - a) * lambda, where lambda is a moderate number (say 1.6)
do x_n = b + (x_{n - 1} - a) * lambda until P(x_n) and P(b) have different signs
Perform bisection + Newton between x_{n - 1} and x_n
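A rough Python sketch of the three steps above (a hypothetical helper, assuming three distinct real roots; plain bisection is used throughout and the final Newton polish is omitted for brevity):

import math

def refine(p, lo, hi, iters=60):
    """Plain bisection on [lo, hi]; assumes p(lo) and p(hi) have opposite signs."""
    flo = p(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = p(mid)
        if fmid == 0.0:
            return mid
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cubic_roots(c0, c1, c2, lam=1.6):
    """Roots of p(t) = c0 + c1*t + c2*t^2 + t^3, assumed to have 3 distinct real roots."""
    p = lambda t: c0 + t * (c1 + t * (c2 + t))
    # Step 1: critical points a < b from p'(t) = c1 + 2*c2*t + 3*t^2.
    d = math.sqrt(c2 * c2 - 3.0 * c1)
    a, b = (-c2 - d) / 3.0, (-c2 + d) / 3.0
    mid = refine(p, a, b)                      # Step 2: middle root
    # Step 3: hunt outward until the sign changes, then bisect.
    x = b + (b - a) * lam
    while (p(x) > 0) == (p(b) > 0):
        x = b + (x - a) * lam
    high = refine(p, b, x)
    x = a - (b - a) * lam
    while (p(x) > 0) == (p(a) > 0):
        x = a - (b - x) * lam
    low = refine(p, x, a)
    return low, mid, high

print(cubic_roots(-6.0, 11.0, -6.0))           # roots of (t-1)(t-2)(t-3): 1, 2, 3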

The common methods are available: Newton's Method, Bisection Method, Secant, Fixed point iteration, etc. Google any one of them.
If you have a non-linear system on the other hand (e.g. a system of N polynomial equations in N unknowns), a method such as high-order Newton may be used.

Have you tried looking into the GSL package http://cran.r-project.org/web/packages/gsl/index.html?


Flexagon Simulation

What is the best way to simulate a flexagon?
My best guess at a starting point is to represent the faces and edges, and simulate transformations based on where edges meet. I'm thinking that in the process of implementing a transformation, it will be apparent when folding in a given direction is physically impossible.
I'm going to try to figure this out by experimentation, but it definitely feels like the kind of problem where a gap in my facility with mathematics is holding me back.
Edit: To clarify, I'm interested in what sort of data structures I could use to represent a flexagon and how I can manipulate those data structures to simulate the folding of a flexagon.
If you write all of the invariants of the flexagon as a system of equations, small deviations around legal states may be written as a linear system. For instance, the stiffness of a piece of paper between (x1,y1) and (x2,y2) enforces
(x1 - x2)**2 + (y1 - y2)**2 - L**2 == 0
This can be softened to
chi2 = (x1 - x2)**2 + (y1 - y2)**2 - L**2 + other constraints...
Derivatives of chi2 with respect to x1, x2, y1, y2 yield linear equations. A system of linear equations is a matrix, and an eigenvalue/eigenvector decomposition of that matrix give you linear combinations of the x1, x2, y1, y2 parameters that are easy or hard to bend. The eigenvectors are a basis set of possible directions and each one's corresponding eigenvalue tells you how hard it is to bend in that direction. Larger eigenvalues are more constrained.
A problem with the above is that if there are any directions that are truly allowed, that is, the derivative of chi2 with respect to some parameter direction p is 0 (the original constraint is absolutely satisfied), then the matrix is singular and can't be inverted to get the eigensystem. If you only want to know what those absolutely allowed directions are, you can compute the null space of the matrix instead of its eigensystem. However, I suspect (never having played with a flexagon) that the "allowed" directions involve a little bit of bending, in which case chi2 is small but nonzero. Then you'd be looking for small but nonzero eigenvalues. Other degrees of freedom are allowed and uninteresting, such as translation or rotation of the whole object. To turn it into a pure eigensystem problem (no null space at all), add constraints to the system with arbitrarily small constants lambda:
chi2 += lambda_x * (x1 + x2)**2/4.0 + lambda_y * (y1 + y2)**2/4.0
You'll recognize them in your solution because they'll vary as you vary each lambda. (The example above gives a penalty lambda_x to translating in x and lambda_y to translating in y.)
In terms of implementation, you can use any linear algebra software to compute solutions and check for variation with the lambdas. I used Python to prototype a problem like this (detector alignment in high energy physics, in which the constraints are measurements like "this detector is 3 cm from that detector" and the chi2 was derived from the uncertainties "3 cm +- 0.1 cm") and then ported the solution to C++ (BLAS) for production. The Numpy library for Python had enough linear algebra (it's BLAS under the hood), though I also used the generic, non-linear minimizers in Scipy to debug the matrix solution. The hardest part is getting the indexes to line up right, which is necessary when casting it as a matrix and not when you give an objective function to a generic minimizer (because you use variable names instead). This is more of a Matlab or Mathematica problem, so if you're more comfortable with one of them, use it instead. This problem will require a lot of trial and error, so use the most interactive system possible (one with a good REPL or worksheet/notebook-style interface).
It can also be helpful to draw a graph of the connections (graph-theory graph, not a plot), on which to label their constraints. For me, that was a necessary first step before writing out the equations.
It might also help to visualize the system by writing a set of functions that take parameter values (x1, etc.) and draw the figure with OpenGL (or other 3-D mesh renderer). This can show you if some constraint is being violated, because the mesh tiles would pass through each other. It can also help you identify the degrees of freedom represented by each eigenvector: vary the parameters by the linear combination represented by the eigenvector and you'll see if it's just translating/rotating or if it's doing some interesting twist or fold.
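As a toy illustration of the chi2/eigenvalue idea above (not a flexagon, just a single rigid rod with the translation penalties described; all names and constants are made up for this sketch), in Python with NumPy:

import numpy as np

# One rigid rod of length L between (x1, y1) and (x2, y2), plus tiny lambda
# penalties that pin down overall translation so the matrix is not singular.
L, lam = 1.0, 1e-6

def chi2(p):
    x1, y1, x2, y2 = p
    rod = ((x1 - x2)**2 + (y1 - y2)**2 - L**2)**2           # squared rod residual
    penalty = lam * ((x1 + x2)**2 + (y1 + y2)**2) / 4.0     # translation penalties
    return rod + penalty

def hessian(f, p0, h=1e-4):
    """Numerical second-derivative matrix of f at p0."""
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pij = p0.copy(); pij[i] += h; pij[j] += h
            pi = p0.copy(); pi[i] += h
            pj = p0.copy(); pj[j] += h
            H[i, j] = (f(pij) - f(pi) - f(pj) + f(p0)) / h**2
    return H

p0 = np.array([0.0, 0.0, 1.0, 0.0])          # a legal state: rod along the x axis
vals, vecs = np.linalg.eigh(hessian(chi2, p0))
print(vals)   # large eigenvalues = stiff directions, near-zero ones = allowed motions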

Sample uniformly at random from an n-dimensional unit simplex

Sampling uniformly at random from an n-dimensional unit simplex is the fancy way to say that you want n random numbers such that
they are all non-negative,
they sum to one, and
every possible vector of n non-negative numbers that sum to one is equally likely.
In the n=2 case you want to sample uniformly from the segment of the line x+y=1 (ie, y=1-x) that is in the positive quadrant.
In the n=3 case you're sampling from the triangle-shaped part of the plane x+y+z=1 that is in the positive octant of R3:
(Image from http://en.wikipedia.org/wiki/Simplex.)
Note that picking n uniform random numbers and then normalizing them so they sum to one does not work. You end up with a bias towards less extreme numbers.
Similarly, picking n-1 uniform random numbers and then taking the nth to be one minus the sum of them also introduces bias.
Wikipedia gives two algorithms to do this correctly: http://en.wikipedia.org/wiki/Simplex#Random_sampling
(Though the second one currently claims to only be correct in practice, not in theory. I'm hoping to clean that up or clarify it when I understand this better. I initially stuck in a "WARNING: such-and-such paper claims the following is wrong" on that Wikipedia page and someone else turned it into the "works only in practice" caveat.)
Finally, the question:
What do you consider the best implementation of simplex sampling in Mathematica (preferably with empirical confirmation that it's correct)?
Related questions
Generating a probability distribution
java random percentages
This code can work:
samples[n_] := Differences[Join[{0}, Sort[RandomReal[{0, 1}, n - 1]], {1}]]
Basically you just choose n - 1 places on the interval [0,1] to split it up then take the size of each of the pieces using Differences.
A quick run of Timing on this shows that it's a little faster than Janus's first answer.
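For reference, the same sort-the-cuts-and-take-gaps idea looks like this in Python with NumPy (a hypothetical helper; the method itself is language-agnostic):

import numpy as np

def simplex_sample(n):
    """One point uniform on the unit simplex: sort n-1 cuts of [0,1], take the gaps."""
    cuts = np.sort(np.random.uniform(0.0, 1.0, n - 1))
    return np.diff(np.concatenate(([0.0], cuts, [1.0])))

x = simplex_sample(5)
print(x, x.sum())   # five non-negative numbers summing to 1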
After a little digging around, I found this page which gives a nice implementation of the Dirichlet Distribution. From there it seems like it would be pretty simple to follow Wikipedia's method 1. This seems like the best way to do it.
As a test:
In[14]:= RandomReal[DirichletDistribution[{1,1}],WorkingPrecision->25]
Out[14]= {0.8428995243540368880268079,0.1571004756459631119731921}
In[15]:= Total[%]
Out[15]= 1.000000000000000000000000
A plot of 100 samples:
(Image: http://www.public.iastate.edu/~zdavkeos/simplex-sample.png)
I'm with zdav: the Dirichlet distribution seems to be the easiest way ahead, and the algorithm for sampling the Dirichlet distribution which zdav refers to is also presented on the Wikipedia page on the Dirichlet distribution.
Implementation-wise, it is a bit of overhead to do the full Dirichlet distribution first, as all you really need is n random Gamma[1,1] samples. Compare below.
Simple implementation
SimplexSample[n_, opts:OptionsPattern[RandomReal]] :=
(#/Total[#])& @ RandomReal[GammaDistribution[1,1],n,opts]
Full Dirichlet implementation
DirichletDistribution/:Random`DistributionVector[
DirichletDistribution[alpha_?(VectorQ[#,Positive]&)],n_Integer,prec_?Positive]:=
Block[{gammas}, gammas =
Map[RandomReal[GammaDistribution[#,1],n,WorkingPrecision->prec]&,alpha];
Transpose[gammas]/Total[gammas]]
SimplexSample2[n_, opts:OptionsPattern[RandomReal]] :=
(#/Total[#])& @ RandomReal[DirichletDistribution[ConstantArray[1,{n}]],opts]
Timing
Timing[Table[SimplexSample[10,WorkingPrecision-> 20],{10000}];]
Timing[Table[SimplexSample2[10,WorkingPrecision-> 20],{10000}];]
Out[159]= {1.30249,Null}
Out[160]= {3.52216,Null}
So the full Dirichlet is a factor of 3 slower. If you need m > 1 sample points at a time, you could probably win further by doing (#/Total[#]&)/@RandomReal[GammaDistribution[1,1],{m,n}].
Here's a nice concise implementation of the second algorithm from Wikipedia:
SimplexSample[n_] := Rest@# - Most@# &[Sort@Join[{0,1}, RandomReal[{0,1}, n-1]]]
That's adapted from here: http://www.mofeel.net/1164-comp-soft-sys-math-mathematica/14968.aspx
(Originally it had Union instead of Sort@Join -- the latter is slightly faster.)
(See comments for some evidence that this is correct!)
I have created an algorithm for uniform random generation over a simplex. You can find the details in the paper in the following link:
http://www.tandfonline.com/doi/abs/10.1080/03610918.2010.551012#.U5q7inJdVNY
Briefly speaking, you can use the following recursion formulas to find random points over the n-dimensional simplex:
x_1 = 1 - R_1^(1/(n-1))
x_k = (1 - (x_1 + ... + x_{k-1})) * (1 - R_k^(1/(n-k))),   k = 2, ..., n-1
x_n = 1 - (x_1 + ... + x_{n-1})
where the R_k are independent uniform random numbers between 0 and 1.
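A small Python sketch of this recursion, assuming the R_k are independent Uniform(0,1) draws (the function name is made up):

import random

def simplex_sample(n):
    """Stick-breaking recursion: x_k uses the k-th uniform draw R_k."""
    x, remaining = [], 1.0
    for k in range(1, n):
        xk = remaining * (1.0 - random.random() ** (1.0 / (n - k)))
        x.append(xk)
        remaining -= xk
    x.append(remaining)          # x_n = 1 - sum of the others
    return x

print(simplex_sample(4))         # four non-negative numbers summing to 1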
Now I am trying to make an algorithm to generate uniform random samples from a constrained simplex, that is, the intersection of a simplex and a convex body.
Old question, and I'm late to the party, but this method is much faster than the accepted answer if implemented efficiently.
In Mathematica code:
#/Total[#,{2}]& @ Log@RandomReal[{0,1},{n,d}]
In plain English, you generate n rows * d columns of randoms uniformly distributed between 0 and 1. Then take the Log of everything. Then normalize each row, dividing each element in the row by the row total. Now you have n samples uniformly distributed over the (d-1) dimensional simplex.
I found this method here: https://mathematica.stackexchange.com/questions/33652/uniformly-distributed-n-dimensional-probability-vectors-over-a-simplex
I'll admit, I'm not sure why it works, but it passes every statistical test I can think of. If anyone has a proof of why this method works, I'd love to see it!
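For what it's worth, -Log of a Uniform(0,1) number is exponentially distributed, and normalizing independent Exp(1) draws gives a Dirichlet(1,...,1) sample, which is uniform on the simplex; that is presumably why the Log trick works. A Python/NumPy equivalent of the one-liner (the minus sign is made explicit here; in the Mathematica code the negative signs simply cancel in the division):

import numpy as np

def simplex_samples(n, d):
    """n samples on the (d-1)-dimensional simplex from -log of uniforms, row-normalized."""
    e = -np.log(np.random.uniform(0.0, 1.0, size=(n, d)))   # Exp(1) draws
    return e / e.sum(axis=1, keepdims=True)

s = simplex_samples(1000, 3)
print(s.min(), s.sum(axis=1)[:3])   # all entries >= 0, each row sums to 1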

How do I efficiently find the maximum value in an array containing values of a smooth function?

I have a function that takes a floating point number and returns a floating point number. It can be assumed that if you were to graph the output of this function it would be 'n' shaped, i.e. there would be a single maximum point, and no other points on the function with a zero slope. We also know that the input value that yields this maximum output will lie between two known points, perhaps 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is parabolic interpolation. It is common to write the algorithm so it temporarily switches to golden-section search when parabolic interpolation does not progress as fast as golden-section would; that combination is essentially Brent's minimizer.
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public domain Java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine epsilon ("machine precision" in the comments) is 1/2 the value for a typical PC today. Change the value of eps to 2.22045e-16.
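If an off-the-shelf routine is acceptable, SciPy exposes a bounded minimizer in this spirit (golden section plus parabolic interpolation). A minimal sketch, maximizing by minimizing the negation (the toy function is made up):

from scipy.optimize import minimize_scalar

def f(x):                       # stand-in for the opaque "n"-shaped function
    return -(x - 0.3) ** 2      # toy example with its maximum at x = 0.3

# Maximize by minimizing the negation on the known interval [0, 1].
res = minimize_scalar(lambda x: -f(x), bounds=(0.0, 1.0), method="bounded")
print(res.x)                    # approximately 0.3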
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with numeric derivative approximations.
Given the interval [a, b], let x = (a + b) /2
Let epsilon be something very small.
Is (f(x + epsilon) - f(x)) positive? If yes, the function is still growing at x, so you recursively search the interval [x, b]
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage to this approach is that it exploits the known properties of the function in question. That is, I assumed that by "n"-shaped you meant increasing, then a maximum, then decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is a Newton's method like optimizer. It has a C/C++ implementation levmar that doesn't require you to define the derivative function. Instead it will evaluate the objective function in the current neighborhood to move to the maximum.
BTW: this website appears to have been updated since I last visited it; I hope it's still the same one I remembered. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line search algorithm should suffice. This wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the end points of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but it should actually be pretty fast I think for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
You could reduce it to a simple linear fit on the deltas, finding the place where it crosses the x axis. Linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fit the parabola.
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is for the case where you have an array of values, as the question's title states. When you have a function, use Newton-Raphson instead.
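A hedged sketch of that three-point parabola idea on sampled values, in Python with NumPy (the helper name and toy data are invented):

import numpy as np

def parabolic_peak(xs, ys):
    """Take the best sample and its neighbours, fit a parabola, return its vertex x."""
    i = int(np.argmax(ys))
    i = min(max(i, 1), len(xs) - 2)                 # keep a neighbour on each side
    a, b, c = np.polyfit(xs[i - 1:i + 2], ys[i - 1:i + 2], 2)
    return -b / (2.0 * a)

xs = np.linspace(0.0, 1.0, 11)
ys = -(xs - 0.3) ** 2                               # toy smooth, single-peak samples
print(parabolic_peak(xs, ys))                       # close to 0.3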

How to check if m n-sized vectors are linearly independent?

Disclaimer
This is not strictly a programming question, but most programmers sooner or later have to deal with math (especially algebra), so I think that the answer could turn out to be useful to someone else in the future.
Now the problem
I'm trying to check if m vectors of dimension n are linearly independent. If m == n you can just build a matrix using the vectors and check if the determinant is != 0. But what if m < n?
Any hints?
See also this video lecture.
Construct a matrix of the vectors (one row per vector), and perform Gaussian elimination on this matrix. If any of the matrix rows cancels out (reduces to all zeros), the vectors are not linearly independent.
The trivial case is when m > n; in that case, the vectors cannot be linearly independent.
Construct a matrix M whose rows are the vectors and determine the rank of M. If the rank of M is less than m (the number of vectors) then there is a linear dependence. In the algorithm to determine the rank of M you can stop the procedure as soon as you obtain a row of zeros, but running the algorithm to completion has the added bonus of providing the dimension of the space spanned by the vectors. Oh, and the algorithm to determine the rank of M is merely Gaussian elimination.
Take care for numerical instability. See the warning at the beginning of chapter two in Numerical Recipes.
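As a concrete, library-assisted version of the rank test: NumPy's matrix_rank uses the SVD rather than elimination, but the criterion is the same, and its tolerance argument addresses the numerical-instability warning. A minimal sketch (helper name and test data invented):

import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """True if the m row vectors (each of length n) are linearly independent."""
    M = np.asarray(vectors, dtype=float)
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[0]

print(linearly_independent([[1, 0, 0, 2], [0, 1, 0, 3]]))   # True
print(linearly_independent([[1, 2, 3, 4], [2, 4, 6, 8]]))   # False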
If m < n, you will have to do some operation on them (there are multiple possibilities: Gaussian elimination, orthogonalization, etc.; almost any transformation used for solving equations will do) and check the result (e.g. Gaussian elimination => a zero row or column, orthogonalization => a zero vector, SVD => a zero singular value).
However, note that this question is a bad question for a programmer to ask, and this problem is a bad problem for a program to solve. That's because every linearly dependent set of m < n vectors has a linearly independent set of vectors arbitrarily nearby (i.e. the problem is numerically unstable).
I have been working on this problem these days.
Previously, I found some algorithms for Gaussian or Gauss-Jordan elimination, but most of those algorithms only apply to square matrices, not general matrices.
To apply for general matrix, one of the best answers might be this:
http://rosettacode.org/wiki/Reduced_row_echelon_form#MATLAB
You can find both pseudo-code and source code in various languages.
As for me, I translated the Python source code to C++, because the C++ code provided in the above link is somewhat complex and was not appropriate for my simulation.
Hope this will help you, and good luck ^^
If computing power is not a problem, probably the best way is to find singular values of the matrix. Basically you need to find eigenvalues of M'*M and look at the ratio of the largest to the smallest. If the ratio is not very big, the vectors are independent.
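A minimal NumPy sketch of that singular-value (condition number) check, on a made-up nearly dependent pair of rows:

import numpy as np

M = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.001]])       # two nearly dependent rows
s = np.linalg.svd(M, compute_uv=False)       # singular values, largest first
print(s[0] / s[-1])                          # a huge ratio flags (near-)dependence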
Another way to check that m row vectors are linearly independent, when put in a matrix M of size mxn, is to compute
det(M * M^T)
i.e. the determinant of an m x m square matrix. It will be zero if and only if M has some dependent rows. However, Gaussian elimination should in general be faster.
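The Gram-determinant test from this answer is a one-liner in NumPy (the example matrix is invented):

import numpy as np

M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
print(np.linalg.det(M @ M.T))   # zero (up to round-off) iff the rows are dependent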
Sorry man, my mistake...
The source code provided in the above link turns out to be incorrect; at least the Python code I have tested and the C++ code I transformed from it do not generate the right answer all the time (although for the example in the above link, the result is correct).
To test the python code, simply replace the mtx with
[30,10,20,0],[60,20,40,0]
and the returned result would be like:
[1,0,0,0],[0,1,2,0]
Nevertheless, I have got a way out of this. This time I translated the MATLAB source code of the rref function to C++. You can run MATLAB and use the command type rref to get the source code of rref.
Just notice that if you are working with some really large or really small values, make sure to use the long double datatype in C++. Otherwise, the result will be truncated and inconsistent with the MATLAB result.
I have been conducting large simulations in ns2, and all the observed results are sound.
Hope this will help you and anyone else who has encountered the problem...
A very simple way, that is not the most computationally efficient, is to simply remove random rows until m=n and then apply the determinant trick.
m < n: remove rows (make the vectors shorter) until the matrix is square, and then
m = n: check if the determinant is 0 (as you said)
m > n (the number of vectors is greater than their length): they are linearly dependent (always).
The reason, in short, is that any solution to the system of m x n equations is also a solution to the n x n system of equations (you're trying to solve Av=0). For a better explanation, see Wikipedia, which explains it better than I can.

Approximating nonparametric cubic Bezier

What is the best way to approximate a cubic Bezier curve? Ideally I would want a function y(x) which would give the exact y value for any given x, but this would involve solving a cubic equation for every x value, which is too slow for my needs, and there may be numerical stability issues as well with this approach.
Would this be a good solution?
Just solve the cubic.
If you're talking about Bezier plane curves, where x(t) and y(t) are cubic polynomials, then y(x) might be undefined or have multiple values. An extreme degenerate case would be the line x= 1.0, which can be expressed as a cubic Bezier (control point 2 is the same as end point 1; control point 3 is the same as end point 4). In that case, y(x) has no solutions for x != 1.0, and infinite solutions for x == 1.0.
A method of recursive subdivision will work, but I would expect it to be much slower than just solving the cubic. (Unless you're working with some sort of embedded processor with unusually poor floating-point capacity.)
You should have no trouble finding code that solves a cubic and has already been thoroughly tested and debugged. If you implement your own solution using recursive subdivision, you won't have that advantage.
Finally, yes, there may be numerical stability problems, like when the point you want is near a tangent, but a subdivision method won't make those go away. It will just make them less obvious.
EDIT: responding to your comment, but I need more than 300 characters.
I'm only dealing with bezier curves where y(x) has only one (real) root. Regarding numerical stability, using the formula from http://en.wikipedia.org/wiki/Cubic_equation#Summary, it would appear that there might be problems if u is very small. – jtxx000
The wackypedia article is math with no code. I suspect you can find some cookbook code that's more ready-to-use somewhere, maybe in Numerical Recipes or the ACM Collected Algorithms.
To your specific question, and using the same notation as the article, u is only zero or near zero when p is also zero or near zero. They're related by the equation:
u^6 + q u^3 == p^3 / 27
Near zero, you can use the approximation:
q u^3 == p^3 / 27
or p / (3u) == cube root of q
So the computation of x from u should contain something like:
(fabs(u) >= somesmallvalue) ? (p / u / 3.0) : cuberoot (q)
How "near" zero is near? Depends on how much accuracy you need. You could spend some quality time with Maple or Matlab looking at how much error is introduced for what magnitudes of u. Of course, only you know how much accuracy you need.
The article gives 3 formulas for u for the 3 roots of the cubic. Given the three u values, you can get the 3 corresponding x values. The 3 values for u and x are all complex numbers with an imaginary component. If you're sure that there has to be only one real solution, then you expect one of the roots to have a zero imaginary component, and the other two to be complex conjugates. It looks like you have to compute all three and then pick the real one. (Note that a complex u can correspond to a real x!) However, there's another numerical stability problem there: floating-point arithmetic being what it is, the imaginary component of the real solution will not be exactly zero, and the imaginary components of the non-real roots can be arbitrarily close to zero. So numeric round-off can result in you picking the wrong root. It would be helpful if there's some sanity check from your application that you could apply there.
If you do pick the right root, one or more iterations of Newton-Raphson can improve its accuracy a lot.
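For completeness, here is a hedged Python sketch of the solve-the-cubic approach, using numpy.roots rather than a hand-written Cardano formula (the control points and tolerance are made up; it assumes exactly one real parameter t in [0, 1] maps to the requested x):

import numpy as np

def bezier_y_of_x(x, px, py, tol=1e-9):
    """y(x) for a cubic Bezier with control x-coords px and y-coords py."""
    # Convert x(t) from the Bernstein basis to power-basis coefficients.
    c3 = -px[0] + 3*px[1] - 3*px[2] + px[3]
    c2 = 3*px[0] - 6*px[1] + 3*px[2]
    c1 = -3*px[0] + 3*px[1]
    c0 = px[0] - x
    roots = np.roots([c3, c2, c1, c0])                      # solve x(t) - x = 0
    t = next(r.real for r in roots
             if abs(r.imag) < tol and -tol <= r.real <= 1 + tol)
    s = 1.0 - t
    return s**3*py[0] + 3*s**2*t*py[1] + 3*s*t**2*py[2] + t**3*py[3]

# Hypothetical control points with monotone x(t), so y(x) is single-valued:
px, py = [0.0, 0.25, 0.75, 1.0], [0.0, 0.8, 0.2, 1.0]
print(bezier_y_of_x(0.5, px, py))   # 0.5 for this symmetric example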
Yes, the de Casteljau algorithm would work for you. However, I don't know if it will be faster than solving the cubic equation by Cardano's method.
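For reference, a minimal Python sketch of de Casteljau evaluation (the control points are invented); recursive subdivision repeatedly applies this same construction to split the curve:

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic control points:
print(de_casteljau([(0.0, 0.0), (0.25, 0.8), (0.75, 0.2), (1.0, 1.0)], 0.5))  # (0.5, 0.5)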

Resources