Find the coefficients of a polynomial given its roots (zeros) in Prolog

How can I implement a Prolog program that finds the coefficients of a polynomial, given its roots?
For example:
input data (2;-1)
output (1;-1;2)

Multiplying together the first-degree factors for given roots will form an expanded polynomial. This is a natural fit with an "accumulator" design pattern in Prolog.
That is, we'll introduce an auxiliary argument to remember the product of factors "so far" dealt with. Once the list of specified roots has been emptied, then we will have the desired polynomial expansion:
/* polynomialFromListofRoots(ListofRoots, [1], Polynomial) */
polynomialFromListofRoots([], Poly, Poly).
polynomialFromListofRoots([R|Roots], Pnow, Poly) :-
    polyMultiplyRootFactor(R, Pnow, Pnew),
    polynomialFromListofRoots(Roots, Pnew, Poly).

/* polyMultiplyRootFactor(R, Poly, ProductXminusR) */
polyMultiplyRootFactor(R, Poly, Prod) :-
    polyMultiplyRootFactorAux(R, 0, Poly, Prod).

/* polyMultiplyRootFactorAux(R, Aux, Poly, Product) */
polyMultiplyRootFactorAux(R, A, [], [B]) :-
    B is -R*A.
polyMultiplyRootFactorAux(R, A, [P|Ps], [RP|RPs]) :-
    RP is P - R*A,
    polyMultiplyRootFactorAux(R, P, Ps, RPs).
Using the example in the Question:
?- polynomialFromListofRoots([2,-1],[1],Poly).
Poly = [1, -1, -2]
yes
Note that this corrects the output claimed in the Question.

Sorry, I misread the question.
For a quadratic a*x^2 + b*x + c = 0:
Take the sum of the roots, x1 + x2; this is equal to -b/a.
Take the product of the roots, x1*x2; this is equal to c/a.
Now solve the resulting system of linear equations to find a, b and c.
Edit:
The above works if you set the parameter a = 1. When you're given only the roots, you end up with two equations and three unknowns, so you have to fix one of the parameters, and the above solution fixes a = 1.
So given 2 roots you can't get back a specific polynomial, because there's no unique answer; there are infinitely many.
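In code, the two-root case is just Vieta's formulas with a fixed at 1. A minimal sketch in Python (the function name is my own):

# Build [a, b, c] of a*x^2 + b*x + c from two roots, fixing a = 1.
# Vieta: x1 + x2 = -b/a and x1*x2 = c/a.
def quadratic_from_roots(x1, x2):
    a = 1
    b = -(x1 + x2) * a
    c = (x1 * x2) * a
    return [a, b, c]

print(quadratic_from_roots(2, -1))  # [1, -1, -2], matching the Prolog result above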

Related

lpsolve - infeasible solution, but I have an example of one

I'm trying to solve this in LPSolve IDE:
/* Objective function */
min: x + y;
/* Variable bounds */
r_1: 2x = 2y;
r_2: x + y = 1.11 x y;
r_3: x >= 1;
r_4: y >= 1;
but the response I get is:
Model name: 'LPSolver' - run #1
Objective: Minimize(R0)
SUBMITTED
Model size: 4 constraints, 2 variables, 5 non-zeros.
Sets: 0 GUB, 0 SOS.
Using DUAL simplex for phase 1 and PRIMAL simplex for phase 2.
The primal and dual simplex pricing strategy set to 'Devex'.
The model is INFEASIBLE
lp_solve unsuccessful after 2 iter and a last best value of 1e+030
How come this can happen when x=1.801801802 and y=1.801801802 are possible solutions here?
How To Find The Solution
Let's do some math.
Your problem is:
min x+y
s.t. 2x = 2y
x + y = 1.11 x y
x >= 1
y >= 1
The first constraint 2x = 2y can be simplified to x=y. We now substitute throughout the problem:
min 2*x
s.t. 2*x = 1.11 x^2
x >= 1
And rearrange:
min 2*x
s.t. 1.11 x^2-2*x=0
x >= 1
From geometry we know that 1.11 x^2-2*x is an upward-opening parabola whose minimum lies below zero. Therefore it has exactly two real roots, given by the quadratic formula: 200/111 and 0.
Only one of these satisfies the second constraint: 200/111.
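Explicitly, factoring rather than reaching for the full quadratic formula:

1.11*x^2 - 2*x = x*(1.11*x - 2) = 0  =>  x = 0  or  x = 2/1.11 = 200/111 ≈ 1.8018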
Why Can't I Find This Solution With My Solver
The easy way out is to say it's because the x^2 term (x*y before the substitution) is nonlinear. But it goes a little deeper than that. Nonlinear problems can be easy to solve as long as they are convex. A convex problem is one whose constraints form a single, contiguous space such that any line drawn between two points in the space stays within the boundaries of the space.
Your problem is not convex. The original constraint x + y = 1.11 x y defines a curve in the (x, y) plane, and no two points on that curve can be connected by a straight line that stays on the curve, because the space is curved. (After the substitution this is even starker: 1.11 x^2-2*x=0 defines just two isolated points.) If the constraint were instead 1.11 x^2-2*x<=0 then the space would be convex, because all points could be connected with straight lines that stay in its interior.
Nonconvex problems are, in general, part of a broader class of problems that are NP-hard. This means that there is not (and perhaps cannot be) any easy way of solving them. We have to be smart.
Solvers that can handle mixed-integer programming (MIP/MILP) can solve many nonconvex problems efficiently, as can other techniques such as genetic algorithms. But, under the hood, these techniques all rely on glorified guess-and-check.
So your solver fails because the problem is nonconvex, and your solver is neither smart enough to use MIP to guess-and-check its way to a solution nor smart enough to use the quadratic formula.
How Then Can I Solve The Problem?
In this particular instance, we are able to use mathematics to quickly find a solution because, although the problem is nonconvex, it is part of a class of special cases. Deep thinking by mathematicians has given us a simple way of handling this class.
But consider a few generalizations of the problem:
(a) a x^3+b x^2+c x+d=0
(b) a x^4+b x^3+c x^2+d x+e =0
(c) a x^5+b x^4+c x^3+d x^2+e x+f=0
(a) has three potential solutions which must be checked (exact solutions are tricky), (b) has four (trickier), and (c) has five. The formulas for (a) and (b) are much more complex than the quadratic formula, and mathematicians have shown (the Abel-Ruffini theorem) that there is no formula for (c) that can be expressed using "elementary operations". Instead, we have to resort to glorified guess-and-check.
So the techniques we used to solve your problem don't generalize very well. This is what it means to live in the realm of the nonconvex and NP-hard, and it's a good reason to fund research in mathematics, computer science, and related fields.
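To see the guess-and-check in action on this instance, here is a sketch using SciPy's SLSQP solver (not lp_solve; the starting point x0 is my arbitrary choice, and on a nonconvex problem the starting point matters):

from scipy.optimize import minimize

# min x + y  subject to  x = y,  x + y = 1.11*x*y,  x >= 1,  y >= 1
res = minimize(
    lambda v: v[0] + v[1],                 # objective
    x0=[2.0, 2.0],                         # initial guess (matters: nonconvex)
    constraints=[
        {"type": "eq", "fun": lambda v: v[0] - v[1]},
        {"type": "eq", "fun": lambda v: v[0] + v[1] - 1.11 * v[0] * v[1]},
    ],
    bounds=[(1, None), (1, None)],
    method="SLSQP",
)
print(res.x)   # -> approximately [1.8018, 1.8018], i.e. 200/111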

How to solve the following recurrence relation

How do I solve the following recurrence relation?
f(n+2) = 2*f(n+1) - f(n) + 2 where n is even
f(n+2) = 3*f(n) where n is odd
f(1) = f(2) = 1
For odd n I could solve the recurrence and it turns out to be a geometric series with common ratio 3.
When n is even I could find and solve the homogeneous part of the recurrence relation by substituting f(n) = r^n. So the solution comes to be r = 1. Therefore the solution is c1 + c2*n. But how do I solve the particular integral part? Am I on the right track? Are there any other approaches to the above solution?
The recurrence for odd n is very easy to solve with the substitution you tried:
f(n) = 3^((n-1)/2) for odd n
Substituting this into the recurrence for even n (note that for even n, f(n+1) has an odd index, so f(n+1) = 3^(n/2)):
f(n+2) = 2*3^(n/2) - f(n) + 2
Attempt #1
Make a general substitution of the form:
f(n) = a*3^(n/2) + b
Note that the exponent is n/2 instead of n based on the odd recurrence, but it is purely a matter of choice
Matching the same types of terms:
3a*3^(n/2) + b = (2 - a)*3^(n/2) - b + 2  =>  3a = 2 - a,  b = -b + 2  =>  a = 1/2,  b = 1
But this solution doesn't work with the boundary condition f(2) = 1:
f(2) = 3/2 + 1 = 5/2 ≠ 1
Attempt #2
It turns out that a second exponential term is required:
f(n) = a*3^(n/2) + c*d^(n/2) + b
As before, one of the exponential terms needs to match 3^(n/2):
a = 1/2,  b = 1,  c*d^(n/2)*d = -c*d^(n/2)
The last equation has solutions d = 0, -1; obviously only the non-trivial one
is useful:
d = -1, and the boundary condition f(2) = 3/2 + c*(-1) + 1 = 1 gives c = 3/2
The final solution for all n ≥ 2:
f(n) = 3^((n-1)/2)                      for odd n
f(n) = (3^(n/2) + 3*(-1)^(n/2))/2 + 1   for even n
Alternative method
Longer but (possibly, at least I found it to be) more intuitive - expand the recurrence m times:
f(n) = 2*3^(n/2 - 1) - f(n-2) + 2
     = 2*3^(n/2 - 1) - 2*3^(n/2 - 2) + f(n-4)
     = 2*3^(n/2 - 1) - 2*3^(n/2 - 2) + 2*3^(n/2 - 3) - f(n-6) + 2
     = ...
Observe the pattern:
The additive factor of 2 is present for odd number of expansions m but cancels out for even m.
Each expansion adds a factor of 2 * 3^(n/2-m) for odd m, and subtracts it for even m.
Each expansion also adds a factor of f(n-2m) for even m, and subtracts it for odd m.
Combining these observations to write a general closed-form expression for the m-th expansion:
f(n) = 2 * sum_{k=1..m} (-1)^(k-1) * 3^(n/2 - k) + (-1)^m * f(n - 2m) + (1 - (-1)^m)
     = (3^(n/2) - (-1)^m * 3^(n/2 - m))/2 + (-1)^m * f(n - 2m) + 1 - (-1)^m
Using the standard formula for a geometric series in the last step.
Recursion stops at f(2) = 1, i.e. m = n/2 - 1, so that (-1)^m = -(-1)^(n/2) and 3^(n/2 - m) = 3:
f(n) = (3^(n/2) + 3*(-1)^(n/2))/2 - (-1)^(n/2) + 1 + (-1)^(n/2)
     = (3^(n/2) + 3*(-1)^(n/2))/2 + 1
The same result as before.
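A quick brute-force check of the closed form against the original recurrence, as a Python sketch:

def f_rec(n):
    # f(1) = f(2) = 1; f(k+2) = 2*f(k+1) - f(k) + 2 for even k, 3*f(k) for odd k
    f = {1: 1, 2: 1}
    for k in range(1, n - 1):
        f[k + 2] = 2 * f[k + 1] - f[k] + 2 if k % 2 == 0 else 3 * f[k]
    return f[n]

def f_closed(n):
    if n % 2 == 1:
        return 3 ** ((n - 1) // 2)
    return (3 ** (n // 2) + 3 * (-1) ** (n // 2)) // 2 + 1

assert all(f_rec(n) == f_closed(n) for n in range(1, 30))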

How to solve multivariable nonlinear equation systems? [R]

I have three equations with 3 unknown variables, like this:
mu2 = a*mu1 + b
sigma2^2 = a^2 * sigma1^2 + sigma.eps^2
a = (sigma.eps / sigma1) * p / sqrt(1 - p^2)
Assume that the following variables are given as parameters: mu1, sigma1, mu2, sigma2 and p.
Desired, not given: a, b and sigma.eps.
What I want is that when I pass the given parameters, it should give me the solution for the remaining ones. What's the simplest way to do this in R?
Because I'm a beginner I would like to have some (short) explanations. :)
Solving for sigma.eps^2 is a matter of mathematics. In the second equation you can substitute a (take the expression from the third equation). Then you can solve for sigma.eps^2. After that you can calculate a and then b:
sigma.eps2 <- (1-p^2)*sigma2^2 # sigma.eps2 stands for sigma.eps^2
sigma.eps <- sqrt(sigma.eps2)
a <- (sigma.eps / sigma1) * p / sqrt(1-p^2)
b <- mu2 - a*mu1
Possibly the second (negative) square root for sigma.eps is the relevant one. In that case the value is:
sigma.eps <- -sqrt(sigma.eps2)
This would also imply other values for a and b (computed in the same way as above).
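To sanity-check the algebra with concrete numbers, here is the same computation in Python (the parameter values are made up):

from math import sqrt

mu1, sigma1, mu2, sigma2, p = 5.0, 2.0, 3.0, 4.0, 0.8   # arbitrary example inputs

sigma_eps2 = (1 - p**2) * sigma2**2        # sigma.eps^2
sigma_eps = sqrt(sigma_eps2)
a = (sigma_eps / sigma1) * p / sqrt(1 - p**2)
b = mu2 - a * mu1

assert abs(a - p * sigma2 / sigma1) < 1e-12   # simplified form of the same a
print(sigma_eps, a, b)                        # -> 2.4 1.6 -5.0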

Efficiently determining if a polynomial has a root in the interval [0,T]

I have polynomials of nontrivial degree (4+) and need to robustly and efficiently determine whether or not they have a root in the interval [0,T]. The precise location or number of roots don't concern me, I just need to know if there is at least one.
Right now I'm using interval arithmetic as a quick check to see if I can prove that no roots can exist. If I can't, I'm using Jenkins-Traub to solve for all of the polynomial roots. This is obviously inefficient since it's checking for all real roots and finding their exact positions, information I don't end up needing.
Is there a standard algorithm I should be using? If not, are there any other efficient checks I could do before doing a full Jenkins-Traub solve for all roots?
For example, one optimization I could do is to check if my polynomial f(t) has the same sign at 0 and T. If not, there is obviously a root in the interval. If so, I can solve for the roots of f'(t) and evaluate f at all roots of f' in the interval [0,T]. f(t) has no root in that interval if and only if all of these evaluations have the same sign as f(0) and f(T). This reduces the degree of the polynomial I have to root-find by one. Not a huge optimization, but perhaps better than nothing.
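That check is straightforward to sketch with NumPy (coefficients ordered highest degree first; note it still has to find the roots of f', just not of f itself):

import numpy as np

def sign_change_in_interval(coeffs, T):
    # True iff f provably has a root in [0, T]: f changes sign (or hits 0)
    # over the endpoints and the real critical points of f inside [0, T].
    f = np.poly1d(coeffs)
    crit = f.deriv().roots                     # roots of f'
    pts = [0.0, float(T)]
    pts += [r.real for r in crit if abs(r.imag) < 1e-9 and 0.0 <= r.real <= T]
    vals = [f(p) for p in pts]
    return min(vals) <= 0.0 <= max(vals)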
Sturm's theorem lets you calculate the number of real roots in the range (a, b). Given the number of roots, you know if there is at least one. From the bottom half of page 4 of this paper:
Let f(x) be a real polynomial. Denote it by f_0(x) and its derivative f'(x) by f_1(x). Proceed as in Euclid's algorithm to find
f_0(x) = q_1(x) · f_1(x) − f_2(x),
f_1(x) = q_2(x) · f_2(x) − f_3(x),
...
f_{k−2}(x) = q_{k−1}(x) · f_{k−1}(x) − f_k,
where f_k is a constant, and for 1 ≤ i ≤ k, f_i(x) is of degree lower than that of f_{i−1}(x). The signs of the remainders are negated from those in the Euclid algorithm.
Note that the last non-vanishing remainder f_k (or f_{k−1} when f_k = 0) is a greatest common divisor of f(x) and f'(x). The sequence f_0, f_1, ..., f_k (or f_{k−1} when f_k = 0) is called a Sturm sequence for the polynomial f.
Theorem 1 (Sturm's Theorem). The number of distinct real zeros of a polynomial f(x) with real coefficients in (a, b) is equal to the excess of the number of changes of sign in the sequence f_0(a), ..., f_{k−1}(a), f_k over the number of changes of sign in the sequence f_0(b), ..., f_{k−1}(b), f_k.
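A compact sketch of this procedure in Python, using NumPy's polynomial division for the Euclidean steps (coefficients highest degree first; assumes f is squarefree, so that f_k is a non-zero constant):

import numpy as np

def sturm_sequence(coeffs):
    seq = [np.asarray(coeffs, dtype=float)]
    seq.append(np.polyder(seq[0]))
    while seq[-1].size > 1:
        _, rem = np.polydiv(seq[-2], seq[-1])   # Euclidean division step
        rem = np.trim_zeros(rem, 'f')
        if rem.size == 0:                       # exact division: gcd reached
            break
        seq.append(-rem)                        # negate the remainder, as above
    return seq

def sign_changes(seq, x):
    vals = [np.polyval(p, x) for p in seq]
    nonzero = [v for v in vals if v != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

def roots_in_interval(coeffs, a, b):
    seq = sturm_sequence(coeffs)
    return sign_changes(seq, a) - sign_changes(seq, b)

print(roots_in_interval([1, 0, -7, 6], 0, 2.5))   # x^3 - 7x + 6: roots 1 and 2 -> 2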
You could certainly do binary search on your interval arithmetic. Start with [0,T] and substitute it into your polynomial. If the result interval does not contain 0, you're done. If it does, divide the interval in 2 and recurse on each half. This scheme will find the approximate location of each root pretty quickly.
If you eventually get 4 separate intervals with a root, you know you are done. Otherwise, I think you need to get to intervals [x,y] where f'([x,y]) does not contain zero, meaning that the function is monotonically increasing or decreasing and hence contains at most one zero. Double roots might present a problem, I'd have to think more about that.
Edit: if you suspect a multiple root, find roots of f' using the same procedure.
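A sketch of this scheme in Python, with a naive interval extension of the polynomial via Horner's rule; the depth cap is my addition, since at an actual root the enclosure always contains 0 and the refinement has to give up somewhere:

def interval_eval(coeffs, lo, hi):
    # Enclosure of f([lo, hi]); coeffs highest degree first. Overestimates.
    vlo, vhi = coeffs[0], coeffs[0]
    for c in coeffs[1:]:
        prods = (vlo * lo, vlo * hi, vhi * lo, vhi * hi)
        vlo, vhi = min(prods) + c, max(prods) + c
    return vlo, vhi

def may_have_root(coeffs, lo, hi, depth=32):
    vlo, vhi = interval_eval(coeffs, lo, hi)
    if vlo > 0 or vhi < 0:
        return False                 # 0 not in enclosure: provably no root here
    if depth == 0:
        return True                  # could not exclude a root
    mid = 0.5 * (lo + hi)
    return (may_have_root(coeffs, lo, mid, depth - 1)
            or may_have_root(coeffs, mid, hi, depth - 1))

Note the test is one-sided: False is a proof that there is no root, while True only means a root could not be ruled out.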
Use Descartes' rule of signs to glean some information. Just count the number of sign changes in the coefficients. This gives you an upper bound on the number of positive real roots. Consider the polynomial P:
P = 131.1 - 73.1*x + 52.425*x^2 - 62.875*x^3 - 69.225*x^4 + 11.225*x^5 + 9.45*x^6 + x^7
In fact, I've constructed P to have a simple list of roots. They are...
{-6, -4.75, -2, 1, 2.3, -i, +i}
Can we determine if there is a root in the interval [0,3]? Note that there is no sign change in the value of P at the endpoints.
P(0) = 131.1
P(3) = 4882.5
How many sign changes are there in the coefficients of P? There are 4 sign changes, so there may be as many as 4 positive roots.
But, now substitute x+3 for x into P. Thus
Q(x) = P(x+3) = ...
4882.5 + 14494.75*x + 15363.9*x^2 + 8054.675*x^3 + 2319.9*x^4 + 370.325*x^5 + 30.45*x^6 + x^7
See that Q(x) has NO sign changes in the coefficients. All of the coefficients are positive values. Therefore there can be no roots larger than 3.
So there MAY be either 0, 2 or 4 roots in the interval [0,3].
At least this tells you whether to bother looking at all. Of course, if the function has opposite signs on each end of the interval, we know there are an odd number of roots in that interval.
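Both counts are easy to script. A Python sketch; the shifted polynomial Q(x) = P(x+3) falls out of NumPy's polynomial composition:

import numpy as np

def descartes_bound(coeffs):
    # Descartes: number of positive real roots <= sign changes in coefficients
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

P = [1, 9.45, 11.225, -69.225, -62.875, 52.425, -73.1, 131.1]  # highest degree first
print(descartes_bound(P))               # -> 4

Q = np.poly1d(P)(np.poly1d([1, 3]))     # composition gives Q(x) = P(x + 3)
print(descartes_bound(Q.coeffs))        # -> 0: no roots greater than 3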
It's not that efficient, but it is quite reliable: you can construct the polynomial's companion matrix (a sparse matrix whose eigenvalues are the polynomial's roots).
There are efficient eigenvalue algorithms that can find eigenvalues in a given interval. One of them is inverse iteration, which finds the eigenvalue closest to a given shift; just use the middle point of the interval as that shift.
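A sketch of the companion-matrix route in Python (numpy.roots does essentially this internally; here the matrix is built explicitly, with a small made-up example polynomial):

import numpy as np

def companion(coeffs):
    # Companion matrix of the (made monic) polynomial, coeffs highest degree first.
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)     # ones on the subdiagonal
    C[:, -1] = -c[:0:-1]           # last column: negated low-order coefficients
    return C

eig = np.linalg.eigvals(companion([1, 0, -7, 6]))   # x^3 - 7x + 6
real = np.sort(eig[np.abs(eig.imag) < 1e-9].real)
print(real)                        # -> [-3. 1. 2.]; keep those inside [0, T]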
If f(0)*f(T) <= 0 then you are guaranteed to have a root. Otherwise you can start splitting the domain into two parts (bisection) and check the values at the ends until you are confident there is no root in a given segment.
If f(0)*f(T) > 0 you have either zero, two, four, ... roots in the interval; your limit is the polynomial's order. If f(0)*f(T) < 0 you may have one, three, five, ... roots.

Can I force two components in a three-way linear regression to be positive?

I'm sorry if I'm not using the correct mathematical terms, but I hope you'll understand what I'm trying to accomplish.
My problem:
I'm using linear regression (currently the least squares method) on the values from two vectors x and y against the result z. This is to be done in Matlab, and I'm using the \-operator to perform the regression. My dataset will contain a few thousand observations (up to about 50000 at max).
The x-values will be in the area of 10-300 (most between 60 and 100) and the y-values in the 1-3 area.
My code looks like this:
X = [ones(size(x,1),1) x y];
parameters = X\z;
The output "parameters" are then the three factors a0, a1 and a2 which is used in this formula:
a0 * 1 + a1 * xi + a2 * yi = zi
(The i's are supposed to be subscripted)
This works as expected, but I want the two parameters a1 and a2 to ALWAYS be positive values, even when the vector z is negative (this means that a0 will be negative, of course), since this is what the real model looks like (z is always positively correlated to x and y). Is this possible using the least squares method? I'm also open to other algorithms for linear regression.
Let me try and rephrase to clarify. According to your model, z is always positively correlated with x and y. However, sometimes when you solve the linear regression, a coefficient comes out negative.
If you are right about the data, this should only happen when the correct coefficient is small, and noise happens to take it negative. You could just assign it to zero, but then the means wouldn't match properly.
In which case the correct solution is as jpalacek says, but explained in more detail here:
First regress z against x and y. If both coefficients are positive, take the result.
If a1 is negative, assume it should be zero. Regress z against y alone. If a2 is then positive, take a1 as 0, and take a0 and a2 from this regression.
If a2 is also negative, assume it should be zero too. Regress z against 1, and take this as a0. Let a1 and a2 be 0.
This should give you what you want.
The simple solution is to use a tool designed to solve it. That is, use lsqlin, from the optimization toolbox. Set a lower bound constraint for two of the three parameters.
Thus, assuming x, y, and z are all COLUMN vectors,
A = [ones(length(x),1),x,y];
lb = [-inf, 0, 0];
a = lsqlin(A,z,[],[],[],[],lb);
This will constrain only the second and third unknown parameters.
Without the optimization toolbox, use lsqnonneg, which is part of Matlab itself. Here too the solution is easy enough.
A = [ones(length(x),1),x,y];
a = lsqnonneg(A,z);
Your model will be
z = a(1) + a(2)*x + a(3)*y
If a(1) is essentially zero, i.e., it is within a tolerance of zero, then assume that the first parameter was constrained by the bound at zero. In that case, solve a second problem by changing the sign on the column of ones in A.
A(:,1) = -1;
a = lsqnonneg(A,z);
If this solution has a(1) significantly non-zero, then the second solution must be better than the first. Your model will now be
z = -a(1) + a(2)*x + a(3)*y
It costs you at most two calls to lsqnonneg, and the second call is only ever made some fraction of the time (lacking any information about your problem, the odds of needing it are 50%).
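For reference, the same bounded least-squares fit outside of Matlab: SciPy's lsq_linear accepts the bounds directly (a sketch; the data is made up to match the ranges in the question):

import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
x = rng.uniform(60, 100, 5000)            # ranges as described in the question
y = rng.uniform(1, 3, 5000)
z = -5.0 + 0.8 * x + 2.0 * y + rng.normal(0, 1, 5000)   # synthetic ground truth

A = np.column_stack([np.ones_like(x), x, y])
# a0 free, a1 >= 0, a2 >= 0: the same bounds as lsqlin's lb above
res = lsq_linear(A, z, bounds=([-np.inf, 0, 0], [np.inf, np.inf, np.inf]))
print(res.x)                              # -> approximately [-5.0, 0.8, 2.0]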
