I have to solve a polynomial equation system, which gives an error because it has infinitely many solutions, and I just need a few of them (any 2 or 3). How can I get them? Can I specify a condition on the solutions, such as values ranging between 1 and 10, so that I get only a few values?
The equations are actually long and complicated, but the infinite number of solutions is due to "sin(0)" at a root.
You can try to add additional equations to the system, like x1 = 0, x2 = 0 etc., to restrict the number of possible solutions.
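For example (a toy system, since the original equations aren't shown, so this is purely illustrative), pinning one variable down collapses an infinite solution set into isolated points:
In[1]:= Solve[{x^2 == y^2, y == 1}, {x, y}]
Out[1]= {{x -> -1, y -> 1}, {x -> 1, y -> 1}}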
Which definition of a solution do you mean here: that a given function has a value of zero for certain inputs, or that a given system of multiple equations overlaps in multiple points? The latter could be described as two planes intersecting in a line, but this isn't necessarily what people may think of when they picture solving a polynomial equation system.
For example: x^2 = 4 has only 2 solutions, but x^2 = y^2 has infinitely many, as x = y and x = -y are both lines along which the equality holds; yet both can be considered polynomial equations to my mind.
I presume you have read through things like SOLUTION OF EQUATIONS USING MATLAB, MATLAB Programming/Symbolic Toolbox, and Solving non linear equations, right? Those may have some ideas for how to use MATLAB to do that.
In Mathematica you could use FindInstance to find one or more solutions to your equations. Here's how to get 2 solutions of a particular set of equations:
In[2]:= FindInstance[
x^2 + y^2 + z^2 == -1 && z^2 == 2 x - 5 y, {x, y, z}, 2]
Out[2]= {{x -> -(46/5) - (6 I)/5,
  y -> 1/10 (25 - Sqrt[-5955 - 1968 I]),
  z -> -Sqrt[1/10 ((-309 - 24 I) + 5 Sqrt[-5955 - 1968 I])]},
 {x -> 11/5 - (43 I)/5,
  y -> 1/10 (25 - Sqrt[6997 + 5504 I]),
  z -> Sqrt[(1/5 - I/10) ((2 - 85 I) + (2 + I) Sqrt[6997 + 5504 I])]}}
You can also give inequalities like 1 < var < 10 to FindInstance or to Reduce to further restrict possible solutions, as you suggested.
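For instance (again an invented system, just to show the syntax):
In[3]:= FindInstance[x^2 + y^2 == 25 && 1 < x < 10 && 1 < y < 10, {x, y}, Reals, 2]
This returns two real points on the circle with both coordinates between 1 and 10; exactly which instances FindInstance picks may vary.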
Mathematica's FindRoot function will converge to a solution near a given starting value, so you can run FindRoot a few times with various starting points.
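For example (an illustration, not the poster's equations), Sin[x] == 0 has infinitely many roots, and each starting point picks out one of them:
In[4]:= FindRoot[Sin[x] == 0, {x, 3}]
Out[4]= {x -> 3.14159}
In[5]:= FindRoot[Sin[x] == 0, {x, 6}]
Out[5]= {x -> 6.28319}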
Any other mathematical program should have something similar, it just happens that I'm most familiar with Mathematica at the moment.
If you solve a system of the form f(x)=0 numerically, you can use FMINCON to add constraints. For example, you can specify that the solution should be between 1 and 10.
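FMINCON itself is MATLAB, but the same constrained idea can be sketched in Mathematica by minimizing the squared residual inside the box (a made-up cubic with roots -3, 1 and 2):
In[6]:= NMinimize[{(x^3 - 7 x + 6)^2, 1 <= x <= 10}, x]
This drives the squared residual to zero within the constraint 1 <= x <= 10, landing on one of the roots in that range (x = 1 or x = 2).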
In a similar manner to Jonas, if you solve f(x) = 0 numerically in MATLAB, you can use fsolve. Should the polynomial have many roots, you may well be able to iterate towards each of them from different initial points.
Beware local minima in your solution space, though: they can be a serious problem for iterative solvers, as they can guide your algorithm to an answer that is not actually a root.
If the system is big or there are many solutions (isolated or higher-dimensional components), you can use packages like HOM4PS2. If the system is (extremely) small you can solve it symbolically by finding a so-called Gröbner basis, which gives you an equivalent (but different) set of polynomials whose solutions are almost obvious. Both Maple and Mathematica 7 can do this.
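A small sketch of the Gröbner route in Mathematica, on an invented two-polynomial system:
In[1]:= GroebnerBasis[{x^2 + y^2 - 5, x y - 2}, {x, y}]
With the default lexicographic order this eliminates x first: the basis contains the univariate polynomial y^4 - 5 y^2 + 4 (so y = ±1 or ±2) together with a polynomial linear in x, from which x follows for each y.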
Mathematica provides quite a few features to help you. For example:
Plot3D[{0, x^2 - y^2}, {x, -1, 1}, {y, -1, 1},
PlotStyle -> {Red, Green}]
a = ToRules@Reduce[x^2 - y^2 == 0, {x, y}];
Plot[Evaluate@({x, y} /. {a}), {x, -1, 1}]
I'm trying to solve this in LPSolve IDE:
/* Objective function */
min: x + y;
/* Variable bounds */
r_1: 2x = 2y;
r_2: x + y = 1.11 x y;
r_3: x >= 1;
r_4: y >= 1;
but the response I get is:
Model name: 'LPSolver' - run #1
Objective: Minimize(R0)
SUBMITTED
Model size: 4 constraints, 2 variables, 5 non-zeros.
Sets: 0 GUB, 0 SOS.
Using DUAL simplex for phase 1 and PRIMAL simplex for phase 2.
The primal and dual simplex pricing strategy set to 'Devex'.
The model is INFEASIBLE
lp_solve unsuccessful after 2 iter and a last best value of 1e+030
How come this can happen when x=1.801801802 and y=1.801801802 are possible solutions here?
How To Find The Solution
Let's do some math.
Your problem is:
min x+y
s.t. 2x = 2y
x + y = 1.11 x y
x >= 1
y >= 1
The first constraint 2x = 2y can be simplified to x=y. We now substitute throughout the problem:
min 2*x
s.t. 2*x = 1.11 x^2
x >= 1
And rearrange:
min 2*x
s.t. 1.11 x^2-2*x=0
x >= 1
From geometry we know that 1.11 x^2 - 2x is an upward-opening parabola whose minimum lies below zero, so it crosses zero at exactly two points. Factoring x (1.11 x - 2) = 0 (or applying the quadratic formula) gives these roots: 200/111 and 0.
Only one of these satisfies the second constraint, x >= 1: namely 200/111 ≈ 1.8018, matching the value you expected.
Why Can't I Find This Solution With My Solver
The easy way out is to say it's because the x^2 term (x*y before the substitution) is nonlinear. But it goes a little deeper than that. Nonlinear problems can be easy to solve as long as they are convex. A convex problem is one whose constraints form a single, contiguous space such that any line drawn between two points in the space stays within the boundaries of the space.
Your problem is not convex. The constraint 1.11 x^2-2*x=0 defines an infinite number of points. No two of these points can be connected by a straight line which stays in the space defined by the constraint because that space is curved. If the constraint were instead 1.11 x^2-2*x<=0 then the space would be convex because all points could be connected with straight lines that stay in its interior.
Nonconvex problems are part of a broader class of problems called NP-hard. This means that there is not (and perhaps cannot be) any easy way of solving the problem. We have to be smart.
Solvers that can handle mixed-integer programming (MIP/MILP) can solve many non-convex problems efficiently, as can other techniques such as genetic algorithms. But, beneath the hood, these techniques all rely on glorified guess-and-check.
So your solver fails because the problem is nonconvex and your solver is neither smart enough to use MIP to guess-and-check its way to a solution nor smart enough to use the quadratic equation.
How Then Can I Solve The Problem?
In this particular instance, we are able to use mathematics to quickly find a solution because, although the problem is nonconvex, it is part of a class of special cases. Deep thinking by mathematicians has given us a simple way of handling this class.
But consider a few generalizations of the problem:
(a) a x^3+b x^2+c x+d=0
(b) a x^4+b x^3+c x^2+d x+e =0
(c) a x^5+b x^4+c x^3+d x^2+e x+f=0
(a) has three potential solutions which must be checked (exact solutions are tricky), (b) has four (trickier), and (c) has five. The formulas for (a) and (b) are much more complex than the quadratic formula and mathematicians have shown that there is no formula for (c) that can be expressed using "elementary operations". Instead, we have to resort to glorified guess-and-check.
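Numerically, though, a root finder does not need a formula. For instance, in Mathematica (my example, not part of the original question):
In[2]:= NSolve[x^5 - x + 1 == 0, x]
This returns all five roots of the quintic to machine precision (one real root near -1.1673 and two complex-conjugate pairs), found by exactly the kind of iterative machinery described above rather than by a radical formula.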
So the techniques we used to solve your problem don't generalize very well. This is what it means to live in the realm of the nonconvex and NP-hard, and it's a good reason to fund research in mathematics, computer science, and related fields.
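For what it's worth, handing the original problem to a solver that accepts nonlinear constraints does recover the expected point. A sketch in Mathematica (not LPSolve, which is linear-only):
In[3]:= NMinimize[{x + y, x == y, x + y == 1.11 x y, x >= 1, y >= 1}, {x, y}]
Out[3]= {3.6036, {x -> 1.8018, y -> 1.8018}}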
I would like to ask why computing the value of (e^x - 1)/x for numbers very close to zero does not work properly (for example, for x = 10^-15 the result is 1.1102230), while the formula (e^x - 1)/log(e^x), which is mathematically equivalent, gives the correct result of 1.000000. Thanks.
The problem is that the first expression exhibits what is known as catastrophic cancellation: for x near 0, e^x is very close to 1 + x. As floating-point numbers are less dense near 1 than near 0, the result of the expression e^x - 1 will be very close to x, but will have lost accuracy to intermediate rounding.
The second expression exploits a neat trick of "cancelling out" the rounding error. In fact, this particular example is covered in detail in section 1.14.1 of Nicholas J. Higham's excellent book Accuracy and Stability of Numerical Algorithms. The crux of his explanation is:
The expression (e^x - 1)/x cannot be accurately evaluated for a given x ≈ 0 in floating-point arithmetic, while the expression (y - 1)/log y can be accurately evaluated for a given y ≈ 1. Since these functions are slowly varying near x = 0 (y = 1), evaluating (y - 1)/log y with an accurate, if inexact, approximation to y = e^x ≈ 1 produces an accurate result.
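You can see both effects concretely with a short demonstration (shown in Mathematica here, but the behavior is identical in any IEEE double-precision arithmetic, including R):
x = 1.*^-15;
(Exp[x] - 1)/x                    (* naive formula: 1.11022, badly wrong *)
(Exp[x] - 1)/Log[Exp[x]]          (* rearranged formula: 1., correct *)
N[(Exp[10^-15] - 1)/10^-15, 20]   (* exact-arithmetic reference: 1.0000000000000005000 *)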
Since this is an R question, how about computing exp(x)-1 by calling expm1(x)? expm1() is an R function designed to return accurate values of exp(x)-1 even for values of x close to 0. expm1(x)/x gives you the right answer.
I have an equation:
Y[u_, v_, w_] := (Sin[u] + v*Cos[u]*Sin[u - w] - v*Sin[w])
Which needs to be expressed in a specific form:
Y[u_, v_, w_] := (a*Sin[u] + b*Sin[2*u] + c*Cos[2*u] + d*v*Sin[w])
From doing this by hand, I happen to know that:
a=1
b=(v*Cos[w]/2)
c=-(v*Sin[w]/2)
d=-(3/2)
This particular example is easy to do by hand with trig identities, but for more complicated equations Mathematica could be very useful if the final form of the equation is known. Is there some specific solver function, or a way to use Solve, to do this?
For my particular application with a more complicated equation, I have found a few coefficients of the final form by hand, but others are very long and I would like to use mathematica to both check, and finish the rearrangement.
Trying to get Mathematica to put an expression into exactly the form you want is often very difficult to do.
You might consider this method of checking your calculations
In[1]:= expr = (Sin[u] + v*Cos[u]*Sin[u - w] - v*Sin[w]);
{ Integrate[expr*Sin[u], {u, 0, 2 Pi}]/Pi,
Integrate[expr*Sin[2*u], {u, 0, 2 Pi}]/Pi,
Integrate[expr*Cos[2*u], {u, 0, 2 Pi}]/Pi,
Integrate[expr*v*Sin[w], {w, 0, 2 Pi}]/Pi}
Out[1]= {1, 1/2 v Cos[w], -(1/2) v Sin[w], -v^2 (1 + Cos[u]^2)}
What that is doing is the equivalent of finding the Fourier transform of your expression for a single frequency u, or w in the case of d. In your example it correctly finds the values of a, b and c, but fails in finding d. Unfortunately I can't see at the moment exactly why it is failing for d, but perhaps someone with a clear head can point out what the error is.
Be careful with this: as you see, you can't just assume the result will always be correct. I am worried that for your much more complicated actual problem this may not give you what you are looking for.
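In this particular case there is also a more direct machine check using the built-in TrigReduce, which rewrites products of trig functions as sums over combined angles:
In[2]:= TrigReduce[Sin[u] + v*Cos[u]*Sin[u - w] - v*Sin[w]]
Out[2]= 1/2 (2 Sin[u] + v Sin[2 u - w] - 3 v Sin[w])
Expanding Sin[2 u - w] as Sin[2 u] Cos[w] - Cos[2 u] Sin[w] then gives a = 1, b = v Cos[w]/2, c = -v Sin[w]/2 and d = -3/2 by inspection.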
So this problem was given in the HackerRank 20/20 Hack February contest:
Let's consider a random permutation p_1, p_2, ..., p_N of the numbers 1, 2, ..., N and calculate the value F = (X_2 + ... + X_{N-1})^K, where X_i equals 1 if one of the following two conditions holds: p_{i-1} < p_i > p_{i+1} or p_{i-1} > p_i < p_{i+1}, and X_i equals 0 otherwise. What is the expected value of F?
Constraints: 1000 <= N <= 10^9, 1 <= K <= 5
I thought it was an Eulerian-number-related problem. As the contest is over, I can see the solutions, but I don't understand any of them. Are there any tricks?
so a few words about my "solution" ;)
What I basically did:
1) write a brute force solver (obviously for N << 20)
-> this solver won't handle high values of N, as given in the constraints
2) analyze the output of the solutions to these (invalid) inputs
-> observe that with K=1, the output follows a straight line
-> K=2, is a quadratic function
-> K=3, is a cubic function, and so on
3) find the parameters for each function (K=1 - 5) by using a solver, or how I did it, wolfram alpha ;)
-> additionally I "normalized" each parameter to only have one division afterwards
4) use any programming language / big integer class to solve the correct inputs in O(1)
I'm pretty sure that one can come up with these parameters in a very clever way, but for me, during the contest, this solution was easy and fast enough without having to think too much about the "why" ;)
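To make step 1 concrete, here is a minimal brute-force expectation in Mathematica (my reconstruction from the problem statement above; exact rationals, only feasible for small N):
xi[p_, i_] := Boole[(p[[i - 1]] < p[[i]] && p[[i]] > p[[i + 1]]) || (p[[i - 1]] > p[[i]] && p[[i]] < p[[i + 1]])]
expF[n_, k_] := Mean[(Sum[xi[#, i], {i, 2, n - 1}]^k) & /@ Permutations[Range[n]]]
expF[6, 2]  (* exact expected value of F for N = 6, K = 2; vary n and fit the degree-K polynomials of step 2 *)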
I have a general function f(z, a), where z and a are both real, and the function f takes on real values for all z except in some interval (z1, z2), where it becomes complex. How do I determine z1 and z2 (which will be in terms of a) using Mathematica (or is this possible)? What are the limitations?
For a test example, consider the function f[z_,a_]=Sqrt[(z-a)(z-2a)]. For real z and a, this takes on real values except in the interval (a,2a), where it becomes imaginary. How do I find this interval in Mathematica?
In general, I'd like to know how one would go about finding it mathematically for a general case. For a function with just two variables like this, it'd probably be straightforward to do a contour plot of the Riemann surface and observe the branch cuts. But what if it is a multivariate function? Is there a general approach that one can take?
What you have appears to be a Riemann surface parametrized by 'a'. Consider the algebraic (or analytic) relation g(a, z) = 0 that would be spawned from this branch of a parametrized Riemann surface. In this case it is simply g^2 - (z - a)*(z - 2*a) == 0. More generally it might be obtained using GroebnerBasis, as below (no guarantee this will always work without some amount of user intervention).
grelation = First[GroebnerBasis[g - Sqrt[(z - a)*(z - 2*a)], {z, a, g}]]
Out[472]= 2 a^2 - g^2 - 3 a z + z^2
A necessary condition for the branch points, as functions of the parameter 'a', is that the zero set for 'g' not give a (single-valued) function in a neighborhood of such points. This in turn means that the partial derivative of this relation with respect to g vanishes (this follows from the implicit function theorem of multivariable calculus). So we find where grelation and its derivative both vanish, and solve for 'z' as a function of 'a'.
Solve[Eliminate[{grelation == 0, D[grelation, g] == 0}, g], z]
Out[481]= {{z -> a}, {z -> 2 a}}
Daniel Lichtblau
Wolfram Research
For polynomial systems (and some class of others), Reduce can do the job.
E.g.
In[1]:= Reduce[Element[{a, z}, Reals]
&& !Element[Sqrt[(z - a) (z - 2 a)], Reals], z]
Out[1]= (a < 0 && 2 a < z < a) || (a > 0 && a < z < 2 a)
This type of approach also works (often giving very complicated solutions for functions with many branch cuts) for other combinations of elementary functions I checked.
To find the branch cuts (as opposed to the simple class of branch points you're interested in) in general, I don't know of a good approach. The best place to find the detailed conventions that Mathematica uses is the functions.wolfram.com site.
I do remember reading a good paper on this a while back... I'll try to find it....
That's right! The easiest approach I've seen for branch cut analysis uses the unwinding number. There's a paper about this, "Reasoning about the elementary functions of complex analysis", in the journal "Artificial Intelligence and Symbolic Computation". It and similar papers can be found on one of the authors' homepages: http://www.apmaths.uwo.ca/~djeffrey/offprints.html.
For general functions you cannot make Mathematica calculate it.
Even for polynomials, finding an exact answer takes time. I believe Mathematica uses some sort of quantifier elimination when it uses Reduce, which takes time. Without any restrictions on your functions (are they polynomials, continuous, smooth?), one can easily construct functions which Mathematica cannot simplify further:
f[x_,y_] := Abs[Zeta[y+0.5+x*I]]*I
If this function is real for some x and some y with -0.5 < y < 0 or 0 < y < 0.5, then you will have found a counterexample to the Riemann hypothesis, and I'm sure Mathematica cannot give a correct answer.