Controlling measure zero sets of solutions with Manipulate. A case study

To address the question, we start with the following toy model problem, which serves here just as a case study:
Given two circles in the plane (with centers c1, c2 and radii r1, r2) and a positive number r3, find all circles of radius r3 (i.e. all points c3 that are centers of circles of radius r3) tangent, externally or internally, to the two given circles.
In general, depending on Circle[c1, r1], Circle[c2, r2] and r3, there are 0, 1, 2, ..., 8 possible solutions; a typical configuration yields 8 of them.
I slightly modified a neat Mathematica implementation by Jaime Rangel-Mondragon from the Wolfram Demonstrations Project, but its core is similar:
Manipulate[{c1, a, c2, b} = pts;
{r1, r2} = Map[Norm, {a - c1, b - c2}];
w = Table[ (* the four sign combinations (k, l), each +-1, give external/internal tangency *)
Solve[{radius[{x, y} - c1]^2 == (r + k r1)^2,
radius[{x, y} - c2]^2 == (r + l r2)^2}
] // Quiet,
{k, -1, 1, 2}, {l, -1, 1, 2}
];
w = Select[ (* keep only real solutions with a positive radius *)
Cases[Flatten[{{x, y}, r} /. w, 2],
{{_Real, _Real}, _Real}
],
Last[#] > 0 &
];
Graphics[
{{Opacity[0.35], EdgeForm[Thin], Gray,
Disk[c1, r1], Disk[c2, r2]},
{EdgeForm[Thick], Darker[Blue,.5],
Circle[First[#], Last[#]] & /@ w}
},
PlotRange -> 8, ImageSize -> {915, 915}
],
"None" -> {{pts, {{-3, 0}, {1, 0}, {3, 0}, {7, 0}}},
{-8, -8}, {8, 8}, Locator},
{{r, 0.3, "r3"}, 0, 8},
TrackedSymbols -> True,
Initialization :> (radius[z_] := Sqrt[z.z])
]
We can easily conclude that in a generic case there is an even number of solutions (0, 2, 4, 6, 8), while cases with an odd number of solutions (1, 3, 5, 7) are exceptional: they form a set of measure zero with respect to the control ranges. Thus, when changing c1, r1, c2, r2 and r3 in Manipulate, one observes that it is much more difficult to track the cases with an odd number of circles.
One could modify the above approach on a basic level: solving the equations for c3 purely symbolically, as well as redesigning the Manipulate structure with an emphasis on the changing number of solutions. If I'm not wrong, Solve can work only numerically with Locator in Manipulate; however, Locator seems crucial here for the simplicity of controlling c1, r1, c2, r2 and for the whole implementation. A purely symbolic variant might look like the sketch below.
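Here is a minimal sketch of such a symbolic variant; the symbolic center coordinates c1x, c1y, c2x, c2y and the restriction to the external/external tangency branch are assumptions made only for illustration:
radius[z_] := Sqrt[z.z];
(* sketch only: solve one tangency case with fully symbolic data; the other
   sign combinations follow by flipping the signs in front of r1 and r2 *)
symSol = Solve[{radius[{x, y} - {c1x, c1y}]^2 == (r3 + r1)^2,
                radius[{x, y} - {c2x, c2y}]^2 == (r3 + r2)^2}, {x, y}];
(* the result is exact but large; concrete exact or numeric values for the
   centers and radii can be substituted afterwards with ReplaceAll *)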
Let's state the questions:
1. How can we force Manipulate to track seamlessly the cases with an odd number of solutions (circles)?
2. Is there any way to make Solve find exact solutions of the underlying equations?
(I find the answer by Daniel Lichtblau to be the best approach to question 2, but it seems there is still an essential need for sketching a general technique for emphasizing measure zero sets of solutions while working with Manipulate.)
These considerations are of less importance when dealing with exact solutions.
For example, Solve[x^2 - 3 == 0, x] yields {{x -> -Sqrt[3]}, {x -> Sqrt[3]}},
while in the case of the slightly more difficult equations extracted from the Manipulate above, setting the following arguments:
c1 = {-Sqrt[3], 0}; a = {1, 0}; c2 = {6 - Sqrt[3], 0}; b = {7, 0};
{r1, r2} = Map[ Norm, {a - c1, b - c2 }];
r = 2.0 - Sqrt[3];
and passing them to:
w = Table[Solve[{radius[{x, y} - c1]^2 == (r + k r1)^2,
       radius[{x, y} - c2]^2 == (r + l r2)^2}],
     {k, -1, 1, 2}, {l, -1, 1, 2}];
w = Select[ Cases[ Flatten[ {{x, y}, r} /. w, 2], {{_Real, _Real}, _Real}],
Last[#] > 0 &]
we get two solutions:
{{{1.26795, -3.38871*10^-8}, 0.267949}, {{1.26795, 3.38871*10^-8}, 0.267949}}
Similarly, with the same arguments and equations, putting:
r = 2 - Sqrt[3];
we get no solutions: {}
but in fact there is exactly one solution which we would like to emphasize:
{ {3 - Sqrt[3], 0 }, 2 - Sqrt[3] }
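As a hedged check (a sketch only, restricted to the k = l = +1 external/external tangency branch that produces this solution), keeping every input exact lets Solve recover the single tangency point:
radius[z_] := Sqrt[z.z];
c1 = {-Sqrt[3], 0}; a = {1, 0}; c2 = {6 - Sqrt[3], 0}; b = {7, 0};
{r1, r2} = Simplify[Map[Norm, {a - c1, b - c2}]];  (* both equal 1 + Sqrt[3] *)
r = 2 - Sqrt[3];  (* exact, no machine numbers anywhere *)
Solve[{radius[{x, y} - c1]^2 == (r + r1)^2,
       radius[{x, y} - c2]^2 == (r + r2)^2}, {x, y}] // FullSimplify
(* expected, up to ordering and possible duplication: {{x -> 3 - Sqrt[3], y -> 0}} *)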
In fact, when passing to Graphics, such a small difference between the two nearly coincident solutions and the unique one is indistinguishable; however, working with Manipulate we cannot track the merging of the two circles with the desired accuracy, and usually the last configuration observed while lowering r3, just before all solutions vanish (reminiscent of so-called structural instability), still shows two distinct tangent circles.
Manipulate is a rather powerful tool, not only a toy, and mastering it can be very useful. The issues considered here, when they appear in serious research, are frequently critical, for example in studying solutions of nonlinear differential equations, the occurrence of singularities in their solutions, the qualitative behavior of dynamical systems, bifurcations, phenomena in catastrophe theory, and so on.

As this is a measure zero set, tools that require some granularity will generally have trouble with the concept. Perhaps better is to look for the singularity locus explicitly, where solutions have multiplicity or in other ways depart from the nearby solution behavior(s). It will be a part of the discriminant variety. In particular, you can grab the relevant part by setting your defining polynomials to zero and simultaneously making the Jacobian determinant zero.
Here is your example. I will eventually (wlog) put one center at the origin and the other at (1,0).
centers = Array[c, {2, 2}];
radii = Array[r, 3];
circ[cen_, rad_, x_, y_] := ({x, y} - cen).({x, y} - cen) - rad^2
I'll use your 'k' for both polynomials. Your formulation has pairs (k,l) where each is +-1. We can just use k, arrange by squaring to get a polynomial in k^2, and replace that with 1.
polys =
Table[Expand[
circ[centers[[j]], radii[[3]] + k*radii[[j]], x, y]], {j, 2}]
Out[18]= {x^2 + y^2 - 2 x c[1, 1] + c[1, 1]^2 - 2 y c[1, 2] +
c[1, 2]^2 - k^2 r[1]^2 - 2 k r[1] r[3] - r[3]^2,
x^2 + y^2 - 2 x c[2, 1] + c[2, 1]^2 - 2 y c[2, 2] + c[2, 2]^2 -
k^2 r[2]^2 - 2 k r[2] r[3] - r[3]^2}
We'll remove the part that is linear in k, square the rest, square that removed part, and equate the two. We also then replace k with unity.
p2 = polys - k*Coefficient[polys, k];
polys2 = Expand[p2^2 - (k*Coefficient[polys, k])^2] /. k -> 1;
We now get the determinant of the Jacobian and add that to the brew.
discrim = Det[D[polys2, #] & /@ {x, y}];
allrelations = Join[polys2, {discrim}];
Now set the centers as noted earlier (could have done this from the beginning, one would suppose).
ar2 =
allrelations /. {c[1, 1] -> 0, c[1, 2] -> 0, c[2, 1] -> 0,
c[2, 2] -> 0}
Out[38]= {x^4 + 2 x^2 y^2 + y^4 - 2 x^2 r[1]^2 - 2 y^2 r[1]^2 +
r[1]^4 - 2 x^2 r[3]^2 - 2 y^2 r[3]^2 - 2 r[1]^2 r[3]^2 + r[3]^4,
x^4 + 2 x^2 y^2 + y^4 - 2 x^2 r[2]^2 - 2 y^2 r[2]^2 + r[2]^4 -
2 x^2 r[3]^2 - 2 y^2 r[3]^2 - 2 r[2]^2 r[3]^2 + r[3]^4, 0}
We now eliminate x and y to get the locus in r[1],r[2],r[3] parameter space that determines where we'll have multiplicity in our solutions.
gb = GroebnerBasis[ar2, radii, {x, y},
MonomialOrder -> EliminationOrder]
{r[1]^6 - 3 r[1]^4 r[2]^2 + 3 r[1]^2 r[2]^4 - r[2]^6 -
8 r[1]^4 r[3]^2 + 8 r[2]^4 r[3]^2 + 16 r[1]^2 r[3]^4 -
16 r[2]^2 r[3]^4}
If I did this all correctly, then we now have the polynomial defining the locus in parameter space where solution sets can get silly. Off this set they should never have multiplicity, and real counts should always be even. The intersection of this set with real space will be a 2d surface in the 3d space of the radii parameters. It will separate regions that have 0, 2, 4, 6, or 8 real solutions from one another.
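One possible way to inspect this locus (a sketch only, assuming the gb computed above; rr1, rr2, rr3 are plain plotting symbols substituted for r[1], r[2], r[3]):
surface = gb[[1]] /. {r[1] -> rr1, r[2] -> rr2, r[3] -> rr3};
(* the surface in radii space; crossing it is where the number of real
   tangent circles can change *)
ContourPlot3D[Evaluate[surface == 0], {rr1, 0, 4}, {rr2, 0, 4}, {rr3, 0, 4},
  Mesh -> None, AxesLabel -> {"r1", "r2", "r3"}]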
Last, I'll point out that in this example the variety in question reduces nicely into a product of planes. I guess from a geometric view this is not too surprising.
Factor[gb[[1]]]
Out[43]= (r[1] - r[2]) (r[1] + r[2]) (r[1] - r[2] - 2 r[3]) (r[1] +
r[2] - 2 r[3]) (r[1] - r[2] + 2 r[3]) (r[1] + r[2] + 2 r[3])

Related

Artifacts in DensityPlot (Wolfram Mathematica)

I am trying to visualize the result of a numerical integration in Wolfram Mathematica using DensityPlot, but dark artifacts appear inside the circle on the graph, which is incorrect. How can I fix this?
f[u_, v_] := (1/1.2)*(1/(3.14159*0.02^2)*E^(-((x - u)^2 + (y - v)^2)/0.02^2) +
    0.2/(3.14159*2^2)*E^(-((x - u)^2 + (y - v)^2)/2^2));
i[x_, y_] := NIntegrate[f[u, v], {u, v} \[Element] Disk[{0, 0}, 2.5], AccuracyGoal -> 30];
DensityPlot[i[x, y], {x, -3, 3}, {y, -3, 3}, ColorFunction -> "SunsetColors",
  PlotPoints -> 20, PlotLegends -> Automatic]
Link to the image: https://postimg.cc/47MJTYj7
Well, that is not an artifact, or is it? First I'd like to point out that f is a function of u, v, x and y, so the way it is programmed is not very clean. Now to the main problem: there are two Gaussian peaks, one very broad and one very sharp. In many cases the numerical integration simply never samples points near the sharp peak and therefore does not detect it; since the rest is smooth and easy, no further subdivision is triggered and that contribution is just missing from the integral. Luckily, there is an option specifically for this, MinRecursion ("NIntegrate may miss sharp peaks of integrands ..."). So this works:
g[x_, y_, s_] := Exp[ -( x^2 + y^2 ) / s^2] / (s^2 Pi)
f[x_, y_, u_, v_] := g[ x - u, y - v, 0.02 ] / 1.2 + 0.2 g[ x - u, y - v, 2 ]
i[x_, y_] := NIntegrate[ f[x, y, u, v], {u, v} \[Element] Disk[{0, 0}, 2.5], MinRecursion -> 5]
nn = 30;
t = Table[i[x, y], {x, -3, 3, 6/nn}, {y, -3, 3, 6/nn}];
ListDensityPlot[t, ColorFunction -> "SunsetColors", PlotLegends -> Automatic]
It takes quite some time, though. Probably it would be better to split this up into two integrals and invest some thought into the integration boundaries, for instance along the lines sketched below.
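A rough sketch of that split (an assumption-laden illustration, not a tuned drop-in replacement): the narrow 0.02-width peak effectively contributes only from a small neighbourhood of {x, y}, so it can be integrated over a small box, while the broad peak is integrated over the full disk. It reuses g from the code above; the 0.1 and 2.35 cut-offs are arbitrary illustration values.
iBroad[x_, y_] := NIntegrate[0.2 g[x - u, y - v, 2],
   {u, v} \[Element] Disk[{0, 0}, 2.5]]

iNarrow[x_, y_] := If[Norm[{x, y}] < 2.35,
   (* peak well inside the disk: the 0.2 x 0.2 box around {x, y} stays inside the disk *)
   NIntegrate[g[x - u, y - v, 0.02]/1.2,
     {u, x - 0.1, x + 0.1}, {v, y - 0.1, y + 0.1}],
   (* peak near or outside the boundary: fall back to the full-disk integral *)
   NIntegrate[g[x - u, y - v, 0.02]/1.2,
     {u, v} \[Element] Disk[{0, 0}, 2.5], MinRecursion -> 5]]

iSplit[x_, y_] := iBroad[x, y] + iNarrow[x, y]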

Plotting a 3D surface in Mathematica: NIntegrate Evaluates to Non-Numerical Values, How Can I Correct This?

I'm trying to run this in Mathematica 10, but I don't know how to deal with the error that I receive when trying to plot the surface that I define. Here is what I'm trying to run, followed by the error I receive:
In[1]: s[y_] = ceiling[y*(1 - UnitStep[-y])]
Out[1]: ceiling[y (1 - UnitStep[-y])]
In[2]: f[t_] = t^2
Out[2]: t^2
In[3]: (Integrate*Integrate[(x - t)^(-1 - y - ceiling[y*(1 - UnitStep[-y])])*
           D[f[t], {t, ceiling[y*(1 - UnitStep[-y])]}], {t, 0, x}])/
         Gamma[ceiling[y*(1 - UnitStep[-y])] - y]
Out[3]: the input is echoed back essentially unevaluated (in typeset box form), with the symbolic-order derivative D[f[t], {t, ceiling[y (1 - UnitStep[-y])]}] left unevaluated
In[4]: Plot3D[(Integrate*Integrate[(x - t)^(-1 - y - ceiling[y*(1 - UnitStep[-y])])*
           D[f[t], {t, ceiling[y*(1 - UnitStep[-y])]}], {t, 0, x}])/
         Gamma[ceiling[y*(1 - UnitStep[-y])] - y], {x, -6, 6}, {y, -2, 2}]
Out[4] NIntegrate::inumr: The integrand (-5.99914-t)^(0.999714 -ceiling[0.]) (Power^(ceiling[0.],0))[t,2] has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,-5.99914}}. >>
NIntegrate::inumr: The integrand (-5.99914-t)^(0.999714 -ceiling[0.]) (Power^(ceiling[0.],0))[t,2] has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,-5.99914}}. >>
NIntegrate::inumr: The integrand (-5.99914-t)^(0.999714 -ceiling[0.]) (Power^(ceiling[0.],0))[t,2] has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,-5.99914}}. >>
General::stop: Further output of NIntegrate::inumr will be suppressed during this calculation. >>
(followed by an empty Graphics3D box with PlotRange -> {{-6, 6}, {-2, 2}, {0., 0.}}, i.e. no surface is drawn)
Can anybody tell me how exactly I can fix my code so that I can plot the surface that I am trying to plot? Any help at all would be appreciated.
Thanks,
Mike

Prolog:: f(x) recursion

I'm a beginner to Prolog and have two requirements:
f(1) = 1
f(x) = 5x + x^2 + f(x - 1)
rules:
f(1,1).
f(X,Y) :-
Y is 5 * X + X * X + f(X-1,Y).
query:
f(4,X).
Output:
ERROR: is/2: Arguments are not sufficiently instantiated
How can I add the value of f(X-1)?
This can be easily solved by using auxiliary variables.
For example, consider:
f(1, 1).
f(X, Y) :-
Y #= 5*X + X^2 + T1,
T2 #= X - 1,
f(T2, T1).
This is a straightforward translation of the rules you give, using auxiliary variables T1 and T2, which stand for the partial expressions f(X-1) and X-1, respectively. As @BallpointBen correctly notes, it is not sufficient to use the terms themselves, because these terms are different from their arithmetic evaluation. In particular, -(2,1) is not the integer 1, but 2 - 1 #= 1 does hold!
Depending on your Prolog system, you may currently still need to import a library to use the predicate (#=)/2, which expresses equality of integer expressions.
Your example query now already yields a solution:
?- f(4, X).
X = 75 .
Note that the predicate does not terminate universally in this case:
?- f(4, X), false.
nontermination
We can easily make it so with an additional constraint:
f(1, 1).
f(X, Y) :-
X #> 1,
Y #= 5*X + X^2 + T1,
T2 #= X - 1,
f(T2, T1).
Now we have:
?- f(4, X).
X = 75 ;
false.
Note that we can use this as a true relation, also in the most general case:
?- f(X, Y).
X = Y, Y = 1 ;
X = 2,
Y = 15 ;
X = 3,
Y = 39 ;
X = 4,
Y = 75 ;
etc.
Versions based on lower-level arithmetic typically only cover a very limited subset of instances of such queries. I therefore recommend that you use (#=)/2 instead of (is)/2. Especially for beginners, using (is)/2 is too hard to understand. Take the many related questions filed under instantiation-error as evidence, and see clpfd for declarative solutions.
The issue is that you are trying to evaluate f(X-1,Y) as if it were a number, but of course it is a predicate that may be true or false. After some tinkering, I found this solution:
f(1,1).
f(X,Y) :- X > 0, Z is X-1, f(Z,N), Y is 5*X + X*X + N.
The trick is to let it find its way down to f(1,N) first, without evaluating anything; then let the results bubble back up by satisfying Y is 5*X + X*X + N. In Prolog, order matters for its search. It needs to satisfy f(Z,N) in order to have a value of N for the statement Y is 5*X + X*X + N.
Also, note the condition X > 0 to avoid infinite recursion.

Mathematica doesn't solve wave equation when given boundary conditions

Still new to Mathematica syntax. When I do:
DSolve[{
D[u[x, t], {x, 2}] == (1/(v*v))*D[u[x, t], {t, 2}],
u[0, t] == 0,
u[l, 0] == 0
}, u, {x, t}]
it just returns what I entered
DSolve[{(u^(2,0))[x,t]==(u^(0,2))[x,t]/v^2,u[0,t]==0,u[l,0]==0},u,{x,t}]
However, when I remove the boundary conditions, I get
{{u->Function[{x,t},C[1][t-(Sqrt[v^2] x)/v^2]+C[2][t+(Sqrt[v^2] x)/v^2]]}}
with C[1] and C[2] representing the functions for the boundary conditions.
Anyone know why this is happening?
Two things:
1. Don't you need more boundary and initial conditions than just 2? You have second-order derivatives on the left and the right side, and each requires 2 conditions, hence 4 in total; see http://mathworld.wolfram.com/WaveEquation1-Dimensional.html
2. I am not sure DSolve or NDSolve can solve initial and boundary value problems; I seem to have read this somewhere some time ago, but have no time to check now.
I think that Mathematica doesn't know how to deal with these boundary conditions for 2nd order PDEs. How would you want the answer returned? As a general Fourier series?
This is mentioned in the Mathematica Cookbook (and probably other places)...
Breaking down the problem for Mathematica (with the dimensional factor v->1), you find
In[1]:= genSoln = DSolve[D[u[x, t], {x, 2}] == D[u[x, t], {t, 2}], u, {x, t}] // First
Out[1]= {u -> Function[{x, t}, C[1][t - x] + C[2][t + x]]}
In[2]:= Solve[u[0, t] == 0 /. genSoln]
Out[2]= {{C[1][t] -> -C[2][t]}}
In[3]:= u[l, 0] == 0 /. genSoln /. C[1][x_] :> -C[2][x] // Simplify
Out[3]= C[2][-l] == C[2][l]
that the solution is written as f(t-x)-f(t+x) where f is periodic over [-l,l]...
You can't do any more without making assumptions about the smoothness of the solution.
You can check that the standard Fourier series approach would work, e.g.
In[4]:= f[x_, t_] := Sin[n Pi (t + x)/l] - Sin[n Pi (t - x)/l]
In[5]:= And[D[u[x, t], {x, 2}] == D[u[x, t], {t, 2}],
u[0, t] == 0, u[l, 0] == 0] /. u -> f // Reduce[#, n] & // Simplify
Out[5]= C[1] \[Element] Integers && (n == 2 C[1] || n == 1 + 2 C[1])

Code or formula for intersection of two parabolas in any rotation

I am working on a geometry problem that requires finding the intersection of two parabolic arcs in any rotation. I was able to intersect a line and a parabolic arc by rotating the plane to align the arc with an axis, but two parabolas cannot both be aligned with an axis. I am working on deriving the formulas, but I would like to know if there is a resource already available for this.
I'd first define the equation for the parabolic arc in 2D without rotation, parameterized by t:
x(t) = a*t² + b*t + c
y(t) = t
You can now apply the rotation by building a rotation matrix:
s = sin(angle)
c = cos(angle)
matrix = | c -s |
| s c |
Apply that matrix and you'll get the rotated parametric equation:
x' (t) = x(t) * c - s*t;
y' (t) = x(t) * s + c*t;
This will give you two equations (for x and y) of your parabolic arcs.
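In Mathematica terms this could look like the following sketch (angle, a, b, c are illustrative symbols, not part of the original answer):
(* parametric arc before rotation: {x(t), y(t)} = {a t^2 + b t + c, t} *)
rotatedArc[t_, angle_, a_, b_, c_] :=
  RotationMatrix[angle] . {a t^2 + b t + c, t}

(* example: plot one arc rotated by 30 degrees *)
ParametricPlot[rotatedArc[t, Pi/6, 1, 0, 0], {t, -2, 2}]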
Do that for both of your rotated arcs and subtract them. This gives you an equation like this:
xa'(t) = rotated equation of arc1 in x
ya'(t) = rotated equation of arc1 in y.
xb'(t) = rotated equation of arc2 in x
yb'(t) = rotated equation of arc2 in y.
t1 = parametric value of arc1
t2 = parametric value of arc2
0 = xa'(t1) - xb'(t2)
0 = ya'(t1) - yb'(t2)
Each of these equations is just an order-2 polynomial. These are easy to solve.
To find the intersection points you solve the above equation (e.g. find the roots).
You'll get up to two roots for each axis. Any root that is equal on x and y is an intersection point between the curves.
Getting the position is easy now: Just plug the root into your parametric equation and you can directly get x and y.
Unfortunately, the general answer requires the solution of a fourth-order polynomial. If we transform coordinates so that one of the two parabolas is in the standard form y=x^2, then the second parabola satisfies (ax+by)^2+cx+dy+e==0. To find the intersection, solve both simultaneously. Substituting in y=x^2 we see that the result is a fourth-order polynomial: (ax+bx^2)^2+cx+dx^2+e==0. Nils' solution therefore won't work (his mistake: each one is a 2nd-order polynomial in each variable separately, but together they're not).
It's easy if you have a CAS at hand.
See the solution in Mathematica.
Choose one parabola and change coordinates so its equation becomes y(x)=a x^2 (Normal form).
The other parabola will have the general form:
A x^2 + B x y + CC y^2 + DD x + EE y + F == 0
where B^2 - 4 A CC == 0 (so it's a parabola; CC, DD, EE are used because C, D and E are built-in symbols in Mathematica)
Let's solve a numeric case:
p = {a -> 1, A -> 1, B -> 2, CC -> 1, DD -> 1, EE -> -1, F -> 1};
p1 = {ToRules@N@Reduce[
   (A x^2 + B x y + CC y^2 + DD x + EE y + F /. {y -> a x^2} /. p) == 0, x]}
{{x -> -2.11769}, {x -> -0.641445},
{x -> 0.379567- 0.76948 I},
{x -> 0.379567+ 0.76948 I}}
Let's plot it:
Show[{
Plot[a x^2 /. p, {x, -10, 10}, PlotRange -> {{-10, 10}, {-5, 5}}],
ContourPlot[(A x^2 + B x y + CC y^2 + DD x + EE y + F /. p) ==
0, {x, -10, 10}, {y, -10, 10}],
Graphics[{
PointSize[Large], Pink, Point[{x, x^2} /. p /. p1[[1]]],
PointSize[Large], Pink, Point[{x, x^2} /. p /. p1[[2]]]
}]}]
The general solution involves calculating the roots of:
4 A F + 4 A DD x + (4 A^2 + 4 a A EE) x^2 + 4 a A B x^3 + a^2 B^2 x^4 == 0
Which is done easily in any CAS.
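As a quick, hedged consistency check (a sketch assuming the numeric parameter list p defined above), the general quartic should reproduce the roots found by Reduce:
quartic = 4 A F + 4 A DD x + (4 A^2 + 4 a A EE) x^2 + 4 a A B x^3 + a^2 B^2 x^4;
NSolve[(quartic /. p) == 0, x]
(* expected to match the roots above: real roots near -2.11769 and -0.641445
   plus a complex-conjugate pair near 0.379567 +/- 0.76948 I *)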

Resources