How should I evaluate a function at a point in Maxima? - substitution

I want to take the directional derivative of the function omega at the point p in the direction V:
omega:x*y*z*z;
p:[2,3,-1];
V:[1,2,3];
p2:p+t*V;
sp:[x=p[1],y=p[2],z=p[3]];
sp2:[x=p2[1],y=p2[2],z=p2[3]];
deltaomega:subst(sp2,omega)-subst(sp,omega);
slope:ratsimp(deltaomega/t);
gVomega:limit(slope,t,0);
This works, but the two substitutions seem a bit hacky. Is there a better way to say 'evaluate omega at p and p+t*V'?
I know there are better ways of doing this! I want to be able to do it from first principles (because I have more complicated versions in mind for which there are no built ins).

I can see at least two ways to do it; we can probably find others as well.
(%i1) omega (x, y, z) := x * y * z^2 $
(%i2) p : [2, 3, -1] $
(%i3) V : [1, 2, 3] $
(%i4) p2 : p + t * V $
(%i5) deltaomega : apply (omega, p2) - apply (omega, p);
(%o5) (t + 2) (2 t + 3) (3 t - 1)^2 - 6
... and then the rest is the same. Or define omega so its argument is a list:
(%i1) omega (p) := p[1] * p[2] * p[3]^2 $
(%i2) p : [2, 3, -1] $
(%i3) V : [1, 2, 3] $
(%i4) p2 : p + t * V $
(%i5) deltaomega : omega (p2) - omega (p);
(%o5) (t + 2) (2 t + 3) (3 t - 1)^2 - 6
Notice that in both cases I've defined omega as a function.
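For comparison outside Maxima, here is a minimal SymPy sketch of the same first-principles computation (my addition, not part of the original answer; it assumes SymPy is available):
import sympy
from sympy import symbols, limit, ratsimp

t, x, y, z = symbols('t x y z')
omega = x * y * z**2            # same omega as in the question
p  = [2, 3, -1]
V  = [1, 2, 3]
p2 = [pc + t * vc for pc, vc in zip(p, V)]

def at(expr, point):
    # 'evaluate omega at a point' = substitute the point for (x, y, z)
    return expr.subs(list(zip((x, y, z), point)))

slope = ratsimp((at(omega, p2) - at(omega, p)) / t)
print(limit(slope, t, 0))       # -29, the directional derivative at p along V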

Related

Return arc length of every rotation of an Archimedean Spiral given arm spacing and total length

I want to calculate the length of every full rotation of an Archimedean Spiral given the spacing between each arm and the total length are known. The closest to a solution I've been able to find is here, but this is for finding an unknown length.
I can't interpret math notation, so I am unable to extrapolate from the info in the link above. The closest I've been able to achieve is:
Distance between each spiral arm:
ArmSpace <- 7
Total length of spiral:
TotalLength <- 399.5238
Create empty df to accommodate TotalLength (note that sum(df[,2]) can be > TotalLength):
df <- data.frame(matrix(NA, nrow=0, ncol=2))
colnames(df) <- c("turn_num", "turn_len_m")
df[1,1] <- 0 # Start location of spiral
df[1,2] <- pi*1/1000
Return length of every turn:
i <- 0
while(i < TotalLength) {
  df[nrow(df)+1,1] <- nrow(df)  # Add turn number
  df[nrow(df),2] <- pi*(df[nrow(df)-1,2] +
                        (2*df[nrow(df),1])*ArmSpace)/1000
  i <- sum(df[,2])
}
An annotated example explaining the steps would be most appreciated.
I used the approximate Clackson formula
t = 2 * Pi * Sqrt(2 * s / a)
to get the theta angle corresponding to arc length s.
Here is an example in Delphi; I hope the idea is clear enough to implement in R:
var
  i, cx, cy, x, y: Integer;
  s, t, a, r: Double;
begin
  cx := 0;
  cy := 0;
  a := 10; //spiral size parameter
  Canvas.MoveTo(cx, cy);
  for i := 1 to 1000 do begin
    s := 0.07 * i; //arc length
    t := 2 * Pi * Sqrt(2 * s / a); //theta
    r := a * t; //radius
    x := Round(cx + r * cos(t)); //rounded coordinates
    y := Round(cy + r * sin(t));
    Memo1.Lines.Add(Format('len %5.3f theta %5.3f r %5.3f x %d y %d', [s, t, r, x, y]));
    Canvas.LineTo(x, y);
    if i mod 10 = 1 then //draw some points as small circles
      Canvas.Ellipse(x-2, y-2, x+3, y+3);
  end;
end;
Some generated points
len 0.070 theta 0.743 r 7.434 x 5 y 5
len 0.140 theta 1.051 r 10.514 x 5 y 9
len 0.210 theta 1.288 r 12.877 x 4 y 12
len 0.280 theta 1.487 r 14.869 x 1 y 15
len 0.350 theta 1.662 r 16.624 x -2 y 17
len 0.420 theta 1.821 r 18.210 x -5 y 18
The link gives the exact formula for the arc length,
s(t) = a/2 * (t * Sqrt(1 + t*t) + ln(t + Sqrt(1 + t*t)))
but we cannot compute the inverse (t for a given s) with a simple formula, so one needs to apply numerical methods to find theta for a given arc length value.
Addition: length of the k-th turn. Here we can use the exact formula. Python code:
import math

def arch_sp_len(a, t):
    return a/2 * (t * math.sqrt(1 + t*t) + math.log(t + math.sqrt(1 + t*t)))

def arch_sp_turnlen(a, k):
    return arch_sp_len(a, k*2*math.pi) - arch_sp_len(a, (k-1)*2*math.pi)

print(arch_sp_turnlen(1, 1))
print(arch_sp_turnlen(1, 2))
print(arch_sp_turnlen(10, 3))
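Since inverting s(t) needs numerical methods, here is one way to sketch that inversion (my addition, reusing arch_sp_len from the snippet above); bisection works because s(t) is increasing in t:
def theta_for_len(a, s, tol=1e-12):
    # find t with arch_sp_len(a, t) == s by bisection
    lo, hi = 0.0, 1.0
    while arch_sp_len(a, hi) < s:
        hi *= 2.0                      # grow the bracket until it contains s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arch_sp_len(a, mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(theta_for_len(1, arch_sp_len(1, 2.5)))  # ~2.5, round-trip check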

Renewal Function for Weibull Distribution

The renewal function m(t) for the Weibull distribution with t = 10 is given below.
I want to find the value of m(t). I wrote the following R code to compute m(t):
last_term = NULL
gamma_k = NULL
n = 50
for(k in 1:n){
  gamma_k[k] = gamma(2*k + 1)/factorial(k)
}
for(j in 1:(n-1)){
  prev = gamma_k[n-j]
  last_term[j] = gamma(2*j + 1)/factorial(j)*prev
}
final_term = NULL
find_value = function(n){
  for(i in 2:n){
    final_term[i] = gamma_k[i] - sum(last_term[1:(i-1)])
  }
  return(final_term)
}
all_k = find_value(n)
af_sum = NULL
m_t = function(t){
  for(k in 1:n){
    af_sum[k] = (-1)^(k-1) * all_k[k] * t^(2*k)/gamma(2*k + 1)
  }
  return(sum(na.omit(af_sum)))
}
m_t(20)
The output is m(t) = 2.670408e+93. Is my iterative procedure correct? Thanks.
I don't think it will work. First, let's move Γ(2k+1) from the denominator of m(t) into A_k. Thus, A_k will behave roughly as 1/k!.
In the numerator of the m(t) terms there is t^(2k), so roughly speaking you're computing a sum with terms
100^k / k!
From the Stirling formula,
k! ~ k^k,
making the terms
(100/k)^k,
so yes, they will start to decrease and converge to something, but only after the 100th term.
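To make that growth concrete, a small sketch (my addition) printing the order of magnitude of t^(2k)/k! for t = 10:
import math

# order of magnitude of the k-th term t^(2k)/k! for t = 10:
# it grows until k ~ 100 and only then starts to shrink
for k in (1, 10, 50, 100, 150, 200):
    log10_term = 2*k - math.lgamma(k + 1) / math.log(10)
    print(f"k={k:3d}  term ~ 10^{log10_term:7.1f}")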
Anyway, here is the code; you could try to improve it, but it breaks at k ~ 70:
N <- 20
A <- rep(0, N)
# compute A_k/gamma(2k+1) terms
ps <- 0.0 # previous sum
A[1] = 1.0
for(k in 2:N) {
  ps <- ps + A[k-1]*gamma(2*(k-1) + 1)/factorial(k-1)
  A[k] <- 1.0/factorial(k) - ps/gamma(2*k+1)
}
print(A)

t <- 10.0
t2 <- t*t
r <- 0.0
for(k in 1:N){
  r <- r + (-t2)^k*A[k]
}
print(-r)
UPDATE
OK, I calculated A_k as in your question and got the same answer. I want to estimate the terms A_k/Γ(2k+1) from m(t); I believe they are pretty much dominated by the 1/k! factor. To check that, I made another array of k!*A_k/Γ(2k+1), which should be close to one.
Code
N <- 20
A <- rep(0.0, N)

psum <- function(pA, k) {
  ps <- 0.0
  if (k >= 2) {
    jmax <- k - 1
    for(j in 1:jmax) {
      ps <- ps + (gamma(2*j+1)/factorial(j))*pA[k-j]
    }
  }
  ps
}

# compute A_k terms
A[1] = gamma(3)
for(k in 2:N) {
  A[k] <- gamma(2*k+1)/factorial(k) - psum(A, k)
}
print(A)

B <- rep(0.0, N)
for(k in 1:N) {
  B[k] <- (A[k]/gamma(2*k+1))*factorial(k)
}
print(B)
This shows that:
I got the same A_k values as you did.
B_k is indeed very close to 1.
It means that the term A_k/Γ(2k+1) can be replaced by 1/k! to get a quick estimate of what we might get (with that replacement):
m(t) ~= -Sum(k=1..Infinity) (-1)^k (t^2)^k / k! = 1 - Sum(k=0..Infinity) (-t^2)^k / k!
This is actually a well-known sum, equal to exp() with a negative argument (well, you have to add the term for k=0):
m(t) ~= 1 - exp(-t^2)
Conclusions
The approximate value is positive, and will probably stay positive after all; A_k/Γ(2k+1) is only a bit different from 1/k!.
We're talking about 1 - exp(-100), which is 1 - 3.72*10^-44! And we're trying to compute it precisely by summing and subtracting values on the order of 10^100 or even higher. Even with MPFR I don't think this is possible.
Another approach is needed.
OK, so I ended up going down a pretty different road on this. I have implemented a simple discretization of the integral equation which defines the renewal function:
m(t) = F(t) + integrate (m(t - s)*f(s), s, 0, t)
The integral is approximated with the rectangle rule. Approximating the integral for different values of t gives a system of linear equations. I wrote a function to generate the equations and extract a matrix of coefficients from it. After looking at some examples, I guessed a rule to define the coefficients directly and used that to generate solutions for some examples. In particular I tried shape = 2, t = 10, as in OP's example, with step = 0.1 (so 101 equations).
I found that the result agrees pretty well with an approximate result which I found in a paper (Baxter et al., cited in the code). Since the renewal function is the expected number of events, for large t it is approximately equal to t/mu where mu is the mean time between events; this is a handy way to know if we're anywhere in the neighborhood.
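As a quick numeric check of that heuristic (my addition), for shape = 2 and scale = 1 the Baxter et al. large-t approximation cited in the code below gives roughly:
import math

# m(t) ~ t/mu + sigma^2/(2*mu^2) - 1/2 for large t (Baxter et al., Eq. 3)
mu  = math.gamma(1.5)            # mean of Weibull(shape=2, scale=1)
var = math.gamma(2.0) - mu**2    # E[X^2] - mu^2
print(10/mu + var/(2*mu**2) - 0.5)   # about 10.9 for t = 10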
I was working with Maxima (http://maxima.sourceforge.net), which is not efficient for numerical stuff, but which makes it very easy to experiment with different aspects. At this point it would be straightforward to port the final, numerical stuff to another language such as Python.
Thanks to OP for suggesting the problem, and S. Pappadeux for insightful discussions. Here is the plot I got comparing the discretized approximation (red) with the approximation for large t (blue). Trying some examples with different step sizes, I saw that the values tend to increase a little as step size gets smaller, so I think the red line is probably a little low, and the blue line might be more nearly correct.
Here is my Maxima code:
/* discretize weibull renewal function and formulate system of linear equations
* copyright 2020 by Robert Dodier
* I release this work under terms of the GNU General Public License
*
* This is a program for Maxima, a computer algebra system.
* http://maxima.sourceforge.net/
*/
"Definition of the renewal function m(t):" $
renewal_eq: m(t) = F(t) + 'integrate (m(t - s)*f(s), s, 0, t);
"Approximate integral equation with rectangle rule:" $
discretize_renewal (delta_t, k) :=
if equal(k, 0)
then m(0) = F(0)
else m(k*delta_t) = F(k*delta_t)
+ m(k*delta_t)*f(0)*(delta_t / 2)
+ sum (m((k - j)*delta_t)*f(j*delta_t)*delta_t, j, 1, k - 1)
+ m(0)*f(k*delta_t)*(delta_t / 2);
make_eqs (n, delta_t) :=
makelist (discretize_renewal (delta_t, k), k, 0, n);
make_vars (n, delta_t) :=
makelist (m(k*delta_t), k, 0, n);
"Discretized integral equation and variables for n = 4, delta_t = 1/2:" $
make_eqs (4, 1/2);
make_vars (4, 1/2);
make_eqs_vars (n, delta_t) :=
[make_eqs (n, delta_t), make_vars (n, delta_t)];
load (distrib);
subst_pdf_cdf (shape, scale, e) :=
subst ([f = lambda ([x], pdf_weibull (x, shape, scale)), F = lambda ([x], cdf_weibull (x, shape, scale))], e);
matrix_from (eqs, vars) :=
(augcoefmatrix (eqs, vars),
[submatrix (%%, length(%%) + 1), - col (%%, length(%%) + 1)]);
"Subsitute Weibull pdf and cdf for shape = 2 into discretized equation:" $
apply (matrix_from, make_eqs_vars (4, 1/2));
subst_pdf_cdf (2, 1, %);
"Just the right-hand side matrix:" $
rhs_matrix_from (eqs, vars) :=
(map (rhs, eqs),
augcoefmatrix (%%, vars),
[submatrix (%%, length(%%) + 1), col (%%, length(%%) + 1)]);
"Generate the right-hand side matrix, instead of extracting it from equations:" $
generate_rhs_matrix (n, delta_t) :=
[delta_t * genmatrix (lambda ([i, j], if i = 1 and j = 1 then 0
elseif j > i then 0
elseif j = i then f(0)/2
elseif j = 1 then f(delta_t*(i - 1))/2
else f(delta_t*(i - j))), n + 1, n + 1),
transpose (makelist (F(k*delta_t), k, 0, n))];
"Generate numerical right-hand side matrix, skipping over formulas:" $
generate_rhs_matrix_numerical (shape, scale, n, delta_t) :=
block ([f, F, numer: true], local (f, F),
f: lambda ([x], pdf_weibull (x, shape, scale)),
F: lambda ([x], cdf_weibull (x, shape, scale)),
[genmatrix (lambda ([i, j], delta_t * if i = 1 and j = 1 then 0
elseif j > i then 0
elseif j = i then f(0)/2
elseif j = 1 then f(delta_t*(i - 1))/2
else f(delta_t*(i - j))), n + 1, n + 1),
transpose (makelist (F(k*delta_t), k, 0, n))]);
"Solve approximate integral equation (shape = 3, t = 1) via LU decomposition:" $
fpprintprec: 4 $
n: 20 $
t: 1;
[AA, bb]: generate_rhs_matrix_numerical (3, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Iterative solution of approximate integral equation (shape = 3, t = 1):" $
xx: bb;
for i thru 10 do xx: AA . xx + bb;
xx - (AA.xx + bb);
xx_iterative: xx;
"Should find iterative and LU give same result:" $
xx_diff: xx_iterative - xx_by_lu[1];
sqrt (transpose(xx_diff) . xx_diff);
"Try shape = 2, t = 10:" $
n: 100 $
t: 10 $
[AA, bb]: generate_rhs_matrix_numerical (2, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Baxter, et al., Eq. 3 (for large values of t) compared to discretization:" $
/* L.A. Baxter, E.M. Scheuer, D.J. McConalogue, W.R. Blischke.
* "On the Tabulation of the Renewal Function,"
 * Technometrics, vol. 24, no. 2 (May 1982).
* H(t) is their notation for the renewal function.
*/
H(t) := t/mu + sigma^2/(2*mu^2) - 1/2;
tx_points: makelist ([float (k/n*t), xx_by_lu[1][k, 1]], k, 1, n);
plot2d ([H(u), [discrete, tx_points]], [u, 0, t]), mu = mean_weibull(2, 1), sigma = std_weibull(2, 1);
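Since the answer notes that porting the numerical part to Python would be straightforward, here is a minimal sketch of such a port (my addition, using NumPy and SciPy's weibull_min; it mirrors generate_rhs_matrix_numerical and the LU solve):
import numpy as np
from scipy.stats import weibull_min

def renewal_discretized(shape, scale, t, n):
    """Rectangle-rule discretization of m(t) = F(t) + integral of
    m(t - s) f(s) ds, solved as the linear system (I - A) m = b."""
    dt = t / n
    f = lambda s: weibull_min.pdf(s, shape, scale=scale)
    F = lambda s: weibull_min.cdf(s, shape, scale=scale)
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):       # row 0 encodes m(0) = F(0)
        A[i, i] = f(0.0) / 2        # endpoint weights of dt/2, as in
        A[i, 0] = f(i * dt) / 2     # the Maxima discretize_renewal
        for j in range(1, i):
            A[i, j] = f((i - j) * dt)
    A *= dt
    b = F(dt * np.arange(n + 1))
    return np.linalg.solve(np.eye(n + 1) - A, b)

m = renewal_discretized(2, 1, 10.0, 100)
print(m[-1])   # approximate m(10); close to the large-t value above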

How to compute this double integral in r?

I want to compute the integral of the following density function.
Using the packages rmutil and psych in R, I tried:
X=c(8,1,2,3)
Y=c(5,2,4,6)
correlation=cov(X,Y)/(SD(X)*SD(Y))
bvtnorm <- function(x, y, mu_x = mean(X), mu_y = mean(Y), sigma_x = SD(X), sigma_y = SD(Y), rho = correlation) {
  force(x)
  force(y)
  function(x, y)
    1 / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho ^ 2)) *
      exp(- 1 / (2 * (1 - rho ^ 2)) * ((x - mu_x) / sigma_x) ^ 2 +
          ((y - mu_y) / sigma_y) ^ 2 - 2 * rho * (x - mu_x) * (y - mu_y) /
          (sigma_x * sigma_y))
}
f2 <- bvtnorm(x, y)
print("sum_double_integral :")
integral_1=int2(f2, a=c(-Inf,-Inf), b=c(Inf,Inf)) # should normally give 1
print(integral_1) # gives NaN
The problem:
This integral should give 1, but it gives NaN.
I don't know how I can fix the problem; I tried to force() the x and y variables without success.
You were missing a pair of parentheses. The corrected code looks like:
library(rmutil)

X=c(8,1,2,3)
Y=c(5,2,4,6)
correlation=cor(X,Y)
bvtnorm <- function(x, y, mu_x = mean(X), mu_y = mean(Y), sigma_x = sd(X), sigma_y = sd(Y), rho = correlation) {
  function(x, y)
    1 / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho ^ 2)) *
      exp(- 1 / (2 * (1 - rho ^ 2)) * (((x - mu_x) / sigma_x) ^ 2 +
          ((y - mu_y) / sigma_y) ^ 2 - 2 * rho * (x - mu_x) * (y - mu_y) /
          (sigma_x * sigma_y)))
}
f2 <- bvtnorm(x, y)
print("sum_double_integral :")
integral_1=int2(f2, a=c(-Inf,-Inf), b=c(Inf,Inf)) # should normally give 1
print(integral_1) # prints 1.000047
This was hard to spot. From a debugging point of view, I found it helpful to first integrate over a finite domain. Trying it with things like [-1,1] and then [-2,2] (on both axes) showed that the integrals were blowing up rather than converging. After that, I looked at the grouping even more carefully.
I also cleaned up the code a bit. I dropped SD in favor of sd, since I don't see the motivation for importing the package psych just to make the code less readable (less flippantly: dropping psych from the question makes it easier for others to reproduce; there is no good reason to include a package which isn't used in any essential way). I also dropped the force() calls, which were doing nothing, and used the built-in function cor for calculating the correlation.
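For cross-checking the value outside R, here is a SciPy sketch of the same double integral (my addition; note the parentheses around the whole quadratic form, the fix described above):
import math
import numpy as np
from scipy.integrate import dblquad

X = np.array([8, 1, 2, 3], dtype=float)
Y = np.array([5, 2, 4, 6], dtype=float)
mu_x, mu_y = X.mean(), Y.mean()
sigma_x, sigma_y = X.std(ddof=1), Y.std(ddof=1)   # sample sd, like R's sd
rho = np.corrcoef(X, Y)[0, 1]

def pdf(x, y):
    # bivariate normal density with the corrected grouping
    q = (((x - mu_x) / sigma_x) ** 2 + ((y - mu_y) / sigma_y) ** 2
         - 2 * rho * (x - mu_x) * (y - mu_y) / (sigma_x * sigma_y))
    return math.exp(-q / (2 * (1 - rho ** 2))) / (
        2 * math.pi * sigma_x * sigma_y * math.sqrt(1 - rho ** 2))

val, err = dblquad(lambda y, x: pdf(x, y), -np.inf, np.inf, -np.inf, np.inf)
print(val)   # should be very close to 1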

maclaurin series in J

I'm trying to implement the series expansion for sine(x) in J (I'm not worried about accuracy, but more about expressing the series nicely).
So far I have the following explicit version, which computes sine(pi) using 50 terms:
3.14 (4 :'+/((_1^y) * (x^(1+2*y)) % !1+2*y)') i.50
But it seems somewhat clunky; is there a "better" version (maybe tacit)?
You want a list of odd numbers for the powers and factorials: l =: >: +: i. y (tacitly, >:@+:@i., or >:@+: if your y is i..).
Then, you want the powers (x^l) divided by the factorials (!l). One way is to see this as a fork (x f y) h (x g y) → (x ^ l) % (x (]!) l) → (^ % (]!)).
The last step is to multiply this series by the alternating series 1, _1, 1, ...: _1 ^ y → _1&^
So, the final form is (_1 ^ y) * (x (^ % (]!)) (>:@+:@i.) y), which is the train (h y) j (x f (g y)) → (h y) j (x (f g) y) → (x (]h) y) j (x (f g) y) → (]h) j (f g):
ms =: (] _1&^) * ((^ % (]!)) (>:@+:))
+/ 3.14 ms i.50
0.00159265
or
f =: +/@(ms i.)
3.14 f 50
0.00159265
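For reference, the same series written out in Python (my addition), which may help decode what the tacit J verbs are computing:
import math

def maclaurin_sin(x, n):
    # sum over k of (-1)^k * x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n))

print(maclaurin_sin(3.14, 50))   # ~0.00159265, matching the J results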
On the other hand, you can use T. for the Taylor approximation.
3.14 (4 :'+/((_1^y) * (x^(1+2*y)) % !1+2*y)') i.50
0.00159265
Tacit version could look like this:
3.14 +/@:((_1 ^ ]) * ([ ^ 1 + +:@]) % !@(1 + +:@])) i.50
0.00159265
or this:
3.14 +/@:((_1 ^ ]) * ([ ^ >:@+:@]) % !@>:@+:@]) i.50
0.00159265
or even this:
3.14 +/@:((_1 ^ ]) * (( ^ % !@])(>:@+:@]))) i.50
0.00159265
The first and second are pretty much direct tacit translations; the last uses hooks and forks, which can be a bit much unless you are used to them.

Controlling measure zero sets of solutions with Manipulate. A case study

To address the question we start with the following toy model problem, here just a case study:
Given two circles in the plane (centers c1 and c2, radii r1 and r2) and a positive number r3, find all circles with radius r3 (i.e. all points c3 that are centers of circles with radius r3) tangent, externally or internally, to the two given circles.
In general, depending on Circle[c1,r1], Circle[c2,r2] and r3, there are 0,1,2,...,8 possible solutions. A typical case with 8 solutions:
I slightly modified a neat Mathematica implementation by Jaime Rangel-Mondragon on Wolfram Demonstration Project, but its core is similar:
Manipulate[{c1, a, c2, b} = pts;
{r1, r2} = Map[Norm, {a - c1, b - c2}];
w = Table[
Solve[{radius[{x, y} - c1]^2 == (r + k r1)^2,
radius[{x, y} - c2]^2 == (r + l r2)^2}
] // Quiet,
{k, -1, 1, 2}, {l, -1, 1, 2}
];
w = Select[
Cases[Flatten[{{x, y}, r} /. w, 2],
{{_Real, _Real}, _Real}
],
Last[#] > 0 &
];
Graphics[
{{Opacity[0.35], EdgeForm[Thin], Gray,
Disk[c1, r1], Disk[c2, r2]},
{EdgeForm[Thick], Darker[Blue,.5],
Circle[First[#], Last[#]]& /@ w}
},
PlotRange -> 8, ImageSize -> {915, 915}
],
"None" -> {{pts, {{-3, 0}, {1, 0}, {3, 0}, {7, 0}}},
{-8, -8}, {8, 8}, Locator},
{{r, 0.3, "r3"}, 0, 8},
TrackedSymbols -> True,
Initialization :> (radius[z_] := Sqrt[z.z])
]
We can easily conclude that in a generic case there is an even number of solutions 0,2,4,6,8, while cases with an odd number of solutions 1,3,5,7 are exceptional: they are of measure zero in terms of the control ranges. Thus, changing c1, r1, c2, r2, r3 in Manipulate, one can observe that it is much more difficult to track cases with an odd number of circles.
One could modify the above approach at a basic level: solving the equations for c3 purely symbolically, as well as redesigning the Manipulate structure with an emphasis on the changing number of solutions. If I'm not wrong, Solve can work only numerically with Locator in Manipulate; however, here Locator seems crucial both for simplicity of controlling c1, r1, c2, r2 and for the whole implementation.
Let's state the questions:
1. How can we force Manipulate to track seamlessly cases with an odd number of solutions (circles) ?
2. Is there any way to make Solve to find exact solutions of the underlying equations?
(I find the answer by Daniel Lichtblau to be the best approach to question 2, but it seems that in this instance there is still an essential need to sketch a general technique for emphasizing measure zero sets of solutions while working with Manipulate.)
These considerations are of less importance when dealing with exact solutions.
For example, Solve[x^2 - 3 == 0, x] yields {{x -> -Sqrt[3]}, {x -> Sqrt[3]}},
while for the slightly more difficult equations extracted from Manipulate above, setting the following arguments:
c1 = {-Sqrt[3], 0}; a = {1, 0}; c2 = {6 - Sqrt[3], 0}; b = {7, 0};
{r1, r2} = Map[ Norm, {a - c1, b - c2 }];
r = 2.0 - Sqrt[3];
to :
w = Table[Solve[{radius[{x, y} - {x1, y1}]^2 == (r + k r1)^2,
radius[{x, y} - {x2, y2}]^2 == (r + l r2)^2}],
{k, -1, 1, 2}, {l, -1, 1, 2}];
w = Select[ Cases[ Flatten[ {{x, y}, r} /. w, 2], {{_Real, _Real}, _Real}],
Last[#] > 0 &]
we get two solutions :
{{{1.26795, -3.38871*10^-8}, 0.267949}, {{1.26795, 3.38871*10^-8}, 0.267949}}
similarly, under the same arguments and equations, putting:
r = 2 - Sqrt[3];
we get no solutions: {}
but in fact there is exactly one solution which we would like to emphasize:
{ {3 - Sqrt[3], 0 }, 2 - Sqrt[3] }
In fact, when passing to Graphics, such a small difference between the two nearby solutions and the unique one is indistinguishable; however, working with Manipulate we cannot carefully track the merging of two circles with the desired accuracy, and usually the last configuration observed when lowering r3 before all solutions vanish (reminiscent of so-called structural instability) looks like this:
Manipulate is rather a powerful tool, not only a toy, and mastering it can be very useful. The issues considered here are frequently critical when they appear in serious research, for example: in studying solutions of nonlinear differential equations, the occurrence of singularities in their solutions, the qualitative behavior of dynamical systems, bifurcations, phenomena in catastrophe theory, and so on.
As this is a measure zero set, tools that require some granularity will generally have trouble with the concept. Perhaps better is to look for the singularity locus explicitly, where solutions have multiplicity or in other ways depart from the nearby solution behavior(s). It will be a part of the discriminant variety. In particular, you can grab the relevant part by setting your defining polynomials to zero and simultaneously making the Jacobian determinant zero.
Here is your example. I will eventually (wlog) put one center at the origin and the other at (1,0).
centers = Array[c, {2, 2}];
radii = Array[r, 3];
circ[cen_, rad_, x_, y_] := ({x, y} - cen).({x, y} - cen) - rad^2
I'll use your 'k' for both polynomials. Your formulation has pairs (k,l) where each is +-1. We can just use k, arrange by squaring to get a polynomial in k^2, and replace that with 1.
polys =
Table[Expand[
circ[centers[[j]], radii[[3]] + k*radii[[j]], x, y]], {j, 2}]
Out[18]= {x^2 + y^2 - 2 x c[1, 1] + c[1, 1]^2 - 2 y c[1, 2] +
c[1, 2]^2 - k^2 r[1]^2 - 2 k r[1] r[3] - r[3]^2,
x^2 + y^2 - 2 x c[2, 1] + c[2, 1]^2 - 2 y c[2, 2] + c[2, 2]^2 -
k^2 r[2]^2 - 2 k r[2] r[3] - r[3]^2}
We'll remove the part that is linear in k, square the rest, square that removed part, and equate the two. We also then replace k with unity.
p2 = polys - k*Coefficient[polys, k];
polys2 = Expand[p2^2 - (k*Coefficient[polys, k])^2] /. k -> 1;
We now get the determinant of the Jacobian and add that to the brew.
discrim = Det[D[polys2, #] & /@ {x, y}];
allrelations = Join[polys2, {discrim}];
Now set the centers as noted earlier (could have done this from the beginning, one would suppose).
ar2 =
allrelations /. {c[1, 1] -> 0, c[1, 2] -> 0, c[2, 1] -> 0,
c[2, 2] -> 0}
Out[38]= {x^4 + 2 x^2 y^2 + y^4 - 2 x^2 r[1]^2 - 2 y^2 r[1]^2 +
r[1]^4 - 2 x^2 r[3]^2 - 2 y^2 r[3]^2 - 2 r[1]^2 r[3]^2 + r[3]^4,
x^4 + 2 x^2 y^2 + y^4 - 2 x^2 r[2]^2 - 2 y^2 r[2]^2 + r[2]^4 -
2 x^2 r[3]^2 - 2 y^2 r[3]^2 - 2 r[2]^2 r[3]^2 + r[3]^4, 0}
We now eliminate x and y to get the locus in r[1],r[2],r[3] parameter space that determines where we'll have multiplicity in our solutions.
gb = GroebnerBasis[ar2, radii, {x, y},
MonomialOrder -> EliminationOrder]
{r[1]^6 - 3 r[1]^4 r[2]^2 + 3 r[1]^2 r[2]^4 - r[2]^6 -
8 r[1]^4 r[3]^2 + 8 r[2]^4 r[3]^2 + 16 r[1]^2 r[3]^4 -
16 r[2]^2 r[3]^4}
If I did this all correctly, then we now have the polynomial defining the locus in parameter space where solution sets can get silly. Off this set they should never have multiplicity, and real counts should always be even. The intersection of this set with real space will be a 2d surface in the 3d space of the radii parameters. It will separate regions that have 0, 2, 4, 6, or 8 real solutions from one another.
Last, I'll point out that in this example the variety in question reduces nicely into a product of planes. I guess from a geometric view this is not too surprising.
Factor[gb[[1]]]
Out[43]= (r[1] - r[2]) (r[1] + r[2]) (r[1] - r[2] - 2 r[3]) (r[1] +
r[2] - 2 r[3]) (r[1] - r[2] + 2 r[3]) (r[1] + r[2] + 2 r[3])
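For readers without Mathematica, here is a SymPy sketch of the same pipeline (my addition, with both centers at the origin as in ar2 above; the Gröbner step may take a few seconds, and the x- and y-free basis element should factor into the same planes, possibly with repeated factors):
import sympy as sp

x, y, k, r1, r2, r3 = sp.symbols('x y k r1 r2 r3')

# squared circle relations (centers at the origin, as in ar2)
polys = [sp.expand(x**2 + y**2 - (r3 + k*ri)**2) for ri in (r1, r2)]
# remove the k-linear part, square both sides, then set k^2 -> 1
polys2 = []
for p in polys:
    lin = k * p.coeff(k, 1)            # the part linear in k
    polys2.append(sp.expand(((p - lin)**2 - lin**2).subs(k, 1)))

# Jacobian determinant (identically 0 for concentric centers, as in ar2)
discrim = sp.Matrix([[sp.diff(p, v) for v in (x, y)] for p in polys2]).det()
system = [p for p in polys2 + [sp.expand(discrim)] if p != 0]

# eliminate x and y: lex order with x, y first plays the role of
# Maxima's/Mathematica's elimination order
gb = sp.groebner(system, x, y, r1, r2, r3, order='lex')
eliminant = [g for g in gb.exprs if not g.has(x, y)]
print(sp.factor(eliminant[0]))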
