Find possible values - formula

I want to verify a formula of the form:
Exists p . ForAll x != 0 . f(x, p) > 0 and g(x, p) < 0
All variables are reals.
As suggested here, I add this list to the solver:
[ForAll([x0, x1],
        Implies(Or(x0 != 0, x1 != 0),
                And(P0*x0*x0 + P1*x0*x1 + P2*x0*x1 + P3*x1*x1 > 0,
                    -2*P0*x0*x1 + P1*x0*x0 - P1*x0*x1 - P1*x1*x1 + P2*x0*x0 - P2*x0*x1 - P2*x1*x1 + 2*P3*x0*x1 - 2*P3*x1*x1 < 0)))]
The solver returns unsat for the above formula. A possible solution is P = [[1.5, -0.5], [-0.5, 1]]; in fact, by substituting those values, the formula is satisfied:
And(3/2*x0*x0 - 1*x0*x1 + x1*x1 > 0,
    -1*x0*x0 - 1*x1*x1 < 0)
Is there a way to actually compute such a p? If it's hard for z3, is there any alternative for this problem?

When you say 'Exists' followed by 'ForAll', you are saying that the formula should be true for every such x0, x1; and Z3 is telling you that this is simply not the case.
If you are interested in finding one such P, and corresponding x values, simply drop the quantification and make everything a top-level variable:
from z3 import *

def f(x0, x1, P0, P1, P2, P3):
    return P0*x0*x0 + P1*x0*x1 + P2*x0*x1 + P3*x1*x1

def g(x0, x1, P0, P1, P2, P3):
    return -2*P0*x0*x1 + P1*x0*x0 - P1*x0*x1 - P1*x1*x1 + P2*x0*x0 - P2*x0*x1 - P2*x1*x1 + 2*P3*x0*x1 - 2*P3*x1*x1

p0, p1, p2, p3 = Reals('p0 p1 p2 p3')
x0, x1 = Reals('x0 x1')

fmls = [Implies(Or(x0 != 0, x1 != 0), And(f(x0, x1, p0, p1, p2, p3) > 0, g(x0, x1, p0, p1, p2, p3) < 0))]

while True:
    s = Solver()
    s.add(fmls)
    res = s.check()
    print(res)
    if res == sat:
        m = s.model()
        print(m)
        fmls += [Or(p0 != m[p0], p1 != m[p1])]
    else:
        print("giving up")
        break
When I run this, I get:
sat
[x0 = 1/8, p0 = -1/2, p1 = -1/2, x1 = 1/2, p2 = 1, p3 = 1]
and many others, which I believe is what you're after.
Note that you can also do some programming to get rid of the existential quantification, depending on where you are: start with the quantified version, and if you get unsat, switch to a new solver and use the unquantified version, automating the process; see the sketch below. Of course, this is just programming and doesn't really have anything to do with z3 at this point.
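A minimal sketch of that fallback (reusing f and g from the code above; this is an illustration, not part of the original answer):

from z3 import *

p0, p1, p2, p3 = Reals('p0 p1 p2 p3')
x0, x1 = Reals('x0 x1')

body = Implies(Or(x0 != 0, x1 != 0),
               And(f(x0, x1, p0, p1, p2, p3) > 0,
                   g(x0, x1, p0, p1, p2, p3) < 0))

s = Solver()
s.add(ForAll([x0, x1], body))    # try the quantified version first
if s.check() == unsat:
    s = Solver()                 # fresh solver, unquantified version
    s.add(body)
    if s.check() == sat:
        print(s.model())         # one concrete choice of p and x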

Related

Renewal Function for Weibull Distribution

The renewal function for the Weibull distribution, m(t), with t = 10 is given as below. I want to find the value of m(t). I wrote the following R code to compute m(t):
last_term = NULL
gamma_k = NULL
n = 50
for(k in 1:n){
    gamma_k[k] = gamma(2*k + 1)/factorial(k)
}
for(j in 1:(n-1)){
    prev = gamma_k[n-j]
    last_term[j] = gamma(2*j + 1)/factorial(j)*prev
}
final_term = NULL
find_value = function(n){
    for(i in 2:n){
        final_term[i] = gamma_k[i] - sum(last_term[1:(i-1)])
    }
    return(final_term)
}
all_k = find_value(n)
af_sum = NULL
m_t = function(t){
    for(k in 1:n){
        af_sum[k] = (-1)^(k-1) * all_k[k] * t^(2*k)/gamma(2*k + 1)
    }
    return(sum(na.omit(af_sum)))
}
m_t(20)
The output is m(t) = 2.670408e+93. Is my iterative procedure correct? Thanks.
I don't think it will work. First, let's move Γ(2k+1) from the denominator of m(t) into A_k. Thus, A_k will behave roughly as 1/k!.
In the numerator of the m(t) terms there is t^(2k), so roughly speaking you're computing a sum with terms
100^k / k!
From the Stirling formula
k! ~ k^k,
making the terms
(100/k)^k
So yes, they will start to decrease and converge to something, but only after the 100th term.
Anyway, here is the code; you could try to improve it, but it breaks at k ~ 70:
N <- 20
A <- rep(0, N)

# compute A_k/gamma(2k+1) terms
ps <- 0.0 # previous sum
A[1] = 1.0
for(k in 2:N) {
    ps <- ps + A[k-1]*gamma(2*(k-1) + 1)/factorial(k-1)
    A[k] <- 1.0/factorial(k) - ps/gamma(2*k+1)
}
print(A)

t <- 10.0
t2 <- t*t
r <- 0.0
for(k in 1:N){
    r <- r + (-t2)^k*A[k]
}
print(-r)
UPDATE
OK, I calculated A_k as in your question and got the same answer. I want to estimate the terms A_k/Γ(2k+1) from m(t); I believe they will be pretty much dominated by the 1/k! term. To do that I made another array, k!*A_k/Γ(2k+1), which should be close to one.
Code
N <- 20
A <- rep(0.0, N)

psum <- function( pA, k ) {
    ps <- 0.0
    if (k >= 2) {
        jmax <- k - 1
        for(j in 1:jmax) {
            ps <- ps + (gamma(2*j+1)/factorial(j))*pA[k-j]
        }
    }
    ps
}

# compute A_k/gamma(2k+1) terms
A[1] = gamma(3)
for(k in 2:N) {
    A[k] <- gamma(2*k+1)/factorial(k) - psum(A, k)
}
print(A)

B <- rep(0.0, N)
for(k in 1:N) {
    B[k] <- (A[k]/gamma(2*k+1))*factorial(k)
}
print(B)
shows that:
- I got the same A_k values as you did;
- B_k is indeed very close to 1.
It means that the term A_k/Γ(2k+1) could be replaced by 1/k! to get a quick estimate of what we might get (with that replacement):
m(t) ~= - Sum(k=1, k=Infinity) (-1)^k (t^2)^k / k! = 1 - Sum(k=0, k=Infinity) (-t^2)^k / k!
This is actually a well-known sum, and it is equal to exp() with a negative argument (well, you have to add the term for k=0):
m(t) ~= 1 - exp(-t^2)
Conclusions
- The approximate value is positive. It will probably stay positive after all; A_k/Γ(2k+1) is only a bit different from 1/k!.
- We're talking about 1 - exp(-100), which is 1 - 3.72*10^-44! And we're trying to compute it precisely by summing and subtracting values on the order of 10^100 or even higher. Even with MPFR I don't think this is possible.
Another approach is needed.
OK, so I ended up going down a pretty different road on this. I have implemented a simple discretization of the integral equation which defines the renewal function:
m(t) = F(t) + integrate (m(t - s)*f(s), s, 0, t)
The integral is approximated with the rectangle rule. Approximating the integral for different values of t gives a system of linear equations. I wrote a function to generate the equations and extract a matrix of coefficients from it. After looking at some examples, I guessed a rule to define the coefficients directly and used that to generate solutions for some examples. In particular I tried shape = 2, t = 10, as in OP's example, with step = 0.1 (so 101 equations).
I found that the result agrees pretty well with an approximate result which I found in a paper (Baxter et al., cited in the code). Since the renewal function is the expected number of events, for large t it is approximately equal to t/mu where mu is the mean time between events; this is a handy way to know if we're anywhere in the neighborhood.
I was working with Maxima (http://maxima.sourceforge.net), which is not efficient for numerical stuff, but which makes it very easy to experiment with different aspects. At this point it would be straightforward to port the final, numerical stuff to another language such as Python.
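For instance, a minimal port of the final numerical computation might look like the Python sketch below (an illustration on my part, not part of the original Maxima program; it assumes numpy/scipy are available and that scipy's weibull_min with scale=1 matches Maxima's pdf_weibull/cdf_weibull):

import numpy as np
from scipy.stats import weibull_min

def renewal_discretized(shape, scale, t, n):
    # Rectangle-rule discretization of m(t) = F(t) + integrate(m(t-s)*f(s), s, 0, t),
    # mirroring generate_rhs_matrix_numerical in the Maxima code below.
    dt = t / n
    grid = np.linspace(0.0, t, n + 1)
    f = weibull_min.pdf(grid, shape, scale=scale)
    F = weibull_min.cdf(grid, shape, scale=scale)
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        A[i, i] = f[0] / 2        # coefficient of m(i*dt) itself
        A[i, 0] = f[i] / 2        # coefficient of m(0)
        for j in range(1, i):
            A[i, i - j] = f[j]    # interior rectangle terms
    A *= dt
    return grid, np.linalg.solve(np.eye(n + 1) - A, F)

grid, m = renewal_discretized(2, 1, 10, 100)
print(m[-1])                       # discretized m(10)
mu = weibull_min.mean(2, scale=1)  # large-t check: t/mu + sigma^2/(2*mu^2) - 1/2
sigma = weibull_min.std(2, scale=1)
print(10/mu + sigma**2/(2*mu**2) - 0.5)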
Thanks to OP for suggesting the problem, and S. Pappadeux for insightful discussions. Here is the plot I got comparing the discretized approximation (red) with the approximation for large t (blue). Trying some examples with different step sizes, I saw that the values tend to increase a little as step size gets smaller, so I think the red line is probably a little low, and the blue line might be more nearly correct.
Here is my Maxima code:
/* discretize weibull renewal function and formulate system of linear equations
* copyright 2020 by Robert Dodier
* I release this work under terms of the GNU General Public License
*
* This is a program for Maxima, a computer algebra system.
* http://maxima.sourceforge.net/
*/
"Definition of the renewal function m(t):" $
renewal_eq: m(t) = F(t) + 'integrate (m(t - s)*f(s), s, 0, t);
"Approximate integral equation with rectangle rule:" $
discretize_renewal (delta_t, k) :=
if equal(k, 0)
then m(0) = F(0)
else m(k*delta_t) = F(k*delta_t)
+ m(k*delta_t)*f(0)*(delta_t / 2)
+ sum (m((k - j)*delta_t)*f(j*delta_t)*delta_t, j, 1, k - 1)
+ m(0)*f(k*delta_t)*(delta_t / 2);
make_eqs (n, delta_t) :=
makelist (discretize_renewal (delta_t, k), k, 0, n);
make_vars (n, delta_t) :=
makelist (m(k*delta_t), k, 0, n);
"Discretized integral equation and variables for n = 4, delta_t = 1/2:" $
make_eqs (4, 1/2);
make_vars (4, 1/2);
make_eqs_vars (n, delta_t) :=
[make_eqs (n, delta_t), make_vars (n, delta_t)];
load (distrib);
subst_pdf_cdf (shape, scale, e) :=
subst ([f = lambda ([x], pdf_weibull (x, shape, scale)), F = lambda ([x], cdf_weibull (x, shape, scale))], e);
matrix_from (eqs, vars) :=
(augcoefmatrix (eqs, vars),
[submatrix (%%, length(%%) + 1), - col (%%, length(%%) + 1)]);
"Subsitute Weibull pdf and cdf for shape = 2 into discretized equation:" $
apply (matrix_from, make_eqs_vars (4, 1/2));
subst_pdf_cdf (2, 1, %);
"Just the right-hand side matrix:" $
rhs_matrix_from (eqs, vars) :=
(map (rhs, eqs),
augcoefmatrix (%%, vars),
[submatrix (%%, length(%%) + 1), col (%%, length(%%) + 1)]);
"Generate the right-hand side matrix, instead of extracting it from equations:" $
generate_rhs_matrix (n, delta_t) :=
[delta_t * genmatrix (lambda ([i, j], if i = 1 and j = 1 then 0
elseif j > i then 0
elseif j = i then f(0)/2
elseif j = 1 then f(delta_t*(i - 1))/2
else f(delta_t*(i - j))), n + 1, n + 1),
transpose (makelist (F(k*delta_t), k, 0, n))];
"Generate numerical right-hand side matrix, skipping over formulas:" $
generate_rhs_matrix_numerical (shape, scale, n, delta_t) :=
block ([f, F, numer: true], local (f, F),
f: lambda ([x], pdf_weibull (x, shape, scale)),
F: lambda ([x], cdf_weibull (x, shape, scale)),
[genmatrix (lambda ([i, j], delta_t * if i = 1 and j = 1 then 0
elseif j > i then 0
elseif j = i then f(0)/2
elseif j = 1 then f(delta_t*(i - 1))/2
else f(delta_t*(i - j))), n + 1, n + 1),
transpose (makelist (F(k*delta_t), k, 0, n))]);
"Solve approximate integral equation (shape = 3, t = 1) via LU decomposition:" $
fpprintprec: 4 $
n: 20 $
t: 1;
[AA, bb]: generate_rhs_matrix_numerical (3, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Iterative solution of approximate integral equation (shape = 3, t = 1):" $
xx: bb;
for i thru 10 do xx: AA . xx + bb;
xx - (AA.xx + bb);
xx_iterative: xx;
"Should find iterative and LU give same result:" $
xx_diff: xx_iterative - xx_by_lu[1];
sqrt (transpose(xx_diff) . xx_diff);
"Try shape = 2, t = 10:" $
n: 100 $
t: 10 $
[AA, bb]: generate_rhs_matrix_numerical (2, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Baxter, et al., Eq. 3 (for large values of t) compared to discretization:" $
/* L.A. Baxter, E.M. Scheuer, D.J. McConalogue, W.R. Blischke.
* "On the Tabulation of the Renewal Function,"
* Technometrics, vol. 24, no. 2 (May 1982).
* H(t) is their notation for the renewal function.
*/
H(t) := t/mu + sigma^2/(2*mu^2) - 1/2;
tx_points: makelist ([float (k/n*t), xx_by_lu[1][k, 1]], k, 1, n);
plot2d ([H(u), [discrete, tx_points]], [u, 0, t]), mu = mean_weibull(2, 1), sigma = std_weibull(2, 1);

How to interface Prolog CLP(R) with real vectors?

I'm using Prolog to solve simple geometrical equations.
For example, I can define all points p3 on a line passing through two points p1 and p2 as:
line((X1, Y1, Z1), (X2, Y2, Z2), T, (X3, Y3, Z3)) :-
    {(X2 - X1) * T = X3},
    {(Y2 - Y1) * T = Y3},
    {(Z2 - Z1) * T = Z3}.
And then a predicate like line((0, 0, 0), (1, 1, 1), _, (2, 2, 2)) is true.
But what I'd really want is to write down something like this:
line(P1, P2, T, P3) :- {(P2 - P1) * T = P3}.
Where P1, P2, and P3 are real vectors.
What's the best way of arriving at something similar? The best I've found so far is to write my own add, subtract and multiply predicates, but that's not as convenient.
Here is a solution where you still have to write a bit of code for each operator you want to handle, but which still provides nice syntax at the point of use.
Let's start with a notion of evaluating an arithmetic expression on vectors to a vector. This essentially applies arithmetic operations component-wise. (But you could add a dot product or whatever you like.)
:- use_module(library(clpr)).

vectorexpr_value((X,Y,Z), (X,Y,Z)).
vectorexpr_value(V * T, (X,Y,Z)) :-
    vectorexpr_value(V, (XV,YV,ZV)),
    { X = XV * T },
    { Y = YV * T },
    { Z = ZV * T }.
vectorexpr_value(L + R, (X,Y,Z)) :-
    vectorexpr_value(L, (XL,YL,ZL)),
    vectorexpr_value(R, (XR,YR,ZR)),
    { X = XL + XR },
    { Y = YL + YR },
    { Z = ZL + ZR }.
vectorexpr_value(L - R, (X,Y,Z)) :-
    vectorexpr_value(L, (XL,YL,ZL)),
    vectorexpr_value(R, (XR,YR,ZR)),
    { X = XL - XR },
    { Y = YL - YR },
    { Z = ZL - ZR }.
So for example:
?- vectorexpr_value(A + B, Result).
A = (_1784, _1790, _1792),
B = (_1808, _1814, _1816),
Result = (_1832, _1838, _1840),
{_1808=_1832-_1784},
{_1814=_1838-_1790},
{_1816=_1840-_1792} .
Given this, we can now define "equality" of vector expressions by "evaluating" both of them and asserting pointwise equality on the results. To make this look nice, we can define an operator for it:
:- op(700, xfx, ===).
This defines === as an infix operator with the same priority as the other equality operators =, =:=, etc. Prolog doesn't allow you to overload operators, so we made up a new one. You can think of the three = signs in the operator as expressing equality in three dimensions.
Here is the corresponding predicate definition:
ExprL === ExprR :-
    vectorexpr_value(ExprL, (XL,YL,ZL)),
    vectorexpr_value(ExprR, (XR,YR,ZR)),
    { XL = XR },
    { YL = YR },
    { ZL = ZR }.
And we can now define line/4 almost as you wanted:
line(P1, P2, T, P3) :-
    (P2 - P1) * T === P3.
Tests:
?- line((0,0,0), (1,1,1), Alpha, (2,2,2)).
Alpha = 2.0 ;
false.
?- line((0,0,0), (1,1,1), Alpha, (2,3,4)).
false.

Is it possible to use rk4 and rootfun in ode (package deSolve)

I'm trying to model a prey-prey-predator system using differential equations based on the LV model. For the sake of precision, I need to use the Runge-Kutta 4 (rk4) method.
But given the equations, some of the populations quickly become negative.
So I tried to use the events/root system of ode, but it seems that rk4 and rootfun are not compatible...
eventFunc <- function(t, y, p){
    if (y["N1"] < 0) { y["N1"] = 0 }
    if (y["N2"] < 0) { y["N2"] = 0 }
    if (y["P"] < 0) { y["P"] = 0 }
    return(y)
}

rootFunction <- function(t, y, p){
    if (y["P"] < 0) {y["P"] = 0}
    if (y["N1"] < 0) {y["N1"] = 0}
    if (y["N2"] < 0) {y["N2"] = 0}
    return(y)
}
out <- ode(func=Model_T2.2,
           method="rk4",
           y=state,
           parms=parameters,
           times=times,
           events = list(func = eventFunc, root = TRUE),
           rootfun = rootFunction)
This code gives me the following error:
Error in checkevents(events, times, Ynames, dllname) :
either 'events$time' should be given and contain the times of the events, if 'events$func' is specified and no root function or your solver does not support root functions
Is there any way to use rk4 and forbid the functions from going below 0?
Thanks in advance.
For those who might ask, here is the full code:
if(!require(ggplot2)) {
    install.packages("ggplot2"); require(ggplot2)}
if(!require(deSolve)) {
    install.packages("deSolve"); require(deSolve)}
if(!require(reshape2)) {   # needed for melt() below
    install.packages("reshape2"); require(reshape2)}
Model_T2.2 <- function(t, state, par){
    with(as.list(c(state, par)), {
        response1 <- (a1 * N1)/(1+(a1*h1*N1)+(a2*h2*N2))
        response2 <- (a2 * N2)/(1+(a1*h1*N1)+(a2*h2*N2))
        dN1 = r1*N1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
        dN2 = r2*N2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
        dP = ((E1 * response1) + (E2 * response2)) * P - Mp
        # derivatives must be returned in the same order as the state vector (P, N1, N2)
        return(list(c(dP, dN1, dN2)))
    })
}
parameters <- c(
    r1=1.42, r2=0.9,
    A12=0.6, A21=0.5,
    K1=50, K2=50,
    a1=0.77, a2=0.77,
    b1=1, b2=1,
    h1=1.04, h2=1.04,
    o1=0, o2=0,
    Mp=0.22,
    E1=0.36, E2=0.36
)

## initial states
state <- c(
    P=10,
    N1=30,
    N2=30
)
times <- seq(0, 30, by=0.5)
out <- ode(func=Model_T2.2,
           method="rk4",
           y=state,
           parms=parameters,
           times=times,
           events = list(func = eventFunc, root = TRUE),
           rootfun = rootFunction)
md <- melt(as.data.frame(out), id.vars=1, measure.vars = c("N1", "N2", "P"))
pl <- ggplot(md, aes(x=time, y=value, colour=variable))
pl <- pl + geom_line() + geom_point() + scale_color_discrete(name="Population")
pl
And the result in a graph:
[Plot: evolution of the prey1, prey2 and predator populations]
As you can see, the population of predators becomes negative, which is clearly impossible in the real world.
Edit: missing variables, sorry about that.
This is a problem you will have with all explicit solvers like rk4. Reducing the time step will help, up to a point. Better to use a solver with an implicit method; lsoda seems universally available in one form or another.
Another way to explicitly force positive values is to parametrize them as exponentials. Set N1 = exp(U1), N2 = exp(U2); then, since dN = exp(U)*dU = N*dU, the ODE function code translates to:
N1 <- exp(U1)
N2 <- exp(U2)
response1 <- (a1)/(1+(a1*h1*N1)+(a2*h2*N2))
response2 <- (a2)/(1+(a1*h1*N1)+(a2*h2*N2))
dU1 = r1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
dU2 = r2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
dP = ((E1 * response1*N1) + (E2 * response2*N2)) * P - Mp
For the output you have then of course to reconstruct N1, N2 from the solutions U1, U2.
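If it helps, here is a minimal sketch of the same change of variables in Python with scipy (my illustration, not part of the original answer; the parameter values are copied from the question, and solve_ivp's "Radau" method is implicit, in the spirit of the recommendation above):

import numpy as np
from scipy.integrate import solve_ivp

par = dict(r1=1.42, r2=0.9, A12=0.6, A21=0.5, K1=50, K2=50,
           a1=0.77, a2=0.77, h1=1.04, h2=1.04, Mp=0.22, E1=0.36, E2=0.36)

def rhs(t, y, p):
    U1, U2, P = y
    N1, N2 = np.exp(U1), np.exp(U2)                  # positive by construction
    denom = 1 + p['a1']*p['h1']*N1 + p['a2']*p['h2']*N2
    resp1, resp2 = p['a1']/denom, p['a2']/denom      # response_i divided by N_i
    dU1 = p['r1']*(1 - (N1 + p['A12']*N2)/p['K1']) - resp1*P
    dU2 = p['r2']*(1 - (N1 + p['A21']*N2)/p['K2']) - resp2*P
    dP  = (p['E1']*resp1*N1 + p['E2']*resp2*N2)*P - p['Mp']
    return [dU1, dU2, dP]

y0 = [np.log(30), np.log(30), 10]                    # N1 = N2 = 30, P = 10
sol = solve_ivp(rhs, (0, 30), y0, method="Radau", args=(par,),
                t_eval=np.arange(0, 30.5, 0.5))
N1, N2 = np.exp(sol.y[0]), np.exp(sol.y[1])          # reconstruct populations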
Thanks to J_F, I am now able to run my L-V model.
The radau (not randau, as you mentioned) function does indeed accept a root function and events, and it implements an implicit Runge-Kutta method.
Thanks again; I hope this will help someone in the future.

Solve Linear System Over Finite Field with Module

Is there, in Sage, any instruction to solve a linear system of equations modulo p(x) (a polynomial over a finite field), where the system coefficients are polynomials over a finite field in any indeterminate? I know that for integers something like this exists, for example:
sage: I6 = IntegerModRing(6)
sage: M = random_matrix(I6, 4, 4)
sage: v = random_vector(I6, 4)
sage: M \ v
(4, 0, 2, 1)
Here is my code:
F.<a> = GF(2^4)
PR = PolynomialRing(F,'X')
X = PR.gen()
a11 = (a^2)*(X^3)+(a^11)*(X^2)+1
a12 = (a)*(X^4)+(a^13)*(X^3)+X+1
a13 = X^2+(a^13)*(X^3)+a*(X^2)+1
a21 = X^3
a22 = X+a
a23 = X^2+X^3+a*X
a31 = (a^12)*X+a*(X^2)
a32 = (a^8)*(X^2)+X^2+X^3
a33 = a*X + (a^2)*(X^3)
M = matrix([[a11,a12,a13],[a21,a22,a23],[a31,a32,a33]])
v = vector([(a^6)*(X^14)+X^13+X,a*(X^2)+(X^3)*(a^11)+X^2+X+a^12,(a^8)*(X^7)+a*(X^2)+(a^12)* (X^13)+X^3+X^2+X+1])
p = (a^2 + a)*X^3 + (a + 1)*X^2 + (a^2 + 1)*X + 1 # plays the role of 6 in the first code
I'm trying
matrix(PolynomialModRing(p),M)\vector(PolynomialModRing(p),v)
but PolynomialModRing does not exist...
EDIT
Someone else told me to use
R.<Xbar> = PR.quotient(PR.ideal(p))
# change your formulas to Xbar instead of X
A \ b
# ==> (a^3 + a, a^2, (a^3 + a^2)*Xbar^2 + (a + 1)*Xbar + a^3 + a)
This works fine, but now I'm trying to apply the Chinese Remainder Theorem after that, so I defined
q = X^18 + a*X^15 + a*X^12 + X^11 + (a + 1)*X^2 + a
r = a^3*X^3 + (a^3 + a^2 + a)*X^2 + (a^2 + 1)*X + a^3 + a^2 + a
# p, q and r are relatively prime
and I'm trying ...
crt([(A\b)[0],(A\b)[1],(A\b)[2]],[p,q,r])
but I get
File "element.pyx", line 344, in sage.structure.element.Element.getattr (sage/structure/element.c:3871)
File "misc.pyx", line 251, in sage.structure.misc.getattr_from_other_class (sage/structure/misc.c:1606)
AttributeError: 'PolynomialQuotientRing_field_with_category.element_class' object has no attribute 'quo_rem'
I think the problem is the change from Xbar to X.
Here is my complete example for integers:
# 2x-3y+2z=21
# x+4y-z=1
# -x+2y+z=17
A = matrix([[2,-3,2],[1,4,-1],[-1,2,1]])
b = vector([21,1,17])
p = [17,11,13]
# solve the system modulo each prime (solve_right is the same as \)
ylist = [matrix(IntegerModRing(p[i]), A).solve_right(vector(IntegerModRing(p[i]), b)) for i in range(3)]
# recombine each component with the Chinese Remainder Theorem
[CRT([int(ylist[i][j]) for i in range(3)], p) for j in range(3)]
Maybe... this is what you want? Continuing your example:
G = F.extension(p) # this is what you want for "PolynomialModRing(p)"
matrix(G,M)\vector(G,v)
which outputs
(a^3 + a, a^2, (a^3 + a^2)*X^2 + (a + 1)*X + a^3 + a)
In your question you ask about systems "where the system coefficients are polynomials over finite field in any indeterminate", so what I'm doing above is NOT what you have actually asked; but that would be a weird question to ask given your example, so I'm going to just try to read your mind... :-)
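As for the quo_rem error in the EDIT: one guess (an assumption on my part, not something I've run against your exact setup) is that crt wants plain polynomials, while the entries of A \ b live in the quotient ring R, whose elements don't implement quo_rem. Lifting them back to PR first should get past the AttributeError:

sols = A \ b                        # entries live in R = PR/(p)
lifted = [s.lift() for s in sols]   # representatives back in PR
crt(lifted, [p, q, r])              # p, q, r are plain polynomials in PR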

Trilateration with limits?

I'm in need of help solving an issue. The problem came up in one of my small robot experiments: each little robot has the ability to approximate the distance from itself to an object, but the approximation I'm getting is way too rough, and I'm hoping to calculate something more accurate.
So:
Input: a list of vertices (v_1, v_2, ..., v_n) (the robots), and a vertex v_* (the object)
Output: the coordinates of the unknown vertex v_*
The coordinates of each vertex v_1 to v_n are well known (supplied by calling getX() and getY() on the vertex), and it's possible to get the approximate range to v_* by calling getApproximateDistance(v_*). This function returns two values, minDistance and maxDistance; the actual distance lies between these.
So what I've been trying to do to obtain the coordinates of v_* is to use trilateration. However, I can't seem to find a formula for doing trilateration with limits (lower and upper bounds), so that's really what I'm looking for (I'm not really good enough at math to figure it out myself).
Note: is triangulation the way to go instead?
Note: I would also love to know a way to make performance/accuracy trade-offs.
An example of data:
Vertex   getX()   getY()   minDistance   maxDistance
v_1      2        2        0.5           1
v_2      1        2        0.3           1
v_3      1.5      1        0.3           0.5
Picture to show data: http://img52.imageshack.us/img52/6414/unavngivetcb.png
It's obvious that the approximation for v_1 can be better than [0.5; 1], as the figure that the above data creates is a small cut of an annulus (limited by v_3). But how would I calculate that, and possibly find the approximate position within that figure (which is possibly concave)?
Would this be better suited for MathOverflow?
I would go for a simple discrete approach. The implicit formula for an annulus is trivial, and the intersection of multiple annuli, even if there are many of them, can be computed somewhat efficiently with a scanline-based approach.
For high accuracy with fast computation, an option is a multiresolution approach (i.e., first starting in low-res and then recomputing in high-res only the samples that are close to a valid point).
A small Python toy I wrote can generate a 400x400 pixel image of the intersection area in about 0.5 secs (this is the kind of computation that would get a 100x speedup if done in C).
# x, y, r0, r1
data = [(2.0, 2.0, 0.5, 1.0),
        (1.0, 2.0, 0.3, 1.0),
        (1.5, 1.0, 0.3, 0.5)]

# bounding box of the zone where all annuli can overlap
x0 = max(x - r1 for x, y, r0, r1 in data)
y0 = max(y - r1 for x, y, r0, r1 in data)
x1 = min(x + r1 for x, y, r0, r1 in data)
y1 = min(y + r1 for x, y, r0, r1 in data)

def hit(x, y):
    # point is valid if it lies inside every annulus
    for cx, cy, r0, r1 in data:
        if not (r0**2 <= ((x - cx)**2 + (y - cy)**2) <= r1**2):
            return False
    return True

res = 400
step = 16
white = b"\xff"
grey = b"\xc0"
black = b"\x00"
img = [black] * (res * res)

# Low-res pass
cells = {}
for i in range(0, res, step):
    y = y0 + i * (y1 - y0) / res
    for j in range(0, res, step):
        x = x0 + j * (x1 - x0) / res
        if hit(x, y):
            for h in range(-step*2, step*3, step):
                for v in range(-step*2, step*3, step):
                    cells[(i+v, j+h)] = True

# High-res pass
for i in range(0, res, step):
    for j in range(0, res, step):
        if cells.get((i, j), False):
            img[i * res + j] = grey
            img[(i + step - 1) * res + j] = grey
            img[(i + step - 1) * res + (j + step - 1)] = grey
            img[i * res + (j + step - 1)] = grey
            for v in range(step):
                y = y0 + (i + v) * (y1 - y0) / res
                for h in range(step):
                    x = x0 + (j + h) * (x1 - x0) / res
                    if hit(x, y):
                        img[(i + v)*res + (j + h)] = white

open("result.pgm", "wb").write(("P5\n%i %i 255\n" % (res, res)).encode() +
                               b"".join(img))
Another interesting option could be using a GPU, if available. Starting from a white picture and drawing in black the exterior of each annulus will leave, at the end, the intersection area in white.
For example, with Python/Qt the code for doing this computation is simply:
img = QImage(res, res, QImage.Format_RGB32)
dc = QPainter(img)
dc.fillRect(0, 0, res, res, QBrush(QColor(255, 255, 255)))
dc.setPen(Qt.NoPen)
dc.setBrush(QBrush(QColor(0, 0, 0)))
for x, y, r0, r1 in data:
    xa1 = (x - r1 - x0) * res / (x1 - x0)
    xb1 = (x + r1 - x0) * res / (x1 - x0)
    ya1 = (y - r1 - y0) * res / (y1 - y0)
    yb1 = (y + r1 - y0) * res / (y1 - y0)
    xa0 = (x - r0 - x0) * res / (x1 - x0)
    xb0 = (x + r0 - x0) * res / (x1 - x0)
    ya0 = (y - r0 - y0) * res / (y1 - y0)
    yb0 = (y + r0 - y0) * res / (y1 - y0)
    p = QPainterPath()
    p.addEllipse(QRectF(xa0, ya0, xb0-xa0, yb0-ya0))
    p.addEllipse(QRectF(xa1, ya1, xb1-xa1, yb1-ya1))
    p.addRect(QRectF(0, 0, res, res))
    dc.drawPath(p)
and the computation part for an 800x800 resolution image takes about 8ms (and I'm not sure it's hardware accelerated).
If only the barycenter of the intersection is to be computed, then there is no memory allocation at all. For example, a "brute-force" approach is just a few lines of C:
typedef struct TReading {
    double x, y, r0, r1;
} Reading;

int hit(double xx, double yy,
        Reading *readings, int num_readings)
{
    while (num_readings--)
    {
        double dx = xx - readings->x;
        double dy = yy - readings->y;
        double d2 = dx*dx + dy*dy;
        if (d2 < readings->r0 * readings->r0) return 0;
        if (d2 > readings->r1 * readings->r1) return 0;
        readings++;
    }
    return 1;
}

int computeLocation(Reading *readings, int num_readings,
                    int resolution,
                    double *result_x, double *result_y)
{
    // Compute bounding box of interesting zone
    double x0 = -1E20, y0 = -1E20, x1 = 1E20, y1 = 1E20;
    for (int i=0; i<num_readings; i++)
    {
        if (readings[i].x - readings[i].r1 > x0)
            x0 = readings[i].x - readings[i].r1;
        if (readings[i].y - readings[i].r1 > y0)
            y0 = readings[i].y - readings[i].r1;
        if (readings[i].x + readings[i].r1 < x1)
            x1 = readings[i].x + readings[i].r1;
        if (readings[i].y + readings[i].r1 < y1)
            y1 = readings[i].y + readings[i].r1;
    }

    // Scan processing
    double ax = 0, ay = 0;
    int total = 0;
    for (int i=0; i<=resolution; i++)
    {
        double yy = y0 + i * (y1 - y0) / resolution;
        for (int j=0; j<=resolution; j++)
        {
            double xx = x0 + j * (x1 - x0) / resolution;
            if (hit(xx, yy, readings, num_readings))
            {
                ax += xx; ay += yy; total += 1;
            }
        }
    }
    if (total)
    {
        *result_x = ax / total;
        *result_y = ay / total;
    }
    return total;
}
And on my PC it can compute the barycenter with resolution = 100 in 0.08 ms (x=1.50000, y=1.383250) or with resolution = 400 in 1.3 ms (x=1.500000, y=1.383308). Of course, a double-step speedup could be implemented even for the barycenter-only version.
I would switch from "max/min" to trying to minimize an error function. That gets you to the problem discussed at Finding a point that best fits the intersection of n spheres which is more tractable than intersecting a series of complicated shapes. (And what if one robot's sensor is messed up and it gives an impossible value? That variation will still usually give a reasonable answer.)
Not sure about your case, but in a typical robotics application you're going to be reading sensors periodically and crunching the data. If that's the case, you're trying to estimate the location based on noisy data and that's a common problem. As a simple (less rigorous) method, you could take the existing position and adjust it toward or away from each known point. Take the measured distance to target minus the present distance to target, multiply that delta (error) by some value between 0 and 1, and move your estimated position that much toward the target. Repeat for each target. Then repeat each time you get a new set of measurements. The multiplier will have an effect like a low-pass filter, smaller values will give you a more stable position estimate with slower response to movement. For the distance, use the average of the min and max. If you can put tighter bounds on the range to one target, you can increase the multiplier closer to 1 for just that target.
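For example, a minimal sketch of that adjustment loop (the names are illustrative, not from any library; the gain plays the low-pass-filter role described above):

def update_estimate(est, targets, gain=0.2):
    # One relaxation step; targets = [(x, y, min_dist, max_dist), ...]
    ex, ey = est
    for tx, ty, dmin, dmax in targets:
        measured = (dmin + dmax) / 2          # midpoint of the range bounds
        dx, dy = tx - ex, ty - ey
        present = (dx*dx + dy*dy) ** 0.5
        if present == 0:
            continue
        error = present - measured            # > 0 means we sit too far away
        ex += gain * error * dx / present     # move a fraction of the error
        ey += gain * error * dy / present     # toward (or away from) the target
    return ex, ey

# Data from the question's example; repeat as new measurements arrive.
targets = [(2.0, 2.0, 0.5, 1.0), (1.0, 2.0, 0.3, 1.0), (1.5, 1.0, 0.3, 0.5)]
est = (1.5, 1.5)
for _ in range(50):
    est = update_estimate(est, targets)
print(est)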
This is of course a crude position estimator. The math guys can probably be more rigorous, but also more complicated. And the solution doesn't have anything to do with intersecting areas and working with geometric shapes.
