Mathematical logic in Julia

Does JuMP support direct use of mathematical logic in a model?
I know it supports indicator constraints, but how about these conditionals?
For example:
1- A bi-conditional indicator: a binary variable associated with the satisfaction and
non-satisfaction status of a constraint with respect to the current solution.
δ_i = 1 ⟺ sum(j, a_ij * x_j) <= b_i
2- Either-or constraints, ensuring that at least one of two constraints is met.
3- If-Then-Else constraints, describing a situation where we want to ensure that if constraint C1
holds, then constraint C2 must hold; otherwise, constraint C3 must hold.

Does Julia support direct use of mathematical logic in a model?
I assume you mean JuMP here. (JuMP is a package for mathematical optimization written in Julia.)
JuMP does not have direct syntax for these three constraints.
However, they can all be formulated with some tricks.
The Mosek Modeling Cookbook is a good collection of tricks:
https://docs.mosek.com/modeling-cookbook/mio.html.
There are a variety of textbooks and lecture notes on modeling integer programs:
https://download.aimms.com/aimms/download/manuals/AIMMS3OM_IntegerProgrammingTricks.pdf
Online courses: https://www.coursera.org/learn/discrete-optimization
The JuMP documentation also has some tips.
Point (2) is https://jump.dev/JuMP.jl/stable/tutorials/Mixed-integer%20linear%20programs/tips_and_tricks/#Big-M-Disjunctive-Constraints-(OR)

(1) The constraint
δ(i)=1 ⇔ sum(j, a(i,j)*x(j)) ≤ b(i)
can be stated as two indicator constraints:
δ(i)=1 ⇒ sum(j, a(i,j)*x(j)) ≤ b(i)
δ(i)=0 ⇒ sum(j, a(i,j)*x(j)) ≥ b(i) + ε
δ(i) ∈ {0,1}
where ε is a small tolerance (e.g. 0.0001, or 1 if the data are integer). I typically drop the ε and let the problem be slightly ambiguous at equality; this way the solver can pick the best. If the sum is long, use an intermediate variable to prevent duplicating it.
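As a sketch (plain Python, not JuMP), the indicator pair can be realized with a big-M formulation and checked by brute force on a small integer grid. All data below (a, b, M, eps) are illustrative assumptions:

```python
# Brute-force check of a big-M realization of the two indicator constraints.
a, b = [2, 3], 7       # one row a_i and right-hand side b_i (assumed data)
M, eps = 100, 1        # big-M constant; eps = 1 is valid for integer data

def encoding_ok(x, delta):
    s = sum(ai * xi for ai, xi in zip(a, x))
    c1 = s <= b + M * (1 - delta)    # delta = 1  =>  a.x <= b
    c2 = s >= b + eps - M * delta    # delta = 0  =>  a.x >= b + eps
    return c1 and c2

# On a small integer grid, exactly one value of delta is feasible for each x,
# and it matches the truth value of a.x <= b -- i.e. the biconditional holds.
for x0 in range(-3, 4):
    for x1 in range(-3, 4):
        s = 2 * x0 + 3 * x1
        assert [d for d in (0, 1) if encoding_ok((x0, x1), d)] \
               == [1 if s <= b else 0]
print("biconditional encoding verified")
```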
(2) Either-or-constraints. Same idea:
δ=1 ⇒ linear constraint 1
δ=0 ⇒ linear constraint 2
δ ∈ {0,1}
At least one of constraints 1 and 2 will be enforced.
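A minimal big-M sketch of this either-or pattern (a single scalar x for brevity; the bounds b1, b2 and the constant M are assumptions):

```python
# Big-M sketch of either-or: delta = 0 enforces x <= b1, delta = 1 enforces x <= b2.
M = 100
b1, b2 = 5, 8

def satisfies_encoding(x, delta):
    return x <= b1 + M * delta and x <= b2 + M * (1 - delta)

# Some delta makes x feasible exactly when at least one original constraint holds.
for x in range(-10, 50):
    feasible = [d for d in (0, 1) if satisfies_encoding(x, d)]
    assert bool(feasible) == (x <= b1 or x <= b2)
print("either-or encoding verified")
```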
(3) If-Then-Else Constraints. I.e.
If constraint 1 then constraint 2 else constraint 3
Again similar to what we have above:
δ=1 ⇒ linear constraint 1
δ=1 ⇒ linear constraint 2
δ=0 ⇒ not linear constraint 1
δ=0 ⇒ linear constraint 3
A more complicated case can look like:
δ=1 ⇒ y=b
δ=0 ⇒ y≠b
The second implication needs to be split into two constraints using an additional binary variable. I'll leave that as an exercise.
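For the y ≠ b case (assuming y is integer; the values of M and b below are illustrative), the split uses one extra binary γ choosing between y ≤ b−1 and y ≥ b+1. A brute-force sketch:

```python
# Sketch of the disjunction  y != b  <=>  y <= b-1  OR  y >= b+1
# with an extra binary gamma (integer y assumed; M, b are illustrative).
M, b = 100, 10

def y_not_equal_b(y, gamma):
    # gamma = 0 => y <= b - 1 ;  gamma = 1 => y >= b + 1
    return y <= b - 1 + M * gamma and y >= b + 1 - M * (1 - gamma)

for y in range(-20, 40):
    feasible = any(y_not_equal_b(y, g) for g in (0, 1))
    assert feasible == (y != b)
print("y != b encoding verified")
```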

Related

Does AC-3 solve path consistency?

When I read the pseudocode of AC-3 in Artificial Intelligence: A Modern Approach, I thought it solves path consistency as well as arc consistency. But the book says path consistency is solved by an algorithm PC-2. Did I miss something?
Why is AC-3 not sufficient for solving path consistency?
Here's code for AC-3
function AC-3(csp) returns false if an inconsistency is found and true otherwise
  inputs: csp, a binary CSP with components (X, D, C)
  local variables: queue, a queue of arcs, initially all the arcs in csp
  while queue is not empty do
    (Xi, Xj) ← REMOVE-FIRST(queue)
    if REVISE(csp, Xi, Xj) then
      if size of Di = 0 then return false
      for each Xk in Xi.NEIGHBORS - {Xj} do
        add (Xk, Xi) to queue
  return true

function REVISE(csp, Xi, Xj) returns true iff we revise the domain of Xi
  revised ← false
  for each x in Di do
    if no value y in Dj allows (x, y) to satisfy the constraint between Xi and Xj then
      delete x from Di
      revised ← true
  return revised
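Here is one way to render that pseudocode in Python (a sketch; the CSP representation as a domains dict plus a dict of directed-arc predicates is my own choice, not from AIMA):

```python
from collections import deque

# domains: variable -> set of values; constraints: (Xi, Xj) -> predicate on (x, y)

def revise(domains, constraints, xi, xj):
    """Remove values of xi with no support in xj; return True if domain changed."""
    revised = False
    pred = constraints[(xi, xj)]
    for x in set(domains[xi]):
        if not any(pred(x, y) for y in domains[xj]):
            domains[xi].discard(x)
            revised = True
    return revised

def ac3(domains, constraints):
    """Return False iff some domain is wiped out (inconsistency found)."""
    queue = deque(constraints.keys())          # initially all the arcs
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False
            # re-examine arcs (xk, xi) for every other neighbor xk of xi
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# Example: X < Y with both domains {1, 2, 3} prunes 3 from X and 1 from Y.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda x, y: x < y,
               ("Y", "X"): lambda y, x: x < y}
assert ac3(domains, constraints)
```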
Thanks in advance :)
I think I've figured out where the problem is. I misunderstood the meaning of path consistency.
I thought
(1) {Xi, Xj} is path-consistent with Xk
is equivalent to
(2) Xi is arc-consistent with Xj, Xi is arc-consistent with Xk, and Xj is arc-consistent with Xk.
That's why I thought AC-3 was sufficient for solving path consistency. But it turns out not.
Here is the meaning of (1) and (2):
(1) means that, for every pair of assignments {a, b} consistent with the constraint on {Xi, Xj}, there is a value c in the domain of Xk such that {a, c} and {b, c} satisfy the constraints on {Xi, Xk} and {Xj, Xk}.
(2) could be explained in this way (which makes it easier to see the difference): for every pair of assignments {a, b} consistent with the constraint on {Xi, Xj} (Xi is arc-consistent with Xj; this one may not be accurate, but will do), there is a c in the domain of Xk such that {a, c} satisfies the constraint on {Xi, Xk} (Xi is arc-consistent with Xk), and there is a d in the domain of Xk such that {b, d} satisfies the constraint on {Xj, Xk} (Xj is arc-consistent with Xk).
It's easy to see the difference now: in the explanation of (2), c and d could be different values in the domain of Xk. Only when c equals d is (2) equivalent to (1).
So AC-3 is only sufficient for enforcing (2); it is too lax to solve path consistency.
Who can tell me whether my understanding is right, this time? Thx :)
It should be: {b, d} satisfies the constraint on {Xj, Xk} (Xj is arc-consistent with Xk).

Algorithm for finding an equidistributed solution to a linear congruence system

I face the following problem in a cryptographical application: I have given a set of linear congruences
a[1]*x[1]+a[2]*x[2]+a[3]*x[3] == d[1] (mod p)
b[1]*x[1]+b[2]*x[2]+b[3]*x[3] == d[2] (mod p)
c[1]*x[1]+c[2]*x[2]+c[3]*x[3] == d[3] (mod p)
Here, x is unknown and a, b, c, d are given.
The system is most likely underdetermined, so I have a large solution space. I need an algorithm that finds an equidistributed solution (that means equidistributed in the solution space) to that problem using a pseudo-random number generator (or fails).
Most standard algorithms for linear equation systems that I know from my linear algebra courses are not directly applicable to congruences as far as I can see...
My current, "safe" algorithm works as follows: find all variables that appear in only one equation and assign them a random value. Then, if in each row only one variable remains unassigned, assign its value according to the congruence. Otherwise, fail.
Can anyone give me a clue how to solve this problem in general?
You can use Gaussian elimination and similar algorithms just like you learned in your linear algebra courses, but with all arithmetic performed mod p (p is a prime). The one important difference is in the definition of "division": to compute a / b you instead compute a * (1/b) (in words, "a times b inverse"). Consider the following changes to the math operations normally used:
addition: a+b becomes a+b mod p
subtraction: a-b becomes a-b mod p
multiplication: a*b becomes a*b mod p
division: a/b becomes: if p divides b, then "error: divide by zero", else a * (1/b) mod p
To compute the inverse of b mod p you can use the extended Euclidean algorithm, or alternatively compute b**(p-2) mod p (Fermat's little theorem).
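A stdlib-only sketch of this recipe: row-reduce over GF(p) using Fermat inverses, then assign the free variables uniformly at random, which yields a uniformly distributed solution of the system (the helper name solve_mod_p and the example data are my own):

```python
import random

def solve_mod_p(A, d, p, rng=random):
    """Solve A x = d (mod p), p prime; free variables get uniform random values.
    Returns a solution list, or None if the system is inconsistent."""
    n, m = len(A), len(A[0])
    M = [row[:] + [di] for row, di in zip(A, d)]       # augmented matrix
    pivots, row, col = [], 0, 0
    while row < n and col < m:
        piv = next((r for r in range(row, n) if M[r][col] % p), None)
        if piv is None:
            col += 1
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], p - 2, p)               # Fermat inverse
        M[row] = [v * inv % p for v in M[row]]
        for r in range(n):                              # full RREF elimination
            if r != row and M[r][col] % p:
                f = M[r][col]
                M[r] = [(vr - f * vp) % p for vr, vp in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
        col += 1
    for r in range(row, n):                             # 0 = nonzero => no solution
        if M[r][m] % p:
            return None
    x = [0] * m
    free = [c for c in range(m) if c not in pivots]
    for c in free:
        x[c] = rng.randrange(p)        # uniform free choice => uniform solution
    for r, c in enumerate(pivots):
        x[c] = (M[r][m] - sum(M[r][j] * x[j] for j in free)) % p
    return x

# Example: one equation in three unknowns mod 7: x0 + 2*x1 + 3*x2 == 4
sol = solve_mod_p([[1, 2, 3]], [4], 7)
assert (sol[0] + 2 * sol[1] + 3 * sol[2]) % 7 == 4
```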
Rather than trying to roll this yourself, look for an existing library or package. I think maybe Sage can do this, and certainly Mathematica, and Maple, and similar commercial math tools can.

Non-linear congruence solver (modular arithmetic)

Is there an algorithm that can solve a non-linear congruence in modular arithmetic? I read that such a problem is classified as NP-complete.
In my specific case the congruence is of the form:
x^3 + ax + b ≡ 0 (mod 2^64)
where a and b are known constants and I need to solve it for x.
Look at Hensel's lemma.
Yes, the general problem is NP-Complete.
This is because Boolean algebra is arithmetic modulo 2! So any 3SAT formula can be rewritten as an equivalent arithmetic expression in arithmetic modulo 2. Checking whether a 3SAT formula is satisfiable becomes equivalent to checking whether the corresponding arithmetic expression can be 1 or not.
For example, a AND b becomes a*b in arithmetic.
NOT a is 1-a, etc.
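This translation is easy to check exhaustively; a tiny Python sketch (the OR encoding a + b + a*b is one common choice, not stated above):

```python
# Boolean connectives as arithmetic mod 2.
AND = lambda a, b: (a * b) % 2
NOT = lambda a: (1 - a) % 2
OR  = lambda a, b: (a + b + a * b) % 2   # a OR b = a + b + ab (mod 2)

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == 1 - a
print("mod-2 encoding of AND/OR/NOT verified")
```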
But in your case, talking about NP-completeness makes no sense, as it is one specific problem.
Also, lhf is right: Hensel's lifting lemma can be used. The basic idea is that to solve P(x) = 0 mod 2^(e+1) we can solve P(x) = 0 mod 2^e and 'lift' those solutions to mod 2^(e+1).
Here is a pdf explaining how to use that: http://www.cs.xu.edu/math/math302/04f/PolyCongruences.pdf
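A sketch of that lifting for P(x) = x^3 + ax + b modulo 2^64. Since P′ can be even at a root, this version lifts whole solution sets one bit at a time (every root mod 2^(e+1) reduces to a root mod 2^e, so each root r is tested at r and r + 2^e) rather than using the classic unique-lift formula; the example values of a and b are arbitrary:

```python
def solve_cubic_mod_2_64(a, b, bits=64):
    """All x with x^3 + a*x + b == 0 (mod 2**bits), by bitwise Hensel lifting."""
    P = lambda x, mod: (x * x * x + a * x + b) % mod
    roots = [x for x in range(2) if P(x, 2) == 0]        # start mod 2
    for e in range(1, bits):
        mod = 1 << (e + 1)
        # each root s mod 2^e lifts to s or s + 2^e mod 2^(e+1), if either works
        roots = [r for s in roots for r in (s, s + (1 << e)) if P(r, mod) == 0]
    return roots

roots = solve_cubic_mod_2_64(0, -1)        # x^3 - 1 == 0 (mod 2^64)
assert all((r * r * r - 1) % (1 << 64) == 0 for r in roots)
```

Note the candidate set can blow up when the polynomial has many roots modulo small powers of 2; this is only a sketch, not a worst-case-safe solver.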

Get branch points of equation

Suppose I have a general function f(z, a), where z and a are both real, and f takes real values for all z except in some interval (z1, z2), where it becomes complex. How do I determine z1 and z2 (which will be in terms of a) using Mathematica (and is this possible)? What are the limitations?
For a test example, consider the function f[z_, a_] = Sqrt[(z - a) (z - 2 a)]. For real z and a, this takes real values except in the interval (a, 2a), where it becomes imaginary. How do I find this interval in Mathematica?
In general, I'd like to know how one would go about finding it mathematically for a general case. For a function of just two variables like this, it would probably be straightforward to do a contour plot of the Riemann surface and observe the branch cuts. But what if it is a multivariate function? Is there a general approach one can take?
What you have appears to be a Riemann surface parametrized by 'a'. Consider the algebraic (or analytic) relation g(a,z)=0 that would be spawned from this branch of a parametrized Riemann surface. In this case it is simply g^2 - (z - a)*(z - 2*a) == 0. More generally it might be obtained using GroebnerBasis, as below (no guarantee this will always work without some amount of user intervention).
grelation = First[GroebnerBasis[g - Sqrt[(z - a)*(z - 2*a)], {z, a, g}]]
Out[472]= 2 a^2 - g^2 - 3 a z + z^2
A necessary condition for the branch points, as functions of the parameter 'a', is that the zero set for 'g' not give a (single valued) function in a neighborhood of such points. This in turn means that the partial derivative of this relation with respect to g vanishes (this is from the implicit function theorem of multivariable calculus). So we find where grelation and its derivative both vanish, and solve for 'z' as a function of 'a'.
Solve[Eliminate[{grelation == 0, D[grelation, g] == 0}, g], z]
Out[481]= {{z -> a}, {z -> 2 a}}
Daniel Lichtblau
Wolfram Research
For polynomial systems (and some class of others), Reduce can do the job.
E.g.
In[1]:= Reduce[Element[{a, z}, Reals]
&& !Element[Sqrt[(z - a) (z - 2 a)], Reals], z]
Out[1]= (a < 0 && 2a < z < a) || (a > 0 && a < z < 2a)
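That Reduce output can be spot-checked numerically (a stdlib Python sketch, with a few arbitrary values of a): the radicand (z − a)(z − 2a) is negative, making the Sqrt imaginary, exactly on (a, 2a) for a > 0 and on (2a, a) for a < 0:

```python
def radicand_negative(z, a):
    """True iff (z - a)(z - 2a) < 0, i.e. Sqrt[(z-a)(z-2a)] is imaginary."""
    return (z - a) * (z - 2 * a) < 0

def in_reduce_interval(z, a):
    """The interval produced by Reduce above."""
    return (a < 0 and 2 * a < z < a) or (a > 0 and a < z < 2 * a)

for a in (-1.5, -0.5, 0.5, 2.0):
    for i in range(-400, 401):
        z = i / 100
        assert radicand_negative(z, a) == in_reduce_interval(z, a)
print("branch interval confirmed numerically")
```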
This type of approach also works (often giving very complicated solutions for functions with many branch cuts) for other combinations of elementary functions I checked.
To find the branch cuts (as opposed to the simple class of branch points you're interested in) in general, I don't know of a good approach. The best place to find the detailed conventions that Mathematica uses is the functions.wolfram.com site.
I do remember reading a good paper on this a while back... I'll try to find it....
That's right! The easiest approach I've seen for branch cut analysis uses the unwinding number. There's a paper "Reasoning about the elementary functions of complex analysis" about this in the journal "Artificial Intelligence and Symbolic Computation". It and similar papers can be found on one of the authors' homepages: http://www.apmaths.uwo.ca/~djeffrey/offprints.html.
For general functions you cannot make Mathematica calculate it.
Even for polynomials, finding an exact answer takes time.
I believe Mathematica uses some sort of quantifier elimination in Reduce, which takes time.
Without any restrictions on your functions (are they polynomials, continuous, smooth?)
one can easily construct functions which Mathematica cannot simplify further:
f[x_, y_] := Abs[Zeta[y + 0.5 + x*I]]*I
If this function is real for some real x and some y with -0.5 < y < 0 or 0 < y < 0.5,
then you will have found a counterexample to the Riemann hypothesis,
and I'm sure Mathematica cannot give a correct answer.

Russell's Paradox [closed]

Let X be the set of all sets that do not contain themselves. Is X a member of X?
In ZFC, either the axiom of foundation [as mentioned] or the axiom schema of comprehension (separation) will prohibit this. The first, for obvious reasons; the second, since it basically says that for given z and first-order property P, you can construct { x ∈ z : P(x) }, but to generate the Russell set, you would need z = V (the class of all sets), which is not a set (i.e. cannot be generated from any of the given axioms).
In New Foundations (NF), "x ∉ x" is not a stratified formula, and so again we cannot define the Russell set. Somewhat amusingly, however, V is a set in NF.
In von Neumann--Bernays--Gödel set theory (NBG), the class R = { x : x is a set and x ∉ x } is definable. We then ask whether R ∈ R; if so, then also R ∉ R, giving a contradiction. Thus we must have R ∉ R. But there is no contradiction here, since for any given class A, A ∉ R implies either A ∈ A or A is a proper class. Since R ∉ R, we must simply have that R is a proper class.
Of course, the class R = { x : x ∉ x }, without the restriction, is simply not definable in NBG.
Also of note is that the above procedure is formally constructible as a proof in NBG, whereas in ZFC one has to resort to meta-reasoning.
The question is ill-posed in the standard ZFC (Zermelo-Fraenkel + axiom of Choice) set theory because the object thus defined is not a set.
Since (again, assuming standard ZFC) your class {x : x ∉ x} is not a set, the answer becomes no, it's not an element of itself (even as a class), since only sets can be elements of classes or sets.
By the way, as soon as you agree to the axiom of foundation, no set can be an element of itself.
Of course the nice thing about math is you can choose whichever axioms you want :) but believing in paradoxes is just weird.
The most elegant proof I've ever seen resembles Russell's paradox closely.
Theorem (Cantor, I suppose).
Let X be a set, and 2^X the set of its subsets. Then card(X) < card(2^X).
Proof. Surely card(X) <= card(2^X), since there is a trivial bijection between X and the singletons in 2^X. We must prove that card(X) != card(2^X).
Suppose there is a bijection between X and 2^X. Then each xk in X is mapped to a set Ak in 2^X.
x1 ---> A1
x2 ---> A2
...
xk ---> Ak
...
For each xk there are two possibilities: either xk belongs to Ak, or it does not. Let M be the set of all those xk that do not belong to their corresponding set Ak. M is a subset of X, so there must exist an element m of X which is mapped to M by the bijection.
Does m belong to M? If it does, then it does not, for M is the set of those x that do not belong to the set they're mapped to. If it does not, then it does, for M contains all such x's. This contradiction stems from the assumption that a bijection exists. Thus a bijection cannot exist, the two cardinalities are different, and the theorem is proved.
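For a finite set the diagonal argument can be checked exhaustively; a Python sketch (X = {0, 1, 2} is an arbitrary choice) verifying that no map X → 2^X ever hits the diagonal set M, so none is surjective:

```python
from itertools import product, combinations

X = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(X, r)]            # 2^X: all 8 subsets

for images in product(subsets, repeat=len(X)):     # every map x -> A_x
    f = dict(zip(X, images))
    M = frozenset(x for x in X if x not in f[x])   # the diagonal set
    assert M not in f.values()                     # M is never in the image
print("no surjection X -> 2^X; checked all", len(subsets) ** len(X), "maps")
```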
