Z3 soft constraints

I am confused by the use of soft constraints in z3. When running this program, I obtain the following output.
from z3 import *
o = Solver()
expr = f"""
(declare-const v Real)
(declare-const v1 Bool)
(assert-soft v1)
(assert (= v1 (<= 0 v)))
(assert (> 10 v))
"""
p = parse_smt2_string(expr)
o.add(p)
print(o.check())
print(o.model())
output:
[v1 = False, v = -1]
Does anyone know why the soft constraint is not satisfied, even though values of v exist that would allow it?
Similarly, this program does not return the expected output:
from z3 import *
o = Optimize()
expr = f"""
(declare-const v1 Real)
(declare-const e Bool)
(assert (= e (<= 6 v1)))
(assert-soft e)
"""
p = parse_smt2_string(expr)
o.add(p)
print(o.check())
print(o.model())
It returns [v1 = 0, e = False]. Why can't the soft constraints be fulfilled?

It appears parse_smt2_string does not preserve soft assertions.
If you add print(o.sexpr()) right before you call check, it prints:
(declare-fun v () Real)
(declare-fun v1 () Bool)
(assert (= v1 (<= (to_real 0) v)))
(assert (> (to_real 10) v))
As you see, the soft-constraint has disappeared. So, what z3 sees is not what you thought it did. This explains the unexpected output.
I know that parse_smt2_string isn't exactly faithful, in the sense that it doesn't handle arbitrary SMTLib inputs. Whether this particular case is supported, I'm not sure. File it as an issue at https://github.com/Z3Prover/z3/issues and the developers might take a look.

Related

Function seq.fold_left

I am unable to successfully utilize the seq.fold_left function.
(declare-const s (Seq Int))
(declare-const t (Seq Int))
(declare-fun f (Int Int) Int)
(define-fun sum_seq ((s (Seq Int)))
(seq.fold_left (lambda (acc x) (+ acc x)) 0 s)
)
(assert (= t (seq.fold_left f s 0)))
(check-sat)
(get-model)
Can you identify the problem? I have encountered these errors:
(error "line 6 column 3: unknown sort 'seq.fold_left'")
(error "line 9 column 33: unknown constant seq.fold_left ((Array Int Int Int) Int (Seq Int)) ")
A couple of issues here.
The function is actually called seq.foldl, though I see your confusion since the documentation advertises it as seq.fold_left. Probably a matter of the code not matching the documentation, hopefully they'll fix it. You might want to file a ticket at https://github.com/Z3Prover/z3/issues to alert the developers to the discrepancy.
Your lambda-expression isn't right. In SMTLib, you have to annotate each variable with its type. So it should be something like (lambda ((acc Int) (x Int)) (+ acc x))
In your assert, you're using the arguments in the incorrect order. Should be 0 s, not s 0. Also, the result is an Int, not a Seq Int, so equating that to t is type-incorrect.
So, here's an example that actually works:
(declare-const s (Seq Int))
(declare-const t Int)
(define-fun sum_seq ((s (Seq Int))) Int
(seq.foldl (lambda ((acc Int) (x Int)) (+ acc x)) 0 s)
)
(assert (= t (sum_seq s)))
(assert (> (seq.len s) 3))
(check-sat)
(get-model)
This prints:
sat
(
(define-fun s () (Seq Int)
(seq.++ (seq.unit 5) (seq.unit 6) (seq.unit 7) (seq.unit 8)))
(define-fun t () Int
26)
)
which is a correct model for the given constraints. Hope this helps you sort out your own code.

How to convert a sum and product constraint into SMT-lib2 (for Z3)

I'm wondering what the best way is to convert a sum of the form Σ_{i=exprLB}^{exprUB} expr_i, or similarly a product, into an SMT-LIB2 expression, specifically for solving with Z3 (or even MetiTarski).
I thought there would be an obvious approach with a quantifier, but I'm having trouble creating it, and in many use cases such a sum is likely to have constants for exprLB and exprUB, which would mean that I would hope some kind of tactic would simply unroll it into a long sequence of addition, where use of a quantifier might make that much more difficult.
For example, a fairly trivial tactic would convert the fixed-bound sum Σ_{i=1}^{3} 2/x_i into its unrolled form, which is both trivially expressed and trivially solved by most SMT solvers as
(+
(/ 2 x1)
(/ 2 x2)
(/ 2 x3)
)
yielding
sat
(model
  (define-fun x1 () Real 1.0)
  (define-fun x2 () Real 1.0)
  (define-fun x3 () Real (/ 1.0 4.0))
)
How can I generally express a sum over three expressions (lower-bound, upper-bound, and accumulator) elegantly in smt-lib2?
The obvious choice would be to use arrays for your x values, and recursive-functions to model the sum/product.
Z3 does support recursive functions, but it's not fool-proof. At best you'll get unknown, since most such formulae would require inductive proofs; something SMT-solvers are not good at. At worst, you get an unhelpful answer, or maybe even a bogus one if you hit a bug.
Here's an example that works out ok:
(declare-fun xs () (Array Int Real))
(define-fun-rec sum ((lb Int) (ub Int)) Real
(ite (> lb ub)
0
(+ (select xs lb)
(sum (+ lb 1) ub))))
(declare-fun lb () Int)
(declare-fun ub () Int)
(assert (= (sum lb ub) 12.34))
(check-sat)
(get-value (lb ub xs))
Z3 responds:
sat
((lb 0)
(ub 0)
(xs ((as const (Array Int Real)) (/ 617.0 50.0))))
This is pretty cool actually, though maybe not as impressive as you expected. You can force it to a certain range as well:
(declare-fun xs () (Array Int Real))
(define-fun-rec sum ((lb Int) (ub Int)) Real
(ite (> lb ub)
0
(+ (select xs lb)
(sum (+ lb 1) ub))))
(declare-fun lb () Int)
(declare-fun ub () Int)
(assert (= 1 lb))
(assert (= 3 ub))
(assert (= (sum lb ub) 12.34))
(check-sat)
(get-value (lb ub))
(eval (select xs 1))
(eval (select xs 2))
(eval (select xs 3))
This produces:
sat
((lb 1)
(ub 3))
0.0
(- (/ 121233.0 50.0))
2437.0
Which is a correct model. Unfortunately, slight changes to the formula/assertions cause it to produce unhelpful answers. If I try:
(declare-fun xs () (Array Int Real))
(define-fun-rec sum ((lb Int) (ub Int)) Real
(ite (> lb ub)
0
(+ (/ 2.0 (select xs lb))
(sum (+ lb 1) ub))))
(assert (= (sum 1 3) 12.34))
(check-sat)
Then I get:
unknown
As solvers mature in their support for recursive functions, you can surely expect them to answer more queries successfully. For the short term, you're more likely to see unknown responses quite often.
Personally, I think using an SMT solver when you don't know how many terms you have in your sum/product is just not the best idea. If you know the number of terms, by all means use an SMT solver. If not, you're better off using interactive theorem proving, i.e., systems that allow you to express recursive functions and inductive proofs; such as Isabelle, Coq, and others.

Bug with recursive functions?

I'm trying recursive functions in z3, and I'm curious if there's a bug with model construction. Consider:
(define-fun-rec f ((x Int)) Int
(ite (> x 1)
(f (- x 1))
1))
(check-sat)
(get-value ((f 0)))
Here f is actually the constant function 1, just defined in a silly way. For this input, z3 prints:
sat
(((f 0) 0))
This seems incorrect, since f 0 should equal 1.
What's interesting is if I assert what z3 proposes as the result, then I get the correct unsat answer:
(define-fun-rec f ((x Int)) Int
(ite (> x 1)
(f (- x 1))
1))
(assert (= (f 0) 0))
(check-sat)
I get:
unsat
So, it looks like z3 actually does know that f 0 cannot be 0, even though it produced that very model in the previous case.
Taking this one step further, if I issue:
(define-fun-rec f ((x Int)) Int
(ite (> x 1)
(f (- x 1))
1))
(assert (= (f 0) 1))
(check-sat)
(get-model)
Then z3 responds:
sat
(model
(define-fun f ((x!0 Int)) Int
1)
)
which is indeed a reasonable answer.
So, it seems perhaps there's a bug with recursive function models under certain conditions?
Models used to not reflect the graph of recursive function definitions, so evaluating a recursive function on values that had not been seen during solving could produce arbitrary results. This behavior has since been changed: recursive definitions are now included in models.

In pure functional languages, is there an algorithm to get the inverse function?

In pure functional languages like Haskell, is there an algorithm to get the inverse of a function when it is bijective? And is there a specific way to program your function so that it is?
In some cases, yes! There's a beautiful paper called Bidirectionalization for Free! which discusses a few cases -- when your function is sufficiently polymorphic -- where it is possible, completely automatically to derive an inverse function. (It also discusses what makes the problem hard when the functions are not polymorphic.)
What you get out in the case your function is invertible is the inverse (with a spurious input); in other cases, you get a function which tries to "merge" an old input value and a new output value.
No, it's not possible in general.
Proof: consider bijective functions of type
type F = [Bit] -> [Bit]
with
data Bit = B0 | B1
Assume we have an inverter inv :: F -> F such that inv f . f ≡ id. Say we have tested it for the function f = id, by confirming that
inv f (repeat B0) -> (B0 : ls)
Since this first B0 in the output must have come after some finite time, we have an upper bound n on both the depth to which inv had actually evaluated our test input to obtain this result, as well as the number of times it can have called f. Define now a family of functions
g j (B1 : B0 : ... (n+j times) ... B0 : ls)
= B0 : ... (n+j times) ... B0 : B1 : ls
g j (B0 : ... (n+j times) ... B0 : B1 : ls)
= B1 : B0 : ... (n+j times) ... B0 : ls
g j l = l
Clearly, for all 0<j≤n, g j is a bijection, in fact self-inverse. So we should be able to confirm
inv (g j) (replicate (n+j) B0 ++ B1 : repeat B0) -> (B1 : ls)
but to fulfill this, inv (g j) would have needed to either
evaluate g j (B1 : repeat B0) to a depth of n+j > n
evaluate head $ g j l for at least n different lists matching replicate (n+j) B0 ++ B1 : ls
Up to that point, at least one of the g j is indistinguishable from f, and since inv f hadn't done either of these evaluations, inv could not possibly have told it apart – short of doing some runtime-measurements on its own, which is only possible in the IO Monad.
                                                                                                                                   ⬜
You can look it up on wikipedia, it's called Reversible Computing.
In general you can't do it though and none of the functional languages have that option. For example:
f :: a -> Int
f _ = 1
This function does not have an inverse.
Not in most functional languages, but in logic programming or relational programming, most functions you define are in fact not functions but "relations", and these can be used in both directions. See for example prolog or kanren.
Tasks like this are almost always undecidable. You can have a solution for some specific functions, but not in general.
Here, you cannot even recognize which functions have an inverse. Quoting Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam (1984):
A set of lambda-terms is nontrivial if it is neither the empty nor the full set. If A and B are two nontrivial, disjoint sets of lambda-terms closed under (beta) equality, then A and B are recursively inseparable.
Let's take A to be the set of lambda terms that represent invertible functions and B the rest. Both are non-empty and closed under beta equality. So it's not possible to decide whether a function is invertible or not.
(This applies to the untyped lambda calculus. TBH I don't know if the argument can be directly adapted to a typed lambda calculus when we know the type of a function that we want to invert. But I'm pretty sure it will be similar.)
If you can enumerate the domain of the function and can compare elements of the range for equality, you can - in a rather straightforward way. By enumerate I mean having a list of all the elements available. I'll stick to Haskell, since I don't know Ocaml (or even how to capitalise it properly ;-)
What you want to do is run through the elements of the domain and see if they're equal to the element of the range you're trying to invert, and take the first one that works:
inv :: Eq b => [a] -> (a -> b) -> (b -> a)
inv domain f b = head [ a | a <- domain, f a == b ]
Since you've stated that f is a bijection, there's bound to be one and only one such element. The trick, of course, is to ensure that your enumeration of the domain actually reaches all the elements in a finite time. If you're trying to invert a bijection from Integer to Integer, using [0,1 ..] ++ [-1,-2 ..] won't work as you'll never get to the negative numbers. Concretely, inv ([0,1 ..] ++ [-1,-2 ..]) (+1) (-3) will never yield a value.
However, 0 : concatMap (\x -> [x,-x]) [1..] will work, as this runs through the integers in the following order [0,1,-1,2,-2,3,-3, and so on]. Indeed inv (0 : concatMap (\x -> [x,-x]) [1..]) (+1) (-3) promptly returns -4!
The Control.Monad.Omega package can help you run through lists of tuples etcetera in a good way; I'm sure there's more packages like that - but I don't know them.
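The same brute-force idea carries over directly to generators in Python (a sketch, assuming the enumeration is fair and that b really is in the image of f):

```python
from itertools import count

def integers():
    # Fair enumeration of all integers: 0, 1, -1, 2, -2, ...
    yield 0
    for n in count(1):
        yield n
        yield -n

def invert(domain, f, b):
    # First element of the enumerated domain that f maps to b;
    # terminates only if the enumeration eventually hits a preimage.
    return next(a for a in domain if f(a) == b)

print(invert(integers(), lambda x: x + 1, -3))  # -4
```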
Of course, this approach is rather low-brow and brute-force, not to mention ugly and inefficient! So I'll end with a few remarks on the last part of your question, on how to 'write' bijections. The type system of Haskell isn't up to proving that a function is a bijection - you really want something like Agda for that - but it is willing to trust you.
(Warning: untested code follows)
So you can define a datatype of Bijections between types a and b:
data Bi a b = Bi {
apply :: a -> b,
invert :: b -> a
}
along with as many constants (where you can say 'I know they're bijections!') as you like, such as:
notBi :: Bi Bool Bool
notBi = Bi not not
add1Bi :: Bi Integer Integer
add1Bi = Bi (+1) (subtract 1)
and a couple of smart combinators, such as:
idBi :: Bi a a
idBi = Bi id id
invertBi :: Bi a b -> Bi b a
invertBi (Bi a i) = (Bi i a)
composeBi :: Bi a b -> Bi b c -> Bi a c
composeBi (Bi a1 i1) (Bi a2 i2) = Bi (a2 . a1) (i1 . i2)
mapBi :: Bi a b -> Bi [a] [b]
mapBi (Bi a i) = Bi (map a) (map i)
bruteForceBi :: Eq b => [a] -> (a -> b) -> Bi a b
bruteForceBi domain f = Bi f (inv domain f)
I think you could then do invert (mapBi add1Bi) [1,5,6] and get [0,4,5]. If you pick your combinators in a smart way, I think the number of times you'll have to write a Bi constant by hand could be quite limited.
After all, if you know a function is a bijection, you'll hopefully have a proof-sketch of that fact in your head, which the Curry-Howard isomorphism should be able to turn into a program :-)
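The Bi datatype and its combinators translate almost mechanically to Python as well (a hypothetical sketch; Bi, compose, and map_bi are illustrative names, not a library):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar('A'); B = TypeVar('B'); C = TypeVar('C')

@dataclass
class Bi(Generic[A, B]):
    """A pair of functions asserted (not proved!) to be mutually inverse."""
    apply: Callable[[A], B]
    invert: Callable[[B], A]

def compose(f: 'Bi[A, B]', g: 'Bi[B, C]') -> 'Bi[A, C]':
    # Inverses compose in the opposite order.
    return Bi(lambda a: g.apply(f.apply(a)), lambda c: f.invert(g.invert(c)))

def map_bi(f: 'Bi[A, B]') -> 'Bi[list, list]':
    return Bi(lambda xs: [f.apply(x) for x in xs],
              lambda ys: [f.invert(y) for y in ys])

add1 = Bi(lambda x: x + 1, lambda x: x - 1)
print(map_bi(add1).invert([1, 5, 6]))   # [0, 4, 5]
print(compose(add1, add1).apply(5))     # 7
```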
I've recently been dealing with issues like this, and no, I'd say that (a) it's not difficult in many case, but (b) it's not efficient at all.
Basically, suppose you have f :: a -> b, and that f is indeed a bijection. You can compute the inverse f' :: b -> a in a really dumb way:
import Data.List
-- | Class for types whose values are recursively enumerable.
class Enumerable a where
-- | Produce the list of all values of type #a#.
enumerate :: [a]
-- | Note, this is only guaranteed to terminate if #f# is a bijection!
invert :: (Enumerable a, Eq b) => (a -> b) -> b -> Maybe a
invert f b = find (\a -> f a == b) enumerate
If f is a bijection and enumerate truly produces all values of a, then you will eventually hit an a such that f a == b.
Types that have a Bounded and an Enum instance can trivially be made Enumerable. Pairs of Enumerable types can also be made Enumerable:
instance (Enumerable a, Enumerable b) => Enumerable (a, b) where
enumerate = crossWith (,) enumerate enumerate
crossWith :: (a -> b -> c) -> [a] -> [b] -> [c]
crossWith f _ [] = []
crossWith f [] _ = []
crossWith f (x0:xs) (y0:ys) =
f x0 y0 : interleave (map (f x0) ys)
(interleave (map (flip f y0) xs)
(crossWith f xs ys))
interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = ys
interleave (x:xs) ys = x : interleave ys xs
Same goes for disjunctions of Enumerable types:
instance (Enumerable a, Enumerable b) => Enumerable (Either a b) where
enumerate = enumerateEither enumerate enumerate
enumerateEither :: [a] -> [b] -> [Either a b]
enumerateEither [] ys = map Right ys
enumerateEither xs [] = map Left xs
enumerateEither (x:xs) (y:ys) = Left x : Right y : enumerateEither xs ys
The fact that we can do this both for (,) and Either probably means that we can do it for any algebraic data type.
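The same fair pairing can be sketched in Python with generators, walking anti-diagonals so that every pair drawn from two infinite streams shows up after finitely many steps (diagonals is an illustrative name; both inputs must be infinite):

```python
from itertools import count, islice

def diagonals(xs, ys):
    # Enumerate pairs from two infinite iterables in anti-diagonal
    # order, so each pair appears at a finite position.
    seen_x, seen_y = [], []
    it_x, it_y = iter(xs), iter(ys)
    for n in count():
        seen_x.append(next(it_x))
        seen_y.append(next(it_y))
        for i in range(n + 1):
            yield (seen_x[i], seen_y[n - i])

pairs = list(islice(diagonals(count(), count()), 6))
print(pairs)  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```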
Not every function has an inverse. If you limit the discussion to one-to-one functions, the ability to invert an arbitrary function grants the ability to crack any cryptosystem. We kind of have to hope this isn't feasible, even in theory!
In some cases, it is possible to find the inverse of a bijective function by converting it into a symbolic representation. Based on this example, I wrote this Haskell program to find inverses of some simple polynomial functions:
bijective_function x = x*2+1
main = do
print $ bijective_function 3
print $ inverse_function bijective_function (bijective_function 3)
data Expr = X | Const Double |
Plus Expr Expr | Subtract Expr Expr | Mult Expr Expr | Div Expr Expr |
Negate Expr | Inverse Expr |
Exp Expr | Log Expr | Sin Expr | Atanh Expr | Sinh Expr | Acosh Expr | Cosh Expr | Tan Expr | Cos Expr |Asinh Expr|Atan Expr|Acos Expr|Asin Expr|Abs Expr|Signum Expr|Integer
deriving (Show, Eq)
instance Num Expr where
(+) = Plus
(-) = Subtract
(*) = Mult
abs = Abs
signum = Signum
negate = Negate
fromInteger a = Const $ fromIntegral a
instance Fractional Expr where
recip = Inverse
fromRational a = Const $ realToFrac a
(/) = Div
instance Floating Expr where
pi = Const pi
exp = Exp
log = Log
sin = Sin
atanh = Atanh
sinh = Sinh
cosh = Cosh
acosh = Acosh
cos = Cos
tan = Tan
asin = Asin
acos = Acos
atan = Atan
asinh = Asinh
fromFunction f = f X
toFunction :: Expr -> (Double -> Double)
toFunction X = \x -> x
toFunction (Negate a) = \x -> negate (toFunction a x)
toFunction (Const a) = const a
toFunction (Plus a b) = \x -> (toFunction a x) + (toFunction b x)
toFunction (Subtract a b) = \x -> (toFunction a x) - (toFunction b x)
toFunction (Mult a b) = \x -> (toFunction a x) * (toFunction b x)
toFunction (Div a b) = \x -> (toFunction a x) / (toFunction b x)
with_function func x = toFunction $ func $ fromFunction x
simplify X = X
simplify (Div (Const a) (Const b)) = Const (a/b)
simplify (Mult (Const a) (Const b)) | a == 0 || b == 0 = 0 | otherwise = Const (a*b)
simplify (Negate (Negate a)) = simplify a
simplify (Subtract a b) = simplify ( Plus (simplify a) (Negate (simplify b)) )
simplify (Div a b) | a == b = Const 1.0 | otherwise = simplify (Div (simplify a) (simplify b))
simplify (Mult a b) = simplify (Mult (simplify a) (simplify b))
simplify (Const a) = Const a
simplify (Plus (Const a) (Const b)) = Const (a+b)
simplify (Plus a (Const b)) = simplify (Plus (Const b) (simplify a))
simplify (Plus (Mult (Const a) X) (Mult (Const b) X)) = (simplify (Mult (Const (a+b)) X))
simplify (Plus (Const a) b) = simplify (Plus (simplify b) (Const a))
simplify (Plus X a) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a X) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a b) = (simplify (Plus (simplify a) (simplify b)))
simplify a = a
inverse X = X
inverse (Const a) = simplify (Const a)
inverse (Mult (Const a) (Const b)) = Const (a * b)
inverse (Mult (Const a) X) = (Div X (Const a))
inverse (Plus X (Const a)) = (Subtract X (Const a))
inverse (Negate x) = Negate (inverse x)
inverse a = inverse (simplify a)
inverse_function x = with_function inverse x
This example only works with arithmetic expressions, but it could probably be generalized to work with lists as well. There are also several implementations of computer algebra systems in Haskell that may be used to find the inverse of a bijective function.
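The core of that idea, stripped to its essentials, also fits in a few lines of Python: represent the function as an expression tree and invert it structurally. This is a hypothetical mini-language handling only the linear cases; inverse and the tuple encoding are illustrative, not a real library:

```python
def inverse(expr):
    # expr is nested tuples: ('x',), ('add', e, c), ('mul', e, c).
    # Returns the inverse as a Python function, assuming expr is
    # linear in x (the only shape this sketch handles).
    kind = expr[0]
    if kind == 'x':
        return lambda y: y
    if kind == 'add':                  # f(x) = g(x) + c  =>  g?(y - c)
        inner = inverse(expr[1])
        return lambda y: inner(y - expr[2])
    if kind == 'mul':                  # f(x) = g(x) * c  =>  g?(y / c)
        inner = inverse(expr[1])
        return lambda y: inner(y / expr[2])
    raise ValueError(f'cannot invert {kind!r}')

# f(x) = x*2 + 1, built as an expression tree
f_expr = ('add', ('mul', ('x',), 2), 1)
print(inverse(f_expr)(7))  # 3.0
```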
No, not all functions even have inverses. For instance, what would the inverse of this function be?
f x = 1

Summation in functional programming

I was searching the web for the inclusion-exclusion principle, and what I found is this:
(from MathWorld - A Wolfram Web Resource: wolfram.com)
http://mathworld.wolfram.com/Inclusion-ExclusionPrinciple.html
It doesn't matter if you don't understand the formula; in fact, what I need is to implement this:
For example, the input is:
(summation (list 1 2) 3)
Where (list 1 2) is i and j, and 3 is the limit of the sum n.
(n should be above the sigma, but...)
Then, the output of formula, in Scheme will be:
(list (list 1 2) (list 1 3) (list 2 3))
How can I implement this in Scheme or in Haskell? (Sorry for my English.)
In Haskell, use a list comprehension:
Prelude> [(i,j) | i <- [1..4], j <- [i+1..4]]
[(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]
Prelude> [i * j | i <- [1..4], j <- [i+1..4]]
[2,3,4,6,8,12]
Prelude> sum [i * j | i <- [1..4], j <- [i+1..4]]
35
The first line gives a list of all pairs (i,j) where 1 ≤ i < j ≤ 4.
The second line gives a list of the products i*j where 1 ≤ i < j ≤ 4.
The third line gives the sum of these values: Σ_{1 ≤ i < j ≤ 4} i·j.
In Racket, you'd probably use a list comprehension:
#lang racket
(for*/sum ([i (in-range 1 5)]
[j (in-range (add1 i) 5)])
(* i j))
The core functionality you need for a simple implementation of the inclusion-exclusion principle is to generate all k-element subsets of the index set. Using lists, that is an easy recursion:
pick :: Int -> [a] -> [[a]]
pick 0 _ = [[]] -- There is exactly one 0-element subset of any set
pick _ [] = [] -- No way to pick any nonzero number of elements from an empty set
pick k (x:xs) = map (x:) (pick (k-1) xs) ++ pick k xs
-- There are two groups of k-element subsets of a set containing x,
-- those that contain x and those that do not
If pick is not a local function whose calls are 100% under your control, you should add a check that the Int parameter is never negative (you could use Word for that parameter, then that's built into the type).
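For comparison, the same two-branch recursion reads almost identically in Python (pick here mirrors the Haskell function above):

```python
def pick(k, xs):
    # All k-element sublists of xs: those that contain the first
    # element, followed by those that do not.
    if k == 0:
        return [[]]   # exactly one 0-element subset of any set
    if not xs:
        return []     # cannot pick k > 0 elements from an empty set
    x, rest = xs[0], xs[1:]
    return [[x] + s for s in pick(k - 1, rest)] + pick(k, rest)

print(pick(2, [1, 2, 3]))  # [[1, 2], [1, 3], [2, 3]]
```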
If k is largish, checking against the length of the list to pick from prevents a lot of fruitless recursion, so it's better to build that in from the start:
pick :: Int -> [a] -> [[a]]
pick k xs = choose k (length xs) xs
choose :: Int -> Int -> [a] -> [[a]]
choose 0 _ _ = [[]]
choose k l xs
| l < k = [] -- we want to choose more than we have
| l == k = [xs] -- we want exactly as many as we have
| otherwise = case xs of
[] -> error "This ought to be impossible, l == length xs should hold"
(y:ys) -> map (y:) (choose (k-1) (l-1) ys) ++ choose k (l-1) ys
The inclusion-exclusion formula then becomes
inclusionExclusion indices
= sum . zipWith (*) (cycle [1,-1]) $
[sum (map count $ pick k indices) | k <- [1 .. length indices]]
where count list counts the number of elements of the intersection of [subset i | i <- list]. Of course, you need an efficient way to calculate that, or it would be more efficient to find the size of the union directly.
There's much room for optimisation, and there are different ways to do it, but that's a fairly short and direct translation of the principle.
Here is a possible way with Scheme. I've made the following function to create quantifiers:
#lang racket
(define (quantification next test op e)
{lambda (A B f-terme)
(let loop ([i A] [resultat e])
(if [test i B]
resultat
(loop (next i) (op (f-terme i) resultat)) ))})
With this function you can create sum, product, generalized union and generalized intersection.
;; Arithmetic example
(define sumQ (quantification add1 > + 0))
(define productQ (quantification add1 > * 1))
;; Sets example (with racket/set)
(define (unionQ set-of-sets)
(let [(empty-set (set))
(list-of-sets (set->list set-of-sets))
]
((quantification cdr eq? set-union empty-set) list-of-sets
'()
car)))
(define (intersectionQ set-of-sets)
(let [(empty-set (set))
(list-of-sets (set->list set-of-sets))
]
((quantification cdr eq? set-intersect (car list-of-sets)) (cdr list-of-sets)
'()
car)))
This way you can do
(define setA2 (set 'a 'b))
(define setA5 (set 'a 'b 'c 'd 'e))
(define setC3 (set 'c 'd 'e))
(define setE3 (set 'e 'f 'g))
(unionQ (set setA2 setC3 setE3))
(intersectionQ (set setA5 setC3 setE3))
I worked on something similar in Haskell:
module Quantification where
quantifier next test op =
let loop e a b f = if (test a b)
then e
else loop (op (f a) e) (next a) b f
in loop
quantifier_on_integer_set = quantifier (+1) (>)
sumq = quantifier_on_integer_set (+) 0
prodq = quantifier_on_integer_set (*) 1
But I never went further... You can probably start from this, however.
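The same generic quantifier is also easy to sketch in Python (quantifier is an illustrative name mirroring the Scheme version: it folds op over f(i) for i = a, next(a), ... until test(i, b) holds):

```python
def quantifier(next_, test, op, e):
    # Build a generic "big operator" from a successor function, a stop
    # test, a binary operation, and its identity element e.
    def loop(a, b, f):
        result, i = e, a
        while not test(i, b):
            result = op(f(i), result)
            i = next_(i)
        return result
    return loop

sum_q = quantifier(lambda i: i + 1, lambda i, b: i > b,
                   lambda x, acc: x + acc, 0)
prod_q = quantifier(lambda i: i + 1, lambda i, b: i > b,
                    lambda x, acc: x * acc, 1)

print(sum_q(1, 4, lambda i: i))   # 10  (1 + 2 + 3 + 4)
print(prod_q(1, 4, lambda i: i))  # 24  (1 * 2 * 3 * 4)
```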
