Dafny recursive trigger debugging - recursion

Following up from a previous question here. I'm trying to extend my definition of an abstract power to all integers.
I thought I had set up the triggers correctly, but I'm still running into recursive triggers that make verification run endlessly. Is there a better way to debug recursive triggers than guessing and checking? Right now I basically sit there with Task Manager open, watching whether Z3 is eating all the memory. I hope there is a way to get better visibility into what is happening.
function apow<A>(g: Group, elem: A, n: int): A
  decreases n*n
  ensures n == 0 ==> apow(g, elem, n) == g.identity
{
  if n == 0 then g.identity
  else if n > 0 then g.compose(elem, apow(g, elem, n-1))
  else if n < 0 then g.compose(g.inverse(elem), apow(g, elem, n+1))
  else g.identity
}
lemma apowClosed<A>(g: Group, elem: A, n: int)
  requires elem in g.elements
  requires g.identity in g.elements
  requires isIdentity(g)
  requires closedComposition(g)
  requires closedInverse(g)
  requires isInverse(g)
  decreases n*n
  ensures apow(g, elem, n) in g.elements
{}
lemma allApowClosed<A>(g: Group, elem: A)
  requires ValidGroup(g)
  requires elem in g.elements
  ensures forall x: int :: apow(g, elem, x) in g.elements
{
  reveal apow();
  forall x: int {
    apowClosed(g, elem, x);
  }
}
lemma {:verify true} apowAdditionInt<A>(g: Group<A>, elem: A, n: int, k: int)
  requires elem in g.elements
  // requires ValidGroup(g)
  requires closedComposition(g)
  requires closedInverse(g)
  requires g.identity in g.elements
  requires isIdentity(g)
  requires associativeComposition(g)
  ensures g.compose(apow(g, elem, n), apow(g, elem, k)) == apow(g, elem, n+k)
{
  allApowClosed(g, elem);
  if k == 0 {
    assert apow(g, elem, k) == g.identity;
    assert g.compose(apow(g, elem, n), g.identity) == apow(g, elem, n+k);
  } else if n == 0 {
    assert apow(g, elem, n) == g.identity;
    assert g.compose(g.identity, apow(g, elem, k)) == apow(g, elem, n+k);
  } else if n > 0 && n+k > k {
    apowPos(g, elem, n);
    apowPos(g, elem, n+k);
    assert apow(g, elem, n-1) in g.elements;
    assert apow(g, elem, k) in g.elements;
    assert apow(g, elem, n+k) in g.elements;
    // assume g.compose(elem, g.compose(apow(g, elem, n-1), apow(g, elem, k))) == g.compose(elem, apow(g, elem, n-1+k));
    calc {
      g.compose(apow(g, elem, n), apow(g, elem, k));
      g.compose(g.compose(elem, apow(g, elem, n-1)), apow(g, elem, k));
      g.compose(elem, g.compose(apow(g, elem, n-1), apow(g, elem, k)));
      == { apowAdditionInt(g, elem, n-1, k); }
      g.compose(elem, apow(g, elem, n-1+k));
      // apow(g, elem, n+k);
    }
  // }else{
  } else {
  }
}
datatype Group<!A> = Group(elements: set<A>, identity: A, compose: (A,A) -> A, inverse: (A) -> A)
predicate isIdentity<A>(g: Group<A>) {
  forall a :: a in g.elements ==> g.compose(a, g.identity) == a && g.compose(g.identity, a) == a
}
predicate closedComposition<A>(g: Group<A>) {
  forall x, y :: x in g.elements && y in g.elements ==> g.compose(x, y) in g.elements
}
predicate associativeComposition<A>(g: Group<A>) {
  forall a, b, c :: a in g.elements && b in g.elements && c in g.elements ==> g.compose(g.compose(a,b), c) == g.compose(a, g.compose(b,c))
}
predicate closedInverse<A>(g: Group<A>) {
  forall x {:trigger g.inverse(x)} :: x in g.elements ==> g.inverse(x) in g.elements
}
predicate isInverse<A>(g: Group<A>) {
  forall x {:trigger g.inverse(x)} :: x in g.elements ==> g.compose(x, g.inverse(x)) == g.identity && g.compose(g.inverse(x), x) == g.identity
}

Related

How to represent kleisli composition of substitutions in abstract trees

Context: I have been trying to implement the unification algorithm (the algorithm that finds the most general unifier of two abstract syntax trees). Since a unifier is a substitution, the algorithm requires defining composition of substitutions.
To be specific, given a type treeSigma dependent on another type X, a substitution is a function of type:
X -> treeSigma X
and the function substitute takes a substitution as an input and has type
substitute: (X-> (treeSigma X))-> (treeSigma X) -> (treeSigma X)
I need to define a function to compose two substitutions:
compose_kleisli (rho1 rho2: X->(treeSigma X)) : (treeSigma X) := ...
such that,
forall tr: treeSigma X,
substitute (compose_kleisli rho1 rho2) tr = substitute rho1 (substitute rho2 tr).
I am fairly new to Coq and have been stuck on defining this composition.
How can I define it?
I tried to define it using Record like this:
Record compose {X s} (rho1 rho2: X -> treeSigma X) := mkCompose {
  RHO: X -> treeSigma X;
  CONDITION: forall t, substitute RHO t = substitute rho2 (substitute rho1 t)
}.
but along with this, I would need to prove the result that the composition can be defined for any two substitutions. Something like:
Theorem composeTotal: forall {X s} (rho1 rho2: X-> treeSigma s X), exists rho3,
forall t, substitute rho3 t = substitute rho2 (substitute rho1 t).
Proving this would require a construction of rho3 which circles back to the same problem of defining compose.
treeSigma is defined as:
(* Signature *)
Record sigma: Type := mkSigma {
  symbol : Type;
  arity : symbol -> nat
}.
Record sigmaLeaf (s:sigma): Type := mkLeaf {
  cLeaf: symbol s;
  condLeaf: arity s cLeaf = 0
}.
Record sigmaNode (s:sigma): Type := mkNode {
  fNode: symbol s;
  condNode: arity s fNode <> 0
}.
(* Sigma Algebra *)
Record sigAlg (s:sigma) (X:Type) := mkAlg {
  Carrier: Type;
  meaning: forall f:(sigmaNode s), (Vector.t Carrier (arity s (fNode s f))) -> Carrier;
  meanLeaf: forall f:(sigmaLeaf s), Vector.t Carrier 0 -> Carrier
}.
(* Abstract tree on arbitrary signature. *)
Inductive treeSigma (s:sigma) (X:Type):=
| VAR (x:X)
| LEAF (c: sigmaLeaf s)
| NODE (f: sigmaNode s) (sub: Vector.t (treeSigma s X) (arity s (fNode s f)) ).
(* Defining abstract syntax as a sigma algebra. *)
Definition meanTreeNode {s X} (f:sigmaNode s) (sub: Vector.t (treeSigma s X) (s.(arity)
(fNode s f))): treeSigma s X:= NODE s X f sub.
Definition meanTreeLeaf {s X} (c:sigmaLeaf s) (sub: Vector.t (treeSigma s X) 0) := LEAF s X c.
Definition treeSigAlg {s X} := mkAlg s X (treeSigma s X) meanTreeNode meanTreeLeaf.
The substitution function is defined as:
Fixpoint homoSigma1 {X:Type} {s} (A: sigAlg s X) (rho: X -> (Carrier s X A))
    (wft: (treeSigma s X)) {struct wft}: (Carrier s X A) :=
  match wft with
  | VAR _ _ x => rho x
  | LEAF _ _ c => meanLeaf s X A c []
  | NODE _ _ f l2 => meanNode s X A f (
      (fix homoSigVec k (l2: Vector.t _ k) := match l2 with
         | [] => []
         | t :: l2s => (homoSigma1 A rho t) :: (homoSigVec (vlen _ l2s) l2s)
         end)
      (arity s (fNode s f)) l2)
  end.
Definition substitute {X s} (rho: X -> treeSigma s X) (t: treeSigma s X) := @homoSigma1 X s treeSigAlg rho t.
To be particular, a substitution is the homomorphic extension of rho (which is a variable valuation).
Definitions like this are challenging to work with because the tree type occurs recursively inside another inductive type. Coq has trouble generating induction principles for these types on its own, so you need to help it a little bit. Here is a possible solution, for a slightly simplified setup:
Require Import Coq.Vectors.Vector.
Import VectorNotations.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Section Dev.
Variable symbol : Type.
Variable arity : symbol -> nat.
Record alg := Alg {
alg_sort :> Type;
alg_op : forall f : symbol, Vector.t alg_sort (arity f) -> alg_sort;
}.
Arguments alg_op {_} f _.
(* Turn off the automatic generation of induction principles.
This tree type does not distinguish between leaves and nodes,
since they only differ in their arity. *)
Unset Elimination Schemes.
Inductive treeSigma (X:Type) :=
| VAR (x:X)
| NODE (f: symbol) (args : Vector.t (treeSigma X) (arity f)).
Arguments NODE {X} _ _.
Set Elimination Schemes.
(* Manual definition of a custom induction principle for treeSigma.
HNODE is the inductive case for the NODE constructor; the vs argument is
saying that the induction hypothesis holds for each tree in the vector of
arguments. *)
Definition treeSigma_rect (X : Type) (T : treeSigma X -> Type)
  (HVAR : forall x, T (VAR x))
  (HNODE : forall f (ts : Vector.t (treeSigma X) (arity f))
                  (vs : Vector.fold_right (fun t V => T t * V)%type ts unit),
           T (NODE f ts)) :
  forall t, T t :=
  fix loopTree (t : treeSigma X) : T t :=
    match t with
    | VAR x => HVAR x
    | NODE f ts =>
      let fix loopVector n (ts : Vector.t (treeSigma X) n) :
            Vector.fold_right (fun t V => T t * V)%type ts unit :=
        match ts with
        | [] => tt
        | t :: ts => (loopTree t, loopVector _ ts)
        end in
      HNODE f ts (loopVector (arity f) ts)
    end.
Definition treeSigma_ind (X : Type) (T : treeSigma X -> Prop) :=
  @treeSigma_rect X T.
Definition treeSigma_alg (X:Type) : alg := {|
  alg_sort := treeSigma X;
  alg_op := @NODE X;
|}.
Fixpoint homoSigma {X : Type} {Y : alg} (ρ : X -> Y) (t : treeSigma X) : Y :=
  match t with
  | VAR x => ρ x
  | NODE f xs => alg_op f (Vector.map (homoSigma ρ) xs)
  end.
Definition substitute X (ρ : X -> treeSigma X) (t : treeSigma X) : treeSigma X :=
  @homoSigma X (treeSigma_alg X) ρ t.
(* You can define composition simply by using substitution. *)
Definition compose X (ρ1 ρ2 : X -> treeSigma X) : X -> treeSigma X :=
fun x => substitute ρ1 (ρ2 x).
(* The property you are looking for follows by induction on the tree. Note
that this requires a nested induction on the vector of arguments. *)
Theorem composeP X (ρ1 ρ2 : X -> treeSigma X) t :
substitute (compose ρ1 ρ2) t = substitute ρ1 (substitute ρ2 t).
Proof.
  unfold compose, substitute.
  induction t as [x|f ts IH]; trivial.
  simpl; f_equal.
  induction ts as [|n t ts IH']; trivial.
  simpl.
  destruct IH as [e IH].
  rewrite e.
  f_equal.
  now apply IH'.
Qed.
End Dev.
In order to do this you need to use the operations of the monad, typically:
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Section MonadKleisli.
(* Set Universe Polymorphism. // Needed for real use cases *)
Variable (M : Type -> Type).
Variable (Ma : forall A B, (A -> B) -> M A -> M B).
Variable (η : forall A, A -> M A).
Variable (μ : forall A, M (M A) -> M A).
(* Compose: o^* *)
Definition oStar A B C (f : A -> M B) (g: B -> M C) : A -> M C :=
fun x => μ (Ma g (f x)).
(* Bind *)
Definition bind A B (x : M A) (f : A -> M B) : M B := oStar (fun _ => x) f tt.
End MonadKleisli.
Depending on how you organize your definitions, proving your desired properties will likely require functional extensionality; not a big deal usually, but something to keep in mind.
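For illustration, here is a minimal, self-contained Coq sketch (my own addition, not part of the answer above) of the step where functional extensionality typically comes in: turning a pointwise equality of substitutions into an equality of the functions themselves.
Require Import Coq.Logic.FunctionalExtensionality.
(* Sketch: substitutions that agree on every variable are equal as functions. *)
Lemma subst_funext (A B : Type) (f g : A -> B) :
  (forall x, f x = g x) -> f = g.
Proof.
  intros H. apply functional_extensionality. exact H.
Qed.
This only matters if you state equations between the substitutions themselves rather than between their actions on trees, as composeP above does.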

type mismatch occurs while the evaluated values are equal

I'm a beginner in Idris and trying to make this code valid.
Could you also let me know a better place for beginner questions about Idris?
filter : (elem -> Bool) -> Vect len elem -> (p: (Fin len) ** Vect (finToNat p) elem)
filter {len=S l} p xs = ((FZ {k=l}) ** [])
filter {len=S l} p (x::xs) =
  let (a ** tail) = filter {len=l} p xs
  in if p x then
       ((FS a) ** x::tail)
     else
       ((weaken a) ** tail)
I wrote another filter, which doesn't pass the type check yet.
This new filter's type implies that the filtered vector cannot be longer than the original one.
However, Idris says
...
Specifically:
Type mismatch between
finToNat a
and
finToNat (weaken a)
We know those two terms always have the same value.
How can I describe the fact and let Idris say ok?
You have to show that finToNat a = finToNat (weaken a). tail has type Vect (finToNat a) elem, but you need Vect (finToNat (weaken a)) elem for the second component in the last line, because you wrote weaken a in the first pair component.
lemma : {n : _} -> (a : Fin n) -> finToNat (weaken a) = finToNat a
lemma FZ = Refl
lemma (FS x) = rewrite lemma x in Refl
filter : (elem -> Bool) -> Vect len elem -> (p: (Fin len) ** Vect (finToNat p) elem)
filter {len=S l} p xs = ((FZ {k=l}) ** [])
filter {len=S l} p (x::xs) =
  let (a ** tail) = Main.filter {len=l} p xs
  in if p x then
       ((FS a) ** x::tail)
     else
       (weaken a ** (rewrite lemma a in tail))

Why does this function hang the REPL?

Chapter 9 of Test-Driven Development with Idris presents the following data type and removeElem function.
import Data.Vect
data MyElem : a -> Vect k a -> Type where
  MyHere : MyElem x (x :: xs)
  MyThere : (later : MyElem x xs) -> MyElem x (y :: xs)
-- I slightly modified the definition of this function from the text.
removeElem : (value : a) -> (xs : Vect (S n) a) -> (prf : MyElem value xs) -> Vect n a
removeElem value (value :: ys) MyHere = ys
removeElem value (y :: ys) (MyThere later) = removeElem value (y :: ys) (MyThere later)
The following works:
*lecture> removeElem 1 [1,2,3] MyHere
[2, 3] : Vect 2 Integer
But, the following call is still running after a few minutes:
*lecture> removeElem 2 [1,2,3] (MyThere MyHere)
Why is this (compilation, I'm assuming) so slow?
The second case of your removeElem reads
removeElem value (y :: ys) (MyThere later) = removeElem value (y :: ys) (MyThere later)
The right-hand side is exactly the same as the left-hand side; so your recursion diverges. This is why evaluation hangs.
Note that Idris would have caught this error if you declared that removeElem should be total:
total removeElem : (value : a) -> (xs : Vect (S n) a) -> (prf : MyElem value xs) -> Vect n a
removeElem value (value :: ys) MyHere = ys
removeElem value (y :: ys) (MyThere later) = removeElem value (y :: ys) (MyThere later)
which results in the compile-time error
RemoveElem.idr line 9 col 0:
Main.removeElem is possibly not total due to recursive path Main.removeElem
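For reference, a terminating version can follow the same pattern the book uses for its built-in Elem type. The sketch below is my own adaptation to MyElem; the Uninhabited implementation dismisses the impossible case of a membership proof for an empty vector.
Uninhabited (MyElem x []) where
  uninhabited MyHere impossible
  uninhabited (MyThere _) impossible

removeElem : (value : a) -> (xs : Vect (S n) a) -> (prf : MyElem value xs) -> Vect n a
removeElem value (value :: ys) MyHere = ys
removeElem {n = Z} value (y :: []) (MyThere later) = absurd later
removeElem {n = S k} value (y :: ys) (MyThere later) = y :: removeElem value ys later
Unlike the original second clause, the recursive call is now on the strictly smaller ys and later, so evaluation terminates and the total check passes.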

Difference between n_times and chain

I came across the following two function definitions:
function n_times(f, n) {
  if (n === 1) { return f; }
  else {
    return function(x) {
      return f((n_times(f, n - 1))(x));
    }
  }
}
function chain(f, n) {
  if (n === 1) {
    return f;
  } else {
    return (chain(f, n - 1))(f);
  }
}
I can't seem to figure out the difference between the two. Applying the substitution model suggests the following:
for chain: chain(f, 3) (x) = f(f(f(x)))
for n_times: n_times(f, 3) (x) = f((f(f(x))) (x))
So there is more than one variable in the case of n_times.
You must have gone wrong somewhere in your substitution. With the function definitions you gave (which are not really standard, by the way), it's
n_times(f, 3) (x)
≡ function(x) { return f(n_times(f, 2)(x)) } (x)
≡ f(n_times(f, 2)(x))
≡ f(function(x) { return f(n_times(f, 1)(x)) } (x))
≡ f(f(n_times(f, 1)(x)))
≡ f(f(f(x)))
and
chain(f, 3) (x)
≡ chain(f, 2) (f) (x)
≡ chain(f, 1) (f) (f) (x)
≡ f (f) (f) (x)
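A quick concrete check of the difference (my own example, assuming the two definitions above are in scope): with an ordinary numeric function, n_times behaves as repeated composition, while chain ends up applying f to itself.
function inc(x) { return x + 1; }

console.log(n_times(inc, 3)(5)); // 8, i.e. inc(inc(inc(5)))
// chain(inc, 3) already throws a TypeError: it reduces to (inc(inc))(inc),
// and inc(inc) is not a function.
chain only produces something useful when f itself accepts (and returns) functions.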

Haskell Binary div Binary

I'm writing a program which converts decimal numbers, characters, and strings to binary numbers and works with them. But I got stuck because I want to divide one binary number by another,
something like this:
11010110110000
/ 10011
--------------
= 01001110110000
so the new number will be 1001110110000 / 10011... until the very last result.
Here is my code:
import Data.Char (ord)
import Data.List
toBinary :: Int -> [Int]
toBinary 0 = []
toBinary x = reverse (kisegf x)

kisegf 0 = []
kisegf x | x `mod` 2 == 1 = 1 : kisegf (x `div` 2)
         | x `mod` 2 == 0 = 0 : kisegf (x `div` 2)

chrToBinary :: Char -> [Int]
chrToBinary x
  | length (toBinary (ord x)) == 8 = toBinary (ord x)
  | otherwise = take (8 - length (toBinary (ord x))) [0,0..] ++ toBinary (ord x)

strToBinary :: String -> [Int]
strToBinary [] = []
strToBinary (x:xs) = [l | l <- chrToBinary x] ++ strToBinary xs

bxor :: [Int] -> [Int] -> [Int]
bxor [] [] = []
bxor (x:xs) (y:ys)
  | length (x:xs) == length (y:ys) && (x /= y) = 1 : bxor xs ys
  | length (x:xs) == length (y:ys) && (x == y) = 0 : bxor xs ys
  | length (x:xs) <  length (y:ys) && (x /= y) = 1 : bxor (take (length (y:ys) - length (x:xs)) [0,0..] ++ xs) ys
  | length (x:xs) <  length (y:ys) && (x == y) = 0 : bxor (take (length (y:ys) - length (x:xs)) [0,0..] ++ xs) ys
  | length (x:xs) >  length (y:ys) && (x /= y) = 1 : bxor xs (take (length (x:xs) - length (y:ys)) [0,0..] ++ ys)
  | length (x:xs) >  length (y:ys) && (x == y) = 0 : bxor xs (take (length (x:xs) - length (y:ys)) [0,0..] ++ ys)

{- this compares 2 binary numbers: if a is bigger then True else False -}
(%>=%) :: [Int] -> [Int] -> Bool
(%>=%) [] [] = True
(%>=%) [] _  = False
(%>=%) _  [] = True
(%>=%) (x:xs) (y:ys) = x == 1 && y == 1 && elemIndex 1 (x:xs) == elemIndex 1 (y:ys)

bmod :: [Int] {- number -} -> [Int] {- div -} -> [Int] {- result -}
bmod (x:xs) (y:ys)
  | length (x:xs) >= length (y:ys) && (take (length (y:ys)) (x:xs)) %>=% (y:ys) = ???
  | length (x:xs) >= length (y:ys) = ???
  | otherwise = (x:xs)
What should I write in place of the "???"?
Another, bigger example: bmod 11010110110000 10011.
_______________
10011 ) 11010110110000
10011,,.,,....
-----,,.,,....
10011,.,,....
10011,.,,....
-----,.,,....
00001.,,....
00000.,,....
-----.,,....
00010,,....
00000,,....
-----,,....
00101,....
00000,....
-----,....
01011....
00000....
-----....
10110...
10011...
-----...
01010..
00000..
-----..
10100.
10011.
-----.
01110
10011 <- bigger so cant div again
-----
1110 = what i want
Your function as written isn't what you want.
bmod xs ys | not (xs %>=% ys) = xs
           | otherwise        = ????
will probably work better. In the ????, you want to take successively longer prefixes from the beginning of xs until you find a prefix of xs that is greater than ys, then recurse with
bmod ((xsPrefix %-% ys) ++ xsSuffix) ys
For getting the prefix of xs, inits combined with filter is pretty much what you need. Obviously, there are also some more binary functions you will need to implement.
The issue with your design is that there is nothing for you to recurse to in the second case -- you want to end up using the code from your first case somehow, but there isn't an easy way to do that short of copying the code.
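Spelling that suggestion out, one possible shape is sketched below (my own sketch, not code from the answer). Since the worked example above subtracts without borrows, the subtraction step is just the bxor you already have; note that %>=% has to genuinely implement "greater or equal" (leading zeros included) for the recursion to terminate.
-- Sketch of the recursion described above; assumes the question's bxor,
-- a correct (%>=%), and inits from Data.List (already imported above).
bmod :: [Int] -> [Int] -> [Int]
bmod xs ys
  | not (xs %>=% ys) = xs                                  -- divisor no longer fits: xs is the remainder
  | otherwise        = bmod (bxor xsPrefix ys ++ xsSuffix) ys
  where
    xsPrefix = head [p | p <- inits xs, p %>=% ys]         -- shortest prefix the divisor fits into
    xsSuffix = drop (length xsPrefix) xs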
Also, your kisegf function could be cleaned up a bit - why not
kisegf 0 = []
kisegf x = (x `mod` 2) : kisegf (x `div` 2)
Although not an answer to your question, I would keep the bit strings LSB first, rather than MSB first (i.e. don't reverse in toBinary). In this way, the list index corresponds to the bit significance, so you don't have to worry about adding leading zeros to align operands. For instance, the bxor function becomes much simpler:
bxor [] bs = bs
bxor as [] = as
bxor (a:as) (b:bs) = (a `xor` b) : bxor as bs
  where
    a `xor` b | a /= b    = 1
              | otherwise = 0
Having the bits in this order will also make addition/subtraction simpler, since carries propagate from the LSB to the MSB:
badd :: [Int] {- a -} -> [Int] {- b -} -> Int {- carry-in -} -> [Int]
badd [] [] 0 = [] -- no carry-out
badd [] [] 1 = [1] -- carry-out
badd [] (b:bs) c = s : badd [] bs c' where (c', s) = add 0 b c -- zero-extend as
badd (a:as) [] c = s : badd as [] c' where (c', s) = add a 0 c -- zero-extend bs
badd (a:as) (b:bs) c = s : badd as bs c' where (c', s) = add a b c
add a b c = (s `div` 2, s `mod` 2) where s = a+b+c
Left and right shifts are also simpler since they affect LSBs:
as `rsh` n = drop n as
as `lsh` n = replicate n 0 ++ as
For signed numbers, you implicitly assume that the last bit repeats indefinitely.
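As a small illustration of that convention (my own sketch): sign-extending an LSB-first list to a given width just repeats its final bit.
-- Sketch: widen an LSB-first bit list to n bits by repeating its last (sign) bit.
-- An empty list is treated as zero; lists already n bits or longer are unchanged.
signExtend :: Int -> [Int] -> [Int]
signExtend n [] = replicate n 0
signExtend n bs = bs ++ replicate (n - length bs) (last bs)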
