I am brand new to Isabelle. I have a simple tree datatype and a function getTree. getTree uses a boolean list to control its traversal of the tree (it goes left for False and right for True). When it gets to the end of the list, it returns the remaining subtree. If it reaches a leaf before reaching the end of the list, it returns that leaf. I want to show that if getTree returns a leaf using some list ys, then it will return the same leaf using (ys @ bs) (once you get to a leaf, the remaining list doesn't matter).
All of my attempts to prove this have failed. If anyone has any suggestions, I would be very grateful.
Here is the code:
datatype 'a tree =
Leaf 'a |
Node 'a "'a tree" "'a tree"
fun getTree :: "'a tree ⇒ bool list ⇒ 'a tree" where
"getTree (Leaf x) ys = (Leaf x)" |
"getTree r [] = r" |
"getTree (Node x l r) (False # ys) = getTree l ys" |
"getTree (Node x l r) (True # ys) = getTree r ys"
lemma "getTree t ys = Leaf a ==> getTree t (ys @ bs) = Leaf a"
When you define a function via "fun", Isabelle generates an induction rule according to the recursive structure of the definition. Here you can make use of getTree.induct:
by (induct t ys rule: getTree.induct) simp_all
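Spelled out in full (the lemma name getTree_keeps_leaf is an arbitrary choice, and @ is list append, which the statement needs in order to typecheck), something like this should go through:
lemma getTree_keeps_leaf: "getTree t ys = Leaf a ==> getTree t (ys @ bs) = Leaf a"
by (induct t ys rule: getTree.induct) simp_all
The rule getTree.induct performs simultaneous induction on the tree and the list, following the four defining equations, so simp_all can close each case with the corresponding equation and the induction hypothesis.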
I am doing Exercise 2.6 from the Concrete Semantics book:
Starting from the type 'a tree defined in the text, define a function contents :: 'a tree ⇒ 'a list that collects all values in a tree in a list, in any order, without removing duplicates. Then define a function sum_tree :: nat tree ⇒ nat that sums up all values in a tree of natural numbers and prove sum_tree t = sum_list (contents t) (where sum_list is predefined).
I have started to prove the theorem without using auto, instead guiding Isabelle to use the necessary theorems:
theory Minimal
imports Main
begin
datatype 'a tree = Tip | Node "'a tree" 'a "'a tree"
fun contents :: "'a tree ⇒ 'a list" where
"contents Tip = []"
| "contents (Node l a r) = a # (contents l) # (contents r)"
fun sum_tree :: "nat tree ⇒ nat" where
"sum_tree Tip = 0"
| "sum_tree (Node l a r) = a + (sum_tree l) + (sum_tree r)"
lemma sum_list_contents:
"sum_list (contents t1) + sum_list (contents t2) = sum_list (contents t1 # contents t2)"
apply auto
done
lemma sum_commutes: "sum_tree(t) = sum_list(contents(t))"
apply (induction t)
apply (simp only: sum_tree.simps contents.simps sum_list.Nil)
apply (simp only: sum_list.Cons contents.simps sum_tree.simps sum_list_contents)
At this point it arrives at the following proof state:
proof (prove)
goal (1 subgoal):
1. ⋀t1 x2 t2.
sum_tree t1 = sum_list (contents t1) ⟹
sum_tree t2 = sum_list (contents t2) ⟹
x2 + sum_list (contents t1) + sum_list (contents t2) = x2 + sum_list (contents t1 @ contents t2)
I wonder why simp did not use the provided sum_list_contents lemma here; I know that a plain simp would solve the equation.
What does general simp include that simp only does not use in this case?
As pointed out in the comments, the missing piece is associativity of addition for natural numbers. Adding add.assoc to the simplification rules solves the equation.
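Concretely, the final simp only step of sum_commutes could then read (a sketch):
apply (simp only: sum_list.Cons contents.simps sum_tree.simps sum_list_contents add.assoc)
done
With add.assoc the left-hand side x2 + sum_list (contents t1) + sum_list (contents t2) is regrouped to x2 + (sum_list (contents t1) + sum_list (contents t2)), at which point sum_list_contents applies and both sides coincide.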
Alternatively, the grouping of the operands in the definition of the tree sum could be changed:
fun sum_tree_1 :: "nat tree ⇒ nat" where
"sum_tree_1 Tip = 0"
| "sum_tree_1 (Node l a r) = a + ((sum_tree_1 l) + (sum_tree_1 r))"
Then the associativity is not required:
lemma sum_commutes_1: "sum_tree_1(t) = sum_list(contents(t))"
apply (induction t)
apply (simp only: sum_tree_1.simps contents.simps sum_list.Nil)
apply (simp only: sum_list.Cons contents.simps sum_tree_1.simps sum_list_contents)
done
I’d like to define the following function using Program Fixpoint or Function in Coq:
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Coq.Program.Wf.
Require Import Recdef.
Inductive Tree := Node : nat -> list Tree -> Tree.
Fixpoint height (t : Tree) : nat :=
match t with
| Node x ts => S (fold_right Nat.max 0 (map height ts))
end.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map (fun t => mapTree f t) ts)
end.
Next Obligation.
Unfortunately, at this point I have a proof obligation height t < height (Node x ts) without knowing that t is a member of ts.
Similarly with Function instead of Program Fixpoint, only that Function detects the problem and aborts the definition:
Error:
the term fun t : Tree => mapTree f t can not contain a recursive call to mapTree
I would expect to get a proof obligation of In t ts → height t < height (Node x ts).
Is there a way of getting that that does not involve restructuring the function definition? (I know work-arounds that require inlining the definition of map here, for example – I’d like to avoid these.)
Isabelle
To justify that expectation, let me show what happens when I do the same in Isabelle, using the function command, which is (AFAIK) related to Coq’s Function command:
theory Tree imports Main begin
datatype Tree = Node nat "Tree list"
fun height where
"height (Node _ ts) = Suc (foldr max (map height ts) 0)"
function mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
by pat_completeness auto
termination
proof (relation "measure (λ(f,t). height t)")
show "wf (measure (λ(f, t). height t))" by auto
next
fix f :: "nat ⇒ nat" and x :: nat and ts :: "Tree list" and t
assume "t ∈ set ts"
thus "((f, t), (f, Node x ts)) ∈ measure (λ(f, t). height t)"
by (induction ts) auto
qed
In the termination proof, I get the assumption t ∈ set ts.
Note that Isabelle does not require a manual termination proof here, and the following definition works just fine:
fun mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
This works because the map function has a “congruence lemma” of the form
xs = ys ⟹ (⋀x. x ∈ set ys ⟹ f x = g x) ⟹ map f xs = map g ys
that the function command uses to find out that the termination proof only needs to consider t ∈ set ts.
If such a lemma is not available, e.g. because I define
definition "map' = map"
and use that in mapTree, I get the same unprovable proof obligation as in Coq. I can make it work again by declaring a congruence lemma for map', e.g. using
declare map_cong[folded map'_def,fundef_cong]
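Put together, a minimal sketch of this map' scenario might look as follows (mapTree' is just a fresh name to avoid a clash with the mapTree defined above):
definition "map' = map"
declare map_cong[folded map'_def,fundef_cong]
fun mapTree' where
  "mapTree' f (Node x ts) = Node (f x) (map' (λ t. mapTree' f t) ts)"
Without the declare line the automatic termination proof fails for the reason described above; with it, the congruence rule for map' again supplies the t ∈ set ts assumption and fun goes through.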
In this case, you actually do not need well-founded recursion in its full generality:
Require Import Coq.Lists.List.
Set Implicit Arguments.
Inductive tree := Node : nat -> list tree -> tree.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
Coq is able to figure out by itself that recursive calls to map_tree are performed on strict subterms. However, proving anything about this function is difficult, as the induction principle generated for tree is not useful:
tree_ind :
forall P : tree -> Prop,
(forall (n : nat) (l : list tree), P (Node n l)) ->
forall t : tree, P t
This is essentially the same problem you described earlier. Luckily, we can fix the issue by proving our own induction principle with a proof term.
Require Import Coq.Lists.List.
Import ListNotations.
Unset Elimination Schemes.
Inductive tree := Node : nat -> list tree -> tree.
Set Elimination Schemes.
Fixpoint tree_ind
(P : tree -> Prop)
(IH : forall (n : nat) (ts : list tree),
fold_right (fun t => and (P t)) True ts ->
P (Node n ts))
(t : tree) : P t :=
match t with
| Node n ts =>
let fix loop ts :=
match ts return fold_right (fun t' => and (P t')) True ts with
| [] => I
| t' :: ts' => conj (tree_ind P IH t') (loop ts')
end in
IH n ts (loop ts)
end.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
The Unset Elimination Schemes command prevents Coq from generating its default (and not useful) induction principle for tree. The occurrence of fold_right in the induction hypothesis simply expresses that the predicate P holds of every tree t' appearing in ts; for instance, for ts = [t1; t2] it unfolds to P t1 /\ (P t2 /\ True).
Here is a statement that you can prove using this induction principle:
Lemma map_tree_comp f g t :
map_tree f (map_tree g t) = map_tree (fun n => f (g n)) t.
Proof.
induction t as [n ts IH]; simpl; f_equal.
induction ts as [|t' ts' IHts]; try easy.
simpl in *.
destruct IH as [IHt' IHts'].
specialize (IHts IHts').
now rewrite IHt', <- IHts.
Qed.
You can now do this with Equations and get the right elimination principle automatically, using either structural nested recursion or well-founded recursion.
In general, it might be advisable to avoid this problem. But if one really wants to obtain the proof obligation that Isabelle gives you, here is a way:
In Isabelle, we can give an external lemma that states that map applies its function argument only to members of the given list. In Coq, we cannot do this in an external lemma, but we can do it in the type. So instead of the normal type of map
forall A B, (A -> B) -> list A -> list B
we want the type to say “f is only ever applied to elements of the list”:
forall A B (xs : list A), (forall x : A, In x xs -> B) -> list B
(This requires reordering the arguments so that the type of f can mention xs.)
Writing this function is not trivial, and I found it easier to use a proof script:
Definition map {A B} (xs : list A) (f : forall (x:A), In x xs -> B) : list B.
Proof.
induction xs.
* exact [].
* refine (f a _ :: IHxs _).
- left. reflexivity.
- intros. eapply f. right. eassumption.
Defined.
But you can also write it “by hand”:
Fixpoint map {A B} (xs : list A) : forall (f : forall (x:A), In x xs -> B), list B :=
match xs with
| [] => fun _ => []
| x :: xs => fun f => f x (or_introl eq_refl) :: map xs (fun y h => f y (or_intror h))
end.
In either case, the result is nice: I can use this function in mapTree, i.e.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map ts (fun t _ => mapTree f t))
end.
Next Obligation.
and I don’t have to do anything with the new argument to f, but it shows up in the termination proof obligation, as In t ts → height t < height (Node x ts), as desired. So I can prove that and define mapTree:
simpl.
apply Lt.le_lt_n_Sm.
induction ts; inversion_clear H.
- subst. apply PeanoNat.Nat.le_max_l.
- rewrite IHts by assumption.
apply PeanoNat.Nat.le_max_r.
Qed.
It only works with Program Fixpoint, not with Function, unfortunately.
Here is a theory I've taken from the Isabelle tutorial. It shows ways to prove theorems, but I was wondering how one would call the app function below with two lists.
theory ToyList
imports Main
begin
datatype 'a list = Nil | Cons 'a "'a list"
fun app :: "'a list ⇒ 'a list ⇒ 'a list" where
"app Nil ys = ys" |
"app (Cons x xs) ys = Cons x (app xs ys)"
I tried theorem app_test[simp] : "app (xs # ys ) # zs = xs # ys # zs" and other variations, but it didn't work.
I expect to be able to define a list, something like let ?aa = [1,2], then run rev_app on this aa and show the value as [2,1].
theory Scratch2
imports Datatype
begin
datatype 'a list = Nil ("[]")
| Cons 'a "'a list" (infixr "#" 65)
(* This is the append function: *)
primrec app :: "'a list => 'a list => 'a list" (infixr "@" 65)
where
"[] @ ys = ys" |
"(x # xs) @ ys = x # (xs @ ys)"
primrec rev :: "'a list => 'a list" where
"rev [] = []" |
"rev (x # xs) = (rev xs) @ (x # [])"
primrec itrev :: "'a list => 'a list => 'a list" where
"itrev [] ys = ys" |
"itrev (x#xs) ys = itrev xs (x#ys)"
value "rev (True # False # [])"
lemma app_Nil2 [simp]: "xs @ [] = xs"
apply(induct_tac xs)
apply(auto)
done
lemma app_assoc [simp]: "(xs @ ys) @ zs = xs @ (ys @ zs)"
apply(induct_tac xs)
apply(auto)
done
(1st trial)
lemma rev_app [simp]: "rev(xs @ ys) = (rev ys) @ (rev xs)"
apply(induct_tac xs)
thus ?aa by rev_app
show "rev_app [1; 2]"
(2nd trial)
value "rev_app [1,2]"
(3rd trial)
fun ff :: "'a list ⇒ 'a list"
where "rev(xs # ys) = (rev ys) # (rev xs)"
value "ff [1,2]"
thus ?aa by rev_app
show "rev_app [1; 2]"
end
Firstly, you need the syntax for list enumeration (I just picked it up in the src/HOL/List.thy file):
syntax
-- {* list Enumeration *}
"_list" :: "args => 'a list" ("[(_)]")
translations
"[x, xs]" == "x#[xs]"
"[x]" == "x#[]"
Then, is one of the following what you're searching for?
Proposition 1:
lemma example1: "rev [a, b] = [b, a]"
by simp
This lemma is proved by the simp method, which uses the defining equations of rev to rewrite the left-hand term and show that the two sides of the equality are equal. This is the solution I prefer, because you can see that the example holds even without evaluating it with Isabelle.
Proposition 2:
value "rev [a, b]" (* return "[b, a]" *)
Here and in Proposition 3, we just use the command value to evaluate rev.
Proposition 3:
value "rev [a, b] = [b, a]" (* returns "True" *)
Note that the following lemma is not needed for the previous propositions:
lemma rev_app [simp]: "rev(xs @ ys) = (rev ys) @ (rev xs)"
apply (induct_tac xs)
by simp_all
Notes:
As a general principle, you shouldn't import the "Datatype" package alone, but import "Main" instead.
In your 1st attempt you're mixing the "apply" style (apply ...) and the "structured proof" style (thus ...).
"thus ?aa" makes no sense if "?aa" is "[1,2]", as the argument of "thus" should be a subgoal, i.e. a proposition with a boolean value.
To evaluate a term, the command "value" uses ML execution or, if this fails, normalisation by evaluation.
In example1, you can use a custom proof and thus additional lemmas (for example, by (simp add: rev_app)).
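For instance, under the syntax declarations above, one could restate example1 and explicitly add rev_app to the simpset (the name example1' is arbitrary; rev_app is not actually needed here, the point is merely to illustrate the add: syntax):
lemma example1': "rev [a, b] = [b, a]"
by (simp add: rev_app)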
Up until several days ago, I always defined a type, and then proved theorems directly about the type. Now I'm trying to use type classes.
Problem
The problem is that I can't instantiate cNAT for my type myD below, and it appears it's because simp has no effect on the abstract function cNAT, which I've made concrete with my primrec function cNAT_myD. I can only guess what's happening because of the automation that happens after instance proof.
Questions
Q1: Below, at the statement instantiation myD :: (type) cNAT, can you tell me how to finish the proof, and why I can prove the following theorem, but not the type class proof, which requires injective?
theorem dNAT_1_to_1: "(dNAT n = dNAT m) ==> n = m"
assumes injective: "(cNAT n = cNAT m) ==> n = m"
Q2: This is not as important, but at the bottom is this statement:
instantiation myD :: (type) cNAT2
It involves another way I was trying to instantiate cNAT. Can you tell me why I get Failed to refine any pending goal at show? I put some comments in the source to explain some of what I did to set it up. I used this slightly modified formula for the requirement injective:
assumes injective: "!!n m. (cNAT2 n = cNAT2 m) --> n = m"
Specifics
My contrived datatype is this, which may be useful to me someday: (Update: Well, for another example maybe. A good mental exercise is for me to try and figure out how I can actually get something inside a 'a myD list, other than []. With BNF, something like datatype_new 'a myD = myS "'a myD fset" gives me the warning that there's an unused type variable on the right-hand side)
datatype 'a myD = myL "'a myD list"
The type class is this, which requires an injective function from nat to 'a:
class cNAT =
fixes cNAT :: "nat => 'a"
assumes injective: "(cNAT n = cNAT m) ==> n = m"
dNAT: this non-type class version of cNAT works
fun get_myL :: "'a myD => 'a myD list" where
"get_myL (myL L) = L"
primrec dNAT :: "nat => 'a myD" where
"dNAT 0 = myL []"
|"dNAT (Suc n) = myL (myL [] # get_myL(dNAT n))"
fun myD2nat :: "'a myD => nat" where
"myD2nat (myL []) = 0"
|"myD2nat (myL (x # xs)) = Suc(myD2nat (myL xs))"
theorem left_inverse_1 [simp]:
"myD2nat(dNAT n) = n"
apply(induct n, auto)
by(metis get_myL.cases get_myL.simps)
theorem dNAT_1_to_1:
"(dNAT n = dNAT m) ==> n = m"
apply(induct n)
apply(simp) (*
The simp method expanded dNAT.*)
apply(metis left_inverse_1 myD2nat.simps(1))
by (metis left_inverse_1)
cNAT: type class version that I can't instantiate
instantiation myD :: (type) cNAT
begin
primrec cNAT_myD :: "nat => 'a myD" where
"cNAT_myD 0 = myL []"
|"cNAT_myD (Suc n) = myL (myL [] # get_myL(cNAT_myD n))"
instance
proof
fix n m :: nat
show "cNAT n = cNAT m ==> n = m"
apply(induct n)
apply(simp) (*
The simp method won't expand cNAT to cNAT_myD's definition.*)
by(metis injective)+ (*
Metis proved it without unfolding cNAT_myD. It's useless. Goals always remain,
and the type variables in the output panel are all weird.*)
oops
end
cNAT2: Failed to refine any pending goal at show
(*I define a variation of `injective` in which the `assumes` definition, the
goal, and the `show` statement are exactly the same, and that strange `Failed
to refine any pending goal` error still shows up.*)
class cNAT2 =
fixes cNAT2 :: "nat => 'a"
assumes injective: "!!n m. (cNAT2 n = cNAT2 m) --> n = m"
instantiation myD :: (type) cNAT2
begin
primrec cNAT2_myD :: "nat => 'a myD" where
"cNAT2_myD 0 = myL []"
|"cNAT2_myD (Suc n) = myL (myL [] # get_myL(cNAT2_myD n))"
instance
proof (*
goal: !!n m. cNAT2 n = cNAT2 m --> n = m.*)
show
"!!n m. cNAT2 n = cNAT2 m --> n = m"
(*Failed to refine any pending goal
Local statement fails to refine any pending goal
Failed attempt to solve goal by exported rule:
cNAT2 (n::nat) = cNAT2 (m::nat) --> n = m *)
Your function cNAT is polymorphic in its result type, but the type variable does not appear among the parameters. This often causes type inference to compute a type which is more general than you want. In your case for cNAT, Isabelle infers for the two occurrences of cNAT in the show statement the type nat => 'b for some 'b of sort cNAT, but their type in the goal is nat => 'a myD. You can see this in jEdit by Ctrl-hovering over the cNAT occurrences to inspect the types. In ProofGeneral, you can enable printing of types using [[show_consts]].
Therefore, you have to explicitly constrain types in the show statement as follows:
fix n m
assume "(cNAT n :: 'a myD) = cNAT m"
then show "n = m"
Note that it is usually not a good idea to use Isabelle's meta-connectives !! and ==> inside a show statement; you should rephrase them using fix/assume/show instead.
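A minimal sketch of the resulting instance proof, with the actual injectivity argument still to be filled in where sorry stands (e.g. by adapting the myD2nat construction from dNAT_1_to_1 above):
instance proof
  fix n m
  assume "(cNAT n :: 'a myD) = cNAT m"
  then show "n = m"
    sorry (* the injectivity argument, e.g. via a myD2nat-style measure, goes here *)
qed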