Isabelle proving with translation issue

I had defined a few translations like this:
consts
"time" :: "i"
"sig" :: "i ⇒ i"
"BaseChTy" :: "i"
syntax
"time" :: "i"
"sig" :: "i ⇒ i"
translations
"time" ⇌ "CONST int"
"sig(A)" ⇌ "CONST int → A"
Then, I want to prove a theorem like this:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
It should be a very simple theorem, provable with the theorem Pi_mono in a single step:
thm Pi_mono
?B ⊆ ?C ⟹ ?A → ?B ⊆ ?A → ?C
So I did it like this:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
apply(drule Pi_mono[of _ _ "time"])
(*Output:
goal (1 subgoal):
1. sig(A) ⊆ sig(B) ⟹ sig(A) ⊆ sig(B)
*)
apply(simp)
(*Output:
Failed ...
*)
Since the premise has become the same as the goal, it should be proved immediately, but it isn't. Have I done something wrong in the translation definition?
I tried to change the theorem to:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ (time → A) ⊆ (time → B)"
(*Output:
goal (1 subgoal):
1. A ⊆ B ⟹ sig(A) ⊆ sig(B)
*)
apply(drule Pi_mono[of _ _ "time"])
(*Output:
goal (1 subgoal):
1. sig(A) ⊆ sig(B) ⟹ sig(A) ⊆ sig(B)
*)
apply(simp)
(*Output:
Success ...
*)
Then it works immediately. But shouldn't the translation make them the same thing?
Update:
Thanks to Mathias Fleury's reply, I tried a simplifier trace, and it shows something like this:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
using [[show_sorts]] apply(drule Pi_mono[of _ _ "time"])
using [[simp_trace]] apply(simp)
oops
(*
Output:
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
sig(A::i) ⊆ sig(B::i) ⟹ sig(A) ⊆ sig(B)
[1]Adding rewrite rule "??.unknown":
sig(A::i) ⊆ sig(B::i) ≡ True
*)
while the time → A version shows:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ time → A ⊆ time → B"
using [[show_sorts]] apply(drule Pi_mono[of _ _ "time"])
using [[simp_trace]] apply(simp)
oops
(*
Output:
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
sig(A::i) ⊆ sig(B::i) ⟹ sig(A) ⊆ sig(B)
[1]Adding rewrite rule "??.unknown":
sig(A::i) ⊆ sig(B::i) ≡ True
[1]Applying instance of rewrite rule "??.unknown":
sig(A::i) ⊆ sig(B::i) ≡ True
[1]Rewriting:
sig(A::i) ⊆ sig(B::i) ≡ True
*)
Why can the time → A version apply the instance of the rewrite rule and continue the proof, while the original one cannot?

Thanks to the imports you mentioned in your comment (thanks), I could reproduce the problem. The issue is the translation; you need to do something like
syntax
"sig" :: "i ⇒ i" (‹sig(_)›)
translations
"sig(A)" == "CONST int → A"
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
apply(rule Pi_mono)
apply assumption
done
Just to expand on my comment and explain how I found that the problem is the translation: I looked at the unification failure:
theorem "⟦ A ⊆ B ⟧ ⟹ time → A ⊆ time → B"
supply[[unify_trace_failure]]
apply (rule Pi_mono)
The error message says that sig and Pi are not unifiable. This is already strange. To be certain that the problem comes from the translation, I looked at the underlying term:
ML ‹@{print} @{term ‹sig(A)›}›
It shows the underlying term, and we can see that the translation is not working. I then looked at other translations in the library to fix the issue.
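As an aside (a sketch of my own, not part of the answer above): if time and sig are only meant as notation for int and int → A, plain abbreviations avoid the consts/syntax/translations machinery altogether, assuming the same ZF imports where int and Pi_mono are available. With abbreviations, sig(A) and int → A are literally the same term after parsing, so there is no translation that can fail to fire:
abbreviation time :: "i" where
"time ≡ int"
abbreviation sig :: "i ⇒ i" where
"sig(A) ≡ time → A"
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
apply(rule Pi_mono)
apply assumption
done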

Related

Topological filters in Isabelle

I'm studying topological filters in Filter.thy
theory Filter
imports Set_Interval Lifting_Set
begin
subsection ‹Filters›
text ‹
This definition also allows non-proper filters.
›
locale is_filter =
fixes F :: "('a ⇒ bool) ⇒ bool"
assumes True: "F (λx. True)"
assumes conj: "F (λx. P x) ⟹ F (λx. Q x) ⟹ F (λx. P x ∧ Q x)"
assumes mono: "∀x. P x ⟶ Q x ⟹ F (λx. P x) ⟹ F (λx. Q x)"
typedef 'a filter = "{F :: ('a ⇒ bool) ⇒ bool. is_filter F}"
proof
show "(λx. True) ∈ ?filter" by (auto intro: is_filter.intro)
qed
I don't get this definition. It's quite convoluted, so I'll simplify it first.
The expression
F (λx. P x) could be simplified to F P (using eta reduction of the lambda calculus). The predicate type 'a ⇒ bool is really just a set, 'a set. Similarly, ('a ⇒ bool) ⇒ bool should be 'a set set. Then we could rewrite the axioms as
assumes conj: "P ∈ F ∧ Q ∈ F ⟹ Q ∩ P ∈ F"
assumes mono: "P ⊆ Q ∧ P ∈ F ⟹ Q ∈ F"
Now my question is about the True axiom. It is equivalent to
assumes True: "UNIV ∈ F"
This does not match any definition of filters that I have ever seen.
The axiom should be instead
assumes True: "{} ∉ F" (* the name True is not very fitting anymore *)
The statement UNIV ∈ F is unnecessary because it follows from axiom mono.
So what's up with this definition that Isabelle provides?
The link provided by Javier Diaz has lots of explanations.
Turns out this is a definition that also allows the improper filter. The axiom True is necessary and does not follow from mono. If this axiom were missing, then F could be defined as
F P = False
or, in set-theory notation, F could be the empty set, and mono and conj would then be satisfied vacuously.
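This can also be checked directly in Isabelle/HOL (a small sketch of mine, not from the linked discussion): the everywhere-false F satisfies conj and mono vacuously, and it is exactly the True axiom that makes is_filter reject it.
lemma "¬ is_filter (λP. False)"
proof (rule notI)
assume "is_filter (λP. False)"
then have "(λP. False) (λx. True)" by (rule is_filter.True)
then show False by simp
qed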

Focussing on new subgoals in Eisbach

In Eisbach I can use ; to apply a method to all new subgoals created by a method.
However, I often know how many subgoals are created and would like to apply different methods to the new subgoals.
Is there a way to say something like "apply method X to the first new subgoal and method Y to the second new subgoal"?
Here is a simple use case:
I want to develop a method that works on 2 conjunctions of arbitrary length but with the same structure.
The method should be usable to show that conjunction 1 implies conjunction 2 by showing that the implication holds for each component.
It should be usable like this:
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
proof (conj_one_by_one pre: c)
show "a 0 ⟹ a' 0" by (rule imp)
show "a 1 ⟹ a' 1" by (rule imp)
show "a 2 ⟹ a' 2" by (rule imp)
show "a 3 ⟹ a' 3" by (rule imp)
qed
When implementing this method in Eisbach, I have a problem after using rule conjI.
I get two subgoals that I want to recursively work on, but I want to use different facts for the two cases.
I came up with the following workaround, which uses artificial markers for the two subgoals and is kind of ugly:
definition "marker_L x ≡ x"
definition "marker_R x ≡ x"
lemma conjI_marked:
assumes "marker_L P" and "marker_R Q"
shows "P ∧ Q"
using assms unfolding marker_L_def marker_R_def by simp
method conj_one_by_one uses pre = (
match pre in
p: "?P ∧ ?Q" ⇒ ‹
(unfold marker_L_def marker_R_def)?,
rule conjI_marked;(
(match conclusion in "marker_L _" ⇒ ‹(conj_one_by_one pre: p[THEN conjunct1])?›)
| (match conclusion in "marker_R _" ⇒ ‹(conj_one_by_one pre: p[THEN conjunct2])?›))›)
| ((unfold marker_L_def marker_R_def)?, insert pre)
This is not a complete answer, but you might be able to derive some useful information from what is stated here.
In Eisbach I can use ; to apply a method to all new subgoals created
by a method. However, I often know how many subgoals are created and
would like to apply different methods to the new subgoals. Is there a
way to say something like "apply method X to the first new subgoal and
method Y to the second new subgoal"?
You can use the standard tactical RANGE to define your own tactic that you can apply to consecutive subgoals. I provide a very specialized and significantly simplified use case below:
ML‹
fun mytac ctxt thms = thms
|> map (fn thm => resolve_tac ctxt (single thm))
|> RANGE
›
lemma
assumes A: A and B: B and C: C
shows "A ∧ B ∧ C"
apply(intro conjI)
apply(tactic‹mytac @{context} [@{thm A}, @{thm B}, @{thm C}] 1›)
done
Hopefully, it should be reasonably easy to extend it to more complicated use cases (while being more careful than I am about subgoal indexing: you might also need SELECT_GOAL to ensure that the implementation is safe). While in the example above mytac accepts a list of theorems, it should be easy to see how these theorems can be replaced by tactics, and, with some further work, the tactic can be wrapped as a higher-order method.
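For instance (a sketch of mine, not from the original answer), the theorems can be replaced by arbitrary tactics; since RANGE already has type (int -> tactic) list -> int -> tactic, no wrapper is needed at all:
lemma
assumes A: A and B: B and C: C
shows "A ∧ B ∧ C"
apply(intro conjI)
apply(tactic‹RANGE [resolve_tac @{context} [@{thm A}], resolve_tac @{context} [@{thm B}], resolve_tac @{context} [@{thm C}]] 1›)
done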
I want to develop a method that works on 2 conjunctions of arbitrary
length but with the same structure. The method should be usable to
show that conjunction 1 implies conjunction 2 by showing that the
implication holds for each component. It should be usable like this:
UPDATE
Having had another look at the problem, it seems that there exists a substantially more natural solution. The solution follows the outline from the original answer, but the meta-implication is replaced with HOL's object-logic implication (the 'to and fro' conversion can be achieved using atomize (full) and intro impI):
lemma arg_imp2: "(a ⟶ b) ⟹ (c ⟶ d) ⟹ ((a ∧ c) ⟶ (b ∧ d))" by auto
lemma example:
assumes "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
apply(insert assms(1), atomize (full))
apply(intro arg_imp2; intro impI; intro imp; assumption)
done
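For reuse, the same recipe can be wrapped as an Eisbach method (a hedged sketch of mine; the method name conj_imp is made up and is not part of the original answer):
method conj_imp uses imp =
(atomize (full), (intro arg_imp2; intro impI; intro imp; assumption))
lemma
assumes "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
apply(insert assms(1))
apply(conj_imp imp: imp)
done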
LEGACY (this was part of the original answer, but is almost irrelevant due to the UPDATE suggested above)
If this is the only application that you have in mind, perhaps, there is a reasonably natural solution based on the following iterative procedure:
lemma arg_imp2: "(a ⟹ b) ⟹ (c ⟹ d) ⟹ ((a ∧ c) ⟹ (b ∧ d))" by auto
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
using c
apply(intro arg_imp2[of ‹a 0› ‹a' 0› ‹a 1 ∧ a 2 ∧ a 3› ‹a' 1 ∧ a' 2 ∧ a' 3›])
apply(rule imp)
apply(assumption)
apply(intro arg_imp2[of ‹a 1› ‹a' 1› ‹a 2 ∧ a 3› ‹a' 2 ∧ a' 3›])
apply(rule imp)
apply(assumption)
apply(intro arg_imp2[of ‹a 2› ‹a' 2› ‹a 3› ‹a' 3›])
apply(rule imp)
apply(assumption)
apply(rule imp)
apply(assumption+)
done
I am not certain how easy it would be to express this in Eisbach, but it should be reasonably easy to express this in Isabelle/ML.
Using the pointers from user9716869, I was able to write a method that does what I want:
ML‹
fun split_with_tac (tac1: int -> tactic) (ts: (int -> tactic) list) (i: int) (st: thm): thm Seq.seq =
let
val st's = tac1 i st
fun next st' =
let
val new_subgoals_count = 1 + Thm.nprems_of st' - Thm.nprems_of st
in
if new_subgoals_count <> length ts then Seq.empty
else
RANGE ts i st'
end
in
st's |> Seq.maps next
end
fun tok_to_method_text ctxt tok =
case Token.get_value tok of
SOME (Token.Source src) => Method.read ctxt src
| _ =>
let
val (text, src) = Method.read_closure_input ctxt (Token.input_of tok);
val _ = Token.assign (SOME (Token.Source src)) tok;
in text end
val readText: Token.T Token.context_parser = Scan.lift (Parse.token Parse.text)
val text_and_texts_closure: (Method.text * Method.text list) Token.context_parser =
(Args.context -- readText -- (Scan.lift \<^keyword>‹and› |-- Scan.repeat readText)) >> (fn ((ctxt, tok), t) =>
(tok_to_method_text ctxt tok, map (tok_to_method_text ctxt) t));
›
method_setup split_with =
‹text_and_texts_closure >> (fn (m, ms) => fn ctxt => fn facts =>
let
fun tac m st' =
method_evaluate m ctxt facts
fun tac' m i st' =
Goal.restrict i 1 st'
|> method_evaluate m ctxt facts
|> Seq.map (Goal.unrestrict i)
handle THM _ => Seq.empty
val initialT: int -> tactic = tac' m
val nextTs: (int -> tactic) list = map tac' ms
in SIMPLE_METHOD (HEADGOAL (split_with_tac initialT nextTs)) facts end)
›
lemma
assumes r: "P ⟹ Q ⟹ R"
and p: "P"
and q: "Q"
shows "R"
by (split_with ‹rule r› and ‹rule p› ‹rule q›)
method conj_one_by_one uses pre = (
match pre in
p: "?P ∧ ?Q" ⇒ ‹split_with ‹rule conjI› and
‹conj_one_by_one pre: p[THEN conjunct1]›
‹conj_one_by_one pre: p[THEN conjunct2]››
| insert pre)
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
proof (conj_one_by_one pre: c)
show "a 0 ⟹ a' 0" by (rule imp)
show "a 1 ⟹ a' 1" by (rule imp)
show "a 2 ⟹ a' 2" by (rule imp)
show "a 3 ⟹ a' 3" by (rule imp)
qed

Reasoning about overlapping inductive definitions in Isabelle

I would like to prove the following lemma in Isabelle:
lemma "T (Open # xs) ⟹ ¬ S (Open # xs) ⟹ count xs Close ≤ count xs Open"
Please find the definitions below:
datatype paren = Open | Close
inductive S where
S_empty: "S []" |
S_append: "S xs ⟹ S ys ⟹ S (xs @ ys)" |
S_paren: "S xs ⟹ S (Open # xs @ [Close])"
inductive T where
T_S: "T []" |
T_append: "T xs ⟹ T ys ⟹ T (xs @ ys)" |
T_paren: "T xs ⟹ T (Open # xs @ [Close])" |
T_left: "T xs ⟹ T (Open # xs)"
The lemma states that an unbalanced parentheses structure would result in a possibly unbalanced structure when removing an Open bracket.
I've been trying the techniques described in the book "Isabelle/HOL: A Proof Assistant for Higher-Order Logic", but so far none of them work. In particular, I tried to use rule inversion and rule induction, sledgehammer and other techniques.
One of the problems is that I haven't yet learned about Isar proofs, which complicates things. I would prefer if you could orient me with plain apply commands.
Please find a proof below. It is not unlikely that it can be improved: I tried to follow the simplest route towards the proof and relied on sledgehammer to fill in the details.
theory so_raoidii
imports Complex_Main
begin
datatype paren = Open | Close
inductive S where
S_empty: "S []" |
S_append: "S xs ⟹ S ys ⟹ S (xs @ ys)" |
S_paren: "S xs ⟹ S (Open # xs @ [Close])"
inductive T where
T_S: "T []" |
T_append: "T xs ⟹ T ys ⟹ T (xs @ ys)" |
T_paren: "T xs ⟹ T (Open # xs @ [Close])" |
T_left: "T xs ⟹ T (Open # xs)"
lemma count_list_lem:
"count_list xsa a = n ⟹
count_list ysa a = m ⟹
count_list (xsa @ ysa) a = n + m"
apply(induction xsa arbitrary: ysa n m)
apply auto
done
lemma T_to_count: "T xs ⟹ count_list xs Close ≤ count_list xs Open"
apply(induction rule: T.induct)
by (simp add: count_list_lem)+
lemma T_to_S_count: "T xs ⟹ count_list xs Close = count_list xs Open ⟹ S xs"
apply(induction rule: T.induct)
apply(auto)
apply(simp add: S_empty)
apply(metis S_append T_to_count add.commute add_le_cancel_right count_list_lem
dual_order.antisym)
apply(simp add: count_list_lem S_paren)
using T_to_count by fastforce
lemma "T (Open # xs) ⟹
¬ S (Open # xs) ⟹
count_list xs Close ≤ count_list xs Open"
apply(cases "T xs")
apply(simp add: T_to_count)
using T_to_S_count T_to_count by fastforce
end

Proving a theorem about parser combinators

I've written some simple parser combinators (without backtracking etc.). Here are the important definitions for my problem.
type_synonym ('a, 's) parser = "'s list ⇒ ('a * 's list) option"
definition sequenceP :: "('a, 's) parser
⇒ ('b, 's) parser
⇒ ('b, 's) parser" (infixl ">>P" 60) where
"sequenceP p q ≡ λ i .
(case p i of
None ⇒ None
| Some v ⇒ q (snd v))"
definition consumerP :: "('a, 's) parser ⇒ bool" where
"consumerP p ≡ (∀ i . (case p i of
None ⇒ True |
Some v ⇒ length (snd v) ≤ length i))"
I want to prove the following lemma.
lemma consumerPI: "consumerP p ⟹ consumerP q ⟹ consumerP (p >>P q)"
apply (unfold sequenceP_def)
apply (simp (no_asm) add:consumerP_def)
apply clarsimp
apply (case_tac "case p i of None ⇒ None | Some v ⇒ q (snd v)")
apply simp
apply clarsimp
apply (case_tac "p i")
apply simp
apply clarsimp
apply (unfold consumerP_def)
I arrive at this proof state, at which point I fail to make progress.
goal (1 subgoal):
1. ⋀i a b aa ba.
⟦∀i. case p i of None ⇒ True | Some v ⇒ length (snd v) ≤ length i;
∀i. case q i of None ⇒ True | Some v ⇒ length (snd v) ≤ length i; q ba = Some (a, b); p i = Some (aa, ba)⟧
⟹ length b ≤ length i
Can anybody give me a tip on how to solve this goal?
Thanks in advance!
It turns out that if you just want to prove the lemma, without further insight, then
lemma consumerPI: "consumerP p ⟹ consumerP q ⟹ consumerP (p >>P q)"
by (smt consumerP_def le_trans option.case_eq_if sequenceP_def)
does the job.
If you want to gain insight, you should go for a structured proof. First identify some useful lemmas about consumerP, and then write an Isar proof that details the necessary steps.
lemma consumerPI[intro!]:
assumes "⋀ i x r . p i = Some (x,r) ⟹ length r ≤ length i"
shows "consumerP p"
unfolding consumerP_def by (auto split: option.split elim: assms)
lemma consumerPE[elim, consumes 1]:
assumes "consumerP p"
assumes "p i = Some (x,r)"
shows "length r ≤ length i"
using assms by (auto simp add: consumerP_def split: option.split_asm)
lemma consumerP_sequencePI: "consumerP p ⟹ consumerP q ⟹ consumerP (p >>P q)"
proof-
assume "consumerP p"
assume "consumerP q"
show "consumerP (p >>P q)"
proof(rule consumerPI)
fix i x r
assume "(p >>P q) i = Some (x, r)"
then obtain x' r' where "p i = Some (x', r')" and "q r' = Some (x,r)"
by (auto simp add: sequenceP_def split:option.split_asm)
from `consumerP q` and `q r' = Some (x, r)`
have "length r ≤ length r'" by (rule consumerPE)
also
from `consumerP p` and `p i = Some (x', r')`
have "length r' ≤ length i" by (rule consumerPE)
finally
show "length r ≤ length i".
qed
qed
In fact, for this definition you can very nicely use the inductive command, and get intro and elim rules for free:
inductive consumerP where
consumerPI: "(⋀ i x r . p i = Some (x,r) ⟹ length r ≤ length i) ⟹ consumerP p"
In the above proof, you can replace "by (rule consumerPE)" with "by cases" and it works.

Rewriting with non-equality equivalence-relations using Isabelle simplifier

I would like to use the simplifier to replace subterms that are not equalities. Instead of a generic definition of my problem, I will illustrate this by an example:
Say I have a simple programming language and a Hoare logic on top of it. Say we have if, while, and the sequence operation. Also, we have denotation which gives a denotation of a program, and hoare P c Q. Below is an example signature in Isabelle/HOL:
(* A simple language and Hoare logic *)
typedecl program
typedecl memory
consts
seq :: "program ⇒ program ⇒ program" (infixl ";" 10) (* c;d: run c, then run d *)
ifthen :: "(memory ⇒ bool) ⇒ program ⇒ program" (* ifthen e c: run c if e(current_mem)=true *)
while :: "(memory ⇒ bool) ⇒ program ⇒ program" (* while e c: run c while e(current_mem)=true *)
denotation :: "program ⇒ memory ⇒ memory" (* denotation c m: memory after running c, when starting with memory m *)
hoare :: "(memory ⇒ bool) ⇒ program ⇒ (memory ⇒ bool) ⇒ bool"
(* hoare P c Q: if P(current_mem), then after running c, we have Q(current_mem) *)
Now it is not true that (a;b);c = a;(b;c) (these are different programs), but it does hold that they are denotationally equivalent, i.e., denotation ((a;b);c) = denotation (a;(b;c)).
That means, I should be able to rewrite a;(b;c) to (a;b);c inside a Hoare triple. For example, I would like to be able to prove
lemma "hoare P (while e (a;b;c)) Q ==> hoare P (while e (a;(b;c))) Q"
just using the simplifier (by simp), given suitable simplification rules.
Logically, the relevant rules would be:
lemma "denotation (a;(b;c)) = denotation ((a;b);c)"
lemma "denotation a = denotation b ==> hoare P a Q = hoare P b Q"
lemma "denotation a = denotation b ==> denotation (while e a) = denotation (while e b)"
lemma "denotation a = denotation b ==> denotation (ifthen e a) = denotation (ifthen e b)"
lemma "denotation a = denotation a' ==> denotation b = denotation b' ==> denotation (a;b) = denotation (a';b')"
Unfortunately, there seems to be no straightforward way of telling those rules to the simplifier. (More generally, we would like to tell the simplifier, in a congruence rule, that the rewriting below that point has to be done modulo a certain equivalence relation, denotational equivalence in the present example.)
I have found a partial solution to this problem (see my own answer below), but the solution seems like a hack (and I don't know how stable it is), and I wonder whether there is a good way to do it.
I do not mind having to use some ML code in the process (e.g., writing a simproc), but I would like to avoid having to reimplement the whole simplifier for rewriting inside Hoare triples.
Isabelle's simplifier does not support rewriting with respect to arbitrary equivalence relations. Fortunately, your rewrites appear to be rather simple, so it may be worth to implement the rewriting in a simproc. Here's the idea:
Write a simproc that triggers on terms of the form hoare P c Q. Upon invocation, it sets up a goal of the form hoare P c Q == ?rhs and
applies a rule stating that %c. hoare P c Q only cares about the equivalence class of its argument, not the concrete element. Then, apply the rewriting rules as introduction rules until the stated goal is solved. This should have instantiated ?rhs to something of the form hoare P c' Q. Test whether c and c' are alpha-beta-eta-...-equivalent. If so, the simproc fails with NONE, otherwise it returns the proven equation.
Here is the bunch of lemmas I would use as a start:
definition fun_equiv :: "('a ⇒ 'b) ⇒ 'a ⇒ 'a ⇒ bool"
where "fun_equiv f x y ⟷ f x = f y"
lemma fun_equiv_refl: "fun_equiv f x x" by(simp add: fun_equiv_def)
lemma hoare_cong_start: (* start rule *)
"fun_equiv denotation c c' ⟹ hoare P c Q == hoare P' c' Q'"
sorry
lemma while_cong: "fun_equiv denotation c c' ⟹ fun_equiv denotation (while b c) (while b c')" sorry
lemma seq_cong: "⟦ fun_equiv denotation a a'; fun_equiv denotation b b' ⟧ ⟹ fun_equiv denotation (a ; b) (a' ; b')" sorry
lemma if_cong: "fun_equiv denotation c c' ⟹ fun_equiv denotation (ifthen b c) (ifthen b c')" sorry
lemma seq_assoc: "fun_equiv denotation (a ; (b ; c)) (a; b; c)" sorry
lemma ifthen_true: "fun_equiv denotation (ifthen (λm. True) c) c" sorry
lemmas hoare_intros =
-- ‹rewrites come first, congruences later, reflexivity last›
ifthen_true seq_assoc
while_cong if_cong seq_cong
fun_equiv_refl
Since this is a simproc inside the simplifier, you may assume that the command in the invocation is already in normal form w.r.t. the simpset. In your example, the test %m. m = m has already been simplified to %_. True. Thus, the simproc can focus on just implementing the rewrites for the hoare rules.
A single step of the simproc invocation should do something like the following Isar snippet:
schematic_lemma "hoare (λm. P x) (while P (c;(d;e);ifthen (λm. True) (f;g;c))) (λm. True) == ?c"
by(rule hoare_cong_start)(rule hoare_intros)+
Since the simplifier iterates the simproc until it does not trigger any more, you should really end up with a normal form.
If you want to support conditional rewrite rules w.r.t. denotational equivalence, the rule hoare_intros should be replaced with something that checks the format of the subgoal. If it is not of the form fun_equiv denotation _ _, then the simproc should invoke the simplifier recursively (or any other proof method of your choice) rather than try another rule application of hoare_intros.
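For concreteness, the triggering part of this plan is just a simproc_setup on the hoare pattern. Below is a hedged skeleton of mine (following the same simproc interface as the full implementation further below); the proof search described above would replace the placeholder:
simproc_setup hoare_equiv_skeleton ("hoare P c Q") = ‹
fn _ => fn ctxt => fn ct =>
(* placeholder: set up the goal "hoare P c Q == ?rhs", prove it using the
congruence and rewrite rules, and return SOME of the equation, or NONE if
the right-hand side is unchanged *)
NONE
›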
For a way that seems to work OK, see the Isabelle/HOL theory below. The idea there is to use conditional simp-rules to simulate cong-rules.
E.g., a congruence rule
lemma l1 [cong]: "f a = f b ==> g a = g b"
is also a valid conditional simp-rule
lemma l2 [simp]: "f a = f b ==> g a = g b"
and when Isabelle applies the simp-rule, b becomes a schematic variable, and the simplifier rewrites the premise "f a = f ?b", which instantiates ?b with a simplification of a.
However, lemma l2 will make the simplifier loop, because it can be applied to its own rhs. So instead we can write a rule
lemma l3 [simp]: "f_simp a = f_done b ==> g_simp a = g_done b"
where g_simp and g_done are defined to be equal to g, but will stop the simplifier from applying the rules in a loop.
A full working example of this idea is in the theory file below.
The problems are:
- It is a hack. I don't know in which situations it might break down or be incompatible with other things
- apply simp may rewrite partially, so you have to invoke it again to finish rewriting. (See the last lemma in the theory.)
theory Test
imports Main
begin
section "Minimal Hoare logic"
(* A simple language and Hoare logic *)
typedecl program
typedecl memory
consts
seq :: "program ⇒ program ⇒ program" (infixl ";" 10) (* c;d: run c, then run d *)
ifthen :: "(memory ⇒ bool) ⇒ program ⇒ program" (* ifthen e c: run c if e(current_mem)=true *)
while :: "(memory ⇒ bool) ⇒ program ⇒ program" (* while e c: run c while e(current_mem)=true *)
denotation :: "program ⇒ memory ⇒ memory" (* denotation c m: memory after running c, when starting with memory m *)
hoare :: "(memory ⇒ bool) ⇒ program ⇒ (memory ⇒ bool) ⇒ bool"
(* hoare P c Q: if P(current_mem), then after running c, we have Q(current_mem) *)
(* seq is associative modulo denotational equivalence.
Thus we should be able to rewrite "a;(b;c)" to "a;b;c"
within a hoare triple. E.g., the following should be solved with simp: *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms
oops (* by simp *)
section "A failed attempt"
experiment begin
(* Here a natural approach first which, however, fails. *)
(* A congruence rule for the simplifier.
To rewrite a hoare triple "hoare P c Q",
we need to rewrite "denotation c". *)
lemma enter [cong]:
assumes "P==P'" and "Q==Q'"
assumes "denotation c == denotation c'"
shows "hoare P c Q == hoare P' c' Q'"
sorry
(* To descend further into subterms, we need a congruence-rule for while. *)
lemma while [cong]:
assumes "e=e'"
and "denotation c == denotation c'"
shows "denotation (while e c) ≡ denotation (while e' c')"
sorry
(* And we give a simplification rule for the associativity of seq *)
lemma assoc [simp]:
"denotation (a;(b;c)) = denotation (a;b;c)"
sorry
(* Now we can simplify the lemma from above *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms by simp
(* Unfortunately, this does not work any more once we add more congruence rules. *)
(* To descend further into subterms, we also need a congruence-rule for ifthen. *)
lemma ifthen [cong]:
assumes "e=e'"
and "denotation c == denotation c'"
shows "denotation (ifthen e c) ≡ denotation (ifthen e' c')"
sorry
(* Warning: Overwriting congruence rule for "Test.denotation" *)
(* Simplifier doesn't work any more because the congruence rule for while was overwritten *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms
oops (* by simp *)
end
section {* A working attempt *}
(* Define copies of the denotation-constant, to control the simplifier *)
definition "denotation_simp == denotation"
definition "denotation_done == denotation"
(* A congruence rule for the simplifier.
To rewrite a hoare triple "hoare P c Q",
we need to rewrite "denotation c".
This means, the congruence rule should have an assumption
"denotation c == denotation c'"
However, to avoid infinite loops with the rewrite rules below,
we use the logically equivalent assumption
"denotation_simp c == denotation_done c'"
This means that we need to configure the rules below to rewrite
"denotation_simp c" into "denotation_done c'", where c' is
the simplification of c (modulo denotational equivalence).
*)
lemma enter [cong]:
assumes "P==P'" and "Q==Q'"
assumes "denotation_simp c == denotation_done c'"
shows "hoare P c Q == hoare P' c' Q'"
sorry
(* A similar congruence rule for simplifying expressions
of the form "denotation c". *)
lemma denot [cong]:
assumes "denotation_simp c == denotation_done c'"
shows "denotation c == denotation c'"
sorry
(* Now we add a congruence-rule for while.
Since we saw above that we cannot use several congruence-rules
with "denotation" as their head,
we simulate a congruence rule using a simp-rule.
To rewrite "denotation_simp (while e c)", we need to rewrite
"denotation_simp c".
We put denotation_done on the rhs instead of denotation_simp
to avoid infinite loops.
*)
lemma while [simp]:
assumes "e=e'"
and "denotation_simp c == denotation_done c'"
shows "denotation_simp (while e c) ≡ denotation_done (while e' c')"
sorry
(* A pseudo-congruence rule for ifthen *)
lemma ifthen [simp]:
assumes "e=e'"
and "denotation_simp c == denotation_done c'"
shows "denotation_simp (ifthen e c) == denotation_done (ifthen e' c')"
sorry
(* A pseudo-congruence rule for seq *)
lemma seq [simp]:
assumes "denotation_simp c == denotation_done c'"
and "denotation_simp d == denotation_done d'"
shows "denotation_simp (c;d) == denotation_done (c';d')"
sorry
(* Finally, we can state associativity of seq. *)
lemma assoc [simp]:
"denotation_simp (a;(b;c)) = denotation_simp (a;b;c)"
sorry
(* Finally, since our congruence-rules expect the rewriting to rewrite
"denotation_simp c" into "denotation_done c'",
we need to translate any non-rewritten "denotation_simp" into
"denotation_done".
However, a rule "denotation_simp c == denotation_done c" does not work,
because it could be triggered too early, and block the pseudo-congruence rules above.
So we only trigger the rule when the current term would not match
any of the pseudo congruence rules *)
lemma finish [simp]:
assumes "NO_MATCH (while e1 c1) a"
assumes "NO_MATCH (ifthen e2 c2) a"
assumes "NO_MATCH (c3;d3) a"
shows "denotation_simp a = denotation_done a"
sorry
(* Testing the simplifier rules *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms
by simp
(* Some more complex test *)
lemma iftrue [simp]: "denotation_simp (ifthen (λm. True) c) == denotation_simp c" sorry
lemma
assumes "⋀x. Q x"
assumes "hoare (λm. P x ∧ Q x) (while P (c;(d;e);(f;g);c)) (λm. m=m)"
shows "hoare (λm. P x) (while P (c;(d;e);ifthen (λm. m=m) (f;g;c))) (λm. True)"
using assms
apply simp (* Hmm. Only partially simplified... The assoc rule is not applied to the result of the iftrue rule *)
by simp (* Rerunning simp solves the goal *)
(* By the way: "using assms by auto" solves the goal in one go *)
end
Below is my implementation of the suggestions by Andreas Lochbihler. The first part is a generic implementation (suitable for equivalences other than denotational equivalence), and below that it is instantiated for my example Hoare logic.
(* Written by Dominique Unruh *)
theory Test2
imports Main
begin
section "Implementation of modulo-simplifier"
definition fun_equiv :: "('a ⇒ 'b) ⇒ 'a ⇒ 'a ⇒ bool" where "fun_equiv f x y == f x = f y"
lemma fun_equiv_refl: "fun_equiv f x x" by(simp add: fun_equiv_def)
ML {*
(* Call as: fun_equiv_simproc_tac (simplifications @ congruences) context/simpset *)
fun fun_equiv_simproc_tac intros ctxt =
SUBGOAL (fn (goal,i) =>
case goal of
Const(@{const_name Trueprop},_) $ (Const(@{const_name fun_equiv},_)$_$_$_) =>
(resolve0_tac intros THEN_ALL_NEW fun_equiv_simproc_tac intros ctxt) i
ORELSE (rtac @{thm fun_equiv_refl} i)
| _ =>
SOLVED' (simp_tac ctxt) i)
fun fun_equiv_simproc start intros _ ctxt (t:cterm) =
let val fresh_var = Var(("x",Term.maxidx_of_term (Thm.term_of t)+1),Thm.typ_of_cterm t)
val goal = Logic.mk_equals (Thm.term_of t,fresh_var)
val thm = Goal.prove ctxt [] [] goal
(fn {context,...} => resolve0_tac start 1 THEN ALLGOALS (fun_equiv_simproc_tac intros context))
in
if (Thm.rhs_of thm) aconvc t then NONE else SOME thm
end
handle ERROR msg => (warning ("fun_equiv_simproc failed\nTerm was:\n"^(@{make_string} t)^"\nError: "^msg); NONE)
fun fun_equiv_simproc_named start cong simp morph ctxt =
fun_equiv_simproc (Named_Theorems.get ctxt start) (Named_Theorems.get ctxt simp @ Named_Theorems.get ctxt cong) morph ctxt
*}
section "Minimal Hoare logic"
(* A simple language and Hoare logic *)
typedecl program
typedecl memory
consts
seq :: "program ⇒ program ⇒ program" (infixl ";" 100) (* c;d: run c, then run d *)
ifthen :: "(memory ⇒ bool) ⇒ program ⇒ program" (* ifthen e c: run c if e(current_mem)=true *)
while :: "(memory ⇒ bool) ⇒ program ⇒ program" (* while e c: run c while e(current_mem)=true *)
denotation :: "program ⇒ memory ⇒ memory" (* denotation c m: memory after running c, when starting with memory m *)
hoare :: "(memory ⇒ bool) ⇒ program ⇒ (memory ⇒ bool) ⇒ bool"
(* hoare P c Q: if P(current_mem), then after running c, we have Q(current_mem) *)
section "Instantiating the simplifier"
named_theorems denot_simp_start
named_theorems denot_simp
named_theorems denot_cong
lemma hoare_cong_start [denot_simp_start]: "fun_equiv denotation c c' ⟹ hoare P c Q == hoare P c' Q" sorry
lemma while_cong [denot_cong]: "fun_equiv denotation c c' ⟹ fun_equiv denotation (while b c) (while b c')" sorry
lemma seq_cong [denot_cong]: "fun_equiv denotation a a' ⟹ fun_equiv denotation b b' ⟹ fun_equiv denotation (a ; b) (a' ; b')" sorry
lemma if_cong [denot_cong]: "fun_equiv denotation c c' ⟹ fun_equiv denotation (ifthen b c) (ifthen b c')" sorry
lemma seq_assoc [denot_simp]: "fun_equiv denotation (a ; (b ; c)) (a; b; c)" sorry
lemma ifthen_true [denot_simp]: "(⋀m. e m) ⟹ fun_equiv denotation (ifthen e c) c" sorry
lemma double_while [denot_simp]: "(⋀m. e m = e' m) ⟹ fun_equiv denotation c d ⟹ fun_equiv denotation (seq (while e c) (while e' d)) (while e c)" sorry
(* -- "If the set of simplification theorems is fixed, we can use the following setup"
lemmas hoare_congruences = while_cong if_cong seq_cong
lemmas hoare_simps = ifthen_true seq_assoc double_while
simproc_setup hoare_simproc ("hoare P c Q") = {* fun_equiv_simproc @{thms hoare_cong_start} @{thms hoare_simps hoare_congruences} *}
*)
(* To make use of dynamic lists of theorems, we use the following setup *)
simproc_setup hoare_simproc ("hoare P c Q") = {*
fun_equiv_simproc_named @{named_theorems denot_simp_start}
@{named_theorems denot_cong}
@{named_theorems denot_simp}
*}
section "Tests"
(* Testing the simplifier rules *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q ∧ True"
using assms
by simp
lemma
assumes "hoare P (while P (c;d;e)) Q"
assumes "⋀x. P x = R x"
shows "hoare P ((while P (c;(d;e))); (while R ((c;d);e))) Q"
using assms by simp
lemma
assumes "⋀x. Q x"
assumes "hoare (λm. P x ∧ Q x) (while P (c;(d;e);(f;g);c)) (λm. m=m)"
shows "hoare (λm. P x) (while P (c;(d;e);ifthen (λm. m=m) (f;g;c))) (λm. True)"
using assms by simp
(* Test: Disabling the simproc *)
lemma
assumes "hoare P (a;(b;c)) Q"
shows "hoare P (a;(b;c)) Q ∧ True"
using[[simproc del: hoare_simproc]] -- "Without this, proof fails"
apply simp
by (fact assms)
end
