I would like to use the simplifier to rewrite subterms modulo an equivalence relation other than plain equality. Instead of a generic definition of my problem, I will illustrate it with an example:
Say I have a simple programming language and a Hoare logic on top of it. Say we have if, while, and the sequence operation. Also, we have denotation, which gives the denotation of a program, and a predicate hoare P c Q. Below is an example signature in Isabelle/HOL:
(* A simple language and Hoare logic *)
typedecl program
typedecl memory
consts
seq :: "program ⇒ program ⇒ program" (infixl ";" 10) (* c;d: run c, then run d *)
ifthen :: "(memory ⇒ bool) ⇒ program ⇒ program" (* ifthen e c: run c if e(current_mem)=true *)
while :: "(memory ⇒ bool) ⇒ program ⇒ program" (* while e c: run c while e(current_mem)=true *)
denotation :: "program ⇒ memory ⇒ memory" (* denotation c m: memory after running c, when starting with memory m *)
hoare :: "(memory ⇒ bool) ⇒ program ⇒ (memory ⇒ bool) ⇒ bool"
(* hoare P c Q: if P(current_mem), then after running c, we have Q(current_mem) *)
Now it is not true that (a;b);c = a;(b;c) (these are different programs), but it does hold that they are denotationally equivalent, i.e., denotation ((a;b);c) = denotation (a;(b;c)).
That means I should be able to rewrite a;(b;c) to (a;b);c inside a Hoare triple. For example, I would like to be able to prove
lemma "hoare P (while e (a;b;c)) Q ==> hoare P (while e (a;(b;c))) Q"
just using the simplifier (by simp), given suitable simplification rules.
Logically, the relevant rules would be:
lemma "denotation (a;(b;c)) = denotation ((a;b);c)"
lemma "denotation a = denotation b ==> hoare P a Q = hoare P b Q"
lemma "denotation a = denotation b ==> denotation (while e a) = denotation (while e b)"
lemma "denotation a = denotation b ==> denotation (ifthen e a) = denotation (ifthen e b)"
lemma "denotation a = denotation a' ==> denotation b = denotation b' ==> denotation (a;b) = denotation (a';b')"
Unfortunately, there seems to be no straightforward way of telling the simplifier about those rules. (More generally, we would like to tell the simplifier, via a congruence rule, that the rewriting below a certain point has to be done modulo a given equivalence relation, denotational equivalence in the present example.)
I have found a partial solution to this problem (see my own answer below), but the solution seems like a hack (and I don't know how stable it is), and I wonder whether there is a good way to do it.
I do not mind having to use some ML code in the process (e.g., writing a simproc), but I would like to avoid having to reimplement the whole simplifier just for rewriting inside Hoare triples.
Isabelle's simplifier does not support rewriting with respect to arbitrary equivalence relations. Fortunately, your rewrites appear to be rather simple, so it may be worth implementing the rewriting in a simproc. Here's the idea:
Write a simproc that triggers on terms of the form hoare P c Q. Upon invocation, it sets up a goal of the form hoare P c Q == ?rhs and
applies a rule stating that %c. hoare P c Q only cares about the equivalence class of its argument, not the concrete element. Then, apply the rewriting rules as introduction rules until the stated goal is solved. This should have instantiated ?rhs to something of the form hoare P c' Q. Test whether c and c' are alpha-beta-eta-...-equivalent. If so, the simproc fails with NONE, otherwise it returns the proven equation.
Here is the bunch of lemmas I would use as a start:
definition fun_equiv :: "('a ⇒ 'b) ⇒ 'a ⇒ 'a ⇒ bool"
where "fun_equiv f x y ⟷ f x = f y"
lemma fun_equiv_refl: "fun_equiv f x x" by(simp add: fun_equiv_def)
lemma hoare_cong_start: (* start rule *)
"fun_equiv denotation c c' ⟹ hoare P c Q == hoare P' c' Q'"
sorry
lemma while_cong: "fun_equiv denotation c c' ⟹ fun_equiv denotation (while b c) (while b c')" sorry
lemma seq_cong: "⟦ fun_equiv denotation a a'; fun_equiv denotation b b' ⟧ ⟹ fun_equiv denotation (a ; b) (a' ; b')" sorry
lemma if_cong: "fun_equiv denotation c c' ⟹ fun_equiv denotation (ifthen b c) (ifthen b c')" sorry
lemma seq_assoc: "fun_equiv denotation (a ; (b ; c)) (a; b; c)" sorry
lemma ifthen_true: "fun_equiv denotation (ifthen (λm. True) c) c" sorry
lemmas hoare_intros =
-- ‹rewrites come first, congruences later, reflexivity last›
ifthen_true seq_assoc
while_cong if_cong seq_cong
fun_equiv_refl
Since this is a simproc inside the simplifier, you may assume that the command c in the term hoare P c Q the simproc is invoked on is already in normal form w.r.t. the simpset. In your example, the test %m. m = m has already been simplified to %_. True. Thus, the simproc can focus on just implementing the rewrites for the hoare rules.
A single step of the simproc invocation should do something like the following Isar snippet:
schematic_lemma "hoare (λm. P x) (while P (c;(d;e);ifthen (λm. True) (f;g;c))) (λm. True) == ?c"
by(rule hoare_cong_start)(rule hoare_intros)+
Since the simplifier iterates the simproc until it does not trigger any more, you should really end up with a normal form.
If you want to support conditional rewrite rules w.r.t. denotational equivalence, the rule application hoare_intros should be replaced with something that checks the format of the subgoal. If it is not of the form fun_equiv denotation _ _, then the simproc should invoke the simplifier recursively (or any other proof method of your choice) rather than try another rule application of hoare_intros.
For a way that seems to work OK, see the attached Isabelle/HOL theory. The idea there is to use conditional simp-rules to simulate cong-rules.
E.g., a congruence rule
lemma l1 [cong]: "f a = f b ==> g a = g b"
is also a valid conditional simp-rule
lemma l2 [simp]: "f a = f b ==> g a = g b"
and when Isabelle applies the simp-rule, b is replaced by a schematic variable ?b, and the simplifier rewrites the premise "f a = f ?b", instantiating ?b such that f a simplifies to f ?b.
However, lemma l2 will make the simplifier loop, because it can be applied to its own rhs. So instead we can write a rule
lemma l3 [simp]: "f_simp a = f_done b ==> g_simp a = g_done b"
where g_simp and g_done are defined to be equal to g, but will stop the simplifier from applying the rules in a loop.
A full working example of this idea is in the theory file below.
The problems are:
- It is a hack. I don't know in which situations it might break down or be incompatible with other things
- apply simp may rewrite partially, so you have to invoke it again to finish rewriting. (See the last lemma in the theory.)
theory Test
imports Main
begin
section "Minimal Hoare logic"
(* A simple language and Hoare logic *)
typedecl program
typedecl memory
consts
seq :: "program ⇒ program ⇒ program" (infixl ";" 10) (* c;d: run c, then run d *)
ifthen :: "(memory ⇒ bool) ⇒ program ⇒ program" (* ifthen e c: run c if e(current_mem)=true *)
while :: "(memory ⇒ bool) ⇒ program ⇒ program" (* while e c: run c while e(current_mem)=true *)
denotation :: "program ⇒ memory ⇒ memory" (* denotation c m: memory after running c, when starting with memory m *)
hoare :: "(memory ⇒ bool) ⇒ program ⇒ (memory ⇒ bool) ⇒ bool"
(* hoare P c Q: if P(current_mem), then after running c, we have Q(current_mem) *)
(* seq is associative modulo denotational equivalence.
Thus we should be able to rewrite "a;(b;c)" to "a;b;c"
within a hoare triple. E.g., the following should be solved with simp: *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms
oops (* by simp *)
section "A failed attempt"
experiment begin
(* Here a natural approach first which, however, fails. *)
(* A congruence rule for the simplifier.
To rewrite a hoare triple "hoare P c Q",
we need to rewrite "denotation c". *)
lemma enter [cong]:
assumes "P==P'" and "Q==Q'"
assumes "denotation c == denotation c'"
shows "hoare P c Q == hoare P' c' Q'"
sorry
(* To descend further into subterms, we need a congruence-rule for while. *)
lemma while [cong]:
assumes "e=e'"
and "denotation c == denotation c'"
shows "denotation (while e c) ≡ denotation (while e' c')"
sorry
(* And we give a simplification rule for the associativity of seq *)
lemma assoc [simp]:
"denotation (a;(b;c)) = denotation (a;b;c)"
sorry
(* Now we can simplify the lemma from above *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms by simp
(* Unfortunately, this does not work any more once we add more congruence rules. *)
(* To descend further into subterms, we need a congruence-rule for while. *)
lemma ifthen [cong]:
assumes "e=e'"
and "denotation c == denotation c'"
shows "denotation (ifthen e c) ≡ denotation (ifthen e' c')"
sorry
(* Warning: Overwriting congruence rule for "Test.denotation" *)
(* Simplifier doesn't work any more because the congruence rule for while was overwritten *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms
oops (* by simp *)
end
section {* A working attempt *}
(* Define copies of the denotation-constant, to control the simplifier *)
definition "denotation_simp == denotation"
definition "denotation_done == denotation"
(* A congruence rule for the simplifier.
To rewrite a hoare triple "hoare P c Q",
we need to rewrite "denotation c".
This means, the congruence rule should have an assumption
"denotation c == denotation c'"
However, to avoid infinite loops with the rewrite rules below,
we use the logically equivalent assumption
"denotation_simp c == denotation_done c'"
This means that we need to configure the rules below to rewrite
"denotation_simp c" into "denotation_done c'" where c' is
the simplification of c (modulo denotational equivalence).
*)
lemma enter [cong]:
assumes "P==P'" and "Q==Q'"
assumes "denotation_simp c == denotation_done c'"
shows "hoare P c Q == hoare P' c' Q'"
sorry
(* A similar congruence rule for simplifying expressions
of the form "denotation c". *)
lemma denot [cong]:
assumes "denotation_simp c == denotation_done c'"
shows "denotation c == denotation c'"
sorry
(* Now we add a congruence-rule for while.
Since we saw above that we cannot use several congruence-rules
with "denotation" as their head,
we simulate a congruence rule using a simp-rule.
To rewrite "denotation_simp (while e c)", we need to rewrite
"denotation_simp c".
We put denotation_done on the rhs instead of denotation_simp
to avoid infinite loops.
*)
lemma while [simp]:
assumes "e=e'"
and "denotation_simp c == denotation_done c'"
shows "denotation_simp (while e c) ≡ denotation_done (while e' c')"
sorry
(* A pseudo-congruence rule for ifthen *)
lemma ifthen [simp]:
assumes "e=e'"
and "denotation_simp c == denotation_done c'"
shows "denotation_simp (ifthen e c) == denotation_done (ifthen e' c')"
sorry
(* A pseudo-congruence rule for seq *)
lemma seq [simp]:
assumes "denotation_simp c == denotation_done c'"
and "denotation_simp d == denotation_done d'"
shows "denotation_simp (c;d) == denotation_done (c';d')"
sorry
(* Finally, we can state associativity of seq. *)
lemma assoc [simp]:
"denotation_simp (a;(b;c)) = denotation_simp (a;b;c)"
sorry
(* Finally, since our congruence-rules expect the rewriting to rewrite
"denotation_simp c" into "denotation_done c'",
we need to translate any non-rewritten "denotation_simp" into
"denotation_done".
However, a rule "denotation_simp c == denotation_done c" does not work,
because it could be triggered too early, and block the pseudo-congruence rules above.
So we only trigger the rule when the current term would not match
any of the pseudo congruence rules *)
lemma finish [simp]:
assumes "NO_MATCH (while e1 c1) a"
assumes "NO_MATCH (ifthen e2 c2) a"
assumes "NO_MATCH (c3;d3) a"
shows "denotation_simp a = denotation_done a"
sorry
(* Testing the simplifier rules *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q"
using assms
by simp
(* Some more complex test *)
lemma iftrue [simp]: "denotation_simp (ifthen (λm. True) c) == denotation_simp c" sorry
lemma
assumes "⋀x. Q x"
assumes "hoare (λm. P x ∧ Q x) (while P (c;(d;e);(f;g);c)) (λm. m=m)"
shows "hoare (λm. P x) (while P (c;(d;e);ifthen (λm. m=m) (f;g;c))) (λm. True)"
using assms
apply simp (* Hmm. Only partially simplified... The assoc rule is not applied to the result of the iftrue rule *)
by simp (* Rerunning simp solves the goal *)
(* By the way: "using assms by auto" solves the goal in one go *)
end
Below is my implementation of the suggestions by Andreas Lochbihler. The first part is a generic implementation (suitable for equivalences other than denotational equivalence as well); below it, the simproc is instantiated for my example Hoare logic.
(* Written by Dominique Unruh *)
theory Test2
imports Main
begin
section "Implementation of modulo-simplifier"
definition fun_equiv :: "('a ⇒ 'b) ⇒ 'a ⇒ 'a ⇒ bool" where "fun_equiv f x y == f x = f y"
lemma fun_equiv_refl: "fun_equiv f x x" by(simp add: fun_equiv_def)
ML {*
(* Call as: fun_equiv_simproc_tac (simplifications @ congruences) context *)
fun fun_equiv_simproc_tac intros ctxt =
  SUBGOAL (fn (goal,i) =>
    case goal of
      Const(@{const_name Trueprop},_) $ (Const(@{const_name fun_equiv},_)$_$_$_) =>
        (resolve0_tac intros THEN_ALL_NEW fun_equiv_simproc_tac intros ctxt) i
        ORELSE (resolve0_tac [@{thm fun_equiv_refl}] i)
    | _ =>
        SOLVED' (simp_tac ctxt) i)

fun fun_equiv_simproc start intros _ ctxt (t:cterm) =
  let val fresh_var = Var(("x",Term.maxidx_of_term (Thm.term_of t)+1),Thm.typ_of_cterm t)
      val goal = Logic.mk_equals (Thm.term_of t,fresh_var)
      val thm = Goal.prove ctxt [] [] goal
        (fn {context,...} => resolve0_tac start 1 THEN ALLGOALS (fun_equiv_simproc_tac intros context))
  in
    if (Thm.rhs_of thm) aconvc t then NONE else SOME thm
  end
  handle ERROR msg => (warning ("fun_equiv_simproc failed\nTerm was:\n" ^ (@{make_string} t) ^ "\nError: " ^ msg); NONE)

fun fun_equiv_simproc_named start cong simp morph ctxt =
  fun_equiv_simproc (Named_Theorems.get ctxt start) (Named_Theorems.get ctxt simp @ Named_Theorems.get ctxt cong) morph ctxt
*}
section "Minimal Hoare logic"
(* A simple language and Hoare logic *)
typedecl program
typedecl memory
consts
seq :: "program ⇒ program ⇒ program" (infixl ";" 100) (* c;d: run c, then run d *)
ifthen :: "(memory ⇒ bool) ⇒ program ⇒ program" (* ifthen e c: run c if e(current_mem)=true *)
while :: "(memory ⇒ bool) ⇒ program ⇒ program" (* while e c: run c while e(current_mem)=true *)
denotation :: "program ⇒ memory ⇒ memory" (* denotation c m: memory after running c, when starting with memory m *)
hoare :: "(memory ⇒ bool) ⇒ program ⇒ (memory ⇒ bool) ⇒ bool"
(* hoare P c Q: if P(current_mem), then after running c, we have Q(current_mem) *)
section "Instantiating the simplifier"
named_theorems denot_simp_start
named_theorems denot_simp
named_theorems denot_cong
lemma hoare_cong_start [denot_simp_start]: "fun_equiv denotation c c' ⟹ hoare P c Q == hoare P c' Q" sorry
lemma while_cong [denot_cong]: "fun_equiv denotation c c' ⟹ fun_equiv denotation (while b c) (while b c')" sorry
lemma seq_cong [denot_cong]: "fun_equiv denotation a a' ⟹ fun_equiv denotation b b' ⟹ fun_equiv denotation (a ; b) (a' ; b')" sorry
lemma if_cong [denot_cong]: "fun_equiv denotation c c' ⟹ fun_equiv denotation (ifthen b c) (ifthen b c')" sorry
lemma seq_assoc [denot_simp]: "fun_equiv denotation (a ; (b ; c)) (a; b; c)" sorry
lemma ifthen_true [denot_simp]: "(⋀m. e m) ⟹ fun_equiv denotation (ifthen e c) c" sorry
lemma double_while [denot_simp]: "(⋀m. e m = e' m) ⟹ fun_equiv denotation c d ⟹ fun_equiv denotation (seq (while e c) (while e' d)) (while e c)" sorry
(* -- "If the set of simplification theorems is fixed, we can use the following setup"
lemmas hoare_congruences = while_cong if_cong seq_cong
lemmas hoare_simps = ifthen_true seq_assoc double_while
simproc_setup hoare_simproc ("hoare P c Q") = {* fun_equiv_simproc @{thms hoare_cong_start} @{thms hoare_simps hoare_congruences} *}
*)
(* To make use of dynamic lists of theorems, we use the following setup *)
simproc_setup hoare_simproc ("hoare P c Q") = {*
fun_equiv_simproc_named @{named_theorems denot_simp_start}
  @{named_theorems denot_cong}
  @{named_theorems denot_simp}
*}
section "Tests"
(* Testing the simplifier rules *)
lemma
assumes "hoare P (while e (a;b;c)) Q"
shows "hoare P (while e (a;(b;c))) Q ∧ True"
using assms
by simp
lemma
assumes "hoare P (while P (c;d;e)) Q"
assumes "⋀x. P x = R x"
shows "hoare P ((while P (c;(d;e))); (while R ((c;d);e))) Q"
using assms by simp
lemma
assumes "⋀x. Q x"
assumes "hoare (λm. P x ∧ Q x) (while P (c;(d;e);(f;g);c)) (λm. m=m)"
shows "hoare (λm. P x) (while P (c;(d;e);ifthen (λm. m=m) (f;g;c))) (λm. True)"
using assms by simp
(* Test: Disabling the simproc *)
lemma
assumes "hoare P (a;(b;c)) Q"
shows "hoare P (a;(b;c)) Q ∧ True"
using [[simproc del: hoare_simproc]] -- "Without this, proof fails"
apply simp
by (fact assms)
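(* A possible extension (not part of the original answer): since hoare_simproc reads
   denot_simp and denot_cong dynamically via named theorems, further rules can be
   declared at any later point and are picked up automatically. For example, assuming
   the (denotationally obvious) fact below, sorried like the rules above: *)
lemma while_false [denot_simp]: "(⋀m. ¬ e m) ⟹ fun_equiv denotation (while e c) (ifthen e c)" sorry

lemma
  assumes "hoare P (ifthen (λm. False) c) Q"
  shows "hoare P (while (λm. False) c) Q"
  using assms by simp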
end
I'm studying topological filters in Filter.thy
theory Filter
imports Set_Interval Lifting_Set
begin
subsection ‹Filters›
text ‹
This definition also allows non-proper filters.
›
locale is_filter =
fixes F :: "('a ⇒ bool) ⇒ bool"
assumes True: "F (λx. True)"
assumes conj: "F (λx. P x) ⟹ F (λx. Q x) ⟹ F (λx. P x ∧ Q x)"
assumes mono: "∀x. P x ⟶ Q x ⟹ F (λx. P x) ⟹ F (λx. Q x)"
typedef 'a filter = "{F :: ('a ⇒ bool) ⇒ bool. is_filter F}"
proof
show "(λx. True) ∈ ?filter" by (auto intro: is_filter.intro)
qed
I don't get this definition. It's quite convoluted, so I'll simplify it first.
The expression
F (λx. P x) could be simplified to F P (using eta-reduction of the lambda calculus). A predicate of type 'a ⇒ bool is really just a set of type 'a set, and similarly ('a ⇒ bool) ⇒ bool corresponds to 'a set set. Then we could rewrite the axioms as
assumes conj: "P ∈ F ∧ Q ∈ F ⟹ Q ∩ P ∈ F"
assumes mono: "P ⊆ Q ∧ P ∈ F ⟹ Q ∈ F"
Now my question is about the True axiom. It is equivalent to
assumes True: "UNIV ∈ F"
This does not match any definition of filters that I have ever seen.
The axiom should instead be
assumes True: "{} ∉ F" (* the name True is not very fitting anymore *)
The statement UNIV ∈ F is unnecessary because it follows from axiom mono.
So what's up with this definition that Isabelle provides?
The link provided by Javier Diaz has lots of explanations.
It turns out this is a definition that also allows the improper filter. The axiom True is necessary and does not follow from mono: if this axiom were missing, then F could be defined as
F P = False
or, in set-theory notation, F could be the empty set, and mono and conj would then be satisfied vacuously.
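To make this concrete, here is a minimal sketch (not part of the original question or answer) checking in Isabelle that the everywhere-false choice of F satisfies conj and mono vacuously while violating True:
lemma
  fixes F :: "('a ⇒ bool) ⇒ bool"
  assumes F_def: "F = (λP. False)"
  shows "F (λx. P x) ⟹ F (λx. Q x) ⟹ F (λx. P x ∧ Q x)"
    and "∀x. P x ⟶ Q x ⟹ F (λx. P x) ⟹ F (λx. Q x)"
    and "¬ F (λx. True)"
  by (simp_all add: F_def)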
I have the following simple Isabelle/HOL theory:
theory Max_Of_Two_Integers_Real
imports Main
"HOL-Library.Multiset"
"HOL-Library.Code_Target_Numeral"
"HOL-Library.Code_Target_Nat"
"HOL-Library.Code_Abstract_Nat"
begin
definition two_integer_max_case_def :: "nat ⇒ nat ⇒ nat" where
"two_integer_max_case_def a b = (case a > b of True ⇒ a | False ⇒ b)"
lemma spec_final:
fixes a :: nat and b :: nat
assumes "a > b" (* and "b < a" *)
shows "two_integer_max_case_def a b = a"
using assms by (simp add: two_integer_max_case_def_def)
lemma spec_1:
fixes a :: nat and b :: nat
shows "a > b ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
lemma spec_2:
shows " (a ∈ nat set) ∧ (b ∈ nat set) ∧ (a > b) ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
end
The three lemmas try to express and prove the same statement, but progressively I am trying to move information from assumes and fixes into the term itself. The first two lemmas are accepted, but the third (last) lemma fails with the error message:
Type unification failed: Clash of types "_ ⇒ _" and "int"
Type error in application: incompatible operand type
Operator: nat :: int ⇒ nat
Operand: set :: ??'a list ⇒ ??'a set
My aim in this lemma is to move the type information from the fixes clause into the term/statement itself. How do I make statements about the type of a variable inside the term (i.e., in the inner syntax)?
Maybe, if I am trying to avoid the fixes clause (in which the variables may be declared), I should use a full universal quantification like:
lemma spec_final_3:
shows "∀ a :: nat . ∀ b :: nat . ( (a > b) ⟹ two_integer_max_case_def a b = a)"
by (simp add: two_integer_max_case_def_def)
But it also fails, with the error message:
Inner syntax error: unexpected end of input⌂
Failed to parse prop
So: is it possible to include type annotations directly in the term (without a fixes clause), and is there any difference between a fixes clause and a type annotation in the term? Maybe such differences only show up during (semi-)automatic proofs, e.g., when simplification or other tactics are applied?
nat set is parsed as an application of the function nat :: int ⇒ nat to the constant set :: 'a list ⇒ 'a set, which does not type-check (as the error message shows). The set of natural numbers can be expressed as UNIV :: nat set. Then, spec_2 reads
lemma spec_2:
shows "a ∈ (UNIV :: nat set) ∧ b ∈ (UNIV :: nat set) ∧ a > b ⟹
two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
However, a more natural way would be to include the type information directly in spec_1, without a fixes clause:
lemma spec_1':
shows "(a :: nat) > (b :: nat) ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
∀ belongs to HOL, so the HOL implication ⟶ (rather than the meta-implication ⟹) should be used in spec_final_3:
lemma spec_final_3:
shows "∀ a :: nat. ∀ b :: nat. a > b ⟶ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
spec_1 can also be rewritten using an explicit meta-logic quantification (and implication) to look similar to spec_final_3:
lemma spec_1'':
shows "⋀ a :: nat. ⋀ b :: nat. a > b ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
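As a small additional check (not part of the original answer) on whether the fixes clause makes a difference: up to variable names, spec_final, spec_1 and spec_1' all export the same theorem, which can be inspected with thm:
thm spec_final spec_1 spec_1'
(* each prints as "?b < ?a ⟹ two_integer_max_case_def ?a ?b = ?a"; note that > is an input abbreviation for the flipped < *)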
I had defined a few translations like this:
consts
"time" :: "i"
"sig" :: "i ⇒ i"
"BaseChTy" :: "i"
syntax
"time" :: "i"
"sig" :: "i ⇒ i"
translations
"time" ⇌ "CONST int"
"sig(A)" ⇌ "CONST int → A"
Then, I want to prove a theorem like this:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
It should be a very simple theorem, and should be proved with theorem Pi_mono in a single step:
thm Pi_mono
?B ⊆ ?C ⟹ ?A → ?B ⊆ ?A → ?C
So I did it like this:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
apply(drule Pi_mono[of _ _ "time"])
(*Output:
goal (1 subgoal):
1. sig(A) ⊆ sig(B) ⟹ sig(A) ⊆ sig(B)
*)
apply(simp)
(*Output:
Failed ...
*)
Since the premise has become the same as the goal, it should be proved immediately, but it is not. Have I done anything wrong in the translation definition?
I tried to change the theorem to:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ (time → A) ⊆ (time → B)"
(*Output:
goal (1 subgoal):
1. A ⊆ B ⟹ sig(A) ⊆ sig(B)
*)
apply(drule Pi_mono[of _ _ "time"])
(*Output:
goal (1 subgoal):
1. sig(A) ⊆ sig(B) ⟹ sig(A) ⊆ sig(B)
*)
apply(simp)
(*Output:
Success ...
*)
Then it works immediately. But shouldn't the translation make them the same thing?
Update:
Thanks to Mathias Fleury's reply, I tried a simplifier trace, and it shows something like this:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
using [[show_sorts]] apply(drule Pi_mono[of _ _ "time"])
using [[simp_trace]] apply(simp)
oops
(*
Output:
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
sig(A::i) ⊆ sig(B::i) ⟹ sig(A) ⊆ sig(B)
[1]Adding rewrite rule "??.unknown":
sig(A::i) ⊆ sig(B::i) ≡ True
*)
while the time -> A version shows:
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ time → A ⊆ time → B"
using [[show_sorts]] apply(drule Pi_mono[of _ _ "time"])
using [[simp_trace]] apply(simp)
oops
(*
Output:
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
sig(A::i) ⊆ sig(B::i) ⟹ sig(A) ⊆ sig(B)
[1]Adding rewrite rule "??.unknown":
sig(A::i) ⊆ sig(B::i) ≡ True
[1]Applying instance of rewrite rule "??.unknown":
sig(A::i) ⊆ sig(B::i) ≡ True
[1]Rewriting:
sig(A::i) ⊆ sig(B::i) ≡ True
*)
Why can the time → A version apply the instance of the rewrite rule and finish the proof, while the original one cannot?
Thanks to the imports you mentioned in your comment, I could reproduce the problem. The issue is the translation; you need to do something like
syntax
"sig" :: "i ⇒ i" (‹sig(_)›)
translations
"sig(A)" == "CONST int → A"
theorem sig_mono: "⟦ A ⊆ B ⟧ ⟹ sig(A) ⊆ sig(B)"
apply(rule Pi_mono)
apply assumption
done
Just to expand on my comment and explain how I found that the problem is the translation: I looked at the unification failure:
theorem "⟦ A ⊆ B ⟧ ⟹ time → A ⊆ time → B"
  supply [[unify_trace_failure]]
  apply (rule Pi_mono)
The error message says that sig and Pi are not unifiable. This is already strange. To be certain that the problem comes from the translation, I looked at the underlying term:
ML ‹@{print} @{term ‹sig(A)›}›
It shows the underlying term, and we can see that the translation is not working. I then looked at other translations in the library to fix the issue.
In Eisbach I can use ; to apply a method to all new subgoals created by a method.
However, I often know how many subgoals are created and would like to apply different methods to the new subgoals.
Is there a way to say something like "apply method X to the first new subgoal and method Y to the second new subgoal"?
Here is a simple use case:
I want to develop a method that works on 2 conjunctions of arbitrary length but with the same structure.
The method should be usable to show that conjunction 1 implies conjunction 2 by showing that the implication holds for each component.
It should be usable like this:
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
proof (conj_one_by_one pre: c)
show "a 0 ⟹ a' 0" by (rule imp)
show "a 1 ⟹ a' 1" by (rule imp)
show "a 2 ⟹ a' 2" by (rule imp)
show "a 3 ⟹ a' 3" by (rule imp)
qed
When implementing this method in Eisbach, I have a problem after using rule conjI.
I get two subgoals that I want to recursively work on, but I want to use different facts for the two cases.
I came up with the following workaround, which uses artificial markers for the two subgoals and is kind of ugly:
definition "marker_L x ≡ x"
definition "marker_R x ≡ x"
lemma conjI_marked:
assumes "marker_L P" and "marker_R Q"
shows "P ∧ Q"
using assms unfolding marker_L_def marker_R_def by simp
method conj_one_by_one uses pre = (
match pre in
p: "?P ∧ ?Q" ⇒ ‹
(unfold marker_L_def marker_R_def)?,
rule conjI_marked;(
(match conclusion in "marker_L _" ⇒ ‹(conj_one_by_one pre: p[THEN conjunct1])?›)
| (match conclusion in "marker_R _" ⇒ ‹(conj_one_by_one pre: p[THEN conjunct2])?›))›)
| ((unfold marker_L_def marker_R_def)?, insert pre)
This is not a complete answer, but you might be able to derive some useful information from what is stated here.
In Eisbach I can use ; to apply a method to all new subgoals created
by a method. However, I often know how many subgoals are created and
would like to apply different methods to the new subgoals. Is there a
way to say something like "apply method X to the first new subgoal and
method Y to the second new subgoal"?
You can use the standard tactical RANGE to define your own tactic that you can apply to consecutive subgoals. I provide a very specialized and significantly simplified use case below:
ML‹
fun mytac ctxt thms = thms
|> map (fn thm => resolve_tac ctxt (single thm))
|> RANGE
›
lemma
assumes A: A and B: B and C: C
shows "A ∧ B ∧ C"
apply(intro conjI)
apply(tactic‹mytac @{context} [@{thm A}, @{thm B}, @{thm C}] 1›)
done
Hopefully, it should be reasonably easy to extend it to more complicated use cases (while being more careful than I am about subgoal indexing: you might also need SELECT_GOAL to ensure that the implementation is safe). While in the example above mytac accepts a list of theorems, it should be easy to see how these theorems can be replaced by tactics (see the sketch below), and with some further work the tactic can be wrapped as a higher-order method.
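Here is a minimal sketch (not from the original answer) of the same RANGE idea with arbitrary tactics instead of single theorems; the goal and the particular mix of tactics are only illustrative:
lemma
  assumes B: B and C: C
  shows "(2 :: nat) + 2 = 4 ∧ B ∧ C"
  apply(intro conjI)
  apply(tactic‹RANGE [simp_tac @{context},
                      resolve_tac @{context} [@{thm B}],
                      resolve_tac @{context} [@{thm C}]] 1›)
  done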
I want to develop a method that works on 2 conjunctions of arbitrary
length but with the same structure. The method should be usable to
show that conjunction 1 implies conjunction 2 by showing that the
implication holds for each component. It should be usable like this:
UPDATE
Having had another look at the problem, it seems that there exists a substantially more natural solution. The solution follows the outline of the original answer, but the meta-implication is replaced with HOL's object-logic implication (the 'to and fro' conversion can be achieved using atomize (full) and intro impI):
lemma arg_imp2: "(a ⟶ b) ⟹ (c ⟶ d) ⟹ ((a ∧ c) ⟶ (b ∧ d))" by auto
lemma example:
assumes "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
apply(insert assms(1), atomize (full))
apply(intro arg_imp2; intro impI; intro imp; assumption)
done
LEGACY (this was part of the original answer, but is almost irrelevant due to the UPDATE suggested above)
If this is the only application that you have in mind, perhaps, there is a reasonably natural solution based on the following iterative procedure:
lemma arg_imp2: "(a ⟹ b) ⟹ (c ⟹ d) ⟹ ((a ∧ c) ⟹ (b ∧ d))" by auto
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
using c
apply(intro arg_imp2[of ‹a 0› ‹a' 0› ‹a 1 ∧ a 2 ∧ a 3› ‹a' 1 ∧ a' 2 ∧ a' 3›])
apply(rule imp)
apply(assumption)
apply(intro arg_imp2[of ‹a 1› ‹a' 1› ‹a 2 ∧ a 3› ‹a' 2 ∧ a' 3›])
apply(rule imp)
apply(assumption)
apply(intro arg_imp2[of ‹a 2› ‹a' 2› ‹a 3› ‹a' 3›])
apply(rule imp)
apply(assumption)
apply(rule imp)
apply(assumption+)
done
I am not certain how easy it would be to express this in Eisbach, but it should be reasonably easy to express this in Isabelle/ML.
Using the pointers from user9716869, I was able to write a method that does what I want:
ML‹
fun split_with_tac (tac1: int -> tactic) (ts: (int -> tactic) list) (i: int) (st: thm): thm Seq.seq =
let
val st's = tac1 i st
fun next st' =
let
val new_subgoals_count = 1 + Thm.nprems_of st' - Thm.nprems_of st
in
if new_subgoals_count <> length ts then Seq.empty
else
RANGE ts i st'
end
in
st's |> Seq.maps next
end
fun tok_to_method_text ctxt tok =
case Token.get_value tok of
SOME (Token.Source src) => Method.read ctxt src
| _ =>
let
val (text, src) = Method.read_closure_input ctxt (Token.input_of tok);
val _ = Token.assign (SOME (Token.Source src)) tok;
in text end
val readText: Token.T Token.context_parser = Scan.lift (Parse.token Parse.text)
val text_and_texts_closure: (Method.text * Method.text list) Token.context_parser =
(Args.context -- readText -- (Scan.lift \<^keyword>‹and› |-- Scan.repeat readText)) >> (fn ((ctxt, tok), t) =>
(tok_to_method_text ctxt tok, map (tok_to_method_text ctxt) t));
›
method_setup split_with =
‹text_and_texts_closure >> (fn (m, ms) => fn ctxt => fn facts =>
let
fun tac m st' =
method_evaluate m ctxt facts
fun tac' m i st' =
Goal.restrict i 1 st'
|> method_evaluate m ctxt facts
|> Seq.map (Goal.unrestrict i)
handle THM _ => Seq.empty
val initialT: int -> tactic = tac' m
val nextTs: (int -> tactic) list = map tac' ms
in SIMPLE_METHOD (HEADGOAL (split_with_tac initialT nextTs)) facts end)
›
lemma
assumes r: "P ⟹ Q ⟹ R"
and p: "P"
and q: "Q"
shows "R"
by (split_with ‹rule r› and ‹rule p› ‹rule q›)
method conj_one_by_one uses pre = (
match pre in
p: "?P ∧ ?Q" ⇒ ‹split_with ‹rule conjI› and
‹conj_one_by_one pre: p[THEN conjunct1]›
‹conj_one_by_one pre: p[THEN conjunct2]››
| insert pre)
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
proof (conj_one_by_one pre: c)
show "a 0 ⟹ a' 0" by (rule imp)
show "a 1 ⟹ a' 1" by (rule imp)
show "a 2 ⟹ a' 2" by (rule imp)
show "a 3 ⟹ a' 3" by (rule imp)
qed
I have a datatype and an inductive predicate over it (which is actually a small-step semantics of some transition system):
datatype dtype = E | A | B dtype
inductive dsem :: "dtype ⇒ dtype ⇒ bool" where
"dsem A E"
| "dsem (B E) E"
| "dsem d d' ⟹ dsem (B d) (B d')"
and also a function which is computed by case distinction:
fun f :: "dtype ⇒ nat" where
"f E = 0"
| "f A = 1"
| "f (B _) = 2"
I'm trying to prove some property of the inductive predicate, and the assumptions also involve computing the value of f, which does not participate in the induction.
lemma
assumes d: "dsem s s'"
and h: "h s v"
and v: "v = f s"
shows "P v"
using d h
proof (induct rule: dsem.induct)
For the 3rd semantics rule Isabelle computes the subgoal
⋀d d'. dsem d d' ⟹ (h d v ⟹ P v) ⟹ h (B d) v ⟹ P v
where the value of s is lost, so it is impossible to compute the value of v.
I cannot include v in the induction assumptions either, because then Isabelle generates the subgoal
⋀d d'. dsem d d' ⟹ (h d v ⟹ v = f d ⟹ P v) ⟹ h (B d) v ⟹ v = f (B d) ⟹ P v
where the induction hypothesis says v = f d, which is incorrect since v = f (B d) in this case. Nor can I put v into arbitrary: ..., because the value of v must be fixed throughout the proof.
It would be nice to have an explicit binding s = B d in the generated subgoal; unfortunately, the rule dsem.induct doesn't provide it.
Does anybody know a workaround for computing the value v in this case?
It seems strange to me that v should be both fixed and computed from s at the same time; that is what chris is saying in the comments.
If the solution Brian gives in the comments is what you want, note that it duplicates the expression f s, which could be big (and use s several times); perhaps the point of the assumption v = f s was to avoid this.
A first workaround (that was possibly what Brian implicitly proposed) is to make Isabelle do the unfolding:
lemma
assumes d: "dsem s s'"
and h: "h s v"
and v: "v = big_f s s"
shows "P v"
using d h
unfolding v -- {* <<<< *}
proof (induct rule: dsem.induct)
A second workaround could be to abbreviate the function (λs. big_f s s) instead of the applied term big_f s s:
lemma
assumes d: "dsem s s'"
and h: "h s (f s)"
and v: "f = (λs. big_f s s)" -- {* <<<< *}
shows "P (f s)"
using d h
proof (induct rule: dsem.induct)