Merging 2 subgoals with common unknown variable in Isabelle proof

I'm currently trying to prove a lemma in Isabelle and I'm left with 2 subgoals, both of which contain the same unknown (schematic) variable ?Q11.
Is it possible to merge the two subgoals into one by "transitivity"?
That is, by replacing ?Q11 in the second subgoal with the superset of ?Q11 given by the first subgoal.
goal (2 subgoals):
1. ⋀s. ?Q11 s ⊆ {s. s⦇msg_sender := address_this s, address_this := add2 s⦈ ∈ {s. s⦇g := 2⦈ ∈ {t. g t = 2}}}
2. {s. g s = 0} ⊆ {s. s⦇msg_sender := address_this s, address_this := add1 s⦈ ∈ {sa. sa⦇g := 1⦈ ∈ ?Q11 s}}
The goal I would like to obtain would be something like
1. {s. g s = 0} ⊆ {s. s⦇msg_sender := address_this s, address_this := add1 s⦈ ∈ {sa. sa⦇g := 1⦈ ∈ {s. s⦇msg_sender := address_this s, address_this := add2 s⦈ ∈ {s. s⦇g := 2⦈ ∈ {t. g t = 2}}}}}
which is proven directly by simp.
Thanks.

You can
apply (rule order.refl)
which solves the first goal using reflexivity of the subset relation. This instantiates ?Q11 accordingly.
Of course this does not really "merge subgoals", but it achieves the desired effect.
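To see the effect in isolation, here is a small self-contained sketch (the lemma and the concrete sets are invented for illustration and are not your original goals): solving the first subgoal by reflexivity fixes the schematic variable, and the instantiated second subgoal then falls to auto.
lemma demo:
  assumes step: "⋀Q. Q ⊆ {x::nat. 0 < x} ⟹ {x. 0 < x ∧ x < 5} ⊆ Q ⟹ R"
  shows R
  apply (rule step)
   apply (rule order.refl)  (* solves subgoal 1 and instantiates ?Q to {x. 0 < x} *)
  apply auto                (* subgoal 2 is now {x. 0 < x ∧ x < 5} ⊆ {x. 0 < x} *)
  done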

Related

Transformation of goals by "rule"

I'm trying to use rule dec_induct to do an induction proof with a base case that is not 0, but I don't understand how the rule is being applied by Isabelle. If I state the following lemma:
lemma test:
shows "P a"
proof (rule dec_induct)
Isabelle transforms it into three subgoals, which I assume are supposed to be the premises of dec_induct unified with my goal. dec_induct is
⟦?i ≤ ?j; ?P ?i; ⋀n. ⟦?i ≤ n; n < ?j; ?P n⟧ ⟹ ?P (Suc n)⟧ ⟹ ?P ?j
, so I would think that the ?j in its conclusion would unify with the "a" of my goal. That is, I would expect the following three subgoals:
?i ≤ a
?P ?i
⋀n. ⟦?i ≤ n; n < a; ?P n⟧ ⟹ ?P (Suc n)
But the subgoals Isabelle actually transforms it to are
?i ≤ ?j
P a
⋀n. ⟦?i ≤ n; n < ?j; P a⟧ ⟹ P a
How is Isabelle getting that, and how can I get it to perform the induction as I expect? I realize I should be using the induct method, but I'm just trying to understand how rule works.
Higher order unification can produce very unintuitive results, especially when you have patterns like ?f ?x, i.e. a schematic variable of function type, applied to another schematic variable. I don't know much about higher order unification, but it seems that if you unify ?f ?x with something like f x, you tend to get the unifier [?f ↦ λy. f x] instead of [?f ↦ f, ?x ↦ x], which is probably what you wanted.
You can experiment with it like this to see precisely what the possible inferred unifiers are:
context
fixes P :: "int ⇒ bool" and j :: int
begin
ML ‹
local
val ctxt = Context.Proof @{context}
val env = Envir.init
val ctxt' = @{context} |> Proof_Context.set_mode Proof_Context.mode_schematic
val s1 = "?P ?j"
val s2 = "P j"
val (t1, t2) = apply2 (Syntax.read_term ctxt') (s1, s2)
val prt = Syntax.pretty_term @{context}
fun pretty_schem s = prt (Var ((s, 0), \<^typ>‹unit›))
fun pretty_unifier (Envir.Envir {tenv, ...}, _) =
tenv
|> Vartab.dest
|> map (fn ((s,_),(_,t)) => Pretty.block
(Pretty.breaks [pretty_schem s, Pretty.str "↦", prt t]))
|> (fn x => Pretty.block (Pretty.str "[" :: Pretty.commas x @ [Pretty.str "]"]))
in
val _ =
Pretty.breaks [Pretty.str "Unifiers for", prt t1, Pretty.str "and", prt t2, Pretty.str ":"]
|> Pretty.block
|> Pretty.writeln
val _ =
Unify.unifiers (ctxt, env, [(t1, t2)])
|> Seq.list_of
|> map pretty_unifier
|> map (fn x => Pretty.block [Pretty.str "∙ ", x])
|> map (Pretty.indent 2)
|> Pretty.fbreaks
|> Pretty.block
|> Pretty.writeln
end
›
Output:
Unifiers for ?P ?j and P j :
∙ [?P ↦ λa. P j]
(Disclaimer: this is only experimental code to illustrate what is going on; it is not clean Isabelle/ML coding style.)
To summarise: don't rely on higher-order unification to figure out instantiations of function variables, especially when you have patterns like ?f ?x.
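If you do not want to rely on unification, you can give the instantiation explicitly, e.g. via where (or of). A minimal sketch of the dec_induct case (the assumptions base and step are made up here, just to obtain a complete, provable lemma):
lemma test:
  fixes a :: nat
  assumes base: "P 0"
    and step: "⋀n. n < a ⟹ P n ⟹ P (Suc n)"
  shows "P a"
proof (rule dec_induct[where i = 0 and j = a and P = P])
  show "0 ≤ a" by simp
next
  show "P 0" by (rule base)
next
  fix n assume "0 ≤ n" and "n < a" and "P n"
  from ‹n < a› ‹P n› show "P (Suc n)" by (rule step)
qed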

Defining function with several bindings in Isabelle

Consider the following simplified lambda-calculus with the peculiarity that bound variables can occur on the bound type:
theory Example
imports "Nominal2.Nominal2"
begin
atom_decl vrs
nominal_datatype ty =
Top
nominal_datatype trm =
Var "vrs"
| Abs x::"vrs" t::"trm" T::"ty" binds x in t T
nominal_function
fv :: "trm ⇒ vrs set"
where
"fv (Var x) = {x}"
| "fv (Abs x t T) = (fv t) - {x}"
using [[simproc del: alpha_lst]]
subgoal by(simp add: fv_graph_aux_def eqvt_def eqvt_at_def)
subgoal by simp
subgoal for P x
apply(rule trm.strong_exhaust[of x P])
by( simp_all add: fresh_star_def fresh_Pair)
apply simp_all
subgoal for x T t xa Ta ta
sorry
end
I have been unable to show the last goal:
eqvt_at fv_sumC T ⟹ eqvt_at fv_sumC Ta ⟹
[[atom x]]lst. (T, t) = [[atom xa]]lst. (Ta, ta) ⟹
fv_sumC T - {x} = fv_sumC Ta - {xa}
despite a day's worth of effort.
Solution
subgoal for x T t xa Ta ta
proof -
assume 1: "[[atom x]]lst. (t, T) = [[atom xa]]lst. (ta, Ta)"
" eqvt_at fv_sumC t" " eqvt_at fv_sumC ta"
then have 2: "[[atom x]]lst. t = [[atom xa]]lst. ta"
by(auto simp add: Abs1_eq_iff'(3) fresh_Pair)
show "removeAll x (fv_sumC t) = removeAll xa (fv_sumC ta)"
apply(rule Abs_lst1_fcb2'[OF 2, of _ "[]"])
apply (simp add: fresh_removeAll)
apply (simp add: fresh_star_list(3))
using 1 Abs_lst1_fcb2' unfolding eqvt_at_def
by auto
qed
I am glad that you were able to work out the solution. Nonetheless, I would still like to elaborate on the comment that I previously made. In particular, I would like to emphasize that nominal_datatype already provides a very similar function automatically: it is the function fv_trm. This function is, effectively, equivalent to the function fv in your question. Here is a rough sketch (the proof will need to be refined) of a theory that demonstrates this:
theory Scratch
imports "Nominal2.Nominal2"
begin
atom_decl vrs
nominal_datatype ty =
Top
nominal_datatype trm =
Var vrs
| Abs x::vrs t::trm T::ty binds x in t T
lemma supp_ty: "supp (ty::ty) = {}"
by (metis (full_types) ty.strong_exhaust ty.supp)
lemmas fv_trm = trm.fv_defs[unfolded supp_ty supp_at_base, simplified]
lemma dom_fv_trm:
"a ∈ fv_trm x ⟹ a ∈ {a. sort_of a = Sort ''Scratch.vrs'' []}"
apply(induction rule: trm.induct)
unfolding fv_trm
by auto
lemma inj_on_Abs_vrs: "inj_on Abs_vrs (fv_trm x)"
using dom_fv_trm by (simp add: Abs_vrs_inject inj_on_def)
definition fv where "fv x = Abs_vrs ` fv_trm x"
lemma fv_Var: "fv (Var x) = {x}"
unfolding fv_def fv_trm using Rep_vrs_inverse atom_vrs_def by auto
(*I leave it to you to work out the details,
but Sledgehammer already finds something sensible*)
lemma fv_Abs: "fv (Abs x t T) = fv t - {x}"
using inj_on_Abs_vrs
unfolding fv_def fv_trm
sorry
end

Focussing on new subgoals in Eisbach

In Eisbach I can use ; to apply a method to all new subgoals created by a method.
However, I often know how many subgoals are created and would like to apply different methods to the new subgoals.
Is there a way to say something like "apply method X to the first new subgoal and method Y to the second new subgoal"?
Here is a simple use case:
I want to develop a method that works on 2 conjunctions of arbitrary length but with the same structure.
The method should be usable to show that conjunction 1 implies conjunction 2 by showing that the implication holds for each component.
It should be usable like this:
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
proof (conj_one_by_one pre: c)
show "a 0 ⟹ a' 0" by (rule imp)
show "a 1 ⟹ a' 1" by (rule imp)
show "a 2 ⟹ a' 2" by (rule imp)
show "a 3 ⟹ a' 3" by (rule imp)
qed
When implementing this method in Eisbach, I have a problem after using rule conjI.
I get two subgoals that I want to recursively work on, but I want to use different facts for the two cases.
I came up with the following workaround, which uses artificial markers for the two subgoals and is kind of ugly:
definition "marker_L x ≡ x"
definition "marker_R x ≡ x"
lemma conjI_marked:
assumes "marker_L P" and "marker_R Q"
shows "P ∧ Q"
using assms unfolding marker_L_def marker_R_def by simp
method conj_one_by_one uses pre = (
match pre in
p: "?P ∧ ?Q" ⇒ ‹
(unfold marker_L_def marker_R_def)?,
rule conjI_marked;(
(match conclusion in "marker_L _" ⇒ ‹(conj_one_by_one pre: p[THEN conjunct1])?›)
| (match conclusion in "marker_R _" ⇒ ‹(conj_one_by_one pre: p[THEN conjunct2])?›))›)
| ((unfold marker_L_def marker_R_def)?, insert pre)
This is not a complete answer, but you might be able to derive some useful information from what is stated here.
In Eisbach I can use ; to apply a method to all new subgoals created
by a method. However, I often know how many subgoals are created and
would like to apply different methods to the new subgoals. Is there a
way to say something like "apply method X to the first new subgoal and
method Y to the second new subgoal"?
You can use the standard tactical RANGE to define your own tactic that you can apply to consecutive subgoals. I provide a very specialized and significantly simplified use case below:
ML‹
fun mytac ctxt thms = thms
|> map (fn thm => resolve_tac ctxt (single thm))
|> RANGE
›
lemma
assumes A: A and B: B and C: C
shows "A ∧ B ∧ C"
apply(intro conjI)
apply(tactic‹mytac @{context} [@{thm A}, @{thm B}, @{thm C}] 1›)
done
Hopefully, it should be reasonably easy to extend it to more complicated use cases (while being more careful than I am about subgoal indexing: you might also need SELECT_GOAL to ensure that the implementation is safe). While in the example above mytac accepts a list of theorems, it should be easy to see how these theorems can be replaced by tactics and with some further work, the tactic can be wrapped as a higher-order method.
I want to develop a method that works on 2 conjunctions of arbitrary
length but with the same structure. The method should be usable to
show that conjunction 1 implies conjunction 2 by showing that the
implication holds for each component. It should be usable like this:
UPDATE
Having had another look at the problem, it seems that there exists a substantially more natural solution. The solution follows the outline from the original answer, but the meta-implication is replaced with HOL's object-logic implication (the 'to and fro' conversion can be achieved using atomize (full) and intro impI):
lemma arg_imp2: "(a ⟶ b) ⟹ (c ⟶ d) ⟹ ((a ∧ c) ⟶ (b ∧ d))" by auto
lemma example:
assumes "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
apply(insert assms(1), atomize (full))
apply(intro arg_imp2; intro impI; intro imp; assumption)
done
LEGACY (this was part of the original answer, but is almost irrelevant due to the UPDATE suggested above)
If this is the only application that you have in mind, perhaps, there is a reasonably natural solution based on the following iterative procedure:
lemma arg_imp2: "(a ⟹ b) ⟹ (c ⟹ d) ⟹ ((a ∧ c) ⟹ (b ∧ d))" by auto
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
using c
apply(intro arg_imp2[of ‹a 0› ‹a' 0› ‹a 1 ∧ a 2 ∧ a 3› ‹a' 1 ∧ a' 2 ∧ a' 3›])
apply(rule imp)
apply(assumption)
apply(intro arg_imp2[of ‹a 1› ‹a' 1› ‹a 2 ∧ a 3› ‹a' 2 ∧ a' 3›])
apply(rule imp)
apply(assumption)
apply(intro arg_imp2[of ‹a 2› ‹a' 2› ‹a 3› ‹a' 3›])
apply(rule imp)
apply(assumption)
apply(rule imp)
apply(assumption+)
done
I am not certain how easy it would be to express this in Eisbach, but it should be reasonably easy to express this in Isabelle/ML.
Using the pointers from user9716869, I was able to write a method that does what I want:
ML‹
fun split_with_tac (tac1: int -> tactic) (ts: (int -> tactic) list) (i: int) (st: thm): thm Seq.seq =
let
val st's = tac1 i st
fun next st' =
let
val new_subgoals_count = 1 + Thm.nprems_of st' - Thm.nprems_of st
in
if new_subgoals_count <> length ts then Seq.empty
else
RANGE ts i st'
end
in
st's |> Seq.maps next
end
fun tok_to_method_text ctxt tok =
case Token.get_value tok of
SOME (Token.Source src) => Method.read ctxt src
| _ =>
let
val (text, src) = Method.read_closure_input ctxt (Token.input_of tok);
val _ = Token.assign (SOME (Token.Source src)) tok;
in text end
val readText: Token.T Token.context_parser = Scan.lift (Parse.token Parse.text)
val text_and_texts_closure: (Method.text * Method.text list) Token.context_parser =
(Args.context -- readText -- (Scan.lift \<^keyword>‹and› |-- Scan.repeat readText)) >> (fn ((ctxt, tok), t) =>
(tok_to_method_text ctxt tok, map (tok_to_method_text ctxt) t));
›
method_setup split_with =
‹text_and_texts_closure >> (fn (m, ms) => fn ctxt => fn facts =>
let
fun tac m st' =
method_evaluate m ctxt facts
fun tac' m i st' =
Goal.restrict i 1 st'
|> method_evaluate m ctxt facts
|> Seq.map (Goal.unrestrict i)
handle THM _ => Seq.empty
val initialT: int -> tactic = tac' m
val nextTs: (int -> tactic) list = map tac' ms
in SIMPLE_METHOD (HEADGOAL (split_with_tac initialT nextTs)) facts end)
›
lemma
assumes r: "P ⟹ Q ⟹ R"
and p: "P"
and q: "Q"
shows "R"
by (split_with ‹rule r› and ‹rule p› ‹rule q›)
method conj_one_by_one uses pre = (
match pre in
p: "?P ∧ ?Q" ⇒ ‹split_with ‹rule conjI› and
‹conj_one_by_one pre: p[THEN conjunct1]›
‹conj_one_by_one pre: p[THEN conjunct2]››
| insert pre)
lemma example:
assumes c: "a 0 ∧ a 1 ∧ a 2 ∧ a 3"
and imp: "⋀i. a i ⟹ a' i"
shows "a' 0 ∧ a' 1 ∧ a' 2 ∧ a' 3"
proof (conj_one_by_one pre: c)
show "a 0 ⟹ a' 0" by (rule imp)
show "a 1 ⟹ a' 1" by (rule imp)
show "a 2 ⟹ a' 2" by (rule imp)
show "a 3 ⟹ a' 3" by (rule imp)
qed

Combining tactics a certain number of times in Isabelle

I find myself solving a goal that, with safe, splits into 32 subgoals. It is quite an algebraic goal, so overall I need to use argo, algebra and auto. I was wondering whether there is a way to specify that auto should be applied, say, 2 times, then algebra 10 times, etc. Where should I look for this syntax in the future? Is it part of Eisbach?
There is the REPEAT_DETERM_N tactical in $ISABELLE_HOME/src/Pure/tactical.ML. I have never used it, so I'm not 100% sure it's what you need.
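For instance, a minimal sketch of using it directly from a proof script (mirroring the conjE example below):
lemma "A ∧ B ∧ C ∧ D ⟹ True"
  apply (tactic ‹REPEAT_DETERM_N 3 (eresolve_tac @{context} @{thms conjE} 1)›)
  apply (rule TrueI)
  done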
Alternatively, your functionality can be implemented somewhat like this:
theory NTimes
imports
Main
"~~/src/HOL/Eisbach/Eisbach"
begin
ML ‹
infixr 2 TIMES
fun 0 TIMES _ = all_tac
| n TIMES tac = tac THEN (n - 1) TIMES tac
›
notepad
begin
fix A B C D
have test1: "A ∧ B ∧ C ∧ D ⟹ True"
apply (tactic ‹3 TIMES eresolve_tac @{context} [@{thm conjE}] 1›)
apply (rule TrueI)
done
fix E
have test2: "A ∧ B ∧ C ∧ D ∧ E ⟹ True"
apply (tactic ‹2 TIMES 2 TIMES eresolve_tac @{context} [@{thm conjE}] 1›)
apply (rule TrueI)
done
end
(* For good examples for working
with higher order methods in ML see $ISABELLE_HOME/src/HOL/Eisbach/Eisbach.thy *)
method_setup ntimes = ‹
Scan.lift Parse.nat -- Method.text_closure >>
(fn (n, closure) => fn ctxt => fn facts =>
let
val tac = method_evaluate closure ctxt facts
in
SIMPLE_METHOD (n TIMES tac) facts
end)
›
notepad
begin
fix A B C D
have test1: "A ∧ B ∧ C ∧ D ⟹ True"
apply (ntimes 3 ‹erule conjE›)
apply (rule TrueI)
done
fix E
have test2: "A ∧ B ∧ C ∧ D ∧ E ⟹ True"
apply (ntimes 2 ‹ntimes 2 ‹erule conjE››)
apply (rule TrueI)
done
have test3: "A ∧ B ∧ C ∧ D ∧ E ⟹ True"
apply (ntimes 3 ‹erule conjE›)
apply (rule TrueI)
done
have test4: "A = A" "B = B" "C = C"
apply -
apply (ntimes 2 ‹fastforce›)
apply (rule refl)
done
(* in some examples one can instead use subgoal ranges *)
have test5: "A = A" "B = B" "C = C"
apply -
apply (fastforce+)[2]
apply (rule refl)
done
end
end
I'm not an expert in Isabelle/ML programming, so this code is likely of low quality, but I hope it's a good starting point for you!

Is there an Isabelle equivalent to Haskell newtype?

I want to make a new datatype shaped like an old one, but (unlike using type_synonym) it should be recognized as distinct in other theories.
My motivating example: I'm making a stack datatype out of lists. I don't want my other theories to see my stacks as lists so I can enforce my own simplification rules on it, but the only solution I've found is the following:
datatype 'a stk = S "'a list"
...
primrec index_of' :: "'a list => 'a => nat option"
where "index_of' [] b = None"
| "index_of' (a # as) b = (
if b = a then Some 0
else case index_of' as b of Some n => Some (Suc n) | None => None)"
primrec index_of :: "'a stk => 'a => nat option"
where "index_of (S as) x = index_of' as x"
...
lemma [simp]: "index_of' del v = Some m ==> m <= n ==>
index_of' (insert_at' del n v) v = Some m"
<proof>
lemma [simp]: "index_of del v = Some m ==> m <= n ==>
index_of (insert_at del n v) v = Some m"
by (induction del, simp)
It works, but it means my stack theory is bloated and filled with way too much redundancy: every function has a second version stripping the constructor off, and every theorem has a second version (for which the proof is always by (induction del, simp), which strikes me as a sign I'm doing too much work somewhere).
Is there anything that would help here?
You want to use typedef.
The declaration
typedef 'a stack = "{xs :: 'a list. True}"
morphisms list_of_stack as_stack
by auto
introduces a new type, containing all lists, as well as functions between 'a stack and 'a list and a bunch of theorems. Here is a selection of them (you can view them all using show_theorems after the typedef command):
theorems:
as_stack_cases: (⋀y. ?x = as_stack y ⟹ y ∈ {xs. True} ⟹ ?P) ⟹ ?P
as_stack_inject: ?x ∈ {xs. True} ⟹ ?y ∈ {xs. True} ⟹ (as_stack ?x = as_stack ?y) = (?x = ?y)
as_stack_inverse: ?y ∈ {xs. True} ⟹ list_of_stack (as_stack ?y) = ?y
list_of_stack: list_of_stack ?x ∈ {xs. True}
list_of_stack_inject: (list_of_stack ?x = list_of_stack ?y) = (?x = ?y)
list_of_stack_inverse: as_stack (list_of_stack ?x) = ?x
type_definition_stack: type_definition list_of_stack as_stack {xs. True}
The ?x ∈ {xs. True} assumptions are quite boring here, but you can specify a proper subset of all lists there, e.g. if your stacks are never empty, and thereby ensure on the type level that the property holds for every stack.
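For example, a stack type that excludes the empty list could be declared like this (just a sketch; the names are made up, and the singleton list only serves as a witness that the defining set is non-empty):
typedef 'a nonempty_stack = "{xs :: 'a list. xs ≠ []}"
  morphisms list_of_nstack as_nstack
  by (rule exI[of _ "[undefined]"]) simp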
The type_definition_stack theorem is useful in conjunction with the lifting package. After the declaration
setup_lifting type_definition_stack
you can define functions on stacks by giving their definition in terms of lists, and also prove theorems involving stacks by proving their equivalent proposition in terms of lists; much easier than manually juggling with the conversion functions.
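For instance (a rough sketch, assuming the typedef and setup_lifting declarations above; the operation names push and pop and the closing proofs are my own guesses, not part of the original answer), stack operations and their laws can be stated via lifting and transfer:
lift_definition push :: "'a ⇒ 'a stack ⇒ 'a stack" is Cons
  by simp  (* the invariant {xs. True} is trivially preserved *)
lift_definition pop :: "'a stack ⇒ 'a stack" is tl
  by simp
lemma pop_push [simp]: "pop (push x s) = s"
  by transfer simp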
