Can erule produce erroneous subgoals?

I have the following grammar defined in Isabelle:
inductive S where
S_empty: "S []" |
S_append: "S xs ⟹ S ys ⟹ S (xs @ ys)" |
S_paren: "S xs ⟹ S (Open # xs @ [Close])"
Then I define a grammar T that conceptually only adds the following rule:
T_left: "T xs ⟹ T (Open # xs)"
Then I tried to prove the following theorem:
theorem T_S:
"T xs ⟹ count xs Open = count xs Close ⟹ S xs"
apply(erule T.induct)
apply(simp add: S_empty)
apply(simp add: S_append)
apply(simp add: S_paren)
oops
To my surprise the final goal seems to be false:
⋀xsa. count xs Open = count xs Close ⟹ T xsa ⟹ S xsa ⟹ S (Open # xsa)
So here S (Open # xsa) cannot hold, because assuming only S xsa there is no production in the grammar that yields it. This situation makes no sense to me. Is erule producing goals that are false?

Induction rules like T.induct should usually be used with the induction proof method rather than erule. The induction method ensures that the whole statement becomes part of the inductive argument, whereas with erule only the conclusion does; the other assumptions are basically ignored for the induction. This can be seen in the last goal state, where the inductive statement involves the goal parameter xsa while the crucial assumption count xs Open = count xs Close still talks about the variable xs. So the proof step should be apply(induction rule: T.induct). Then there is a chance to prove this statement.
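For illustration, a minimal sketch of the corrected opening (hedged: the exact shape of the remaining subgoals depends on the rule declarations above):
theorem T_S:
"T xs ⟹ count xs Open = count xs Close ⟹ S xs"
apply(induction rule: T.induct)
(* every subgoal now carries the count assumption for its own list
variable, so the induction hypotheses are strong enough to be usable *)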

Related

Function termination proof in Isabelle

I have a datatype stack_op which consists of several (~20) cases. I'm trying to write a function which skips some of those cases in a list:
function (sequential) skip_expr :: "stack_op list ⇒ stack_op list" where
"skip_expr [] = []"
| "skip_expr ((stack_op.Unary _)#other) = (skip_expr other)"
| "skip_expr ((stack_op.Binary _)#other) = skip_expr (skip_expr other)"
| "skip_expr ((stack_op.Value _)#other) = other"
| "skip_expr other = other"
by pat_completeness auto
termination by lexicographic_order
which seems to always terminate. But termination by lexicographic_order generates these unresolved cases:
Calls:
c) stack_op.Binary uv_ # other ~> skip_expr other
Measures:
1) size_list size
2) length
Result matrix:
1 2
c: ? ?
(size_change also doesn't work)
I've read https://isabelle.in.tum.de/dist/Isabelle2021/doc/functions.pdf, but it didn't help. (Maybe there are more complex examples of termination proofs somewhere?)
I tried to rewrite the function, adding another parameter:
function (sequential) skip_expr :: "stack_op list ⇒ nat ⇒ stack_op list" where
"skip_expr l 0 = l"
| "skip_expr [] _ = []"
| "skip_expr ((stack_op.Unary _)#other) depth = (skip_expr other (depth - 1))"
| "skip_expr ((stack_op.Binary _)#other) depth =
(let buff1 = (skip_expr other (depth - 1))
in (skip_expr buff1 (length buff1)))"
| "skip_expr ((stack_op.Value _)#other) _ = other"
| "skip_expr other _ = other"
by pat_completeness auto
termination by (relation "measure (λ(_,dep). dep)") auto
which generates unresolved subgoal:
1. ⋀other v. skip_expr_dom (other, v) ⟹ length (skip_expr other v) < Suc v
which I also don't know how to prove.
Could anyone explain how such cases are solved (as far as I understand, the problem is the two-level recursive call on the right-hand side of the stack_op.Binary case)? Or maybe there is another way to implement such a skip?
Thanks in advance
The lexicographic_order method simply tries to solve the arising goals with the simplifier, so if the simplifier gets stuck you end up with unresolved termination subgoals.
In this case, as you identified correctly, the problem is that you have a nested recursive call skip_expr (skip_expr other). This is always problematic because at this stage, the simplifier knows nothing about what skip_expr does to the input list. For all we know, it might just return the list unmodified, or even a longer list, and then it surely would not terminate.
Confronting the issue head on
The solution is to show something about length (skip_expr …) and make that information available to the simplifier. Because we have not yet shown termination of the function, we have to use the skip_expr.psimps rules and the partial induction rule skip_expr.pinduct, i.e. every statement we make about skip_expr xs always has as a precondition that skip_expr actually terminates on the input xs. For this, there is the predicate skip_expr_dom.
Putting it all together, it looks like this:
lemma length_skip_expr [termination_simp]:
"skip_expr_dom xs ⟹ length (skip_expr xs) ≤ length xs"
by (induction xs rule: skip_expr.pinduct) (auto simp: skip_expr.psimps)
termination skip_expr by lexicographic_order
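The [termination_simp] attribute is the glue here: rules declared with it are added to the simp set used in termination proofs, which is what lets lexicographic_order discharge the goal arising from the nested call.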
Circumventing the issue
Sometimes it can also be easier to circumvent the issue entirely. In your case, you could e.g. define a more general function skip_exprs that skips not just one instruction but n instructions. This you can define without nested recursion:
fun skip_exprs :: "nat ⇒ stack_op list ⇒ stack_op list" where
"skip_exprs 0 xs = xs"
| "skip_exprs (Suc n) [] = []"
| "skip_exprs (Suc n) (Unary _ # other) = skip_exprs (Suc n) other"
| "skip_exprs (Suc n) (Binary _ # other) = skip_exprs (Suc (Suc n)) other"
| "skip_exprs (Suc n) (Value _ # other) = skip_exprs n other"
| "skip_exprs (Suc n) xs = xs"
Equivalence to your skip_expr is then straightforward to prove:
lemma skip_exprs_conv_skip_expr: "skip_exprs n xs = (skip_expr ^^ n) xs"
proof -
have [simp]: "(skip_expr ^^ n) [] = []" for n
by (induction n) auto
have [simp]: "(skip_expr ^^ n) (Other # xs) = Other # xs" for xs n
by (induction n) auto
show ?thesis
by (induction n xs rule: skip_exprs.induct)
(auto simp del: funpow.simps simp: funpow_Suc_right)
qed
lemma skip_expr_Suc_0 [simp]: "skip_exprs (Suc 0) xs = skip_expr xs"
by (simp add: skip_exprs_conv_skip_expr)
In your case, I don't think it actually makes sense to do this because figuring out the termination is fairly easy, but it may be good to keep in mind.

Nested cases Isar

I'm having some issues trying to do exercise 4.5 of 'Concrete Semantics' in Isar:
inductive S :: "alpha list ⇒ bool" where
Sε : "S []" |
aSb : "S m ⟹ S (a#m # [b])" |
SS : "S l ⟹ S r ⟹ S (l # r)"
inductive T :: "alpha list ⇒ bool" where
Tε : "T []" |
TaTb : "T l ⟹ T r ⟹ T (l # a#(r # [b]))"
lemma TS: "T w ⟹ S w"
proof (induction w rule: T.induct)
case Tε
show ?case by (simp add:Sε)
case (TaTb l r) show ?case using TaTb.IH(1) (* This being S l, which allows us to case-split on l using S.induct *)
proof (cases "l" rule: S.induct)
case Sε
then show ?case by (simp add: TaTb.IH(2) aSb)
next case (aSb m)
I'm getting Illegal schematic variable(s) in case "aSb"
I also find it suspicious that in Sε I cannot refer to ?case: I get Unbound schematic variable: ?case. I'm thinking that maybe the problem is that I have a cases inside an induction?
As summarized by the comments, you have two problems:
cases "l" rule: S.induct makes little sense and you should either use a nested induction induction l rule: S.induct or a case distinction cases l rule: S.cases
In cases you should use ?thesis instead of cases as the Isabelle/jEdit outline tells you (you can click on that thing to insert it into the buffer!). That way you would also have given a name to all variable in the case TaTb.
So you probably want something like:
lemma TS: "T w ⟹ S w"
proof (induction w rule: T.induct)
case Tε
show ?case by (simp add:Sε)
next
case (TaTb l r a b) show ?case using TaTb.IH(1)
proof (cases "l" rule: S.cases)
case Sε
then show ?thesis sorry
next
case (aSb m a b)
then show ?thesis sorry
next
case (SS l r)
then show ?thesis sorry
qed
qed

Induction on second argument Isar

inductive T :: "alpha list ⇒ bool" where
Tε : "T []" |
TaTb : "T l ⟹ T r ⟹ T (l # a#(r # [b]))"
lemma Tapp: "⟦T l; T r⟧ ⟹ T (l#r)"
proof (induction r rule: T.induct)
I get 'Failed to apply initial proof method'
In apply-style Isabelle one could, I guess, use rotate_tac to make induction work on the desired argument; what's the Isar equivalent? Would it help to reformulate the lemma with assumes and shows?
Rule induction is always on the leftmost premise of the goal. Therefore, the Isabelle/Isar solution consists in inverting the order of the premises:
lemma Tapp: "⟦T r; T l⟧ ⟹ T (l @ r)"
proof (induction r rule: T.induct)
...
Or, using assumes and shows:
lemma Tapp: assumes "T r" and "T l" shows "T (l @ r)"
using assms proof (induction r rule: T.induct)
...
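For concreteness, a hedged sketch of how the induction could then be completed (case names follow the rule declarations above; the sorry is a placeholder):
lemma Tapp: assumes "T r" and "T l" shows "T (l @ r)"
using assms
proof (induction rule: T.induct)
case Tε
then show ?case by simp (* since l @ [] = l, and T l is among the premises *)
next
case (TaTb l' r' a b)
then show ?case sorry (* combine the IH with rule TaTb and associativity of @ *)
qed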

Using the type-to-sets approach for defining quotients

Isabelle has some automation for quotient reasoning through the quotient package. I would like to see if that automation is of any use for my example. The relevant definition is:
definition e_proj where "e_proj = e'_aff_bit // gluing"
So I try to write:
typedef e_aff_t = "e'_aff_bit"
quotient_type e_proj_t = "e'_aff_bit" / "gluing"
However, I get the error:
Extra type variables in representing set: "'a"
The error(s) above occurred in typedef "e_aff_t"
Because, as Manuel Eberl explains here, we cannot have type definitions that depend on type parameters. In the past, it was suggested that I use the type-to-sets approach.
How would that approach work in my example? Would it lead to more automation?
In the past, it was suggested that I use the type-to-sets approach ...
The suggestion that was made in my previous answer was to use the standard set-based infrastructure for reasoning about quotients. I only mentioned that there exist other options for completeness.
I still believe that it is best not to use Types-To-Sets, provided that the definition of a quotient type is the only reason why you wish to use Types-To-Sets:
Even with Types-To-Sets, you will only be able to mimic the behavior of a quotient type in a local context with certain additional assumptions. Upon leaving the local context, the theorems that use locally defined quotient types would need to be converted to the set-based theorems that would inevitably rely on the standard set-based infrastructure for reasoning about quotients.
One would need to develop additional Isabelle/ML infrastructure before the Local Typedef Rule can be used to define quotient types locally in a convenient manner. It should not be too difficult to develop an infrastructure that is usable, but it would take some time to develop something that is universally applicable. Personally, I do not consider this application to be sufficiently important to invest my time in it.
In my view, it is only viable to use Types-To-Sets for the definition of quotient types locally if you are already using Types-To-Sets for its intended purpose in a given development. Then, the possibility of using the framework for the definition of quotient types locally can be seen as a 'value-added benefit'.
For completeness, I provide an example that I developed for an answer on the mailing list some time ago. Of course, this is merely a demonstration of the concept, not a solution that can be used for work that is meant to be published in some form. To make this usable, one would need to convert this development to an Isabelle/ML command that would take care of all the details automatically.
theory Scratch
imports Main
"HOL-Types_To_Sets.Prerequisites"
"HOL-Types_To_Sets.Types_To_Sets"
begin
locale local_typedef =
fixes R :: "['a, 'a] ⇒ bool"
assumes is_equivalence: "equivp R"
begin
(*The exposition subsumes some of the content of
HOL/Types_To_Sets/Examples/Prerequisites.thy*)
context
fixes S and s :: "'s itself"
defines S: "S ≡ {x. ∃u. x = {v. R u v}}"
assumes Ex_type_definition_S:
"∃(Rep::'s ⇒ 'a set) (Abs::'a set ⇒ 's). type_definition Rep Abs S"
begin
definition "rep = fst (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep
Abs S)"
definition "Abs = snd (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep
Abs S)"
definition "rep' a = (SOME x. a ∈ S ⟶ x ∈ a)"
definition "Abs' x = (SOME a. a ∈ S ∧ a = {v. R x v})"
definition "rep'' = rep' o rep"
definition "Abs'' = Abs o Abs'"
lemma type_definition_S: "type_definition rep Abs S"
unfolding Abs_def rep_def split_beta'
by (rule someI_ex) (use Ex_type_definition_S in auto)
lemma rep_in_S[simp]: "rep x ∈ S"
and rep_inverse[simp]: "Abs (rep x) = x"
and Abs_inverse[simp]: "y ∈ S ⟹ rep (Abs y) = y"
using type_definition_S
unfolding type_definition_def by auto
definition cr_S where "cr_S ≡ λs b. s = rep b"
lemmas Domainp_cr_S = type_definition_Domainp[OF type_definition_S cr_S_def, transfer_domain_rule]
lemmas right_total_cr_S = typedef_right_total[OF type_definition_S cr_S_def, transfer_rule]
and bi_unique_cr_S = typedef_bi_unique[OF type_definition_S cr_S_def, transfer_rule]
and left_unique_cr_S = typedef_left_unique[OF type_definition_S cr_S_def, transfer_rule]
and right_unique_cr_S = typedef_right_unique[OF type_definition_S cr_S_def, transfer_rule]
lemma cr_S_rep[intro, simp]: "cr_S (rep a) a" by (simp add: cr_S_def)
lemma cr_S_Abs[intro, simp]: "a∈S ⟹ cr_S a (Abs a)" by (simp add: cr_S_def)
(* this part was sledgehammered - please do not pay attention to the (absence of) proof style *)
lemma r1: "∀a. Abs'' (rep'' a) = a"
unfolding Abs''_def rep''_def comp_def
proof-
{
fix s'
note repS = rep_in_S[of s']
then have "∃x. x ∈ rep s'" using S equivp_reflp is_equivalence by force
then have "rep' (rep s') ∈ rep s'"
using repS unfolding rep'_def by (metis verit_sko_ex')
moreover with is_equivalence repS have "rep s' = {v. R (rep' (rep s')) v}"
by (smt CollectD S equivp_def)
ultimately have arr: "Abs' (rep' (rep s')) = rep s'"
unfolding Abs'_def by (smt repS some_sym_eq_trivial verit_sko_ex')
have "Abs (Abs' (rep' (rep s'))) = s'" unfolding arr by (rule
rep_inverse)
}
then show "∀a. Abs (Abs' (rep' (rep a))) = a" by auto
qed
lemma r2: "∀a. R (rep'' a) (rep'' a)"
unfolding rep''_def rep'_def
using is_equivalence unfolding equivp_def by blast
lemma r3: "∀r s. R r s = (R r r ∧ R s s ∧ Abs'' r = Abs'' s)"
apply(intro allI)
apply standard
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def by auto
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def
by (smt Abs''_def Abs'_def CollectD S comp_apply local.Abs_inverse
mem_Collect_eq someI_ex)
done
definition cr_Q where "cr_Q = (λx y. R x x ∧ Abs'' x = y)"
lemma quotient_Q: "Quotient R Abs'' rep'' cr_Q"
unfolding Quotient_def
apply(intro conjI)
subgoal by (rule r1)
subgoal by (rule r2)
subgoal by (rule r3)
subgoal by (rule cr_Q_def)
done
(* instantiate the quotient lemmas from the theory Lifting *)
lemmas Q_Quotient_abs_rep = Quotient_abs_rep[OF quotient_Q]
(*...*)
(* prove the statements about the quotient type 's *)
(*...*)
(* transfer the results back to 'a using the capabilities of transfer -
not demonstrated in the example *)
lemma aa: "(a::'a) = (a::'a)"
by auto
end
thm aa[cancel_type_definition]
(* this shows {x. ∃u. x = {v. R u v}} ≠ {} ⟹ ?a = ?a *)
end

Express that a function is constant on a set

I'm trying to express that a function f is constant on a set S, with value r. My first idea was
f ` S = {r}
but that does not work, as S can be empty. So I am currently working with
f ` S ⊆ {r}
and it works OK-ish, but I have the impression that this is still not ideal for the standard automation. In particular, auto would fail, leaving this goal (irrelevant facts erased):
2. ⋀xa. thunks (delete x Γ) ⊆ thunks Γ ⟹
ae ` thunks Γ ⊆ {up⋅0} ⟹
xa ∈ thunks (delete x Γ) ⟹
ae xa = up⋅0
Sledgehammer of course has no problem (metis image_eqI singletonD subsetCE), but there are a few occurrences of this. (In general, ⊆ does not seem to work with auto as well as I'd expect.)
Is there a better way to express this, i.e. one that can be used by auto more easily when it occurs as an assumption?
I didn't try it, since I didn't have any examples handy. But you might try the following setup.
definition "const f S r ≡ ∀x ∈ S. f x = r"
This is equivalent to your formulation:
lemma
"const f S r ⟷ f ` S ⊆ {r}"
by (auto simp: const_def)
Then employ the following simp rule:
lemma [simp]:
"const f S r ⟹ x ∈ S ⟹ f x = r"
by (simp add: const_def)
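A toy instance of the pattern from the question (hypothetical lemma; the subset step is the part that previously made auto stumble, and here unfolding const_def lets auto chain through it):
lemma "const f S r ⟹ T ⊆ S ⟹ x ∈ T ⟹ f x = r"
by (auto simp: const_def)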
The Analysis library defines
definition constant_on (infixl "(constant'_on)" 50)
where "f constant_on A \<equiv> \<exists>y. \<forall>x\<in>A. f x = y"
