Are there Isabelle/Isar proof strategies to prove lemmas about non-inductive definitions?

I have an Isabelle theory with a non-inductive definition (I would like to model algorithms without induction, as industrial developers usually do) and a lemma stated about this definition:
theory Maximum
imports (* Main *)
"HOL-Library.Multiset"
"HOL-Library.Code_Target_Numeral"
"HOL-Library.Code_Target_Nat"
"HOL-Library.Code_Abstract_Nat"
begin
definition two_integer_max_case_def :: "nat ⇒ nat ⇒ nat" where
"two_integer_max_case_def a b = (case a > b of True ⇒ a | False ⇒ b)"
lemma spec_1:
fixes a :: nat and b :: nat
assumes "a > b"
shows "two_integer_max_case_def a b = a"
apply (induction b)
apply (induction a)
apply simp_all
done
end
The current proof state has 3 goals, all of them arising from the induction (a direct application of simp fails with Failed to apply proof method):
proof (prove)
goal (3 subgoals):
1. two_integer_max_case_def 0 0 = 0
2. ⋀a. two_integer_max_case_def a 0 = a ⟹
       two_integer_max_case_def (Suc a) 0 = Suc a
3. ⋀b. two_integer_max_case_def a b = a ⟹
       two_integer_max_case_def a (Suc b) = a
My question is: are there proof methods or strategies that avoid induction and can be applied in this simple case? Maybe something related to the simp family of tactics?
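One way to avoid induction entirely, as the related answer below also shows, is to unfold the definition's defining equation; a minimal sketch for spec_1 (the lemma name spec_1_simp is only for illustration):
lemma spec_1_simp:
  fixes a :: nat and b :: nat
  assumes "a > b"
  shows "two_integer_max_case_def a b = a"
  (* no induction: unfold the _def equation and let simp evaluate the case *)
  using assms by (simp add: two_integer_max_case_def_def)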

Related

How to include statement about type of the variable in the Isabelle/HOL term

I have the following simple Isabelle/HOL theory:
theory Max_Of_Two_Integers_Real
imports Main
"HOL-Library.Multiset"
"HOL-Library.Code_Target_Numeral"
"HOL-Library.Code_Target_Nat"
"HOL-Library.Code_Abstract_Nat"
begin
definition two_integer_max_case_def :: "nat ⇒ nat ⇒ nat" where
"two_integer_max_case_def a b = (case a > b of True ⇒ a | False ⇒ b)"
lemma spec_final:
fixes a :: nat and b :: nat
assumes "a > b" (* and "b < a" *)
shows "two_integer_max_case_def a b = a"
using assms by (simp add: two_integer_max_case_def_def)
lemma spec_1:
fixes a :: nat and b :: nat
shows "a > b ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
lemma spec_2:
shows " (a ∈ nat set) ∧ (b ∈ nat set) ∧ (a > b) ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
end
The three lemmas try to express and prove the same statement, but I am progressively trying to move information from the assumes and fixes clauses into the term itself. The first two lemmas are accepted, but the third (last) lemma fails syntactically with the error message:
Type unification failed: Clash of types "_ ⇒ _" and "int"
Type error in application: incompatible operand type
Operator: nat :: int ⇒ nat
Operand: set :: ??'a list ⇒ ??'a set
My aim in this lemma is to move the type information from fixes into the term/statement itself. How do I make statements about the type of a variable inside the term (in the inner syntax)?
Maybe, if I am trying to avoid the fixes clause (in which the variables may be declared), I should use a full universal quantifier, like:
lemma spec_final_3:
shows "∀ a :: nat . ∀ b :: nat . ( (a > b) ⟹ two_integer_max_case_def a b = a)"
by (simp add: two_integer_max_case_def_def)
But it fails syntactically as well, with the error message:
Inner syntax error: unexpected end of input⌂
Failed to parse prop
So: is it possible to include type annotations directly in the term (without a fixes clause), and is there any difference between a fixes clause and a type annotation in the term? Maybe such differences only start to appear during (semi)automatic proofs, e.g. when simplification or other tactics are applied?
In the term, nat set is parsed as the function nat :: int ⇒ nat applied to set, which does not type-check, as the error message shows. The set of all natural numbers can instead be written UNIV :: nat set. Then spec_2 reads:
lemma spec_2:
shows "a ∈ (UNIV :: nat set) ∧ b ∈ (UNIV :: nat set) ∧ a > b ⟹
two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
However, a more natural way is to include the type information directly in spec_1, without a fixes clause:
lemma spec_1':
shows "(a :: nat) > (b :: nat) ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
∀ belongs to HOL, so the HOL implication ⟶ (rather than the meta-implication ⟹) should be used in spec_final_3:
lemma spec_final_3:
shows "∀ a :: nat. ∀ b :: nat. a > b ⟶ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
spec_1 can also be rewritten using explicit meta-logic quantification (⋀) and meta-implication (⟹) to look similar to spec_final_3:
lemma spec_1'':
shows "⋀ a :: nat. ⋀ b :: nat. a > b ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)

Simplifying if-then-else in summations or products

While doing some basic algebra, I frequently arrive at a subgoal of the following type (sometimes with a finite sum, sometimes with a finite product).
lemma foo:
fixes N :: nat
fixes a :: "nat ⇒ nat"
shows "(a 0) = (∑x = 0..N. (if x = 0 then 1 else 0) * (a x))"
This seems pretty obvious to me, but neither auto nor auto cong: sum.cong split: if_splits can handle this. What's more, sledgehammer also surrenders when called on this lemma. How can one efficiently work with finite sums and products containing if-then-else in general, and how should one approach this case in particular?
My favourite way to do these things (because it is very general) is to use the rules sum.mono_neutral_left and sum.mono_neutral_cong_left and the corresponding right versions (and analogously for products). The rule sum.mono_neutral_right lets you drop arbitrarily many summands if they are all zero:
finite T ⟹ S ⊆ T ⟹ ∀i∈T - S. g i = 0
⟹ sum g T = sum g S
The cong rule additionally allows you to modify the summation function on the now smaller set:
finite T ⟹ S ⊆ T ⟹ ∀i∈T - S. g i = 0 ⟹ (⋀x. x ∈ S ⟹ g x = h x)
⟹ sum g T = sum h S
With those, it looks like this:
lemma foo:
fixes N :: nat and a :: "nat ⇒ nat"
shows "a 0 = (∑x = 0..N. (if x = 0 then 1 else 0) * a x)"
proof -
have "(∑x = 0..N. (if x = 0 then 1 else 0) * a x) = (∑x ∈ {0}. a x)"
by (intro sum.mono_neutral_cong_right) auto
also have "… = a 0"
by simp
finally show ?thesis ..
qed
If the left-hand side may pick out an arbitrary index M between 0 and N, what about adding a more general lemma
lemma bar:
fixes N :: nat
fixes a :: "nat ⇒ nat"
assumes
"M ≤ N"
shows "a M = (∑x = 0..N. (if x = M then 1 else 0) * (a x))"
using assms by (induction N) force+
and solving the original one with using bar by blast?
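Spelled out, that proposal reads as follows (a sketch, assuming the bar lemma above; blast instantiates M with 0 and discharges the trivial side condition 0 ≤ N):
lemma foo:
  fixes N :: nat and a :: "nat ⇒ nat"
  shows "a 0 = (∑x = 0..N. (if x = 0 then 1 else 0) * a x)"
  (* the special case M = 0 of bar *)
  using bar by blast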

Proving existence of an infinite path in Isabelle

Consider the following inductive predicate:
inductive terminating where
"(⋀ s'. s → s' ⟹ terminating s') ⟹ terminating s"
I would like to prove that if a node s is not terminating, then there exists an infinite chain of the form s0 → s1 → s2 → .... Something along the lines of:
lemma "¬ terminating (c,s) ⟹
∃ cfs. (cfs 0 = (c,s) ∧ (∀ n. (cfs n) → (cfs (n+1))))"
How can I prove this in Isabelle?
Edit
The final goal is to prove the following lemma:
lemma "(∀s t. (c, s) ⇒ t = (c', s) ⇒ t) ⟹
terminating (c, s) = terminating (c', s) "
where ⇒ is the big step semantics of the GCL. Perhaps another method is needed to prove this theorem.
If you are comfortable using the choice operator, you can easily construct a witness using SOME, for instance:
primrec infinite_trace :: ‹'s ⇒ nat ⇒ 's› where
‹infinite_trace c0 0 = c0›
| ‹infinite_trace c0 (Suc n) =
(SOME c. infinite_trace c0 n → c ∧ ¬ terminating c)›
(I was not sure about the types of your s and (c,s) values—so I just used 's for that.)
Obviously, the witness construction would fail if at some point SOME cannot pick a value satisfying the constraint. So, one still has to prove that the non-termination indeed propagates (as is quite obvious from the definition):
lemma terminating_suc:
assumes ‹¬ terminating c›
obtains c' where ‹c → c'› ‹¬ terminating c'›
using assms terminating.intros by blast
lemma nontermination_implies_infinite_trace:
assumes ‹¬ terminating c0›
shows ‹¬ terminating (infinite_trace c0 n)
∧ infinite_trace c0 n → infinite_trace c0 (Suc n)›
by (induct n,
(simp, metis (mono_tags, lifting) terminating_suc assms exE_some)+)
Proving your existential quantification using infinite_trace (c,s) as a witness is straightforward.
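For example, here is a sketch of that final step (the lemma name is only for illustration; the conclusion is stated with Suc n instead of n + 1 to match the primrec equations, the two forms being interchangeable via Suc_eq_plus1):
lemma nonterminating_infinite_path:
  assumes ‹¬ terminating (c, s)›
  shows ‹∃cfs. cfs 0 = (c, s) ∧ (∀n. cfs n → cfs (Suc n))›
proof (intro exI[of _ ‹infinite_trace (c, s)›] conjI allI)
  (* the trace starts at (c, s) by the first primrec equation *)
  show ‹infinite_trace (c, s) 0 = (c, s)› by simp
next
  (* each step of the trace is a transition, by the lemma above *)
  fix n
  show ‹infinite_trace (c, s) n → infinite_trace (c, s) (Suc n)›
    using nontermination_implies_infinite_trace[OF assms] by blast
qed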

Free type variables in proof by induction

While trying to prove lemmas about functions in continuation-passing style by induction, I have come across a problem with free type variables. In my induction hypothesis, the continuation is a schematic variable, but its type involves a free type variable. As a result, Isabelle is not able to unify the type variable with a concrete type when I try to apply the induction hypothesis. I have cooked up this minimal example:
fun add_k :: "nat ⇒ nat ⇒ (nat ⇒ 'a) ⇒ 'a" where
"add_k 0 m k = k m" |
"add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))"
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
proof(rule, induction n)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))" by simp
also have "… = k (Suc (add_k n m id))"
using Suc[where k="(λn'. k (Suc n'))"] by metis
also have "… = k (add_k n m (λn'. Suc n'))"
using Suc[where k="(λn'. Suc n')"] sorry (* Type unification failed *)
also have "… = k (add_k (Suc n) m id)" by simp
finally show ?case .
qed
In the "sorry step", the explicit instantiation of the schematic variable ?k fails with
Type unification failed
Failed to meet type constraint:
Term: Suc :: nat ⇒ nat
Type: nat ⇒ 'a
since 'a is free and not schematic. Without the instantiation the simplifier fails anyway and I couldn't find other methods that would work.
Since I cannot quantify over types, I don't see any way to make 'a schematic inside the proof. When a term variable becomes schematic locally inside a proof, why isn't this also the case for the variables in its type? After the lemma has been proved, they become schematic at the theory level anyway. This seems quite limiting. Could an option to do this be implemented in the future, or is there some inherent limitation? Alternatively, is there an approach that avoids this problem while still keeping the continuation schematically polymorphic in the proven lemma?
Free type variables become schematic in a theorem when the theorem is exported from the block in which the type variables have been fixed. In particular, you cannot quantify over type variables in a block and then instantiate the type variable within the block, as you are trying to do in your induction. Arbitrary quantification over types leads to inconsistencies in HOL, so there is little hope that this could be changed.
Fortunately, there is a way to prove your lemma in CPS style without type quantification. The problem is that your statement is not general enough, because it contains id. If you generalise it, then the proof works:
lemma add_k_cps: "add_k n m (k ∘ f) = k (add_k n m f)"
proof(induction n arbitrary: f)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m (k ∘ f) = add_k n m (k ∘ (λn'. f (Suc n')))" by(simp add: o_def)
also have "… = k (add_k n m (λn'. f (Suc n')))"
using Suc.IH[where f="(λn'. f (Suc n'))"] by metis
also have "… = k (add_k (Suc n) m f)" by simp
finally show ?case .
qed
You get your original theorem back if you choose f = id.
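For instance, a minimal sketch of that instantiation (the lemma name add_k_cps_orig and the particular proof method are just one possible spelling):
lemma add_k_cps_orig: "add_k n m k = k (add_k n m id)"
  (* k ∘ id collapses to k, so this is add_k_cps with f = id *)
  by (metis add_k_cps comp_id)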
This is an inherent limitation of how induction works in HOL. Induction is a rule in HOL, so it is not possible to generalize types in the induction hypothesis.
A specialized solution for your problem is to first prove
lemma add_k_cps_nat: "add_k n m k = k (n + m)"
by (induction n arbitrary: m k) auto
and then prove add_k_cps.
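A sketch of that second step, stating the question's original lemma and assuming add_k_cps_nat above (both sides reduce to k (n + m)):
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
  (* rewrite both occurrences of add_k with add_k_cps_nat; both sides become k (n + m) *)
  by (simp add: add_k_cps_nat)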
A general approach is: first prove instances for fixed types, for which the induction works (in this example, an induction over nat), and then derive a proof that is generalized in the type itself.

Lifting a partial definition to a quotient type

I have a partially-defined operator (disj_union below) on sets that I would like to lift to a quotient type (natq). Morally, I think this should be ok, because it is always possible to find in the equivalence class some representative for which the operator is defined [*]. However, I cannot complete the proof that the lifted definition preserves the equivalence, because disj_union is only partially defined. In my theory file below, I propose one way I have found to define my disj_union operator, but I don't like it because it features lots of abs and Rep functions, and I think it would be hard to work with (right?).
What is a good way to define this kind of thing using quotients in Isabelle?
theory My_Theory imports
"~~/src/HOL/Library/Quotient_Set"
begin
(* A ∪-operator that is defined only on disjoint operands. *)
definition "X ∩ Y = {} ⟹ disj_union X Y ≡ X ∪ Y"
(* Two sets are equivalent if they have the same cardinality. *)
definition "card_eq X Y ≡ finite X ∧ finite Y ∧ card X = card Y"
(* Quotient sets of naturals by this equivalence. *)
quotient_type natq = "nat set" / partial: card_eq
proof (intro part_equivpI)
show "∃x. card_eq x x" by (metis card_eq_def finite.emptyI)
show "symp card_eq" by (metis card_eq_def symp_def)
show "transp card_eq" by (metis card_eq_def transp_def)
qed
(* I want to lift my disj_union operator to the natq type.
But I cannot complete the proof, because disj_union is
only partially defined. *)
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union"
oops
(* Here is another attempt to define natq_add. I think it
is correct, but it looks hard to prove things about,
because it uses abstraction and representation functions
explicitly. *)
definition natq_add :: "natq ⇒ natq ⇒ natq"
where "natq_add X Y ≡
let (X',Y') = SOME (X',Y').
X' ∈ Rep_natq X ∧ Y' ∈ Rep_natq Y ∧ X' ∩ Y' = {}
in abs_natq (disj_union X' Y')"
end
[*] This is a little bit like how capture-avoiding substitution is only defined on the condition that bound variables do not clash; a condition that can always be satisfied by renaming to another representative in the alpha-equivalence class.
What about something like this (just an idea):
definition disj_union' :: "nat set ⇒ nat set ⇒ nat set"
where "disj_union' X Y ≡
let (X',Y') = SOME (X',Y').
card_eq X' X ∧ card_eq Y' Y ∧ X' ∩ Y' = {}
in disj_union X' Y'"
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union'" oops
For the record, here is Ondřej's suggestion (well, a slight amendment thereof, in which only one of the operands is renamed, not both) carried out to completion...
(* A version of disj_union that is always defined. *)
definition disj_union' :: "nat set ⇒ nat set ⇒ nat set"
where "disj_union' X Y ≡
let Y' = SOME Y'.
card_eq Y' Y ∧ X ∩ Y' = {}
in disj_union X Y'"
(* Can always choose a natural that is not in a given finite subset of ℕ. *)
lemma nats_infinite:
fixes A :: "nat set"
assumes "finite A"
shows "∃x. x ∉ A"
proof (rule ccontr, simp)
assume "∀x. x ∈ A"
hence "A = UNIV" by fast
hence "finite UNIV" using assms by fast
thus False by fast
qed
(* Can always choose n naturals that are not in a given finite subset of ℕ. *)
lemma nat_renaming:
fixes x :: "nat set" and n :: nat
assumes "finite x"
shows "∃z'. finite z' ∧ card z' = n ∧ x ∩ z' = {}"
using assms
apply (induct n)
apply (intro exI[of _ "{}"], simp)
apply (clarsimp)
apply (rule_tac x="insert (SOME y. y ∉ x ∪ z') z'" in exI)
apply (intro conjI, simp)
apply (rule someI2_ex, rule nats_infinite, simp, simp)+
done
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union'"
apply (unfold disj_union'_def card_eq_def)
apply (rule someI2_ex, simp add: nat_renaming)
apply (rule someI2_ex, simp add: nat_renaming)
apply (metis card.union_inter_neutral disj_union_def empty_iff finite_Un)
done
