Associativity proof on Nats vs. Lists - isabelle

I am comparing the associativity proofs for Nats and Lists.
The proof on Lists goes by induction:
lemma append_assoc [simp]: "(xs @ ys) @ zs = xs @ (ys @ zs)"
by (induct xs) auto
But the proof on Nats is
lemma nat_add_assoc: "(m + n) + k = m + ((n + k)::nat)"
by (rule add_assoc)
Why do I not need induction on the nat_add_assoc proof? Is it because of some automation happening on natural numbers?

The associativity proof on nat is also done by induction.
In Nat.thy you can find
instantiation nat :: comm_monoid_diff
which is the Isabelle way of saying that nat is an instance of the type class comm_monoid_diff. The definitions and lemmas that follow then show that the natural numbers form a commutative monoid under addition and that there is also subtraction.
In this block you find the proof:
instance proof
fix n m q :: nat
show "(n + m) + q = n + (m + q)" by (induct n) simp_all
The instantiation then gives us the lemma add_assoc on nat.
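For comparison, the same induction can be replayed as a standalone lemma outside the instantiation block. The following is a minimal sketch (the lemma name is made up here); induction is on the first summand because addition on nat recurses on its first argument:
lemma nat_add_assoc_direct: "(m + n) + k = m + ((n + k)::nat)"
by (induct m) simp_all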

Related

Reification for interpretation functions which use different interpretation functions below quantifiers/lambdas

I'm currently trying to use Isabelle/HOL's reification tactic. I'm unable to use different interpretation functions below quantifiers/lambdas. The MWE below illustrates this. The important part is the definition of the form function, where the ter call occurs below the ∀. When trying to use the reify tactic I get a Cannot find the atoms equation error. I don't get this error for interpretation functions which only call themselves under quantifiers.
I can't really reformulate my problem to avoid this. Does anybody know how to get reify working for such cases?
theory MWE
imports
"HOL-Library.Reflection"
begin
datatype Ter = V nat | P Ter Ter
datatype Form = All0 Ter
fun ter :: "Ter ⇒ nat list ⇒ nat"
where "ter (V n) vs = vs ! n"
| "ter (P t1 t2) vs = ter t1 vs + ter t2 vs"
fun form :: "Form ⇒ nat list ⇒ bool"
where "form (All0 t) vs = (∀ v . ter t (v#vs) = 0)" (* use of different interpretation function below quantifier *)
(*
I would expect this to reify to:
form (All0 (P (V 0) (V 0))) []
instead I get an error :-(
*)
lemma "∀ n :: nat . n + n = 0"
apply (reify ter.simps form.simps)
(* proof (prove)
goal (1 subgoal):
1. ∀n. n + n = n + n
Cannot find the atoms equation *)
oops
(* As a side note: the following example in src/HOL/ex/Reflection_Examples.thy (line 448, Isabelle2022) seems to be broken? For me, the reify invocation
doesn't change the goal at all. It uses quantifiers too, but only calls the same interpretation function under quantifiers and also doesn't throw an error,
so at least for me this seems to be unrelated to my problem.
*)
(*
lemma " ∀x. ∃n. ((Suc n) * length (([(3::int) * x + f t * y - 9 + (- z)] # []) # xs) = length xs) ∧ m < 5*n - length (xs # [2,3,4,x*z + 8 - y]) ⟶ (∃p. ∀q. p ∧ q ⟶ r)"
apply (reify Irifm.simps Irnat_simps Irlist.simps Irint_simps)
oops
*)
end

Using the type-to-sets approach for defining quotients

Isabelle has some automation for quotient reasoning through the quotient package. I would like to see if that automation is of any use for my example. The relevant definition is:
definition e_proj where "e_proj = e'_aff_bit // gluing"
So I try to write:
typedef e_aff_t = e'_aff_bit
quotient_type e_proj_t = "e'_aff_bit" / "gluing"
However, I get the error:
Extra type variables in representing set: "'a"
The error(s) above occurred in typedef "e_aff_t"
This is because, as Manuel Eberl explains here, we cannot have type definitions that depend on type parameters. In the past, it was suggested that I use the type-to-sets approach.
How would that approach work in my example? Would it lead to more automation?
In the past, it was suggested that I use the type-to-sets approach ...
The suggestion that was made in my previous answer was to use the standard set-based infrastructure for reasoning about quotients. I only mentioned that there exist other options for completeness.
I still believe that it is best not to use Types-To-Sets, provided that the definition of a quotient type is the only reason why you wish to use Types-To-Sets:
Even with Types-To-Sets, you will only be able to mimic the behavior of a quotient type in a local context with certain additional assumptions. Upon leaving the local context, the theorems that use locally defined quotient types would need to be converted to the set-based theorems that would inevitably rely on the standard set-based infrastructure for reasoning about quotients.
One would need to develop additional Isabelle/ML infrastructure before the Local Typedef Rule can conveniently be used to define quotient types locally. It should not be too difficult to develop an infrastructure that is usable, but it would take some time to develop something that is universally applicable. Personally, I do not consider this application to be sufficiently important to invest my time in it.
In my view, it is only viable to use Types-To-Sets for the definition of quotient types locally if you are already using Types-To-Sets for its intended purpose in a given development. Then, the possibility of using the framework for the definition of quotient types locally can be seen as a 'value-added benefit'.
For completeness, I provide an example that I developed for an answer on the mailing list some time ago. Of course, this is merely a demonstration of the concept, not a solution that can be used for work that is meant to be published in some form. To make this usable, one would need to convert this development into an Isabelle/ML command that takes care of all the details automatically.
theory Scratch
imports Main
"HOL-Types_To_Sets.Prerequisites"
"HOL-Types_To_Sets.Types_To_Sets"
begin
locale local_typedef =
fixes R :: "['a, 'a] ⇒ bool"
assumes is_equivalence: "equivp R"
begin
(*The exposition subsumes some of the content of
HOL/Types_To_Sets/Examples/Prerequisites.thy*)
context
fixes S and s :: "'s itself"
defines S: "S ≡ {x. ∃u. x = {v. R u v}}"
assumes Ex_type_definition_S:
"∃(Rep::'s ⇒ 'a set) (Abs::'a set ⇒ 's). type_definition Rep Abs S"
begin
definition "rep = fst (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep
Abs S)"
definition "Abs = snd (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep
Abs S)"
definition "rep' a = (SOME x. a ∈ S ⟶ x ∈ a)"
definition "Abs' x = (SOME a. a ∈ S ∧ a = {v. R x v})"
definition "rep'' = rep' o rep"
definition "Abs'' = Abs o Abs'"
lemma type_definition_S: "type_definition rep Abs S"
unfolding Abs_def rep_def split_beta'
by (rule someI_ex) (use Ex_type_definition_S in auto)
lemma rep_in_S[simp]: "rep x ∈ S"
and rep_inverse[simp]: "Abs (rep x) = x"
and Abs_inverse[simp]: "y ∈ S ⟹ rep (Abs y) = y"
using type_definition_S
unfolding type_definition_def by auto
definition cr_S where "cr_S ≡ λs b. s = rep b"
lemmas Domainp_cr_S = type_definition_Domainp[OF type_definition_S cr_S_def, transfer_domain_rule]
lemmas right_total_cr_S = typedef_right_total[OF type_definition_S cr_S_def, transfer_rule]
and bi_unique_cr_S = typedef_bi_unique[OF type_definition_S cr_S_def, transfer_rule]
and left_unique_cr_S = typedef_left_unique[OF type_definition_S cr_S_def, transfer_rule]
and right_unique_cr_S = typedef_right_unique[OF type_definition_S cr_S_def, transfer_rule]
lemma cr_S_rep[intro, simp]: "cr_S (rep a) a" by (simp add: cr_S_def)
lemma cr_S_Abs[intro, simp]: "a∈S ⟹ cr_S a (Abs a)" by (simp add: cr_S_def)
(* this part was sledgehammered - please do not pay attention to the
(absence of) proof style *)
lemma r1: "∀a. Abs'' (rep'' a) = a"
unfolding Abs''_def rep''_def comp_def
proof-
{
fix s'
note repS = rep_in_S[of s']
then have "∃x. x ∈ rep s'" using S equivp_reflp is_equivalence by force
then have "rep' (rep s') ∈ rep s'"
using repS unfolding rep'_def by (metis verit_sko_ex')
moreover with is_equivalence repS have "rep s' = {v. R (rep' (rep s')) v}"
by (smt CollectD S equivp_def)
ultimately have arr: "Abs' (rep' (rep s')) = rep s'"
unfolding Abs'_def by (smt repS some_sym_eq_trivial verit_sko_ex')
have "Abs (Abs' (rep' (rep s'))) = s'" unfolding arr by (rule rep_inverse)
}
then show "∀a. Abs (Abs' (rep' (rep a))) = a" by auto
qed
lemma r2: "∀a. R (rep'' a) (rep'' a)"
unfolding rep''_def rep'_def
using is_equivalence unfolding equivp_def by blast
lemma r3: "∀r s. R r s = (R r r ∧ R s s ∧ Abs'' r = Abs'' s)"
apply(intro allI)
apply standard
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def by auto
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def
by (smt Abs''_def Abs'_def CollectD S comp_apply local.Abs_inverse mem_Collect_eq someI_ex)
done
definition cr_Q where "cr_Q = (λx y. R x x ∧ Abs'' x = y)"
lemma quotient_Q: "Quotient R Abs'' rep'' cr_Q"
unfolding Quotient_def
apply(intro conjI)
subgoal by (rule r1)
subgoal by (rule r2)
subgoal by (rule r3)
subgoal by (rule cr_Q_def)
done
(* instantiate the quotient lemmas from the theory Lifting *)
lemmas Q_Quotient_abs_rep = Quotient_abs_rep[OF quotient_Q]
(*...*)
(* prove the statements about the quotient type 's *)
(*...*)
(* transfer the results back to 'a using the capabilities of transfer -
not demonstrated in the example *)
lemma aa: "(a::'a) = (a::'a)"
by auto
end
thm aa[cancel_type_definition]
(* this shows {x. ∃u. x = {v. R u v}} ≠ {} ⟹ ?a = ?a *)
end

How can I prove irreflexivity of an inductively defined relation in Isabelle?

Consider as an example the following definition of inequality of natural numbers in Isabelle:
inductive unequal :: "nat ⇒ nat ⇒ bool" where
zero_suc: "unequal 0 (Suc _)" |
suc_zero: "unequal (Suc _) 0" |
suc_suc: "unequal n m ⟹ unequal (Suc n) (Suc m)"
I want to prove irreflexivity of unequal, that is, ¬ unequal n n. For illustration purposes let me first prove the contrived lemma ¬ unequal (n + m) (n + m):
lemma "¬ unequal (n + m) (n + m)"
proof
assume "unequal (n + m) (n + m)"
then show False
proof (induction "n + m" "n + m" arbitrary: n m)
case zero_suc
then show False by simp
next
case suc_zero
then show False by simp
next
case suc_suc
then show False by presburger
qed
qed
In the first two cases, False must be deduced from the assumptions 0 = n + m and Suc _ = n + m, which is trivial.
I would expect that the proof of ¬ unequal n n can be done in an analogous way, that is, according to the following pattern:
lemma "¬ unequal n n"
proof
assume "unequal n n"
then show False
proof (induction n n arbitrary: n)
case zero_suc
then show False sorry
next
case suc_zero
then show False sorry
next
case suc_suc
then show False sorry
qed
qed
In particular, I would expect that in the first two cases, I get the assumptions 0 = n and Suc _ = n. However, I get no assumptions at all, meaning that I am asked to prove False from nothing. Why is this, and how can I conduct the proof of irreflexivity?
You are inducting over unequal. Instead, you should induct over n, like this:
lemma "¬ (unequal n n)"
proof (induct n)
case 0
then show ?case sorry
next
case (Suc n)
then show ?case sorry
qed
Then we can use Sledgehammer on each of the subgoals marked with sorry. Sledgehammer (with CVC4) suggests completing the proof as follows:
lemma "¬ (unequal n n)"
proof (induct n)
case 0
then show ?case using unequal.cases by blast
next
case (Suc n)
then show ?case using unequal.cases by blast
qed
The induction method handles variable instantiations and non-variable instantiations differently. A non-variable instantiation t is a shorthand for x ≡ t where x is a fresh variable. As a result, induction is done on x, and the context additionally contains the definition x ≡ t.
Therefore, (induction "n + m" "n + m" arbitrary: n m) in the first proof is equivalent to (induction k ≡ "n + m" l ≡ "n + m" arbitrary: n m) with the effect described above. To get this effect for the second proof, you have to replace (induction n n arbitrary: n) with (induction k ≡ n l ≡ n arbitrary: n). The assumptions will actually become so simple that the pre-simplifier, which is run by the induction method, can derive False from them. As a result, there will be no cases left to prove, and you can replace the whole inner proof–qed block with by (induction k ≡ n l ≡ n arbitrary: n).
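Spelled out, the resulting proof reads as follows (a sketch of the suggestion above; k and l are just fresh names for the two instantiated positions):
lemma "¬ unequal n n"
proof
assume "unequal n n"
then show False by (induction k ≡ n l ≡ n arbitrary: n)
qed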

Free type variables in proof by induction

While trying to prove lemmas about functions in continuation-passing style by induction I have come across a problem with free type variables. In my induction hypothesis, the continuation is a schematic variable but its type involves a free type variable. As a result Isabelle is not able to unify the type variable with a concrete type when I try to apply the i.h. I have cooked up this minimal example:
fun add_k :: "nat ⇒ nat ⇒ (nat ⇒ 'a) ⇒ 'a" where
"add_k 0 m k = k m" |
"add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))"
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
proof(rule, induction n)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))" by simp
also have "… = k (Suc (add_k n m id))"
using Suc[where k="(λn'. k (Suc n'))"] by metis
also have "… = k (add_k n m (λn'. Suc n'))"
using Suc[where k="(λn'. Suc n')"] sorry (* Type unification failed *)
also have "… = k (add_k (Suc n) m id)" by simp
finally show ?case .
qed
In the "sorry step", the explicit instantiation of the schematic variable ?k fails with
Type unification failed
Failed to meet type constraint:
Term: Suc :: nat ⇒ nat
Type: nat ⇒ 'a
since 'a is free and not schematic. Without the instantiation the simplifier fails anyway and I couldn't find other methods that would work.
Since I cannot quantify over types, I don't see any way to make 'a schematic inside the proof. When a term variable becomes schematic locally inside a proof, why isn't this the case for the variables in its type too? After the lemma has been proved, they become schematic at the theory level anyway. This seems quite limiting. Could an option to do this be implemented in the future, or is there some inherent limitation? Alternatively, is there an approach that avoids this problem while still keeping the continuation schematically polymorphic in the proven lemma?
Free type variables become schematic in a theorem when the theorem is exported from the block in which the type variables have been fixed. In particular, you cannot quantify over type variables in a block and then instantiate the type variable within the block, as you are trying to do in your induction. Arbitrary quantification over types leads to inconsistencies in HOL, so there is little hope that this could be changed.
Fortunately, there is a way to prove your lemma in CPS style without type quantification. The problem is that your statement is not general enough, because it contains id. If you generalise it, then the proof works:
lemma add_k_cps: "add_k n m (k ∘ f) = k (add_k n m f)"
proof(induction n arbitrary: f)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m (k ∘ f) = add_k n m (k ∘ (λn'. f (Suc n')))" by(simp add: o_def)
also have "… = k (add_k n m (λn'. f (Suc n')))"
using Suc.IH[where f="(λn'. f (Suc n'))"] by metis
also have "… = k (add_k (Suc n) m f)" by simp
finally show ?case .
qed
You get your original theorem back if you choose f = id.
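For instance, the original statement could be recovered along these lines (a sketch: f is instantiated with id, and simp removes the composition with id):
corollary "add_k n m k = k (add_k n m id)"
using add_k_cps[of n m k id] by simp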
This is an inherent limitation of how induction works in HOL. Induction is a rule in HOL, so it is not possible to generalize any types in the induction hypothesis.
A specialized solution for your problem is to first prove
lemma add_k_cps_nat: "add_k n m k = k (n + m)"
by (induction n arbitrary: m k) auto
and then prove add_k_cps.
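Concretely, the second step might look like this (a sketch; the primed name is arbitrary, and the statement is the generalised one from the answer above, now proved by rewriting both sides with add_k_cps_nat):
lemma add_k_cps': "add_k n m (k ∘ f) = k (add_k n m f)"
by (simp add: add_k_cps_nat)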
A general approach is: first prove instances for fixed types, for which the induction works; in the example this is an induction over nat. Then derive a proof that is generalized in the type itself.

How to use obtain in existential proofs?

I tried to prove an existential theorem
lemma "∃ x. x * (t :: nat) = t"
proof
obtain y where "y * t = t" by (auto)
but I could not finish the proof. So I have the necessary y but how can I feed it into the original goal?
Soundness of natural deduction requires that you get hold of the witness before you open the existential quantifier. This is why you are not allowed to use obtained variables in show statements. In your example, the proof step implicitly applies the rule exI. This turns the existentially quantified variable x into the schematic variable ?x, which can be instantiated later, but the instantiation may only refer to variables that were already in scope when ?x was introduced. In the low-level proof state, obtained variables are meta-quantified (!!), and the instantiation for ?x may only refer to variables that appear as parameters of ?x.
Therefore, you have to switch the order in your proof:
lemma "∃ x. x * (t :: nat) = t"
proof - (* method - does not change the goal *)
obtain y where "y * t = t" by (auto)
then show ?thesis by(rule exI)
qed
You can give the witness (i.e. the element you want to put in for x) in the show clause:
lemma "∃ x. x * (t :: nat) = t"
proof
show "1*t = t" by simp
qed
Alternatively, when you already know the witness (1 or Suc 0 here), you can explicitly instantiate the rule exI to introduce the existential term:
lemma "∃ x. x * (t :: nat) = t"
by (rule exI[where x = "Suc 0"], simp)
Here, the existential quantifier introduction rule exI (inspect it with thm exI) is
?P ?x ⟹ ∃x. ?P x
and you can explore and instantiate it gradually towards the desired witness.
thm exI[where x = "Suc 0"] is:
?P (Suc 0) ⟹ ∃x. ?P x
and exI[where P = "λ x. x * t = t" and x = "Suc 0"] is
Suc 0 * t = t ⟹ ∃x. x * t = t
And Suc 0 * t = t is only one simplification (simp) away. But the system can figure out the last instantiation P = "λ x. x * t = t" via unification, so it isn't really necessary.
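For completeness, the fully instantiated rule can also be applied directly, as a variant of the one-liner above:
lemma "∃ x. x * (t :: nat) = t"
by (rule exI[where P = "λ x. x * t = t" and x = "Suc 0"], simp)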
Related:
Instantiating theorems in Isabelle

Resources