What is the difference between universal quantifiers and meta-universal quantifiers? - isabelle

What is the main difference between the universal quantifier of first-order logic (the symbol ∀) and the meta-universal quantifier of the meta-logic (the symbol ⋀)?
For the following two lemmas, the first one, which uses the universal quantifier, proves successfully, while the second one, which uses the meta-universal quantifier, does not:
lemma "∀ x. P x ⟹ P 0"
apply simp
done
lemma "⋀ x. P x ⟹ P 0"
oops

Isabelle is a generic framework for interactive theorem proving. Its meta-logic Isabelle/Pure allows one to define a broad range of object-logics, one of them being Isabelle/HOL. As you already hinted, the symbol ∀ is Isabelle/HOL's universal quantifier and the symbol ⋀ is Isabelle/Pure's universal quantifier. Also, the symbol ⟹ is Isabelle/Pure's implication. The operator precedence rules state that ⋀ has lower precedence than ⟹, and that ∀ has higher precedence than ⋀ and ⟹. Therefore ⋀ x. P x ⟹ P 0 is actually parsed as ⋀ x. (P x ⟹ P 0) (which clearly doesn't hold) instead of (⋀ x. P x) ⟹ P 0, so you need to explicitly parenthesize the proposition ⋀ x. P x. Then, your lemma can be trivially proved using the usual elimination rule for ⋀ in natural deduction, as follows:
lemma "(⋀ x. P x) ⟹ P 0"
by (rule meta_spec)
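For comparison, here is the first lemma again with its implicit parenthesization made explicit; as explained above, this is exactly how Isabelle parses it, and it is provable just as before (a quick sanity check):
lemma "(∀ x. P x) ⟹ P 0"
  by simp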
To avoid this kind of nuance, I'd suggest adopting the Isabelle/Isar style for stating your lemmas, namely the following:
lemma
  assumes "⋀ x. P x"
  shows "P 0"
  using assms by (rule meta_spec)
Please refer to Programming and Proving in Isabelle/HOL and The Isabelle/Isar Reference Manual for more information.

Related

Generalize a claim in a structural induction proof to be able to use the induction hypothesis

I want to prove the following
lemma
  fixes pi :: "'a path" and T :: "'a ts"
  shows "valid_path T pi s ⟹ ∀ op ∈ set pi. valid_operator T op"
by induction on pi, where valid_path is defined as
fun valid_path :: "'a ts ⇒ 'a path ⇒ 'a state ⇒ bool" where
  "valid_path T [] s = True" |
  "valid_path T (op#ops) s = (valid_operator T op ∧ valid_path T ops (effect op s))"
and path is just a type synonym for an operator list.
The other definitions should not play a role for the proof.
The base case works fine.
The problem is that, informally, for the inductive step where pi = (x # xs) I'm assuming that
if valid_path T xs s
then ∀ op ∈ set xs. valid_operator T op
and I must show that this implies
if valid_path T (x#xs) s
then ∀ op ∈ set (x#xs). valid_operator T op
I can use the definition of valid_path here, so this last expression is equivalent to
if valid_path T (xs) (effect x s)
then ∀ op ∈ set (x#xs). valid_operator T op
If I were able to use the induction hypothesis on valid_path T (xs) (effect x s), I would be done.
I can't since the hypothesis only holds for valid_path T (xs) s instead of valid_path T xs (effect x s).
But this should not really matter, since the conclusion of that statement does not depend on s at all!
But Isabelle does not know that so it complains.
How can I make it such that I can apply the inductive hypothesis on valid_path T (xs) (effect x s)?
I have a feeling that I have to make the claim more general, so that I can use the hypothesis in the proof, but I don't know how.
It is very common that you have to generalize some terms in an induction. Use the keyword arbitrary in the induct method.
proof (induct pi arbitrary: s)
This is explained in Chapter 2.4 of Programming and Proving in Isabelle/HOL.
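For a self-contained illustration of why the generalization is needed, here is a sketch based on the well-known itrev example (tail-recursive list reversal, discussed in that chapter, if I remember correctly); the second argument changes in the recursive call, so it has to be generalized with arbitrary, just like s in your lemma:
fun itrev :: "'a list ⇒ 'a list ⇒ 'a list" where
  "itrev [] ys = ys" |
  "itrev (x # xs) ys = itrev xs (x # ys)"

(* Without "arbitrary: ys" the induction hypothesis fixes ys and the Cons case
   gets stuck; with it, the hypothesis is available for every ys, in particular
   for "x # ys". *)
lemma "itrev xs ys = rev xs @ ys"
  by (induct xs arbitrary: ys) auto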

How to fix "partially applied constant on left hand side of code equation"?

I'm trying to define the code equation:
datatype t = A | B | C
inductive less_t :: "t ⇒ t ⇒ bool" where
"less_t A B"
| "less_t B C"
code_pred [show_modes] less_t .
fun less_t_fun :: "t ⇒ t ⇒ bool" where
"less_t_fun A A = False"
| "less_t_fun A B = True"
| "less_t_fun A C = True"
| "less_t_fun B C = True"
| "less_t_fun B _ = False"
| "less_t_fun C _ = False"
lemma tancl_less_t_code [code]:
"less_t⇧+⇧+ x y ⟷ less_t_fun x y"
apply (rule iffI)
apply (erule tranclp_trans_induct)
apply (erule less_t.cases; simp)
apply (metis less_t_fun.elims(2) less_t_fun.simps(3) t.simps(4))
apply (induct rule: less_t_fun.induct; simp)
using less_t.intros apply auto
done
value "less_t A B"
value "less_t_fun A C"
value "less_t⇧+⇧+ A C"
And get the following warning:
Partially applied constant "less_t" on left hand side of equation, in theorem:
less_t⇧+⇧+ ?x ?y ≡ less_t_fun ?x ?y
This question is unrelated to transitive closures. I already received such a warning for different theorems:
Partially applied constant on left hand side of code equation
How to use different code lemmas for different modes of inductive predicate?
I just need to understand the meaning of this warning and how to fix it. Maybe I should define a different lemma?
The problem is that the structure of your lemma tancl_less_t_code is indeed not suitable for a code equation. Note that the outermost constant on the left-hand side of the equation is the transitive closure predicate tranclp. So this tells the code generator to use the lemma to implement tranclp. However, from your lemma one only knows how to implement tranclp for one specific predicate, namely less_t. Therefore you get the complaint from Isabelle that your implementation is too specific.
There are at least two workarounds.
First, instead of the declaration [code], you can use [code_unfold]. Then every occurrence of tranclp less_t x y will be replaced by less_t_fun x y during code generation. To make this rule even more applicable, I would then reformulate the lemma to tranclp less_t = less_t_fun, so that the unfolding can happen even if tranclp less_t is not fully applied.
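Here is a sketch of that reformulation (the lemma name is my own; lifting the pointwise lemma to the function level via fun_eq_iff is just one way to do it):
lemma tranclp_less_t_unfold [code_unfold]:
  "less_t⇧+⇧+ = less_t_fun"
  by (simp add: fun_eq_iff tancl_less_t_code)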
Second, you can take the symmetric version of your lemma and declare it as [simp]. Then in your implementation you just invoke less_t_fun instead of tranclp less_t, and in the proofs the simplifier will switch to the latter.
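A sketch of this second variant (again, the lemma name is my own):
lemma less_t_fun_conv [simp]:
  "less_t_fun x y ⟷ less_t⇧+⇧+ x y"
  by (simp add: tancl_less_t_code)
Executable definitions then call less_t_fun directly, and in proofs the simp rule rewrites it back to less_t⇧+⇧+.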
For more information on [code] and [code_unfold], have a look at the documentation of the code generator.

Negation of set membership, equality

I noticed that Isabelle automatically simplifies ¬ (a ∈ (- A)) and ¬ (x = y) to a ∉ - A and x ≠ y, respectively.
Here is a proof that is simple to do with pen and paper in natural deduction but fails in Isabelle. In the second line, ¬ (a ∈ (- A)) is simplified to a ∉ - A. From the latter, we cannot apply ComplD. Why is that?
lemma "- (- A) ⊆ (A::'a set)"
proof
fix a assume "a ∈ - (- A)"
hence "¬ (a ∈ (- A))" by (rule ComplD)
hence "¬ (¬ (a ∈ A))" by (rule ComplD) (* fail! *)
thus "a ∈ A" by (rule notnotD)
qed
Is there a way to go back to the non-simplified expression?
Of course, the lemma can be proved in one line by simp. But my purpose is to explicitly use natural deduction rules (for teaching).
These are not simplifications. If you look at the definitions of a ≠ b and x ∉ A in Isabelle (e.g. by ctrl-clicking on the symbols), you will find that they are simply abbreviations for ¬ (a = b) and ¬ (x ∈ A). The statements are internally represented exactly the way you wrote them above; they are just printed differently for increased readability.
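You can check this with two one-line lemmas: since the negated forms are mere abbreviations, reflexivity alone proves the equations (a quick sanity check, not needed for your actual proof):
lemma "(x ∉ A) = (¬ (x ∈ A))"
  by (rule refl)

lemma "(a ≠ b) = (¬ (a = b))"
  by (rule refl)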
The reason why you cannot apply the rule ComplD is that it simply does not match. ComplD says that ?c ∈ - ?A ⟹ ?c ∉ ?A. However, in the failing step, your assumption is a ∉ - A, and that cannot be unified with the premise ?c ∈ - ?A of ComplD, so the rule application fails.
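To see the mismatch for yourself, you can print the rule with the thm command:
thm ComplD  (* ?c ∈ - ?A ⟹ ?c ∉ ?A *)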
I am relatively certain that you will need classical reasoning for this proof since your statement does not hold intuitionistically. This means you will have to do a proof by contradiction, e.g. like this:
lemma "- (- A) ⊆ (A::'a set)"
proof
fix a assume a: "a ∈ - (- A)"
show "a ∈ A"
proof (rule ccontr)
assume "a ∉ A"
have "a ∈ -A"
proof (rule ComplI)
assume "a ∈ A"
with ‹a ∉ A› show False by contradiction
qed
moreover from a have "a ∉ -A" by (rule ComplD)
ultimately show False by contradiction
qed
qed
The rule ccontr in there starts the proof by contradiction; the proof method contradiction is merely a nice way to derive anything when one has proven a fact and its negation.

How to fix "Illegal schematic variable(s)" in mutually recursive rule induction?

In Isabelle, I'm trying to do rule induction on mutually recursive inductive definitions. Here's the simplest example I was able to create:
theory complex_exprs
imports Main
begin
datatype A = NumA int
| AB B
and B = NumB int
| BA A
inductive eval_a :: "A ⇒ int ⇒ bool" and eval_b :: "B ⇒ int ⇒ bool" where
eval_num_a: "eval_a (NumA i) i" |
eval_a_b: "eval_b b i ⟹ eval_a (AB b) i" |
eval_num_b: "eval_b (NumB i) i" |
eval_b_a: "eval_a a i ⟹ eval_b (BA a) i"
lemma foo:
assumes "eval_a a result"
shows "True"
using assms
proof (induction a)
case (NumA x)
show ?case by auto
case (AB x)
At this point, Isabelle stops with 'Illegal schematic variable(s) in case "AB"'. Indeed the current goal is ⋀x. ?P2.2 x ⟹ eval_a (AB x) result ⟹ True which contains the assumption ?P2.2 x. Is that the 'schematic variable' Isabelle is talking about? Where does it come from, and how can I get rid of it?
I get the same problem if I try to do the induction on the rules:
proof (induction)
case (eval_num_a i)
show ?case by auto
case (eval_a_b b i)
Again, the goal is ⋀b i. eval_b b i ⟹ ?P2.0 b i ⟹ True with the unknown ?P2.0 b i, and I can't continue.
As a related question: I tried to do the induction using
proof (induction rule: eval_a_eval_b.induct)
but Isabelle doesn't accept this, saying 'Failed to apply initial proof method'.
How do I make this induction go through? (In my actual application, I do actually need induction because the goal is more complex than True.)
Proofs about mutually recursive definitions, be they datatypes, functions or inductive predicates, must be mutually recursive themselves. However, in your lemma, you only state the inductive property for eval_a, but not for eval_b. In the case for AB, you obviously want to use the induction hypothesis for eval_b, but as the lemma does not state the inductive property for eval_b, Isabelle does not know what it is. So it leaves it as a schematic variable ?P2.0.
So, you have to state two goals, say
lemma
  shows "eval_a a result ⟹ True"
    and "eval_b b result ⟹ True"
Then, the method induction a and b will figure out that the first statement corresponds to A and the second to B.
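For concreteness, here is a sketch of how the whole mutual induction could look; the case names follow the constructors, and since the conclusion is just True, every case is trivial:
lemma
  shows "eval_a a result ⟹ True"
    and "eval_b b result ⟹ True"
proof (induction a and b)
  case (NumA x)
  then show ?case by simp
next
  case (AB x)
  (* with a non-trivial property, this case would use its induction hypothesis about x *)
  then show ?case by simp
next
  case (NumB x)
  then show ?case by simp
next
  case (BA x)
  then show ?case by simp
qed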
The induction rule for the inductive predicates fails because this rule eliminates the inductive predicate (induction over datatypes only "eliminates" the type information, but this is not a HOL formula) and it cannot find the assumption for the second inductive predicate.
More examples on induction over mutually recursive objects can be found in src/HOL/Induct/Common_Patterns.thy.

How to prove basic facts about datatypes and codatatypes?

I would like to prove some basic facts about a datatype_new and a codatatype: that the former does not have an infinite element, and that the latter does have one.
theory Co
imports BNF
begin
datatype_new natural = Zero | Successor natural
lemma "¬ (∃ x. x = Successor x)"
oops
codatatype conat = CoZero | CoSucc conat
lemma "∃ x. x = CoSucc x"
oops
The problem was that I could not come up with a pen-and-paper proof, let alone a proof script.
An idea for the first was to use the size function, which has a theorem
size (Successor ?natural) = size ?natural + Suc 0
and the idea was that, since size is a function, applying it to both sides of the original equation would yield a natural number equal to its own successor, which is impossible. But I do not see how I could formalise this.
For the latter I did not even have an idea how to derive this theorem from the facts that the codatatype package proves.
How can I prove these?
Personally, I don't know the first thing about codatatypes. But let me try to help you nevertheless.
The first lemma you posted can be proven automatically by sledgehammer. It finds a proof using the size function, effectively reducing the problem on natural to the same problem on nat:
by (metis Scratch.natural.size(2) n_not_Suc_n nat.size(4) size_nat)
If you want a very basic, step-by-step version of this proof, you could write it like this:
lemma "¬(∃x. x = Successor x)"
proof clarify
fix x assume "x = Successor x"
hence "size x = size (Successor x)" by (rule subst) (rule refl)
also have "... = size x + Suc 0" by (rule natural.size)
finally have "0 = Suc 0" by (subst (asm) add_0_iff) (rule sym)
moreover have "0 ≠ Suc 0" by (rule nat.distinct(1))
ultimately show False by contradiction
qed
If you want a more “elementary” proof, without the use of HOL natural numbers, you can do a proof by contradiction using induction on your natural:
lemma "¬(∃x. x = Successor x)"
proof clarify
fix x assume "x = Successor x"
thus False by (induction x) simp_all
qed
You basically get the two cases in the induction:
Zero = Successor Zero ⟹ False
⋀x. (x = Successor x ⟹ False) ⟹
Successor x = Successor (Successor x) ⟹ False
The first subgoal is a direct consequence of natural.distinct(1), the second one can be reduced to the induction hypothesis using natural.inject. Since these rules are in the simpset, simp_all can solve it automatically.
As for the second lemma, the only solution I can think of is to explicitly construct the infinite element using primcorec:
primcorec infinity :: conat where
"infinity = CoSucc infinity"
Then you can prove your second lemma simply by unfolding the definition:
lemma "∃x. x = CoSucc x"
proof
show "infinity = CoSucc infinity" by (rule infinity.ctr)
qed
Caveat: these proofs work, but I am not sure whether they are the easiest and/or most elegant solution to this problem. I have virtually no knowledge of codatatypes or the new datatype package.
