How to use lambda expression in Isabelle/HOL?

In my exercise to learn Isabelle/HOL syntax, I tried to prove the toy lemma below. It's about lambda expressions (and things like the "Greatest" notation that takes a predicate as input). The intended content of the lemma is that "the greatest natural number that is ≤ 1 is 1".
lemma "1 = Greatest (λ x::nat. x ≤ 1)"
proof -
show ?thesis
by auto
qed
However, the above proof works with neither auto nor simp, and generates the following message:
Failed to finish proof:
goal (1 subgoal):
1. Suc 0 = (GREATEST x. x ≤ Suc 0)
Can someone help explain what went wrong with the statement or how to prove this correctly (if the statement is correct)?

There is nothing wrong with the lemma; it's just that none of the rules for Greatest are declared in such a way that auto knows about them. That is probably good, because these kinds of rules tend to mess with automation a lot.
You can prove your statement using e.g. the rule Greatest_equality:
lemma "1 = Greatest (λ x::nat. x ≤ 1)"
proof -
have "(GREATEST (x::nat). x ≤ 1) = 1"
by (rule Greatest_equality) auto
thus ?thesis by simp
qed
You can find rules like this using the Query panel in Isabelle/jEdit or the find_theorems command by searching for the constant Greatest.
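For example, either of the following queries (run in any theory importing Main) should list Greatest_equality among the results:
find_theorems "Greatest"
find_theorems name: "Greatest_equality"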
If the GREATEST thing confuses you, the syntax GREATEST x. P x is just fancy syntax for Greatest (λx. P x). Such notation is fairly standard in Isabelle, we also have ∃x. P x for Ex (λx. P x) etc.
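If you want to convince yourself that the two spellings denote the same term, this trivial sanity check is closed by refl, because both sides are literally identical after parsing:
lemma "(GREATEST x. x ≤ (1::nat)) = Greatest (λx. x ≤ (1::nat))"
  by (rule refl)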

How do you print local variables and ?thesis in an Isabelle proof (debugging in Isabelle)?

I sometimes find it hard to use Isabelle because I cannot have a "print command" like in normal programming.
For example, I want to see what ?thesis is. The Concrete Semantics book says:
The unknown ?thesis is implicitly matched against any goal stated by lemma or show. Here is a typical example:
My silly sample FOL proof is:
lemma
assumes "(∃ x. ∀ y. x ≤ y)"
shows "(∀x. ∃ y. y ≤ x)"
proof (rule allI)
show ?thesis
but I get the error:
proof (state)
goal (1 subgoal):
1. ⋀x. ∃y. y ≤ x
Failed to refine any pending goal
Local statement fails to refine any pending goal
Failed attempt to solve goal by exported rule:
∀x. ∃y. y ≤ x
but I do not know why.
I expected
?thesis === ⋀x. ∃y. y ≤ x
since my proof state is:
proof (state)
goal (1 subgoal):
1. ⋀x. ∃y. y ≤ x
Why can't I print ?thesis?
It's really annoying to have to write out the statement I'm trying to prove when it's obvious. Perhaps it's meant to be explicit, but in the examples in chapter 5 they get away with using ?thesis in:
lemma fixes a b :: int assumes "b dvd (a+b)" shows "b dvd a"
proof -
  have "∃k'. a = b*k'" if asm: "a+b = b*k" for k
  proof
    show "a = b*(k - 1)" using asm by (simp add: algebra_simps)
  qed
  then show ?thesis using assms by (auto simp add: dvd_def)
qed
but whenever I try to use ?thesis I always fail.
Why is it?
Note that this does work:
lemma
assumes "(∃ x. ∀ y. x ≤ y)"
shows "(∀x. ∃ y. y ≤ x)"
proof (rule allI)
show "⋀x. ∃y. y ≤ x" proof -
but I thought ?thesis was there to avoid this.
Also, thm ?thesis didn't work either.
Another example is when I use:
let ?ys = "take k1 xs"
but I can't print the value of ?ys.
TODO:
why doesn't:
lemma "length(tl xs) = length xs - 1"
thm (cases xs)
show anything? (same if you replace cases with induction).
You can find ?thesis and other schematic variables in the Print Context panel:
As for why ?thesis doesn't work: by applying the introduction rule with proof (rule allI) you are changing the goal, so it no longer matches ?thesis. The example in the book uses proof -, which prevents Isabelle from applying any introduction rule.
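To make that concrete, here is a minimal sketch of the example repaired with proof -, so that the initial goal stays untouched and ?thesis still matches (blast is just one method that happens to close each step):
lemma
  assumes "(∃x. ∀y. x ≤ y)"
  shows "(∀x. ∃y. y ≤ x)"
proof -
  from assms obtain x where "∀y. x ≤ y" by blast
  then show ?thesis by blast
qed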
It seems I asked a very similar question worth pointing to: What is the best way to search through general definitions, theorems, functions, etc for Isabelle?
But here is a list of things I've learned so far:
thm: works for definitions, lemmas, and functions. For a definition named name, use thm name_def. For functions, thm f.simps shows all defining equations of f, and thm f.simps(1) shows just the first one. For lemmas, use thm lemma_name, e.g. thm impI or thm HOL.mp.
term: for terms, use term term_name, e.g. in Isar term ?thesis or term this.
print_theorems: if you place this after a definition or a function it shows all the theorems defined for those! It's amazing. (See the short demo at the end of this answer.)
print...: I just noticed that in jEdit, if you let auto-complete show you the completions for print, there are a bunch of print_* commands. Probably useful!
Search engine for Isabelle: https://search.isabelle.in.tum.de/
You can use Query (TODO: improve this)
TODO: how to find good way to display stuff about tactics.
I plan to update this as I learn all the ways to debug in Isabelle.
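Here is the short demo promised above, as a self-contained scratch theory (the function double is made up for illustration; the comments paraphrase the expected output):
theory Debug_Demo
imports Main
begin

fun double :: "nat ⇒ nat" where
  "double 0 = 0" |
  "double (Suc n) = Suc (Suc (double n))"

print_theorems        (* lists double.simps, double.induct, double.elims, ... *)
thm double.simps(2)   (* double (Suc ?n) = Suc (Suc (double ?n)) *)
term "double 3"       (* prints the term and its type nat *)

end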

How to prove this simple theorem in Isabelle?

I define a very simple function replace which replaces 1 with 0 while preserving other input values. I want to prove that the output of the function cannot be 1. How to achieve this?
Here's the code.
theory Question
imports Main
begin
fun replace :: "nat ⇒ nat" where
"replace (Suc 0) = 0" |
"replace x = x"
theorem no1: "replace x ≠ (Suc 0)"
sorry
end
Thanks!
There are several approaches to proving the statement that you are trying to prove.
You can make an attempt to use sledgehammer to find the proof automatically, e.g.
theorem no1: "replace x ≠ (Suc 0)"
  sledgehammer
  (*using replace.elims by blast*)
Once the proof is found, you can delete the explicit invocation of the command sledgehammer.
Perhaps a slightly better way to state the proof found by sledgehammer would be
theorem no1': "replace x ≠ (Suc 0)"
by (auto elim: replace.elims)
You can also try to provide a more specialized proof. For example,
theorem no1: "replace x ≠ (Suc 0)"
by (cases x rule: replace.cases) simp_all
This proof looks at the different cases the value of x can have and then uses the simplifier (together with the simp rules that the command fun generated when your function was defined) to finish the proof. You can see all theorems that are generated by the command fun by typing print_theorems immediately after the specification of replace, e.g.
fun replace :: "nat ⇒ nat" where
"replace (Suc 0) = 0" |
"replace x = x"
print_theorems
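Yet another option is induction with the rule that fun generated (the theorem name no1'' is arbitrary):
theorem no1'': "replace x ≠ (Suc 0)"
  by (induction x rule: replace.induct) simp_all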
Of course, there are other ways to prove the result that you are trying to prove. One good way to improve your ability to find such proofs is by reading the documentation and tutorials on Isabelle. My own starting point for learning Isabelle was the book "Concrete Semantics" by Tobias Nipkow and Gerwin Klein.

Isabelle - exI and refl behavior explanation needed

I am trying to understand the lemma below.
Why is the ?y2 schematic variable introduced in exI?
And why it is not considered in refl (so: x = x)?
lemma "∀x. ∃y. x = y"
apply(rule allI) (* ⋀x. ∃y. x = y *)
thm exI (* ?P ?x ⟹ ∃x. ?P x *)
apply(rule exI) (* ⋀x. x = ?y2 x *)
thm refl (* ?t = ?t *)
apply(rule refl)
done
UPDATE (because I can't format code in comments):
This is the same lemma with a different proof, using simp.
lemma "∀x. ∃y. x = y"
using [[simp_trace, simp_trace_depth_limit = 20]]
apply (rule allI) (*So that we start from the same problem state. *)
apply (simp only:exI)
done
The trace shows:
[0]Adding rewrite rule "HOL.exI":
?P1 ?x1 ⟹ ∃x. ?P1 x ≡ True
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
⋀x. ∃y. x = y
[1]Applying instance of rewrite rule "HOL.exI":
?P1 ?x1 ⟹ ∃x. ?P1 x ≡ True
[1]Trying to rewrite:
x = ?x1 ⟹ ∃xa. x = xa ≡ True <-- NOTE: not ?y2 xa or similar!
[2]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
x = ?x1
[1]SUCCEEDED
∃xa. x = xa ≡ True
So apparently simp and rule handle exI differently. And the remaining question is: what is the mechanical (programmatic) reasoning behind rule's behavior?
When you use rule thm for some fact thm, Isabelle performs higher-order unification of the conclusion of thm with the current goal. If there is a unifier, it is used to instantiate both the goal and the conclusion of the theorem, and then resolution is performed (i.e. the goal is replaced with the assumptions of thm).
This means that:
Schematic variables in the goal can be instantiated by rule through unification
Variables that appear only in the assumptions of thm will not be instantiated by the unification and will therefore remain schematic. That way, you end up with schematic variables in your new goals. Such variables can be seen as existential in some sense, because the conclusion of thm holds if you can prove the assumptions for just one arbitrary value.
In the case of exI, you have ?P ?x ⟹ ∃x. ?P x. When you apply rule exI, the variable ?P is instantiated to λy. x = y, but the variable ?x appears only in the assumptions of exI, so it remains schematic. This means that you can pick any value you want for ?x later on in your proof.
To be more precise, you end up with ⋀x. x = ?y2 x as your goal. You might ask ‘Why not just ⋀x. x = ?y2?’ That would mean that you have to show that x equals some fixed value y2 for all possible values of x. That is obviously not true in general. ⋀x. x = ?y2 x means you have to show that every x equals some y2 that may depend on x – or, equivalently, that there is a function y2 that, when given x, outputs x.
Of course, there is such a function and it is simply the identity function λx. x. That is precisely what ?y2 gets instantiated to when you apply rule refl: the goal x = ?y2 x is unified with the conclusion of refl ?t = ?t and you end up with ?t = x and ?y2 = λx. x, and since refl has no assumptions, this resolution finishes the proof.
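If you prefer to supply the witness yourself rather than leaving it schematic, you can instantiate exI explicitly; a small variant of the proof (rule_tac allows the instantiation to mention the goal parameter x):
lemma "∀x. ∃y. x = y"
  apply (rule allI)
  apply (rule_tac x = x in exI)  (* choose the witness y := x explicitly *)
  apply (rule refl)
  done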
I am not entirely sure what you mean with ‘And why it is not considered in refl?’, but I hope that I have answered your questions.
You may get a more complete answer from an expert, but I'll give a short, brief answer to your second part.
The great thing about Isabelle is that it provides many different ways to prove a problem.
Your new question is similar to L.Paulson's comment on FOM: you moved the goal post by switching the question to rule vs. simp:
http://www.cs.nyu.edu/pipermail/fom/2015-October/019312.html
Getting a basic understanding of simp is actually a much easier goal to pursue, or I wouldn't be adding my response here.
rule and natural deduction
The use of rule is the use of natural deduction (ND), and most people aren't up to speed on ND. Using ND requires understanding ND, so questions like your first one can lead to a non-simple answer, because anything informative can't be a one-liner, especially due to things like schematic variables (which you asked about), resolution, unification, rewriting, etc.
Do a search on natural deduction and you'll find the standard wiki page about it. There are numerous books on natural deduction, though they get swamped in searches on "logic" due to first-order logic books. A popular book is Logic in Computer Science, 2nd, by Huth and Ryan.
If you study ND, you'll see that exI matches one of the ND rules.
I have yet to take the time to come up to speed on ND, because I keep making progress without having more than a basic understanding of ND.
Sledgehammer, and auto-methods auto, simp, blast, induct, cases, etc., and Sledgehammer's use of some of those, keep me from finding the time to become good with natural deduction.
Answers like M.Eberl's, though not simple explanations, help me absorb a little here and a little there.
Simp, I think of it as simple substitution (rewriting)
The mechanics behind simp are really simple, compared to natural deduction. You define a formula and prove it:
lemma foo [simp]: "left_hand_side = right_hand_side"
In the proof of another theorem, when simp is invoked in one way or another, or foo is unfolded, where there is left_hand_side, it's replaced with right_hand_side. It's just classic mathematical substitution.
I suppose it could also be "rewriting", but I don't know anything about rewriting, other than they talk about it.
There are lots of details about how and whether one should set things up automatically (to prevent looping), like with [simp] or declare foo_def [simp], but that's just details along the lines of normal programming.
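To make the substitution idea concrete, here is a tiny self-contained illustration (dbl and the lemma names are made up for this sketch):
definition dbl :: "nat ⇒ nat" where
  "dbl n = n + n"

lemma dbl_0 [simp]: "dbl 0 = 0"
  by (simp add: dbl_def)

lemma "dbl 0 + 2 = 2"
  by simp  (* dbl 0 is rewritten to 0 by dbl_0, then 0 + 2 to 2 *)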

How to replace ⋀ and ⟹ with ∀ and ⟶ in assumption

I'm an Isabelle newbie, and I'm a little (actually, a lot) confused about the relationship between ⋀ and ∀, and between ⟹ and ⟶.
I have the following goal (which is a highly simplified version of something that I've ended up with in a real proof):
⟦⋀x. P x ⟹ P z; P y⟧ ⟹ P z
which I want to prove by specialising x with y to get ⟦P y ⟹ P z; P y⟧ ⟹ P z, and then using modus ponens. This works for proving the very similar-looking:
⟦∀x. P x ⟶ P z; P y⟧ ⟹ P z
but I can't get it to work for the goal above.
Is there a way of converting the former goal into the latter? If not, is this because they are logically different statements, in which case can someone help me understand the difference?
That the two premises !!x. P x ==> P y and ALL x. P x --> P y are logically equivalent can be shown by the following proof
lemma
"(⋀x. P x ⟹ P y) ≡ (Trueprop (∀x. P x ⟶ P y))"
by (simp add: atomize_imp atomize_all)
When I tried the same kind of reasoning for your example proof I ran into a problem however. I intended to do the following proof
lemma
"⟦⋀x. P x ⟹ P z; P y⟧ ⟹ P z"
apply (subst (asm) atomize_imp)
apply (unfold atomize_all)
apply (drule spec [of _ y])
apply (erule rev_mp)
apply assumption
done
but at unfold atomize_all I get
Failed to apply proof method:
When trying to explicitly instantiate the lemma I get a clearer error message, i.e.,
apply (unfold atomize_all [of "λx. P x ⟶ P z"])
yields
Type unification failed: Variable 'a::{} not of sort type
This I find strange, since as far as I know every type variable should be of sort type. We can solve this issue by adding an explicit sort constraint:
lemma
"⟦⋀x::_::type. P x ⟹ P z; P y⟧ ⟹ P z"
Then the proof works as shown above.
Cutting a long story short: I usually work with Isar structured proofs instead of apply scripts; then such issues are often avoided. For your statement I would actually do
lemma
"⟦⋀x. P x ⟹ P z; P y⟧ ⟹ P z"
proof -
assume *: "⋀x. P x ⟹ P z"
and **: "P y"
from * [OF **] show ?thesis .
qed
Or maybe more idiomatic
lemma
assumes *: "⋀x. P x ⟹ P z"
and **: "P y"
shows "P z"
using * [OF **] .
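As a brief aside on the * [OF **] step above: fact1 [OF fact2] resolves the first premise of fact1 against fact2; here the premise P x of * unifies with ** (instantiating x to y), leaving the conclusion P z, which is exactly the goal. The same mechanism applied to library rules (output paraphrased):
thm mp            (* ⟦?P ⟶ ?Q; ?P⟧ ⟹ ?Q *)
thm mp [OF impI]  (* (?P ⟹ ?Q) ⟹ ?P ⟹ ?Q *)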
C.Sternagel answered your title question "How?", which satisfied your last sentence, but I go ahead and fill in some details based on his answer, to try to "help [you] understand the difference".
It can be confusing that there is ==> and -->, meta-implication and HOL-implication, and that they both have the properties of logical implication. (I don't say much about !! and !, meta-all and HOL-all, because what's said about ==> and --> can mostly be transferred to them.)
(NOTE: I convert graphical characters to equivalent ASCII when I can, to make sure they display correctly in all browsers.)
First, I give some references:
[1] Isabelle/Isar Reference manual.
[2] HOL/HOL.thy
[3] Logic in Computer Science, by Huth and Ryan
[4] Wiki sequent entry.
[5] Wiki intuitionistic logic entry.
If you understand a few basics, there's nothing that confusing about the fact that there is both ==> and -->. Much of the confusion departs, and what's left is just the work of digging through the details about what particular source statements mean, such as the formula of C.Sternagel's first lemma.
"(!!x. P x ==> P y) == (Trueprop (!x. P x --> P y))"
C.Sternagel stopped taking the time to give me important answers, but the formula he gives you above is similar to one he gave me a while ago, to convince me that all free variables in a formula are universally quantified.
Short answer: The difference between ==> and --> is that ==> (somewhat) plays the part of the turnstile symbol, |-, of a non-generalized sequent in which there is only one conclusion on the right-hand side. That is, ==>, the meta-logic implication operator of Isabelle/Pure, is used to define the Isabelle/HOL implication object-logic operator -->, as shown by impI in the following axiomatization in HOL.thy [2].
(*line 56*)
typedecl bool
judgment
Trueprop :: "bool => prop"
(*line 166*)
axiomatization where
impI: "(P ==> Q) ==> P-->Q" and
mp: "[| P-->Q; P |] ==> Q" and
iff: "(P-->Q) --> (Q-->P) --> (P=Q)" and
True_or_False: "(P=True) | (P=False)"
Above, I show the definition of three other axioms: mp (modus ponens), iff, and True_or_False (law of excluded middle). I do that to repeatedly show how ==> is used to define the axioms and operators of the HOL logic. I also threw in the judgment to show that some of the sequent vocabulary is used in the language Isar.
I also show the axiom True_or_False to show that the Isabelle/HOL logic has an axiom which Isabelle/Pure doesn't have, the law of excluded middle [5]. This is huge in answering your question "what is the difference?"
It was a recent answer by A.Lochbihler that finally gave meaning, for me, to "intuitionistic" [5]. I had repeatedly seen "intuitionistic" in the Isabelle literature, but it didn't sink in.
If you can understand the differences in the next source, then you can see that there's a big difference between ==> and -->, and between types prop and bool, where prop is the type of meta-logic propositions, as opposed to bool, which is the type of the HOL logic proposition. In the HOL object-logic, False implies any proposition Q::bool. However, False::bool doesn't imply any proposition Q::prop.
The type prop is a big part of the meta-logic team !!, ==>, and ==.
theorem "(!!P. P::bool) == Trueprop (False::bool)"
by(rule equal_intr_rule, auto)
theorem HOL_False_meta_implies_any_prop_Q:
"(!!P. P::bool) ==> PROP Q"
(*Currently, trying by(auto) will hang my machine due to blast, which is known
to be a problem, and supposedly is fixed in the current repository. With
`Auto methods` on in the options, it tries `auto`, thus it will hang it.*)
oops
theorem HOL_False_meta_implies_any_bool_Q:
"(!!P. P::bool) ==> Q::bool"
by(rule meta_allE)
theorem HOL_False_obj_implies_any_bool_Q:
"(!P. P::bool) --> Q::bool"
by(auto)
When you understand that Isabelle/Pure meta-logic ==> is used to define the HOL logic, and other differences, such as that the meta-logic is weaker because of no excluded middle, then you understand that there are significant differences between the meta-operators, !!, ==>, and ==, in comparison to the HOL object-logic operators, !, -->, and =.
From here, I put in more details, partly to convince any expert that I'm not totally abusing the word sequent, where my use here is based primarily on how it's used in reference [3, Huth and Ryan].
Attempting to not write a book
I throw in some quotes and references to show that there's a relationship between sequents and ==>.
From my research, I can't see that the word "sequent" is standardized. As far as I can tell, in [3, pg 5], Huth and Ryan use "sequent" to mean a sequent which has only one conclusion on the right-hand side.
...This intention we denote by
phi1, phi2, ..., phiN |- psi
This expression is called a sequent; it is valid if a proof can be found.
A more narrow definition of sequent, in which the right-hand side has only one conclusion, matches up very nicely with the use of ==>.
We can blame L.Paulson for confusing us by separating the meta-logic from the object-logic, though we can thank him for giving us a larger logical playground.
Maybe to keep from clashing with the common definition of a sequent, as in [4, Wiki], he uses the phrase natural deduction sequent calculus in various places in the literature. In any case, the use of ==> is completely related to implementing natural deduction rules in the logic of Isabelle/HOL.
Even with generalized sequents, L.Paulson prefers the ==> notation:
Logic and Proof course 2012-13
Course materials: see slides for his generalized sequent calculus notation
You asked about differences. I throw in some source related to C.Sternagel's answer, along with the impI axiomatization again:
(*line 166*)
axiomatization where
impI: "(P ==> Q) ==> P-->Q"
(*706*)
lemma --"atomize_all [atomize]:"
"(!!x. P x) == Trueprop (ALL x. P x)"
by(rule atomize_all)
(*715*)
lemma --"atomize_imp [atomize]:"
"(A ==> B) == Trueprop (A --> B)"
by(rule atomize_imp)
(*line 304*)
lemma --"allI:"
assumes "!!x::'a. P(x)"
shows "ALL x. P(x)"
by(auto simp only: assms allI)
I put impI in structured proof format:
lemma impI_again:
assumes "P ==> Q"
shows "P --> Q"
by(simp add: assms)
Now, consider ==> to be the use of the sequent turnstile, and shows to be the sequent notation horizontal bar, then you have the following sequent:
P |- Q
-------
P --> Q
This is the natural deduction implication introduction rule, as the axiom name says, impI (Cornell Lecture 15).
The Big Guys have been on top of all of this for a long time. See [1, Section 2.1, page 27] for an overview of !!, ==>, and ==. In particular, it says
The Pure logic [38, 39] is an intuitionistic fragment of higher-order logic
[13]. In type-theoretic parlance, there are three levels of lambda-calculus with
corresponding arrows =>/!!/==> ...
One general significance of the statement is that in the use of Isabelle/HOL, you are using two logics, a meta-logic and an object-logic, where those two terms come from L.Paulson, and where "intuitionistic" is a key defining point of the meta-logic.
See also [1, Section 9.4.1, Simulating sequents by natural deduction, pg 206]. According to M.Wenzel on the IsaUsersList, L.Paulson wrote this section. On page 205, Paulson first takes the definition of a sequent to be the generalized definition. On page 206, he then shows how you can line up one type of sequent with the use of ==>, which is by negating every proposition on the right-hand side of a sequent, except for one of them.
That, by all appearances, is a Horn clause, which I know nothing about.
It seems obvious to me that using ==> is the use of a limited form of sequents. In any event, that's how I think of it, and thinking that way has given me an understanding of the differences between ==> and -->, along with the fact that the meta-logic has no excluded middle.
If A.Lochbihler hadn't pointed out the absence of an excluded middle, I wouldn't have seen an important difference between what's possible with ==> and what's possible with -->.
Maybe C.Sternagel will start back again to give me some of his important answers.
Please pardon the long answer.
Others have already explained some of the reasons behind the difference between meta-logic and logic, but missed the simple proof method atomize:
lemma "⟦⋀(x::'a). P x ⟹ P z ; P y⟧ ⟹ P z"
apply atomize
which yields the goal:
⟦ ∀x. P x ⟶ P z; P y ⟧ ⟹ P z
as desired.
(The additional type constraint ⋀(x::'a) is required for the reasons mentioned by chris.)
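From here the goal is plain predicate logic, so finishing the proof is easy; a minimal sketch (blast is just one method that works):
lemma "⟦⋀(x::'a). P x ⟹ P z ; P y⟧ ⟹ P z"
  apply atomize
  apply blast
  done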
There is a lot of text already, so just a few brief notes:
Isabelle/Pure is minimal higher-order logic with the main connectives ⋀ and ⟹ to lay out Natural Deduction rules in a declarative way. The system knows how to compose them by basic means, e.g. in Isar proofs, proof methods like rule, attributes like OF.
Isabelle/HOL is full higher-order logic, with the full set of predicate logic connectives, e.g. ∀ ∃ ∧ ∨ ¬ ⟶ ⟷, and much more library material. Canonical introduction and elimination rules like allI, allE, exI, exE etc. for these connectives explain formally how the reasoning works wrt. the Pure framework. HOL ∀ and ⟶ somehow correspond to Pure ⋀ and ⟹, but they are of a different category and should not be thrown into the same box.
Note that apart from the basic thm command to print such theorems, it occasionally helps to use print_statement to get an Isar reading of these Natural Deduction reasoning forms.
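For example, compare the two views of allI (output paraphrased; the exact rendering varies between Isabelle versions):
thm allI
(* (⋀x. ?P x) ⟹ ∀x. ?P x *)
print_statement allI
(* theorem allI:
     fixes P :: "'a ⇒ bool"
     assumes "⋀x. P x"
     shows "∀x. P x" *)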

How do I remove duplicate subgoals in Isabelle?

In Isabelle, one occasionally reaches a scenario where there are duplicate subgoals. For example, imagine the following proof script:
lemma "a ∧ a"
apply (rule conjI)
with goals:
proof (prove): step 1
goal (2 subgoals):
1. a
2. a
Is there any way to eliminate the duplicate subgoal in-place, so proofs need not be repeated?
The ML-level tactic distinct_subgoals_tac in Pure/tactic.ML removes duplicate subgoals, and can be used as follows:
lemma "a ∧ a"
apply (rule conjI)
apply (tactic {* distinct_subgoals_tac *})
leaving:
proof (prove): step 2
goal (1 subgoal):
1. a
There does not appear to be a way without dropping into the ML world, unfortunately.
I came across a similar behavior as a side effect of the subst method applied to any theorem, for example refl. Then apply (subst refl) does indeed remove the duplicate subgoals.
It's not a bug, it's a feature ;-).
Adding to davidg's answer, if one does not want to use the tactic command for whatever reason, it is easy enough to turn distinct_subgoals_tac into a method:
method_setup distinct_subgoals =
‹Scan.succeed (K (SIMPLE_METHOD distinct_subgoals_tac))›
lemma "P" and "P" and "P"
(* here there are three goals P *)
apply distinct_subgoals
(* now there is only one goal P *)
