I want to write a predicate stating that a transition system cannot have infinite runs from a state s. So suppose the transition system is given by R; then the definition I have come up with is:
inductive finite_runs for R where
"(∀ s'. R s s' ⟶ finite_runs R s') ⟹ finite_runs R s"
Is this the simplest way I can state this fact in Isabelle? In particular, I have browsed the Archive of Formal Proofs (the rewriting and labeled transition theories) but found no simpler solution.
There is Wellfounded.termip in the standard library. I think it should do exactly what you want.
It is basically an abbreviation built on Wellfounded.accp: termip R is accp applied to the converse of R, i.e. accp does the same thing, but going ‘backwards’.
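To see the correspondence, compare the introduction rule of accp with the rule of finite_runs above (a sketch; recall that termip R abbreviates accp R¯¯):

thm accp.accI
(*  (⋀y. r y x ⟹ accp r y) ⟹ accp r x
    with r = R¯¯ this reads:
    (⋀s'. R s s' ⟹ termip R s') ⟹ termip R s,
    i.e. exactly the introduction rule of finite_runs  *)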
If you mean that finite runs means absence of infinite runs, then you can use the strong-normalization-on predicate SN_on from the AFP entry Abstract-Rewriting. However, your definition also permits runs like
a -> a -> a -> ...
which are not permitted by SN_on.
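For comparison, the absence of infinite runs can also be stated directly as a non-inductive definition; a minimal sketch (the name no_infinite_run is made up here):

definition no_infinite_run :: "('a ⇒ 'a ⇒ bool) ⇒ 'a ⇒ bool" where
  "no_infinite_run R s ⟷ ¬ (∃f. f 0 = s ∧ (∀i. R (f i) (f (Suc i))))"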
In Isabelle I find myself often using
apply(subst xx)
apply(rule subst)
apply(subst_tac xx)
and similar commands, but it is often hit or miss. Are there any resources on how to guide term unification and how to precisely specify the terms that should be substituted?
For example, if there are multiple ways to perform unification, how can I disambiguate?
If I have multiple equalities among the premises, how can I tell Isabelle which one of them to use? I spend way too much time wrestling with such seemingly simple problems.
This book https://www21.in.tum.de/~nipkow/LNCS2283/ has a chapter dedicated to substitution, but it's far too short: it only covers erule ssubst and doesn't really answer my questions.
To give some examples, this is ssubst:
lemma ssubst: "t = s ⟹ P s ⟹ P t"
by (drule sym) (erule subst)
but what about
lemma arg_cong: "x = y ⟹ f x = f y"
by (iprover intro: refl elim: subst)
How can I do erule_tac arg_cong and specify exactly the desired f, x and y? Anything I tried resulted in Failed to apply proof method, which is not a particularly enlightening error message.
As I recall, a more elaborate substitution method is available, in particular for restricting substitution to certain contexts. But to answer your question properly, it's essential to know what sort of assertions you are trying to prove. If you are working with short expressions (up to a couple of lines long), then it's much better to guide the series of transformations using equational reasoning, via also and finally (see Programming and Proving in Isabelle/HOL, section 4.2.2, Chains of (In)Equations). Then, at the cost of writing out these expressions, you'll often find that you can prove each step automatically, without detailed substitutions, and that you can follow your reasoning.
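For illustration, a minimal sketch of such a chain (a toy goal on natural numbers):

lemma
  assumes "(x::nat) ≤ y"
  shows "x + x ≤ y + y + y"
proof -
  have "x + x ≤ y + x"
    using assms by simp
  also have "... ≤ y + y"
    using assms by simp
  also have "... ≤ y + y + y"
    by simp
  finally show ?thesis .
qed

Each step is small enough for simp to close automatically, and the intermediate expressions document the reasoning.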
Also note that if your expressions are long because they contain large, repeated subexpressions, you can introduce abbreviations using define. You will then not only have shorter and clearer proofs, but you will find that automation will perform much better.
The other situation is when you are working with verification conditions dozens of lines long, or even longer. In that case it is worth looking for more advanced substitution packages.
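As for the concrete arg_cong question: the _tac methods accept explicit instantiations via in; a minimal sketch, with f instantiated by hand:

lemma "x = y ⟹ Suc x = Suc y"
  apply (drule_tac f = Suc in arg_cong)
  apply assumption
  done

Here drule_tac turns the premise x = y into Suc x = Suc y, which then matches the conclusion.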
Is there a "generic" informal algorithm that users of Isabelle follow, when they are trying to prove something that isn't proved immediately by auto or sledgehammer? A kind of general way of figuring out, if auto needs additional lemmas, formulated by the user, to succeed or if better some other proof method is used.
A related question is: is there maybe a table to be found somewhere with all the proof methods together with the context in which to apply them? When I'm reading through the Programming and Proving tutorial, the descriptions of the various methods (and of variants of some methods, such as the many variants of auto) are scattered through the text, which constantly makes me go back and forth between text and Isabelle code (and also leads to forgetting what exactly is used for what), resulting in a very inefficient workflow.
No, there's no "generic" informal way. You can use try0 which tries all standard proof methods (like auto, blast, fastforce, …) and/or sledgehammer which is more advanced.
After that, the fun part starts.
Can this theorem be shown with simpler helper lemmas? You can use the command "sorry" for assuming that a lemma is true.
How would I prove this on a piece of paper? And then try to do this proof in Isabelle.
Ask for help :) Lots of people on stack overflow, #isabelle on freenode and the Isabelle mailing list are waiting for your questions.
For your second question: No, there's no such overview. Maybe someone should write one, but as mentioned before you can simply use try0.
ammbauer's answer already covers lots of important stuff, but here are some more things that may help you:
When the automation gets stuck at a certain point, look at the available premises and the goal at that point. What kind of simplification did you expect the system to do? Why didn't it do it? Perhaps the corresponding rule is just not in the simp set (add it with simp add:), or some preconditions of the rule could not be proved (in that case, add enough facts so that they can be proved, or prove them yourself in an additional step).
Isar proofs are good. If you have some complicated goal, try breaking it down into smaller steps in Isar. If you have bigger auxiliary facts that may even be of more general interest, try pulling them out as auxiliary lemmas. Perhaps you can even generalise them a bit. Sometimes that even simplifies the proof.
In the same vein: Too much information can confuse both you and Isabelle. You can introduce local definitions in Isar with define x where "x = …" and unfold them with x_def. This makes your goals smaller and cleaner and decreases the probability of the automation going down useless paths in its proof search.
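A minimal sketch (a toy goal on integers; zero_le_square is the library fact 0 ≤ a * a):

lemma "0 ≤ (a + b) * (a + b)" for a b :: int
proof -
  define x where "x = a + b"
  have "0 ≤ x * x"
    by (rule zero_le_square)
  then show ?thesis
    unfolding x_def .
qed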
Isabelle does not automatically unfold definitions, so if you have a definition, and you want to unfold it for a proof, you have to do that yourself by using unfolding foo_def or simp add: foo_def.
The defining equations of functions defined with fun or primrec are unfolded automatically by anything that uses the simplifier (simp, simp_all, force, auto) unless the equations (foo.simps) have been manually deleted from the simp set (with lemmas [simp del] = foo.simps or declare foo.simps [simp del]).
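A minimal sketch (the function sq is made up):

fun sq :: "nat ⇒ nat" where
  "sq n = n * n"

lemmas [simp del] = sq.simps

lemma "sq 3 = 9"
  by (simp add: sq.simps)  (* plain simp would no longer unfold sq here *)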
Different proof methods are good at different things, and it takes some experience to know what method to use in what case. As a general rule, anything that requires only rewriting/simplification should be done with simp or simp_all. Anything related to classical reasoning (i.e. first-order logic or sets) calls for blast. If you need both rewriting and classical reasoning, try auto or force. Think of auto as a combination of simp and blast, and force is like an ‘all-or-nothing’ variant of auto that fails if it cannot solve the goal entirely. It also tries a little harder than auto.
Most proof methods can take options. You probably already know add: and del: for simp and simp_all, and the equivalent simp:/simp del: for auto. However, the classical reasoners (auto, blast, force, etc.) also accept intro:, dest:, elim: and the corresponding del: options. These are for declaring introduction, destruction, and elimination rules.
Some more information on the classical reasoner:
An introduction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever the goal has the form R, to replace it with the new goals P and Q.
A destruction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever a fact of the form P is in the premises, to replace the goal G with the new goals Q and R ⟹ G.
An elimination rule is something like thm exE (elimination of the existential quantifier). These are like a generalisation of destruction rules that also allow introducing new variables. These rules often appear in things like case distinctions.
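For instance, conjI is an introduction rule and conjE an elimination rule; a minimal single-step sketch:

lemma "A ∧ B ⟹ B ∧ A"
  apply (rule conjI)   (* introduction: the goal B ∧ A becomes the goals B and A *)
   apply (erule conjE) (* elimination: splits the premise A ∧ B *)
   apply assumption
  apply (erule conjE)
  apply assumption
  done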
The classical reasoner used by auto, blast, force etc. will use the rules in the claset (i.e. that have been declared intro/dest/elim) automatically whenever appropriate. If doing that does not lead to a proof, the automation will backtrack at some point and try other rules. You can disable backtracking for specific rules by using intro!: instead of intro: (and analogously for the others). Then the automation will apply that rule whenever possible without ever looking back.
The basic proof methods rule, drule, erule correspond to applying a single intro/dest/elim rule and are good for single-step reasoning, e.g. in order to find out why automatic methods fail to make progress at a certain point. intro is like rule, but applies the set of rules it is given iteratively until it is no longer possible to do so.
safe and clarify are occasionally useful. The former essentially strips away quantifiers and logical connectives (try it on a goal like ∀x. P x ∧ Q x ⟶ R x) and the latter similarly tries to ‘clean up’ the goal. (I forgot what it does exactly, I just use it occasionally when I think it might be useful)
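For example (hypothetical P, Q, R; oops abandons the demo goal, which is of course not provable):

lemma "∀x. P x ∧ Q x ⟶ R x"
  apply safe
  (* new goal: ⋀x. ⟦P x; Q x⟧ ⟹ R x *)
  oops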
Suppose I have a list of subgoals in an apply style proof. I know that something like
apply blast
will provide a proof for a number of the subgoals within this list. Is there a way I can avoid duplicating this line?
For example, suppose I have three subgoals where the first and the third are provable using the above method while the second is provable with something like
apply (metis lemma1 lemma2 ...)
A naive proof for such subgoals will look like
apply blast
apply (metis lemma1 lemma2 ...)
apply blast
What I am looking for is a way to give a proof without duplicating the apply blast portion of the proof. Observe that using the method combinator + will not achieve this; it merely applies the method repeatedly until the first failure.
Actually, apply blast will only try to solve the first subgoal. If you want to solve as many subgoals as possible, you could try
apply blast+
I am not sure what exactly you are trying to achieve, but an alternative to your using some_lemma might be
apply (insert some_lemma)
which inserts some_lemma as an additional assumption into all of your subgoals.
Update: There are some basic proof method combinators available in Isabelle (see also Section 6.4.1: Proof method expressions, of isar-ref). So you could do for example
apply (blast | metis ...)+
which will first try to solve a subgoal by blast, and only if this fails, by metis .... However, its usefulness depends on the specific subgoal situation; e.g., if blast takes a long time before failing, it might not be suitable. More fine-grained control over proof methods is available through the recent Isabelle/Eisbach proof method language (see isabelle doc eisbach).
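A minimal Eisbach sketch (the method name is made up; requires importing HOL-Eisbach):

theory Scratch
  imports Main "HOL-Eisbach.Eisbach"
begin

(* a named method: first try blast, then metis with the given facts *)
method blast_or_metis uses facts = (blast | metis facts)

end

It could then be applied to all remaining subgoals with apply (blast_or_metis facts: lemma1 lemma2)+.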
I have seen a lot of documentation about Isabelle's syntax and proof strategies. However, I have found little about its foundations. I have a few questions and would be very grateful if someone could take the time to answer them:
Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
What symbols are part of Isabelle's meta-language? I read that there are symbols in the meta-language for universal quantification (⋀) and for implication (⟹). However, these symbols have their counterparts in the object-level language (∀ and ⟶). I understand that ⟶ is an object-level function of type bool ⇒ bool ⇒ bool. However, how are ∀ and ∃ defined? Are they object-level Boolean functions? If so, they are not computable (considering infinite domains). I noticed that I am able to write Boolean functions in terms of ∀ and ∃, but they are not computable. So what are ∀ and ∃? Are they part of the object level? If so, how are they defined?
Are Isabelle theorems just Boolean expressions? Then Booleans are part of the meta-language?
As far as I know, Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Sorry if these questions are very basic. I have not been able to find a good tutorial on Isabelle's meta-theory. I would love it if someone could recommend a good tutorial on these topics.
Thank you very much.
1) You can define non-terminating (i.e. partial) functions in Isabelle (cf. the Function package manual, section 8). However, partial functions are more difficult to reason about: whenever you want to use their defining equations (the psimps rules, which replace the simps rules of a total function), you first have to show that the function terminates on that particular input.
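A minimal sketch (the function log2 is made up; no termination proof is given, so only the conditional psimps equations are available, guarded by the domain predicate log2_dom):

function (domintros) log2 :: "nat ⇒ nat" where
  "log2 n = (if n < 2 then 0 else Suc (log2 (n div 2)))"
  by pat_completeness auto

thm log2.psimps
(* log2_dom n ⟹ log2 n = (if n < 2 then 0 else Suc (log2 (n div 2))) *)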
In general, things like non-definedness and non-termination are always problematic in a logic – consider, for instance, the function ‘definition’ f x = f x + 1. If we were to take this as an equation on ℤ (integers), we could subtract f x from both sides and get 0 = 1. In Haskell, this problem is ‘solved’ by saying that this is not an equation on ℤ, but rather on ℤ ∪ {⊥} (the integers plus bottom) and the non-terminating function f evaluates to ⊥, and ‘⊥ + 1 = ⊥’, so everything works out fine.
However, if every single expression in your logic could potentially evaluate to ⊥ instead of a ‘proper’ value, reasoning in this logic would become very tedious. This is why Isabelle/HOL chooses to restrict itself to total functions; things like partiality have to be emulated with things like undefined (which is an arbitrary value that you know nothing about) or option types.
2) I'm not an expert on Isabelle/Pure (the meta logic), but the most important symbols are definitely
⋀ (the universal meta quantifier)
⟹ (meta implication)
≡ (meta equality)
&&& (meta conjunction, defined in terms of ⟹)
Pure.term, Pure.prop, Pure.type, Pure.dummy_pattern, Pure.sort_constraint, which fulfil certain internal functions that I don't know much about.
You can find some information on this in the Isabelle/Isar Reference Manual in section 2.1, and probably more elsewhere in the manual.
Everything else (that includes ∀ and ∃, which indeed operate on boolean expressions) is defined in the object logic (HOL, usually). You can find the definitions, or rather the axiomatisations, in ~~/src/HOL/HOL.thy (where ~~ denotes the Isabelle root directory):
All_def: "All P ≡ (P = (λx. True))"
Ex_def: "Ex P ≡ ∀Q. (∀x. P x ⟶ Q) ⟶ Q"
Also note that many, if not most, Isabelle functions are not computable. Isabelle is not a programming language, although it does have a code generator that allows exporting Isabelle functions as code in programming languages, as long as you can give code equations for all the functions involved.
3) Isabelle theorems are a complex datatype (cf. ~~/src/Pure/thm.ML) containing a lot of information, but the most important part, of course, is the proposition. A proposition is something from Isabelle/Pure, which in fact only has propositions and functions (and itself and dummy, but you can ignore those).
Propositions are not booleans – in fact, there isn't even a way to state that a proposition does not hold in Isabelle/Pure.
HOL then defines (or rather axiomatises) booleans and also axiomatises a coercion from booleans to propositions: Trueprop :: bool ⇒ prop
4) Isabelle is not a programming language, and apart from that, totality does not mean you have to restrict yourself to finite structures. Even in a total programming language, you can have infinite lists. (cf. Idris's codata)
Isabelle is a theorem prover, and logically, infinite objects can be treated by axiomatising them and then reasoning about them using the axioms and rules that you have.
For instance, HOL assumes the existence of an infinite type and defines the natural numbers on that. That already gives you access to functions nat ⇒ 'a, which are essentially infinite lists.
You can also define infinite lists and other infinite data structures as codatatypes with the (co-)datatype package, which is based on bounded natural functors.
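A minimal sketch (HOL-Library ships a similar Stream theory; the names here are only for illustration):

codatatype 'a stream = SCons (shd: 'a) (stl: "'a stream")

(* the infinite stream n, n+1, n+2, ... by primitive corecursion *)
primcorec nats_from :: "nat ⇒ nat stream" where
  "nats_from n = SCons n (nats_from (Suc n))"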
Let me add some points to two of your questions.
1) Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
In short: Isabelle/HOL does not require termination, but totality (i.e., there is a specific result for each input to the function) of functions. Totality does not mean that a function is actually terminating when transcribed to a (functional) programming language or even that it is computable at all.
Therefore, talking about termination is somewhat misleading, even though it is encouraged by the fact that Isabelle/HOL's function package uses the keyword termination for proving some property P about which I will have to say a little more below.
On the one hand the term "termination" might sound more intuitive to a wider audience. On the other hand, a more precise description of P would be well-foundedness of the function's call graph.
Don't get me wrong: termination is not really a bad name for the property P; it is even justified by the fact that many of the techniques implemented in the function package are very close to termination techniques from term rewriting or functional programming (like the size-change principle, dependency pairs, lexicographic orders, etc.).
I'm just saying that it can be misleading. The answer to why that is the case also touches on question 4 of the OP.
4) As far as I know Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Isabelle/HOL is not a programming language and it specifically does not have any evaluation strategy (we could alternatively say: it has any evaluation strategy you like).
And here is why the word termination is misleading (drum roll): if there is no evaluation strategy and we have termination of a function f, people might expect f to terminate independently of the strategy used. But this is not the case. A termination proof of a function rather ensures that f is well-defined. Even if f is computable, a proof of P merely ensures that there is some evaluation strategy for which f terminates.
(As an aside: what I call "strategy" here, is typically influenced by so called cong-rules (i.e., congruence rules) in Isabelle/HOL.)
As an example, it is trivial to prove that the function (see Section 10.1 Congruence rules and evaluation order in the documentation of the function package):
fun f' :: "nat ⇒ bool"
where
"f' n ⟷ f' (n - 1) ∨ n = 0"
terminates (in the sense defined by termination) after adding the cong-rule:
lemma [fundef_cong]:
"Q = Q' ⟹ (¬ Q' ⟹ P = P') ⟹ (P ∨ Q) = (P' ∨ Q')"
by auto
This cong-rule essentially states that logical-or should be "evaluated" from right to left. However, if you write the same function in, e.g., OCaml, it causes a stack overflow ...
EDIT: this answer is not really correct, check out Lars' comment below.
Unfortunately I don't have enough reputation to post this as a comment, so here is my go at an answer (please bear in mind that I am no expert in Isabelle, but I also had similar questions once):
1) The idea is to prove statements about the defined functions. I am not sure how familiar you are with computability theory, but think about the Halting Problem and the fact that most undecidability results stem from it (such as the Acceptance Problem). Imagine defining a function which you can't prove terminates. How could you then still prove that it returns the number 42 when given input "ABC", and that it doesn't go into an infinite loop?
If instead you limit yourself to terminating functions, you can prove much more about them, essentially making a trade-off (or at least this is how I see it).
These ideas stem from Constructivism and Intuitionism, and I recommend you check out Robert Harper's very interesting lecture series on Type Theory: https://www.youtube.com/watch?v=9SnefrwBIDc&list=PLGCr8P_YncjXRzdGq2SjKv5F2J8HUFeqN
You should especially check out the part about the absence of the Law of Excluded Middle: http://youtu.be/3JHTb6b1to8?t=15m34s
2) See Manuel's answer.
3,4) Again see Manuel's answer keeping in mind Intuitionistic logic: "the fundamental entity is not the boolean, but rather the proof that something is true".
It took me a long time to adjust to this way of thinking, and I'm still not sure I understand it. I think the key, though, is to accept that it is a more-or-less completely different way of thinking.
I have a rather large term foo. When I type
value "foo"
then Isabelle evaluates foo to a value, say foo_value. I would now like to prove the following lemma.
lemma "foo = foo_value"
What proof method should I use? I tried try, but that timed out. I guess I could proceed manually by unfolding the various definitions that occur in foo, but surely I should be able to tap into whatever mechanism the value command is using, right?
There are three proof methods that correspond to the different evaluation mechanisms of value:
eval uses the code generator; it corresponds to value [code]. The proof succeeds if the generated ML code evaluates to True.
normalization compiles the statement to a symbolic normalisation engine in ML. It mimics value [nbe].
code_simp uses Isabelle's simplifier as an evaluator. It corresponds to value [simp].
The tutorial on code generation describes these proof methods in more detail. eval and normalization act like oracles, i.e., they bypass Isabelle's kernel, whereas every evaluation step of code_simp goes through the kernel. Usually, eval is faster than normalization, and normalization is faster than code_simp.
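A small sketch of the correspondence on a toy goal:

value [code] "rev [1::nat, 2, 3]"  (* evaluates via generated ML code *)

lemma "rev [1::nat, 2, 3] = [3, 2, 1]" by eval           (* like value [code] *)
lemma "rev [1::nat, 2, 3] = [3, 2, 1]" by normalization  (* like value [nbe]  *)
lemma "rev [1::nat, 2, 3] = [3, 2, 1]" by code_simp      (* like value [simp] *)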
I am not sure whether it works in all cases, but you could try:
lemma "foo = foo_value"
by eval
In many cases, by simp should also work, and I guess eval is kind of an oracle (in the sense that it is not fully verified by the kernel; please somebody correct me if I am wrong).