How to resolve Isabelle termination _dom error message (avoiding recursive definition of two-argument max)?

I am trying to define my own simple max function in Isabelle and prove its termination:
fun two_integer_max_case :: "nat ⇒ nat ⇒ nat" where
"two_integer_max_case a b = (case a > b of True ⇒ a | False ⇒ b)"
termination by auto
But there is an unhandled goal in the termination proof:
proof (prove)
goal (1 subgoal):
1. All two_integer_max_case_dom
Ignoring duplicate rewrite rule:
two_integer_max_case ?a1 ?b1 ≡ case ?b1 < ?a1 of True ⇒ ?a1 | False ⇒ ?b1
Duplicate fact declaration "Max_Of_Two_Integers.two_integer_max_case.simps" vs. "Max_Of_Two_Integers.two_integer_max_case.simps"
Failed to finish proof:
goal (1 subgoal):
1. ⋀a b. two_integer_max_case_dom (a, b)
I am focused specifically on the message:
Failed to finish proof:
goal (1 subgoal):
1. ⋀a b. two_integer_max_case_dom (a, b)
What does it mean? What is required of me? What is this ..._dom condition, and where can I read about it?
I have read Chapter 3.5 of https://isabelle.in.tum.de/dist/Isabelle2021/doc/tutorial.pdf and I am now reading Chapter 8.1 of https://isabelle.in.tum.de/dist/Isabelle2021/doc/functions.pdf about domain predicates. I hope that I manage to find the solution.
I am aware that things would be easy (at least in the termination proof) if I had managed to come up with a recursive definition of my max function (over two arguments of type nat). But I have not managed to find such a definition (I guess there is already some creative definition in the base theories), and I am not sure I would be happy with a recursive definition anyway: my intention is to generate Haskell or Scala code from this function, and I would prefer that the generated code not be recursive, but instead use the standard less-than and equality operators of the respective languages.
This may be a problem for code synthesis with Isabelle in general: if Isabelle prefers recursive definitions for algorithmic constructions where industrial programming style (whatever that is) does not require recursion, then the generated code can be less maintainable and harder for humans to comprehend (still an issue as long as AI has not taken over the field of program synthesis).

There is no need to prove termination when using fun, since this Isabelle command only accepts a function definition if it can prove termination automatically. Hence your termination command is not necessary at all and actually confuses the system, since termination has already been proven.
Only when using function instead of fun do you need to prove termination manually afterwards.
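To illustrate, here is a minimal sketch of both variants (the primed name and the particular termination method are mine; since there is no recursive call, any well-founded relation, e.g. the empty one, closes the trivial termination goal):
fun two_integer_max_case :: "nat ⇒ nat ⇒ nat" where
  "two_integer_max_case a b = (case a > b of True ⇒ a | False ⇒ b)"
(* no termination command: fun has already proved termination internally *)

function two_integer_max_case' :: "nat ⇒ nat ⇒ nat" where
  "two_integer_max_case' a b = (case a > b of True ⇒ a | False ⇒ b)"
  by pat_completeness auto
termination
  by (relation "{}") simp
(* with function, pattern completeness and termination are proved manually *)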
Hope this helps, René

Related

What is the `ind` type in Isabelle?

In Isabelle, natural numbers are defined as follows:
typedecl ind
axiomatization Zero_Rep :: ind and Suc_Rep :: "ind ⇒ ind"
― ‹The axiom of infinity in 2 parts:›
where Suc_Rep_inject: "Suc_Rep x = Suc_Rep y ⟹ x = y"
and Suc_Rep_not_Zero_Rep: "Suc_Rep x ≠ Zero_Rep"
subsection ‹Type nat›
text ‹Type definition›
inductive Nat :: "ind ⇒ bool"
where
Zero_RepI: "Nat Zero_Rep"
| Suc_RepI: "Nat i ⟹ Nat (Suc_Rep i)"
That's a lot of code to write what's effectively just
datatype nat = Zero | Suc nat
Is there some greater purpose to ind or maybe it is there just for historical reasons?
The datatype package needs a whole lot of maths to do all those internal constructions that are required to give you the datatype you want in the end. In particular, it needs natural numbers.
So the reason why the datatype package is not used to define the naturals is that it simply isn't available yet at that point.
One could of course just axiomatise the nat type directly. However, the idea of instead axiomatising some infinite type and then carving the naturals out of it is a standard one; I think it is done similarly in Zermelo–Fraenkel set theory.
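To make that concrete, here is a rough sketch of the carving-out step, replayed with fresh (made-up) names so that it does not clash with HOL's own nat; the actual construction in ~~/src/HOL/Nat.thy differs in its details:
theory Nat_From_Ind
  imports Main
begin

(* carve a copy of the naturals out of the infinite type ind *)
inductive MyNat :: "ind ⇒ bool" where
  MyZeroI: "MyNat Zero_Rep"
| MySucI:  "MyNat i ⟹ MyNat (Suc_Rep i)"

typedef mynat = "{i. MyNat i}"
  using MyZeroI by auto

end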
Side note: In fact, one could even make the datatype package itself axiomatic. But the common philosophy in interactive theorem provers, especially in the LCF family, is to work from a small set of axioms, and everything that can be constructed should be constructed instead of axiomatised. This reduces the amount of "trusted code".
A direct axiomatisation of the natural numbers would not be very controversial, I think, but for something as complicated as the datatype package, an axiomatic implementation would introduce lots of possibilities for subtle soundness bugs.

Case analysis on a premise in Isabelle

I have the following proof state:
1. ⋀i is s stk stack.
       (⋀stack.
           length (exec is s stack) = n' ⟹
           length stack = n ⟹ ok n is n') ⟹
       length (exec (i # is) s stack) = n' ⟹
       length stack = n ⟹ ok n (i # is) n'
How do I perform a case split on i? Where i is of type:
datatype instr = LOADI val | LOAD vname | ADD
I'm doing this for exercise 4.7 of Concrete Semantics, so this should be possible to do with tactics.
If anything you should use cases i rule: instr.cases, but that will not work here because i is not a fixed variable but a bound variable. Also, the rule: instr.cases is not really needed because Isabelle will use that rule by default anyway.
Doing a case distinction on a bound variable without fixing it first is kind of discouraged; that said, it can be done by doing apply (case_tac i) instead of apply (cases i). But as I said, this is not the nice way to do it.
A more proper way to do it is to explicitly fix i using e.g. the subgoal command:
subgoal for i is s stk stack
apply (cases i)
An even better way would probably be to use a structured Isar proof instead.
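For illustration, a structured case split on a fixed instr variable looks roughly like this (assuming the instr datatype from the book is in scope; P is just a placeholder goal and the sorrys are stubs):
lemma "P (i :: instr)"
proof (cases i)
  case (LOADI n)
  then show ?thesis sorry
next
  case (LOAD x)
  then show ?thesis sorry
next
  case ADD
  then show ?thesis sorry
qed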
However, I don't think the subgoal command or Isar proofs are something that you know about at this stage of the Concrete Semantics book, so my guess would be that there is a nicer way to do the proof where you don't have to do any manual case splitting.
Most probably you are doing an induction on the list of instructions; it would probably be better to do an induction on the predicate ok instead. But then again: Where is that predicate ok? I don't see it in your assumptions. It's hard to say what's going on there without knowing how you defined ok and what lemma you are trying to prove exactly and what tactics you applied already.

Isabelle "Failed to apply proof method" when working with two theory files

I have a theory file Test_Func.thy, which I have copied into Isabelle's src/HOL and which defines the function add_123:
theory Test_Func
imports Main
begin
fun add_123 :: "nat ⇒ nat ⇒ nat" where
"add_123 0 n = n" |
"add_123 (Suc m) n = Suc(add_123 m n)"
end
And then I have a Test_1.thy file with the import and a lemma:
theory Test_1
imports Main "HOL.Test_Func"
begin
lemma add_02: "add_123 m 0 = m"
apply(simp)
done
end
The strange thing is that apply(simp) or apply(auto) fails with Failed to apply proof method. There is no error message about an undefined or invisible function, but somehow such a simple proof is not working when the function definition and the lemma about it are split into two files.
So this question may involve different problems and different solutions: maybe it is about my inexperience with importing theory files, or maybe I am confused about the choice and application of tactics.
I am observing this in jEdit with Isabelle2021, but in a different setting I can see the same thing happening with Isabelle2020 as well.
There is no need to put theory files into the Isabelle distribution (on the contrary, I would keep it intact to make sure your development can be used on other machines without touching the Isabelle installation). If both theories live in the same directory, a plain imports Test_Func is enough.
The issue with the failing proof lies in a different area: add_123 is defined by recursion on its first argument, so there is no rewrite rule that immediately applies to the expression in add_02, where the first argument is the variable m. (E.g., lemma add_01: "add_123 0 m = m" could be proved the way you used, because it matches the first equation of the definition.)
The solution is to use a proof by induction on the first argument:
apply (induction m)
apply simp_all
done
or, in short by (induction m) simp_all.
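Putting it together, the second theory could then look like this (assuming both .thy files live in the same directory, so no session qualifier is needed):
theory Test_1
  imports Test_Func
begin

lemma add_02: "add_123 m 0 = m"
  by (induction m) simp_all

end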

How to prove that "3 is a prime" in the Isabelle proof assistant?

For a proof I'm working on in Isabelle I need the facts that 3 and 5 are primes. What would be the simplest way to establish this?
There are simp rules that allow the simplifier to do that automatically:
lemma "prime (5 :: nat)"
by simp
For larger numbers (e.g. 137), this will take a few seconds, and for much larger numbers, it is completely unusable.
You can also use eval instead of simp, which goes through Isabelle's evaluation oracle to evaluate the statement inside Standard ML and then reinterprets the result as a theorem in Isabelle. Depending on whom you ask, this may be seen as slightly less trustworthy than simp.
Lastly, the entry on Pratt certificates in the Archive of Formal Proofs also provides a proof method called pratt, which can automatically prove the primality of a number using Pratt certificates. This is slightly more efficient than using simp, but still not great for really big numbers.
In any case, for small numbers like 5 and 7, by simp is the way to go.
Note however that you must give a type, i.e. prime (5 :: nat) or prime (7 :: int). If you just write prime 5, the type that is inferred for 5 is too general. For instance, prime (5 :: real) is not true, since fields contain no prime numbers.
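As a self-contained sketch of the first two options (I am assuming the Primes theory from the HOL-Computational_Algebra session, which provides prime together with code equations; the pratt method additionally needs the AFP entry mentioned above):
theory Prime_Example
  imports "HOL-Computational_Algebra.Primes"
begin

lemma "prime (5 :: nat)"
  by simp   (* simp rules decide primality of small numerals *)

lemma "prime (137 :: nat)"
  by eval   (* goes through the evaluation oracle *)

end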

Isabelle/HOL foundations

I have seen a lot of documentation about Isabelle's syntax and proof strategies. However, I have found little about its foundations. I have a few questions that I would be very grateful if someone could take the time to answer:
Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
What symbols are part of Isabelle's meta-language? I read that there are symbols in the meta-language for universal quantification (⋀) and for implication (⟹). However, these symbols have their counterparts in the object-level language (∀ and ⟶). I understand that ⟶ is an object-level function of type bool ⇒ bool ⇒ bool. However, how are ∀ and ∃ defined? Are they object-level Boolean functions? If so, they are not computable (considering infinite domains). I noticed that I am able to write Boolean functions in terms of ∀ and ∃, but they are not computable. So what are ∀ and ∃? Are they part of the object level? If so, how are they defined?
Are Isabelle theorems just Boolean expressions? If so, are Booleans part of the meta-language?
As far as I know, Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Sorry if these questions are very basic. I do not seem to find a good tutorial on Isabelle's meta-theory. I would love if someone could recommend me a good tutorial on these topics.
Thank you very much.
You can define non-terminating (i.e. partial) functions in Isabelle (cf. Function package manual (section 8)). However, partial functions are more difficult to reason about, because whenever you want to use its definition equations (the psimps rules, which replace the simps rules of a normal function), you have to show that the function terminates on that particular input first.
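For instance, here is a small sketch of such a definition (the names are mine, and the exact form of the generated psimps rule may differ slightly):
theory Partial_Sketch
  imports Main
begin

(* a deliberately non-terminating function, defined without a termination proof *)
function (domintros) bad :: "nat ⇒ nat" where
  "bad n = bad (n + 1) + 1"
  by pat_completeness auto

(* the defining equation is only available under a domain assumption,
   roughly: bad_dom n ⟹ bad n = bad (n + 1) + 1 *)
thm bad.psimps

end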
In general, things like non-definedness and non-termination are always problematic in a logic – consider, for instance, the function ‘definition’ f x = f x + 1. If we were to take this as an equation on ℤ (integers), we could subtract f x from both sides and get 0 = 1. In Haskell, this problem is ‘solved’ by saying that this is not an equation on ℤ, but rather on ℤ ∪ {⊥} (the integers plus bottom) and the non-terminating function f evaluates to ⊥, and ‘⊥ + 1 = ⊥’, so everything works out fine.
However, if every single expression in your logic could potentially evaluate to ⊥ instead of a ‘proper’ value, reasoning in this logic will become very tedious. This is why Isabelle/HOL chooses to restrict itself to total functions; things like partiality have to be emulated with things like undefined (which is an arbitrary value that you know nothing about) or option types.
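For instance, one common pattern is to make partiality explicit in the result type (a sketch; the function name is mine):
(* emulate a partial "head of a list" function with an option type *)
fun safe_hd :: "'a list ⇒ 'a option" where
  "safe_hd []       = None"
| "safe_hd (x # xs) = Some x"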
I'm not an expert on Isabelle/Pure (the meta logic), but the most important symbols are definitely
⋀ (the universal meta quantifier)
⟹ (meta implication)
≡ (meta equality)
&&& (meta conjunction, defined in terms of ⟹)
Pure.term, Pure.prop, Pure.type, Pure.dummy_pattern, Pure.sort_constraint, which fulfil certain internal functions that I don't know much about.
You can find some information on this in the Isabelle/Isar Reference Manual in section 2.1, and probably more elsewhere in the manual.
Everything else (that includes ∀ and ∃, which indeed operate on Boolean expressions) is defined in the object logic (HOL, usually). You can find the definitions, or rather the axiomatisations, in ~~/src/HOL/HOL.thy (where ~~ denotes the Isabelle root directory):
All_def: "All P ≡ (P = (λx. True))"
Ex_def: "Ex P ≡ ∀Q. (∀x. P x ⟶ Q) ⟶ Q"
Also note that many, if not most, Isabelle functions are typically not computable. Isabelle is not a programming language, although it does have a code generator that allows exporting Isabelle functions as code to programming languages, as long as you can give code equations for all the functions involved.
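A tiny sketch of what that looks like (the constant and module names are made up; the Code_Target_Numeral import is optional and maps nat to the target language's integers, so that < becomes the native comparison):
theory Export_Sketch
  imports "HOL-Library.Code_Target_Numeral"
begin

definition my_max :: "nat ⇒ nat ⇒ nat" where
  "my_max a b = (if a > b then a else b)"

(* produces a small, non-recursive Haskell function *)
export_code my_max in Haskell module_name MyMax

end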
3)
Isabelle theorems are a complex datatype (cf. ~~/src/Pure/thm.ML) containing a lot of information, but the most important part, of course, is the proposition. A proposition is something from Isabelle/Pure, which in fact only has propositions and functions (and itself and dummy, but you can ignore those).
Propositions are not booleans – in fact, there isn't even a way to state that a proposition does not hold in Isabelle/Pure.
HOL then defines (or rather axiomatises) booleans and also axiomatises a coercion from booleans to propositions: Trueprop :: bool ⇒ prop
Isabelle is not a programming language, and apart from that, totality does not mean you have to restrict yourself to finite structures. Even in a total programming language you can have infinite lists (cf. Idris's codata).
Isabelle is a theorem prover, and logically, infinite objects can be treated by axiomatising them and then reasoning about them using the axioms and rules that you have.
For instance, HOL assumes the existence of an infinite type and defines the natural numbers on that. That already gives you access to functions nat ⇒ 'a, which are essentially infinite lists.
You can also define infinite lists and other infinite data structures as codatatypes with the (co-)datatype package, which is based on bounded natural functors.
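A small sketch (HOL-Library.Stream already provides a stream type; this is only meant to show the shape of such a definition):
codatatype 'a stream = SCons 'a "'a stream"

primcorec ones :: "nat stream" where
  "ones = SCons 1 ones"
(* an infinite stream of 1s, defined by primitive corecursion *)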
Let me add some points to two of your questions.
1) Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
In short: Isabelle/HOL does not require termination of functions, but totality (i.e., that there is a specific result for each input to the function). Totality does not mean that a function actually terminates when transcribed to a (functional) programming language, or even that it is computable at all.
Therefore, talking about termination is somewhat misleading, even though it is encouraged by the fact that Isabelle/HOL's function package uses the keyword termination for proving some property P, about which I will say a little more below.
On the one hand the term "termination" might sound more intuitive to a wider audience. On the other hand, a more precise description of P would be well-foundedness of the function's call graph.
Don't get me wrong, termination is not really a bad name for the property P, it is even justified by the fact that many techniques that are implemented in the function package are very close to termination techniques from term rewriting or functional programming (like the size-change principle, dependency pairs, lexicographic orders, etc.).
I'm just saying that it can be misleading. The answer to why that is the case also touches on question 4 of the OP.
4) As far as I know Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Isabelle/HOL is not a programming language and it specifically does not have any evaluation strategy (we could alternatively say: it has any evaluation strategy you like).
And here is why the word termination is misleading (drum roll): if there is no evaluation strategy and we have termination of a function f, people might expect f to terminate independently of the strategy used. But this is not the case. A termination proof of a function rather ensures that f is well-defined. Even if f is computable, a proof of P merely ensures that there is an evaluation strategy for which f terminates.
(As an aside: what I call "strategy" here, is typically influenced by so called cong-rules (i.e., congruence rules) in Isabelle/HOL.)
As an example, it is trivial to prove that the function (see Section 10.1 Congruence rules and evaluation order in the documentation of the function package):
fun f' :: "nat ⇒ bool"
where
"f' n ⟷ f' (n - 1) ∨ n = 0"
terminates (in the sense defined by termination) after adding the cong-rule:
lemma [fundef_cong]:
"Q = Q' ⟹ (¬ Q' ⟹ P = P') ⟹ (P ∨ Q) = (P' ∨ Q')"
by auto
This rule essentially states that logical or should be "evaluated" from right to left. However, if you write the same function in, e.g., OCaml, it causes a stack overflow ...
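For reference, when transcribing this into a theory file, the cong-rule has to be declared before the function definition, roughly like this (the lemma name is mine):
lemma or_cong_right [fundef_cong]:
  "Q = Q' ⟹ (¬ Q' ⟹ P = P') ⟹ (P ∨ Q) = (P' ∨ Q')"
  by auto

fun f' :: "nat ⇒ bool" where
  "f' n ⟷ f' (n - 1) ∨ n = 0"
(* fun now finds the termination proof automatically, because the recursive call
   is only considered under the condition n ≠ 0 *)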
EDIT: this answer is not really correct, check out Lars' comment below.
Unfortunately I don't have enough reputation to post this as a comment, so here is my go at an answer (please bear in mind I am no expert in Isabelle, but I also had similar questions once):
1) The idea is to prove statements about the defined functions. I am not sure how familiar you are with computability theory, but think about the Halting Problem and the fact that most undecidability results stem from it (such as the Acceptance Problem). Imagine defining a function which you cannot prove terminates. How could you then still prove that it returns the number 42 when given the input "ABC" and does not go into an infinite loop?
If instead you limit yourself to terminating functions, you can prove much more about them, essentially making a trade-off (or at least this is how I see it).
These ideas stem from constructivism and intuitionism, and I recommend you check out Robert Harper's very interesting lecture series on type theory: https://www.youtube.com/watch?v=9SnefrwBIDc&list=PLGCr8P_YncjXRzdGq2SjKv5F2J8HUFeqN
You should check out especially the part about the absence of the Law of Excluded middle: http://youtu.be/3JHTb6b1to8?t=15m34s
2) See Manuel's answer.
3,4) Again see Manuel's answer keeping in mind Intuitionistic logic: "the fundamental entity is not the boolean, but rather the proof that something is true".
For me it took a long time to get adjusted to this way of thinking, and I'm still not sure I understand it. I think the key, though, is to accept that it is a more or less completely different way of thinking.
