Case analysis on a premise in Isabelle

I have the following proof state:
 1. ⋀i is s stk stack.
       (⋀stack.
           length (exec is s stack) = n' ⟹
           length stack = n ⟹ ok n is n') ⟹
       length (exec (i # is) s stack) = n' ⟹
       length stack = n ⟹ ok n (i # is) n'
How do I perform a case split on i, where i is of type:
datatype instr = LOADI val | LOAD vname | ADD
I'm doing this for exercise 4.7 of Concrete Semantics, so it should be possible to do with tactics.

If anything, you would use cases i rule: instr.cases, but that will not work here because i is not a fixed variable but a bound one. Also, rule: instr.cases is not really needed, because Isabelle will use that rule by default anyway.
Doing a case distinction on a bound variable without fixing it first is somewhat discouraged; that said, it can be done with apply (case_tac i) instead of apply (cases i). But as I said, this is not the nicest way to do it.
A more proper way to do it is to explicitly fix i using e.g. the subgoal command:
subgoal for i is s stk stack
apply (cases i)
An even better way would probably be to use a structured Isar proof instead.
However, I don't think the subgoal command or Isar proofs are something you know about at this stage of the Concrete Semantics book, so my guess is that there is a nicer way to do the proof where you don't have to do any manual case splitting.
Most probably you are doing an induction on the list of instructions; it would probably be better to do an induction on the predicate ok instead. But then again: where is that predicate ok? I don't see it in your assumptions. It's hard to say what's going on without knowing how you defined ok, what lemma exactly you are trying to prove, and which tactics you have already applied.
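For illustration, here is a rough sketch of what induction on ok could look like for the direction in which ok appears as an assumption. This is only a hypothetical shape: it assumes ok is an inductive predicate, the exact subgoals depend on how you defined ok and exec, and the remaining steps are left open.
lemma "ok n is n' ⟹ length stk = n ⟹ length (exec is s stk) = n'"
  apply (induction arbitrary: stk rule: ok.induct)  (* assumes ok was introduced with the inductive command *)
  (* one subgoal per introduction rule of ok; how to discharge them depends on your definitions *)
  oops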

Fragile rule application in Isabelle

I was playing around with an example from the Isabelle/HOL tutorial to get a better understanding on the correspondence between Isar and Tactics proofs.
This is a version which works:
lemma rtrancl_converseD: "(x,y) ∈ (r^-1)^* ⟹ (y,x) ∈ r^*"
proof (induct y rule: rtrancl_induct)
  case base
  then show ?case ..
next
  case (step y z)
  then have "(z, y) ∈ r" using rtrancl_converseD by simp
  with `(y,x)∈ r^*` show "(z,x) ∈ r^*" using [[unify_trace_failure]]
    apply (subgoal_tac "1=(1::nat)")
    apply (rule converse_rtrancl_into_rtrancl)
    apply simp_all
    done
qed
I want to instantiate converse_rtrancl_into_rtrancl, which states (?a, ?b) ∈ ?r ⟹ (?b, ?c) ∈ ?r^* ⟹ (?a, ?c) ∈ ?r^*.
But without the seemingly nonsensical apply (subgoal_tac "1=(1::nat)") line this errors with
Clash: r =/= Transitive_Closure.rtrancl
Failed to apply proof method:
using this:
  (y, x) ∈ r^*
  (z, y) ∈ r
goal (1 subgoal):
 1. (z, x) ∈ r^*
If I fully instantiate the rule apply (rule converse_rtrancl_into_rtrancl[of z y r x]) this becomes Clash: z__ =/= ya__.
This leaves me with three questions: Why does this specific case break? How can I fix it? And how can I figure out what went wrong in such cases, given that I can't really understand what the unify_trace_failure message wants to tell me?
rule-tactics are usually sensitive to the order of premises. The order of premises in converse_rtrancl_into_rtrancl and in your proof state doesn't match. Switching the order of premises in the proof state using rotate_tac makes them match the rule, so that you can apply the fact method directly like this:
... show "(z,x) ∈ r^*"
apply (rotate_tac)
apply (fact converse_rtrancl_into_rtrancl)
done
Or, if you want to include some kind of rule tactic, this would look like this:
apply (rotate_tac)
apply (erule converse_rtrancl_into_rtrancl)
apply (assumption)
(I personally don't use apply scripts ever in my everyday work. So apply-style gurus might know more elegant ways of handling this kind of situation. ;) )
Regarding your 1=(1::nat) / simp_all fix:
The whole goal can be solved directly by simp_all. So attempts at adding stuff like 1=1 probably did not tell you much about how much the other methods contributed to solving the proof.
However, the additional assumption seems to actually help Isabelle match converse_rtrancl_into_rtrancl correctly. (Don't ask me why!) So, one could indeed circumvent the problem by adding this spurious assumption and then eliminating it with refl again like:
apply (subgoal_tac "1=(1::nat)")
apply (erule converse_rtrancl_into_rtrancl)
apply (assumption)
apply (rule refl)
This does not look particularly elegant, of course.
The [[unify_trace_failure]] probably only really helps if one is familiar with the internal workings of Nipkow's higher-order unification algorithm. (I'm not.) I think the hint for the future here would really be that one must look closely at the order of premises for some tactics (rather than at the unifier debug output).
I found an explanation in the Isar reference manual, section 6.4.3.
The with b1..bn command is equivalent to from b1..bn and this, i.e. it enters the proof chaining mode which adds them as (structured) assumptions to proof methods.
Basic proof methods (such as rule) expect multiple facts to be given
in their proper order, corresponding to a prefix of the premises of
the rule involved. Note that positions may be easily skipped using
something like from _ and a and b, for example. This involves the
trivial rule PROP ψ =⇒ PROP ψ, which is bound in Isabelle/Pure as “_”
(underscore).
Automated methods (such as simp or auto) just insert any given facts
before their usual operation. Depending on the kind of procedure
involved, the order of facts is less significant here.
Given the information about the 'with' translation and that rule expects chained facts in order, we could try to flip the chained facts. And indeed this works:
from this and `(y,x)∈ r^*` show "(z,x) ∈ r^*"
by (rule converse_rtrancl_into_rtrancl)
I think "6.4.3 Fundamental methods and attributes" is also relevant because it describes how the basic methods interact with incoming facts. Notably, the '-' noop which is sometimes used when starting proofs turns forward chaining into assumptions on the goal.
with `(y,x)∈ r^*` show "(z,x) ∈ r^*"
apply -
apply (rule converse_rtrancl_into_rtrancl; assumption)
done
This works because the first apply consumes all chained facts, so the second apply is pure backwards chaining. This is also why the subgoal_tac or rotate_tac variants worked, but only if they are in separate apply commands.

How to prove the existence of inverse functions in Isabelle/HOL?

I am trying to prove the following basic theorem about the existence of the inverse function of a bijective function (to learn theorem-proving with Isabelle/HOL):
For any set S and its identity map 1_S, α : S → T is bijective iff there exists a map β : T → S such that βα = 1_S and αβ = 1_T.
Below is what I have so far after some attempts to define relevant things including functions and their inverses. But I am pretty stuck and couldn't make much progress due to my lack of understanding of Isabelle and/or Isar.
theory Test
  imports Main
    "HOL.Relation"
begin

lemma bij_iff_ex_identity: "bij_betw f A B ⟷ (∃ g. g∘f = restrict id B ∧ f∘g = restrict id A)"
  unfolding bij_betw_def inj_on_def restrict_def iffI
proof
  let ?g = "restrict (λ y. (if f x = y then x else undefined)) B"
  assume "(∀x∈A. ∀y∈A. f x = f y ⟶ x = y)"
  have "?g∘f = restrict id B"
  proof
    (* cannot prove this *)

end
In the above, I try to give an explicit existential witness (i.e. the inverse function g of the original function f). I have several issues with the proof:
whether the concepts (functions, inverse functions, etc.) are defined correctly in Isabelle terms;
how to expand the relevant definitions and then simplify them with function applications. I have followed some Isabelle (2021) examples/tutorials on both apply-style simp and structured Isar proofs, but couldn't use Isar fluently. Once I have started the proof command, I don't know how to simp or move any further;
Isar has the assumes ... shows ... way of stating a theorem. Is there similar support for proving an iff (⟷) like the example above? Without it there is no access to assms etc., and is it necessary to assume everything except the conclusion during the proof?
Can someone help explain how the above existential proof about inverse function can be accomplished?
lemma bij_iff_ex_identity : "bij_betw f A B ⟷ (∃ g. g∘f = restrict id B ∧ f∘g = restrict id A)"
I think this is not exactly what you want, and I am doubtful that it is true. g∘f = restrict id B does not mean that g∘f and id are equal on B. It means that the total function g∘f (and there are only total functions in HOL) equals the total function restrict id B. The latter returns id x for x∈B and undefined otherwise. So to make this equality true, g needs to output undefined whenever the input of f is not in B. But how would g know that?
If you want to use restrict, you could write restrict (g∘f) B = restrict id B. But personally, I would rather go for the simpler (∀x∈B. (g∘f) x = x).
So the corrected theorem would be:
lemma bij_iff_ex_identity : "bij_betw f A B ⟷ (∃ g. (∀x∈A. (g∘f) x = x) ∧ (∀y∈B. (f∘g) y = y))"
(Which is still wrong, by the way, as quickcheck tells me in Isabelle/jEdit; see the output window. If A has one element and B is empty, f cannot be a bijection, yet the right-hand side can still be satisfied. So the theorem you are attempting is actually mathematically not true. I will not attempt to fix it, but just answer the remaining lines.)
unfolding bij_betw_def inj_on_def restrict_def iffI
The iffI here has no effect. Unfolding can only apply theorems of the form A = B (unconditional rewriting rules). iffI is not of that form. (Use thm iffI to see.)
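For reference, thm iffI displays something like
⟦?P ⟹ ?Q; ?Q ⟹ ?P⟧ ⟹ ?P = ?Q
which is clearly not an unconditional rewrite rule.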
proof
Personally, I don't use the bare form proof but always proof - or proof (some method), because bare proof just applies some default method (in this case, equivalent to (rule iffI)), so I think it's better to make it explicit. proof - just starts the proof without applying an extra method.
let ?g = "restrict (λ y. (if f x = y then x else undefined)) B"
You have an unbound variable x here. (Note the background color in the IDE.) That is most likely not what you want. Formally, it is allowed, but x will be treated as if it were some arbitrary constant.
Generally, I don't think there is any way to define g in a simple way (i.e., only with quantifiers and function applications and if-then-else). I think the only way to define an inverse (even if you know it exists), is to use the THE operator, because you need to say something like g y is "the" x such that f x = y. (And then later in the proof you will run into a proof obligation that it indeed exists and that it is unique.) See the definition of inv_into in Hilbert_Choice.thy (except it uses SOME not THE). Maybe for starters, try to do the proof just using the existing inv_into constant.
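For instance, a hedged sketch in that direction (not machine-checked here; bij_betw_inv_into_left and bij_betw_inv_into_right are the Hilbert_Choice facts that, as far as I remember, relate inv_into to f on A and B):
(* use the library inverse inv_into A f as the witness instead of a hand-rolled
   if-then-else; the two lemma names below are assumed to be available from Main *)
lemma
  assumes "bij_betw f A B"
  shows "(∀x∈A. inv_into A f (f x) = x) ∧ (∀y∈B. f (inv_into A f y) = y)"
  using assms by (metis bij_betw_inv_into_left bij_betw_inv_into_right)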
assume "(∀x∈A. ∀y∈A. f x = f y ⟶ x = y)"
All assume commands must state their assumptions exactly as they appear in the proof goal. You can test whether you wrote them right by temporarily writing the command show A for A (that's an unprovable goal that would, however, finish the proof, so stating it tricks Isabelle into checking whether it would). If this command does not give an error, you got the assumes right. In your case you didn't; it should be (∀x∈A. ∀y∈A. f x = f y ⟶ x = y) ∧ f ` A = B.
My recommendation: Try the proof with bij instead of bij_betw first. (One direction is in BNF_Fixpoint_Base.o_bij if you want to cheat.)
Once done, you can try to generalize.
I agree with the insightful remarks provided by Dominique Unruh. However, I would like to mention that a theorem that captures the idea underlying the theorem that you are trying to prove already exists in the source code of the main library of Isabelle/HOL. In fact, it exists in at least two different formats: let me name them the traditional Isabelle/HOL format and the canonical FuncSet format. For the former one, see the theorem bij_betw_iff_bijections:
"bij_betw f A B ⟷ (∃g. (∀x ∈ A. f x ∈ B ∧ g(f x) = x) ∧ (∀y ∈ B. g y ∈ A ∧ f(g y) = y))"
The situation is a little bit more complicated with FuncSet. There does not seem to exist a single theorem that captures the idea. However, together, the theorems bij_betwI, bij_betw_imp_funcset and inv_into_funcset are nearly equivalent to the theorem that you are trying to state. Let me provide a sketch of how one could express this theorem in a manner that would be considered reasonably canonical in the FuncSet sense (try to prove it yourself):
lemma bij_betw_iff:
  shows "bij_betw f A B ⟷
    (
      ∃g.
        (∀x. x∈A ⟶ g (f x) = x) ∧
        (∀y. y∈B ⟶ f (g y) = y) ∧
        f ∈ A → B ∧
        g ∈ B → A
    )"
  sorry
I would also like to repeat the advice given by Dominique Unruh and provide several side remarks:
My recommendation: Try the proof with bij instead of bij_betw first.
Indeed, this is a very good idea. In general, by trying to restrict the problem to explicitly defined sets A and B instead of working directly with types, you have touched upon a topic known as relativization in logic. For a mild layman's introduction see, for example, https://leanprover.github.io/logic_and_proof/first_order_logic.html [1]; for a slightly more thorough introduction in the context of set theory see [2, chapter 12]. As you have probably noticed by now, it is not that easy to relativize theorems in Isabelle/HOL, and doing so requires additional proof effort.
However, there exists an extension of Isabelle/HOL that allows for the automation of the process of the relativization of theorems. For more information about this extension see the article From Types to Sets by Local Type Definition in Higher-Order Logic by Ondřej Kunčar and Andrei Popescu [3]. There also exists a large scale application example of the framework [4]. Independently, I am working on making this extension more user-friendly and very slowly approaching the final stages in my efforts: see https://gitlab.com/user9716869/tts_extension. Thus, in principle, if you know how to use Types-To-Sets and you accept its axioms, then it is sufficient to prove the theorem with bij, e.g.,
"bij f ⟷ (∃g. (∀x. g (f x) = x) ∧ (∀y. f (g y) = y))",
Then theorems like bij_betw_iff_bijections and bij_betw_iff can be synthesized automatically, for free, upon a click of a button (almost...).
Finally, for completeness, let me offer my own advice with regard to your queries (although, as I mentioned, I agree with everything stated by Dominique Unruh):
how to expand the relevant definitions and then simplify them with function applications. I have followed some Isabelle (2021) examples/tutorials on both apply-style simp and structured Isar proofs, but couldn't use Isar fluently. Once I have started the proof command, I don't know how to simp or move any further.
I believe that the best way to learn what you are trying to learn is by following the exercises in the book Concrete Semantics by Tobias Nipkow and Gerwin Klein [5]. Additionally, I would also look through A Proof Assistant for Higher-Order Logic by Tobias Nipkow et al. [6] (it is slightly outdated, but I found it useful specifically for learning apply-style scripting/direct rule application). By the way, I mostly taught myself Isabelle from these books without any prior experience in formal methods.
Isar has the assumes ... shows ... way of stating a theorem. Is there similar support for proving an iff (⟷) like the example above? Without it there is no access to assms etc., and is it necessary to assume everything except the conclusion during the proof?
I will make the advice given by Dominique Unruh more explicit: use rule iffI or intro iffI for this.
Edit. When you use rule iffI (or similar) to start your structured Isar proof, you need to state your assumptions explicitly for every subgoal (using the assume ... show ... paradigm). However, there is a tool that can generate such boilerplate Isar code automatically. It is called Sketch-and-Explore and you can find it in the directory HOL/ex of the main library of Isabelle/HOL. In this case, all you need to do is type sketch (rule iffI) and the assume/show skeleton will be generated automatically for every subgoal.
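A minimal usage sketch (the theory name Scratch is just a placeholder, and the session-qualified import "HOL-ex.Sketch_and_Explore" is my assumption about how that directory is made available):
theory Scratch
  imports Main "HOL-ex.Sketch_and_Explore"
begin

lemma "bij_betw f A B ⟷ (∃g. (∀x∈A. g (f x) = x) ∧ (∀y∈B. f (g y) = y))"
  sketch (rule iffI)
  (* the output panel now shows an Isar skeleton with assume ... show ... for both
     directions, ready to be copied back into the theory *)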
References
[1] Avigad J, Lewis RY, van Doorn F. Logic and Proof.
[2] Jech T. Set theory. 3rd ed. Heidelberg: Springer; 2006. (Pure and Applied Mathematics, a Series of Monographs and Textbooks).
[3] Kunčar O, Popescu A. From Types to Sets by Local Type Definition in Higher-Order Logic. Journal of Automated Reasoning. 2019;62(2):237–60.
[4] Immler F, Zhan B. Smooth Manifolds and Types to Sets for Linear Algebra in Isabelle/HOL. In: 8th ACM SIGPLAN International Conference on Certified Programs and Proofs. New York: ACM; 2019. p. 65–77. (CPP 2019).
[5] Nipkow T, Klein G. Concrete Semantics with Isabelle/HOL. Heidelberg: Springer-Verlag; 2017. (http://concrete-semantics.org/)
[6] Nipkow T, Paulson LC, Wenzel M. A Proof Assistant for Higher-Order Logic. Heidelberg: Springer-Verlag; 2017.

Isabelle "Failed to apply proof method" when working with two theory files

I have a theory file Test_Func.thy, which I have copied into Isabelle's src/HOL and which defines the function add_123:
theory Test_Func
  imports Main
begin

fun add_123 :: "nat ⇒ nat ⇒ nat" where
  "add_123 0 n = n" |
  "add_123 (Suc m) n = Suc (add_123 m n)"

end
And then I have a Test_1.thy file with the import and a lemma:
theory Test_1
  imports Main "HOL.Test_Func"
begin

lemma add_02: "add_123 m 0 = m"
  apply(simp)
  done

end
The strange thing is that apply(simp) or apply(auto) fails with Failed to apply proof method. There is no error message about an undefined or invisible function, but somehow such a simple proof does not work when the function definition and the lemma about it are split into two files.
So this question may have several different problems and solutions: maybe it is about my inexperience with importing theory files, or maybe I am confused about the choice and application of tactics.
I am observing this in jEdit with Isabelle 2021, but in a different setting I can see the same thing happening with Isabelle 2020 as well.
There is no need to put theory files into the Isabelle distribution (on the contrary, I would rather keep it intact to make sure your development can be used on other machines without touching the Isabelle installation).
The issue with the failing proof lies in a different area: the definition of add_123 is inductive on the first argument, so there is no immediate rewrite rule for the expression stated in add_02. (E.g., lemma add_01: "add_123 0 m = m" could be proved the way you did, because it matches the first case of the definition.)
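Indeed, that variant goes through by simp alone, because the first defining equation applies directly:
lemma add_01: "add_123 0 m = m"
  by simp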
The solution is to use a proof by induction on the first argument:
apply (induction m)
apply simp_all
done
or, in short, by (induction m) simp_all.
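Putting it together, the corrected lemma in Test_1.thy reads:
lemma add_02: "add_123 m 0 = m"
  by (induction m) simp_all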

Local assumptions in "state" mode

Frequently, when proving a statement in "prove" mode, I find myself in need of some intermediate statements that are not yet stated nor proved. To state them, I usually make use of the subgoal command, followed by proof - to change to "state" mode. In the process, however, all of the local assumptions are removed. A typical example could look like this:
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
apply(auto)
subgoal
proof-
have "0<n" sorry (* here I would like to refer to the assumption from the subgoal *)
then show ?thesis sorry
qed
subgoal sorry
done
I am aware that I could state the assumptions explicitly using assume. However, this quickly becomes rather tedious when multiple assumptions are involved. Is there an easier way to simply refer to all of the assumptions? Alternatively, is there a good way to implement statements with short proofs directly in "prove" mode?
There is the syntax subgoal premises prems to bind the premises of the subgoal to the name prems (or any other name – but prems is a sensible default):
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
apply(auto)
subgoal premises prems
proof -
thm prems
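    (* a hedged continuation: prems now names the premises of this subgoal,
       here (as in your example) just "0 < n" *)
    from prems have "0 < n" by simp
    then show ?thesis sorry
  qed
  subgoal sorry
  done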
There is also a method called goal_cases that automatically gives names to all the current subgoals – I find it very useful. If subgoal premises did not exist, you could do this instead:
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
apply(auto)
subgoal
proof goal_cases
case 1
By the way, looking at your example, it is considered a bad idea to do anything after auto that depends on the exact form of the proof state, such as metis calls or Isar proofs. auto is fairly brutal and might behave differently in the next Isabelle release so that such proofs break. I recommend doing a nice structured Isar proof here.
Also note that your theorem is a direct consequence of power_strict_mono and power_less_imp_less_base and can be proven in a single line:
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
by (auto intro: Nat.gr0I power_strict_mono)`

Isabelle - exI and refl behavior explanation needed

I am trying to understand the lemma below.
Why is the ?y2 schematic variable introduced in exI?
And why it is not considered in refl (so: x = x)?
lemma "∀x. ∃y. x = y"
apply(rule allI) (* ⋀x. ∃y. x = y *)
thm exI (* ?P ?x ⟹ ∃x. ?P x *)
apply(rule exI) (* ⋀x. x = ?y2 x *)
thm refl (* ?t = ?t *)
apply(rule refl)
done
UPDATE (because I can't format code in comments):
This is the same lemma with a different proof, using simp.
lemma "∀x. ∃y. x = y"
using [[simp_trace, simp_trace_depth_limit = 20]]
apply (rule allI) (*So that we start from the same problem state. *)
apply (simp only:exI)
done
The trace shows:
[0]Adding rewrite rule "HOL.exI":
?P1 ?x1 ⟹ ∃x. ?P1 x ≡ True
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
⋀x. ∃y. x = y
[1]Applying instance of rewrite rule "HOL.exI":
?P1 ?x1 ⟹ ∃x. ?P1 x ≡ True
[1]Trying to rewrite:
x = ?x1 ⟹ ∃xa. x = xa ≡ True <-- NOTE: not ?y2 xa or similar!
[2]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
x = ?x1
[1]SUCCEEDED
∃xa. x = xa ≡ True
So apparently simp and rule handle exI differently. And the remaining question is: what is the mechanical (programmatic) reasoning behind rule's behavior?
When you use rule thm for some fact thm, Isabelle performs higher-order unification of the conclusion of thm with the current goal. If there is a unifier, it is used to instantiate both the goal and the conclusion of the theorem, and then resolution is performed (i.e. the goal is replaced with the assumptions of thm).
This means that:
Schematic variables in the goal can be instantiated by rule through unification
Variables that appear only in the assumptions of thm will not be instantiated by the unification and will therefore remain schematic. That way, you end up with schematic variables in your new goals. Such variables can be seen as existential in some sense, because the conclusion of thm holds if you can prove the assumptions for just one arbitrary value.
In the case of exI, you have ?P ?x ⟹ ∃x. ?P x. When you apply rule exI, the variable ?P is instantiated to λy. x = y, but the variable ?x appears only in the assumptions of exI, so it remains schematic. This means that you can pick any value you want for ?x later on in your proof.
To be more precise, you end up with ⋀x. x = ?y2 x as your goal. You might ask ‘Why not just ⋀x. x = ?y2?’ That would mean that you have to show that x equals some fixed value y2 for all possible values of x. That is obviously not true in general. ⋀x. x = ?y2 x means you have to show that every x equals some y2 that may depend on x – or, equivalently, that there is a function y2 that, when given x, outputs x.
Of course, there is such a function and it is simply the identity function λx. x. That is precisely what ?y2 gets instantiated to when you apply rule refl: the goal x = ?y2 x is unified with the conclusion of refl ?t = ?t and you end up with ?t = x and ?y2 = λx. x, and since refl has no assumptions, this resolution finishes the proof.
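To see the same mechanism with the witness chosen by hand, here is a hedged variant of the proof above in which ?x is instantiated explicitly instead of being left schematic:
lemma "∀x. ∃y. x = y"
proof (rule allI)
  fix x :: 'a
  show "∃y. x = y"
    by (rule exI[of _ x]) (rule refl)  (* pick the witness x up front; refl closes x = x *)
qed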
I am not entirely sure what you mean with ‘And why it is not considered in refl?’, but I hope that I have answered your questions.
You may get a more complete answer from an expert, but I'll give a short answer to the second part of your question.
The great thing about Isabelle is that it provides many different ways to prove a problem.
Your new question is similar to L.Paulson's comment on FOM: you moved the goal post by switching the question to rule vs. simp:
http://www.cs.nyu.edu/pipermail/fom/2015-October/019312.html
Getting a basic understanding of simp is actually a much easier goal to pursue, or I wouldn't be adding my response here.
rule and natural deduction
Using rule means using natural deduction (ND), and most people aren't up to speed on ND. Because of that, questions like your first one rarely have a simple answer: anything informative can't be a one-liner, especially once schematic variables (which you asked about), resolution, unification, rewriting, etc. come into play.
Do a search on natural deduction and you'll find the standard wiki page about it. There are numerous books on natural deduction, though they get swamped in searches on "logic" due to first-order logic books. A popular one is Logic in Computer Science, 2nd edition, by Huth and Ryan.
If you study ND, you'll see that exI matches one of the ND rules.
I have yet to take the time to come up to speed on ND, because I keep making progress without having more than a basic understanding of ND.
Sledgehammer, and automatic methods such as auto, simp, blast, induct, cases, etc. (and Sledgehammer's use of some of them), keep me from finding the time to become good at natural deduction.
Answers like M. Eberl's, though not simple explanations, help me absorb a little here and a little there.
Simp, I think of it as simple substitution (rewriting)
The mechanics behind simp are really simple compared to natural deduction. You state a formula and prove it:
lemma foo [simp]: "left_hand_side = right_hand_side"
In the proof of another theorem, when simp is invoked in one way or another (or foo is unfolded), every occurrence of left_hand_side is replaced with right_hand_side. It's just classic mathematical substitution.
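A tiny illustration of that idea, using a made-up constant double (not anything from the library):
(* the [simp] declaration turns the defining equation into a rewrite rule *)
definition double :: "nat ⇒ nat" where
  [simp]: "double n = n + n"

lemma "double 2 + double 3 = 10"
  by simp   (* simp rewrites double k to k + k, then finishes the numeral arithmetic *)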
I suppose it could also be called "rewriting", but I don't know much about rewriting other than that people talk about it.
There are lots of details about how and whether one should set things up automatically (to prevent looping), like with [simp] or declare foo_def [simp add], but those are just details along the lines of normal programming.
