I am puzzled about proving
A ==> B ==> C ==> B
in Isabelle. Obviously you could
apply simp
but how could I prove this using rules?
Alternatively, is there a way to dump the rules simp used? Thanks.
If you really want to understand how proofs work, you should, as a start, forget about both funny tactics and automated reasoning tools.
The statement A ==> B ==> C ==> B (using this special ==> connective) of Isabelle/Pure is immediately true, so its proof in Isabelle/Isar is:
lemma "A ==> B ==> C ==> B" .
That's it, just . (which abbreviates by this, but the this is actually empty here).
A slightly less vacuous proof uses actual Isabelle/HOL connectives, which you can then handle by standard introduction or elimination steps, e.g. like this:
lemma "A --> B --> C --> B"
proof
show "B --> C --> B"
proof
assume b: B
show "C --> B"
proof
show B by (rule b)
qed
qed
qed
But that is not so interesting either: you build up a boring implication that is then decomposed until you are finished.
To find more interesting Isabelle/Isar proofs just do some web search, or look through the sources that come with the system.
A totally arbitrary example is here: Drinker.
There are also tons of manuals, actually too many of them.
You can enable simplifier tracing; in Proof General, you can do this with Isabelle → Settings → Tracing → Trace Simplifier, I don't know about jEdit.
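For what it is worth, the trace can also be switched on in the theory text itself via the simp_trace configuration option, independently of the front end; a minimal sketch:

lemma "A ==> B ==> C ==> B"
  using [[simp_trace]]
  by simp

The trace messages then appear in the prover output.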
EDIT: In this case the simp trace will not be very helpful, since simp does not use rewrite rules to solve this goal. Instead, it "sees" A, B, and C in the premises and concludes that, in the context of this statement, it may rewrite A = True, B = True, and C = True. It then rewrites the goal B to True, and you are done.
However, the "normal" way of proving statements such as this is to use the assumption method, which matches the goal against a premise, in this case B. There is probably a way to prove this using rule as well, but that would be unnecessarily complicated. assumption uses assume_tac, which in turn is just a wrapper around the very basic function Thm.assumption, so this can really be considered one of the most elementary proof methods in Isabelle.
So just write by assumption.
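Putting it together with the statement from the question:

lemma "A ==> B ==> C ==> B"
  by assumption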
I have the following proof state:
1. ⋀i is s stk stack.
(⋀stack.
length (exec is s stack) = n' ⟹
length stack = n ⟹ ok n is n') ⟹
length (exec (i # is) s stack) = n' ⟹
length stack = n ⟹ ok n (i # is) n'
How do I perform a case split on i? Where i is of type:
datatype instr = LOADI val | LOAD vname | ADD
I'm doing this for Exercise 4.7 of Concrete Semantics, so this should be possible to do with tactics.
If anything, you should use cases i rule: instr.cases, but that will not work here because i is not a fixed variable but a bound one. Also, the rule: instr.cases is not really needed, because Isabelle will use that rule by default anyway.
Doing a case distinction on a bound variable without fixing it first is kind of discouraged; that said, it can be done by doing apply (case_tac i) instead of apply (cases i). But as I said, this is not the nice way to do it.
A more appropriate way is to explicitly fix i, e.g. using the subgoal command:
subgoal for i is s stk stack
apply (cases i)
An even better way would probably be to use a structured Isar proof instead.
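Just as a rough sketch (the surrounding statement and the variable names are made up here, and the actual proof obligations are left as sorry), such a structured case distinction could look like this once i is a fixed variable of an Isar proof:

proof (cases i)
  case (LOADI n)
  then show ?thesis sorry
next
  case (LOAD x)
  then show ?thesis sorry
next
  case ADD
  then show ?thesis sorry
qed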
However, I don't think the subgoal command or Isar proofs are something that you know about at this stage of the Concrete Semantics book, so my guess would be that there is a nicer way to do the proof where you don't have to do any manual case splitting.
Most probably you are doing an induction on the list of instructions; it would probably be better to do an induction on the predicate ok instead. But then again: where is that predicate ok? I don't see it in your assumptions. It's hard to say what's going on without knowing how you defined ok, what lemma exactly you are trying to prove, and what tactics you have already applied.
I am trying to prove the following basic theorem about the existence of the inverse function of a bijective function (to learn theorem-proving with Isabelle/HOL):
For any sets S and T with identity maps 1_S and 1_T, a map α : S → T is bijective iff there exists a map β : T → S such that βα = 1_S and αβ = 1_T.
Below is what I have so far after some attempts to define relevant things including functions and their inverses. But I am pretty stuck and couldn't make much progress due to my lack of understanding of Isabelle and/or Isar.
theory Test
imports Main
"HOL.Relation"
begin
lemma bij_iff_ex_identity : "bij_betw f A B ⟷ (∃ g. g∘f = restrict id B ∧ f∘g = restrict id A)"
unfolding bij_betw_def inj_on_def restrict_def iffI
proof
let ?g = "restrict (λ y. (if f x = y then x else undefined)) B"
assume "(∀x∈A. ∀y∈A. f x = f y ⟶ x = y)"
have "?g∘f = restrict id B"
proof
(* cannot prove this *)
end
In above, I try to give an explicit existential witness (i.e. the inverse function g of the original function f). I have several issues about the proof.
whether the concepts are defined right (functions, inverse functions etc.) in Isabelle terms.
how to expand the relevant definitions and then simplify them with function applications. I have followed some Isabelle (2021) examples/tutorials about both the apply-style simp, and structured style Isar proof but couldn't use the Isar proof fluently. Once I started the proof command, I don't know how to simp or move any further.
Isar has the new way of assumes ... shows ... for stating a theorem. Is there similar support for proving iffs (⟷) like the example above? Without it, there is no access to assms etc. Is it necessary to assume everything except the conclusion during the proof?
Can someone help explain how the above existential proof about inverse function can be accomplished?
lemma bij_iff_ex_identity : "bij_betw f A B ⟷ (∃ g. g∘f = restrict id B ∧ f∘g = restrict id A)"
I think this is not exactly what you want, and I am doubtful that it is true. g∘f = restrict id B does not mean that g∘f and id are equal on B. It means that the total function g∘f (and there are only total functions in HOL) equals the total function restrict id B. The latter returns id x for x∈B and undefined otherwise. So to make this equality true, g needs to output undefined whenever the input of f is not in B. But how would g know that?
If you want to use restrict, you could write restrict (g∘f) B = restrict id B. But personally, I would rather go for the simpler (∀x∈B. (g∘f) x = x).
So the corrected theorem would be:
lemma bij_iff_ex_identity : "bij_betw f A B ⟷ (∃ g. (∀x∈A. (g∘f) x = x) ∧ (∀y∈B. (f∘g) y = y))"
(This is still wrong, by the way, as quickcheck tells me in Isabelle/jEdit; see the output window. If A has one element and B is empty, f cannot be a bijection, yet the right-hand side still holds: take g to map everything to the single element of A, and the quantifier over B is vacuous. So the theorem you are attempting is actually mathematically not true. I will not attempt to fix it, but just answer the remaining lines.)
unfolding bij_betw_def inj_on_def restrict_def iffI
The iffI here has no effect. Unfolding can only apply theorems of the form A = B (unconditional rewriting rules). iffI is not of that form. (Use thm iffI to see.)
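For reference, thm iffI prints something like the following, which is a rule with two premises rather than an unconditional equation, so unfolding cannot use it:

thm iffI
(* output, roughly: (?P ⟹ ?Q) ⟹ (?Q ⟹ ?P) ⟹ ?P = ?Q *)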
proof
Personally, I never use the bare form proof, but always proof - or proof (some method), because a bare proof just applies some default method (in this case, equivalent to (rule iffI)), so I think it is better to make it explicit. proof - just starts the proof without applying any extra method.
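To illustrate the difference on a toy goal (nothing here is specific to your lemma):

lemma "(A ∧ B) ⟷ (B ∧ A)"
proof -    (* no method applied: the goal is still the whole equivalence *)
  show "(A ∧ B) ⟷ (B ∧ A)" by blast
qed

lemma "(A ∧ B) ⟷ (B ∧ A)"
proof (rule iffI)    (* the explicit version of what a bare proof does for this goal *)
  assume "A ∧ B" then show "B ∧ A" by blast
next
  assume "B ∧ A" then show "A ∧ B" by blast
qed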
let ?g = "restrict (λ y. (if f x = y then x else undefined)) B"
You have an unbound variable x here. (Note the background color in the IDE.) That is most likely not what you want. Formally, it is allowed, but x will be treated as if it was some arbitrary constant.
Generally, I don't think there is any way to define g in a simple way (i.e., only with quantifiers, function applications, and if-then-else). I think the only way to define an inverse (even if you know it exists) is to use the THE operator, because you need to say something like: g y is "the" x such that f x = y. (And then later in the proof you will run into a proof obligation that such an x indeed exists and that it is unique.) See the definition of inv_into in Hilbert_Choice.thy (except that it uses SOME, not THE). Maybe for starters, try to do the proof just using the existing inv_into constant.
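A small illustration of what the library already gives you (the fact name inv_into_f_f is from Hilbert_Choice, if I remember correctly; it inverts f on A under an injectivity assumption):

lemma "inj_on f A ⟹ x ∈ A ⟹ inv_into A f (f x) = x"
  by (rule inv_into_f_f)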
assume "(∀x∈A. ∀y∈A. f x = f y ⟶ x = y)"
All assume commands must state the assumptions exactly as they are in the proof goal. You can test whether you wrote them right by temporarily writing the command show A for A (that's an unprovable goal that would, however, finish the proof, so it tricks Isabelle into checking whether the assumptions match). If this command does not give an error, you got the assumes right. In your case, you didn't; it should be (∀x∈A. ∀y∈A. f x = f y ⟶ x = y) ∧ f ` A = B (where ` is the image operator, written with a backtick).
My recommendation: Try the proof with bij instead of bij_betw first. (One direction is in BNF_Fixpoint_Base.o_bij if you want to cheat.)
Once done, you can try to generalize.
I agree with the insightful remarks provided by Dominique Unruh. However, I would like to mention that a theorem that captures the idea underlying the theorem that you are trying to prove already exists in the source code of the main library of Isabelle/HOL. In fact, it exists in at least two different formats: let me name them the traditional Isabelle/HOL format and the canonical FuncSet format. For the former one, see the theorem bij_betw_iff_bijections:
"bij_betw f A B ⟷ (∃g. (∀x ∈ A. f x ∈ B ∧ g(f x) = x) ∧ (∀y ∈ B. g y ∈ A ∧ f(g y) = y))"
The situation is a little bit more complicated with FuncSet. There does not seem to exist a single theorem that captures the idea. However, together, the theorems bij_betwI, bij_betw_imp_funcset and inv_into_funcset are nearly equivalent to the theorem that you are trying to state. Let me provide a sketch of how one could express this theorem in a manner that would be considered reasonably canonical in the FuncSet sense (try to prove it yourself):
lemma bij_betw_iff:
shows "bij_betw f A B ⟷
(
∃g.
(∀x. x∈A ⟶ g (f x) = x) ∧
(∀y. y∈B ⟶ f (g y) = y) ∧
f ∈ A → B ∧
g ∈ B → A
)"
sorry
I would also like to repeat the advice given by Dominique Unruh and provide several side remarks:
My recommendation: Try the proof with bij instead of bij_betw first.
Indeed, this is a very good idea. In general, by trying to restrict the problem to explicitly defined sets A and B instead of working directly with types, you have touched upon a topic known as relativization in logic. For a mild layman's introduction see, for example, https://leanprover.github.io/logic_and_proof/first_order_logic.html [1]; for a slightly more thorough introduction in the context of set theory, see [2, chapter 12]. As you have probably noticed by now, it is not that easy to relativize theorems in Isabelle/HOL, and doing so requires additional proof effort.
However, there exists an extension of Isabelle/HOL that allows for the automation of the process of the relativization of theorems. For more information about this extension see the article From Types to Sets by Local Type Definition in Higher-Order Logic by Ondřej Kunčar and Andrei Popescu [3]. There also exists a large scale application example of the framework [4]. Independently, I am working on making this extension more user-friendly and very slowly approaching the final stages in my efforts: see https://gitlab.com/user9716869/tts_extension. Thus, in principle, if you know how to use Types-To-Sets and you accept its axioms, then it is sufficient to prove the theorem with bij, e.g.,
"bij f ⟷ (∃g. (∀x. g (f x) = x) ∧ (∀y. f (g y) = y))",
Then, the theorems like
bij_betw_iff_bijections and bij_betw_iff can be synthesized automatically for free upon a click of a button (almost...).
Finally, for completeness, let me offer my own advice with regard to your queries (although, as I mentioned, I agree with everything stated by Dominique Unruh):
how to expand the relevant definitions and then simplify them with
function applications. I have followed some Isabelle (2021)
examples/tutorials about both the apply-style simp, and structured
style Isar proof but couldn't use the Isar proof fluently. Once I
started the proof command, I don't know how to simp or move any
further.
I believe that the best way to learn what you are trying to learn is by working through the exercises in the book Concrete Semantics by Tobias Nipkow and Gerwin Klein [5]. Additionally, I would also look through A Proof Assistant for Higher-Order Logic by Tobias Nipkow et al. [6] (it is slightly outdated, but I found it to be useful specifically for learning apply-style scripting/direct rule application). By the way, I mostly taught myself Isabelle from these books, without any prior experience in formal methods.
Isar has the new way of assumes ... shows ... for stating a theorem.
Is there similar support for proving iffs (⟷) like the example above?
Without it, there is no access to assms etc. Is it necessary to
assume everything except the conclusion during the proof?
I will make the advice given by Dominique Unruh more explicit: use rule iffI or intro iffI for this.
Edit. When you use rule iffI (or similar) to start your structured Isar proof, you need to state your assumptions explicitly for every subgoal (using the assume ... show ... paradigm). However, there is a tool that can generate such boilerplate Isar code automatically. It is called Sketch-and-Explore and you can find it in the directory HOL/ex of the main library of Isabelle/HOL. In this case, all you need to do is to type sketch(rule iffI) and the assume/show paradigm will be generated automatically for every subgoal.
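To give an idea of the shape of the generated (or hand-written) text, for a goal of the form P ⟷ Q it looks roughly like this, with the sorrys to be replaced by real proofs:

lemma "P ⟷ Q"
proof (rule iffI)
  assume "P"
  show "Q" sorry
next
  assume "Q"
  show "P" sorry
qed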
References
Avigad J, Lewis RY, and van Doorn F. Logic and Proof.
Jech T. Set theory. 3rd ed. Heidelberg: Springer; 2006. (Pure and applied mathematics, a series of monographs and textbooks).
Kunčar O, Popescu A. From Types to Sets by Local Type Definition in Higher-Order Logic. Journal of Automated Reasoning. 2019;62(2):237–60.
Immler F, Zhan B. Smooth Manifolds and Types to Sets for Linear Algebra in Isabelle/HOL. In: 8th ACM SIGPLAN International Conference on Certified Programs and Proofs. New York: ACM; 2019. p. 65–77. (CPP 2019).
Nipkow T, Klein G. Concrete Semantics with Isabelle/HOL. Heidelberg: Springer-Verlag; 2017. (http://concrete-semantics.org/)
Nipkow T, Paulson LC, Wenzel M. A Proof Assistant for Higher-Order Logic. Heidelberg: Springer-Verlag; 2017.
I have been playing around with basic examples of proofs in Isabelle.
Consider the following simple proof:
lemma
fixes n::nat
shows "n*(n+1) = n^2 + n"
by simp
It seems to me that a powerful proof assistant like Isabelle should be able to prove this lemma without much guidance.
However, I was surprised to find out that Isabelle actually fails when applying the simp method here (I also tried other "generic" methods like simp_all, auto, force, and blast, but the result is the same).
If I replace the last line by the following, then it works out:
by (simp add: power2_eq_square)
My concern is that I feel like I shouldn't have had to tell the system about the specific rule power2_eq_square to complete this proof.
Playing around with similar trivial examples, I found that simp is able to prove
n*(n+2)=n*n+n*2
but fails with
n*(n+3)=n*n+n*3
The last example is proven
by (simp add: distrib_left)
It is a complete mystery to me why I need to specify distrib_left in that second example, but not in the first (why is that?).
I have given these examples not for their own sake, but mainly to illustrate my main question:
Is there a way to automate the verification of routine algebraic identities such as the above in Isabelle? If there isn't, then why not? What are the technical obstacles?
Daily proof work indeed often stumbles over »routine algebraic identities«, but after some practical experience one usually develops an intuition for how to solve such problems effectively. A pattern I have developed over the years, by example:
context semidom
begin
lemma "a * (b ^ 2 + c) + 2 = a * b * b + c * a + 2"
A typical explorative proof starts with
apply auto
Then associativity and commutativity are also considered
apply (auto simp add: ac_simps)
Then more algebraic normalization rules are applied
apply (auto simp add: algebra_simps)
The last gap is then easily filled by sledgehammer
apply (simp add: power2_eq_square)
After that, the proof can be compactified
by (simp add: algebra_simps power2_eq_square)
The lemma
lemma power2_eq_square: "a^2 = a * a"
is not a good rewrite rule in general, as it will easily blow up the size of terms. So it is expected that a term rewriting based automation like simp will not apply this without you telling it to.
What you want is some sort of proof search, and Isabelle provides that: After writing your lemma, you can invoke the sledgehammer tool, and it will readily and quickly find the proof for you:
Sledgehammering...
Proof found...
"z3": Try this: by (simp add: power2_eq_square) (1 ms)
"cvc4": Try this: by (simp add: power2_eq_square) (5 ms)
For an example lemma like this:
lemma someFuncLemma: "∀ (e::someType) . pre_someFunc 2 e"
which gives the following when using quickcheck:
Auto Quickcheck found a counterexample:
e = - 1
or when using Nitpick (which isn't really the main point here):
Nitpick found a counterexample:
Skolem constant:
e = - 1
How can I then use this counterexample to finish the proof?
As you can see, I'm not very familiar with Isabelle and POs.
Thank you for your help!
The presence of a counterexample usually indicates that you won't be able to prove your proposition, unless the counterexample is spurious or the underlying logic is inconsistent.
I'm assuming you want to prove there exists some e such that pre_someFunc 2 e is false. You would have to change your lemma to use exists instead of forall, and prefix your predicate with not:
lemma "∃e::someType. ¬(pre_someFunc 2 e)"
Then you can provide the counterexample using rule exI[where x=...], which instantiates the variable x in exI with a concrete term. You can look at the definition of exI and see how x is used by Ctrl-clicking it in Isabelle/jEdit.
A simple example:
lemma "∃n :: nat. ¬ odd n"
apply (rule exI[where x=2])
apply simp
done
A question is posed on the IsaUserList on how to prove this lemma:
lemma "dom (SOME b. dom b = A) = A"
As a first response, P.Lammich says that obtain needs to be used:
You have to show that there is such a beast b, i.e.,
proof -
obtain b where "dom b = A" ...
thus ?thesis
sledgehammer (*Should find a proof now, using the rules for SOME, probably SomeI*)
Here, I have one main question, one secondary question, and I wonder about some differences between what P.Lammich says to do, some things M.Eberl does, and the results that I got.
Q1: I get the warning Introduced fixed type variable(s): 'c in "b__" at my use of obtain, and at the use of the by statement that proves the obtain. Can I get rid of this warning?
Q2: Is there a command of three dots, ...? I assume it means, "Your proof goes here." However, it sometimes sounds like the writer is also saying, "...because, after all, the proof is really simple here." I also know that there is the command . for by this, and .. for by rule. Additionally, I entertain the idea that ... is commonly known to be some simple proof statement which I'm supposed to know, but don't.
The following source will show the warning. It could be I'm supposed to fix something. The source also shows how I had to help sledgehammer, which is that I had to put an exists statement in.
I leave in the error that's due to the schematic variable, in case anyone is interested in it.
(* I HELP SLEDGEHAMMER with an exists statement. I can delete the exists
statement after the `metis` proof is found.
The `?'c1` below causes an error, but `by` still proves the goal.
*)
lemma "dom (SOME b. dom b = A) = A"
proof-
have "? x. x = (SOME b. dom b = A)"
by(simp)
from this
obtain b where ob1: "dom b = A"
(*WARNING: Orange squiggly under `obtain`. Message: Introduced fixed type
variable(s): 'c in "b__".*)
by(metis (full_types) dom_const dom_restrict inf_top_left)
thus ?thesis
using[[show_types]]
(*Because of `show_types`, a schematic type variable `?'c1` will be part
of the proof command that `sledgehammer` provides in the output panel.*)
(*sledgehammer[minimize=smart,preplay_timeout=10,timeout=60,verbose=true,
isar_proofs=smart,provers="z3 spass remote_vampire"]*)
by(metis (lifting, full_types)
`!!thesis::bool.(!!b::'a => ?'c1 option. dom b = (A::'a set) ==> thesis)
==> thesis`
someI_ex)
(*ERROR: Illegal schematic type variable: ?'c1::type.
To get rid of the error, delete `?`, or use `ob1` as the fact.*)
qed
My Q1 and Q2 are related to what's above. As part of my wonderings, there is the issue of getting an error because of the schematic variable. I may report that as a bug-type issue.
In his IsaUserList response, M.Eberl says that he got the following sledgehammer proof for the obtain. He says the proof is slow, and it is. It's about 2 seconds for me.
by(metis (lifting, full_types)
dom_const dom_restrict inf_top.left_neutral someI_ex)
The proof that sledgehammer found for me above for thus ?thesis is only 4ms.
Answer to Q1
Because of M.Eberl's comment, I made a respectable effort to figure out how to get a witness without using obtain. In the process, I answered my main question.
I got rid of the warning about introducing 'c as a type variable by using b :: "'a => 'b option", instead of b, as shown here:
lemma "dom (SOME b. dom b = A) = A"
proof-
obtain b :: "'a => 'b option" where "dom (b) = A"
by(metis (full_types) dom_const dom_restrict inf_top_left)
thus ?thesis
by(metis (lifting, full_types) exE_some)
qed
Answer to Q2
(Update 140119) I finally found Isar syntax for ..., on page 6 of isar-ref.pdf.
term ... -- the argument of the last explicitly stated result (for infix application this is the right-hand side)
The string ... is not exactly a friendly search string. Finding the meaning was a result of starting to look through chapter 1. I now see that chapters 1, 2, and 6 of isar-ref.pdf are key chapters for getting some help on how to use Isar to do proofs. (End update.)
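For what it is worth, the place where ... shows up most often is calculational reasoning, where it stands for the right-hand side of the previous step; a small example:

lemma "(a::nat) + b + 0 = b + a"
proof -
  have "a + b + 0 = a + b" by simp
  also have "... = b + a" by (simp add: add.commute)
  finally show ?thesis .
qed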
Concerning an error due to using fix/assume as an alternative to obtain
Now I return to M.Eberl telling me I shouldn't use obtain, which happened to be beneficial. But it brings up the fact that it's a major effort to figure out how to use the language to make the PIDE happy. The last source I show below is another example of what a hassle it is to learn how to make the PIDE happy. To a large extent, it's just a matter of using examples to try to figure out the right combination of syntax and commands.
P.Lammich says to use obtain in his answer. I also looked up the use of obtain in prog-prove.pdf page 42, which discusses it in connection with the use of a witness.
I read a few other things, and I thought it was all telling me that obtain is crucial to fixing a variable or constant with certain properties.
Anyway, I used def to create a witness, so I learned something new:
declare[[show_sorts,show_brackets]]
lemma "dom (SOME b. dom b = A) = A"
proof-
def w == "(SOME b::('a => 'b option). dom b = A)"
hence "dom w = A"
by(metis (lifting, mono_tags) dom_const dom_restrict inf_top_left someI_ex)
print_facts
thus ?thesis
by(metis (lifting, full_types) dom_option_map exE_some)
qed
But, I try to use a fix/assume combination in place of def, where supposedly def is an abbreviation, and I get that mysterious and greatly infuriating message, "Failed to refine any pending goal", which makes me wonder why I want to use this language.
declare[[show_sorts,show_brackets]]
lemma "dom (SOME b. dom b = A) = A"
proof-
fix w assume w_def: "w == (SOME b::('a => 'b option). dom b = A)"
hence "dom w = A"
by(metis (lifting, mono_tags) dom_const dom_restrict inf_top_left someI_ex)
print_facts
thus ?thesis
oops
For the two proofs, when the cursor is at the line before print_facts, what I see in the output panel is exactly the same, except that the def proof shows proof (state): step 4 and the fix/assume proof shows proof (state): step 5. The facts at print_facts are also the same.
From searches, I know that "Failed to refine any pending goal" has been a source of great pain for many. In the past, I finally figured out the trick to get rid of it for what I was doing, but it doesn't make sense here, not that it made sense to me there either.
Update 140118_0054
L.Noschinski gives the subtle tip from IsaUserList 2012-11-13:
When you use "fix" or "def" to define a
variable, they either get just generalized (i.e. turned into schematics)
(fix) or replaced by their right hand side (definitions)
when a block is closed / a show is performed.
So for the fix/assume form of the proof, I put part of it inside braces, and for some reason, it exports the fact in the way that's needed:
lemma "dom (SOME b. dom b = A) = A"
proof-
{
fix w assume "w == (SOME b::('a => 'b option). dom b = A)"
hence "dom w = A"
by(metis (lifting, mono_tags) dom_const dom_restrict inf_top_left someI_ex)
}
thus ?thesis
by(metis (lifting, full_types) dom_option_map exE_some)
qed
I go ahead and throw in a let form of the proof. I wouldn't have known to use the abbreviation ?w (introduced with let) without having looked at M.Eberl's proofs.
lemma "dom (SOME b. dom b = A) = A"
proof-
let ?w = "(SOME b::('a => 'b option). dom b = A)"
have "dom ?w = A"
by(metis (lifting, mono_tags) dom_const dom_restrict inf_top_left someI_ex)
thus ?thesis
by(metis (lifting, full_types) dom_option_map exE_some)
qed