Algebraic simplifications in Isabelle

I have been playing around with basic examples of proofs in Isabelle.
Consider the following simple proof:
lemma
fixes n::nat
shows "n*(n+1) = n^2 + n"
by simp
It seems to me that a powerful proof assistant like Isabelle should be able to prove this lemma without much guidance.
However, I was surprised to find that Isabelle actually fails to prove it with simp here (I also tried other "generic" methods like simp_all, auto, force, and blast, but the result is the same).
If I replace the last line by the following, then it works out:
by (simp add: power2_eq_square)
My concern is that I feel like I shouldn't have had to tell the system about the specific rule power2_eq_square to complete this proof.
Playing around with similar trivial examples, I found that simp is able to prove
n*(n+2)=n*n+n*2
but fails with
n*(n+3)=n*n+n*3
The last example is proven
by (simp add: distrib_left)
It is a complete mystery to me why I need to specify distrib_left in the second example but not in the first.
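For reference, here are the two examples written out as self-contained lemmas (assuming n :: nat, as in the lemma above); these should reproduce the behaviour described:

lemma
fixes n::nat
shows "n*(n+2) = n*n + n*2"
by simp

lemma
fixes n::nat
shows "n*(n+3) = n*n + n*3"
by (simp add: distrib_left)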
I have given these examples not for their own sake, but mainly to illustrate my main question:
Is there a way to automate the verification of routine algebraic identities such as the above in Isabelle? If there isn't, then why not? What are the technical obstacles?

Daily proof work indeed often stumbles over »routine algebraic identities«, but with some practical experience one usually develops an intuition for how to solve such problems effectively. Here is a pattern I have developed over the years, shown by example:
context semidom
begin
lemma "a * (b ^ 2 + c) + 2 = a * b * b + c * a + 2"
A typical explorative proof starts with
apply auto
Then associativity and commutativity are also considered
apply (auto simp add: ac_simps)
Then more algebraic normalization rules are applied
apply (auto simp add: algebra_simps)
The last gap is then easily filled by sledgehammer
apply (simp add: power2_eq_square)
After that, the proof can be compactified
by (simp add: algebra_simps power2_eq_square)
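Put together, with the context closed again, the finished snippet reads:

context semidom
begin

lemma "a * (b ^ 2 + c) + 2 = a * b * b + c * a + 2"
by (simp add: algebra_simps power2_eq_square)

end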

The lemma
lemma power2_eq_square: "a^2 = a * a"
is not a good rewrite rule in general, as it will easily blow up the size of terms. So it is expected that term-rewriting-based automation like simp will not apply it without being told to.
What you want is some sort of proof search, and Isabelle provides that: After writing your lemma, you can invoke the sledgehammer tool, and it will readily and quickly find the proof for you:
Sledgehammering...
Proof found...
"z3": Try this: by (simp add: power2_eq_square) (1 ms)
"cvc4": Try this: by (simp add: power2_eq_square) (5 ms)

Related

cases vs case_tac/induct vs induct_tac

I've been working with Isabelle/HOL for a few months now, but I've been unable to figure out the exact intention of the use of _tac.
Specifically, I'm talking about cases vs case_tac and induct vs induct_tac (although it would be nice to know the meaning of tac in general, since I'm also using other methods such as cut_tac).
I've noticed I can't use cases or induct using apply with ⋀-bound variables, but I can if it's a structured proof. Why?
An example of this:
lemma "¬(∀x. ¬(P x)) ⟹ ∃x. P x"
apply (rule ccontr)
apply (erule notE)
apply (rule allI)
apply (case_tac "P x")
apply (erule notE)
apply (erule exI)
apply assumption
done
On the other hand, another difference I've noticed between induct and induct_tac is that I can use double induction with the latter, but not with the former. Again, I'm clueless why.
Thanks in advance.
*_tac are built-in tactics used in apply-scripts. In particular, case_tac and induct_tac have basically been superseded by the cases and induction proof methods in Isabelle/Isar. As you mentioned, case_tac and induct_tac can handle ⋀-bound variables. However, this is quite fragile, since the names of such variables are often generated automatically and may change when Isabelle changes (of course, you could use rename_tac to choose fixed names; a sketch follows the structured proof below). That is one of the reasons why structured proof methods are nowadays preferred to unstructured tactic scripts.
Now, back to your example: in order to be able to use cases, you can introduce a structured block as follows:
lemma "¬(∀x. ¬(P x)) ⟹ ∃x. P x"
apply (rule ccontr)
apply (erule notE)
proof (intro allI)
fix x
assume "∄x. P x"
then show "¬ P x"
apply (cases "P x")
apply (erule notE)
apply (erule exI)
apply assumption
done
qed
As you can see, structured proofs are typically verbose but much more readable than linear apply-scripts.
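For completeness, the rename_tac workaround mentioned above would look roughly like this in the original apply-script (a sketch; y is just an arbitrary fixed name chosen for the bound variable):

lemma "¬(∀x. ¬(P x)) ⟹ ∃x. P x"
apply (rule ccontr)
apply (erule notE)
apply (rule allI)
apply (rename_tac y)
apply (case_tac "P y")
apply (erule notE)
apply (erule exI)
apply assumption
done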
If you're still curious about the "double-induction" issue, an example would be very welcome. Finally, if you want to learn more about structured proofs using the Isabelle/Isar language environment, I'd strongly suggest reading the Isabelle/HOL tutorial and The Isabelle/Isar Reference Manual for more detailed information.

Fragile rule application in Isabelle

I was playing around with an example from the Isabelle/HOL tutorial to get a better understanding on the correspondence between Isar and Tactics proofs.
This is a version which works:
lemma rtrancl_converseD: "(x,y) ∈ (r^-1)^* ⟹ (y,x) ∈ r^*"
proof (induct y rule: rtrancl_induct)
case base
then show ?case ..
next case (step y z)
then have "(z, y) ∈ r" using rtrancl_converseD by simp
with `(y,x)∈ r^*` show "(z,x) ∈ r^*" using [[unify_trace_failure]]
apply (subgoal_tac "1=(1::nat)")
apply (rule converse_rtrancl_into_rtrancl)
apply simp_all
done
qed
I want to instantiate converse_rtrancl_into_rtrancl, which states (?a, ?b) ∈ ?r ⟹ (?b, ?c) ∈ ?r^* ⟹ (?a, ?c) ∈ ?r^*.
But without the seemingly nonsensical apply (subgoal_tac "1=(1::nat)") line this errors with
Clash: r =/= Transitive_Closure.rtrancl
Failed to apply proof method⌂:
using this:
(y, x) ∈ r^*
(z, y) ∈ r
goal (1 subgoal):
1. (z, x) ∈ r^*
If I fully instantiate the rule apply (rule converse_rtrancl_into_rtrancl[of z y r x]) this becomes Clash: z__ =/= ya__.
This leaves me with three questions: Why does this specific case break? How can I fix it? And how can I figure out what went wrong in such cases, given that I can't really understand what the unify_trace_failure message is trying to tell me?
rule-tactics are usually sensitive to the order of premises. The order of the premises in converse_rtrancl_into_rtrancl does not match the order in your proof state. Switching the order of premises in the proof state using rotate_tac will make them match the rule, so that you can apply it directly with fact like this:
... show "(z,x) ∈ r^*"
apply (rotate_tac)
apply (fact converse_rtrancl_into_rtrancl)
done
Or, if you want to include some kind of rule tactic, this would look like this:
apply (rotate_tac)
apply (erule converse_rtrancl_into_rtrancl)
apply (assumption)
(I personally don't use apply scripts ever in my everyday work. So apply-style gurus might know more elegant ways of handling this kind of situation. ;) )
Regarding your 1=(1::nat) / simp_all fix:
The whole goal can directly be solved by simp_all. So, attempts at adding stuff like 1=1 probably did not really tell you a lot about how much the other methods contributed to solving the proof.
However, the additional assumption seems to actually help Isabelle match converse_rtrancl_into_rtrancl correctly. (Don't ask me why!) So, one could indeed circumvent the problem by adding this spurious assumption and then eliminating it with refl again like:
apply (subgoal_tac "1=(1::nat)")
apply (erule converse_rtrancl_into_rtrancl)
apply (assumption)
apply (rule refl)
This does not look particularly elegant, of course.
The [[unify_trace_failure]] probably only really helps if one is familiar with the internal workings of Nipkow's higher-order unification algorithm. (I'm not.) I think the hint for the future here would really be that one must look closely at the order of premises for some tactics (rather than at the unifier debug output).
I found an explanation in the Isar Reference Manual, section 6.4.3.
The with b1..bn command is equivalent to from b1..bn and this, i.e. it enters proof chaining mode, which hands these facts over to the following proof method.
Basic proof methods (such as rule) expect multiple facts to be given in their proper order, corresponding to a prefix of the premises of the rule involved. Note that positions may be easily skipped using something like from _ and a and b, for example. This involves the trivial rule PROP ψ ⟹ PROP ψ, which is bound in Isabelle/Pure as "_" (underscore).
Automated methods (such as simp or auto) just insert any given facts before their usual operation. Depending on the kind of procedure involved, the order of facts is less significant here.
Given the information about the 'with' translation and that rule expects chained facts in order, we could try to flip the chained facts. And indeed this works:
from this and `(y,x)∈ r^*` show "(z,x) ∈ r^*"
by (rule converse_rtrancl_into_rtrancl)
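Spliced back into the original proof, the step case would then read along these lines (only the chaining order changes relative to the version in the question):

next case (step y z)
then have "(z, y) ∈ r" using rtrancl_converseD by simp
from this and `(y,x)∈ r^*` show "(z,x) ∈ r^*"
by (rule converse_rtrancl_into_rtrancl)
qed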
I think "6.4.3 Fundamental methods and attributes" is also relevant because it describes how the basic methods interact with incoming facts. Notably, the '-' noop which is sometimes used when starting proofs turns forward chaining into assumptions on the goal.
with `(y,x)∈ r^*` show "(z,x) ∈ r^*"
apply -
apply (rule converse_rtrancl_into_rtrancl; assumption)
done
This works because the first apply consumes all chained facts, so the second apply is pure backward chaining. This is also why the subgoal_tac or rotate_tac workarounds succeeded, but only if they are in separate apply commands.

Local assumptions in "state" mode

Frequently, when proving a statement in "prove" mode, I find myself in need of some intermediate statements that are not yet stated nor proved. To state them, I usually make use of the subgoal command, followed by proof - to switch to "state" mode. In the process, however, all of the local assumptions are removed. A typical example could look like this:
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
apply(auto)
subgoal
proof-
have "0<n" sorry (* here I would like to refer to the assumption from the subgoal *)
then show ?thesis sorry
qed
subgoal sorry
done
I am aware that I could state the assumptions explicitly using assume. However, this quickly becomes rather tedious when multiple assumptions are involved. Is there an easier way to simply refer to all of the assumptions? Alternatively, is there a good way to implement statements with short proofs directly in "prove" mode?
There is the syntax subgoal premises prems to bind the premises of the subgoal to the name prems (or any other name – but prems is a sensible default):
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
apply(auto)
subgoal premises prems
proof -
thm prems
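From there you can refer to the premises via prems, which is exactly what the question asked for. A sketch of how the block could continue (the exact form of the premise depends on what auto leaves behind, and the final step is left as sorry since the actual argument is beside the point here):

from prems have "0 < n" by simp
then show ?thesis sorry
qed
subgoal sorry
done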
There is also a method called goal_cases that automatically gives names to all the current subgoals – I find it very useful. If subgoal premises did not exist, you could do this instead:
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
apply(auto)
subgoal
proof goal_cases
case 1
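From there the case is finished in the usual way, e.g. (again with sorry as a placeholder):

then show ?case sorry
qed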
By the way, looking at your example: it is considered a bad idea to do anything after auto that depends on the exact form of the proof state, such as metis calls or Isar proofs. auto is fairly brutal and might behave differently in the next Isabelle release, which would break such proofs. I recommend doing a nice structured Isar proof here.
Also note that your theorem is a direct consequence of power_strict_mono and power_less_imp_less_base and can be proven in a single line:
lemma "0 < n ⟷ ((2::nat)^n < 3^n)"
by (auto intro: Nat.gr0I power_strict_mono)

How to abort a proof in Isabelle?

I was wondering if there is a way to abort a proof in Isabelle/jEdit?
I searched for commands such as "Reset", "Abort" but couldn't find it.
I know there is Sorry, but I am not sure whether, when one uses Sorry, the theorem at hand is assumed to be true or abandoned. Also, Sorry does not seem to work in apply..done mode.
Currently, I comment out the theorems that I can't prove. But it requires a lot of typing (four characters each in (* *)) to comment or uncomment something, which is kind of cumbersome.
So is there a standard/universal way to abort a proof in Isabelle?
First, the command is sorry (not Sorry), and it does work in apply style:
lemma
‹False ∧ True›
apply (rule conjI)
apply auto
sorry
About the actual question, oops aborts proofs, whether in apply style or not:
lemma
‹False ∧ True›
proof -
have True
by auto
oops
An oops-ed proof cannot be referenced later.
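For instance, the apply-style attempt from above can equally be abandoned:

lemma
‹False ∧ True›
apply (rule conjI)
apply auto
oops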

Why does simp "fail to apply initial proof method" where blast succeeds with the same facts?

I have modelled this calculus in Isabelle as an exercise. Here's my code so far.
I use sledgehammer to prove simple theorems which usually suggests to use blast supplemented with a subset of the rules of the calculus, e.g.:
by (blast intro: DH_bdiam2_f Fbox2_R l2)
That works fine and dandy, however if I try to use simp adding the same rules, e.g.:
by (simp only: DH_bdiam2_f Fbox2_R l2)
I get an error saying that none of the rules were applicable:
Failed to apply initial proof method⌂:
What is going on exactly? I was expecting simp to either terminate or time out, but certainly not this. What am I missing?
This is the error message you get when a tactic fails to produce a proof step. For simp, that is the case when no rule matches, i.e. when rewriting with DH_bdiam2_f and the other given rules is impossible. Looking at your code, these rules come from an inductive predicate. Usually, those are not suitable as rewriting (= simplification) rules. Scattered throughout Programming and Proving in Isabelle/HOL there are hints about what makes suitable simplification and introduction rules, along with explanations about which tactics are suitable for what.
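To see this failure mode in isolation, here is a generic illustration (unrelated to the calculus in the question; mult.commute and add.commute are standard HOL facts): simp fails outright when none of the supplied equations can rewrite the goal, whereas an applicable equation lets it go through:

lemma "(a::nat) + b = b + a"
(* apply (simp only: mult.commute) fails here with "Failed to apply initial proof method",
   because mult.commute cannot rewrite any subterm of the goal *)
by (simp only: add.commute) (* add.commute does apply and normalizes both sides to the same term *)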
