A question on proving a lemma in Isabelle

I have defined a lemma of the form "A ==> B ==> C" and tried nitpick on it. It reports a counterexample in which A = False, B = False, and C = False.
I just want to know how to state a lemma so that it says: when A and B hold, then C holds.
lemma "LTS_is_reachable (Δ (reg2nfa r1 v)) [] x y ⟹
y ∈ ℱ (reg2nfa r1 v) ⟹
LTS_is_reachable (Δ (reg2nfa r1 v)) [] y x"
nitpick
Nitpicking formula...
Nitpick found a potentially spurious counterexample for card 'a = 1:
Free variables:
r1 = LChr a⇩1?
v = {a⇩1}
x = [a⇩1]
y = []
lemma "LTS_is_reachable (Δ (reg2nfa (LChr a1) {a1})) [] [a1] [] == False"
by auto
lemma "[] \<in> ℱ (reg2nfa (LChr a1) {a1}) == False"
by auto
lemma "LTS_is_reachable (Δ (reg2nfa (LChr a⇩1) {a⇩1})) [] [] [a⇩1] == False"
by auto

When nitpick says its result is "potentially spurious", that means it might be wrong, and so it is here: in the reported assignment both premises are False, so the implication holds vacuously and the "counterexample" does not actually refute the lemma. Note also that a statement of the form A ⟹ B ⟹ C already means exactly "when A and B hold, then C holds".
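If you prefer to make that reading explicit, the same lemma can be stated with assumes/shows; a sketch reusing the constants from the question:
lemma
  assumes "LTS_is_reachable (Δ (reg2nfa r1 v)) [] x y"
    and "y ∈ ℱ (reg2nfa r1 v)"
  shows "LTS_is_reachable (Δ (reg2nfa r1 v)) [] y x"
This is only a restatement; it does not change what nitpick or a prover sees, since ⟹ already expresses that conditional reading.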

Related

Isabelle Failed to refine any pending goal during instantiation

datatype 'a list = Cons 'a "'a list" | Nil
instantiation list :: (order) order
begin
fun less_eq_list :: "'a list ⇒ 'a list ⇒ bool" where
"less_eq_list Nil Nil = True" |
"less_eq_list (Cons _ _) Nil = True" |
"less_eq_list Nil (Cons _ _) = False" |
"less_eq_list (Cons _ a) (Cons _ b) = less_eq_list a b"
instance
proof
fix x y:: "'a list"
show "x ≤ x"
apply(induct_tac x)
apply(auto)
done
(* at this point the state is
show x ≤ x
Successful attempt to solve goal by exported rule:
?x2 ≤ ?x2
proof (state)
this:
x ≤ x
goal (3 subgoals):
1. ⋀x y. (x < y) = (x ≤ y ∧ ¬ y ≤ x)
2. ⋀x y z. x ≤ y ⟹ y ≤ z ⟹ x ≤ z
3. ⋀x y. x ≤ y ⟹ y ≤ x ⟹ x = y
*)
show "(x < y) = (x ≤ y ∧ ¬ y ≤ x)"
(* I get an error here
Failed to refine any pending goal
Local statement fails to refine any pending goal
Failed attempt to solve goal by exported rule:
(?x2 < ?y2) = (?x2 ≤ ?y2 ∧ ¬ ?y2 ≤ ?x2)
*)
qed
end
What is wrong with this? The proof of "x ≤ x" worked like a charm. Somehow "(x < y) = (x ≤ y ∧ ¬ y ≤ x)" doesn't match any subgoal.
Class order is a subclass of preorder, which in turn is a subclass of ord. Class ord requires you to define both less_eq (≤) and less (<). In your code, you have correctly defined less_eq_list but forgot to define less_list, and that's why you got an error when trying to prove (x < y) = (x ≤ y ∧ ¬ y ≤ x).
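A minimal sketch of the missing piece (untested), to be added inside the instantiation block before instance:
definition less_list :: "'a list ⇒ 'a list ⇒ bool" where
  (* the usual strict order derived from ≤ *)
  "less_list xs ys = (less_eq_list xs ys ∧ ¬ less_eq_list ys xs)"
With that definition in place, the failing subgoal should be provable by, e.g., show "(x < y) = (x ≤ y ∧ ¬ y ≤ x)" by (simp add: less_list_def).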

How to prove elimination rules using Isar?

Here is a simple theory:
datatype t1 = A | B | C
datatype t2 = D | E t1 | F | G
inductive R where
"R A B"
| "R B C"
inductive_cases [elim]: "R x B" "R x A" "R x C"
inductive S where
"S D (E _)"
| "R x y ⟹ S (E x) (E y)"
inductive_cases [elim]: "S x D" "S x (E y)"
I can prove lemma elim using two helper lemmas:
lemma tranclp_S_x_E:
"S⇧+⇧+ x (E y) ⟹ x = D ∨ (∃z. x = E z)"
by (induct rule: converse_tranclp_induct; auto)
(* Let's assume that it's proven *)
lemma reflect_tranclp_E:
"S⇧+⇧+ (E x) (E y) ⟹ R⇧+⇧+ x y"
sorry
lemma elim:
"S⇧+⇧+ x (E y) ⟹
(x = D ⟹ P) ⟹ (⋀z. x = E z ⟹ R⇧+⇧+ z y ⟹ P) ⟹ P"
using reflect_tranclp_E tranclp_S_x_E by blast
I need to prove elim using Isar:
lemma elim:
assumes "S⇧+⇧+ x (E y)"
shows "(x = D ⟹ P) ⟹ (⋀z. x = E z ⟹ R⇧+⇧+ z y ⟹ P) ⟹ P"
proof -
assume "S⇧+⇧+ x (E y)"
then obtain z where "x = D ∨ x = E z"
by (induct rule: converse_tranclp_induct; auto)
also have "S⇧+⇧+ (E z) (E y) ⟹ R⇧+⇧+ z y"
sorry
finally show ?thesis
But I get the following errors:
No matching trans rules for calculation:
x = D ∨ x = E z
S⇧+⇧+ (E z) (E y) ⟹ R⇧+⇧+ z y
Failed to refine any pending goal
Local statement fails to refine any pending goal
Failed attempt to solve goal by exported rule:
(S⇧+⇧+ x (E y)) ⟹ P
How to fix them?
I guess that this lemma could have a simpler proof. But I need to prove it in two steps:
Show the possible values of x
Show that E reflects transitive closure
I think also that this lemma could be proven by cases on x. But my real data types have too many cases. So, it's not a preferred solution.
This variant seems to work:
lemma elim:
assumes "S⇧+⇧+ x (E y)"
and "x = D ⟹ P"
and "⋀z. x = E z ⟹ R⇧+⇧+ z y ⟹ P"
shows "P"
proof -
have "S⇧+⇧+ x (E y)" by (simp add: assms(1))
then obtain z where "x = D ∨ x = E z"
by (induct rule: converse_tranclp_induct; auto)
moreover
have "S⇧+⇧+ (E z) (E y) ⟹ R⇧+⇧+ z y"
sorry
ultimately show ?thesis
using assms by auto
qed
Assumptions should be separated from the goal.
As a first statement I should use have instead of assume: it's not a new assumption, just the existing one.
Instead of finally I should use ultimately. It seems that the latter has simpler application logic.
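The difference matters here because also/finally try to compose successive facts with transitivity ("trans") rules, which is what the "No matching trans rules for calculation" message complains about, whereas moreover/ultimately merely collect the facts and hand them to the final step. A throwaway illustration of the two styles (not part of the lemma above):
lemma "(a::nat) < a + 2"
proof -
  have "a < a + 1" by simp
  also have "a + 1 < a + 2" by simp  (* chained with the previous fact via a trans rule *)
  finally show ?thesis .
qed
lemma "(0::nat) ≤ 1 ∧ (1::nat) ≤ 2"
proof -
  have "(0::nat) ≤ 1" by simp
  moreover have "(1::nat) ≤ 2" by simp  (* just collected, no chaining *)
  ultimately show ?thesis by simp
qed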

Proving the correctness of an algorithm to partition lists in Isabelle

I am trying to prove correct an algorithm that splits a list of integers into sublists of equal sum in linear time. Here you can see the algorithm I have chosen to do so.
I would like to get some feedback regarding:
The convenience of my definition for the splitting function.
The "induction" hypothesis to use in my situation.
Please, bear in mind that up to now I have only worked with apply-scripts and not with Isar proofs.
Here is a preliminary implementation of the algorithm and the correctness definition:
definition
"ex_balanced_sum xs = (∃ ys zs. sum_list ys = sum_list zs ∧
xs = ys @ zs ∧ ys ≠ [] ∧ zs ≠ [])"
fun check_list :: "int list ⇒ int ⇒ int ⇒ bool" where
"check_list [] n acc = False" |
"check_list (x#xs) n acc = (if n = acc then True else (check_list xs (n-x) (acc+x)))"
fun linear_split :: "int list ⇒ bool" where
"linear_split [] = False" |
"linear_split [x] = False" |
"linear_split (x # xs) = check_list xs (sum_list xs) x"
The theorem to prove is as follows:
lemma linear_correct: "linear_split xs ⟷ ex_balanced_sum xs"
If I reason for instance for the first implication as:
lemma linear_correct_1: "linear_split xs ⟹ ex_balanced_sum xs"
apply(induction xs rule: linear_split.induct)
Then I get a list of subgoals that I think are not appropriate:
linear_split [] ⟹ ex_balanced_sum []
⋀x. linear_split [x] ⟹ ex_balanced_sum [x]
⋀x v va. linear_split (x # v # va) ⟹ ex_balanced_sum (x # v # va)
In particular, these subgoals don't have an induction hypothesis! (Am I right?) I tried to perform a different induction by just writing apply(induction xs), but then the goals look like:
linear_split [] ⟹ ex_balanced_sum []
⋀a xs. (linear_split xs ⟹ ex_balanced_sum xs) ⟹ linear_split (a # xs) ⟹ ex_balanced_sum (a # xs)
Here the hypothesis is also not an induction hypothesis since it is assuming an implication.
So, what is the best way to define this function to get a nice induction hypothesis?
Edit (a one-function version)
fun check :: "int list ⇒ int ⇒ int ⇒ bool" where
"check [] n acc = False" |
"check [x] n acc = False" |
"check (x # y # xs) n acc = (if n-x = acc+x then True else check (y # xs) (n-x) (acc+x))"
definition "linear_split xs = check xs (sum_list xs) 0"
Background
I was able to prove the theorem linear_correct for a function (splitl) that is very similar to the function check in the statement of the question. Unfortunately, I would prefer not to make any attempts to convert the proof into an apply script.
The proof below is the first proof that came to my mind after I started investigating the question. Thus, there may exist better proofs.
Proof Outline
The proof is based on induction on the length of the list. In particular, assume that
splitl xs (sum_list xs) 0 ⟹ ex_balanced_sum xs
holds for all lists of length less than l. If l = 1, then the result is easy to show. Assume that l ≥ 2. Then the list can be expressed in the form x#v#xs. In this case, if it is possible to split the list using splitl, then it can be shown (splitl_reduce) that either
"splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0" (1)
or
"x = sum_list (v#xs)" (2).
Thus, the proof proceeds by cases on (1) and (2). For (1), the length of the list (x + v)#xs is l-1. Hence, by the induction hypothesis, ex_balanced_sum ((x + v)#xs) holds. Therefore, by the definition of ex_balanced_sum, also ex_balanced_sum (x#v#xs). For (2), it can easily be seen that the list can be expressed as [x] @ (v#xs) and, in this case, given (2), it satisfies the conditions of ex_balanced_sum by definition.
The proof for the other direction is similar and based on the converse of the lemma associated with (1) and (2) above: if "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0" or "x = sum_list (v#xs)", then "splitl (x#v#xs) (sum_list (x#v#xs)) 0".
theory so_ptcoaatplii
imports Complex_Main
begin
definition
"ex_balanced_sum xs =
(∃ ys zs. sum_list ys = sum_list zs ∧ xs = ys @ zs ∧ ys ≠ [] ∧ zs ≠ [])"
fun splitl :: "int list ⇒ int ⇒ int ⇒ bool" where
"splitl [] s1 s2 = False" |
"splitl [x] s1 s2 = False" |
"splitl (x # xs) s1 s2 = ((s1 - x = s2 + x) ∨ splitl xs (s1 - x) (s2 + x))"
lemma splitl_reduce:
assumes "splitl (x#v#xs) (sum_list (x#v#xs)) 0"
shows "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0 ∨ x = sum_list (v#xs)"
proof -
from assms have prem_cases:
"((x = sum_list (v#xs)) ∨ splitl (v#xs) (sum_list (v#xs)) x)" by auto
{
assume "splitl (v#xs) (sum_list (v#xs)) x"
then have "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0"
proof(induction xs arbitrary: x v)
case Nil then show ?case by simp
next
case (Cons a xs) then show ?case by simp
qed
}
with prem_cases show ?thesis by auto
qed
(*Sledgehammered*)
lemma splitl_expand:
assumes "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0 ∨ x = sum_list (v#xs)"
shows "splitl (x#v#xs) (sum_list (x#v#xs)) 0"
by (smt assms list.inject splitl.elims(2) splitl.simps(3) sum_list.Cons)
lemma splitl_to_sum: "splitl xs (sum_list xs) 0 ⟹ ex_balanced_sum xs"
proof(induction xs rule: length_induct)
case (1 xs) show ?case
proof-
obtain x v xst where x_xst: "xs = x#v#xst"
by (meson "1.prems" splitl.elims(2))
have main_cases:
"splitl ((x + v)#xst) (sum_list ((x + v)#xst)) 0 ∨ x = sum_list (v#xst)"
by (rule splitl_reduce, insert x_xst "1.prems", rule subst)
{
assume "splitl ((x + v)#xst) (sum_list ((x + v)#xst)) 0"
with "1.IH" x_xst have "ex_balanced_sum ((x + v)#xst)" by simp
then obtain yst zst where
yst_zst: "(x + v)#xst = yst#zst"
and sum_yst_eq_sum_zst: "sum_list yst = sum_list zst"
and yst_ne: "yst ≠ []"
and zst_ne: "zst ≠ []"
unfolding ex_balanced_sum_def by auto
then obtain ystt where ystt: "yst = (x + v)#ystt"
by (metis append_eq_Cons_conv)
with sum_yst_eq_sum_zst have "sum_list (x#v#ystt) = sum_list zst" by simp
moreover have "xs = (x#v#ystt)#zst" using x_xst yst_zst ystt by auto
moreover have "(x#v#ystt) ≠ []" by simp
moreover with zst_ne have "zst ≠ []" by simp
ultimately have "ex_balanced_sum xs" unfolding ex_balanced_sum_def by blast
}
note prem = this
{
assume "x = sum_list (v#xst)"
then have "sum_list [x] = sum_list (v#xst)" by auto
moreover with x_xst have "xs = [x] @ (v#xst)" by auto
ultimately have "ex_balanced_sum xs" using ex_balanced_sum_def by blast
}
with prem main_cases show ?thesis by blast
qed
qed
lemma sum_to_splitl: "ex_balanced_sum xs ⟹ splitl xs (sum_list xs) 0"
proof(induction xs rule: length_induct)
case (1 xs) show ?case
proof -
from "1.prems" ex_balanced_sum_def obtain ys zs where
ys_zs: "xs = ys#zs"
and sum_ys_eq_sum_zs: "sum_list ys = sum_list zs"
and ys_ne: "ys ≠ []"
and zs_ne: "zs ≠ []"
by blast
have prem_cases: "∃y v yst. ys = (y#v#yst) ∨ (∃y. ys = [y])"
by (metis remdups_adj.cases ys_ne)
{
assume "∃y. ys = [y]"
then have "splitl xs (sum_list xs) 0"
using splitl.elims(3) sum_ys_eq_sum_zs ys_zs zs_ne by fastforce
}
note prem = this
{
assume "∃y v yst. ys = (y#v#yst)"
then obtain y v yst where y_v_yst: "ys = (y#v#yst)" by auto
then have
"sum_list ((y + v)#yst) = sum_list zs ∧ ((y + v)#yst) ≠ [] ∧ zs ≠ []"
using sum_ys_eq_sum_zs zs_ne by auto
then have ebs_ypv: "ex_balanced_sum (((y + v)#yst) @ zs)"
using ex_balanced_sum_def by blast
have l_ypv: "length (((y + v)#yst)#zs) < length xs"
by (simp add: y_v_yst ys_zs)
from l_ypv ebs_ypv have
"splitl (((y + v)#yst)#zs) (sum_list (((y + v)#yst)#zs)) 0"
by (rule "1.IH"[THEN spec, rule_format])
with splitl_expand have splitl_ys_exp:
"splitl ((y#v#yst)#zs) (sum_list ((y#v#yst)#zs)) 0"
by (metis Cons_eq_appendI)
from ys_zs have "splitl xs (sum_list xs) 0"
by (rule ssubst, insert y_v_yst splitl_ys_exp, simp)
}
with prem prem_cases show ?thesis by auto
qed
qed
lemma linear_correct: "ex_balanced_sum xs ⟷ splitl xs (sum_list xs) 0"
using splitl_to_sum sum_to_splitl by auto
end
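As a side note, the splitl above and the check from the edit in the question compute the same result, so linear_correct can be transferred back to the question's linear_split xs = check xs (sum_list xs) 0. A hedged, untested sketch of the bridging lemma (it assumes both function definitions are in scope, e.g. by adding the question's check to the theory above before end):
lemma check_eq_splitl: "check xs s1 s2 = splitl xs s1 s2"
  by (induction xs s1 s2 rule: check.induct) auto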

Basic Isabelle/Isar style (exercise 4.6)

I'm interested in using Isabelle/Isar for writing proofs which are both human-readable and machine checked, and I am looking to improve my style and streamline my proofs.
prog-prove has the following exercise:
Exercise 4.6. Define a recursive function elems :: 'a list ⇒ 'a set and prove x ∈ elems xs ⟹ ∃ ys zs. xs = ys @ x # zs ∧ x ∉ elems ys.
Mimicking something similar to what I would write with pen and paper, my solution is
fun elems :: "'a list ⇒ 'a set" where
"elems [] = {}" |
"elems (x # xs) = {x} ∪ elems xs"
fun takeUntil :: "('a ⇒ bool) ⇒ 'a list ⇒ 'a list" where
"takeUntil f [] = []" |
"takeUntil f (x # xs) = (case (f x) of False ⇒ x # takeUntil f xs | True ⇒ [])"
theorem "x ∈ elems xs ⟹ ∃ ys zs. xs = ys # x # zs ∧ x ∉ elems ys"
proof -
assume 1: "x ∈ elems xs"
let ?ys = "takeUntil (λ z. z = x) xs"
let ?zs = "drop (length ?ys + 1) xs"
have "xs = ?ys # x # ?zs ∧ x ∉ elems ?ys"
proof
have 2: "x ∉ elems ?ys"
proof (induction xs)
case Nil
thus ?case by simp
next
case (Cons a xs)
thus ?case
proof -
{
assume "a = x"
hence "takeUntil (λz. z = x) (a # xs) = []" by simp
hence A: ?thesis by simp
}
note eq = this
{
assume "a ≠ x"
hence "takeUntil (λz. z = x) (a # xs) = a # takeUntil (λz. z = x) xs" by simp
hence ?thesis using Cons.IH by auto
}
note noteq = this
have "a = x ∨ a ≠ x" by simp
thus ?thesis using eq noteq by blast
qed
qed
from 1 have "xs = ?ys # x # ?zs"
proof (induction xs)
case Nil
hence False by simp
thus ?case by simp
next
case (Cons a xs)
{
assume 1: "a = x"
hence 2: "takeUntil (λz. z = x) (a # xs) = []" by simp
hence "length (takeUntil (λz. z = x) (a # xs)) + 1 = 1" by simp
hence 3: "drop (length (takeUntil (λz. z = x) (a # xs)) + 1) (a # xs) = xs" by simp
from 1 2 3 have ?case by simp
}
note eq = this
{
assume 1: "a ≠ x"
with Cons.prems have "x ∈ elems xs" by simp
with Cons.IH
have IH: "xs = takeUntil (λz. z = x) xs # x # drop (length (takeUntil (λz. z = x) xs) + 1) xs" by simp
from 1 have 2: "takeUntil (λz. z = x) (a # xs) = a # takeUntil (λz. z = x) (xs)" by simp
from 1 have "drop (length (takeUntil (λz. z = x) (a # xs)) + 1) (a # xs) = drop (length (takeUntil (λz. z = x) xs) + 1) xs" by simp
hence ?case using IH 2 by simp
}
note noteq = this
have "a = x ∨ a ≠ x" by simp
thus ?case using eq noteq by blast
qed
with 2 have 3: ?thesis by blast
thus "xs = takeUntil (λz. z = x) xs # x # drop (length (takeUntil (λz. z = x) xs) + 1) xs" by simp
from 3 show "x ∉ elems (takeUntil (λz. z = x) xs)" by simp
qed
thus ?thesis by blast
qed
but it seems rather long. In particular, I think invoking the law of excluded middle here is cumbersome, and I feel like there ought to be some convenient schematic variable like ?goal which can refer to the current goal or something.
How can I make this proof shorter without sacrificing clarity?
Not really an answer to your specific question, but I would nonetheless like to point out that a more concise proof can still be comprehensible.
lemma "x ∈ elems xs ⟹ ∃ ys zs. xs = ys # x # zs ∧ x ∉ elems ys"
proof (induction)
case (Cons l ls)
thus ?case
proof (cases "x ≠ l")
case True
hence "∃ys zs. ls = ys # x # zs ∧ x ∉ elems ys" using Cons by simp
thus ?thesis using ‹x ≠ l› Cons_eq_appendI by fastforce
qed (fastforce)
qed (simp)
Here's another shorter proof than your own:
fun elems :: ‹'a list ⇒ 'a set› where
‹elems [] = {}› |
‹elems (x#xs) = {x} ∪ elems xs›
lemma elems_prefix_suffix:
assumes ‹x ∈ elems xs›
shows ‹∃pre suf. xs = pre @ [x] @ suf ∧ x ∉ elems pre›
using assms proof(induction xs)
fix y ys
assume *: ‹x ∈ elems (y#ys)›
and IH: ‹x ∈ elems ys ⟹ ∃pre suf. ys = pre @ [x] @ suf ∧ x ∉ elems pre›
{
assume ‹x = y›
from this have ‹∃pre suf. y#ys = pre @ [x] @ suf ∧ x ∉ elems pre›
using * by fastforce
}
note L = this
{
assume ‹x ≠ y› and ‹x ∈ elems ys›
moreover from this obtain pre and suf where ‹ys = pre @ [x] @ suf› and ‹x ∉ elems pre›
using IH by auto
moreover have ‹y#ys = (y#pre) @ [x] @ suf› and ‹x ∉ elems (y#pre)›
by(simp add: calculation)+
ultimately have ‹∃pre suf. y#ys = pre @ [x] @ suf ∧ x ∉ elems pre›
by(metis append_Cons)
}
from this and L show ‹∃pre suf. y#ys = pre @ [x] @ suf ∧ x ∉ elems pre›
using * by auto
qed auto ― ‹Base case trivial›
I've used a few features of Isar to compress the proof:
Blocks within the braces {...} allow you to perform hypothetical reasoning.
Facts can be explicitly named using note.
The moreover keyword starts a calculation that implicitly "carries along" facts as they are established. The calculation "comes to a head" with the ultimately keyword. This style can significantly reduce the number of explicitly named facts that you need to introduce over the course of a proof.
The qed auto completes the proof by applying auto to all remaining subgoals. A comment notes that the subgoal remaining is the base case of the induction, which is trivial.

How to define a linear ordering on a type?

I'm trying to define a conjunction function for 4-valued logic (false, true, null, and error). In my case the conjunction is equivalent to the min function on the linear order false < error < null < true.
datatype bool4 = JF | JT | BN | BE
instantiation bool4 :: linear_order
begin
fun leq_bool4 :: "bool4 ⇒ bool4 ⇒ bool" where
"leq_bool4 JF b = True"
| "leq_bool4 BE b = (b = BE ∨ b = BN ∨ b = JT)"
| "leq_bool4 BN b = (b = BN ∨ b = JT)"
| "leq_bool4 JT b = (b = JT)"
instance proof
fix x y z :: bool4
show "x ⊑ x"
by (induct x) simp_all
show "x ⊑ y ⟹ y ⊑ z ⟹ x ⊑ z"
by (induct x; induct y) simp_all
show "x ⊑ y ⟹ y ⊑ x ⟹ x = y"
by (induct x; induct y) simp_all
show "x ⊑ y ∨ y ⊑ x"
by (induct x; induct y) simp_all
qed
end
definition and4 :: "bool4 ⇒ bool4 ⇒ bool4" where
"and4 a b ≡ minimum a b"
I guess there must be an easier way to define a linear order in Isabelle/HOL. Could you suggest a simplification of the theory?
You can use the "Datatype_Order_Generator" AFP entry.
Then it's as simple as importing "$AFP/Datatype_Order_Generator/Order_Generator" and declaring derive linorder "bool4". Note that the constructors must be declared in the order you want them when defining your datatype.
Details on how to download and use the AFP locally can be found here.
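Put together, the suggestion amounts to something like the following sketch (untested; the theory name Bool4 is arbitrary, the import follows the path given above, and newer Isabelle/AFP setups may need a session-qualified import such as "Datatype_Order_Generator.Order_Generator" or the successor entry Deriving; the constructors are reordered so that JF < BE < BN < JT matches false < error < null < true):
theory Bool4
  imports "$AFP/Datatype_Order_Generator/Order_Generator"
begin
(* the constructor order determines the generated linear order *)
datatype bool4 = JF | BE | BN | JT
derive linorder "bool4"
(* conjunction is the minimum in this order *)
definition and4 :: "bool4 ⇒ bool4 ⇒ bool4" where
  "and4 a b ≡ min a b"
end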

Resources