I started to work with Isabelle a few weeks ago and it's hard for me to do some proofs automatically. I've just used the rule "less_induct" to show a property on a list.
theorem cuenta_ordena_1:
"cuenta (ordena xs) y = cuenta xs y"
proof(induct "length xs" arbitrary: xs rule: less_induct)
case less
show ?case
proof(cases xs)
assume "xs=[]"
then show ?thesis by simp
next
fix a list
assume "xs=a#list"
have "length(menores a list)<Suc(length list)" by simp
also have "... = length (a#list)" by simp
also have "... = length (xs)" using `xs=a#list` by simp
finally have 1:"length (menores a list)< length xs" by simp
have "length(mayores a list)<Suc(length list)" by simp
also have "... = length (a#list)" by simp
also have "... = length (xs)" using `xs=a#list` by simp
finally have 2:"length (mayores a list)< length xs" by simp
have " cuenta (ordena xs) y= cuenta (ordena (a#list)) y" using `xs=a#list` by simp
also have "...= cuenta ((ordena (menores a list)) # (a # (ordena (mayores a list)))) y " by simp
also have "... = cuenta (ordena (menores a list)) y + cuenta (a # (ordena (mayores a list))) y " by (rule cuenta_append)
also have "... = cuenta (menores a list) y + cuenta (a # (ordena (mayores a list))) y " using less 1 by simp
finally have 3:"cuenta(ordena xs) y = cuenta (menores a list) y + cuenta (a # (ordena (mayores a list))) y" by simp
also have 4:"... = cuenta xs y"
proof(cases "a=y")
case False
then have "cuenta (menores a list) y + cuenta (a # (ordena (mayores a list))) y
= cuenta (menores a list) y + cuenta (ordena (mayores a list)) y " by simp
also have "... = cuenta (menores a list) y + cuenta (mayores a list) y " using less 2 by simp
also have "... = cuenta xs y"
proof (cases "y<a")
case True
hence "cuenta (menores a list) y + cuenta (mayores a list) y
= cuenta list y + cuenta (mayores a list) y" by (simp add: cuenta_menores)
also have "... = cuenta list y" using "True" by (simp add: cuenta_mayores)
also have "... = cuenta (a#list) y" using "False" by simp
finally show ?thesis using `xs=a#list` by simp
next
case False
hence "cuenta (menores a list) y + cuenta (mayores a list) y
= cuenta (mayores a list) y" by (simp add: cuenta_menores)
also have "... = cuenta list y" using "False" by (simp add: cuenta_mayores)
also have "... = cuenta (a#list) y" using `¬(a=y)` by simp
finally show ?thesis using `xs=a#list` by simp
qed
finally show ?thesis by simp
next
case True
hence "¬(y<a)" by simp
have "cuenta (menores a list) y + cuenta (a # (ordena (mayores a list))) y
= cuenta (menores a list) y + Suc(cuenta (ordena (mayores a list)) y) " using "True" by simp
also have "... = cuenta (menores a list) y + Suc(cuenta (mayores a list) y) " using less 2 by simp
also have "... = Suc(cuenta(mayores a list) y)" using `¬(y<a)` by (simp add: cuenta_menores)
also have "... = Suc(cuenta list y)" using `¬(y<a)` by (simp add: cuenta_mayores)
also have "... = cuenta (a#list) y" using "True" by simp
finally show ?thesis using `xs=a#list` by simp
qed
finally show ?thesis using 3 4 by simp
qed
qed
To do the automatic proof I think I have to write something like this:
theorem cuenta_ordena:
"cuenta (ordena xs) y = cuenta xs y"
apply (induction "length xs" arbitrary: xs rule: less_induct)
apply (cases xs)
apply (auto simp add: cuenta_append cuenta_menores cuenta_mayores)
Can you help me?
Thank you!
Based upon your proof and my meagre knowledge of Spanish, I imagine your theory looks something like this:
fun mejores :: "('a :: linorder) ⇒ 'a list ⇒ 'a list" where
"mejores y [] = []"
| "mejores y (x#xs) = (if x ≥ y then [x] else []) # mejores y xs"
fun menores :: "('a :: linorder) ⇒ 'a list ⇒ 'a list" where
"menores y [] = []"
| "menores y (x#xs) = (if x < y then [x] else []) # menores y xs"
lemma length_mejores [simp]: "length (mejores y xs) ≤ length xs"
by (induction xs) simp_all
lemma length_menores [simp]: "length (menores y xs) ≤ length xs"
by (induction xs) simp_all
fun ordena where
"ordena [] = []"
| "ordena (x#xs) = ordena (menores x xs) # [x] # ordena (mejores x xs)"
fun cuenta :: "_ list ⇒ _ ⇒ nat" where
"cuenta [] y = 0"
| "cuenta (x#xs) y = (if y = x then 1 else 0) + cuenta xs y"
The automatic proof you suggested cannot work here, because when you write apply (cases xs), xs is a variable that is universally quantified in the goal. If you want to do case distinction over such a variable, you should do an Isar proof (as you did before).
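If you want to stay in apply style, the old-style case_tac method is a possible workaround, since (unlike cases) it can split on a variable that is only bound by ⋀ inside the goal. A minimal sketch to illustrate the difference (a toy goal, not your theorem):
lemma "⋀xs :: 'a list. xs = [] ∨ (∃y ys. xs = y # ys)"
  apply (case_tac xs)  (* 'apply (cases xs)' would not act on the ⋀-bound xs *)
  apply auto
  done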
An easier approach requiring fewer auxiliary lemmas would be the following:
lemma cuenta_append [simp]: "cuenta (xs @ ys) y = cuenta xs y + cuenta ys y"
by (induction xs) simp_all
lemma cuenta_mejores_menores: "cuenta (menores x xs) y + cuenta (mejores x xs) y = cuenta xs y"
by (induction xs) auto
...and the proof is completely automatic:
lemma "cuenta (ordena xs) y = cuenta xs y"
by (induction xs rule: ordena.induct) (auto simp: cuenta_mejores_menores)
Note that I used the induction rule for the ordena function. The induction on the list length that you did is more general, but that makes it more difficult to use automation. The rule ordena.induct looks like this:
P [] ⟹
(⋀x xs.
P (menores x xs) ⟹
P (mejores x xs) ⟹
P (x # xs)) ⟹
P a0
That is precisely what you need here. Also, note that if you do want to do induction on list length, using the rule length_induct is much easier than natural-number induction on the list length itself, which is what you did.
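For reference, length_induct looks roughly like this (as printed by thm length_induct):
thm length_induct
(* (⋀xs. ∀ys. length ys < length xs ⟶ ?P ys ⟹ ?P xs) ⟹ ?P ?xs *)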
Also, a simpler definition of ordena that does not require the auxiliary functions menores and mejores would be:
fun ordena :: "('a :: linorder) list ⇒ 'a list" where
"ordena [] = []"
| "ordena (x#xs) = ordena [y ← xs. y < x] # [x] # ordena [y ← xs. y ≥ x]"
Note that [y ← xs. y < x] is simply syntactic sugar for filter (λy. y < x) xs. Then you don't need cuenta_mejores_menores anymore and can use the following very general lemma on the interaction between cuenta and filter:
lemma cuenta_filter [simp]: "cuenta (filter P xs) y = (if P y then cuenta xs y else 0)"
by (induction xs) simp_all
and the proof goes through automatically again:
lemma "cuenta (ordena xs) y = cuenta xs y"
by (induction xs rule: ordena.induct) auto
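As a small illustration of the comprehension syntax mentioned above, a hypothetical evaluation with value:
value "[y ← [1, 5, 3, 7 :: int]. y < 4]"
(* should return [1, 3], i.e. filter (λy. y < 4) [1, 5, 3, 7] *)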
Related
I'm formalizing this article in Isabelle. In section 4.1 it describes the following setting:
context
fixes c d :: real
assumes "c ≠ 0" "∃ b. c = b^2" "∃ b'. d = b'^2"
begin
definition t where "t = sqrt(d/c)"
definition e' where "e' x y = x^2 + y^2 - 1 - t^2 * x^2 * y^2"
definition ρ where "ρ x y = (-y,x)"
definition τ where "τ x y = (1/(t*x),1/(t*y))"
It then defines G to be the abelian group of order eight generated by ρ and τ.
Is there an easy way of:
1. Stating that ρ and τ generate a group.
2. Since ρ and τ have order 2 and commute, I think that all the rest commute, and maybe there is a built-in theorem saying that this has to correspond to an abelian group of order 8?
I did make an attempt to solve the problem and came up with a slightly forceful method for its solution:
context
fixes c d :: real
assumes "c ≠ 0" "∃b. c = b^2" "∃b'. d = b'^2"
begin
definition t where "t = sqrt(d/c)"
definition e' where "e' x y = x^2 + y^2 - 1 - t^2 * x^2 * y^2"
context
assumes nz_t: "t ≠ 0"
begin
definition ρ :: "real × real ⇒ real × real" where
"ρ z = (-snd z, fst z)"
definition τ :: "real × real ⇒ real × real" where
"τ z = (1/(t*fst z), 1/(t*snd z))"
definition S where
"S ≡
{
id,
(λz. (-snd z, fst z)),
(λz. (-fst z, -snd z)),
(λz. (snd z, -fst z)),
(λz. (1/(t*fst z), 1/(t*snd z))),
(λz. (-1/(t*snd z), 1/(t*fst z))),
(λz. (-1/(t*fst z), -1/(t*snd z))),
(λz. (1/(t*snd z), -1/(t*fst z)))
}"
definition ρS where
"ρS ≡
{id, (λz. (-snd z, fst z)), (λz. (-fst z, -snd z)), (λz. (snd z, -fst z))}"
definition τS where
"τS ≡ {id, (λz. (1/(t*fst z), 1/(t*snd z)))}"
definition BIJ where "BIJ = ⦇carrier = {f. bij f}, mult = comp, one = id⦈"
interpretation bij: group BIJ
unfolding BIJ_def
apply unfold_locales
subgoal by (simp add: bij_comp)
subgoal by (simp add: comp_assoc)
subgoal by simp
subgoal by simp
subgoal by simp
subgoal
unfolding Units_def
by clarsimp
(metis inj_iff bij_betw_def bij_betw_inv_into inv_o_cancel surj_iff)
done
(*the proof may take quite a few seconds*)
lemma comp_S: "x ∈ S ⟹ y ∈ S ⟹ x ∘ y ∈ S"
unfolding comp_apply S_def Set.insert_iff by (elim disjE) fastforce+
lemma comm_S: "x ∈ S ⟹ y ∈ S ⟹ x ∘ y = y ∘ x"
unfolding comp_apply S_def Set.insert_iff by (elim disjE) fastforce+
lemma bij_ρ: "bij ρ"
unfolding bij_def inj_def surj_def ρ_def
by clarsimp (metis add.inverse_inverse)
lemma bij_τ: "bij τ"
unfolding bij_def inj_def surj_def τ_def
proof(simp add: nz_t, intro allI, intro exI)
fix a show "a = 1 / (t * (1/(a*t)))" using nz_t by simp
qed
lemma generate_ρτ: "generate BIJ {ρ, τ} = S"
proof(standard; intro subsetI)
have inv_τ: "inv⇘BIJ⇙ τ = τ"
unfolding m_inv_def
proof(standard)
show "τ ∈ carrier BIJ ∧ τ ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙ ∧ τ ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙"
unfolding BIJ_def apply(intro conjI)
subgoal using bij_τ by simp
subgoal unfolding τ_def using nz_t by auto
subgoal unfolding τ_def using nz_t by auto
done
then show
"y ∈ carrier BIJ ∧ τ ⊗⇘BIJ⇙ y = 𝟭⇘BIJ⇙ ∧ y ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙ ⟹ y = τ"
for y
unfolding BIJ_def by (auto intro: left_right_inverse_eq)
qed
define ρ' :: "real × real ⇒ real × real" where "ρ' = (λz. (snd z, -fst z))"
have bij_ρ': "bij ρ'"
unfolding bij_def inj_def surj_def ρ'_def
by simp (metis add.inverse_inverse)
have inv_ρ: "inv⇘BIJ⇙ ρ = ρ'"
unfolding m_inv_def
proof(standard)
show "ρ' ∈ carrier BIJ ∧ ρ ⊗⇘BIJ⇙ ρ' = 𝟭⇘BIJ⇙ ∧ ρ' ⊗⇘BIJ⇙ ρ = 𝟭⇘BIJ⇙"
unfolding BIJ_def apply(intro conjI)
subgoal using bij_ρ' by auto
subgoal unfolding ρ_def ρ'_def by auto
subgoal unfolding ρ_def ρ'_def by auto
done
then show
"y ∈ carrier BIJ ∧ ρ ⊗⇘BIJ⇙ y = 𝟭⇘BIJ⇙ ∧ y ⊗⇘BIJ⇙ ρ = 𝟭⇘BIJ⇙ ⟹ y = ρ'"
for y
unfolding BIJ_def by (auto intro: left_right_inverse_eq)
qed
have ττ: "τ ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙"
unfolding BIJ_def τ_def comp_def by (auto simp: nz_t)
show "x ∈ generate BIJ {ρ, τ} ⟹ x ∈ S" for x
apply(induction rule: generate.induct)
subgoal unfolding BIJ_def S_def by auto
subgoal unfolding BIJ_def S_def ρ_def τ_def by auto
subgoal
unfolding Set.insert_iff apply(elim disjE)
subgoal using inv_ρ unfolding BIJ_def S_def ρ_def ρ'_def by simp
subgoal using inv_τ unfolding BIJ_def S_def τ_def by simp
subgoal by simp
done
subgoal unfolding BIJ_def by (metis monoid.select_convs(1) comp_S)
done
show "x ∈ S ⟹ x ∈ generate BIJ {ρ, τ}" for x
unfolding S_def Set.insert_iff
proof(elim disjE; clarsimp)
show "id ∈ generate BIJ {ρ, τ}"
unfolding BIJ_def using generate.simps by fastforce
show ρ_gen: "(λz. (- snd z, fst z)) ∈ generate BIJ {ρ, τ}"
by (fold ρ_def, rule generate.simps[THEN iffD2]) simp
show τ_gen: "(λz. (1 / (t * fst z), 1 / (t * snd z))) ∈ generate BIJ {ρ, τ}"
by (fold τ_def) (simp add: generate.incl)
from inv_ρ show inv_ρ_gen: "(λz. (snd z, - fst z)) ∈ generate BIJ {ρ, τ}"
by (fold ρ'_def) (auto simp: generate.inv insertI1)
show ρρ_gen: "(λz. (- fst z, - snd z)) ∈ generate BIJ {ρ, τ}"
proof-
have ρρ: "(λz. (- fst z, - snd z)) = ρ ⊗⇘BIJ⇙ ρ"
unfolding ρ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using ρρ ρ_gen[folded ρ_def] by auto
qed
show "(λz. (- (1 / (t * snd z)), 1 / (t * fst z))) ∈ generate BIJ {ρ, τ}"
proof-
have ρτ: "(λz. (- (1 / (t * snd z)), 1 / (t * fst z))) = ρ ⊗⇘BIJ⇙ τ"
unfolding ρ_def τ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using ρτ ρ_gen[folded ρ_def] τ_gen[folded τ_def] by auto
qed
show
"(λz. (- (1 / (t * fst z)), - (1 / (t * snd z)))) ∈ generate BIJ {ρ, τ}"
proof-
have ρρτ:
"(λz. (- (1 / (t * fst z)), - (1 / (t * snd z)))) =
(λz. (- fst z, - snd z)) ⊗⇘BIJ⇙ τ"
unfolding τ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using ρρτ ρρ_gen τ_gen[folded τ_def] by auto
qed
show "(λz. (1 / (t * snd z), - (1 / (t * fst z)))) ∈ generate BIJ {ρ, τ}"
proof-
have inv_ρ_τ:
"(λz. (1 / (t * snd z), - (1 / (t * fst z)))) =
(λz. (snd z, - fst z)) ⊗⇘BIJ⇙ τ"
unfolding τ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using inv_ρ_τ inv_ρ_gen τ_gen[folded τ_def] by auto
qed
qed
qed
lemma "comm_group (BIJ⦇carrier := (generate BIJ {ρ, τ})⦈)"
proof-
have ρτ_ss_BIJ: "{ρ, τ} ⊆ carrier BIJ"
using bij_ρ bij_τ unfolding BIJ_def by simp
interpret ρτ_sg: subgroup "(generate BIJ {ρ, τ})" BIJ
using ρτ_ss_BIJ by (rule bij.generate_is_subgroup)
interpret ρτ_g: group "BIJ⦇carrier := (generate BIJ {ρ, τ})⦈"
by (rule ρτ_sg.subgroup_is_group[OF bij.group_axioms])
have car_S: "carrier (BIJ⦇carrier := S⦈) = S" by simp
have BIJ_comp: "x ⊗⇘BIJ⦇carrier := S⦈⇙ y = x ∘ y" for x y
unfolding BIJ_def by auto
from ρτ_g.group_comm_groupI[
unfolded generate_ρτ car_S BIJ_comp, OF comm_S, simplified
]
show ?thesis unfolding generate_ρτ by assumption
qed
lemma id_pair_def: "(λx. x) = (λz. (fst z, snd z))" by simp
lemma distinct_single: "distinct [x] = True" by simp
lemma ne_ff'_gg'_imp_ne_fgf'g':
assumes "f ≠ f' ∨ g ≠ g'"
shows
"(λz. (f (fst z) (snd z), g (fst z) (snd z))) ≠
(λz. (f' (fst z) (snd z), g' (fst z) (snd z)))"
using assms
proof(rule disjE)
assume "f ≠ f'"
then obtain x y where "f x y ≠ f' x y" by blast
then show ?thesis by (metis (hide_lams) fst_eqD snd_eqD)
next
assume "g ≠ g'"
then obtain x y where "g x y ≠ g' x y" by blast
then show ?thesis by (metis (hide_lams) fst_eqD snd_eqD)
qed
lemma id_ne_hyp: "(λa. a) ≠ (λa. 1/(t*a))"
proof(rule ccontr, simp)
assume id_eq_hyp: "(λa. a) = (λa. 1/(t*a))"
{
fix a :: real assume "a > 0"
define b where "b = sqrt(a)"
from ‹a > 0› have "a = b*b" and "b > 0" unfolding b_def by auto
from id_eq_hyp have "b = 1/(t*b)" by metis
with ‹b > 0› have "b div b =(1/(t*b)) div b" by simp
with ‹b > 0› have "1 = (1/(t*a))" unfolding ‹a = b*b› by simp
with ‹a > 0› nz_t have "t*a = 1" by simp
}
note ta_eq_one = this
define t2 where "t2 = (if t > 0 then 2/t else -2/t)"
with nz_t have "t2 > 0" unfolding t2_def by auto
from nz_t have "t*t2 = 2 ∨ t*t2 = -2" unfolding t2_def by auto
from ta_eq_one ‹t2 > 0› this show False by auto
qed
lemma id_ne_mhyp: "(λa. a) ≠ (λa. -1/(t*a))"
proof(rule ccontr, simp)
assume id_eq_hyp: "(λa. a) = (λa. -(1/(t*a)))"
{
fix a :: real assume "a > 0"
define b where "b = sqrt(a)"
from ‹a > 0› have "a = b*b" and "b > 0" unfolding b_def by auto
from id_eq_hyp have "b = -(1/(t*b))" by metis
with ‹b > 0› have "b div b =-1/(t*b) div b" by simp
with ‹b > 0› have "1 = -1/(t*a)" unfolding ‹a = b*b› by simp
with ‹a > 0› nz_t have "t*a = -1" by (metis divide_eq_1_iff)
}
note ta_eq_one = this
define t2 where "t2 = (if t > 0 then 2/t else -2/t)"
with nz_t have "t2 > 0" unfolding t2_def by auto
from nz_t have "t*t2 = 2 ∨ t*t2 = -2" unfolding t2_def by auto
from ta_eq_one ‹t2 > 0› this show False by auto
qed
lemma mid_ne_hyp: "(λa. -a) ≠ (λa. 1 / (t*a))"
using id_ne_mhyp by (metis minus_divide_left minus_equation_iff)
lemma mid_ne_mhyp: "(λa. -a) ≠ (λa. -1 / (t*a))"
using id_ne_hyp by (metis divide_minus_left minus_equation_iff)
lemma hyp_neq_hyp_1: "(λa. - 1/(t*a)) ≠ (λa. 1/(t*a))"
using nz_t
by (metis divide_cancel_right id_ne_mhyp mult_cancel_right1 mult_left_cancel
one_neq_neg_one)
lemma distinct:
"distinct
[
id,
(λz. (-snd z, fst z)),
(λz. (-fst z, -snd z)),
(λz. (snd z, -fst z)),
(λz. (1/(t*fst z), 1/(t*snd z))),
(λz. (-1/(t*snd z), 1/(t*fst z))),
(λz. (-1/(t*fst z), -1/(t*snd z))),
(λz. (1/(t*snd z), -1/(t*fst z)))
]"
apply(unfold distinct_length_2_or_more)+
unfolding
distinct_length_2_or_more
distinct_single
id_def id_pair_def
HOL.simp_thms(21)
by
(intro conjI)
(
rule ne_ff'_gg'_imp_ne_fgf'g',
metis one_neq_neg_one id_ne_hyp id_ne_mhyp
mid_ne_hyp mid_ne_mhyp hyp_neq_hyp_1
)+
lemma "card S = 8"
using distinct unfolding S_def using card_empty card_insert_disjoint by auto
end
end
Remarks
I relied on sledgehammer for many parts of the proofs and there is some unnecessary code duplication. Therefore, just like most of my answers on SO, this answer is far from perfect from the perspective of the coding style.
I would be interested to know if there is a better overall approach for the solution. Somehow, I came to believe that most of the more thoughtful approaches (e.g. using theorems about cyclic groups to determine the order of ρ and τ and then using |HK|=|H||K|/|H∩K| to determine the order of G) would require proving quite a number of additional theorems for HOL-Algebra, but I did not check with the AFP before making this remark and I do not use HOL-Algebra on a regular basis. Therefore, I may have missed something.
I am trying to prove correct an algorithm that splits a list of integers into two sublists of equal sum in linear time. Here you can see the algorithm I have chosen to do so.
I would like to get some feedback regarding:
1. The convenience of my definition for the splitting function.
2. The "induction" hypothesis to use in my situation.
Please, bear in mind that up to now I have only worked with apply-scripts and not with Isar proofs.
Here is a preliminary implementation of the algorithm and the correctness definition:
definition
"ex_balanced_sum xs = (∃ ys zs. sum_list ys = sum_list zs ∧
xs = ys # zs ∧ ys ≠ [] ∧ zs ≠ [])"
fun check_list :: "int list ⇒ int ⇒ int ⇒ bool" where
"check_list [] n acc = False" |
"check_list (x#xs) n acc = (if n = acc then True else (check_list xs (n-x) (acc+x)))"
fun linear_split :: "int list ⇒ bool" where
"linear_split [] = False" |
"linear_split [x] = False" |
"linear_split (x # xs) = check_list xs (sum_list xs) x"
The theorem to prove is as follows:
lemma linear_correct: "linear_split xs ⟷ ex_balanced_sum xs"
If I reason for instance for the first implication as:
lemma linear_correct_1: "linear_split xs ⟹ ex_balanced_sum xs"
apply(induction xs rule: linear_split.induct)
Then I get a list of subgoals that I think are not appropriate:
linear_split [] ⟹ ex_balanced_sum []
⋀x. linear_split [x] ⟹ ex_balanced_sum [x]
⋀x v va. linear_split (x # v # va) ⟹ ex_balanced_sum (x # v # va)
In particular, these subgoals don't have an induction hypothesis! (Am I right?) I tried to perform a different induction by just writing apply(induction xs), but then the goals look like this:
linear_split [] ⟹ ex_balanced_sum []
⋀a xs. (linear_split xs ⟹ ex_balanced_sum xs) ⟹ linear_split (a # xs) ⟹ ex_balanced_sum (a # xs)
Here the hypothesis is also not an induction hypothesis since it is assuming an implication.
So, what is the best way to define this function to get a nice induction hypothesis?
Edit (a one-function version)
fun check :: "int list ⇒ int ⇒ int ⇒ bool" where
"check [] n acc = False" |
"check [x] n acc = False" |
"check (x # y # xs) n acc = (if n-x = acc+x then True else check (y # xs) (n-x) (acc+x))"
definition "linear_split xs = check xs (sum_list xs) 0"
Background
I was able to prove the theorem linear_correct for a function (splitl) that is very similar to the function check in the statement of the question. Unfortunately, I would prefer not to make any attempts to convert the proof into an apply script.
The proof below is the first proof that came to my mind after I started investigating the question. Thus, there may exist better proofs.
Proof Outline
The proof is based on induction on the length of the list. In particular, assume
splitl xs (sum_list xs) 0 ⟹ ex_balanced_sum xs
holds for all lists of length less than l. If l = 1, then the result is easy to show. Assume that l ≥ 2. Then the list can be expressed in the form x#v#xs. In this case, if it is possible to split the list using splitl, then it can be shown (splitl_reduce) that either
"splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0" (1)
or
"x = sum_list (v#xs)" (2).
Thus, the proof proceeds by cases for (1) and (2). For (1), the length of the list (x + v)#xs is l-1. Hence, by the induction hypothesis, ex_balanced_sum ((x + v)#xs). Therefore, by the definition of ex_balanced_sum, also ex_balanced_sum (x#v#xs). For (2), the list can be expressed as [x] @ (v#xs) and, given (2), it satisfies the conditions of ex_balanced_sum by definition.
The proof for the other direction is similar and based on the converse of the lemma associated with (1) and (2) above: if "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0" or "x = sum_list (v#xs)", then "splitl (x#v#xs) (sum_list (x#v#xs)) 0".
theory so_ptcoaatplii
imports Complex_Main
begin
definition
"ex_balanced_sum xs =
(∃ ys zs. sum_list ys = sum_list zs ∧ xs = ys @ zs ∧ ys ≠ [] ∧ zs ≠ [])"
fun splitl :: "int list ⇒ int ⇒ int ⇒ bool" where
"splitl [] s1 s2 = False" |
"splitl [x] s1 s2 = False" |
"splitl (x # xs) s1 s2 = ((s1 - x = s2 + x) ∨ splitl xs (s1 - x) (s2 + x))"
lemma splitl_reduce:
assumes "splitl (x#v#xs) (sum_list (x#v#xs)) 0"
shows "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0 ∨ x = sum_list (v#xs)"
proof -
from assms have prem_cases:
"((x = sum_list (v#xs)) ∨ splitl (v#xs) (sum_list (v#xs)) x)" by auto
{
assume "splitl (v#xs) (sum_list (v#xs)) x"
then have "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0"
proof(induction xs arbitrary: x v)
case Nil then show ?case by simp
next
case (Cons a xs) then show ?case by simp
qed
}
with prem_cases show ?thesis by auto
qed
(*Sledgehammered*)
lemma splitl_expand:
assumes "splitl ((x + v)#xs) (sum_list ((x + v)#xs)) 0 ∨ x = sum_list (v#xs)"
shows "splitl (x#v#xs) (sum_list (x#v#xs)) 0"
by (smt assms list.inject splitl.elims(2) splitl.simps(3) sum_list.Cons)
lemma splitl_to_sum: "splitl xs (sum_list xs) 0 ⟹ ex_balanced_sum xs"
proof(induction xs rule: length_induct)
case (1 xs) show ?case
proof-
obtain x v xst where x_xst: "xs = x#v#xst"
by (meson "1.prems" splitl.elims(2))
have main_cases:
"splitl ((x + v)#xst) (sum_list ((x + v)#xst)) 0 ∨ x = sum_list (v#xst)"
by (rule splitl_reduce, insert x_xst "1.prems", rule subst)
{
assume "splitl ((x + v)#xst) (sum_list ((x + v)#xst)) 0"
with "1.IH" x_xst have "ex_balanced_sum ((x + v)#xst)" by simp
then obtain yst zst where
yst_zst: "(x + v)#xst = yst#zst"
and sum_yst_eq_sum_zst: "sum_list yst = sum_list zst"
and yst_ne: "yst ≠ []"
and zst_ne: "zst ≠ []"
unfolding ex_balanced_sum_def by auto
then obtain ystt where ystt: "yst = (x + v)#ystt"
by (metis append_eq_Cons_conv)
with sum_yst_eq_sum_zst have "sum_list (x#v#ystt) = sum_list zst" by simp
moreover have "xs = (x#v#ystt)#zst" using x_xst yst_zst ystt by auto
moreover have "(x#v#ystt) ≠ []" by simp
moreover with zst_ne have "zst ≠ []" by simp
ultimately have "ex_balanced_sum xs" unfolding ex_balanced_sum_def by blast
}
note prem = this
{
assume "x = sum_list (v#xst)"
then have "sum_list [x] = sum_list (v#xst)" by auto
moreover with x_xst have "xs = [x] # (v#xst)" by auto
ultimately have "ex_balanced_sum xs" using ex_balanced_sum_def by blast
}
with prem main_cases show ?thesis by blast
qed
qed
lemma sum_to_splitl: "ex_balanced_sum xs ⟹ splitl xs (sum_list xs) 0"
proof(induction xs rule: length_induct)
case (1 xs) show ?case
proof -
from "1.prems" ex_balanced_sum_def obtain ys zs where
ys_zs: "xs = ys#zs"
and sum_ys_eq_sum_zs: "sum_list ys = sum_list zs"
and ys_ne: "ys ≠ []"
and zs_ne: "zs ≠ []"
by blast
have prem_cases: "∃y v yst. ys = (y#v#yst) ∨ (∃y. ys = [y])"
by (metis remdups_adj.cases ys_ne)
{
assume "∃y. ys = [y]"
then have "splitl xs (sum_list xs) 0"
using splitl.elims(3) sum_ys_eq_sum_zs ys_zs zs_ne by fastforce
}
note prem = this
{
assume "∃y v yst. ys = (y#v#yst)"
then obtain y v yst where y_v_yst: "ys = (y#v#yst)" by auto
then have
"sum_list ((y + v)#yst) = sum_list zs ∧ ((y + v)#yst) ≠ [] ∧ zs ≠ []"
using sum_ys_eq_sum_zs zs_ne by auto
then have ebs_ypv: "ex_balanced_sum (((y + v)#yst) @ zs)"
using ex_balanced_sum_def by blast
have l_ypv: "length (((y + v)#yst)#zs) < length xs"
by (simp add: y_v_yst ys_zs)
from l_ypv ebs_ypv have
"splitl (((y + v)#yst)#zs) (sum_list (((y + v)#yst)#zs)) 0"
by (rule "1.IH"[THEN spec, rule_format])
with splitl_expand have splitl_ys_exp:
"splitl ((y#v#yst)#zs) (sum_list ((y#v#yst)#zs)) 0"
by (metis Cons_eq_appendI)
from ys_zs have "splitl xs (sum_list xs) 0"
by (rule ssubst, insert y_v_yst splitl_ys_exp, simp)
}
with prem prem_cases show ?thesis by auto
qed
qed
lemma linear_correct: "ex_balanced_sum xs ⟷ splitl xs (sum_list xs) 0"
using splitl_to_sum sum_to_splitl by auto
end
I'm trying to prove that a transitive relation on elements of lists is equivalent to a transitive relation on lists (under some conditions).
Here is a first lemma:
lemma list_all2_rtrancl1:
"(list_all2 P)⇧*⇧* xs ys ⟹
list_all2 P⇧*⇧* xs ys"
apply (induct rule: rtranclp_induct)
apply (simp add: list.rel_refl)
by (smt list_all2_trans rtranclp.rtrancl_into_rtrancl)
And here is a symmetric lemma:
lemma list_all2_rtrancl2:
"(⋀x. P x x) ⟹
list_all2 P⇧*⇧* xs ys ⟹
(list_all2 P)⇧*⇧* xs ys"
apply (erule list_all2_induct)
apply simp
I guess that the relation should be reflexive, but maybe I should use other assumptions. The lemma could be proven given the assumption that P is transitive; however, P is not transitive. I'm stuck. Could you suggest which assumptions to choose and how to prove this lemma?
It seems that nitpick gives me a wrong counterexample for the specific case of the last lemma (xs = [0] and ys = [2]):
lemma list_all2_rtrancl2_example:
"list_all2 (λx y. x = y ∨ Suc x = y)⇧*⇧* xs ys ⟹
(list_all2 (λx y. x = y ∨ Suc x = y))⇧*⇧* xs ys"
nitpick
I can prove that the lemma holds for this example:
lemma list_all2_rtrancl2_example_0_2:
"list_all2 (λx y. x = y ∨ Suc x = y)⇧*⇧* [0] [2] ⟹
(list_all2 (λx y. x = y ∨ Suc x = y))⇧*⇧* [0] [2]"
apply (rule_tac ?b="[1]" in converse_rtranclp_into_rtranclp; simp)
apply (rule_tac ?b="[2]" in converse_rtranclp_into_rtranclp; simp)
done
It may be feasible to use listrel instead of list_all2. Indeed, as shown below, they are equivalent (see set_listrel_eq_list_all2). However, there are several theorems in the standard library about listrel that do not have their equivalents for list_all2.
lemma set_listrel_eq_list_all2:
"listrel {(x, y). r x y} = {(xs, ys). list_all2 r xs ys}"
using list_all2_conv_all_nth listrel_iff_nth by fastforce
lemma listrel_tclosure_1: "(listrel r)⇧* ⊆ listrel (r⇧*)"
by
(
simp add:
listrel_rtrancl_eq_rtrancl_listrel1
listrel_subset_rtrancl_listrel1
rtrancl_subset_rtrancl
)
lemma listrel_tclosure_2: "refl r ⟹ listrel (r⇧*) ⊆ (listrel r)⇧*"
by
(
simp add:
listrel1_subset_listrel
listrel_rtrancl_eq_rtrancl_listrel1
rtrancl_mono
)
context
includes lifting_syntax
begin
lemma listrel_list_all2_transfer[transfer_rule]:
"((=) ===> (=) ===> (=) ===> (=))
(λr xs ys. (xs, ys) ∈ listrel {(x, y). r x y}) list_all2"
unfolding rel_fun_def using set_listrel_eq_list_all2 listrel_iff_nth by blast
end
lemma list_all2_rtrancl_1:
"(list_all2 r)⇧*⇧* xs ys ⟹ list_all2 r⇧*⇧* xs ys"
proof transfer
fix r :: "'a ⇒ 'a ⇒ bool" and xs :: "'a list" and ys:: "'a list"
assume "(λxs ys. (xs, ys) ∈ listrel {(x, y). r x y})⇧*⇧* xs ys"
then have "(xs, ys) ∈ (listrel {(x, y). r x y})⇧*"
unfolding rtranclp_def rtrancl_def by auto
then have "(xs, ys) ∈ listrel ({(x, y). r x y}⇧*)"
using listrel_tclosure_1 by auto
then show "(xs, ys) ∈ listrel {(x, y). r⇧*⇧* x y}"
unfolding rtranclp_def rtrancl_def by auto
qed
lemma list_all2_rtrancl_2:
"reflp r ⟹ list_all2 r⇧*⇧* xs ys ⟹ (list_all2 r)⇧*⇧* xs ys"
proof transfer
fix r :: "'a ⇒ 'a ⇒ bool" and xs :: "'a list" and ys :: "'a list"
assume as_reflp: "reflp r" and p_in_lr: "(xs, ys) ∈ listrel {(x, y). r⇧*⇧* x y}"
from as_reflp have refl: "refl {(x, y). r x y}"
using reflp_refl_eq by fastforce
from p_in_lr have "(xs, ys) ∈ listrel ({(x, y). r x y}⇧*)"
unfolding rtranclp_def rtrancl_def by auto
with refl have "(xs, ys) ∈ (listrel {(x, y). r x y})⇧*"
using listrel_tclosure_2 by auto
then show "(λxs ys. (xs, ys) ∈ listrel {(x, y). r x y})⇧*⇧* xs ys"
unfolding rtranclp_def rtrancl_def by auto
qed
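For instance, the concrete example from the question can then presumably be discharged as follows (a sketch that assumes the lemmas above; reflexivity of the step relation follows from the x = y disjunct):
lemma
  assumes "list_all2 (λx y. x = y ∨ Suc x = y)⇧*⇧* xs ys"
  shows "(list_all2 (λx y. x = y ∨ Suc x = y))⇧*⇧* xs ys"
proof -
  have "reflp (λx y. x = y ∨ Suc x = y)" by (rule reflpI) simp
  then show ?thesis using assms by (rule list_all2_rtrancl_2)
qed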
A direct proof for list_all2 is also provided (legacy):
1. list_all2_induct is applied to the lists; the base case is trivial. Thence, it remains to show that (L P)* x#xs y#ys if (L (P*)) xs ys, (L P)* xs ys and P* x y.
2. The idea is that it is possible to find zs (e.g. xs) such that (L P) xs zs and (L P)+ zs ys.
3. Then, given that P* x y and P x x, by induction based on the transitive properties of P*, (L P) x#xs y#zs. Therefore, also, (L P)* x#xs y#zs.
4. Also, given that (L P)+ zs ys and P y y, by induction, (L P)+ y#zs y#ys. Thus, also, (L P)* y#zs y#ys.
5. From 3 and 4 conclude (L P)* x#xs y#ys.
lemma list_all2_rtrancl2:
assumes as_r: "(⋀x. P x x)"
shows "(list_all2 P⇧*⇧*) xs ys ⟹ (list_all2 P)⇧*⇧* xs ys"
proof(induction rule: list_all2_induct)
case Nil then show ?case by simp
next
case (Cons x xs y ys) show ?case
proof -
from as_r have lp_xs_xs: "list_all2 P xs xs" by (rule list_all2_refl)
from Cons.hyps(1) have x_xs_y_zs: "(list_all2 P)⇧*⇧* (x#xs) (y#xs)"
proof(induction rule: rtranclp_induct)
case base then show ?case by simp
next
case (step y z) then show ?case
proof -
have rt_step_2: "(list_all2 P)⇧*⇧* (y#xs) (z#xs)"
by (rule r_into_rtranclp, rule list_all2_Cons[THEN iffD2])
(simp add: step.hyps(2) lp_xs_xs)
from step.IH rt_step_2 show ?thesis by (rule rtranclp_trans)
qed
qed
from Cons.IH have "(list_all2 P)⇧*⇧* (y#xs) (y#ys)"
proof(induction rule: rtranclp_induct)
case base then show ?case by simp
next
case (step ya za) show ?case
proof -
have rt_step_2: "(list_all2 P)⇧*⇧* (y#ya) (y#za)"
by (rule r_into_rtranclp, rule list_all2_Cons[THEN iffD2])
(simp add: step.hyps(2) as_r)
from step.IH rt_step_2 show ?thesis by (rule rtranclp_trans)
qed
qed
with x_xs_y_zs show ?thesis by simp
qed
qed
As a side note, in my view (I know very little about nitpick), nitpick should not provide invalid counterexamples without any warning. I believe, usually, when nitpick 'suspects' that a counterexample may be invalid it notifies the user that the example is 'potentially spurious'. It may be useful to submit a bug report if this issue has not been recorded elsewhere.
Isabelle version: Isabelle2020
I'm interested in using Isabelle/Isar for writing proofs which are both human-readable and machine checked, and I am looking to improve my style and streamline my proofs.
prog-prove has the following exercise:
Exercise 4.6. Define a recursive function elems :: 'a list ⇒ 'a set and prove x ∈ elems xs ⟹ ∃ ys zs. xs = ys @ x # zs ∧ x ∉ elems ys.
Mimicking something similar to what I would write with pen and paper, my solution is
fun elems :: "'a list ⇒ 'a set" where
"elems [] = {}" |
"elems (x # xs) = {x} ∪ elems xs"
fun takeUntil :: "('a ⇒ bool) ⇒ 'a list ⇒ 'a list" where
"takeUntil f [] = []" |
"takeUntil f (x # xs) = (case (f x) of False ⇒ x # takeUntil f xs | True ⇒ [])"
theorem "x ∈ elems xs ⟹ ∃ ys zs. xs = ys # x # zs ∧ x ∉ elems ys"
proof -
assume 1: "x ∈ elems xs"
let ?ys = "takeUntil (λ z. z = x) xs"
let ?zs = "drop (length ?ys + 1) xs"
have "xs = ?ys # x # ?zs ∧ x ∉ elems ?ys"
proof
have 2: "x ∉ elems ?ys"
proof (induction xs)
case Nil
thus ?case by simp
next
case (Cons a xs)
thus ?case
proof -
{
assume "a = x"
hence "takeUntil (λz. z = x) (a # xs) = []" by simp
hence A: ?thesis by simp
}
note eq = this
{
assume "a ≠ x"
hence "takeUntil (λz. z = x) (a # xs) = a # takeUntil (λz. z = x) xs" by simp
hence ?thesis using Cons.IH by auto
}
note noteq = this
have "a = x ∨ a ≠ x" by simp
thus ?thesis using eq noteq by blast
qed
qed
from 1 have "xs = ?ys # x # ?zs"
proof (induction xs)
case Nil
hence False by simp
thus ?case by simp
next
case (Cons a xs)
{
assume 1: "a = x"
hence 2: "takeUntil (λz. z = x) (a # xs) = []" by simp
hence "length (takeUntil (λz. z = x) (a # xs)) + 1 = 1" by simp
hence 3: "drop (length (takeUntil (λz. z = x) (a # xs)) + 1) (a # xs) = xs" by simp
from 1 2 3 have ?case by simp
}
note eq = this
{
assume 1: "a ≠ x"
with Cons.prems have "x ∈ elems xs" by simp
with Cons.IH
have IH: "xs = takeUntil (λz. z = x) xs # x # drop (length (takeUntil (λz. z = x) xs) + 1) xs" by simp
from 1 have 2: "takeUntil (λz. z = x) (a # xs) = a # takeUntil (λz. z = x) (xs)" by simp
from 1 have "drop (length (takeUntil (λz. z = x) (a # xs)) + 1) (a # xs) = drop (length (takeUntil (λz. z = x) xs) + 1) xs" by simp
hence ?case using IH 2 by simp
}
note noteq = this
have "a = x ∨ a ≠ x" by simp
thus ?case using eq noteq by blast
qed
with 2 have 3: ?thesis by blast
thus "xs = takeUntil (λz. z = x) xs # x # drop (length (takeUntil (λz. z = x) xs) + 1) xs" by simp
from 3 show "x ∉ elems (takeUntil (λz. z = x) xs)" by simp
qed
thus ?thesis by blast
qed
but it seems rather long. In particular, I think invoking the law of excluded middle here is cumbersome, and I feel like there ought to be some convenient schematic variable like ?goal which can refer to the current goal or something.
How can I make this proof shorter without sacrificing clarity?
Not really an answer to your specific question, but I would nonetheless like to point out that a more concise proof can still be comprehensible.
lemma "x ∈ elems xs ⟹ ∃ ys zs. xs = ys # x # zs ∧ x ∉ elems ys"
proof (induction)
case (Cons l ls)
thus ?case
proof (cases "x ≠ l")
case True
hence "∃ys zs. ls = ys # x # zs ∧ x ∉ elems ys" using Cons by simp
thus ?thesis using ‹x ≠ l› Cons_eq_appendI by fastforce
qed (fastforce)
qed (simp)
Here's another shorter proof than your own:
fun elems :: ‹'a list ⇒ 'a set› where
‹elems [] = {}› |
‹elems (x#xs) = {x} ∪ elems xs›
lemma elems_prefix_suffix:
assumes ‹x ∈ elems xs›
shows ‹∃pre suf. xs = pre @ [x] @ suf ∧ x ∉ elems pre›
using assms proof(induction xs)
fix y ys
assume *: ‹x ∈ elems (y#ys)›
and IH: ‹x ∈ elems ys ⟹ ∃pre suf. ys = pre @ [x] @ suf ∧ x ∉ elems pre›
{
assume ‹x = y›
from this have ‹∃pre suf. y#ys = pre @ [x] @ suf ∧ x ∉ elems pre›
using * by fastforce
}
note L = this
{
assume ‹x ≠ y› and ‹x ∈ elems ys›
moreover from this obtain pre and suf where ‹ys = pre @ [x] @ suf› and ‹x ∉ elems pre›
using IH by auto
moreover have ‹y#ys = y#pre @ [x] @ suf› and ‹x ∉ elems (y#pre)›
by(simp add: calculation)+
ultimately have ‹∃pre suf. y#ys = pre @ [x] @ suf ∧ x ∉ elems pre›
by(metis append_Cons)
}
from this and L show ‹∃pre suf. y#ys = pre @ [x] @ suf ∧ x ∉ elems pre›
using * by auto
qed auto ― ‹Base case trivial›
I've used a few features of Isar to compress the proof:
Blocks within the braces {...} allow you to perform hypothetical reasoning.
Facts can be explicitly named using note.
The moreover keyword starts a calculation that implicitly "carries along" facts as they are established. The calculation "comes to a head" with the ultimately keyword. This style can significantly reduce the number of explicitly named facts that you need to introduce over the course of a proof. (A minimal sketch of this style follows after this list.)
The qed auto completes the proof by applying auto to all remaining subgoals. A comment notes that the subgoal remaining is the base case of the induction, which is trivial.
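Here is a minimal, self-contained sketch of the moreover/ultimately style on a toy statement (unrelated to the exercise):
lemma "(a::nat) = 1 ⟹ b = 2 ⟹ a + b = 3"
proof -
  assume "a = 1"
  moreover assume "b = 2"
  ultimately show "a + b = 3" by simp
qed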
I have a function that doubles the elements of a list in the form
double [x1, x2, ...] = [x1, x1, x2, x2, ...]
namely
fun double :: " 'a list ⇒ 'a list"
where
"double [] = []" |
"double (x#xs) = x # x # double xs"
and a function that reverses the elements of a list with the help of another function snoc that adds an element to the right side of a list:
fun snoc :: "'a list ⇒ 'a ⇒ 'a list"
where
"snoc [] x = [x]" |
"snoc (y # ys) x = y # (snoc ys x)"
fun reverse :: "'a list ⇒ 'a list"
where
"reverse [] = []" |
"reverse (x # xs) = snoc (reverse xs) x"
Now I want to prove that
lemma rev_double: "rev (double xs) = double (rev xs)"
is true.
I tried to apply induction on xs
lemma rev_double: "rev (double xs) = double (rev xs)"
by (induction xs)
and I wrote an auxiliary lemma double_snoc that ensures that doubling a list is the same as doubling its first element and the rest of the list (which uses the function snocleft which inserts an element at the left end of a list)
fun snocleft::"'a list ⇒ 'a ⇒ 'a list "
where
"snocleft [] x = [x]" |
"snocleft (y # ys) x = x # (y # ys)"
lemma double_snoc: "double (snocleft xs y) = y # y # double xs"
by (induction xs) auto
I still haven't made any progress in proving the lemma. Do you have any solutions or hints on how to set up the proof?
You define your function as reverse, but in all of your lemmas, you use rev, referring to the pre-defined list reversal function rev.
What you mean is probably this:
lemma reverse_double: "reverse (double xs) = double (reverse xs)"
If you attempt to prove this by induction (with apply (induction xs)), you will get stuck in the induction case with the following goal:
snoc (snoc (double (reverse xs)) a) a =
double (snoc (reverse xs) a)
This should be intuitively obvious: if you first snoc and then double, it is the same as first doubling and then snoc-ing twice. So let's prove this as an auxiliary lemma:
lemma double_snoc: "double (snoc xs x) = snoc (snoc (double xs) x) x"
by (induction xs) auto
Now the proof of reverse_double goes through automatically:
lemma reverse_double: "reverse (double xs) = double (reverse xs)"
by (induction xs) (auto simp: double_snoc)
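As a quick concrete check of the statement, two hypothetical evaluations (results computed by hand):
value "reverse (double [1, 2, 3 :: int])"  (* should be [3, 3, 2, 2, 1, 1] *)
value "double (reverse [1, 2, 3 :: int])"  (* should be [3, 3, 2, 2, 1, 1] *)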