This pattern generator produces a list with a given number at a given position; all other values are zero.
fun pattern_one_value :: "nat ⇒ nat ⇒ nat ⇒ nat ⇒ nat list" where
"pattern_one_value _ _ _ 0 = []" |
"pattern_one_value pos pos1 val lng =
(if pos = pos1 then val else 0) # (pattern_one_value pos (pos1 + 1) val (lng - 1))"
The following lemma is intended to prove that the generated lists contain the right value at the given position.
lemma pattern_one_value_check [simp]: "∀pos val. pos < lng ⟹ pattern_one_value pos 0 val lng ! pos = val"
proof(induct lng)
case 0 then show ?case by simp
next
case (Suc lng) then show ?case by auto
qed
It seems to be a correct proof; however, if val in the cons expression of the generator function is changed to an arbitrary number, e.g. (if pos = pos1 then 7 else 0) # ..., the proof still holds, because both the base case and the induction hypothesis are false.
Where am I wrong? Thanks for any help.
It seems to be a correct proof; however, changing val in the cons
expression of the generator function into an arbitrary number like (if pos = pos1 then 7 else 0) # ..., the proof still holds because both
the base and the induction hypothesis are false. Where am I wrong?
I believe that the problem is related to an attempt to treat HOL's universal quantifier ∀ as equivalent to Pure's universal quantifier ⋀. Effectively, it is possible to prove anything from the premise of the theorem pattern_one_value_check, as stated in your question. Indeed:
lemma pattern_one_value_check'[simp]:
"(∀pos val::nat. pos < (lng::nat)) = False"
by auto
lemma pattern_one_value_check''[simp]:
"(∀pos val::nat. pos < (lng::nat)) ⟹ P"
by auto
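In particular, instantiating P with the conclusion from your "7" experiment shows why the modified generator still appears to pass: the statement is vacuously true. The following is merely an instance of the lemma above and should likewise be accepted by auto:
lemma "(∀pos val::nat. pos < (lng::nat)) ⟹ pattern_one_value pos 0 val lng ! pos = 7"
  by auto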
I believe that you meant to use Pure's universal quantification in the statement of the theorem, e.g.
lemma pattern_one_value_check [simp]:
"⋀pos val. pos < lng ⟹ pattern_one_value pos 0 val lng ! pos = val"
proof(induct lng)
case 0 then show ?case by simp
next
case (Suc lng) then show ?case sorry
qed
In fact, even this is not necessary. The following theorem, once proven, will appear in the context as identical to the one stated above:
lemma pattern_one_value_check' [simp]:
"pos < lng ⟹ pattern_one_value pos 0 val lng ! pos = val"
proof(induct lng)
case 0 then show ?case by simp
next
case (Suc lng) then show ?case sorry
qed
If you seek a more detailed explanation, see Section 2.1 in Isar-ref and the document "Programming and Proving in Isabelle/HOL"; both are part of the official documentation.
As a side note, I have to mention that, perhaps, there is an easier way to define pattern_one_value. In this case, the proof of pattern_one_value_check also seems to be easier:
definition pattern_one_value :: "nat ⇒ nat ⇒ nat ⇒ nat list"
where "pattern_one_value val pos len = list_update (replicate len 0) pos val"
lemma pattern_one_value_check:
assumes "pos < len"
shows "pattern_one_value val pos len ! pos = val"
using assms unfolding pattern_one_value_def
apply(induct len)
subgoal by auto
subgoal by (metis length_replicate nth_list_update)
done
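If you would like a quick sanity check of this alternative definition (note the argument order val, pos, len), the value command should be able to evaluate it via the code generator:
value "pattern_one_value 7 2 5" (* expected output: [0, 0, 7, 0, 0] *)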
I have a datatype stack_op which consists of several (~20) cases. I'm trying to write a function which skips some of those cases in a list:
function (sequential) skip_expr :: "stack_op list ⇒ stack_op list" where
"skip_expr [] = []"
| "skip_expr ((stack_op.Unary _)#other) = (skip_expr other)"
| "skip_expr ((stack_op.Binary _)#other) = skip_expr (skip_expr other)"
| "skip_expr ((stack_op.Value _)#other) = other"
| "skip_expr other = other"
by pat_completeness auto
termination by lexicographic_order
which seems to always terminate. But the termination proof by lexicographic_order leaves unresolved cases:
Calls:
c) stack_op.Binary uv_ # other ~> skip_expr other
Measures:
1) size_list size
2) length
Result matrix:
1 2
c: ? ?
(size_change also doesn't work.)
I've read https://isabelle.in.tum.de/dist/Isabelle2021/doc/functions.pdf, but it didn't help. (Maybe there are more complex examples of termination proofs somewhere?)
I tried to rewrite the function, adding another parameter:
function (sequential) skip_expr :: "stack_op list ⇒ nat ⇒ stack_op list" where
"skip_expr l 0 = l"
| "skip_expr [] _ = []"
| "skip_expr ((stack_op.Unary _)#other) depth = (skip_expr other (depth - 1))"
| "skip_expr ((stack_op.Binary _)#other) depth =
(let buff1 = (skip_expr other (depth - 1))
in (skip_expr buff1 (length buff1)))"
| "skip_expr ((stack_op.Value _)#other) _ = other"
| "skip_expr other _ = other"
by pat_completeness auto
termination by (relation "measure (λ(_,dep). dep)") auto
which leaves an unresolved subgoal:
1. ⋀other v. skip_expr_dom (other, v) ⟹ length (skip_expr other v) < Suc v
which I also don't know how to prove.
Could anyone explain how such cases are handled? (As far as I can tell, the problem is the two-level recursive call on the right-hand side of the stack_op.Binary case.) Or maybe there is another way to implement such a skip?
Thanks in advance.
The lexicographic_order method simply tries to solve the arising goals with the simplifier, so if the simplifier gets stuck you end up with unresolved termination subgoals.
In this case, as you identified correctly, the problem is that you have a nested recursive call skip_expr (skip_expr other). This is always problematic because at this stage, the simplifier knows nothing about what skip_expr does to the input list. For all we know, it might just return the list unmodified, or even a longer list, and then it surely would not terminate.
Confronting the issue head on
The solution is to show something about length (skip_expr …) and make that information available to the simplifier. Because we have not yet shown termination of the function, we have to use the skip_expr.psimps rules and the partial induction rule skip_expr.pinduct, i.e. every statement we make about skip_expr xs always has as a precondition that skip_expr actually terminates on the input xs. For this, there is the predicate skip_expr_dom.
Putting it all together, it looks like this:
lemma length_skip_expr [termination_simp]:
"skip_expr_dom xs ⟹ length (skip_expr xs) ≤ length xs"
by (induction xs rule: skip_expr.pinduct) (auto simp: skip_expr.psimps)
termination skip_expr by lexicographic_order
Circumventing the issue
Sometimes it can also be easier to circumvent the issue entirely. In your case, you could e.g. define a more general function skip_exprs that skips not just one instruction but n instructions. This can be defined without nested recursion:
fun skip_exprs :: "nat ⇒ stack_op list ⇒ stack_op list" where
"skip_exprs 0 xs = xs"
| "skip_exprs (Suc n) [] = []"
| "skip_exprs (Suc n) (Unary _ # other) = skip_exprs (Suc n) other"
| "skip_exprs (Suc n) (Binary _ # other) = skip_exprs (Suc (Suc n)) other"
| "skip_exprs (Suc n) (Value _ # other) = skip_exprs n other"
| "skip_exprs (Suc n) xs = xs"
Equivalence to your skip_expr is then straightforward to prove:
lemma skip_exprs_conv_skip_expr: "skip_exprs n xs = (skip_expr ^^ n) xs"
proof -
have [simp]: "(skip_expr ^^ n) [] = []" for n
by (induction n) auto
have [simp]: "(skip_expr ^^ n) (Other # xs) = Other # xs" for xs n
by (induction n) auto
show ?thesis
by (induction n xs rule: skip_exprs.induct)
(auto simp del: funpow.simps simp: funpow_Suc_right)
qed
lemma skip_expr_Suc_0 [simp]: "skip_exprs (Suc 0) xs = skip_expr xs"
by (simp add: skip_exprs_conv_skip_expr)
In your case, I don't think it actually makes sense to do this because figuring out the termination is fairly easy, but it may be good to keep in mind.
While doing some basic algebra, I frequently arrive at a subgoal of the following type (sometimes with a finite sum, sometimes with a finite product).
lemma foo:
fixes N :: nat
fixes a :: "nat ⇒ nat"
shows "(a 0) = (∑x = 0..N. (if x = 0 then 1 else 0) * (a x))"
This seems pretty obvious to me, but neither auto nor auto cong: sum.cong split: if_splits can handle it. What's more, sledgehammer also surrenders when called on this lemma. How can one efficiently work with finite sums and products containing if-then-else in general, and how should one approach this case in particular?
My favourite way to do these things (because it is very general) is to use the rules sum.mono_neutral_left and sum.mono_neutral_cong_left and the corresponding right versions (and analogously for products). The rule sum.mono_neutral_right lets you drop arbitrarily many summands if they are all zero:
finite T ⟹ S ⊆ T ⟹ ∀i∈T - S. g i = 0
⟹ sum g T = sum g S
The cong rule additionally allows you to modify the summation function on the now smaller set:
finite T ⟹ S ⊆ T ⟹ ∀i∈T - S. g i = 0 ⟹ (⋀x. x ∈ S ⟹ g x = h x)
⟹ sum g T = sum h S
With those, it looks like this:
lemma foo:
fixes N :: nat and a :: "nat ⇒ nat"
shows "a 0 = (∑x = 0..N. (if x = 0 then 1 else 0) * a x)"
proof -
have "(∑x = 0..N. (if x = 0 then 1 else 0) * a x) = (∑x ∈ {0}. a x)"
by (intro sum.mono_neutral_cong_right) auto
also have "… = a 0"
by simp
finally show ?thesis ..
qed
Assuming the left-hand side could use an arbitrary value between 0 and N, what about adding a more general lemma
lemma bar:
fixes N :: nat
fixes a :: "nat ⇒ nat"
assumes
"M ≤ N"
shows "a M = (∑x = 0..N. (if x = M then 1 else 0) * (a x))"
using assms by (induction N) force+
and solving the original one with using bar by blast?
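For what it is worth, one way this seems to work out (a sketch; the explicit instantiation avoids relying on blast to find it, and the name foo_via_bar is chosen only for illustration):
lemma foo_via_bar:
  fixes N :: nat and a :: "nat ⇒ nat"
  shows "a 0 = (∑x = 0..N. (if x = 0 then 1 else 0) * a x)"
  using bar[where M = 0 and N = N and a = a] by simp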
Consider the following datatypes with bindings in Nominal Isabelle:
theory Example
imports "Nominal2.Nominal2"
begin
atom_decl vrs
nominal_datatype ty =
Tvar "vrs"
| Arrow x::vrs T::"ty" binds x in T
nominal_datatype trm =
Var "vrs"
| Abs x::"vrs" t::"trm" binds x in t
inductive
typing :: "trm ⇒ ty ⇒ bool" ("_ , _" [60,60] 60)
where
T_Abs[intro]: "(Abs x t) , (Arrow x T)"
equivariance typing
nominal_inductive typing done
lemma
assumes "(Abs x t), (Arrow y T)"
shows "x = y"
using assms
I want to prove that the two binders appearing in the relation are equal. I see two ways an Isabelle user could help:
If you know Nominal Isabelle: is it possible to do this?
Otherwise, are the two occurrences of x in the rule T_Abs equal for the assistant, or are they a sort of bound variable with different identities?
If you know Nominal Isabelle: is it possible to do this?
Unfortunately, it is not possible to prove the theorem that you are trying to prove. Here is a counterexample (the proofs were Sledgehammered):
theory Scratch
imports "Nominal2.Nominal2"
begin
atom_decl vrs
nominal_datatype ty =
Tvar "vrs"
| Arrow x::vrs T::"ty" binds x in T
nominal_datatype trm =
Var "vrs"
| Abs x::"vrs" t::"trm" binds x in t
inductive
typing :: "trm ⇒ ty ⇒ bool" ("_ , _" [60,60] 60)
where
T_Abs[intro]: "(Abs x t) , (Arrow x T)"
equivariance typing
nominal_inductive typing .
abbreviation s where "s ≡ Sort ''Scratch.vrs'' []"
abbreviation v where "v n ≡ Abs_vrs (Atom s n)"
lemma neq: "Abs (v 1) (Var (v 0)), Arrow (v (Suc (Suc 0))) (Tvar (v 0))"
(is "?a, ?b")
proof-
have a_def: "Abs (v 1) (Var (v 0)) = Abs (v (Suc (Suc 0))) (Var (v 0))"
(*Sledgehammered*)
by simp (smt Abs_vrs_inverse atom.inject flip_at_base_simps(3) fresh_PairD(2)
fresh_at_base(2) mem_Collect_eq nat.distinct(1) sort_of.simps trm.fresh(1))
from typing.simps[of ?a ?b, unfolded this, THEN iffD2] have
"Abs (v (Suc (Suc 0))) (Var (v 0)) , Arrow (v (Suc (Suc 0))) (Tvar (v 0))"
by auto
then show ?thesis unfolding a_def by clarsimp
qed
lemma "∃x y t T. x ≠ y ∧ (Abs x t), (Arrow y T)"
proof(intro exI conjI)
show "v 1 ≠ v (Suc (Suc 0))"
(*Sledgehammered*)
by (smt Abs_vrs_inverse One_nat_def atom.inject mem_Collect_eq n_not_Suc_n
sort_of.simps)
show "Abs (v 1) (Var (v 0)) , Arrow (v (Suc (Suc 0))) (Tvar (v 0))"
by (rule neq)
qed
end
Otherwise, are the two occurrences of x in the rule T_Abs equal for the assistant, or are they a sort of bound variable with different identities?
I believe that you are thinking along the right lines and, hopefully, the example above will clarify any confusion that you might have. Generally, you could interpret the meaning of Abs x t1 = Abs y t2 as the alpha-equivalence of (λx. t1) and (λy. t2). Of course, (λx. t1) and (λy. t2) may be alpha equivalent without x and y being equal.
Given a function that generates a list of identical items, I wish to prove that the generated lists contain the given natural number at all positions, independent of the list length.
fun pattern_n :: "nat ⇒ nat ⇒ nat list" where
"pattern_n _ 0 = []" |
"pattern_n n lng = n # (pattern_n n (lng - 1))"
lemma pattern_n_1: "lng > 0 ∧ pos ≥ 0 ∧ pos < lng ∧ n ≥ 0 ⟹ (pattern_n n lng ! pos) = n"
It seems obvious that the proof should be based on induction on the length of the generated list but pos also seems to be an induction variable candidate. I'd appreciate any help on how to proceed with this proof.
The function pattern_n is equivalent to the function replicate from the standard library (theory List). The standard library also contains the theorem nth_replicate for the function replicate that is nearly identical to the theorem that you are trying to prove:
fun pattern_n :: "nat ⇒ nat ⇒ nat list" where
"pattern_n _ 0 = []" |
"pattern_n n lng = n # (pattern_n n (lng - 1))"
lemma "pattern_n n k = replicate k n"
by (induction k) auto
thm nth_replicate
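Putting the two together, the original statement should follow directly. Restating the equivalence above with a name (pattern_n_eq_replicate, chosen only for illustration) and letting simp rewrite through nth_replicate:
lemma pattern_n_eq_replicate: "pattern_n n k = replicate k n"
  by (induction k) auto
lemma "pos < lng ⟹ pattern_n n lng ! pos = n"
  by (simp add: pattern_n_eq_replicate nth_replicate)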
UPDATE
Alternatively, you can use induction to prove the result. Usually it is more convenient to use the definition in the form that is provided by the function pattern_n' below, because the theorems that are generated automatically when you define the function are more consistent with this form.
fun pattern_n :: "nat ⇒ nat ⇒ nat list" where
"pattern_n _ 0 = []" |
"pattern_n n lng = n # (pattern_n n (lng - 1))"
fun pattern_n' :: "nat ⇒ nat ⇒ nat list" where
"pattern_n' n 0 = []" |
"pattern_n' n (Suc lng) = n # (pattern_n' n lng)"
lemma "pattern_n n lng = pattern_n' n lng"
by (induct lng) auto
lemma pattern_n_1_via_replicate:
"pos < lng ⟹ (pattern_n val lng) ! pos = val"
proof(induct lng arbitrary: pos)
case 0 then show ?case by simp
next
case (Suc lng) then show ?case by (fastforce simp: less_Suc_eq_0_disj)
qed
Isabelle version: Isabelle2020
My aim is to prove properties of lists containing generated patterns.
In the first example the pattern is simply a sequence of 0s, and the lemma pattern_0_len proves that the length of the generated list indeed equals the length parameter of the generator function.
theory pattern_0
imports Main
begin
fun pattern_0 :: "nat ⇒ nat list" where
"pattern_0 0 = []" |
"pattern_0 len = (pattern_0 (len - 1)) # [0]"
lemma pattern_0_len [simp]: "length (pattern_0 lng) = lng"
apply(induction lng)
apply(simp)
apply(auto)
done
end
In the second example the generator produces an alternating sequence of 0 and 1 items.
theory pattern_0_1
imports Main
begin
fun pattern_0_1 :: "nat ⇒ nat ⇒ nat list" where
"pattern_0_1 0 item = []" |
"pattern_0_1 len item = (pattern_0_1 (len - 1) (if item = 0 then 1 else 0)) # [item]"
lemma pattern_0_1_len [simp]: "length (pattern_0_1 lng item) = lng"
apply(induction lng)
apply(simp)
apply(auto)
done
end
Unfortunately, pattern_0_1_len is not proved (after simp the remaining goal is exactly the induction step), and I'd like to understand why. Is it the presence of the item parameter that 'confuses' Isabelle? What can be done in this situation, preferably without declaring anything about how the pattern is generated?
The additional parameter is indeed the problem. For example, consider this subgoal:
1. ⋀lng. length (pattern_0_1 lng 0) = lng ⟹ item = 0 ⟹ length (pattern_0_1 lng (Suc 0)) = lng
You see that the induction hypothesis is only applicable for zero, but you need it for one.
The fix is simple:
apply(induction lng arbitrary: item)
This instructs the induction method to first generalize the variable item. Then, the induction hypothesis becomes more broadly applicable.
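For completeness, a sketch of the repaired proof; with item generalized, auto should close both cases:
lemma pattern_0_1_len [simp]: "length (pattern_0_1 lng item) = lng"
  by (induction lng arbitrary: item) auto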