The following expression evaluates fine:
value "foldr plus [1::nat, 2] 0"
But the following expressions:
value "Finite_Set.fold plus 0 (set [1::nat, 2])"
value "ffold plus 0 {|1::nat, 2|}"
raise an error:
Wellsortedness error:
Type nat not of sort finite
No type arity nat :: finite
I understand that nat is not finite, so nat set is not of sort finite either.
But is it possible to define code equations for these functions? I'm trying to prove one, but I'm stuck:
lemma finite_set_fold_code [code]:
"comp_fun_commute f ⟹
Finite_Set.fold f x (set xs) = foldr f xs x"
apply (rule Finite_Set.comp_fun_commute.fold_equality)
apply simp
apply (induct xs arbitrary: x)
apply (simp add: Finite_Set.fold_graph.emptyI)
apply auto
apply (rule Finite_Set.fold_graph.insertI)
UPDATE:
The lemma doesn't hold and can't be proven: duplicates must first be removed from the list.
interpretation nat_plus_commute: comp_fun_commute "plus :: nat ⇒ nat ⇒ nat"
by standard auto
lemma finite_set_nat_plus [code]:
"Finite_Set.fold plus (y :: nat) (set xs) = fold plus (remdups xs) y"
by (simp add: nat_plus_commute.fold_set_fold_remdups)
But I get the following warning:
Partially applied constant "Groups.plus_class.plus" on left hand side of equation, in theorem:
Finite_Set.fold op + ?y (set ?xs) ≡ fold op + (remdups ?xs) ?y
And I still can't evaluate the expression.
UPDATE 2:
Actually I can evaluate it using schematic_goal:
schematic_goal g1:
"Finite_Set.fold plus 0 (set [1::nat, 2]) = ?x"
by (simp add: nat_plus_commute.fold_set_fold_remdups)
thm g1
What should I do to evaluate it using value?
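One possible workaround (a sketch only; the constant fold_plus_nat and the code_unfold step are my own suggestion, not verified against this exact theory): introduce a new constant for the specialised fold, so that the left-hand side of the code equation is fully applied, and let the code generator's preprocessor unfold the original term to it:
definition fold_plus_nat :: "nat ⇒ nat set ⇒ nat" where
  "fold_plus_nat = Finite_Set.fold plus"
lemma fold_plus_nat_code [code]:
  "fold_plus_nat y (set xs) = fold plus (remdups xs) y"
  by (simp add: fold_plus_nat_def nat_plus_commute.fold_set_fold_remdups)
(* rewrite Finite_Set.fold plus into fold_plus_nat during preprocessing *)
declare fold_plus_nat_def [symmetric, code_unfold]
value "Finite_Set.fold plus (0 :: nat) (set [1 :: nat, 2])"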
I have a datatype stack_op which consists of several (~20) cases. I'm trying to write a function which skips some of those cases in a list:
function (sequential) skip_expr :: "stack_op list ⇒ stack_op list" where
"skip_expr [] = []"
| "skip_expr ((stack_op.Unary _)#other) = (skip_expr other)"
| "skip_expr ((stack_op.Binary _)#other) = skip_expr (skip_expr other)"
| "skip_expr ((stack_op.Value _)#other) = other"
| "skip_expr other = other"
by pat_completeness auto
termination by lexicographic_order
which seems to always terminate. But termination by lexicographic_order leaves the following unresolved cases:
Calls:
c) stack_op.Binary uv_ # other ~> skip_expr other
Measures:
1) size_list size
2) length
Result matrix:
1 2
c: ? ?
(size_change also doesn't work)
I've read https://isabelle.in.tum.de/dist/Isabelle2021/doc/functions.pdf, but it didn't help. (Maybe there are more complex examples of termination proofs somewhere?)
I tried to rewrite the function, adding another parameter:
function (sequential) skip_expr :: "stack_op list ⇒ nat ⇒ stack_op list" where
"skip_expr l 0 = l"
| "skip_expr [] _ = []"
| "skip_expr ((stack_op.Unary _)#other) depth = (skip_expr other (depth - 1))"
| "skip_expr ((stack_op.Binary _)#other) depth =
(let buff1 = (skip_expr other (depth - 1))
in (skip_expr buff1 (length buff1)))"
| "skip_expr ((stack_op.Value _)#other) _ = other"
| "skip_expr other _ = other"
by pat_completeness auto
termination by (relation "measure (λ(_,dep). dep)") auto
which generates an unresolved subgoal:
1. ⋀other v. skip_expr_dom (other, v) ⟹ length (skip_expr other v) < Suc v
which I also don't know how to prove.
Could anyone explain how such cases are solved (as far as I understand, the problem is the nested recursive call on the right-hand side of the stack_op.Binary case)? Or maybe there is another way to implement such a skip?
Thanks in advance
The lexicographic_order method simply tries to solve the arising goals with the simplifier, so if the simplifier gets stuck you end up with unresolved termination subgoals.
In this case, as you identified correctly, the problem is that you have a nested recursive call skip_expr (skip_expr other). This is always problematic because at this stage, the simplifier knows nothing about what skip_expr does to the input list. For all we know, it might just return the list unmodified, or even a longer list, and then it surely would not terminate.
Confronting the issue head-on
The solution is to show something about length (skip_expr …) and make that information available to the simplifier. Because we have not yet shown termination of the function, we have to use the skip_expr.psimps rules and the partial induction rule skip_expr.pinduct, i.e. every statement we make about skip_expr xs always has as a precondition that skip_expr actually terminates on the input xs. For this, there is the predicate skip_expr_dom.
Putting it all together, it looks like this:
lemma length_skip_expr [termination_simp]:
"skip_expr_dom xs ⟹ length (skip_expr xs) ≤ length xs"
by (induction xs rule: skip_expr.pinduct) (auto simp: skip_expr.psimps)
termination skip_expr by lexicographic_order
Circumventing the issue
Sometimes it can also be easier to circumvent the issue entirely. In your case, you could e.g. define a more general function skip_exprs that skips not just one instruction but n instructions. This you can define without nested recursion:
fun skip_exprs :: "nat ⇒ stack_op list ⇒ stack_op list" where
"skip_exprs 0 xs = xs"
| "skip_exprs (Suc n) [] = []"
| "skip_exprs (Suc n) (Unary _ # other) = skip_exprs (Suc n) other"
| "skip_exprs (Suc n) (Binary _ # other) = skip_exprs (Suc (Suc n)) other"
| "skip_exprs (Suc n) (Value _ # other) = skip_exprs n other"
| "skip_exprs (Suc n) xs = xs"
Equivalence to your skip_expr is then straightforward to prove:
lemma skip_exprs_conv_skip_expr: "skip_exprs n xs = (skip_expr ^^ n) xs"
proof -
have [simp]: "(skip_expr ^^ n) [] = []" for n
by (induction n) auto
have [simp]: "(skip_expr ^^ n) (Other # xs) = Other # xs" for xs n
by (induction n) auto
show ?thesis
by (induction n xs rule: skip_exprs.induct)
(auto simp del: funpow.simps simp: funpow_Suc_right)
qed
lemma skip_expr_Suc_0 [simp]: "skip_exprs (Suc 0) xs = skip_expr xs"
by (simp add: skip_exprs_conv_skip_expr)
In your case, I don't think it actually makes sense to do this because figuring out the termination is fairly easy, but it may be good to keep in mind.
I need to generate code that calculates all values greater than or equal to some value:
datatype ty = A | B | C
instantiation ty :: order
begin
fun less_ty where
"A < x = (x = C)"
| "B < x = (x = C)"
| "C < x = False"
definition "(x :: ty) ≤ y ≡ x = y ∨ x < y"
instance
apply intro_classes
apply (metis less_eq_ty_def less_ty.elims(2) ty.distinct(3) ty.distinct(5))
apply (simp add: less_eq_ty_def)
apply (metis less_eq_ty_def less_ty.elims(2))
using less_eq_ty_def less_ty.elims(2) by fastforce
end
instantiation ty :: enum
begin
definition [simp]: "enum_ty ≡ [A, B, C]"
definition [simp]: "enum_all_ty P ≡ P A ∧ P B ∧ P C"
definition [simp]: "enum_ex_ty P ≡ P A ∨ P B ∨ P C"
instance
apply intro_classes
apply auto
by (case_tac x, auto)+
end
lemma less_eq_code_predI [code_pred_intro]:
"Predicate_Compile.contains {z. x ≤ z} y ⟹ x ≤ y"
(* "Predicate_Compile.contains {z. z ≤ y} x ⟹ x ≤ y"*)
by (simp_all add: Predicate_Compile.contains_def)
code_pred [show_modes] less_eq
by (simp add: Predicate_Compile.containsI)
values "{x. A ≤ x}"
(* values "{x. x ≤ C}" *)
It works fine, but the theory looks over-complicated, and I can't calculate the values less than or equal to some value. If one uncomments the second part of the less_eq_code_predI lemma, then less_eq has only one mode, i => i => boolpos.
Is there a simpler and more generic approach?
Can less_eq support i => o => boolpos and o => i => boolpos at the same time?
Is it possible not to declare ty as an instance of the enum class? I can declare a function returning the set of elements greater than or equal to some element:
fun ge_values where
"ge_values A = {A, C}"
| "ge_values B = {B, C}"
| "ge_values C = {C}"
lemma ge_values_eq_less_eq_ty:
"{y. x ≤ y} = ge_values x"
by (cases x; auto simp add: dual_order.order_iff_strict)
This would allow me to remove the enum and code_pred stuff. But then I would not be able to use this function in the definition of other predicates. How can I replace (≤) by ge_values in the following definition?
inductive pred1 where
"x ≤ y ⟹ pred1 x y"
code_pred [show_modes] pred1 .
I need pred1 to have at least i => o => boolpos mode.
The predicate compiler has an option inductify that tries to convert functional definitions into inductive ones. It is somewhat experimental and does not work in every case, so use it with care. In the above example, the type classes make the whole situation a bit more complicated. Here's what I managed to get working:
case_of_simps less_ty_alt: less_ty.simps
definition less_ty' :: "ty ⇒ ty ⇒ bool" where "less_ty' = (<)"
declare less_ty_alt [folded less_ty'_def, code_pred_def]
code_pred [inductify, show_modes] "less_ty'" .
values "{x. less_ty' A x}"
The first line converts the pattern-matching equations into a single equation with a case expression on the right. It uses the command case_of_simps from HOL-Library.Simps_Case_Conv.
Unfortunately, the predicate compiler seems to have trouble with compiling type class operations. At least I could not get it to work.
So the second line introduces a new constant for (<) on ty.
The attribute code_pred_def tells the predicate compiler to use the given theorem (namely less_ty_alt with less_ty' instead of (<)) as the "defining equation".
code_pred with the inductify option looks at the equation for less_ty' declared by code_pred_def and derives an inductive definition out of that. inductify usually works well with case expressions, constructors and quantifiers. Everything beyond that is at your own risk.
Alternatively, you could also manually implement the enumeration similar to ge_values and register the connection between (<) and ge_values with the predicate compiler. See the setup block at the end of the Predicate_Compile theory in the distribution for an example with Predicate.contains. Note however that the predicate compiler works best with predicates and not with sets. So you'd have to write ge_values in the predicate monad Predicate.pred.
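For illustration, here is a hedged sketch of what that enumeration could look like in the predicate monad (the name ge_values_pred is hypothetical, and the actual registration with the predicate compiler is not shown):
definition ge_values_pred :: "ty ⇒ ty Predicate.pred" where
  "ge_values_pred x =
     (case x of
        A ⇒ sup (Predicate.single A) (Predicate.single C)
      | B ⇒ sup (Predicate.single B) (Predicate.single C)
      | C ⇒ Predicate.single C)"
(* one would then prove Predicate.eval (ge_values_pred x) y ⟷ x ≤ y and
   register this connection, analogously to the Predicate.contains setup *)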
I have the following grammar defined in Isabelle:
inductive S where
S_empty: "S []" |
S_append: "S xs ⟹ S ys ⟹ S (xs @ ys)" |
S_paren: "S xs ⟹ S (Open # xs @ [Close])"
Then I define a grammar T that conceptually only adds the following rule:
T_left: "T xs ⟹ T (Open # xs)"
Then I tried to prove the following theorem:
theorem T_S:
"T xs ⟹ count xs Open = count xs Close ⟹ S xs"
apply(erule T.induct)
apply(simp add: S_empty)
apply(simp add: S_append)
apply(simp add: S_paren)
oops
To my surprise the final goal seems to be false:
⋀xsa. count xs Open = count xs Close ⟹ T xsa ⟹ S xsa ⟹ S (Open # xsa)
So here S (Open # xsa) cannot hold, because assuming only S xsa there is no production in the grammar that yields Open # xsa.
This situation makes no sense to me. Is erule producing goals that are false?
Induction rules like T.induct should usually be used with the induction proof method rather than erule. The induction method ensures that the whole statement becomes part of the inductive statements whereas with erule only the conclusion is part of the inductive argument; other assumptions are basically ignored for the induction. This can be seen in the last goal state where the inductive statement involves the goal parameter xsa whereas the crucial assumption count xs Open = count xs Close still talks about the variable xs. So, the proof step should be apply(induction rule: T.induct). Then there is a chance to prove this statement.
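A minimal sketch of the suggested change (the remaining cases are left open here, since whether they go through depends on the omitted definition of count):
theorem T_S:
  "T xs ⟹ count xs Open = count xs Close ⟹ S xs"
  apply(induction rule: T.induct)
  (* the counting assumption is now part of every inductive case *)
  oops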
While trying to prove lemmas about functions in continuation-passing style by induction I have come across a problem with free type variables. In my induction hypothesis, the continuation is a schematic variable but its type involves a free type variable. As a result Isabelle is not able to unify the type variable with a concrete type when I try to apply the i.h. I have cooked up this minimal example:
fun add_k :: "nat ⇒ nat ⇒ (nat ⇒ 'a) ⇒ 'a" where
"add_k 0 m k = k m" |
"add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))"
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
proof(rule, induction n)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))" by simp
also have "… = k (Suc (add_k n m id))"
using Suc[where k="(λn'. k (Suc n'))"] by metis
also have "… = k (add_k n m (λn'. Suc n'))"
using Suc[where k="(λn'. Suc n')"] sorry (* Type unification failed *)
also have "… = k (add_k (Suc n) m id)" by simp
finally show ?case .
qed
In the "sorry step", the explicit instantiation of the schematic variable ?k fails with
Type unification failed
Failed to meet type constraint:
Term: Suc :: nat ⇒ nat
Type: nat ⇒ 'a
since 'a is free and not schematic. Without the instantiation the simplifier fails anyway and I couldn't find other methods that would work.
Since I cannot quantify over types, I don't see any way to make 'a schematic inside the proof. When a term variable becomes schematic locally inside a proof, why isn't this the case for the variables in its type too? After the lemma has been proved, they become schematic at the theory level anyway. This seems quite limiting. Could an option to do this be implemented in the future, or is there some inherent limitation? Alternatively, is there an approach that avoids this problem while still keeping the continuation schematically polymorphic in the proven lemma?
Free type variables become schematic in a theorem when the theorem is exported from the block in which the type variables have been fixed. In particular, you cannot quantify over type variables in a block and then instantiate the type variable within the block, as you are trying to do in your induction. Arbitrary quantification over types leads to inconsistencies in HOL, so there is little hope that this could be changed.
Fortunately, there is a way to prove your lemma in CPS style without type quantification. The problem is that your statement is not general enough, because it contains id. If you generalise it, then the proof works:
lemma add_k_cps: "add_k n m (k ∘ f) = k (add_k n m f)"
proof(induction n arbitrary: f)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m (k ∘ f) = add_k n m (k ∘ (λn'. f (Suc n')))" by(simp add: o_def)
also have "… = k (add_k n m (λn'. f (Suc n')))"
using Suc.IH[where f="(λn'. f (Suc n'))"] by metis
also have "… = k (add_k (Suc n) m f)" by simp
finally show ?case .
qed
You get your original theorem back if you choose f = id.
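For example (a sketch; the lemma name add_k_cps_orig is mine, and simp collapses k ∘ id to k via comp_id):
lemma add_k_cps_orig: "add_k n m k = k (add_k n m id)"
  using add_k_cps[of n m k id] by simp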
This is an inherent limitation of how induction works in HOL. Induction is a rule in HOL, so it is not possible to generalize any types in the induction hypothesis.
A specialized solution for your problem is to first prove
lemma add_k_cps_nat: "add_k n m k = k (n + m)"
by (induction n arbitrary: m k) auto
and then prove add_k_cps.
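For instance, the second step should be as simple as this (a sketch in the original ∀-quantified form; simp reduces both sides to k (n + m) using add_k_cps_nat and id_apply):
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
  by (simp add: add_k_cps_nat)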
A general approach is: prove instances for fixed types first, for which the induction works. In the example, this is an induction over nat. Then derive a proof generalized in the type itself.
I tried to prove an existential theorem:
lemma "∃ x. x * (t :: nat) = t"
proof
obtain y where "y * t = t" by (auto)
but I could not finish the proof. So I have the necessary y, but how can I feed it into the original goal?
Soundness of natural deduction requires that you get hold of the witness before you open the existential quantifier. This is why you are not allowed to use obtained variables in show statements. In your example, the initial proof step implicitly applies the rule exI. This turns the existentially quantified variable x into the schematic variable ?x, which can be instantiated later, but the instantiation may only refer to variables that were in scope when ?x was introduced. In the low-level proof state, obtained variables are meta-quantified (⋀) and the instantiations for ?x can only refer to such variables that appear as parameters of ?x.
Therefore, you have to switch the order in your proof:
lemma "∃ x. x * (t :: nat) = t"
proof - (* method - does not change the goal *)
obtain y where "y * t = t" by (auto)
then show ?thesis by(rule exI)
qed
You can give the witness (i.e. the element you want to put in for x) in the show clause:
lemma "∃ x. x * (t :: nat) = t"
proof
show "1*t = t" by simp
qed
Alternatively, when you already know the witness (1 or Suc 0 here), you can explicitly instantiate the rule exI to introduce the existential term:
lemma "∃ x. x * (t :: nat) = t"
by (rule exI[where x = "Suc 0"], simp)
Here, the existential quantifier introduction rule exI (displayed by thm exI) is
?P ?x ⟹ ∃x. ?P x
You can explore it and instantiate it gradually towards the answer.
thm exI[where x = "Suc 0"] is:
?P (Suc 0) ⟹ ∃x. ?P x
and exI[where P = "λ x. x * t = t" and x = "Suc 0"] is
Suc 0 * t = t ⟹ ∃x. x * t = t
And Suc 0 * t = t is only one simplification (simp) away. But the system can figure out the last instantiation P = "λ x. x * t = t" via unification, so it isn't really necessary.