Is there an Isabelle equivalent to Haskell newtype? - abstraction

I want to make a new datatype shaped like an old one, but (unlike using type_synonym) it should be recognized as distinct in other theories.
My motivating example: I'm making a stack datatype out of lists. I don't want my other theories to see my stacks as lists so I can enforce my own simplification rules on it, but the only solution I've found is the following:
datatype 'a stk = S "'a list"
...
primrec index_of' :: "'a list => 'a => nat option"
where "index_of' [] b = None"
| "index_of' (a # as) b = (
    if b = a then Some 0
    else case index_of' as b of Some n => Some (Suc n) | None => None)"

primrec index_of :: "'a stk => 'a => nat option"
where "index_of (S as) x = index_of' as x"

...

lemma [simp]: "index_of' del v = Some m ==> m <= n ==>
  index_of' (insert_at' del n v) v = Some m"
  <proof>

lemma [simp]: "index_of del v = Some m ==> m <= n ==>
  index_of (insert_at del n v) v = Some m"
  by (induction del, simp)
It works, but it means my stack theory is bloated and filled with way too much redundancy: every function has a second version stripping the constructor off, and every theorem has a second version (for which the proof is always by (induction del, simp), which strikes me as a sign I'm doing too much work somewhere).
Is there anything that would help here?

You want to use typedef.
The declaration
typedef 'a stack = "{xs :: 'a list. True}"
morphisms list_of_stack as_stack
by auto
introduces a new type containing all lists, together with functions between 'a stack and 'a list and a bunch of theorems. Here is a selection of them (you can view them all using show_theorems after the typedef command):
theorems:
as_stack_cases: (⋀y. ?x = as_stack y ⟹ y ∈ {xs. True} ⟹ ?P) ⟹ ?P
as_stack_inject: ?x ∈ {xs. True} ⟹ ?y ∈ {xs. True} ⟹ (as_stack ?x = as_stack ?y) = (?x = ?y)
as_stack_inverse: ?y ∈ {xs. True} ⟹ list_of_stack (as_stack ?y) = ?y
list_of_stack: list_of_stack ?x ∈ {xs. True}
list_of_stack_inject: (list_of_stack ?x = list_of_stack ?y) = (?x = ?y)
list_of_stack_inverse: as_stack (list_of_stack ?x) = ?x
type_definition_stack: type_definition list_of_stack as_stack {xs. True}
The ?x ∈ {xs. True} assumptions are trivially satisfied here, but you can specify a proper subset of all lists instead, e.g. if your stacks should never be empty, and thereby ensure on the type level that the property holds for every value of the type.
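For instance, a non-empty variant might look like this (a hypothetical sketch; the type and morphism names are made up for illustration, and the witness [undefined] merely discharges the non-emptiness obligation):
typedef 'a nstack = "{xs :: 'a list. xs ≠ []}"
  morphisms list_of_nstack as_nstack
  by (rule exI[of _ "[undefined]"]) simp
With such a declaration, the generated theorems carry the assumption xs ≠ [] instead of the trivial one, so emptiness is ruled out by construction.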
The type_definition_stack theorem is useful in conjunction with the lifting package. After the declaration
setup_lifting type_definition_stack
you can define functions on stacks by giving their definitions in terms of lists, and prove theorems involving stacks by proving the equivalent propositions in terms of lists; this is much easier than manually juggling the conversion functions.
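For example, with the declarations above in place, stack operations could be introduced and reasoned about roughly like this (a minimal sketch; push and pop are made-up names, and with the trivial invariant the proof obligations are one-line membership goals):
lift_definition push :: "'a ⇒ 'a stack ⇒ 'a stack" is Cons
  by simp  (* obligation: x # xs ∈ {xs. True} *)
lift_definition pop :: "'a stack ⇒ 'a stack" is tl
  by simp  (* obligation: tl xs ∈ {xs. True} *)

lemma "pop (push x s) = s"
  by transfer simp  (* transfer reduces the goal to tl (x # xs) = xs on lists *)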

Related

Topological filters in Isabelle

I'm studying topological filters in Filter.thy
theory Filter
imports Set_Interval Lifting_Set
begin
subsection ‹Filters›
text ‹
This definition also allows non-proper filters.
›
locale is_filter =
fixes F :: "('a ⇒ bool) ⇒ bool"
assumes True: "F (λx. True)"
assumes conj: "F (λx. P x) ⟹ F (λx. Q x) ⟹ F (λx. P x ∧ Q x)"
assumes mono: "∀x. P x ⟶ Q x ⟹ F (λx. P x) ⟹ F (λx. Q x)"
typedef 'a filter = "{F :: ('a ⇒ bool) ⇒ bool. is_filter F}"
proof
show "(λx. True) ∈ ?filter" by (auto intro: is_filter.intro)
qed
I don't get this definition. It's quite convoluted, so I'll simplify it first.
The expression F (λx. P x) can be simplified to F P (by eta reduction from the lambda calculus). A predicate of type 'a ⇒ bool is really just a set, 'a set; similarly, ('a ⇒ bool) ⇒ bool corresponds to 'a set set. Then we could rewrite the axioms as
assumes conj: "P ∈ F ∧ Q ∈ F ⟹ Q ∩ P ∈ F"
assumes mono: "P ⊆ Q ∧ P ∈ F ⟹ Q ∈ F"
Now my question is about the True axiom. It is equivalent to
assumes True: "UNIV ∈ F"
This does not match any definition of a filter that I have ever seen.
The axiom should instead be
assumes True: "{} ∉ F" (* the name True is not very fitting anymore *)
The statement UNIV ∈ F is unnecessary because it follows from axiom mono.
So what's up with this definition that Isabelle provides?
The link provided by Javier Diaz has lots of explanations.
It turns out this definition also allows the improper filter. The True axiom is necessary and does not follow from mono: if it were missing, then F could be defined as
F P = False
or, in set-theoretic notation, F could be the empty set, and mono and conj would then be satisfied vacuously.
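For illustration, here is a hedged sketch of both points, relying on the locale's exported facts is_filter.intro and is_filter.True: the all-accepting predicate (the improper filter) is accepted by the definition, while the all-rejecting candidate fails exactly the True axiom.
lemma "is_filter (λP. True)"
  by (auto intro: is_filter.intro)  (* every axiom's conclusion is trivially True *)

lemma "¬ is_filter (λP. False)"
  by (auto dest: is_filter.True)  (* the True axiom would force (λP. False) (λx. True), i.e. False *)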

How to generate code for less_eq operation

I need to generate code that calculates all values greater than or equal to a given value:
datatype ty = A | B | C
instantiation ty :: order
begin
fun less_ty where
"A < x = (x = C)"
| "B < x = (x = C)"
| "C < x = False"
definition "(x :: ty) ≤ y ≡ x = y ∨ x < y"
instance
apply intro_classes
apply (metis less_eq_ty_def less_ty.elims(2) ty.distinct(3) ty.distinct(5))
apply (simp add: less_eq_ty_def)
apply (metis less_eq_ty_def less_ty.elims(2))
using less_eq_ty_def less_ty.elims(2) by fastforce
end
instantiation ty :: enum
begin
definition [simp]: "enum_ty ≡ [A, B, C]"
definition [simp]: "enum_all_ty P ≡ P A ∧ P B ∧ P C"
definition [simp]: "enum_ex_ty P ≡ P A ∨ P B ∨ P C"
instance
apply intro_classes
apply auto
by (case_tac x, auto)+
end
lemma less_eq_code_predI [code_pred_intro]:
"Predicate_Compile.contains {z. x ≤ z} y ⟹ x ≤ y"
(* "Predicate_Compile.contains {z. z ≤ y} x ⟹ x ≤ y"*)
by (simp_all add: Predicate_Compile.contains_def)
code_pred [show_modes] less_eq
by (simp add: Predicate_Compile.containsI)
values "{x. A ≤ x}"
(* values "{x. x ≤ C}" *)
It works fine, but the theory looks over-complicated. Also, I can't calculate the values less than or equal to some value: if one uncomments the second part of the less_eq_code_predI lemma, then less_eq will have only one mode, i => i => boolpos.
Is there a simpler and more generic approach?
Can less_eq support i => o => boolpos and o => i => boolpos at the same time?
Is it possible not to declare ty as an instance of the enum class? I can declare a function returning the set of elements greater than or equal to a given element:
fun ge_values where
"ge_values A = {A, C}"
| "ge_values B = {B, C}"
| "ge_values C = {C}"
lemma ge_values_eq_less_eq_ty:
"{y. x ≤ y} = ge_values x"
by (cases x; auto simp add: dual_order.order_iff_strict)
This would allow me to remove the enum and code_pred stuff. But in that case I would not be able to use this function in the definition of other predicates. How can I replace (≤) by ge_values in the following definition?
inductive pred1 where
"x ≤ y ⟹ pred1 x y"
code_pred [show_modes] pred1 .
I need pred1 to have at least i => o => boolpos mode.
The predicate compiler has an option inductify that tries to convert functional definitions into inductive ones. It is somewhat experimental and does not work in every case, so use it with care. In the above example, the type classes make the whole situation a bit more complicated. Here's what I managed to get working:
case_of_simps less_ty_alt: less_ty.simps
definition less_ty' :: "ty ⇒ ty ⇒ bool" where "less_ty' = (<)"
declare less_ty_alt [folded less_ty'_def, code_pred_def]
code_pred [inductify, show_modes] "less_ty'" .
values "{x. less_ty' A x}"
The first line converts the pattern-matching equations into a single equation with a case expression on the right-hand side. It uses the command case_of_simps from HOL-Library.Simps_Case_Conv.
Unfortunately, the predicate compiler seems to have trouble with compiling type class operations. At least I could not get it to work.
So the second line introduces a new constant for (<) on ty.
The attribute code_pred_def tells the predicate compiler to use the given theorem (namely less_ty_alt with less_ty' instead of (<)) as the "defining equation".
code_pred with the inductify option looks at the equation for less_ty' declared by code_pred_def and derives an inductive definition out of that. inductify usually works well with case expressions, constructors and quantifiers. Everything beyond that is at your own risk.
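For instance, once less_ty' is executable this way, it should be reusable inside other inductive predicates, giving them executable modes as well. A hedged, untested sketch using the illustrative name pred1':
inductive pred1' where
  "less_ty' x y ⟹ pred1' x y"

code_pred [show_modes] pred1' .

values "{y. pred1' A y}"  (* relies on the i ⇒ o ⇒ bool mode derived for less_ty' above *)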
Alternatively, you could also manually implement the enumeration similar to ge_values and register the connection between (<) and ge_values with the predicate compiler. See the setup block at the end of the Predicate_Compile theory in the distribution for an example with Predicate.contains. Note however that the predicate compiler works best with predicates and not with sets. So you'd have to write ge_values in the predicate monad Predicate.pred.

How to generate code for reverse sorting

What is the easiest way to generate code for a sorting algorithm that sorts its argument in reverse order, while building on top of the existing List.sort?
I came up with two solutions that are shown below in my answer, but neither of them is really satisfactory.
Any other ideas how this could be done?
I came up with two possible solutions. But both have (severe) drawbacks. (I would have liked to obtain the result almost automatically.)
Introduce a Haskell-style newtype. E.g., if we wanted to sort lists of nats, something like
datatype 'a new = New (old : 'a)
instantiation new :: (linorder) linorder
begin
definition "less_eq_new x y ⟷ old x ≥ old y"
definition "less_new x y ⟷ old x > old y"
instance by (default, case_tac [!] x) (auto simp: less_eq_new_def less_new_def)
end
At this point
value [code] "sort_key New [0::nat, 1, 0, 0, 1, 2]"
yields the desired reverse sorting. While this is comparatively easy, it is not as automatic as I would like the solution to be and in addition has a small runtime overhead (since Isabelle doesn't have Haskell's newtype).
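For concreteness, here is a second evaluation as a quick sanity check (not from the original answer; it assumes the instantiation above is accepted as written):
value [code] "sort_key New [3::nat, 1, 4, 1, 5]"  (* expected: [5, 4, 3, 1, 1] *)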
Via a locale for the dual of a linear order. First we more or less copy the existing code for insertion sort (but instead of relying on a type class, we make the parameter that represents the comparison explicit).
fun insort_by_key :: "('b ⇒ 'b ⇒ bool) ⇒ ('a ⇒ 'b) ⇒ 'a ⇒ 'a list ⇒ 'a list"
where
"insort_by_key P f x [] = [x]"
| "insort_by_key P f x (y # ys) =
(if P (f x) (f y) then x # y # ys else y # insort_by_key P f x ys)"
definition "revsort_key f xs = foldr (insort_by_key (op ≥) f) xs []"
At this point we have code for revsort_key:
value [code] "revsort_key id [0::nat, 1, 0, 0, 1, 2]"
But we also want all the nice results that have already been proved in the linorder locale (the locale underlying the linorder class). To this end, we introduce the dual of a linear order and use a "mixin" (not sure if I'm using the correct naming here) to replace all occurrences of linorder.sort_key (which does not allow for code generation) by our new "code constant" revsort_key.
interpretation dual_linorder!: linorder "op ≥ :: 'a::linorder ⇒ 'a ⇒ bool" "op >"
where
  "linorder.sort_key (op ≥ :: 'a ⇒ 'a ⇒ bool) f xs = revsort_key f xs"
proof -
  show "class.linorder (op ≥ :: 'a ⇒ 'a ⇒ bool) (op >)" by (rule dual_linorder)
  then interpret rev_order: linorder "op ≥ :: 'a ⇒ 'a ⇒ bool" "op >" .
  have "rev_order.insort_key f = insort_by_key (op ≥) f"
    by (intro ext) (induct_tac xa; simp)
  then show "rev_order.sort_key f xs = revsort_key f xs"
    by (simp add: rev_order.sort_key_def revsort_key_def)
qed
While with this solution we do not have any runtime penalty, it is far too verbose for my taste and is not easily adaptable to changes in the standard code setup (e.g., if we wanted to use the mergesort implementation from the Archive of Formal Proofs for all of our sorting operations).

How to use single-valued relations to rewrite a goal automatically?

Single-valued relations (as defined by single_valued in the Relation theory) make it possible to deduce equalities from membership facts. I was wondering whether there is a way to take advantage of this to rewrite terms in the goal (and then merge these membership assumptions).
As an example, here is a goal that cannot be solved by auto or force without auxiliary theorems:
lemma
assumes "single_valued A"
assumes "(a,b) ∈ A" and "(a,b') ∈ A"
shows "b = b'"
using assms
by (metis single_valued_def)
Here the equality is directly in the goal, but it would be great to also rewrite in the assumptions.
Also, I talk here about sets of pairs, but I have a more complex application with another kind of relation that has a similar property, where such assumptions are common, and I am looking for a way to simplify them.
It seems to me that automatic methods could greatly benefit from such a feature.
I have written some simprocs before, and it seems to me we could use one here if we could access the set of assumptions once the simproc has been triggered, though I don't know whether that is possible. For example, once "(a,b) ∈ A" has triggered the simproc, could we check whether any assumption contains "(a,_) ∈ A"? But perhaps that would be too costly...
Any ideas?
Here is a simproc that does what you want:
lemma single_valuedD_eq:
"⟦ single_valued A; (x, a) ∈ A ⟧ ⟹ (x, b) ∈ A ⟷ b = a"
by(auto dest: single_valuedD)
simproc_setup single_valued ("(x, y) ∈ A") = {*
  (fn phi => fn ctxt => fn redex => case term_of redex of
    Const (@{const_name "Set.member"},
      Type (@{type_name fun},
        [Txy as Type (@{type_name prod}, [Tx, Ty]),
         Type (@{type_name fun}, [TA, _])])) $
    (Const (@{const_name "Pair"}, _) $ tx $ ty) $
    tA =>
    let
      val thy = Proof_Context.theory_of ctxt;
      (* the premises of the goal currently being simplified *)
      val prems = Simplifier.prems_of ctxt;
      fun mk_stmt t = t |> HOLogic.mk_Trueprop |> Thm.cterm_of thy |> Goal.init
      (* try to prove t by resolving it against one of the premises *)
      fun mk_thm tac t =
        case SINGLE (tac 1) (mk_stmt t) of
          SOME thm => SOME (Goal.finish (Syntax.init_pretty_global (Thm.theory_of_thm thm)) thm)
        | NONE => NONE;
      (* the fact "single_valued A" we are looking for *)
      val svA = Const (@{const_name single_valued}, TA --> @{typ bool}) $ tA
      val [z] = Name.invent (Variable.names_of ctxt) "z" 1
      (* the fact "(x, ?z) ∈ A" with a schematic variable for the second component *)
      val xzA =
        Const (@{const_name Set.member}, Txy --> TA --> @{typ bool})
        $ (Const (@{const_name Pair}, Tx --> Ty --> Txy)
           $ tx $ Var ((z, 0), Ty))
        $ tA
    in
      (* if both facts are among the premises, instantiate the rewrite rule with them *)
      case mk_thm (resolve_tac prems) svA of NONE => NONE
      | SOME thm_svA => case mk_thm (resolve_tac prems) xzA of NONE => NONE
      | SOME thm_xzA =>
        SOME (@{thm single_valuedD_eq[THEN eq_reflection]} OF [thm_svA, thm_xzA])
    end
  | _ => NONE)
*}
When it triggers on a term matching the pattern (_, _) ∈ _, say (x, y) ∈ A, it checks whether there are assumptions single_valued A and (x, ?z) ∈ A in the current goal. If so, it instantiates the theorem single_valuedD_eq with them and rewrites (x, y) ∈ A to y = ?z, with ?z appropriately instantiated.
Here is an example:
lemma
"⟦ single_valued A; (x, b) ∈ A; (x, c) ∈ A ⟧
⟹ map (λy. (x, y) ∈ A) xs = foo"
apply simp
NEW GOAL:
1. ⟦single_valued A; b = c; (x, c) ∈ A⟧ ⟹ map (λy. y = c) xs = foo
Note that single_valued A has to be an assumption of the goal. It does not suffice to have single_valued A as a [simp] rule somewhere. This is because the assumptions are looked up with resolve_tac prems rather than a full simplifier invocation.
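One possible workaround, a hedged sketch that is not part of the original answer: chain the fact into the goal with using, so that it becomes a premise the simproc can find before simp runs.
lemma
  assumes sv: "single_valued A"
  shows "⟦ (x, b) ∈ A; (x, c) ∈ A ⟧ ⟹ b = c"
  using sv by simp  (* 'using' inserts single_valued A as a goal premise, so resolve_tac prems sees it *)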

Unfold/simp has no effect in a primrec type class instantiation proof

Up until several days ago, I always defined a type, and then proved theorems directly about the type. Now I'm trying to use type classes.
Problem
The problem is that I can't instantiate cNAT for my type myD below, and it appears it's because simp has no effect on the abstract function cNAT, which I've made concrete with my primrec function cNAT_myD. I can only guess what's happening because of the automation that happens after instance proof.
Questions
Q1: Below, at the statement instantiation myD :: (type) cNAT, can you tell me how to finish the proof, and why I can prove the following theorem, but not the type class proof, which requires injective?
theorem dNAT_1_to_1: "(dNAT n = dNAT m) ==> n = m"
assumes injective: "(cNAT n = cNAT m) ==> n = m"
Q2: This is not as important, but at the bottom is this statement:
instantiation myD :: (type) cNAT2
It involves another way I was trying to instantiate cNAT. Can you tell me why I get Failed to refine any pending goal at the show statement? I put some comments in the source to explain some of what I did to set it up. I used this slightly modified formula for the requirement injective:
assumes injective: "!!n m. (cNAT2 n = cNAT2 m) --> n = m"
Specifics
My contrived datatype is this, which may be useful to me someday: (Update: Well, for another example maybe. A good mental exercise is for me to try and figure out how I can actually get something inside a 'a myD list, other than []. With BNF, something like datatype_new 'a myD = myS "'a myD fset" gives me the warning that there's an unused type variable on the right-hand side)
datatype 'a myD = myL "'a myD list"
The type class is this, which requires an injective function from nat to 'a:
class cNAT =
fixes cNAT :: "nat => 'a"
assumes injective: "(cNAT n = cNAT m) ==> n = m"
dNAT: this non-type class version of cNAT works
fun get_myL :: "'a myD => 'a myD list" where
"get_myL (myL L) = L"
primrec dNAT :: "nat => 'a myD" where
"dNAT 0 = myL []"
|"dNAT (Suc n) = myL (myL [] # get_myL(dNAT n))"
fun myD2nat :: "'a myD => nat" where
"myD2nat (myL []) = 0"
|"myD2nat (myL (x # xs)) = Suc(myD2nat (myL xs))"
theorem left_inverse_1 [simp]:
"myD2nat(dNAT n) = n"
apply(induct n, auto)
by(metis get_myL.cases get_myL.simps)
theorem dNAT_1_to_1:
"(dNAT n = dNAT m) ==> n = m"
apply(induct n)
apply(simp) (*
The simp method expanded dNAT.*)
apply(metis left_inverse_1 myD2nat.simps(1))
by (metis left_inverse_1)
cNAT: type class version that I can't instantiate
instantiation myD :: (type) cNAT
begin
primrec cNAT_myD :: "nat => 'a myD" where
"cNAT_myD 0 = myL []"
|"cNAT_myD (Suc n) = myL (myL [] # get_myL(cNAT_myD n))"
instance
proof
fix n m :: nat
show "cNAT n = cNAT m ==> n = m"
apply(induct n)
apply(simp) (*
The simp method won't expand cNAT to cNAT_myD's definition.*)
by(metis injective)+ (*
Metis proved it without unfolding cNAT_myD. It's useless. Goals always remain,
and the type variables in the output panel are all weird.*)
oops
end
cNAT2: Failed to refine any pending goal at show
(*I define a variation of `injective` in which the `assumes` definition, the
goal, and the `show` statement are exactly the same, and that strange `fails
to refine any pending goal shows up.*)
class cNAT2 =
fixes cNAT2 :: "nat => 'a"
assumes injective: "!!n m. (cNAT2 n = cNAT2 m) --> n = m"
instantiation myD :: (type) cNAT2
begin
primrec cNAT2_myD :: "nat => 'a myD" where
"cNAT2_myD 0 = myL []"
|"cNAT2_myD (Suc n) = myL (myL [] # get_myL(cNAT2_myD n))"
instance
proof (*
goal: !!n m. cNAT2 n = cNAT2 m --> n = m.*)
show
"!!n m. cNAT2 n = cNAT2 m --> n = m"
(*Failed to refine any pending goal
Local statement fails to refine any pending goal
Failed attempt to solve goal by exported rule:
cNAT2 (n::nat) = cNAT2 (m::nat) --> n = m *)
Your function cNAT is polymorphic in its result type, but the type variable does not appear among the parameters. This often causes type inference to compute a type which is more general than you want. In your case for cNAT, Isabelle infers for the two occurrences of cNAT in the show statement the type nat => 'b for some 'b of sort cNAT, but their type in the goal is nat => 'a myD. You can see this in jEdit by Ctrl-hovering over the cNAT occurrences to inspect the types. In ProofGeneral, you can enable printing of types with using [[show_consts]].
Therefore, you have to explicitly constrain types in the show statement as follows:
fix n m
assume "(cNAT n :: 'a myD) = cNAT m"
then show "n = m"
Note that it is usually not a good idea to use Isabelle's meta-connectives !! and ==> inside a show statement; you should rather rephrase them using fix/assume/show.
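Putting this advice together, the instance proof from Q1 could be structured as follows (a sketch only; the actual injectivity argument for cNAT_myD, e.g. one analogous to dNAT_1_to_1, still has to replace the sorry):
instance
proof
  fix n m :: nat
  assume "(cNAT n :: 'a myD) = cNAT m"
  then show "n = m"
    sorry  (* placeholder: prove injectivity of cNAT_myD, e.g. via a measure function as for dNAT *)
qed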
