How to ensure that instantiations of type variables are different - isabelle

In Isabelle, is there a way to ensure that instantiations for two type variables in a locale or proposition are different?
For a concrete example, I want to reason about a composite entity without committing to a specific representation. To this end I define a class of components, with some operations on them:
class Component = fixes oper :: "'a ⇒ 'a"
I also define a Composite, which has the same operations, lifted by applying them component-wise plus selectors for the components:
class Composite = Component (* + ... *)
locale ComponentAccess =
fixes set :: "'c :: Composite ⇒ 'a :: Component ⇒ 'c"
and get :: "'c ⇒ 'a"
assumes (* e.g. *) "get (set c a) = a"
and "set c (get c) = c"
and "oper (set c1 a1) = set (oper c1) (oper a1)"
Now I want to state some axioms for a pairwise composite, e.g.:
locale CompositeAxioms =
a: ComponentAccess set get + b: ComponentAccess set' get'
for set :: "'c :: Composite ⇒ 'a1 :: Component ⇒ 'c"
and get :: "'c ⇒ 'a1"
and set' :: "'c ⇒ 'a2 :: Component ⇒ 'c"
and get' :: "'c ⇒ 'a2" +
assumes set_disj_commut: "set' (set c a1) a2 = set (set' c a2) a1"
However, the above law is only sensible if 'a1 and 'a2 are instantiated to different types. Otherwise we trivially get unwanted consequences, like reverting a component setting:
lemma
fixes set get
assumes "CompositeAxioms set get set get"
shows "set (set c a1) a2 = set (set c a2) a1"
using assms CompositeAxioms.set_disj_commut by blast
In the above locale and its assumes, is there a way of ensuring that 'a1 and 'a2 are always instantiated to different types?
Update (clarification). Actually, the 'law' makes sense only if set and set' are different. But then I would have to compare two functions over different types, which, I think, is not possible. Since I define get/set operations in type classes and use sort constraints to ensure that a composite has certain components, my gets and sets always differ in the component type. Hence the question.

You can express in Isabelle/HOL that two types are different by using the reflection of types as terms. To that end, the types must be representable, i.e., instantiate the class typerep. Most types in HOL do so. Then, you can write
TYPEREP('a) ~= TYPEREP('b)
to express that 'a and 'b can only be instantiated to different types. However, TYPEREP is normally used only for internal purposes (especially in the code generator), so there is no reasoning infrastructure available and I do not know how to exploit such an assumption.
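For illustration, such an assumption could be attached to the locale from the question roughly as follows (a sketch with a hypothetical locale name; note that 'a1 and 'a2 must additionally have sort typerep for TYPEREP to be available):

locale CompositeAxiomsDistinct = CompositeAxioms set get set' get'
  for set :: "'c :: Composite ⇒ 'a1 :: {Component, typerep} ⇒ 'c"
  and get :: "'c ⇒ 'a1"
  and set' :: "'c ⇒ 'a2 :: {Component, typerep} ⇒ 'c"
  and get' :: "'c ⇒ 'a2" +
  assumes types_distinct: "TYPEREP('a1) ≠ TYPEREP('a2)"

As said above, though, there is no reasoning infrastructure for such an assumption, so its practical use is limited.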
Anyway, I wonder why you want to formulate such a constraint at all. If a user instantiates your locale CompositeAxioms with both components being the same (and leaves the swapping law for set and set' as is), it is the user who has to show the swapping law. If he can, then the set function is a bit strange, but soundness is not affected. Moreover, a locale assumption like TYPEREP('a) ~= TYPEREP('b) would unnecessarily restrict the generality of your development, as it might be perfectly sensible to use the same representation type with different instances for set and get.

Related

Reasoning about the entirety of a codatatype in Isabelle/HOL

I'd like to write down some definitions (and prove some lemmas!) about paths in a graph. Let's say that the graph is given implicitly by a relation of type 'a => 'a => bool. To talk about a possibly infinite path in the graph, I thought a sensible thing was to use a lazy list codatatype like 'a llist as given in "Defining (Co)datatypes and Primitively (Co)recursive Functions in Isabelle/HOL" (datatypes.pdf in the Isabelle distribution).
This works well enough, but then I'd like to define a predicate that takes such a list and a graph relation and evaluates to true iff the list defines a valid path in the graph: any pair of adjacent entries in the list must be an edge.
If I were using 'a list as a type to represent the paths, this would be easy: I'd just define the predicate using primrec. However, the co-inductive definitions I can find all seem to generate or consume the data one element at a time, rather than making a statement about the whole thing. Obviously, I realise that the resulting predicate won't be computable (because it makes a statement about infinite streams), so it might have a ∀ in there somewhere, but that's fine - I want to use it for developing a theory, not generating code.
How can I define such a predicate? (And how could I prove the obvious associated introduction and elimination lemmas that make it useful?)
Thanks!
I suppose the most idiomatic way to do this is to use a coinductive predicate. Intuitively, this is like a normal inductive predicate except that you also allow ‘infinite derivation trees’:
type_synonym 'a graph = "'a ⇒ 'a ⇒ bool"
codatatype 'a llist = LNil | LCons 'a "'a llist"
coinductive is_path :: "'a graph ⇒ 'a llist ⇒ bool" for g :: "'a graph" where
is_path_LNil:
"is_path g LNil"
| is_path_singleton:
"is_path g (LCons x LNil)"
| is_path_LCons:
"g x y ⟹ is_path g (LCons y path) ⟹ is_path g (LCons x (LCons y path))"
This gives you introduction rules is_path.intros and an elimination rule is_path.cases.
When you want to show that an inductive predicate holds, you just use its introduction rules; when you want to show that an inductive predicate implies something else, you use induction with its induction rule.
With coinductive predicates, it is typically the other way round: When you want to show that a coinductive predicate implies something else, you just use its elimination rules. When you want to show that a coinductive predicate holds, you have to use coinduction.
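To illustrate the last point, here is a sketch of a coinduction proof that the infinite path looping on a self-edge is a valid path. The corecursive definition repeat and the lemma name are hypothetical, and the concrete proof script depends on the generated coinduction rule, so it may need adjustment:

primcorec repeat :: "'a ⇒ 'a llist" where
  "repeat x = LCons x (repeat x)"

lemma is_path_repeat:
  assumes "g x x"
  shows "is_path g (repeat x)"
  using assms
  (* take the is_path_LCons disjunct; unfold repeat twice to expose both constructors *)
  by (coinduction arbitrary: x) (subst (1 2) repeat.code, auto)

The elimination direction needs no coinduction; for instance, is_path.cases directly yields that the first two entries of a path of length at least two form an edge.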

How to define a data type with constraints?

For example, I need to define a data type for pairs of lists, both of which must have the same length:
type_synonym list2 = "nat list × nat list"
definition good_list :: "list2" where
"good_list ≡ ([1,2],[3,4])"
definition bad_list :: "list2" where
"bad_list ≡ ([1,2],[3,4,5])"
I can define a separate predicate, which checks whether a pair of lists is ok:
definition list2_is_good :: "list2 ⇒ bool" where
"list2_is_good x ≡ length (fst x) = length (snd x)"
value "list2_is_good good_list"
value "list2_is_good bad_list"
Is it possible to combine the datatype and the predicate? I've tried to use inductive_set, but I have no idea how to use it:
inductive_set ind_list2 :: "(nat list × nat list) set" where
"length (fst x) = length (snd x) ⟹
x ∈ ind_list2"
You can create a new type constrained by some predicate via typedef, though the result will just be a type, not a datatype.
typedef good_lists2 = "{xy :: list2. list2_is_good xy}"
by (intro exI[of _ "([],[])"], auto simp: list2_is_good_def)
Working with such a newly created type is best done via the lifting-package.
setup_lifting type_definition_good_lists2
Now, for every operation on the new lifted type good_lists2, you first have to lift the operation from the raw type list2. For instance, below we define an extraction function and a Cons-function. In the latter, you have to prove that the newly generated pair indeed satisfies the invariant.
lift_definition get_lists :: "good_lists2 ⇒ list2" is "λ x. x" .
lift_definition Cons_good_lists2 :: "nat ⇒ nat ⇒ good_lists2 ⇒ good_lists2"
is "λ x y (xs,ys). (x # xs, y # ys)"
by (auto simp: list2_is_good_def)
Of course, it is also possible to access the invariant of the lifted type.
lemma get_lists: "get_lists xy = (x,y) ⟹ length x = length y"
by (transfer, auto simp: list2_is_good_def)
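In the same style, one can lift further operations. For instance, a hypothetical pointwise append preserves the invariant, because appending two pairs of equal-length lists again yields equal-length lists:

lift_definition append_good_lists2 :: "good_lists2 ⇒ good_lists2 ⇒ good_lists2"
  is "λ(xs, ys) (xs', ys'). (xs @ xs', ys @ ys')"
  by (auto simp: list2_is_good_def)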
I hope this helps.
René's answer is the answer to what you asked for, but just for the sake of completeness, I would like to add two things:
First, stating the obvious here: It seems like it would be much easier if you just worked with lists of pairs instead of pairs of lists. Your proposed new type is clearly isomorphic to a list of pairs. Then you don't have to introduce an extra type.
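To make the isomorphism concrete (a sketch; the conversion functions and the lemma are hypothetical), zip and map fst/map snd convert between the two representations, and the round trip is the identity precisely on pairs satisfying the invariant:

definition from_list2 :: "list2 ⇒ (nat × nat) list" where
  "from_list2 xy = zip (fst xy) (snd xy)"

definition to_list2 :: "(nat × nat) list ⇒ list2" where
  "to_list2 ps = (map fst ps, map snd ps)"

lemma "list2_is_good xy ⟹ to_list2 (from_list2 xy) = xy"
  by (cases xy) (simp add: to_list2_def from_list2_def list2_is_good_def)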
Also, on a more general note, just because you can introduce new types with type definitions in Isabelle that capture certain invariants does not mean that this is always the best idea. It may be easier to just carry around the invariants separately. It depends very much on what those invariants look like and what you actually do with the values of that type. In many cases, I would argue that the additional boilerplate for setting up the new type (in particular class instantiations if you need those) and converting between the base type and the new type is not worth whatever abstraction benefit you get from it.
A good heuristic, I think, is to ask yourself whether the type you are introducing is more of a ‘throw-away’ thing that you need in one specific place – then don't introduce a new type for it – or whether it is something that you can prove nice general facts about and build a good abstract theory on – then do introduce a new type for it. Good examples from the distribution for the latter are things like multisets, finite sets, and probability mass functions.

Using the ordering locale with partial maps

The following code doesn't typecheck:
type_synonym env = "char list ⇀ val"
interpretation map: order "op ⊆⇩m :: (env ⇒ env ⇒ bool)" "(λa b. a ≠ b ∧ a ⊆⇩m b)"
by unfold_locales (auto intro: map_le_trans simp: map_le_antisym)
lemma
assumes "mono (f :: env ⇒ env)"
shows "True"
by simp
Isabelle complains with the following error at the lemma:
Type unification failed: No type arity option :: order
Type error in application: incompatible operand type
Operator: mono :: (??'a ⇒ ??'b) ⇒ bool
Operand: f :: (char list ⇒ val option) ⇒ char list ⇒ val option
Why so? Did I miss something to use the interpretation? I suspect I need something like a newtype wrapper here...
When you interpret a locale like order that corresponds to a type class, you only get the theorems proved inside the context of the locale. However, the constant mono is only defined at the type-class level. The reason is that mono's type contains two type variables, whereas only one is available inside locales that stem from type classes. You can notice this because no map.mono stems from your interpretation.
If you instantiate the type class order for the option type with None being less than Some x, then you can use mono for maps, because the function space instantiates order with the pointwise order. However, the ordering <= on maps will only be semantically equivalent to ⊆⇩m, not syntactically, so none of the existing theorems about ⊆⇩m will work for <= and vice versa. Moreover, your theories will be incompatible with other people's that instantiate order for option differently.
Therefore, I recommend going without type classes. The predicate monotone explicitly takes the order to be used as an argument. This is a bit more writing, but in the end you are more flexible than with type classes. For example, you can write monotone (op ⊆⇩m) (op ⊆⇩m) f to express that f is a monotone transformation of environments.
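For example (a hypothetical lemma; the proof script is a sketch and may need minor adjustment), overriding an environment with a fixed map m0 is such a monotone transformation:

lemma "monotone (op ⊆⇩m) (op ⊆⇩m) (λm. m ++ m0)"
  by (auto simp: monotone_def map_le_def map_add_def dom_def split: option.splits)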

How do I state a lemma that does not respect sort constraints (for using OFCLASS)

How can I state a lemma in Isabelle/HOL that does not obey sort constraints?
To explain why that makes sense, consider the following example theory:
theory Test
imports Main
begin
class embeddable =
fixes embedding::"'a ⇒ nat"
assumes "inj embedding"
lemma OFCLASS_I:
assumes "inj (embedding::'a⇒_)"
shows "OFCLASS('a::type,embeddable_class)"
apply intro_classes by (fact assms)
instantiation nat :: embeddable begin
definition "embedding = id"
instance
apply (rule OFCLASS_I) unfolding embedding_nat_def by simp
end
end
This theory defines a type class "embeddable" with one parameter "embedding". (Basically, the class embeddable characterizes countable types with an explicit embedding into the naturals, but I chose it simply to have a very simple example.)
To simplify instance-proofs for the type class, the theory states an auxiliary lemma OFCLASS_I that shows goals of the form OFCLASS(...,embeddable). This lemma can then be used to solve the proof obligations produced by instance.
First, why would I even want that? (In the present theory, apply (rule OFCLASS_I) does the same as the built-in apply intro_classes and hence is useless.) In more complex cases, there are two reasons for such a lemma:
The lemma can give alternative criteria for instantiation of a type class, making subsequent proofs simpler.
The lemma can be used in automated (ML-level) instantiations. Here, using intro_classes is problematic, because the number of subgoals of intro_classes can vary depending on which instance proofs have been performed earlier. (A lemma such as OFCLASS_I has stable, well-controlled subgoals.)
However, the above theory does not work in Isabelle/HOL (neither 2015 nor 2016-RC1) because the lemma does not type-check. The sort constraint "embedding :: ('a :: embeddable ⇒ _)" is not satisfied. However, this type constraint is not a logical necessity (it is not enforced by the logical kernel, but at a higher level).
So, is there a clean way of stating the theory above? (I give a somewhat ugly solution in my answer, but I am looking for something cleaner.)
The checking of the sort constraints can be disabled temporarily on the ML-level, and reactivated afterwards. (See the complete example below.) But that solution looks very much like a hack. We have to change the sort constraints in the context, and remember to restore them afterwards.
theory Test
imports Main
begin
class embeddable =
fixes embedding::"'a ⇒ nat"
assumes "inj embedding"
ML {*
val consts_to_unconstrain = [#{const_name embedding}]
val consts_orig_constraints = map (Sign.the_const_constraint
#{theory}) consts_to_unconstrain
*}
setup {*
fold (fn c => fn thy => Sign.add_const_constraint (c,NONE) thy) consts_to_unconstrain
*}
lemma OFCLASS_I:
assumes "inj (embedding::'a⇒_)"
shows "OFCLASS('a::type,embeddable_class)"
apply intro_classes by (fact assms)
(* Recover stored type constraints *)
setup {*
fold2 (fn c => fn T => fn thy => Sign.add_const_constraint
(c,SOME (Logic.unvarifyT_global T)) thy)
consts_to_unconstrain consts_orig_constraints
*}
instantiation nat :: embeddable begin
definition "embedding = id"
instance
apply (rule OFCLASS_I) unfolding embedding_nat_def by simp
end
end
This theory is accepted by Isabelle/HOL. The approach works in more complex settings (I have used it several times), but I would prefer a more elegant one.
The following is a different solution that does not require "cleaning up" after proving the lemma (the approach above has to "repair" the sort constraints afterwards).
The idea is to define a new constant (embedding_UNCONSTRAINED in my example) that is a copy of the original sort-constrained constant (embedding), except without sort constraints (using the local_setup command below). The lemma is then stated using embedding_UNCONSTRAINED instead of embedding. But by adding the attribute [unfolded embedding_UNCONSTRAINED_def], the lemma that is actually stored uses the constant embedding without the sort constraint.
The drawback of this approach is that during the proof of the lemma, we can never explicitly write any terms containing embedding (since that will add the unwanted sort constraints). But if we need to state a subgoal containing embedding, we can always state it using embedding_UNCONSTRAINED and then use, e.g., unfolding embedding_UNCONSTRAINED to transform it into embedding.
theory Test
imports Main
begin
class embeddable =
fixes embedding::"'a ⇒ nat"
assumes "inj embedding"
(* This does roughly:
definition "embedding_UNCONSTRAINED == (embedding::('a::type)=>nat)" *)
local_setup {*
Local_Theory.define ((#{binding embedding_UNCONSTRAINED},NoSyn),((#{binding embedding_UNCONSTRAINED_def},[]),
Const(#{const_name embedding},#{typ "'a ⇒ nat"}))) #> snd
*}
lemma OFCLASS_I [unfolded embedding_UNCONSTRAINED_def]:
assumes "inj (embedding_UNCONSTRAINED::'a⇒_)"
shows "OFCLASS('a::type,embeddable_class)"
apply intro_classes by (fact assms[unfolded embedding_UNCONSTRAINED_def])
thm OFCLASS_I (* The theorem now uses "embedding", but without sort "embeddable" *)
instantiation nat :: embeddable begin
definition "embedding = id"
instance
apply (rule OFCLASS_I) unfolding embedding_nat_def by simp
end
end

Isabelle: Class of topological vector spaces

I wanted to define the class of topological vector spaces in the obvious way:
theory foo
imports Real_Vector_Spaces
begin
class topological_vector = topological_space + real_vector +
assumes add_cont_fst: "∀a. continuous_on UNIV (λb. a + b)"
...
but I got the error Type inference imposes additional sort constraint topological_space of type parameter 'a of sort type
I tried introducing type constraints in the condition, and it looks like
continuous_on doesn't want to match with the default type 'a of the class.
Of course I can work around this by replacing continuity with equivalent conditions, I'm just curious why this doesn't work.
Inside a class definition in Isabelle/HOL, there may occur only one type variable (namely 'a), which has the default HOL sort type. Thus, one cannot formalise multi-parameter type classes. This also affects definitions inside type classes, which may depend only on the parameters of one type class. For example, you can define a predicate cont :: 'a set => ('a => 'a) => bool inside the type class context topological_space as follows
definition (in topological_space) cont :: "'a set ⇒ ('a ⇒ 'a) ⇒ bool"
where "cont s f = (∀x∈s. (f ---> f x) (at x within s))"
The target (in topological_space) tells the type class system that cont really depends only on one type. Thus, it is safe to use cont in assumptions of other type classes which inherit from topological_space.
Now, the predicate continuous_on in Isabelle/HOL has the type 'a set ⇒ ('a ⇒ 'b) ⇒ bool where both 'a and 'b must be of sort topological_space. Thus, continuous_on is more general than cont, because it allows different topological spaces 'a and 'b. Hence, continuous_on cannot be defined within any single type class. Consequently, you cannot use continuous_on in assumptions of type classes either. This restriction is not specific to continuous_on; it appears for all kinds of morphisms, e.g., mono for order-preserving functions, homomorphisms between algebraic structures, etc. Single-parameter type classes just cannot express such things.
In your example, you get the error because Isabelle unifies all occurring type variables to 'a and then realises that continuous_on forces the sort topological_space on 'a, but for the above reasons, you may not depend on sorts in class specifications.
Nevertheless, there might be a simple way out. Just define cont as described above and use it in the assumptions of topological_vector instead of continuous_on. Outside of the class context, you can then prove that cont = continuous_on and derive the original assumption with continuous_on instead of cont. This keeps you from reasoning abstractly within the class context, but this is only a minor restriction.
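Sketched concretely (this assumes continuous_on_def unfolds to the same bounded-∀ form as cont above; the names of the derived lemmas are hypothetical):

class topological_vector = topological_space + real_vector +
  assumes add_cont_fst: "∀a. cont UNIV (λb. a + b)"

lemma cont_eq_continuous_on:
  "cont = (continuous_on :: 'a :: topological_space set ⇒ ('a ⇒ 'a) ⇒ bool)"
  by (intro ext) (simp add: cont_def continuous_on_def)

lemma add_continuous_fst:
  "continuous_on (UNIV :: 'a :: topological_vector set) (λb. a + b)"
  using add_cont_fst by (simp add: cont_eq_continuous_on [symmetric])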