Isabelle: Class of topological vector spaces

I wanted to define the class of topological vector spaces in the obvious way:
theory foo
imports Real_Vector_Spaces
begin
class topological_vector = topological_space + real_vector +
  assumes add_cont_fst: "∀a. continuous_on UNIV (λb. a + b)"
...
but I got the error Type inference imposes additional sort constraint topological_space of type parameter 'a of sort type
I tried introducing type constraints in the condition, and it looks like
continuous_on doesn't want to match with the default type 'a of the class.
Of course I can work around this by replacing continuity with equivalent conditions; I'm just curious why this doesn't work.

Inside a class definition in Isabelle/HOL, there may occur only one type variable (namely 'a), which has the default HOL sort type. Thus, one cannot formalise multi-parameter type classes. This also affects definitions inside type classes, which may depend only on the parameters of one type class. For example, you can define a predicate cont :: 'a set => ('a => 'a) => bool inside the type class context topological_space as follows
definition (in topological_space) cont :: "'a set ⇒ ('a ⇒ 'a) ⇒ bool"
  where "cont s f = (∀x∈s. (f ---> f x) (at x within s))"
The target (in topological_space) tells the type class system that cont really depends only on one type. Thus, it is safe to use cont in assumptions of other type classes which inherit from topological_space.
Now, the predicate continuous_on in Isabelle/HOL has the type 'a set => ('a => 'b) => bool where both 'a and 'b must be of sort topological_space. Thus, continuous_on is more general than cont, because it allows the domain and the codomain to be different topological spaces 'a and 'b. Consequently, continuous_on cannot be defined within any single type class, and so you cannot use continuous_on in assumptions of type classes either. This restriction is not specific to continuous_on; it shows up for all kinds of morphisms, e.g. mono for order-preserving functions, homomorphisms between algebraic structures, etc. Single-parameter type classes simply cannot express such things.
In your example, you get the error because Isabelle unifies all occurring type variables to 'a and then realises that continuous_on forces the sort topological_space on 'a; but, for the above reasons, class specifications may not depend on sorts.
Nevertheless, there might be a simple way out. Just define cont as described above and use it in the assumptions of topological_vector instead of continuous_on. Outside of the class context, you can then prove that cont coincides with continuous_on and derive the original assumption with continuous_on instead of cont. The only drawback is that you cannot use continuous_on (and the library lemmas about it) when reasoning abstractly within the class context, but this is a minor restriction.
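For instance, a sketch of this workaround, reusing the cont definition above (the final equivalence proof is only indicated, since its exact form depends on the continuous_on lemmas of your Isabelle version):
class topological_vector = topological_space + real_vector +
  assumes add_cont_fst: "∀a. cont UNIV (λb. a + b)"

(* Outside the class context, both sorts may be attached to the same type
   variable, so one can state and prove
     "cont s f ⟷ continuous_on s f"   for f :: "'a::topological_space ⇒ 'a"
   (by unfolding cont_def and the corresponding characterisation of
   continuous_on) and then derive add_cont_fst in its continuous_on form. *)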

Related

What's the difference between `overloading` and `adhoc_overloading`?

The Isabelle reference manual describes two ways to perform type-based overloading of constants: "Adhoc overloading of constants" in section 11.3, and "Overloaded constant definitions" in section 5.9.
It seems that 5.9 overloading requires all type parameters to be known before it decides on an overloaded constant, whereas 11.3 (ad-hoc) overloading decides on an overloaded constant as soon as only one variant matches:
consts
  c1 :: "'t ⇒ 'a set"
  c2 :: "'t ⇒ 'a set"

definition f1 :: ‹'a list ⇒ 'a set› where
  ‹f1 s ≡ set s›

adhoc_overloading
  c1 f1

overloading
  f2 ≡ ‹c2 :: 'a list ⇒ 'a set›
begin
definition ‹f2 w ≡ set w›
end

context
  fixes s :: ‹int list›
begin
term ‹c1 s› (* c1 s :: int set *)
term ‹c2 s› (* c2 s :: 'a set *)
end
What's the difference between the two? When would I use one over the other?
Overloading is a core feature of Isabelle's logic. It allows you to declare a single constant with a "broad" type that can be defined on specific types. There's rarely a need for users to do that manually. It is the underlying mechanism used to implement type classes. For example, if you define a type class as follows:
class empty =
  fixes empty :: 'a
  assumes (* ... *)
Then, the class command will declare the constant empty of type 'a, and subsequent instantiations merely provide a definition of empty for specific types, like nat or list.
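For illustration, a minimal sketch of such an instantiation for nat (assuming the class above had been declared without assumptions, so that instance .. suffices; otherwise an instance proof would be required):
instantiation nat :: empty
begin

(* define the class parameter empty at type nat *)
definition empty_nat_def: "empty = (0 :: nat)"

instance ..

end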
Long story short: overloading is – for most purposes – an implementation detail that is managed by higher-level commands. Occasionally, the need for some manual tweaking arises, e.g. when you have to define a type that depends on class constraints.
Ad-hoc overloading is, in my opinion, a misleading name. As far as I understand, it stems from Haskell (see the paper by Wadler and Blott). There, it describes precisely the type class mechanism that in Isabelle would just be called "overloading". In Isabelle, ad-hoc overloading means something entirely different. The idea is that you can use it to define abstract syntax (like do-notation for monads) that can't accurately be captured by Isabelle's ML-style simple type system. As with overloading, you declare a constant with a "broad" type. But that constant never receives any definitions! Instead, you define new constants with more specific types. When Isabelle's term parser encounters a use of the abstract constant, it tries to replace it with a concrete constant.
For example: you can use do-notation with option, list, and a few other types. If you write something like:
do { x <- foo; bar }
Then Isabelle sees:
Monad_Syntax.bind foo (%x. bar)
In a second step, depending on the type of foo, it will translate it to one of these possible terms:
Option.bind foo (%x. bar)
List.bind foo (%x. bar)
(* ... more possibilities ...*)
Again, users probably don't need to deal with this concept explicitly. If you pull in Monad_Syntax from the library, you'll get one application of ad-hoc overloading readily configured.
Long story short: ad-hoc overloading is a mechanism for enabling "fancy" syntax in Isabelle. Newbies may get confused by it because error messages tend to be hard to understand if there's something wrong in the internal translation.
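For instance, a minimal sketch of what this looks like for the option monad (the theory name is made up; the import path of Monad_Syntax may differ between Isabelle versions):
theory Do_Notation_Demo
  imports "HOL-Library.Monad_Syntax"
begin

(* With an option-typed argument, the abstract Monad_Syntax.bind
   is resolved to Option.bind by ad-hoc overloading: *)
term "do { x ← Some (1 :: nat); Some (x + 1) }"
(* internally this is Option.bind (Some 1) (λx. Some (x + 1)),
   though the pretty printer may display the bind syntax again *)

end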

Reasoning about the entirety of a codatatype in Isabelle/HOL

I'd like to write down some definitions (and prove some lemmas!) about paths in a graph. Let's say that the graph is given implicitly by a relation of type 'a => 'a => bool. To talk about a possibly infinite path in the graph, I thought a sensible thing was to use a lazy list codatatype like 'a llist as given in "Defining (Co)datatypes and Primitively (Co)recursive Functions in Isabelle/HOL" (datatypes.pdf in the Isabelle distribution).
This works well enough, but then I'd like to define a predicate that takes such a list and a graph relation and evaluates to true iff the list defines a valid path in the graph: any pair of adjacent entries in the list must be an edge.
If I were using 'a list as the type to represent paths, this would be easy: I'd just define the predicate using primrec. However, the co-inductive definitions I can find all seem to generate or consume the data one element at a time, rather than making a statement about the whole thing. Obviously, I realise that the resulting predicate won't be computable (because it makes a statement about infinite streams), so it might have a ∀ in there somewhere, but that's fine - I want to use it for developing a theory, not generating code.
How can I define such a predicate? (And how could I prove the obvious associated introduction and elimination lemmas that make it useful?)
Thanks!
I suppose the most idiomatic way to do this is to use a coinductive predicate. Intuitively, this is like a normal inductive predicate except that you also allow ‘infinite derivation trees’:
type_synonym 'a graph = "'a ⇒ 'a ⇒ bool"

codatatype 'a llist = LNil | LCons 'a "'a llist"

coinductive is_path :: "'a graph ⇒ 'a llist ⇒ bool" for g :: "'a graph" where
  is_path_LNil:
    "is_path g LNil"
| is_path_singleton:
    "is_path g (LCons x LNil)"
| is_path_LCons:
    "g x y ⟹ is_path g (LCons y path) ⟹ is_path g (LCons x (LCons y path))"
This gives you introduction rules is_path.intros and an elimination rule is_path.cases.
When you want to show that an inductive predicate holds, you just use its introduction rules; when you want to show that an inductive predicate implies something else, you use induction with its induction rule.
With coinductive predicates, it is typically the other way round: When you want to show that a coinductive predicate implies something else, you just use its elimination rules. When you want to show that a coinductive predicate holds, you have to use coinduction.
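For illustration, two small lemmas in this style (a sketch building on the definitions above; the lemma names are made up):
lemma ex_finite_path:
  "g a b ⟹ g b c ⟹ is_path g (LCons a (LCons b (LCons c LNil)))"
  by (auto intro: is_path.intros)

lemma path_first_edge:
  "is_path g (LCons x (LCons y xs)) ⟹ g x y"
  by (erule is_path.cases) auto
Showing that a genuinely infinite list is a path additionally needs the coinduction rule is_path.coinduct, e.g. via the coinduction proof method.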

Using the ordering locale with partial maps

The following code doesn't typecheck:
type_synonym env = "char list ⇀ val"

interpretation map: order "op ⊆⇩m :: (env ⇒ env ⇒ bool)" "(λa b. a ≠ b ∧ a ⊆⇩m b)"
  by unfold_locales (auto intro: map_le_trans simp: map_le_antisym)

lemma
  assumes "mono (f :: env ⇒ env)"
  shows "True"
  by simp
Isabelle complains with the following error at the lemma:
Type unification failed: No type arity option :: order
Type error in application: incompatible operand type
Operator: mono :: (??'a ⇒ ??'b) ⇒ bool
Operand: f :: (char list ⇒ val option) ⇒ char list ⇒ val option
Why so? Did I miss something to use the interpretation? I suspect I need something like a newtype wrapper here...
When you interpret a locale like order, which corresponds to a type class, you only get the theorems proved inside the context of the locale. However, the constant mono is defined only for the type class order. The reason is that mono's type contains two type variables, whereas only one is available inside locales that stem from type classes. You can see this from the fact that your interpretation produces no constant map.mono.
If you instantiate the type class order for the option type with None being less than Some x, then you can use mono for maps, because the function space instantiates order with the pointwise order. However, the ordering <= on maps will only be semantically equivalent to ⊆⇩m, not syntactically, so none of the existing theorems about ⊆⇩m will work for <= and vice versa. Moreover, your theories will be incompatible with other people's that instantiate order for option differently.
Therefore, I recommend going without type classes. The predicate monotone explicitly takes the order to be used. This is a bit more writing, but in the end, you are more flexible than with type classes. For example, you can write monotone (op ⊆⇩m) (op ⊆⇩m) f to express that f is a monotone transformation of environments.
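As a small illustration of that style (a sketch; the lemma name and the map-update function are made up for the example, while monotone and ⊆⇩m come from Main):
lemma upd_monotone:
  "monotone (λm m'. m ⊆⇩m m') (λm m'. m ⊆⇩m m') (λm. m(k ↦ v))"
  by (auto simp: monotone_def map_le_def)
Both the order on the argument and the order on the result are passed explicitly as ⊆⇩m, which is exactly the flexibility that the class-based mono cannot offer here.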

How to ensure that instantiations of type variables are different

In Isabelle, is there a way to ensure that instantiations for two type variables in a locale or proposition are different?
For a concrete example, I want to reason about a composite entity without committing to a specific representation. To this end I define a class of components, with some operations on them:
class Component = fixes oper :: "'a ⇒ 'a"
I also define a Composite, which has the same operations, lifted by applying them component-wise plus selectors for the components:
class Composite = Component (* + ... *)
locale ComponentAccess =
  fixes set :: "'c :: Composite ⇒ 'a :: Component ⇒ 'c"
    and get :: "'c ⇒ 'a"
  assumes (* e.g. *) "get (set c a) = a"
    and "set c (get c) = c"
    and "oper (set c1 a1) = set (oper c1) (oper a2)"
Now I want to state some axioms for a pairwise composite, e.g.:
locale CompositeAxioms =
  a: ComponentAccess set get + b: ComponentAccess set' get'
  for set :: "'c :: Composite ⇒ 'a1 :: Component ⇒ 'c"
    and get :: "'c ⇒ 'a1"
    and set' :: "'c ⇒ 'a2 :: Component ⇒ 'c"
    and get' :: "'c ⇒ 'a2" +
  assumes set_disj_commut: "set' (set c a1) a2 = set (set' c a2) a1"
However, the above law is only sensible if 'a1 and 'a2 are instantiated to different types. Otherwise we trivially get unwanted consequences, like reverting a component setting:
lemma
  fixes set get
  assumes "CompositeAxioms set get set get"
  shows "set (set c a1) a2 = set (set c a2) a1"
  using assms CompositeAxioms.set_disj_commut by blast
In the above locale and its assumptions, is there a way of ensuring that 'a1 and 'a2 are always instantiated to different types?
Update (clarification). Actually, the 'law' makes sense only if set and set' are different. But then I would have to compare two functions over different types which, I think, is not possible. Since I define get/set operations in type classes and use sort constraints to ensure that a composite has certain components, my gets and sets always differ in the component type. Hence the question.
You can express in Isabelle/HOL that two types are different by using the reflection of types as terms. To that end, the types must be representable, i.e., instantiate the class typerep. Most types in HOL do so. Then, you can write
TYPEREP('a) ~= TYPEREP('b)
to express that 'a and 'b can only be instantiated to different types. However, TYPEREP is normally used only for internal purposes (especially in the code generator), so there is no reasoning infrastructure available and I do not know how to exploit such an assumption.
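For what it's worth, a hypothetical sketch of how the locale from the question could carry such an assumption (reusing the Component/Composite classes and the ComponentAccess locale from above, with typerep added to the sort constraints):
locale CompositeAxiomsDistinct =
  a: ComponentAccess set get + b: ComponentAccess set' get'
  for set :: "'c :: Composite ⇒ 'a1 :: {Component, typerep} ⇒ 'c"
    and get :: "'c ⇒ 'a1"
    and set' :: "'c ⇒ 'a2 :: {Component, typerep} ⇒ 'c"
    and get' :: "'c ⇒ 'a2" +
  assumes set_disj_commut: "set' (set c a1) a2 = set (set' c a2) a1"
    and distinct_components: "TYPEREP('a1) ≠ TYPEREP('a2)"
Since there is no reasoning infrastructure for TYPEREP, the extra assumption mainly rules out the degenerate instantiation rather than enabling new proofs.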
Anyway, I wonder why you want to formulate such a constraint at all. If a user instantiates your locale CompositeAxioms with both components being the same (and leaves the swapping law for set and set' as it is), it is the user who has to show the swapping law. If he can, then the set function is a bit strange, but soundness is not affected. Moreover, a locale assumption like TYPEREP('a) ~= TYPEREP('b) would unnecessarily restrict the generality of your development, as it might be perfectly sensible to use the same representation type with different instances for set and get.

What's the difference between the empty sort, 'a::{}, and a sort of "type", 'a::type

Below, the comments show the output for the term commands:
declare[[show_sorts]]
term "x"
(* "x::'a::{}" :: "'a::{}" *)
term "x::'a"
(* "x::'a::type" :: "'a::type" *)
In a section title about a type class, I'm using the phrase "nat to type", when what I mean is "nat to 'a" (which I don't use because words generally work better in titles).
I need to be succinct, but if I'm also reasonably, technically correct, that's even better.
Update: Here, I try and clarify what I was asking about. I think I was saying this:
I'm confused. The command term "x" shows that x is of type 'a, and that 'a is of sort {}. Especially with hindsight here, and in comparison to what I got for term "x::'a", a sort of {} is not what I would expect for 'a. Here, like many times, I look to the software for answers, and when it tells me 'a for x has no sort, that makes me wonder.
So, I minimally give x the type 'a, which results in 'a having sort type. This kind of answer makes sense to me. Not that 'a has to have the sort type, but that 'a should at least have a sort, though my original motivation was to assure myself that the 'a in a type class is of sort type.
From Lars' answer, I am reminded that the type inference engine interprets a type as broadly as possible, so I assume that's at the core of this.
Update 2
From Lars' additional comment, it turns out, at least for me, that a key phrase in understanding 'a::{} is "sort constraint", the "constraint" in "sort constraint" giving important meaning to {}.
Here's some source for anyone who's interested in studying the subtleties of the language of types and sorts:
declare [[show_sorts]]
thm "Pure.reflexive" (* ?x::?'a::{} == ?x [name "Pure.reflexive"] *)
thm "HOL.refl" (* (?t::?'a::type) = ?t [name "HOL.refl"] *)
(* Pure.reflexive *)
theorem "(x::'a::type) == x"
  by (rule Pure.reflexive)

theorem "(x::prop) == x"
  by (rule Pure.reflexive)

theorem "(x::'a::{}) == x"
  by (rule Pure.reflexive)

(* HOL.refl *)
theorem "(x::'a::type) = x"
  by (rule HOL.refl)

theorem "(x::'a::{}) = x"
  by (rule HOL.refl)
  (* ERROR: Type unification failed: Variable 'a::{} not of sort type *)

(* LINE 47 HOL.thy: where the use of "type" is defined. *)
setup {* Axclass.class_axiomatization (@{binding type}, []) *}
default_sort type
setup {* Object_Logic.add_base_sort @{sort type} *}
A sort is an intersection of type classes. Hence, the most general sort is the empty intersection of classes, written {}; it imposes no constraints at all, so every type belongs to it. If a sort consists of only a single class, the curly braces are omitted.
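For example, a sort that intersects several classes is written with braces (a small sketch; the order in which the classes are printed may vary):
declare [[show_sorts]]
term "x :: 'a :: {order, finite}"
(* x is reported at type 'a with sort {finite, order}, the intersection of both classes *)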
In Isabelle/HOL, type is the sort of all HOL types (in contrast to the types of the logical framework, most notably the type prop of propositions). So, all the types you usually work with (bool, nat, int, pairs, lists, types defined with typedef or datatype) have sort type.
This guarantees the separation between the types of the object logic (e.g., HOL) and those of the logical framework (i.e., Isabelle/Pure): operators of the logical framework can be used to compose HOL expressions, but cannot occur inside HOL expressions.
So, when working in Isabelle/HOL, you almost always want your expressions to have sort type, and hence type is declared as the default sort. This means that type inference will use it, instead of the empty sort, whenever no additional constraints are given.
However, due to a shortcoming(?) of the type inference setup, there are some rare cases where type inference still infers the empty sort. This can lead to surprising errors.
