Dynamically retrieve variable name - is it possible? - isabelle

Is there a way to dynamically retrieve the name of variables in Isabelle/HOL?
I am trying to do something like this (in a simplified version):
consts sa :: "nat set"
sb :: "nat set"
axiomatization where
sa_fin: "finite sa"
and sb_fin: "finite sb"
definition expand :: "nat set set ⇒ string set" where
"expand S = {nameof(s)| s. s ∈ S}"
Basically, for the U below
definition U :: "nat set set" where "U = {sa, sb}"
the expand function will return {"sa", "sb"}. Is it possible at all within Isabelle/HOL?
TIA

In general, no. Your expand function cannot exist as a HOL function because you did not assume anywhere that sa and sb are distinct. If they are the same, there logically cannot be a function that returns ''sa'' for sa and ''sb'' for sb.
Of course, if you do know that sa ≠ sb you can easily define a function that does this manually, e.g. nameof x = (if x = sa then ''sa'' else ''sb''). Generating a function like this automatically for a given set of constants is possible in Isabelle/ML. Finding all constants that were declared in a given theory should also be possible in Isabelle/ML.
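For concreteness, here is a minimal sketch of that manual approach, reusing sa and sb from the question and additionally assuming their distinctness (without such an assumption no such function can exist):
definition nameof :: "nat set ⇒ string" where
  "nameof x = (if x = sa then ''sa'' else ''sb'')"
definition expand :: "nat set set ⇒ string set" where
  "expand S = {nameof s | s. s ∈ S}"
With sa ≠ sb available, expand {sa, sb} is then provably equal to {''sa'', ''sb''}.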
But all of this seems a bit odd to me. I would wager that whatever you are trying to do here is probably possible in a more direct way. It's hard to say more without knowing what your exact use case is.

Related

Defining a predicate; need prop=>bool

I'm trying to define a function that takes a set and a relation and returns a bool telling if the relation is reflexive on the set. I tried to define it like this:
definition refl::"'a set⇒('a×'a) set⇒bool" where
"refl A R = (∀x. x∈A⟹(x,x)∈R)"
but Isabelle gives me the following error:
Type unification failed: Clash of types "prop" and "bool"
Type error in application: incompatible operand type
Operator: (=) (refl A R) :: bool ⇒ bool
Operand: ∀x. x ∈ A ⟹ (x, x) ∈ R :: prop
I can't seem to find any function to force a "prop" into a "bool". I also tried changing the definition to set the RHS = True, but I get the same error.
What is the correct way to define my function?
You can't go from prop to bool. But you don't have to: just use the object level connectives (⟶ and ∀) instead of the meta-logical ones (⟹ and ⋀). They are logically equivalent, so this is not a problem.
The meta-logical connectives should (and usually can) only be used on the ‘outermost level’ of a proposition.
Note however that when you can use the meta-logical ones, it is usually more convenient to use them because the object-level ones are opaque to Isabelle and the Isar proof language (i.e. they are functions just like any other function) whereas Isar ‘knows’ what ⟹ and ⋀ mean. For instance, if you have a fact stated with ⟹ and ⋀, you can immediately instantiate variables and discharge assumptions in it using the of/OF attributes.
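As a small illustration (a sketch with hypothetical names; IntD1 is the standard fact c ∈ A ∩ B ⟹ c ∈ A from Main), a fact stated with ⟹ can be instantiated and partially discharged directly:
lemma in_interI: "x ∈ A ⟹ x ∈ B ⟹ x ∈ A ∩ B"
  by simp
thm in_interI[of c]      (* instantiates ?x with c *)
thm in_interI[OF IntD1]  (* discharges the first premise using IntD1 *)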
You need to write it such that the value is not prop in the first place; there is no conversion. In this case, you used the meta-level implication ⟹ between ∀x. x ∈ A and (x, x) ∈ R. Use the object-level implication ⟶ (ASCII -->) instead, which is an implication of bools.
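A sketch of the corrected definition (renamed to is_refl here, since Main already provides refl as an abbreviation for refl_on UNIV):
definition is_refl :: "'a set ⇒ ('a × 'a) set ⇒ bool" where
  "is_refl A R = (∀x. x ∈ A ⟶ (x, x) ∈ R)"
As a quick sanity check, the identity relation is reflexive on any set:
lemma "is_refl A Id"
  by (simp add: is_refl_def)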

Isabelle/HOL restrict codomain

I am sorry for asking so many Isabelle questions lately. Right now I have a type problem.
I want to use a type_synonym introduced in an AFP theory.
type_synonym my_fun = "nat ⇒ real"
I have a locale in my own theory where:
fixes n :: nat
and f :: "my_fun"
and A :: "nat set"
defines A: "A ≡ {0..n}"
However, in my use case the output of the function f is always a natural number in the set {0..n}. I want to impose this as a condition (or is there a better way to do it?). The only way I found was to:
assumes "∀v. ∃ i. f v = i ∧ i ∈ A"
since
assumes "∀v. f v ∈ A"
does not work.
If I let Isabelle show me the involved types it seems alright to me:
∀v::nat. ∃i::nat. (f::nat ⇒ real) v = real i ∧ i ∈ (A::nat set)
But of course now I cannot type something like this:
have "f ` {0..10} ⊆ A"
But I have to prove this. I understand where this problem comes from. However, I do not know how to proceed in a case like this. What is the normal way to deal with it? I would like to use my_fun as it has the same meaning as in my theory.
Thank you (again).
If you look closely at ∀v::nat. ∃i::nat. (f::nat ⇒ real) v = real i ∧ i ∈ (A::nat set), you will be able to see the mechanism that was used for making the implicit type conversion between nat and real: it is the abbreviation real (this invokes of_nat defined for semiring_1 in Nat.thy) that appears in the statement of the assumption in the context of the locale.
Of course, you can use the same mechanism explicitly. For example, you can define A::real set as A ≡ image real {0..n} instead of A::nat set as A ≡ {0..n}. Then you can use range f ⊆ A instead of assumes "∀v. ∃ i. f v = i ∧ i ∈ A". However, I doubt that there is a universally accepted correct way to do it: it depends on what exactly you are trying to achieve. Nonetheless, for the sake of the argument, your locale could look like this:
type_synonym my_fun = "nat ⇒ real"
locale myloc_basis =
  fixes n :: nat

abbreviation (in myloc_basis) A where "A ≡ image real {0..n}"

locale myloc = myloc_basis +
  fixes f :: "my_fun"
  assumes range: "range f ⊆ A"

lemma (in myloc) "f ` {0..10} ⊆ A"
  using range by auto
I want to impose this as a condition (or is there a better way to do it?).
The answer depends on what is known about f. If only a condition on the range of f is known, as the statement of your question seems to suggest, then, I guess, you can only state it as an assumption.
As a side note, to the best of my knowledge, defines is considered to be obsolete and it is best to avoid using it in the specifications of a locale: stackoverflow.com/questions/56497678.

How to define a data type with constraints?

For example I need to define a data type for pairs of lists, both of which must have the same length:
type_synonym list2 = "nat list × nat list"
definition good_list :: "list2" where
"good_list ≡ ([1,2],[3,4])"
definition bad_list :: "list2" where
"bad_list ≡ ([1,2],[3,4,5])"
I can define a separate predicate, which checks whether a pair of lists is ok:
definition list2_is_good :: "list2 ⇒ bool" where
"list2_is_good x ≡ length (fst x) = length (snd x)"
value "list2_is_good good_list"
value "list2_is_good bad_list"
Is it possible to combine the datatype and the predicate? I've tried to use inductive_set, but I have no idea how to use it:
inductive_set ind_list2 :: "(nat list × nat list) set" where
"length (fst x) = length (snd x) ⟹
x ∈ ind_list2"
You can create a new type which is constrained by some predicate via typedef, though the result will just be a type and not a datatype.
typedef good_lists2 = "{xy :: list2. list2_is_good xy}"
by (intro exI[of _ "([],[])"], auto simp: list2_is_good_def)
Working with such a newly created type is best done via the lifting-package.
setup_lifting type_definition_good_lists2
Now for every operation on this new lifted type good_lists2, you first have to lift the operation from the raw type list2. For instance, below we define an extraction function and a Cons-function. In the latter you have to prove that the newly generated pair indeed satisfies the invariant.
lift_definition get_lists :: "good_lists2 ⇒ list2" is "λ x. x" .
lift_definition Cons_good_lists2 :: "nat ⇒ nat ⇒ good_lists2 ⇒ good_lists2"
is "λ x y (xs,ys). (x # xs, y # ys)"
by (auto simp: list2_is_good_def)
Of course, it is also possible to access the invariant of the lifted type.
lemma get_lists: "get_lists xy = (x,y) ⟹ length x = length y"
by (transfer, auto simp: list2_is_good_def)
I hope this helps.
René's answer is the answer to what you asked for, but just for the sake of completeness, I would like to add two things:
First, stating the obvious here: It seems like it would be much easier if you just worked with lists of pairs instead of pairs of lists. Your proposed new type is clearly isomorphic to a list of pairs. Then you don't have to introduce an extra type.
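As a minimal sketch of that alternative (hypothetical names), the length invariant then holds by construction:
type_synonym list2' = "(nat × nat) list"
definition good_list' :: "list2'" where
  "good_list' ≡ [(1, 3), (2, 4)]"
lemma "length (map fst (xs :: list2')) = length (map snd xs)"
  by simp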
Also, on a more general note, just because you can introduce new types with type definitions in Isabelle that capture certain invariants does not mean that this is always the best idea. It may be easier to just carry around the invariants separately. It depends very much on what those invariants look like and what you actually do with the values of that type. In many cases, I would argue that the additional boilerplate for setting up the new type (in particular class instantiations if you need those) and converting between the base type and the new type is not worth whatever abstraction benefit you get from it.
A good heuristic, I think, is to ask yourself whether the type you are introducing is more of a ‘throw-away’ thing that you need in one specific place – then don't introduce a new type for it – or whether it is something that you can prove nice general facts about and introduce a good abstract theory on – then do introduce a new type for it. Good examples from the distribution for the latter are things like multisets, finite sets, and probability mass functions.

How to ensure that instantiations of type variables are different

In Isabelle, is there a way to ensure that instantiations for two type variables in a locale or proposition are different?
For a concrete example, I want to reason about a composite entity without committing to a specific representation. To this end I define a class of components, with some operations on them:
class Component = fixes oper :: "'a ⇒ 'a"
I also define a Composite, which has the same operations, lifted by applying them component-wise plus selectors for the components:
class Composite = Component (* + ... *)
locale ComponentAccess =
fixes set :: "'c :: Composite ⇒ 'a :: Component ⇒ 'c"
and get :: "'c ⇒ 'a"
assumes (* e.g. *) "get (set c a) = a"
and "set c (get c) = c"
and "oper (set c1 a1) = set (oper c1) (oper a2)"
Now I want to state some axioms for a pairwise composite, e.g.:
locale CompositeAxioms =
a: ComponentAccess set get + b: ComponentAccess set' get'
for set :: "'c :: Composite ⇒ 'a1 :: Component ⇒ 'c"
and get :: "'c ⇒ 'a1"
and set' :: "'c ⇒ 'a2 :: Component ⇒ 'c"
and get' :: "'c ⇒ 'a2" +
assumes set_disj_commut: "set' (set c a1) a2 = set (set' c a2) a1"
However, the above law is only sensible if 'a1 and 'a2 are instantiated to different types. Otherwise we trivially get unwanted consequences, like reverting a component setting:
lemma
fixes set get
assumes "CompositeAxioms set get set get"
shows "set (set c a1) a2 = set (set c a2) a1"
using assms CompositeAxioms.set_disj_commut by blast
In the above locale and its assumes, is there a way of ensuring that 'a1 and 'a2 are always instantiated to different types?
Update (clarification). Actually, the 'law' makes sense only if set and set' are different. But then I would have to compare two functions over different types which, I think, is not possible. Since I define get/set operations in type classes and use sort constraints to ensure that a composite has certain components, my gets and sets always differ in the component type. Hence the question.
You can express in Isabelle/HOL that two types are different by using the reflection of types as terms. To that end, the types must be representable, i.e., instantiate the class typerep. Most types in HOL do so. Then, you can write
TYPEREP('a) ~= TYPEREP('b)
to express that 'a and 'b can only be instantiated to different types. However, TYPEREP is normally used only for internal purposes (especially in the code generator), so there is no reasoning infrastructure available and I do not know how to exploit such an assumption.
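For illustration only, such an assumption could be written in a locale along these lines (a speculative sketch with a hypothetical locale name, requiring the component types to be of sort typerep; as said, there is little reasoning infrastructure for it):
locale DistinctComponents =
  fixes set  :: "'c::Composite ⇒ 'a1::{Component,typerep} ⇒ 'c"
    and set' :: "'c ⇒ 'a2::{Component,typerep} ⇒ 'c"
  assumes distinct_types: "TYPEREP('a1) ≠ TYPEREP('a2)"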
Anyway, I wonder why you want to formulate such a constraint at all. If a user instantiates your locale CompositeAxioms with both components being the same (and leaves the swapping law for set and set' as is), it is the user who has to show the swapping law. If he can, then the set function is a bit strange, but soundness is not affected. Moreover, a locale assumption like TYPEREP('a) ~= TYPEREP('b) would unnecessarily restrict the generality of your development, as it might be perfectly sensible to use the same representation type with different instances for set and get.

What is a Quotient type pattern in Isabelle?

What is a "Quotient type pattern" in Isabelle?
I couldn't find any explanation over the internet.
It would be better if you would quote a little from where you saw the phrase. I know of "pattern matching," and I know of "quotient type," but I don't know of "quotient type pattern."
I prefer not to ask for clarification, and then wait, so I pick two of the three words, "quotient type." If I'm on the wrong track, it's still a worthy subject, and a big and important part of Isabelle/HOL.
There is the quotient_type keyword, and it allows you to define a new type with an equivalence relation.
It is part of the quotient package, described starting on page 248 of isar-ref.pdf. There happens to be a Wiki page, Quotient_type.
A more involved description is given by Brian Huffman and Ondřej Kunčar. Go to Kunčar's web page and look at the two PDFs titled Lifting and Transfer: A Modular Design for Quotients in Isabelle/HOL, which are not exactly the same.
It happens to be that lifting and quotient types are heavily related, and not easy to understand, which is why I try to study a little here and there, like right now, to get a better understanding of it all.
Integers and Rationals in HOL Are Quotient Types, I Pick One as an Example, Integers
You can start by looking at Int.thy.
For a quotient type, you need an equivalence relation, which defines a set, and intrel is what is used to define that set for type int.
definition intrel :: "(nat * nat) => (nat * nat) => bool" where
"intrel = (%(x, y) (u, v). x + v = u + y)"
This is the classic definition of the integers, based on the natural numbers. Integers are ordered pairs of natural numbers (and sets as I describe below), and they're equal by that definition.
For example, informally, (2,3) = (4,5) because 2 + 5 = 4 + 3.
I'm boring you, and you're waiting for the good stuff. Here's part of it, the use of quotient_type:
quotient_type int = "nat * nat" / "intrel"
morphisms Rep_Integ Abs_Integ
Those two morphisms come into play, if you want to strain your brain, and really understand what's going on, which I do. There are lots of functions and simp rules that quotient_type generates, and you have to do a lot of work to find them all, such as with the find_theorems command.
An Abs function abstracts an ordered pair to an int. Check these out:
lemma "Abs_Integ(1,0) = (1::int)"
by(metis one_int_def)
lemma "Abs_Integ(x,0) + Abs_Integ(y,0) ≥ (0::int)"
by(smt int_def)
They show that an int really is an ordered pair, under the hood of the engine.
Now I show the explicit types of those morphisms, along with Abs_int and Rep_int, which show int not only as an ordered pair, but as a set of ordered pairs.
term "Abs_int :: (nat * nat) set => int"
term "Abs_Integ :: (nat * nat) => int"
term "Rep_int :: int => (nat * nat) set"
term "Rep_Integ :: int => (nat * nat)"
I'm boring you again, but I have an emotional need to show some more examples. Two pairs represent the same integer when their components have the same difference; in each pair below the first component exceeds the second by one, so they all represent the integer 1:
lemma "Abs_Integ(1,0) = Abs_Integ(3,2)"
by(smt nat.abs_eq split_conv)
lemma "Abs_Integ(4,3) = Abs_Integ(3,2)"
by(smt nat.abs_eq split_conv)
What would you expect if you added Abs_Integ(2,3) and Abs_Integ(3,4)? This:
lemma "Abs_Integ(2,3) + Abs_Integ(3,4) = Abs_Integ(2 + 3, 3 + 4)"
by(metis plus_int.abs_eq plus_int_def split_conv)
That plus_int in the proof is defined in Int.thy, on line 44.
lift_definition plus_int :: "int => int => int"
is "%(x, y) (u, v). (x + u, y + v)"
What is this lifting all about? That would put me at "days into" this explanation, and I'm only just starting to understand it a little.
The find_theorems shows there's lots of stuff hidden, as I said:
thm "plus_int.abs_eq"
find_theorems name: "Int.plus_int*"
More examples, but these are to emphasize that, under the hood of the engine, an int ties back into an equivalence class as a set, where I'm using intrel from above to define the sets:
term "Abs_int::(nat * nat) set => int"
term "Abs_int {(x,y). x + 3 = 2 + y}" (*(2,3)*)
term "Abs_int {(x,y). x + 4 = 3 + y}" (*(3,4)*)
lemma "Abs_int {(x,y). x + 3 = 2 + y} = Abs_int {(x,y). x + 100 = 99 + y}"
by(auto)
That auto proof was easy, but there's no magic coming through for me on this next one, even though it's simple.
lemma "Abs_int {(x,y). x + 3 = 2 + y} + Abs_int {(x,y). x + 4 = 3 + y}
= Abs_int {(x,y). x + 7 = 5 + y}"
apply(auto simp add: plus_int.abs_eq plus_int_def intrel_def)
oops
It could be that all I need to do is tap into something that's not a simp rule by default.
If quotient_type is not the "quotient type pattern" you're talking about, at least I got something out of it by seeing all what find_theorems returns about Int.plus_int* above.
What is a quotient type?
A quotient type is a way to define a new type in terms of an already existing type. That way, we don't have to axiomatize the new type. For example, one might find it reasonable to use the naturals to build the integers, since they can be seen as "naturals+negatives". You may then want to use the integers to build the rationals, since they can be seen as "integers+quotients". And so on.
Quotient types use a given equivalence relation on the "lower type" to determine what equality means for the "higher type".
Being more precise: A quotient type is an abstract type for which equality is dictated by some equivalence relation on its underlying representation.
This definition might be too abstract at first, so we'll use the integers as a grounding example.
Example: Integers from Naturals
If one wants to define the integers, the most standard way is to use an ordered pair of natural numbers, such as (a,b), which intuitively represents "a-b". For example, the number represented by the pair (2,4) is -2, since intuitively 2-4 = -2. By the same logic, (0,2) also represents '-2', and so does (1,3) or (10,12), since 0-2 = 1-3 = 10-12 = -2.
We could then say that "two pairs (a,b) and (x,y) represent the same integer iff a - b = x - y". However, the minus operation can be weird in natural numbers (what is '2-3' in the naturals?). To avoid that weirdness, rewrite 'a - b = x - y' as 'a + y = x + b', now using only addition. So, two pairs (a,b) and (x,y) represent the same integer when 'a + y = x + b'. For example, (7,9) represents the same integer as (1,3), since '7 + 3 = 1 + 9'.
That leads to a quotient definition of integers: an integer is a value represented by an ordered pair of natural numbers. Two integers represented by (a,b) and (x,y) are equal if, and only if, a+y = x+b.
The integer type derives from the type "ordeded pair of natural numbers" which is its representation. We may call the integer itself an abstraction of that. The equality of integers is defined as whenever some underlying representations '(a,b)' and '(x,y)' follow the equivalence relation 'a+y = x+b'.
In that sense, the integer '-3' is represented by both '(0,3)' and '(2,5)', and we may show this by noticing that 0+5 = 3+2. On the other hand, '(0,3)' and '(6,10)' do not represent the same integer, since '0+10 ≠ 3+6'. This reflects the fact that '-3 ≠ -4'.
Technically speaking, the integer '-3' is not specifically '(0,3)', nor '(1,4)', nor '(10,13)', but the whole equivalence class. By that I mean that '-3' is the set containing all of its representations (i.e. -3 = { (0,3), (1,4), (2,5), (3,6), (4,7), ... }). '(0,3)' is called a representation for '-3', and '-3' is the abstraction of '(0,3)'.
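These informal claims can be checked directly against intrel as quoted from Int.thy earlier (a small sketch):
lemma "intrel (0, 3) (2, 5)"    (* both pairs represent -3 *)
  by (simp add: intrel_def)
lemma "¬ intrel (0, 3) (6, 10)" (* -3 ≠ -4 *)
  by (simp add: intrel_def)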
Morphisms: Rep and Abs in Isabelle
Rep and Abs are ways for us to transition between the representations and the abstractions they represent. More precisely, they are mappings from an equivalence class to one of its representations, and vice-versa. We call them morphisms.
Rep takes an abstract object (an equivalence class), such as '-3', and transforms it into one of its representations, for example '(0,3)'. Abs does the opposite, taking a representation such as '(3,10)', and mapping it into its abstract object, which is '-7'. Int.thy (Isabelle's implementation of integers) defines these as Rep_Integ and Abs_Integ for integers.
Notice that the statement '(2,3) = (8,9)' is absurd. Since these are ordered pairs, it would imply '2 = 8' and '3 = 9'. On the other hand, the statement 'Abs_Integ(2,3) = Abs_Integ(8,9)' is very much true, as we are simply saying that the integer abstraction of '(2,3)' is the same as the integer abstraction of '(8,9)', namely '-1'.
A more precise phrasing of 'Abs_Integ(2,3) = Abs_Integ(8,9)' is: "'(2,3)' and '(8,9)' belong in the same equivalence class under the integer relation". We usually call this class '-1'.
It's important to note that '-1' is just a convenient shorthand for "the equivalence class of (0,1)", in the same vein that '5' is just a shorthand for "the equivalence class of (5,0)" and '-15' is shorthand for "the equivalence class of (0,15)". We call '(0,1)', '(5,0)', and '(0,15)' the canonical representations. So saying "Abs_Integ(2,3) = -1" is really just a nice abbreviation for "Abs_Integ(2,3) = Abs_Integ(0,1)".
It's also worth noting that the mapping Rep is one-to-one. This means that Rep_Integ(-1) will always yield the same representation pair, usually the canonical '(0,1)'. The specific pair picked does not matter much, but it'll always pick the same one. That is useful to know, as it implies that Rep_Integ(i) = Rep_Integ(j) holds exactly when i = j.
The quotient_type command in Isabelle
'quotient_type' creates a quotient type using the specified type and equivalence relation. So quotient_type int = "nat × nat" / "intrel" creates the quotient type int, as the equivalence classes of nat × nat under the relation intrel (where "intrel = (λ(a,b) (x,y). a+y = x+b)"). Section 11.9.1 of the manual details the specifics about the command.
It's worth noting that you actually have to prove that the relation provided (intrel) is an equivalence.
Here's a usage example from Int.thy, which defines the integers, its morphisms, and proves that intrel is an equivalence relation:
(* Definition *)
quotient_type int = "nat × nat" / "intrel"
morphisms Rep_Integ Abs_Integ
(* Proof that 'intrel' is indeed an equivalence *)
proof (rule equivpI)
show "reflp intrel" by (auto simp: reflp_def)
show "symp intrel" by (auto simp: symp_def)
show "transp intrel" by (auto simp: transp_def)
qed
Definitions and Lemmas: The Lifting and Transfer packages
Now, the previous explanations suggest that Rep and Abs should appear everywhere, right? These transformations are crucial for proving properties about quotient types. However, they appear less than 10 times throughout the 2000 lines of Int.thy. Why?
lift_definition and the proof method transfer are the answer. They come from the Lifting and Transfer packages. These packages do a lot, but for our purposes, they do the job of concealing Rep and Abs from your definitions and theorems.
The gist of working with quotient types in Isabelle is that you want to [1] define some operations, [2] prove some useful lemmas with the representation type, and then [3] completely forget about these representations, working only with the abstract type. When proving theorems about the abstract type, you should be using the previously shown properties and lemmas.
To get [1], lift_definition helps you to define the operations. Specifically, it allows you to define a function with the representation type, and it automatically "lifts" it to the abstract type.
As an example, you can define addition on integers as such:
lift_definition int_plus:: "int ⇒ int ⇒ int"
is "λ(a,b)(c,d). (a+c, b+d)"
This definition is stated in terms of nat × nat ⇒ nat × nat ⇒ nat × nat, but 'lift_definition' will automatically "lift" it to int ⇒ int ⇒ int.
An important thing to note is that you have to prove that the function respects the equivalence relation (i.e. if 'x ≃ y' then 'f x ≃ f y'). The definition above, for example, will prompt you to prove that "if '(a,b) ≃ (x,y)' and '(c,d) ≃ (u,v)', then '(a+c,b+d) ≃ (x+u,y+v)'" (if it doesn't look like it, try using apply clarify).
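Stated directly in terms of intrel, that obligation has roughly the following shape (a sketch; the actual goal produced by lift_definition is phrased through the lifting relators, but after clarify it boils down to this):
lemma "intrel (a, b) (x, y) ⟹ intrel (c, d) (u, v) ⟹
       intrel (a + c, b + d) (x + u, y + v)"
  by (clarsimp simp: intrel_def, arith)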
One of the nice things about lift_definition is that it works in terms of the underlying representation only, so you don't have to worry about transitioning between abstractions and representations. Hence the lack of Rep_Integ and Abs_Integ in Int.thy.
It also sets up a transfer rule for the function. This is how you get [2]: proving properties without having to worry about Rep and Abs. Using the transfer proof method, you can bring a lemma about an abstraction down to the representation level, and prove the desired property there.
As an example, you can state the commutativity of addition in the form int_plus x y = int_plus y x, and then use the transfer method to bring that statement down to the representation level, which after a clarify looks like intrel (a + c, b + d) (c + a, d + b). We can then prove by simplification with the definition of intrel:
lemma plus_comm: "int_plus x y = int_plus y x"
apply transfer
apply clarify
by (simp add: intrel_def)
And to get [3], you simply use these lemmas and properties of the abstract type, without worrying about the actual representations.
After this point, you'll even forget that you're using a quotient type, since the abstract type and its properties are all you need. Usually a handful of lemmas on the abstract type is enough, and Int.thy will give you a lot more than a handful.
References and further reading
Section 1 of the paper "Quotient Types" gives a good overview of the topic (and goes in depth in the other sections).
The introduction of "Quotients Revisited for Isabelle/HOL" also explains very well the purpose of 'Rep' and 'Abs'.
"Lifting and Transfer" is also a great read into how these can be concealed and the automation behind quotient types in Isabelle.
Isabelle's Reference Manual (with some ctrl+f) is also a great source when in doubt about what specific commands do.
