code_pred in locales - isabelle

I want to create an executable inductive predicate within a locale. Without the locale, everything works fine:
definition "P a b = True"
inductive test :: "'a ⇒ 'a ⇒ bool" where
"test a a" |
"test a b ⟹ P b c ⟹ test a c"
code_pred test .
However, when I try the same in a locale, it does not work:
locale localTest
begin
definition "P' a b = True"
inductive test' :: "'a ⇒ 'a ⇒ bool" where
"test' a a" |
"test' a b ⟹ P' b c ⟹ test' a c"
code_pred test'
end
The code_pred line in the locale returns the following error:
Not a constant: test'

You can give alternative introduction rules (see isabelle doc codegen, Section 4.2: Alternative introduction rules) and thereby avoid an interpretation. This also works for locales with parameters (and even for constants that are not defined inductively). Here is a variant of your example with a parameter:
locale l =
fixes A :: "'a set"
begin
definition "P a b = True"
inductive test :: "'a ⇒ 'a ⇒ bool" where
"a ∈ A ⟹ test a a" |
"test a b ⟹ P b c ⟹ test a c"
end
We introduce a new constant:
definition "foo A = l.test A"
and prove its introduction rules (thus the new constant is sound w.r.t. the old one):
lemma [code_pred_intro]:
"a ∈ A ⟹ foo A a a"
"foo A a b ⟹ l.P b c ⟹ foo A a c"
unfolding foo_def by (fact l.test.intros)+
Finally, we have to show that the new constant is also complete w.r.t. the old one:
code_pred foo by (unfold foo_def) (metis l.test.simps)
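As a quick sanity check (an addition, relying on the predicate compiler's usual fact naming), the proved code equations can now be inspected with:
thm foo.equation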

Expressed sloppily, locales are an abstraction mechanism that allows one to introduce new constants relative to some hypothetical constants satisfying hypothetical properties, whereas code generation is more concrete (you need all the information required to implement a function, not just its abstract specification).
For that reason you first need to interpret a locale before you can generate code. Of course, in your example there are no hypothetical constants and properties, so the interpretation is trivial:
interpretation test: localTest .
After that, you can use
code_pred test.test' .

Complete guess here, but I'm wondering if renaming test to test' has messed things up for you. (Consider changing code_pred test' to code_pred "test'".)


Isabelle order type-class on lambda expressions

I found this expression somewhere in Isabelle's standard library and tried to see what value does with it:
value "(λ x::bool . ¬x) ≤ (λ x . x)"
It outputs False. What is the meaning of ≤ here? Ideally, where can I find the exact instantiation of it? When I Ctrl+Click on the lambda symbol, jEdit doesn't take me anywhere. Is λ part of meta logic then? Where is it defined?
This and many other things are defined in the Lattices.thy theory of the Main library
https://isabelle.in.tum.de/library/HOL/HOL/Lattices.html
under the following section:
subsection ‹Lattice on \<^typ>‹_ ⇒ _››
instantiation "fun" :: (type, semilattice_sup) semilattice_sup
begin
definition "f ⊔ g = (λx. f x ⊔ g x)"
lemma sup_apply [simp, code]: "(f ⊔ g) x = f x ⊔ g x"
by (simp add: sup_fun_def)
instance
by standard (simp_all add: le_fun_def)
end
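In short, ≤ on functions is the pointwise order, f ≤ g ⟷ (∀x. f x ≤ g x) (fact le_fun_def, which the instance proof above relies on), and ≤ on bool is implication. Since bool is an enum type, value can decide the quantification over all inputs. For illustration (these value commands are an addition, assuming only Main):
value "(λx::bool. ¬ x) ≤ (λx. x)" (* False: fails at x = False, where ¬ x holds but x does not *)
value "(λx::bool. x) ≤ (λx. True)" (* True: x ⟶ True holds for both inputs *)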

How to include statement about type of the variable in the Isabelle/HOL term

I have following simple Isabelle/HOL theory:
theory Max_Of_Two_Integers_Real
imports Main
"HOL-Library.Multiset"
"HOL-Library.Code_Target_Numeral"
"HOL-Library.Code_Target_Nat"
"HOL-Library.Code_Abstract_Nat"
begin
definition two_integer_max_case_def :: "nat ⇒ nat ⇒ nat" where
"two_integer_max_case_def a b = (case a > b of True ⇒ a | False ⇒ b)"
lemma spec_final:
fixes a :: nat and b :: nat
assumes "a > b" (* and "b < a" *)
shows "two_integer_max_case_def a b = a"
using assms by (simp add: two_integer_max_case_def_def)
lemma spec_1:
fixes a :: nat and b :: nat
shows "a > b ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
lemma spec_2:
shows " (a ∈ nat set) ∧ (b ∈ nat set) ∧ (a > b) ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
end
Three lemmas try to express and prove the same statement, but progressively I am trying to move information from assumes and fixes towards the term. The first two lemmas are correct, but the third (last) lemma fails syntactically with the error message:
Type unification failed: Clash of types "_ ⇒ _" and "int"
Type error in application: incompatible operand type
Operator: nat :: int ⇒ nat
Operand: set :: ??'a list ⇒ ??'a set
My aim in this lemma is to move the type information from fixes into the term/statement. How do I make statements about the type of a variable in the term (in inner syntax)?
Maybe, if I am trying to avoid the fixes clause (in which the variables may be declared), I should use a full ∀ expression like:
lemma spec_final_3:
shows "∀ a :: nat . ∀ b :: nat . ( (a > b) ⟹ two_integer_max_case_def a b = a)"
by (simp add: two_integer_max_case_def_def)
But it is failing syntactically as well with the error message:
Inner syntax error: unexpected end of input⌂
Failed to parse prop
So: is it possible to include type annotations in the term directly (without a fixes clause), and is there any difference between a fixes clause and a type annotation in the term? Maybe such differences only start to appear during (semi)automatic proofs, e.g., when simplification or other tactics are applied?
nat set is parsed as the application of the function nat to set, which does not type-check (hence the clash above). The set of natural numbers can be expressed as UNIV :: nat set. Then, spec_2 reads
lemma spec_2:
shows "a ∈ (UNIV :: nat set) ∧ b ∈ (UNIV :: nat set) ∧ a > b ⟹
two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
However, a more natural way would be to include the type information in spec_1 without a fixes clause:
lemma spec_1':
shows "(a :: nat) > (b :: nat) ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
∀ belongs to HOL, so the HOL implication ⟶ (rather than the meta-implication ⟹) must be used in spec_final_3:
lemma spec_final_3:
shows "∀ a :: nat. ∀ b :: nat. a > b ⟶ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)
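As an aside (a small addition), the HOL form can be turned back into a meta-level rule with the rule_format attribute, which replaces the object-level ∀ and ⟶ by their meta-level counterparts:
thm spec_final_3[rule_format]
(* ?a > ?b ⟹ two_integer_max_case_def ?a ?b = ?a *)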
spec_1 can also be rewritten using explicit meta-logic quantification (and implication) to look similar to spec_final_3:
lemma spec_1'':
shows "⋀ a :: nat. ⋀ b :: nat. a > b ⟹ two_integer_max_case_def a b = a"
by (simp add: two_integer_max_case_def_def)

Using the type-to-sets approach for defining quotients

Isabelle has some automation for quotient reasoning through the quotient package. I would like to see if that automation is of any use for my example. The relevant definition is:
definition e_proj where "e_proj = e'_aff_bit // gluing"
So I try to write:
typedef e_aff_t = "e'_aff_bit"
quotient_type e_proj_t = "e'_aff_bit" / "gluing"
However, I get the error:
Extra type variables in representing set: "'a"
The error(s) above occurred in typedef "e_aff_t"
Because, as Manuel Eberl explains here, we cannot have type definitions that depend on type parameters. In the past, it was suggested that I use the type-to-sets approach.
How would that approach work in my example? Would it lead to more automation?
In the past, it was suggested that I use the type-to-sets approach ...
The suggestion that was made in my previous answer was to use the standard set-based infrastructure for reasoning about quotients. I only mentioned that there exist other options for completeness.
I still believe that it is best not to use Types-To-Sets, provided that the definition of a quotient type is the only reason why you wish to use Types-To-Sets:
Even with Types-To-Sets, you will only be able to mimic the behavior of a quotient type in a local context with certain additional assumptions. Upon leaving the local context, the theorems that use locally defined quotient types would need to be converted to set-based theorems that inevitably rely on the standard set-based infrastructure for reasoning about quotients.
One would need to develop additional Isabelle/ML infrastructure before the Local Typedef Rule can be used to conveniently define quotient types locally. It should not be too difficult to develop an infrastructure that is usable, but it would take some time to develop something that is universally applicable. Personally, I do not consider this application to be sufficiently important to invest my time in it.
In my view, it is only viable to use Types-To-Sets for the definition of quotient types locally if you are already using Types-To-Sets for its intended purpose in a given development. Then, the possibility of using the framework for the definition of quotient types locally can be seen as a 'value-added benefit'.
For completeness, I provide an example that I developed for an answer on the mailing list some time ago. Of course, this is merely a demonstration of the concept, not a solution that can be used for work that is meant to be published in some form. To make this usable, one would need to convert this development into an Isabelle/ML command that takes care of all the details automatically.
theory Scratch
imports Main
"HOL-Types_To_Sets.Prerequisites"
"HOL-Types_To_Sets.Types_To_Sets"
begin
locale local_typedef =
fixes R :: "['a, 'a] ⇒ bool"
assumes is_equivalence: "equivp R"
begin
(*The exposition subsumes some of the content of HOL/Types_To_Sets/Examples/Prerequisites.thy*)
context
fixes S and s :: "'s itself"
defines S: "S ≡ {x. ∃u. x = {v. R u v}}"
assumes Ex_type_definition_S:
"∃(Rep::'s ⇒ 'a set) (Abs::'a set ⇒ 's). type_definition Rep Abs S"
begin
definition "rep = fst (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep
Abs S)"
definition "Abs = snd (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep
Abs S)"
definition "rep' a = (SOME x. a ∈ S ⟶ x ∈ a)"
definition "Abs' x = (SOME a. a ∈ S ∧ a = {v. R x v})"
definition "rep'' = rep' o rep"
definition "Abs'' = Abs o Abs'"
lemma type_definition_S: "type_definition rep Abs S"
unfolding Abs_def rep_def split_beta'
by (rule someI_ex) (use Ex_type_definition_S in auto)
lemma rep_in_S[simp]: "rep x ∈ S"
and rep_inverse[simp]: "Abs (rep x) = x"
and Abs_inverse[simp]: "y ∈ S ⟹ rep (Abs y) = y"
using type_definition_S
unfolding type_definition_def by auto
definition cr_S where "cr_S ≡ λs b. s = rep b"
lemmas Domainp_cr_S = type_definition_Domainp[OF type_definition_S cr_S_def, transfer_domain_rule]
lemmas right_total_cr_S = typedef_right_total[OF type_definition_S cr_S_def, transfer_rule]
and bi_unique_cr_S = typedef_bi_unique[OF type_definition_S cr_S_def, transfer_rule]
and left_unique_cr_S = typedef_left_unique[OF type_definition_S cr_S_def, transfer_rule]
and right_unique_cr_S = typedef_right_unique[OF type_definition_S cr_S_def, transfer_rule]
lemma cr_S_rep[intro, simp]: "cr_S (rep a) a" by (simp add: cr_S_def)
lemma cr_S_Abs[intro, simp]: "a∈S ⟹ cr_S a (Abs a)" by (simp add: cr_S_def)
(* this part was sledgehammered - please do not pay attention to the (absence of) proof style *)
lemma r1: "∀a. Abs'' (rep'' a) = a"
unfolding Abs''_def rep''_def comp_def
proof-
{
fix s'
note repS = rep_in_S[of s']
then have "∃x. x ∈ rep s'" using S equivp_reflp is_equivalence by force
then have "rep' (rep s') ∈ rep s'"
using repS unfolding rep'_def by (metis verit_sko_ex')
moreover with is_equivalence repS have "rep s' = {v. R (rep' (rep s')) v}"
by (smt CollectD S equivp_def)
ultimately have arr: "Abs' (rep' (rep s')) = rep s'"
unfolding Abs'_def by (smt repS some_sym_eq_trivial verit_sko_ex')
have "Abs (Abs' (rep' (rep s'))) = s'" unfolding arr by (rule rep_inverse)
}
then show "∀a. Abs (Abs' (rep' (rep a))) = a" by auto
qed
lemma r2: "∀a. R (rep'' a) (rep'' a)"
unfolding rep''_def rep'_def
using is_equivalence unfolding equivp_def by blast
lemma r3: "∀r s. R r s = (R r r ∧ R s s ∧ Abs'' r = Abs'' s)"
apply(intro allI)
apply standard
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def by auto
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def
by (smt Abs''_def Abs'_def CollectD S comp_apply local.Abs_inverse mem_Collect_eq someI_ex)
done
definition cr_Q where "cr_Q = (λx y. R x x ∧ Abs'' x = y)"
lemma quotient_Q: "Quotient R Abs'' rep'' cr_Q"
unfolding Quotient_def
apply(intro conjI)
subgoal by (rule r1)
subgoal by (rule r2)
subgoal by (rule r3)
subgoal by (rule cr_Q_def)
done
(* instantiate the quotient lemmas from the theory Lifting *)
lemmas Q_Quotient_abs_rep = Quotient_abs_rep[OF quotient_Q]
(*...*)
(* prove the statements about the quotient type 's *)
(*...*)
(* transfer the results back to 'a using the capabilities of transfer - not demonstrated in the example *)
lemma aa: "(a::'a) = (a::'a)"
by auto
end
thm aa[cancel_type_definition]
(* this shows {x. ∃u. x = {v. R u v}} ≠ {} ⟹ ?a = ?a *)
end

How to use a definition written on locale parameters in the assumptions of the locale?

If there is some definition on the parameters of a locale that would make the assumptions of the locale easier to write, read, or understand (either because the function is quite complicated, so it would simplify the statement of the assumptions, or because its name makes the assumptions easier to follow), what is the best way to define that function?
As a contrived example, say we want to incorporate the function fg into the statement of the assumptions (not actually useful here of course):
locale defined_after =
fixes f :: "'a ⇒ 'b ⇒ 'c"
and g :: "'b ⇒ 'a"
assumes "∀a. ∃b. f a b = f (g b) b"
and univ: "(UNIV::'b set) = {b}"
begin
definition fg :: "'b ⇒ 'c" where
"fg b ≡ f (g b) b"
lemma "∀b b'. fg b = fg b'" using univ the_elem_eq by (metis (full_types))
(* etc *)
end
One might think to use defines:
locale defined_during =
fixes f :: "'a ⇒ 'b ⇒ 'c"
and g :: "'b ⇒ 'a"
and fg :: "'b ⇒ 'c"
defines fg_def: "fg b ≡ f (g b) b"
assumes "∀a. ∃b. f a b = fg b"
and univ: "(UNIV::'b set) = {b}"
begin
lemma "∀b b'. fg b = fg b'" using univ the_elem_eq by (metis (full_types))
end
but the locales.pdf document seems to suggest it is deprecated (though in favour of what, I'm not sure):
The grammar is complete with the exception of the context elements constrains and defines, which are provided for backward compatibility.
Ctrl-hovering over fg in the lemma in the locale defined_after names it as constant "local.fg", whereas in defined_during it is reported as "fixed fg" (a free variable). It does, however, achieve defined_after_def being equal to defined_during_def (i.e. there are no additional parameters or assumptions in the latter), which the third option does not:
locale extra_defined_during =
fixes f :: "'a ⇒ 'b ⇒ 'c"
and g :: "'b ⇒ 'a"
and fg :: "'b ⇒ 'c"
assumes fg_def: "fg b ≡ f (g b) b"
and "∀a. ∃b. f a b = fg b"
and univ: "(UNIV::'b set) = {b}"
begin
lemma "∀b b'. fg b = fg b'" using univ the_elem_eq by (metis (full_types))
end
which also has the same Ctrl-hover text for fg in the lemma as the defined_during locale does.
Maybe there's something about it in one of the PDFs on the website, or in the NEWS file, but I can't find anything obvious. isar-ref.pdf makes a comment:
Both assumes and defines elements contribute to the locale specification. When defining an operation derived from the parameters, definition (§5.4) is usually more appropriate.
But I'm not sure how to use this information. Presumably it is saying that when one doesn't gain much by doing what I am asking about, one should proceed as in the locale defined_after (unless the quote means one can use definition inside a locale definition), which is not what I want. (As an aside: the first sentence of this quote would have suggested to me that defines is somehow equivalent to the third option, which introduces an extra parameter and assumption, but that isn't the case. Maybe understanding what the possibly-subtler-than-it-appears Isabelle jargon "locale specification" means would explain what causes the Ctrl-hover text to differ between the first and second option; I don't know.)
The specification element defines is indeed not something I would recommend using. It goes back to a time when definition was not available inside a locale context and all definitions had to be done in the locale declaration itself.
Nowadays, the standard approach to your problem is to split the locale into two parts: First define a locale l1 without the complicated assumption, but with the relevant parameters. (If you need some assumptions to justify the definition, e.g. for the termination proof of a function, include those assumptions.) Then define your function fg inside l1 as usual. Finally, define your actual locale l as an extension of l1. You can then use the definition of fg in the assumptions of l.
locale l = l1 + assumes "... fg ..."
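Applied to the contrived example above, the split might look as follows (a sketch using the answer's naming; not code from the original answer):
locale l1 =
fixes f :: "'a ⇒ 'b ⇒ 'c"
and g :: "'b ⇒ 'a"
begin
definition fg :: "'b ⇒ 'c" where
"fg b ≡ f (g b) b"
end
locale l = l1 +
assumes "∀a. ∃b. f a b = fg b"
and univ: "(UNIV::'b set) = {b}"
Now fg is a proper constant (local.fg) inside l, and the assumptions of l may freely mention it.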

How to generate code for the existential quantifier

Here is a sample theory:
datatype ty = A | B | C
inductive test where
"test A B"
| "test B C"
inductive test2 where
"¬(∃z. test x z) ⟹ test2 x"
code_pred [show_modes] test .
code_pred [show_modes] test2 .
values "{x. test2 A}"
The generated code tries to enumerate over ty, and so it fails.
I'm trying to define an executable version of the test predicate:
definition "test_ex x ≡ ∃y. test x y"
definition "test_ex_fun x ≡
Predicate.singleton (λ_. False)
(Predicate.map (λ_. True) (test_i_o x))"
lemma test_ex_code [code_abbrev, simp]:
"test_ex_fun = test_ex"
apply (intro ext)
unfolding test_ex_def test_ex_fun_def Predicate.singleton_def
apply (simp split: if_split)
But I can't prove the lemma. Could you suggest a better approach?
Existential quantifiers over an argument to an inductive predicate can be made executable by introducing another inductive predicate. For example:
inductive test2_aux where "test x z ⟹ test2_aux x"
inductive test2 where "¬ test2_aux x ⟹ test2 x"
with appropriate code_pred statements. The free variable z in the premise of test2_aux acts like an existential. Since this transformation is canonical, code_pred has a preprocessor to do so:
code_pred [inductify] test2 .
does the job.
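For the manual route, the accompanying code_pred statements might look like this (a sketch; the negated premise in test2 restricts it to input mode, where test2_aux is decidable):
code_pred test2_aux .
code_pred test2 .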
Well, values complains that ty is not of sort enum. So, in this particular case it is easiest to perform this instantiation:
instantiation ty :: enum
begin
definition enum_ty :: "ty list" where
"enum_ty = [A,B,C]"
definition "enum_all_ty f = list_all f [A,B,C]"
definition "enum_ex_ty f = list_ex f [A,B,C]"
instance
proof (intro_classes)
let ?U = "UNIV :: ty set"
show id: "?U = set enum_class.enum"
unfolding enum_ty_def
using ty.exhaust by auto
fix P
show "enum_class.enum_all P = Ball ?U P"
"enum_class.enum_ex P = Bex ?U P"
unfolding id enum_all_ty_def enum_ex_ty_def enum_ty_def by auto
show "distinct (enum_class.enum :: ty list)" unfolding enum_ty_def by auto
qed
Afterwards, your values command evaluates without problems.
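With the enum instance in place, the existential itself also becomes executable via enum_ex; as a quick illustration (an addition assuming the instantiation above, not part of the original answer):
value "∃z. test A z"
(* evaluates to True, by checking the candidates [A, B, C] *)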
I thought the lemma was unprovable and that I should find another approach, but it can be proven as follows:
lemma test_ex_code [code_abbrev, simp]:
"Predicate.singleton (λ_. False)
(Predicate.map (λ_. True) (test_i_o x)) = (∃y. test x y)"
apply (intro ext iffI)
unfolding Predicate.singleton_def
apply (simp_all split: if_split)
apply (metis SUP1_E mem_Collect_eq pred.sel test_i_o_def)
apply (intro conjI impI)
apply (smt SUP1_E the_equality)
apply (metis (full_types) SUP1_E SUP1_I mem_Collect_eq pred.sel test_i_o_def)
done
The interesting thing is that the lemma structure and the proof structure seem to be independent of the concrete predicate. I guess there could be a general solution for any predicate.
