quotient_type warning "no map function" - isabelle

When using the quotient_type command, I get the following warning: "No map function defined for Example.A. This will cause problems later on".
Here is a minimal example that triggers the warning (tested with Isabelle2017).
theory Example
imports
Main
begin
datatype 'a A = B "'a A" | C
(*for map: map *) (* uncommenting doesn't fix the warning*)
quotient_type 'a Q = "'a A" / "op ="
by (rule identity_equivp)
end
So my questions are:
What is meant by a map function in this context? (I only know the concept of a map function from functors in functional programming.)
What does it have to do with the datatype package's map functions, like the one that would be generated by the commented-out line?
Which problems will one get later on?

The datatype command does not by default register the generated map function with the quotient package because there may be more general mappers (in case there are dead type variables). You therefore must do the functor declaration manually:
functor map_A
by(simp_all add: A.map_id0 A.map_comp o_def)
The mapper and its theorems are needed if you later want to lift definitions through the quotient type. This has been discussed on the Isabelle mailing list.
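For reference, here is a sketch of the example above with the functor declaration added before the quotient construction, so that the warning should no longer appear (same proofs as given above):
theory Example
imports
Main
begin
datatype 'a A = B "'a A" | C
(* register the datatype's map function with the functor infrastructure,
   so that the quotient package can use it *)
functor map_A
by(simp_all add: A.map_id0 A.map_comp o_def)
quotient_type 'a Q = "'a A" / "op ="
by (rule identity_equivp)
end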


Access a definition from a locale

Specific example: Let's say I have demonstrated that I have a graph in the sense of https://www.isa-afp.org/theories/category3/#FreeCategory.html :
lemma i_have_a_graph: shows "graph Obj Arr Dom Cod"
sorry
where the symbols Obj, Arr, Dom, and Cod were defined earlier in the file. This gives me access to the lemmas and theorems stated within the graph locale.
How do I use the symbol path defined within the graph locale?
Related question with no answers: Access definitions from sublocale
You don't do this with a lemma but with the interpretation command:
interpretation MyGraph: graph Obj Arr Dom Cod <proof>
Of course, the <proof> could use your lemma, but you don't need to prove such a lemma separately.
Now MyGraph.path refers to the path component of this instance.
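A minimal sketch putting this together, reusing the lemma from the question to discharge the interpretation goal:
interpretation MyGraph: graph Obj Arr Dom Cod
by (rule i_have_a_graph)
term MyGraph.path (* now refers to the instantiated path *)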
See https://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle/doc/locales.pdf

Isabelle tactic definition

I was trying to copy a tactic file that might help prove my theorem; however, there seem to be some problems.
The original tactic is like this:
fun typechk_step_tac tyrls =
FIRSTGOAL (test_assume_tac ORELSE' filt_resolve_tac tyrls 4);
(* Output:
Value or constructor (filt_resolve_tac) has not been declared
*)
I tried to find this tactic on the internet, but there is not much explanation of it. I saw that a theory file from 2009 uses this tactic, while a theory file for Isabelle2020 uses a similar one called filt_resolve_from_net_tac; I think their types are different, so I am not sure how to use them.
Besides filt_resolve_tac, the tactic file also uses a function called ref, like this:
val basictypeinfo = ref([]:thm list);
(* Output:
Value or constructor (ref) has not been declared
*)
However, Isabelle2020 does seem to know something about ref, because when I changed the code to:
val basictypeinfo = [];
fun addT typelist = (basictypeinfo := (typelist #(!basictypeinfo)));
(* It shows error:
Type error in function application.
Function: ! : 'a ref -> 'a
Argument: basictypeinfo : 'a list
Reason: Can't unify 'a ref (*In Basis*) with 'a list (*In Basis*) (Different type constructors)
*)
This clearly shows that ref is a type, and that it is defined in Isabelle, right?
Therefore the ref in ref([]: thm list) should be similar to a casting function. I found that there is a thing called Unsynchronized.ref which solves the type problem; may I ask whether they are the same thing in this context?
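For illustration, a minimal sketch of how the original declaration could be written with Unsynchronized.ref (using @ for list append; the top-level ! and := still apply to such a reference):
(* sketch: create the mutable reference explicitly as Unsynchronized.ref *)
val basictypeinfo = Unsynchronized.ref ([] : thm list);
fun addT typelist = basictypeinfo := typelist @ (!basictypeinfo);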
In the later part of the file, there are also some tactics and rule sets that seem not to be defined, for example:
etac: Value or constructor (etac) has not been declared
(*
I saw that Prof. Paulson showed this tactic in his Isabelle lecture,
but I couldn't find it in the Isabelle manual or the implementation manual;
has its name been changed?
*)
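As a hedged aside (not from the original file): recent Isabelle releases replaced the context-free tactics by context-aware ones, so the classic etac rule i is nowadays spelled roughly as follows:
(* sketch: eresolve_tac takes the proof context and a list of rules,
   so  etac rule i  becomes  eresolve_tac ctxt [rule] i  for a context ctxt *)
fun etac ctxt rule = eresolve_tac ctxt [rule];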
ZF_typechecks: I couldn't find any rule set with this name in the whole ZF directory.
Sorry for having so many questions. It seems that the tactic file is no longer really supported by current Isabelle. Are there still people using Isabelle/ML to define new tactics, or do most people do this with method declarations in Isabelle? And which part of the Isabelle/Isar reference manual should I read to master this skill? Thank you very much indeed.

Isabelle/HOL: access interpretation in another locale

There's an Isabelle/HOL library that I want to build on with new definitions and proofs. The library defines locale2, which I'd like to build upon. Inside locale2, there's an interpretation of locale1.
To extend locale2 in a separate theory, I define locale3 = locale2. Inside locale3, however, I can't figure out how to access locale2's interpretation of locale1. How can I do that? (Am I even going about this in the right way at all?)
Below is an MWE. This is the library theory with the locale I want to extend:
theory ExistingLibrary
imports Main
begin
(* this is the locale with the function I want *)
locale locale1 = assumes True
begin
fun inc :: "nat ⇒ nat"
where "inc n = n + 1"
end
(* this is the locale that interprets the locale I want *)
locale locale2 = assumes True
begin
interpretation I: locale1
by unfold_locales auto
end
end
This is my extension theory. My attempt is at the bottom, causing an error:
theory MyExtension
imports ExistingLibrary
begin
locale locale3 = locale2
begin
definition x :: nat
where "x = I.inc 7" (* Undefined constant: "I.inc" *)
end
end
Interpretations inside a context last only until the end of the context. When the context is entered again, you have to repeat the interpretation to make the definitions and theorems available:
locale locale3 = locale2 begin
interpretation I: locale1 <proof>
For this reason, I recommend splitting the first interpretation step into two parts:
A named lemma that proves the goal of the interpretation step.
The interpretation command itself, which can then be proved by (rule ...) using that lemma.
If you want the interpretation to take place whenever you open the locale and whenever you interpret the locale, then sublocale instead of interpretation might be better.
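Applied to the MWE above, the extension theory might look like this (a sketch; the proof is the same unfold_locales step already used in the library):
theory MyExtension
imports ExistingLibrary
begin
locale locale3 = locale2
begin
(* repeat the interpretation inside the new context *)
interpretation I: locale1
by unfold_locales auto
definition x :: nat
where "x = I.inc 7"
end
end
Alternatively, a sublocale declaration issued once at the theory level (again a sketch) makes I.inc available whenever locale2 or locale3 is entered or interpreted:
sublocale locale2 ⊆ I: locale1
by unfold_locales auto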

Using syntax/translations with locales

Is there any known hack that allows custom syntax for definitions inside a given locale, using the syntax/translation mechanism? All of my attempts at an "obvious" solution are generating type errors, which I am led to believe is caused by syntax/translation not yet being made "locale-aware".
Raw AST transformations with syntax and translations cannot be used inside locales in Isabelle2016. There is a workaround for constants and types whose declaration does not depend on locale parameters: you merely have to issue the syntax declaration outside of the locale for the appropriate constant from the background theory. Below is a proof of concept:
locale test = fixes a :: nat begin
definition foo :: "nat ⇒ nat" where "foo x = x"
end
syntax "_foo" :: "nat ⇒ bool" ("FOO")
translations "FOO" ↽ "CONST test.foo"
context test begin
term foo
end
This workaround does not work for constants that depend on parameters of the locale, because then the constant in the background theory takes these parameters as additional arguments, and the locale installs an abbreviation that is folded before the custom syntax translation fires.
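For illustration, a hypothetical sketch of the failing case (bar and its body are made up for this example):
context test begin
(* bar mentions the locale parameter a, so the background constant test.bar
   takes a as an extra argument, and inside the locale "bar" is only an
   abbreviation for "test.bar a" *)
definition bar :: "nat ⇒ nat" where "bar x = x + a"
end
term "test.bar" (* nat ⇒ nat ⇒ nat: the parameter is explicit *)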

Using type classes to overload notation for constructors (now a namespace issue)

This is a derivative question of Existing constants (e.g. constructors) in type class instantiations.
The short question is this: how can I prevent the error that occurs due to free_constructors, so that I can combine the two theories I include below?
I've been sitting on this for months. The other question helped me move forward (it appears). Thanks to the person who deserves thanks.
The real issue here is about overloading notation, though it looks like I now just have a namespace problem.
At this point, it's not a necessity, just an inconvenience that two theories have to be used. If the system allows it, all of this will disappear, but I ask anyway in the hope of getting a little extra information.
Most of the explanation here goes into the motivation, which may lead to some extra information. I explain a little, then include S1.thy, make a few comments, and then include S2.thy.
Motivation: using syntactic type classes for overloading notation of multiple binary datatypes
The basic idea is that I might have 5 different forms of binary words that have been defined with datatype, and I want to define some binary and hexadecimal notation that's overloaded for all 5 types.
I don't know what all is possible, but the past tells me (by others telling me things) that if I want code generation, then I should use type classes, to get the magic that comes with type classes.
The first theory, S1
Next is the theory S1.thy. What I do is instantiate bool for the type classes zero and one, and then use free_constructors to set up the notation 0 and 1 for use as the bool constructors True and False. It seems to work. This in itself is something I specifically wanted, but didn't know how to do.
I then try to do the same thing with an example datatype, BitA. It doesn't work, because the constant case_BitA is already created when BitA is defined with datatype, which causes a conflict.
Further comments of mine are in the theory source below.
theory S1
imports Complex_Main
begin
declare[[show_sorts]]
(*---EXAMPLE, NAT 0: IT CAN BE USED AS A CONSTRUCTOR.--------------------*)
fun foo_nat :: "nat => nat" where
"foo_nat 0 = 0"
(*---SETTING UP BOOL TRUE & FALSE AS 0 AND 1.----------------------------*)
(*
I guess it works, because 'free_constructors' was used for 'bool' in
Product_Type.thy, instead of in this theory, like I try to do with 'BitA'.
*)
instantiation bool :: "{zero,one}"
begin
definition "zero_bool = False"
definition "one_bool = True"
instance ..
end
(*Non-constructor pattern error at this point.*)
fun foo1_bool :: "bool => bool" where
"foo1_bool 0 = False"
find_consts name: "case_bool"
free_constructors case_bool for "0::bool" | "1::bool"
by(auto simp add: zero_bool_def one_bool_def)
find_consts name: "case_bool"
(*found 2 constant(s):
Product_Type.bool.case_bool :: "'a∷type => 'a∷type => bool => 'a∷type"
S1.bool.case_bool :: "'a∷type => 'a∷type => bool => 'a∷type" *)
fun foo2_bool :: "bool => bool" where
"foo2_bool 0 = False"
|"foo2_bool 1 = True"
thm foo2_bool.simps
(*---TRYING TO WORK A DATATYPE LIKE I DID WITH BOOL.---------------------*)
(*
There will be 'S1.BitA.case_BitA', so I can't do it here.
*)
datatype BitA = A0 | A1
instantiation BitA :: "{zero,one}"
begin
definition "0 = A0"
definition "1 = A1"
instance ..
end
find_consts name: "case_BitA"
(*---ERROR NEXT: because there's already S1.BitA.case_BitA.---*)
free_constructors case_BitA for "0::BitA" | "1::BitA"
(*ERROR: Duplicate constant declaration "S1.BitA.case_BitA" vs.
"S1.BitA.case_BitA" *)
end
The second theory, S2
It seems that case_BitA is necessary for free_constructors to set things up, and it occurred to me that maybe I could get it to work by using datatype in one theory and free_constructors in another theory.
It seems to work. Is there a way I can combine these two theories?
theory S2
imports S1
begin
(*---HERE'S THE WORKAROUND. IT WORKS BECAUSE BitA IS IN S1.THY.----------*)
(*
I end up with 'S1.BitA.case_BitA' and 'S2.BitA.case_BitA'.
*)
declare[[show_sorts]]
find_consts name: "BitA"
free_constructors case_BitA for "0::BitA" | "1::BitA"
unfolding zero_BitA_def one_BitA_def
using BitA.exhaust
by(auto)
find_consts name: "BitA"
fun foo_BitA :: "BitA => BitA" where
"foo_BitA 0 = A0"
|"foo_BitA 1 = A1"
thm foo_BitA.simps
end
The command free_constructors always creates a new constant of the given name for the case expression and names the generated theorems in the same way as datatype does, because datatype internally calls free_constructors.
Thus, you have to issue the command free_constructors in a context that changes the name space. For example, use a locale:
locale BitA_locale begin
free_constructors case_BitA for "0::BitA" | "1::BitA" ...
end
interpretation BitA!: BitA_locale .
After that, you can use both A0/A1 and 0/1 as constructors in pattern-matching equations, but you should not mix them in a single equation. Still, A0 and 0 remain different constants to Isabelle. This means that you may have to convert the one into the other manually during proofs, and code generation works only for one of them. You would have to set up the code generator to replace A0 with 0 and A1 with 1 (or vice versa) in the code equations. To that end, you want to declare the equations A0 = 0 and A1 = 1 as [code_unfold], but you will probably also have to write your own preprocessor function in ML that replaces A0 and A1 on the left-hand sides of code equations; see the code generator tutorial for details.
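A sketch of the [code_unfold] declarations mentioned above (the ML preprocessor for left-hand sides is not shown):
lemma [code_unfold]: "A0 = 0" "A1 = 1"
by(simp_all add: zero_BitA_def one_BitA_def)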
Note that if BitA was a polymorphic datatype, packages such as BNF and lifting would continue to use the old set of constructors.
Given these problems, I would really go for the manual definition of the type as described in my answer to another question. This saves you a lot of potential issues later on. Also, if you are really only interested in notation, you might want to consider adhoc_overloading. It works perfectly well with code generation and is more flexible than type classes. However, you cannot talk about the overloaded notation abstractly, i.e., every occurrence of the overloaded constant must be disambiguated to a single use case. In terms of proving, this should not be a restriction, as you assume nothing about the overloaded constant. In terms of definitions over the abstract notation, you would have to repeat the overloading there as well (or abstract over the overloaded definitions in a locale and interpret the locale several times).
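For completeness, a sketch of how the BitA part of S1 and S2 could be merged into a single theory using the locale trick above; the proof obligations are the same ones already discharged in S2:
datatype BitA = A0 | A1
instantiation BitA :: "{zero,one}"
begin
definition "0 = A0"
definition "1 = A1"
instance ..
end
locale BitA_locale
begin
free_constructors case_BitA for "0::BitA" | "1::BitA"
unfolding zero_BitA_def one_BitA_def
using BitA.exhaust
by(auto)
end
interpretation BitA!: BitA_locale .
fun foo_BitA :: "BitA => BitA" where
"foo_BitA 0 = A0"
|"foo_BitA 1 = A1"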
