I have defined a locale and proved a few theorems inside it. Now I need to use them outside of this locale/context. How can I do so?
Can I obtain a theorem whose hypotheses are extended with the locale's assumptions (like it is done in Coq)?
locale mylocale =
  assumes H1: ‹a ∈ A›
begin

theorem thm: ‹a ∈ A›
  by (rule H1)

end
I need to obtain the theorem "a ∈ A ==> a ∈ A" from the thm proved above. (I don't need exactly this theorem; it is just the simplest example of obtaining a theorem with an extended set of assumptions. Inside mylocale, thm has zero hypotheses.)
Every definition and theorem in a locale context generates a global version. You can access this global version (generalized over the locale parameters and extended with the locale assumptions) using locale_name.constant_name or locale_name.theorem_name. In your example, mylocale.thm gives you what you want.
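As a quick check outside the locale (the exact form of the printed premise depends on how the locale packages its assumptions; it may appear as the locale predicate mylocale, which can be unfolded with mylocale_def):

thm mylocale.thm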
If you need several theorems without the generalization over the locale parameters, you can interpret the locale in an unnamed context that fixes the parameters and assumes the assumptions. Here's an example:
locale l = fixes a :: 'a assumes "a ~= undefined" begin
definition foo :: 'a where "foo = a"
lemma lem: "a = foo" by(simp add: foo_def)
end

thm l.lem (* a is generalized to ?a *)

consts bar :: nat

context assumes *: "bar ~= undefined" begin
interpretation bar: l bar by unfold_locales (fact *)
thm bar.lem (* a is instantiated with bar *)
end
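Once the anonymous context is closed by end, the interpreted copies are gone again; only the generalized fact remains available:

thm l.lem  (* still works globally, with a generalized to ?a *)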
Assume I have a function:
definition "foo_function x = x+1"
And I have some code which processes functions. For the sake of simplicity, let this code be the identity function id. I want to print an example, namely that
id foo_function = foo_function
The printing shall appear in the proof doc and I want it to be checked. How can I best achieve this?
I already tried a few things:
value ‹id foo_function› raises Wellsortedness error ...
value[simp] ‹id foo_function› returns foo_function in the output panel and is quite close to what I'm looking for. But I really want to print id foo_function = foo_function and have this checked. I don't want folks to go to the output panel.
lemma ‹id foo_function = foo_function› by eval is what I'd like to have but it fails with Wellsortedness error: Type 'a ⇒ 'a not of sort equal
lemma ‹id foo_function = foo_function› by(code_simp) fails with Wellsortedness error: Type 'a ⇒ 'a not of sort equal
lemma ‹id foo_function = foo_function› by(normalization) fails with Wellsortedness error: Type 'a ⇒ 'a not of sort equal
I know that the underlying problem is equality of functions, which is absolutely not trivial and I don't expect eval to solve this. Yet, value[simp] ‹id foo_function› displays in the output panel exactly what I want to see, which gives me hope that there is a way to achieve what I'm looking for.
In my simplified example, lemma ‹id foo_function = foo_function› by(simp add: foo_function_def) would work. But for my real problem, there are way too many definitions which need unfolding to make this a pleasant choice. In particular, since I want to print a lot of examples.
Here's one possible solution:
ML‹
fun bar_conv ctxt =
  Conv.arg_conv (Conv.arg1_conv (Code_Simp.dynamic_conv ctxt) then_conv
    Conv.arg_conv (Code_Simp.dynamic_conv ctxt))

fun bar_tac ctxt =
  HEADGOAL (CONVERSION (bar_conv ctxt) THEN_ALL_NEW (resolve_tac ctxt @{thms refl}))
›
method_setup bar = ‹Scan.succeed (SIMPLE_METHOD o bar_tac)›
lemma ‹id foo_function = foo_function›
by bar
What does it do?
bar_conv defines a conv (a rewriting function) that rewrites both sides of the equation with Code_Simp.dynamic_conv, essentially the same rewriting that code_simp does, except that the equality sign itself is left untouched. (The outer arg_conv dives into the Trueprop that wraps every lemma statement.)
bar_tac turns the conv into a tactic, then applies the refl theorem, which discharges the remaining goal of the shape x = x.
method_setup creates an Isar binding for the method.
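As a small follow-up check (assuming the setup above is loaded): both sides are normalised independently before refl fires, so nested occurrences of id are handled as well.

lemma ‹id (id foo_function) = id foo_function›
  by bar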
There's an Isabelle/HOL library that I want to build on with new definitions and proofs. The library defines locale2, which I'd like to build upon. Inside locale2, there's an interpretation of locale1.
To extend locale2 in a separate theory, I define locale3 = locale2. Inside locale3, however, I can't figure out how to access locale2's interpretation of locale1. How can I do that? (Am I even going about this in the right way at all?)
Below is an MWE. This is the library theory with the locale I want to extend:
theory ExistingLibrary
imports Main
begin
(* this is the locale with the function I want *)
locale locale1 = assumes True
begin
fun inc :: "nat ⇒ nat"
where "inc n = n + 1"
end
(* this is the locale that interprets the locale I want *)
locale locale2 = assumes True
begin
interpretation I: locale1
by unfold_locales auto
end
end
This is my extension theory. My attempt is at the bottom, causing an error:
theory MyExtension
imports ExistingLibrary
begin
locale locale3 = locale2
begin
definition x :: nat
where "x = I.inc 7" (* Undefined constant: "I.inc" *)
end
end
Interpretations inside a context last only until the end of the context. When the context is entered again, you have to repeat the interpretation to make the definitions and theorems available:
locale locale3 = locale2
begin
interpretation I: locale1 <proof>
For this reason, I recommend splitting the first interpretation step into two parts:
A lemma with a name that proves the goal of the interpretation step.
The interpretation command itself, which can then be discharged by (rule lemma_name) (see the sketch below).
If you want the interpretation to take place whenever you open the locale and whenever you interpret the locale, then sublocale instead of interpretation might be better.
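Here is a minimal sketch of that split, using the locales from the question (the name locale2_is_locale1 is invented for this example):

lemma locale2_is_locale1: "locale1"
  by unfold_locales auto

locale locale3 = locale2
begin

interpretation I: locale1
  by (rule locale2_is_locale1)

(* or, to make it persistent for every interpretation of locale3:
   sublocale I: locale1 by (rule locale2_is_locale1) *)

definition x :: nat
  where "x = I.inc 7"

end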
When using the quotient_type command I get the following warning: "No map function defined for Example.A. This will cause problems later on".
Here is a minimal example that triggers the warning (tested with Isabelle2017):
theory Example
imports
Main
begin
datatype 'a A = B "'a A" | C
(*for map: map *) (* uncommenting doesn't fix the warning*)
quotient_type 'a Q = "'a A" / "op ="
by (rule identity_equivp)
end
So my questions are:
What is meant by a map function in this context? (I only know the concept of a map function from functors in functional programming.)
What does it have to do with the datatype package's map functions, like the one that would be generated by the commented line?
Which problems will one get later on?
The datatype command does not by default register the generated map function with the quotient package, because there may be more general mappers (in case there are dead type variables). You therefore have to make the functor declaration manually:
functor map_A
by(simp_all add: A.map_id0 A.map_comp o_def)
The mapper and its theorems are needed if you later want to lift definitions through the quotient type. This has been discussed on the Isabelle mailing list.
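Putting this together with the question's example (Isabelle2017 syntax), the warning should disappear once the functor is registered before the quotient is taken:

datatype 'a A = B "'a A" | C

functor map_A
  by (simp_all add: A.map_id0 A.map_comp o_def)

quotient_type 'a Q = "'a A" / "op ="
  by (rule identity_equivp)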
Is there any known hack that allows custom syntax for definitions inside a given locale, using the syntax/translation mechanism? All of my attempts at an "obvious" solution are generating type errors, which I am led to believe is caused by syntax/translation not yet being made "locale-aware".
Raw AST transformations with syntax and translations cannot be used inside locales in Isabelle2016. There is a workaround for constants and types whose declaration does not depend on locale parameters: you merely have to issue the syntax declaration outside of the locale for the appropriate constant from the background theory. Below is a proof of concept:
locale test = fixes a :: nat begin
definition foo :: "nat ⇒ nat" where "foo x = x"
end

syntax "_foo" :: "nat ⇒ bool" ("FOO")
translations "FOO" ↽ "CONST test.foo"

context test begin
term foo
end
This workaround does not work for constants that depend on parameters of the locale, because then the constant in the background theory takes these parameters as additional arguments, and the locale installs an abbreviation which is folded before the custom syntax translation fires.
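To illustrate the limitation, here is a hypothetical variant where the constant mentions the locale parameter (test2 and shift are invented names):

locale test2 = fixes a :: nat begin
definition shift :: "nat ⇒ nat" where "shift x = x + a"
end

context test2 begin
term shift  (* inside the locale this is an abbreviation for test2.shift a,
               so a print rule on CONST test2.shift never gets to fire *)
end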
(NOTE: If I can get rid of the warning I show below, then much of what I say here becomes extraneous. As part of asking the question, I also do some opinionating, which I guess amounts to asking "Why am I wrong in what I say here?")
It seems that 6 of the symbols used for bool operators should have been assigned to syntactic type classes, and bool instantiated for those type classes. In particular, these:
~, &, |, \<not>, \<and>, \<or>.
Because type annotation of terms is a frequent requirement for HOL operators, I don't think it would be a great burden to have to use bool annotations for those 6 operators.
I would like to overload those 6 symbols for other logical operators. Not having the usual symbols for an application can result in there being no good solution for notation.
In the following example source, if I can get rid of the warnings, then the problem is solved (unless I would be setting a trap for myself):
definition natOP :: "nat => nat => nat" where
"natOP x y = x"
definition natlistOP :: "nat list => nat list => nat list" where
"natlistOP x y = x"
notation
natOP (infixr "&" 35)
notation
natlistOP (infixr "&" 35)
term "True & False"
term "2 & (2::nat)"
term "[2] & [(2::nat)]" (*
OUTPUT: Ambiguous input produces 3 parse trees:
...
Fortunately, only one parse tree is well-formed and type-correct,
but you may still want to disambiguate your grammar or your input.*)
Can I get rid of the warnings? It seems that since there is a type-correct term, there shouldn't be a problem.
There are actually other symbols I also want, such as !, used for lists:
term "[1,2,3] ! 1"
Here's the application for which I want the symbols:
Verilog HDL Operators.
Update
Based on Brian Huffman's answer, I remove the notation for & and switch & over to a syntactic type class. Either it will work out or it won't; binary logic is, after all, diversely applicable. My general rule is "don't mess with default Isabelle/HOL".
(*|Unnotate; switch to a type class; see someday why this is a bad idea.|*)
no_notation conj (infixr "&" 35)
class conj =
fixes syntactic_type_classes_are_awesome :: "'a => 'a => 'a" (infixr "&" 35)
instantiation bool :: conj
begin
definition syntactic_type_classes_are_awesome_bool :: "bool => bool => bool"
where "p & q == conj p q"
instance ..
end
term "True & False"
value "True & False"
declare[[show_sorts]]
term "p & q" (* "(p::'a::conj) & (q::'a::conj)" :: "'a::conj" *)
You can "undeclare" special syntax using the no_notation command, e.g.
no_notation conj (infixr "\<and>" 35)
This infix operator is then available to be used with your own functions:
notation myconj (infixr "\<and>" 35)
From here on, \<and> refers to the constant myconj instead of the HOL library's standard conjunction operator for type bool. You should have no warnings about ambiguous syntax. The original HOL boolean operator is still accessible by name (conj), or you can give it a different syntax if you want with another notation command.
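For example (the replacement syntax AND is just an illustration):

notation conj (infixr "AND" 35)

term "True AND False"  (* parses as conj True False *)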
For the no_notation command to work, the pattern and fixities must be exactly the same as they were declared originally. See src/HOL/HOL.thy for the declarations of the operators you are interested in.
I should warn about a potential pitfall: subsequent theory merges can bring the original syntax back into scope, causing ambiguous syntax again. For example, say your theory A.thy imports Main and redeclares the \<and> syntax. If your theory B.thy then imports both A and another library theory, say Complex_Main, \<and> will be ambiguous in theory B. To prevent this, put all your external theory imports in the one theory file where you change the syntax, and have all of your other theories import this one.
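A sketch of that layout (theory and constant names here are hypothetical):

theory Syntax_Setup
  imports Complex_Main  (* gather every external library import in this one file *)
begin

definition myconj :: "bool ⇒ bool ⇒ bool"  (* stand-in for your own operator *)
  where "myconj p q = conj p q"

no_notation conj (infixr "\<and>" 35)
notation myconj (infixr "\<and>" 35)

end

Every other theory of the development then imports Syntax_Setup rather than Main or Complex_Main directly, so no later theory merge can reintroduce the original syntax.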