Nika Pona asked this question on vst-user, and I have the same question:
I have a goal of the form semax _ PROP()LOCAL()SEP(a;b;c;d) and a lemma of the form a*c |-- k or a*c = k. How do I get from SEP(a;b;c;d) to SEP(k;b;d)?
I've fiddled around with the sep_apply tactic but so far I've failed to get it to apply in this context.
sep_apply should work, but you may need to fully instantiate the arguments of the lemma, e.g., sep_apply (my_lemma a c k)
Related
I was trying to copy a tactic file that might help with proving my theorem; however, there seem to be some problems.
The original tactic is like this:
fun typechk_step_tac tyrls =
FIRSTGOAL (test_assume_tac ORELSE' filt_resolve_tac tyrls 4);
(* Output:
Value or constructor (filt_resolve_tac) has not been declared
*)
I tried to find this tactic on the internet, but there is not much explanation of it. I saw that a theory file from 2009 uses it, and a theory file for 2020 uses a similar one called filt_resolve_from_net_tac; I think their types are different, so I am not sure how to use them.
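For what it's worth, here is a minimal sketch of how the old tactic might be written against recent Isabelle/ML, assuming filt_resolve_from_net_tac and Tactic.build_net with their current signatures (a proof context plus a rule net instead of a bare rule list), and with the ZF-specific test_assume_tac replaced by the generic assume_tac; this is untested and only meant to show the shape of the port:
ML ‹
  (* hypothetical port of typechk_step_tac: the rule list is packed
     into a net and the tactic now needs an explicit proof context *)
  fun typechk_step_tac ctxt tyrls =
    FIRSTGOAL (assume_tac ctxt ORELSE'
               filt_resolve_from_net_tac ctxt 4 (Tactic.build_net tyrls));
›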
Besides filt_resolve_tac, the tactic file also used a function called ref, like this:
val basictypeinfo = ref([]:thm list);
(* Output:
Value or constructor (ref) has not been declared
*)
However, Isabelle 2020 does seem to know something about ref, since when I changed the code to the following:
val basictypeinfo = [];
fun addT typelist = (basictypeinfo := (typelist #(!basictypeinfo)));
(* It shows error:
Type error in function application.
Function: ! : 'a ref -> 'a
Argument: basictypeinfo : 'a list
Reason: Can't unify 'a ref (*In Basis*) with 'a list (*In Basis*) (Different type constructors)
*)
This clearly shows that ref is some kind of type, and that it is defined somewhere in Isabelle, right?
Therefore the ref in ref([]:thm list) should be something like a casting function. I found that there is a thing called Unsynchronized.ref which solves the type problem; are they the same thing in this context?
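As background: plain ref is deliberately hidden in Isabelle/ML, and Unsynchronized.ref is the same mutable reference type under its qualified name (its use is discouraged in favour of proper context data). A minimal sketch of the two original declarations in that style, assuming the # in the appending function was meant to be the list-append operator @:
ML ‹
  (* mutable list of type-checking rules, as in the old tactic file *)
  val basictypeinfo = Unsynchronized.ref ([] : thm list);
  (* prepend newly added rules to the stored list *)
  fun addT typelist = basictypeinfo := typelist @ !basictypeinfo;
›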
In the later part of the file, there are also some tactics and rule sets that seem not to be defined, for example:
etac: Value or constructor (etac) has not been declared
(*
I saw Prof. Paulson show this tactic in his Isabelle lecture,
but I couldn't find it in the Isabelle manual or the implementation manual;
has its name been changed?
*)
ZF_typechecks: I couldn't find any rule set with this name in the whole ZF directory.
Sorry to have so many questions; it seems that this tactic file is no longer really supported by recent Isabelle. Are there still people using Isabelle/ML to define new tactics, or do most people do this with method declarations in Isabelle? And which part of the Isabelle/Isar reference manual should I read to master this skill? Thank you very much indeed.
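Regarding etac above: in current Isabelle/ML the resolution tactics take an explicit proof context and a list of rules, so the old etac rule i corresponds to eresolve_tac ctxt [rule] i (likewise atac becomes assume_tac ctxt). A minimal compatibility wrapper, assuming a context is available, could look like this:
ML ‹
  (* old-style etac expressed via the current eresolve_tac *)
  fun etac ctxt rule i = eresolve_tac ctxt [rule] i;
›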
In Isabelle/HOL, how do I find where a given type was instantiated for a given class? For the sake of this post, take as an example: where real was instantiated as a conditionally_complete_linorder. To justify the question: I might want to know this for inspiration for a similar instantiation, for showing it to someone(s), for Isabelle/HOL reading practice, for curiosity, and so on. My process at the moment:
First, check that it actually is an instance: type instantiation real :: conditionally_complete_linorder begin end and see whether I get the error message "No parameters and no pending instance proof obligations in instantiation."
Next, ideally before asking where, I'd need to know how, i.e. whether the instantiation was direct, or implicit via classes C_1[, C_2, C_3, etc]. Then I would need to find where those instantiations are, either an explicit instantiation real :: conditionally_complete_linorder or the implicit ones for the C_i (same process in either case, of course). I don't know how to find out how, so I have to check for an explicit instantiation and then all possible implicit instantiations.
For explicit, I can do a grep -Ern ~/.local/src/Isabelle2019 -e 'instantiation real :: conditionally_complete_linorder' (and hope the whitespace isn't weird, or do a more robust search :)). Repeat for AFP location. Alternatively, to stay within the jEdit window:
I can find where the class itself was defined by typing term "x::'a::conditionally_complete_linorder" then Ctrl-clicking the class name, and then check if real is directly instantiated in that file with Ctrl-f.
I could then check if it's instantiated where the type real is defined by typing term "x::real" and Ctrl-clicking real, then Ctrl-f for conditionally_complete_linorder in that file.
(If it is in either place, it'll be whichever is further down in the import hierarchy, but I find just going through those two steps simpler.) However, if neither place turns it up, then either it is, for whatever reason, explicitly instantiated somewhere else, or it is implicitly instantiated. For that reason grep is more robust.
If explicit turns nothing up, then I check implicit. Looking at the class_deps graph, I can see that conditionally_complete_linorder can follow from either complete_linorder or linear_continuum. I can then continue the search by seeing if real is instantiated as either of them (disregarding any I happen to know real can't be instantiated as). I can also check whether it's instantiated as both conditionally_complete_lattice and linorder, which is what I can see conditionally_complete_linorder is a simple (no additional assumptions) combination of*. Repeat for all of these classes recursively until the instantiations are found. In this case, I can see that linear_continuum_topology implies linear_continuum, so I kill two birds with one stone with grep -Ern ~/.local/src/Isabelle2019 -e "instantiation.*real" | grep continuum and find /path/to/.local/src/Isabelle2019/src/HOL/Real.thy:897:instantiation real :: linear_continuum.
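For reference, the in-editor probes used above boil down to a few one-liners (the comments describe the jEdit actions; class_deps without arguments displays the full class hierarchy):
term "x :: 'a :: conditionally_complete_linorder"  (* Ctrl-click the class name to jump to its definition *)
term "x :: real"  (* Ctrl-click real to jump to the theory where the type is defined *)
class_deps  (* display the class dependency graph *)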
This process is quite tedious. A less (but still quite) tedious** approach would be to bring up the class_deps graph, Ctrl-f for "instantiation real" in Real.thy, and look for instantiations of the original class, its superclasses, or the classes which imply it. Then, in the files where each of those classes is defined, search for "instantiation real". Do this recursively until done. In this case I would have found what I needed in Real.thy.
Is there an easier way? Hope I just missed something obvious.
* I can't Ctrl-click in Conditionally_Complete_Lattices.thy to jump to linorder directly, I guess because of something to do with it being pre-built, so I have to do the term "x::'a::linorder" thing again.
** And also less robust, as it skips the grep-ing, which can turn up weirder instantiation locations; then again, I'm not sure whether this ever comes up in practice.
Thanks
You can import the theory in the code listing below and then use the command find_instantiations. I will leave the code without further explanation, but please feel free to ask in the comments if you need more details or suspect that something is not quite right.
section ‹Auxiliary commands›
theory aux_cmd
imports Complex_Main
keywords "find_instantiations" :: thy_decl
begin
subsection ‹Commands›
ML ‹
fun find_instantiations ctxt c =
  let
    val {classes, ...} = ctxt |> Proof_Context.tsig_of |> Type.rep_tsig;
    val algebra = classes |> #2
    val arities = algebra |> Sorts.arities_of;
  in
    Symtab.lookup arities c
    |> the
    |> map #1
    |> Sorts.minimize_sort algebra
  end

fun find_instantiations_cmd tc st =
  let
    val ctxt = Toplevel.context_of st;
    val _ = tc
      |> Syntax.parse_typ ctxt
      |> dest_Type
      |> fst
      |> find_instantiations ctxt
      |> map Pretty.str
      |> Pretty.writeln_chunks
  in () end

val q = Outer_Syntax.command
  \<^command_keyword>‹find_instantiations›
  "find all instantiations of a given type constructor"
  (Parse.type_const >> (fn tc => Toplevel.keep (find_instantiations_cmd tc)));
›
subsection ‹Examples›
find_instantiations filter
find_instantiations nat
find_instantiations real
end
Remarks
I would be happy to provide amendments if you find any problems with it, but do expect a reasonable delay in further replies.
The command finds both explicit and implicit instantiations, i.e. it also finds the ones that were achieved by means other than the use of the commands instance or instantiation, e.g. inside an ML function.
Unfortunately, the command does not give you the location of the file where the instantiation was performed - this would be more difficult to achieve, especially given that instantiations can also be performed programmatically. Nevertheless, given a list of all instantiations, I believe it is nearly always easy to use the built-in search functionality on the imported theories to narrow down the exact place where the instantiation was performed.
I am just getting started with Isabelle. I have a file like this:
theory Z
imports Main Int
begin
value "(2::int) + (2::int)"
lemma "(n::int) + (m::int) = m + n"
apply(auto) done
print_locale comm_ring_1
print_interps comm_ring_1
end
Most of this works as I expected: Isabelle tells me that 2+2=4, and it knows how to prove that n+m=m+n, and it prints the axioms for a commutative unital ring.
However, I expected that the line "print_interps comm_ring_1" would cause Isabelle to tell me that it knows that the integers are an instance of the class comm_ring_1 (given that this fact is certainly proved in the file Int.thy in the standard library, which we have imported). But Isabelle does not in fact tell me that.
Is there some other way to ask Isabelle to list all the instances of comm_ring_1 that it knows about? Or to query specifically whether int is an instance of comm_ring_1? I have looked in the reference manual for such a command, but cannot find one.
Every type class in Isabelle defines a locale of the same name, but they are not the same. The commands print_locale and print_interps consider only the locale aspect of a type class. Type class registration with instance or instantiation does not register the type as an interpretation of that locale. Therefore print_interps does not list the types that have been proven instances of type classes. This is done by the command print_classes.
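A minimal illustration of the difference, using the theory from the question:
print_locale comm_ring_1   (* the locale view: parameters and assumptions of the class *)
print_interps comm_ring_1  (* locale interpretations only - class instances such as int are not listed *)
print_classes              (* lists the classes together with the types registered as their instances *)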
This is a derivative question of Existing constants (e.g. constructors) in type class instantiations.
The short question is this: How can I prevent the error that occurs due to free_constructors, so that I can combine the two theories that I include below?
I've been sitting on this for months. The other question helped me move forward (it appears). Thanks to the person who deserves thanks.
The real issue here is about overloading notation, though it looks like I now just have a namespace problem.
At this point, it's not a necessity, just an inconvenience that two theories have to be used. If the system allows it, the inconvenience disappears, but I ask anyway in the hope of getting a little extra information.
Most of the explanation here goes into the motivation, which may lead to some extra information. I explain a little, then include S1.thy, make a few comments, and then include S2.thy.
Motivation: using syntactic type classes for overloading notation of multiple binary datatypes
The basic idea is that I might have 5 different forms of binary words that have been defined with datatype, and I want to define some binary and hexadecimal notation that's overloaded for all 5 types.
I don't know everything that is possible, but past experience (i.e. others telling me things) suggests that if I want code generation, then I should use type classes, to get the magic that comes with them.
The first theory, S1
Next is the theory S1.thy. What I do is instantiate bool for the type classes zero and one, and then use free_constructors to set up the notation 0 and 1 for use as the bool constructors True and False. It seems to work. This in itself is something I specifically wanted, but didn't know how to do.
I then try to do the same thing with an example datatype, BitA. It doesn't work, because the constant case_BitA is already created when BitA is defined with datatype, which causes a conflict.
Further comments of mine are in the THY.
theory S1
imports Complex_Main
begin
declare[[show_sorts]]
(*---EXAMPLE, NAT 0: IT CAN BE USED AS A CONSTRUCTOR.--------------------*)
fun foo_nat :: "nat => nat" where
"foo_nat 0 = 0"
(*---SETTING UP BOOL TRUE & FALSE AS 0 AND 1.----------------------------*)
(*
I guess it works, because 'free_constructors' was used for 'bool' in
Product_Type.thy, instead of in this theory, like I try to do with 'BitA'.
*)
instantiation bool :: "{zero,one}"
begin
definition "zero_bool = False"
definition "one_bool = True"
instance ..
end
(*Non-constructor pattern error at this point.*)
fun foo1_bool :: "bool => bool" where
"foo1_bool 0 = False"
find_consts name: "case_bool"
free_constructors case_bool for "0::bool" | "1::bool"
by(auto simp add: zero_bool_def one_bool_def)
find_consts name: "case_bool"
(*found 2 constant(s):
Product_Type.bool.case_bool :: "'a∷type => 'a∷type => bool => 'a∷type"
S1.bool.case_bool :: "'a∷type => 'a∷type => bool => 'a∷type" *)
fun foo2_bool :: "bool => bool" where
"foo2_bool 0 = False"
|"foo2_bool 1 = True"
thm foo2_bool.simps
(*---TRYING TO WORK A DATATYPE LIKE I DID WITH BOOL.---------------------*)
(*
There will be 'S1.BitA.case_BitA', so I can't do it here.
*)
datatype BitA = A0 | A1
instantiation BitA :: "{zero,one}"
begin
definition "0 = A0"
definition "1 = A1"
instance ..
end
find_consts name: "case_BitA"
(*---ERROR NEXT: because there's already S1.BitA.case_BitA.---*)
free_constructors case_BitA for "0::BitA" | "1::BitA"
(*ERROR: Duplicate constant declaration "S1.BitA.case_BitA" vs.
"S1.BitA.case_BitA" *)
end
The second theory, S2
It seems that case_BitA is necessary for free_constructors to set things up, and it occurred to me that maybe I could get it to work by using datatype in one theory and free_constructors in another theory.
It seems to work. Is there a way I can combine these two theories?
theory S2
imports S1
begin
(*---HERE'S THE WORKAROUND. IT WORKS BECAUSE BitA IS IN S1.THY.----------*)
(*
I end up with 'S1.BitA.case_BitA' and 'S2.BitA.case_BitA'.
*)
declare[[show_sorts]]
find_consts name: "BitA"
free_constructors case_BitA for "0::BitA" | "1::BitA"
unfolding zero_BitA_def one_BitA_def
using BitA.exhaust
by(auto)
find_consts name: "BitA"
fun foo_BitA :: "BitA => BitA" where
"foo_BitA 0 = A0"
|"foo_BitA 1 = A1"
thm foo_BitA.simps
end
The command free_constructors always creates a new constant of the given name for the case expression and names the generated theorems in the same way as datatype does, because datatype internally calls free_constructors.
Thus, you have to issue the command free_constructors in a context that changes the name space. For example, use a locale:
locale BitA_locale begin
free_constructors case_BitA for "0::BitA" | "1::BitA" ...
end
interpretation BitA!: BitA_locale .
After that, you can use both A0/A1 and 0/1 as constructors in pattern-matching equations, but you should not mix them in a single equation. Yet A0 and 0 are still different constants to Isabelle. This means that you may have to manually convert the one into the other during proofs, and code generation works only for one of them. You would have to set up the code generator to replace A0 with 0 and A1 with 1 (or vice versa) in the code equations. To that end, you want to declare the equations A0 = 0 and A1 = 1 as [code_unfold], but you probably also want to write your own preprocessor function in ML that replaces A0 and A1 in the left-hand sides of code equations; see the code generator tutorial for details.
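For instance, with the definitions from S1.thy (zero_BitA_def and one_BitA_def), the [code_unfold] declarations might look like the following sketch (not tested against a full code generator setup):
lemma BitA_code_unfold [code_unfold]:
  "A0 = (0 :: BitA)"
  "A1 = (1 :: BitA)"
  by (simp_all add: zero_BitA_def one_BitA_def)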
Note that if BitA was a polymorphic datatype, packages such as BNF and lifting would continue to use the old set of constructors.
Given these problems, I would really go for the manual definition of the type as described in my answer to another question. This saves you a lot of potential issues later on. Also, if you are really only interested in notation, you might want to consider adhoc_overloading. It works perfectly well with code generation and is more flexible than type classes. However, you cannot talk about the overloaded notation abstractly, i.e., every occurrence of the overloaded constant must be disambiguated to a single use case. In terms of proving, this should not be a restriction, as you assume nothing about the overloaded constant. In terms of definitions over the abstract notation, you would have to repeat the overloading there as well (or abstract over the overloaded definitions in a locale and interpret the locale several times).
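To illustrate the adhoc_overloading route mentioned above (a standalone sketch; the theory name, BitB, ZERO and ONE are made up for the example, and the syntax is the form provided by HOL-Library.Adhoc_Overloading around Isabelle2020):
theory Adhoc_Bits
  imports "HOL-Library.Adhoc_Overloading"
begin

datatype BitA = A0 | A1
datatype BitB = B0 | B1

(* uninterpreted constants that carry the shared notation *)
consts ZERO :: "'a"
consts ONE :: "'a"

adhoc_overloading ZERO A0
adhoc_overloading ZERO B0
adhoc_overloading ONE A1
adhoc_overloading ONE B1

(* each occurrence is disambiguated by its type *)
term "ZERO :: BitA"  (* elaborates to A0 *)
term "ONE :: BitB"   (* elaborates to B1 *)

end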
Is there a way to write functions that are generic with respect to the collection types they support, other than using the Seq module?
The goal is to avoid resorting to copy and paste when adding new collection functions.
Generic programming with collections can be handled the same way generic programming is done in general: Using generics.
let f (map_fun : ('T1 -> 'T2) -> 'T1s -> 'T2s)
      (iter_fun : ('T2 -> unit) -> 'T2s -> unit)
      (ts : 'T1s) (g : 'T1 -> 'T2) (h : 'T2 -> unit) =
    ts
    |> map_fun g
    |> iter_fun h

type A =
    static member F(ts, g, h) = f (Array.map) (Array.iter) ts g h
    static member F(ts, g, h) = f (List.map) (List.iter) ts g h
A bit ugly and verbose, but it's possible. I'm using a class and static members to take advantage of overloading. In your code, you can just use A.F and the correct specialization will be called.
For a prettier solution, see https://stackoverflow.com/questions/979084/what-features-would-you-add-remove-or-change-in-f/987569#987569 Although this feature is enabled only for the core library, it should not be a problem to modify the compiler to allow it in your code. That's possible because the source code of the compiler is open.
The seq<'T> type is the primary way of writing computations that work with any collection in F#. There are a few ways you can work with the type:
You can use functions from the Seq module (such as Seq.filter, Seq.windowed etc.)
You can use sequence comprehensions (e.g. seq { for x in col -> x * 2 })
You can use the underlying (imperative) IEnumerator<'T> type, which is sometimes needed e.g. if you want to implement your own zipping of collections (this is returned by calling GetEnumerator)
This is a relatively simple type, and it can be used only for reading data from collections. As a result, you'll always get a value of type seq<'T>, which is essentially a lazy sequence.
F# doesn't have any mechanism for transforming collections generically (e.g. a generic function taking a collection C to a collection C with new values) or for creating collections generically (which is available in Haskell or Scala).
In most practical cases, I don't find this to be a problem - most of the work can be done using seq<'T>, and when you need a specialized collection (e.g. an array for performance), you typically need a slightly different implementation anyway.