I see that many theorems in Isabelle/HOL prefer meta-level implication ==> over implication at the object-logic level, i.e. higher-order logic implication -->.
The Isabelle wiki says that, roughly speaking, meta-level implication should be used to separate the assumptions from the conclusion in rule statements.
Other than that, what should I know about the use of object-level and meta-level implication? I see the latter being used mostly. When and for what should I use HOL implication?
I think the short answer is: Use ==> whenever possible as it is easier to work with than -->.
That being said, you should not see ==> too often in the code you write.
When writing a lemma, it is often nicer to use the assumes and shows syntax.
For intermediate steps with have there is a syntax with if:
have "B" if "A" instead of have "A ==> B"
The meta implication can only be used at the top level, so if you have a predicate as argument to a function you cannot use ==> in that predicate.
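For illustration, here is an invented toy statement in both styles, plus the if form for an intermediate step (a sketch; the names are made up):

lemma add_pos: "(x::nat) > 0 ==> x + y > 0"
  by simp

lemma add_pos':
  assumes "(x::nat) > 0"
  shows "x + y > 0"
  using assms by simp

and, inside a structured proof:

have "x + y > 0" if "x > 0" for x y :: nat
  using that by simp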
I'm confused by the Data.Map API. I'm looking for a simple way to find a range of keys of the map at log(n) cost. This is a basic concept known as "binary search", and maybe "bisecting".
I see this strange takeWhileAntitone function where I need to provide an "antitone" predicate function. It's the first time I encounter this concept.
After reading Wikipedia on the topic, this seems to be simply saying that there may be only one place where the function changes from True to False when applied to arguments in key order. This fits a requirement for a binary search.
Since the API is documented in a strange (to me) language, I wanted to ask here:
if my understanding is correct, and
is there a reason these functions aren't called bisect, binarySearch or similar?
if my understanding is correct
Yes. takeWhileAntitone (and other similarly named variants in the library) is the function for doing binary search on keys. It's not named takeWhile because it does not work for any argument predicate, so if you're reviewing code, it serves as a reminder to check for that.
is there a reason these functions aren't called bisect, binarySearch or similar?
This name serves to distinguish variants takeWhileAntitone, dropWhileAntitone, spanAntitone that "do binary search" but with different final results.
takeWhile is a well-known name from Haskell's standard library (in Data.List).
In FP we like to distinguish the "what" from the "how". "binary search" is an algorithm ("how"). "take while" is also literally a "how", but its meaning is arguably more naturally connected to a "what" (the longest prefix of elements satisfying a predicate). In particular, the interpretation of "take while" as "longest prefix" doesn't rely on any assumption about the predicate.
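For a concrete picture, here is a small usage sketch in Haskell (assuming containers >= 0.5.8, where these functions were introduced):

import qualified Data.Map as Map

main :: IO ()
main = do
  let m = Map.fromList [(k, show k) | k <- [1 .. 10 :: Int]]
  -- (< 5) is antitone on ascending keys: True for 1..4, then False forever,
  -- so the cut point can be found by binary search in O(log n).
  print (Map.takeWhileAntitone (< 5) m)   -- the submap with keys 1..4
  -- spanAntitone returns both sides of the cut at once.
  print (Map.spanAntitone (<= 7) m)       -- (keys 1..7, keys 8..10)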
Is it possible to get a list of all the predicates and functions that I can use in Isabelle?
Because more often than not, I start defining by hand a predicate that I need for a proof (like coprime), only to realize that it already exists.
There is find_consts that can search the defined constants either by type or by name, similarly to find_theorems, e.g. find_consts "'a list => nat" or find_consts name:"lim". Apparently, this can also find abbreviations like coprime (which is an abbreviation for gcd a b = 1).
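For example (a sketch; what is found depends on the theories you have loaded):

find_consts "'a list => nat"    (* search by type: finds length, among others *)
find_consts name: "lim"         (* search by name *)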
Of course, this can only find constants that were defined in one of the theories you have actually loaded (either directly or indirectly). There is always the possibility that the concept you need has already been formalised somewhere in ~~/src/HOL/Library or some other part of the distribution or in the AFP.
When I suspect that this is the case, I just grep the distribution or the AFP for an appropriate keyword or I ask someone who might know about it. Asking here on StackOverflow might also be a good idea to find out something like this, since there are a number of professional Isabelle users who frequent this site and who have good knowledge of what is there and what is not.
I would like to understand how keyword proof works in an Isar proof. I consulted the Isabelle/Isar reference, section 6.3.2 and Programming and Proving in Isabelle/HOL, section 4.1.
To summarise what I have learned, there are three ways of beginning a proof by the keyword proof:
without any argument, Isabelle finds a suitable introduction rule for the lemma being proved and applies it in a forward mode
if a hyphen (-) is supplied as an argument, then proof does nothing to the goal, avoiding any automatic rule being applied when it would lead to a blind alley
if a specific method, like rule name, unfold true_def, clarify or induct n, is supplied, then it is applied to the goal in a forward mode
Am I right that the third case is like using apply with the argument supplied?
How is the automatic introduction rule in the first case picked by the system?
And, does the above fully describe the usage of proof?
The command proof without a method specification applies the method default. The method default is almost like rule, but if rule fails, then it tries next intro_classes and then unfold_locales. The method rule without being given a list of theorems tries all rules that are declared to the classical reasoner (intro, elim, and dest). If no facts are chained in, only intro rules are considered. Otherwise, all types of rules are tried. All chained-in facts must unify with the rules. dest rules are transformed into elim rules before they are applied.
You can print all declared rules with print_rules. Safe rules (intro!, elim!, ...) are preferred over normal rules (intro, elim, ...) and extra rules (intro?, elim?) come last.
You can also use rule without giving any rules. Then, it behaves like default, but without the fallbacks intro_classes and unfold_locales.
Andreas gave a good description of how proof without a method argument works; I'll just cover some other parts of the question.
First, proof (method) is like apply (method) except for one thing: While apply leaves you in "prove" mode, where you can continue with more apply statements, proof transitions into "state" mode, where you must use a have or show statement to continue proving. Otherwise the effect on the goal state is the same.
I'd also like to point out that case 2 (proof -) is really an instance of case 3, because - is actually an ordinary proof method just like rule name or induct (you can also write apply -, for example). The hyphen - proof method does nothing, except that it will insert chained facts into the current goal, if it is given any chained facts.
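To make this concrete, here is a small invented example (conjI is HOL's introduction rule for conjunction):

lemma "A --> A & A"
proof (rule impI)   (* like apply (rule impI), but we continue in "state" mode *)
  assume a: "A"
  show "A & A"
  proof   (* no method given: "default" picks the declared intro rule conjI *)
    show "A" by (rule a)
  next
    show "A" by (rule a)
  qed
qed

With proof - at the start instead, nothing would be applied to the goal, and we would have to show "A --> A & A" literally.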
I have always thought that both of these are, by definition, functions that take other functions as arguments. I understand that the domain of each is different, but what are their defining characteristics?
Well, let me try to kind of derive their defining characteristics from their different domains ;)
First of all, in their usual context combinators are higher order functions. But as it turns out, context is an important thing to keep in mind when talking about the differences between these two terms:
Higher Order Functions
When we think of higher order functions, the first thing usually mentioned is "oh, they (also) take at least one function as an argument" (thinking of fold, etc)... as if they were something special because of that. Which - depending on context - they are.
Typical context: functional programming, Haskell, any other (usually typed) language where functions are first-class citizens (like when LINQ made C# even more awesome)
Focus: let the caller specify/customize some functionality of this function
Combinators
Combinators are somewhat special functions; primitive ones do not even mind what they are given as arguments (the argument type often does not matter at all, so passing functions as arguments is no problem either). So can the identity combinator also be called a "higher order function"? Formally: no, it does not need a function as an argument! But hold on... in which context would you ever encounter/use combinators (like I, K, etc.) instead of just implementing the desired functionality "directly"? Answer: in a purely functional context!
This is not a law or anything, but I really cannot think of a situation where you would see actual combinators in a context where you suddenly pass pointers, hash tables, etc. to a combinator... again, you can do that, but in such scenarios there should really be a better way than using combinators.
So based on this "weak" law of common sense - that you will work with combinators only in a purely functional context - they inherently are higher order functions. What else would you have available to pass as arguments? ;)
Combining combinators (by application only, of course - if you take it seriously) always gives new combinators, which are therefore also higher order functions. Primitive combinators usually just represent some basic behaviour or operation (think of the S, K, I, Y combinators) that you want to apply to something without using abstractions. But of course the definition of combinators does not limit them to that purpose!
Typical context: (untyped) lambda calculus, combinatory logic (surprise)
Focus: (structurally) combine existing combinators/"building blocks" into something new (e.g. using the Y combinator to "add recursion" to something that is not yet recursive)
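A small Haskell sketch of the classic primitive combinators (names lowercased to be valid Haskell; k is the library function const):

-- I, K and S as plain Haskell definitions
i :: a -> a
i x = x                      -- I: identity

k :: a -> b -> a
k x _ = x                    -- K: always return the first argument (const)

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)          -- S: apply f to x and to (g x)

-- combining combinators yields new combinators: s k k behaves like i
main :: IO ()
main = print (s k k (42 :: Int))   -- prints 42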
Summary
Yes, as you can see, it might be more of a contextual/philosophical thing, or about what you want to express: I would never call the K combinator (definition: K = \a -> \b -> a) a "higher order function" - although it is very likely that you will never see K being called with anything other than functions, therefore "making" it a higher order function.
I hope this sort of answered your question - formally they certainly are not the same, but their defining characteristics are pretty similar - personally, I think of combinators as functions used as higher order functions in their typical context (which usually is somewhere between special and weird).
EDIT: I have adjusted my answer a little bit since - as it turned out - it was slightly "biased" by personal experience/impression. :) To get an even better idea about correctly distinguishing combinators from HOFs, read the comments below!
EDIT2: Taking a look at HaskellWiki also gives a technical definition for combinators that is pretty far away from HOFs!
There are many SO posts related to this, but I am asking this again with a different purpose.
I am trying to understand why closures are important and useful. One of the things that I've read in other SO posts related to this is that when you pass a variable to a closure, the closure starts remembering this value from then onwards. Is this the entire technical aspect of it, or is there more to what happens there?
What I wonder, then, is what would happen when a variable used inside the closure gets modified from outside. Should they be constants only?
In the language Clojure I can do the following; but since values are immutable there, this issue does not arise. What about other languages, and what is the proper technical definition of a closure?
(defn make-greeter [greeting-prefix]
  (fn [username] (str greeting-prefix ", " username)))
((make-greeter "Hello") "World")
This is not the sort of answer that appears to get up-votes around here, but I would heartily urge you to discover the answer to your question by reading Shriram Krishnamurthi's (free!) (online!) textbook, Programming Languages: Application and Interpretation.
I will paraphrase the book very, very briefly, by summarizing the development of the teeny tiny interpreters that it leads you through:
an arithmetic expression language (AE)
an arithmetic expression language with named expressions (WAE); implementing this involves developing a substitution function that can replace names with values
a language that adds first-order functions (F1WAE): using a function involves substituting values for each of the parameter names
the same language, without substitution: it turns out that "environments" allow you to avoid the overhead of pre-emptive substitution
a language that eliminates the separation between functions and expressions by allowing functions to be defined at arbitrary locations (FWAE)
This is the key point: you implement this, and then you discover that with substitution it works fine, but with environments it's broken. In particular, in order to fix it up, you must be sure to associate with an evaluated function definition the environment that was in place when it was evaluated. This pair (fundef + environment-of-definition) is what's called a "closure".
Whew!
Okay, what happens when we add mutable bindings to the picture? If you try this yourself, you'll see that the natural implementation replaces an environment that associates names with values with an environment that associates names with bindings. This is orthogonal to the notion of closures; since closures capture environments, and since environments now map names to bindings, you get the behavior you describe, whereby mutation of a variable captured in an environment is visible and persistent.
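A concrete sketch in Haskell (my own, not from PLAI): an IORef stands in for a mutable binding, and the closure captures the binding itself, so later mutation is visible when the closure runs:

import Data.IORef

main :: IO ()
main = do
  prefix <- newIORef "Hello"
  -- greet closes over the binding (the IORef), not a snapshot of its value
  let greet name = do
        p <- readIORef prefix          -- read the captured binding at call time
        putStrLn (p ++ ", " ++ name)
  greet "World"                        -- prints "Hello, World"
  writeIORef prefix "Goodbye"          -- mutate the captured binding
  greet "World"                        -- prints "Goodbye, World"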
Again, I would very much urge you to take a look at PLAI.
A closure is really a data structure used by the compiler to make sure that a function will always have access to the data that it needs to operate. Here is an example of a function that records when it was defined.
(defn outer []
  (let [foo (get-time-of-day)]   ; get-time-of-day is a stand-in helper
    (fn [] (str "then:" foo " now:" (get-time-of-day)))))

(def then-and-now (outer))
(then-and-now) ==> "then:1:02:03 now:2:30:01"
....
(then-and-now) ==> "then:1:02:03 now:2:31:02"
When this function is defined, a class is created and a small structure (a closure) is allocated on the heap that stores the value of foo. The class has a pointer to that (or it contains it; I'm not sure). If you run this again, a second closure would be allocated to hold that other foo. When we say "this function closes over foo", we mean that it has a reference to a structure/class/whatever that stores the state of foo at the time the closure was created. The reason you need to close over something is that the function containing it goes away before the data will be used. In this case outer (which contains the value of foo) will end and be gone long before foo is used, so nobody will be around to modify foo. Of course, foo could be a ref passed to somebody who could then modify it.
A lexical closure is one in which the enclosed variables (e.g. greeting-prefix in your example) are enclosed by reference. The closure created does not simply get the value of greeting-prefix at the time it is created, but gets a reference. If greeting-prefix is modified after the closure is created, then its new value will be used by the closure every time it is called.
In pure functional languages this isn't much of a distinction, because values are never changed. So it doesn't matter if the value of greeting-prefix is copied into the closure: there's no possible difference in behaviour that could arise from referring to the original versus its copy.
In "imperative-languages-with-closures", such as C# and Java (via anonymous classes), some decision has to be made about whether the enclosed variable is enclosed by value or by reference. In Java this decision is pre-empted by only allowing final variables to be enclosed, effectively mimicking a functional language as far as that variable is concerned. In C# I believe it is a different matter.
Enclosing by value simplifies the implementation: the variable to be enclosed will often exist on the stack and hence will be destroyed when the function constructing the closure returns -- that means it can't be enclosed by reference. If you need enclosure by reference, a workaround is to identify such variables and keep them in an object allocated each time that function is called. This object is then kept as part of the closure's environment and must remain live as long as all closures using it are live. (I do not know if any compiled languages directly use this technique.)
For more descriptions see for example:
Common Lisp HyperSpec, 3.1.4 Closures and Lexical Binding
and
Common Lisp the Language, 2nd Edition, Chapter 3., Scope and Extent
You can think of a closure as an "environment", in which names are bound to values. Those names are entirely private to the closure, which is why we say that it "closes over" its environment. So your question isn't meaningful, in that the "outside" cannot affect the closed-over environment. Yes, a closure can refer to a name in a global environment (in other words, if it uses a name that is not bound in its private, closed-over environment), but that's a different story.
If you like, you can think of an environment as a dictionary, or hash table. A closure gets its own little dictionary where names are looked up.
You might enjoy reading On lambdas, capture, and mutability, which describes how this works in C# and F#, for comparison.
Have a look at this blog post: ADTs in Clojure. It shows a nice application of closures to the problem of locking up data so that it is accessible exclusively through a particular interface (rendering the data type opaque).
The main idea behind this type of locking is more simply illustrated with the counter example, which huaiyuan posted in Common Lisp while I was composing this answer. Actually, the Clojure version is interesting in that it shows that the issue of a closed-over variable changing its value does arise in Clojure if the variable happens to hold an instance of one of the reference types.
(defn create-counter []
  (let [counter (atom 0)
        inc-counter! #(swap! counter inc)
        get-counter (fn [] @counter)]
    [inc-counter! get-counter]))
As for the original make-greeter example, you could rewrite it thus (note the deref/@; the prefix must now be passed in as a reference type, e.g. an atom):
(defn make-greeter [greeting-prefix]
  (fn [username] (str @greeting-prefix ", " username)))
Then you can use it to render personalised greetings from the different operators of various sections of a website. :-)
((make-greeter "Hello from Gizmos Dept") "John")
((make-greeter "Hello from Gadgets Dept") "Jack").
In reply to the answer above, which says that "your question isn't meaningful, in that the 'outside' cannot affect the closed-over environment":
I suppose that the question was whether things like these are possible in languages which allow mutation of local variables:
CL-USER> (let ((x (list 1 2 3)))
           (prog1
               (let ((y x))
                 (lambda () y))
             (rplaca x 2)))
#<COMPILED-LEXICAL-CLOSURE #x9FEC77E>
CL-USER> (funcall *)
(2 2 3)
And -- since they are obviously possible -- I think the question is legitimate.