What's the difference between `overloading` and `adhoc_overloading`? - isabelle

The Isabelle reference manual describes two ways to perform type-based overloading of constants: "Adhoc overloading of constants" in section 11.3, and "Overloaded constant definitions" in section 5.9.
It seems that 5.9 overloading requires all type parameters to be known before it decides on an overloaded constant, whereas 11.3 (ad-hoc) overloading picks an overloaded constant as soon as exactly one variant matches:
consts
  c1 :: "'t ⇒ 'a set"
  c2 :: "'t ⇒ 'a set"

definition f1 :: ‹'a list ⇒ 'a set› where
  ‹f1 s ≡ set s›

adhoc_overloading
  c1 f1

overloading
  f2 ≡ ‹c2 :: 'a list ⇒ 'a set›
begin
definition ‹f2 w ≡ set w›
end

context
  fixes s :: ‹int list›
begin
term ‹c1 s› (* c1 s :: int set *)
term ‹c2 s› (* c2 s :: 'a set *)
end
What's the difference between the two? When would I use one over the other?

Overloading is a core feature of Isabelle's logic. It allows you to declare a single constant with a "broad" type that can be defined on specific types. There's rarely a need for users to do that manually. It is the underlying mechanism used to implement type classes. For example, if you define a type class as follows:
class empty =
  fixes empty :: 'a
  assumes (* ... *)
The class command then declares the constant empty of type 'a, and subsequent instantiations merely provide a definition of empty for specific types, like nat or list.
Long story short: overloading is – for most purposes – an implementation detail that is managed by higher-level commands. Occasionally, the need for some manual tweaking arises, e.g. when you have to define a type that depends on class constraints.
Ad-hoc overloading is, in my opinion, a misleading name. As far as I understand, it stems from Haskell (see this paper from Wadler and Blott). There, it describes precisely the type class mechanism that in Isabelle would be called just "overloading". In Isabelle, ad-hoc overloading means something entirely different. The idea is that you can use it to define abstract syntax (like do-notation for monads) that can't accurately be captured by Isabelle's ML-style simple type system. As with overloading, you define a constant with a "broad" type. But that constant never receives any definitions! Instead, you define new constants with more specific types. When Isabelle's term parser encounters a use of the abstract constant, it tries to replace it with a concrete constant.
For example: you can use do-notation with option, list, and a few other types. If you write something like:
do { x <- foo; bar }
Then Isabelle sees:
Monad_Syntax.bind foo (%x. bar)
In a second step, depending on the type of foo, it will translate it to one of these possible terms:
Option.bind foo (%x. bar)
List.bind foo (%x. bar)
(* ... more possibilities ...*)
Again, users probably don't need to deal with this concept explicitly. If you pull in Monad_Syntax from the library, you'll get one application of ad-hoc overloading readily configured.
Long story short: ad-hoc overloading is a mechanism for enabling "fancy" syntax in Isabelle. Newbies may get confused by it because error messages tend to be hard to understand if there's something wrong in the internal translation.
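For contrast with Isabelle's terminology: the Wadler–Blott sense of "ad-hoc overloading" mentioned above is what Haskell realizes through type classes. A minimal sketch (the class and instance names here are made up for illustration):

```haskell
-- One overloaded name, `describe`, with a separate definition per type.
-- Resolution happens at the call site, driven by the argument's type.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = if b then "yes" else "no"

instance Describe Int where
  describe n = "the number " ++ show n
```

Here `describe True` resolves to the Bool instance and `describe (3 :: Int)` to the Int instance; this per-type dispatch is what Isabelle would implement via type classes on top of plain "overloading".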

Related

How to realize declared types and constants in Isabelle/HOL?

I know that the typedecl and consts commands can be used in Isabelle (as of 2021) to declare a new type/constant without defining it. For example:
typedecl state
consts M :: "(state × state)set"
typedecl "atom"
consts L :: "state ⇒ atom set"
My questions are:
Can one later define the declared type/constants to be some actual type/constants?
If so, can multiple definitions be given in different theories to implement those declarations (so that the declaration serves as an interface)?
I have seen some old documents that use the defs command (plus consts) to define things, but it no longer seems to exist in Isabelle 2021.

Implementing custom primitive types in Julia

The Julia documentation says:
A primitive type is a concrete type whose data consists of plain old
bits. Classic examples of primitive types are integers and
floating-point values. Unlike most languages, Julia lets you declare
your own primitive types, rather than providing only a fixed set of
built-in ones. In fact, the standard primitive types are all defined
in the language itself:
I'm unable to find an example of how to do this, though, either in the docs or in the source code or anywhere else. What I'm looking for is an example of how to declare a primitive type, and how to subsequently implement a function or method on that type that works by manipulating those bits.
Is anyone able to point me towards an example? Thanks.
Edit: It's clear how to declare a primitive type, as there are examples immediately below the above quote in the doc. I'm hoping for information about how to subsequently manipulate them. For example, say I wanted to (pointlessly) implement my own primitive type MyInt8. I could declare that with primitive type MyInt8 <: Signed 8 end. But how would I subsequently implement a function myplus that manipulated the bits within MyInt8?
PS in case it helps, the reason I'm asking is not because I need to do anything specific in Julia; I'm designing my own language for fun, and am researching how other languages implement various things.
# Declare the new type.
primitive type MyInt8 <: Signed 8 end
# A constructor to create values of the type MyInt8.
MyInt8(x :: Int8) = reinterpret(MyInt8, x)
# A constructor to convert back.
Int8(x :: MyInt8) = reinterpret(Int8, x)
# This allows the REPL to show values of type MyInt8.
Base.show(io :: IO, x :: MyInt8) = print(io, Int8(x))
# Declare an operator for the new type.
import Base: +
+(a :: MyInt8, b :: MyInt8) = MyInt8(Int8(a) + Int8(b))
The key function here is reinterpret. It allows the bit representation of an Int8 to be treated as the new type.
To store a value with a custom bit layout, inside the MyInt8 constructor, you could perform any of the standard bit manipulation functions on the Int8 before 'reinterpreting' them as a MyInt8.

Isabelle: Class of topological vector spaces

I wanted to define the class of topological vector spaces in the obvious way:
theory foo
imports Real_Vector_Spaces
begin
class topological_vector = topological_space + real_vector +
  assumes add_cont_fst: "∀a. continuous_on UNIV (λb. a + b)"
...
but I got the error Type inference imposes additional sort constraint topological_space of type parameter 'a of sort type
I tried introducing type constraints in the condition, and it looks like
continuous_on doesn't want to match with the default type 'a of the class.
Of course I can work around this by replacing continuity with equivalent conditions, I'm just curious why this doesn't work.
Inside a class definition in Isabelle/HOL, there may occur only one type variable (namely 'a), which has the default HOL sort type. Thus, one cannot formalise multi-parameter type classes. This also affects definitions inside type classes, which may depend only on the parameters of one type class. For example, you can define a predicate cont :: 'a set => ('a => 'a) => bool inside the type class context topological_space as follows
definition (in topological_space) cont :: "'a set ⇒ ('a ⇒ 'a) ⇒ bool"
  where "cont s f = (∀x∈s. (f ---> f x) (at x within s))"
The target (in topological_space) tells the type class system that cont really depends only on one type. Thus, it is safe to use cont in assumptions of other type classes which inherit from topological_space.
Now, the predicate continuous_on in Isabelle/HOL has the type 'a set => ('a => 'b) => bool where both 'a and 'b must be of sort topological_space. Thus, continuous_on is more general than cont, because it allows the two topological spaces 'a and 'b to differ. Conversely, continuous_on cannot be defined within any one type class. Consequently, you cannot use continuous_on in assumptions of type classes either. This restriction is not specific to continuous_on; it appears for all kinds of morphisms, e.g. mono for order-preserving functions, homomorphisms between algebraic structures, etc. Single-parameter type classes just cannot express such things.
In your example, you get the error because Isabelle unifies all occurring type variables to 'a and then realises that continuous_on forces the sort topological_space on 'a; but for the above reasons, you may not depend on sorts in class specifications.
Nevertheless, there might be a simple way out. Just define cont as described above and use it in the assumptions of topological_vector instead of continuous_on. Outside of the class context, you can then prove that cont = continuous_on and derive the original assumption with continuous_on instead of cont. This keeps you from reasoning abstractly within the class context, but this is only a minor restriction.

What's the difference between the empty sort, 'a::{}, and a sort of "type", 'a::type

Below, the comments show the output for the term commands:
declare[[show_sorts]]
term "x"
(* "x::'a::{}" :: "'a::{}" *)
term "x::'a"
(* "x::'a::type" :: "'a::type" *)
In a section title about a type class, I'm using the phrase "nat to type", when what I mean is "nat to 'a" (which I don't use because words generally work better in titles).
I need to be succinct, but if I'm also reasonably, technically correct, that's even better.
Update: Here, I try and clarify what I was asking about. I think I was saying this:
I'm confused. The command term "x" shows that x is of type 'a, and that 'a is of sort {}. Especially with hindsight, and in comparison to what I got for term "x::'a", a sort of {} is not what I would expect for 'a. Here, as often, I look to the software for answers, and when it tells me that the 'a for x has no sort, that makes me wonder.
So, I minimally give x the type 'a, which results in 'a as having sort type. This kind of answer makes sense to me. Not that 'a has to have the sort type, but that 'a should at least have a sort, though my original motivation was to assure myself that the 'a in a type class is of sort type.
From Lars' answer, I am reminded that the type inference engine interprets a type as broadly as possible, so I assume that's at the core of this.
Update 2
From Lars' additional comment, it turns out, at least for me, that a key phrase in understanding 'a::{} is "sort constraint", the "constraint" in "sort constraint" giving important meaning to {}.
Here's some source for anyone who's interested in studying the subtleties of the language of types and sorts:
declare [[show_sorts]]
thm "Pure.reflexive" (* ?x::?'a::{} == ?x [name "Pure.reflexive"] *)
thm "HOL.refl" (* (?t::?'a::type) = ?t [name "HOL.refl"] *)
(* Pure.reflexive *)
theorem "(x::'a::type) == x"
by(rule Pure.reflexive)
theorem "(x::prop) == x"
by(rule Pure.reflexive)
theorem "(x::'a::{}) == x"
by(rule Pure.reflexive)
(* HOL.refl *)
theorem "(x::'a::type) = x"
by(rule HOL.refl)
theorem "(x::'a::{}) = x"
by(rule HOL.refl)
(*ERROR: Type unification failed: Variable 'a::{} not of sort type*)
(* LINE 47 of HOL.thy: where the use of "type" is defined. *)
setup {* Axclass.class_axiomatization (@{binding type}, []) *}
default_sort type
setup {* Object_Logic.add_base_sort @{sort type} *}
A sort is an intersection of type classes. Hence, the most general sort is the full sort, written {} (i.e., the empty intersection). If a sort consists only of a single class, the curly braces are omitted.
In Isabelle/HOL, type is the sort of HOL types (in contrast to the types of the logical framework, most notably the type prop of propositions). So, all the types you usually work with (bool, nat, int, pairs, lists, types defined with typedef or datatype) will have sort type.
This guarantees the separation between the types of the object logic (e.g., HOL) and the logical framework (i.e., Isabelle/Pure): Operators of the logical framework can be used to compose HOL expressions, but cannot occur inside HOL expressions.
So, when working in Isabelle/HOL, you almost always want your expressions to have sort type and hence type is declared as default sort, which means that the type inference will use this instead of the empty sort if no additional constraints are given.
However, due to a shortcoming(?) of the type inference setup, there are some rare cases where type inference still infers the empty sort. This can lead to surprising errors.

In Functional Programming, what is a functor?

I've come across the term 'Functor' a few times while reading various articles on functional programming, but the authors typically assume the reader already understands the term. Looking around on the web has provided either excessively technical descriptions (see the Wikipedia article) or incredibly vague descriptions (see the section on Functors at this ocaml-tutorial website).
Can someone kindly define the term, explain its use, and perhaps provide an example of how Functors are created and used?
Edit: While I am interested in the theory behind the term, I am less interested in the theory than I am in the implementation and practical use of the concept.
Edit 2: Looks like there is some cross-terminology going on: I'm specifically referring to the Functors of functional programming, not the function objects of C++.
The word "functor" comes from category theory, which is a very general, very abstract branch of mathematics. It has been borrowed by designers of functional languages in at least two different ways.
In the ML family of languages, a functor is a module that takes one or more other modules as a parameter. It's considered an advanced feature, and most beginning programmers have difficulty with it.
As an example of implementation and practical use, you could define your favorite form of balanced binary search tree once and for all as a functor, and it would take as a parameter a module that provides:
The type of key to be used in the binary tree
A total-ordering function on keys
Once you've done this, you can use the same balanced binary tree implementation forever. (The type of value stored in the tree is usually left polymorphic—the tree doesn't need to look at values other than to copy them around, whereas the tree definitely needs to be able to compare keys, and it gets the comparison function from the functor's parameter.)
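The same parameterization can be sketched outside ML by passing the ordering explicitly. A rough Haskell sketch (the names `insert` and `member` are made up, and balancing is omitted for brevity):

```haskell
-- An unbalanced search tree whose "functor parameter" (the key ordering)
-- is passed as an ordinary function argument.
data Tree k v = Leaf | Node (Tree k v) k v (Tree k v)

insert :: (k -> k -> Ordering) -> k -> v -> Tree k v -> Tree k v
insert _   k v Leaf = Node Leaf k v Leaf
insert cmp k v (Node l k' v' r) = case cmp k k' of
  LT -> Node (insert cmp k v l) k' v' r
  GT -> Node l k' v' (insert cmp k v r)
  EQ -> Node l k v r  -- replace the value for an existing key

member :: (k -> k -> Ordering) -> k -> Tree k v -> Bool
member _   _ Leaf = False
member cmp k (Node l k' _ r) = case cmp k k' of
  LT -> member cmp k l
  GT -> member cmp k r
  EQ -> True
```

An ML functor would fix the comparison once per generated module, instead of threading it through every call as here.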
Another application of ML functors is layered network protocols. The link is to a really terrific paper by the CMU Fox group; it shows how to use functors to build more complex protocol layers (like TCP) on top of simpler layers (like IP or even directly over Ethernet). Each layer is implemented as a functor that takes as a parameter the layer below it. The structure of the software actually reflects the way people think about the problem, as opposed to the layers existing only in the mind of the programmer. In 1994 when this work was published, it was a big deal.
For a wild example of ML functors in action, see the paper ML Module Mania, which contains a publishable (i.e., scary) example of functors at work. For a brilliantly clear explanation of the ML module system (with comparisons to other kinds of modules), read the first few pages of Xavier Leroy's 1994 POPL paper Manifest Types, Modules, and Separate Compilation.
In Haskell, and in some related pure functional languages, Functor is a type class. A type belongs to a type class (or more technically, the type "is an instance of" the type class) when the type provides certain operations with certain expected behavior. A type T can belong to class Functor if it has certain collection-like behavior:
The type T is parameterized over another type, which you should think of as the element type of the collection. The type of the full collection is then something like T Int, T String, or T Bool, if it contains integers, strings, or Booleans respectively. If the element type is unknown, it is written as a type parameter a, as in T a.
Examples include lists (zero or more elements of type a), the Maybe type (zero or one elements of type a), sets of elements of type a, arrays of elements of type a, all kinds of search trees containing values of type a, and lots of others you can think of.
The other property that T has to satisfy is that if you have a function of type a -> b (a function on elements), then you have to be able to take that function and produce a related function on collections. You do this with the operator fmap, which is shared by every type in the Functor type class. The operator is actually overloaded, so if you have a function even with type Int -> Bool, then
fmap even
is an overloaded function that can do many wonderful things:
Convert a list of integers to a list of Booleans
Convert a tree of integers to a tree of Booleans
Convert Nothing to Nothing and Just 7 to Just False
In Haskell, this property is expressed by giving the type of fmap:
fmap :: (Functor t) => (a -> b) -> t a -> t b
where we now have a small t, which means "any type in the Functor class."
To make a long story short, in Haskell a functor is a kind of collection for which if you are given a function on elements, fmap will give you back a function on collections. As you can imagine, this is an idea that can be widely reused, which is why it is blessed as part of Haskell's standard library.
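The three conversions listed above can be run directly; a small sketch:

```haskell
-- fmap applies `even` inside each container without manual unwrapping.
listResult  = fmap even [1, 2, 3 :: Int]        -- [False,True,False]
maybeResult = fmap even (Just (7 :: Int))       -- Just False
noResult    = fmap even (Nothing :: Maybe Int)  -- Nothing
```

The same `fmap even` works on any Functor instance, which is the reuse the paragraph above describes.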
As usual, people continue to invent new, useful abstractions, and you may want to look into applicative functors, for which the best reference may be a paper called Applicative Programming with Effects by Conor McBride and Ross Paterson.
Other answers here are complete, but I'll try another explanation of the FP use of functor. Take this as an analogy:
A functor is a container of type a that, when subjected to a function that maps from a→b, yields a container of type b.
Unlike the abstracted-function-pointer use in C++, here the functor is not the function; rather, it's something that behaves consistently when subjected to a function.
There are three different meanings, not much related!
In Ocaml it is a parametrized module. See manual. I think the best way to grok them is by example: (written quickly, might be buggy)
module type Order = sig
  type t
  val compare: t -> t -> bool
end;;

module Integers = struct
  type t = int
  let compare x y = x > y
end;;

module ReverseOrder = functor (X: Order) -> struct
  type t = X.t
  let compare x y = X.compare y x
end;;

(* We can order reversely *)
module K = ReverseOrder (Integers);;
Integers.compare 3 4;; (* this is false *)
K.compare 3 4;; (* this is true *)

module LexicographicOrder = functor (X: Order) ->
  functor (Y: Order) -> struct
    type t = X.t * Y.t
    let compare (a,b) (c,d) = if X.compare a c then true
      else if X.compare c a then false
      else Y.compare b d
  end;;

(* compare lexicographically *)
module X = LexicographicOrder (Integers) (Integers);;
X.compare (2,3) (4,5);;

module LinearSearch = functor (X: Order) -> struct
  type t = X.t array
  let find x k = 0 (* some boring code *)
end;;

module BinarySearch = functor (X: Order) -> struct
  type t = X.t array
  let find x k = 0 (* some boring code *)
end;;

(* linear search over arrays of integers *)
module LS = LinearSearch (Integers);;
LS.find [|1;2;3|] 2;;

(* binary search over arrays of pairs of integers,
   sorted lexicographically *)
module BS = BinarySearch (LexicographicOrder (Integers) (Integers));;
BS.find [|(2,3);(4,5)|] (2,3);;
You can now add quickly many possible orders, ways to form new orders, do a binary or linear search easily over them. Generic programming FTW.
In functional programming languages like Haskell, it means some type constructors (parametrized types like lists, sets) that can be "mapped". To be precise, a functor f is equipped with (a -> b) -> (f a -> f b). This has origins in category theory. The Wikipedia article you linked to describes this usage.
class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

instance Functor [] where -- lists are a functor
  fmap = map

instance Functor Maybe where -- Maybe is option in Haskell
  fmap f (Just x) = Just (f x)
  fmap f Nothing = Nothing

fmap (+1) [2,3,4]   -- this is [3,4,5]
fmap (+1) (Just 5)  -- this is Just 6
fmap (+1) Nothing   -- this is Nothing
So, this is a special kind of type constructor, and has little to do with functors in Ocaml!
In imperative languages, it is a pointer to a function.
In OCaml, it's a parameterised module.
If you know C++, think of an OCaml functor as a template. Where C++ class templates work at the class level, OCaml functors work at the module scale.
An example of functor is Map.Make; module StringMap = Map.Make (String);; builds a map module that works with String-keyed maps.
You couldn't achieve something like StringMap with just polymorphism; you need to make some assumptions on the keys. The String module contains the operations (comparison, etc) on a totally ordered string type, and the functor will link against the operations the String module contains. You could do something similar with object-oriented programming, but you'd have method indirection overhead.
You got quite a few good answers. I'll pitch in:
A functor, in the mathematical sense, is a special kind of function on an algebra. It is a minimal function which maps an algebra to another algebra. "Minimality" is expressed by the functor laws.
There are two ways to look at this. For example, lists are functors over some type. That is, given an algebra over a type 'a', you can generate a compatible algebra of lists containing things of type 'a'. (For example: the map that takes an element to a singleton list containing it: f(a) = [a]) Again, the notion of compatibility is expressed by the functor laws.
On the other hand, given a functor f "over" a type a (that is, f a is the result of applying the functor f to the algebra of type a), and a function g: a -> b, we can compute a new functor F = (fmap g) which maps f a to f b. In short, fmap is the part of F that maps "functor parts" to "functor parts", and g is the part of the function that maps "algebra parts" to "algebra parts". Given a function and a functor, the result is a functor too.
It might seem that different languages are using different notions of functors, but they're not. They're merely using functors over different algebras. OCaml has an algebra of modules, and functors over that algebra let you attach new declarations to a module in a "compatible" way.
A Haskell functor is NOT a type class. It is a data type with a free variable which satisfies the type class. If you're willing to dig into the guts of a datatype (with no free variables), you can reinterpret a data type as a functor over an underlying algebra. For example:
data F = F Int
is isomorphic to the class of Ints. So F, as a value constructor, is a function that maps Int to F Int, an equivalent algebra. It is a functor. On the other hand, you don't get fmap for free here. That's what pattern matching is for.
Functors are good for "attaching" things to elements of algebras, in an algebraically compatible way.
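To make the pattern-matching point above concrete, here is a hypothetical wrapper with a free variable, whose Functor instance has to be written by hand:

```haskell
-- A data type with a free variable; fmap is not free, it is written by
-- pattern matching on the constructor.
data Wrap a = Wrap a deriving (Show, Eq)

instance Functor Wrap where
  fmap g (Wrap x) = Wrap (g x)
```

With this instance, fmap (+1) (Wrap 41) yields Wrap 42: the pattern match is what exposes the "algebra part" inside the wrapper so g can be applied to it.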
The best answer to that question is found in "Typeclassopedia" by Brent Yorgey.
This issue of the Monad Reader contains a precise definition of what a functor is, as well as definitions of many other concepts, plus a diagram. (Monoid, Applicative, Monad, and other concepts are explained in relation to Functor.)
http://haskell.org/sitewiki/images/8/85/TMR-Issue13.pdf
excerpt from Typeclassopedia for Functor:
"A simple intuition is that a Functor represents a “container” of some
sort, along with the ability to apply a function uniformly to every element in the
container"
But really the whole Typeclassopedia is highly recommended reading that is surprisingly easy. In a way, you can see the type classes presented there as a parallel to design patterns in object-oriented programming, in the sense that they give you a vocabulary for a given behavior or capability.
Cheers
There is a pretty good example in the O'Reilly OCaml book that's on Inria's website (which, as of writing this, is unfortunately down). I found a very similar example in this book used by Caltech: Introduction to OCaml (pdf link). The relevant section is the chapter on functors (page 139 in the book, page 149 in the PDF).
In the book they have a functor called MakeSet which creates a data structure that consists of a list, and functions to add an element, determine if an element is in the list, and to find the element. The comparison function that is used to determine if it's in/not in the set has been parametrized (which is what makes MakeSet a functor instead of a module).
They also have a module that implements the comparison function so that it does a case insensitive string compare.
Using the functor and the module that implements the comparison they can create a new module in one line:
module SSet = MakeSet(StringCaseEqual);;
that creates a module for a set data structure that uses case insensitive comparisons. If you wanted to create a set that used case sensitive comparisons then you would just need to implement a new comparison module instead of a new data structure module.
Tobu compared functors to templates in C++ which I think is quite apt.
Given the other answers and what I'm going to post now, I'd say that it's a rather heavily overloaded word, but anyway...
For a hint regarding the meaning of the word 'functor' in Haskell, ask GHCi:
Prelude> :info Functor
class Functor f where
  fmap :: forall a b. (a -> b) -> f a -> f b
  (GHC.Base.<$) :: forall a b. a -> f b -> f a
        -- Defined in GHC.Base
instance Functor Maybe -- Defined in Data.Maybe
instance Functor [] -- Defined in GHC.Base
instance Functor IO -- Defined in GHC.Base
So, basically, a functor in Haskell is something that can be mapped over. Another way to say it is that a functor is something which can be regarded as a container which can be asked to use a given function to transform the value it contains; thus, for lists, fmap coincides with map, for Maybe, fmap f (Just x) = Just (f x), fmap f Nothing = Nothing etc.
The Functor typeclass subsection and the section on Functors, Applicative Functors and Monoids of Learn You a Haskell for Great Good give some examples of where this particular concept is useful. (A summary: lots of places! :-))
Note that any monad can be treated as a functor, and in fact, as Craig Stuntz points out, the most often used functors tend to be monads... OTOH, it is convenient at times to make a type an instance of the Functor typeclass without going to the trouble of making it a Monad. (E.g. in the case of ZipList from Control.Applicative, mentioned on one of the aforementioned pages.)
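The fact that any monad can be treated as a functor can be made concrete: a functor map is definable from return and (>>=) alone (the name fmapViaBind below is made up; the standard library calls this liftM):

```haskell
-- For any monad m, mapping a plain function over it needs only
-- return and (>>=): bind out the value, apply f, wrap it back up.
fmapViaBind :: Monad m => (a -> b) -> m a -> m b
fmapViaBind f m = m >>= (return . f)
```

For example, fmapViaBind (+1) (Just 4) gives Just 5, and fmapViaBind (*2) [1,2,3] gives [2,4,6], matching what fmap does for those types.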
"Functor is mapping of objects and morphisms that preserves composition and identity of a category."
Let's define what a category is.
It's a bunch of objects!
Draw a few dots (for now two dots, one called 'a' and another called 'b') inside a circle, and call that circle A (a category) for now.
What does the category hold?
Composition of morphisms between objects, and an identity morphism for every object.
So, we have to map the objects and preserve the composition after applying our Functor.
Let's imagine 'A' is our category, which has objects ['a', 'b'], and there exists a morphism a -> b.
Now, we have to define a functor which can map these objects and morphisms into another category 'B'.
Let's say the functor is called 'Maybe':
data Maybe a = Nothing | Just a
So, The category 'B' looks like this.
Please draw another circle but this time with 'Maybe a' and 'Maybe b' instead of 'a' and 'b'.
Everything seems good and all the objects are mapped
'a' became 'Maybe a' and 'b' became 'Maybe b'.
But the problem is we have to map the morphism from 'a' to 'b' as well.
That means morphism a -> b in 'A' should map to morphism 'Maybe a' -> 'Maybe b'
If the morphism from 'a' to 'b' is called f, then the morphism from 'Maybe a' to 'Maybe b' is called 'fmap f'.
Now let's see what the function 'f' was doing in 'A' and see if we can replicate it in 'B'.
function definition of 'f' in 'A':
f :: a -> b
f takes a and returns b
function definition of 'f' in 'B' :
f :: Maybe a -> Maybe b
f takes Maybe a and return Maybe b
Let's see how to use fmap to map the function 'f' from 'A' to the function 'fmap f' in 'B'.
Definition of fmap:
fmap :: (a -> b) -> (Maybe a -> Maybe b)
fmap f Nothing  = Nothing
fmap f (Just x) = Just (f x)
So, what are we doing here?
We are applying the function 'f' to 'x', which is of type 'a'. The special pattern match on 'Nothing' comes from the definition of Functor Maybe.
So, we mapped our objects [a, b] and morphisms [f] from category 'A' to category 'B'.
That's a Functor!
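The "preserves composition and identity" part of the opening sentence corresponds to the two functor laws. Here is a spot-check for Maybe on sample values (a check, not a proof):

```haskell
-- Functor laws, checked on sample values:
--   fmap id == id,  and  fmap (g . h) == fmap g . fmap h
lawIdentity :: Bool
lawIdentity = fmap id (Just (3 :: Int)) == id (Just 3)

lawComposition :: Bool
lawComposition =
  fmap ((* 2) . (+ 1)) (Just (3 :: Int))
    == (fmap (* 2) . fmap (+ 1)) (Just 3)
```

Both evaluate to True; a mapping that broke either law would not be a functor in the category-theoretic sense, even if it had the right type.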
Here's an article on functors from a programming POV, followed up by more specifically how they surface in programming languages.
One practical use of functors is in monads; you can find many tutorials on monads if you look for that.
In a comment to the top-voted answer, user Wei Hu asks:
I understand both ML-functors and Haskell-functors, but lack the
insight to relate them together. What's the relationship between these
two, in a category-theoretical sense?
Note: I don't know ML, so please forgive and correct any related mistakes.
Let's initially assume that we are all familiar with the definitions of 'category' and 'functor'.
A compact answer would be that "Haskell-functors" are (endo-)functors F : Hask -> Hask while "ML-functors" are functors G : ML -> ML'.
Here, Hask is the category formed by Haskell types and functions between them, and similarly ML and ML' are categories defined by ML structures.
Note: There are some technical issues with making Hask a category, but there are ways around them.
From a category theoretic perspective, this means that a Hask-functor is a map F of Haskell types:
data F a = ...
along with a map fmap of Haskell functions:
instance Functor F where
fmap f = ...
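One hypothetical completion of that skeleton: a container that fixes a String tag and maps over its payload.

```haskell
-- F keeps the tag untouched; fmap only transforms the payload,
-- so F is a map of types (a to F a) together with a map of functions.
data F a = F String a deriving (Show, Eq)

instance Functor F where
  fmap f (F tag x) = F tag (f x)
```

For example, fmap length (F "tag" "hello") gives F "tag" 5.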
ML is pretty much the same, though there is no canonical fmap abstraction I am aware of, so let's define one:
signature FUNCTOR = sig
  type 'a f
  val fmap: ('a -> 'b) -> 'a f -> 'b f
end
That is, f maps ML types and fmap maps ML functions, so
functor StructB (StructA : SigA) :> FUNCTOR =
struct
  fmap g = ...
  ...
end
is a functor F: StructA -> StructB.
Rough Overview
In functional programming, a functor is essentially a construction for lifting ordinary unary functions (i.e. those with one argument) to functions between values of new types. It is much easier to write and maintain simple functions between plain objects and use functors to lift them, than to manually write functions between complicated container objects. A further advantage is that plain functions can be written only once and then re-used via different functors.
Examples of functors include arrays, "maybe" and "either" functors, futures (see e.g. https://github.com/Avaq/Fluture), and many others.
Illustration
Consider the function constructing the full person's name from the first and last names. We could define it like fullName(firstName, lastName) as a function of two arguments, which however would not be suitable for functors, which only deal with functions of one argument. To remedy this, we collect all the arguments in a single object name, which becomes the function's single argument:
// In JavaScript notation
fullName = name => name.firstName + ' ' + name.lastName
Now what if we have many people in an array? Instead of manually going over the list, we can simply re-use our function fullName via the map method provided for arrays, with a short single line of code:
fullNameList = nameList => nameList.map(fullName)
and use it like
nameList = [
  {firstName: 'Steve', lastName: 'Jobs'},
  {firstName: 'Bill', lastName: 'Gates'}
]
fullNames = fullNameList(nameList)
// => ['Steve Jobs', 'Bill Gates']
That will work whenever every entry in our nameList is an object providing both firstName and lastName properties. But what if some objects don't (or even aren't objects at all)? To avoid the errors and make the code safer, we can wrap our objects into the Maybe type (see e.g. https://sanctuary.js.org/#maybe-type):
// function to test name for validity
isValidName = name =>
  (typeof name === 'object')
    && (typeof name.firstName === 'string')
    && (typeof name.lastName === 'string')

// wrap into the Maybe type
maybeName = name =>
  isValidName(name) ? Just(name) : Nothing()
where Just(name) is a container carrying only valid names and Nothing() is the special value used for everything else. Now instead of interrupting the flow to check the validity of our arguments (or forgetting to), we can simply reuse (lift) our original fullName function with another single line of code, based again on the map method, this time provided for the Maybe type:
// Maybe Object -> Maybe String
maybeFullName = maybeName => maybeName.map(fullName)
and use it like
justSteve = maybeName(
{firstName: 'Steve', lastName: 'Jobs'}
) // => Just({firstName: 'Steve', lastName: 'Jobs'})
notSteve = maybeName(
{lastName: 'SomeJobs'}
) // => Nothing()
steveFN = maybeFullName(justSteve)
// => Just('Steve Jobs')
notSteveFN = maybeFullName(notSteve)
// => Nothing()
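The snippets above rely on a Maybe implementation such as Sanctuary's. To make the idea self-contained, here is a minimal stand-alone sketch of Just and Nothing with a map method (illustrative only, not Sanctuary's actual API):

```javascript
// Minimal sketch of a Maybe type with a map method (illustrative, not Sanctuary's API)
const Just = value => ({
  map: f => Just(f(value)),   // apply f to the wrapped value, re-wrap the result
  toString: () => `Just(${JSON.stringify(value)})`
});
const Nothing = () => ({
  map: _f => Nothing(),       // mapping over Nothing stays Nothing
  toString: () => 'Nothing()'
});

const fullName = name => name.firstName + ' ' + name.lastName;
const maybeFullName = maybeName => maybeName.map(fullName);

console.log(String(maybeFullName(Just({firstName: 'Steve', lastName: 'Jobs'}))));
// Just("Steve Jobs")
console.log(String(maybeFullName(Nothing())));
// Nothing()
```

Note that fullName itself never changes: the map method of each container decides whether (and how) to apply it.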
Category Theory
A Functor in Category Theory is a map between two categories respecting composition of their morphisms. In a Computer Language, the main Category of interest is the one whose objects are types (certain sets of values), and whose morphisms are functions f:a->b from one type a to another type b.
For example, take a to be the String type, b the Number type, and f is the function mapping a string into its length:
// f :: String -> Number
f = str => str.length
Here a = String represents the set of all strings and b = Number the set of all numbers. In that sense, both a and b represent objects in the Set Category (which is closely related to the category of types, with the difference being inessential here). In the Set Category, morphisms between two sets are precisely all functions from the first set into the second. So our length function f here is a morphism from the set of strings into the set of numbers.
As we only consider the set category, the relevant Functors from it into itself are maps sending objects to objects and morphisms to morphisms, that satisfy certain algebraic laws.
Example: Array
Array can mean many things, but only one of them is a Functor: the type construct mapping a type a into the type [a] of all arrays of type a. For instance, the Array functor maps the type String into the type [String] (the set of all arrays of strings of arbitrary length), and the type Number into the corresponding type [Number] (the set of all arrays of numbers).
It is important not to confuse the Functor map
Array :: a => [a]
with a morphism a -> [a]. The functor simply maps (associates) the type a to the type [a], as one thing to another. That each type is actually a set of elements is of no relevance here. In contrast, a morphism is an actual function between those sets. For instance, there is a natural morphism (function)
pure :: a -> [a]
pure = x => [x]
which sends a value into the 1-element array with that value as single entry. That function is not a part of the Array Functor! From the point of view of this functor, pure is just a function like any other, nothing special.
On the other hand, the Array Functor has its second part, the morphism part, which maps a morphism f :: a -> b into a morphism [f] :: [a] -> [b]:
// (a -> b) -> ([a] -> [b])
Array.map(f) = arr => arr.map(f)
Here arr is any array of arbitrary length with values of type a, and arr.map(f) is the array of the same length with values of type b, whose entries are results of applying f to the entries of arr. To make it a functor, the mathematical laws of mapping identity to identity and compositions to compositions must hold, which are easy to check in this Array example.
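The two laws mentioned above (identity maps to identity, compositions map to compositions) can be checked directly for Array in JavaScript:

```javascript
// Checking the two functor laws for the Array functor
const id = x => x;
const compose = (g, f) => x => g(f(x));

const arr = [1, 2, 3];
const f = x => x + 1;
const g = x => x * 2;

// Law 1: mapping the identity function changes nothing
console.log(JSON.stringify(arr.map(id)) === JSON.stringify(arr)); // true

// Law 2: mapping a composition equals composing the two maps
console.log(
  JSON.stringify(arr.map(compose(g, f))) ===
  JSON.stringify(arr.map(f).map(g))
); // true
```

Law 2 is also what justifies fusing chained `.map` calls into one pass: both sides produce [4, 6, 8] here.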
Not to contradict the previous theoretical or mathematical answers, but a Functor is also an Object (in an Object-Oriented programming language) that has only one method and is effectively used as a function.
An example is the Runnable interface in Java, which has only one method: run.
Consider this example, first in Javascript, which has first-class functions:
[1, 2, 5, 10].map(function(x) { return x*x; });
Output:
[1, 4, 25, 100]
The map method takes a function and returns a new array with each element being the result of the application of that function to the value at the same position in the original array.
To do the same thing in Java using a Functor, you would first need to define an interface, say:
public interface IntMapFunction {
public int f(int x);
}
Then, if you add a collection class which had a map function, you could do:
myCollection.map(new IntMapFunction() { public int f(int x) { return x * x; } });
This uses an in-line subclass of IntMapFunction to create a Functor, which is the OO equivalent of the function from the earlier JavaScript example.
Using Functors let you apply functional techniques in an OO language. Of course, some OO languages also have support for functions directly, so this isn't required.
Reference: http://en.wikipedia.org/wiki/Function_object
KISS: A functor is an object that has a map method.
Arrays in JavaScript implement map and are therefore functors. Promises, Streams and Trees often implement map in functional languages, and when they do, they are considered functors. The functor's map method takes its own contents, transforms each of them using the transformation callback passed to map, and returns a new functor which has the same structure as the first functor, but with the transformed values.
src: https://www.youtube.com/watch?v=DisD9ftUyCk&feature=youtu.be&t=76
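For instance, in JavaScript (Promise is not a lawful functor in every respect, but its then method plays the same mapping role):

```javascript
// Arrays are functors: map returns a new array of transformed values
const doubled = [1, 2, 5].map(x => x * 2);
console.log(doubled); // [ 2, 4, 10 ]

// Promise.then plays a similar role for eventual values:
// the transformation runs once the wrapped value is available
Promise.resolve(21).then(x => x * 2).then(console.log);
```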
In functional programming, error handling is different. Throwing and catching exceptions is imperative code. Instead of using a try/catch block, a safety box is created around the code that might throw an error. This is a fundamental design pattern in functional programming: a wrapper object is used to encapsulate a potentially erroneous value. The wrapper's main purpose is to provide a 'different' way to use the wrapped object.
const wrap = (val) => new Wrapper(val);
Wrapping guards direct access to the values so they can be manipulated safely and immutably. Because we won't have direct access to it, the only way to extract it is to use the identity function.
identity :: (a) -> a
This is another use case of identity function: Extracting data functionally from encapsulated types.
The Wrapper type uses the map to safely access and manipulate values. In this case, we are mapping the identity function over the container to extract the value from the container. With this approach, we can check for null before calling the function, or check for an empty string, a negative number, and so on.
fmap :: (A -> B) -> Wrapper[A] -> Wrapper[B]
fmap, first opens the container, then applies the given function to its value, and finally closes the value back into a new container of the same type. This type of function is called a functor.
fmap returns a new copy of the container at each invocation.
functors are side-effect-free
functors must be composable
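A minimal, self-contained sketch of the Wrapper described above (the class shape and method names are assumptions for illustration, not a specific library's API):

```javascript
// Sketch of a Wrapper type: fmap re-wraps, map applies a function to extract
class Wrapper {
  constructor(value) { this._value = value; }
  // fmap: apply f to the wrapped value and close the result into a new Wrapper
  fmap(f) { return new Wrapper(f(this._value)); }
  // map: apply f to the wrapped value and return the raw result;
  // mapping the identity function extracts the value
  map(f) { return f(this._value); }
}
const wrap = val => new Wrapper(val);
const identity = a => a;

// Lift a plain function over the wrapper, then extract with identity
const len = wrap('hello').fmap(s => s.length);
console.log(len.map(identity)); // 5
```

Each fmap call returns a fresh Wrapper, which is what keeps the manipulation side-effect-free and composable.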
In practice, functor means an object that implements the call operator in C++. In OCaml, I think functor refers to something that takes a module as input and outputs another module.
Put simply, a functor, or function object, is a class object that can be called just like a function.
In C++:
This is how you write a function
void foo()
{
cout << "Hello, world! I'm a function!";
}
This is how you write a functor
class FunctorClass
{
public:
void operator () ()
{
cout << "Hello, world! I'm a functor!";
}
};
Now you can do this:
foo(); //result: Hello, world! I'm a function!
FunctorClass bar;
bar(); //result: Hello, world! I'm a functor!
What makes these so great is that you can keep state in the class - imagine if you wanted to ask a function how many times it has been called. There's no way to do this in a neat, encapsulated way. With a function object, it's just like any other class: you'd have some instance variable that you increment in operator () and some method to inspect that variable, and everything's neat as you please.
Functor is not specifically related to functional programming. It's just a "pointer" to a function, or some kind of object that can be called as if it were a function.
