What are real use cases of currying? - functional-programming

I've been reading lots of articles on currying, but almost all of them are misleading: they explain currying as partial function application, and nearly all of the examples are about functions with an arity of 2, like an add function or something.
Also, many implementations of the curry function in JavaScript accept more than one argument per partial application (see lodash), while the Wikipedia article clearly states that currying is about:
translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument (partial application)
So basically currying is a series of partial applications, each with a single argument. And I really want to know real use cases of that, in any language.

The real use case of currying is partial application.
Currying by itself is not terribly interesting. What's interesting is if your programming language supports currying by default, as is the case in F# or Haskell.
You can define higher-order functions for currying and partial application in any language that supports first-class functions, but it's a far cry from the flexibility you get when every function is curried by default, and thus partially applicable without you having to do anything.
So if you see people conflating currying and partial application, that's because of how closely those concepts are tied in such languages: since currying is ubiquitous, you rarely need any form of partial application other than applying a curried function to consecutive arguments.
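A minimal Haskell sketch of what "curried by default" buys you (the function names here are mine, purely illustrative):

```haskell
-- Every Haskell function is curried by default, so partial application
-- needs no special syntax: applying a function to fewer arguments than
-- its type lists simply returns another function.
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- Supplying one argument yields a two-argument function.
addTen :: Int -> Int -> Int
addTen = add3 10

main :: IO ()
main = do
  print (addTen 1 2)                    -- 13
  print (map (add3 1 2) [10, 20, 30])   -- [13,23,33]
```
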

It is useful for passing context.
Consider the 'map' function. It takes a function as argument:
map : (a -> b) -> [a] -> [b]
Given a function which uses some form of context:
f : SomeContext -> a -> b
This means you can elegantly use the map function without having to state the 'a'-argument:
map (f actualContext) [1,2,3]
Without currying, you would have to use a lambda:
map (\a -> f actualContext a) [1,2,3]
Notes:
map is a function which takes a function f and a list of values of type a. It constructs a new list by applying f to each a, resulting in a list of b
e.g. map (+1) [1,2,3] = [2,3,4]
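The same idea as runnable Haskell, with a concrete (made-up) context type standing in for SomeContext:

```haskell
-- A made-up context: a scaling factor applied to every element.
newtype Context = Context { factor :: Int }

-- f uses the context plus one element, matching f : SomeContext -> a -> b.
f :: Context -> Int -> Int
f ctx a = factor ctx * a

main :: IO ()
main = do
  let actualContext = Context 10
  -- Partial application supplies the context once; map handles the rest.
  print (map (f actualContext) [1, 2, 3])          -- [10,20,30]
  -- Without currying, an explicit lambda would be needed:
  print (map (\a -> f actualContext a) [1, 2, 3])  -- [10,20,30]
```
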

The bearing currying has on code can be divided into two sets of issues: syntactic and implementation-related (I use Haskell to illustrate).
Syntax Issue 1:
Currying allows greater code clarity in certain cases.
What does clarity mean? Reading a function's type provides a clear indication of its functionality.
e.g. The map function.
map : (a -> b) -> ([a] -> [b])
Read in this way, we see that map is a higher-order function that lifts a function transforming a's to b's into a function that transforms [a] into [b].
This intuition is particularly useful when understanding such expressions.
map (map (+1))
The inner map has the type above [a] -> [b].
In order to figure out the type of the outer map, we recursively apply our intuition from above. The outer map thus lifts [a] -> [b] to [[a]] -> [[b]].
This intuition will carry you forward a LOT.
Once we generalize map into fmap, a map over arbitrary containers, it becomes really easy to read expressions like the following (note that I've monomorphised the type of each fmap to a different type for the sake of the example).
showInt : Int -> String
(fmap . fmap . fmap) showInt : Tree (Set [Int]) -> Tree (Set [String])
Hopefully the above illustrates that fmap provides this generalized notion of lifting vanilla functions into functions over some arbitrary container.
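A quick way to check this lifting intuition in plain Haskell, using lists and Maybe instead of Tree and Set (which would need extra instances):

```haskell
main :: IO ()
main = do
  -- One fmap lifts (+1) over one container layer:
  print (fmap (+1) [1, 2, 3])               -- [2,3,4]
  -- Two composed fmaps lift it through two layers:
  print ((fmap . fmap) (+1) [[1, 2], [3]])  -- [[2,3],[4]]
  -- Three layers, mixing different containers:
  print ((fmap . fmap . fmap) show (Just [[1], [2, 3]]))
  -- Just [["1"],["2","3"]]
```
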
Syntax Issue 2:
Currying also allows us to express functions in point-free form.
import Data.List (sort)

nthSmallest :: Int -> [Int] -> Maybe Int
nthSmallest n = safeHead . drop n . sort

safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead _ = Nothing
The above is usually considered good style as it illustrates thinking in terms of a pipeline of functions rather than the explicit manipulation of data.
Implementation:
In Haskell, point free style (through currying) can help us optimize functions. Writing a function in point free form will allow us to memoize it.
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n-2) + memoized_fib (n-1)

not_memoized_fib :: Int -> Integer
not_memoized_fib x = map fib [0 ..] !! x
  where fib 0 = 0
        fib 1 = 1
        fib n = not_memoized_fib (n-2) + not_memoized_fib (n-1)
Writing it in point-free form, as in the memoized version, makes memoized_fib a single shared value: the list map fib [0 ..] is built once and reused across calls, which is what memoizes it. The pointful version rebuilds the list on every call.
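To convince yourself of the difference, the memoized version answers large queries instantly, while not_memoized_fib at the same index would effectively never finish (the main and the index 80 below are mine, for demonstration):

```haskell
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n - 2) + memoized_fib (n - 1)

main :: IO ()
main = print (memoized_fib 80)  -- 23416728348467685, printed immediately
```
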

Related

Difference between lifting and higher order functions

I usually hear the term lifting, when people are talking about map, fold, or bind, but isn't basically every higher order function some kind of lifting?
Why can't filter be a lift from a -> Bool to [a] -> [a], heck even the bool function (which models an if statement) can be considered a lift from a -> a to Bool -> a. And if they are not, then why is ap from the Applicative type class considered a lift?
If the important thing is going from ... a ... to ... f a ..., then ap wouldn't fit the case either: f (a -> b) -> f a -> f b
I'm surprised no one has answered this already.
A lifting function's role is to lift a function into a context (typically a Functor or Monad). So lifting a function of type a -> b into a List context would result in a function of type List[a] -> List[b]. If you think about it this is exactly what map (or fmap in Haskell) does. In fact, it is part of the definition of a Functor.
However, a Functor can only lift functions of one argument. We also want to be able to deal with functions of other arities. For example, if we have a function of type a -> b -> c, we cannot use map. This is where a more general lifting operation comes into the picture; in Haskell it is called liftA2 (or liftM2 for monads):
liftA2 :: (a -> b -> c) -> (M[a] -> M[b] -> M[c])
where M[a] is some particular Monad (like List) parameterized with a given type a.
There are additional variants of lift defined for other arities as well.
This is also why filter is not a lifting function: it doesn't fit the required type signature; you are not lifting a function of type a -> Bool to M[a] -> M[Bool]. It is, however, a higher-order function.
If you want to read more about lifting, the Haskell Wiki has a good article on it.
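In Haskell the two-argument case described above is spelled liftA2 (liftM2 is the older monadic variant); a small sketch:

```haskell
import Control.Applicative (liftA2)

main :: IO ()
main = do
  -- fmap lifts a one-argument function into a context:
  print (fmap (+ 1) (Just 2))           -- Just 3
  -- liftA2 lifts a two-argument function:
  print (liftA2 (+) (Just 2) (Just 3))  -- Just 5
  -- For lists, it combines every pair of elements:
  print (liftA2 (+) [1, 2] [10, 20])    -- [11,21,12,22]
```
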

Curried Functions in Standard ML

I've been banging my head against the wall trying to learn about curried functions. Here's what I understand so far; suppose I have a function:
fun curry (a, b, c) = a * b * c;
or
fun curry a b c = a * b * c;
In ML, a function can take only one argument, so the first version uses a 3-tuple to get around this and get access to a, b, and c.
In the second example, what I really have is:
fun ((curry a) b) c
where curry a returns a function, and
(curry a) b returns a function, and ((curry a) b) c returns the final value. A few questions:
1) Why is this preferable to using a tuple? Is it just that I can make use of the intermediate functions curry a and (curry a) b? My book mentions partial instantiation, but I'm not totally clear on it.
2) How do you determine what function curry a, (curry a) b actually do? ((curry a) b) c is just a * b * c, right?
Thanks for any help clearing this up,
bclayman
There is an element of taste in using curried vs. non-curried functions. I don't use curried functions as a matter of course. For example, if I were to write a gcd function I would tend to write it as a function designed to operate on a tuple, simply because I seldom have use for a partially-instantiated gcd function.
Where curried functions are really useful is in defining higher-order functions. Consider map. It is easy enough to write a non-curried version:
fun mymap (f,[])= []
| mymap (f,x::xs) = f(x)::mymap(f,xs)
It has type fn : ('a -> 'b) * 'a list -> 'b list, taking a tuple consisting of a function between two types and a list of elements of the input type, and returning a list of elements of the output type. There is nothing exactly wrong with this function, but -- it isn't the same as SML's map. The built-in map has type
fn : ('a -> 'b) -> 'a list -> 'b list
which is curried. What does the curried function do for us? For one thing, it can be thought of as a function transformer. You feed map a function, f, designed to operate on elements of a given type, and it returns a function, map f, which is designed to operate on whole lists of elements. For example, if
fun square(x) = x*x;
is a function designed to square ints, then val list_square = map square defines list_square as a function which takes a list of elements and returns the list of their squares.
When you use map in a call like map square [1,2,3] you have to remember that function application is left-associative, so that this parses as
(map square) [1,2,3]. The function map square is the same as the function list_square I defined above. The invocation map square [1,2,3] takes that function and applies it to [1,2,3], yielding [1,4,9].
The curried version is really nice if you want to define a function, metamap, which can be used to apply functions to each element of a matrix thought of as a list of lists. Using the curried version it is as simple as:
fun metamap f = map (map f)
used like (in the REPL):
- metamap square [[1,2],[3,4]];
val it = [[1,4],[9,16]] : int list list
The logic is that map lifts a function from applying to elements to applying to lists. If you want a function to apply to lists of lists (e.g. matrices), just apply map twice -- which is all metamap does. You could, of course, write a non-curried version of metamap using our non-curried mymap function (it wouldn't even be all that hard), but you wouldn't be able to approach the elegance of the 1-line definition above.

In pure functional languages, is data (strings, ints, floats.. ) also just functions?

I was thinking about pure object-oriented languages like Ruby, where everything, including numbers, ints, floats, and strings, is itself an object. Is this the same thing with pure functional languages? For example, in Haskell, are numbers and strings also functions?
I know Haskell is based on the lambda calculus, which represents everything, including data and operations, as functions. It would seem logical to me that a "purely functional language" would model everything as a function, as well as keep with the definition that a function must always return the same output for the same inputs and has no state.
It's okay to think about that theoretically, but...
Just like in Ruby not everything is an object (argument lists, for instance, are not objects), not everything in Haskell is a function.
For more reference, check out this neat post: http://conal.net/blog/posts/everything-is-a-function-in-haskell
#wrhall gives a good answer. However you are somewhat correct that in the pure lambda calculus it is consistent for everything to be a function, and the language is Turing-complete (capable of expressing any pure computation that Haskell, etc. is).
That gives you some very strange things, since the only thing you can do to anything is to apply it to something else. When do you ever get to observe something? You have some value f and want to know something about it; your only choice is to apply it to some value x to get f x, which is another function, and the only choice is to apply that to another value y to get f x y, and so on.
Often I interpret the pure lambda calculus as talking about transformations on things that are not functions, but only capable of expressing functions itself. That is, I can make a function (with a bit of Haskelly syntax sugar for recursion & let):
purePlus = \zero succ natCase ->
  let plus = \m n -> natCase m n (\m' -> succ (plus m' n))
  in plus (succ (succ zero)) (succ (succ zero))
Here I have expressed the computation 2+2 without needing to know that there are such things as non-functions. I simply took what I needed as arguments to the function I was defining, and the values of those arguments could be church encodings or they could be "real" numbers (whatever that means) -- my definition does not care.
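As a sanity check, here is purePlus instantiated with ordinary Ints (the monomorphic signature and the natCaseInt implementation are mine; note the successor applied in the recursive case, which is what makes each step add one):

```haskell
-- purePlus takes its own "zero", "successor", and case-analysis
-- function, so it never needs to know how numbers are represented.
purePlus :: Int -> (Int -> Int) -> (Int -> Int -> (Int -> Int) -> Int) -> Int
purePlus zero succ' natCase =
  let plus m n = natCase m n (\m' -> succ' (plus m' n))
  in  plus (succ' (succ' zero)) (succ' (succ' zero))

main :: IO ()
main = print (purePlus 0 (+ 1) natCaseInt)  -- 4
  where
    -- Case analysis on an Int viewed as a Peano numeral: zero selects
    -- the base case; anything else feeds the predecessor to the step.
    natCaseInt m z s = if m == 0 then z else s (m - 1)
```
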
And you could think the same thing of Haskell. There is no particular reason to think that there are things which are not functions, nor is there a particular reason to think that everything is a function. But Haskell's type system at least prevents you from applying an argument to a number (anybody thinking about fromInteger right now needs to hold their tongue! :-). In the above interpretation, it is because numbers are not necessarily modeled as functions, so you can't necessarily apply arguments to them.
In case it isn't clear by now, this whole answer has been somewhat of a technical/philosophical digression, and the easy answer to your question is "no, not everything is a function in functional languages". Functions are the things you can apply arguments to, that's all.
The "pure" in "pure functional" refers to the "freedom from side effects" kind of purity. It has little relation to the meaning of "pure" being used when people talk about a "pure object-oriented language", which simply means that the language manipulates purely (only) in objects.
The reason is that pure-as-in-only is a reasonable distinction to use to classify object-oriented languages, because there are languages like Java and C++, which clearly have values that don't have all that much in common with objects, and there are also languages like Python and Ruby, for which it can be argued that every value is an object.¹
Whereas for functional languages, there are no practical languages which are "pure functional" in the sense that every value the language can manipulate is a function. It's certainly possible to program in such a language. The most basic versions of the lambda calculus don't have any notion of things that are not functions, but you can still do arbitrary computation with them by coming up with ways of representing the things you want to compute on as functions.²
But while the simplicity and minimalism of the lambda calculus tends to be great for proving things about programming, actually writing substantial programs in such a "raw" programming language is awkward. The function representation of basic things like numbers also tends to be very inefficient to implement on actual physical machines.
But there is a very important distinction between languages that encourage a functional style but allow untracked side effects anywhere, and ones that actually enforce that your functions are "pure" functions (similar to mathematical functions). Object-oriented programming is very strongly wed to the use of impure computations³, so there are no practical object-oriented programming languages that are pure in this sense.
So the "pure" in "pure functional language" means something very different from the "pure" in "pure object-oriented language".⁴ In each case the "pure vs not pure" distinction is one that is completely uninteresting applied to the other kind of language, so there's no very strong motive to standardise the use of the term.
¹ There are corner cases to pick at in all "pure object-oriented" languages that I know of, but that's not really very interesting. It's clear that the object metaphor goes much further in languages in which 1 is an instance of some class, and that class can be sub-classed, than it does in languages in which 1 is something else than an object.
² All computation is about representation anyway. Computers don't know anything about numbers or anything else. They just have bit-patterns that we use to represent numbers, and operations on bit-patterns that happen to correspond to operations on numbers (because we designed them so that they would).
³ This isn't fundamental either. You could design a "pure" object-oriented language that was pure in this sense. I tend to write most of my OO code to be pure anyway.
⁴ If this seems obtuse, you might reflect that the terms "functional", "object", and "language" have vastly different meanings in other contexts also.
A very different angle on this question: all sorts of data in Haskell can be represented as functions, using a technique called Church encodings. This is a form of inversion of control: instead of passing data to functions that consume it, you hide the data inside a set of closures, and to consume it you pass in callbacks describing what to do with this data.
Any program that uses lists, for example, can be translated into a program that uses functions instead of lists:
-- | A list corresponds to a function of this type:
type ChurchList a r = (a -> r -> r)  -- ^ how to handle a cons cell
                   -> r              -- ^ how to handle the empty list
                   -> r              -- ^ result of processing the list
listToCPS :: [a] -> ChurchList a r
listToCPS xs = \f z -> foldr f z xs
That function is taking a concrete list as its starting point, but that's not necessary. You can build up ChurchList functions out of just pure functions:
-- | The empty 'ChurchList'.
nil :: ChurchList a r
nil = \f z -> z

-- | Add an element at the front of a 'ChurchList'.
cons :: a -> ChurchList a r -> ChurchList a r
cons x xs = \f z -> f x (xs f z)

foldChurchList :: (a -> r -> r) -> r -> ChurchList a r -> r
foldChurchList f z xs = xs f z

mapChurchList :: (a -> b) -> ChurchList a r -> ChurchList b r
mapChurchList f = foldChurchList step nil
  where step x = cons (f x)

filterChurchList :: (a -> Bool) -> ChurchList a r -> ChurchList a r
filterChurchList pred = foldChurchList step nil
  where step x xs = if pred x then cons x xs else xs
That last function uses Bool, but of course we can replace Bool with functions as well:
-- | A Bool can be represented as a function that chooses between two
-- given alternatives.
type ChurchBool r = r -> r -> r
true, false :: ChurchBool r
true a _ = a
false _ b = b
filterChurchList' :: (a -> ChurchBool r) -> ChurchList a r -> ChurchList a r
filterChurchList' pred = foldChurchList step nil
  where step x xs = pred x (cons x xs) xs
This sort of transformation can be done for basically any type, so in theory, you could get rid of all "value" types in Haskell, and keep only the () type, the (->) and IO type constructors, return and >>= for IO, and a suitable set of IO primitives. This would obviously be hella impractical—and it would perform worse (try writing tailChurchList :: ChurchList a r -> ChurchList a r for a taste).
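To make the inversion of control concrete, here is a tiny self-contained round trip in the same style (definitions repeated so the snippet runs on its own):

```haskell
type ChurchList a r = (a -> r -> r) -> r -> r

nil :: ChurchList a r
nil = \_ z -> z

cons :: a -> ChurchList a r -> ChurchList a r
cons x xs = \f z -> f x (xs f z)

-- Consuming a ChurchList means supplying the two callbacks;
-- here the callbacks rebuild an ordinary list.
toList :: ChurchList a [a] -> [a]
toList xs = xs (:) []

main :: IO ()
main = print (toList (cons 1 (cons 2 (cons 3 nil))))  -- [1,2,3]
```
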
Is getChar :: IO Char a function or not? The Haskell Report doesn't provide us with a definition of "function". But it states that getChar is a function (see here). (Well, at least we can say that it is a function.)
So I think the answer is YES.
I don't think there can be a correct definition of "function" other than "everything is a function". (What is a "correct definition"? Good question...) Consider the next example:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Applicative
f :: Applicative f => f Int
f = pure 1
g1 :: Maybe Int
g1 = f
g2 :: Int -> Int
g2 = f
Is f a function or datatype? It depends.

Why does ocaml need both "let" and "let rec"? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Why are functions in Ocaml/F# not recursive by default?
OCaml uses let to define a new function, or let rec to define a function that is recursive. Why does it need both of these - couldn't we just use let for everything?
For example, to define a non-recursive successor function and recursive factorial in OCaml (actually, in the OCaml interpreter) I might write
let succ n = n + 1;;
let rec fact n =
if n = 0 then 1 else n * fact (n-1);;
Whereas in Haskell (GHCI) I can write
let succ n = n + 1
let fact n =
if n == 0 then 1 else n * fact (n-1)
Why does OCaml distinguish between let and let rec? Is it a performance issue, or something more subtle?
Well, having both available instead of only one gives the programmer tighter control on the scope. With let x = e1 in e2, the binding is only present in e2's environment, while with let rec x = e1 in e2 the binding is present in both e1 and e2's environments.
(Edit: I want to emphasize that it is not a performance issue, that makes no difference at all.)
Here are two situations where having this non-recursive binding is useful:
shadowing an existing definition with a refinement that uses the old binding. Something like: let f x = (let x = sanitize x in ...), where sanitize is a function that ensures the input has some desirable property (e.g. it takes the norm of a possibly-non-normalized vector, etc.). This is very useful in some cases.
metaprogramming, for example macro writing. Imagine I want to define a macro SQUARE(foo) that desugars into let x = foo in x * x, for any expression foo. I need this binding to avoid code duplication in the output (I don't want SQUARE(factorial n) to compute factorial n twice). This is only hygienic if the let binding is not recursive, otherwise I couldn't write let x = 2 in SQUARE(x) and get a correct result.
So I claim it is very important indeed to have both the recursive and the non-recursive binding available. Now, the default behaviour of the let-binding is a matter of convention. You could say that let x = ... is recursive, and one must use let nonrec x = ... to get the non-recursive binder. Picking one default or the other is a matter of which programming style you want to favor, and there are good reasons to make either choice. Haskell suffers¹ from the unavailability of this non-recursive mode, and OCaml has exactly the same defect at the type level: type foo = ... is recursive, and there is no non-recursive option available -- see this blog post.
¹: when Google Code Search was available, I used it to search Haskell code for the pattern let x' = sanitize x in .... This is the usual workaround when non-recursive binding is not available, but it's less safe because you risk writing x instead of x' by mistake later on -- in some cases you want to have both available, so picking a different name can be voluntary. A good idiom would be to use a longer variable name for the first x, such as unsanitized_x. Anyway, just looking for x' literally (no other variable name) and x1 turned up a lot of results. Erlang (and all languages that try to make variable shadowing difficult: CoffeeScript, etc.) has even worse problems of this kind.
That said, the choice of having Haskell bindings recursive by default (rather than non-recursive) certainly makes sense, as it is consistent with lazy evaluation by default, which makes it really easy to build recursive values -- while strict-by-default languages have more restrictions on which recursive definitions make sense.
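A small Haskell sketch of both sides of this trade-off:

```haskell
main :: IO ()
main = do
  -- Haskell let-bindings are recursive by default; with laziness this
  -- makes self-referential *values* easy, not just functions:
  let ones = 1 : ones       -- an infinite list defined in terms of itself
  print (take 5 ones)       -- [1,1,1,1,1]
  -- The flip side: shadowing needs a fresh name, since
  -- `let x = x + 1` would refer to itself and loop forever.
  let x  = 10
  let x' = x + 1
  print x'                  -- 11
```
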

In Functional Programming, what is a functor?

I've come across the term 'Functor' a few times while reading various articles on functional programming, but the authors typically assume the reader already understands the term. Looking around on the web has provided either excessively technical descriptions (see the Wikipedia article) or incredibly vague descriptions (see the section on Functors at this ocaml-tutorial website).
Can someone kindly define the term, explain its use, and perhaps provide an example of how Functors are created and used?
Edit: While I am interested in the theory behind the term, I am less interested in the theory than I am in the implementation and practical use of the concept.
Edit 2: Looks like there is some cross-terminology going on: I'm specifically referring to the Functors of functional programming, not the function objects of C++.
The word "functor" comes from category theory, which is a very general, very abstract branch of mathematics. It has been borrowed by designers of functional languages in at least two different ways.
In the ML family of languages, a functor is a module that takes one or more other modules as a parameter. It's considered an advanced feature, and most beginning programmers have difficulty with it.
As an example of implementation and practical use, you could define your favorite form of balanced binary search tree once and for all as a functor, and it would take as a parameter a module that provides:
The type of key to be used in the binary tree
A total-ordering function on keys
Once you've done this, you can use the same balanced binary tree implementation forever. (The type of value stored in the tree is usually left polymorphic—the tree doesn't need to look at values other than to copy them around, whereas the tree definitely needs to be able to compare keys, and it gets the comparison function from the functor's parameter.)
Another application of ML functors is layered network protocols. The link is to a really terrific paper by the CMU Fox group; it shows how to use functors to build more complex protocol layers (like TCP) on top of simpler layers (like IP, or even directly over Ethernet). Each layer is implemented as a functor that takes as a parameter the layer below it. The structure of the software actually reflects the way people think about the problem, as opposed to the layers existing only in the mind of the programmer. In 1994, when this work was published, it was a big deal.
For a wild example of ML functors in action, you could see the paper ML Module Mania, which contains a publishable (i.e., scary) example of functors at work. For a brilliant, clear, pellucid explanation of the ML module system (with comparisons to other kinds of modules), read the first few pages of Xavier Leroy's 1994 POPL paper Manifest Types, Modules, and Separate Compilation.
In Haskell, and in some related pure functional languages, Functor is a type class. A type belongs to a type class (or, more technically, the type "is an instance of" the type class) when the type provides certain operations with certain expected behavior. A type T can belong to class Functor if it has certain collection-like behavior:
The type T is parameterized over another type, which you should think of as the element type of the collection. The type of the full collection is then something like T Int, T String, T Bool, if you are containing integers, strings, or Booleans respectively. If the element type is unknown, it is written as a type parameter a, as in T a.
Examples include lists (zero or more elements of type a), the Maybe type (zero or one elements of type a), sets of elements of type a, arrays of elements of type a, all kinds of search trees containing values of type a, and lots of others you can think of.
The other property that T has to satisfy is that if you have a function of type a -> b (a function on elements), then you have to be able to take that function and produce a related function on collections. You do this with the operator fmap, which is shared by every type in the Functor type class. The operator is actually overloaded, so if you have the function even, with type Int -> Bool, then
fmap even
is an overloaded function that can do many wonderful things:
Convert a list of integers to a list of Booleans
Convert a tree of integers to a tree of Booleans
Convert Nothing to Nothing and Just 7 to Just False
In Haskell, this property is expressed by giving the type of fmap:
fmap :: (Functor t) => (a -> b) -> t a -> t b
where we now have a small t, which means "any type in the Functor class."
To make a long story short, in Haskell a functor is a kind of collection for which if you are given a function on elements, fmap will give you back a function on collections. As you can imagine, this is an idea that can be widely reused, which is why it is blessed as part of Haskell's standard library.
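As a concrete instance of this collection story, here is a hand-rolled tree functor (the Tree type here is mine, just for illustration):

```haskell
-- A binary tree is a "collection" parameterized over its element type.
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Eq)

-- Making Tree an instance of Functor is exactly a promise that
-- functions on elements can be turned into functions on trees.
instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap g (Node l x r) = Node (fmap g l) (g x) (fmap g r)

main :: IO ()
main = do
  let t = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf) :: Tree Int
  -- fmap even converts a tree of integers to a tree of Booleans:
  print (fmap even t)
  -- Node (Node Leaf False Leaf) True (Node Leaf False Leaf)
```
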
As usual, people continue to invent new, useful abstractions, and you may want to look into applicative functors, for which the best reference may be a paper called Applicative Programming with Effects by Conor McBride and Ross Paterson.
Other answers here are complete, but I'll try another explanation of the FP use of functor. Take this as analogy:
A functor is a container of type a that, when subjected to a function that maps from a→b, yields a container of type b.
Unlike the abstracted-function-pointer use in C++, here the functor is not the function; rather, it's something that behaves consistently when subjected to a function.
There are three different meanings, not much related!
In Ocaml it is a parametrized module. See manual. I think the best way to grok them is by example: (written quickly, might be buggy)
module type Order = sig
  type t
  val compare : t -> t -> bool
end;;

module Integers = struct
  type t = int
  let compare x y = x > y
end;;

module ReverseOrder = functor (X : Order) -> struct
  type t = X.t
  let compare x y = X.compare y x
end;;

(* We can order reversely *)
module K = ReverseOrder (Integers);;
Integers.compare 3 4;; (* this is false *)
K.compare 3 4;;        (* this is true *)

module LexicographicOrder = functor (X : Order) ->
  functor (Y : Order) -> struct
    type t = X.t * Y.t
    let compare (a, b) (c, d) = if X.compare a c then true
                                else if X.compare c a then false
                                else Y.compare b d
  end;;

(* compare lexicographically *)
module X = LexicographicOrder (Integers) (Integers);;
X.compare (2, 3) (4, 5);;

module LinearSearch = functor (X : Order) -> struct
  type t = X.t array
  let find x k = 0 (* some boring code *)
end;;

module BinarySearch = functor (X : Order) -> struct
  type t = X.t array
  let find x k = 0 (* some boring code *)
end;;

(* linear search over arrays of integers *)
module LS = LinearSearch (Integers);;
LS.find [|1; 2; 3|] 2;;

(* binary search over arrays of pairs of integers,
   sorted lexicographically *)
module BS = BinarySearch (LexicographicOrder (Integers) (Integers));;
BS.find [|(2, 3); (4, 5)|] (2, 3);;
You can now add quickly many possible orders, ways to form new orders, do a binary or linear search easily over them. Generic programming FTW.
In functional programming languages like Haskell, it means some type constructors (parametrized types, like lists or sets) that can be "mapped over". To be precise, a functor f is equipped with a function of type (a -> b) -> (f a -> f b). This has its origins in category theory. The Wikipedia article you linked to describes this usage.
class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

instance Functor [] where      -- lists are a functor
  fmap = map

instance Functor Maybe where   -- Maybe is option in Haskell
  fmap f (Just x) = Just (f x)
  fmap f Nothing = Nothing
fmap (+1) [2,3,4] -- this is [3,4,5]
fmap (+1) (Just 5) -- this is Just 6
fmap (+1) Nothing -- this is Nothing
So, this is a special kind of type constructor, and has little to do with functors in OCaml!
In imperative languages, it is a pointer to a function.
In OCaml, it's a parameterised module.
If you know C++, think of an OCaml functor as a template, except that a functor works at the module scale rather than the class scale.
An example of functor is Map.Make; module StringMap = Map.Make (String);; builds a map module that works with String-keyed maps.
You couldn't achieve something like StringMap with just polymorphism; you need to make some assumptions on the keys. The String module contains the operations (comparison, etc) on a totally ordered string type, and the functor will link against the operations the String module contains. You could do something similar with object-oriented programming, but you'd have method indirection overhead.
You got quite a few good answers. I'll pitch in:
A functor, in the mathematical sense, is a special kind of function on an algebra. It is a minimal function which maps an algebra to another algebra. "Minimality" is expressed by the functor laws.
There are two ways to look at this. For example, lists are functors over some type. That is, given an algebra over a type 'a', you can generate a compatible algebra of lists containing things of type 'a'. (For example: the map that takes an element to a singleton list containing it: f(a) = [a]) Again, the notion of compatibility is expressed by the functor laws.
On the other hand, given a functor f "over" a type a (that is, f a is the result of applying the functor f to the algebra of type a), and a function g : a -> b, we can compute a new functor F = (fmap g) which maps f a to f b. In short, fmap is the part of F that maps "functor parts" to "functor parts", and g is the part that maps "algebra parts" to "algebra parts". It takes a function and a functor, and the result is itself a functor too.
It might seem that different languages are using different notions of functors, but they're not. They're merely using functors over different algebras. OCaml has an algebra of modules, and functors over that algebra let you attach new declarations to a module in a "compatible" way.
A Haskell functor is NOT a type class. It is a data type with a free variable which satisfies the type class. If you're willing to dig into the guts of a datatype (with no free variables), you can reinterpret a data type as a functor over an underlying algebra. For example:
data F = F Int
is isomorphic to the class of Ints. So F, as a value constructor, is a function that maps Int to F Int, an equivalent algebra. It is a functor. On the other hand, you don't get fmap for free here. That's what pattern matching is for.
Functors are good for "attaching" things to elements of algebras, in an algebraically compatible way.
The best answer to that question is found in "Typeclassopedia" by Brent Yorgey.
This issue of the Monad Reader contains a precise definition of what a functor is, as well as definitions of many other concepts, plus a diagram. (Monoid, Applicative, Monad and other concepts are explained and seen in relation to a functor.)
http://haskell.org/sitewiki/images/8/85/TMR-Issue13.pdf
excerpt from Typeclassopedia for Functor:
"A simple intuition is that a Functor represents a “container” of some
sort, along with the ability to apply a function uniformly to every element in the
container"
But really the whole Typeclassopedia is highly recommended reading that is surprisingly easy. In a way, you can see the typeclasses presented there as a parallel to design patterns in object-oriented programming, in the sense that they give you a vocabulary for a given behavior or capability.
Cheers
There is a pretty good example in the O'Reilly OCaml book on Inria's website (which, as of this writing, is unfortunately down). I found a very similar example in this book used by Caltech: Introduction to OCaml (pdf link). The relevant section is the chapter on functors (page 139 in the book, page 149 in the PDF).
In the book they have a functor called MakeSet, which creates a data structure consisting of a list, plus functions to add an element, determine whether an element is in the list, and find an element. The comparison function used to determine whether an element is in/not in the set has been parameterized (which is what makes MakeSet a functor rather than a plain module).
They also have a module that implements the comparison function so that it does a case insensitive string compare.
Using the functor and the module that implements the comparison they can create a new module in one line:
module SSet = MakeSet(StringCaseEqual);;
that creates a module for a set data structure that uses case insensitive comparisons. If you wanted to create a set that used case sensitive comparisons then you would just need to implement a new comparison module instead of a new data structure module.
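The same idea can be sketched in JavaScript, even without a module system. All names below (makeSet, stringCaseEqual) are illustrative stand-ins, not the book's API: makeSet plays the role of the functor, taking a "comparison module" and returning a set module built on it.

```javascript
// Hypothetical sketch of the MakeSet idea: parameterize a set by its comparison.
const makeSet = (cmp) => ({
  empty: [],
  // add x only if no existing element compares equal to it
  add: (x, set) => (set.some((y) => cmp.equal(x, y)) ? set : [...set, x]),
  // membership test via the supplied comparison
  mem: (x, set) => set.some((y) => cmp.equal(x, y)),
});

// A "comparison module" for case-insensitive strings
const stringCaseEqual = {
  equal: (a, b) => a.toLowerCase() === b.toLowerCase(),
};

const SSet = makeSet(stringCaseEqual);
const s = SSet.add("Hello", SSet.empty);
console.log(SSet.mem("HELLO", s)); // true: the comparison ignores case
```

To get a case-sensitive set, you would pass a different comparison object instead of writing a new data structure, which mirrors the point made above.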
Tobu compared functors to templates in C++ which I think is quite apt.
Given the other answers and what I'm going to post now, I'd say that it's a rather heavily overloaded word, but anyway...
For a hint regarding the meaning of the word 'functor' in Haskell, ask GHCi:
Prelude> :info Functor
class Functor f where
fmap :: forall a b. (a -> b) -> f a -> f b
(GHC.Base.<$) :: forall a b. a -> f b -> f a
-- Defined in GHC.Base
instance Functor Maybe -- Defined in Data.Maybe
instance Functor [] -- Defined in GHC.Base
instance Functor IO -- Defined in GHC.Base
So, basically, a functor in Haskell is something that can be mapped over. Another way to say it is that a functor is something which can be regarded as a container which can be asked to use a given function to transform the value it contains; thus, for lists, fmap coincides with map, for Maybe, fmap f (Just x) = Just (f x), fmap f Nothing = Nothing etc.
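The "container that can be mapped over" intuition carries over directly to other languages; here is a minimal JavaScript sketch of a Maybe-like functor (Just/Nothing are assumed names for illustration, not a particular library):

```javascript
// A minimal Maybe functor: Just holds a value, Nothing holds none.
const Just = (x) => ({ map: (f) => Just(f(x)), value: x, isJust: true });
const Nothing = () => ({ map: (_f) => Nothing(), isJust: false });

// Mirrors fmap f (Just x) = Just (f x) and fmap f Nothing = Nothing
console.log(Just(5).map((n) => n + 1).value); // 6
console.log(Nothing().map((n) => n + 1).isJust); // false
```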
The Functor typeclass subsection and the section on Functors, Applicative Functors and Monoids of Learn You a Haskell for Great Good give some examples of where this particular concept is useful. (A summary: lots of places! :-))
Note that any monad can be treated as a functor, and in fact, as Craig Stuntz points out, the most often used functors tend to be monads... OTOH, it is convenient at times to make a type an instance of the Functor typeclass without going to the trouble of making it a Monad. (E.g. in the case of ZipList from Control.Applicative, mentioned on one of the aforementioned pages.)
"Functor is mapping of objects and morphisms that preserves composition and identity of a category."
Let's define what a category is.
It's a bunch of objects!
Draw a few dots (for now 2 dots, one called 'a' and the other 'b') inside a circle, and call that circle the category 'A' for now.
What does the category hold?
Morphisms between objects, their compositions, and an identity morphism for every object.
So, we have to map the objects and preserve the composition after applying our Functor.
Let's imagine 'A' is our category, which has objects ['a', 'b'], and there exists a morphism a -> b.
Now, we have to define a functor which can map these objects and morphisms into another category 'B'.
Let's say the functor is called 'Maybe':
data Maybe a = Nothing | Just a
So, the category 'B' looks like this:
Please draw another circle but this time with 'Maybe a' and 'Maybe b' instead of 'a' and 'b'.
Everything seems good: all the objects are mapped.
'a' became 'Maybe a' and 'b' became 'Maybe b'.
But the problem is we have to map the morphism from 'a' to 'b' as well.
That means morphism a -> b in 'A' should map to morphism 'Maybe a' -> 'Maybe b'
If the morphism from 'a' to 'b' is called f, then the morphism from 'Maybe a' to 'Maybe b' is called 'fmap f'.
Now let's see what the function 'f' does in 'A', and whether we can replicate it in 'B'.
function definition of 'f' in 'A':
f :: a -> b
f takes a and returns b
function definition of 'f' in 'B' :
f :: Maybe a -> Maybe b
f takes Maybe a and return Maybe b
Let's see how to use fmap to map the function 'f' from 'A' to the function 'fmap f' in 'B'.
definition of fmap
fmap :: (a -> b) -> (Maybe a -> Maybe b)
fmap f Nothing = Nothing
fmap f (Just x) = Just(f x)
So, what are we doing here?
We are applying the function 'f' to 'x', which is of type 'a'. The special pattern match on 'Nothing' comes from the definition of the Maybe functor.
So, we mapped our objects [a, b] and morphisms [f] from category 'A' to category 'B'.
That's a Functor!
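The "preserves composition and identity" part of the definition can be checked concretely. Here is a JavaScript sketch (the names Just, Nothing and fmap mirror the Haskell walkthrough above; this is an illustration, not a library):

```javascript
// Minimal Maybe, assumed for illustration
const Just = (x) => ({ tag: "Just", value: x });
const Nothing = { tag: "Nothing" };

// fmap as defined above: pattern-match on the constructor
const fmap = (f) => (m) => (m.tag === "Just" ? Just(f(m.value)) : Nothing);

const id = (x) => x;
const compose = (g, f) => (x) => g(f(x));

const double = (x) => x * 2;
const inc = (x) => x + 1;

// Identity law: fmap id behaves like id
console.log(fmap(id)(Just(3)).value); // 3

// Composition law: fmap (g . f) = fmap g . fmap f
const lhs = fmap(compose(inc, double))(Just(3));
const rhs = fmap(inc)(fmap(double)(Just(3)));
console.log(lhs.value === rhs.value); // true: both are Just 7
```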
Here's an article on functors from a programming POV, followed up by more specifically how they surface in programming languages.
One common practical use of functors is as the foundation for monads; you can find many tutorials on monads if you look for them.
In a comment to the top-voted answer, user Wei Hu asks:
I understand both ML-functors and Haskell-functors, but lack the
insight to relate them together. What's the relationship between these
two, in a category-theoretical sense?
Note: I don't know ML, so please forgive and correct any related mistakes.
Let's initially assume that we are all familiar with the definitions of 'category' and 'functor'.
A compact answer would be that "Haskell-functors" are (endo-)functors F : Hask -> Hask while "ML-functors" are functors G : ML -> ML'.
Here, Hask is the category formed by Haskell types and functions between them, and similarly ML and ML' are categories defined by ML structures.
Note: There are some technical issues with making Hask a category, but there are ways around them.
From a category theoretic perspective, this means that a Hask-functor is a map F of Haskell types:
data F a = ...
along with a map fmap of Haskell functions:
instance Functor F where
fmap f = ...
ML is pretty much the same, though there is no canonical fmap abstraction I am aware of, so let's define one:
signature FUNCTOR = sig
type 'a f
val fmap: ('a -> 'b) -> 'a f -> 'b f
end
That is, f maps ML types and fmap maps ML functions, so
functor StructB (StructA : SigA) :> FUNCTOR =
struct
fmap g = ...
...
end
is a functor F: StructA -> StructB.
Rough Overview
In functional programming, a functor is essentially a construction that lifts ordinary unary functions (i.e. those with one argument) to functions between values of new types. It is much easier to write and maintain simple functions between plain objects and use functors to lift them, than to manually write functions between complicated container objects. A further advantage is that plain functions are written only once and then re-used via different functors.
Examples of functors include arrays, "maybe" and "either" functors, futures (see e.g. https://github.com/Avaq/Fluture), and many others.
Illustration
Consider the function constructing a person's full name from the first and last names. We could define it as fullName(firstName, lastName), a function of two arguments, which however would not be suitable for functors, which deal only with functions of one argument. To remedy this, we collect all the arguments in a single object name, which becomes the function's single argument:
// In JavaScript notation
fullName = name => name.firstName + ' ' + name.lastName
Now what if we have many people in an array? Instead of manually going over the list, we can simply re-use our function fullName via the map method provided for arrays, with a short single line of code:
fullNameList = nameList => nameList.map(fullName)
and use it like
nameList = [
{firstName: 'Steve', lastName: 'Jobs'},
{firstName: 'Bill', lastName: 'Gates'}
]
fullNames = fullNameList(nameList)
// => ['Steve Jobs', 'Bill Gates']
That will work whenever every entry in our nameList is an object providing both firstName and lastName properties. But what if some objects don't (or aren't even objects at all)? To avoid errors and make the code safer, we can wrap our objects into the Maybe type (see e.g. https://sanctuary.js.org/#maybe-type):
// function to test name for validity
isValidName = name =>
(typeof name === 'object')
&& (typeof name.firstName === 'string')
&& (typeof name.lastName === 'string')
// wrap into the Maybe type
maybeName = name =>
isValidName(name) ? Just(name) : Nothing()
where Just(name) is a container carrying only valid names and Nothing() is the special value used for everything else. Now instead of interrupting (or forgetting) to check the validity of our arguments, we can simply reuse (lift) our original fullName function with another single line of code, based again on the map method, this time provided for the Maybe type:
// Maybe Object -> Maybe String
maybeFullName = maybeName => maybeName.map(fullName)
and use it like
justSteve = maybeName(
{firstName: 'Steve', lastName: 'Jobs'}
) // => Just({firstName: 'Steve', lastName: 'Jobs'})
notSteve = maybeName(
{lastName: 'SomeJobs'}
) // => Nothing()
steveFN = maybeFullName(justSteve)
// => Just('Steve Jobs')
notSteveFN = maybeFullName(notSteve)
// => Nothing()
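The snippets above assume Just and Nothing from a Maybe library (the sanctuary link). A minimal stand-in sufficient to run them might look like this (this is a sketch of the idea, not the real sanctuary API):

```javascript
// Minimal stand-in for the Maybe type used above
const Just = (x) => ({ map: (f) => Just(f(x)), toString: () => `Just(${x})` });
const Nothing = () => ({ map: (_f) => Nothing(), toString: () => "Nothing()" });

const fullName = (name) => name.firstName + " " + name.lastName;

const isValidName = (name) =>
  typeof name === "object" &&
  name !== null &&
  typeof name.firstName === "string" &&
  typeof name.lastName === "string";

const maybeName = (name) => (isValidName(name) ? Just(name) : Nothing());
const maybeFullName = (mn) => mn.map(fullName);

console.log(String(maybeFullName(maybeName({ firstName: "Steve", lastName: "Jobs" }))));
// "Just(Steve Jobs)"
console.log(String(maybeFullName(maybeName({ lastName: "SomeJobs" }))));
// "Nothing()"
```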
Category Theory
A Functor in Category Theory is a map between two categories respecting composition of their morphisms. In a Computer Language, the main Category of interest is the one whose objects are types (certain sets of values), and whose morphisms are functions f:a->b from one type a to another type b.
For example, take a to be the String type, b the Number type, and f is the function mapping a string into its length:
// f :: String -> Number
f = str => str.length
Here a = String represents the set of all strings and b = Number the set of all numbers. In that sense, both a and b represent objects in the Set Category (which is closely related to the category of types, with the difference being inessential here). In the Set Category, morphisms between two sets are precisely all functions from the first set into the second. So our length function f here is a morphism from the set of strings into the set of numbers.
As we only consider the set category, the relevant Functors from it into itself are maps sending objects to objects and morphisms to morphisms, that satisfy certain algebraic laws.
Example: Array
Array can mean many things, but only one of them is a Functor -- the type constructor, mapping a type a into the type [a] of all arrays over a. For instance, the Array functor maps the type String into the type [String] (the set of all arrays of strings of arbitrary length), and the type Number into the corresponding type [Number] (the set of all arrays of numbers).
It is important not to confuse the Functor map
Array :: a => [a]
with a morphism a -> [a]. The functor simply maps (associates) the type a into the type [a] as one thing to another. That each type is actually a set of elements, is of no relevance here. In contrast, a morphism is an actual function between those sets. For instance, there is a natural morphism (function)
pure :: a -> [a]
pure = x => [x]
which sends a value into the 1-element array with that value as single entry. That function is not a part of the Array Functor! From the point of view of this functor, pure is just a function like any other, nothing special.
On the other hand, the Array Functor has its second part -- the morphism part. Which maps a morphism f :: a -> b into a morphism [f] :: [a] -> [b]:
// (a -> b) -> ([a] -> [b])
Array.map(f) = arr => arr.map(f)
Here arr is any array of arbitrary length with values of type a, and arr.map(f) is the array of the same length with values of type b, whose entries are results of applying f to the entries of arr. To make it a functor, the mathematical laws of mapping identity to identity and compositions to compositions must hold, which are easy to check in this Array example.
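The two laws are indeed easy to check directly; here is a quick sketch for the Array functor (the helper names id, compose, f and g are chosen for this illustration):

```javascript
const id = (x) => x;
const compose = (g, f) => (x) => g(f(x));

const f = (s) => s.length; // String -> Number
const g = (n) => n * 10;   // Number -> Number

const arr = ["a", "bb", "ccc"];

// Identity law: mapping id changes nothing
console.log(JSON.stringify(arr.map(id)) === JSON.stringify(arr)); // true

// Composition law: map (g . f) equals map f followed by map g
const lhs = arr.map(compose(g, f));
const rhs = arr.map(f).map(g);
console.log(JSON.stringify(lhs) === JSON.stringify(rhs)); // true: both [10,20,30]
```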
Not to contradict the previous theoretical or mathematical answers, but a Functor is also an Object (in an Object-Oriented programming language) that has only one method and is effectively used as a function.
An example is the Runnable interface in Java, which has only one method: run.
Consider this example, first in Javascript, which has first-class functions:
[1, 2, 5, 10].map(function(x) { return x*x; });
Output:
[1, 4, 25, 100]
The map method takes a function and returns a new array with each element being the result of the application of that function to the value at the same position in the original array.
To do the same thing in Java, using a Functor, you would first need to define an interface, say:
public interface IntMapFunction {
public int f(int x);
}
Then, if you add a collection class which had a map function, you could do:
myCollection.map(new IntMapFunction() { public int f(int x) { return x * x; } });
This uses an in-line subclass of IntMapFunction to create a Functor, which is the OO equivalent of the function from the earlier JavaScript example.
Using Functors let you apply functional techniques in an OO language. Of course, some OO languages also have support for functions directly, so this isn't required.
Reference: http://en.wikipedia.org/wiki/Function_object
KISS: A functor is an object that has a map method.
Arrays in JavaScript implement map and are therefore functors. Promises, Streams and Trees often implement map in functional languages, and when they do, they are considered functors. The map method of the functor takes its own contents and transforms each of them using the transformation callback passed to map, and returns a new functor, which has the same structure as the first functor, but with the transformed values.
src: https://www.youtube.com/watch?v=DisD9ftUyCk&feature=youtu.be&t=76
In functional programming, error handling is different: throwing and catching exceptions is imperative code. Instead of using a try/catch block, a safety box is created around the code that might throw an error. This is a fundamental design pattern in functional programming: a wrapper object is used to encapsulate a potentially erroneous value. The wrapper's main purpose is to provide a 'different' way to use the wrapped object.
const wrap = (val) => new Wrapper(val);
Wrapping guards direct access to the values so they can be manipulated safely and immutably. Because we don't have direct access to the value, the only way to extract it is to use the identity function.
identity :: (a) -> a
This is another use case of identity function: Extracting data functionally from encapsulated types.
The Wrapper type uses the map to safely access and manipulate values. In this case, we are mapping the identity function over the container to extract the value from the container. With this approach, we can check for null before calling the function, or check for an empty string, a negative number, and so on.
fmap :: (A -> B) -> Wrapper[A] -> Wrapper[B]
fmap first opens the container, then applies the given function to its value, and finally closes the result back into a new container of the same type. A type that supports such an fmap is called a functor.
fmap returns a new copy of the container at each invocation.
functors are side-effect-free
functors must be composable
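A minimal Wrapper along these lines might look as follows. This is a sketch following the text above (the identity-based extraction, map and fmap), not any particular library's API:

```javascript
// A minimal Wrapper type: guards direct access, exposes map/fmap
class Wrapper {
  constructor(val) { this._val = val; }
  // map applies f to the wrapped value and returns the raw result
  map(f) { return f(this._val); }
  // fmap applies f and re-wraps the result in a new Wrapper (a new copy each time)
  fmap(f) { return new Wrapper(f(this._val)); }
}

const wrap = (val) => new Wrapper(val);
const identity = (a) => a;

const two = wrap(2);
console.log(two.map(identity)); // 2 -- extract via the identity function
console.log(two.fmap((n) => n + 1).map(identity)); // 3 -- transform, then extract
```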
In practice, in C++ functor means an object that implements the call operator. In OCaml, I think functor refers to something that takes a module as input and outputs another module.
Put simply, a functor, or function object, is a class object that can be called just like a function.
In C++:
This is how you write a function
void foo()
{
cout << "Hello, world! I'm a function!";
}
This is how you write a functor
class FunctorClass
{
public:
void operator () ()
{
cout << "Hello, world! I'm a functor!";
}
};
Now you can do this:
foo(); //result: Hello, world! I'm a function!
FunctorClass bar;
bar(); //result: Hello, world! I'm a functor!
What makes these so great is that you can keep state in the class - imagine if you wanted to ask a function how many times it has been called. There's no way to do this in a neat, encapsulated way. With a function object, it's just like any other class: you'd have some instance variable that you increment in operator () and some method to inspect that variable, and everything's neat as you please.
Functor is not specifically related to functional programming. It's just a "pointer" to a function, or some kind of object that can be called as if it were a function.
