Importance of isomorphic functions - functional-programming

Short Question: What is the importance of isomorphic functions in programming (namely in functional programming)?
Long Question: I'm trying to draw some analogies between functional programming and concepts in category theory, based on some of the lingo I hear from time to time. Essentially I'm trying to "unpackage" that lingo into something concrete I can then expand on. I'll then be able to use the lingo with an understanding of just what the heck I'm talking about. Which is always nice.
One of the terms I hear all the time is "isomorphism"; I gather this is about reasoning about equivalence between functions or function compositions. I was wondering if someone could provide some insight into common patterns where the property of isomorphism comes in handy (in functional programming), and any by-products gained, such as compiler optimizations from reasoning about isomorphic functions.

I take a little issue with the upvoted answer for isomorphism, as the category theory definition of isomorphism says nothing about objects. To see why, let's review the definition.
Definition
An isomorphism is a pair of morphisms (i.e. functions), f and g, such that:
f . g = id
g . f = id
These morphisms are then called "iso"morphisms. A lot of people don't catch that the "morphism" in isomorphism refers to the function and not the object. However, you would say that the objects they connect are "isomorphic", which is what the other answer is describing.
Notice that the definition of isomorphism does not say what (.), id, or = must be. The only requirement is that, whatever they are, they also satisfy the category laws:
f . id = f
id . f = f
(f . g) . h = f . (g . h)
Composition (i.e. (.)) joins two morphisms into one morphism and id denotes some sort of "identity" transition. This means that if our isomorphisms cancel out to the identity morphism id, then you can think of them as inverses of each other.
For the specific case where the morphisms are functions, id is defined as the identity function:
id x = x
... and composition is defined as:
(f . g) x = f (g x)
... and two functions are isomorphisms if they cancel out to the identity function id when you compose them.
Morphisms versus objects
However, there are multiple ways two objects could be isomorphic. For example, given the following two types:
data T1 = A | B
data T2 = C | D
There are two isomorphisms between them:
f1 t1 = case t1 of
    A -> C
    B -> D

g1 t2 = case t2 of
    C -> A
    D -> B

(f1 . g1) t2 = case t2 of
    C -> C
    D -> D
(f1 . g1) t2 = t2
f1 . g1 = id :: T2 -> T2

(g1 . f1) t1 = case t1 of
    A -> A
    B -> B
(g1 . f1) t1 = t1
g1 . f1 = id :: T1 -> T1
f2 t1 = case t1 of
    A -> D
    B -> C

g2 t2 = case t2 of
    C -> B
    D -> A
f2 . g2 = id :: T2 -> T2
g2 . f2 = id :: T1 -> T1
So that's why it's better to describe the isomorphism in terms of the specific functions relating the two objects rather than the two objects, since there may not necessarily be a unique pair of functions between two objects that satisfy the isomorphism laws.
Also, note that it is not sufficient for each function to be invertible on its own; the pair must be mutual inverses. For example, the following pairings are not isomorphisms:
f1 . g2 :: T2 -> T2
f2 . g1 :: T2 -> T2
Even though no information is lost when you compose f1 . g2, you don't get back to where you started, even though the final state has the same type.
Also, isomorphisms don't have to be between concrete data types. Here's an example of two canonical isomorphisms that are not between concrete algebraic data types and instead simply relate functions: curry and uncurry:
curry . uncurry = id :: (a -> b -> c) -> (a -> b -> c)
uncurry . curry = id :: ((a, b) -> c) -> ((a, b) -> c)
Uses for Isomorphisms
Church Encoding
One use of isomorphisms is to Church-encode data types as functions. For example, Bool is isomorphic to forall a . a -> a -> a:
f :: Bool -> (forall a . a -> a -> a)
f True = \a b -> a
f False = \a b -> b
g :: (forall a . a -> a -> a) -> Bool
g b = b True False
Verify that f . g = id and g . f = id.
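If you want to spot-check this concretely, here is a minimal sketch (it restates f and g so it stands alone; the f . g direction can only be argued via parametricity, since the polymorphic functions of that type cannot be enumerated):
{-# LANGUAGE RankNTypes #-}

f :: Bool -> (forall a . a -> a -> a)
f True  = \a _ -> a
f False = \_ b -> b

g :: (forall a . a -> a -> a) -> Bool
g b = b True False

-- g . f = id can be checked exhaustively on Bool:
main :: IO ()
main = print [ g (f b) == b | b <- [True, False] ]   -- prints [True,True]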
The benefit of Church encoding data types is that they sometimes run faster (because Church-encoding is continuation-passing style) and they can be implemented in languages that don't even have language support for algebraic data types at all.
Translating Implementations
Sometimes one tries to compare one library's implementation of some feature to another library's implementation, and if you can prove that they are isomorphic, then you can prove that they are equally powerful. Also, the isomorphisms describe how to translate one library into the other.
For example, there are two approaches that provide the ability to define a monad from a functor's signature. One is the free monad, provided by the free package and the other is operational semantics, provided by the operational package.
If you look at the two core data types, they look different, especially their second constructors:
-- modified from the original to not be a monad transformer
data Program instr a where
    Lift  :: a -> Program instr a
    Bind  :: Program instr b -> (b -> Program instr a) -> Program instr a
    Instr :: instr a -> Program instr a

data Free f r = Pure r | Free (f (Free f r))
... but they are actually isomorphic! That means that both approaches are equally powerful and any code written in one approach can be translated mechanically into the other approach using the isomorphisms.
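As a rough sketch of what such a translation could look like (the names toFree and toProgram are mine, the Program type is restated so the snippet stands alone, a Functor instr constraint is assumed, and the two maps are mutually inverse only up to the monad laws, which is the sense in which the two representations agree):
{-# LANGUAGE GADTs #-}

import Control.Monad.Free (Free (..))

data Program instr a where
    Lift  :: a -> Program instr a
    Bind  :: Program instr b -> (b -> Program instr a) -> Program instr a
    Instr :: instr a -> Program instr a

-- Interpret the operational representation into the free monad ...
toFree :: Functor instr => Program instr a -> Free instr a
toFree (Lift a)   = Pure a
toFree (Instr i)  = Free (fmap Pure i)
toFree (Bind m k) = toFree m >>= toFree . k

-- ... and back again.
toProgram :: Free instr a -> Program instr a
toProgram (Pure a) = Lift a
toProgram (Free f) = Bind (Instr f) toProgram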
Isomorphisms that are not functions
Also, isomorphisms are not limited to functions. They are actually defined for any Category and Haskell has lots of categories. This is why it's more useful to think in terms of morphisms rather than data types.
For example, the Lens type (from data-lens) forms a category where you can compose lenses and there is an identity lens. So using our above data types, we can define two lenses that are isomorphisms:
lens1 = iso f1 g1 :: Lens T1 T2
lens2 = iso g1 f1 :: Lens T2 T1
lens1 . lens2 = id :: Lens T1 T1
lens2 . lens1 = id :: Lens T2 T2
Note that there are two isomorphisms in play. One is the isomorphism that is used to build each lens (i.e. f1 and g1), which is also why that construction function is called iso; the lenses themselves are then also isomorphisms. Note that in the above formulation, the composition (.) is not function composition but lens composition, and the id is not the identity function but the identity lens:
id = iso id id
Which means that if we compose our two lenses, the result should be indistinguishable from that identity lens.
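To make that concrete without depending on the data-lens API, here is a minimal stand-in (the names Iso, isoL, composeIso and idIso are mine and only mimic the parts of iso, (.) and id that the argument uses):
-- a toy pair-of-functions "lens", composed pairwise, with iso id id as identity
data Iso a b = Iso (a -> b) (b -> a)

isoL :: (a -> b) -> (b -> a) -> Iso a b      -- stands in for `iso`
isoL = Iso

composeIso :: Iso b c -> Iso a b -> Iso a c  -- stands in for lens composition (.)
composeIso (Iso f g) (Iso f' g') = Iso (f . f') (g' . g)

idIso :: Iso a a                             -- stands in for the identity lens, iso id id
idIso = isoL id id
For instance, composeIso (isoL f1 g1) (isoL g1 f1) reduces to Iso (f1 . g1) (f1 . g1), i.e. isoL id id at type Iso T2 T2, mirroring the lens equations above.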

An isomorphism u :: a -> b is a function that has an inverse, i.e. another function v :: b -> a such that the relationships
u . v = id
v . u = id
are satisfied. You say that two types are isomorphic if there is an isomorphism between them. This essentially means that you can consider them to be the same type - anything that you can do with one, you can do with the other.
Isomorphism of functions
The two function types
(a,b) -> c
a -> b -> c
are isomorphic, since we can write
u :: ((a,b) -> c) -> a -> b -> c
u f = \x y -> f (x,y)
v :: (a -> b -> c) -> (a,b) -> c
v g = \(x,y) -> g x y
You can check that u . v and v . u are both id. In fact, the functions u and v are better known by the names curry and uncurry.
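If you want a quick spot-check at one concrete type (this reuses the u and v defined above; plus is just an arbitrary uncurried function picked for the test):
plus :: (Int, Int) -> Int
plus (x, y) = x + y

check1 :: Bool
check1 = v (u plus) (1, 2) == plus (1, 2)   -- v . u behaves as id, so True

check2 :: Bool
check2 = u (v (+)) 1 2 == 1 + (2 :: Int)    -- u . v behaves as id, so True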
Isomorphism and Newtypes
We exploit isomorphism whenever we use a newtype declaration. For example, the underlying type of the state monad is s -> (a,s) which can be a little confusing to think about. By using a newtype declaration:
newtype State s a = State { runState :: s -> (a,s) }
we generate a new type State s a which is isomorphic to s -> (a,s), and which makes it clear that when we use it, we are thinking about functions that carry modifiable state. We also get a convenient constructor State and a getter runState for the new type.
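As a small illustration (tick and example are hypothetical names, just showing how the wrapped representation gets built and unwrapped):
newtype State s a = State { runState :: s -> (a, s) }

-- a tiny stateful computation written against the underlying s -> (a, s),
-- wrapped with State and unwrapped again with runState
tick :: State Int Int
tick = State (\s -> (s, s + 1))

example :: (Int, Int)
example = runState tick 5   -- (5, 6)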
Monads and Comonads
For a more advanced viewpoint, consider the isomorphism using curry and uncurry that I used above. The Reader r a type has the newtype declaration
newtype Reader r a = Reader { runReader :: r -> a }
In the context of monads, a function f producing a reader therefore has the type signature
f :: a -> Reader r b
which is equivalent to
f :: a -> r -> b
which is one half of the curry/uncurry isomorphism. We can also define the CoReader r a type:
newtype CoReader r a = CoReader { runCoReader :: (a,r) }
which can be made into a comonad. There we have a function cobind, or (=>>), which takes a function that maps a coreader to a plain value:
g :: CoReader r a -> b
which is isomorphic to
g :: (a,r) -> b
But we already saw that a -> r -> b and (a,r) -> b are isomorphic, which gives us a nontrivial fact: the reader monad (with monadic bind) and the coreader comonad (with comonadic cobind) are isomorphic as well! In particular, they can both be used for the same purpose - that of providing a global environment that is threaded through every function call.
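Here is a hedged sketch of that correspondence in code (the names toCoKleisli and fromCoKleisli are mine; the newtypes are restated so the snippet stands alone, and the two conversions are just curry/uncurry dressed up in the wrappers):
newtype Reader r a   = Reader   { runReader   :: r -> a }
newtype CoReader r a = CoReader { runCoReader :: (a, r) }

-- a -> Reader r b  ~  a -> r -> b  ~  (a, r) -> b  ~  CoReader r a -> b
toCoKleisli :: (a -> Reader r b) -> (CoReader r a -> b)
toCoKleisli f (CoReader (a, r)) = runReader (f a) r

fromCoKleisli :: (CoReader r a -> b) -> (a -> Reader r b)
fromCoKleisli g a = Reader (\r -> g (CoReader (a, r)))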

Think in terms of data types. In Haskell, for example, you can regard two data types as isomorphic if there exists a pair of mutually inverse functions that convert data between them. The following three types are isomorphic to each other:
data Type1 a = Ax | Ay a
data Type2 a = Blah a | Blubb
data Maybe a = Just a | Nothing
You can think of the functions that convert between them as isomorphisms. This fits with the categorical idea of isomorphism: if between Type1 and Type2 there exist two functions f and g with f . g = id and g . f = id (at the appropriate types), then the two functions are isomorphisms between those two types (objects).
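For instance, one such pair of witnesses between Type1 and Type2 might look like this (the names to12 and from12 are mine; the definitions rely on the data declarations above):
to12 :: Type1 a -> Type2 a
to12 Ax     = Blubb
to12 (Ay a) = Blah a

from12 :: Type2 a -> Type1 a
from12 Blubb    = Ax
from12 (Blah a) = Ay a
-- from12 . to12 = id and to12 . from12 = id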

Related

F# - Treating a function like a map

Long story short, I came up with this funny function set, which takes a function f : 'k -> 'v, a chosen value k : 'k, and a chosen result v : 'v, and uses f as the basis for a new function g : 'k -> 'v that is exactly the same as f, except that it now holds that g k = v.
Here is the (pretty simple) F# code I wrote in order to make it:
let set : ('k -> 'v) -> 'k -> 'v -> 'k -> 'v =
    fun f k v x ->
        if x = k then v else f x
My questions are:
Does this function pose any problems?
I could imagine that repeated use of the function, like this
let kvs : (int * int) List = ... // A very long list of random int pairs.
List.fold (fun f (k,v) -> set f k v) id kvs
would start building up a long list of functions on the heap. Is this something to be concerned about?
Is there a better way to do this, while still keeping the type?
I mean, I could do stuff like construct a type holding the original function f and a Map, set key-value pairs in the map, and check the map first and the function second when using keys to get values, but that's not what interests me here - what interests me is having a function for "modifying" a single result for a given value, for a given function.
Potential problems:
The set-modified function leaks space if you override the same value twice:
let huge_object = ...
let small_object = ...
let f0 = set f 0 huge_object
let f1 = set f0 0 small_object
Even though it can never be the output of f1, huge_object cannot be garbage-collected until f1 can: huge_object is referenced by f0, which is in turn referenced by f1.
The set-modified function has overhead linear in the number of set operations applied to it.
I don't know if these are actual problems for your intended application.
If you wish set to have exactly the type ('k -> 'v) -> 'k -> 'v -> 'k -> 'v then I don't see a better way(*). The obvious idea would be to have a "modification table" of functions you've already modified, then let set look up a given f in this table. But function types do not admit equality checking, so you cannot compare f to the set of functions known to your modification table.
(*) Reflection notwithstanding.

Recursive discriminated unions and map

I need a type of tree and a map on those, so I do this:
type 'a grouping =
    G of ('a * 'a grouping) list
    with
        member g.map f =
            let (G gs) = g
            gs |> List.map (fun (s, g) -> f s, g.map f) |> G
But this makes me wonder:
The map member is boilerplate. In Haskell, GHC would implement fmap for me (... deriving (Functor)). I know F# doesn't have typeclasses, but is there some other way I can avoid writing map myself in F#?
Can I somehow avoid the line let (G gs) = g?
Is this whole construction somehow non-idiomatic? It looks weird to me, but maybe that's just because putting members on sum types is new to me.
I don't think there is a way to automatically derive map; however, there is a way to emulate type classes in F#. Your code can be written like this:
#r #"FsControl.Core.dll"
#r #"FSharpPlus.dll"
open FSharpPlus
open FsControl.Core.TypeMethods
type 'a grouping =
G of ('a * 'a grouping) list
with
// Add an instance for Functor
static member instance (_:Functor.Map, G gs, _) = fun (f:'b->'c) ->
map (fun (s, g) -> f s, map f g) gs |> G
// TEST
let a = G [(1, G [2, G[]] )]
let b = map ((+) 10) a // G [(11, G [12, G[]] )]
Note that map is really overloaded, the first application you see calls the instance for List<'a> and the second one the instance for grouping<'a>. So it behaves like fmap in Haskell.
Also note that written this way you can decompose G gs directly in the member's parameters, avoiding the let (G gs) = g line.
Now regarding what is idiomatic: I think many people would agree your solution is more F#-idiomatic, but to me new idioms should also be developed in order to get more features and overcome current language limitations; that's why I consider using a library which defines clear conventions to be idiomatic as well.
Anyway, I agree with @kvb that it's slightly more idiomatic to define map in a module; in F#+ that convention is also used, so you have the generic map and the specific ModuleX.map.

Recursive anonymous functions in SML

Is it possible to write recursive anonymous functions in SML? I know I could just use the fun syntax, but I'm curious.
I have written, as an example of what I want:
val fact =
    fn n => case n of
        0 => 1
      | x => x * fact (n - 1)
The anonymous function isn't really anonymous anymore when you bind it to a variable. And since val rec is just the derived form of fun with no difference other than appearance, you could just as well have written it using the fun syntax. Also, you can do pattern matching in fn expressions as well as in case, since case is derived from fn.
So in all its simpleness you could have written your function as
val rec fact = fn 0 => 1
                | x => x * fact (x - 1)
but this is exactly the same as the more readable (in my opinion)
fun fact 0 = 1
| fact x = x * fact (x - 1)
As far as I can tell, there is only one reason to write your code using the longer val rec form, and that is that you can more easily annotate your code with comments and explicit types. For example, if you have seen Haskell code before and like the way they annotate the types of their functions, you could write it something like this
val rec fact : int -> int =
    fn 0 => 1
     | x => x * fact (x - 1)
As templatetypedef mentioned, it is possible to do it using a fixed-point
combinator. Such a combinator might look like
fun Y f =
    let
        exception BlackHole
        val r = ref (fn _ => raise BlackHole)
        fun a x = !r x
        fun ta f = (r := f ; f)
    in
        ta (f a)
    end
And you could then calculate fact 5 with the code below, which uses anonymous functions to express the factorial function and then binds the result of the computation to res.
val res =
    Y (fn fact =>
          fn 0 => 1
           | n => n * fact (n - 1)
      )
      5
The fixed-point code and example computation are courtesy of Morten Brøns-Pedersen.
Updated response to George Kangas' answer:
In languages I know, a recursive function will always get bound to a
name. The convenient and conventional way is provided by keywords like
"define", or "let", or "letrec",...
Trivially true by definition. If the function (recursive or not) wasn't bound to a name it would be anonymous.
The unconventional, more anonymous looking, way is by lambda binding.
I don't see what is unconventional about anonymous functions; they are used all the time in SML, in fact in any functional language. They are even starting to show up in more and more imperative languages as well.
Jesper Reenberg's answer shows lambda binding; the "anonymous"
function gets bound to the names "f" and "fact" by lambdas (called
"fn" in SML).
The anonymous function is in fact anonymous (not "anonymous" -- no quotes), and yes, of course it will get bound in the scope of whatever function it is passed to as an argument. Anything else would make the language totally useless. The exact same thing happens when calling map (fn x => x) [.....]; in this case the anonymous identity function is still, in fact, anonymous.
The "normal" definition of an anonymous function (at least according to wikipedia), saying that it must not be bound to an identifier, is a bit weak and ought to include the implicit statement "in the current environment".
This is in fact true for my example, as seen by running it in mlton with the -show-basis argument on a file containing only fun Y ... and the val res ..
val Y: (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b
val res: int32
From this it is seen that none of the anonymous functions are bound in the environment.
A shorter "lambdanonymous" alternative, which requires OCaml launched
by "ocaml -rectypes":
(fun f n -> f f n)
(fun f n -> if n = 0 then 1 else n * (f f (n - 1))
7;; Which produces 7! = 5040.
It seems that you have completely misunderstood the idea of the original question:
Is it possible to write recursive anonymous functions in SML?
And the simple answer is yes. The complex answer is (among others?) an example of this done using a fixed-point combinator, not a "lambdanonymous" (whatever that is supposed to mean) example done in another language using features not even remotely possible in SML.
All you have to do is put rec after val, as in
val rec fact =
    fn n => case n of
        0 => 1
      | x => x * fact (n - 1)
Wikipedia describes this near the top of the first section.
let
    fun fact 0 = 1
      | fact x = x * fact (x - 1)
in
    fact
end
This is a recursive anonymous function. The name 'fact' is only used internally.
Some languages (such as Coq) use 'fix' as the primitive for recursive functions, while some languages (such as SML) use recursive-let as the primitive. These two primitives can encode each other:
fix f => e
    := let rec f = e in f end

let rec f = e ... in ... end
    := let f = fix f => e ... in ... end
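For comparison, here is a minimal Haskell sketch of the same encoding, with recursive let playing the role of letrec (fix and fact here are just illustrative definitions, not the SML primitives discussed above):
-- fix defined via a recursive let, then used to tie the knot for factorial
-- without naming the recursive call at its definition site
fix :: (a -> a) -> a
fix f = let x = f x in x

fact :: Integer -> Integer
fact = fix (\self n -> if n == 0 then 1 else n * self (n - 1))
-- fact 5 == 120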
In languages I know, a recursive function will always get bound to a name. The convenient and conventional way is provided by keywords like "define", or "let", or "letrec",...
The unconventional, more anonymous looking, way is by lambda binding. Jesper Reenberg's answer shows lambda binding; the "anonymous" function gets bound to the names "f" and "fact" by lambdas (called "fn" in SML).
A shorter "lambdanonymous" alternative, which requires OCaml launched by "ocaml -rectypes":
(fun f n -> f f n)
  (fun f n -> if n = 0 then 1 else n * (f f (n - 1)))
  7;;
Which produces 7! = 5040.

OCaml: Is there a function with type 'a -> 'a other than the identity function?

This isn't a homework question, by the way. It got brought up in class but my teacher couldn't think of any. Thanks.
How do you define the identity function? If you only consider the syntax, there are many different identity functions, which all have the correct type:
let f x = x
let f2 x = (fun y -> y) x
let f3 x = (fun y -> y) (fun y -> y) x
let f4 x = (fun y -> (fun y -> y) y) x
let f5 x = (fun y z -> z) x x
let f6 x = if false then x else x
There are even weirder functions:
let f7 x = if Random.bool() then x else x
let f8 x = if Array.length Sys.argv < 5 then x else x
If you restrict yourself to a pure subset of OCaml (which rules out f7 and f8), all the functions you can build satisfy an observational equation that ensures, in a sense, that what they compute is the identity: for every value f : 'a -> 'a, we have f x = x for all x.
This equation does not depend on the specific function, it is uniquely determined by the type. There are several theorems (framed in different contexts) that formalize the informal idea that "a polymorphic function can't change a parameter of polymorphic type, only pass it around". See for example the paper of Philip Wadler, Theorems for free!.
The nice thing with those theorems is that they don't only apply to the 'a -> 'a case, which is not so interesting. You can get a theorem out of the ('a -> 'a -> bool) -> 'a list -> 'a list type of a sorting function, which says that its application commutes with the mapping of a monotonic function.
More formally, if you have any function s with such a type, then for all types u and v, functions cmp_u : u -> u -> bool, cmp_v : v -> v -> bool, f : u -> v, and any list li : u list, if cmp_u u u' implies cmp_v (f u) (f u') (that is, f is monotonic), you have:
map f (s cmp_u li) = s cmp_v (map f li)
This is indeed true when s is exactly a sorting function, but I find it impressive to be able to prove that it is true of any function s with the same type.
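For a quick concrete check of one instance of this equation, here is a Haskell spot-check using sortBy in the role of s (sortBy takes an Ordering-valued comparison rather than a boolean one, but the same free theorem applies):
import Data.List (sortBy)

-- f = (*2) is monotonic with respect to the standard ordering, so mapping
-- before or after sorting gives the same result
check :: Bool
check =
    let f   = (* 2) :: Int -> Int
        cmp = compare
        xs  = [3, 1, 2] :: [Int]
    in  map f (sortBy cmp xs) == sortBy cmp (map f xs)   -- True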
Once you allow non-termination, either by diverging (looping indefinitely, as with the let rec f x = f x function given in another answer), or by raising exceptions, of course you can have anything: you can build a function of type 'a -> 'b, and types don't mean anything anymore. Using Obj.magic : 'a -> 'b has the same effect.
There are saner ways to lose the equivalence to identity : you could work inside a non-empty environment, with predefined values accessible from the function. Consider for example the following function :
let counter = ref 0
let f x = incr counter; x
You still have the property that, for all x, f x = x: if you only consider the return value, your function still behaves like the identity. But once you consider side effects, you're no longer equivalent to the (side-effect-free) identity: if I know counter, I can write a separating function that returns true when given this function f, and false for pure identity functions.
let separate g =
    let before = !counter in
    g ();
    !counter = before + 1
If counter is hidden (for example by a module signature, or simply let f = let counter = ... in fun x -> ...), and no other function can observe it, then we again can't distinguish f and the pure identity functions. So the story is much more subtle in the presence of local state.
let rec f x = f (f x)
This function never terminates, but it does have type 'a -> 'a.
If we only allow total functions, the question becomes more interesting. Without using evil tricks, it's not possible to write a total function of type 'a -> 'a, but evil tricks are fun so:
let f (x:'a):'a = Obj.magic 42
Obj.magic is an evil abomination of type 'a -> 'b which allows all kinds of shenanigans to circumvent the type system.
On second thought that one isn't total either because it will crash when used with boxed types.
So the real answer is: the identity function is the only total function of type 'a -> 'a.
Throwing an exception can also give you an 'a -> 'a type:
# let f (x:'a) : 'a = raise (Failure "aaa");;
val f : 'a -> 'a = <fun>
If you restrict yourself to a "reasonable" strongly normalizing typed λ-calculus, there is a single function of type ∀α α→α, which is the identity function. You can prove it by examining the possible normal forms of a term of this type.
Philip Wadler's 1989 article "Theorems for Free" explains how functions having polymorphic types necessarily satisfy certain theorems (e.g. a map-like function commutes with composition).
There are however some nonintuitive issues when one deals with much polymorphism. For instance, there is a standard trick for encoding inductive types and recursion with impredicative polymorphism, by representing an inductive object (e.g. a list) using its recursor function. In some cases, there are terms belonging to the type of the recursor function that are not recursor functions; there is an example in §4.3.1 of Christine Paulin's PhD thesis.

Higher-order type constructors and functors in Ocaml

Can the following polymorphic functions
let id x = x;;
let compose f g x = f (g x);;
let rec fix f = f (fix f);; (*laziness aside*)
be written for types/type constructors or modules/functors? I tried
type 'x id = Id of 'x;;
type 'f 'g 'x compose = Compose of ('f ('g 'x));;
type 'f fix = Fix of ('f (Fix 'f));;
for types but it doesn't work.
Here's a Haskell version for types:
data Id x = Id x
data Compose f g x = Compose (f (g x))
data Fix f = Fix (f (Fix f))
-- examples:
l = Compose [Just 'a'] :: Compose [] Maybe Char
type Natural = Fix Maybe -- natural numbers are fixpoint of Maybe
n = Fix (Just (Fix (Just (Fix Nothing)))) :: Natural -- n is 2
-- up to isomorphism composition of identity and f is f:
iso :: Compose Id f x -> f x
iso (Compose (Id a)) = a
Haskell allows type variables of higher kind. ML dialects, including Caml, allow type variables of kind "*" only. Translated into plain English,
In Haskell, a type variable g can correspond to a "type constructor" like Maybe or IO or lists. So the g x in your Haskell example would be OK (jargon: "well-kinded") if for example g is Maybe and x is Integer.
In ML, a type variable 'g can correspond only to a "ground type" like int or string, never to a type constructor like option or list. It is therefore never correct to try to apply a type variable to another type.
As far as I'm aware, there's no deep reason for this limitation in ML. The most likely explanation is historical contingency. When Milner originally came up with his ideas about polymorphism, he worked with very simple type variables standing only for monotypes of kind *. Early versions of Haskell did the same, and then at some point Mark Jones discovered that inferring the kinds of type variables is actually quite easy. Haskell was quickly revised to allow type variables of higher kind, but ML has never caught up.
The people at INRIA have made a lot of other changes to ML, and I'm a bit surprised they've never made this one. When I'm programming in ML, I might enjoy having higher-kinded type variables. But they aren't there, and I don't know any way to encode the kind of examples you are talking about except by using functors.
You can do something similar in OCaml, using modules in place of types, and functors (higher-order modules) in place of higher-order types. But it looks much uglier and it doesn't have type-inference ability, so you have to manually specify a lot of stuff.
module type Type = sig
    type t
end

module Char = struct
    type t = char
end

module List (X:Type) = struct
    type t = X.t list
end

module Maybe (X:Type) = struct
    type t = X.t option
end

(* In the following, I decided to omit the redundant
   single constructors "Id of ...", "Compose of ...", since
   they don't help in OCaml since we can't use inference *)
module Id (X:Type) = X

module Compose
    (F:functor(Z:Type)->Type)
    (G:functor(Y:Type)->Type)
    (X:Type) = F(G(X))

let l : Compose(List)(Maybe)(Char).t = [Some 'a']

module Example2 (F:functor(Y:Type)->Type) (X:Type) = struct
    (* unlike types, "free" module variables are not allowed,
       so we have to put it inside another functor in order
       to scope F and X *)
    let iso (a:Compose(Id)(F)(X).t) : F(X).t = a
end
Well... I'm not an expert on higher-order types or Haskell programming.
But this seems to be OK for F# (which is close to OCaml); could you work with these:
type 'x id = Id of 'x;;
type 'f fix = Fix of ('f fix -> 'f);;
type ('f,'g,'x) compose = Compose of ('f ->'g -> 'x);;
The last one I wrapped into a tuple as I didn't come up with anything better...
You can do it, but you need a bit of a trick:
newtype Fix f = In { out :: f (Fix f) }
You can define cata afterwards:
cata :: (Functor f) => (f a -> a) -> Fix f -> a
cata f = f . fmap (cata f) . out
That will define a generic catamorphism for all functors, which you can use to build your own stuff. Example:
data ListFix a b = Nil | Cons a b

type List a = Fix (ListFix a)

instance Functor (ListFix a) where
    fmap f Nil = Nil
    fmap f (Cons a lst) = Cons a (f lst)
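A small usage sketch for the definitions above (sumAlg and example are hypothetical names): an algebra over one layer of ListFix Int is folded over the whole structure by the generic catamorphism.
-- sum the elements of a List Int by giving cata an algebra for one layer
sumAlg :: ListFix Int Int -> Int
sumAlg Nil        = 0
sumAlg (Cons a b) = a + b

example :: Int
example = cata sumAlg (In (Cons 1 (In (Cons 2 (In Nil)))))   -- 3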
