Moving elements in a priority queue to a lower level - functional-programming

I have a multi-level priority queue implemented as a list of (level : int, priority : int, 'a) triples. The datatype looks like this:
datatype 'a queue = NONE | Q of (int * int * 'a) list;
Elements at a lower level are at the front of the queue, and elements at the same level are sorted by priority. I have an enqueue function.
So, if the existing queue is val a = Q [(3,2,"c"),(3,2,"d"),(5,2,"b"),(5,3,"a")],
then enqueue a 1 5 "e"
gives
val a = Q [(1,5,"e"),(3,2,"c"),(3,2,"d"),(5,2,"b"),(5,3,"a")]
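(For reference, a sketch of such an enqueue - it keeps the list ordered by level first, then by priority; the exact implementation is not important here:)
fun enqueue NONE l p v = Q [(l, p, v)]
  | enqueue (Q xs) l p v =
      let
        (* insert before the first entry with a higher level,
           or the same level and a higher priority *)
        fun ins [] = [(l, p, v)]
          | ins ((l', p', v') :: rest) =
              if l < l' orelse (l = l' andalso p < p')
              then (l, p, v) :: (l', p', v') :: rest
              else (l', p', v') :: ins rest
      in
        Q (ins xs)
      end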
I have to write a function move that takes a predicate p and moves all elements that satisfy p to a lower level within the queue q.
That is: val move : ('a -> bool) -> 'a queue -> 'a queue
The definition below won't work:
fun move pred (Q((l,p,v)::xs)) = if (pred (v)) then enqueue (Q xs) (l-1) p v else (move pred (Q xs))
| move pred (Q[]) = raise Empty
I have just started learning SML. Please help.

First off, having a constructor named NONE is bad, since it shadows the built-in option-type constructor of the same name. Secondly, you say that elements that satisfy the predicate should be moved to a lower level - is that level always one less than their previous level?
Your move will not work because you apparently do not call it recursively when pred v is true, only when it isn't (and in that branch the current element is dropped rather than kept). If you instead enqueue (l,p,v) into a queue on which move pred has been called recursively (i.e. move pred (Q xs) rather than Q xs), perhaps it will work better.
Note also that this problem is ideal for solving through folding:
fun move pred (Q elems) =
    let fun enq ((l, p, v), q) = enqueue q (if pred v then l - 1 else l) p v
    in  foldl enq (Q []) elems end

fun move pred (Q ((l, p, v) :: xs)) =
      if pred v then enqueue (move pred (Q xs)) (l - 1) p v
      else enqueue (move pred (Q xs)) l p v
  | move pred (Q []) = Q []   (* an empty queue stays empty *)
This also works!

Related

Haskell: Traversal on a Map

I'm looking for a function with this signature:
chainTraversal :: k -> (k -> a -> Maybe (k, b)) -> Map k a -> Map k b
You give it an initial key to start at, a function and a map.
It will extract the element at position k in the Map and feed it to the function. Based on this, the function returns another key to look at next.
It's some mix between a filter and a traversal, with the elements themselves giving the next position to visit. The result collects the elements that have been traversed; it can be smaller than the original map.
Edit: taking into account a comment.
Since all the lookups are done in the original Map:
import Data.List (unfoldr)
import Data.Map (Map, fromList)

foo :: Ord k => k -> (k -> a -> Maybe (k, b)) -> Map k a -> Map k b
foo k0 f m = fromList $ unfoldr g k0
  where
    g k = (\(k', b) -> ((k, b), k'))   -- store the value under k, or under k'? you decide
            <$> (f k =<< (m `at` k))
or something like that.
You'll have to implement the at function in terms of one of the lookupNN functions of your choosing.
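For example, a simple at built on plain lookup (just one possible choice among the lookup* variants):
import qualified Data.Map as Map

at :: Ord k => Map.Map k a -> k -> Maybe a
at m k = Map.lookup k m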
It's not a filter since it must stop on the first Nothing produced by f.
There is no existing function with that signature and behavior. You'll have to write it yourself.

Fast serialization of BST using a difference list

Background
I'm working through Ullman's Elements of ML Programming in my spare time. The end goal is to self-study Andrew Appel's Modern Compiler Implementation in ML.
In Elements of ML, Ullman describes the difference list:
There is a trick known to LISP programmers as difference lists, in which one
manipulates lists more efficiently by keeping, as an extra parameter of your
function, a list that represents in some way what you have already accomplished.
The idea comes up in a number of different applications;
Ullman uses reverse as an example of the difference list technique. Here is a slow function that runs in O(n^2).
fun reverse nil = nil
  | reverse (x::xs) = reverse(xs) @ [x]
And the faster one using a difference list
fun rev1(nil, M) = M
| rev1(x::xs, ys) = rev1(xs, x::ys)
fun reverse L = rev1(L, nil)
My problem
I have this Binary Search Tree (BST) data type.
datatype 'a btree = Empty
| Node of 'a * 'a btree * 'a btree
A naive solution for collecting a list of the elements in pre-order would be
fun preOrder Empty = nil
  | preOrder (Node(x, left, right)) = [x] @ preOrder left @ preOrder right
But Ullman points out that the @ operator is slow and suggests in exercise 6.3.5 that I implement preOrder using a difference list.
After some head scratching I came up with this function:
fun preOrder tree =
    let
      fun pre (Empty, L) = L
        | pre (Node(x, left, right), L) =
            let
              val L = pre(right, L)
              val L = pre(left, L)
            in
              x :: L
            end
    in
      pre (tree, nil)
    end
It outputs the elements in pre-order. BUT it evaluates the tree in post-order! And the code is uglier than the naive preOrder one.
> val t = Node(5,
Node(3,
Node(1, Empty, Empty),
Node(4, Empty, Empty)),
Node(9, Empty, Empty))
> preOrder t
val it = [5,3,1,4,9] : int list
Prior Art
I tried searching for references to difference lists in ML programming, and found John Hughes's original article describing how to use difference lists for reverse.
I also found Matthew Brecknell's difference list blog post with examples in Haskell. He makes a distinction between using an accumulator, like Ullman's reverse example, and creating a new type for difference lists. He also presents a tree flattener. But I have a hard time understanding the Haskell code and would appreciate a similar exposition in Standard ML.
Question
How do I implement a function that actually evaluates the tree in pre-order and collects the elements in pre-order? Do I have to reverse the list after my traversal? Or is there some other trick?
How can I generalize this technique to work for in-order and post-order traversal?
What is the idiomatic way for using a difference list for a BST algorithm?
Your eventual method of doing this is the best it reasonably gets. The nice way to do this turns out to be
fun preOrderHelper (Empty, lst) = lst
  | preOrderHelper (Node(x, left, right), lst) =
      x :: preOrderHelper(left, preOrderHelper(right, lst))

fun preOrder tree = preOrderHelper(tree, nil)
Note that the run time of preOrderHelper(tree, list) is only a function of tree. Call r(t) the run time of preOrderHelper on tree t. Then we have r(Empty) = O(1) and r(Node(x, left, right)) = O(1) + r(left) + r(right), so clearly r(t) is linear in the size of t.
What is the derivation of this technique? Is there a more principled way of deriving it? In general, when you're turning a data structure into a list, you want to foldr onto an empty list. I don't know enough ML to say what the equivalent of typeclasses is, but in Haskell, we would approach the situation as follows:
data Tree a = Empty | Node a (Tree a) (Tree a)

instance Foldable Tree where
  foldr f acc t = foldrF t acc
    where
      foldrF Empty acc = acc
      foldrF (Node x left right) acc = f x (foldrF left (foldrF right acc))
To convert a Tree a to a [a], we would call Data.Foldable.toList, which is defined in Data.Foldable as
toList :: Foldable f => f a -> [a]
toList = foldr (:) []
Unfolding this definition gives us the equivalent of the ML definition above.
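Spelled out, that unfolding looks roughly like this (a sketch, reusing the Tree type above):
preOrder :: Tree a -> [a]
preOrder t = go t []
  where
    go Empty acc = acc
    go (Node x left right) acc = x : go left (go right acc)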
As you can see, your technique is actually a special case of a very principled way to turn data structures into lists.
In fact, in modern Haskell, we can do this totally automatically.
{-# LANGUAGE DeriveFoldable #-}
data Tree a = Empty | Node a (Tree a) (Tree a) deriving Foldable
will give us the equivalent(*) of the above Foldable implementation automatically, and we can then immediately use toList. I don't know what the equivalent is in ML, but I'm sure there's something analogous.
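For example, on the tree from the question (a small usage sketch):
{-# LANGUAGE DeriveFoldable #-}
import Data.Foldable (toList)

data Tree a = Empty | Node a (Tree a) (Tree a) deriving Foldable

main :: IO ()
main = print (toList t)   -- prints [5,3,1,4,9]
  where
    t = Node 5 (Node 3 (Node 1 Empty Empty) (Node 4 Empty Empty))
               (Node 9 Empty Empty)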
The difference between ML and Haskell is that Haskell is lazy. Haskell's laziness means that the evaluation of preOrder actually walks the tree in pre-order. This is one of the reasons I prefer laziness. Laziness permits very fine-grained control over the order of evaluation without resorting to non-functional techniques.
(*) (up to argument order, which does not matter in lazy Haskell.)
What you show is not what I've usually seen referred to as a difference list.
That would be, in pseudocode,
-- xs is a prefix of an eventual list xs @ ys,
-- a difference between the eventual list and its suffix ys:
dl xs = (ys => xs @ ys)
and then
pre Empty = (ys => ys) -- Empty contributes an empty prefix
pre (Node(x, left, right)) = (ys =>
-- [x] @ pre left @ pre right @ ys -- this pre returns lists
(dl [x] . pre left . pre right) ys) -- this pre returns diff-lists
-- Node contributes an [x], then goes
-- prefix from `left`, then from `right`
so that
preOrder tree = pre tree []
where . is the functional composition operator,
(f . g) = (x => f (g x))
Of course, since dl [x] = (ys => [x] @ ys) = (ys => x::ys), this is equivalent to what you show, in the form of
--pre Empty = (ys => ys) -- Empty's resulting prefix is empty
pre' Empty ys = ys
--pre (Node(x, left, right)) = (ys =>
pre' (Node(x, left, right)) ys =
-- [x] @ pre left @ pre right @ ys
-- (dl [x] . pre left . pre right) ys
x::( pre' left ( pre' right ys))
-- preOrder tree = pre' tree []
Operationally, this will traverse the tree right-to-left in an eager language, and left-to-right in a lazy one.
Conceptually, seen left-to-right, the resulting list has [x] and then the result of traversing left and then the result of traversing right, no matter what was the tree traversal order.
These difference lists are just partially applied @ operators, and appending is just functional composition:
dl (xs @ ys) == (dl xs . dl ys)
-- or:
dl (xs @ ys) zs == (dl xs . dl ys) zs
                == dl xs (dl ys zs)
                == xs @ (ys @ zs)
The prefix xs @ ys is the prefix xs, followed by the prefix ys, followed by whatever the eventual suffix zs will be.
Thus appending these difference lists is an O(1) operation, the creation of a new lambda function which is a composition of the arguments:
append dl1 dl2 = (zs => dl1 ( dl2 zs))
= (zs => (dl1 . dl2) zs )
= (dl1 . dl2)
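In Standard ML, where function composition is the infix o, this reads (a small sketch):
type 'a dlist = 'a list -> 'a list

fun dl xs = fn ys => xs @ ys                            (* a list, as a difference list *)
fun append (d1 : 'a dlist) (d2 : 'a dlist) = d1 o d2    (* O(1) append *)
fun toList (d : 'a dlist) = d []                        (* supply the eventual suffix *)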
Now we can easily see how to code the in-order or post-order traversals, as
in_ Empty = (ys => ys)
in_ (Node(x, left, right)) = (ys =>
-- in_ left @ [x] @ in_ right @ ys
(in_ left . dl [x] . in_ right) ys)
post Empty = (ys => ys)
post (Node(x, left, right)) = (ys =>
-- post left @ post right @ [x] @ ys
(post left . post right . dl [x]) ys)
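In eager Standard ML, written in the same accumulator style as pre' above (a sketch over the btree type from the question):
fun inOrder tree =
    let
      fun go (Empty, acc) = acc
        | go (Node(x, left, right), acc) = go (left, x :: go (right, acc))
    in
      go (tree, nil)
    end

fun postOrder tree =
    let
      fun go (Empty, acc) = acc
        | go (Node(x, left, right), acc) = go (left, go (right, x :: acc))
    in
      go (tree, nil)
    end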
Focusing on just lists [x] and their appending with @ lets us treat this uniformly -- no need to concern ourselves with :: and its arguments, which have different types.
The types of both arguments of @ are the same, just as they are for + with integers and indeed for . with functions. Such types paired with such operations are known as monoids, under the condition that the appending operation is associative, (a+b)+c == a+(b+c), and that there is an "empty" element e with e @ s == s @ e == s. This just means that the combination operation is "structural" in some way. This works with apples and oranges, but with atomic nuclei -- not so much.

How do you generate all permutations of a list with repetition in a functional programming language?

I'm trying to self-learn some programming in a functional programming language and recently stumbled on the problem of generating all the permutations of length m from a list of length n, with repetition. Mathematically, this should result in a total of n^m possible permutations, because each of the m 'slots' can be filled with any of the n elements. The code I have currently, however, does not give me all the elements:
let rec permuts n list =
  match n, list with
  | 0, _ -> [[]]
  | _, [] -> []
  | n, h :: t ->
      (List.map (fun tt -> h :: tt) (permuts (n-1) list))
      @ permuts n t;;
The algorithm basically takes one element out of a list with m elements, slaps it onto the front of all the combinations with the rest of the elements, and concatenates the results into one list, giving only n C m results.
For example, the output for permuts 2 [1;2;3] yields
[[1;1]; [1;2]; [1;3]; [2;2]; [2;3]; [3;3]]
whereas I actually want
[[1;1]; [1;2]; [1;3]; [2;1]; [2;2]; [2;3]; [3;1]; [3;2]; [3;3]]
-- a total of 9 elements. How do I fix my code so that I get the result I need? Any guidance is appreciated.
Your error appears on the second line of:
| n, h :: t -> List.map (fun tt -> h::tt) (permuts (n-1) list)
               @ permuts n t
Indeed, with this you are decomposing the set of n-tuples over k elements as the sum of
the set of (n-1)-tuples prefixed with the first element, and
the set of n-tuples over the remaining (k-1) elements.
Looking at the cardinalities of the three sets, there is an obvious mismatch, since
k^n ≠ k^(n-1) + (k-1)^n
and the problem is that the second term doesn't fit.
To avoid this issue, it is probably better to write a couple of helper functions.
I would suggest writing the following three helpers:
val distribute: 'a list -> 'a list -> 'a list list
(** distribute [x_1;...;x_n] y returns [x_1::y;...;x_n::y] *)
val distribute_on_all: 'a list -> 'a list list -> 'a list list
(** distribute_on_all x [l_1;...;l_n] returns distribute x l_1 @ ... @ distribute x l_n *)
val repeat: int -> ('a -> 'a) -> 'a -> 'a
(** repeat n f x is f(...(f x)...) with f applied n times *)
then your function will be simply
let power n l = repeat n (distribute_on_all l) [[]]
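For what it's worth, one possible implementation of those helpers (a sketch following the signatures above):
(* distribute [x_1;...;x_n] y = [x_1::y; ...; x_n::y] *)
let distribute xs y = List.map (fun x -> x :: y) xs

(* distribute_on_all x [l_1;...;l_n] = distribute x l_1 @ ... @ distribute x l_n *)
let distribute_on_all x ls = List.concat (List.map (distribute x) ls)

(* repeat n f x = f (... (f x) ...), with f applied n times *)
let rec repeat n f x = if n = 0 then x else repeat (n - 1) f (f x)
With these definitions, power 2 [1;2;3] produces all nine length-2 tuples.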
In Haskell, it's very natural to do this using a list comprehension:
samples :: Int -> [a] -> [[a]]
samples 0 _ = [[]]
samples n xs =
  [ p : ps
  | p <- xs
  , ps <- samples (n - 1) xs
  ]
It seems to me you never want to recurse on the tail of the list, since all your selections are from the whole list.
The Haskell code of @dfeuer looks right. Note that it never deconstructs the list xs. It just recurses on n.
You should be able to copy the Haskell code using List.map in place of the first two lines of the list comprehension, and a recursive call with (n - 1) in place of the next line.
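A direct OCaml transcription along those lines might look like this (a sketch):
let rec samples n xs =
  if n = 0 then [[]]
  else
    (* for each p taken from the whole list, cons it onto every shorter sample *)
    List.concat
      (List.map (fun p -> List.map (fun ps -> p :: ps) (samples (n - 1) xs)) xs)
samples 2 [1;2;3] then returns the nine lists in the order shown above.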
Here's how I would write it in OCaml:
let perm src =
  let rec extend remaining_count tails =
    match remaining_count with
    | 0 -> tails
    | _ ->
        (* Put an element 'src_elt' taken from all the possible elements 'src'
           in front of each possible tail 'tail' taken from 'tails',
           resulting in 'new_tails'. The elements of 'new_tails' are one
           item longer than the elements of 'tails'. *)
        let new_tails =
          List.fold_left (fun new_tails src_elt ->
            List.fold_left (fun new_tails tail ->
              (src_elt :: tail) :: new_tails
            ) new_tails tails
          ) [] src
        in
        extend (remaining_count - 1) new_tails
  in
  extend (List.length src) [[]]
The List.fold_left calls may look a bit intimidating but they work well. So it's a good idea to practice using List.fold_left. Similarly, Hashtbl.fold is also common and idiomatic, and you'd use it to collect the keys and values of a hash table.

Nested recursion and `Program Fixpoint` or `Function`

I’d like to define the following function using Program Fixpoint or Function in Coq:
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Coq.Program.Wf.
Require Import Recdef.
Inductive Tree := Node : nat -> list Tree -> Tree.
Fixpoint height (t : Tree) : nat :=
  match t with
  | Node x ts => S (fold_right Nat.max 0 (map height ts))
  end.

Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
  match t with
  | Node x ts => Node (f x) (map (fun t => mapTree f t) ts)
  end.
Next Obligation.
Unfortunately, at this point I have a proof obligation height t < height (Node x ts) without knowing that t is a member of ts.
Similarly with Function instead of Program Fixpoint, except that Function detects the problem and aborts the definition:
Error:
the term fun t : Tree => mapTree f t can not contain a recursive call to mapTree
I would expect to get a proof obligation of In t ts → height t < height (Node x ts).
Is there a way of getting that that does not involve restructuring the function definition? (I know work-arounds that require inlining the definition of map here, for example – I’d like to avoid these.)
Isabelle
To justify that expectation, let me show what happens when I do the same in Isabelle, using the function command, which is (AFAIK) related to Coq’s Function command:
theory Tree imports Main begin
datatype Tree = Node nat "Tree list"
fun height where
"height (Node _ ts) = Suc (foldr max (map height ts) 0)"
function mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
by pat_completeness auto
termination
proof (relation "measure (λ(f,t). height t)")
show "wf (measure (λ(f, t). height t))" by auto
next
fix f :: "nat ⇒ nat" and x :: nat and ts :: "Tree list" and t
assume "t ∈ set ts"
thus "((f, t), (f, Node x ts)) ∈ measure (λ(f, t). height t)"
by (induction ts) auto
qed
In the termination proof, I get the assumption t ∈ set ts.
Note that Isabelle does not require a manual termination proof here, and the following definition works just fine:
fun mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
This works because the map function has a “congruence lemma” of the form
xs = ys ⟹ (⋀x. x ∈ set ys ⟹ f x = g x) ⟹ map f xs = map g ys
that the function command uses to find out that the termination proof only needs to consider t ∈ set ts.
If such a lemma is not available, e.g. because I define
definition "map' = map"
and use that in mapTree, I get the same unprovable proof obligation as in Coq. I can make it work again by declaring a congruence lemma for map', e.g. using
declare map_cong[folded map'_def,fundef_cong]
In this case, you actually do not need well-founded recursion in its full generality:
Require Import Coq.Lists.List.
Set Implicit Arguments.
Inductive tree := Node : nat -> list tree -> tree.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
Coq is able to figure out by itself that recursive calls to map_tree are performed on strict subterms. However, proving anything about this function is difficult, as the induction principle generated for tree is not useful:
tree_ind :
forall P : tree -> Prop,
(forall (n : nat) (l : list tree), P (Node n l)) ->
forall t : tree, P t
This is essentially the same problem you described earlier. Luckily, we can fix the issue by proving our own induction principle with a proof term.
Require Import Coq.Lists.List.
Import ListNotations.
Unset Elimination Schemes.
Inductive tree := Node : nat -> list tree -> tree.
Set Elimination Schemes.
Fixpoint tree_ind
  (P : tree -> Prop)
  (IH : forall (n : nat) (ts : list tree),
        fold_right (fun t => and (P t)) True ts ->
        P (Node n ts))
  (t : tree) : P t :=
  match t with
  | Node n ts =>
      let fix loop ts :=
        match ts return fold_right (fun t' => and (P t')) True ts with
        | [] => I
        | t' :: ts' => conj (tree_ind P IH t') (loop ts')
        end in
      IH n ts (loop ts)
  end.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
The Unset Elimination Schemes command prevents Coq from generating its default (and not useful) induction principle for tree. The occurrence of fold_right on the induction hypothesis simply expresses that the predicate P holds of every tree t' appearing in ts.
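For instance, on a two-element list the hypothesis computes to a plain conjunction (a quick sanity check, assuming the imports above):
Goal forall (P : tree -> Prop) (t1 t2 : tree),
    fold_right (fun t => and (P t)) True [t1; t2] = (P t1 /\ (P t2 /\ True)).
Proof. intros; reflexivity. Qed.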
Here is a statement that you can prove using this induction principle:
Lemma map_tree_comp f g t :
map_tree f (map_tree g t) = map_tree (fun n => f (g n)) t.
Proof.
induction t as [n ts IH]; simpl; f_equal.
induction ts as [|t' ts' IHts]; try easy.
simpl in *.
destruct IH as [IHt' IHts'].
specialize (IHts IHts').
now rewrite IHt', <- IHts.
Qed.
You can now do this with Equations and get the right elimination principle automatically, using either structural nested recursion or well-founded recursion.
In general, it might be advisable to avoid this problem. But if one really wants to obtain the proof obligation that Isabelle gives you, here is a way:
In Isabelle, we can give an external lemma stating that map applies its function argument only to members of the given list. In Coq, we cannot do this in an external lemma, but we can do it in the type. So instead of the normal type of map,
forall A B, (A -> B) -> list A -> list B
we want the type to say "f is only ever applied to elements of the list":
forall A B (xs : list A), (forall x : A, In x xs -> B) -> list B
(This requires reordering the arguments so that the type of f can mention xs.)
Writing this function is not trivial, and I found it easier to use a proof script:
Definition map {A B} (xs : list A) (f : forall (x:A), In x xs -> B) : list B.
Proof.
induction xs.
* exact [].
* refine (f a _ :: IHxs _).
- left. reflexivity.
- intros. eapply f. right. eassumption.
Defined.
But you can also write it “by hand”:
Fixpoint map {A B} (xs : list A) : forall (f : forall (x:A), In x xs -> B), list B :=
match xs with
| [] => fun _ => []
| x :: xs => fun f => f x (or_introl eq_refl) :: map xs (fun y h => f y (or_intror h))
end.
In either case, the result is nice: I can use this function in mapTree, i.e.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map ts (fun t _ => mapTree f t))
end.
Next Obligation.
and I don’t have to do anything with the new argument to f, but it shows up in the termination proof obligation, as In t ts → height t < height (Node x ts), as desired. So I can prove that and define mapTree:
simpl.
apply Lt.le_lt_n_Sm.
induction ts; inversion_clear H.
- subst. apply PeanoNat.Nat.le_max_l.
- rewrite IHts by assumption.
apply PeanoNat.Nat.le_max_r.
Qed.
It only works with Program Fixpoint, not with Function, unfortunately.

Anonymous recursive functions in OCaml

How do you make an anonymous recursive function (something simple, for example factorial n)? I have heard it is possible but have no idea how to make it work in OCaml.
let a =
fun x -> ....
I just don't know how to keep it going...
Here is a definition of factorial using only anonymous functions:
let fact =
(fun f -> (fun x a -> f (x x) a) (fun x a -> f (x x) a))
(fun f n -> if n < 2 then 1 else n * f (n - 1))
It requires the use of the -rectypes flag.
Here's a session showing that it works:
$ rlwrap ocaml -rectypes
OCaml version 4.03.0
let fact =
(fun f -> (fun x a -> f (x x) a) (fun x a -> f (x x) a))
(fun f n -> if n < 2 then 1 else n * f (n - 1));;
val fact : int -> int = <fun>
# fact 8;;
- : int = 40320
I cheated somewhat by looking up the Y Combinator here: Rosetta Code: Y Combinator
Update
Disclaimer: you would do better to read up on lambda calculus, fixed points, and the Y Combinator than to get your info from me. I'm not a theorist, just a humble practitioner.
Following the actual computation is almost impossible (but definitely worth doing I'm sure). But at a high level the ideas are like this.
The first line of the definition is the Y Combinator, which in general calculates the fixed point of a function. It so happens that a recursive function is the fixed point of a function from functions to functions.
So the first goal is to find the function whose fixed point is the factorial function. That's the second line of the definition. If you give it a function of type int -> int, it gives you back another function of type int -> int. And if you give it the factorial function, it gives you back the factorial function. This means that the factorial function is its fixed point.
So then when you apply the Y Combinator to this function, you do indeed get the factorial function.
Let me try to expand a bit on Jeffrey Scofield's answer. A non-anonymous recursive definition of the factorial function could be
let rec fact n =
if n < 2 then 1 else n * fact (n - 1)
The first problem you encounter when you try to define an anonymous recursive function is how to do the actual recursive call (fact (n - 1) in our case). For a call we need a name and we do not have a name for an anonymous function. The solution is to use a temporary name. With the temporary name f, the definition body is just
fun n -> if n < 2 then 1 else n * f (n - 1)
This term does not have a type, because the "temporary name" f is unbound. But we can turn it into a value that does have a type by binding f as well. Let us call the result g:
let g = fun f n -> if n < 2 then 1 else n * f (n - 1)
g is not yet anonymous at the moment, but only because I want to refer to it again.
Observe that g has type (int -> int) -> (int -> int). What we want (the factorial function) will have type (int -> int). So g takes something of the type we want (a function type in this case) and produces something of the same type. The intuition is that g takes an approximation of the factorial function, namely a function f which works for all n up to some limit N and returns a better approximation, namely a function that works for all n up to N+1.
Finally we need something that turns g into an actual recursive definition.
Doing so is a very generic task. Recall that g improves the approximation quality. The final factorial function fact is one which cannot be further improved. So applying g to fact should be the same as just fact. (Actually that is only true from a value point of view. The actual computation inherent in g fact n for some n is different from that of just fact n. But the returned values are the same.) In other words, fact is a fixed point of g. So what we need is something that computes fixed points.
Luckily, there is a single function that does so: The Y combinator. From a value point of view, the Y combinator (let us use y in OCaml, as uppercase is reserved for constructors) is defined by the fact that y g = g (y g) for all g: given some function g, the combinator returns one of its fixed points.
Consequently,
y : ('a -> 'a) -> 'a
In our case the type variable is instantiated by (int -> int).
One possible way to define y would be
let y = fun g -> (fun x -> g (x x)) (fun x -> g (x x))
but this works only with lazy evaluation (as, I believe, Haskell has). As OCaml has eager evaluation, it produces a stack overflow when used. The reason is that OCaml tries to turn something like y g 8 into
g (y g) 8
g (g (y g)) 8
g (g (g (y g))) 8
...
without ever getting to call g.
The solution is to use deferred computation inside of y:
let y = fun g -> (fun x a -> g (x x) a) (fun x a -> g (x x) a)
One drawback is that y does not work for arbitrary types any more. It only works for function types.
y : (('b -> 'c) -> ('b -> 'c)) -> ('b -> 'c)
But you asked for recursive definitions of functions anyway, not for recursive definitions of other values. So, our definition of the factorial function is y g with y and g defined as above. Neither y nor g are anonymous yet, but that can be remedied easily:
(fun g -> (fun x a -> g (x x) a) (fun x a -> g (x x) a))
(fun f n -> if n < 2 then 1 else n * f (n - 1))
UPDATE:
Defining y only works with the -rectypes option. The reason is that we apply x to itself.
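As a usage sketch, the whole thing can also be applied on the spot, without ever naming it (again this needs -rectypes):
# (fun g -> (fun x a -> g (x x) a) (fun x a -> g (x x) a))
    (fun f n -> if n < 2 then 1 else n * f (n - 1))
    5;;
- : int = 120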
There is also an "intuitive" way to accomplish anonymous recursion without resorting to Y combinators.
It makes use of a let binding to store the value of a lambda that accepts itself as an argument, so that it can call itself with itself as the first parameter, like so:
let fact =
  (let fact0 = (fun self n -> if n < 2 then 1 else n * self self (n - 1))
   in (fun n -> fact0 fact0 n));;
It's anonymous only to the extent that it is not defined with let rec.
