Writing functions without using helper functions such as fold_left - functional-programming

let rec ddr l =
  let (r, _) =
    List.fold_left (fun (a_l, a_i) x ->
        ((a_i + x) :: a_l, a_i + x))
      ([], 0) l
  in
  List.rev r
I haven't learned List.fold_left or List.rev, but the code above solves my task, which is: write a function that does
[1; 2; 3] -> [1; 3; 6]
[4; 5] -> [4; 9]

This seems like a homework problem straight from the course I'm teaching this term! So I won't give the solution, but I'll give you a hint.
Suppose you're processing a list [1;2;3;4] and you've reached the 3. If only you knew what the partial sum of the preceding elements was, it would be easy to compute the new partial sum. Try to find a way to carry the partial sum so far with you, perhaps by using a helper function; the sketch below shows the shape of such a helper on an unrelated problem.
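Here is that carry-a-value-with-a-helper pattern applied to a different problem, computing the length of a list (a sketch only; go and acc are names I made up, and this is deliberately not your exercise):
(* go carries the running count along with it on every call *)
let length l =
  let rec go acc = function
    | [] -> acc
    | _ :: rest -> go (acc + 1) rest
  in
  go 0 l
Your partial sum can ride along in exactly the place acc occupies here.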

Related

Simplify multiway tree traversal with continuation passing style

I am fascinated by the approach used in this blog post to traverse a rose tree (a.k.a. multiway tree, a.k.a. n-ary tree) using CPS.
Here is my code, with type annotations removed and names changed, which I did while trying to understand the technique:
type 'a Tree = Node of 'a * 'a Tree list | Leaf of 'a

let rec reduce recCalls cont =
    match recCalls with
    | [] -> [] |> cont
    | findMaxCall :: pendingCalls ->
        findMaxCall (fun maxAtNode ->
            reduce pendingCalls (fun maxVals -> maxAtNode :: maxVals |> cont))

let findMaxOf (roseTree : int Tree) =
    let rec findMax tr cont =
        match tr with
        | Leaf i -> i |> cont
        | Node (i, chld) ->
            let recCalls = chld |> List.map findMax
            reduce recCalls (fun maxVals -> List.max (i :: maxVals) |> cont)
    findMax roseTree id

// test it
let FindMaxOfRoseTree =
    let t = Node (1, [ Leaf 2; Leaf 3 ])
    let maxOf = findMaxOf t // will be 3
    maxOf
My problem is, I find this approach hard to follow. The mutual recursion (assuming that's the right term) is really clever to my simpleton brain, but I get lost while trying to understand how it works, even when using simple examples and writing down the steps manually.
I need to use CPS with rose trees, and I'll be doing the kind of traversals that require it, because just like this example, computing results from my tree nodes requires that the children of the nodes are computed first. In any case, I do like CPS and I'd like to improve my understanding of it.
So my question is: Is there an alternative way of implementing CPS on rose trees which I might manage to follow better? Is there a way to refactor the above code that may make it easier to follow (eliminating the mutual recursion)?
If there is a name for the above approach, or some resources/books I can read to understand it better, hints are also most welcome.
CPS can definitely be confusing, but there are some things you can do to simplify this code:
Remove the Leaf case from your type because it's redundant. A leaf is just a Node with an empty list of children.
Separate general-purpose CPS logic from logic that's specific to rose trees.
Use the continuation monad to simplify CPS code.
First, let's define the continuation monad:
type ContinuationMonad() =
    member __.Bind(m, f) = fun c -> m (fun a -> f a c)
    member __.Return(x) = fun k -> k x

let cont = ContinuationMonad()
Using this builder, we can define a general-purpose CPS reduce function that combines a list of "incomplete" computations into a single incomplete computation (where an incomplete computation is any function that takes a continuation of type 't -> 'u and uses it to produce a value of type 'u).
let rec reduce fs =
    cont {
        match fs with
        | [] -> return []
        | head :: tail ->
            let! result = head
            let! results = reduce tail
            return result :: results
    }
I think this is certainly clearer, but it might seem like magic. The key to understanding let! x = f for this builder is that x is the value passed to f's implied continuation. This allows us to get rid of lots of lambdas and nested parens.
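To see what Bind does without the builder syntax, here is the same pair of operations written as two ordinary functions (a sketch in OCaml, the document's main language; return_ is a made-up name, since return is a keyword in F#):
(* the continuation monad as two plain functions *)
let return_ x = fun k -> k x                (* feed x straight to the continuation *)
let bind m f = fun k -> m (fun a -> f a k)  (* run m, pass its result to f, then on to k *)
In this vocabulary, let! x = m followed by the rest of the block corresponds to bind m (fun x -> rest_of_block).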
Now we're ready to work with rose trees. Here's the simplified type definition:
type 'a Tree = Node of 'a * 'a Tree list
let leaf a = Node (a, [])
Finding the maximum value in a tree now looks like this:
let rec findMax (Node (i, chld)) =
    cont {
        let! maxVals = chld |> List.map findMax |> reduce
        return List.max (i :: maxVals)
    }
Note that there's no mutual recursion here. Both reduce and findMax are self-recursive, but reduce doesn't call findMax and doesn't know anything about rose trees.
You can test the refactored code like this:
let t = Node (1, [ leaf 2; leaf 3 ])
findMax t (printfn "%A") // will be 3
For convenience, I created a gist containing all the code.
The accepted answer from brianberns indeed provides an alternative way of achieving CPS on a rose tree.
I'm also adding this alternative solution from Tomas Petricek. It shows how we can eliminate the extra function call by changing the type of the tree from a single node to a list of nodes in the inner loop.
I should have used the term multiway tree (which I'll change in a minute) but at least this question now documents three different methods. Hopefully it'll help others.

How to make a tail-recursive function and test it?

I would like to make these functions tail-recursive, but I don't know where to start.
let rec rlist r n =
  if n < 1 then []
  else Random.int r :: rlist r (n-1);;

let rec divide = function
    h1::h2::t -> let t1, t2 = divide t in
                 h1::t1, h2::t2
  | l -> l, [];;

let rec merge ord (l1, l2) = match l1, l2 with
    [], l | l, [] -> l
  | h1::t1, h2::t2 -> if ord h1 h2
                      then h1 :: merge ord (t1, l2)
                      else h2 :: merge ord (l1, t2);;
Is there any way to test whether a function is tail-recursive or not?
If you give a man a fish, you feed him for a day. But if you give him a fishing rod, you feed him for a lifetime.
Thus, instead of giving you the solution, I would rather teach you how to solve it yourself.
A tail-recursive function is a recursive function where all recursive calls are in a tail position. A call is in a tail position if it is the last thing the function does before returning, i.e., if the result of the called function becomes the result of the caller.
Let's take the following simple function as our working example:
let rec sum n = if n = 0 then 0 else n + sum (n-1)
It is not a tail-recursive function, as the call sum (n-1) is not in a tail position: its result is then added to n. It is not always easy to translate a general recursive function into a tail-recursive form. Sometimes there is a tradeoff between efficiency, readability, and tail recursion.
The general techniques are:
use an accumulator
use continuation-passing style
Using an accumulator
Sometimes a function really needs to store intermediate results, because the results of the recursion must be combined in a non-trivial way. A recursive function gives us a free container for arbitrary data: the call stack, the place where the language runtime stores the parameters of the currently active calls. Unfortunately, the stack is bounded, and its size is unpredictable. So sometimes it is better to switch from the stack to the heap. The latter is slightly slower (because it gives the garbage collector more work), but it is bigger and more controllable. In our case, we need only one word to store the running sum, so we have a clear win: we are using less space, and we're not introducing any memory garbage:
let sum n =
  let rec loop n acc = if n = 0 then acc else loop (n-1) (acc+n) in
  loop n 0
However, as you may see, this came with a tradeoff: the implementation became slightly bigger and less obvious.
We used a general pattern here. Since we need to introduce an accumulator, we need an extra parameter. Since we don't want to (or can't) change the interface of our function, we introduce a new helper function that is recursive and carries the extra parameter. The trick is that we apply the summation before we make the recursive call, not after; the hand evaluation below makes this concrete.
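Hand-evaluating sum 3 (a sketch), each step replaces the previous call outright, so nothing is left waiting on the stack:
sum 3 = loop 3 0 = loop 2 3 = loop 1 5 = loop 0 6 = 6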
Using continuation-passing style
It is not always the case that you can rewrite your recursive algorithm using an accumulator. In that case, a more general technique can be used: continuation-passing style. Basically, it is close to the previous technique, but we use a continuation in place of the accumulator. A continuation is a function that postpones the work that needs to be done after the recursive call to a later time. Conventionally, we call this function return or simply k (for continuation). Mentally, the continuation is a way of throwing the result of a computation back into the future: you return the result back to the caller, but it will be used not now, only once everything is ready. But let's look at the implementation:
let sum n =
  let rec loop n k = if n = 0 then k 0 else loop (n-1) (fun x -> k (x+n)) in
  loop n (fun x -> x)
You may see that we employed the same strategy, except that instead of an int accumulator we used a function k as the second parameter. In the base case, when n is zero, we return 0 (you can read k 0 as return 0). In the general case, we recurse in a tail position, with the usual decrement of the inductive variable n; however, we pack the work that should be done with the result of the recursive call into a function: fun x -> k (x+n). Basically, this function says: once x, the result of the recursive call, is ready, add it to the number n and return. (Again, if we used the name return instead of k, it would read more naturally: fun x -> return (x+n).)
There is no magic here; we still have the same tradeoff as with the accumulator, since we create a new closure (functional object) at every recursive call. And each newly created closure contains a reference to the previous one (the one that was passed in as a parameter). For example, fun x -> k (x+n) is a function that captures two free variables: the value n and the function k, the previous continuation. Basically, these continuations form a linked list, where each node bears a computation and all its arguments except one. So the computation is delayed until the last one is known.
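To make the linked-list picture concrete, here is a hand evaluation of sum 3 with the CPS version (k0, k1, ... are just names for the successive closures):
loop 3 k0   where k0 = fun x -> x
= loop 2 k1 where k1 = fun x -> k0 (x + 3)
= loop 1 k2 where k2 = fun x -> k1 (x + 2)
= loop 0 k3 where k3 = fun x -> k2 (x + 1)
= k3 0 = k2 1 = k1 3 = k0 6 = 6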
Of course, for our simple example there is no need to use CPS, since it would create unnecessary garbage and be much slower; this is only for demonstration. However, for more complex algorithms, in particular those that combine the results of two or more recursive calls in a non-trivial way, e.g., folding over a graph data structure, CPS is often the only way to make every call a tail call.
So now, armed with the new knowledge, I hope that you will be able to solve your problems as easy as pie.
Testing for tail recursion
A tail call is a pretty well-defined syntactic notion, so it should be fairly obvious whether a call is in a tail position or not. Still, there are a few methods that allow one to check it. In fact, there are other cases where tail-call optimization may come into play; for example, a call on the right-hand side of a short-circuit logical operator is also a tail call. So it is not always obvious whether a call uses the stack or is a tail call. Recent versions of OCaml allow one to put an annotation at the call site, e.g.,
let rec sum n = if n = 0 then 0 else n + (sum [@tailcall]) (n-1)
If the call is not really a tail call, the compiler issues a warning:
Warning 51: expected tailcall
Another method is to compile with the -annot option. The annotation file will contain an annotation for each call. For example, if we put the above function into a file sum.ml and compile with ocamlc -annot sum.ml, we can open the sum.annot file and look for all calls:
"sum.ml" 1 0 41 "sum.ml" 1 0 64
call(
stack
)
If, however, we put in our third implementation, then we see that all calls are tail calls, e.g. grep call -A1 sum.annot:
call(
tail
--
call(
tail
--
call(
tail
--
call(
tail
Finally, you can just test your program with some big input and see whether it fails with a stack overflow. You can even reduce the size of the stack, which is controlled by the environment variable OCAMLRUNPARAM; for example, to limit the stack to one thousand words:
export OCAMLRUNPARAM='l=1000'
ocaml sum.ml
You could do the following:
let rlist r n =
  let rec aux acc n =
    if n < 1 then acc
    else aux (Random.int r :: acc) (n-1)
  in aux [] n;;
let divide l =
  let rec aux acc1 acc2 = function
    | h1::h2::t -> aux (h1::acc1) (h2::acc2) t
    | [e] -> e::acc1, acc2
    | [] -> acc1, acc2
  in aux [] [] l;;
But for divide I prefer this solution:
let divide l =
  let rec aux acc1 acc2 = function
    | [] -> acc1, acc2
    | hd::tl -> aux acc2 (hd :: acc1) tl
  in aux [] [] l;;
let merge ord (l1, l2) =
  let rec aux acc l1 l2 =
    match l1, l2 with
    | [], l | l, [] -> List.rev_append acc l
    | h1::t1, h2::t2 ->
      if ord h1 h2
      then aux (h1 :: acc) t1 l2
      else aux (h2 :: acc) l1 t2
  in aux [] l1 l2;;
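A few example calls, with results worked out by hand (note that the accumulator-based divide returns its halves in a different order than the naive version, which is harmless for sorting):
divide [1; 2; 3; 4; 5];;          (* ([5; 3; 1], [4; 2]) with the first version above *)
merge (<=) ([1; 3; 5], [2; 4]);;  (* [1; 2; 3; 4; 5] *)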
As to your question about testing whether a function is tail-recursive or not: if you had searched a bit, you would have found it here.

Simple functions for SML/NJ

I was required to write a set of functions for problems in class. I think the way I wrote them was a bit more complicated than they needed to be. I had to implement all the functions myself, without using any pre-defined ones. I'd like to know if there are any quick and easy "one line" versions of these answers?
Sets can be represented as lists. The members of a set may appear in any order on the list, but there shouldn't be more than one occurrence of an element on the list.
(a) Define dif(A, B) to compute the set difference of A and B, A-B.
(b) Define cartesian(A, B) to compute the Cartesian product of set A and set B, { (a, b) | a∈A, b∈B }.
(c) Consider the mathematical-induction proof of the following: if a set A has n elements, then the powerset of A has 2^n elements. Following the proof, define powerset(A) to compute the powerset of set A, { B | B ⊆ A }.
(d) Define a function which, given a set A and a natural number k, returns the set of all the subsets of A of size k.
(* Takes an element and a list and checks whether the element is in the list *)
fun helperMem(x, []) = false
  | helperMem(x, n::y) =
      if x = n then true
      else helperMem(x, y);

(* Takes two lists and gives back a single list containing the unique elements of each *)
fun helperUnion([], y) = y
  | helperUnion(a::x, y) =
      if helperMem(a, y) then helperUnion(x, y)
      else a::helperUnion(x, y);

(* Takes an element and a list of lists. Attaches the new element to each list *)
fun helperAttach(a, []) = []
  | helperAttach(a, b::y) = helperUnion([a], b)::helperAttach(a, y);

(* Problem 1-a *)
fun myDifference([], y) = []
  | myDifference(a::x, y) =
      if helperMem(a, y) then myDifference(x, y)
      else a::myDifference(x, y);

(* Problem 1-b *)
fun myCartesian(xs, ys) =
    let fun first(x, []) = []
          | first(x, y::ys) = (x,y)::first(x, ys)
        fun second([], ys) = []
          | second(x::xs, ys) = first(x, ys) @ second(xs, ys)
    in second(xs, ys)
    end;

(* Problem 1-c; union and insert are the helpers above under shorter names *)
val union = helperUnion
val insert = helperAttach
fun power([]) = [[]]
  | power(a::y) = union(power(y), insert(a, power(y)));
I never got to problem 1-d, as these took me a while to get. Any suggestions on cutting these shorter? There was another problem that I didn't get, but I'd like to know how to solve it for future tests.
(staircase problem) You want to go up a staircase of n (>0) steps. At one time, you can go by one step, two steps, or three steps. But, for example, if there is one step left to go, you can go only by one step, not by two or three steps. How many different ways are there to go up the staircase? Solve this problem with SML. (a) Solve it recursively. (b) Solve it iteratively.
Any help on how to solve this?
Your set functions seem nice. I would not change anything fundamental about them, except perhaps their formatting and naming:
fun member (x, []) = false
  | member (x, y::ys) = x = y orelse member (x, ys)

fun dif ([], B) = []
  | dif (a::A, B) = if member (a, B) then dif (A, B) else a::dif (A, B)

fun union ([], B) = B
  | union (a::A, B) = if member (a, B) then union (A, B) else a::union (A, B)

(* Your cartesian looks nice as it is. Here is how you could do it using map: *)
local
  val concat = List.concat
  val map = List.map
in
  fun cartesian (A, B) = concat (map (fn a => map (fn b => (a, b)) B) A)
end
Your power is also very neat. If you call your function insert, it deserves a comment about inserting something into many lists. Perhaps insertEach or similar is a better name.
On your last task, since this is a counting problem, you don't need to generate the actual combinations of steps (e.g. as lists of steps), only count them. Using the recursive approach, try and write the base cases down as they are in the problem description.
I.e., make a function steps : int -> int where the number of ways to take 0, 1, and 2 steps is pre-calculated, and where for n steps, n > 2, you know that every combination of steps begins with either 1, 2, or 3 steps, so the count is the number of combinations for n-1, n-2, and n-3 steps added together.
Using the iterative approach, start from the bottom and use parameterised counting variables. (Sorry for the vague hint here; see the sketch after this paragraph.)
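A rough sketch of both shapes, in OCaml for concreteness (the question asks for SML, but the translation is mechanical; the base cases are my reading of the problem statement, so treat them as an assumption):
(* (a) recursive: ways to climb n steps taking 1, 2, or 3 at a time *)
let rec steps n =
  match n with
  | 0 -> 1    (* one way to climb nothing: do nothing *)
  | 1 -> 1
  | 2 -> 2
  | n -> steps (n-1) + steps (n-2) + steps (n-3)

(* (b) iterative: carry the last three counts upward from the bottom *)
let steps_iter n =
  let rec go i a b c =   (* a = ways i, b = ways (i+1), c = ways (i+2) *)
    if i = n then a
    else go (i+1) b c (a + b + c)
  in
  go 0 1 1 2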

Keeping a counter at each recursive call in OCaml

I am trying to write a function that returns the index of the passed value v in a given list x; -1 if not found. My attempt at the solution:
let rec index (x, v) =
  let i = 0 in
  match x with
    [] -> -1
  | (curr::rest) ->
      if (curr == v) then
        i
      else
        succ i; (* i++ *)
        index (rest, v)
;;
This is obviously wrong to me (it will return -1 every time) because it redefines i at each pass. I have some obscure ways of doing it with separate functions in my head, none of which I can write down at the moment. I know this is a common pattern in all of programming, so my question is: what's the best way to do this in OCaml?
Mutation is not a common way to solve problems in OCaml. For this task, you should use recursion and accumulate results by changing the index i on certain conditions:
let index (x, v) =
  let rec loop x i =
    match x with
    | [] -> -1
    | h::t when h = v -> i
    | _::t -> loop t (i+1)
  in loop x 0
Another thing: using -1 for the exceptional case is not a good idea. You may forget this assumption somewhere and treat -1 as an ordinary index. In OCaml, it's better to express this case with the option type, so the compiler forces you to handle None every time:
let index (x, v) =
  let rec loop x i =
    match x with
    | [] -> None
    | h::t when h = v -> Some i
    | _::t -> loop t (i+1)
  in loop x 0
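For example (a hypothetical toplevel session):
index ([10; 20; 30], 20);;  (* - : int option = Some 1 *)
index ([10; 20; 30], 99);;  (* - : int option = None *)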
This is pretty clearly a homework problem, so I'll just make two comments.
First, values like i are immutable in OCaml. Their values don't change. So succ i doesn't do what your comment says. It doesn't change the value of i. It just returns a value that's one bigger than i. It's equivalent to i + 1, not to i++.
Second the essence of recursion is to imagine how you would solve the problem if you already had a function that solves the problem! The only trick is that you're only allowed to pass this other function a smaller version of the problem. In your case, a smaller version of the problem is one where the list is shorter.
You can't mutate variables in OCaml (well, there is a way, but you really shouldn't for simple things like this).
A basic trick you can use is to create a helper function that receives extra arguments corresponding to the variables you want to "mutate". Note how I added an extra parameter for i and also "mutate" the current list in a similar way.
let rec index_helper (x, vs, i) =
  match vs with
    [] -> -1
  | (curr::rest) ->
      if curr == x then
        i
      else
        index_helper (x, rest, i+1)
;;

let index (x, vs) = index_helper (x, vs, 0);;
This kind of tail-recursive transformation is a way to translate loops to functional programming but to be honest it is kind of low level (you have full power but the manual recursion looks like programming with gotos...).
For some particular patterns, you can instead take advantage of reusable higher-order functions such as map or folds.
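As an illustration of the folds idea applied to this very problem (a sketch; it scans the whole list even after a hit, so manual recursion is still preferable here):
(* index via List.fold_left: thread the position and the answer through the fold *)
let index (x, vs) =
  let _, found =
    List.fold_left
      (fun (i, found) v ->
         match found with
         | Some _ -> (i, found)  (* already found: carry it along *)
         | None -> (i + 1, if v = x then Some i else None))
      (0, None) vs
  in
  match found with Some i -> i | None -> -1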

Can somebody please explain in layman terms what a functional language is? [duplicate]

Possible Duplicate:
Functional programming and non-functional programming
I'm afraid Wikipedia did not bring me any further.
Many thanks
PS: This past thread is also very good, however I am happy I asked this question again as the new answers were great - thanks
Functional programming and non-functional programming
First, learn what a Turing machine is (from Wikipedia):
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
And this is about the lambda calculus (from Wikipedia):
In mathematical logic and computer science, the lambda calculus, also written as the λ-calculus, is a formal system for studying computable recursive functions, a la computability theory, and related phenomena such as variable binding and substitution.
The functional programming languages use, as their fundamental model of computation, the lambda calculus, while all the other programming languages use the Turing machine as their fundamental model of computation. (Well, technically, I should say functional programming languages vs. imperative programming languages, as languages in other paradigms use other models. For example, SQL uses the relational model, Prolog uses a logic model, and so on. However, pretty much all the languages people actually think about when discussing programming languages are either functional or imperative, so I'll stick with the easy generality.)
What do I mean by “fundamental model of computation”? Well, all languages can be thought of in two layers: one, some core Turing-complete language, and then layers of either abstractions or syntactic sugar (depending upon whether you like them or not) which are defined in terms of the base Turing-complete language. The core language for imperative languages is then a variant of the classic Turing machine model of computation one might call “the C language”. In this language, memory is an array of bytes that can be read from and written to, and you have one or more CPUs which read memory, perform simple arithmetic, branch on conditions, and so on. That’s what I mean by the fundamental model of computation of these languages is the Turing Machine.
The fundamental model of computation for functional languages is the lambda calculus, and this shows up in two different ways. First, one thing that many functional languages do is write their specifications explicitly in terms of a translation to the lambda calculus to specify the behavior of a program written in the language (this is known as "denotational semantics"). And second, almost all functional programming languages implement their compilers to use an explicit lambda-calculus-like intermediate language: Haskell has Core, Lisp and Scheme have their "desugared" representation (after all macros have been applied), OCaml (Objective Categorical Abstract Machine Language) has its lispish intermediate representation, and so on.
So what is this lambda calculus I've been going on about? Well, the basic idea is that, to do any computation, you only need two things. The first thing you need is function abstraction: the definition of an unnamed, single-argument function. Alonzo Church, who first defined the lambda calculus, used a rather obscure notation: a function is the Greek letter lambda, followed by the one-character name of the argument to the function, followed by a period, followed by the expression which is the body of the function. So the identity function, which given any value simply returns that value, would look like "λx.x". I'm going to use a slightly more human-readable approach: I'm going to replace the λ character with the word "fun", the period with "->", and allow whitespace and multi-character names. So I might write the identity function as "fun x -> x", or even "fun whatever -> whatever". The change in notation doesn't change the fundamental nature. Note that this is the source of the name "lambda expression" in languages like Haskell and Lisp: expressions that introduce unnamed local functions.
The only other thing you can do in the lambda calculus is call functions. You call a function by applying an argument to it. I'm going to follow the standard convention that application is just two names in a row: so f x is applying the value x to the function named f. We can replace f with some other expression, including a lambda expression, if we want. When you apply an argument to an expression, you replace the application with the body of the function, with all occurrences of the argument name replaced with whatever value was applied. So the expression (fun x -> x x) y becomes y y.
The theoreticians went to great lengths to precisely define what they mean by "replacing all occurrences of the variable with the value applied", and can go on at great length about how precisely this works (throwing around terms like "alpha renaming"), but in the end things work exactly like you expect them to. The expression (fun x -> x x) (x y) becomes (x y) (x y): there is no confusion between the argument x within the anonymous function and the x in the value being applied. This works even across multiple levels: the expression (fun x -> (fun x -> x x) (x x)) (x y) becomes first (fun x -> x x) ((x y) (x y)) and then ((x y) (x y)) ((x y) (x y)). The x in the innermost function ("(fun x -> x x)") is a different x than the other x's.
It is perfectly valid to think of function application as a string manipulation. If I have a (fun x -> some expression), and I apply some value to it, then the result is just some expression with all the x’s textually replaced with the “some value” (except for those which are shadowed by another argument).
As an aside, I will add parentheses where needed to disambiguate things, and also elide them where not needed. The only difference they make is grouping; they have no other meaning.
So that's all there is to the lambda calculus. No, really, that's all: just anonymous function abstraction, and function application. I can see you're doubtful about this, so let me address some of your concerns.
First, I specified that a function only took one argument- how do you have a function that takes two, or more, arguments? Easy- you have a function that takes one argument, and returns a function that takes the second argument. For example, function composition could be defined as fun f -> (fun g -> (fun x -> f (g x))) – read that as a function that takes an argument f, and returns a function that takes an argument g and return a function that takes an argument x and return f (g x).
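That composition example happens to be legal OCaml as it stands; the usual multi-parameter syntax is just sugar for the same chain of one-argument functions (a quick sketch, both defining the same function):
let compose = fun f -> fun g -> fun x -> f (g x)
let compose' f g x = f (g x)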
So how do we represent integers, using only functions and applications? Easily (if not obviously): the number one, for instance, is a function fun s -> fun z -> s z. Given a "successor" function s and a "zero" z, one is the successor to zero. Two is fun s -> fun z -> s (s z), the successor to the successor to zero, three is fun s -> fun z -> s (s (s z)), and so on (the parentheses matter, since application is left-associative).
To add two numbers, say x and y, is again simple, if subtle. The addition function is just fun x -> fun y -> fun s -> fun z -> x s (y s z). This looks odd, so let me run you through an example to show that it does, in fact, work. Let's add the numbers 3 and 2. Now, three is just (fun s -> fun z -> s (s (s z))) and two is just (fun s -> fun z -> s (s z)), so then we get (each step applying one argument to one function, in no particular order):
(fun x -> fun y -> fun s -> fun z -> x s (y s z)) (fun s -> fun z -> s (s (s z))) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> (fun s -> fun z -> s (s (s z))) s (y s z)) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> (fun z -> s (s (s z))) (y s z)) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> s (s (s (y s z)))) (fun s -> fun z -> s (s z))
(fun s -> fun z -> s (s (s ((fun s -> fun z -> s (s z)) s z))))
(fun s -> fun z -> s (s (s ((fun z -> s (s z)) z))))
(fun s -> fun z -> s (s (s (s (s z)))))
And at the end we get the unsurprising answer of the successor to the successor to the successor to the successor to the successor to zero, known more colloquially as five. Addition works by replacing the zero (where we start counting) of the x value with the y value. To define multiplication, we instead diddle with the concept of "successor":
(fun x -> fun y -> fun s -> fun z -> x (y s) z)
I'll leave it to you to verify that the above code does, in fact, implement multiplication.
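If you want to play with these encodings, they transcribe directly into OCaml (a sketch; to_int is a helper I added to read a Church numeral back as an ordinary int):
let two = fun s -> fun z -> s (s z)
let three = fun s -> fun z -> s (s (s z))
let add x y = fun s -> fun z -> x s (y s z)
let mul x y = fun s -> fun z -> x (y s) z
let to_int n = n (fun k -> k + 1) 0
let () =
  assert (to_int (add three two) = 5);  (* 3 + 2 *)
  assert (to_int (mul three two) = 6)   (* 3 * 2 *)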
Wikipedia says
Imperative programs tend to emphasize the series of steps taken by a program in carrying out an action, while functional programs tend to emphasize the composition and arrangement of functions, often without specifying explicit steps. A simple example illustrates this with two solutions to the same programming goal (calculating Fibonacci numbers). The imperative example is in C++.
// Fibonacci numbers, imperative style
int fibonacci(int iterations)
{
    int first = 0, second = 1; // seed values
    for (int i = 0; i < iterations; ++i) {
        int sum = first + second;
        first = second;
        second = sum;
    }
    return first;
}
std::cout << fibonacci(10) << "\n";
A functional version (in Haskell) has a different feel to it:
-- Fibonacci numbers, functional style
-- describe an infinite list based on the recurrence relation for Fibonacci numbers
fibRecurrence first second = first : fibRecurrence second (first + second)
-- describe fibonacci list as fibRecurrence with initial values 0 and 1
fibonacci = fibRecurrence 0 1
-- describe action to print the 10th element of the fibonacci list
main = print (fibonacci !! 10)
See this PDF also
(A) Functional programming describes solutions definitionally: you state what the result is, rather than the steps that compute it, e.g. with Caml:
let rec factorial = function
| 0 -> 1
| n -> n * factorial(n - 1);;
(B) Procedural programming describes solutions temporally. You describe a series of steps to transform a given input into the correct output, e.g. with Java:
int factorial(int n) {
    int result = 1;
    while (n > 0) {
        result *= n--;
    }
    return result;
}
A functional programming language wants you to do (A) all the time. To me, the most tangible distinguishing feature of purely functional programming is statelessness: you never declare variables independently of what they are used for.

Resources