I'm trying to make sense of this Caml function to calculate the n-th power of a number.
let rec iterate n f d =
  if n = 0 then d
  else iterate (n-1) f (f d)

let power i n =
  let i_times a = a * i in
  iterate n i_times 1
I understand what it does conceptually, but I am having trouble understanding the i_times bit.
Per my understanding, i_times takes a value a and returns a*i, where i is passed to power when it is called. In the expression where i_times is defined, it's followed by 1, so wouldn't this mean that i_times 1, taken together, evaluates to 1*i?
In this case, I fail to see how 3 parameters are being passed to iterate. Why doesn't it just end up being 2, that is n and i_times 1?
I know this is a pretty basic function, but I'm just getting started with functional programming and I want to make sense of it.
Basically you're asking how OCaml parses a series of juxtaposed values.
This expression:
f a b c
is interpreted as a call to a function f with 3 separate parameters. It is not parsed like this:
f a (b c)
which would be passing 2 parameters to f.
So indeed, three parameters are being passed to iterate. There are no subexpressions in the list of parameters, just three separate parameters.
Why didn't you figure that the subexpression n i_times would be evaluated before calling iterate? The reason (I suspect) is that you know that n isn't a function. But the parsing of the language doesn't depend on the types of things. If you wrote (n i_times) (with parentheses) it would be parsed as a call to n as a function. (This would, of course, be an error.)
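To make the grouping concrete, here is a minimal self-contained sketch (add3 is just a made-up example function):
(* OCaml application is left-associative: f a b c parses as ((f a) b) c. *)
let add3 a b c = a + b + c

let () =
  (* the parenthesised form is exactly the same call, just made explicit *)
  assert (add3 1 2 3 = ((add3 1) 2) 3);
  print_int (add3 1 2 3)   (* prints 6 *)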
I am trying to print the size of each list produced by the power set function below.
fun add x ys = x :: ys;
fun powerset ([]) = [[]]
| powerset (x::xr) = powerset xr @ map (add x) (powerset xr) ;
val it = [[],[3],[2],[2,3],[1],[1,3],[1,2],[1,2,3]] : int list list;
I have the list size function
fun size xs = (foldr op+ 0 o map (fn x => 1)) xs;
I haven't been able to combine these two functions to get the result I want. I need something like this:
[(0,[]),(1,[3]),(1,[2]),(2,[2,3]),(1,[1]),(2,[1,3]),(2,[1,2]),(3,[1,2,3])]
Could anyone please help me with this?
You can get the length of a list using the built-in List.length.
You seem to have forgotten to mention that you have the constraint that you can only use higher-order functions. (I am guessing you have this constraint because others these days are asking how to write powerset functions with this constraint, and using foldr to count, like you do, seems a little contrived.)
Your example indicates that you are trying to count each list in a list of lists, and not just the length of one list. For that you'd want to map the counting function across your list of lists. But that'd just give you a list of lengths, and your desired output seems to be a list of tuples containing both the length and the actual list.
Here are some hints:
You might as well use foldl rather than foldr since addition is associative.
You don't need to first map (fn x => 1) - this adds an unnecessary iteration of the list. You're probably doing this because folding seems complicated and you only just managed to write foldr op+ 0. This is symptomatic of not having understood the first argument of fold.
Try, instead of op+, to write the fold expression using an anonymous function:
fun size L = foldl (fn (x, acc) => ...) 0 L
Compare this to op+ which, if written like an anonymous function, would look like:
fn (x, y) => x + y
Folding with op+ carries some very implicit uses of the + operator: You want to discard one operand (since not its value but its presence counts) and use the other one as an accumulating variable (which is better understood by calling it acc rather than y).
If you're unsure what I mean by an accumulating variable, consider this recursive version of size:
fun size L =
let fun sizeHelper ([], acc) = acc
| sizeHelper (x::xs, acc) = sizeHelper (xs, 1+acc)
in sizeHelper (L, 0) end
Its helper function has an extra argument for carrying a result through recursive calls. This makes the function tail-recursive, and folding is one generalisation of this technique; the second argument to fold's helper function (given as an argument) is the accumulating variable. (The first argument to fold's helper function is a single argument rather than a list, unlike the explicitly recursive version of size above.)
Given your size function (aka List.length), you're only a third of the way, since
size [[],[3],[2],[2,3],[1],[1,3],[1,2],[1,2,3]]
gives you 8 and not [(0,[]),(1,[3]),(1,[2]),(2,[2,3]),...]
So you need to write another function that (a) applies size to each element, which would give you [0,1,1,2,...], and (b) somehow combine that with the input list [[],[3],[2],[2,3],...]. You could do that either in two steps using zip/map, or in one step using only foldr.
Try and write a foldr expression that does nothing to an input list L:
foldr (fn (x, acc) => ...) [] L
(Like with op+, doing op:: instead of writing an anonymous function would be cheating.)
Then think of each x as a list.
I would like to make these functions tail-recursive, but I don't know where to start.
let rec rlist r n =
if n < 1 then []
else Random.int r :: rlist r (n-1);;
let rec divide = function
h1::h2::t -> let t1,t2 = divide t in
h1::t1, h2::t2
| l -> l,[];;
let rec merge ord (l1,l2) = match l1,l2 with
[],l | l,[] -> l
| h1::t1,h2::t2 -> if ord h1 h2
then h1::merge ord (t1,l2)
else h2::merge ord (l1,t2);;
Is there any way to test whether a function is tail-recursive or not?
If you give a man a fish, you feed him for a day. But if you give him a fishing rod, you feed him for a lifetime.
Thus, instead of giving you the solution, I would rather teach you how to solve it yourself.
A tail-recursive function is a recursive function where all recursive calls are in tail positions. A call is in a tail position if it is the last thing the function does, i.e., if the result of the called function becomes the result of the caller.
Let's take the following simple function as our working example:
let rec sum n = if n = 0 then 0 else n + sum (n-1)
It is not a tail-recursive function, as the call sum (n-1) is not in a tail position: its result is then added to n. It is not always easy to translate a general recursive function into a tail-recursive form. Sometimes there is a tradeoff between efficiency, readability, and tail recursion.
The general techniques are:
use accumulator
use continuation-passing style
Using accumulator
Sometimes a function really does need to store intermediate results, because the results of the recursion must be combined in a non-trivial way. A recursive function gives us a free container for storing arbitrary data - the call stack, the place where the language runtime stores the parameters of the functions currently being called. Unfortunately, the stack is bounded and its size is unpredictable, so sometimes it is better to switch from the stack to the heap. The latter is slightly slower (because it gives the garbage collector more work), but it is bigger and more controllable. In our case, we need only one word to store the running sum, so we have a clear win: we use less space and we don't introduce any memory garbage:
let sum n =
let rec loop n acc = if n = 0 then acc else loop (n-1) (acc+n) in
loop n 0
However, as you may see, this came with a tradeoff - the implementation became slightly bigger and less understandable.
We used a general pattern here. Since we need to introduce an accumulator, we need an extra parameter. Since we don't want to (or can't) change the interface of our function, we introduce a new helper function that is recursive and carries the extra parameter. The trick is that we perform the addition before the recursive call, not after.
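For intuition, here is a rough, informal trace of how the two versions unfold for sum 3 (just an illustration of the point above, not literal output):
(* Plain recursion: the pending additions pile up on the stack.
     sum 3 = 3 + sum 2 = 3 + (2 + sum 1) = 3 + (2 + (1 + sum 0)) = 6

   Accumulator version: every step is a direct tail call, constant stack.
     loop 3 0 -> loop 2 3 -> loop 1 5 -> loop 0 6 -> 6 *)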
Using continuation-passing style
It is not always possible to rewrite your recursive algorithm using an accumulator. In that case a more general technique can be used - continuation-passing style (CPS). It is close to the previous technique, but instead of an accumulator we use a continuation. A continuation is a function that postpones the work that needs to be done after the recursion until a later time. Conventionally, we call this function return, or simply k (for continuation). Mentally, the continuation is a way of throwing the result of a computation back into the future: you return the result to the caller, but it will be used not now, only once everything is ready. Let's look at the implementation:
let sum n =
let rec loop n k = if n = 0 then k 0 else loop (n-1) (fun x -> k (x+n)) in
loop n (fun x -> x)
You may see that we employed the same strategy, except that instead of an int accumulator we use a function k as the second parameter. In the base case, when n is zero, we return 0 (you can read k 0 as return 0). In the general case, we recurse in a tail position, decrementing the inductive variable n as usual, but we pack the work that should be done with the result of the recursive call into a function: fun x -> k (x+n). Basically, this function says: once x, the result of the recursive call, is ready, add it to the number n and return. (Again, if we used the name return instead of k, it would read more naturally: fun x -> return (x+n).)
There is no magic here; we still have the same kind of tradeoff as with the accumulator, since we create a new closure (functional object) at every recursive call. And each newly created closure contains a reference to the previous one (the one that was passed in as a parameter). For example, fun x -> k (x+n) is a function that captures two free variables: the value n and the function k, the previous continuation. Basically, these continuations form a linked list, where each node carries a computation and all of its arguments except one, so the computation is delayed until the last argument is known.
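To see that linked list of closures concretely, here is an informal trace of the CPS version for sum 3 (illustrative only):
(* loop 3 (fun x -> x)
   loop 2 (fun x -> (fun x -> x) (x + 3))
   loop 1 (fun x -> (fun x -> (fun x -> x) (x + 3)) (x + 2))
   loop 0 (fun x -> (fun x -> (fun x -> (fun x -> x) (x + 3)) (x + 2)) (x + 1))
   ... and then the whole chain collapses:
   k 0  =>  0 + 1  =>  1 + 2  =>  3 + 3  =>  6 *)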
Of course, for our simple example there is no need to use CPS, since it will create unnecessary garbage and be much slower; this is only a demonstration. However, for more complex algorithms, in particular those that combine the results of two or more recursive calls in a non-trivial way (e.g., folding over a graph data structure), CPS is often the only way to obtain a tail-recursive implementation.
So now, armed with the new knowledge, I hope that you will be able to solve your problems as easy as pie.
Testing for the tail recursion
A tail call is a well-defined syntactic notion, so it should usually be obvious whether a call is in a tail position or not. However, there are still a few methods that let you check whether a call is in a tail position. In fact, there are other cases where tail-call optimization comes into play; for example, a call to the right of a short-circuit logical operator is also a tail call. So it is not always obvious whether a call uses the stack or is a tail call. Recent versions of OCaml allow you to put an annotation at the call site, e.g.,
let rec sum n = if n = 0 then 0 else n + (sum [@tailcall]) (n-1)
If the call is not really a tail call, the compiler issues a warning:
Warning 51: expected tailcall
Another method is to compile with the -annot option. The annotation file will contain an annotation for each call. For example, if we put the above function into a file sum.ml and compile it with ocamlc -annot sum.ml, we can open the sum.annot file and look for all the calls:
"sum.ml" 1 0 41 "sum.ml" 1 0 64
call(
stack
)
If, however, we put in our third (CPS) implementation, we see that all calls are tail calls, e.g. grep call -A1 sum.annot:
call(
tail
--
call(
tail
--
call(
tail
--
call(
tail
Finally, you can just test your program on some big input and see whether it fails with a stack overflow. You can even reduce the size of the stack; it is controlled by the environment variable OCAMLRUNPARAM. For example, to limit the stack to one thousand words:
export OCAMLRUNPARAM='l=1000'
ocaml sum.ml
You could do the following:
let rlist r n =
  let rec aux acc n =
    if n < 1 then acc
    else aux (Random.int r :: acc) (n-1)
  in aux [] n;;
let divide l =
  let rec aux acc1 acc2 = function
    | h1::h2::t -> aux (h1::acc1) (h2::acc2) t
    | [e] -> e::acc1, acc2
    | [] -> acc1, acc2
  in aux [] [] l;;
But for divide I prefer this solution:
let divide l =
  let rec aux acc1 acc2 = function
    | [] -> acc1, acc2
    | hd::tl -> aux acc2 (hd :: acc1) tl
  in aux [] [] l;;
let merge ord (l1,l2) =
  let rec aux acc l1 l2 =
    match l1,l2 with
    | [],l | l,[] -> List.rev_append acc l
    | h1::t1,h2::t2 ->
      if ord h1 h2
      then aux (h1 :: acc) t1 l2
      else aux (h2 :: acc) l1 t2
  in aux [] l1 l2;;
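A quick sanity check of the tail-recursive merge (just an illustrative test I would add, not part of the original code):
let () =
  assert (merge ( <= ) ([1; 4; 6], [2; 3; 5]) = [1; 2; 3; 4; 5; 6])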
As to your question about testing whether a function is tail-recursive or not: with a bit of searching you would have found it answered here.
I am trying to write a function that returns the index of the passed value v in a given list x; -1 if not found. My attempt at the solution:
let rec index (x, v) =
let i = 0 in
match x with
[] -> -1
| (curr::rest) -> if(curr == v) then
i
else
succ i; (* i++ *)
index(rest, v)
;;
This is obviously wrong to me (it will return -1 every time) because it redefines i at each pass. I have some obscure ways of doing it with separate functions in my head, none of which I can write down at the moment. I know this is a common pattern in all programming, so my question is, what's the best way to do this in OCaml?
Mutation is not a common way to solve problems in OCaml. For this task, you should use recursion and accumulate the result by passing an updated index i to each recursive call:
let index(x, v) =
let rec loop x i =
match x with
| [] -> -1
| h::t when h = v -> i
| _::t -> loop t (i+1)
in loop x 0
Another thing: using -1 as the exceptional case is not a good idea. You may forget this assumption somewhere and treat it like any other index. In OCaml, it's better to represent this case with an option type, so the compiler forces you to take care of None every time:
let index(x, v) =
let rec loop x i =
match x with
| [] -> None
| h::t when h = v -> Some i
| _::t -> loop t (i+1)
in loop x 0
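A small usage sketch (illustrative only) showing how the option version forces the caller to handle the not-found case:
let () =
  match index ([1; 2; 3], 2) with
  | Some i -> Printf.printf "found at %d\n" i   (* prints "found at 1" *)
  | None -> print_endline "not found"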
This is pretty clearly a homework problem, so I'll just make two comments.
First, values like i are immutable in OCaml. Their values don't change. So succ i doesn't do what your comment says. It doesn't change the value of i. It just returns a value that's one bigger than i. It's equivalent to i + 1, not to i++.
Second the essence of recursion is to imagine how you would solve the problem if you already had a function that solves the problem! The only trick is that you're only allowed to pass this other function a smaller version of the problem. In your case, a smaller version of the problem is one where the list is shorter.
You can't mutate variables in OCaml (well, there is a way but you really shouldn't for simple things like this)
A basic trick you can do is create a helper function that receives extra arguments corresponding to the variables you want to "mutate". Note how I added an extra parameter for i, and how the current position in the list is "mutated" in a similar way.
let rec index_helper (x, vs, i) =
  match vs with
  | [] -> -1
  | curr :: rest ->
    if curr = x then i
    else index_helper (x, rest, i+1)
;;

let index (x, vs) = index_helper (x, vs, 0) ;;
This kind of tail-recursive transformation is a way to translate loops to functional programming but to be honest it is kind of low level (you have full power but the manual recursion looks like programming with gotos...).
For some particular patterns, what you can instead try to do is take advantage of reusable higher-order functions such as map or the folds, as in the sketch below.
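For example, here is one possible fold-based sketch (index_fold is a made-up name, and this is only one of several ways to write it):
(* Thread (current position, result so far) through the list and
   keep the first match. *)
let index_fold v xs =
  let step (i, found) x =
    match found with
    | Some _ -> (i + 1, found)                         (* already found *)
    | None -> (i + 1, if x = v then Some i else None)
  in
  snd (List.fold_left step (0, None) xs)

let () =
  assert (index_fold 3 [1; 2; 3] = Some 2);
  assert (index_fold 9 [1; 2; 3] = None)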
What is the difference between functional programming and non-functional (imperative) programming? I'm afraid Wikipedia did not bring me any further.
Many thanks
PS: This past thread is also very good; however, I am happy I asked this question again, as the new answers were great - thanks:
Functional programming and non-functional programming
First, learn what a Turing machine is (from Wikipedia):
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This is about the lambda calculus (from Wikipedia):
In mathematical logic and computer science, the lambda calculus, also written as the λ-calculus, is a formal system for studying computable recursive functions, a la computability theory, and related phenomena such as variable binding and substitution.
The functional programming languages use, as their fundamental model of computation, the lambda calculus, while all the other programming languages use the Turing machine as their fundamental model of computation. (Well, technically, I should say functional programming languages vs. imperative programming languages - as languages in other paradigms use other models. For example, SQL uses the relational model, Prolog uses a logic model, and so on. However, pretty much all the languages people actually think about when discussing programming languages are either functional or imperative, so I’ll stick with the easy generality.)
What do I mean by “fundamental model of computation”? Well, all languages can be thought of in two layers: one, some core Turing-complete language, and then layers of either abstractions or syntactic sugar (depending upon whether you like them or not) which are defined in terms of the base Turing-complete language. The core language for imperative languages is then a variant of the classic Turing machine model of computation one might call “the C language”. In this language, memory is an array of bytes that can be read from and written to, and you have one or more CPUs which read memory, perform simple arithmetic, branch on conditions, and so on. That’s what I mean by the fundamental model of computation of these languages is the Turing Machine.
The fundamental model of computation for functional languages is the Lambda Calculus, and this shows up in two different ways. First, one thing that many functional languages do is to write their specifications explicitly in terms of a translation to the lambda calculus to specify the behavior of a program written in the language (this is known as “denotational semantics”). And second, almost all functional programming languages implement their compilers to use an explicit lambda-calculus-like intermediate language - Haskell has Core, Lisp and Scheme have their “desugared” representation (after all macros have been applied), OCaml (Objective Categorical Abstract Machine Language) has its lispish intermediate representation, and so on.
So what is this lambda calculus I’ve been going on about? Well, the basic idea is that, to do any computation, you only need two things. The first thing you need is function abstraction - the definition of an unnamed, single-argument function. Alonzo Church, who first defined the Lambda calculus, used a rather obscure notation to define a function: the Greek letter lambda, followed by the one-character name of the argument to the function, followed by a period, followed by the expression which was the body of the function. So the identity function, which, given any value, simply returns that value, would look like “λx.x”. I’m going to use a slightly more human-readable approach - I’m going to replace the λ character with the word “fun”, the period with “->”, and allow white space and multi-character names. So I might write the identity function as “fun x -> x”, or even “fun whatever -> whatever”. The change in notation doesn’t change the fundamental nature. Note that this is the source of the name “lambda expression” in languages like Haskell and Lisp - expressions that introduce unnamed local functions.
The only other thing you can do in the Lambda Calculus is to call functions. You call a function by applying an argument to it. I’m going to follow the standard convention that application is just the two names in a row - so f x is applying the value x to the function named f. We can replace f with some other expression, including a Lambda expression, if we want. When you apply an argument to an expression, you replace the application with the body of the function, with all the occurrences of the argument name replaced with whatever value was applied. So the expression (fun x -> x x) y becomes y y.
The theoreticians went to great lengths to precisely define what they mean by “replacing all occurrences of the variable with the value applied”, and can go on at great length about how precisely this works (throwing around terms like “alpha renaming”), but in the end things work exactly like you expect them to. The expression (fun x -> x x) (x y) becomes (x y) (x y) - there is no confusion between the argument x within the anonymous function and the x in the value being applied. This works even across multiple levels - the expression (fun x -> (fun x -> x x) (x x)) (x y) becomes first (fun x -> x x) ((x y) (x y)) and then ((x y) (x y)) ((x y) (x y)). The x in the innermost function (“(fun x -> x x)”) is a different x than the other x’s.
It is perfectly valid to think of function application as a string manipulation. If I have a (fun x -> some expression), and I apply some value to it, then the result is just some expression with all the x’s textually replaced with the “some value” (except for those which are shadowed by another argument).
As an aside, I will add parentheses where needed to disambiguate things, and also elide them where not needed. The only difference they make is grouping; they have no other meaning.
So that’s all there is to the Lambda calculus. No, really, that’s all - just anonymous function abstraction, and function application. I can see you’re doubtful about this, so let me address some of your concerns.
First, I specified that a function only takes one argument - how do you have a function that takes two, or more, arguments? Easy - you have a function that takes one argument, and returns a function that takes the second argument. For example, function composition could be defined as fun f -> (fun g -> (fun x -> f (g x))) - read that as a function that takes an argument f, and returns a function that takes an argument g and returns a function that takes an argument x and returns f (g x).
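The same idea transcribed into OCaml, where functions are curried by default (a small illustrative sketch; compose, add1, and double are made-up names):
(* compose takes f, returns a function taking g, which returns a function taking x *)
let compose f g = fun x -> f (g x)

let () =
  let add1 = fun x -> x + 1 in
  let double = fun x -> x * 2 in
  print_int (compose add1 double 5)   (* add1 (double 5) = 11 *)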
So how do we represent integers, using only functions and applications? Easily (if not obviously)- the number one, for instance, is a function fun s -> fun z -> s z – given a “successor” function s and a “zero” z, one is then the successor to zero. Two is fun s -> fun z -> s s z, the successor to the successor to zero, three is fun s -> fun z -> s s s z, and so on.
To add two numbers, say x and y, is again simple, if subtle. The addition function is just fun x -> fun y -> fun s -> fun z -> x s (y s z). This looks odd, so let me run you through an example to show that it does, in fact, work - let’s add the numbers 3 and 2. Now, three is just (fun s -> fun z -> s s s z) and two is just (fun s -> fun z -> s s z), so then we get (each step applying one argument to one function, in no particular order):
(fun x -> fun y -> fun s -> fun z -> x s (y s z)) (fun s -> fun z -> s s s z) (fun s -> fun z -> s s z)
(fun y -> fun s -> fun z -> (fun s -> fun z -> s s s z) s (y s z)) (fun s -> fun z -> s s z)
(fun y -> fun s -> fun z -> (fun z -> s s s z) (y s z)) (fun s -> fun z -> s s z)
(fun y -> fun s -> fun z -> s s s (y s z)) (fun s -> fun z -> s s z)
(fun s -> fun z -> s s s ((fun s -> fun z -> s s z) s z))
(fun s -> fun z -> s s s (fun z -> s s z) z)
(fun s -> fun z -> s s s s s z)
And at the end we get the unsurprising answer of the successor to the successor to the successor to the successor to the successor to zero, known more colloquially as five. Addition works by replacing the zero (or where we start counting) of the x value with the y value - to define multiplication, we instead diddle with the concept of “successor”:
(fun x -> fun y -> fun s -> fun z -> x (y s) z)
I’ll leave it to you to verify that the above expression does, in fact, implement multiplication.
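If you want to check these encodings mechanically, one way is to transcribe them into OCaml (an illustrative sketch of mine: the parentheses are written out explicitly, since OCaml application associates to the left, and church_to_int simply instantiates s and z with ordinary arithmetic):
(* Church numerals as ordinary OCaml functions. *)
let two = fun s z -> s (s z)
let three = fun s z -> s (s (s z))

let add x y = fun s z -> x s (y s z)
let mult x y = fun s z -> x (y s) z

(* Read a Church numeral back as an int: s = (+1), z = 0. *)
let church_to_int n = n (fun k -> k + 1) 0

let () =
  assert (church_to_int (add three two) = 5);    (* 3 + 2 *)
  assert (church_to_int (mult three two) = 6);   (* 3 * 2 *)
  print_int (church_to_int (mult three two))     (* prints 6 *)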
Wikipedia says
Imperative programs tend to emphasize the series of steps taken by a program in carrying out an action, while functional programs tend to emphasize the composition and arrangement of functions, often without specifying explicit steps. A simple example illustrates this with two solutions to the same programming goal (calculating Fibonacci numbers). The imperative example is in C++.
// Fibonacci numbers, imperative style
int fibonacci(int iterations)
{
int first = 0, second = 1; // seed values
for (int i = 0; i < iterations; ++i) {
int sum = first + second;
first = second;
second = sum;
}
return first;
}
std::cout << fibonacci(10) << "\n";
A functional version (in Haskell) has a different feel to it:
-- Fibonacci numbers, functional style
-- describe an infinite list based on the recurrence relation for Fibonacci numbers
fibRecurrence first second = first : fibRecurrence second (first + second)
-- describe fibonacci list as fibRecurrence with initial values 0 and 1
fibonacci = fibRecurrence 0 1
-- describe action to print the 10th element of the fibonacci list
main = print (fibonacci !! 10)
See this PDF also
(A) Functional programming describes solutions mechanically: you define a machine that constantly outputs correct results, e.g. with Caml:
let rec factorial = function
| 0 -> 1
| n -> n * factorial(n - 1);;
(B) Procedural programming describes solutions temporally. You describe a series of steps to transform a given input into the correct output, e.g. with Java:
int factorial(int n) {
int result = 1;
while (n > 0) {
result *= n--;
}
return result;
}
A functional programming language wants you to do (A) all the time. To me, the most tangible hallmark of purely functional programming is statelessness: you never declare variables independently of what they are used for.