How does Erlang handle case statements mixed with tail recursion?

Let's say I have this code here:
do_recv_loop(State) ->
    receive
        {do,Stuff} ->
            case Stuff of
                one_thing ->
                    do_one_thing(),
                    do_recv_loop(State);
                another_thing ->
                    do_another_thing(),
                    do_recv_loop(State);
                _ ->
                    im_dead_now
            end;
        {die} -> im_dead_now;
        _ -> do_recv_loop(State)
    end.
Now, in theory this is tail-recursive, as none of the three calls to do_recv_loop require anything to be returned. But will Erlang recognize that this is tail recursive and optimize appropriately? I'm worried that the nested structure might make it unable to recognize it.

Yes, it will. Erlang is required to optimize tail calls, and this is clearly a tail call since nothing happens after the function is called.
I used to wish there were a tailcall keyword in Erlang so the compiler could warn me about invalid uses, but then I got used to it.

Yes, it is tail recursive. The main gotcha to be aware of is when the recursive call is wrapped inside an exception handler. In that case the handler needs to live on the stack, and that will turn something that looks tail-recursive into something that deceptively is not.
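The same pitfall can be shown outside Erlang too. Here is a rough F# sketch (my own illustration, not from this answer): in the first version the handler's frame has to survive the recursive call, so it is no longer a tail call; in the second the handler wraps only the body, so the recursive call stays in tail position.
// NOT tail recursive: the try/with frame must survive the recursive call.
let rec countDownProtected n =
    try
        if n = 0 then 0
        else countDownProtected (n - 1)
    with _ -> -1

// Tail recursive: the handler wraps only the body, not the recursive call.
let rec countDownTail n =
    let next = try n - 1 with _ -> 0
    if next > 0 then countDownTail next   // nothing happens after this call
    else 0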
The tail-call optimization is applicable if the call is in tail-position. Tail position is the "last thing before the function will return". Note that in
fact(0) -> 1;
fact(N) -> N * fact(N-1).
the recursive call to fact is not in tail position because after fact(N-1) is calculated, you need to run the continuation N * _ (i.e., multiply by N).

This, I think, is relevant because you are asking how to know whether your recursive function is optimized by the compiler. Since you aren't using lists:reverse/1, the quote below might not apply, but for someone else with the exact same question and a different code example it might be very relevant.
From The Eight Myths of Erlang Performance in the Erlang Efficiency Guide:
In R12B and later releases, there is an optimization that will in many cases reduce the number of words used on the stack in body-recursive calls, so that a body-recursive list function and a tail-recursive function that calls lists:reverse/1 at the end will use exactly the same amount of memory.
http://www.erlang.org/doc/efficiency_guide/myths.html#id58884
I think the take away message is that you may have to measure in some cases to see what will be best.

I'm pretty new to Erlang but from what I've gathered, the rule seems to be that in order to be tail-recursive, the function has to do one of two things in any given logical branch:
not make a recursive call
return the value of the recursive call and do nothing else after it
That recursive call can be nested inside as many if, case, or receive expressions as you want, as long as nothing actually happens after it.

Related

Does using a free monad in F# imply a higher startup time and limited instructions?

I am reading this excellent article by Mark Seemann.
In it, he gives a simple demonstration of using the free monad to model interactions using pure functions.
I understand it enough to be able to write such a program, and I can appreciate the virtues of such an approach. There is one bit of code though, that has me wondering about the implications.
let rec bind f = function
    | Free instruction -> instruction |> mapI (bind f) |> Free
    | Pure x -> f x
The function is recursive.
Given that this is not tail recursive, does that mean that the number of instructions I include is limited by stack space, because every time I add an instruction, it has to unwrap the whole thing to add it?
Similarly to the above, does this imply a higher startup time because each new instruction has to go further down to be added? It might be negligible, but that is not the point of the question; I am trying to understand the theory, not the practicality.
If the answer to the above questions is 'Yes', would an alternative implementation along the lines of (make a list of instructions in the order you want them, then process the list in reverse order such that each instruction gets added to the top of your program, rather than deep down in the bottom) be preferable?
I may (probably) have missed something fundamental here that means that the recursion never has to go very far, please let me know.
I think the answer to the first two questions is "it depends" - there is some overhead associated with free monads and you can certainly run into stack overflows, but you can always use an implementation that avoids this and if you're doing I/O, then the overhead probably is not too bad.
I think the much bigger issue with free monads is that they are just too complicated an abstraction. They let you solve one problem (abstracting over how you run code with lots of interactions), but at the cost of making the code very complicated. Most of the time, I think you can just keep a "functional core" as a nice, testable pure part and an "imperative wrapper" around it that is simple enough that you do not need to test it or abstract over it.
Free monads are a very general way of modeling this, and you could instead use a more concrete representation. For reading and writing, you could do:
type Instruction =
    | WriteLine of string
    | ReadLine of (string -> Instruction list)
As you can see, this is not just a simple list - ReadLine is tricky, because it takes a function that, when called with the string from the console, produces more instructions (that may, or may not, depend on this string):
let program =
    [ yield WriteLine "Enter your name:"
      yield ReadLine (fun name ->
        [ if name <> "" then
            yield WriteLine ("Hello " + name) ]) ]
This says "Hello " but only when you enter non-empty name, so it shows how the instructions depend on the name you read. The code to run a program then looks like this:
let rec runProgram program =
    match program with
    | [] -> ()
    | WriteLine s :: rest ->
        Console.WriteLine s
        runProgram rest
    | ReadLine f :: rest ->
        let input = Console.ReadLine()
        runProgram (f input @ rest)
The first two cases are nice and fully tail-recursive. The last case is tail-recursive in runProgram, but it is not tail-recursive when calling f. This should be fine, because f only generates instructions and does not call runProgram. It also uses the slow list append @, but you could fix that by using a better data structure (if it actually turned out to be a problem).
Free monads are basically an abstraction over this, but I personally think they go one step too far, and if I really needed this, then something like the above, with concrete instructions, is a better solution because it is simpler.

Why use a recursive function rather than `while true do` in F#?

Whilst watching a Pluralsight course by Tomas Petricek (who I assume knows what he is talking about), I saw code like the following...
let echo =
    MailboxProcessor<string>.Start(fun inbox ->
        async {
            while true do
                let! msg = inbox.Receive()
                printfn "Hello %s" msg
        })
Ignore the fact that this was to demo agents; I'm interested in the inner function, which uses while true do to keep it running indefinitely.
Whilst looking around for other example of agents, I saw that many other people use code like this...
let counter =
    MailboxProcessor.Start(fun inbox ->
        let rec loop n =
            async { do printfn "n = %d, waiting..." n
                    let! msg = inbox.Receive()
                    return! loop (n + msg) }
        loop 0)
Code copied from Wikibooks.
The inner function here is recursive, and is started off by calling it with a base value before the main function declaration ends.
Now I realise that in the second case recursion is a handy way of passing a private value to the inner function without having to use a mutable local value, but is there any other reason to use recursion here rather than while true do? Would there be any benefit in writing the first code snippet using recursion?
I find the non-recursive version much easier to read (subjective opinion of course), which seems like a good reason to use that whenever possible.
Talking about MailboxProcessor specifically, I think the choice depends on what exactly you are doing. In general, you can always use while loop or recursion.
Recursion makes it easier to use immutable state, and I find a while loop nicer if you have no state or if you use mutable state. Using mutable state is often quite useful, because MailboxProcessor protects you from concurrent accesses and you can keep the state local, so things like Dictionary (an efficient hash table) are often useful.
In general:
If you don't need any state, I would prefer while
If you have mutable state (like Dictionary or ResizeArray), I'd go for while
If you have some immutable state (like functional list or an integer), then recursion is nicer
If your logic switches between multiple modes of operation then you can write it as two mutually recursive functions, which is not doable nicely with loops (see the sketch after this list).
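For that last point, here is a rough F# sketch (my own illustration with made-up messages, not from the answer): an agent that switches between a running and a paused mode by giving each mode its own recursive loop.
let modal =
    MailboxProcessor<string>.Start(fun inbox ->
        // "running" and "paused" are two modes, each its own tail-recursive loop.
        let rec running () = async {
            let! msg = inbox.Receive()
            if msg = "pause" then return! paused ()
            else
                printfn "handling %s" msg
                return! running () }
        and paused () = async {
            let! msg = inbox.Receive()
            if msg = "resume" then return! running ()
            else return! paused () }
        running ())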
In many cases it depends on how you like to code, as in your example.
Everything you can write recursively you can also write with a loop, but sometimes, as with recursive data structures, it is easier to write it in a recursive style.
At university I learned that with recursive programming you only have to look at your next step, which is pretty handy!
You might be interested in this question as it explains my answer a bit further:
recursion versus iteration
In F#, for and while loop expressions lack capabilities common in other languages:
continue - Skip the rest of the loop and restart from the top of loop expression.
break - Stop the loop prematurely.
If you want continue and break, don't do what I did in the beginning and write a very complex test expression for while loops. Instead, tail recursion is the best answer in F#:
let vs : int [] = ...

let rec findPositiveNumberIndex i =
    if i < vs.Length then
        if vs.[i] > 0 then
            Some i
        else
            findPositiveNumberIndex (i + 1)
    else
        None

match findPositiveNumberIndex 0 with
| Some i -> printfn "First positive number index: %d" i
| None -> printfn "No positive numbers found"
In code like this F# applies something called tail-call optimization (TCO), which transforms the code above into a while loop with a break. This means that we won't run out of stack space and the loop is efficient. TCO is a feature that C# lacks, so we don't want to write code like the above in C#.
Like others are saying, with tail recursion you can sometimes avoid mutable state, but that's not all.
With tail recursion your loop expressions can return a result, which is nice.
In addition, if you like to iterate fast in F# over types like int64, or with an increment other than 1 and -1, you have to rely on tail recursion. The reason is that F# only generates efficient for loops for ints with increments of 1 and -1.
for i in 0..100 do
    printfn "This is a fast loop"

for i in 0..2..100 do
    printfn "This is a slow loop"

for i in 0L..100L do
    printfn "This is a slow loop"
Sometimes when you are hunting for performance, a trick is to loop towards 0 (this saves a CPU register). Unfortunately, the way F# generates for loop code, this doesn't perform as well as one would hope:
for i = 100 downto 0 do
    printfn "Unfortunately this is not as efficient as it can be"
Tail recursion towards 0 does save the CPU register.
(Unfortunately the F# compiler doesn't merge the test and the loop instruction for tail recursion, so it's not as good as it could be.)
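For illustration, a hedged sketch (my own, reusing the vs array from the snippet above) of looping towards 0 with tail recursion:
// Tail-recursive loop towards 0 over the vs array defined earlier.
let rec sumTowardsZero i acc =
    if i >= 0 then sumTowardsZero (i - 1) (acc + vs.[i])
    else acc

let total = sumTowardsZero (vs.Length - 1) 0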

In functional languages, how is the compiler able to translate non-tail recursion into loops to avoid stack overflows (if at all)?

I've been recently learning about functional languages and how many don't include for loops. While I don't personally view recursion as more difficult than a for loop (and often easier to reason about), I realized that many examples of recursion aren't tail recursive and therefore cannot use simple tail recursion optimization in order to avoid stack overflows. According to this question, all iterative loops can be translated into recursion, and those iterative loops can be transformed into tail recursion, so it confuses me when the answers on a question like this suggest that you have to explicitly manage the translation of your recursion into tail recursion yourself if you want to avoid stack overflows. It seems like it should be possible for a compiler to do all the translation from either recursion to tail recursion, or from recursion straight to an iterative loop without stack overflows.
Are functional compilers able to avoid stack overflows in more general recursive cases? Are you really forced to transform your recursive code in order to avoid stack overflows yourself? If they aren't able to perform general recursive stack-safe compilation, why aren't they?
Any recursive function can be converted into a tail recursive one. For instance, consider the transition function of a Turing machine, that is, the mapping from a configuration to the next one. To simulate the Turing machine you just need to iterate the transition function until you reach a final state, which is easily expressed in tail recursive form. Similarly, a compiler typically translates a recursive program into an iterative one simply by adding a stack of activation records.
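To make that concrete, here is a minimal F# sketch (mine, not the answer's) of iterating a step function until a final state is reached, written as a tail call:
// Iterate 'step' from 'config' until 'isFinal' holds; the recursive call is in tail position.
let rec run isFinal step config =
    if isFinal config then config
    else run isFinal step (step config)

// e.g. run (fun n -> n >= 10) (fun n -> n + 1) 0 evaluates to 10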
You can also give a translation into tail recursive form using continuation passing style (CPS). To take a classical example, consider the fibonacci function. This can be expressed in CPS style in the following way, where the second parameter is the continuation (essentially, a callback function):
def fibc(n, cont):
    if n <= 1:
        return cont(n)
    return fibc(n - 1, lambda a: fibc(n - 2, lambda b: cont(a + b)))
Again, you are simulating the recursion stack using a dynamic data structure: in this case, lambda abstractions.
The use of dynamic structures (lists, stacks, functions, etc.) in all the previous examples is essential. That is to say, in order to simulate a generic recursive function iteratively, you cannot avoid dynamic memory allocation, and hence you cannot avoid stack overflow, in general.
So, memory consumption is not only related to the iterative/recursive nature of the program. On the other hand, if you prevent dynamic memory allocation, your programs are essentially finite state machines, with limited computational capabilities (more interesting would be to parametrise memory according to the dimension of the inputs).
In general, in the same way as you cannot predict termination, you cannot predict unbounded memory consumption of your program: working with a Turing complete language, at compile time you cannot avoid divergence, and you cannot avoid stack overflow.
Tail Call Optimization:
The natural way to handle arguments and calls is to sort out the cleaning up when exiting or returning.
For tail calls to work, you need to alter this so that the tail call inherits the current frame. Instead of making a new frame, it massages the frame so that the next call returns to the current function's caller instead of to this function, which really only cleans up and returns if it is a tail call.
Thus TCO is all about cleaning up before the last call.
Continuation Passing Style - make tail calls out of everything
A compiler can change the code such that it only does primitive operations and passes the rest to continuations. Thus the stack usage gets moved onto the heap, since the computation to be continued is made into a function.
An example is:
function hypotenuse(k1, k2) {
  return sqrt(add(square(k1), square(k2)))
}
becomes
function hypotenuse(k, k1, k2) {
  (function (sk1) {
    (function (sk2) {
      (function (ar) {
        k(sqrt(ar));
      })(add(sk1, sk2));
    })(square(k2));
  })(square(k1));
}
Notice every function has exactly one call now and the order of evaluation is set.
According to this question, all iterative loops can be translated into recursion
"Translated" might be a bit of a stretch. The proof that for every iterative loop there is an equivalent recursive program is trivial if you understand Turing completeness: since a Turing machine can be implemented using strictly iterative structures and strictly recursive structures, every program that can be expressed in an iterative language can be expressed in a recursive language, and vice-versa. This means that for every iterative loop there is an equivalent recursive construct (and the other way around). However, that doesn't mean we have some automated way of transforming one into the other.
and those iterative loops can be transformed into tail recursion
Tail recursion can perhaps be easily transformed into an iterative loop, and the other way around. But not all recursion is tail recursion. Here's an example. Suppose we have some binary tree. It consists of nodes. Each node can have a left and a right child and a value. If a node has no children, then isLeaf returns true for it. We'll assume there's some function max that returns the maximum of two values, and if one of the values is null it returns the other one. Now we want to define a function that finds the maximum value among all the leaf nodes. Here it is in some pseudo-code I cooked up.
findmax(node) {
    if (node == null) {
        return null
    }
    if (node.isLeaf) {
        return node.value
    } else {
        return max(findmax(node.left), findmax(node.right))
    }
}
There are two recursive calls to findmax in there, so we can't optimize for tail recursion. We need the results of both before we can supply them to the max function and determine the result of the call for the current node.
Now, there may be a way of getting the same result, using recursion and only a single tail-recursive call. It is functionally equivalent, but it is a different algorithm. Compilers can do a lot of transformations to create a functionally equivalent program with lots of optimizations, but they're not quite clever enough to create functionally equivalent algorithms.
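For illustration only, here is a rough F# sketch (my own, with a simplified tree type where values live only at the leaves) of such a functionally different, single-tail-call algorithm that keeps its own explicit list of pending nodes:
type Tree =
    | Leaf of int
    | Node of Tree * Tree

// One tail-recursive loop over an explicit work list instead of two recursive calls.
let findMax tree =
    let rec loop pending best =
        match pending with
        | [] -> best
        | Leaf v :: rest -> loop rest (max best v)
        | Node (l, r) :: rest -> loop (l :: r :: rest) best
    loop [tree] System.Int32.MinValue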
Even the transformation of a function that only calls itself recursively once into a tail-recursive version would be far from trivial. Such an adaptation usually employs some argument passed into the recursive invocation that is used as an "accumulator" for the current results.
Look at the following naive implementation for calculating the factorial of a number (e.g. fact(5) = 5*4*3*2*1):
fact(number) {
    if (number == 1) {
        return 1
    } else {
        return number * fact(number - 1)
    }
}
It's not tail-recursive. But it can be made so in this way:
fact(number, acc) {
    if (number == 1) {
        return acc
    } else {
        return fact(number - 1, number * acc)
    }
}

// Helper function
fact(number) {
    return fact(number, 1)
}
This requires an interpretation of what is being done. Recognizing the case for stuff like this is easy enough, but what if you call a function instead of a multiplication? How will the compiler know that for the initial call the accumulator must be 1 and not, say, 0? How do you translate this program?
recsub(number) {
    if (number == 1) {
        return 1
    } else {
        return number - recsub(number - 1)
    }
}
This is as of yet outside the scope of the sort of compiler we have now, and may in fact always be.
Maybe it would be interesting to ask this on the computer science Stack Exchange to see if they know of some papers or proofs that investigate this more in-depth.

What do "continuations" mean in functional programming?(Specfically SML)

I have read a lot about continuations and a very common definition I saw is, it returns the control state.
I am taking a functional programming course taught in SML.
Our professor defined continuations to be:
"What keeps track of what we still have to do"
; "Gives us control of the call stack"
A lot of his examples revolve around trees. Before this chapter, we did tail recursion. I understand that tail recursion avoids using the stack to hold the recursively called functions by having an additional argument to "build" up the answer. Reversing a list, for example, would be built up in a new accumulator that we add to accordingly. Also, he said something about functions being called (but not evaluated) until we reach the end, where we replace backwards. He said an improved version of tail recursion would be using CPS (Continuation Passing Style).
Could someone give a simplified explanation of what continuations are and why they are favoured over other programming styles?
I found this stackoverflow link that helped me, but still did not clarify the idea for me:
I just don't get continuations!
Continuations simply treat "what happens next" as first class objects that can be used once unconditionally, ignored in favour of something else, or used multiple times.
To address what Continuation Passing Style is, here is some expression written normally:
let h x = f (g x)
g is applied to x and f is applied to the result.
Notice that g does not have any control. Its result will be passed to f no matter what.
In CPS this is written
let h x next = (g x (fun result -> f result next))
g not only has x as an argument, but a continuation that takes the output of g and returns the final value. This function calls f in the same manner, and gives next as the continuation.
What happened? What changed that made this so much more useful than f (g x)? The difference is that now g is in control. It can decide whether to use what happens next or not. That is the essence of continuations.
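A tiny F# sketch of that control (my own example, not from the answer): a CPS'd division that decides whether to call its continuation at all.
// 'next' is the continuation; divide may use it or ignore it.
let divide x y (next: int -> string) =
    if y = 0 then "division by zero"      // ignore "what happens next" entirely
    else next (x / y)                     // hand the result to the continuation

printfn "%s" (divide 10 2 (fun r -> sprintf "got %d" r))
printfn "%s" (divide 10 0 (fun r -> sprintf "got %d" r))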
An example of where continuations arise is imperative programming languages where you have control structures. While loops, blocks, ordinary statements, breaks and continues are all generalized through continuations, because these control structures take what happens next and decide what to do with it; for example we can have
...
while (condition1) {
    statement1;
    if (condition2) break;
    statement2;
    if (condition3) continue;
    statement3;
}
return statement3;
...
The while, the block, the statement, the break and the continue can all be described in a functional model through continuations. Each construct can be considered to be a function that accepts:
the current environment, containing
    the enclosing scopes
    optional functions accepting the current environment and returning a continuation to
        break from the innermost loop
        continue from the innermost loop
        return from the current function
    all the blocks associated with it (if-blocks, while-blocks, etc.)
a continuation to the next statement
and returns the new environment.
In the while loop, the condition is evaluated according to the current environment. If it is evaluated to true, then the block is evaluated and returns the new environment. The result of evaluating the while loop again with the new environment is returned. If it is evaluated to false, the result of evaluating the next statement is returned.
With the break statement, we look up the break function in the environment. If there is no function found, then we are not inside a loop and we give an error. Otherwise we give the current environment to the function and return the evaluated continuation, which would be the statement after the while loop.
With the continue statement the same would happen, except the continuation would be the while loop.
With the return statement the continuation would be the statement following the call to the current function, but it would remove the current enclosing scope from the environment.
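Here is a rough F# sketch of that idea (my own illustration, much simpler than the environment-passing model above): a while loop whose body receives explicit "break" and "continue" continuations and invokes one of them.
let rec whileLoop (cond: unit -> bool) body =
    if cond () then
        let breakK () = ()                        // what happens after the loop: nothing here
        let continueK () = whileLoop cond body    // what happens next: another iteration
        body breakK continueK

let mutable i = 0
whileLoop (fun () -> i < 6) (fun breakK continueK ->
    i <- i + 1
    if i = 2 then continueK ()
    elif i = 4 then breakK ()
    else
        printfn "i = %d" i
        continueK ())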

How does one implement a "stackless" interpreted language?

I am making my own Lisp-like interpreted language, and I want to do tail call optimization. I want to free my interpreter from the C stack so I can manage my own jumps from function to function and my own stack magic to achieve TCO. (I really don't mean stackless per se, just the fact that calls don't add frames to the C stack. I would like to use a stack of my own that does not grow with tail calls). Like Stackless Python, and unlike Ruby or... standard Python I guess.
But, as my language is a Lisp derivative, all evaluation of s-expressions is currently done recursively (because it's the most obvious way I thought of to do this nonlinear, highly hierarchical process). I have an eval function, which calls a Lambda::apply function every time it encounters a function call. The apply function then calls eval to execute the body of the function, and so on. Mutual stack-hungry non-tail C recursion. The only iterative part I currently use is to eval a body of sequential s-expressions.
(defun f (x y)
  (a x y)) ; tail call! goto instead of call.
           ; (do not grow the stack, keep return addr)

(defun a (x y)
  (+ x y))

; ...

(print (f 1 2)) ; how does the return work here? how does it know it's supposed to
                ; return the value here to be used by print, and how does it know
                ; how to continue execution here??
So, how do I avoid using C recursion? Or can I use some kind of goto that jumps across c functions? longjmp, perhaps? I really don't know. Please bear with me, I am mostly self- (Internet- ) taught in programming.
One solution is what is sometimes called "trampolined style". The trampoline is a top-level loop that dispatches to small functions that do some small step of computation before returning.
I've sat here for nearly half an hour trying to contrive a good, short example. Unfortunately, I have to do the unhelpful thing and send you to a link:
http://en.wikisource.org/wiki/Scheme:_An_Interpreter_for_Extended_Lambda_Calculus/Section_5
The paper is called "Scheme: An Interpreter for Extended Lambda Calculus", and section 5 implements a working Scheme interpreter in an outdated dialect of Lisp. The secret is in how they use the **CLINK** instead of a stack. The other globals are used to pass data around between the implementation functions, like the registers of a CPU. I would ignore **QUEUE**, **TICK**, and **PROCESS**, since those deal with threading and fake interrupts. **EVLIS** and **UNEVLIS** are, specifically, used to evaluate function arguments. Unevaluated args are stored in **UNEVLIS** until they are evaluated and moved into **EVLIS**.
Functions to pay attention to, with some small notes:
MLOOP: MLOOP is the main loop of the interpreter, or "trampoline". Ignoring **TICK**, its only job is to call whatever function is in **PC**. Over and over and over.
SAVEUP: SAVEUP conses all the registers onto the **CLINK**, which is basically the same as when C saves the registers to the stack before a function call. The **CLINK** is actually a "continuation" for the interpreter. (A continuation is just the state of a computation. A saved stack frame is technically a continuation, too. Hence, some Lisps save the stack to the heap to implement call/cc.)
RESTORE: RESTORE restores the "registers" as they were saved in the **CLINK**. It's similar to restoring a stack frame in a stack-based language. So, it's basically "return", except some function has explicitly stuck the return value into **VALUE**. (**VALUE** is obviously not clobbered by RESTORE.) Also note that RESTORE doesn't always have to return to a calling function. Some functions will actually SAVEUP a whole new computation, which RESTORE will happily "restore".
AEVAL: AEVAL is the EVAL function.
EVLIS: EVLIS exists to evaluate a function's arguments, and apply a function to those args. To avoid recursion, it SAVEUPs EVLIS-1. EVLIS-1 would just be regular old code after the function application if the code was written recursively. However, to avoid recursion, and the stack, it is a separate "continuation".
I hope I've been of some help. I just wish my answer (and link) was shorter.
What you're looking for is called continuation-passing style. This style adds an additional item to each function call (you could think of it as a parameter, if you like), that designates the next bit of code to run (the continuation k can be thought of as a function that takes a single parameter). For example you can rewrite your example in CPS like this:
(defun f (x y k)
  (a x y k))

(defun a (x y k)
  (+ x y k))

(f 1 2 print)
The implementation of + will compute the sum of x and y, then pass the result to k sort of like (k sum).
Your main interpreter loop then doesn't need to be recursive at all. It will, in a loop, apply each function application one after another, passing the continuation around.
It takes a little bit of work to wrap your head around this. I recommend some reading materials such as the excellent SICP.
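If it helps, here is a minimal trampoline sketch in F# (my own, not from the answer): instead of calling the next step directly, each step returns either a final value or a thunk for the next step, and a small driver loops over them, so the stack never grows with the number of "calls".
type Step<'a> =
    | Done of 'a
    | Next of (unit -> Step<'a>)

// The trampoline: a plain loop that keeps forcing the next step.
let run step =
    let mutable current = step
    let mutable result = None
    while Option.isNone result do
        match current with
        | Done v -> result <- Some v
        | Next k -> current <- k ()
    Option.get result

// Two mutually "calling" functions that never grow the stack.
let rec isEven n = if n = 0 then Done true else Next (fun () -> isOdd (n - 1))
and isOdd n = if n = 0 then Done false else Next (fun () -> isEven (n - 1))

printfn "%b" (run (isEven 1000000))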
Tail recursion can be thought of as reusing, for the callee, the same stack frame that you are currently using for the caller. So you could just re-set the arguments and goto the beginning of the function.
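A small F# sketch of that view (my own, hypothetical names): the tail-recursive gcd on top is what effectively becomes the loop below, where each "call" just reassigns the arguments and jumps back to the test.
// Tail-recursive form: the recursive call is the last thing that happens.
let rec gcdRec a b = if b = 0 then a else gcdRec b (a % b)

// The loop the compiler can turn it into: re-set the arguments, go back to the top.
let gcdLoop a0 b0 =
    let mutable a = a0
    let mutable b = b0
    while b <> 0 do
        let t = a % b
        a <- b
        b <- t
    a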
