Chapel supports recursive function calls, but does it support tail call optimisation so that tail recursion does not use an extra stack frame?
I'm reasonably certain that the Chapel compiler does not implement tail recursion optimizations itself. Depending on the complexity of the function, it may be that the back-end compiler (C compiler or LLVM) would perform such optimizations on the generated code.
[Edit: This characterization is for versions 1.14 and earlier of the Chapel compiler]
Where is recursion applied in industrial programming? I understand that a recursive function is one that calls itself, but my question is: what is its major use in real-world programming?
You can encounter recursive calls in many situations; the common ones are:
to traverse data structures that are recursive in nature, such as trees and graphs (see the sketch after this list)
to perform retries of the same function in case of errors
for numeric calculations, if the recursive notation brings clarity (if performance is critical, it is common to turn them into loops, unless you use a language that optimizes tail calls)
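For instance, here is a minimal sketch of the first case in Clojure; the nested-map tree representation with :value/:left/:right keys is just an assumption for illustration:

(defn tree-sum [tree]
  ;; sum the values in a binary tree represented as nested maps
  (if (nil? tree)
    0
    (+ (:value tree)
       (tree-sum (:left tree))      ; one recursive call per subtree;
       (tree-sum (:right tree)))))  ; not tail calls, so deep trees consume stack

(tree-sum {:value 1
           :left  {:value 2 :left nil :right nil}
           :right {:value 3 :left nil :right nil}})
;=> 6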
I tried using the y-combinator (in both Lua and Clojure) as I thought that would allow me to exceed the size of default stack implementations when using recursion. It seems I was mistaken. Yes, it works, but in both of these systems, the stack blows at exactly the same point as it does using plain old recursion. A lowish ~3600 in Clojure and a highish ~333000 on my Android Lua implementation. It is also a bit slower than regular recursion.
So is there anything to be gained by using the y-combinator, or is it just an intellectual exercise to prove a point? Have I missed something?
===
PS. Sorry, I should have made it clearer that I am aware I can use TCO to exceed the stack. My question does not relate to that. I am interested in this
a) from the academic/intellectual point of view
b) whether there is anything that can be done about those functions that cannot be written tail-recursively.
The Y combinator allows a non-recursive function to be used recursively, but that recursion still consumes stack space through nested function invocations.
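To make that concrete, here is one way to write an applicative-order Y combinator (strictly, the Z combinator) in Clojure; the names Y and fact are illustrative, not standard library functions:

(defn Y [f]
  ;; applicative-order Y (Z) combinator: ties the recursive knot without
  ;; the function ever referring to itself by name
  ((fn [x] (x x))
   (fn [x] (f (fn [& args] (apply (x x) args))))))

(def fact
  (Y (fn [self]
       (fn [n]
         (if (zero? n) 1 (* n (self (dec n))))))))

(fact 5) ;=> 120, but every call through self still pushes a stack frame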
For functions that can't be made tail-recursive, you could try refactoring them using continuation passing style, which would consume heap space instead of stack.
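As a sketch of that idea in Clojure, combining CPS with clojure.core/trampoline moves the pending work into closures on the heap; the count-nodes name and the :left/:right tree shape are assumptions for illustration:

(defn count-nodes
  ;; CPS version: every step returns a thunk, and trampoline drives the
  ;; loop, so pending work lives in heap-allocated closures, not stack frames
  ([tree] (trampoline count-nodes tree identity))
  ([tree k]
   (if (nil? tree)
     #(k 0)
     (fn []
       (count-nodes (:left tree)
                    (fn [l]
                      (count-nodes (:right tree)
                                   (fn [r]
                                     #(k (+ 1 l r))))))))))

;; a left-leaning tree a million nodes deep overflows the stack with plain
;; recursion, but works here:
(count-nodes (reduce (fn [t _] {:left t :right nil}) nil (range 1000000)))
;=> 1000000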
Here's a good overview of the topic: https://www.cs.cmu.edu/~15150/previous-semesters/2012-spring/resources/lectures/11.pdf
A “tail call” will allow you to exceed any stack size limitations. See Programming in Lua, section 6.3: Proper Tail Calls:
...after the tail call, the program does not need to keep any information about the calling function in the stack. Some language implementations, such as the Lua interpreter, take advantage of this fact and actually do not use any extra stack space when doing a tail call. We say that those implementations support proper tail calls.
If you haven't seen it yet, there is a good explanation here: What is a Y-combinator?
In summary, it helps to prove that the lambda calculus is Turing complete, but is useless for normal programming tasks.
As you are probably aware, in Clojure you would just use loop/recur to implement a loop that does not consume the stack.
Why does Clojure have the "recur" special form?
When I replace the "recur" with the function itself I get the same result:
(defn print-down-from [x]
  (when (pos? x)
    (println x)
    (recur (dec x))))
(print-down-from 5)
Has the same result as
(defn print-down-from [x]
  (when (pos? x)
    (println x)
    (print-down-from (dec x))))
(print-down-from 5)
I was wondering if "recur" is just a safety measure, so that the compiler will throw an error if a developer happens to use non-tail recursion. Or maybe it's necessary for the compiler to optimize the tail call?
But what I'm wondering most is about any other reasons other than stack consumption.
As explained in the clojure.org page on functional programming:
In the absence of mutable local variables, looping and iteration must take a different form than in languages with built-in for or while constructs that are controlled by changing state. In functional languages looping and iteration are replaced/implemented via recursive function calls. Many such languages guarantee that function calls made in tail position do not consume stack space, and thus recursive loops utilize constant space. Since Clojure uses the Java calling conventions, it cannot, and does not, make the same tail call optimization guarantees. Instead, it provides the recur special operator, which does constant-space recursive looping by rebinding and jumping to the nearest enclosing loop or function frame. While not as general as tail-call-optimization, it allows most of the same elegant constructs, and offers the advantage of checking that calls to recur can only happen in a tail position.
When you don't use recur (or trampoline), your function calls consume stack; thus, if you ran (print-down-from 100000) with the direct-recursion version, you would quickly observe a crash.
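A minimal sketch of that difference (the exact depth at which the direct version dies depends on your JVM stack settings):

(defn count-down-recur [x]
  (when (pos? x)
    (recur (dec x))))             ; rebinds x and jumps: constant stack

(defn count-down-call [x]
  (when (pos? x)
    (count-down-call (dec x))))   ; pushes a new stack frame per call

(count-down-recur 1000000) ;=> nil
(count-down-call 1000000)  ;=> throws StackOverflowError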
Providing this language facility, then, offers several benefits:
Unlike conventional recursive calls (as in the question's example), recur does not consume stack.
The author knows that TCO is in use (and stack is not consumed), since using recur in a position where TCO is not possible causes a compile-time failure. As such, the stack-consumption characteristics are obvious both to the code's author and its readers, unlike in languages with only automatic TCO (where one would need to read carefully -- taking macro expansions into account -- to determine whether a call is genuinely in tail position before knowing whether it is optimized).
Compatibility with conventional JVM calling conventions, and thus native interoperability with code written in other JVM-centric languages is preserved.
Finally, for background, see one of several other questions on the topic on StackOverflow:
Automatic TCO in Clojure (asking whether an automatic facility to make explicit invocation of recur unnecessary is available).
Why does TCO require support from the VM? (asking for the basis behind the claim made in the above-quoted clojure.org documentation that automatic TCO cannot be implemented without either JVM support or breaking use of JVM calling conventions and thus harming interoperability).
Why can't tail calls be optimized in JVM-based Lisps? (likewise, more generally)
As I understand it, loop .. recur uses tail-position recursion, so your program doesn't blow the stack, whereas regular recursion can. Some solutions to problems on 4clojure wind up not using loop .. recur because -- I'm taking an educated guess -- the solution can only be expressed with a direct, recursive function call instead of loop .. recur.
From what I have read, in some of the Clojure books from a few years ago, you should feel free to use loop .. recur.
However, at least from the discussion in those books I have read and from answers I have received to my Clojure questions here in SO, there is some general consensus to try to solve your problem first using constructs like map. If that is not possible, then by all means go ahead and use loop .. recur, and if you do not think there is a possibility of blowing your stack, a direct recursion call.
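For instance, the earlier print-down-from example can be written with a sequence function instead of explicit recursion (run! is in clojure.core):

(run! println (range 5 0 -1)) ;; prints 5 4 3 2 1, no explicit recursion needed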
As far as I can tell, Clojure's recur is backed by the compiler whereas in other lisps it is implemented at a lower level.
As I read it, this wouldn't be "general" TCO. Aside from the obvious (a keyword plus compile-time checking are needed), is recur in any way less powerful?
recur only supports tail-recursion optimization, which is a subclass of general TCO. Clojure also supports mutual or indirect recursion through trampoline.
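For example, mutual recursion via trampoline: the functions return thunks instead of calling each other directly, and trampoline bounces between them in constant stack space (my-even? and my-odd? are illustrative names):

(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))   ; return a thunk, don't call

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 1000000) ;=> true, without blowing the stack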
EDIT
Also, I thought general TCO was expected to land in the JVM with Java 7, and that recur was meant as a temporary solution; then Oracle happened. (Correction: I mixed that up with the schedule of Project Lambda, which added closures to Java.)
recur differs slightly from full TCO in that recur works with both loops and functions and does not do some of the things that a full implementation of TCO would. The philosophical backing for this is to make the special part look special as opposed to silently optimizing a uniform syntax.
What is the theoretical/practical limit to the recursion depth in languages implementing tail call optimisation? (Please assume that the recursive call is a proper tail call.)
My guess is that there is no theoretical limit, as there is no recursive process, even though it is a recursive procedure. The practical limit would be the main memory available. Please clarify or correct me if I am wrong somewhere.
When a tail recursive function is optimized, it'll essentially become an iterative function. The compiler reuses the stack frame of the original call for subsequent calls, so you won't run out of stack space. If you are not allocating any heap memory (or any other kind of memory that's not on the stack, for that matter), you can have infinitely deep (as long as you are patient enough ;)) recursion (think of it as an infinite loop; it has the same characteristics).
To summarize, there's no practical limit.
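In Clojure terms (with recur standing in for automatic TCO), the depth is bounded only by your patience, since every iteration reuses the same frame:

(loop [i 0]
  (if (< i 1000000000)
    (recur (inc i))   ; reuses the current frame: depth is effectively unbounded
    i))
;=> 1000000000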
In addition to what @Mehrdad Afshari wrote, I just want to point out that it is actually very important that tail recursion (or more generally a chain of tail calls) can be potentially infinite, because otherwise you couldn't write a web server, an operating system, an interpreter, a REPL, or really any kind of event-processing loop in a functional language.
After all, an operating system is nothing but an infinite loop, and the way to write a loop in a functional language is tail recursion. If tail recursion couldn't be infinite, the loop wouldn't be infinite either. So not only could you not write an operating system; the language wouldn't even be Turing-complete.
Basically, this is how you write a web server in a functional language:
def loop(queue) = {
  // take the next request off the queue and handle it
  loop(queue)  // tail call: reuses the current frame, so it can run forever
}
Without infinite tail recursion, this would quickly overflow the stack.