I've looked at everything I can find about letrec, and I still don't understand what it brings to a language as a feature. It seems like everything expressible with letrec could just as easily be written as a recursive function. But are there any reasons to expose letrec as a feature of a programming language, if the language already supports recursive functions? Why do several languages expose both?
I get that letrec might be used to implement other features including recursive functions, but that's not relevant to why it should itself be a feature. I've also read that some people find it more readable than recursive functions in some lisps, but again this is not relevant, because the designer of the language can make an effort to make recursive functions readable enough to not need another feature. Finally, I've been told that letrec makes it possible to express some kinds of recursive values more succinctly, but I have yet to find a motivating example.
TL;DR: define is letrec. This is what enables us to write recursive definitions in the first place.
Consider
let fact = fun (n => (n==0 -> 1 ; n * fact (n-1)))
To what entity does the name fact inside the body of this definition refer? With let foo = val, val is defined in terms of already-known entities, so it can't refer to foo, which is not defined yet. In terms of scope it is usually said that the RHS of the let equation is evaluated in the outer scope.
The only way for the inner fact to actually point at the one being defined is to use letrec, where the entity being defined is allowed to refer to the scope in which it is being defined. So while forcing the evaluation of an entity while its definition is in progress is an error, storing a reference to its (future, at this point in time) value is fine -- in the case of letrec, that is.
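A small Scheme sketch of this scoping difference (the names here are purely illustrative):

(define x 'outer)

(let ((x (list x)))  ; with plain let, the x on the RHS is the OUTER x
  x)                 ; => (outer)

(letrec ((f (lambda () f)))  ; with letrec, the RHS sees the new binding
  (eq? (f) f))               ; => #t: f's body refers to f itself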
The define you refer to is just letrec under another name, in Scheme as well.
Without the ability of an entity being defined to refer to itself, i.e. in languages with non-recursive let, achieving recursion requires arcane devices such as the Y combinator, which is cumbersome and usually inefficient. Another way is a definition like
let fact = (fun (f => f f)) (fun (r => n => (n==0 -> 1 ; n * r r (n-1))))
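Rendered in Scheme, the two styles might look like this (a sketch; the second version threads the function to itself by self-application):

;; letrec: the name being defined is visible on its own right-hand side.
(letrec ((fact (lambda (n)
                 (if (= n 0) 1 (* n (fact (- n 1)))))))
  (fact 5))   ; => 120

;; Non-recursive let: no self-reference, so pass the function to itself.
(let ((fact ((lambda (f) (f f))
             (lambda (r)
               (lambda (n)
                 (if (= n 0) 1 (* n ((r r) (- n 1)))))))))
  (fact 5))   ; => 120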
So letrec brings to the table efficiency of implementation and convenience for the programmer.
The question then becomes: why expose the non-recursive let at all? Haskell indeed does not. Scheme has both letrec and let. One reason might be completeness. Another might be that let admits a simpler implementation, with fewer self-referential run-time structures in memory, making it easier on the garbage collector.
You ask for a motivating example. Consider defining the Fibonacci numbers as a self-referential lazy list:
letrec fibs = {0} + {1} + add fibs (tail fibs)
With non-recursive let, another copy of the list fibs would have to be defined to serve as the input to the element-wise addition function add, which in turn would require the definition of yet another copy of fibs for it to be defined in terms of, and so on. Accessing the nth Fibonacci number would cause a chain of n-1 lists to be created and maintained at run time! Not a pretty picture.
And that's assuming the same fibs was used for tail fibs as well. If not, all bets are off.
What is needed is that fibs uses itself, refers to itself, so only one copy of the list is maintained.
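A runnable approximation in Scheme, using delay and force to build the lazy list by hand (a sketch: add-streams and stream-ref are ad-hoc helpers for this example, not standard procedures):

;; Element-wise addition of two lazy streams (a value consed onto a promise).
(define (add-streams s t)
  (cons (+ (car s) (car t))
        (delay (add-streams (force (cdr s)) (force (cdr t))))))

;; fibs refers to ITSELF, so a single shared list is built.  This works
;; because define, like letrec, puts the name in scope on its own RHS.
(define fibs
  (cons 0 (delay (cons 1 (delay (add-streams fibs (force (cdr fibs))))))))

(define (stream-ref s n)
  (if (zero? n) (car s) (stream-ref (force (cdr s)) (- n 1))))

(stream-ref fibs 10)  ; => 55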
NB: although this is not a Scheme-specific problem, I'm using Scheme to demonstrate the differences. I hope you can read a little Lisp code.
A letrec is just a special let where the bindings themselves are defined before the expressions that represent their values are evaluated. Imagine this:
(define (fib n)
  (let ((fib (lambda (n a b)
               (if (zero? n)
                   a
                   (fib (- n 1) b (+ a b))))))
    (fib n 0 1)))
This code fails: while fib does exist in the body of the let, it does not exist inside the closure it defines, since the binding didn't exist yet when the lambda was evaluated. To fix this letrec comes to the rescue:
(define (fib n)
  (letrec ((fib (lambda (n a b)
                  (if (zero? n)
                      a
                      (fib (- n 1) b (+ a b))))))
    (fib n 0 1)))
That letrec is just syntax that does something like this:
(define (fib n)
  (let ((fib 'undefined))
    (let ((tmp (lambda (n a b)
                 (if (zero? n)
                     a
                     (fib (- n 1) b (+ a b))))))
      (set! fib tmp))
    (fib n 0 1)))
So here you clearly see that fib exists when the lambda gets evaluated, and the binding is later set to the closure itself. The binding is the same; only its pointer has changed. It's circular reference 101.
So what happens when you make a global function? Clearly, if it is to recurse, it needs to exist before the lambda is evaluated, or the environment has to be mutated. The same problem needs to be solved there too.
In a functional language implementation where mutation is not allowed, you can solve this problem with a Y (or Z) combinator.
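For illustration, here is a sketch of the Z combinator (the strict-evaluation variant of Y) in Scheme; the inner (lambda (v) ...) delays the self-application so it doesn't loop forever. Note that top-level define already permits recursion, so this only demonstrates the technique:

(define Z
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))

;; fact never mentions itself; Z ties the knot instead.
(define fact
  (Z (lambda (self)
       (lambda (n)
         (if (zero? n) 1 (* n (self (- n 1))))))))

(fact 5)  ; => 120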
If you are interested in how languages are implemented I suggest you start at Matt Might's articles.
I'm Haruo. My pleasure is solving SPOJ problems in Common Lisp (CLISP). Today I solved Classical/Balk!, but in SBCL rather than CLISP; my CLISP submission failed due to a runtime error (NZEC).
I hope my code becomes more sophisticated, and today's problem is a chance for that. Please review the following code of mine and tell me your refactoring strategy. I trust you.
https://github.com/haruo-wakakusa/SPOJ-ClispAnswers/blob/0978813be14b536bc3402f8238f9336a54a04346/20040508_adrian_b.lisp
Haruo
Take for example get-x-depth-for-yz-grid.
(defun get-x-depth-for-yz-grid (planes//yz-plane grid)
  (let ((planes (get-planes-including-yz-grid-in planes//yz-plane grid)))
    (unless (evenp (length planes))
      (error "error in get-x-depth-for-yz-grid"))
    (sort planes (lambda (p1 p2) (< (caar p1) (caar p2))))
    (do* ((rest planes (cddr rest)) (res 0))
         ((null rest) res)
      (incf res (- (caar (second rest)) (caar (first rest)))))))
style -> ERROR can be replaced by ASSERT.
possible bug -> SORT is possibly destructive -> make sure you have a fresh list consed! If it is already freshly allocated by get-planes-including-yz-grid-in, then we don't need that.
bug -> SORT returns the sorted list; the sorting is not guaranteed to happen as a side effect on the original list -> use the returned value.
style -> DO replaced with LOOP.
style -> meaning of CAAR unclear. Find better naming or use other data structures.
(defun get-x-depth-for-yz-grid (planes//yz-plane grid)
  (let ((planes (get-planes-including-yz-grid-in planes//yz-plane grid)))
    (assert (evenp (length planes)) (planes)
            "error in get-x-depth-for-yz-grid")
    (setf planes (sort (copy-list planes) #'< :key #'caar))
    (loop for (p1 p2) on planes by #'cddr
          sum (- (caar p2) (caar p1)))))
Some documentation makes a bigger improvement than refactoring.
Your -> macro will confuse sbcl’s type inference. You should have (-> x) expand into x, and (-> x y...) into (let (($ x)) (-> y...))
You should learn to use loop and use it in more places; dolist with extra mutation is not great.
In a lot of places you should use destructuring-bind instead of e.g. (rest (rest ...)). You're also inconsistent: sometimes you write (cddr ...) for that instead.
Your block* suffers from many problems:
It uses (let (foo) (setf foo...)) which trips up sbcl type inference.
The name block* implies that the bindings are scoped so that each may refer to those defined before it, but in fact every initial value may refer to any variable or function name, and if that variable has not been initialised yet it evaluates to nil.
The style of defining lots of functions inside another function when they can be outside is more typical of scheme (which has syntax for it) than Common Lisp.
get-x-y-and-z-ranges really needs to use loop. I think it’s wrong too: the lists are different lengths.
You need to define some accessor functions instead of using first, etc. Maybe even a struct(!)
(sort foo) might destroy foo. You need to do (setf foo (sort foo)).
There’s basically no reason to use do. Use loop.
You should probably use :key in a few places.
You write defvar but I think you mean defparameter
*t* is a stupid name
Most names are bad and don’t seem to tell me what is going on.
I may be an idiot but I can’t tell at all what your program is doing. It could probably do with a lot of work
In functional languages such as Scheme or Lisp there exist for and for-all loops. However, for loops require mutation, since there isn't a new stack frame for each iteration. Since mutation is not explicitly available in these languages, how do they implement their respective iterative loops?
Scheme loops are implemented using recursion under the hood; constructs such as do are just macros that get translated to recursive procedures. For example, this loop in a typical procedural language:
void print(int n) {
    for (int i = 0; i < n; i++) {
        display(i);
    }
}
... is equivalent to the following procedure in Scheme; here you can see that each part of the loop (initialization, exit condition, increment, body) has a corresponding expression:
(define (print n)
  (define (loop i)      ; helper procedure, a "named let" would be better
    (when (< i n)       ; exit condition, if this is false the recursion ends
      (display i)       ; body
      (loop (+ i 1))))  ; increment
  (loop 0))             ; initialization
Did you notice that there's nothing left to do after the recursive call? The compiler is smart enough to optimize this to use a single stack frame, effectively making it as efficient as a for loop - read about tail recursion for more details. And just to clarify, in Scheme mutation is explicitly available: read about the set! special form.
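As an aside, here is the named-let variant that the comment in the code above alludes to (a sketch):

(define (print n)
  (let loop ((i 0))       ; initialization
    (when (< i n)         ; exit condition
      (display i)         ; body
      (loop (+ i 1)))))   ; increment, in tail position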
This question is really two questions, and a confusion.
Iteration in Scheme
In Scheme, iteration is implemented by recursion, together with the semantics of the language mandating that certain kinds of recursion, in particular tail recursion, do not consume memory. Note that this does not imply mutation. So, for instance, here is a definition of a while loop in Racket.
(define-syntax-rule (while test form ...)
  (let loop ([val test])
    (if val
        (begin
          form ...
          (loop test))
        (void))))
As you can see the recursive call to loop is in tail position and thus consumes no memory.
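For example, a possible use of this macro (the body relies on set!, which, as noted below, Scheme does provide):

(define i 0)

(while (< i 3)
  (display i)
  (set! i (+ i 1)))
;; prints 012, then the loop returns (void)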
Iteration in traditional Lisps
Traditional Lisps do not mandate tail-call elimination and thus require iterative constructs: these are generally provided by the language, but usually can be implemented in terms of lower-level constructs, such as GO TO. Here is a definition of while in Common Lisp which does this:
(defmacro while (test &body forms)
  (let ((lname (make-symbol "LOOP")))
    `(tagbody
       ,lname
       (if ,test
           (progn
             ,@forms
             (go ,lname))))))
A confusion about mutation
Both Scheme and traditional Lisps provide mutation operators: neither are the pure functional languages that you may think they are. Scheme is closer to being one, but it still isn't very close.
I found a code snippet somewhere online:
(letrec
([id (lambda (v) v)]
[ctx0 (lambda (v) `(k ,v))]
.....
.....
(if (memq ctx (list ctx0 id)) <---- condition always returns false
.....
where ctx is also a function. However, I could never make the test expression return true.
Then I have the following test:
(define ctx0 (lambda (v) `(k ,v)))
(define ctx1 (lambda (v) `(k ,v)))
(eq? ctx0 ctx1)
=> #f
(eqv? ctx0 ctx1)
=> #f
(equal? ctx0 ctx1)
=> #f
This makes me suspect that two functions always compare as different, since they have different memory locations.
But if functions can be compared against other functions, how can I test whether two functions are the same? And what if they have different variable names? For example:
(lambda (x) (+ x 1)) and (lambda (y) (+ y 1))
P.S. I use DrRacket to test the code.
You can’t. Functions are treated as opaque values: they are only compared by identity, nothing more. This is by design.
But why? Couldn’t languages implement meaningful ways to compare functions that might sometimes be useful? Well, not really, but sometimes it’s hard to see why without elaboration. Let’s consider your example from your question—these two functions seem equivalent:
(define ctx0 (lambda (v) `(k ,v)))
(define ctx1 (lambda (v) `(k ,v)))
And indeed, they are. But what would comparing these functions for equality accomplish? After all, we could just as easily implement another function:
(define ctx2 (lambda (w) `(k ,w)))
This function is, for all intents and purposes, identical to the previous two, but it would fail a naïve equality check!
In order to decide whether or not two values are equivalent, we must define some algorithm that defines equality. Given the examples I’ve provided thus far, such an algorithm seems obvious: two functions should be considered equal if (and only if) they are α-equivalent. With this in hand, we can now meaningfully check if two functions are equal!
...right?
(define ctx3 (lambda (v) (list 'k v)))
Uh, oh. This function does exactly the same thing, but it’s not implemented exactly the same way, so it fails our equality check. Surely, though, we can fix this. Quasiquotation and using the list constructor are pretty much the same, so we can define them to be equivalent in most circumstances.
(define ctx4 (lambda (v) (reverse (list v 'k))))
Gah! That’s also operationally equivalent, but it still fails our equivalence algorithm. How can we possibly make this work?
Turns out we can’t, really. Functions are units of abstraction—by their nature, we are not supposed to need to know how they are implemented, only what they do. This means that function equality can really only be correctly defined in terms of operational equivalence; that is, the implementation doesn’t matter, only the behavior does.
This is an undecidable problem in any nontrivial language. It’s impossible to determine if any two functions are operationally equivalent because, if we could, we could solve the halting problem.
Programming languages could theoretically provide a best-effort algorithm to determine function equivalency, perhaps using α-equivalency or some other sort of metric. Unfortunately, this really wouldn’t be useful—depending on the implementation of a function rather than its behavior to determine the semantics of a program breaks a fundamental law of functional abstraction, and as such any program that depended on such a system would be an antipattern.
Function equality is a very tempting problem to want to solve when the simple cases seem so easy, but most languages take the right approach and don’t even try. That’s not to say it isn’t a useful idea: if it were possible, it would be incredibly useful! But since it isn’t, you’ll have to use a different tool for the job.
Semantically, two functions f and g are equal if they agree for every input, i.e. if for all x we have (= (f x) (g x)). Of course, there's no way to test that for every possible value of x.
If all you want to do is be reasonably confident that (lambda (x) (+ x 1)) and (lambda (y) (+ y 1)) are the same, then you might try asserting that
(map (lambda (x) (+ x 1)) '(-5 -4 -3 -2 -1 0 1 2 3 4 5))
and
(map (lambda (y) (+ y 1)) '(-5 -4 -3 -2 -1 0 1 2 3 4 5))
are the same in your unit tests.
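Concretely, such a spot-check might look like this (a sketch):

(define inputs '(-5 -4 -3 -2 -1 0 1 2 3 4 5))

(equal? (map (lambda (x) (+ x 1)) inputs)
        (map (lambda (y) (+ y 1)) inputs))
;; => #t -- the two functions agree on these sample inputs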
While working in the REPL, is there a way to specify the maximum number of times to recur before the REPL automatically ends the evaluation of an expression? As an example, suppose the following function:
(defn looping []
  (loop [n 1]
    (recur (inc n))))
(looping)
Is there a way to instruct the REPL to give up after 100 levels of recursion? Something similar to print-level.
I respectfully hope that I'm not ignoring the spirit of your question, but why not simply use a when expression? It's nice and succinct and wouldn't change the body of your function much at all (1 extra line and a closing paren).
Whilst I don't believe what you want exists, it would be trivial to implement your own:
(def ^:dynamic *recur-limit* 100)

(defn looping []
  (loop [n 1]
    (when (< n *recur-limit*)
      (recur (inc n)))))
Presumably this hasn't been added to the language because it's easy to construct what you need with the existing language primitives; apart from that, if the facility did exist but was 'invisible', it could cause an awful lot of confusion and bugs because code wouldn't always behave in a predictable and referentially transparent manner.
The classic book The Little Lisper (The Little Schemer) is founded on two big ideas:
You can solve most problems in a recursive way (instead of using loops) (assuming you have Tail Call Optimisation)
Lisp is great because it is easy to implement in itself.
Now one might think this holds true for all Lispy languages (including Clojure). The trouble is, the book is an artefact of its time (1989), probably before functional programming with higher-order functions (HOFs) was what we have today. (Or was at least considered palatable for undergraduates.)
The benefit of recursion (at least in part) is the ease of traversal of nested data structures like ('a 'b ('c ('d 'e))).
For example:
(def leftmost
  (fn [l]
    (println "(leftmost " l)
    (println (non-atom? l))
    (cond
      (null? l) '()
      (non-atom? (first l)) (leftmost (first l))
      true (first l))))
Now, with functional zippers, we have a non-recursive approach to traversing nested data structures, and can traverse them as we would any lazy data structure. For example:
(defn map-zipper [m]
  (zip/zipper
    (fn [x] (or (map? x) (map? (nth x 1))))
    (fn [x] (seq (if (map? x) x (nth x 1))))
    (fn [x children]
      (if (map? x)
        (into {} children)
        (assoc x 1 (into {} children))))
    m))
(def m {:a 3 :b {:x true :y false} :c 4})
(-> (map-zipper m) zip/down zip/right zip/node)
;;=> [:b {:y false, :x true}]
Now it seems you can solve any nested list traversal problem with either:
a zipper as above, or
a zipper that walks the structure and returns a set of keys that will let you modify the structure using assoc.
Assumptions:
I'm assuming, of course, data structures that are fixed-size and fully known prior to traversal.
I'm excluding the streaming data source scenario.
My question is: Is recursion a smell (in idiomatic Clojure) because of zippers and HOFs?
I would say that, yes, if you are doing manual recursion you should at least reconsider whether you need to. But I wouldn't say that zippers have anything to do with this. My experience with zippers has been that they are of theoretical use, and are very exciting to Clojure newcomers, but of little practical value once you get the hang of things, because the situations in which they are useful are vanishingly rare.
It's really because of higher-order functions that have already implemented the common recursive patterns for you that manual recursion is uncommon. However, it's certainly not the case that you should never use manual recursion: it's just a warning sign, suggesting you might be able to do something else. I can't even recall a situation in my four years of using Clojure that I've actually needed a zipper, but I end up using recursion fairly often.
Clojure idioms discourage explicit recursion because the call stack is limited: usually to about 10K deep. Amending the first of Halloway & Bedra's Six Rules of Clojure Functional Programming (Programming Clojure (p 89)),
Avoid unbounded recursion. The JVM cannot optimize recursive calls, and Clojure programs that recurse without bound will blow their stack.
There are a couple of palliatives:
recur deals with tail recursion.
Lazy sequences can turn a deep call stack into a shallow call stack across an unfolding data structure. Many HOFs in the sequence library, such as map and filter, do this.