What is the difference between generative and structural recursion in Racket?
Structural recursion happens when the "sub-problems" match up exactly with the possible pieces of the data.
For example, processing a list lox. The trivial case is when lox is empty. Otherwise the first sub-problem is dealing with the (first lox), and the second sub-problem is dealing with the (rest lox). You solve each sub-problem by calling a helper function, and you combine the solutions.
(define (process-list-of-x lox)
  (cond
    ;; trivial case
    [(empty? lox) ...]
    [(cons? lox)
     ; combine the solutions
     (...
      ; solve the first sub-problem
      (process-x (first lox)) ; data-def tells you what the first sub-problem is
      ...
      ; solve the second sub-problem
      (process-list-of-x (rest lox)) ; data-def tells you what the second sub-problem is
      ...)]))
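To make the template concrete, here is a small example that follows it exactly (the name double-all is just for illustration): it doubles every number in a list of numbers.

(define (double-all lon)
  (cond
    [(empty? lon) '()]
    [(cons? lon)
     (cons (* 2 (first lon))           ; solve the first sub-problem
           (double-all (rest lon)))])) ; solve the second sub-problem

(double-all (list 1 2 3)) ; => (list 2 4 6)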
What's different is that structural recursion tells you what the sub-problems are, whereas in generative recursion the sub-problems could be anything. You often need a new idea for how to break the problem up: a "Eureka" moment specific to the problem, not to the data.
(define (solve-problem prob)
  (cond
    ;; trivial case
    [(trivial-problem? prob) (... prob ...)]
    [else
     ; combine the solutions
     (...
      ; solve the first sub-problem
      (solve-problem (generate-first-sub-problem prob)) ; you have to figure this out
      ...
      ; solve the second sub-problem
      (solve-problem (generate-second-sub-problem prob)) ; data-def doesn't tell you how
      ...)]))
Also, structural recursion guarantees that it terminates because the sub-problems come from breaking up the data. In generative recursion the sub-problems can be more complicated, so you need some other way of figuring out whether it terminates.
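The classic example of generative recursion is quicksort: the sub-problems are not pieces of the list's structure, they are generated by partitioning around a pivot, and termination needs its own argument (each recursive call gets a strictly shorter list). A minimal sketch of the idea (my example, not part of the template above):

(define (quicksort lon)
  (cond
    [(empty? lon) '()]
    [else
     (let ([pivot (first lon)])
       ;; the two sub-problems are *generated* from the data, not read off its structure
       (append
        (quicksort (filter (lambda (n) (< n pivot)) (rest lon)))
        (list pivot)
        (quicksort (filter (lambda (n) (>= n pivot)) (rest lon)))))]))

(quicksort (list 3 1 4 1 5)) ; => (list 1 1 3 4 5)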
I wonder how you, experienced Lispers / functional programmers, usually decide which one to use. Compare:
(define (my-map1 f lst)
  (reverse
   (let loop ([lst lst] [acc '()])
     (if (empty? lst)
         acc
         (loop (cdr lst) (cons (f (car lst)) acc))))))
and
(define (my-map2 f lst)
  (if (empty? lst)
      '()
      (cons (f (car lst)) (my-map2 f (cdr lst)))))
The problem can be described in the following way: whenever we have to traverse a list, should we collect the results in an accumulator, which preserves tail recursion but requires reversing the list at the end? Or should we use unoptimized (non-tail) recursion, and then we don't have to reverse anything?
It seems to me the first solution is always better. True, there is the additional O(n) cost of the reverse. However, it uses much less memory, not to mention that calling a function isn't instantaneous.
Yet I've seen different examples where the second approach was used. Either I'm missing something or these examples were only educational. Are there situations where unoptimized recursion is better?
When possible, I use higher-order functions like map which build a list under the hood. In Common Lisp I also tend to use loop a lot, which has a collect keyword for building a list in a forward way (I also use the series library, which implements this transparently).
I sometimes use recursive functions that are not tail-recursive because they better express what I want and because the size of the list is going to be relatively small; in particular, when writing a macro, the code being manipulated is not usually very large.
For more complex problems I don't collect into lists; I generally accept a callback function that is called for each solution. This ensures that the work is more clearly separated between how the data is produced and how it is used.
This approach is to me the most flexible of all, because no assumption is made about how the data should be processed or collected. But it also means that the callback function is likely to perform side effects or non-local returns (see example below). I don't think this is a particular problem as long as the scope of the side effects is small (local to a function).
For example, if I want to have a function that generates all natural numbers between 0 and N-1, I write:
(defun range (n f)
  (dotimes (i n)
    (funcall f i)))
The implementation here iterates over all values from 0 below N and calls F with the value I.
If I wanted to collect them in a list, I'd write:
(defun range-list (N)
  (let ((list nil))
    (range N (lambda (v) (push v list)))
    (nreverse list)))
But I can also avoid the whole push/nreverse idiom by using a queue. A queue in Lisp can be implemented as a pair (first . last) that keeps track of the first and last cons cells of the underlying linked list. This allows appending elements to the end in constant time, because there is no need to iterate over the list (see Implementing Queues in Lisp by P. Norvig, 1991).
(defun queue ()
  (let ((list (list nil)))
    (cons list list)))

(defun qpush (queue element)
  (setf (cdr queue)
        (setf (cddr queue)
              (list element))))

(defun qlist (queue)
  (cdar queue))
And so, the alternative version of the function would be:
(defun range-list (n)
  (let ((q (queue)))
    (range n (lambda (v) (qpush q v)))
    (qlist q)))
The generator/callback approach is also useful when you don't want to build all the elements; it is a bit like the lazy model of evaluation (e.g. Haskell) where you only use the items you need.
Imagine you want to use range to find the first empty slot in a vector, you could do this:
(defun empty-index (vector)
  (block nil
    (range (length vector)
           (lambda (d)
             (when (null (aref vector d))
               (return d))))))
Here, the block of lexical name nil allows the anonymous function to call return to exit the block with a return value.
In other languages, the same behaviour is often turned inside out: we use iterator objects with a cursor and next operations. I tend to think it is simpler to write the iteration plainly and call a callback function, but that would be another interesting approach too.
Tail recursion with accumulator
- Traverses the list twice
- Constructs two lists
- Constant stack space
- Can crash with malloc errors

Naive recursion
- Traverses the list twice (once building up the stack, once tearing it down)
- Constructs one list
- Linear stack space
- Can crash with stack overflow (unlikely in Racket), or malloc errors
It seems to me the first solution is always better
Allocations are generally more time-expensive than extra stack frames, so I think the latter one will be faster (you'll have to benchmark it to know for sure though).
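If you do want to check, a minimal benchmarking sketch in Racket might look like this (my-map1 and my-map2 are the definitions from the question; everything else is standard Racket):

(define xs (build-list 1000000 values)) ; a large test list
(collect-garbage)
(time (void (my-map1 add1 xs)))         ; accumulate, then reverse
(collect-garbage)
(time (void (my-map2 add1 xs)))         ; naive recursion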
Are there situations where unoptimized recursion is better?
Yes: if you are creating a lazily evaluated structure. In Haskell, you need the cons cell as the evaluation boundary, and you can't lazily evaluate a tail-recursive call.
Benchmarking is the only way to know for sure; Racket allows very deep recursion, so you should be able to get away with both versions.
The stdlib version of map is quite horrific, which shows that you can usually squeeze out some performance if you're willing to sacrifice readability.
Given two implementations of the same function, with the same O notation, I will choose the simpler version 95% of the time.
There are many ways to write recursion that keeps the process iterative.
I usually write continuation-passing style directly. This is my "natural" way to do it.
Another consideration is the type of the function. Sometimes you need to connect your function with the functions around it, and depending on their types you can choose another way to do the recursion.
You should start by working through The Little Schemer to gain a strong foundation. In The Little Typer you can discover another style of recursion, founded on a different computational philosophy and used in languages like Agda and Coq.
In Scheme you can sometimes write code that is effectively Haskell (you can write the monadic code that a Haskell compiler would generate as its intermediate language). In that case the way to do recursion is also different from the "usual" way, and so on.
False dichotomy
You have other options available to you. Here we can preserve tail-recursion and map over the list with a single traversal. The technique used here is called continuation-passing style -
(define (map f lst (return identity))
  (if (null? lst)
      (return null)
      (map f
           (cdr lst)
           (lambda (r) (return (cons (f (car lst)) r))))))
(define (square x)
(* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
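A nice side effect of the explicit return parameter is that the caller can pass a different continuation and post-process the result in the same traversal. For example (my addition, not part of the original):

(map square '(1 2 3) (lambda (r) (apply + r))) ; => 14, the sum of the squares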
This question is tagged with racket, which has built-in support for delimited continuations. We can accomplish map using a single traversal, but this time without using recursion. Enjoy -
(require racket/control)

(define (yield x)
  (shift return (cons x (return (void)))))

(define (map f lst)
  (reset (begin
           (for ((x lst))
             (yield (f x)))
           null)))
(define (square x)
(* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
It's my intention that this post will show you the detriment of pigeonholing your mind into a particular construct. The beauty of Scheme/Racket, I have come to learn, is that any implementation you can dream of is available to you.
I would highly recommend Beautiful Racket by Matthew Butterick. This easy-to-approach and freely-available ebook shatters the glass ceiling in your mind and shows you how to think about your solutions in a language-oriented way.
The Third Commandment of The Little Schemer states:
When building a list, describe the first typical element, and then cons it onto the natural recursion.
What is the exact definition of "natural recursion"? The reason I am asking is that I am taking a class on programming language principles taught by Daniel Friedman, and the following code is not considered "naturally recursive":
(define (plus x y)
  (if (zero? y) x
      (plus (add1 x) (sub1 y))))
However, the following code is considered "naturally recursive":
(define (plus x y)
  (if (zero? y) x
      (add1 (plus x (sub1 y)))))
I prefer the "unnaturally recursive" code because it is tail recursive. However, such code is considered anathema. When I asked as to why we shouldn't write the function in tail recursive form then the associate instructor simply replied, "You don't mess with the natural recursion."
What's the advantage of writing the function in the "naturally recursive" form?
"Natural" (or just "Structural") recursion is the best way to start teaching students about recursion. This is because it has the wonderful guarantee that Joshua Taylor points out: it's guaranteed to terminate[*]. Students have a hard enough time wrapping their heads around this kind of program that making this a "rule" can save them a huge amount of head-against-wall-banging.
When you choose to leave the realm of structural recursion, you (the programmer) have taken on an additional responsibility, which is to ensure that your program halts on all inputs; it's one more thing to think about & prove.
In your case, it's a bit more subtle. You have two arguments, and you're making structurally recursive calls on the second one. In fact, with this observation (program is structurally recursive on argument 2), I would argue that your original program is pretty much just as legitimate as the non-tail-calling one, since it inherits the same proof-of-convergence. Ask Dan about this; I'd be interested to hear what he has to say.
[*] To be precise here you have to legislate out all kinds of other goofy stuff like calls to other functions that don't terminate, etc.
The natural recursion has to do with the "natural", recursive definition of the type you are dealing with. Here, you are working with natural numbers; since "obviously" a natural number is either zero or the successor of another natural number, when you want to build a natural number, you naturally output 0 or (add1 z) for some other natural z which happens to be computed recursively.
The teacher probably wants you to make the link between recursive type definitions and recursive processing of values of that type. You would not have this kind of problem if you tried to process trees or lists, because we routinely use natural numbers in "unnatural" ways, and so you might have natural objections to thinking of them in terms of Church numerals.
The fact that you already know how to write tail-recursive functions is irrelevant in that context: this is apparently not the objective of your teacher to talk about tail-call optimizations, at least for now.
The associate instructor was not very helpful at first ("messing with natural recursion" sounds like "don't ask"), but the detailed explanation he/she gave in the snapshot you provided was more appropriate.
(define (plus x y)
  (if (zero? y) x
      (add1 (plus x (sub1 y)))))
When y != 0 it has to remember that once the result of (plus x (sub1 y)) is known, it has to compute add1 on it. Hence when y reaches zero, the recursion is at its deepest. Now the backtracking phase begins and the add1's are executed. This process can be observed using trace.
I did the trace for:
(require racket/trace)

(define (add1 x) (+ x 1))
(define (sub1 x) (- x 1))

(define (plus x y)
  (if (zero? y) x
      (add1 (plus x (sub1 y)))))

(trace plus)

(plus 2 3)
Here's the trace:
>(plus 2 3)
> (plus 2 2)
> >(plus 2 1)
> > (plus 2 0) // Deepest point of recursion
< < 2 // Backtracking begins, performing add1 on the results
< <3
< 4
<5
5 // Result
The difference is that the other version has no backtracking phase. It calls itself a few times, but the process is iterative, because it carries the intermediate results along (passed as arguments). Hence the process does not consume extra space.
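For comparison, tracing the tail-recursive version of plus from the question would show every call at the same depth, because each call simply replaces the previous one. The output looks roughly like this:

>(plus 2 3)
>(plus 3 2)
>(plus 4 1)
>(plus 5 0)
<5
5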
Sometimes implementing a recursive procedure is easier or more elegant than writing its iterative equivalent. But for some purposes you cannot, or may not, implement it in a recursive way.
PS: I had a class which covered a bit about garbage collection algorithms. Such algorithms may not be recursive, since there may be no memory left, hence no space for the recursion. I remember an algorithm called Deutsch-Schorr-Waite which was really hard to understand at first. The instructor first implemented the recursive version just to explain the concept, and afterwards wrote the iterative version (hence having to manually remember where in memory he came from). Believe me, the recursive one was way easier, but it could not be used in practice.
I have some code to find the maximum height and replace it with the associated name. There are separate lists for height and names, each the same length and non-empty.
I can solve this using structural recursion but have to change it into accumulative recursion, and I am unsure how to do that. All the examples I have seen are confusing me. Is anybody able to turn the code into one using accumulative recursion?
(define (tallest names heights)
  (cond
    [(empty? names) heights]
    [(> (first heights) (first (rest heights)))
     (cons (first names) (tallest (rest (rest names)) (rest (rest heights))))]
    [else (tallest (rest names) (rest heights))]))
First of all, your provided tallest function doesn't actually work (calling (tallest '(Bernie Raj Amy) (list 1.5 1.6 1.7)) fails with a contract error), but I see what you're getting at. What's the difference between structural recursion and accumulative recursion?
Well, structural recursion works by building a structure as a return value, in which one of the values inside that structure is the result of a recursive call to the same function. Take the recursive calculation of a factorial, for example. You might define it like this:
(define (factorial n)
  (if (zero? n) 1
      (* n (factorial (sub1 n)))))
Visualize how this program would execute for an input of, say, 4. Each call leaves a "hole" in the multiplication expression to be filled in by the result of a recursive subcall. Here's what that would look like, visualized, using _ to represent one of those "holes".
(* 4 _)
(* 3 _)
(* 2 _)
(* 1 _)
1
Notice how a large portion of the work is done only after the final case is reached. Much of the work must be done in the process of popping the calls off the stack as they return because each call depends on performing some additional operation on its subcall's result.
How is accumulative recursion different? Well, in accumulative recursion, we use an extra argument to the function called an accumulator. Rewriting the above factorial function to use an accumulator would make it look like this:
(define (factorial n acc)
  (if (zero? n) acc
      (factorial (sub1 n) (* acc n))))
Now if we wanted to find the factorial of 4, we'd have to call (factorial 4 1), providing a starting value for the accumulator (I'll address how to avoid that in a moment). If you think about the call stack now, it would look quite different.
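Sketched in the same style as before, the calls for (factorial 4 1) proceed like this, with the accumulator carrying the running product:

(factorial 4 1)
(factorial 3 4)
(factorial 2 12)
(factorial 1 24)
(factorial 0 24)
24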
Notice how there are no "holes" to be filled in—the result of the factorial function is either the accumulator or a direct call to itself. This is referred to as a tail call, and the recursive call to factorial is referred to as being in tail position.
This turns out to be helpful for a few reasons. First of all, some functions are just easier to express with accumulative recursion, though factorial probably isn't one of them. More importantly, however, Scheme requires that implementations provide proper tail calls (sometimes also called "tail call optimization"), which means that the call stack will not grow in depth when tail calls are made.
There is plenty of existing information about how tail calls work and why they're useful, so I won't repeat that here. What's important to understand is that accumulative recursion involves an accumulator argument, which usually causes the resulting function to be implemented with a tail call.
But what do we do about the extra parameter? Well, we can just make a "helper" function that does the accumulative recursion, and provide a wrapper function that automatically fills in the accumulator's starting value.
(define (factorial n)
  (define (factorial-helper n acc)
    (if (zero? n) acc
        (factorial-helper (sub1 n) (* acc n))))
  (factorial-helper n 1))
This sort of idiom is common enough that Racket provides a "named let" form, which simplifies the above function to this:
(define (factorial n)
  (let helper ([n n] [acc 1])
    (if (zero? n) acc
        (helper (sub1 n) (* acc n)))))
But that's just some syntactic sugar for the same idea.
Okay, so: how does any of this apply to your question? Well, actually, using accumulative recursion makes implementing your problem quite easy. Here's a breakdown of how you'd structure the algorithm:
Just like in your original example, you'd iterate through the list until you get empty. This will form your "base case".
Your accumulator will be simple: it will be the tallest name-and-height pair you've found so far.
Upon each iteration, if you find an element greater than the current maximum, that becomes the new accumulator. Otherwise, the accumulator remains the same.
Putting this all together, here's a simple implementation:
(define (tallest-helper names heights current-tallest)
  (cond
    [(empty? names)
     (car current-tallest)]
    [(> (first heights) (cdr current-tallest))
     (tallest-helper (rest names) (rest heights)
                     (cons (first names) (first heights)))]
    [else
     (tallest-helper (rest names) (rest heights)
                     current-tallest)]))
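For example, you could call it directly with a hypothetical starting accumulator, i.e. a pair whose height compares below everything:

(tallest-helper '(Bernie Raj Amy) (list 1.5 1.6 1.7) (cons #f -inf.0)) ; => 'Amy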
This can be further improved in plenty of ways—wrapping it in a function that provides the accumulator's starting value, using named let, removing some of the repetition, etc.—but I'll leave that as an exercise for you.
Just remember: the accumulator is effectively your "working sum". It's your "running total". Understand that, and things should make sense.
I am self-studying SICP and having a hard time finding the order of growth of recursive functions.
The following procedure list->tree converts an ordered list to a balanced search tree:
(define (list->tree elements)
  (car (partial-tree elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))
I have been looking at the solution online, and the following website I believe offers the best solution but I have trouble making sense of it:
jots-jottings.blogspot.com/2011/12/sicp-exercise-264-constructing-balanced.html
My understanding is that the procedure 'partial-tree' repeatedly calls three arguments each time it is called - 'this-entry', 'left-tree', and 'right-tree' respectively (and 'remaining-elts' only when it is necessary - either in the very first 'partial-tree' call or whenever 'non-left-elts' is called).
this-entry calls: car, cdr, and cdr(left-result)
left-entry calls: car, cdr, and itself with its length halved each step
right-entry calls: car, itself with cdr(cdr(left-result)) as argument and length halved
'left-entry' would take base-2 log(n) steps, and all three arguments call 'left-entry' separately.
So it would have a ternary-tree-like structure, and I thought the total number of steps would be something like 3^log(n). But the solution says it only uses each index 1..n once. Doesn't 'this-entry', for example, reduce the same index at every node, separately from 'right-entry'?
I am confused.
Further, in part (a) the solution website states:
"in the non-terminating case partial-tree first calculates the number
of elements that should go into the left sub-tree of a balanced binary
tree of size n, then invokes partial-tree with the elements and that
value which both produces such a sub-tree and the list of elements not
in that sub-tree. It then takes the head of the unused elements as the
value for the current node"
I believe the procedure does this-entry before left-tree. Why am I wrong?
This is my very first book on CS and I have yet to come across the Master Theorem.
It is mentioned in some solutions but hopefully I should be able to do the question without using it.
Thank you for reading and I look forward to your kind reply,
Chris
You need to understand how let forms work. In
(let ((left-tree (car left-result))
      (non-left-elts (cdr left-result))
left-tree does not "call" anything. It is created as a new lexical variable, and assigned the value of (car left-result). The parentheses around it are just for grouping together the elements describing one variable introduced by a let form: the variable's name and its value:
(let ( ( left-tree (car left-result) )
;;     ^^                           ^^
       ( non-left-elts (cdr left-result) )
;;     ^^                               ^^
Here's how to understand how the recursive procedure works: don't.
Just don't try to understand how it works; instead analyze what it does, assuming that it does (for the smaller cases) what it's supposed to do.
Here, (partial-tree elts n) receives two arguments: the list of elements (to be put into tree, presumably) and the list's length. It returns
(cons (make-tree this-entry left-tree right-tree)
      remaining-elts)
a cons pair of a tree (the result of the conversion) and the remaining elements, which are supposed to be none in the topmost call, if the length argument was correct.
Now that we know what it's supposed to do, we look inside it. And indeed, assuming the above, what it does makes total sense: halve the number of elements, process the list, get the tree and the remaining list back (non-empty now), and then process what's left.
The this-entry is not a tree - it is an element that is housed in a tree's node:
(let ((this-entry (car non-left-elts))
Setting
(right-size (- n (+ left-size 1)))
means that n == right-size + 1 + left-size. That's 1 element that goes into the node itself, the this-entry element.
And since each element goes directly into its node, once, the total running time of this algorithm is linear in the number of elements in the input list, with logarithmic stack space use.
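To see this concretely, here is a small check (it assumes SICP's make-tree, which represents a node as a three-element list of entry, left subtree, and right subtree):

(define (make-tree entry left right) (list entry left right))

(list->tree '(1 3 5 7 9 11))
; => '(5 (1 () (3 () ())) (9 (7 () ()) (11 () ())))

Each of the six elements appears exactly once in the result, which is exactly the observation behind the linear running time.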
I enjoy using recursion whenever I can; it seems like a much more natural way to loop over something than actual loops. I was wondering if there is any limit to recursion in Lisp, like there is in Python where it freaks out after about 1000 calls?
Could you use it for, say, a game loop?
Testing it out now, simple counting recursive function. Now at >7000000!
Thanks a lot
First, you should understand what a tail call is.
Tail calls are calls that do not consume stack.
Now you need to recognize when you are consuming stack.
Let's take the factorial example:
(defun factorial (n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
Here is the non-tail-recursive implementation of factorial.
Why? Because in addition to the return from factorial, there is a pending computation:
(* n ..)
So you are stacking n each time you call factorial.
Now let's write the tail recursive factorial:
(defun factorial-opt (n &key (result 1))
  (if (= n 1)
      result
      (factorial-opt (- n 1) :result (* result n))))
Here, the result is passed as an argument to the function.
So you're also consuming stack, but the difference is that the stack size stays constant.
Thus, the compiler can optimize it by using only registers and leaving the stack empty.
The factorial-opt is then faster, but is less readable.
factorial is limited by the size of the stack, while factorial-opt is not.
So you should learn to recognize tail-recursive functions in order to know whether the recursion is limited.
There might be some compiler technique to transform a non-tail recursive function into a tail recursive one. Maybe someone could point out some link here.
Scheme mandates tail call optimization, and some CL implementations offer it as well. However, CL does not mandate it.
Note that for tail call optimization to work, you need to make sure that you don't have to return to do more work. E.g. a naive implementation of Fibonacci, where one recursive call's result must be added to another's, will not be tail-call optimized, and as a result you can run out of stack space.
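To make that concrete, here is a small sketch in Racket/Scheme syntax (the names fib and fib-iter are mine):

;; Naive: the outer + still has work to do after each recursive call returns,
;; so neither call is a tail call and the stack grows with n.
(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

;; Tail-recursive: the running values are passed along as arguments,
;; so the recursive call is the last thing the function does.
(define (fib-iter n)
  (let loop ([i n] [a 0] [b 1])
    (if (zero? i)
        a
        (loop (- i 1) b (+ a b)))))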