SICP 2.64 order of growth of recursive procedure - recursion

I am self-studying SICP and having a hard time finding the order of growth of recursive procedures.
The following procedure list->tree converts an ordered list to a balanced search tree:
(define (list->tree elements)
  (car (partial-tree elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))
I have been looking at solutions online, and I believe the following website offers the best one, but I have trouble making sense of it:
jots-jottings.blogspot.com/2011/12/sicp-exercise-264-constructing-balanced.html
My understanding is that each call to 'partial-tree' computes three things - 'this-entry', 'left-tree', and 'right-tree' (and 'remaining-elts' only when it is needed - either in the very first 'partial-tree' call or whenever 'non-left-elts' is used).
this-entry calls: car, cdr, and cdr(left-result)
left-tree calls: car, cdr, and partial-tree itself with the length halved each step
right-tree calls: car, and partial-tree itself with cdr(cdr(left-result)) as argument and the length halved
'left-tree' would take base-2 log(n) recursive steps, and all three are computed separately at each call,
so it would have a ternary-tree-like structure, and I thought the total number of steps would be on the order of 3^log(n). But the solution says each index 1..n is used only once. Doesn't 'this-entry', for example, consume the same index at every node, separately from 'right-tree'?
I am confused.
Further, in part (a) the solution website states:
"in the non-terminating case partial-tree first calculates the number
of elements that should go into the left sub-tree of a balanced binary
tree of size n, then invokes partial-tree with the elements and that
value which both produces such a sub-tree and the list of elements not
in that sub-tree. It then takes the head of the unused elements as the
value for the current node"
I believe the procedure does this-entry before left-tree. Why am I wrong?
This is my very first book on CS and I have yet to come across the Master Theorem.
It is mentioned in some solutions, but hopefully I can do the question without using it.
Thank you for reading and I look forward to your kind reply,
Chris

You need to understand how let forms work. In
(let ((left-tree (car left-result))
      (non-left-elts (cdr left-result))
left-tree does not "call" anything. It is created as a new lexical variable, and assigned the value of (car left-result). The parentheses around it are just for grouping together the elements describing one variable introduced by a let form: the variable's name and its value:
(let ( ( left-tree (car left-result) )
;;     ^^                           ^^
       ( non-left-elts (cdr left-result) )
;;     ^^                               ^^
Here's how to understand how the recursive procedure works: don't.
Just don't try to understand how it works; instead analyze what it does, assuming that it does (for the smaller cases) what it's supposed to do.
Here, (partial-tree elts n) receives two arguments: the list of elements (to be put into tree, presumably) and the list's length. It returns
(cons (make-tree this-entry left-tree right-tree)
      remaining-elts)
a cons pair of a tree - the result of the conversion - and the remaining elements, of which there should be none left in the topmost call, if the length argument was correct.
Now that we know what it's supposed to do, we look inside it. And indeed, assuming the above, what it does makes total sense: halve the number of elements, process the first part of the list, get back the tree and the remaining (now non-empty) list, and then process what's left.
The this-entry is not a tree - it is an element that is housed in a tree's node:
(let ((this-entry (car non-left-elts))
Setting
(right-size (- n (+ left-size 1)))
means that n == right-size + 1 + left-size. That's 1 element that goes into the node itself, the this-entry element.
And since each element goes directly into its node, once, the total running time of this algorithm is linear in the number of elements in the input list, with logarithmic stack space use.
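For a concrete check of that claim, here is what the conversion produces on a small input, assuming SICP's usual definition (define (make-tree entry left right) (list entry left right)). Each of the six elements appears exactly once, as the this-entry of exactly one node:

(list->tree '(1 3 5 7 9 11))
;; => (5 (1 () (3 () ()))
;;       (9 (7 () ())
;;          (11 () ())))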

Related

Reversing list vs non tail recursion when traversing lists

I wonder how you, experienced Lispers / functional programmers, usually decide what to use. Compare:
(define (my-map1 f lst)
  (reverse
   (let loop ([lst lst] [acc '()])
     (if (empty? lst)
         acc
         (loop (cdr lst) (cons (f (car lst)) acc))))))
and
(define (my-map2 f lst)
  (if (empty? lst)
      '()
      (cons (f (car lst)) (my-map2 f (cdr lst)))))
The problem can be described in the following way: whenever we have to traverse a list, should we collect results in an accumulator, which preserves tail recursion but requires reversing the list at the end? Or should we use unoptimized recursion, but then we don't have to reverse anything?
It seems to me the first solution is always better. True, there's an extra O(n) pass there. However, it uses much less memory, and calling a function isn't instantaneous either.
Yet I've seen different examples where the second approach was used. Either I'm missing something or these examples were only educational. Are there situations where unoptimized recursion is better?
When possible, I use higher-order functions like map which build a list under the hood. In Common Lisp I also tend to use loop a lot, which has a collect keyword for building list in a forward way (I also use the series library which also implements it transparently).
I sometimes use recursive functions that are not tail-recursive because they better express what I want and because the size of the list is going to be relatively small; in particular, when writing a macro, the code being manipulated is not usually very large.
For more complex problems I don't collect into lists, I generally accept a callback function that is being called for each solution. This ensures that the work is more clearly separated between how the data is produced and how it is used.
This approach is to me the most flexible of all, because no assumption is made about how the data should be processed or collected. But it also means that the callback function is likely to perform side-effects or non-local returns (see example below). I don't think it is particularly a problem as long as the scope of the side-effects is small (local to a function).
For example, if I want to have a function that generates all natural numbers between 0 and N-1, I write:
(defun range (n f)
  (dotimes (i n)
    (funcall f i)))
The implementation here iterates over all values from 0 below N and calls F with the value I.
If I wanted to collect them in a list, I'd write:
(defun range-list (n)
  (let ((list nil))
    (range n (lambda (v) (push v list)))
    (nreverse list)))
But, I can also avoid the whole push/nreverse idiom by using a queue. A queue in Lisp can be implemented as a pair (first . last) that keeps track of the first and last cons cells of the underlying linked list. This allows appending elements to the end in constant time, because there is no need to iterate over the list (see Implementing Queues in Lisp by P. Norvig, 1991).
(defun queue ()
  (let ((list (list nil)))
    (cons list list)))

(defun qpush (queue element)
  (setf (cdr queue)
        (setf (cddr queue)
              (list element))))

(defun qlist (queue)
  (cdar queue))
And so, the alternative version of the function would be:
(defun range-list (n)
  (let ((q (queue)))
    (range n (lambda (v) (qpush q v)))
    (qlist q)))
The generator/callback approach is also useful when you don't want to build all the elements; it is a bit like the lazy model of evaluation (e.g. Haskell) where you only use the items you need.
Imagine you want to use range to find the first empty slot in a vector, you could do this:
(defun empty-index (vector)
  (block nil
    (range (length vector)
           (lambda (d)
             (when (null (aref vector d))
               (return d))))))
Here, the block of lexical name nil allows the anonymous function to call return to exit the block with a return value.
In other languages, the same behaviour is often reversed inside-out: we use iterator objects with a cursor and next operations. I tend to think it is simpler to write the iteration plainly and call a callback function, but this would be another interesting approach too.
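For comparison, here is a rough Racket rendering of the same callback-plus-early-exit pattern (a sketch only; let/ec binds an escape continuation, and I am treating #f as the "empty" marker since Racket has no nil):

(define (range n f)
  (for ([i (in-range n)])
    (f i)))

(define (empty-index vec)
  (let/ec return                      ; escape continuation, like block/return above
    (range (vector-length vec)
           (lambda (i)
             (unless (vector-ref vec i)
               (return i))))
    #f))                              ; reached only if no empty slot was found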
Tail recursion with accumulator:
- Traverses the list twice
- Constructs two lists
- Constant stack space
- Can crash with malloc errors

Naive recursion:
- Traverses the list twice (once building up the stack, once tearing down the stack)
- Constructs one list
- Linear stack space
- Can crash with stack overflow (unlikely in Racket), or malloc errors
It seems to me the first solution is always better
Allocations are generally more time-expensive than extra stack frames, so I think the latter one will be faster (you'll have to benchmark it to know for sure though).
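A minimal benchmark sketch in Racket, reusing my-map1 and my-map2 from the question (the numbers will vary with machine, input size, and Racket version):

(define lst (build-list 1000000 values))   ; a large test list

(time (void (my-map1 add1 lst)))           ; accumulate, then reverse
(time (void (my-map2 add1 lst)))           ; naive (non-tail) recursion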
Are there situations where unoptimized recursion is better?
Yes. If you are creating a lazily evaluated structure, as in Haskell, you need the cons cell as the evaluation boundary, and you can't lazily evaluate a tail-recursive call.
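In Racket terms, you can see the same cons-as-boundary idea with streams - a sketch (stream-map* is a name chosen here to avoid clashing with the built-in stream-map):

(require racket/stream)

(define (stream-map* f s)
  (if (stream-empty? s)
      empty-stream
      (stream-cons (f (stream-first s))            ; the cons is the laziness boundary
                   (stream-map* f (stream-rest s)))))

;; (stream->list (stream-map* add1 (stream 1 2 3)))  ; => '(2 3 4)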
Benchmarking is the only way to know for sure; Racket supports very deep stacks, so you should be able to get away with both versions.
The stdlib version of map is quite horrific, which shows that you can usually squeeze out some performance if you're willing to sacrifice readability.
Given two implementations of the same function, with the same O notation, I will choose the simpler version 95% of the time.
There are many ways to write recursion that preserves an iterative process.
I usually write continuation-passing style directly. This is my "natural" way to do it.
One should also take the type of the function into account. Sometimes you need to connect your function with the functions around it, and depending on their types you can choose another way to do the recursion.
You should start by solving "The Little Schemer" to gain a strong foundation in it. In "The Little Typer" you can discover another style of recursion, founded on a different computational philosophy and used in languages like Agda and Coq.
In Scheme you can sometimes write code that is really Haskell (you can write the monadic code that a Haskell compiler would generate as its intermediate language). In that case the way to do recursion is also different from the "usual" way, etc.
false dichotomy
You have other options available to you. Here we can preserve tail-recursion and map over the list with a single traversal. The technique used here is called continuation-passing style -
(define (map f lst (return identity))
  (if (null? lst)
      (return null)
      (map f
           (cdr lst)
           (lambda (r) (return (cons (f (car lst)) r))))))
(define (square x)
  (* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
This question is tagged with racket, which has built-in support for delimited continuations. We can accomplish map using a single traversal, but this time without using recursion. Enjoy -
(require racket/control)

(define (yield x)
  (shift return (cons x (return (void)))))

(define (map f lst)
  (reset (begin
           (for ((x lst))
             (yield (f x)))
           null)))
(define (square x)
  (* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
It's my intention that this post will show you the detriment of pigeonholing your mind into a particular construct. The beauty of Scheme/Racket, I have come to learn, is that any implementation you can dream of is available to you.
I would highly recommend Beautiful Racket by Matthew Butterick. This easy-to-approach and freely-available ebook shatters the glass ceiling in your mind and shows you how to think about your solutions in a language-oriented way.

Is there a way to use iteration in Common Lisp and avoid side-effects at the same time?

I've written two versions of a lisp function. The main difference between the two is that one is done with recursion, while the other is done with iteration.
Here's the recursive version (no side effects!):
(defun simple-check (counter list)
  "This function takes two arguments:
   the number 0 and a list of atoms.
   It returns the number of times the
   atom 'a' appears in that list."
  (if (null list)
      counter
      (if (equal (car list) 'a)
          (simple-check (+ counter 1) (cdr list))
          (simple-check counter (cdr list)))))
Here's the iterative version (with side effects):
(defun a-check (counter list)
  "This function takes two arguments:
   the number 0 and a list of atoms.
   It returns the number of times the
   atom 'a' appears in that list."
  (dolist (item list)
    (if (equal item 'a)
        (setf counter (+ counter 1))
        (setf counter (+ counter 0))))
  counter)
As far as I know, they both work. But I'd really like to avoid side-effects in the iterative version. Two questions I'd like answered:
Is it possible to avoid side effects and keep iteration?
Assuming the answer to #1 is a yes, what are the best ways to do so?
For completeness, note that Common Lisp has a built-in COUNT:
(count 'a list)
In some ways, the difference between side-effect or no side-effect is a bit blurred. Take the following loop version (ignoring that loop also has better ways):
(loop :for x :in list
      :for counter := (if (eq x 'a) 1 0)
        :then (if (eq x 'a) (1+ counter) counter)
      :finally (return counter))
Is counter set at each step, or is it rebound? I.e., is an existing variable modified (as with setf), or is a new variable binding created (as in a recursion)?
This do version is very much like the recursive version:
(do ((list list (rest list))
     (counter 0 (+ counter (if (eq (first list) 'a) 1 0))))
    ((endp list) counter))
Same question as above.
Now the “obvious” loop version:
(loop :for x :in list
      :count (eq x 'a))
There isn't even an explicit variable for the counter. Are there side-effects?
Internally, of course there are effects: environments are created, bindings established, and, especially if there is tail call optimization, even in the recursive version destroyed/replaced at each step.
I count as side effects only those effects that affect things outside of some defined scope. Of course, things appear more elegant if you can avoid explicitly setting things even at the level of your internal definition, and instead use a more declarative expression.
You can also iterate with map, mapcar and friends.
https://lispcookbook.github.io/cl-cookbook/iteration.html
I also suggest a look at remove-if[-not], and also at reduce and apply:
(length (remove-if-not (lambda (x) (equal :a x)) '(:a :b :a))) ;; 2
Passing counter to the recursive procedure was a means to enable a tail recursive definition. This is unnecessary for the iterative definition.
As others have pointed out, there are several language constructs which solve the stated problem elegantly.
I assume you are interested in this in a more general sense such as when you cannot find
a language feature that solves a problem directly.
In general, one can maintain a functional interface by keeping the mutation private as below:
(defun simple-check (list)
  "Return the number of times the symbol `a` appears in `list`."
  (let ((times 0))
    (dolist (elem list times)
      (when (equal elem 'a)
        (incf times)))))

Turning structural recursion into accumulative recursion in Racket

I have some code to find the maximum height and replace it with the associated name. There are separate lists for height and names, each the same length and non-empty.
I can solve this using structural recursion but have to change it into accumulative recursion, and I am unsure how to do that. All the examples I have seen are confusing me. Is anybody able to turn the code into one using accumulative recursion?
(define (tallest names heights)
  (cond
    [(empty? names) heights]
    [(> (first heights) (first (rest heights)))
     (cons (first names) (tallest (rest (rest names)) (rest (rest heights))))]
    [else (tallest (rest names) (rest heights))]))
First of all, your provided tallest function doesn't actually work (calling (tallest '(Bernie Raj Amy) (list 1.5 1.6 1.7)) fails with a contract error), but I see what you're getting at. What's the difference between structural recursion and accumulative recursion?
Well, structural recursion works by building a structure as a return value, in which one of the values inside that structure is the result of a recursive call to the same function. Take the recursive calculation of a factorial, for example. You might define it like this:
(define (factorial n)
  (if (zero? n) 1
      (* n (factorial (sub1 n)))))
Visualize how this program would execute for an input of, say, 4. Each call leaves a "hole" in the multiplication expression to be filled in by the result of a recursive subcall. Here's what that would look like, visualized, using _ to represent one of those "holes".
(* 4 _)
  (* 3 _)
    (* 2 _)
      (* 1 _)
        1
Notice how a large portion of the work is done only after the final case is reached. Much of the work must be done in the process of popping the calls off the stack as they return because each call depends on performing some additional operation on its subcall's result.
How is accumulative recursion different? Well, in accumulative recursion, we use an extra argument to the function called an accumulator. Rewriting the above factorial function to use an accumulator would make it look like this:
(define (factorial n acc)
  (if (zero? n) acc
      (factorial (sub1 n) (* acc n))))
Now if we wanted to find the factorial of 4, we'd have to call (factorial 4 1), providing a starting value for the accumulator (I'll address how to avoid that in a moment). If you think about the call stack now, it would look quite different.
Notice how there are no "holes" to be filled in—the result of the factorial function is either the accumulator or a direct call to itself. This is referred to as a tail call, and the recursive call to factorial is referred to as being in tail position.
This turns out to be helpful for a few reasons. First of all, some functions are just easier to express with accumulative recursion, though factorial probably isn't one of them. More importantly, however, Scheme requires that implementations provide proper tail calls (sometimes also called "tail call optimization"), which means that the call stack will not grow in depth when tail calls are made.
There is plenty of existing information about how tail calls work and why they're useful, so I won't repeat that here. What's important to understand is that accumulative recursion involves an accumulator argument, which usually causes the resulting function to be implemented with a tail call.
But what do we do about the extra parameter? Well, we can actually just make a "helper" function that does the accumulative recursion, and provide a wrapper function that automatically fills in the accumulator's starting value.
(define (factorial n)
  (define (factorial-helper n acc)
    (if (zero? n) acc
        (factorial-helper (sub1 n) (* acc n))))
  (factorial-helper n 1))
This sort of idiom is common enough that Racket provides a "named let" form, which simplifies the above function to this:
(define (factorial n)
  (let helper ([n n] [acc 1])
    (if (zero? n) acc
        (helper (sub1 n) (* acc n)))))
But that's just some syntactic sugar for the same idea.
Okay, so: how does any of this apply to your question? Well, actually, using accumulative recursion makes implementing your problem quite easy. Here's a breakdown of how you'd structure the algorithm:
Just like in your original example, you'd iterate through the list until you get empty. This will form your "base case".
Your accumulator will be simple—it will be the current maximum element you've found.
Upon each iteration, if you find an element greater than the current maximum, that becomes the new accumulator. Otherwise, the accumulator remains the same.
Putting this all together, here's a simple implementation:
(define (tallest-helper names heights current-tallest)
  (cond
    [(empty? names)
     (car current-tallest)]
    [(> (first heights) (cdr current-tallest))
     (tallest-helper (rest names) (rest heights)
                     (cons (first names) (first heights)))]
    [else
     (tallest-helper (rest names) (rest heights)
                     current-tallest)]))
This can be further improved in plenty of ways—wrapping it in a function that provides the accumulator's starting value, using named let, removing some of the repetition, etc.—but I'll leave that as an exercise for you.
Just remember: the accumulator is effectively your "working sum". It's your "running total". Understand that, and things should make sense.
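For instance, one possible wrapper supplying the starting accumulator might look like this (a sketch only, assuming non-empty input lists; try it yourself before peeking):

(define (tallest names heights)
  (tallest-helper (rest names) (rest heights)
                  (cons (first names) (first heights))))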

How to find whether a sequence is increasing?

It's a fragment of a larger task, but I'm really struggling with this. Resources for Scheme/Lisp are a lot more limited than for C, Java, and Python.
If I pass in a var list1 that contains a list of numbers, how can I tell whether the list is in monotonically increasing order or not?
If a list has less than two elements, then we'll say 'yes.'
If a list has two or more elements, then we'll say 'no' if the first is larger than the second, otherwise recurse on the tail of the list.
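A direct transcription of that description into Scheme might look like this (a sketch; nondecreasing? is a name chosen here, and note that the "'no' if the first is larger" rule means equal neighbours are allowed):

(define (nondecreasing? lst)
  (cond ((or (null? lst) (null? (cdr lst))) #t)  ; fewer than two elements: yes
        ((> (car lst) (cadr lst)) #f)            ; first larger than second: no
        (else (nondecreasing? (cdr lst)))))      ; otherwise recurse on the tail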
(define (monotonically-increasing? lst)
  (apply < lst))
Or, if you want monotonically-non-decreasing:
(define (monotonically-non-decreasing? lst)
  (apply <= lst))
Yes, it's really as simple as that. Totally O(n), and no manual recursion required.
Bonus: For good measure:
(define (sum lst)
  (apply + lst))
:-P
If efficiency is not an issue, you can do this:
(equal? list1 (sort list1 <=))
That's an O(n log n) solution, because of the sorting. For an optimal solution, simply compare each element with the next one and test if the current element is less than or equal to the next one, being careful with the last element (which doesn't have a next element). That'll yield an O(n) solution.
This is the general idea of what needs to be done, written in a functional style. Still not the fastest way to write the solution, but at least it's O(n) and very short; you can use it as a basis for writing a simpler solution from scratch:
(define (increasing? lst)
  (andmap <=
          lst
          (append (cdr lst) '(+inf.0))))
The above checks, for each number in lst, whether it's less than or equal to the next number. The andmap procedure tests if the <= condition holds for all pairs of elements in the two lists. The first list is the one passed as a parameter; the second list is the same list without its first element (shifted by one position), with positive infinity added at the end to keep both lists the same length - it works for the last element because any number is smaller than infinity. For example, with this list:
(increasing? '(1 2 3))
The above procedure call will check that (<= 1 2), (<= 2 3), and (<= 3 +inf.0); because all the conditions evaluate to #t, the whole procedure returns #t. If just one of the conditions had failed, the whole procedure would have returned #f.

Check If Two Sets Are Equal - Scheme

;checks to see if two sets (represented as lists) are equal
(define (setsEqual? S1 S2)
  (cond
    ((null? (cdr S1)) (in_member S1 S2))
    ((in_member (car S1) S2) (setsEqual? (cdr S1) S2))
    (else false)))
;checks for an element in the list
(define (in_member x list)
  (cond
    ((eq? x (car list)) true)
    ((null? list) false)
    (else (in_member x (cdr list)))))
Can't seem to find a base case to get this working. Any help is appreciated!
What is a set? (It's a list---okay, what is a list?) Here's a hint: there are two variants of "list": '() (aka null, aka empty) and those made with cons. Your function(s) should follow the structure of the data they consume. Yours don't.
I recommend reading How to Design Programs (text available online); it will teach you a "design recipe" for tackling problems like these. The rough outline is: describe your data semi-formally (that answers what is a list?), formulate examples and tests; use your description of the data to create a template for processing that kind of data; finally, fill in the template. The key thing is that the template is determined by the data definition. It's reusable, and filling in the template for a specific function can often be done in seconds---if you've created your template and examples correctly.
HtDP Chapter 9 talks about processing lists, specifically. That'll help with in_member.
Chapter 17 talks about processing multiple complex arguments (eg, two lists at once). One more hint: if I were writing this function, I would make use of the following fact about sets: two sets are equal if each is a subset of the other.
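As a sketch of that last hint (with hypothetical helper names, and using the built-in member rather than the in_member above):

(define (subset? S1 S2)
  (cond ((null? S1) #t)                         ; the empty set is a subset of anything
        ((member (car S1) S2) (subset? (cdr S1) S2))
        (else #f)))

(define (sets-equal? S1 S2)
  (and (subset? S1 S2) (subset? S2 S1)))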
