I have this node structure defined as
(defstruct node
parent
left
right
data)
When I fill in the parent node, is there any way to do it in such a way that avoids an infinite recursion in computing the parent?
For example: say I have a node (A :parent B :left 2 :right 3 :data nil) and a node (B :parent nil :left 4 :right A :data nil). When you evaluate the parent of A, you get an infinite recursion (parent of A is B -> B's right is A -> parent of A is B -> ...). Is there a way to avoid this while keeping m log(n) performance on all splay tree operations?
I appreciate it
The infinite recursion probably happens as part of printing, not as part of computing the parent. You can avoid it by either:
Not printing the circular structure, or
Enabling circle detection in the standard printer: (setf *print-circle* t)
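For instance, here is a minimal sketch with the struct from the question (the left/right values are the placeholder numbers from the example, not real child nodes):

```lisp
(defstruct node
  parent
  left
  right
  data)

(let ((a (make-node :left 2 :right 3))
      (b (make-node :left 4)))
  ;; wire up the cycle: A's parent is B, B's right slot holds A
  (setf (node-parent a) b
        (node-right b) a)
  ;; without this binding, printing A would recurse forever
  (let ((*print-circle* t))
    (print a)))
```

With *print-circle* bound to t, the printer emits #n=/#n# labels for the shared substructure instead of descending into it endlessly.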
I am using Common Lisp, SBCL and Slime. I am new to Common Lisp.
Apparently, this is a circular list in Common Lisp:
#1=('a 'b 'c . #1#)
This would provide an infinite 'a 'b 'c 'a 'b 'c 'a...
When I put this on the REPL it keeps running forever:
CL-USER> #1=('a 'b 'c . #1#)
Why does that happen? Why doesn't the REPL return the "object" it received?
I can understand an infinite behavior if I was asking for the next element of the list. However, I asked the REPL about the object itself.
I was expecting the same behavior that happens with proper lists or dotted lists:
CL-USER> (list 'a 'b 'c)
(A B C)
CL-USER> (cons 'a (cons 'b 'c))
(A B . C)
I can understand an infinite behavior if I was asking for the next element of the list.
Why? Getting the next element is not an operation with infinite compute time; it is a single, simple operation that returns the next element of the circular list.
What's infinite is when one asks for the next element in a loop, checking for the end of the list.
(cdr list)
vs.
(dolist (e list)
...)
Probably the printer (the part of the Read Eval Print Loop which prints the result) will do something similar when it wants to print the elements of an infinite list. TFB mentioned that one needs special checks to detect circularity to keep the computation bounded.
Think about what printing such an object involves: how long will a naïve printer take to print it and how large will the output be?
Well, CL lets you tame this: there are a number of printer-control variables, of which perhaps the most immediately useful here is *print-circle*. You can set this to be true to enable circularity detection in the printer. If you do that you'll also probably realise why it is nil by default.
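Concretely, at the REPL (using plain symbols rather than the quoted ones from the question's literal):

```lisp
;; enable circularity detection in the printer
(setf *print-circle* t)

;; a circular list: the tail points back at the head
'#1=(a b c . #1#)
;; => #1=(A B C . #1#)
```

The printer now detects the shared tail and emits the #1= label instead of looping forever.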
I wanted to implement a doubly linked list in racket. At the beginning I wrote a simple two node list to test referencing in racket. Here is my code:
#lang racket
(struct node (val next) #:transparent #:mutable)
(define n0 (node 0 null))
(define n1 (node 1 n0))
n0
n1
(set-node-next! n0 n1)
n0
n1
Here is the corresponding output:
(node 0 '())
(node 1 (node 0 '()))
#0=(node 0 (node 1 #0#))
#0=(node 1 (node 0 #0#))
The first and second lines of the output are what I expected from the code, but after that I don't have any clue what it is doing. I guess those #s are related to references, but I couldn't find anything on the web. Can anyone explain this output for me?
Thanks.
You are correct about them being references. In Racket, pretty much all types (except for integers, and other compiler optimizations I won't go into here) refer to an object stored in a sort of heap. Because of that, when you called set-node-next!, you told Racket to point the structure's next field back to the original struct, basically forming a cycle.
The #0= notation is the way Racket prints back-references. You can read more about it in the docs, but in short, #0= says that #0# refers to this structure; wherever you see #0#, that's where the back-reference (so to speak) is.
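As a quick sanity check (a small sketch reusing the code from the question), the cycle really is there: following next twice from n0 returns the original struct:

```racket
#lang racket
(struct node (val next) #:transparent #:mutable)

(define n0 (node 0 null))
(define n1 (node 1 n0))
(set-node-next! n0 n1)

;; n0 -> n1 -> n0: two hops along next lead back to n0 itself
(eq? (node-next (node-next n0)) n0) ; #t
```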
I implemented a member? function in clojure as follows:
(defn member? [item seq]
(cond (empty? seq) false
(= item (first seq)) true
:else (recur item (rest seq))))
Unfortunately this doesn't work with infinite lists. Does anybody know of a way to implement it in order to be able to get:
(member? 3 (range)) -> true
Your implementation behaves correctly for an infinite input sequence: it does not terminate until an element has been found, because the (empty? seq) case never holds.
Consider searching for something in an infinite space. When is it a good time to say it isn't there? There is no reliable way to tell. Limit the space you are searching in, e. g.:
(member? 3 (take 10 (range)))
You can't. I mean, at all.
In order to make sure there is no such element, you need to traverse the entire collection. Then and only then can you guarantee it's not there.
In some cases, such as your example, the input sequence is ascending, i.e. every element of the sequence is less than its successor. You can leverage that and make your sequence finite using take-while:
(member? 3              ; is 3 a member of
  (take-while           ; a sequence of elements
    #(<= % 3)           ; up to 3 inclusive
    (range)))           ; from (range)
For me, your code already works: (member? 3 (range)) returns true.
But what is the point of checking for the existence of a value in an infinite sequence? It will either return true or it will never return.
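As an aside, the usual Clojure idiom for membership tests is some, which has exactly the same termination behaviour as the member? in the question: it stops at the first match and runs forever on an infinite sequence with no match:

```clojure
(defn member? [item coll]
  ;; some returns the first truthy result, or nil if the
  ;; (finite) sequence is exhausted
  (boolean (some #(= item %) coll)))

(member? 3 (range))                          ;; => true, stops at 3
(member? 3 (take-while #(<= % 3) (range)))   ;; => true
(member? -1 (take-while #(<= % 3) (range)))  ;; => false, input is finite
```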
I am self-studying SICP and having a hard time finding the order of growth of recursive functions.
The following procedure list->tree converts an ordered list to a balanced search tree:
(define (list->tree elements)
(car (partial-tree elements (length elements))))
(define (partial-tree elts n)
(if (= n 0)
(cons '() elts)
(let ((left-size (quotient (- n 1) 2)))
(let ((left-result (partial-tree elts left-size)))
(let ((left-tree (car left-result))
(non-left-elts (cdr left-result))
(right-size (- n (+ left-size 1))))
(let ((this-entry (car non-left-elts))
(right-result (partial-tree (cdr non-left-elts)
right-size)))
(let ((right-tree (car right-result))
(remaining-elts (cdr right-result)))
(cons (make-tree this-entry left-tree right-tree)
remaining-elts))))))))
I have been looking at the solution online, and the following website I believe offers the best solution but I have trouble making sense of it:
jots-jottings.blogspot.com/2011/12/sicp-exercise-264-constructing-balanced.html
My understanding is that 'partial-tree' computes three things each time it is called: 'this-entry', 'left-tree', and 'right-tree' (and 'remaining-elts' only when it is necessary, either in the very first 'partial-tree' call or whenever 'non-left-elts' is used).
this-entry calls : car, cdr, and cdr(left-result)
left-entry calls : car, cdr, and itself with its length halved each step
right-entry calls: car, itself with cdr(cdr(left-result)) as argument and length halved
'left-entry' would have base 2 log(n) steps, and all three argument calls 'left-entry' separately.
So it would have a ternary-tree-like structure, and I thought the total number of steps would be something like 3^log(n). But the solution says it uses each index 1..n only once. Doesn't 'this-entry', for example, reduce the same index at every node, separately from 'right-entry'?
I am confused.
Further, in part (a) the solution website states:
"in the non-terminating case partial-tree first calculates the number
of elements that should go into the left sub-tree of a balanced binary
tree of size n, then invokes partial-tree with the elements and that
value which both produces such a sub-tree and the list of elements not
in that sub-tree. It then takes the head of the unused elements as the
value for the current node"
I believe the procedure does this-entry before left-tree. Why am I wrong?
This is my very first book on CS and I have yet to come across Master Theorem.
It is mentioned in some solutions but hopefully I should be able to do the question without using it.
Thank you for reading and I look forward to your kind reply,
Chris
You need to understand how let forms work. In
(let ((left-tree (car left-result))
(non-left-elts (cdr left-result))
left-tree does not "call" anything. It is created as a new lexical variable and bound to the value of (car left-result). The parentheses around it just group together the elements describing one variable introduced by a let form: the variable's name and its value:
(let ( ( left-tree (car left-result) )
;; ^^ ^^
( non-left-elts (cdr left-result) )
;; ^^ ^^
Here's how to understand how the recursive procedure works: don't.
Just don't try to understand how it works; instead analyze what it does, assuming that it does (for the smaller cases) what it's supposed to do.
Here, (partial-tree elts n) receives two arguments: the list of elements (to be put into tree, presumably) and the list's length. It returns
(cons (make-tree this-entry left-tree right-tree)
remaining-elts)
a cons pair of a tree (the result of the conversion) and the remaining elements, which are supposed to be none in the topmost call, if the length argument was correct.
Now that we know what it's supposed to do, we look inside it. And indeed, assuming the above, what it does makes total sense: halve the number of elements, process the first half of the list, get the tree and the remaining list back (non-empty now), and then process what's left.
The this-entry is not a tree - it is an element that is housed in a tree's node:
(let ((this-entry (car non-left-elts))
Setting
(right-size (- n (+ left-size 1)))
means that n == right-size + 1 + left-size. That's 1 element that goes into the node itself, the this-entry element.
And since each element goes directly into its node, once, the total running time of this algorithm is linear in the number of elements in the input list, with logarithmic stack space use.
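A concrete run makes the counting argument visible. Assuming SICP's usual make-tree definition (it is not shown in the question):

```scheme
(define (make-tree entry left right)
  (list entry left right))

;; each of the six input elements ends up in exactly one node
(list->tree '(1 3 5 7 9 11))
;; => (5 (1 () (3 () ())) (9 (7 () ()) (11 () ())))
```

Every element is consed into a node exactly once, which is why the total work is linear.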
I have a list of nodes, each with a parent and I want to construct a tree out of these.
(def elems '[{:node A :parent nil} {:node B :parent A} {:node C :parent A} {:node D :parent C}])
(build-tree elems)
=> (A (B) (C (D)))
Currently I have this code:
(defn root-node [elems]
(:node (first (remove :parent elems))))
(defn children [elems root]
(map :node (filter #(= root (:parent %)) elems)))
(defn create-sub-tree [elems root-node]
(conj (map #(create-sub-tree elems %) (children elems root-node)) root-node))
(defn build-tree [elems]
(create-sub-tree elems (root-node elems)))
In this solution recursion is used, but not with the loop/recur syntax.
That is bad, because the code can't be tail-call optimized and a StackOverflowError is possible.
It seems that I can only use recur if I have one recursion in each step.
In the case of a tree I have a recursion for each child of a node.
I am looking for an adjusted solution that wouldn't run into this problem.
If you have a complete different solution for this problem I would love to see it.
I read a bit about zippers; perhaps they are a better way of building a tree.
This is the solution I would go with. It is still susceptible to a StackOverflowError, but only for very "tall" trees.
(defn build-tree [elems]
(let [vec-conj (fnil conj [])
adj-map (reduce (fn [acc {:keys [node parent]}]
(update-in acc [parent] vec-conj node))
{} elems)
construct-tree (fn construct-tree [node]
(cons node
(map construct-tree
(get adj-map node))))
tree (construct-tree nil)]
(assert (= (count tree) 2) "Must only have one root node")
(second tree)))
We can remove the StackOverflowError issue, but it's a bit of a pain to do so. Instead of processing each leaf immediately with construct-tree we could leave something else there to indicate there's more work to be done (like a zero arg function), then do another step of processing to process each of them, continually processing until there's no work left to do. It would be possible to do this in constant stack space, but unless you're expecting really tall trees it's probably unnecessary (even clojure.walk/prewalk and postwalk will overflow the stack on a tall enough tree).
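For what it's worth, here is one sketch of that idea using an explicit stack instead of thunks; build-tree* and its internals are illustrative names, not part of the answer above:

```clojure
(defn build-tree* [elems]
  (let [adj  (reduce (fn [m {:keys [node parent]}]
                       (update m parent (fnil conj []) node))
                     {} elems)
        root (first (get adj nil))]
    ;; post-order traversal over an explicit stack; each stack entry
    ;; is [node visited?]. Finished subtrees accumulate on done.
    (loop [stack [[root false]] done ()]
      (if (empty? stack)
        (first done)
        (let [[node visited?] (peek stack)
              stack (pop stack)
              kids  (get adj node)]
          (if visited?
            ;; the children's finished subtrees sit on top of done,
            ;; leftmost child topmost, so take them in order
            (recur stack (cons (cons node (take (count kids) done))
                               (drop (count kids) done)))
            ;; first visit: re-push marked, then push the children
            (recur (into (conj stack [node true])
                         (map #(vector % false) kids))
                   done)))))))

(build-tree* '[{:node A :parent nil} {:node B :parent A}
               {:node C :parent A} {:node D :parent C}])
;; => (A (B) (C (D)))
```

This runs in constant stack space via loop/recur, at the cost of some readability, which is exactly the trade-off described above.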