I have a list of nodes, each with a parent, and I want to construct a tree out of these.
(def elems '[{:node A :parent nil} {:node B :parent A} {:node C :parent A} {:node D :parent C}])
(build-tree elems)
=> (A (B) (C (D)))
Currently I have this code:
(defn root-node [elems]
  (:node (first (remove :parent elems))))

(defn children [elems root]
  (map :node (filter #(= root (:parent %)) elems)))

(defn create-sub-tree [elems root-node]
  (conj (map #(create-sub-tree elems %) (children elems root-node)) root-node))

(defn build-tree [elems]
  (create-sub-tree elems (root-node elems)))
This solution uses recursion, but not the loop/recur syntax.
That is bad, because the calls can't be optimized and a StackOverflowError is possible.
It seems that I can only use recur if I have one recursion in each step.
In the case of a tree I have a recursion for each child of a node.
I am looking for an adjusted solution that wouldn't run into this problem.
If you have a complete different solution for this problem I would love to see it.
I have read a bit about zippers; perhaps they are a better way of building a tree.
This is the solution I would go with. It is still susceptible to a StackOverflowError, but only for very "tall" trees.
(defn build-tree [elems]
  (let [vec-conj (fnil conj [])
        adj-map (reduce (fn [acc {:keys [node parent]}]
                          (update-in acc [parent] vec-conj node))
                        {} elems)
        construct-tree (fn construct-tree [node]
                         (cons node
                               (map construct-tree
                                    (get adj-map node))))
        tree (construct-tree nil)]
    (assert (= (count tree) 2) "Must only have one root node")
    (second tree)))
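With the elems from the question this produces the expected shape:

(build-tree elems)
;; => (A (B) (C (D)))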
We can remove the StackOverflowError issue, but it's a bit of a pain to do so. Instead of processing each leaf immediately with construct-tree we could leave something else there to indicate there's more work to be done (like a zero arg function), then do another step of processing to process each of them, continually processing until there's no work left to do. It would be possible to do this in constant stack space, but unless you're expecting really tall trees it's probably unnecessary (even clojure.walk/prewalk and postwalk will overflow the stack on a tall enough tree).
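If you do want a version that never recurses on the Clojure call stack at all, here is a rough sketch of one way to do it (my own construction, not the thunk-based approach described above; the name build-tree-iterative is mine, and it assumes node names are unique, as in the example data). It first orders the nodes so that parents come before children, then walks that order backwards, assembling each subtree only after all of its children are already built:

(defn build-tree-iterative [elems]
  (let [vec-conj (fnil conj [])
        adj-map  (reduce (fn [acc {:keys [node parent]}]
                           (update-in acc [parent] vec-conj node))
                         {} elems)
        ;; pre-order walk with an explicit stack: every parent is visited
        ;; before its children
        order    (loop [stack [nil] order []]
                   (if (empty? stack)
                     order
                     (let [n (peek stack)]
                       (recur (into (pop stack) (get adj-map n))
                              (conj order n)))))
        ;; walk that order backwards, so each child's subtree is finished
        ;; before its parent's subtree is assembled
        subtrees (reduce (fn [done node]
                           (assoc done node
                                  (cons node (map done (get adj-map node)))))
                         {} (reverse order))]
    (second (get subtrees nil))))

The deferral of work that thunks would provide is handled here by the explicit ordering instead, so building the tree no longer needs stack depth proportional to the tree's height.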
I've written two versions of a lisp function. The main difference between the two is that one is done with recursion, while the other is done with iteration.
Here's the recursive version (no side effects!):
(defun simple-check (counter list)
  "This function takes two arguments:
the number 0 and a list of atoms.
It returns the number of times the
atom 'a' appears in that list."
  (if (null list)
      counter
      (if (equal (car list) 'a)
          (simple-check (+ counter 1) (cdr list))
          (simple-check counter (cdr list)))))
Here's the iterative version (with side effects):
(defun a-check (counter list)
  "This function takes two arguments:
the number 0 and a list of atoms.
It returns the number of times the
atom 'a' appears in that list."
  (dolist (item list)
    (if (equal item 'a)
        (setf counter (+ counter 1))
        (setf counter (+ counter 0))))
  counter)
As far as I know, they both work. But I'd really like to avoid side-effects in the iterative version. Two questions I'd like answered:
Is it possible to avoid side effects and keep iteration?
Assuming the answer to #1 is a yes, what are the best ways to do so?
For completeness, note that Common Lisp has a built-in COUNT:
(count 'a list)
In some ways, the difference between side effect and no side effect is a bit blurred. Take the following loop version (ignoring that loop also has better ways to do this):
(loop :for x :in list
      :for counter := (if (eq x 'a) 1 0)
        :then (if (eq x 'a) (1+ counter) counter)
      :finally (return counter))
Is counter set at each step, or is it rebound? I.e., is an existing variable modified (as with setf), or is a new variable binding created (as in a recursion)?
This do version is very much like the recursive version:
(do ((list args (rest list))
     (counter 0 (+ counter (if (eq (first list) 'a) 1 0))))
    ((endp list) counter))
Same question as above.
Now the “obvious” loop version:
(loop :for x :in list
:count (eq x 'a))
There isn't even an explicit variable for the counter. Are there side-effects?
Internally, of course, there are effects: environments are created, bindings are established and, especially if there is tail call optimization, destroyed or replaced at each step, even in the recursive version.
I see as side effects only those effects that affect things outside of some defined scope. Of course, things appear more elegant if, even at the level of your internal definition, you can avoid explicitly setting things and instead use a more declarative expression.
You can also iterate with map, mapcar and friends.
https://lispcookbook.github.io/cl-cookbook/iteration.html
I also suggest a look at remove-if[-not] and friends, as well as reduce and apply:
(length (remove-if-not (lambda (x) (equal :a x)) '(:a :b :a))) ;; 2
Passing counter to the recursive procedure was a means to enable a tail recursive definition. This is unnecessary for the iterative definition.
As others have pointed out, there are several language constructs which solve the stated problem elegantly.
I assume you are interested in this in a more general sense such as when you cannot find
a language feature that solves a problem directly.
In general, one can maintain a functional interface by keeping the mutation private as below:
(defun simple-check (list)
  "return the number of times the symbol `a` appears in `list`"
  (let ((times 0))
    (dolist (elem list times)
      (when (equal elem 'a)
        (incf times)))))
I'm studying lambda calculus with the book "An Introduction to Functional Programming Through Lambda Calculus" by Greg Michaelson.
I implement the examples in Clojure using only a subset of the language. I only allow:
symbols
one-arg lambda functions
function application
var definition for convenience.
So far I have these functions working:
(def identity (fn [x] x))
(def self-application (fn [s] (s s)))
(def select-first (fn [first] (fn [second] first)))
(def select-second (fn [first] (fn [second] second)))
(def make-pair (fn [first] (fn [second] (fn [func] ((func first) second))))) ;; def make-pair = λfirst.λsecond.λfunc.((func first) second)
(def cond make-pair)
(def True select-first)
(def False select-second)
(def zero identity)
(def succ (fn [n-1] (fn [s] ((s False) n-1))))
(def one (succ zero))
(def zero? (fn [n] (n select-first)))
(def pred (fn [n] (((zero? n) zero) (n select-second))))
But now I am stuck on recursive functions, more precisely on the implementation of add. The first attempt mentioned in the book is this one:
(def add-1
  (fn [a]
    (fn [b]
      (((cond a) ((add-1 (succ a)) (pred b))) (zero? b)))))
((add-1 zero) zero)
The lambda calculus reduction rules force us to replace the inner call to add-1 with its actual definition, which contains the definition itself... endlessly.
In Clojure, which is an applicative order language, add-1 is also evaluated eagerly before any execution of any kind, and we get a StackOverflowError.
After some fumbling, the book proposes a contraption that avoids the infinite replacements of the previous example.
(def add2 (fn [f]
            (fn [a]
              (fn [b]
                (((zero? b) a) (((f f) (succ a)) (pred b)))))))

(def add (add2 add2))
The definition of add expands to
(def add (fn [a]
           (fn [b]
             (((zero? b) a) (((add2 add2) (succ a)) (pred b))))))
Which is totally fine until we try it! This is what Clojure will do (referential transparency):
((add zero) zero)
;; ~=>
(((zero? zero) zero) (((add2 add2) (succ zero)) (pred zero)))
;; ~=>
((select-first zero) (((add2 add2) (succ zero)) (pred zero)))
;; ~=>
((fn [second] zero) ((add (succ zero)) (pred zero)))
On the last line (fn [second] zero) is a lambda that expects one argument when applied. Here the argument is ((add (succ zero)) (pred zero)).
Clojure is an "applicative order" language, so the argument is evaluated before function application, even if in this case the argument won't be used at all. Here we recur into add, which will recur into add... until the stack blows up.
In a language like Haskell I think that would be fine because it's lazy (normal order), but I'm using Clojure.
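(As a one-line illustration of that strictness, not from the book: the argument below is evaluated even though the function never uses it.)

((fn [_] :ok) (throw (ex-info "evaluated anyway" {})))
;; throws, because Clojure evaluates the argument before applying the function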
After that, the book goes on at length presenting the tasty Y combinator, which avoids the boilerplate, but I came to the same gruesome conclusion.
EDIT
As #amalloy suggests, I defined the Z combinator:
(def YC (fn [f] ((fn [x] (f (fn [z] ((x x) z)))) (fn [x] (f (fn [z] ((x x) z)))))))
I defined add2 like this:
(def add2 (fn [f]
            (fn [a]
              (fn [b]
                (((zero? b) a) ((f (succ a)) (pred b)))))))
And I used it like this:
(((YC add2) zero) zero)
But I still get a StackOverflowError.
I tried to expand the function "by hand" but after 5 rounds of beta reduction, it looks like it expands infinitely in a forest of parens.
So what is the trick to make Clojure "normal order" rather than "applicative order" without macros? Is it even possible? Is it even the solution to my question?
This question is very close to this one : How to implement iteration of lambda calculus using scheme lisp? . Except that mine is about Clojure and not necessarily about Y-Combinator.
For strict languages, you need the Z combinator instead of the Y combinator. It's the same basic idea but replacing (x x) with (fn [v] ((x x) v)) so that the self-reference is wrapped in a lambda, meaning it is only evaluated if needed.
You also need to fix your definition of booleans in order to make them work in a strict language: you can't just pass a boolean the two values you care about and select between them. Instead, you pass it thunks for computing the two values you care about, and call the appropriate function with a dummy argument. That is, just as you fix the Y combinator by eta-expanding the recursive call, you fix booleans by eta-expanding the two branches of the if and eta-reducing the boolean itself (I'm not 100% sure that eta-reducing is the right term here).
(def add2 (fn [f]
            (fn [a]
              (fn [b]
                ((((zero? b) (fn [_] a)) (fn [_] ((f (succ a)) (pred b)))) b)))))
Note that both branches of the if are now wrapped with (fn [_] ...), and the if itself is wrapped with (... b), where b is a value I chose arbitrarily to pass in; you could pass zero instead, or anything at all.
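For example, combining this add2 with a Z combinator should now terminate instead of overflowing the stack. A quick sanity check (the names Z, add and two below are my own, reusing the question's zero, one, succ, pred and zero?):

(def Z (fn [f] ((fn [x] (f (fn [v] ((x x) v))))
                (fn [x] (f (fn [v] ((x x) v)))))))

(def add (Z add2))
(def two ((add one) one))               ;; terminates, no StackOverflowError
(((zero? two) 'yes) 'no)                ;; => no  (two is not zero)
(((zero? (pred (pred two))) 'yes) 'no)  ;; => yes (two minus two is zero)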
The problem I'm seeing is that you have too strong of a coupling between your Clojure program and your Lambda Calculus program:
you're using Clojure lambdas to represent LC lambdas
you're using Clojure variables/definitions to represent LC variables/definitions
you're using Clojure's application mechanism (Clojure's evaluator) as LC's application mechanism
So you're actually writing a Clojure program (not an LC program) that is subject to the effects of the Clojure compiler/evaluator – which means strict evaluation and non-constant-space direct recursion. Let's look at:
(def add2 (fn [f]
            (fn [a]
              (fn [b]
                (((zero? b) a) ((f (succ a)) (pred b)))))))
As a Clojure program, in a strictly evaluated environment, each time we call add2, we evaluate
(zero? b) as value1
(value1 a) as value2
(succ a) as value3
(f value3) as value4
(pred b) as value5
(value4 value5) as value6
(value2 value6)
We can now see that calling add2 always results in a call to the recursion mechanism f – of course the program never terminates and we get a stack overflow!
You have a few options
per #amalloy's suggestions, use thunks to delay the evaluation of certain expressions and then force (run) them when you're ready to continue the computation – tho I don't think this is going to teach you much
you can use Clojure's loop/recur or trampoline for constant-space recursions to implement your Y or Z combinator – there's a little hang-up here tho because you're only wishing to support single-parameter lambdas, and it's going to be tricky (maybe impossible) to do so in a strict evaluator that doesn't optimise tail calls
I do a lot of this kind of work in JS because most JS machines suffer the same problem; if you're interested in seeing homebrew workarounds, check out: How do I replace while loops with a functional programming alternative without tail call optimization?
write an actual evaluator – this means you can decouple the representation of your Lambda Calculus program from the datatypes and behaviours of Clojure and Clojure's compiler/evaluator – you get to choose how those things work because you're the one writing the evaluator
I've never done this exercise in Clojure, but I've done it a couple times in JavaScript – the learning experience is transformative. Just last week, I wrote https://repl.it/Kluo which uses a normal order substitution model of evaluation. The evaluator here is not stack-safe for large LC programs, but you can see that recursion is supported via Curry's Y on line 113 – it supports the same recursion depth in the LC program as the underlying JS machine supports. Here's another evaluator using memoisation and the more familiar environment model: https://repl.it/DHAT/2 – it also inherits the recursion limit of the underlying JS machine.
Making recursion stack-safe is really difficult in JavaScript, as I linked above, and (sometimes) considerable transformations need to take place in your code before you can make it stack-safe. It took me two months of many sleepless nights to adapt this to a stack-safe, normal-order, call-by-need evaluator: https://repl.it/DIfs/2 – this is like Haskell or Racket's #lang lazy
As for doing this in Clojure, the JavaScript code could be easily adapted, but I don't know enough Clojure to show you what a sensible evaluator program might look like. In the book Structure and Interpretation of Computer Programs (chapter 4), the authors show you how to write an evaluator for Scheme (a Lisp) using Scheme itself. Of course this is 10x more complicated than primitive Lambda Calculus, so it stands to reason that if you can write a Scheme evaluator, you can write an LC one too. This might be more helpful to you because the code examples look much more like Clojure.
a starting point
I studied a little Clojure for you and came up with this – it's only the beginning of a strict evaluator, but it should give you an idea of how little work it takes to get pretty close to a working solution.
Notice we use an fn when we evaluate a 'lambda, but this detail is not revealed to the user of the program. The same is true for the env – i.e., the env is just an implementation detail and should not be the user's concern.
To beat a dead horse, you can see that the substitution evaluator and the environment-based evaluator both arrive at equivalent answers for the same input program – I can't stress enough how these choices are up to you – in SICP, the authors even go on to change the evaluator to use a simple register-based model for binding variables and calling procs. The possibilities are endless because we've elected to control the evaluation; writing everything in Clojure (as you did originally) does not give us that kind of flexibility.
;; lambda calculus expression constructors

(defn variable [identifier]
  (list 'variable identifier))

(defn lambda [parameter body]
  (list 'lambda parameter body))

(defn application [proc argument]
  (list 'application proc argument))

;; environment abstraction

(defn empty-env []
  (hash-map))

(defn env-get [env key]
  ;; implement
  )

(defn env-set [env key value]
  ;; implement
  )

;; meat & potatoes

(defn evaluate [env expr]
  (case (first expr)
    ;; evaluate a variable
    variable (let [[_ identifier] expr]
               (env-get env identifier))
    ;; evaluate a lambda
    lambda (let [[_ parameter body] expr]
             (fn [argument] (evaluate (env-set env parameter argument) body)))
    ;; evaluate an application
    ;; this is strict because the argument is evaluated first before being given to the evaluated proc
    application (let [[_ proc argument] expr]
                  ((evaluate env proc) (evaluate env argument)))
    ;; bad expression given
    (throw (ex-info "invalid expression" {:expr expr}))))

(evaluate (empty-env)
          ;; ((λx.x) y)
          (application (lambda 'x (variable 'x)) (variable 'y))) ;; should be 'y
* or it could throw an error for unbound identifier 'y; your choice
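For what it's worth, the two environment operations can be as simple as map lookup and map insertion. Here is one possible way to fill them in (my choice lets unbound identifiers evaluate to themselves, matching the "should be 'y" comment; throwing, as the footnote suggests, would work just as well):

(defn env-get [env key]
  ;; unbound identifiers evaluate to themselves ('y in the example above);
  ;; throwing ex-info here would be the other reasonable choice
  (get env key key))

(defn env-set [env key value]
  (assoc env key value))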
I'm trying to reverse a list in scheme and I came up with to the following solution:
(define l (list 1 2 3 4))
(define (reverse lista)
(car (cons (reverse (cdr (cons 0 lista))) 0)))
(display (reverse l))
Although it works I don't really understand why it works.
In my head, it would evaluate to a series of nested cons calls until the cons of () (which is the cdr of a list with one element).
I guess I am not understanding the substitution model; could someone explain to me why it works?
Notes:
It is supposed to work only on non-nested lists.
Taken from SICP, exercise 2.18.
I know there are many similar questions, but as far as I saw, none presented
this solution.
Thank you
[As this happens quite often, I write the answer anyway]
Scheme implementations do have their builtin versions of reverse, map, append etc. as they are specified in RxRS (e.g. https://www.cs.indiana.edu/scheme-repository/R4RS/r4rs_8.html).
In the course of learning Scheme (and actually any Lisp dialect) it's really valuable to implement them anyway. The danger is that one's definition can collide with the built-in one (although e.g. Scheme's define or Lisp's label should shadow them). Therefore it's always worth calling this hand-made implementation by some other name, like "my-reverse", "my-append", etc. This way you will save yourself much confusion, like in the following:
(let ([append
       (lambda (xs ys)
         (if (null? xs)
             ys
             (cons (car xs) (append (cdr xs) ys))))])
  (append '(hello) '(there!)))
-- this one seems to work, creating a false impression that "let" works the same as "letrec". But just change the name to "my-append" and it breaks, because at the moment of evaluating the lambda form, the symbol "my-append" is not yet bound to anything (unlike "append" which was defined as a builtin procedure).
Of course such a let form would work in a language with dynamic scoping, but Scheme is lexical (with the exception of "define"s), and the reason is referential transparency (but that's so far off-topic that I can only refer the interested reader to one of the lambda papers: http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-453.pdf).
This reads pretty much the same as the solutions in other languages:
if the list is empty, return an empty list. Otherwise ...
chop off the first element (CAR)
reverse the remainder of the list (CDR)
append (CONS) the first element to that reversal
return the result
Now ... given my understanding from LISP days, the code would look more like this:
(append (reverse (cdr lista)) (list (car lista)))
... which matches my description above.
There are several ways to do it. Here is another:
(define my-reverse
  (lambda (lst)
    (define helper
      (lambda (lst result)
        (if (null? lst)
            result
            (helper (cdr lst) (cons (car lst) result)))))
    (helper lst '())))
I'm writing a program that uses classical cons pairs a la Common Lisp, Scheme, et al.
(deftype Cons [car cdr]
  clojure.lang.ISeq
  (first [c] (.car c))
  (more [c] (.cdr c)))
I create lists by chaining cons cells, e.g. (Cons. a (Cons. b nil)) for the list containing a and b. I wrote a function to convert a Clojure collection into a cons list:
(defn conslist [xs]
  (if (empty? xs)
    nil
    (Cons. (first xs) (conslist (rest xs)))))
This works but will overflow the stack if xs is too big. recur doesn't work because the recursive call isn't in tail position. Using loop with an accumulator wouldn't work either, because cons only puts items at the front while each iteration hands you the next item, and I can't use conj.
What can I do?
Edit: In the end, it turns out that even if you get this working, Clojure fundamentally isn't designed to support cons pairs (you can't set the tail to a non-seq). I ended up just creating a custom data structure with car/cdr functions.
As usual, I would propose the simplest loop/recur:
(defn conslist [xs]
  (loop [xs (reverse xs) res nil]
    (if (empty? xs)
      res
      (recur (rest xs) (Cons. (first xs) res)))))
lazy-seq is your friend here. It takes a body that evaluates to an ISeq, but the body will not be evaluated until the resulting sequence is actually consumed.
(defn conslist [xs]
  (if (empty? xs)
    nil
    (lazy-seq (Cons. (first xs) (conslist (rest xs))))))
NOTE: I would like to do this without Racket's built-in exceptions if possible.
I have many functions which call other functions and may recursively make a call back to the original function. Under certain conditions along the way I want to stop any further recursive steps, no longer call any other functions, and simply return some value/string (the stack can be ignored if the condition is met). Here is a contrived example that hopefully will show what I'm trying to accomplish:
(define (add expr0 expr1)
  (cond
    [(list? expr0) (add (cadr expr0) (cadr (cdr expr0)))]
    [(list? expr1) (add (cadr expr1) (cadr (cdr expr1)))]
    [else (if (or (equal? expr0 '0) (equal? expr1 '0))
              '(Adding Zero)
              (+ expr0 expr1))]))
If this were my function and I called it with (add (add 2 0) 3), then the goal would be to simply return the entire string '(Adding Zero) ANYTIME that a zero is one of the expressions, instead of making the recursive call to (add '(Adding Zero) 3).
Is there a way to essentially "break" out of recursion? My problem is that if I'm already deep inside, it will eventually try to evaluate '(Adding Zero), which it doesn't know how to do, and I feel like I should be able to do this without making an explicit check on each expr.
Any guidance would be great.
In your specific case, there's no need to "escape" from normal processing. Simply having '(Adding Zero) in tail position will cause your add function to return (Adding Zero).
To create a situation where you might need to escape, you need something a little more complicated:
(define (recursive-find/collect collect? tree (result null))
  (cond ((null? tree) (reverse result))
        ((collect? tree) (reverse (cons tree result)))
        ((not (pair? tree)) (reverse result))
        (else
         (let ((hd (car tree))
               (tl (cdr tree)))
           (cond ((collect? hd)
                  (recursive-find/collect collect? tl (cons hd result)))
                 ((pair? hd)
                  (recursive-find/collect collect? tl
                                          (append (reverse (recursive-find/collect collect? hd))
                                                  result)))
                 (else (recursive-find/collect collect? tl result)))))))
Suppose you wanted to abort processing and just return 'Hahaha! if any node in the tree had the value 'Joker. Just evaluating 'Hahaha! in tail position
wouldn't be enough because recursive-find/collect isn't always used in
tail position.
Scheme provides continuations for this purpose. The easiest way to do it in my particular example would be to use the continuation from the predicate function, like this:
(call/cc
 (lambda (continuation)
   (recursive-find/collect
    (lambda (node)
      (cond ((eq? node 'Joker)
             (continuation 'Hahaha!)) ;; Processing ends here
            ;; Otherwise find all the symbols in the tree
            (else (symbol? node))))
    '(Just 1 arbitrary (tree (structure) ((((that "has" a Joker in it)))))))))
A continuation represents "the rest of the computation" that is going to happen after the call/cc block finishes. In this case, it just gives you a way to escape from the call/cc block from anywhere in the stack.
But continuations also have other strange properties, such as allowing you to jump back to whatever block of code this call/cc appears in even after execution has left this part of the program. For example:
(define-values (a b)
  (call/cc
   (lambda (cc)
     (values 1 cc))))

(b 'one 'see-see)
In this case, calling b (which holds the captured continuation cc) jumps back to the define-values form and redefines a and b to one and see-see, respectively.
Racket also has "escape continuations" (call/ec or let/ec) which can escape from their form, but can't jump back into it. In exchange for this limitation you get better performance.