Here is an example:
;; Helper function for marking multiples of a number as 0
(def mark (fn [[x & xs] k m]
            (if (= k m)
              (cons 0 (mark xs 1 m))
              (cons x (mark xs (inc k) m)))))
;; Sieve of Eratosthenes
(defn sieve
  [x & xs]
  (if (= x 0)
    (sieve xs)
    (cons x (sieve (mark xs 1 x)))))
(take 10 (lazy-seq (sieve (iterate inc 2))))
It produces a StackOverflowError.
There are a couple of issues here. First, as pointed out in the other answer, your mark and sieve functions don't have terminating conditions. It looks like they are designed to work with infinite sequences, but if you passed a finite-length sequence they'd keep going off the end.
The deeper problem here is that it looks like you're trying to have a function create a lazy infinite sequence by recursively calling itself. However, cons is not lazy in any way; it is a pure function call, so the recursive calls to mark and sieve are invoked immediately. Wrapping the outer-most call to sieve in lazy-seq only serves to defer the initial call; it does not make the entire sequence lazy. Instead, each call to cons must be wrapped in its own lazy sequence.
For instance:
(defn eager-iterate [f x]
(cons x (eager-iterate f (f x))))
(take 3 (eager-iterate inc 0)) ; => StackOverflowError
(take 3 (lazy-seq (eager-iterate inc 0))) ; => Still a StackOverflowError
Compare this with the actual source code of iterate:
(defn iterate
"Returns a lazy sequence of x, (f x), (f (f x)) etc. f must be free of side-effects"
{:added "1.0"
:static true}
[f x] (cons x (lazy-seq (iterate f (f x)))))
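With the recursive call wrapped in lazy-seq, realizing a few elements is perfectly safe:
(take 3 (iterate inc 0)) ; => (0 1 2)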
Putting it together, here's an implementation of mark that works correctly for finite sequences and preserves laziness for infinite sequences. Fixing sieve is left as an exercise for the reader.
(defn mark [[x :as xs] k m]
  (lazy-seq
    (when (seq xs)
      (if (= k m)
        (cons 0 (mark (next xs) 1 m))
        (cons x (mark (next xs) (inc k) m))))))
(mark (range 4 14) 1 3)
; => (4 5 0 7 8 0 10 11 0 13)
(take 10 (mark (iterate inc 4) 1 3))
; => (4 5 0 7 8 0 10 11 0 13)
Need terminating conditions
The problem here is that both your mark and sieve functions have no terminating conditions. There must be some set of inputs for which each function does not call itself, but returns an answer. Additionally, every set of (valid) inputs to these functions should eventually resolve to a non-recursive return value.
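As a minimal (hypothetical) illustration, the empty-sequence check below is the terminating condition that every chain of recursive calls eventually reaches:
(defn sum-all [xs]
  (if (empty? xs)
    0                                    ; base case: returns without a self-call
    (+ (first xs) (sum-all (rest xs))))) ; recursive case: moves toward the base case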
But even if you get it right...
I'll add that even if you succeed in creating the correct terminating conditions, there is still the possibility of a stack overflow if the depth of the recursion is too large. This can be mitigated to some extent by increasing the JVM stack size, but that has its limits.
A way around this for some functions is tail call optimization. A recursive function is tail recursive when every recursive call to the function being defined sits in tail position (it is the final call in that branch of the definition body). For example, in your sieve function's (= x 0) case, sieve is a tail call, since its result is not passed to any other function. In the (not (= x 0)) case, however, the result of calling sieve is passed to cons, so that is not a tail call. When a function is fully tail recursive, the definition can be transformed behind the scenes into a looping construct that avoids consuming the stack. In Clojure you do this by using recur in the function definition instead of the function name (there is also a loop construct which can sometimes be helpful). Because not all recursive functions are tail recursive, this isn't a panacea, but when they are it's good to know that you can do this.
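To make the distinction concrete, here is a sketch (using a lazy mark like the one above). It only illustrates where recur is legal; it is not a fix for the infinite-sequence problem, because the cons branch still calls sieve eagerly:
(defn sieve [[x & xs]]
  (if (= x 0)
    (recur xs)                         ; tail position: recur replaces the frame
    (cons x (sieve (mark xs 1 x)))))   ; result feeds cons: not a tail call, recur would not compile here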
Thanks to @Alex's answer I managed to come up with a working lazy solution:
;; Helper function for marking multiples of a number as 0
(defn mark [[x :as xs] k m]
  (lazy-seq
    (when-not (empty? xs)
      (if (= k m)
        (cons 0 (mark (rest xs) 1 m))
        (cons x (mark (rest xs) (inc k) m))))))
;; Sieve of Eratosthenes
(defn sieve
  [[x :as xs]]
  (lazy-seq
    (when-not (empty? xs)
      (if (= x 0)
        (sieve (rest xs))
        (cons x (sieve (mark (rest xs) 1 x)))))))
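With these definitions in place, the original expression works and no extra lazy-seq wrapper is needed:
(take 10 (sieve (iterate inc 2)))
; => (2 3 5 7 11 13 17 19 23 29)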
I was advised by someone else to use rest instead of next.
Related
The ClojureDocs page for lazy-seq gives an example of generating a lazy-seq of all positive numbers:
(defn positive-numbers
([] (positive-numbers 1))
([n] (cons n (lazy-seq (positive-numbers (inc n))))))
This lazy-seq can be evaluated for pretty large indexes without throwing a StackOverflowError (unlike the sieve example on the same page):
user=> (nth (positive-numbers) 99999999)
100000000
If only recur can be used to avoid consuming stack frames in a recursive function, how is it possible this lazy-seq example can seemingly call itself without overflowing the stack?
A lazy sequence has the rest of the sequence generating calculation in a thunk. It is not immediately called. As each element (or chunk of elements as the case may be) is requested, a call to the next thunk is made to retrieve the value(s). That thunk may create another thunk to represent the tail of the sequence if it continues. The magic is that (1) these special thunks implement the sequence interface and can transparently be used as such and (2) each thunk is only called once -- its value is cached -- so the realized portion is a sequence of values.
Here is the general idea without the magic, just good ol' functions:
(defn my-thunk-seq
([] (my-thunk-seq 1))
([n] (list n #(my-thunk-seq (inc n)))))
(defn my-next [s] ((second s)))
(defn my-realize [s n]
(loop [a [], s s, n n]
(if (pos? n)
(recur (conj a (first s)) (my-next s) (dec n))
a)))
user=> (-> (my-thunk-seq) first)
1
user=> (-> (my-thunk-seq) my-next first)
2
user=> (my-realize (my-thunk-seq) 10)
[1 2 3 4 5 6 7 8 9 10]
user=> (count (my-realize (my-thunk-seq) 100000))
100000 ; Level stack consumption
The magic bits happen inside of clojure.lang.LazySeq, which is defined in Java, but we can actually do the magic directly in Clojure (the implementation that follows is for example purposes) by implementing the interfaces on a type and using an atom to cache.
(deftype MyLazySeq [thunk-mem]
  clojure.lang.Seqable
  (seq [_]
    (if (fn? @thunk-mem)
      (swap! thunk-mem (fn [f] (seq (f)))))
    @thunk-mem)
  ;Implementing ISeq is necessary because cons calls seq
  ;on anyone who does not, which would force realization.
  clojure.lang.ISeq
  (first [this] (first (seq this)))
  (next [this] (next (seq this)))
  (more [this] (rest (seq this)))
  (cons [this x] (cons x (seq this))))
(defmacro my-lazy-seq [& body]
  `(MyLazySeq. (atom (fn [] ~@body))))
Now this already works with take, etc., but as take calls lazy-seq we'll make a my-take that uses my-lazy-seq instead to eliminate any confusion.
(defn my-take
  [n coll]
  (my-lazy-seq
    (when (pos? n)
      (when-let [s (seq coll)]
        (cons (first s) (my-take (dec n) (rest s)))))))
Now let's make a slow infinite sequence to test the caching behavior.
(defn slow-inc [n] (Thread/sleep 1000) (inc n))
(defn slow-pos-nums
([] (slow-pos-nums 1))
([n] (cons n (my-lazy-seq (slow-pos-nums (slow-inc n))))))
And the REPL test
user=> (def nums (slow-pos-nums))
#'user/nums
user=> (time (doall (my-take 10 nums)))
"Elapsed time: 9000.384616 msecs"
(1 2 3 4 5 6 7 8 9 10)
user=> (time (doall (my-take 10 nums)))
"Elapsed time: 0.043146 msecs"
(1 2 3 4 5 6 7 8 9 10)
Keep in mind that lazy-seq is a macro, and therefore does not evaluate its body when your positive-numbers function is called. In that sense, positive-numbers isn't truly recursive. It returns immediately, and the inner "recursive" call to positive-numbers doesn't happen until the seq is consumed.
user=> (source lazy-seq)
(defmacro lazy-seq
"Takes a body of expressions that returns an ISeq or nil, and yields
a Seqable object that will invoke the body only the first time seq
is called, and will cache the result and return it on all subsequent
seq calls. See also - realized?"
{:added "1.0"}
[& body]
(list 'new 'clojure.lang.LazySeq (list* '^{:once true} fn* [] body)))
I think the trick is that the producer function (positive-numbers) isn't being called recursively; it doesn't accumulate stack frames the way a basic Little-Schemer-style recursion would, because LazySeq invokes it only as needed for the individual entries in the sequence. Once a closure has been evaluated for an entry it can be discarded, so the work from previous invocations can be garbage-collected as the code churns through the sequence.
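To see this concretely, expanding the macro shows that the recursive call ends up inside a thunk rather than being evaluated on the spot:
user=> (macroexpand '(lazy-seq (positive-numbers (inc n))))
(new clojure.lang.LazySeq (fn* [] (positive-numbers (inc n))))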
I'm trying to solve this problem in a pure-functional way, without using set!.
I've written a function that calls a given lambda for each number in the fibonacci series, forever.
(define (each-fib fn)
(letrec
((next (lambda (a b)
(fn a)
(next b (+ a b)))))
(next 0 1)))
I think this is as succinct as it can be, but if I can shorten this, please enlighten me :)
With a definition like the above, is it possible to write another function that takes the first n numbers from the fibonacci series and gives me a list back, but without using variable mutation to track the state (which I understand is not really functional)?
The function signature doesn't need to be the same as the following... any approach that will utilize each-fib without using set! is fine.
(take-n-fibs 7) ; (0 1 1 2 3 5 8)
I'm guessing there's some sort of continuations + currying trick I can use, but I keep coming back to wanting to use set!, which is what I'm trying to avoid (purely for learning purposes/shifting my thinking to purely functional).
Try this, implemented using lazy code by means of delayed evaluation:
(define (each-fib fn)
(letrec
((next (lambda (a b)
(fn a)
(delay (next b (+ a b))))))
(next 0 1)))
(define (take-n-fibs n fn)
(let loop ((i n)
(promise (each-fib fn)))
(when (positive? i)
(loop (sub1 i) (force promise)))))
As has been mentioned, each-fib can be further simplified by using a named let:
(define (each-fib fn)
(let next ((a 0) (b 1))
(fn a)
(delay (next b (+ a b)))))
Either way, it was necessary to modify each-fib a little to use the delay primitive, which creates a promise:
A promise encapsulates an expression to be evaluated on demand via force. After a promise has been forced, every later force of the promise produces the same result.
I can't think of a way to stop the original (unmodified) procedure from iterating indefinitely. But with the above change in place, take-n-fibs can keep forcing the lazy evaluation of as many values as needed, and no more.
Also, take-n-fibs now receives a function for printing or processing each value in turn; use it like this:
(take-n-fibs 10 (lambda (n) (printf "~a " n)))
> 0 1 1 2 3 5 8 13 21 34 55
You provide an iteration function over Fibonacci elements. If, instead of iterating over each element, you want to accumulate a result, you should use a different primitive: a fold (or reduce) rather than an iter.
(It might be possible to use continuations to turn an iter into a fold, but that will probably be less readable and less efficient than a direct solution using either a fold or mutation.)
Note however that using an accumulator updated by mutation is also fine, as long as you understand what you are doing: you are using mutable state locally for convenience, but the function take-n-fibs is, seen from the outside, observationally pure, so you do not "contaminate" your program as a whole with side effects.
A quick prototype for fold-fib, adapted from your own code. I made an arbitrary choice as to "when stop folding": if the function returns null, we return the current accumulator instead of continuing folding.
(define (fold-fib init fn) (letrec ([next (lambda (acc a b)
(let ([acc2 (fn acc a)])
(if (null? acc2) acc
(next acc2 b (+ a b)))))])
(next init 0 1)))
(reverse (fold-fib '() (lambda (acc n) (if (> n 10) null (cons n acc)))))
It would be better to have a more robust convention to end folding.
I have written a few variants. First you ask if
(define (each-fib fn)
(letrec
((next (lambda (a b)
(fn a)
(next b (+ a b)))))
(next 0 1)))
can be written any shorter. The pattern is used so often that special syntax called named let has been introduced. Your function looks like this using a named let:
(define (each-fib fn)
(let next ([a 0] [b 1])
(fn a)
(next b (+ a b))))
In order to get control flowing from one function to another, one can, in languages that support TCO, use continuation-passing style. Each function gets an extra argument, often called k (for continuation). The function k represents what-to-do-next.
Using this style, one can write your program as follows:
(define (generate-fibs k)
(let next ([a 0] [b 1] [k k])
(k a (lambda (k1)
(next b (+ a b) k1)))))
(define (count-down n k)
(let loop ([n n] [fibs '()] [next generate-fibs])
(if (zero? n)
(k fibs)
(next (λ (a next)
(loop (- n 1) (cons a fibs) next))))))
(count-down 5 values)
Now it is a bit annoying to write in this style manually, so it can be convenient to introduce coroutines. Breaking your rule of not using set!, I have chosen to use a shared variable fibs onto which generate-fibs repeatedly conses new Fibonacci numbers. The count-down routine merely reads the values when the count down is over.
(define (make-coroutine co-body)
(letrec ([state (lambda () (co-body resume))]
[resume (lambda (other)
(call/cc (lambda (here)
(set! state here)
(other))))])
(lambda ()
(state))))
(define fibs '())
(define generate-fib
(make-coroutine
(lambda (resume)
(let next ([a 0] [b 1])
(set! fibs (cons a fibs))
(resume count-down)
(next b (+ a b))))))
(define count-down
(make-coroutine
(lambda (resume)
(let loop ([n 10])
(if (zero? n)
fibs
(begin
(resume generate-fib)
(loop (- n 1))))))))
(count-down)
And as a bonus, you get a version with communicating threads:
#lang racket
(letrec ([result #f]
[count-down
(thread
(λ ()
(let loop ([n 10] [fibs '()])
(if (zero? n)
(set! result fibs)
(loop (- n 1) (cons (thread-receive) fibs))))))]
[produce-fibs
(thread
(λ ()
(let next ([a 0] [b 1])
(when (thread-running? count-down)
(thread-send count-down a)
(next b (+ a b))))))])
(thread-wait count-down)
result)
The thread version is Racket specific, the others ought to run anywhere.
Building a list would be hard, but displaying the results can still be done (in a very bad fashion):
#lang racket
(define (each-fib fn)
(letrec
((next (lambda (a b)
(fn a)
(next b (+ a b)))))
(next 0 1)))
(define (take-n-fibs n fn)
(let/cc k
(begin
(each-fib (lambda (x)
(if (= x (fib (+ n 1)))
(k (void))
(begin
(display (fn x))
(newline))))))))
(define fib
(lambda (n)
(letrec ((f
(lambda (i a b)
(if (<= n i)
a
(f (+ i 1) b (+ a b))))))
(f 1 0 1))))
Notice that I am using the regular Fibonacci function as an escape (like I said, in a very bad fashion). I guess nobody would recommend programming like this.
Anyway
(take-n-fibs 7 (lambda (x) (* x x)))
0
1
1
4
9
25
64
How do I do recursion in an anonymous function, without using tail recursion?
For example (from Vanderhart 2010, p 38):
(defn power
[number exponent]
(if (zero? exponent)
1
(* number (power number (- exponent 1)))))
Let's say I wanted to do this as an anonymous function. And for some reason I didn't want to use tail recursion. How would I do it? For example:
( (fn [number exponent] ......))))) 5 3)
125
Can I use loop for this, or can loop only be used with recur?
The fn special form gives you the option to provide a name that can be used internally for recursion.
(doc fn)
;=> (fn name? [params*] exprs*)
So, add "power" as the name to complete your example.
(fn power [n e]
(if (zero? e)
1
(* n (power n (dec e)))))
Even if the recursion happened in the tail position, it would not be optimized to reuse the current stack frame; Clojure requires you to be explicit about this with loop/recur and trampoline.
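As a minimal sketch of what being explicit looks like (the accumulator parameter here is illustrative): carry the running product so that the self-call sits in tail position and can use recur:
((fn power [n e acc]
   (if (zero? e)
     acc
     (recur n (dec e) (* acc n))))
 5 3 1)
; => 125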
I know that in Clojure there's syntactic support for "naming" an anonymous function, as other answers have pointed out. However, I want to show a first-principles approach to solve the question, one that does not depend on the existence of special syntax on the programming language and that would work on any language with first-order procedures (lambdas).
In principle, if you want to make a recursive call, you need to refer to the name of the function, so "anonymous" (i.e. nameless) functions cannot be used for recursion ... unless you use the Y-Combinator. Here's an explanation of how it works in Clojure.
Let me show you how it's used with an example. First, a Y-Combinator that works for functions with a variable number of arguments:
(defn Y [f]
((fn [x] (x x))
(fn [x]
(f (fn [& args]
(apply (x x) args))))))
Now, the anonymous function that implements the power procedure as defined in the question. Clearly, it doesn't have a name, power is only a parameter to the outermost function:
(fn [power]
(fn [number exponent]
(if (zero? exponent)
1
(* number (power number (- exponent 1))))))
Finally, here's how to apply the Y-Combinator to the anonymous power procedure, passing as parameters number=5 and exponent=3 (it's not tail-recursive BTW):
((Y
(fn [power]
(fn [number exponent]
(if (zero? exponent)
1
(* number (power number (- exponent 1)))))))
5 3)
> 125
fn takes an optional name argument that can be used to call the function recursively.
E.g.:
user> ((fn fact [x]
(if (= x 0)
1
(* x (fact (dec x)))))
5)
;; ==> 120
Yes, you can use loop for this; recur works in both loops and fns:
user> (loop [result 5 x 1] (if (= x 3) result (recur (* result 5) (inc x))))
125
An idiomatic Clojure solution looks like this:
user> (reduce * (take 3 (repeat 5)))
125
or use Math.pow() ;-)
user> (java.lang.Math/pow 5 3)
125.0
loop can be a recur target, so you could do it with that too.
In Clojure, I would like to write a tail-recursive function that memoizes its intermediate results for subsequent calls.
[EDIT: this question has been rewritten using gcd as an example instead of factorial.]
The memoized gcd (greatest common divisor) could be implemented like this:
(def gcd (memoize (fn [a b]
                    (if (zero? b)
                      a
                      (recur b (mod a b))))))
In this implementation, intermediate results are not memoized for subsequent calls. For example, in order to calculate gcd(9,6), gcd(6,3) is called as an intermediate result. However, gcd(6,3) is not stored in the cache of the memoized function, because the recursion point of recur is the anonymous function, which is not memoized.
Therefore, if after having called gcd(9,6), we call gcd(6,3) we won't benefit from the memoization.
The only solution I can think of would be to use mundane recursion (explicitly call gcd instead of recur), but then we would not benefit from tail call optimization.
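In other words, the alternative would look something like this sketch, where every call goes through the gcd var and hits the cache, but each step consumes a stack frame:
(def gcd
  (memoize (fn [a b]
             (if (zero? b)
               a
               (gcd b (mod a b))))))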
Bottom Line
Is there a way to achieve both:
Tail call optimization
Memoization of intermediate results for subsequent calls
Remarks
This question is similar to Combine memoization and tail-recursion. But all the answers there are related to F#. Here, I am looking for an answer in clojure.
This question has been left as an exercise for the reader by The Joy of Clojure (chap 12.4). You can consult the relevant page of the book at http://bit.ly/HkQrio.
In your case it's hard to show memoize doing anything with factorial, because the intermediate calls are unique, so I'll rewrite it as a somewhat contrived example, assuming the point is to explore ways to avoid blowing the stack:
(defn stack-popper [n i]
(if (< i n) (* i (stack-popper n (inc i))) 1))
which can then get something out of a memoize:
(def stack-popper
(memoize (fn [n i] (if (< i n) (* i (stack-popper n (inc i))) 1))))
the general approaches to not blowing the stack are:
use tail calls
(def stack-popper
(memoize (fn [n acc] (if (> n 1) (recur (dec n) (* acc (dec n))) acc))))
use trampolines
(def stack-popper
(memoize (fn [n acc]
(if (> n 1) #(stack-popper (dec n) (* acc (dec n))) acc))))
(trampoline (stack-popper 4 1))
use a lazy sequence
(reduce * (range 1 4))
None of these works all the time, though I have yet to hit a case where none of them works. I almost always go for the lazy ones first because I find them the most Clojure-like; then I reach for tail calls with recur or trampolines.
(defmacro memofn
  [name args & body]
  `(let [cache# (atom {})]
     (fn ~name [& args#]
       (let [update-cache!# (fn update-cache!# [state# args#]
                              (if-not (contains? state# args#)
                                (assoc state# args#
                                       (delay
                                         (let [~args args#]
                                           ~@body)))
                                state#))]
         (let [state# (swap! cache# update-cache!# args#)]
           (-> state# (get args#) deref))))))
This will allow a recursive definition of a memoized function, which also caches intermediate results. Usage:
(def fib (memofn fib [n]
(case n
1 1
0 1
(+ (fib (dec n)) (fib (- n 2))))))
(def gcd
  (let [cache (atom {})]
    (fn [a b]
      @(or (@cache [a b])
           (let [p (promise)]
             (deliver p
               (loop [a a b b]
                 (if-let [p2 (@cache [a b])]
                   @p2
                   (do
                     (swap! cache assoc [a b] p)
                     (if (zero? b)
                       a
                       (recur b (mod a b))))))))))))
There are some concurrency issues (double evaluation, the same problem as with memoize, but worse because of the promises) which may be fixed using @kotarak's advice.
Turning the above code into a macro is left as an exercise to the reader. (Fogus's note was imo tongue-in-cheek.)
Turning this into a macro is really a simple exercise in macrology; note that the body (the last 3 lines) remains unchanged.
Using Clojure's recur, you can write factorial with an accumulator so that there is no stack growth, and just memoize it:
(defn fact
([n]
(fact n 1))
([n acc]
(if (= 1 n)
acc
(recur (dec n)
(* n acc)))))
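If you then want the "just memoize it" step, one sketch is to wrap the finished function; note that the recur iterations inside never go through the var, so only the top-level calls get cached:
(def fact (memoize fact))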
This is a factorial function implemented with anonymous tail recursion and memoization of intermediate results. The memoization is integrated with the function, and a reference to a shared buffer (implemented using the Atom reference type) is captured in a lexical closure.
Since the factorial function operates on natural numbers and the arguments for successive results are incremental, a vector seems a better-suited data structure for storing the buffered results.
Instead of passing the result of a previous computation as an argument (accumulator), we get it from the buffer.
(def ! ; global variable referring to a function
  (let [m (atom [1 1 2 6 24])] ; buffer of results
    (fn [n] ; factorial function definition
      (let [m-count (count @m)] ; number of results in a buffer
        (if (< n m-count) ; do we have buffered result for n?
          (nth @m n) ; · yes: return it
          (loop [cur m-count] ; · no: compute it recursively
            (let [r (*' (nth @m (dec cur)) cur)] ; new result
              (swap! m assoc cur r) ; store the result
              (if (= n cur) ; termination condition:
                r ; · base case
                (recur (inc cur)))))))))) ; · recursive case
(time (do (! 8000) nil)) ; => "Elapsed time: 154.280516 msecs"
(time (do (! 8001) nil)) ; => "Elapsed time: 0.100222 msecs"
(time (do (! 7999) nil)) ; => "Elapsed time: 0.090444 msecs"
(time (do (! 7999) nil)) ; => "Elapsed time: 0.055873 msecs"
I'm trying to teach myself Clojure and I'm using the principles of the Prime Factors Kata and TDD to do so.
Via a series of Midje tests like this:
(fact (primefactors 1) => (list))
(fact (primefactors 2) => (list 2))
(fact (primefactors 3) => (list 3))
(fact (primefactors 4) => (list 2 2))
I was able to create the following function:
(defn primefactors
  ([n] (primefactors n 2))
  ([n candidate]
   (cond (<= n 1) (list)
         (= 0 (rem n candidate)) (conj (primefactors (/ n candidate)) candidate)
         :else (primefactors n (inc candidate)))))
This works great until I throw the following edge case test at it:
(fact (primefactors 1000001) => (list 101 9901))
I end up with a stack overflow error. I know I need to turn this into a proper recur loop, but all the examples I see seem too simplistic and only point to a counter or numerical variable as the focus. How do I make this properly recursive?
Thanks!
Here's a tail recursive implementation of the primefactors procedure, it should work without throwing a stack overflow error:
(defn primefactors
([n]
(primefactors n 2 '()))
([n candidate acc]
(cond (<= n 1) (reverse acc)
(zero? (rem n candidate)) (recur (/ n candidate) candidate (cons candidate acc))
:else (recur n (inc candidate) acc))))
The trick is using an accumulator parameter for storing the result. Notice that the reverse call at the end of the recursion is optional, as long as you don't care if the factors get listed in the reverse order they were found.
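With this version, the edge case from the question completes without blowing the stack:
(primefactors 1000001)
; => (101 9901)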
Your second recursive call is already in the tail position, so you can just replace it with recur.
(primefactors n (inc candidate))
becomes
(recur n (inc candidate))
Any function overload opens an implicit loop block, so you don't need to insert that manually. This should already improve the stack situation somewhat, as this branch will be more commonly taken.
The first recursion
(primefactors (/ n candidate))
isn't in the tail position as its result is passed to conj. To put it in the tail position, you'll need to collect the prime factors in an additional accumulator argument onto which you conj the result from the current recursion level and then pass to recur on each invocation. You'll need to adjust your termination condition to return that accumulator.
The typical way is to include an accumulator as one of the function arguments. Add a 3-arity version to your function definition:
(defn primefactors
([n] (primefactors n 2 '()))
([n candidate acc]
...)
Then modify the (conj ...) form to call (recur ...) and pass (conj acc candidate) as the third argument. Make sure you pass in three arguments to recur, i.e. (recur (/ n candidate) 2 (conj acc candidate)), so that you're calling the 3-arity version of primefactors.
And the (<= n 1) case needs to return acc rather than an empty list.
I can go into more detail if you can't figure the solution out for yourself, but I thought I should give you a chance to try to work it out first.
This function really shouldn't be tail-recursive: it should build a lazy sequence instead. After all, wouldn't it be nice to know that 4611686018427387902 is non-prime (it's divisible by two), without having to crunch the numbers and find that its other prime factor is 2305843009213693951?
(defn prime-factors
([n] (prime-factors n 2))
([n candidate]
(cond (<= n 1) ()
(zero? (rem n candidate)) (cons candidate (lazy-seq (prime-factors (/ n candidate)
candidate)))
:else (recur n (inc candidate)))))
The above is a fairly unimaginative translation of the algorithm you posted; of course better algorithms exist, but this gets you correctness and laziness, and fixes the stack overflow.
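For example, the first factor of the large number mentioned above comes back immediately, without ever computing the huge cofactor:
(first (prime-factors 4611686018427387902))
; => 2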
A tail recursive, accumulator-free, lazy-sequence solution:
(defn prime-factors [n]
(letfn [(step [n div]
(when (< 1 n)
(let [q (quot n div)]
(cond
(< q div) (cons n nil)
(zero? (rem n div)) (cons div (lazy-step q div))
:else (recur n (inc div))))))
(lazy-step [n div]
(lazy-seq
(step n div)))]
(lazy-step n 2)))
Recursive calls embedded in lazy-seq are not evaluated until the sequence is iterated, eliminating the risk of stack overflow without resorting to an accumulator.
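A quick check against the edge case from the question, assuming the definition above:
(prime-factors 1000001)
; => (101 9901)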