Library function for sorting based on a predicate - collections

Does a function exist in the Clojure library for filtering a collection, and returning a pair of collections, one of which contains the items where the predicate returned true, and the other which contains the items where the predicate returned false?
For example:
(let [[yays nays] (some-fn pred coll)] ... )
More or less, I am looking for a way to sort the collection based on the predicate, rather than throw items away (as filter or remove do).
(Note: I know that a solution is to call filter and remove on the collection separately; I just would like to know if there is a builtin function that can accomplish this more efficiently).
(Edit: seq-utils/separate doesn't qualify as more efficient. It evaluates the predicate twice for each item.)
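For reference, that two-pass version would be something along these lines (a sketch; each call walks the collection once, so the predicate ends up being evaluated twice per item):
(let [yays (filter pred coll)
      nays (remove pred coll)]
  ... )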

clojure.core/group-by
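For example (a quick sketch; group-by keys the result map by the predicate's return value, so the two groups end up under true and false):
(group-by even? [1 2 3 4 5])
;; => {false [1 3 5], true [2 4]}

(let [{yays true, nays false} (group-by pred coll)]
  ... )
Note that group-by calls the predicate only once per item, although it builds a map of vectors rather than two plain sequences.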

If you want maximum performance you'll want to do this using loop/recur, something like:
(defn separate-by [pred coll]
  (loop [yays nil
         nays nil
         s (seq coll)]
    (if s
      (let [item (first s)
            test (pred item)]
        (if test
          (recur (conj yays item) nays (next s))
          (recur yays (conj nays item) (next s))))
      {:yays yays :nays nays})))
The reason this is most efficient is that loop/recur lets you iteratively build the two output lists without any extra memory allocations (which would happen if you repeatedly updated a map, for example) or reference overhead (which would happen if you used two atoms to accumulate the results).
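For example, with the definition above (a sketch; the outputs come back in reverse order because conj prepends to the lists being built up from nil):
(separate-by even? (range 10))
;; => {:yays (8 6 4 2 0), :nays (9 7 5 3 1)}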

Related

Reversing list vs non tail recursion when traversing lists

I wonder how you, experienced Lispers / functional programmers, usually decide which to use. Compare:
(define (my-map1 f lst)
  (reverse
   (let loop ([lst lst] [acc '()])
     (if (empty? lst)
         acc
         (loop (cdr lst) (cons (f (car lst)) acc))))))
and
(define (my-map2 f lst)
  (if (empty? lst)
      '()
      (cons (f (car lst)) (my-map2 f (cdr lst)))))
The problem can be described in the following way: whenever we have to traverse a list, should we collect the results in an accumulator, which preserves tail recursion but requires reversing the list at the end? Or should we use unoptimized (non-tail) recursion, in which case we don't have to reverse anything?
It seems to me the first solution is always better. True, there is the additional O(n) cost of the reverse, but it uses much less memory, and calling a function isn't instantaneous either.
Yet I've seen different examples where the second approach was used. Either I'm missing something or these examples were only educational. Are there situations where unoptimized recursion is better?
When possible, I use higher-order functions like map which build a list under the hood. In Common Lisp I also tend to use loop a lot, which has a collect keyword for building a list in a forward way (I also use the series library, which implements this transparently).
I sometimes use recursive functions that are not tail-recursive because they better express what I want and because the size of the list is going to be relatively small; in particular, when writing a macro, the code being manipulated is not usually very large.
For more complex problems I don't collect into lists, I generally accept a callback function that is being called for each solution. This ensures that the work is more clearly separated between how the data is produced and how it is used.
This approach is to me the most flexible of all, because no assumption is made about how the data should be processed or collected. But it also means that the callback function is likely to perform side-effects or non-local returns (see example below). I don't think this is a particular problem as long as the scope of the side-effects is small (local to a function).
For example, if I want to have a function that generates all natural numbers between 0 and N-1, I write:
(defun range (n f)
  (dotimes (i n)
    (funcall f i)))
The implementation here iterates over all values from 0 below N and calls F with the value I.
If I wanted to collect them in a list, I'd write:
(defun range-list (N)
  (let ((list nil))
    (range N (lambda (v) (push v list)))
    (nreverse list)))
But I can also avoid the whole push/nreverse idiom by using a queue. A queue in Lisp can be implemented as a pair (first . last) that keeps track of the first and last cons cells of the underlying linked list. This allows appending elements to the end in constant time, because there is no need to iterate over the list (see Implementing queues in Lisp by P. Norvig, 1991).
(defun queue ()
  (let ((list (list nil)))
    (cons list list)))

(defun qpush (queue element)
  (setf (cdr queue)
        (setf (cddr queue)
              (list element))))

(defun qlist (queue)
  (cdar queue))
And so, the alternative version of the function would be:
(defun range-list (n)
  (let ((q (queue)))
    (range N (lambda (v) (qpush q v)))
    (qlist q)))
The generator/callback approach is also useful when you don't want to build all the elements; it is a bit like the lazy model of evaluation (e.g. Haskell) where you only use the items you need.
Imagine you want to use range to find the first empty slot in a vector; you could do this:
(defun empty-index (vector)
  (block nil
    (range (length vector)
           (lambda (d)
             (when (null (aref vector d))
               (return d))))))
Here, the block of lexical name nil allows the anonymous function to call return to exit the block with a return value.
In other languages, the same behaviour is often achieved inside-out: we use iterator objects with cursor and next operations. I tend to think it is simpler to write the iteration plainly and call a callback function, but that would be another interesting approach too.
Tail recursion with accumulator
- Traverses the list twice
- Constructs two lists
- Constant stack space
- Can crash with malloc errors

Naive recursion
- Traverses the list twice (once building up the stack, once tearing down the stack)
- Constructs one list
- Linear stack space
- Can crash with stack overflow (unlikely in Racket), or malloc errors
It seems to me the first solution is always better
Allocations are generally more time-expensive than extra stack frames, so I think the latter one will be faster (you'll have to benchmark it to know for sure though).
Are there situations where unoptimized recursion is better?
Yes, if you are creating a lazily evaluated structure: in Haskell, for example, the cons cell is the evaluation boundary, and you can't lazily evaluate a tail-recursive call.
Benchmarking is the only way to know for sure; Racket allows very deep stacks, so you should be able to get away with either version.
The stdlib version is quite horrific, which shows that you can usually squeeze out some performance if you're willing to sacrifice readability.
Given two implementations of the same function, with the same O notation, I will choose the simpler version 95% of the time.
There are many ways to write recursion that preserves an iterative process.
I usually do continuation passing style directly. This is my "natural" way to do it.
One thing to take into account is the type of the function. Sometimes you need to connect your function with the functions around it, and depending on their types you can choose another way to do recursion.
You should start by working through "The Little Schemer" to gain a strong foundation. In "The Little Typer" you can discover another way of doing recursion, founded on a different computational philosophy and used in languages like Agda and Coq.
In Scheme you can sometimes write code that is effectively Haskell (you can write the monadic code that a Haskell compiler would generate as an intermediate language). In that case the way to do recursion is also different from the "usual" way, etc.
false dichotomy
You have other options available to you. Here we can preserve tail-recursion and map over the list with a single traversal. The technique used here is called continuation-passing style -
(define (map f lst (return identity))
  (if (null? lst)
      (return null)
      (map f
           (cdr lst)
           (lambda (r) (return (cons (f (car lst)) r))))))
(define (square x)
  (* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
This question is tagged with Racket, which has built-in support for delimited continuations. We can accomplish map using a single traversal, but this time without using recursion. Enjoy -
(require racket/control)

(define (yield x)
  (shift return (cons x (return (void)))))

(define (map f lst)
  (reset (begin
           (for ((x lst))
             (yield (f x)))
           null)))
(define (square x)
  (* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
It's my intention that this post will show you the detriment of pigeonholing your mind into a particular construct. The beauty of Scheme/Racket, I have come to learn, is that any implementation you can dream of is available to you.
I would highly recommend Beautiful Racket by Matthew Butterick. This easy-to-approach and freely-available ebook shatters the glass ceiling in your mind and shows you how to think about your solutions in a language-oriented way.

Is there a way to use iteration in Common Lisp and avoid side-effects at the same time?

I've written two versions of a lisp function. The main difference between the two is that one is done with recursion, while the other is done with iteration.
Here's the recursive version (no side effects!):
(defun simple-check (counter list)
  "This function takes two arguments:
   the number 0 and a list of atoms.
   It returns the number of times the
   atom 'a' appears in that list."
  (if (null list)
      counter
      (if (equal (car list) 'a)
          (simple-check (+ counter 1) (cdr list))
          (simple-check counter (cdr list)))))
Here's the iterative version (with side effects):
(defun a-check (counter list)
  "This function takes two arguments:
   the number 0 and a list of atoms.
   It returns the number of times the
   atom 'a' appears in that list."
  (dolist (item list)
    (if (equal item 'a)
        (setf counter (+ counter 1))
        (setf counter (+ counter 0))))
  counter)
As far as I know, they both work. But I'd really like to avoid side-effects in the iterative version. Two questions I'd like answered:
Is it possible to avoid side effects and keep iteration?
Assuming the answer to #1 is a yes, what are the best ways to do so?
For completeness, note that Common Lisp has a built-in COUNT:
(count 'a list)
In some ways, the difference between side-effect and no side-effect is a bit blurred. Take the following loop version (ignoring that loop also has better ways):
(loop :for x :in list
      :for counter := (if (eq x 'a) 1 0)
        :then (if (eq x 'a) (1+ counter) counter)
      :finally (return counter))
Is counter set at each step, or is it rebound? I.e., is an existing variable modified (as with setf), or is a new variable binding created (as in a recursion)?
This do version is very much like the recursive version:
(do ((list args (rest list))
     (counter 0 (+ counter (if (eq (first list) 'a) 1 0))))
    ((endp list) counter))
Same question as above.
Now the “obvious” loop version:
(loop :for x :in list
      :count (eq x 'a))
There isn't even an explicit variable for the counter. Are there side-effects?
Internally, of course, there are effects: environments are created and bindings established, and, especially if there is tail call optimization, they are destroyed/replaced at each step even in the recursive version.
I count as side effects only effects that affect things outside of some defined scope. Of course, things appear more elegant if, even at the level of your internal definition, you can avoid explicitly setting things and instead use a more declarative expression.
You can also iterate with map, mapcar and friends.
https://lispcookbook.github.io/cl-cookbook/iteration.html
I also suggest a look at remove-if[-not] and friends, as well as reduce and apply:
(length (remove-if-not (lambda (x) (equal :a x)) '(:a :b :a))) ;; 2
Passing counter to the recursive procedure was a means to enable a tail recursive definition. This is unnecessary for the iterative definition.
As others have pointed out, there are several language constructs which solve the stated problem elegantly.
I assume you are interested in this in a more general sense such as when you cannot find
a language feature that solves a problem directly.
In general, one can maintain a functional interface by keeping the mutation private as below:
(defun simple-check (list)
  "return the number of times the symbol `a` appears in `list`"
  (let ((times 0))
    (dolist (elem list times)
      (when (equal elem 'a)
        (incf times)))))

Finding duplicate elements in a Scheme List using recursion

I'm trying to create a function that will check a list of numbers for duplicates and return either #t or #f. I can only use car, cdr and conditionals, no cons.
This is what I have so far but it's giving me the error "car: contract violation expected: pair? given: #f"
(define (dups a)
  (if (null? a)
      #f
      (if (= (car a) (car (dups (cdr a))))
          #t
          (dups (cdr)))))
I'm new to both scheme and recursion, any help/advice would be greatly appreciated.
Your second if doesn't make much sense. I'm assuming you wanted to check whether (car a) appears somewhere further down the list, but (car (dups (cdr a))) doesn't give you that. Also, (car (dups ...)) is a type-issue since dups will return a boolean instead of a list, and car is expecting a list (or actually, a pair, which is what lists are composed of).
What you need is a second function to call in the test of that second if. That function takes an element and a list and searches for that element in the list. Of course, if you're allowed, use find, otherwise implement some sort of my-find - it's quite simple and similar to your dups function.

How to understand clojure's lazy-seq

I'm trying to understand clojure's lazy-seq operator, and the concept of lazy evaluation in general. I know the basic idea behind the concept: Evaluation of an expression is delayed until the value is needed.
In general, this is achievable in two ways:
at compile time using macros or special forms;
at runtime using lambda functions
With lazy evaluation techniques, it is possible to construct infinite data structures that are evaluated as they are consumed. These infinite sequences utilize lambdas, closures and recursion. In Clojure, these infinite data structures are generated using the lazy-seq and cons forms.
I want to understand how lazy-seq does its magic. I know it is actually a macro. Consider the following example.
(defn rep [n]
  (lazy-seq (cons n (rep n))))
Here, the rep function returns a lazily-evaluated sequence of type LazySeq, which can now be transformed and consumed (thus evaluated) using the sequence API. This API provides functions such as take, map, filter and reduce.
In the expanded form, we can see how lambda is utilized to store the recipe for the cell without evaluating it immediately.
(defn rep [n]
  (new clojure.lang.LazySeq (fn* [] (cons n (rep n)))))
But how does the sequence API actually work with LazySeq?
What actually happens in the following expression?
(reduce + (take 3 (map inc (rep 5))))
How is the intermediate operation map applied to the sequence,
how does take limit the sequence and
how does terminal operation reduce evaluate the sequence?
Also, how do these functions work with either a Vector or a LazySeq?
Also, is it possible to generate nested infinite data structures?: list containing lists, containing lists, containing lists... going infinitely wide and deep, evaluated as consumed with the sequence API?
And last question, is there any practical difference between this
(defn rep [n]
  (lazy-seq (cons n (rep n))))
and this?
(defn rep [n]
  (cons n (lazy-seq (rep n))))
That's a lot of questions!
How does the seq API actually work with LazySeq?
If you take a look at LazySeq's class source code you will notice that it implements the ISeq interface, providing methods like first, more and next.
Functions like map, take and filter are built using lazy-seq (they produce lazy sequences) and first and rest (which in turn uses more), and that's how they can work with a lazy seq as their input collection - by using the first and more implementations of the LazySeq class.
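A stripped-down sketch of that idea (not the actual clojure.core/map source, which also handles chunking and multiple collections) could look like this:
(defn my-map [f coll]
  (lazy-seq
    (when-let [s (seq coll)]
      (cons (f (first s)) (my-map f (rest s))))))

(take 3 (my-map inc (rep 5)))
;; => (6 6 6)
Each call wraps its work in lazy-seq, so nothing is computed until first/more is invoked on the result.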
What actually happens in the following expression?
(reduce + (take 3 (map inc (rep 5))))
The key is to look at how LazySeq.first works. It will invoke the wrapped function to obtain and memoize the result. In your case it will be the following code:
(cons n (rep n))
Thus it will be a cons cell with n as its value and another LazySeq instance (result of a recursive call to rep) as its rest part. It will become the realised value of this LazySeq object and first will return the value of the cached cons cell.
When you call more on it, it will in the same way ensure that the value of the particular LazySeq object is realised (or reuse the memoized value) and then call more on that value (in this case, more on the cons cell containing another LazySeq object).
Once you obtain another LazySeq instance via more, the story repeats when you call first on it.
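You can observe this memoization with realized?, which reports whether a lazy seq has been forced yet:
(def s (rep 5))
(realized? s) ;; => false - the wrapped function has not run yet
(first s)     ;; => 5
(realized? s) ;; => true - the result is now cached inside the LazySeq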
map and take will create another lazy-seq that calls first and more on the collection passed as their argument (just another lazy seq), so it is a similar story. The difference is only in how the values passed to cons are generated (e.g. by applying f to the value obtained by calling first on the lazy seq being mapped over, instead of using a raw value like n in your rep function).
With reduce it's a bit simpler as it will use loop with first and more to iterate over the input lazy seq and apply the reducing function to produce the final result.
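Conceptually it boils down to something like this (a simplified sketch, not the real clojure.core/reduce, which also dispatches on reduction protocols for performance):
(defn my-reduce [f init coll]
  (loop [acc init
         s   (seq coll)]
    (if s
      (recur (f acc (first s)) (next s))
      acc)))

(my-reduce + 0 (take 3 (map inc (rep 5))))
;; => 18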
For how the actual implementations of map and take look, I encourage you to check their source code - it's quite easy to follow.
How can the seq API work with different collection types (e.g. lazy seq and persistent vector)?
As mentioned above, map, take and other functions work in terms of first and rest (reminder - rest is implemented on top of more). Thus we need to explain how first and rest/more can work with different collection types: they check whether the collection implements ISeq (in which case it provides those functions directly), or they try to create a seq view of the collection and call that view's implementation of first and more.
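For example, a persistent vector is not itself a seq, but seq produces a sequence view over it, and first/rest go through that view:
(seq [1 2 3])   ;; => (1 2 3)
(first [1 2 3]) ;; => 1
(rest [1 2 3])  ;; => (2 3)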
Is it possible to generate nested infinite data structures?
It's definitely possible, but I am not sure exactly what data shape you would like to get. Do you mean a lazy seq which generates another sequence as its value (instead of a single value like n in your rep)?
(defn nested-cons [n]
  (lazy-seq (cons (repeat n n) (nested-cons (inc n)))))

(take 3 (nested-cons 1))
;; => ((1) (2 2) (3 3 3))
Or would you rather have it return a flat sequence, like (1 2 2 3 3 3)?
For such cases you might use concat instead of cons which creates a lazy sequence of two or more sequences:
(defn nested-concat [n]
  (lazy-seq (concat (repeat n n) (nested-concat (inc n)))))

(take 6 (nested-concat 1))
;; => (1 2 2 3 3 3)
Is there any practical difference between this
(defn rep [n]
  (lazy-seq (cons n (rep n))))
and this?
(defn rep [n]
  (cons n (lazy-seq (rep n))))
In this particular case not really. But in the case where a cons cell doesn't wrap a raw value but a result of a function call to calculate it, the latter form is not fully lazy. For example:
(defn calculate-sth [n]
(println "Calculating" n)
n)
(defn rep1 [n]
(lazy-seq (cons (calculate-sth n) (rep1 (inc n)))))
(defn rep2 [n]
(cons (calculate-sth n) (lazy-seq (rep2 (inc n)))))
(take 0 (rep1 1))
;; => ()
(take 0 (rep2 1))
;; Prints: Calculating 1
;; => ()
Thus the latter form will evaluate its first element even if you might not need it.

SICP 2.64 order of growth of recursive procedure

I am self-studying SICP and having a hard time finding the order of growth of recursive functions.
The following procedure list->tree converts an ordered list to a balanced search tree:
(define (list->tree elements)
  (car (partial-tree elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))
I have been looking at the solution online, and the following website I believe offers the best solution but I have trouble making sense of it:
jots-jottings.blogspot.com/2011/12/sicp-exercise-264-constructing-balanced.html
My understanding is that the procedure 'partial-tree' repeatedly calls three arguments each time it is called - 'this-entry', 'left-tree', and 'right-tree' respectively (and 'remaining-elts' only when it is necessary - either in the very first 'partial-tree' call or whenever 'non-left-elts' is called).
this-entry calls: car, cdr, and cdr(left-result)
left-entry calls: car, cdr, and itself with its length halved each step
right-entry calls: car, itself with cdr(cdr(left-result)) as argument and length halved
'left-entry' would take log(n) (base 2) steps, and all three arguments call 'left-entry' separately.
So it would have a ternary-tree-like structure, and I thought the total number of steps would be similar to 3^log(n). But the solution says it uses each index 1..n only once. But doesn't 'this-entry', for example, reduce the same index at every node, separately from 'right-entry'?
I am confused..
Further, in part (a) the solution website states:
"in the non-terminating case partial-tree first calculates the number
of elements that should go into the left sub-tree of a balanced binary
tree of size n, then invokes partial-tree with the elements and that
value which both produces such a sub-tree and the list of elements not
in that sub-tree. It then takes the head of the unused elements as the
value for the current node"
I believe the procedure does this-entry before left-tree. Why am I wrong?
This is my very first book on CS and I have yet to come across Master Theorem.
It is mentioned in some solutions but hopefully I should be able to do the question without using it.
Thank you for reading and I look forward to your kind reply,
Chris
You need to understand how let forms work. In
(let ((left-tree (car left-result))
      (non-left-elts (cdr left-result))
left-tree does not "call" anything. It is created as a new lexical variable, and assigned the value of (car left-result). The parentheses around it are just for grouping together the elements describing one variable introduced by a let form: the variable's name and its value:
(let ( ( left-tree     (car left-result) )
  ;;   ^^                               ^^
       ( non-left-elts (cdr left-result) )
  ;;   ^^                               ^^
Here's how to understand how the recursive procedure works: don't.
Just don't try to understand how it works; instead analyze what it does, assuming that it does (for the smaller cases) what it's supposed to do.
Here, (partial-tree elts n) receives two arguments: the list of elements (to be put into tree, presumably) and the list's length. It returns
(cons (make-tree this-entry left-tree right-tree)
      remaining-elts)
a cons pair of a tree - the result of the conversion - and the remaining elements, of which there should be none left in the topmost call, if the length argument was correct.
Now that we know what it's supposed to do, we look inside it. And indeed, assuming the above, what it does makes total sense: halve the number of elements, process that half of the list, get the tree and the remaining list back (non-empty now), and then process what's left.
The this-entry is not a tree - it is an element that is housed in a tree's node:
(let ((this-entry (car non-left-elts))
Setting
(right-size (- n (+ left-size 1)))
means that n == right-size + 1 + left-size. That's 1 element that goes into the node itself, the this-entry element.
And since each element goes directly into its node, once, the total running time of this algorithm is linear in the number of elements in the input list, with logarithmic stack space use.
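One way to see this without the Master Theorem: each call to partial-tree does a constant amount of work besides its two recursive calls, and those two calls together handle the remaining n - 1 elements, so the running time roughly satisfies T(n) = T((n-1)/2) + T(n - 1 - (n-1)/2) + O(1), which unfolds to one constant-work node per element, i.e. T(n) = O(n).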

Resources