I am using Common Lisp for my real-time graphics experiments, and so far it has been great. My requirements for speed and easy compatibility with CFFI mean I am using 'typed' arrays. The one area of the code which really feels ugly is the generic versions of my matrix and vector math functions. As CLOS can't specialize on the length of an array, I am doing something like this:
(defun v+ (vec-a vec-b)
  (%v+ vec-a vec-b (length vec-a) (length vec-b)))

(defmethod %v+ (va vb (la (eql 3)) (lb (eql 3)))
  ***CODE HERE***)
This works but doesn't feel right. I have seen extensions in various CL implementations and heard about the promise of the MOP. I have steered away from the MOP as I feared it would hurt portability across CL implementations, but I have more recently come across the Closer to MOP project.
Core Question:
Does the MOP provide a more efficient way to specialize on length? Are there any areas or techniques I should focus on?
Your code looks right to me; what you are using is essentially type tagging.
(defun v+ (vec-a vec-b)
  (labels ((find-tag (vec)
             (if (> (length vec) 3)
                 :more-than-3
                 :less-than-4)))
    (%v+ vec-a vec-b (find-tag vec-a) (find-tag vec-b))))
(defmethod %v+ (va vb (va-tag (eql :less-than-4)) (vb-tag (eql :less-than-4)))
***CODE HERE***)
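For example, a hypothetical call (assuming the method body above is filled in): both arguments below have length 3, so find-tag returns :less-than-4 for each, and dispatch selects the method specialized on those eql tags.

(v+ (vector 1.0 2.0 3.0) (vector 4.0 5.0 6.0))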
Related
I wonder how you, experienced Lispers / functional programmers, usually decide which of these to use. Compare:
(define (my-map1 f lst)
(reverse
(let loop ([lst lst] [acc '()])
(if (empty? lst)
acc
(loop (cdr lst) (cons (f (car lst)) acc))))))
and
(define (my-map2 f lst)
(if (empty? lst)
'()
(cons (f (car lst)) (my-map2 f (cdr lst)))))
The problem can be described in the following way: whenever we have to traverse a list, should we collect results in an accumulator, which preserves tail recursion but requires reversing the list at the end? Or should we use non-tail ("unoptimized") recursion, where nothing needs to be reversed?
It seems to me the first solution is always better. True, the final reverse is an extra O(n) pass, but the accumulating version uses much less (stack) memory, and deep chains of pending calls are not free either.
Yet I've seen different examples where the second approach was used. Either I'm missing something or those examples were only educational. Are there situations where unoptimized recursion is better?
When possible, I use higher-order functions like map, which build a list under the hood. In Common Lisp I also tend to use loop a lot, which has a collect keyword for building a list in forward order (I use the series library as well, which implements this transparently).
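For example, a minimal sketch of forward collection with loop:

(loop for x in '(1 2 3 4)
      collect (* x x))
;; => (1 4 9 16)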
I sometimes use recursive functions that are not tail-recursive, because they express what I want better and because the list involved is going to be relatively small; in particular, when writing a macro, the code being manipulated is not usually very large.
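For instance, here is a sketch of the kind of non-tail-recursive function I mean (count-atoms is an illustrative name, not from the original post); it mirrors the shape of the data directly, at the cost of one pending stack frame per cons cell:

(defun count-atoms (tree)
  ;; Non-tail-recursive: the structure of the code follows
  ;; the structure of the tree.
  (cond ((null tree) 0)
        ((atom tree) 1)
        (t (+ (count-atoms (car tree))
              (count-atoms (cdr tree))))))

(count-atoms '(a (b c) d)) ;; => 4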
For more complex problems I don't collect into lists, I generally accept a callback function that is being called for each solution. This ensures that the work is more clearly separated between how the data is produced and how it is used.
This approach is to me the most flexible of all, because no assumption is made about how the data should be processed or collected. But it also means that the callback function is likely to perform side effects or non-local returns (see the example below). I don't think this is a problem as long as the scope of the side effects is small (local to a function).
For example, if I want to have a function that generates all natural numbers between 0 and N-1, I write:
(defun range (n f)
(dotimes (i n)
(funcall f i)))
The implementation iterates over all values from 0 below N and calls F on each value I.
If I wanted to collect them in a list, I'd write:
(defun range-list (N)
(let ((list nil))
(range N (lambda (v) (push v list)))
(nreverse list)))
But I can also avoid the whole push/nreverse idiom by using a queue. A queue in Lisp can be implemented as a pair (first . last) that keeps track of the first and last cons cells of the underlying linked list. This makes it possible to append an element in constant time, because there is no need to iterate over the list (see Implementing queues in Lisp by P. Norvig, 1991):
(defun queue ()
  ;; A queue is a cons of (head . last-cons). The head list starts
  ;; with a dummy cell so that QPUSH never has to special-case an
  ;; empty queue.
  (let ((list (list nil)))
    (cons list list)))

(defun qpush (queue element)
  ;; Attach a fresh cell after the current last cons, then advance
  ;; the last-cons pointer to it.
  (setf (cdr queue)
        (setf (cddr queue)
              (list element))))

(defun qlist (queue)
  ;; The collected list, minus the dummy head cell.
  (cdar queue))
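For example:

(let ((q (queue)))
  (qpush q 1)
  (qpush q 2)
  (qpush q 3)
  (qlist q))
;; => (1 2 3)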
And so, the alternative version of the function would be:
(defun range-list (n)
(let ((q (queue)))
(range N (lambda (v) (qpush q v)))
(qlist q)))
The generator/callback approach is also useful when you don't want to build all the elements; it is a bit like the lazy evaluation model (e.g. in Haskell), where you only compute the items you need.
Imagine you want to use range to find the first empty slot in a vector, you could do this:
(defun empty-index (vector)
(block nil
(range (length vector)
(lambda (d)
(when (null (aref vector d))
(return d))))))
Here, the block named NIL allows the anonymous function to call RETURN, which exits the block with the given value.
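For example, with illustrative data:

(empty-index (vector 'a nil 'b)) ;; => 1
(empty-index (vector 'a 'b))     ;; => NIL, no empty slot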
In other languages, the same behaviour is often turned inside-out: we use iterator objects with cursor and next operations. I tend to think it is simpler to write the iteration plainly and call a callback function, but the iterator style is another interesting approach.
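By way of contrast, a minimal sketch of that inside-out style (make-range-iterator is an illustrative name): the caller drives the loop by repeatedly asking for the next value.

(defun make-range-iterator (n)
  ;; Returns a closure; each call yields the next value,
  ;; or NIL once the range is exhausted.
  (let ((i 0))
    (lambda ()
      (when (< i n)
        (prog1 i (incf i))))))

(let ((next (make-range-iterator 3)))
  (list (funcall next) (funcall next) (funcall next) (funcall next)))
;; => (0 1 2 NIL)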
Tail recursion with accumulator:
Traverses the list twice (once accumulating, once reversing)
Constructs two lists
Constant stack space
Can crash with malloc errors

Naive recursion:
Traverses the list twice (once building up the stack, once tearing it down)
Constructs one list
Linear stack space
Can crash with stack overflow (unlikely in Racket), or malloc errors
It seems to me the first solution is always better
Allocations are generally more time-expensive than extra stack frames, so I think the latter one will be faster (you'll have to benchmark it to know for sure though).
Are there situations where unoptimized recursion is better?
Yes. If you are building a lazily evaluated structure, as in Haskell, you need the cons cell as the evaluation boundary, and you cannot lazily evaluate a tail-recursive call.
Benchmarking is the only way to know for sure. Racket allows very deep stacks, so you should be able to get away with either version.
The standard-library version of map is quite horrific, which shows that you can usually squeeze out some performance if you're willing to sacrifice readability.
Given two implementations of the same function, with the same O notation, I will choose the simpler version 95% of the time.
There are many ways to write recursive code that nevertheless describes an iterative process.
I usually write continuation-passing style directly; this is my "natural" way to do it.
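For instance, a minimal CPS sketch in Common Lisp (fact-cps is an illustrative name): every recursive call is a tail call, because the pending work is carried in the continuation K. (Note that Common Lisp implementations are not required to eliminate tail calls, though many do.)

(defun fact-cps (n k)
  (if (zerop n)
      (funcall k 1)
      (fact-cps (1- n)
                (lambda (r) (funcall k (* n r))))))

(fact-cps 5 #'identity) ;; => 120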
Another consideration is the type of the function: sometimes you need to connect your function with the functions around it, and depending on their types you may choose a different way to recurse.
You should start by working through The Little Schemer to gain a strong foundation. The Little Typer shows yet another style of recursion, founded on a different computational philosophy and used in languages like Agda and Coq.
In Scheme you can sometimes write code that is effectively Haskell (you can write the monadic code that a Haskell compiler would generate as its intermediate language); in that case the way you recurse is also different from the "usual" way.
False dichotomy
You have other options available to you. Here we can preserve tail recursion and map over the list in a single traversal. The technique used here is called continuation-passing style:
(define (map f lst (return identity))
(if (null? lst)
(return null)
(map f
(cdr lst)
(lambda (r) (return (cons (f (car lst)) r))))))
(define (square x)
(* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
This question is tagged with racket, which has built-in support for delimited continuations. We can accomplish map in a single traversal, but this time without using recursion. Enjoy:
(require racket/control)
(define (yield x)
(shift return (cons x (return (void)))))
(define (map f lst)
(reset (begin
(for ((x lst))
(yield (f x)))
null)))
(define (square x)
(* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
It's my intention that this post will show you the detriment of pigeonholing your mind into a particular construct. The beauty of Scheme/Racket, I have come to learn, is that any implementation you can dream of is available to you.
I would highly recommend Beautiful Racket by Matthew Butterick. This easy-to-approach and freely-available ebook shatters the glass ceiling in your mind and shows you how to think about your solutions in a language-oriented way.
I'm Haruo. I enjoy solving SPOJ problems in Common Lisp (CLISP). Today I solved Classical/Balk!, but in SBCL rather than CLISP, because my CLISP submission failed with a runtime error (NZEC).
I would like my code to become more sophisticated, and today's problem is a good opportunity. Please look at the following code and tell me your refactoring strategy. I trust you.
https://github.com/haruo-wakakusa/SPOJ-ClispAnswers/blob/0978813be14b536bc3402f8238f9336a54a04346/20040508_adrian_b.lisp
Haruo
Take for example get-x-depth-for-yz-grid.
(defun get-x-depth-for-yz-grid (planes//yz-plane grid)
(let ((planes (get-planes-including-yz-grid-in planes//yz-plane grid)))
(unless (evenp (length planes))
(error "error in get-x-depth-for-yz-grid"))
(sort planes (lambda (p1 p2) (< (caar p1) (caar p2))))
(do* ((rest planes (cddr rest)) (res 0))
((null rest) res)
(incf res (- (caar (second rest)) (caar (first rest)))))))
style -> ERROR can be replaced by ASSERT.
possible bug -> SORT is destructive -> make sure you pass it a freshly consed list! If the list is already freshly allocated by get-planes-including-yz-grid-in, then the copy is not needed.
bug -> SORT returns the sorted list; the sort is not guaranteed to happen purely by side effect -> use the returned value.
style -> DO replaced with LOOP.
style -> meaning of CAAR unclear. Find better naming or use other data structures.
(defun get-x-depth-for-yz-grid (planes//yz-plane grid)
(let ((planes (get-planes-including-yz-grid-in planes//yz-plane grid)))
(assert (evenp (length planes)) (planes)
"error in get-x-depth-for-yz-grid")
(setf planes (sort (copy-list planes) #'< :key #'caar))
(loop for (p1 p2) on planes by #'cddr
sum (- (caar p2) (caar p1)))))
Some documentation makes a bigger improvement than refactoring.
Your -> macro will confuse SBCL's type inference. You should have (-> x) expand into x, and (-> x y...) into (let (($ x)) (-> y...)) (a sketch of this expansion follows this list).
You should learn to use loop and use it in more places. dolist with extra mutation is not great
In a lot of places you should use destructuring-bind instead of e.g. (rest (rest ...)). You're also inconsistent: sometimes you write (cddr ...) for that instead.
Your block* suffers from many problems:
It uses (let (foo) (setf foo ...)), which trips up SBCL's type inference.
The name block* implies that the bindings are scoped so that each may refer to those previously defined; in fact every initial value may refer to any variable or function name, and if that variable has not been initialised yet, it evaluates to nil.
The style of defining lots of functions inside another function when they can be outside is more typical of scheme (which has syntax for it) than Common Lisp.
get-x-y-and-z-ranges really needs to use loop. I think it’s wrong too: the lists are different lengths.
You need to define some accessor functions instead of using first, etc. Maybe even a struct(!)
(sort foo) might destroy foo. You need to do (setf foo (sort foo)).
There’s basically no reason to use do. Use loop.
You should probably use :key in a few places.
You write defvar but I think you mean defparameter
*t* is a stupid name
Most names are bad and don’t seem to tell me what is going on.
I may be an idiot but I can’t tell at all what your program is doing. It could probably do with a lot of work
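Here is a minimal sketch of the -> expansion suggested in the first point above (the names -> and $ come from the original code; this is an illustration, not the poster's actual macro):

(defmacro -> (x &rest more)
  ;; (-> x)       expands into x
  ;; (-> x y ...) expands into (let (($ x)) (-> y ...))
  (if (null more)
      x
      `(let (($ ,x))
         (-> ,@more))))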
The classic book The Little Lisper (The Little Schemer) is founded on two big ideas
You can solve most problems in a recursive way (instead of using loops) (assuming you have Tail Call Optimisation)
Lisp is great because it is easy to implement in itself.
Now one might think this holds true for all Lispy languages (including Clojure). The trouble is, the book is an artefact of its time (1989), written before functional programming with higher-order functions (HOFs) became what it is today (or at least before it was considered palatable for undergraduates).
The benefit of recursion (at least in part) is the ease of traversing nested data structures like '(a b (c (d e))).
For example:
(def leftmost
(fn [l]
(println "(leftmost " l)
(println (non-atom? l))
(cond
(null? l) '()
(non-atom? (first l)) (leftmost (first l))
true (first l))))
Now, with functional zippers, we have a non-recursive approach to traversing nested data structures, and we can traverse them as we would any lazy data structure. For example:
(defn map-zipper [m]
(zip/zipper
(fn [x] (or (map? x) (map? (nth x 1))))
(fn [x] (seq (if (map? x) x (nth x 1))))
(fn [x children]
(if (map? x)
(into {} children)
(assoc x 1 (into {} children))))
m))
(def m {:a 3 :b {:x true :y false} :c 4})
(-> (map-zipper m) zip/down zip/right zip/node)
;;=> [:b {:y false, :x true}]
Now it seems you can solve any nested list traversal problem with either:
a zipper as above, or
a zipper that walks the structure and returns a set of keys that will let you modify the structure using assoc.
Assumptions:
I'm assuming, of course, data structures that are fixed-size and fully known prior to traversal
I'm excluding the streaming data source scenario.
My question is: Is recursion a smell (in idiomatic Clojure) because of zippers and HOFs?
I would say that, yes, if you are doing manual recursion you should at least reconsider whether you need to. But I wouldn't say that zippers have anything to do with this. My experience with zippers has been that they are of theoretical use, and are very exciting to Clojure newcomers, but of little practical value once you get the hang of things, because the situations in which they are useful are vanishingly rare.
It's really because of higher-order functions that have already implemented the common recursive patterns for you that manual recursion is uncommon. However, it's certainly not the case that you should never use manual recursion: it's just a warning sign, suggesting you might be able to do something else. I can't even recall a situation in my four years of using Clojure that I've actually needed a zipper, but I end up using recursion fairly often.
Clojure idioms discourage explicit recursion because the call stack is limited: usually to about 10K deep. Amending the first of Halloway & Bedra's Six Rules of Clojure Functional Programming (Programming Clojure (p 89)),
Avoid unbounded recursion. The JVM cannot optimize recursive calls and
Clojure programs that recurse without bound will blow their stack.
There are a couple of palliatives:
recur deals with tail recursion.
Lazy sequences can turn a deep call stack into a shallow call stack across an unfolding data structure. Many HOFs in the sequence library, such as map and filter, do this.
Does anyone know how I can figure out the free variables in a lambda expression? Free variables are the variables that aren't part of the lambda parameters.
My current method (which is getting me nowhere) is to simply use car and cdr to go through the expression. My main problem is figuring out if a value is a variable or if it's one of the scheme primitives. Is there a way to test if something evaluates to one of scheme's built-in functions? For example:
(is-scheme-primitive? 'and)
;Value: #t
I'm using MIT scheme.
For arbitrary MIT Scheme programs, there isn't any way to do this. One problem is that the function you describe just can't work. For example, this doesn't use the 'scheme primitive' and:
(let ((and 7)) (+ and 1))
but it certainly uses the symbol 'and.
Another problem is that lots of things, like and, are special forms that are implemented as macros. You need to know what all of the macros in your program expand into just to figure out which variables your program uses.
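For illustration, the same issue exists in Common Lisp, where AND is likewise a macro and its expansion is implementation-specific:

(macroexpand-1 '(and a b))
;; might yield something like (IF A (AND B)) depending on the implementation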
To make this work, you need to restrict the set of programs that you accept as input. The best choice is to restrict it to "fully expanded" programs. In other words, you want to make sure that there aren't any uses of macros left in the input to your free-variables function.
To do this, you can use the expand function provided by many Scheme systems. Unfortunately, from the online documentation, it doesn't look like MIT Scheme provides this function. If you're able to use a different system, Racket provides the expand function as well as local-expand which works correctly inside macros.
Racket actually also provides an implementation of the free-variables function that you ask for, which, as I described, requires fully expanded programs as input (such as the output of expand or local-expand). You can see the source code as well.
For a detailed discussion of the issues involved with full expansion of source code, see this upcoming paper by Flatt, Culpepper, Darais and Findler.
[EDIT 4] Disclaimer; or, looking back a year later:
This is actually a really bad way to go about solving this problem. It works as a very quick and dirty method that accomplishes the basic goal of the OP, but does not stand up to any 'real life' use cases. Please see the discussion in the comments on this answer as well as the other answer to see why.
[/EDIT]
This solution is probably less than ideal, but it will work for any lambda form you give it in the REPL environment of MIT Scheme (see edits). Documentation for the procedures I used is found at the mit.edu doc site. get-free-vars takes a quoted lambda and returns a list of pairs: the first element of each pair is a symbol, and the second is the value returned by environment-reference-type.
(define (flatten lst)
  (cond ((null? lst) '())
        ((pair? (car lst))
         (append (flatten (car lst)) (flatten (cdr lst))))
        (else
         (cons (car lst) (flatten (cdr lst))))))
(define (get-free-vars proc-form)
  (let ((env (ge (eval proc-form user-initial-environment))))
    (let loop ((pf (flatten proc-form))
               (out '()))
      (cond ((null? pf) out)
            ((symbol? (car pf))
             (loop (cdr pf)
                   (cons (cons (car pf)
                               (environment-reference-type env (car pf)))
                         out)))
            (else
             (loop (cdr pf) out))))))
EDIT: Example usage:
(define a 100)
(get-free-vars '(lambda (x) (* x a g)))
=> ((g . unbound) (a . normal) (x . unbound) (* . normal) (x . unbound) (lambda . macro))
EDIT 2: Changed the code to guard against environment-reference-type being called with something other than a symbol.
EDIT 3: As Sam has pointed out in the comments, this will not see symbols bound in a let under the lambda as having any value, and I am not sure there is an easy fix. So my statement about this taking any lambda is wrong; it should have read more like "any simple lambda that doesn't contain new binding forms"... oh well.
I have a problem, which I believe to be best solved through a functional style of programming.
Coming from a very imperative background, I am used to program design involving class diagrams/descriptions, communication diagrams, state diagrams etc. These diagrams however, all imply, or are used to describe, the state of a system and the various side effects that actions have on the system.
Is there any standardised set of diagrams or mathematical notation used in the design of functional programs, or are such programs best designed in short functional pseudocode (given that functions will be much shorter than their imperative counterparts)?
Thanks, Mike
There's a secret trick to functional programming.
It's largely stateless, so the traditional imperative diagrams don't matter.
Most of ordinary, garden-variety math notation is also stateless.
Functional design is more like algebra than anything else. You're going to define functions, and show that the composition of those functions produces the desired result.
Diagrams aren't as necessary because functional programming is somewhat simpler than procedural programming. It's more like conventional mathematical notation. Use mathematical techniques to show that your various functions do the right things.
Functional programmers are more into writing equations than drawing diagrams. The game is called equational reasoning, and it mostly involves:
Substituting equals for equals
Applying algebraic laws
The occasional proof by induction
The idea is that you write really simple code that is "manifestly correct", then you use equational reasoning to turn it into something that is cleaner and/or will perform better. The master of this art is an Oxford professor named Richard Bird.
For example, if I want to simplify the Scheme expression
(append (list x) l)
I will substitute equals for equals like crazy. Using the definition of list, I get
(append (cons x '()) l)
Substituting the body of append, I have
(if (null? (cons x '()))
l
(cons (car (cons x '())) (append (cdr (cons x '())) l)))
Now I have these algebraic laws:
(null? (cons a b)) == #f
(car (cons a b)) == a
(cdr (cons a b)) == b
and substituting equals for equals I get
(if #f
    l
    (cons x (append '() l)))
With another law, (if #f e1 e2) == e2, I get
(cons x (append '() l))
And if I expand the definition of append again, I get
(cons x l)
which I have proved is equal to
(append (list x) l)
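The derivation above is written in Scheme, but the law is easy to sanity-check in any Lisp; for instance, in Common Lisp:

(equal (append (list 'x) '(1 2))
       (cons 'x '(1 2)))
;; => T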
There is this very good article explaining Lambda Calculus using animations: To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction
This one is very similar to the previous, but has an actual implementation: Lambda Animator
I don't know much about functional programming, but here are two things I have run into:
λ (lambda) is often used to denote a function
f ∘ g is used to indicate function composition
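For instance, a minimal sketch of composition in Common Lisp (compose here is hand-rolled for illustration, not a standard function):

(defun compose (f g)
  ;; Returns a function computing (f (g x)).
  (lambda (x) (funcall f (funcall g x))))

(funcall (compose #'1+ #'abs) -5) ;; => 6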