I wrote this quicksort function:
(defun quicksort (lst)
(if (null lst)
nil
(let ((div (car lst))
(tail (cdr lst)))
(append (quicksort (remove-if-not (lambda (x) (< x div)) tail))
(list div)
(quicksort (remove-if (lambda (x) (< x div)) tail))))))
but I can't rewrite it as a macro - it does not work. Neither does, for example, this simple foo (a recursive sum - I know, a little silly, but just as an example):
(defun Suma (lst)
(if (cdr lst)
(+ (Suma (cdr lst))
(car lst))
(car lst)))
works properly, but the macro:
(defmacro SumaMacro (lst)
'(if (cdr lst)
'(+ (prog (SUMAMACRO (cdr lst)))
(prog (car lst)))
'(car lst)))
seems to be wrong. Does anyone have any suggestions about rewriting recursive functions as macros?
You're mixing macro-expansion time and runtime; or in other words, you're mixing values and syntax. Here's a very simple example:
(defmacro while (condition &body body)
`(when ,condition ,@body (while ,condition ,@body)))
The bad thing here is that the macro doesn't execute the body; it just constructs a piece of code with the given body in it. When that kind of loop appears in a function, it's guarded by some conditional like if, which prevents infinite recursion at run time. But the expansion of this macro has no such guard -- you can see that the macro expands into a form that contains the exact original form, which means it's trying to expand into an infinitely large piece of code. It's just as if you'd written
(defun foo (blah)
(cons 1 (foo blah)))
then hooked that generator function into the compiler. So to do these kinds of runtime loops, you'll have to use a real function. (And when that's needed, you can use labels to create a local function to do the recursive work.)
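For example, here is one possible sketch of that labels-based approach (my own illustration, not code from the thread; the gensym name is arbitrary): a working while can expand into a labels form whose local function does the looping at run time, so the recursion lives in the generated code, guarded by the condition, rather than in the expander.
(defmacro while (condition &body body)
  ;; Expand into a local recursive function: the recursion happens in the
  ;; generated code and is guarded by CONDITION at run time, so the
  ;; expansion itself stays finite.
  (let ((loop-name (gensym "LOOP")))
    `(labels ((,loop-name ()
                (when ,condition
                  ,@body
                  (,loop-name))))
       (,loop-name))))

;; e.g. (let ((i 0)) (while (< i 3) (print i) (incf i))) prints 0, 1 and 2.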
It makes no sense to write recursive functions like SUM or QUICKSORT as macros. Also, no, in general it is not possible. A macro expands source code. At compile time the macro sees only the source code, not the real arguments the code will be called with. After compilation the macro is gone, replaced by the code it produces. That code then gets called with arguments at runtime. So the macro can't do computation at compile time based on argument values that are known only at runtime.
The exception is: when the argument value is known at compile time / macro expansion time, then the macro can expand to a recursive macro call to itself. But that is really advanced macro usage and nothing that one would add to code to be maintained by other programmers.
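For what it's worth, here is a minimal sketch of that advanced case (my own illustration, not from the answer): the macro below recurses only because its arguments are literal numbers that are already visible at expansion time.
(defmacro suma-literals (&rest numbers)
  ;; NUMBERS is part of the source code, so it is known at expansion time,
  ;; and the macro can safely expand into another SUMA-LITERALS call.
  (if (null numbers)
      0
      `(+ ,(first numbers) (suma-literals ,@(rest numbers)))))

;; (suma-literals 1 2 3) eventually expands into (+ 1 (+ 2 (+ 3 0))).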
Rule of thumb: If you want to do recursive computations, then use functions. If you want to process source code, then use a macro.
Also, try to use Lisp-like formatting. The editor counts the parentheses and does highlighting and indentation. Don't put parentheses on their own lines; they feel lonely there. The usual Lisp style is more compact and also makes more use of horizontal space. If you work with lists, then use FIRST and REST instead of CAR and CDR.
Your 'suma' function would look like this:
(defun suma (list)
(if (rest list)
(+ (suma (rest list))
(first list))
(first list)))
Forget about the macro. But, if you want to learn more about macros, then the book 'On Lisp' by Paul Graham (available as a download) is a good source of knowledge.
Related
I wonder how you, experienced Lispers / functional programmers, usually decide what to use. Compare:
(define (my-map1 f lst)
(reverse
(let loop ([lst lst] [acc '()])
(if (empty? lst)
acc
(loop (cdr lst) (cons (f (car lst)) acc))))))
and
(define (my-map2 f lst)
(if (empty? lst)
'()
(cons (f (car lst)) (my-map2 f (cdr lst)))))
The problem can be described in the following way: whenever we have to traverse a list, should we collect the results in an accumulator, which preserves tail recursion but requires reversing the list at the end? Or should we use unoptimized recursion, but then we don't have to reverse anything?
It seems to me the first solution is always better. Granted, there's the additional O(n) reversal at the end. However, it uses much less memory, and calling a function isn't free either.
Yet I've seen different examples where the second approach was used. Either I'm missing something or those examples were only educational. Are there situations where unoptimized recursion is better?
When possible, I use higher-order functions like map which build a list under the hood. In Common Lisp I also tend to use loop a lot, which has a collect keyword for building a list in a forward way (I also use the series library, which implements this transparently).
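For instance, a minimal sketch of that LOOP/COLLECT idiom (plain standard Common Lisp; the function name is my own):
(defun squares (n)
  ;; COLLECT builds the result list in order, so no reversal is needed.
  (loop for i below n
        collect (* i i)))

;; (squares 5) => (0 1 4 9 16)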
I sometimes use recursive functions that are not tail-recursive because they better express what I want and because the size of the list is going to be relatively small; in particular, when writing a macro, the code being manipulated is not usually very large.
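As a toy example of my own (not from the original answer), a plain recursive walk over a small piece of source code often reads more naturally than an accumulator-based version:
(defun count-atoms (form)
  ;; Not tail-recursive, but clear; fine when FORM is small,
  ;; as macro arguments usually are.
  (if (atom form)
      1
      (+ (count-atoms (car form))
         (count-atoms (cdr form)))))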
For more complex problems I don't collect into lists, I generally accept a callback function that is being called for each solution. This ensures that the work is more clearly separated between how the data is produced and how it is used.
This approach is to me the most flexible of all, because no assumption is made about how the data should be processed or collected. But it also means that the callback function is likely to perform side effects or non-local returns (see example below). I don't think it is particularly a problem as long as the scope of the side effects is small (local to a function).
For example, if I want to have a function that generates all natural numbers between 0 and N-1, I write:
(defun range (n f)
(dotimes (i n)
(funcall f i)))
The implementation here iterates over all values from 0 below N and calls F with the value I.
If I wanted to collect them in a list, I'd write:
(defun range-list (N)
(let ((list nil))
(range N (lambda (v) (push v list)))
(nreverse list)))
But I can also avoid the whole push/nreverse idiom by using a queue. A queue in Lisp can be implemented as a pair (first . last) that keeps track of the first and last cons cells of the underlying linked list. This allows appending elements to the end in constant time, because there is no need to iterate over the list (see Implementing queues in Lisp by P. Norvig, 1991).
(defun queue ()
(let ((list (list nil)))
(cons list list)))
(defun qpush (queue element)
(setf (cdr queue)
(setf (cddr queue)
(list element))))
(defun qlist (queue)
(cdar queue))
And so, the alternative version of the function would be:
(defun range-list (n)
(let ((q (queue)))
(range N (lambda (v) (qpush q v)))
(qlist q)))
The generator/callback approach is also useful when you don't want to build all the elements; it is a bit like the lazy model of evaluation (e.g. Haskell) where you only use the items you need.
Imagine you want to use range to find the first empty slot in a vector, you could do this:
(defun empty-index (vector)
(block nil
(range (length vector)
(lambda (d)
(when (null (aref vector d))
(return d))))))
Here, the block with the lexical name nil allows the anonymous function to call return to exit the block with a return value.
In other languages, the same behaviour is often reversed inside-out: we use iterator objects with a cursor and next operations. I tend to think it is simpler to write the iteration plainly and call a callback function, but this would be another interesting approach too.
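For comparison, here is a rough Common Lisp sketch of that inside-out style (my own illustration; make-range-iterator is an invented name): an iterator can simply be a closure exposing a next operation that the consumer pulls from.
(defun make-range-iterator (n)
  ;; Returns a closure acting as a cursor over 0 .. n-1.
  (let ((i 0))
    (lambda ()
      (if (< i n)
          (values (prog1 i (incf i)) t)  ; current value, plus a "more?" flag
          (values nil nil)))))           ; exhausted

;; The consumer pulls values instead of being called back:
(let ((next (make-range-iterator 3)))
  (loop
    (multiple-value-bind (value more) (funcall next)
      (unless more (return))
      (print value))))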
Tail recursion with accumulator:
- Traverses the list twice
- Constructs two lists
- Constant stack space
- Can crash with malloc errors

Naive recursion:
- Traverses the list twice (once building up the stack, once tearing it down)
- Constructs one list
- Linear stack space
- Can crash with a stack overflow (unlikely in Racket), or with malloc errors
"It seems to me the first solution is always better"
Allocations are generally more time-expensive than extra stack frames, so I think the latter one will be faster (you'll have to benchmark it to know for sure though).
"Are there situations where unoptimized recursion is better?"
Yes: if you are creating a lazily evaluated structure. In Haskell, you need the cons cell as the evaluation boundary, and you can't lazily evaluate a tail-recursive call.
Benchmarking is the only way to know for sure; Racket has deep stack frames, so you should be able to get away with either version.
The standard-library version is quite horrific, which shows that you can usually squeeze out some performance if you're willing to sacrifice readability.
Given two implementations of the same function, with the same O notation, I will choose the simpler version 95% of the time.
There are many ways to write recursion that keeps the process iterative.
I usually do continuation-passing style directly; this is my "natural" way to do it.
Another consideration is the type of the function. Sometimes you need to connect your function with the functions around it, and depending on their types you can choose a different way to do the recursion.
You should start by working through The Little Schemer to gain a strong foundation. In The Little Typer you can discover another style of recursion, founded on a different computational philosophy and used in languages like Agda and Coq.
In Scheme you can sometimes write code that is effectively Haskell (you can write the monadic code that a Haskell compiler would generate as its intermediate language). In that case the way to do recursion is also different from the "usual" way, and so on.
false dichotomy
You have other options available to you. Here we can preserve tail-recursion and map over the list with a single traversal. The technique used here is called continuation-passing style -
(define (map f lst (return identity))
(if (null? lst)
(return null)
(map f
(cdr lst)
(lambda (r) (return (cons (f (car lst)) r))))))
(define (square x)
(* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
This question is tagged with racket, which has built-in support for delimited continuations. We can accomplish map using a single traversal, but this time without using recursion. Enjoy -
(require racket/control)
(define (yield x)
(shift return (cons x (return (void)))))
(define (map f lst)
(reset (begin
(for ((x lst))
(yield (f x)))
null)))
(define (square x)
(* x x))
(map square '(1 2 3 4))
'(1 4 9 16)
It's my intention that this post will show you the detriment of pigeonholing your mind into a particular construct. The beauty of Scheme/Racket, I have come to learn, is that any implementation you can dream of is available to you.
I would highly recommend Beautiful Racket by Matthew Butterick. This easy-to-approach and freely-available ebook shatters the glass ceiling in your mind and shows you how to think about your solutions in a language-oriented way.
I'm trying to reverse a list in scheme and I came up with to the following solution:
(define l (list 1 2 3 4))
(define (reverse lista)
(car (cons (reverse (cdr (cons 0 lista))) 0)))
(display (reverse l))
Although it works I don't really understand why it works.
In my head, it would evaluate to a series of nested conses until the cons of () (which is the cdr of a list with one element).
I guess I am not understanding the substitution model; could someone explain to me why it works?
Notes:
It is supposed to work only on non-nested lists.
Taken from SICP, exercise 2.18.
I know there are many similar questions, but as far as I saw, none presented this solution.
Thank you
[As this happens quite often, I write the answer anyway]
Scheme implementations do have their builtin versions of reverse, map, append etc. as they are specified in RxRS (e.g. https://www.cs.indiana.edu/scheme-repository/R4RS/r4rs_8.html).
In the course of learning Scheme (and actually any Lisp dialect) it's really valuable to implement them anyway. The danger is that one's definition can collide with the built-in one (although e.g. Scheme's define or Lisp's label should shadow them). Therefore it's always worth calling the hand-made implementation by some other name, like "my-reverse", "my-append", etc. This way you will save yourself much confusion, like in the following:
(let ([append
(lambda (xs ys)
(if (null? xs)
ys
(cons (car xs) (append (cdr xs) ys))))])
(append '(hello) '(there!)))
-- this one seems to work, creating a false impression that "let" works the same as "letrec". But just change the name to "my-append" and it breaks, because at the moment of evaluating the lambda form, the symbol "my-append" is not yet bound to anything (unlike "append" which was defined as a builtin procedure).
Of course such a let form will work in a language with dynamic scoping, but Scheme is lexical (with the exception of defines), and the reason is referential transparency (but that's so far off-topic that I can only refer the interested reader to one of the lambda papers: http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-453.pdf).
This reads pretty much the same as the solutions in other languages:
if the list is empty, return an empty list. Otherwise ...
chop off the first element (CAR)
reverse the remainder of the list (CDR)
append the first element, as a one-element list, to that reversal (APPEND + LIST)
return the result
Now ... given my understanding from LISP days, the code would look more like this:
(append (reverse (cdr lista)) (list (car lista)))
... which matches my description above.
There are several ways to do it. Here is another:
(define my-reverse
(lambda (lst)
(define helper
(lambda (lst result)
(if (null? lst)
result
(helper (cdr lst) (cons (car lst) result)))))
(helper lst '())))
NOTE: I would like to do this without Racket's built-in exceptions if possible.
I have many functions which call other functions and may recursively call back to the original function. Under certain conditions along the way I want to stop any further recursive steps, no longer call any other functions, and simply return some value/string (the stack can be ignored if the condition is met). Here is a contrived example that hopefully shows what I'm trying to accomplish:
(define (add expr0 expr1)
(cond
[(list? expr0) (add (cadr expr0) (cadr (cdr expr0)))]
[(list? expr1) (add (cadr expr1) (cadr (cdr expr1)))]
[else (if (or (equal? expr0 '0) (equal? expr1 '0))
'(Adding Zero)
(+ expr0 expr1))]
))
If this were my function and I called it with (add (add 2 0) 3), then the goal would be to simply return the entire string '(Adding Zero) ANYTIME that a zero is one of the expressions, instead of making the recursive call to (add '(Adding Zero) 3).
Is there a way to essentially "break" out of the recursion? My problem is that if I'm already deep inside, it will eventually try to evaluate '(Adding Zero), which it doesn't know how to do, and I feel like I should be able to do this without adding an explicit check to each expr.
Any guidance would be great.
In your specific case, there's no need to "escape" from normal processing. Simply having '(Adding Zero) in tail position will cause your add function to return (Adding Zero).
To create a situation where you might need to escape, you need something a
little more complicated:
(define (recursive-find/collect collect? tree (result null))
(cond ((null? tree) (reverse result))
((collect? tree) (reverse (cons tree result)))
((not (pair? tree)) (reverse result))
(else
(let ((hd (car tree))
(tl (cdr tree)))
(cond ((collect? hd)
(recursive-find/collect collect? tl (cons hd result)))
((pair? hd)
(recursive-find/collect collect? tl
(append (reverse (recursive-find/collect collect? hd)) result)))
(else (recursive-find/collect collect? tl result)))))))
Suppose you wanted to abort processing and just return 'Hahaha! if any node in the tree had the value 'Joker. Just evaluating 'Hahaha! in tail position
wouldn't be enough because recursive-find/collect isn't always used in
tail position.
Scheme provides continuations for this purpose. The easiest way to do it in my particular example would be to use the continuation from the predicate function, like this:
(call/cc
(lambda (continuation)
(recursive-find/collect
(lambda (node)
(cond ((eq? node 'Joker)
(continuation 'Hahaha!)) ;; Processing ends here
;; Otherwise find all the symbols
;; in the tree
(else (symbol? node))))
'(Just 1 arbitrary (tree (structure) ((((that "has" a Joker in it)))))))))
A continuation represents "the rest of the computation" that is going to happen after the call/cc block finishes. In this case, it just gives you a way to escape from the call/cc block from anywhere in the stack.
But continuations also have other strange properties, such as allowing you to jump back to whatever block of code this call/cc appears in even after execution has left this part of the program. For example:
(define-values (a b)
  (call/cc
   (lambda (cc)
     (values 1 cc))))
(b 'one 'see-see)
In this case, calling b (the captured continuation) jumps back to the define-values form and redefines a and b to one and see-see, respectively.
Racket also has "escape continuations" (call/ec or let/ec) which can escape from their form, but can't jump back into it. In exchange for this limitation you get better performance.
I have a recursive function which needs to recurse until it finds a certain result. However in the body of my function after my first recursive call I might do some other calculations or possibly recurse again. But, if I recurse and find the result I'm looking for, then I'd like to just stop out of any recursive I've been doing and return that result to avoid doing unnecessary computations.
In a normal recursive call once you get to the "base case" that gets returned to the function that called, then that gets returned to the one that called it, and so on. I'd like to know how to just return to the very first time the function was called, and not have to return something for all those intermediate steps.
For my basic recursion I could write a function like this:
(defun recurse (x)
  (if (= x 10)
      (return-from recurse x)
      (progn (recurse (+ x 1)) (print "Recursed!"))))
(recurse 1)
It has been written to illustrate what I mean about the function running more computations after a recursive call. And, as written, this doesn't even return the value I'm interested in, since I do some printing after I've returned the value I care about. (Note: the return-from here is extraneous, as I could just write "x" in its place. It's just there to draw parallels for when I try to return to the top-level recursion in my second example below.)
Now, if I want to ditch all those extra "Recursed!" printings I could encase everything in a block and then just return to that block instead:
EDIT: Here is a function wrapper for my original example. This example should be clearer now.
(defun recurse-to-top (start)
(block top-level
(labels ((recurse (x)
(if (= x 10)
(return-from top-level x)
(progn (recurse (+ x 1)) (print "Recursed!")))))
(recurse start))))
And running this block keeps going until 10 "is found" and then returns from the top-level block with no extraneous printing, just like I wanted. But this seems like a really clunky way to get this feature. I'd like to know if there's a standard or "best" way of getting this type of behavior.
DEFUN already sets up a lexical block:
(defun recurse (start)
(labels ((recurse-aux (x)
(case x
(10 (return-from recurse x))
(15 x)
(otherwise
(recurse-aux (+ x 1))
(print "Recursed!")))))
(recurse-aux start)))
Older is the use of CATCH and THROW, which is a more dynamic construct and thus allows an exit across functions:
(defun recurse (start)
(catch 'recurse-exit
(recurse-aux start)))
(defun recurse-aux (x)
  (case x
    (10 (throw 'recurse-exit x))
    (15 x)
    (otherwise
     (recurse-aux (+ x 1))
     (print "Recursed!"))))
As mentioned by Lars, there are even more ways to program control flow like this.
You want some kind of non-local exit. There are a few choices: return-from, go, throw, signal.
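As a minimal sketch of the signal option, which the other answers don't show (the condition type found and the reader found-value are invented names, purely for illustration):
(define-condition found ()
  ((value :initarg :value :reader found-value)))

(defun recurse (x)
  ;; HANDLER-CASE performs the non-local exit: when the condition is
  ;; signalled anywhere below, control unwinds directly to this clause.
  (handler-case (recurse-aux x)
    (found (c) (found-value c))))

(defun recurse-aux (x)
  (when (= x 10)
    (signal 'found :value x))
  (recurse-aux (+ x 1))
  (print "Recursed!"))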
Maybe some variation on this?
(defun recurse (x &optional (tag 'done))
(catch tag
(when (= x 10)
(throw 'done x))
(recurse (1+ x) nil)
(print "Cursed!")))
I believe it does what you want, although there may be a lot of needless catching going on.
As always with Lisp, you can imagine there is a perfect language for your problem, and write your program in that language. E.g. something like
(defun recurse (x)
(top-level-block recurse
(when (= x 10)
(return-from-top-level recurse x))
(recurse (1+ x))
(print "Cursed!")))
Then it is just a simple matter of programming to implement the new macros top-level-block and return-from-top-level.
Imperfect sample code follows:
(defmacro top-level-block (name &body body)
`(if (boundp ',name)
(progn ,@body)
(catch ',name
(let ((,name t))
(declare (special ,name))
,@body))))
(defmacro return-from-top-level (name value)
`(throw ',name ,value))
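A quick sanity check of this sketch (my own reading of the boundp/catch trick above, not a claim from the answer): with these macros loaded, (recurse 1) should return 10 and print nothing, because the throw unwinds past all of the pending (print "Cursed!") calls.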
Does anyone know how I can figure out the free variables in a lambda expression? Free variables are the variables that aren't part of the lambda parameters.
My current method (which is getting me nowhere) is to simply use car and cdr to go through the expression. My main problem is figuring out if a value is a variable or if it's one of the scheme primitives. Is there a way to test if something evaluates to one of scheme's built-in functions? For example:
(is-scheme-primitive? 'and)
;Value: #t
I'm using MIT scheme.
For arbitrary MIT Scheme programs, there isn't any way to do this. One problem is that the function you describe just can't work. For example, this doesn't use the 'scheme primitive' and:
(let ((and 7)) (+ and 1))
but it certainly uses the symbol 'and.
Another problem is that lots of things, like and, are special forms that are implemented with macros. You need to know what all of the macros in your program expand into to figure out even what variables are used in your program.
To make this work, you need to restrict the set of programs that you accept as input. The best choice is to restrict it to "fully expanded" programs. In other words, you want to make sure that there aren't any uses of macros left in the input to your free-variables function.
To do this, you can use the expand function provided by many Scheme systems. Unfortunately, from the online documentation, it doesn't look like MIT Scheme provides this function. If you're able to use a different system, Racket provides the expand function as well as local-expand which works correctly inside macros.
Racket actually also provides an implementation of the free-variables function that you ask for, which, as I described, requires fully expanded programs as input (such as the output of expand or local-expand). You can see the source code as well.
For a detailed discussion of the issues involved with full expansion of source code, see this upcoming paper by Flatt, Culpepper, Darais and Findler.
[EDIT 4] Disclaimer; or, looking back a year later:
This is actually a really bad way to go about solving this problem. It works as a very quick and dirty method that accomplishes the basic goal of the OP, but does not stand up to any 'real life' use cases. Please see the discussion in the comments on this answer as well as the other answer to see why.
[/EDIT]
This solution is probably less than ideal, but it will work for any lambda form you want to give it in the REPL environment of mit-scheme (see edits). Documentation for the procedures I used is found at the mit.edu doc site. get-free-vars takes a quoted lambda and returns a list of pairs. The first element of each pair is the symbol and the second is the value returned by environment-reference-type.
(define (flatten lst)
(cond ((null? lst) '())
((pair? (car lst)) (append (flatten (car lst)) (flatten (cdr lst))))
(else
(cons (car lst) (flatten (cdr lst))))))
(define (get-free-vars proc-form)
(let ((env (ge (eval proc-form user-initial-environment))))
(let loop ((pf (flatten proc-form))
(out '()))
(cond ((null? pf) out)
((symbol? (car pf))
(loop (cdr pf) (cons (cons (car pf) (environment-reference-type env (car pf))) out)))
(else
(loop (cdr pf) out))))))
EDIT: Example usage:
(define a 100)
(get-free-vars '(lambda (x) (* x a g)))
=> ((g . unbound) (a . normal) (x . unbound) (* . normal) (x . unbound) (lambda . macro))
EDIT 2: Changed the code to guard against environment-reference-type being called with something other than a symbol.
EDIT 3: As Sam has pointed out in the comments, this will not see the symbols bound in a let under the lambda as having any value. Not sure there is an easy fix for this. So my statement about this taking any lambda is wrong, and should have read more like "any simple lambda that doesn't contain new binding forms"... oh well.