How is it possible that a function can call itself - recursion

I know about recursion, but I don't know how it's possible. I'll use the following example to further explain my question.
(def (pow (x, y))
  (cond ((y = 0) 1))
  (x * (pow (x, y-1))))
The program above is in the Lisp language. I'm not sure the syntax is correct, since I came up with it in my head, but it will do. In the program I define the function pow, and inside pow it calls itself. I don't understand how it's able to do this. From what I know the computer has to completely analyze a function before it can be defined. If that's the case, then the computer should report that pow is undefined, because I use it before it is defined. The principle I'm describing is the same one at play when you use an x in x = x + 1 when x was not defined previously.

Compilers are much smarter than you think.
A compiler can turn the recursive call in this definition:
(defun pow (x y)
  (cond ((zerop y) 1)
        (t (* x (pow x (1- y))))))
into a goto instruction that re-starts the function from scratch:
Disassembly of function POW
(CONST 0) = 1
2 required arguments
0 optional arguments
No rest parameter
No keyword parameters
12 byte-code instructions:
0 L0
0 (LOAD&PUSH 1)
1 (CALLS2&JMPIF 172 L15) ; ZEROP
4 (LOAD&PUSH 2)
5 (LOAD&PUSH 3)
6 (LOAD&DEC&PUSH 3)
8 (JSR&PUSH L0)
10 (CALLSR 2 57) ; *
13 (SKIP&RET 3)
15 L15
15 (CONST 0) ; 1
16 (SKIP&RET 3)
If this were a more complicated recursive function that a compiler cannot unroll into a loop, it would merely call the function again.
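The effect is much like writing the function iteratively by hand. For comparison, here is a hand-written iterative version of POW (a sketch, not compiler output; it assumes y is a non-negative integer):

(defun pow-loop (x y)
  ;; Same computation as POW, with an explicit loop and accumulator
  ;; instead of recursive calls.
  (let ((result 1))
    (dotimes (i y result)
      (setf result (* result x)))))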

From what I know the computer has to completely analyze a function before it can be defined.
When the compiler sees that one defines a function POW, it tells itself: now we are defining the function POW. If it then sees a call to POW inside the definition, the compiler says to itself: oh, this seems to be a call to the function I'm currently compiling, and it can then generate code to make a recursive call.

A function is just a block of code. Its name is just a convenience, so you don't have to calculate the exact address it will end up at. The language implementation turns names into the addresses the program jumps to when it executes.
One function calls another by storing the address of the next instruction in the current function on the stack, perhaps pushing arguments onto the stack as well, and then jumping to the address of the called function. The called function jumps back to the return address it finds, so that control goes back to the caller. There are several calling conventions, implemented by the language, that determine which side does what. CPUs don't really have function support; just as there is no such thing as a while loop at the CPU level, functions are emulated.
Just as functions have names, arguments have names too, but they are merely offsets from the top of the stack, just like the return address. When a function calls itself, it simply pushes a new return address and new arguments onto the stack and jumps to itself. The top of the stack is now different, so the same variable names refer to addresses unique to this call: the x and y of the previous call live somewhere else than the current x and y. In fact, no special treatment is needed for a function calling itself compared with calling anything else.
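To make that concrete, here is a sketch that simulates the mechanism with an explicit stack in place of the hardware one (pow-explicit-stack is an illustrative name; it mirrors the pow example above):

(defun pow-explicit-stack (x y)
  ;; Each loop iteration plays the role of one recursive call: push a
  ;; "frame" recording the pending multiplication, then "jump" back
  ;; with a smaller y.
  (let ((stack '()))
    (loop until (zerop y)
          do (push x stack)      ; save this call's pending (* x ...)
             (decf y))
    ;; Now "return" through the frames, performing each pending multiply.
    (let ((value 1))
      (dolist (pending stack value)
        (setf value (* pending value))))))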
Historically, the first high-level language, Fortran, did not support recursion. A function could call itself, but when the self-call returned, control went back to the original caller without executing the rest of the function after the self-call. Fortran itself would have been impossible to write without recursion, so while its implementation used recursion, it did not offer it to the programmers who used the language. This limitation is the reason John McCarthy invented Lisp.

I think to see how this can work in general, and in particular in cases where recursive calls can't be turned into loops, it's worth thinking about how a general compiled language might work, because the problems are not different.
Let's imagine how a compiler might turn this function into machine code:
(defun foo (x)
  (+ x (bar x)))
And let's assume that it does not know anything about bar at the time of compilation. Well, it has two options.
It can compile foo in such a way that the call to bar is translated into a set of instructions which say: 'look up the function definition stored under the name bar, whatever it currently is, and arrange to call that function with the right arguments'.
It can compile foo in such a way that there is a machine-level function call to a function, but the address of that function is left as a placeholder of some kind. And it can then attach some metadata to foo which says: 'before this function is called you need to find the function named bar, find its address, splice it into the code in the right place, and remove this metadata'.
Both of these mechanisms allow foo to be defined before it's known what bar is. And note that instead of bar I could have written foo: these mechanisms deal with recursive calls too. They differ apart from that, however.
The first mechanism means that, every time foo is called, it needs to do some kind of dynamic lookup for bar, which will involve some overhead (but this overhead can be pretty small):
as a consequence of this the first mechanism will be slightly slower than it might be;
but, also as a consequence of this, if bar gets redefined, then the new definition will get picked up, which is a very desirable thing for an interactive language, which Lisp implementations usually are.
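A minimal REPL demonstration of that late binding (this is the standard behavior for calls to global functions in CL):

(defun bar (x) (* x 2))
(defun foo (x) (+ x (bar x)))
(foo 1)                   ; => 3
(defun bar (x) (* x 10))  ; redefine bar...
(foo 1)                   ; => 11: foo picks up the new bar without recompilation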
The second mechanism means that, after foo has all its references to other functions linked in to it, then the calls happen at the machine level:
this means they will be quick;
but that redefinition will be, at best, more complicated or, at worst, not possible at all.
The second of these implementations is close to how traditional compilers compile code: they compile code leaving a bunch of placeholders with associated metadata saying what names those placeholders correspond to. A linker (sometimes known as a link-loader, or loader) then grovels over all the files produced by the compiler, as well as other libraries of code, and resolves all these references, resulting in a bit of code which can actually be run.
A very simple-minded Lisp system might work entirely by the first mechanism (I am pretty sure that this is how Python works, for instance). A more advanced compiler will probably work by some combination of the first and second mechanism. As an example of this, CL allows the compiler to make assumptions that apparent self-calls in functions really are self-calls, and so the compiler may well compile them as direct calls (essentially it will compile the function and then link it on the fly). But when compiling code in general, it might call 'through the name' of the function.
There are also more-or-less heroic strategies implementations could use: for instance, at the first call of a function, link it, on the fly, to all the things it refers to, and note in their definitions that if they change then this thing needs to be unlinked as well so it all happens again. These kinds of tricks once seemed implausible, but compilers for languages like JavaScript do things at least as hairy as this all the time now.
Note that compilers and linkers for modern systems actually do something more complicated than I've described, because of shared libraries &c: what I described is more-or-less what happened pre shared-library.

Related

Knowing when what you're looking at must be a macro

I know there is macro-function, explained here, which allows you to check, but is it also possible, simply by reading Lisp source, to sometimes infer of what you're looking at "that must be a macro"? (Assuming, of course, you have never seen the function/macro before.)
I'm fairly sure the answer is yes, but as this seems so fundamental, I thought worth asking, especially because any nuances on this may be valuable & interesting to know about.
In Paul Graham's ANSI Common Lisp, p70, he is describing how to use defstruct.
When I see (defstruct point x y), were I to know absolutely nothing about what defstruct was, this could just as well be a function.
But when I see
(defstruct polemic
  (subject "foo")
  (effect "bar"))
I know that must be a macro because (let's assume) I also know that subject and effect are undefined functions. (I know that because they signal an undefined-function error when called "at the top level", if that's the right term.)
If the two list arguments to defstruct above were quoted, it would not be so simple. Because they're not quoted, it must be a macro.
Is it as simple as that?
I've changed the field names slightly from those used in the book to make this question clearer.
Finally, Graham writes:
"We can specify default values for structure fields by enclosing the field name and a default expression in a list in the original definition"
What I'm noticing is that that's true, but it is not a (quoted) list. Would any readers of this post have phrased the above sentence differently (given that macros haven't been introduced in the book yet, though I have a basic awareness of what they are)?
My feeling is that it's not a "data list" those default expressions are enclosed in (apologies for bad terminology); I'm seeking how rightly to conceptualise this.
In general, you're right: if there's some nesting inside the call and you are sure that the cars of the nested lists aren't functions, it's a macro.
Also, almost always, def-something and with-something are macros.
But there's no guarantee. The question is, what are you trying to accomplish? Some code walking/transformation, or external processing (like in an editor)? For the latter, you should keep in mind that full control is possible only if you perform code evaluation, although heuristics (like in Emacs) can take you pretty far. Or maybe you just want to develop your intuition for faster code reading...
There is a set of conventions that identifies quite clearly which forms are supposed to be macros, simply by mimicking the syntax of existing macros or special operators of CL.
For example, the following is a mix of various imaginary macros, but even without knowing their definition, the code shouldn't be too hard to figure out:
(defun/typed example ((id (integer 0 10)))
  (with-connection (connection (connect id))
    (do-events (event connection)
      (event-case event
        (:quit (&optional code) (return code))))))
The usual advice about macros is to avoid them if possible, so if you spot something that doesn't make sense as a lisp expression, it probably is, or is enclosed in, a macro.
(defstruct point x y)
[...] were I to know absolutely nothing about what defstruct was, this could just as well be a function.
There are various hints that this is not a function. First of all, the name starts with def. Then, if defstruct were a function, point, x and y would all be evaluated before calling it, which means the code would be relying on global variables, even though they are not wearing earmuffs (e.g. *point*, *x*, *y*), and you probably won't find any definition for them in the preceding forms (or later in the same compilation unit). Also, if it were a function, the result would be discarded, since it is not used (this is a toplevel form). That only indicates the probable presence of side effects, but still, this would be unusual.
A top-level function with side-effects would look like this instead, with quoted data:
(register-struct 'point '(x y))
Finally, there are cases where you cannot easily guess if you are using a macro or a function:
(my-get object :slot)
This could be a function call, or you could have a macro that turns the above into (aref object 0) (assuming :slot is the zeroth slot in object, because all your objects are assumed to be of a certain custom type backed by a vector). You could also have compiler macros. In case of doubt, try to macroexpand it and look at the documentation.
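For instance, macroexpand-1 returns a macro's expansion but hands a non-macro form back unchanged (the defstruct expansion itself is large and implementation-specific, so it is elided here):

(macroexpand-1 '(defstruct point x y))
;; => <a large implementation-specific expansion>, T

(macroexpand-1 '(list 1 2))
;; => (LIST 1 2), NIL  -- LIST is a function, so the form comes back unchanged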

What do "continuations" mean in functional programming?(Specfically SML)

I have read a lot about continuations, and a very common definition I saw is that a continuation represents the control state of a program.
I am taking a functional programming course taught in SML.
Our professor defined continuations to be:
"What keeps track of what we still have to do"
; "Gives us control of the call stack"
A lot of his examples revolve around trees. Before this chapter, we did tail recursion. I understand that tail recursion avoids keeping the recursively called functions on the stack by having an additional argument to "build up" the answer; reversing a list, for example, builds the result in a new accumulator that we add to as we go. Also, he said something about functions being called (but not evaluated) until we reach the end, where we work backwards substituting values. He said an improved version of tail recursion would be using CPS (Continuation Passing Style).
Could someone give a simplified explanation of what continuations are and why they are favoured over other programming styles?
I found this stackoverflow link that helped me, but still did not clarify the idea for me:
I just don't get continuations!
Continuations simply treat "what happens next" as first class objects that can be used once unconditionally, ignored in favour of something else, or used multiple times.
To address what Continuation Passing Style is, here is some expression written normally:
let h x = f (g x)
g is applied to x and f is applied to the result.
Notice that g does not have any control. Its result will be passed to f no matter what.
in CPS this is written
let h x next = (g x (fun result -> f result next))
g not only has x as an argument, but a continuation that takes the output of g and returns the final value. This function calls f in the same manner, and gives next as the continuation.
What happened? What changed that made this so much more useful than f (g x)? The difference is that now g is in control. It can decide whether to use what happens next or not. That is the essence of continuations.
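Here is the same shape written out in Common Lisp (a sketch with made-up function names; each CPS function receives a continuation k and decides what to do with it):

;; Direct style would be (defun h (x) (f (g x))).
;; In CPS every function takes an extra continuation argument:
(defun g-cps (x k)
  (funcall k (* x x)))    ; "g" squares x, then hands control to k

(defun f-cps (x k)
  (funcall k (+ x 1)))    ; "f" adds one, then hands control to k

(defun h-cps (x k)
  (g-cps x (lambda (result) (f-cps result k))))

(h-cps 3 #'print)         ; prints 10: (3*3)+1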
An example of where continuations arise is imperative programming languages where you have control structures. While loops, blocks, ordinary statements, breaks and continues are all generalized through continuations, because these control structures take what happens next and decide what to do with it. For example, we can have
...
while (condition1) {
    statement1;
    if (condition2) break;
    statement2;
    if (condition3) continue;
    statement3;
}
return statement3;
...
The while, the block, the statement, the break and the continue can all be described in a functional model through continuations. Each construct can be considered to be a function that accepts:
- the current environment, containing:
  - the enclosing scopes
  - optional functions accepting the current environment and returning a continuation to:
    - break from the innermost loop
    - continue from the innermost loop
    - return from the current function
  - all the blocks associated with it (if-blocks, while-blocks, etc.)
- a continuation to the next statement
and returns the new environment.
In the while loop, the condition is evaluated according to the current environment. If it is evaluated to true, then the block is evaluated and returns the new environment. The result of evaluating the while loop again with the new environment is returned. If it is evaluated to false, the result of evaluating the next statement is returned.
With the break statement, we look up the break function in the environment. If no function is found, then we are not inside a loop and we give an error. Otherwise we give the current environment to the function and return the evaluated continuation, which would be the statement after the while loop.
With the continue statement the same would happen, except the continuation would be the while loop.
With the return statement the continuation would be the statement following the call to the current function, but it would remove the current enclosing scope from the environment.
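One way to see the break case concretely in Common Lisp: block establishes an exit point, and return-from invokes it, which is exactly the "continuation after the loop" described above (a sketch; the course's SML material would express this differently):

(defun find-first-even (list)
  (block search                    ; captures "what happens after the loop"
    (dolist (x list)
      (when (evenp x)
        (return-from search x)))   ; "break": invoke the exit continuation
    nil))

(find-first-even '(1 3 4 5))       ; => 4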

Percent sign in defun and defstruct

I am teaching myself Common Lisp. I have been looking at an example of Conway's game of life, and there is a piece of syntax I do not understand.
The complete code is available here. The part in particular I am having trouble with is as follows:
(defstruct (world (:constructor %make-world))
  current
  next)

(defun make-world (width height)
  (flet ((make-plane (width height)
           (make-array (list width height)
                       :element-type 'bit
                       :initial-element 0)))
    (%make-world
     :current (make-plane width height)
     :next (make-plane width height))))
I am wondering, first, what is the significance of the percent-sign in %make-world? Second, why does the constructor specify two different names? (make-world and %make-world) I have seen this syntax in use before, but the names are always the same. It seems like there is some deeper functionality, but it is escaping me.
There are several naming conventions in the Lisp world when it comes to identifiers. For an overview see: http://www.cliki.net/Naming+conventions
Making objects or structures can be done with system generated functions. DEFSTRUCT will create a MAKE-FOO function with init values for the slots as keyword arguments.
Sometimes people prefer functions with normal positional arguments - it's shorter to write and the arguments have to be given when calling the function - you can't omit them.
Here, in this case, there is a need to name the DEFSTRUCT-generated function in such a way that it does not collide with the name the user is supposed to use. So %MAKE-FOO says that this is an internal helper function of the library, which is expected NOT to be called by user-level code.
My Lisp is a little rusty, but I believe it goes like this:
The % sign has no special meaning. I've seen it used for internal functions (e.g. defined by labels), but nothing will stop you from calling such a function normally. If you look at the defstruct documentation, you'll see that (:constructor %make-world) defines a named constructor %make-world (by default the constructor would be called make-world). This constructor can be used to create world structs, initializing fields using named parameters.
The function make-world exists to make creating these structs easier. The thing is, current and next should be 2-dimensional arrays, but it's more convenient if, instead of passing these arrays to the constructor, you could just say what the dimensions are and a function would create those arrays for you. Which is exactly what make-world does here. It first defines an internal function make-plane, which can create an array, then uses it to create two arrays and passes them to the constructor %make-world.
In line with the usual usage of the % character (again, this is just a convention), it tells you that, as a programmer wishing to use the world struct, you should not use the %make-world constructor, but the make-world function instead.
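Usage then looks like this (world-current is one of the accessors defstruct generates automatically):

(defparameter *w* (make-world 10 10))  ; users call the wrapper...
(world-current *w*)                    ; => a 10x10 bit array of zeros
;; ...while %make-world remains an internal detail of the library.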

What does it mean to "open code" something in Common Lisp?

In the SBCL user manual there are several references to the term "open code". Common Lisp hackers also use this term when referring to optimizing code.
Could you please explain what it means to "open code" something and give an example of how it works?
What It Is
Open-coding, AKA inlining, means replacing a function call with the called function's compiled code, inserted inline at the call site.
The idea is that funcall and apply are expensive (they require saving and restoring the stack &c) and replacing them with the few operations which constitute the function can be beneficial.
E.g., the function 1+ is a single instruction when the argument is a fixnum (which it usually is in practice), so turning the funcall into the two parallel branches (fixnum and otherwise) would be a win.
How to Control it
Declarations
The user can control this optimization explicitly with the inline declaration (which implementations are free to ignore).
The user can also influence this optimization by the optimize declaration.
Both will affect inlining the code of a function defined just as a function (see below).
Macros
The "old" way is to implement the function as a macro. E.g., instead of
(defun last1f (list)
  (car (last list)))
write
(defmacro last1m (list)
  `(car (last ,list)))
and last1m will always be open-coded. The problem with this approach is that you cannot use last1m as a function - you cannot pass it to, say, mapcar.
Thus Common Lisp has an alternative way - compiler macros, which tell the compiler how to transform the form before compiling it:
(define-compiler-macro last1f (list)
  ;; use in conjunction with (defun last1f ...)
  `(car (last ,list)))
See also the excellent examples on the CLHS page for define-compiler-macro.
Its Effects on Optimization
A comment asked about the effects of inlining, i.e., what optimizations result from it. E.g., constant propagation in addition to eliminating a function call.
The answer to this question is left to implementations.
IOW, the CL standard does not specify what optimizations must be done.
However, Minimal Compilation implies that if an implementation does something (e.g. constant folding), it will be done for compiler macros too.
For more details, you should compare the results of disassemble with and without the declarations and whatnot and see the effects.
For explanations, you should ask the vendor (e.g., by using the appropriate tag here - e.g., sbcl, clisp, &c).
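For example, a minimal experiment along those lines might look like this (behavior varies by implementation, and the inline declaration may be ignored):

(declaim (inline last1f))
(defun last1f (list)
  (car (last list)))

(defun caller (l)
  (last1f l))

;; Compare the output with and without the DECLAIM above; with inlining
;; in effect, CALLER may contain no call to LAST1F at all.
(disassemble 'caller)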

How does one implement a "stackless" interpreted language?

I am making my own Lisp-like interpreted language, and I want to do tail call optimization. I want to free my interpreter from the C stack so I can manage my own jumps from function to function and my own stack magic to achieve TCO. (I really don't mean stackless per se, just the fact that calls don't add frames to the C stack. I would like to use a stack of my own that does not grow with tail calls). Like Stackless Python, and unlike Ruby or... standard Python I guess.
But, as my language is a Lisp derivative, all evaluation of s-expressions is currently done recursively (because it's the most obvious way I thought of to do this nonlinear, highly hierarchical process). I have an eval function, which calls a Lambda::apply function every time it encounters a function call. The apply function then calls eval to execute the body of the function, and so on. Mutual stack-hungry non-tail C recursion. The only iterative part I currently use is to eval a body of sequential s-expressions.
(defun f (x y)
  (a x y))          ; tail call! goto instead of call.
                    ; (do not grow the stack, keep return addr)

(defun a (x y)
  (+ x y))

; ...

(print (f 1 2))     ; how does the return work here? how does it know it's
                    ; supposed to return the value here to be used by print,
                    ; and how does it know how to continue execution here??
So, how do I avoid using C recursion? Or can I use some kind of goto that jumps across C functions? longjmp, perhaps? I really don't know. Please bear with me; I am mostly self- (Internet-) taught in programming.
One solution is what is sometimes called "trampolined style". The trampoline is a top-level loop that dispatches to small functions that do some small step of computation before returning.
I've sat here for nearly half an hour trying to contrive a good, short example. Unfortunately, I have to do the unhelpful thing and send you to a link:
http://en.wikisource.org/wiki/Scheme:_An_Interpreter_for_Extended_Lambda_Calculus/Section_5
The paper is called "Scheme: An Interpreter for Extended Lambda Calculus", and section 5 implements a working Scheme interpreter in an outdated dialect of Lisp. The secret is in how they use the **CLINK** instead of a stack. The other globals are used to pass data around between the implementation functions like the registers of a CPU. I would ignore **QUEUE**, **TICK**, and **PROCESS**, since those deal with threading and fake interrupts. **EVLIS** and **UNEVLIS** are, specifically, used to evaluate function arguments. Unevaluated args are stored in **UNEVLIS** until they are evaluated and moved into **EVLIS**.
Functions to pay attention to, with some small notes:
MLOOP: MLOOP is the main loop of the interpreter, or "trampoline". Ignoring **TICK**, its only job is to call whatever function is in **PC**. Over and over and over.
SAVEUP: SAVEUP conses all the registers onto the **CLINK**, which is basically the same as when C saves the registers to the stack before a function call. The **CLINK** is actually a "continuation" for the interpreter. (A continuation is just the state of a computation. A saved stack frame is technically a continuation, too. Hence, some Lisps save the stack to the heap to implement call/cc.)
RESTORE: RESTORE restores the "registers" as they were saved in the **CLINK**. It's similar to restoring a stack frame in a stack-based language. So, it's basically "return", except some function has explicitly stuck the return value into **VALUE**. (**VALUE** is obviously not clobbered by RESTORE.) Also note that RESTORE doesn't always have to return to a calling function. Some functions will actually SAVEUP a whole new computation, which RESTORE will happily "restore".
AEVAL: AEVAL is the EVAL function.
EVLIS: EVLIS exists to evaluate a function's arguments, and apply a function to those args. To avoid recursion, it SAVEUPs EVLIS-1. EVLIS-1 would just be regular old code after the function application if the code was written recursively. However, to avoid recursion, and the stack, it is a separate "continuation".
I hope I've been of some help. I just wish my answer (and link) was shorter.
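For the flavor of it, though, here is a bare-bones trampoline sketched in Common Lisp, far simpler than the paper's interpreter (illustrative names; a real interpreter would thread much more state through):

(defun trampoline (thunk)
  ;; The top-level loop: keep invoking "next step" thunks until a step
  ;; returns a final value instead of another thunk.
  (loop while (functionp thunk)
        do (setf thunk (funcall thunk)))
  thunk)

;; Mutually recursive even/odd, where each "call" returns a thunk instead
;; of recursing, so the control stack never grows:
(defun even-p* (n)
  (if (zerop n) t (lambda () (odd-p* (1- n)))))

(defun odd-p* (n)
  (if (zerop n) nil (lambda () (even-p* (1- n)))))

(trampoline (lambda () (even-p* 1000000)))  ; => T, at constant stack depth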
What you're looking for is called continuation-passing style. This style adds an additional item to each function call (you could think of it as a parameter, if you like), that designates the next bit of code to run (the continuation k can be thought of as a function that takes a single parameter). For example you can rewrite your example in CPS like this:
(defun f (x y k)
  (a x y k))

(defun a (x y k)
  (+ x y k))

(f 1 2 print)
The implementation of + will compute the sum of x and y, then pass the result to k sort of like (k sum).
Your main interpreter loop then doesn't need to be recursive at all. It will, in a loop, apply each function application one after another, passing the continuation around.
It takes a little bit of work to wrap your head around this. I recommend some reading materials such as the excellent SICP.
Tail recursion can be thought of as reusing, for the callee, the same stack frame that you are currently using for the caller. So you can just re-set the arguments and goto the beginning of the function.
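Expressed directly in Common Lisp with tagbody/go (a sketch of that idea, not the asker's interpreter):

(defun my-gcd (a b)
  ;; Tail-recursive GCD with the self-call replaced by "re-set the
  ;; arguments and goto the start", reusing one frame throughout.
  (tagbody
   start
     (unless (zerop b)
       (psetf a b
              b (mod a b))   ; re-set the arguments in parallel...
       (go start)))          ; ...and jump back instead of calling
  a)

(my-gcd 48 18)  ; => 6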
