Lisp %var naming convention - common-lisp

On the https://www.cliki.net/Naming+conventions page I read:
low-level, fast, dangerous function, or Lisp system specific implementation of foo
Can someone put this into plainer words, perhaps with an example of those contexts?

Take this function:
(defun add (x y)
  (+ x y))
which does general addition. It supports integers, floats, complex numbers and ratios as arguments - all the numeric types defined in Lisp.
However if you have:
(defun add (x y)
  (declare (fixnum x y) (optimize speed (safety 0) (debug 0)))
  (the fixnum (+ x y)))
this tells the compiler to optimize the function for fixnum values that fit in an assembly register. When you use an implementation that compiles down to assembly this results in very efficient code, as you can check with (disassemble (compile 'add)). For example in Allegro CL:
cl-user(10): (disassemble 'add)
;; disassembly of #<Function add>
;; formals: x y
;; code start: #x10008ae4740:
0: 48 01 f7 addq rdi,rsi
3: f8 clc
4: 4c 8b 74 24 10 movq r14,[rsp+16]
9: c3 ret
However this faster code comes at the cost of not having any error checking: the code assumes you pass in exactly two fixnum arguments, no more and no fewer, and that you promise the result will not overflow the fixnum range - so not e.g. (add most-positive-fixnum most-positive-fixnum).
If you break this promise by passing floats, like (add 3.4 1.2), or only one argument, like (add 3), you are asking for big trouble - this might corrupt data structures or even crash the Lisp.
The second function could be called %add-2-fixnums to signal it's special, not intended for general use but instead specialized for certain arguments, and callers need to be very careful.

The function without % is the general interface into some functionality and may even check its arguments. It then calls a low-level implementation.
The same function with the prefix % then might implement the functionality efficiently, even in a platform specific way.
If this function then uses a macro for even more specific low-level code generation, one might prefix the macro with %%.
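For illustration, here is a minimal sketch of how such a pair might look, using the %add-2-fixnums name suggested above (the exact split between the two functions is of course up to the author of the code):

(defun %add-2-fixnums (x y)
  ;; fast, unchecked core: assumes the caller has already validated its arguments
  (declare (fixnum x y)
           (optimize speed (safety 0) (debug 0)))
  (the fixnum (+ x y)))

(defun add (x y)
  ;; safe, general entry point: checks its arguments, then calls the %-function
  (check-type x fixnum)
  (check-type y fixnum)
  (%add-2-fixnums x y))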
These other cases can be seen, too:
type specific functions
inlined functions
implementation specific functions
very low-level code working near the Lisp implementation level
Lisp assembly functions (functions written in inline assembler)
Example: if one checks the source code of Clozure Common Lisp, there is a lot of use of the % naming convention.

Related

What is the name of this q/kdb+ feature? Does any flavor of LISP implement it? How?

The q programming language has a feature (which this tutorial calls "function projection") where a function of two or more parameters can be called with fewer parameters than it requires, but the result is an intermediate object, and the function will not be executed until all remaining parameters are passed; one way to see it is that functions behave like multi-dimensional arrays, so that (f[x])[y] is equivalent to f[x;y]. For example ...
q)add:{x+y}
q)add[42;]
{x+y}[42;]
q)add[42;][3]
45
q)g:add[42;]
q)g[3]
45
Since q does not have lexical scoping, this feature becomes very useful for obtaining lexical-scoping-like behavior by passing the necessary variables to an inner function as a partial list of parameters; e.g. a print-parameter decorator can be constructed using this feature:
q)printParameterDecorator:{[f] {[f;x] -1 "Input: ",string x; f x}f};
q)f: printParameterDecorator (2+);
q)f 3
Input: 3
5
My questions:
Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
Does any variety of LISP implement this feature? Which ones?
Could you provide some example LISP code please?
Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
No, you usually call it partial application.
Does any variety of LISP implement this feature? Which ones?
Practically all Lisps allow you to partially apply a function, but usually you need to write a closure explicitly. For example, in Common Lisp:
(defun add (x y)
  (+ x y))
The utility function curry from alexandria can be used to create a closure:
USER> (alexandria:curry #'add 42)
#<CLOSURE (LAMBDA (&REST ALEXANDRIA.1.0.0::MORE) :IN CURRY) {1019FE178B}>
USER> (funcall * 3) ;; asterisk (*) is the previous value, the closure
45
The resulting closure is equivalent to the following one:
(lambda (y) (add 42 y))
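If you do not want to depend on Alexandria, a rough hand-rolled version of curry is only a few lines (my-curry is a made-up name for this sketch; alexandria:curry is the more polished version):

(defun my-curry (fn &rest fixed-args)
  "Return a closure that calls FN with FIXED-ARGS prepended to its arguments."
  (lambda (&rest more-args)
    (apply fn (append fixed-args more-args))))

;; (funcall (my-curry #'add 42) 3) => 45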
Some functional languages like OCaml only allow functions to have a single parameter, but syntactically you can define functions of multiple parameters:
(fun x y -> x + y)
The above is equivalent to:
(function x -> (function y -> x + y))
See also What is the difference between currying and partial application?
N.B.: in fact the q documentation refers to it as partial application:
Notationally, projection is a partial application in which some arguments are supplied and the others are omitted
I think another way of doing this:
q)f:2+
q)g:{"result: ",string x}
q)'[g;f]3
"result: 5"
It is function composition: 3 is passed to f, then the result from f is passed to g.
I'm not sure if it is LISP, but it could achieve the same result.
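For comparison, the same composition written out in Common Lisp, with a hand-rolled two-argument compose (compose2 is a made-up name; alexandria:compose provides a general version):

(defun compose2 (g f)
  ;; return a function that applies F first, then G to F's result
  (lambda (x) (funcall g (funcall f x))))

;; (funcall (compose2 (lambda (x) (format nil "result: ~a" x))
;;                    (lambda (x) (+ 2 x)))
;;          3)
;; => "result: 5"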

Using deftransform/defknown in SBCL internals to get the compiler to transform user authored functions

At the end of section 6.5 in the current SBCL manual, we have the following quote:
If your system's performance is suffering because of some construct which could in principle be compiled efficiently, but which the SBCL compiler can't in practice compile efficiently, consider writing a patch to the compiler and submitting it for inclusion in the main sources. Such code is often reasonably straightforward to write; search the sources for the string “deftransform” to find many examples (some straightforward, some less so).
I've been playing around and found the likes of sb-c::defknown and sb-c::deftransform but thus far have had little luck in successfully adding any new transforms that do anything.
Let's pretend I have the following 3 toy functions:
(defun new-+ (x y)
  (+ x y))

(defun fixnum-+ (x y)
  (declare (optimize (speed 3) (safety 0))
           (fixnum x y))
  (+ x y))

(defun string-+ (x y)
  (declare (optimize (speed 3) (safety 0))
           (string x y))
  (concatenate 'string x y))
As a purely toy example, let's say we wanted to tell the compiler that it could transform calls to my user-defined function new-+ into calls to either fixnum-+ or string-+.
The condition for the compiler transforming (new-+ x y) into (fixnum-+ x y) would be knowing that the arguments x and y are both of type fixnum, and the condition for transforming into (string-+ x y) would be knowing that the arguments x and y are both of type string.
So the questions:
Can I actually do this?
What are the actual mechanics of doing so and generating other user based transforms/extensions?
Any reading or sources apart from manually reading through the source to discover more info regarding this?
If I can't do this using the likes of deftransform, is there any other way I could do so?
Note: I'm aware of the operations and nature of macros and generic functions in general Common Lisp coding, and don't consider using them an answer to this question, since I'm specifically curious about extending the SBCL internals and interacting with its compiler.
You can achieve what you want in portable Common Lisp using define-compiler-macro.
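For instance, a minimal sketch of such a compiler macro for the new-+ function from the question might look like the following; note that this simple version only recognises literal arguments, while the longer answer below shows how to consult declared types through the environment:

(define-compiler-macro new-+ (&whole form x y)
  ;; dispatch on literal argument forms only; leave everything else alone
  (cond ((and (typep x 'fixnum) (typep y 'fixnum))
         `(fixnum-+ ,x ,y))
        ((and (stringp x) (stringp y))
         `(string-+ ,x ,y))
        (t form)))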
AFAIK reading the SBCL sources is the only way to learn how deftransform works. But before diving into the SBCL sources, check out Paul Khuong's Starting to Hack on SBCL, or at the very least The Python Compiler for CMU Common Lisp that it links to, to get an overview of how SBCL works.
I now attempt to provide a broad overview that answers my questions and may point others towards constructively investigating similar directions.
Can I actually do this?
Yes. Though depending on the specifics of how and why, you may have a choice of options available to you, and they may have variable levels of portability between Common Lisp implementations.
What are the actual mechanics of doing so and generating other user based transforms/extensions?
I answer this with respect to two possible methods that the programmer may choose to get started, and which seem most applicable.
For both examples, I reiterate that, with limited reflection on the topic, I think it bad form for a transform to change the input/output mapping of a function. I do so here for demonstration purposes only, to verify that the transformations I'm implementing are actually taking place.
I actually had quite a difficult time testing whether my transformations were actually happening: SBCL in particular seems quite happy to optimise certain expressions and forms on its own, and there are additional pieces of information you can make available to the compiler that are not covered here. Additionally, there may be other transformations available, so just because your transform isn't used doesn't necessarily mean it isn't "working".
Environments and Define-Compiler-Macro Extensions using Common Lisp the Language 2
I was previously under the impression that DEFINE-COMPILER-MACRO was relatively limited in its abilities, working only on types connected with literal values, but this is not necessarily the case.
To demonstrate this, I use three user-defined functions and a compiler macro.
First: We will begin with a general addition function gen+ that decides at run-time to either add two numbers together, or concatenate two strings:
(defun gen+ (x y)
  (if (and (numberp x)
           (numberp y))
      (+ x y)
      (concatenate 'string x y)))
But say we know at compile time that in certain instances, only strings will be fed to this function. Let's define our specialised string addition function, and to prove it's actually being used, we'll do a very bad thing stylistically and additionally concatenate the string "kapow" as well:
(defun string+ (x y)
  (declare (optimize (speed 3) (safety 0))
           (string x y))
  (concatenate 'string x y "kapow"))
The following function is a very simple convenience function that checks an environment to establish whether the declared type of the variable bound in that environment is eq to STRING. We're using a non-ANSI function here from Common Lisp the Language 2. In SBCL, the function VARIABLE-INFORMATION and other cltl2 functions are available in the sb-cltl2 package.
(defun env-stringp (symbol environment)
  (eq 'string
      (cdr (assoc 'type
                  (nth-value 2 (sb-cltl2:variable-information symbol environment))))))
Lastly, we use DEFINE-COMPILER-MACRO to generate the transformation. I've tried to name things in this code differently from other examples I've seen so that people can follow along and not get mixed up with what variable/symbol is in which scope/context. A couple of things I didn't know previously about DEFINE-COMPILER-MACRO:
The variable that immediately follows the &whole parameter is a variable which represents the form of the initial call. In our example it will be bound to the list (GEN+ A B)
arg1 is bound to the symbol A
arg2 is bound to the symbol B
The &environment parameter says that within this macro, the symbol ENV will be bound to the environment in which the macro is being evaluated. This is what lets us "kind of step back out of the macro" and check the surrounding code for declarations regarding the type of the variables represented by the symbols bound to 'ARG1' and 'ARG2'
In this definition, we tell the compiler macro that if the user has declared the parameters of GEN+ to be strings, then replace the call to (GEN+ ARG1 ARG2) with a call to (STRING+ ARG1 ARG2).
Note that because the condition of this transformation is the result of a user-defined operation on the environment, if the parameters to GEN+ are literal strings, the transformation will not be triggered, because the environment does not see that the variables have been declared strings. To do that, you would have to add another option and transformation to explicitly check the types of the values in ARG1 and ARG2 as per a traditional use of DEFINE-COMPILER-MACRO. This can be left as an exercise for the reader. But beware about the utility of doing so, because SBCL, for instance, might constant-fold your expression rather than use your transformation anyway.
(define-compiler-macro gen+ (&whole form arg1 arg2 &environment env)
  (cond ((and (env-stringp arg1 env)
              (env-stringp arg2 env))
         `(string+ ,arg1 ,arg2))
        (t form)))
Now we can test it with a simple call with type declarations:
(let ((a "bob")
      (b "dole"))
  (declare (string a b))
  (gen+ a b))
This should return the string "bobdolekapow" as the call to GEN+ was transformed into a call to STRING+ based on the declared types of the variables A and B, not just literal types.
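One way to double-check that the transformation really fires (a sketch; the exact disassembly output varies by SBCL version) is to compile a small lambda with the declarations in place and inspect it:

(let ((fn (compile nil '(lambda (a b)
                          (declare (string a b))
                          (gen+ a b)))))
  ;; a call to STRING+ should show up in the disassembly if the compiler
  ;; macro fired, and the result should carry the "kapow" marker
  (disassemble fn)
  (funcall fn "bob" "dole"))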
Using Basic (defknown)/(deftransform) Combinations with the SBCL Implementation Compiler
The previous technique is indeed potentially useful, more powerful and flexible than transforming on the types of literals, and while not standard ANSI Common Lisp, is more portable/adaptable to other implementations than the technique that follows.
A reason you might forgo the former technique in favour of the one that follows is that the former doesn't get you everything. You still had to declare the types of the variables a and b and write the user-defined function to extract the declared type information from the environment.
If you can interact directly with the SBCL compiler however, with the cost of potentially some brittle-ness and extreme non-portability, you now gain the ability to hack into the compiler itself and gain the benefits of things like type propagation: you might not need to explicitly inform the compiler of the types of A and B for it to implement your transformation.
For our example, we will implement a very basic transformation on the functions wat and string-wat, which are identical in form to our previous functions gen+ and string+.
Understand there are many more pieces of information and optimisation you can feed the SBCL compiler that are not covered here. And if anyone more experienced with SBCL internals wants to correct/extend anything regarding my impressions here, please comment and I'll be happy to update my answer:
First we tell the compiler about the existence and type signature of wat. We do this by calling defknown in the sb-c package, informing it that wat takes two parameters of any type, (T T), and that it returns a single value of any type, *:
(sb-c:defknown wat (T T) *)
Then we define a simple transform using sb-c:deftransform, essentially saying when the two parameters fed to wat are strings, we transform the code into a call to string-wat.
(sb-c:deftransform wat ((x y) (string string) *)
  `(string-wat x y))
The forms of wat and string-wat for completeness:
(defun wat (x y)
  (if (and (numberp x)
           (numberp y))
      (+ x y)
      (concatenate 'string x y)))

(defun string-wat (x y)
  (declare (optimize (speed 3) (safety 0))
           (string x y))
  (concatenate 'string x y "watpow"))
And this time a demonstration in SBCL using bound variables but no explicit type declarations:
(let ((a (concatenate 'string "bo" "b"))
      (b (concatenate 'string "dole")))
  (wat a b))
And the returned string should be "bobdolewatpow".
Any reading or sources apart from manually reading through the source to discover more info regarding this?
I haven't been able to find anything much about this out there, and would say that to get much deeper, you're going to have to start trawling through some source code.
The SBCL GitHub mirror is currently available at https://github.com/sbcl/sbcl.
User #PuercoPop has suggested background reading of Starting to Hack on SBCL and The Python Compiler for CMU Common Lisp, albeit I am including a link to a .pdf version rather than a .ps version commonly linked to.

What can you do with macros that can't be done with procedures? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I've been reading SICP trying to understand Scheme, especially macros. I noticed that SICP doesn't talk about macros at all. I read on Paul Graham's site that:
The source code of the Viaweb editor was probably about 20-25% macros. Macros are harder to write than ordinary Lisp functions, and it's considered to be bad style to use them when they're not necessary. So every macro in that code is there because it has to be.
So I desperately wanted to know how to write macros and what to use them for, so I read this site about macros: http://www.willdonnelly.net/blog/scheme-syntax-rules/
But that site just explains how to write a "for" macro. I think Paul Graham talks about CL and the other blog talks about Scheme, but they are partly the same. So, what can be an example of something that cannot be done with ordinary procedures and thus must be done with macros?
EDIT: I have already seen a similar question, but wanted to ask if there was some sort of crazy algorithm that you could do with macros that couldn't possibly be done with procedures (other than syntactic sugar as described in that question's answer).
Macros are a tricky subject that I've personally reversed my own opinions on no less than a dozen times, so take everything with a grain of salt.
If you are just getting acquainted with macros, you'll find the most helpful ones to be those macros that clarify or expedite existing expressions.
One such macro that you may have been exposed to is the anaphoric macro, which remains as popular today as the day Paul Graham coined the term:
(define-syntax (aif x)
  (syntax-case x ()
    [(src-aif test then else)
     ;; introduce the identifier IT unhygienically, in the context of the call site
     (syntax-case (datum->syntax-object (syntax src-aif) 'it) ()
       [it (syntax (let ([it test]) (if (and it (not (null? it))) then else)))])]))
These macros introduce "anaphoric" variables, namely it, which you can use in the consequent and alternative clauses of an if statement, e.g.:
(aif (+ 2 7)
     (format nil "~A does not equal NIL." it)
     (format nil "~A does equal NIL." it))
This saves you the hassle of typing a let statement -- which can add up in large projects.
More broadly, transformations that change the structure of a program, dynamically generate "templated" code or otherwise "break" rules (that you hopefully know well enough to break!) are the traditional domain of the macro. I could write on and on about how I've simplified countless projects and assignments with macros -- but I think you'll get the idea.
Learning Scheme-style macros is a little daunting, but I assure you that there are excellent guides to syntax-rules-style macros for the merely eccentric, and as you gain more and more experience with macros, you'll probably come to the conclusion that they are a beautiful idea, worthy of the name "Scheme".
If you happen to use Racket (previously known as PLT Scheme), it has excellent documentation on macros, even providing a few neat tricks along the way (I'd also guess most of what is written there can be readily adapted to another Scheme dialect without issue).
Here is one standard answer (part of which is also covered in SICP - see Exercise 1.6 for example; also search for the keyword "special form" in the text): in a call-by-value language like Scheme, an ordinary procedure (function) must evaluate its argument(s) before the body. Thus, special forms such as if, and, or, for, while, etc. cannot be implemented by ordinary procedures (at least without using thunks like (lambda () body), which are introduced for delaying the evaluation of the body and may incur performance overheads); on the other hand, many of them can be implemented with macros as indeed done in RnRS (see, for instance, the definition of and by define-syntax on page 69).
The simple answer is that you can make new syntax where delaying evaluation is essential for the functionality. Imagine you want a new if that works like an if/else-if chain. In Lisp it would be something like cond, but without an explicit begin. Example usage:
(if* (<= 0 x 9) (list x)
     (< x 100) (list (quotient x 10) (remainder x 10))
     (error "Needs to be below 100"))
It's impossible to implement this as a procedure. We can implement something that works the same by demanding that the user give us thunks, so that the parts become procedures that can be run on demand instead:
(pif* (lambda () (<= 0 x 9)) (lambda () (list x))
      (lambda () (< x 100)) (lambda () (list (quotient x 10) (remainder x 10)))
      (lambda () (error "Needs to be below 100")))
Now the implementation is easy:
(define (pif* p c a . rest)
  (if (p)
      (c)
      (if (null? rest)
          (a)
          (apply pif* a rest))))
This is how JavaScript and almost every other language that doesn't support macros does it. Now, if you want to give the user the power of writing the first form instead of the second, you need macros. Your macro can rewrite the first into the second so that your implementation can be a function, or it can just expand the code into nested ifs:
(define-syntax if*
  (syntax-rules ()
    ((_ x) (error "wrong use of if*"))
    ((_ p c a) (if p c a))
    ((_ p c next ...) (if p c (if* next ...)))))
Or if you want to use the procedure it's even simpler to just wrap each argument in a lambda:
(define-syntax if*
  (syntax-rules ()
    ((_ arg ...) (pif* (lambda () arg) ...))))
The point of macros is to reduce boilerplate and simplify syntax. When you see the same structure repeated, you should try to abstract it into a procedure. If that is impossible because the arguments need to be treated as they are in special forms (i.e. not evaluated eagerly), you do it with a macro.
Babel, an ES6-to-ES5 transpiler, converts JavaScript ES6 syntax to ES5 code. If ES5 had macros, it would have been as easy as writing compatibility macros to support ES6. In fact, most features of newer versions of the language would have been unnecessary, since programmers wouldn't have to wait for a new version of a language to get fancy new features. Almost none of the new language features in Algol-family languages (PHP, Java, JavaScript, Python) would have been necessary if the languages had hygienic macro support.

Using (declare (type ...)) but still have 'safe' functions

Is it possible to use (declare (type ...)) declarations in functions but also perform type-checking on the function arguments, in order to produce faster but still safe code?
For instance,
(defun add (x y)
  (declare (type fixnum x y))
  (the fixnum (+ x y)))
when called as (add 1 "a") would result in undefined behaviour, so preferably I'd like to modify it as
(defun add (x y)
  (declare (type fixnum x y))
  (check-type x fixnum)
  (check-type y fixnum)
  (the fixnum (+ x y)))
but I worry that the compiler is allowed to assume that the check-type always passes and thus omit the check.
So my questions are: first, is the above example wrong, as I expect; and secondly, is there any common idiom* in use to achieve type safety with optimised code?
*) I can imagine, for instance, using an optimised lambda and calling that after doing the type-checking, but I wonder if that's the most elegant way.
You can always check the types first and then enter optimized code:
(defun foo (x)
  (check-type x fixnum)
  (locally
      (declare (fixnum x)
               (optimize (safety 0)))
    x))
The LOCALLY is used for local declarations.
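Another way to phrase the same idiom, closer to the "optimised lambda called after the type checks" mentioned in the question (a sketch; %add is just a local name chosen here, echoing the usual % convention for unchecked internals):

(defun add (x y)
  (flet ((%add (x y)
           ;; unchecked, optimised core
           (declare (fixnum x y)
                    (optimize speed (safety 0)))
           (the fixnum (+ x y))))
    ;; checked entry point
    (check-type x fixnum)
    (check-type y fixnum)
    (%add x y)))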
Since you are asking:
Is it possible to use (declare (type ...)) declarations in functions but also perform type-checking on the function arguments, in order to produce faster but still safe code?
it seems to me that you are missing an important point about the type system of Common Lisp, that is, that the language is not (and cannot be) statically typed. Let's clarify this aspect.
Programming languages can be roughly classified in three broad categories:
with static type-checking: every expression or statement is checked for type correctness at compile time, so that type errors can be detected during program development, and the code is more efficient since no type checks must be done at run-time;
with dynamic type checking: every operation is checked at run time for type correctness, so that no type error can go unnoticed and cause undefined behaviour at run-time;
without type checking: type errors can occur at run-time, so that the program can stop with an error or have undefined behaviour.
Edited
With respect to the previous classification, the Common Lisp specification leaves to the implementations the burden of deciding if they want to follow the second or the third approach! Not only that, but through the optimize declaration the specification leaves the implementations free to change this dynamically, within the same program.
So most implementations, at the default optimization and safety levels, implement the second approach, enriched with the following two possibilities:
one can request that the compiler omit run-time type-checking when compiling some piece of code, typically for efficiency reasons, so that, inside that particular piece of code, and depending on the optimization and safety settings, the language can behave like the languages of the third category; this is supported by hints through type declarations, like (declare (type fixnum x)) for variables and (the fixnum (f x)) for values;
one can insert into the code explicit type-checking tests to be performed at run-time, through check-type, so that any mismatch in the type of the checked value will signal a “correctable error” (illustrated in the sketch below).
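To illustrate the “correctable error” part of the second possibility, here is a small sketch (half is a made-up example function):

(defun half (x)
  ;; CHECK-TYPE signals a correctable TYPE-ERROR with a STORE-VALUE restart
  (check-type x integer "an integer")
  (/ x 2))

;; (half "6") enters the debugger; choosing STORE-VALUE and entering 6
;; lets HALF continue and return 3.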
Note, moreover, that different compilers can behave differently in checking types at compile time, but they can never reach the level of compilers for languages with static type checking, because Common Lisp is a highly dynamic language. Consider for instance this simple case:
(defun plus1 (x) (1+ x))

(defun read-and-apply-plus1 ()
  (plus1 (read)))
in which no static type-checking can be done for the call (plus1 (read)), since the type of (read) is not known at compile time.

How to get the function expression from the function symbol in Common Lisp?

I'll show the thing that I want from Common Lisp that already works in Elisp:
(defun square (x)
  (* x x))
(symbol-function 'square)
;; => (lambda (x) (* x x))
So by just knowing the symbol square, I want to retrieve its whole body.
I've looked at CL's:
(function-lambda-expression #'square)
;; =>
(SB-INT:NAMED-LAMBDA SQUARE
(X)
(BLOCK SQUARE (* X X)))
NIL
SQUARE
The return result is close to what I need, but it only works sometimes.
Most of the time, I get:
(function-lambda-expression #'list)
;; =>
NIL
T
LIST
Is there a more reliable function that does this? I'm aware of swank-backend:arglist that's very good at retrieving the arguments, but I can't find a retriever for the body there.
UPDATE
It seems that the general solution isn't possible in plain lisp. Is this possible to do in SLIME?
Assume that I always have SLIME running, and all the code was loaded through there. Can the code be obtained more reliably than just using SLIME's goto-definition and copying the text from there?
No. The saving of the source code at the symbol is not mandated by the standard, so any implementation may choose not to do that (this also has performance implications).
I guess that you might get that behaviour more often by declaiming optimize (debug 3), but I have not tried that; it is most likely implementation dependent.
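An untested sketch of what that experiment might look like; whether it actually makes the source form available is entirely up to the implementation:

(declaim (optimize (debug 3)))

;; redefine the function after raising DEBUG so the new policy is in effect
(defun square (x)
  (* x x))

(function-lambda-expression #'square)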
Answer to update: Since you want to implement an editor functionality interacting with the Lisp image, I'd say that using the Lisp Interaction Mode (the LIM in SLIME) is exactly what you need to do.
Perhaps the reason is that the system/core functions, like #'list, have been compiled for speed when the image was built? Perhaps if you bootstrapped SBCL yourself you could make one that was slower and had more information. Or you can just look at the source code:
(describe #'list)
#<FUNCTION LIST>
[compiled function]
Lambda-list: (&REST ARGS)
Declared type: (FUNCTION * (VALUES LIST &OPTIONAL))
Derived type: (FUNCTION (&REST T) (VALUES LIST &OPTIONAL))
Documentation:
Return constructs and returns a list of its arguments.
Known attributes: flushable, unsafely-flushable, movable
Source file: SYS:SRC;CODE;LIST.LISP
That last line is correct since under src/code/list.lisp you actually find the definition.
Notice that the behaviour of describe and function-lambda-expression is implementation dependent. E.g. in CLISP the output is completely different.
(function-lambda-expression #'list) here tries to retrieve the code of a built-in function. Not every built-in function needs to have Lisp code as its implementation.
I haven't tried it, but maybe you can build SBCL yourself, such that it records the source code and saves it in the image.
For a standard image of SBCL I would not expect that it contains source code.
