I want to write code with multiple user-interface backends (textual and graphical, for instance), so that they are easy to switch. My approach is to use CLOS:
(defgeneric draw-user-interface (argument ui)
  (:documentation "Present the user interface")
  (:method (argument (ui (eql :tui)))
    (format t "Textual user interface! (~A)" argument))
  (:method (argument (ui (eql :gui)))
    (format t "Graphical user interface! (~A)" argument)))
This approach seems OK at first glance, but it has a few downsides. To simplify switching the backend, I define a parameter *ui-type* which is passed in each function call, but it causes a problem when using higher-order functions:
(defparameter *ui-type* :tui
  "Preferred user interface type")

(draw-user-interface 3 *ui-type*)

;;; I can't use the following due to the `ui' argument:
;(mapcar #'draw-user-interface '(1 2 3))

;;; Instead I have to write this
(mapcar #'(lambda (arg)
            (draw-user-interface arg *ui-type*))
        '(1 2 3))
;; or this
(mapcar #'draw-user-interface
        '(1 2 3)
        (make-list 3 :initial-element *ui-type*))
;; Another approach would be to define a function
(defun draw-user-interface* (argument)
  (draw-user-interface argument *ui-type*))
;; and call mapcar
(mapcar #'draw-user-interface* '(1 2 3))
If such an approach is taken, we could name the generic function %draw-user-interface and the wrapper function just draw-user-interface.
Is this a valid approach, or is there something more straightforward? The question is about providing different backends for the same functionality, not necessarily a user interface.
Another use case might be a situation where I have many implementations of the same algorithm (optimized for speed, memory consumption, etc.) and I want to switch between them in a clean way while preserving the interface and the argument types.
I would implement the backends as separate classes, instead of passing a keyword around, since that would allow me to hook various state into an object and keep that around.
I'd probably (otherwise) use the generic function design you've been alluding to.
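A minimal sketch of the class-based idea, with made-up class and slot names: the backend instance can carry whatever state it needs, for example the stream it draws to.

(defclass tui-backend ()
  ((output-stream :initarg :stream
                  :initform *standard-output*
                  :reader ui-stream)))

(defgeneric draw-user-interface (argument ui)
  (:documentation "Present the user interface"))

(defmethod draw-user-interface (argument (ui tui-backend))
  (format (ui-stream ui) "Textual user interface! (~A)~%" argument))

;; (draw-user-interface 3 (make-instance 'tui-backend))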
Common Lisp Interface Manager and multiple backends
An example for a UI layer in CLOS supporting multiple backends is CLIM, the Common Lisp Interface Manager. You could study its software design. See the links below. See for example the protocols around classes like port (a connection to a display service), medium (where drawing happens, a protocol class that corresponds to the output state for some kind of sheet), sheet (a surface for painting and input, roughly similar to hierarchical windows), graft (a sheet which stands in for a host window), ... In an application one opens a port (for example to a specific window system like X11/Motif) and the rest of the application should be running mostly unchanged. The architecture of CLIM maps all its services then to a specific CLIM backend, which provides the interface to X11/Motif (or whatever port you would be using).
For example the function draw-line would be drawing to sheets, streams and mediums. The generic function medium-draw-line* would then implement various versions of drawing lines to one or more medium subclasses.
In general this was not very successful, because a portable user interface layer brings complexity and needs a lot of work to develop and maintain. In the mid-90s the market for Lisp applications was small (see the AI Winter), CLIM wasn't good enough, and the implementation was closed source and/or proprietary. Later an open source / free implementation called McCLIM was developed, which created working software, but eventually developers/users lost interest.
A bit of history
In former times Symbolics developed a user interface system called 'Dynamic Windows'. It was released in 1986. It ran in the Symbolics operating system and could draw to its native OS/hardware combination and X11. Starting in around 1988 a portable CLOS-based version was developed. The first available versions (especially version 1.0 in 1991) were available on several platforms: Genera, X11, Mac and Windows. Later a new version was developed (version 2.0) which again ran on various systems, but included a complex object-oriented layer which provided a more explicit backend layer called Silica. This backend layer supported not only things like portable drawing, but also parts of an abstract window system. The more ambitious parts, like support for adaptation of look and feel (sliders, window styles, scrollbars, menus, dialog elements, ...), were not fully worked out, but were at least available as a first-generation version.
Pointers
A Guided Tour of CLIM, Common Lisp Interface Manager (PDF)
Silica : Implementation Reflection in Silica (PDF)
The Spec (which includes Silica): Common Lisp Interface Manager 2.0 Specification
To complement the other answers, there are two libraries for this use case. Both are inspired by the Magritte meta-model; you should check it out.
One is descriptions, which allows you to define different 'views' of an object. It doesn't use CLOS but Sheeple, a prototype-based object system for CL. An earlier approach is MAO, which is CLOS based. It adds three additional slots to the standard slot object: attribute-label, attribute-function, and attribute-value. The function in attribute-function transforms the slot-value into the final representation; if the function is nil, the value in attribute-value is used as is. The label is a description of the value, similar to labels in HTML5 forms.
By "backend" you mean frontend, right? As in, the part that the user interacts with, rather than the part that handles the application logic?
The cleanest option would be to divide your program into a library (which provides all the logic and features of the program without any UI code) and two completely separate UI programs that don't implement any features themselves, but rather just use the library. You can of course have a wrapper that chooses which interface to run, if necessary. You should keep each component in its own system.
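A hedged sketch of that split as ASDF system definitions (all system and file names here are hypothetical):

;; myapp.asd
(asdf:defsystem "myapp"
  :description "Application logic only, no UI code."
  :components ((:file "logic")))

;; myapp-tui.asd
(asdf:defsystem "myapp-tui"
  :depends-on ("myapp")
  :components ((:file "tui")))

;; myapp-gui.asd
(asdf:defsystem "myapp-gui"
  :depends-on ("myapp")
  :components ((:file "gui")))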
Edit: When you want to switch between different algorithms, the best option is probably simply to define the interface as a class, and all the different algorithms as subclasses.
(defclass backend () ())

(defgeneric do-something (backend x y))

(defclass fast-backend (backend) ())

(defmethod do-something ((backend fast-backend) x y)
  (format t "Using fast backend with arguments ~a, ~a.~%" x y))

(defclass low-mem-backend (backend) ())

(defmethod do-something ((backend low-mem-backend) x y)
  (format t "Using memory efficient backend with arguments ~a, ~a.~%" x y))

(defun main (x y)
  (let ((backends (list (make-instance 'fast-backend)
                        (make-instance 'low-mem-backend))))
    (dolist (b backends)
      (do-something b x y))))
Another edit: If you need to be able to use functions like mapcar, you might want to have a global variable containing the current backend. Then define a wrapper function that uses the global.
(defparameter *backend* (make-instance 'fast-backend))

(defun foobar (x y)
  (do-something *backend* x y))

(defun main (x y)
  (foobar x y)
  (let ((*backend* (make-instance 'low-mem-backend)))
    (foobar x y))
  (foobar x y))
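With the wrapper in place, higher-order functions compose directly, for example:

(mapcar #'foobar '(1 2 3) '(10 20 30))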
Related
When developing with Common Lisp, we have four possibilities to define new setf forms:
We can define a function whose name is a list of two symbols, the first one being setf, e.g. (defun (setf some-observable) (…)).
We can use the short form of defsetf.
We can use the long form of defsetf.
We can use define-setf-expander.
I am not sure what is the right or intended use-case for each of these possibilities.
A response to this question could hint at the most generic solution and outline contexts where other solutions are superior.
define-setf-expander is the most general of these. All of setf's functionality is encompassed by it.
Defining a setf function works fine for most accessors. It is also valid to use a generic function, so polymorphism is insufficient to require using something else. Controlling evaluation either for correctness or performance is the main reason to not use a setf function.
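For instance, a setf function for a hypothetical accessor (all names made up for illustration) is just an ordinary defun whose first parameter is the new value:

(defstruct box contents)

(defun box-value (box)
  (box-contents box))

;; The setf function; by convention the new value comes first.
(defun (setf box-value) (new-value box)
  (setf (box-contents box) new-value)
  new-value)

;; (let ((b (make-box :contents 1)))
;;   (setf (box-value b) 42)
;;   (box-value b))              ; => 42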
For correctness, many forms of destructuring are not possible with a setf function (e.g. (setf (values ...) ...)). Similarly, I've seen an example that makes functional data structures behave locally like mutable ones by having (setf (dict-get key some-dict) 2) assign a new dictionary to some-dict.
For performance, consider the silly case of (incf (nth 10000 list)), which, if the nth writer were implemented as a function, would require traversing 10k list nodes twice, but with a setf expander can be done in a single traversal.
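As a hedged sketch of that last point (MY-NTH is a made-up name, not a standard operator), a setf expander can locate the target cons once and reuse it for both the read and the write:

(defun my-nth (n list)
  (nth n list))

(define-setf-expander my-nth (n list &environment env)
  (declare (ignore env))
  (let ((cell (gensym "CELL"))
        (store (gensym "STORE")))
    (values (list cell)                           ; temporary variable...
            (list `(nthcdr ,n ,list))             ; ...bound to the target cons
            (list store)                          ; the store variable
            `(progn (rplaca ,cell ,store) ,store) ; writer form
            `(car ,cell))))                       ; reader form

;; (let ((xs (list 1 2 3))) (incf (my-nth 2 xs)) xs) ; => (1 2 4)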
At the end of section 6.5 in the current SBCL manual, we have the following quote:
If your system's performance is suffering because of some construct which could in principle be compiled efficiently, but which the SBCL compiler can't in practice compile efficiently, consider writing a patch to the compiler and submitting it for inclusion in the main sources. Such code is often reasonably straightforward to write; search the sources for the string “deftransform” to find many examples (some straightforward, some less so).
I've been playing around and found the likes of sb-c::defknown and sb-c::deftransform but thus far have had little luck in successfully adding any new transforms that do anything.
Let's pretend I have the following 3 toy functions:
(defun new-+ (x y)
  (+ x y))

(defun fixnum-+ (x y)
  (declare (optimize (speed 3) (safety 0))
           (fixnum x y))
  (+ x y))

(defun string-+ (x y)
  (declare (optimize (speed 3) (safety 0))
           (string x y))
  (concatenate 'string x y))
As a purely toy example, let's say we wanted to tell the compiler that it could transform calls to my user-defined function new-+ into calls to either fixnum-+ or string-+.
The condition for the compiler transforming (new-+ x y) into (fixnum-+ x y) would be knowing that the arguments x and y are both of type fixnum, and the conditions for transforming into (string-+ x y) would be knowing that the arguments x and y are both of type string.
So the questions:
Can I actually do this?
What are the actual mechanics of doing so and generating other user based transforms/extensions?
Any reading or sources apart from manually reading through the source to discover more info regarding this?
If I can't do this using the likes of deftransform, is there any other way I could do so?
Note: I'm aware of the operations and nature of macros and generic functions in general common lisp coding, and don't consider using them an answer to this question, since I'm specifically curious about extending the SBCL internals and interacting with its compiler.
You can achieve what you want in portable Common Lisp using define-compiler-macro.
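A minimal sketch using the toy functions from the question (note that, as the self-answer below points out, constant folding may bypass such a compiler macro anyway):

;; Dispatch on literal argument forms at compile time; otherwise
;; decline by returning FORM unchanged.
(define-compiler-macro new-+ (&whole form x y)
  (cond ((and (typep x 'fixnum) (typep y 'fixnum))
         `(fixnum-+ ,x ,y))
        ((and (stringp x) (stringp y))
         `(string-+ ,x ,y))
        (t form)))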
AFAIK reading the SBCL sources is the only way to learn how deftransform works. But before diving into the SBCL sources, check out Paul Khuong's Starting to Hack on SBCL, or at the very least The Python Compiler for CMU Common Lisp that it links to, to get an overview of how SBCL works.
I now attempt to provide a broad overview that answers my questions and may point others towards constructively investigating similar directions.
Can I actually do this?
Yes. Though depending on the specifics of how and why, you may have a choice of options available to you, and they may have variable levels of portability between Common Lisp implementations.
What are the actual mechanics of doing so and generating other user based transforms/extensions?
I answer this with respect to two possible methods that the programmer may choose to get started, and which seem most applicable.
For both examples, I reiterate that, with limited reflection on the topic, I think it bad form for a transformation to change a function's input/output behaviour. I do so here for demonstration purposes only, to verify that the transformations I'm implementing are actually taking place.
I actually had quite a difficult time testing whether my transformations were happening: SBCL in particular seems quite happy to optimise certain expressions and forms on its own, and there are additional pieces of information you can make available to the compiler that are not covered here. Additionally, there may be other transformations available, so just because your transform isn't used doesn't necessarily mean it isn't "working".
Environments and Define-Compiler-Macro Extensions using Common Lisp the Language 2
I was previously under the impression that DEFINE-COMPILER-MACRO was relatively limited in its abilities, working only on types connected with literal values, but this is not necessarily the case.
To demonstrate this, I use three user-defined functions and a compiler macro.
First: We will begin with a general addition function gen+ that decides at run-time to either add two numbers together, or concatenate two strings:
(defun gen+ (x y)
  (if (and (numberp x)
           (numberp y))
      (+ x y)
      (concatenate 'string x y)))
But say we know at compile time that in certain instances only strings will be fed to this function. Let's define our specialised string addition function, and to prove it's actually being used, we'll do a stylistically very bad thing and additionally concatenate the string "kapow" as well:
(defun string+ (x y)
  (declare (optimize (speed 3) (safety 0))
           (string x y))
  (concatenate 'string x y "kapow"))
The following function is a very simple convenience function that checks an environment to establish whether the declared type of the variable bound in that environment is eq to STRING. We're using a non-ANSI function here from Common Lisp the Language 2. In SBCL, the function VARIABLE-INFORMATION and the other CLtL2 functions are available in the sb-cltl2 package.
(defun env-stringp (symbol environment)
  (eq 'string
      (cdr (assoc 'type
                  (nth-value 2 (sb-cltl2:variable-information symbol environment))))))
Lastly, we use DEFINE-COMPILER-MACRO to generate the transformation. I've tried to name things in this code differently from other examples I've seen so that people can follow along and not get mixed up with which variable/symbol is in which scope/context. A couple of things I didn't know previously about DEFINE-COMPILER-MACRO:
The variable that immediately follows the &whole parameter is a variable which represents the form of the initial call. In our example it will be bound to the list (GEN+ A B)
arg1 is bound to the symbol A
arg2 is bound to the symbol B
The &environment parameter says that within this macro, the symbol ENV will be bound to the environment in which the macro is being evaluated. This is what lets us "kind of step back out of the macro" and check the surrounding code for declarations regarding the types of the variables represented by the symbols bound to ARG1 and ARG2.
In this definition, we tell the compiler macro that if the user has declared the parameters of GEN+ to be strings, then replace the call to (GEN+ ARG1 ARG2) with a call to (STRING+ ARG1 ARG2).
Note that because the condition of this transformation is the result of a user-defined operation on the environment, if the parameters to GEN+ are literal strings, the transformation will not be triggered, because the environment does not see that the variables have been declared strings. To do that, you would have to add another option and transformation to explicitly check the types of the values in ARG1 and ARG2 as per a traditional use of DEFINE-COMPILER-MACRO. This can be left as an exercise for the reader. But beware about the utility of doing so, because SBCL, for instance, might constant-fold your expression rather than use your transformation anyway.
(define-compiler-macro gen+ (&whole form arg1 arg2 &environment env)
  (cond ((and (env-stringp arg1 env)
              (env-stringp arg2 env))
         `(string+ ,arg1 ,arg2))
        (t form)))
Now we can test it with a simple call with type declarations:
(let ((a "bob")
      (b "dole"))
  (declare (string a b))
  (gen+ a b))
This should return the string "bobdolekapow" as the call to GEN+ was transformed into a call to STRING+ based on the declared types of the variables A and B, not just literal types.
Using Basic (defknown)/(deftransform) Combinations with the SBCL Implementation Compiler
The previous technique is indeed potentially useful, more powerful and flexible than transforming on the types of literals, and while not standard ANSI Common Lisp, is more portable/adaptable to other implementations than the technique that follows.
A reason you might forego the former technique in preference of the one that follows is that the former doesn't get you everything. You still had to declare the types of the variables a and b and write a user-defined function to extract the declared type information from the environment.
If you can interact directly with the SBCL compiler however, with the cost of potentially some brittle-ness and extreme non-portability, you now gain the ability to hack into the compiler itself and gain the benefits of things like type propagation: you might not need to explicitly inform the compiler of the types of A and B for it to implement your transformation.
For our example, we will implement a very basic transformation on the functions wat and string-wat, which are identical in form to our previous functions gen+ and string+.
Understand that there are many more pieces of information and optimisation you can feed the SBCL compiler that are not covered here. And if anyone more experienced with SBCL internals wants to correct/extend anything regarding my impressions here, please comment and I'll be happy to update my answer.
First we tell the compiler about the existence and type signature of wat. We do this by calling defknown in the sb-c package, informing it that wat takes two parameters of any type, (T T), and that it returns a single value of any type, *:
(sb-c:defknown wat (T T) *)
Then we define a simple transform using sb-c:deftransform, essentially saying when the two parameters fed to wat are strings, we transform the code into a call to string-wat.
(sb-c:deftransform wat ((x y) (string string) *)
  `(string-wat x y))
The forms of wat and string-wat for completeness:
(defun wat (x y)
  (if (and (numberp x)
           (numberp y))
      (+ x y)
      (concatenate 'string x y)))

(defun string-wat (x y)
  (declare (optimize (speed 3) (safety 0))
           (string x y))
  (concatenate 'string x y "watpow"))
And this time a demonstration in SBCL using bound variables but no explicit type declarations:
(let ((a (concatenate 'string "bo" "b"))
      (b (concatenate 'string "dole")))
  (wat a b))
And the returned string should be "bobdolewatpow".
Any reading or sources apart from manually reading through the source to discover more info regarding this?
I haven't been able to find anything much about this out there, and would say that to get much deeper, you're going to have to start trawling through some source code.
SBCL github mirror is currently available here.
User #PuercoPop has suggested background reading of Starting to Hack on SBCL and The Python Compiler for CMU Common Lisp, albeit I am including a link to a .pdf version rather than a .ps version commonly linked to.
I've been reading SICP trying to understand Scheme, especially macros. I noticed that SICP doesn't talk about macros at all. I read on Paul Graham's site that:
The source code of the Viaweb editor was probably about 20-25% macros. Macros are harder to write than ordinary Lisp functions, and it's considered to be bad style to use them when they're not necessary. So every macro in that code is there because it has to be.
So I desperately wanted to know how to write macros and what to use them for, so I read this site about macros: http://www.willdonnelly.net/blog/scheme-syntax-rules/
But that site just explains how to write a "for" macro. I think Paul Graham talks about CL and the other blog talks about Scheme, but they are partly the same. So, what would be an example of something that cannot be done with ordinary procedures and thus must be done with macros?
EDIT: I have already seen a similar question, but wanted to ask if there was some sort of crazy algorithm that you could do with macros that couldn't possibly be done with procedures (other than syntactic sugar as described in that question's answer).
Macros are a tricky subject that I've personally reversed my own opinions on no less than a dozen times, so take everything with a grain of salt.
If you are just getting acquainted with macros, you'll find most helpful those macros that clarify or expedite existing expressions.
One such macro that you may have been exposed to is the anaphoric macro, which remains as popular today as the day Paul Graham coined the term:
(define-syntax (aif x)
  (syntax-case x ()
    [(src-aif test then else)
     (syntax-case (datum->syntax-object (syntax src-aif) '_) ()
       [_ (syntax (let ([_ test]) (if (and _ (not (null? _))) then else)))])]))
These macros introduce an "anaphoric" variable (traditionally named it in Common Lisp; the definition above spells it _) that you can use in the consequent and alternative clauses of an if, e.g.:
(aif (+ 2 7)
     (format nil "~A does not equal NIL." it)
     (format nil "~A does equal NIL." it))
This saves you the hassle of typing a let statement -- which can add up in large projects.
More broadly, transformations that change the structure of a program, dynamically generate "templated" code or otherwise "break" rules (that you hopefully know well enough to break!) are the traditional domain of the macro. I could write on and on about how I've simplified countless projects and assignments with macros -- but I think you'll get the idea.
Learning Scheme-style macros is a little daunting, but I assure you that there are excellent guides to syntax-rules-style macros for the merely eccentric, and as you gain more and more experience with macros, you'll probably come to the conclusion that they are a beautiful idea, worthy of the name "Scheme".
If you happen to use Racket (previously known as PLT Scheme) -- It has excellent documentation on macros, even providing a few neat tricks along the way (I'd also guess most of what is written there can be readily written in another Scheme dialect without issue)
Here is one standard answer (part of which is also covered in SICP - see Exercise 1.6 for example; also search for the keyword "special form" in the text): in a call-by-value language like Scheme, an ordinary procedure (function) must evaluate its argument(s) before the body. Thus, special forms such as if, and, or, for, while, etc. cannot be implemented by ordinary procedures (at least without using thunks like (lambda () body), which are introduced for delaying the evaluation of the body and may incur performance overheads); on the other hand, many of them can be implemented with macros as indeed done in RnRS (see, for instance, the definition of and by define-syntax on page 69).
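A minimal Common Lisp illustration of that point (MY-AND is a made-up name; the standard AND is itself a macro): it cannot be an ordinary function, because later arguments must not be evaluated once an earlier one is false.

(defmacro my-and (&rest forms)
  (cond ((null forms) t)                      ; (my-and) => T
        ((null (rest forms)) (first forms))   ; last form is returned as-is
        (t `(if ,(first forms)
                (my-and ,@(rest forms))
                nil))))

;; (my-and (> 1 2) (error "never evaluated")) ; => NIL, no error is signalled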
The simple answer is that you can make new syntax where delaying evaluation is essential for the functionality. Imagine you want a new if that works the same way as if-elseif. In Lisp it would be something that is like cond, but without explicit begin. Example usage is:
(if* (<= 0 x 9) (list x)
     (< x 100) (list (quotient x 10) (remainder x 10))
     (error "Needs to be below 100"))
It's impossible to implement this as a procedure. We can implement something that works the same way by demanding that the user give us thunks, so that the parts become procedures that can be run on demand instead:
(pif* (lambda () (<= 0 x 9)) (lambda () (list x))
      (lambda () (< x 100)) (lambda () (list (quotient x 10) (remainder x 10)))
      (lambda () (error "Needs to be below 100")))
Now the implementation is easy:
(define (pif* p c a . rest)
  (if (p)
      (c)
      (if (null? rest)
          (a)
          (apply pif* a rest))))
This is how JavaScript and almost every other language that doesn't support macros does it. Now, if you want to give the user the power of writing the first form instead of the second, you need macros. Your macro can rewrite the first form into the second, so that your implementation can remain a function, or it can just expand the code into nested ifs:
(define-syntax if*
  (syntax-rules ()
    ((_ x) (error "wrong use of if*"))
    ((_ p c a) (if p c a))
    ((_ p c next ...) (if p c (if* next ...)))))
Or if you want to use the procedure it's even simpler to just wrap each argument in a lambda:
(define-syntax if*
  (syntax-rules ()
    ((_ arg ...) (pif* (lambda () arg) ...))))
The purpose of macros is to reduce boilerplate and simplify syntax. When you see the same structure repeatedly, you should try to abstract it into a procedure. If that is impossible because the arguments are used in special (non-evaluating) positions, you do it with a macro.
Babel, an ES6-to-ES5 transpiler, converts JavaScript ES6 syntax to ES5 code. If ES5 had macros, it would have been as easy as writing compatibility macros to support ES6. In fact, most features of newer versions of the language would have been unnecessary, since programmers wouldn't have to wait for a new version of the language to get fancy new features. Almost none of the new language features in Algol-family languages (PHP, Java, JavaScript, Python) would have been necessary if the language had hygienic macro support.
I'll show the thing that I want from Common Lisp that already works in Elisp:
(defun square (x)
  (* x x))

(symbol-function 'square)
;; => (lambda (x) (* x x))
So by just knowing the symbol square, I want to retrieve its whole body.
I've looked at CL's:
(function-lambda-expression #'square)
;; =>
(SB-INT:NAMED-LAMBDA SQUARE
(X)
(BLOCK SQUARE (* X X)))
NIL
SQUARE
The return result is close to what I need, but it only works sometimes.
Most of the time, I get:
(function-lambda-expression #'list)
;; =>
NIL
T
LIST
Is there a more reliable function that does this? I'm aware of swank-backend:arglist that's very good at retrieving the arguments, but I can't find a retriever for the body there.
UPDATE
It seems that the general solution isn't possible in plain lisp. Is this possible to do in SLIME?
Assume that I always have SLIME running, and all the code was loaded through there. Can the code be obtained more reliably than just using SLIME's goto-definition and copying the text from there?
No. The saving of the source code at the symbol is not mandated by the standard, so any implementation may choose not to do that (this also has performance implications).
I guess that you might get that behaviour more often by declaiming optimize (debug 3), but I have not tried that; it is most likely implementation dependent.
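That untested guess would amount to something like this:

;; Untested and implementation-dependent, as noted above.
(declaim (optimize (debug 3)))

(defun square (x)
  (* x x))

;; (function-lambda-expression #'square) may or may not now return the source.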
Answer to update: Since you want to implement editor functionality interacting with the Lisp image, I'd say that using the Lisp Interaction Mode (the LIM in SLIME) is exactly what you need to do.
Perhaps the reason is that the system/core functions, like #'list, have been compiled for speed when the image was built? Perhaps if you bootstrapped SBCL yourself you could make one that was slower and had more information. Or you can just look at the source code:
(describe #'list)
#<FUNCTION LIST>
[compiled function]
Lambda-list: (&REST ARGS)
Declared type: (FUNCTION * (VALUES LIST &OPTIONAL))
Derived type: (FUNCTION (&REST T) (VALUES LIST &OPTIONAL))
Documentation:
Return constructs and returns a list of its arguments.
Known attributes: flushable, unsafely-flushable, movable
Source file: SYS:SRC;CODE;LIST.LISP
That last line is correct since under src/code/list.lisp you actually find the definition.
Notice that the behaviour of describe and function-lambda-expression is implementation dependent. E.g. in CLISP the output is completely different.
(function-lambda-expression #'list) here tries to retrieve the code of a built-in function. Not every built-in function needs to have Lisp code as its implementation.
I haven't tried it, but maybe you can build SBCL yourself, such that it records the source code and saves it in the image.
For a standard image of SBCL I would not expect that it contains source code.
I've been writing more Lisp code recently. In particular, recursive functions that take some data and build a resulting data structure. Sometimes it seems I need to pass two or three pieces of information to the next invocation of the function, in addition to the user-supplied data. Let's call these accumulators.
What is the best way to organize these interfaces to my code?
Currently, I do something like this:
(defun foo (user1 user2 &optional acc1 acc2 acc3)
  ;; do something
  (foo user1 user2 (cons x acc1) (cons y acc2) (cons z acc3)))
This works as I'd like it to, but I'm concerned because I don't really need to present the &optional parameters to the programmer.
3 approaches I'm somewhat considering:
have a wrapper function that a user is encouraged to use, which immediately invokes the extended definition.
use labels internally within a function whose signature is concise.
just start using a loop and variables. However, I'd prefer not to, since I'd like to really wrap my head around recursion.
Thanks guys!
If you want to write idiomatic Common Lisp, I'd recommend the loop and variables for iteration. Recursion is cool, but it's only one tool of many for the Common Lisper. Besides, tail-call elimination is not guaranteed by the Common Lisp spec.
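For the foo from the question, a loop sketch might look like this (the body is invented; only the shape matters):

(defun foo (user1 user2)
  (loop for item in user1
        collect (list user2 item) into acc1
        count item into acc2
        finally (return (values acc1 acc2))))

;; (foo '(a b c) :tag) ; => ((:TAG A) (:TAG B) (:TAG C)), 3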
That said, I'd recommend the labels approach if you have a structure, a tree for example, that is unavoidably recursive and you can't get tail calls anyway. Optional arguments let your implementation details leak out to the caller.
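And a hedged sketch of the labels approach, which keeps the accumulator out of the public signature (again with an invented body):

(defun foo (user1 user2)
  (labels ((walk (items acc)
             (if (null items)
                 (nreverse acc)
                 (walk (rest items)
                       (cons (list user2 (first items)) acc)))))
    (walk user1 '())))

;; (foo '(1 2 3) :tag) ; => ((:TAG 1) (:TAG 2) (:TAG 3))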
Your impulse to shield implementation details from the user is a smart one, I think. I don't know common lisp, but in Scheme you do it by defining your helper function in the public function's lexical scope.
(define (fibonacci n)
  (let fib-accum ((a 0)
                  (b 1)
                  (n n))
    (if (< n 1)
        a
        (fib-accum b (+ a b) (- n 1)))))
The let expression defines a function and binds it to a name that's only visible within the let, then invokes the function.
I have used all the options you mention. All have their merits, so it boils down to personal preference.
I have arrived at using whatever I deem appropriate. If I think that leaving the &optional accumulators in the API might make sense for the user, I leave it in. For example, in a reduce-like function, the accumulator can be used by the user for providing a starting value. Otherwise, I'll often rewrite it as a loop, do, or iter (from the iterate library) form, if it makes sense to perceive it as such. Sometimes, the labels helper is also used.