Many of the commonly used libraries I see use a single packages.lisp file to declare all of the packages in the library (system) in one place.
Since exported symbols are part of the package definition, this means individual source files don't list their exported symbols.
In my own projects, I prefer the style of defining one source file per package, and defining its interface/exports at the top of the file.
I am wondering if I am doing it wrong, or missing an essential concept that leads to the preference for a single packages.lisp file.
In case it's relevant, I'm also using ASDF's :package-inferred-system approach, and uiop:define-package instead of defpackage so I can use its handy symbol-shadowing :mix feature -- because I haven't figured out how to :use a package that shadows built-in symbols without re-declaring the shadows in each package that uses it.
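For concreteness, the per-file style I mean looks roughly like this (the package, file and symbol names here are made up for illustration):

(uiop:define-package :my-app/strings
  (:mix :my-shadowing-utils :common-lisp)  ; earlier packages win symbol conflicts, so no shadow re-declarations
  (:export #:join #:trim))
(in-package :my-app/strings)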
Traditional view of using Packages
Sometimes a package (a symbol namespace) is used across tens or even hundreds of files, and package definitions may even need to be computed programmatically. Keeping the package declaration(s) in one file can be simpler and makes it easier to get a textual overview. For example, the implementation of a text editor could live in just one package spread over around 100 files. Notice also that a package is really a runtime data structure in Common Lisp with a programmer interface.
Influence from other languages like Java
The style of having one package per source file often comes from outside Common Lisp, from languages that usually enforce a correspondence of something like one class = one namespace = one file. This leads to a multitude of files, in nested directories, often with small pieces of code per file.
Common Lisp does not have these restrictions: methods are not organized in classes, classes are not namespaces, files can have any mix of definitions, ... Lisp libraries tend to have large packages/namespaces and large files.
Structure and its limitations
Packages
Packages in Common Lisp are namespaces for symbols. This is not a full-blown module system, and there is no actual information hiding. There is a distinction between internal symbols and exported symbols. Note also that a symbol can name many things (a variable, a function/macro/special operator, a class, a slot, a type, a data object, a package, ...) and exporting a symbol from a package does not distinguish between those meanings. It is possible, but not recommended, to use more than one package namespace in a file (for example by using multiple in-package forms). Typically one would have only one namespace per file, declared by an in-package form somewhere at the top of the file.
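A small illustration of those points, with hypothetical names:

(defpackage :geometry
  (:use :common-lisp)
  (:export #:area))            ; AREA becomes an external (exported) symbol

(in-package :geometry)          ; the single namespace declaration at the top of the file

(defun area (r) (* pi r r))     ; usable from outside as geometry:area
(defun scale (r) (* 2 r))       ; internal, yet still reachable as geometry::scale -- no real hiding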
Classes
Classes are not namespaces. Typically classes are defined using the built-in Common Lisp Object System, CLOS. They bundle slots and are used for dispatching in CLOS generic functions, but they do not contain their methods.
Systems
Systems are not a language construct; they are an extension to Common Lisp. The idea of a system is provided by tools like ASDF. A system is a tool for organizing a bunch of files which make up a library or application, together with their dependencies. It also provides actions that can be performed on a system (compiling, loading, delivering, ...).
???
There might be something missing for better organization of code. Each project might need a slightly different way.
I would typically put related functionality into one file and set up a bunch of files for a system, if needed. That could mean that a file implements more than one class and a lot of functionality. I tend to organize files into sections where directly related code is implemented. I might describe a few elements (classes, functions) at the top of the file, but that would be more of a local overview than a list of exported symbols. Once you load a system and its files into a Lisp with its IDE, it is the purpose of the development environment to let me query the code (where is? who uses? what is used? sub/super class? package contents? ...) and to provide browsers for that.
There are alternative ways to organize code, for example by using PROVIDE and REQUIRE, which are only very lightly described in the language standard. These tend to pull in functionality on demand and create package structures on the go.
There might be the need for something like object-oriented protocols, which provide more structure for CLOS.
Early use of packages and systems in the MIT Lisp Machine operating system
One of the early demands for packages to avoid name clashes and systems to organize code came from the Lisp Machine operating system developed at MIT in the late 70s and the 80s. It was developed in Lisp Machine Lisp and later in Common Lisp. There, a lot of different functionality (compiler, editor, listener, inspector, mail client, graphics library, file system, serial interface, ethernet interface, networking code, ...) was running in a single address space with probably tens of thousands of symbols. The package declaration was usually done in the corresponding system description file (here we again use the Lisp meaning of a system as a library or an application, a collection of files for a certain purpose) or sometimes in a separate file. Packages provided the namespace for large libraries or even entire applications. Thus files, packages, systems and even classes (earlier called Flavors) were already used then to structure software implementations. See the Lisp Machine Manual (version from 1984), Chapter 29, Maintaining Large Systems and Kent Pitman's MIT AI Memo 801 from 1984: The Description of Large Systems. Systems on the Lisp Machine had versions and supported patching (incremental versioned changes). Files had versions, too.
Related
May I get recommendations or links to representative code repositories with good style for multiple related Common Lisp packages, please?
For instance, consider a high-level workflow library with accompanying lower-level API, each in its own CL package but same git repo due to synchronized releases.
Each system (*.asd file) isolates tests and may be invoked using:
(asdf:test-system foo :force t)
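For illustration, a sketch of how each .asd might wire that up; the system names, file names and the run-tests entry point below are hypothetical:

;; foo.asd
(defsystem "foo"
  :components ((:file "foo"))
  :in-order-to ((test-op (test-op "foo/test"))))  ; (asdf:test-system "foo") delegates to the test system

(defsystem "foo/test"
  :depends-on ("foo")
  :components ((:file "test"))
  :perform (test-op (o c) (uiop:symbol-call :foo/test '#:run-tests)))  ; calls a hypothetical entry point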
Separate systems may be built via make, which definitely helps isolate SBCL code-coverage reports.
Some users of the library may only want to load the lower-level API. To simplify dependencies for those using the higher-level API, it seems best to keep everything bundled in one repo. Any revision to one library would likely require updating all of them for the same release.
I currently have a single directory tree with a subdirectory for each CL package. There's a top-level Makefile plus one in each subdirectory that the library maintainer would use. The top level also contains symbolic links for .asd files pointing into the relevant subdirectories. (It's a library deeply dependent upon POSIX calls via uiop-posix, so it's only applicable on an OS with symlinks.)
This seems to be an issue at large considering issue #1 for Quicklisp-docs [0].
Found nothing relevant in Google's CL style guide [1], State of the Common Lisp Ecosystem, 2015 [2], Edi's CL Recipes [3] or Lisp-lang [4]. Browsing repos, I see quite a mix of styles.
[0] https://github.com/rudolfochrist/quicklisp-docs/issues/1
[1] https://google.github.io/styleguide/lispguide.xml
[2] https://web.archive.org/web/20160305061250/http://eudoxia.me/article/common-lisp-sotu-2015/
[3] http://weitz.de/cl-recipes/
[4] http://lisp-lang.org/learn/writing-libraries but see also their section on Continuous Integration
Repo to be fixed: https://gitlab.com/dpezely/cl-mmap
(commit a23bd88d of 2018-07-14; release will be tagged when fixed)
You could consider using ASDF's package-inferred-system. With that, you could have an mmap/high package that depends on an mmap/low package, and you can actually ask Quicklisp to load either of them directly:
(ql:quickload "mmap/high")
or
(ql:quickload "mmap/low")
You can see an example in my cl-bulk repo.
To reach a specific audience that might not have seen the question here, it was also posted to the Common Lisp Pro mailing list.
A summary of the various responses -- apart from great insights into various possible future directions -- is that there is no de facto convention, mechanism or style for addressing this combination of factors:
dependency synchronization across interrelated libraries/packages/systems
accommodating loading each individually
accommodating testing each individually
accommodating code-coverage reports of each individually
At the time of adding this answer, the closest thing to a consistent, concrete, existing solution seems to align with what had already been implemented by the package mentioned in the original post -- or close enough. (There are of course subtle design and naming differences, as indicated by the earlier answer here, but I see these variations as comparable.)
Highlights of packages and systems suggested as examples:
An early implementation of CLIM (predating McCLIM) for its separation of API versus implementation
Despite conventional use of ASDF systems and packages, explore how UIOP within ASDF itself is structured
ASDF and LIL import and reexport all symbols in a directory; see Faré's full summary (that pattern is sketched just below)
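For reference, that import-and-reexport pattern can be expressed with uiop:define-package; a minimal sketch with hypothetical package names:

(uiop:define-package :my-lib/interface
  (:nicknames :my-lib)
  (:use-reexport :my-lib/core :my-lib/extras))  ; use both packages and re-export all of their exported symbols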
Future directions and suggested reading included:
Consider that intersection to be a software engineering question, and construct accordingly, because "In short: it depends!"
Modules of Racket or Gerbil Scheme
Perhaps internal updates to Google's CL style guide have added something relevant?
(Much thanks to Pascal J. Bourguignon, Ken Tilton, Scott McKay, Faré, Svante v. Erichsen, Don Morrison and Pascal Costanza for participating in the email thread.)
Is it common for Julia users to encapsulate functionality into their own modules with the module keyword?
I've been looking at existing modules, and they seem to use include more than they actually use the module keyword for organizing parts of their code.
What is the better way?
Julia has 3 levels of "places you can put code":
include'd files
submodules
packages -- which for our purposes have exactly 1 (non-sub) module.
I used to be a big fan of submodules, given that I come from Python.
But submodules in Julia are not so good.
From my experience, code written with them tends to be annoying both from a developer and from a user perspective.
It just doesn't work out cleanly.
I have ripped the submodules out of at least one of my packages, and switched to plain includes.
Python needs submodules to help deal with its namespace -- "Namespaces are great, let's do more of those".
But because of multiple dispatch, Julia does not run out of function names -- you can reuse the same name with a different type signature, and that is fine (good, even).
In general submodules grant you separation and decoupling of each submodule from each other. But in that case, why are you not using entirely separate packages? (with a common dependency)
Things that I find go wrong with submodules:
say you had:
- module A
  - module B (i.e. A.B)
    - type C
One would normally do using A.B
but by mistake you can do using B since B is probably in a file called B.jl in your LOAD_PATH.
Say you then want to access type C.
If you had done using A.B, then entering B.C() gives you an object of type A.B.C.
But if you had mistakenly done using B, then B.C() gives you an object of type B.C.
And that type is not compatible with functions that (because they do the right using) expect an A.B.C.
It just gets a bit messy.
Also reload("A.B") does not work.
(however reload often doesn't work great)
Base is one of the only major chunks of Julia code that uses submodules (that I am aware of). And even Base has pushed a lot of those into separate (stdlib) packages for Julia 0.7.
The short of it is: if you are thinking about using a submodule, check that it isn't a habit you are bringing over from another language, and consider whether you don't just want to release another separate package instead.
I have frequently used packages in Julia. There are many articles that describe how to work with them, but I don't know what the exact definition of a package is.
EDIT
The following is a general definition from Wikipedia:
Package (package management system), in which individual files or resources are packed together as a software collection that provides certain functionality as part of a larger system
I would like to know the particular point of view toward packages that Julia has; e.g. look at this definition from Wikipedia about Java packages.
I would say that a Julia package is a module (similar to a namespace in other languages) containing a collection of related functions that provide new functionality for Julia, and that will be useful for other people.
This definition is somewhat ambiguous, though. For example, I suggested recently that several image format packages could belong inside a single ImageFormats package, but the replies were that there was a good reason (code size and binary dependencies) for certain kinds of formats to be in separate packages.
If you follow the discussion of the pull requests for new packages on METADATA.jl, you will have a good idea about the community's feeling about what packages should be for / look like. My takeaway from following those discussions is that a more-or-less unified vision is starting to emerge.
I'm looking for the best way to integrate the GNAT compiler with our custom code analysis/modification tools. We use custom tools to gather different code metrics (like execution time, test coverage, etc.) and even do some code obfuscation. For example, to measure code execution time I need to insert 2 procedure calls into each function/procedure (the first one where the function starts, and the other one at each function exit). The code for these 2 procedures is implemented in a separate translation unit. What is the best way to do these code instrumentations (insert/modify code) with the GNAT compiler in terms of simplicity and performance? I can think of several ways:
Does the GNAT compiler support code generation plugins of any kind? It seems that it doesn't, but maybe I missed something while googling about it. Maybe there is a way to do it using some metaprogramming tricks (like in some modern programming languages such as Nimrod and D), but I couldn't find whether Ada supports metaprogramming at all.
The ASIS library looks like it could help me out, but it is made for creating separate tools. Is it possible to integrate an ASIS-based tool with GNAT? For example, to write a tool that would be loaded by GNAT during compilation and would modify nodes in the AST before it (the AST) is transformed into GIMPLE. Using an ASIS-based tool separately (for example by preprocessing each source file before passing it to the compiler) would hurt compilation time, as source code would need to be parsed twice (by the tool and by the compiler) and saved to/loaded from some temporary location on disk.
Is it possible to get GIMPLE from the GNAT compiler, modify it and pass it to GCC? It seems that GIMPLE is used internally only: I can dump it with GCC, but I can't recompile the modified GIMPLE afterwards, since there appears to be no GIMPLE front end for GCC.
I am searching for a programming language for which a compiler exists and that supports self-modifying code. I've heard that Lisp supports these features, but I was wondering if there is a more C/C++/D-like language with these features.
To clarify what I mean:
I want to be able to access the program's code at runtime in some way and apply any kind of change to it, that is, removing commands, adding commands, changing them.
As if I had the AST of my program. Of course I can't have that tree in a compiled language, so it must be done differently. The compiler would need to translate the self-modifying commands into their binary equivalent modifications so they would work at runtime with the compiled code.
I don't want to be dependent on a VM, that's what I meant by compiled :)
Probably there is a reason Lisp is the way it is. Lisp was designed to program other languages and to compute with symbolic representations of code and data. The boundary between code and data is no longer there. This influences the design AND the implementation of a programming language.
Lisp has the syntactic features to generate new code, translate that code and execute it. Thus pre-parsed code uses the same data structures (symbols, lists, numbers, characters, ...) that are used for other programs and data, too.
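A small example of what that looks like in practice:

;; Code is ordinary list data; it can be built, compiled and run at runtime.
(let* ((form '(lambda (x) (* x x)))   ; a program fragment as a plain list
       (fn (compile nil form)))       ; compiled on the fly
  (funcall fn 7))                     ; => 49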
Lisp knows its data at runtime - you can query everything for its type or class. Classes are objects themselves, as are functions. So these elements of the programming language and of programs are first-class objects; they can be manipulated as such. 'Dynamic language' has nothing to do with 'dynamic typing'.
'Dynamic language' means that the elements of the programming language (for example via metaclasses and the meta-object protocol) and of the program (its classes, functions, methods, slots, inheritance, ...) can be inspected at runtime and can be modified at runtime.
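For instance, standard Common Lisp lets you query and replace program elements while the program runs (a minimal sketch):

(defun square (x) (* x x))
(class-of 42)                        ; every object can report its class
(symbol-function 'square)            ; functions are first-class objects
(setf (symbol-function 'square)      ; and a definition can be swapped out at runtime
      (lambda (x) (expt x 2)))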
Probably the more of these features you add to a language, the more it will look like Lisp, since Lisp is pretty much the local maximum of a simple, dynamic, programmable programming language. If you want some of these features, you might want to think about which features of your other programming language you have to give up, or are willing to give up. For example, for a simple code-as-data language, the whole C syntax model might not be practical.
So C-like and 'dynamic language' might not really be a good fit - the syntax is only one part of the whole picture, and even the C syntax model limits how easily we can work with a dynamic language.
C# has always allowed for self-modifying code.
C# 1 allowed you to essentially create and compile code on the fly.
C# 3 added "expression trees", which offered a limited way to dynamically generate code using an object model and abstract syntax trees.
C# 4 builds on that by incorporating support for the "Dynamic Language Runtime". This is probably as close as you are going to get to LISP-like capabilities on the .NET platform in a compiled language.
You might want to consider using C++ with LLVM for (mostly) portable code generation. You can even pull in clang as well to work with C parse trees (note that clang currently has incomplete support for C++, but is written in C++ itself).
For example, you could write a self-modification core in C++ to interface with clang and LLVM, and write the rest of the program in C. Store the parse tree for the main program alongside the self-modification code, then manipulate it with clang at runtime. Clang will let you directly manipulate the AST in any way, then compile it all the way down to machine code.
Keep in mind that manipulating your AST in a compiled language will always mean including a compiler (or interpreter) with your program. LLVM is just an easy option for this.
JavaScript + V8 (the Chrome JavaScript compiler)
JavaScript is
dynamic
self-modifying (self-evaluating) (well, sort of, depending on your definition)
has a C-like syntax (again, sort of, that's the best you will get for dynamic)
And you can now compile it with V8: http://code.google.com/p/v8/
"Dynamic language" is a broad term that covers a wide variety of concepts. Dynamic typing is supported by C# 4.0 which is a compiled language. Objective-C also supports some features of dynamic languages. However, none of them are even close to Lisp in terms of supporting self modifying code.
To support such a degree of dynamism and self-modifying code, you should have a full-featured compiler to call at run time; this is pretty much what an interpreter really is.
Try Groovy. It's a dynamic JVM-based language that is compiled at runtime. It should be able to execute its own code.
http://groovy.codehaus.org/
Otherwise, you've always got Perl, PHP, etc., but those are not, as you suggest, C/C++/D-like languages.
I don't want to be dependent on a VM, that's what I meant by compiled :)
If that's all you're looking for, I'd recommend Python or Ruby. They can both run on their own virtual machines, on the JVM, and on the .NET CLR. Thus, you can choose any runtime you want. Of the two, Ruby seems to have more meta-programming facilities, but Python seems to have more mature implementations on other platforms.