(ASDF 3) Is it possible to recursively load systems in subdirectories? - common-lisp

I know about using :modules, but what about when systems get nested? Suppose I have the following structure, relative to some unknown user directory:
foo/
  foo.asd
  bar/
    bar.asd
This could arise, for example, when using Git submodules. How shall I configure the (defsystem) call in foo.asd to load bar as a dependency, without modifying a config file outside of foo/ or demanding particular placement for the foo/ tree itself? Feels like it should be simple.
3 Feb. 2020: From Svante's answer, it sounds like my question is really 'How do I dynamically ensure that foo/ and bar/ both get into the *source-registry*?' The ASDF manual makes me think this should do the trick:
(asdf:initialize-source-registry
 '(:source-registry
   (:tree "«absolute-path-to-foo»/")
   :inherit-configuration))
though I have not seen an example of that usage.
26 Mar. 2020: The technique above seems to work fine, so I'm closing this question. ASDF 3 is excellent.

ASDF doesn't care about relative locations of .asd files. ASDF systems and their dependencies are completely orthogonal to file/directory structure and oblivious to any source version control.
It just looks in several locations for .asd files. Each such file then may contain definitions for systems. It will generally recurse into the configured folders, so any .asd file in a git submodule would usually also be found.
The definitions inside an .asd file, e.g. of components, then work relative to the location of that file.
In your example, if you give a :depends-on ("bar") option to the "foo" system, it would just work, no matter where bar.asd resides (as long as it is somewhere where ASDF finds it).
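For instance, a minimal foo.asd could look like the sketch below ("main" is a hypothetical component name, not from the question):
;; foo.asd -- minimal sketch; ASDF finds bar.asd via its source registry
(defsystem "foo"
  :depends-on ("bar")
  :components ((:file "main")))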
A bit more awareness is required if you have several versions of a library. This might happen if you work on "foo" and "bar" at the same time while a stable version of "bar" is also available, e.g. in a Quicklisp dist. Then the lookup order comes into play, but usually your “personal” directories take precedence over “system” directories, so again, it would just work. For more control, you might want to look into qlot.

Related

Is there a way to avoid recursive make with nobase?

I've got the following directory structure:
Makefile.am
src/
  mymod/
    mod.cc
    submod/
      submod.cc
inc/
  Makefile.am
  mymod/
    mod.hh
    submod/
      submod.hh
Using autotools, I'd like to distribute both a library made from src and the headers in inc. The top level Makefile.am looks something like
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = ./mymod/mod.cc \
                   ./mymod/submod/submod.cc
SUBDIRS = inc
Then inc/Makefile.am has
mymod_includedir = $(includedir)
nobase_mymod_include_HEADERS = mymod/mod.hh \
                               mymod/submod/submod.hh
This works OK. I end up with whatever library stuff, and my headers get installed appropriately. However, I'd like to eliminate the recursion involved in the Makefile. The problem is that if I move the lines in inc/Makefile.am to the root directory, then I have to update the paths as follows:
mymod_includedir = $(includedir)
nobase_mymod_include_HEADERS = inc/mymod/mod.hh \
                               inc/mymod/submod/submod.hh
This results in my headers getting dumped as $PREFIX/include/inc/mymod/mod.hh and not $PREFIX/include/mymod/mod.hh like I want. I know I could do something like
mymodincludedir=$(includedir)/mymod
mymod_HEADERS=inc/mymod/mod.hh
mysubmodincludedir=$(includedir)/mymod/submod
mysubmod_HEADERS=inc/mymod/submod/submod.hh
but that's pretty painful, because there are a lot of subdirectories, and more subdirectories within those (we're distributing a third party's code that our own headers need). What I'd like is either to tell automake to copy the directories in inc/ to $(includedir) along with every subdirectory it encounters within, or to tell it to strip only part of the path from the header files I'm listing. Is this possible?
I think the closest you can find is Karel Zak's Makemodule.am approach, with which nobase_ would work as you need.
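For reference, a minimal sketch of that include-based, non-recursive layout (untested against this exact tree; %reldir% needs a reasonably recent Automake):
# Makefile.am (top level) -- no SUBDIRS recursion, just included fragments
lib_LTLIBRARIES =
include src/Makemodule.am

# src/Makemodule.am -- %reldir% expands to this fragment's directory
lib_LTLIBRARIES += mylib.la
mylib_la_SOURCES = %reldir%/mymod/mod.cc \
                   %reldir%/mymod/submod/submod.cc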

How to include all function and package declarations in a file called all.lisp for an asdf package-inferred-system

So, in section 6.5 of the asdf manual/documentation, on the package-inferred-system extension, the example uses an all.lisp file for determining packages (which we may assume contains all of the function and package information for that respective subdirectory).
What I want to know is: what would be the "proper" way of including all of the function and package declarations in this all.lisp file? Would I do something like include all of the function declarations for that subdirectory in the all.lisp file, and then use the register-system-packages function in the .asd file? Or could I omit the all.lisp file and let the compiler infer the packages from the files (but would I then have to call register-system-packages for every single file I add to the system)?
I'm just wondering about the specifics of using this system, and about the files and declarations that have to be written when adding a new file to the system.
Sorry for the opacity of the question; I can't seem to grasp the specifics of this system.
1- See how it's done in lisp-interface-library/*/all.lisp, using uiop:define-package and its :use-reexport clause.
See for instance pure/all.lisp:
(uiop:define-package :lil/pure/all
  (:nicknames :pure)
  (:import-from :lil/interface/all)
  (:use :closer-common-lisp)
  (:mix :fare-utils :uiop :alexandria)
  (:use-reexport
   :lil/interface/base
   :lil/interface/eq
   :lil/interface/order
   :lil/interface/group
   :lil/pure/empty
   :lil/pure/collection
   :lil/pure/iterator
   :lil/pure/map
   :lil/pure/set
   :lil/pure/alist
   :lil/pure/tree
   :lil/pure/hash-table
   :lil/pure/fmim
   :lil/pure/encoded-key-map
   :lil/pure/queue
   :lil/pure/iterator-implementation
   :lil/pure/map-implementation
   :lil/pure/set-implementation
   :lil/pure/alist-implementation
   :lil/pure/tree-implementation
   :lil/pure/hash-table-implementation
   :lil/pure/fmim-implementation
   :lil/pure/encoded-key-map-implementation
   :lil/pure/queue-implementation))
2- These days, I recommend requiring asdf 3.1 and not using asdf-package-system. For maximal backward compatibility, use
#-asdf3.1 (error "<my system> requires ASDF 3.1 or later. Please upgrade your ASDF.")
And then, in your defsystem, use :class :package-inferred-system
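Putting 2- together, the head of a hypothetical my-system.asd might read (system names invented for illustration):
#-asdf3.1 (error "my-system requires ASDF 3.1 or later. Please upgrade your ASDF.")
(defsystem "my-system"
  :class :package-inferred-system
  :depends-on ("my-system/all"))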
3- I do not follow this forum closely. ASDF questions find a quicker answer on the asdf-devel mailing-list.
As I interpret that, you would simply have the package defined in all.lisp depend on the packages defined in the other files of that system. It is then, in a way, an entry point for the dependency graph into that system. I would expect all.lisp to contain high level entry definitions that naturally depend on the other files.
For example, if you build a system that has a (sub-)system for exposing a web interface, the webinterface/all.lisp file/package would contain functions for configuring, starting, and stopping the web server. These functions would depend on the handler definitions in other files/packages which in turn would depend on other files/packages that provide the data or do the meat of request processing.
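A hypothetical webinterface/all.lisp in that spirit (all names invented for illustration):
(uiop:define-package :my-app/webinterface/all
  (:nicknames :my-app-web)
  (:use :common-lisp)
  (:use-reexport
   :my-app/webinterface/server      ; configure/start/stop the web server
   :my-app/webinterface/handlers))  ; request handlers doing the actual work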

AFP Dijkstra's Shortest Path Algorithm

For the AFP entry Dijkstra's Shortest Path Algorithm, both the proof outline and the proof document were nonexistent.* Unfortunately, I did not find an IsaMakefile either to build those documents locally. What is the best way to get those documents?
Another question: as Dijkstra.thy depends on a lot of other theories, is there a way to load everything faster?
*) It is fixed now.
(There seems to be something broken at AFP right now, please tell the editors about it.)
In general, you can download the sources of AFP entries and produce the documents yourself like this:
Get and unpack all AFP sources -- downloading separate entries is offered as well, but then you have to disentangle dependencies manually.
Invoke isabelle build like this:
isabelle build -d afp-2013-03-02 -o document=pdf -v Dijkstra_Shortest_Path
Here afp-2013-03-02 is the directory that was obtained by unpacking the current AFP sources.
See also the Isabelle System manual about "Isabelle sessions and build management", which is all new in Isabelle2013.
See isabelle build -b there to make things load faster, by producing persistent heap images from sessions.
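For example, building the entry once with a stored heap image, so that later sessions only need to load it (same directory layout as above):
isabelle build -b -d afp-2013-03-02 Dijkstra_Shortest_Path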
The links in the AFP entry were indeed broken and should now be fixed again, sorry about that.
As Makarius writes, the AFP now uses Isabelle's new build system, i.e. it has a ROOT file for each entry that can be used to check the associated theories and build the document.
Makarius' answer is pretty much the official way to do it, although I would additionally recommend setting up the AFP as a component. This gives you the following steps:
Download the AFP to e.g. ~/afp
Set it up as a component, e.g. by adding ~/afp to ~/.isabelle/Isabelle2013/components (see also AFP as a component)
Build the entry with
isabelle afp_build Dijkstra_Shortest_Path
You can also have jEdit build the heap image for you. If the AFP is set up as a component (see the other answers for that), just start jEdit with
isabelle jedit -d '$AFP' -l Dijkstra_Shortest_Path
and jEdit will select Dijkstra_Shortest_Path as base logic and (re)build it if necessary.
If you make regular use of the AFP, it might be useful to add the AFP path by default. For this, create a file ROOTS in $ISABELLE_HOME_USER containing the line $AFP (or add this line if the file already exists).
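On a Unix shell that could look as follows (assuming the Isabelle2013 user directory; adjust the path to your installation, and note the single quotes keep $AFP literal):
echo '$AFP' >> ~/.isabelle/Isabelle2013/ROOTS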

GNAT Programming Suite - source file not found

Ada is still new to me, so I am trying to find my way around the GPS IDE. I asked another question earlier, but I think this problem has precedence over that one, and may be at the root of my trouble.
When I compile, I am getting a long list of warnings of the form "warning: source file ... not found".
In my .gpr file, I have listed all of the spec and body source files and use the following naming scheme:
package Naming is
   for Casing use "mixedcase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("ada") use "_s.ada";
   for Body_Suffix ("ada") use "_b.ada";
end Naming;
What is odd is that the error messages all look either like this:
warning: source file "xxx_b.adb" not found
or this
warning: source file "xxx.adb" not found
Note that neither of these (xxx_b.adb or xxx.adb) conforms to the file specs, which should end with .ada.
Can someone explain what is going on here?
I'm 99% sure that the problem is one of the ones I mentioned in answer to your other question: GNAT does not normally support more than one compilation unit in a file. I got exactly the behaviour you describe with GPS and these files:
james_s.ada:
with Jane;
package James is
end James;
jim_s.ada:
package Jim is
end Jim;
package Jane is
end Jane;
The error message on compiling james_s.ada says it can't find Jane_s.ada, but when I ask GPS to go to the declaration of Jane it takes me to the "correct" line in jim_s.ada.
You could use gnatchop to split jim_s.ada, but it doesn't understand project files or naming conventions; you probably want to keep the existing names for the code that works, so you'd rename gnatchop's output as required.
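For example, running it on the file from above (gnatchop writes one file per unit using its default naming scheme, so this would produce jim.ads and jane.ads, which you could then rename):
gnatchop jim_s.ada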
However! to my great surprise, it turns out that GNAT does support having more than one compilation unit in a file, provided package Naming in the project file tells it about each unit in the file:
package Naming is
   for Casing use "mixedcase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("ada") use "_s.ada";
   for Body_Suffix ("ada") use "_b.ada";
   for Spec ("Jim") use "jim_s.ada" at 1;
   for Spec ("Jane") use "jim_s.ada" at 2;
end Naming;
It's up to you whether to do this or to bite the bullet and use gnatchop, either on the multi-unit files or on the whole source tree.
First off, this isn't an Ada problem, it's a GNAT problem. Other Ada compilers have no problem with the file names you are using.
However, GNAT is rather unique in that it expects there to be only one program unit (package body, package spec, stand-alone routine, etc.) per source file. This is because it is also rather unique in that it expects to be able to find the source code for any program unit just by knowing that unit's Ada identifier. Most other Ada compilers maintain some kind of library file that maps file names to program units, and you have to register all your files into it. (Whereas your typical C compiler just leaves the problem of finding the files for all your code up to the user entirely.)
Generally the easiest thing to do with GNAT, the way that will cause you the least trouble, is to just use its default file naming convention (and of course don't put multiple program units in a single file).
If you already have some existing Ada code (perhaps developed for another compiler), the easiest way to import it into Gnat is typically to run the gnatchop tool on it all. So that's what I'd suggest you try.
From GPRbuild User's Guide:
Strings are used for values of attributes or as indexes for these attributes. They are in general case sensitive, except when noted otherwise [...]
Based on this, I believe you have to use "Ada" instead of "ada" as the index for Spec_Suffix and Body_Suffix. I currently do not have access to the tools for testing this, so I suggest just trying it out.
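That is, the package from the question would become (untested, following the quoted rule):
package Naming is
   for Casing use "mixedcase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("Ada") use "_s.ada";
   for Body_Suffix ("Ada") use "_b.ada";
end Naming;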

cmake: Working with multiple output configurations

I'm busy porting my build process from msbuild to cmake, to better be able to deal with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: say, one version with sse2, another for x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is to simply have multiple build directories. From each one you simply call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON # or whatever enables it in your CMakeLists.txt
make
cd ..
mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON # or whatever again...
make
This way, each build directory is completely separated from each other.
This allows you to have one directory for Debug, another for Release and another for cross-compiling.
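For instance, a Debug and a Release tree side by side, using the standard CMAKE_BUILD_TYPE variable:
mkdir build-debug && cd build-debug
cmake .. -DCMAKE_BUILD_TYPE=Debug
make
cd .. && mkdir build-release && cd build-release
cmake .. -DCMAKE_BUILD_TYPE=Release
make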
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
Now, it's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope - so, for instance, if you call include_directories(X), the X directory will remain in the include list even after the function exits.
Directories do have scope - and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)
set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")
set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")
set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")
set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The normal project dir then contains a CMakeLists.txt file with code that sets up the appropriate includes and compiler options based on the global variables set in the FooAllConfigs project. It also determines a build suffix that is appended to all build outputs; every output, even indirectly included ones (e.g. those generated by add_executable), must have a unique name.
This works fine for me.
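For illustration, project_dir/CMakeLists.txt might consume those variables and derive the unique names roughly like this (a sketch; target and file names are invented):
# project_dir/CMakeLists.txt -- sketch only
set(suffix "${FooVar}${FooAnotherVar}")   # e.g. "2b", "5c", ...
add_definitions(-DFOO_VAR=${FooVar})      # per-config compiler options
add_executable(myapp-${suffix} main.cc)   # unique name per output directory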
