Is there a way to make a configurable ASDF system that loads different lisp files depending on user configuration variables?
Preferably, the loading process would fail with a legible error message if a required config var was not provided.
How would one set such variables conveniently when loading the system? (Manually, through quicklisp, or from another system definition file)
Please do not use #+ and #- in the defsystem form, but :if-feature instead.
Also, it's much better to do runtime differentiation, or to have a separate system target per feature, than to have one target whose meaning changes in an unaccountable way that the build system can't see.
Not out of the box, but you could arrange something akin to that. The easiest way to change which files a system loads is to use #+/#- in combination with *features*.
One way to add features is in your .sbclrc (or your implementation's startup file). If you want something more project-specific, you can define a perform :before method on load-op for your system that calls a function which reads a file and, depending on its contents, modifies *features* so that different components are read. I use this scheme for loading variables from configuration files.
So if my system is foo:

(defsystem #:foo
  ...
  :depends-on (#:foo-config)
  ...)

(defmethod perform :after ((op load-op)
                           (system (eql (find-system :foo-config))))
  (asdf/package:symbol-call :foo/config-loader 'load-config
                            (system-relative-pathname :foo-config
                                                      #P"foo.config")))
Because #+/#- works at read time, I'm guessing this wouldn't work on its own: the features would be modified only after the system definition has already been read. A hacky way around that is to require the config system first and have the :after method require the main system once the features are set up.
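Since the question asks for :if-feature rather than read-time conditionals, the same idea can be expressed per component. A hedged sketch (the system, file, and feature names are invented; note that :if-feature is consulted when ASDF plans the operation, so the feature must be pushed onto *features* before this system is loaded, e.g. by a config system loaded first):

```lisp
(defsystem #:foo
  ;; #:foo-config is assumed to push e.g. :foo-postgres onto *features*
  ;; before this system's plan is computed.
  :depends-on (#:foo-config)
  :components ((:file "package")
               (:file "backend-postgres" :if-feature :foo-postgres)
               (:file "backend-sqlite"   :if-feature (:not :foo-postgres))))
```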
My 2c.
Basically, I want to be able to generate class definitions, compile the system, and save it for reuse. Would that involve a code walker, or is there a simpler option?
(save-lisp-and-die "isn't going to work for me")
Expanding to explain. I'm generating systems based on OpenAPI definitions, so a system roughly corresponds to an API client.
There will be dozens, if not hundreds of these.
The idea is to NOT keep them all in the image, but load at run time as required.
I see two possible routes here, and to some extent, I suspect they mainly differ in "the last mile" (as it were).
The route you seem to have settled on, run-time definition of classes and functions.
A route whereby you generate your function/class forms, but don't go the full way to get them "Live" in the image and instead emit the form(s) to a file.
I suspect that most of the generating code could be shared between the two routes: for the first, a wrapping macro that effectively returns a PROGN; for the second, a function that pretty-prints what the macro would have returned to a stream.
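That shared-generator idea might be sketched like this (all names are invented for illustration, and the class description format is deliberately toy-sized compared to real OpenAPI):

```lisp
(defun generate-class-form (name slots)
  "Build a DEFCLASS form from a simple (name slots) description."
  `(defclass ,name ()
     ,(mapcar (lambda (slot)
                `(,slot :initarg ,(intern (symbol-name slot) :keyword)
                        :accessor ,slot))
              slots)))

;; Route 1: make the definitions live in the current image.
(defmacro define-api-classes (&rest specs)
  `(progn ,@(mapcar (lambda (spec) (apply #'generate-class-form spec))
                    specs)))

;; Route 2: emit the very same forms to a stream, to be compiled
;; later as an ordinary source file.
(defun emit-api-classes (specs stream)
  (let ((*print-case* :downcase))
    (dolist (spec specs)
      (pprint (apply #'generate-class-form spec) stream))))
```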
That said, building a tailored environment and saving it to a "core" file is a pretty good way of getting excellent startup times.
I am in the design phase of building a deployment tool for CL projects.
I imagine the typical workflow to be like so:
(ql:quickload :yolo)
(yolo:configure)
Developer adds remote machine user and host, etc...
Developer goes to bed. Turns off the PC
In the morning: hack hack hack
Time to deploy. Developer commits changes and goes to the REPL and types (yolo:deploy)
What I want is for my library to know which system the developer wants deployed based on the current package. (*package*)
My question: Is it possible to find the system that loaded a particular package? Naturally I am talking about ASDF.
Bonus question: Is such a tool even needed? Perhaps there is a better workflow. I am also planning to make the library an executable, in which case the default project can be obtained from the current directory.
First, read about packages, systems, libraries etc. here.
I do not think that it makes much sense to infer the intended system. You need at least one additional parameter anyway (the target where to deploy).
A few ideas:
I imagine that a deployment tool would deploy a system. It would then perhaps sensibly be defined as an extension for ASDF.
For example, you might devise such an extension so that you can specify deployment configuration in the defsystem form like this:
(defsystem #:foo
  :defsystem-depends-on (#:your-awesome-asdf-deploy)
  ;; ...
  :deploy-targets (:test (:host "test.example.org"
                          :user "foo"
                          :env (:backend-url "https://test.foo.org/api"
                                :dev t))
                   :prod (:host "prod.example.org"
                          :user "bar"
                          :env (:backend-url "https://foo.org/api"
                                :dev nil))))
This information could then be used in a new op deploy-op, that you might invoke like:
(asdf:oos 'asdf-deploy:deploy-op 'foo :target :test)
There is no built-in way to answer your question, but if, right after loading ASDF, you hook into (defmethod perform :after ((o load-op) (s system)) ...) a function that diffs the current list of all packages against the previously known list, you can build an index of which systems create which packages.
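A rough sketch of that package-index idea (the hash table and variable names are my invention, not an ASDF facility):

```lisp
(defvar *system-packages* (make-hash-table :test 'equal)
  "Maps a system name to the packages that appeared while loading it.")

(defvar *seen-packages* (list-all-packages)
  "Packages already known before the next LOAD-OP completes.")

(defmethod asdf:perform :after ((o asdf:load-op) (s asdf:system))
  ;; Any package that exists now but didn't before this load-op
  ;; is attributed to the system just loaded.
  (let ((new (set-difference (list-all-packages) *seen-packages*)))
    (when new
      (setf (gethash (asdf:component-name s) *system-packages*)
            (mapcar #'package-name new))
      (setf *seen-packages* (list-all-packages)))))
```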
I have this tree structure:
repository/modules/module1
repository/modules/module2
repository/modules/module..
repository/apps/application1
repository/apps/application2
repository/apps/application..
where the applications use some of the modules.
Now I'd like to put some resources inside a module (like very colorful icons inside a widget used by several applications), but something goes wrong.
inside the module CMakeLists.txt if I use only:
set(${MODULE_NAME}_RCS
  colors.qrc
)
...
qt4_add_resources (${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})
no qrc_colors.cxx is created anywhere. So I tried to add:
ADD_EXECUTABLE (${MODULE_NAME}
  ${${MODULE_NAME}_RHEADERS}
)
but.. I get this weird error:
CMake Error at repo/modules/ColorModule/CMakeLists.txt:51 (ADD_EXECUTABLE):
add_executable cannot create target "ColorModule" because another
target with the same name already exists. The existing target is a static
library created in source directory
"repo/modules/ColorModule". See documentation for
policy CMP0002 for more details.
(I've changed the path of the error of course)
So I don't know what to think, because I'm new both to CMake and Qt.
What can I try?
EDIT:
If I add ${MODULE_NAME}_RHEADERS and ${MODULE_NAME}_RCS to the add_library command, qrc_colors.cxx is created, BUT it lands in repository/modules/module1/built and is not copied into the application's build directory...
There are at least two errors in your code.
1) It is usually not necessary to use ${MODULE_NAME} everywhere like that; plain MODULE_NAME often suffices. The difference is a raw string versus a variable dereference, and it is usually recommended to avoid double dereferences (${${...}}) where possible.
2) More importantly, you seem to be using ${MODULE_NAME} as the name of more than one target, namely "ColorModule" according to the error output. Each binary needs its own unique target name.
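For what it's worth, a hedged sketch of how the module's CMakeLists.txt could wire the resources into the existing library target instead of creating a clashing executable (variable names follow the question; the ${MODULE_NAME}_SRCS list is assumed to exist):

```cmake
set(${MODULE_NAME}_RCS colors.qrc)

# Generates qrc_colors.cxx in the module's build directory.
qt4_add_resources(${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})

# Compile the generated file into the module library itself, so every
# application linking the library gets the resources automatically.
add_library(${MODULE_NAME} STATIC
  ${${MODULE_NAME}_SRCS}
  ${${MODULE_NAME}_RHEADERS})
```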
Also, the resource file focus is a bit of a red herring here. There are several other issues with your project.
You name cmake files CmakeLists.txt instead of CMakeLists.txt, which inherently causes issues on case-sensitive systems such as my Linux box.
You use Findfoo.cmake, and find_package(foo) for that matter, rather than the usual FindFoo.cmake convention alongside find_package(Foo).
Your FindFoo.cmake is quite odd and should probably be rewritten.
Most importantly, you should use config files rather than find modules.
Documentation and examples can be found at these places:
http://www.cmake.org/Wiki/CMake/Tutorials#CMake_Packages
https://projects.kde.org/projects/kde/kdeexamples/repository/revisions/master/show/buildsystem
When you want to use a find module, you need to have it at hand already: it tells you what to look for, where things are, or whether they are missing. It is not something you should write yourself; you should just reuse the existing ones shipped for projects that do not use CMake, which is why find modules are distributed separately.
Writing your own is a bit like putting the treasure map right next to the treasure. Do you see the irony? :) Once you find the map, you automatically have the treasure as well, i.e. you would not be looking for it anymore.
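To illustrate the config-file route recommended above, a consuming project would simply do something like this (the package and target names are hypothetical):

```cmake
# Foo installs FooConfig.cmake; find_package locates it in CONFIG mode,
# so no hand-written FindFoo.cmake module is involved.
find_package(Foo CONFIG REQUIRED)
target_link_libraries(myapp PRIVATE Foo::Foo)
```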
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the compiler directly, because my code also requires things like goog.asserts, so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require's everything. Surely that can't be right?
This is my main issue.
However, the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will later optimize all these names away. I can fix that by adding @export annotations all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here, or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I am mentioning because it sort of related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require's everything. Surely that can't be right?
You would need to specify a --root for your source folder and list the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated calcdeps.py script; I still use it for some projects.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead code elimination. The work/payoff balance of this must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
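The separate exports file might look roughly like this (the symbol names are invented; goog.exportSymbol and goog.exportProperty are the Closure Library helpers for keeping names alive):

```javascript
// foolibrary_exports.js -- compiled in only for the standalone build.
goog.require('FooLibrary.Qux.Rumps');

// Preserve the public names under ADVANCED_OPTIMIZATIONS.
goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
goog.exportProperty(FooLibrary.Qux.Rumps.prototype, 'frob',
                    FooLibrary.Qux.Rumps.prototype.frob);
```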
My GeolocationMarker library has an example of this strategy.
I have a system I wrote that uses lparallel when possible, and otherwise works around it. Problem is, I'm now trying to test it on ECL, which errors upon merely loading lparallel.
Is there a way (other than #-ecl) to specify system dependencies parameterized by implementation type? I've looked at Madeira Port, but it seems to work only for subcomponents, not system dependencies. I wish to keep my .asd file as a simple machine-readable form, without reader conditionals etc.
(Aside: lparallel loads fine on a current ECL; mine was just outdated.)
To my knowledge there is no direct way to do that in ASDF apart from reader conditionals. You could use XCVB instead, or write a defsystem* macro that adds new syntax, or (maybe) hook into the existing defsystem as madeira does.
Without knowing your motivation, I wonder why the simple solution of #-ecl is being avoided. Do you wish to store metadata that links ECL to the nonexistence of lparallel? Note #-lparallel and #+lparallel may be used in lisp files.
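In a source file, that looks something like this (a sketch; it assumes, as the answer implies, that loading lparallel pushes :lparallel onto *features*):

```lisp
(defun maybe-parallel-map (fn list)
  "Use lparallel's PMAPCAR when it was present at read time,
plain MAPCAR otherwise."
  #+lparallel (lparallel:pmapcar fn list)
  #-lparallel (mapcar fn list))
```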
By the way lparallel loads fine for me in ECL. Are you running the latest ECL? There is a known bug in ECL that causes the lparallel tests to eventually hang, however this may not affect all platforms.