I am in the design phase of building a deployment tool for CL projects.
I imagine the typical workflow to be like so:
(ql:quickload :yolo)
(yolo:configure)
Developer adds remote machine user and host, etc...
Developer goes to bed and turns off the PC.
In the morning: hack hack hack
Time to deploy. Developer commits changes and goes to the REPL and types (yolo:deploy)
What I want is for my library to know which system the developer wants deployed, based on the current package (*package*).
My question: Is it possible to find the system that loaded a particular package? Naturally I am talking about ASDF.
Bonus question: Is such a tool even needed? Perhaps there is a better workflow. I am also planning to make the library an executable, but then the default project can be obtained from the current directory.
First, read about packages, systems, libraries etc. here.
I do not think it makes much sense to infer the intended system. You need at least one additional parameter anyway (the target to deploy to).
A few ideas:
I imagine that a deployment tool would deploy a system. It would then perhaps sensibly be defined as an extension for ASDF.
For example, you might devise such an extension so that you can specify deployment configuration in the defsystem form like this:
(defsystem #:foo
  :defsystem-depends-on (#:your-awesome-asdf-deploy)
  ;; ...
  :deploy-targets (:test (:host "test.example.org"
                          :user "foo"
                          :env (:backend-url "https://test.foo.org/api"
                                :dev t))
                   :prod (:host "prod.example.org"
                          :user "bar"
                          :env (:backend-url "https://foo.org/api"
                                :dev nil))))
This information could then be used in a new op deploy-op, that you might invoke like:
(asdf:oos 'asdf-deploy:deploy-op 'foo :target :test)
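For illustration only, here is a rough, hypothetical sketch (assuming a recent ASDF 3) of the skeleton such an extension might have. asdf-deploy, deploy-system, and deploy-op are made-up names, and a real extension would also need a :class option (or similar machinery) in the defsystem form so that ASDF instantiates the subclass, plus a way to plumb the :target argument through:

(defpackage #:asdf-deploy
  (:use #:cl)
  (:export #:deploy-system #:deploy-op))
(in-package #:asdf-deploy)

;; A system subclass whose defsystem form may carry :deploy-targets.
(defclass deploy-system (asdf:system)
  ((deploy-targets :initarg :deploy-targets
                   :initform nil
                   :reader deploy-targets)))

;; An operation that deploys an already-built system.
(defclass deploy-op (asdf:non-propagating-operation) ())

(defmethod asdf:perform ((op deploy-op) (sys deploy-system))
  ;; A real implementation would pick one entry of DEPLOY-TARGETS and
  ;; push the build to its :host as its :user; this sketch only prints.
  (format t "~&Deploying ~a; configured targets: ~s~%"
          (asdf:component-name sys)
          (deploy-targets sys)))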
There is no built-in way to answer your question, but if, right after loading ASDF, you hook into it with a (defmethod perform :after ((o load-op) (s system)) ...) that diffs the list of all packages against a snapshot taken beforehand, then you can build an index of which systems create which packages, as sketched below.
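A minimal sketch of that idea (assuming the hook is installed before any other system is loaded, so the initial snapshot is accurate):

(defvar *known-packages* (list-all-packages)
  "Packages already present the last time we looked.")

(defvar *system-packages* (make-hash-table :test 'equal)
  "Maps a system name to the packages its loading created.")

(defmethod asdf:perform :after ((o asdf:load-op) (s asdf:system))
  ;; Attribute any packages that appeared since the last load to S.
  (let ((new (set-difference (list-all-packages) *known-packages*)))
    (when new
      (setf (gethash (asdf:component-name s) *system-packages*) new
            *known-packages* (list-all-packages)))))

(defun package-system (package-designator)
  "Return the name of the system whose load created the package, if known."
  (let ((package (find-package package-designator)))
    (loop for system being the hash-keys of *system-packages*
            using (hash-value packages)
          when (member package packages)
            return system)))

With that in place, (package-system *package*) would give your tool a default system to deploy.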
Related
Is there a way to make a configurable ASDF system that loads different lisp files depending on user configuration variables?
Preferably, the loading process would fail with a legible error message if a required config var was not provided.
How would one set such variables conveniently when loading the system? (Manually, through quicklisp, or from another system definition file)
Please do not use #+ and #- in the defsystem form, but :if-feature instead.
Also, it's much better to do runtime differentiation, or to have a different system target altogether depending on features, than to have a target that changes its meaning in an unaccountable way that the build system can't see.
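For example, a minimal sketch of per-implementation components with :if-feature (the file names are made up):

(defsystem #:foo
  :components ((:file "package")
               (:file "impl-sbcl"    :if-feature :sbcl)
               (:file "impl-ecl"     :if-feature :ecl)
               (:file "impl-generic" :if-feature (:not (:or :sbcl :ecl)))))

Unlike #+/#-, these conditions are visible to ASDF itself, so the build plan stays accountable to the build system.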
Not out of the box, but you could arrange something akin to that. The easiest way to change which files a system loads would be to use #+/#- in combination with *features*.
One way to add features would be in .sbclrc (or your implementation's startup file). If you want something more project-specific, you could define a perform :before load-op method on your system that calls a function to read a file and, depending on its contents, modify *features* so that different components are read. I use this scheme for loading variables from configuration files.
So if my system is foo:
(defsystem #:foo
  ...
  :depends-on (#:foo-config)
  ...)
(defmethod asdf:perform :after ((op asdf:load-op)
                                (system (eql (asdf:find-system :foo-config))))
  (asdf/package:symbol-call :foo/config-loader 'load-config
                            (asdf:system-relative-pathname :foo-config
                                                           #P"foo.config")))
Because #+/#- works at read time, I'm guessing this wouldn't work as-is: the features would be modified only after the system definition had been read. A hacky way around that could be to require just the config system first, and have the after-method load the main system once the features are set up.
My 2c.
First of all, I will explain what I would like to do here: given a big C program, I would like to output a list of producers/consumers for a piece of data, and a list of calling/called-by functions for the function in which this data appears.
To do this, I am thinking about using what some Frama-C modules compute, like dataflow.ml or callgraph.ml, in my own plugin.
However, reading the plugin developer documentation, I can't manage to see how to get access to the data of those modules.
Is an "open.cyl_type" sufficient here in my own plugin?
Moreover, here are my other questions:
I tried using the PDG plugin for my purposes, but when I call it and it says "pdg graph computed", how can I access the graph?
Is there anything documented about the "impact" plugin beyond the official web page; in depth, how does it fundamentally work? (I should say that I'm in a pre-project phase, that I installed Frama-C with apt-get on Ubuntu, and that I did not get a working impact plugin; I'll see about compiling from the sources.)
By the way, do you think I'm using the right method to reach my goals?
Your question is quite unclear, and this answer is thus very generic. As mentioned in the developer documentation, there are two main classes of plugins: static plugins, compiled with the kernel, whose API is exposed in a module (usually of the same name as the plugin) in Db; and dynamic plugins, such as Semantic_callgraph, which register their entry points dynamically through the Dynamic module.
If you do make doc in Frama-C sources (I'm not sure that there is a corresponding package in Ubuntu) you can access documentation for the Db module in FRAMAC_SOURCE_DIR/doc/code/html/Db.html and the list of functions registered by dynamic plugins in FRAMAC_SOURCE_DIR/doc/code/dynamic_plugins/Dynamic_plugins.html.
I think that, following Virgile's advice, you should get the source code anyway, because you will most of the time need to browse the code to find what you are looking for. Besides, you can have a look at the hello_world plug-in (in src/dummy/hello_world) for an example of a very simple plug-in. You can also find some examples on my web site at https://anne.pacalet.fr/Notes/doku.php?id=notes:0061_frama_c_scripts to find out how to access some information in the AST.
I'm using Emacs as my Lisp environment, and would like to have offline access to the Common Lisp HyperSpec. I downloaded it, and put the folders into my emacs.d directory. I then put the following code into my .emacs:
(global-set-key [(f2)] 'slime-hyperspec-lookup)
(setq common-lisp-hyperspec-root "/.emacs.d/HyperSpec/")
However, every time I try to search for something in it, my query ends up being malformed. Specifically, suppose I search for dotimes; what gets dumped into my browser is file:///.emacs.d/HyperSpec/Body/m_dolist.htm, which doesn't match the directory structure in the HyperSpec folder, causing the query to fail. (The lookup formats queries to suit the web version of the HyperSpec, and with the web root it works absolutely fine.)
Is there any way I can fix this, and if so, how? I basically just want to be able to look up the HyperSpec even when I'm not online.
You might like to try my CLHS ASDF wrapper, which is specifically designed to download the CLHS into a standard location (basically the quicklisp directory) and then help you setup emacs to point to it.
Simply invoke (ql:quickload "clhs") and follow the instructions.
Hope this helps!
Edit: @JigarParekh, I think you may have skimmed my answer a little too fast.
The question's title is "Viewing the Common Lisp HyperSpec offline via Emacs". The question's body basically gets bogged down in the details of manually installing the CLHS and pointing emacs to it, and asks how to solve an immediate subproblem related to that. The selected answer solves the user's immediate subproblem, but is less than optimal given what's available today.
My answer does include the essential part of the answer, which is:
Simply invoke (ql:quickload "clhs") and follow the instructions.
This downloads the CLHS from Quicklisp in a way that should remain available for the foreseeable future, regardless of the helpful but optional additional information I included for reference in the first paragraph. My answer would remain useful even if the reference links' content changed or even if, god forbid, they became 404 Not Found or otherwise unavailable. (I note in passing that since the referenced page is part of a public domain website and available on github, anyone could easily mirror it and provide a replacement link to it here should that ever come to pass. But as I said, it's optional additional reference information anyway.)
The problem is that "/.emacs.d/HyperSpec/" is an absolute path rooted at the top of the filesystem rather than at your home directory. Please replace
(setq common-lisp-hyperspec-root "/.emacs.d/HyperSpec/")
with
(setq common-lisp-hyperspec-root "~/.emacs.d/HyperSpec/")
or even
(setq common-lisp-hyperspec-root (expand-file-name "~/.emacs.d/HyperSpec/"))
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the compiler directly, because my code also requires things like goog.asserts, so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will later optimize all these names away. I can fix that by adding @export all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to call goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it's sort of related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that the Closure Compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root of your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script. I still use it for some projects.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
My GeolocationMarker library has an example of this strategy.
I have a system I wrote that uses lparallel when possible, and otherwise works around it. Problem is, I'm now trying to test it on ECL, which errors upon merely loading lparallel.
Is there a way (other than #-ecl) to specify system dependencies parameterized by implementation type? I've looked at Madeira Port, but it seems to work only for subcomponents, not system dependencies. I wish to keep my .asd file as a simple machine-readable form, without reader conditionals etc.
(Aside: lparallel loads fine on a current ECL; mine was just outdated.)
To my knowledge there is no direct way to do that in ASDF apart from reader conditionals. You could use XCVB instead, or write a defsystem* macro that adds new syntax, or (maybe) hook into the existing defsystem as madeira does.
Without knowing your motivation, I wonder why the simple solution of #-ecl is being avoided. Do you wish to store metadata that links ECL to the nonexistence of lparallel? Note that #-lparallel and #+lparallel may be used in Lisp files, as in the sketch below.
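For example, a minimal sketch of such a runtime fallback, assuming the :lparallel feature is present whenever the library has been loaded (push it in your init file or build script if your setup does not provide it):

(defun parallel-mapcar (function list)
  ;; PMAPCAR when lparallel was loaded, plain MAPCAR otherwise; the
  ;; #+lparallel guard means the lparallel symbols are never even read
  ;; on an implementation where the library is unavailable.
  #+lparallel (lparallel:pmapcar function list)
  #-lparallel (mapcar function list))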
By the way, lparallel loads fine for me in ECL. Are you running the latest ECL? There is a known bug in ECL that causes the lparallel tests to eventually hang, though this may not affect all platforms.