What's the difference between :, :: and / in sbt?

I was trying to recall whether it was test:compile, test::compile or test/compile that I wanted while doing something in sbt, when it struck me that although I have some intuition about which separator to use where, I don't have a clear notion of what each separator is actually for.
So, when typing tasks on the sbt console, when/for what do I use :, :: and /?

Historically, different separators were used for the different scope axes:
single colon : follows a configuration axis
double colon :: follows a task axis
slash / follows a subproject axis
However, these have since been unified by the slash syntax: the "Unification of sbt shell notation and build.sbt DSL" discussion led to "Unify sbt shell and build.sbt syntax (scope path syntax)" #3434, which was released in sbt 1.1.0 as the slash syntax:
<project-id>/<config-ident>/intask/key
corresponding to
<project-id>/config:intask::key
hence, for example,
show root/Compile/compile/scalacOptions
corresponds to
show root/compile:compile::scalacOptions
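Applied to the question's own example (Test is sbt's built-in test configuration):
test:compile    (shell syntax before sbt 1.1)
Test/compile    (unified slash syntax, sbt 1.1+)
The same slash syntax also works in build.sbt, e.g. Test / compile / scalacOptions.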
Related question: what does a single colon mean in sbt

How do I reload a module under development in Julia 1.6?

I know this question has been asked and answered before, but none of the many answers work for me as described.
What is the procedure for reloading a module that I'm working on in Julia (1.6)?
For example, I have
module MyModule
export letters
const letters = String('A':'Z')
end
and I want to be able to load the module, make changes to letters in the module's file, and then reload the module and have those changes reflected in subsequent uses of letters. This seems simple enough, but I can't get it to work.
I've tried
include("src/MyModule.jl")
using .MyModule
but if I change the definition of letters in MyModule.jl and then
include("src/MyModule.jl")
letters doesn't change unless I fully qualify each use as Main.MyModule.letters: after using Main.MyModule, letters still refers to the old definition.
How do I reload a module under development so that I can refer to its definitions without fully qualifying them (and without having an unqualified shadow definition always lying around)?
I would just use Revise.jl and wrap everything in functions:
module MyModule
export letters
letters(char_start, char_end) = char_start:char_end |> String
end
julia> using Revise
julia> includet("src/MyModule.jl")
julia> using .MyModule
julia> letters('l', 'p')
"lmnop"
After editing MyModule.jl so that the function returns only the first character, Revise picks up the change automatically:
module MyModule
export letters
letters(char_start, char_end) = char_start:char_start |> String
end
julia> letters('l', 'p')
"l"
const is for defining things that you do not want to modify, so I would not expect your original version to work as expected; Revise.jl should also throw a redefinition error if you try to change it.
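If you do want module-level state that you can update in place, one common idiom (a sketch of an alternative, not part of the answer above) is a const Ref, whose binding is constant but whose contents can be replaced:
module MyModule
export letters
# the binding `letters` is constant; the value it wraps can be replaced
const letters = Ref(String('A':'Z'))
end
julia> letters[]          # read the contents with []
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
julia> letters[] = "abc"  # update in place, no redefinition of the const
"abc"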
In general though, it's usually much nicer (and easier too!) to just put everything in a package and use the usual using/import syntax. PkgTemplates.jl is great for this.
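A minimal sketch of that workflow (the user name and package name here are placeholders):
julia> using PkgTemplates
julia> t = Template(; user="yourusername")
julia> t("MyModule")  # generates a package skeleton, by default under ~/.julia/dev
# after `] dev ~/.julia/dev/MyModule`:
julia> using Revise   # load Revise first so it tracks subsequently loaded packages
julia> using MyModule # edits to the package source are now picked up automatically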
If you would like to redefine consts, though, I would definitely recommend checking out Pluto.jl.

Common Lisp: relative path to absolute

Maybe it is a really dumb question, but after playing around with all the built-in pathname-family functions and the cl-fad/pathname-utils packages, I still can't figure out how to convert a relative path to an absolute one (with respect to $PWD):
; let PWD be "/very/long/way"
(abspath "../road/home"); -> "/very/long/road/home"
Where the hypothetical function abspath works just like os.path.abspath() in Python.
The variable *DEFAULT-PATHNAME-DEFAULTS* usually contains your initial working directory; you can merge the pathname with that:
(defun abspath (pathname)
  (merge-pathnames pathname *default-pathname-defaults*))
And since this is the default for the second argument to merge-pathnames, you can simply write:
(defun abspath (pathname)
  (merge-pathnames pathname))
UIOP
Here is what the documentation of UIOP says about cl-fad :-)
UIOP completely replaces it with better design and implementation
A good number of implementations ship with UIOP (used by ASDF3), so it's basically already available when you need it (see "Using UIOP" in the documentation). One of the many functions defined in the library is uiop:parse-unix-namestring, which understands the syntax of Unix filenames without checking whether the path designates an existing file or directory. However, a double dot is parsed as :back or :up, which is not necessarily supported by your implementation. With SBCL it is supported, and the path is simplified. Note that pathnames allow both :back and :up components: :back can be simplified easily by looking at the pathname alone (it is the syntactic up-directory), whereas :up is the semantic up-directory, meaning that it depends on the actual file system. You have a better chance of obtaining a canonical file name if the file actually exists.
Truename
You can also call TRUENAME, which will probably get rid of the ".." components in your path. See also 20.1.3 Truenames which explains that you can point to the same file by using different pathnames, but that there is generally one "canonical" name.
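For example (a sketch: TRUENAME requires the path to actually exist, and the exact printed form is implementation-dependent):
;; with the current directory at /very/long/way
(truename "../road/home/")  ; => #P"/very/long/road/home/"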
Here's the final solution (based on the previous two answers):
(defun abspath (path-string)
  (uiop:unix-namestring
   (uiop:merge-pathnames*
    (uiop:parse-unix-namestring path-string))))
uiop:parse-unix-namestring converts the string argument to a pathname, parsing the . and .. references; uiop:merge-pathnames* makes a relative pathname absolute; uiop:unix-namestring converts the pathname back to a string.
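Assuming $PWD is /very/long/way and an implementation (such as SBCL) that simplifies :up, this behaves like the hypothetical abspath from the question:
CL-USER> (abspath "../road/home")
"/very/long/road/home"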
Also, if you know for sure what kind of file the path points to, you can use either:
(uiop:unix-namestring (uiop:file-exists-p path))
or
(uiop:unix-namestring (uiop:directory-exists-p path))
because both file-exists-p and directory-exists-p return absolute pathnames (or nil, if file does not exist).
UPDATE:
Apparently, in some implementations (like ManKai Common Lisp), uiop:merge-pathnames* does not prepend the directory part if the given pathname lacks a ./ prefix (for example, if you feed it #P"main.c" rather than #P"./main.c"). So the safer solution is:
(defun abspath (path-string &optional (dir-name (uiop:getcwd)))
  (uiop:unix-namestring
   (uiop:ensure-absolute-pathname
    (uiop:merge-pathnames*
     (uiop:parse-unix-namestring path-string))
    dir-name)))

Define a recursively expanded variable using a variable whose name is computed from a simply expanded variable

I've run into a difficulty in defining generic rules for a non-recursive make system.
Background
For further reading, rather than me reproducing too much existing material, see this earlier question, which covers the ground pretty well, and previously helped me when constructing this system.
For the make system I am constructing, I want to define dependencies between system components - e.g. component A depends on component B - and then leave the make system to ensure that any products of the B build process are built before they are needed by build steps for A. It's slightly wasteful due to the granularity (some unneeded intermediates may be built), but for my use case it strikes a comfortable balance between ease of use and build performance.
A difficulty the system must deal with is that the order of makefile loading cannot be controlled - indeed, should not matter. However because of this, a component defined in an early-loaded makefile may depend on a component defined in a not-yet-read makefile.
To allow common patterns to be applied across all the component-defining makefiles, each component uses variables such as: $(component_name)_SRC. This is a common solution for non-recursive (but recursively included) make systems.
For information on GNU make's different types of variables, see the manual. In summary: simply expanded variables (SEVs) are expanded as a makefile is read, exhibiting behaviour similar to an imperative programming language; recursively expanded variables (REVs) are expanded during make's second phase, after all the makefiles have been read.
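A short illustration of the difference (a sketch, separate from the build system itself):
y = one
sev := $(y)   # simply expanded: the value "one" is captured right here
rev = $(y)    # recursively expanded: $(y) is looked up each time rev is used
y = two
# later: $(sev) expands to "one", $(rev) expands to "two"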
The Problem
The specific issue arises when trying to turn a list of depended-on components into a list of the files those components represent.
I've distilled my code down to this runnable example which leaves out a lot of the detail of the real system. I think this is sufficiently simple to demonstrate the issue without losing its substance.
rules.mk:
$(c)_src := $(src)
$(c)_dependencies := $(dependencies)
### This is the interesting line:
$(c)_dependencies_src := $(foreach dep, $($(c)_dependencies), $($(dep)_src))
$(c) : $($(c)_src) $($(c)_dependencies_src)
	@echo $^
Makefile:
.PHONY: foo_a.txt foo_b.txt bar_a.txt hoge_a.txt
### Bar
c := bar
src := bar_a.txt
dependencies :=
include rules.mk
### Foo
c := foo
src := foo_a.txt foo_b.txt
dependencies := bar hoge
include rules.mk
### Hoge
c := hoge
src := hoge_a.txt
dependencies := bar
include rules.mk
These will run to give:
$ make foo
foo_a.txt foo_b.txt bar_a.txt
$
hoge_a.txt is not included in the output because, at the time foo_dependencies_src is defined as an SEV, hoge_src doesn't exist yet.
Expansion after all the makefiles have been read is exactly the problem REVs should be able to solve, and I did previously try defining $(c)_dependencies_src as a REV, but that doesn't work either because $(c) is then expanded at substitution time, not at definition time, so it no longer holds the correct value.
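Concretely, the failed REV attempt looks like this (a sketch; the variable name on the left is always expanded immediately, only the references in the body are deferred):
$(c)_dependencies_src = $(foreach dep, $($(c)_dependencies), $($(dep)_src))
# By the time this is expanded in a prerequisite list, c holds whatever value
# the last-included makefile set, so the wrong component's dependencies are read.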
In case anyone is wondering why I am not using target-specific variables, I am concerned that the application of the variable to all the prerequisites of the target described in the manual will cause an unwanted interaction between the rules for different components.
I'd like to know:
Is there a solution to this specific issue? (i.e. is there a simple way to make that line achieve what I want it to?)
Is there a more typical way of building a make system like this? (i.e. a single make instance, loading components from multiple makefiles and defining dependencies between those components.)
If there are multiple solutions, what are the trade-offs between them?
A final comment: as I wrote this question, I began to realise that a solution might be possible using eval to construct a REV definition. However, as I couldn't find this problem covered anywhere else on SO, I thought it worthwhile to ask anyway for the sake of future searchers, plus I'd like to hear more experienced users' thoughts on this or any other approach.
The short answer is that there's no good solution to the question you are asking. It's not possible to stop expansion of a variable partway through and defer the rest until later. Moreover, because you use the variable in a prerequisite list, even if you could get the value of $(c)_dependencies_src to contain just the variable references you wanted, they would be completely expanded on the very next line as part of the prerequisite list, so it wouldn't gain you anything.
There's only one way to postpone the expansion of prerequisites and that's to use the secondary expansion feature. You would have to do something like:
$(c)_src := $(src)
$(c)_dependencies := $(dependencies)
.SECONDEXPANSION:
$(c) : $($(c)_src) $$(foreach dep, $$($$@_dependencies), $$($$(dep)_src))
	@echo $^
(untested). This side-steps the issue with $(c)_dependencies_src by just not defining it at all and putting it into the prerequisite list directly, but as a secondary expansion.
As I wrote in my comment above, though, I personally would not design a system that worked like this. I prefer systems where all the variables are created up-front in a namespace (typically by prepending the target name), and then at the end, after all variables have been defined, a single "rules.mk" (or whatever) is included that uses all those variables to construct the rules, most likely (unless all your recipes are very simple) using eval.
So, something like:
targets :=
### Bar
targets += bar
bar_c := bar
bar_src := bar_a.txt
bar_dependencies :=
### Foo
targets += foo
foo_c := foo
foo_src := foo_a.txt foo_b.txt
foo_dependencies := bar hoge
### Hoge
targets += hoge
hoge_c := hoge
hoge_src := hoge_a.txt
hoge_dependencies := bar
# Now build all the rules
include rules.mk
And then in rules.mk you would see something like:
define make_c
$1 : $($1_src) $(foreach dep, $($1_dependencies), $($(dep)_src))
	@echo $$^
endef
$(foreach T,$(targets),$(eval $(call make_c,$T)))
You can even get rid of the settings for targets, if you are careful about your variable names, by adding something like this to rules.mk:
targets := $(patsubst %_c,%,$(filter %_c,$(.VARIABLES)))
In order to allow the same components in different targets you'd just need to add more to the namespace to differentiate the two different components.

How can I pass a ML value as an argument to an outer syntax command?

I define an outer syntax command, imake to write some code to a file and do some other things. The intended usage is as follows:
theory Scratch
imports Complex_Main "~/Is0/IsS"
begin
imake ‹myfile›
end
The above example will write some contents to the file myfile. myfile should be a path relative to the location of the Scratch theory.
ML ‹val this_path = File.platform_path (Resources.master_directory @{theory})›
I would like to be able to use the value this_path in specifying myfile. The imake command is defined in the import ~/Is0/IsS and currently looks as follows:
ML‹(*imake*)
val _ = Outer_Syntax.improper_command @{command_spec "imake"} ""
  (Parse.text >>
    (fn path => Toplevel.keep
      (fn _ => Gc.imake path)))›
The argument is parsed using Parse.text, but I need to feed it the path based on the ML value this_path, which is defined later (in the Scratch theory). I searched around a lot, trying to figure out how to use something like Parse.const, but I won't be able to figure anything out any time soon.
So: it's important that I use, in some way, Resources.master_directory @{theory} in Scratch.thy, so that imake gets the folder Scratch is in, which will come from the use of @{theory} in Scratch.
If I'm belaboring the last point, it's because in the past, I wasted a lot of time getting the wrong folder, because I didn't understand how to use the command above correctly.
How can I achieve this?
Your minimal example uses Resources.master_directory with the parameter @{theory} to define your path. @{theory} refers (statically) to the theory at the point where you write down the antiquotation. This is mostly for interactive use, when you explore things. For code that is used in other places, you must use the dynamically passed context and extract the theory from it.
The function Toplevel.keep you use takes a function Toplevel.state -> unit as an argument. The Toplevel.state contains a context (see chapter 1 of the Isabelle Implementation Manual), which again contains the current theory; with Toplevel.theory_of you can extract the theory from the state. For example, you could use
Toplevel.keep (fn state => writeln
  (File.platform_path (Resources.master_directory (Toplevel.theory_of state))))
to define a command that prints the master_directory for your current theory.
Except in simple cases, it is very likely that you do not only need the theory, but the whole context (which you can get with Toplevel.context_of).
Use setup from preceding (parts of the) theory
In the previous section, I assumed that you always want to use the master directory. For the case where the path should be configurable, Isabelle knows the concept of configuration options.
In your case, you would need to define a configuration option before you declare your imake command:
ML ‹
val imake_path = Attrib.setup_config_string @{binding imake_path} (K path)
› (* declares an option imake_path with the value `path` as its default *)
Then, the imake command can refer to this attribute to retrieve the path via Config.get:
Toplevel.keep (fn state =>
  let val path = Config.get (Toplevel.context_of state) imake_path
  in ... end)
The value of imake_path can then be set in Isar (only as a string):
declare [[imake_path="/tmp"]]
or in ML, via Config.map (for updating proof contexts) or Config.map_global (for updating theories). Note that you need to feed the updated context back to the system. Isar has the command setup (takes an ML expression of type theory -> theory) for that:
setup ‹Config.map_global imake_path (K "/tmp")›
Configuration options are described in detail in the Isar Implementation Manual, section 1.1.5.
Note: This mechanism does not allow you to automatically set imake_path to the master directory for each new theory. You need to set it manually, e.g. by adding
setup ‹
  Config.map_global imake_path
    (K (File.platform_path (Resources.master_directory @{theory})))
›
at the beginning of each theory.
The more general mechanism behind configuration options is context data. For details, see section 1.1, and in particular section 1.1.4, of the Isabelle Implementation Manual. This mechanism is used in a lot of places in Isabelle; the simpset, the configuration of the simplifier, is one example of this.

How to denote that a command line argument is optional when printing usage

Assume that I have a script that can be run in either of the following ways.
./foo arg1 arg2
./foo
Is there a generally accepted way to denote that arg1 and arg2 aren't mandatory arguments when printing the correct usage of the command?
I've sometimes noticed usage printed with the arguments wrapped in brackets like in the following usage printout.
Usage: ./foo [arg1] [arg2]
Do these brackets mean that the argument is optional or is there another generally accepted way to denote that an argument is optional?
I suppose this is as much a standard as anything.
The Open Group Base Specifications Issue 7
IEEE Std 1003.1, 2013 Edition
Copyright © 2001-2013 The IEEE and The Open Group
Ch. 12 - Utility Conventions
However, it doesn't seem to mention several conventions I have commonly seen used over the years to denote various meanings:
square brackets [optional option]
angle brackets <required argument>
curly braces {default values}
parentheses (miscellaneous info)
Edit: I should add that these are just conventions. The important thing is to pick a convention which is sensible, clearly state your convention, and stick to it consistently. Be flexible, and adopt the conventions most frequently encountered on your target platform(s); they will be the easiest for users to adapt to.
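Putting these conventions together, a hypothetical usage line might read:
usage: resize [-q quality {75}] <image> (see the manual for supported formats)
Here -q is optional, image is required, 75 is the default quality, and the parenthesized note is informational only.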
I personally have not seen a 'standard' that denotes that a switch is optional (in the way a standard defines, for example, how certain languages are written); it really is personal choice. But according to IBM's docs and the Wiki, along with numerous shell scripts I've personally seen (and the command line options of various programs), and the IEEE, the de facto convention is to treat square-bracketed ([]) parameters as optional. An example from Linux:
ping (output trimmed...)
usage: ping [-c count] [-t ttl] host
where [-c count] and [-t ttl] are optional parameters but host is not (as defined in the help).
I personally follow the defacto as well by using [] to mean they are optional parameters and make sure to note that in the usage of that script/program.
I should note that a computer standard should define how something happens and its failure paths (either a true failure or undefined behavior): something along the lines of "the command line interpreter _shall_ treat arguments as optional when enclosed in square brackets, and _shall_ treat X as Y when Z", etc., much as the ISO C standard says how a function shall be formed for it to be valid (and fails otherwise). Given that no command line interpreter, from ash to zsh and everything in between, fails a script for treating [] as anything but optional, one could say there is no true standard.
Yes, the square brackets indicate optional arguments in Unix man pages.
From "man man":
[-abc] any or all arguments within [ ] are optional.
I've never wondered if they're formally specified somewhere, I've always just assumed they come from conventions used in abstract algebra, in particular, in BNF grammars.
