I'm interested in figuring out what some of the operators of sbt.TaskKey and sbt.SettingKey do:
<<=
<+=
<++=
I know there are a lot of examples and docs in the documentation section of the main website, but I didn't find anything helpful. Here's where I looked:
http://www.scala-sbt.org/release/docs/Getting-Started/More-About-Settings.html
http://www.scala-sbt.org/release/api/index.html#sbt.TaskKey
http://www.scala-sbt.org/release/api/index.html#sbt.SettingKey
sbt 0.12 syntax
If you want to learn about the <<= family of operators, the best place to go is the sbt 0.12.1 version of the Getting Started guide; specifically, the page you linked, More Kinds of Setting, has a section called "Computing a value based on other keys' values: <<=".
~= defines a new value in terms of a key's previously-associated value. But what if you want to define a value in terms of other keys' values?
<<= lets you compute a new value using the value(s) of arbitrary other keys.
<<= has one argument, of type Initialize[T]. An Initialize[T] instance is a computation which takes the values associated with a set of keys as input, and returns a value of type T based on those other values. It initializes a value of type T.
Given an Initialize[T], <<= returns a Setting[T], of course (just like :=, +=, ~=, etc.).
As noted in the document, <<= makes you think in terms of Initialize[T], so if you want to extract values out of multiple keys and compose them in some way, you'd need to do something like:
jarName in assembly <<= (name, version) map { (n, v) =>
  n + "-assembly-" + v + ".jar"
}
At this point you have to know somehow that jarName is a TaskKey, not a SettingKey.
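The <+= and <++= operators from the question follow the same pattern: they compute a value (or a Seq of values) from other keys and append it to the key's existing value. A minimal sketch in the same 0.12 style (the dependencies here are just example values, not from the original post):

libraryDependencies <+= scalaVersion { sv =>
  "org.scala-lang" % "scala-reflect" % sv   // append one computed value
}

libraryDependencies <++= scalaVersion { sv =>
  Seq(                                      // append a computed Seq of values
    "org.scala-lang" % "scala-reflect" % sv,
    "org.scala-lang" % "scala-compiler" % sv
  )
}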
sbt 0.13 syntax
The reason you did not find <<= in the latest Getting Started guide is that the sbt 0.13 syntax makes <<= obsolete. All you need is :=. sbt uses a macro to expand the right-hand side of := and generate the above out of this:
jarName in assembly := {
  name.value + "-assembly-" + version.value + ".jar"
}
:= lets you think in terms of T, so it's easier to deal with.
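Likewise, <+= and <++= become plain += and ++= in 0.13, with .value on the right-hand side; a sketch mirroring the 0.12 examples above:

libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-reflect" % scalaVersion.value,
  "org.scala-lang" % "scala-compiler" % scalaVersion.value
)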
Posting for two reasons: (1) I was stuck on unhelpful compiler errors for far too long for such a simple issue, and I want the next person who googles those messages to come upon my (or other) answers, and (2) I still don't understand why the use clause is disallowed, so my own answer is really incomplete.
In order to call a program in two places with mostly the same arguments, I want to use "&" to append to a default argument list inline:
declare
   Exit_Code : constant Integer :=
     GNAT.OS_Lib.Spawn
       (Program_Name => "gprbuild",
        Args         => Default_GPR_Arguments & new String'(File_Name_Parameter));
begin
   if Exit_Code /= 0 then
      raise Program_Error with "Exit code:" & Exit_Code'Image;
   end if;
end;
However, the compiler complains that System.Strings.String_List needs a use clause:
operator for type "System.Strings.String_List" is not directly visible
use clause would make operation legal
But inserting use System.Strings.String_List yields:
"System.Strings.String_List" is not allowed in a use clause
I also got this warning:
warning: "System.Strings" is an internal GNAT unit
warning: use "GNAT.Strings" instead
So I substituted GNAT for System in the with and use clauses, and got an extra error in addition to the original 'you need a use clause for System.Strings.String_List' one:
"GNAT.Strings.String_List" is not allowed in a use clause
Why is GNAT.Strings.String_List not allowed in a use clause? Section 8.5 on use clauses doesn't seem to say anything about disallowed packages, so is this a compiler bug? Is it possible to define a new package that cannot have a use clause?
In a use clause of the form
use Name;
Name must be a package name. GNAT.Strings.String_List is a subtype name, not a package name.
There are a number of ways to invoke "&" for String_List. The simplest is to use the full name:
GNAT.Strings."&" (Left, Right)
but presumably you want to be able to use it as an operator in infix notation, Left & Right. Ways to achieve this, in decreasing specificity:
function "&" (Left : GNAT.Strings.String_List; Right : GNAT.Strings.String_List) return GNAT.Strings.String_List renames GNAT.Strings."&"; This makes this specific function directly visible.
use type GNAT.Strings.String_List; This makes all primitive operators of the type directly visible.
use all type GNAT.Strings.String_List; This makes all primitive operations of the type (including non-operator operations) directly visible.
use GNAT.Strings; This makes everything in the package directly visible.
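For instance, the snippet from the question compiles with a single use type clause; a minimal sketch, with made-up stand-ins for Default_GPR_Arguments and File_Name_Parameter, which the question does not show:

with GNAT.OS_Lib;
with GNAT.Strings;

procedure Build_Demo is
   use type GNAT.Strings.String_List;  --  makes "&" directly visible

   --  Hypothetical stand-ins for the question's declarations:
   Default_GPR_Arguments : constant GNAT.Strings.String_List :=
     (new String'("-p"), new String'("-q"));
   File_Name_Parameter   : constant String := "main.gpr";

   Exit_Code : constant Integer :=
     GNAT.OS_Lib.Spawn
       (Program_Name => "gprbuild",
        Args         =>
          Default_GPR_Arguments & new String'(File_Name_Parameter));
begin
   if Exit_Code /= 0 then
      raise Program_Error with "Exit code:" & Exit_Code'Image;
   end if;
end Build_Demo;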
Looks like it is a design decision, and many other packages in System follow this rule. From s-string.ads (the package specification for System.Strings):
-- Note: this package is in the System hierarchy so that it can be directly
-- be used by other predefined packages. User access to this package is via
-- a renaming of this package in GNAT.String (file g-string.ads).
My guess as to why it is done this way: it isn't part of the Ada specification, but a GNAT extension.
I am reading the sbt docs quite thoroughly now, and there is a brief mention of Def.task and taskValue, but no explanation so far.
They say the following:
You can compute values of some tasks or settings to define or append a value for another task.
It’s done by using Def.task and taskValue as an argument to :=, +=, or ++=.
And provide the following code snippet:
sourceGenerators in Compile += Def.task {
  myGenerator(baseDirectory.value, (managedClasspath in Compile).value)
}.taskValue
This brings up more questions than answers for me. How is this different from a regular dependency of one sbt task on another? When should I use this macro? And so on.
I have also tried to check the scaladoc, but without any real success; that part of the code is not well documented.
I think this particular example in the introductory part of the documentation is unnecessarily complicated. In this example you have to use .taskValue just because the value type of sourceGenerators is Seq[Task[Seq[File]]], so you have to add a task to it, not the value of that task.
A simpler example of "Tasks based on other keys’ values" is
homepage := Some(
  url(s"https://github.com/${organization.value}/${name.value}")
)
On the right-hand side of the :=/+=/++=/~= operators you can use other tasks' values with a simple .value suffix. Writing
foo := bar.value + 1
is the same as
foo := Def.task { bar.value + 1 }.value
In this simple example it's just unnecessary, but Def.task becomes useful when you want to separate the task implementation from the task key setting:
def fooTask(n: Int): Def.Initialize[Task[Int]] = Def.task {
  bar.value + n
}
So Def.task allows you to write a task definition and use other tasks/settings inside (with .value). Then you can evaluate this task definition when you set the corresponding task key somewhere else (in your project settings):
foo := fooTask(5).value
But if you need to refer to the task definition itself without evaluating it, you can use .taskValue instead of .value, as in your example. See the documentation on generating sources for more information about sourceGenerators.
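To make the difference concrete, here is a minimal sketch of a source generator (the file name and contents are made up); .taskValue hands the task itself to sourceGenerators rather than its result:

lazy val generateVersionFile = Def.task {
  // produce a managed source file containing the build version
  val file = (sourceManaged in Compile).value / "Version.scala"
  IO.write(file, s"""object Version { val value = "${version.value}" }""")
  Seq(file)
}

// sourceGenerators holds Seq[Task[Seq[File]]], so append the task itself:
sourceGenerators in Compile += generateVersionFile.taskValue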
Here are some other relevant parts of the sbt documentation about tasks:
Tasks
Tasks/Settings: Motivation
Execution semantics of tasks
Sequencing tasks
I've run into a difficulty in defining generic rules for a non-recursive make system.
Background
For further reading, rather than me reproducing too much existing material, see this earlier question, which covers the ground pretty well, and previously helped me when constructing this system.
For the make system I am constructing, I want to define dependencies between system components - e.g. component A depends on component B - and then leave the make system to ensure that any products of the B build process are built before they will be needed by build steps for A. It's slightly wasteful due to the granularity (some unneeded intermediates may be built), but for my use case it meets a comfortable balance point between ease-of-use and build performance.
A difficulty the system must deal with is that the order of makefile loading cannot be controlled - indeed, should not matter. However because of this, a component defined in an early-loaded makefile may depend on a component defined in a not-yet-read makefile.
To allow common patterns to be applied across all the component-defining makefiles, each component uses variables such as: $(component_name)_SRC. This is a common solution for non-recursive (but recursively included) make systems.
For information on GNU make's different types of variables, see the manual. In summary: simply expanded variables (SEVs) are expanded as a makefile is read, exhibiting behaviour similar to an imperative programming language; recursively expanded variables (REVs) are expanded during make's second phase, after all the makefiles have been read.
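A minimal sketch (not from the original question) of the difference:

x := 1

# simply expanded: $(x) is expanded right here, so sev holds "1"
sev := $(x)

# recursively expanded: expansion is deferred until the variable is used
rev = $(x)

x := 2

demo:
	@echo sev=$(sev) rev=$(rev)   # prints "sev=1 rev=2"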
The Problem
The specific issue arises when trying to turn a list of depended-on components into a list of the files those components represent.
I've distilled my code down to this runnable example which leaves out a lot of the detail of the real system. I think this is sufficiently simple to demonstrate the issue without losing its substance.
rules.mk:
$(c)_src := $(src)
$(c)_dependencies := $(dependencies)
### This is the interesting line:
$(c)_dependencies_src := $(foreach dep, $($(c)_dependencies), $($(dep)_src))
$(c) : $($(c)_src) $($(c)_dependencies_src)
	@echo $^
Makefile:
.PHONY: foo_a.txt foo_b.txt bar_a.txt hoge_a.txt
### Bar
c := bar
src := bar_a.txt
dependencies :=
include rules.mk
### Foo
c := foo
src := foo_a.txt foo_b.txt
dependencies := bar hoge
include rules.mk
### Hoge
c := hoge
src := hoge_a.txt
dependencies := bar
include rules.mk
These will run to give:
$ make foo
foo_a.txt foo_b.txt bar_a.txt
$
hoge_a.txt is not included in the output, because at the time foo_dependencies_src is defined as a SEV, hoge_src doesn't exist yet.
Expansion after all the makefiles have been read is a problem REVs should be able to solve, and I did previously try defining $(c)_dependencies_src as a REV, but that doesn't work either, because $(c) is then expanded at substitution time, not at definition time, so it no longer holds the correct value.
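A minimal sketch (not from the original question) of that failure mode:

c := foo

# REV: $(c) is not expanded here, but only when val is used...
val = $(c)_resolved

c := hoge

demo:
	@echo $(val)   # ...so this prints "hoge_resolved", not "foo_resolved"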
In case anyone is wondering why I am not using target-specific variables: I am concerned that the application of the variable to all the prerequisites of the target, as described in the manual, will cause unwanted interactions between the rules for different components.
I'd like to know:
Is there a solution to this specific issue? (i.e. is there a simple way to make that line achieve what I want it to?)
Is there a more typical way of building a make system like this? (i.e. a single make instance, loading components from multiple makefiles and defining dependencies between those components.)
If there are multiple solutions, what are the trade-offs between them?
A final comment: As I've written my question, I've begun to realise that there might be a solution possible using eval to construct a REV definition, however as I couldn't find this problem covered anywhere else on SO I thought worthwhile asking the question anyway for the sake of future searchers, plus I'd like to hear more experienced users' thoughts on this or any other approaches.
The short answer is there's no good solution for the question you are asking. It's not possible to stop expansion of a variable partway through and defer it until later. Not only that, but because you use the variable in a prerequisite list, even if you could get the value of $(c)_dependencies_src to contain just the variable references you wanted, they would be completely expanded on the very next line as part of the prerequisite list, so it wouldn't gain you anything.
There's only one way to postpone the expansion of prerequisites and that's to use the secondary expansion feature. You would have to do something like:
$(c)_src := $(src)
$(c)_dependencies := $(dependencies)
.SECONDEXPANSION:
$(c) : $($(c)_src) $$(foreach dep, $$($$@_dependencies), $$($$(dep)_src))
	@echo $^
(untested). This side-steps the issue with $(c)_dependencies_src by just not defining it at all and putting it into the prerequisite list directly, but as a secondary expansion.
As I wrote in my comment above, though, I personally would not design a system that worked like this. I prefer systems where all the variables are created up-front using a namespace (typically prepending the target name), and then at the end, after all variables have been defined, include a single "rules.mk" or whatever that will use all those variables to construct the rules, most likely (unless all your recipes are very simple) using eval.
So, something like:
targets :=
### Bar
targets += bar
bar_c := bar
bar_src := bar_a.txt
bar_dependencies :=
### Foo
targets += foo
foo_c := foo
foo_src := foo_a.txt foo_b.txt
foo_dependencies := bar hoge
### Hoge
targets += hoge
hoge_c := hoge
hoge_src := hoge_a.txt
hoge_dependencies := bar
# Now build all the rules
include rules.mk
And then in rules.mk you would see something like:
define make_c
$1 : $($1_src) $(foreach dep, $($1_dependencies), $($(dep)_src))
	@echo $$^
endef
$(foreach T,$(targets),$(eval $(call make_c,$T)))
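For example, with the variables above, $(eval $(call make_c,foo)) constructs the rule text below; note that hoge_a.txt is now present, because every _src variable already exists by the time rules.mk is included:

foo : foo_a.txt foo_b.txt bar_a.txt hoge_a.txt
	@echo $^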
You can even get rid of the targets settings if you are careful about your variable names, by adding something like this to rules.mk:
targets := $(patsubst %_c,%,$(filter %_c,$(.VARIABLES)))
In order to allow the same components in different targets you'd just need to add more to the namespace to differentiate the two different components.
I want to create a 2D array of Uint64s in Julia 0.4. This worked in 0.3:
s = 128
a = zeros(Uint64, s, s)::Array{Uint64,2}
It still runs, but gives me the warning:
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
I don't know what this message means. I've tried googling the warning but haven't found anything helpful. What is an equivalent line of code that will not produce any warnings?
s = 128
a = zeros(UInt64, s, s)::Array{UInt64,2}
Watch out for capitals!
Doug's answer is correct, except that you can simplify it to
s = 128
a = zeros(UInt64, s, s)
You don't need the type annotation ::Array{UInt64,2}; defining a = zeros(UInt64, s, s) will create a variable which already knows its type.
Note that the Julia warning message was telling you exactly what you had to do: replace Uint64 by UInt64. If you can think of a better way of rephrasing the message to make it clearer, that would be useful to hear.
In general, type annotations are at best redundant when defining variables in Julia -- the type is automatically inferred from the type of the right-hand side, and this will be the type assigned to the variable being created.
Type annotations are used in Julia in two circumstances:
1. to define the type of variables inside a composite type
2. for multiple dispatch in function definitions, to specify which types a given method applies to.
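A short sketch of both uses, in Julia 0.4 syntax (the names are made up):

type Point              # composite type (pre-0.6 syntax)
    x::Float64          # 1. annotations fixing the field types
    y::Float64
end

# 2. annotation restricting which type this method applies to
norm2(p::Point) = p.x^2 + p.y^2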
I define an outer syntax command, imake, to write some code to a file and do some other things. The intended usage is as follows:
theory Scratch
imports Complex_Main "~/Is0/IsS"
begin
imake ‹myfile›
end
The above example will write some contents to the file myfile. myfile should be a path relative to the location of the Scratch theory.
ML ‹val this_path = File.platform_path (Resources.master_directory @{theory})›
I would like to be able to use the value this_path in specifying myfile. The imake command is defined in the import ~/Is0/IsS and currently looks as follows:
ML ‹(*imake*)
val _ = Outer_Syntax.improper_command @{command_spec "imake"} ""
  (Parse.text >>
    (fn path => Toplevel.keep
      (fn _ => Gc.imake path)))
›
The argument is parsed using Parse.text, but I need to feed it the path based on the ML value this_path, which is defined later (in the Scratch theory). I searched around a lot, trying to figure out how to use something like Parse.const, but I haven't been able to figure anything out.
So: it's important that I use, in some way, Resources.master_directory @{theory} in Scratch.thy, so that imake gets the folder Scratch is in, which will come from the use of @{theory} in Scratch.
If I'm belaboring the last point, it's because in the past, I wasted a lot of time getting the wrong folder, because I didn't understand how to use the command above correctly.
How can I achieve this?
Your minimal example uses Resources.master_directory with the parameter @{theory} to define your path. @{theory} refers (statically) to the theory at the point where you write down the antiquotation. This is mostly for interactive use, when you explore stuff. For code that is used in other places, you must use the dynamically passed context and extract the theory from it.
The function Toplevel.keep you use takes a function Toplevel.state -> unit as an argument. The Toplevel.state contains a context (see chapter 1 of the Isabelle Implementation Manual), which again contains the current theory; with Toplevel.theory_of you can extract the theory from the state. For example, you could use
Toplevel.keep (fn state => writeln
  (File.platform_path (Resources.master_directory (Toplevel.theory_of state))))
to define a command that prints the master_directory for your current theory.
Except in simple cases, it is very likely that you do not only need the theory, but the whole context (which you can get with Toplevel.context_of).
Use setup from preceding (parts of the) theory
In the previous section, I assumed that you always want to use the master directory. For the case where the path should be configurable, Isabelle knows the concept of configuration options.
In your case, you would need to define a configuration option before you declare your imake command:
ML ‹
val imake_path = Attrib.setup_config_string @{binding imake_path} (K path)
›

(This declares an option imake_path with the value path as its default.)
Then, the imake command can refer to this attribute to retrieve the path via Config.get:
Toplevel.keep (fn state =>
  let val path = Config.get (Toplevel.context_of state) imake_path
  in ... end)
The value of imake_path can then be set in Isar (only as a string):
declare [[imake_path="/tmp"]]
or in ML, via Config.map (for updating proof contexts) or Config.map_global (for updating theories). Note that you need to feed the updated context back to the system. Isar has the command setup (takes an ML expression of type theory -> theory) for that:
setup ‹Config.map_global imake_path (K "/tmp")›
Configuration options are described in detail in the Isar Implementation Manual, section 1.1.5.
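Putting the pieces together, a sketch of what the imake wrapper might look like with the option in place (Gc.imake is the hypothetical function from the question, and joining the configured base path with the parsed argument is my assumption):

ML ‹
val _ =
  Outer_Syntax.improper_command @{command_spec "imake"} "write code to a file"
    (Parse.text >> (fn path =>
      Toplevel.keep (fn state =>
        let
          val ctxt = Toplevel.context_of state;
          val base = Config.get ctxt imake_path;  (* configured base path *)
        in Gc.imake (base ^ "/" ^ path) end)))
›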
Note: This mechanism does not allow you to automatically set imake_path to the master directory for each new theory. You need to set it manually, e.g. by adding
setup ‹
  Config.map_global imake_path
    (K (File.platform_path (Resources.master_directory @{theory})))
›
at the beginning of each theory.
The more general mechanism behind configuration options is context data. For details, see section 1.1 and in particular section 1.1.4 of the Isabelle Implementation Manual. This mechanism is used in a lot of places in Isabelle; the simpset, the configuration of the simplifier, is one example of this.