accessing SBT settings under current scope - sbt

I'm having trouble understanding the concept of scope in sbt. I want a task to run under a specific scope and to be able to access the scoped settings, e.g.
build.sbt
name := "Superapp"
name in Test := "Testapp"
val printScopedKey = TaskKey[Unit]("psk", "Print Scoped Key")
printScopedKey := println("***** [APP NAME] " + name.value)
I'd expect the following:
> test:psk
> ***** [APP NAME] Testapp
Instead of the actual:
> ***** [APP NAME] Superapp
How can I do this in sbt? Is that even possible?

OP wrote "I'd expect the following:"
> test:psk
> ***** [APP NAME] Testapp
Without actually defining the psk task in the Test configuration, sbt looks for the psk task first in the Global configuration, then in the order of the project's configurations, which by default is Seq(Compile, Runtime, Test, Provided, Optional).
So the following (and Jacek Laskowski's answer, too) describes how one can go about defining tasks in multiple scopes without code duplication. A setting can be scoped along three axes (project, configuration, and task). The project axis doesn't come into play as much, so we'll discuss configuration and task here.
It's recommended that task-specific settings are scoped to a task to encourage reuse of keys. For example:
test in assembly := {}
In the above, the test key is scoped to the assembly task to control the tests that run before creating a fat JAR. You can define a "task-generator" method that takes a key and creates a graph of settings around it:
def assemblyTask(key: TaskKey[File]): Initialize[Task[File]] = Def.task {
  val t = (test in key).value
  val s = (streams in key).value
  Assembly((outputPath in key).value, (assemblyOption in key).value,
    (packageOptions in key).value, (assembledMappings in key).value,
    s.cacheDirectory, s.log)
}
I use that to define assembly, packageScala, and packageDependency tasks.
lazy val baseAssemblySettings: Seq[sbt.Def.Setting[_]] = Seq(
  assembly := Assembly.assemblyTask(assembly).value,
  packageScala := Assembly.assemblyTask(packageScala).value,
  ...
)
So far baseAssemblySettings is configuration-neutral.
If I wanted to scope it in configurations like Compile and Test, I'd call inConfig(conf)(settings) like this:
lazy val assemblySettings: Seq[sbt.Def.Setting[_]] =
  inConfig(Compile)(baseAssemblySettings) ++
  inConfig(Test)(baseAssemblySettings)
Now you have multiple task graphs in multiple configurations.
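As an illustrative sketch of the same pattern (the greet key is hypothetical, not from sbt-assembly), here is a simple task that reads the configuration-scoped name, defined once and rescoped into two configurations:

```scala
lazy val greet = taskKey[Unit]("Prints the app name for the current scope")

lazy val baseGreetSettings: Seq[Def.Setting[_]] = Seq(
  greet := println("[APP NAME] " + name.value)
)

// inConfig rescopes every key in the sequence, so test:greet reads
// test:name while compile:greet falls back to the project-level name.
lazy val greetSettings: Seq[Def.Setting[_]] =
  inConfig(Compile)(baseGreetSettings) ++
  inConfig(Test)(baseGreetSettings)
```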

Thanks for the question! I initially thought I knew the answer, then realized it's not so simple. I had to look around for a solution.
I use sbt 0.13.2-RC1.
> about
[info] This is sbt 0.13.2-RC1
[info] The current project is {file:/C:/dev/sandbox/0.13.2/}root-0-13-2 0.1-SNAPSHOT
[info] The current project is built against Scala 2.11.0-RC3
[info] Available Plugins: org.sbtidea.SbtIdeaPlugin, de.johoop.jacoco4sbt.JacocoPlugin, com.timushev.sbt.updates.UpdatesPlugin
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.3
I found the solution in Mark Harrah's response to a similar question on the sbt mailing list that boils down to the following changes in build.sbt:
scalaVersion := "2.11.0-RC3"
name := "Superapp"
name in Test := "Testapp"
name in Runtime := "Runtimeapp"
lazy val psk = taskKey[Unit]("Print Scoped Key")
val pskSetting = psk := println("***** [APP NAME] " + name.value)
// https://groups.google.com/d/msg/simple-build-tool/A87FFV4Sw4k/KPtygikQvogJ
val myPsks = Seq(Compile, Test, Runtime) flatMap { conf =>
  inConfig(conf)(Seq(pskSetting))
}
myPsks
When the build file is loaded, sbt will automagically know that when you're executing psk its dependency is name in Compile, while test:psk depends on name in Test. Pretty clever.
> psk
***** [APP NAME] Superapp
[success] Total time: 0 s, completed 2014-03-26 21:27:37
> test:psk
***** [APP NAME] Testapp
[success] Total time: 0 s, completed 2014-03-26 21:27:41
> runtime:psk
***** [APP NAME] Runtimeapp
[success] Total time: 0 s, completed 2014-03-26 21:27:44
Use inspect to dig deeper. It's always quite useful to know how it works under the hood (which is not that hard to understand once you start using the right tools, like inspect).

Related

Execute sbt task before packaging of fat-jar

I wrote a small sbt plugin for editing some resource files in the project's target directory (it works similarly to Maven profiles). Now that I have written and tested my simple custom sbt task (let's call it interpolateParameters), I want it to be executed between the resource copying and jar creation steps when running sbt assembly. However, I can't find any documentation about which tasks are executed "under the hood" of the assembly task provided by the sbt-assembly plugin, and actually I doubt it's even possible.
Therefore, I have two questions: is it possible to somehow execute my task between sbt-assembly's compile + copyResources and "create jar" steps? And if not, is there a way to achieve what I want without creating my own fork of the sbt-assembly plugin?
I solved this by making assembly depend on my task interpolateParameters, and interpolateParameters depend on products. Here is the relevant part of my resulting build.sbt file with the solution:
lazy val someModuleForFatJar = (project in file("some/path"))
  .dependsOn(
    someOtherModule % "test->test;compile->compile"
  )
  .settings(
    name := "some module name",
    sharedSettings,
    libraryDependencies ++= warehouseDependencies,
    mainClass in assembly := Some("com.xxxx.yyyy.Zzzz"),
    assemblyJarName in assembly := s"some_module-${version.value}.jar",
    assembly := {
      assembly.dependsOn(interpolateParameters).value
    },
    interpolateParameters := {
      interpolateParameters.dependsOn(products).value
    },
    (test in assembly) := {}
  )
Hope it can help someone.

Override project's setting inside of SBT task

In my build.sbt, the compilation phase depends on running the Scapegoat inspection:
(compile in Compile) := (compile in Compile).dependsOn(scapegoat).value
I'm trying to introduce a new task for running tests (for development purposes to speed things up) that does not depend on scapegoat like this:
lazy val fastTests = taskKey[Unit]("")
fastTests := {
  scapegoat in Compile := {}
  (test in Test).value
}
but the setting override gets ignored.
You cannot do it with a task because tasks cannot change settings. You can solve it either with different configurations or with a command (which can change settings). See for example:
How to disable “Slow” tagged Scalatests by default, allow execution with option?
(for the configurations approach)
How to change setting inside SBT command?
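A minimal sketch of the command approach (my own illustration, assuming sbt 0.13 and that scapegoat is a TaskKey[Unit] scoped as in the question's attempt):

```scala
// Hypothetical build.sbt fragment: unlike a task, a command transforms the
// State, so it can stub out scapegoat before the task graph is evaluated.
lazy val fastTestsCommand = Command.command("fastTests") { state =>
  val extracted = Project.extract(state)
  // Append a session setting that makes scapegoat a no-op,
  // so compile no longer pays for the inspection.
  val stubbed = extracted.append(Seq(scapegoat in Compile := ()), state)
  // Run test:test against the modified state and return the new state.
  Project.extract(stubbed).runTask(test in Test, stubbed)._1
}

commands += fastTestsCommand
```

The setting change here takes effect because it is applied to the State before any task runs, which is exactly what a task body cannot do.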

What does extend for a configuration do?

In SBT I create a new config, called katebush, as follows:
lazy val KateBush: Configuration = config("katebush")
When I try to run katebush:compile I get an error. That's what I expect.
> katebush:compile
[error] No such setting/task
[error] katebush:compile
[error] ^
Now I extend Compile in my config definition, and I expect to pick up the compile from the inherited scope.
lazy val KateBush: Configuration = config("katebush") extend Compile
Except it doesn't work:
> katebush:compile
[error] No such setting/task
[error] katebush:compile
[error] ^
But if I add in the defaults to the config (in build.sbt) so it looks as follows:
lazy val KateBush: Configuration = config("katebush") extend Compile
inConfig(KateBush)(Defaults.compileSettings)
it works fine:
> katebush:compile
[info] Updating {file:/Users/jacek/sandbox/so-25596360/}so-25596360...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[success] Total time: 0 s, completed Aug 31, 2014 11:35:47 PM
So, my question is, what exactly does extend for a configuration do?
DISCLAIMER I've got a rather basic understanding of the config concept of sbt.
tl;dr Extending a configuration inherits only its dependency groups, not its settings.
From the sources of final case class Configuration:
def extend(configs: Configuration*) = Configuration(name, description, isPublic, configs.toList ::: extendsConfigs, transitive)
By default, extendsConfigs is Nil as can be seen in the sbt.Configurations object:
def config(name: String) = new Configuration(name)
that resolves to (note Nil)
def this(name: String) = this(name, "", true, Nil, true)
In sbt.IvySbt.toIvyConfiguration:
import org.apache.ivy.core.module.descriptor.{ Configuration => IvyConfig }
and that's where sbt's support for the config concept ends and Ivy steps in. From that point on, you'd have to look at the Ivy documentation.
But before that read Advanced configurations example where it says:
This is an example .scala build definition that demonstrates using Ivy
configurations to group dependencies.
That's the beginning of the explanation: Ivy configurations group dependencies, and extending a configuration extends the grouping.
From the official documentation of Ivy about the conf element:
a configuration is a way to use or construct a module.(...)
a module may need some other modules and artifacts only at build time, and some others at runtime. All those different ways to use or build a module are called in Ivy module configurations.
Reading along, you can find the answer to your question (which I myself have yet to digest, too):
A configuration can also extend one or several other ones of the same
module. When a configuration extends another one, then all artifacts
required in the extended configuration will also be required in the
configuration that extends the other one. For instance, if
configuration B extends configuration A, and if artifacts art1 and
art2 are required in configuration A, then they will be automatically
required in configuration B. On the other hand, artifacts required in
configuration B are not necessarily required in configuration A.
This notion is very helpful to define configurations which are similar
with some differences.
At the bottom of the page, there's an Examples section with a runtime configuration example, where "runtime will be composed of all dependencies, all transitively, including the dependencies declared only in compile."
With this, you can now understand the config concept in sbt as dependency groups: what's grouped in Compile is available in Runtime, since its definition looks as follows:
lazy val Runtime = config("runtime") extend (Compile)
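To see the dependency grouping in action, here is a hypothetical build.sbt fragment (the scalacheck dependency is only an illustration): a library declared in the custom configuration stays grouped there, while everything grouped under compile is inherited through extend.

```scala
lazy val KateBush: Configuration = config("katebush") extend Compile

// The configuration must be registered with Ivy so resolution knows it.
ivyConfigurations += KateBush

// Grouped under katebush only; compile dependencies come along via extend.
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.11.5" % "katebush"
```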
I have just had to figure this out, so I thought it was worth clarifying: the configuration has to be added to the project for delegation to the extended configuration to occur.
lazy val KateBush: Configuration = config("katebush") extend Compile
lazy val root = (project in file(".")).configs(KateBush)
will work fine. If you
inspect katebush:compile
then you can view the delegation chain:
...
[info] Delegates:
[info] katebush:compile
[info] compile:compile
[info] *:compile
[info] {.}/katebush:compile
[info] {.}/compile:compile
[info] {.}/*:compile
[info] */katebush:compile
[info] */compile:compile
[info] */*:compile
...

How to set system properties for runMain on command line?

How can I set a system property for runMain upon executing it from command line on Windows?
I'd like to be able to run the following command:
sbt -Dconfig.resource=../application.conf "runMain akka.Main com.my.main.Actor"
Regardless of whether fork is true, whether I put it in SBT_OPTS, or how I pass it in, I cannot accomplish this. I am familiar with both Setting value of setting on command line when no default value defined in build? and Setting system properties with "sbt run" but neither answers my question.
Other questions seem to indicate that you can't even easily view the Java invocation arguments in SBT. Any help is appreciated.
This works:
sbt '; set javaOptions += "-Dconfig.resource=../application.conf" ; runMain akka.Main com.my.main.Actor'
If this isn't a "friendly" enough syntax, wrap it in a little shell script.
(Note this assumes you have fork set to true for running. If you don't, see akauppi's comment.)
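As an alternative sketch (my own, not from the answer): if you always want the flag forwarded, build.sbt can copy it from the sbt JVM's system properties into the forked JVM's options:

```scala
// Assumes fork is enabled for run; config.resource is read from the
// properties passed to sbt itself (e.g. sbt -Dconfig.resource=... run).
fork in run := true

javaOptions in run ++= sys.props.get("config.resource")
  .map(v => s"-Dconfig.resource=$v")
  .toSeq
```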
You could use the envVars setting. I'm unsure how idiomatic it is in SBT, though.
> help envVars
Environment variables used when forking a new JVM
The following (very minimalistic) build.sbt worked fine.
fork := true
envVars := Map("msg" -> "hello")
Once you get it running, setting envVars to any value with set does the trick.
> help set
set [every] <setting-expression>
Applies the given setting to the current project:
1) Constructs the expression provided as an argument by compiling and loading it.
2) Appends the new setting to the current project's settings.
3) Re-evaluates the build's settings.
This command does not rebuild the build definitions, plugins, or configurations.
It does not automatically persist the setting(s) either.
To persist the setting(s), run 'session save' or 'session save-all'.
If 'every' is specified, the setting is evaluated in the current context
and the resulting value is used in every scope. This overrides the value
bound to the key everywhere.
I've got a simple app to run.
$ sbt run
[info] Set current project to fork-testing (in build file:/C:/dev/sandbox/fork-testing/)
[info] Running Hello
[info] hello
With the envVars setting changed on the command line the output would change as follows:
$ sbt 'set envVars := Map("msg" -> "Hello, Chad")' run
[info] Set current project to fork-testing (in build file:/C:/dev/sandbox/fork-testing/)
[info] Defining *:envVars
[info] The new value will be used by *:runner, compile:run::runner and 1 others.
[info] Run `last` for details.
[info] Reapplying settings...
[info] Set current project to fork-testing (in build file:/C:/dev/sandbox/fork-testing/)
[info] Running Hello
[info] Hello, Chad
runMain is no different from run in this case.
$ sbt 'set envVars := Map("msg" -> "Hello, Chad")' 'runMain Hello'
[info] Set current project to fork-testing (in build file:/C:/dev/sandbox/fork-testing/)
[info] Defining *:envVars
[info] The new value will be used by *:runner, compile:run::runner and 1 others.
[info] Run `last` for details.
[info] Reapplying settings...
[info] Set current project to fork-testing (in build file:/C:/dev/sandbox/fork-testing/)
[info] Running Hello
[info] Hello, Chad
If you're trying to set SBT properties, like plugin settings, then the above won't work (AFAICT) as of 0.13+ in my experience. The following however did work, when trying to pass in Liquibase settings, like password, from our CI frameworks.
In your build.sbt
It's ugly, but it supplies defaults and optionally grabs overrides from System.properties. This way you've got both the default and override cases covered.
def sysPropOrDefault(propName: String, default: String): String =
  Option(System.getProperty(propName)).getOrElse(default)

liquibaseUsername := sysPropOrDefault("liquibase.username", "change_me")
liquibasePassword := sysPropOrDefault("liquibase.password", "chuck(\)orris")
From the commandline
Now just override via -Dprop=value as you would with Maven or other JVM programs. Note that the props appear before the SBT task:
sbt -Dliquibase.password="shh" -Dliquibase.username="bob" liquibase:liquibase-update

Can sbt execute "compile test:compile it:compile" as a single command, say "*:compile"?

I'm running compile test:compile it:compile quite often and would like to cut the number of keystrokes to something like *:compile. It doesn't seem to work, though.
$ sbt *:compile
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/oss/scalania/project
[info] Set current project to scalania (in build file:/Users/jacek/oss/scalania/)
[error] No such setting/task
[error] *:compile
[error] ^
Is it possible at all? I use SBT 0.13.
test:compile implies a compile, so compile doesn't need to be explicitly run before test:compile. If your IntegrationTest configuration extends Test, then it:compile implies test:compile.
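As a side note, it:compile only exists once the IntegrationTest configuration is wired into the project; a minimal sbt 0.13 sketch:

```scala
lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
```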
One option is to define an alias that executes multiple commands:
sbt> alias compileAll = ; test:compile ; it:compile
See help alias and help ; for details. You can make this a part of your build with:
addCommandAlias("compileAll", "; test:compile ; it:compile")
The other option is to define a custom task that depends on the others and call that:
lazy val compileAll = taskKey[Unit]("Compiles sources in all configurations.")
compileAll := {
  val a = (compile in Test).value
  val b = (compile in IntegrationTest).value
  ()
}
