How to define a task that is called for the root project only? - sbt

When I define a task it gets called for each project in a multi-project build:
import sbt._
import Keys._
import IO._

object EnsimePlugin extends Plugin {

  val ensime = TaskKey[Unit](
    "generateEnsime",
    "Generate the ENSIME configuration for this project")

  override val projectSettings = Seq(
    ensime := generateEnsime(
      (thisProject in Test).value,
      (update in Test).value
    )
  )

  private def generateEnsime(proj: ResolvedProject, update: UpdateReport): Unit = {
    println(s"called by ${proj.id}")
  }
}
How can I define a task so that it is only called for the root project?
Commands are usually discouraged, but is this perhaps a valid use of a Command? E.g. like the sbt-idea plugin.

From the official docs about Aggregation:
In the project doing the aggregating, the root project in this case,
you can control aggregation per-task.
It describes the aggregate key, scoped to a task and set to the value false, as:
aggregate in update := false
Use commands for session processing that would otherwise require additional steps in a task. It doesn't necessarily mean it's harder in tasks, but my understanding of commands vs tasks is that the former are better suited for session manipulation. I might be wrong, though in your particular case commands are not needed whatsoever.
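Putting these together, a minimal sketch (applying the documented aggregate setting to the task from the question) could look like this:

```scala
// Sketch: disable aggregation for the ensime task, so invoking it from the
// root project does not forward the invocation to aggregated sub-projects.
override val projectSettings = Seq(
  ensime := generateEnsime(
    (thisProject in Test).value,
    (update in Test).value
  ),
  aggregate in ensime := false
)
```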

Related

How to separate task code from build.sbt

I have written custom task for my sbt project and placed it in build.sbt.
However I would like to put the task's code to separate file and import it from build.sbt. I have created mytask.sbt that contains something like this:
lazy val myTask = taskKey[Unit]("my task")
myTask := {
  //doing task
}
but I don't know how can I import this task from within build.sbt so I could use something like this:
compile <<= (compile in Compile) dependsOn myTask
I'm aware that there is plugins concept but I think this is overkill for my needs. All I want is to separate some code.
You do not mention which sbt version you are using, so the following answer is based on version 0.13.12.
As far as I know, a task has to be defined either in an sbt file, in a plugin, or inside a Scala file object extending the Build trait.
I don't think it's possible to define anything in one sbt file and use it in another, so from my point of view, you have these options:
Extending the Build trait
This way of doing things has become deprecated in later versions of sbt, so I will not go into it.
Defining the logic of the task in a Scala file
You could split the declaration and the logic of the task, so that you declare the task in the sbt build file but move its logic into a Scala file. Then your build file could look like this:
import CommonBuildCode._

val myTask = taskKey[Unit]("my task")

myTask := {
  myTaskPerform()
}

compile := {
  myTask.value
  (compile in Compile).value
}
And the logic for the task could be in the file project/CommonBuildCode.scala:
import sbt._

object CommonBuildCode {
  def myTaskPerform(): Unit = {
    println("-------------------------- myTask called --------------------------")
  }
}
I do not know if this would be enough for you, but it would keep the number of lines concerning the myTask task in your build.sbt file to a minimum.
Creating a simple plugin
It's pretty easy to create a plugin with sbt, and this will give a result very close to what you're asking for. First create the file project/MyTaskPlugin.scala like this:
import sbt._

object MyTaskPlugin extends AutoPlugin {

  object autoImport {
    val myTask = taskKey[Unit]("my task")
  }

  import autoImport._

  override def projectSettings = Seq(
    myTask := {
      println("--------------- myTask called from a plugin------------------")
    }
  )
}
When the plugin is enabled, anything under the autoImport object will be automatically imported and available to use in your sbt build file, and all the settings set in the projectSettings method will apply. So now the only thing you need to do in your build.sbt file is to activate the plugin:
enablePlugins(MyTaskPlugin)
compile := {
  myTask.value
  (compile in Compile).value
}
An extra bonus of using a plugin is that it will be pretty easy to refactor the plugin into its own project, where it can publish a jar that can easily be activated by other projects. This could be really handy in case myTask turns out to be a common build task among your projects.

How to implement custom watch task in sbt?

I am unsatisfied with ~ ;task1; task2; so I want to implement my very own task that watches for changes and executes tasks. I.e. I need an sbt task that watches for some files and runs some tasks depending on what is changed.
e.g.
val task1: Initialize[Task[Int]] = ...
val task2: Initialize[Task[Int]] = ...

myTask := {
  log.info("Press Enter to stop watching...")
  while (isEnterNotPressedYet) {
    if (someFilesChanged)
      execute(task1) // start task1 and wait for its termination
    else if (someOtherFilesChanged)
      execute(task2)
    Thread.sleep(watchDuration.value)
  }
}
task1.value will not work because it would execute task1 BEFORE the body, and only once. Def.taskDyn will not work because I want to execute tasks multiple times without leaving the loop. Precisely, the question is how to implement the following:
def execute[T](task: Initialize[Task[T]]): T
def isEnterNotPressedYet: Boolean
Background:
I have a web application that uses JS, Scala, and sbt-revolver. Some resources support hot reloading (but still require compilation!), some do not. I.e. if *.js files change, I want to invoke the compileJs task. If *.scala files change, I want to invoke the re-start task. But sbt's watch has only one set of watched resources per project...
I had to dive into the SBT sources and implement it on my own.
There are quite a few workarounds for strange SBT behavior, but it works!
https://github.com/scf37/sbt-overwatch
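The core of such a watcher - deciding which watched files changed between polls - can be sketched in plain Scala. This is only the change-detection half, not the task execution; the object and method names are illustrative:

```scala
import java.io.File

// Hypothetical helper: detect changed files between polls by comparing
// last-modified timestamps against a previous snapshot.
object ChangeDetector {
  // Snapshot of the watched files: path -> last-modified millis
  def snapshot(files: Seq[File]): Map[String, Long] =
    files.map(f => f.getPath -> f.lastModified).toMap

  // Paths whose timestamp differs from (or is absent in) the previous snapshot
  def changedSince(prev: Map[String, Long], files: Seq[File]): Seq[String] =
    files.collect {
      case f if prev.get(f.getPath) != Some(f.lastModified) => f.getPath
    }
}
```

A watch loop would then take a snapshot, sleep, and run the appropriate task whenever changedSince returns *.js or *.scala paths.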

Adding a library dependency via an sbt plugin - per sub-project

I am trying to add a library dependency through an sbt plugin. The dependency should be added to each sub-project per its binary scala version, so I iterate through each subproject.
private def inject(): State => State = { state =>
  val extracted: Extracted = Project.extract(state)
  val enrichedLibDepSettings = extracted.structure.allProjectRefs map { projRef =>
    val projectScalaVersion = (scalaBinaryVersion in projRef)
    libraryDependencies in projRef +=
      compilerPluginOrg % (compilerPluginArtifact + "_" + projectScalaVersion.value) % compilerPluginVersion % "provided"
  }
  val newState = extracted.append(enrichedLibDepSettings, state)
  val updateAfterLibAppend = extracted.structure.allProjectRefs map { projRef =>
    println("running update: " + EvaluateTask(extracted.structure, update, newState, projRef))
  }
  state
}
However this is not working - the printed output shows no trace of a library dependency being appended through libraryDependencies in projRef +=, nor is any error issued, leaving subsequent steps to fail over the missing dependency. What might be wrong with this technique?
You may ask why this is needed in the first place - why add a library dependency through an sbt plugin like that?
Although we have in sbt addCompilerPlugin, it cannot be used for compiler plugins that have arguments (-Xplugin with a path to a jar must be specified to scalac, for it to accept compiler plugin arguments, as far as experimentation shows). Hence we need to inject the compiler plugin via -Xplugin after having it resolved as a library dependency (then fiddle its file path location inspecting the result of update). Hence we do need to add a library dependency via an sbt plugin. And we further need to do this per sub-project, as a multi-project build may house sub-projects of varying scala versions - each one must have a binary compatible compiler plugin injected, in order to maintain binary compatibility.
By the way, and this might illuminate something I'm in the dark over:
When adding the library dependency in a projectSettings override for the root project - as below - the dependency seems to resolve, but that is useless, as it will apply the same binary version to all sub-projects, which is against the nature of the task at hand (some sub-projects will naturally crash over binary incompatibility). Also I think it will override the root's settings whereas the goal here is to append a setting not to override existing settings.
object Plugin extends AutoPlugin {
  override lazy val projectSettings = Seq(
    ...
  )
}
A pair of clues?
Appending scalacOptions per sub-project - using the same technique - simply works.
Applying += to libraryDependencies above does not even affect the output of inspect libraryDependencies, unlike when using the same idiom inside an override lazy val projectSettings block of an AutoPlugin.
I think you might be confused about what projectSettings is. If you extend AutoPlugin, you can define the default settings that are applied (on top of the defaults) for each project, see https://github.com/sbt/sbt/blob/v0.13.9/main/src/main/scala/sbt/Plugins.scala#L81
This means you could simply add your artefact here using the typical Setting / Task notation, e.g.
def projectSettings = Seq(
  libraryDependencies += {
    val bin = scalaBinaryVersion.value
    ...
  }
)
Note that this is +=, not :=. Does that help?
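Spelled out as a complete (hypothetical) AutoPlugin - the organization, artifact name, and version below are placeholders, not real coordinates:

```scala
import sbt._
import Keys._

object CompilerPluginDep extends AutoPlugin {
  override def trigger = allRequirements // apply to every project automatically

  override def projectSettings = Seq(
    libraryDependencies += {
      // scalaBinaryVersion.value is resolved per project, so each
      // sub-project picks up a binary-compatible artifact
      val bin = scalaBinaryVersion.value
      "com.example" % s"my-compiler-plugin_$bin" % "0.1.0" % "provided"
    }
  )
}
```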

What does a bare line on its own do?

In a .sbt file, I often have copy-pasted lines from readmes, of which I have no idea what I'm actually doing. An example is, after adding sbt-revolver to plugins.sbt, writing the line
Revolver.settings
My current understanding of what this does is magically adding re-start and re-stop commands to sbt. I have been led to understand that a line in an .sbt file does not, in fact, perform magic, but rather creates a key and associates a value with it.
What keys does such a line set, and to what value? What would be the equivalent statement in a .scala build definition?
*.sbt files can take bare DslEntry expressions, which include Setting[T] and Seq[Setting[T]].
An expression like someString := "a" or someSeq += "b" is a Setting for a specific T type.
These settings are values, though: they define transformations (change, add, append, etc.) of different parts of the build, which get folded into the build state and structure.
In your example Revolver.settings is Seq[Setting[_]] which defines default setup of using sbt-revolver.
If setting it up in a project/*.scala file, you need to assign it to the root project, which is either:
the sole project in your build
the project that aggregates all other (sub) projects.
Therefore it would look something like:
import sbt._, Keys._

object Build extends Build {
  val bippy = project in file(".") settings Revolver.settings
}
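In a build.sbt, then, a bare line is just an expression the sbt DSL accepts; for instance (using the re-start/re-stop settings mentioned above):

```scala
// build.sbt -- each bare expression below is a DSL entry
Revolver.settings   // Seq[Setting[_]]: adds re-start, re-stop, etc.

name := "bippy"     // a single Setting[String]
```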

How to make a Setting of list of values depend on the value of a task

I want to set a SettingKey[Seq[Tuple2[String, String]]] called IzPack.variables of a 3rd party plugin called sbt-izpack.
The documentation tells how to set this setting:
IzPack.variables in IzPack.Config <+= name {name => ("projectName", name)}
I think that the <+= syntax is old. There is no explanation about it in the 0.13.5 SBT documentation.
How can I append values that depends on tasks?
This is the syntax I'm using now:
IzPack.variables in IzPack.Config ++= Seq(
("appVersion", mySetting1.value),
("cocoonXconf", mySetting2.value),
)
but it complains when trying to use a task value with this message:
A setting cannot depend on a task
If it's a Setting, it must be known at project load, as Settings are only computed once. Tasks, on the other hand, are computed each time. A Setting cannot depend on a Task, but a Task can depend on a Setting.
See http://www.scala-sbt.org/0.13/tutorial/More-About-Settings.html#Tasks+with+dependencies.
The solution I finally arrived at was to refactor my own code, so that a task that generates a file is split into a setting that defines the output file and a task that creates the file contents.
The setting is used for initializing IzPack.variables, whereas my custom task is made dependent on the task that uses IzPack.variables.
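That refactoring can be sketched as follows (the key names here are illustrative, not part of sbt-izpack):

```scala
// A setting fixes the output location, so it is known at project load...
val configOutput = settingKey[File]("location of the generated file")
configOutput := target.value / "generated.conf"

// ...which lets IzPack.variables (a Setting) depend on it safely
IzPack.variables in IzPack.Config += ("configFile" -> configOutput.value.getPath)

// ...while the file contents are produced by a task at execution time
val generateConfig = taskKey[File]("writes the generated file contents")
generateConfig := {
  val out = configOutput.value
  IO.write(out, "contents go here")
  out
}
```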
