In my build.sbt, the compile task depends on running the scapegoat inspection:
(compile in Compile) := (compile in Compile).dependsOn(scapegoat).value
I'm trying to introduce a new task for running tests (for development purposes, to speed things up) that does not depend on scapegoat, like this:
lazy val fastTests = taskKey[Unit]("")
fastTests := {
  scapegoat in Compile := {}
  (test in Test).value
}
but the setting override gets ignored.
You cannot do this with a task, because tasks cannot change settings. You can solve it either with different configurations or with a command (commands can change settings). See for example:
How to disable “Slow” tagged Scalatests by default, allow execution with option?
(for the configurations approach)
How to change setting inside SBT command?
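As a rough sketch of the command approach (sbt 1.x; this assumes the plugin's scapegoat key is a TaskKey[Unit] — check your plugin's key types before copying):

```scala
// build.sbt — a command can modify the session's settings before running tasks
commands += Command.command("fastTests") { state =>
  val extracted = Project.extract(state)
  // Override scapegoat with a no-op for this session only
  val noScapegoat = extracted.appendWithSession(Seq(scapegoat := ()), state)
  // Queue the test task to run against the modified state
  "test" :: noScapegoat
}
```

Running `fastTests` from the sbt shell then executes `test` in a session where scapegoat does nothing; reloading the build restores the original settings.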
I have a PlayFramework 2.7 application which is built with sbt.
For accessing the database, I am using jOOQ. As you know, jOOQ reads my database schema and generates Java source classes, which are then used in my application.
The application only compiles if the generated database classes are present.
I am generating the classes with this custom sbt task:
// https://mvnrepository.com/artifact/org.jooq/jooq-meta
libraryDependencies += "org.jooq" % "jooq-meta" % "3.12.1"
lazy val generateJOOQ = taskKey[Seq[File]]("Generate JooQ classes")
generateJOOQ := {
  (runner in Compile).value.run(
    "org.jooq.codegen.GenerationTool",
    (fullClasspath in Compile).value.files,
    Array("conf/db.conf.xml"),
    streams.value.log
  ).failed foreach (sys error _.getMessage)
  ((sourceManaged.value / "main/generated") ** "*.java").get
}
I googled around, found the script above and modified it a little according to my needs, but I do not really understand it, since sbt/Scala are new to me.
The problem now is: when I start generateJOOQ, sbt tries to compile the project first, which fails because the database classes are missing. So what I have to do is comment out all code which uses the generated classes, execute the task (which compiles my project and generates the database classes), and then enable the commented-out code again. This is a pain!
I guess the problem is the (runner in Compile) call, but I did not find any way to execute my custom task WITHOUT compiling first.
Please help! Thank you!
Usually, when you want to generate sources, you should use a source generator, see https://www.scala-sbt.org/1.x/docs/Howto-Generating-Files.html
sourceGenerators in Compile += generateJOOQ.taskValue
Doing that automatically causes SBT to execute your task first before trying to compile the Scala/Java sources.
Then, you should not use the runner task, since that is meant to run your project, which depends on the compile task, which of course needs to execute first.
You should add the jooq-meta library as a dependency of the build, not of your sources. That means you should add the libraryDependencies += "org.jooq" % "jooq-meta" % "3.12.1" line e.g. to project/jooq.sbt.
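For example, project/jooq.sbt would then contain only the dependency line (the file name is just a convention; any .sbt file under project/ works):

```scala
// project/jooq.sbt — adds jooq-meta to the classpath of the build definition
// itself, so the generator can be called directly from a task body
libraryDependencies += "org.jooq" % "jooq-meta" % "3.12.1"
```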
Then, you can simply call jOOQ's GenerationTool directly in your task:
// build.sbt
generateJOOQ := {
  org.jooq.codegen.GenerationTool.main(Array("conf/db.conf.xml"))
  ((sourceManaged.value / "main/generated") ** "*.java").get
}
I have an sbt project where docker:publishLocal will create a docker image on my machine for testing, and docker:publish will publish the image to a repository and also publish jar files from the build to a repository.
If my project is a snapshot, I would like to disable publishing to the repositories, while still being able to build the local image.
ThisBuild / publishArtifact := ! isSnapshot.value
does the right thing for the publish command, but it also disables publishLocal.
I want to write something like
if (isSnapshot.value) {
  publish := { }
}
but that gives me an error that I do not understand at all:
[info] Loading project definition from /Users/dev/project
/Users/dev/build.sbt:1: error: type mismatch;
found : Unit
required: sbt.internal.DslEntry
if (isSnapshot.value) {
^
Past experience dictates that redefining publish to conditionally call the original version won't work, as
publish := {
  if (!isSnapshot.value) publish.value
}
gives warnings that the task is always evaluated.
Is there a way to do this?
The problem with this code is that it evaluates publish.value regardless of the if structure. I recommend reading the documentation on task dependencies. If you want to "delay" the evaluation of a task in one of the if branches, you need to use dynamic task definition:
publish := Def.taskDyn {
  if (isSnapshot.value)
    Def.task {} // doing nothing
  else
    Def.task { publish.value } // could be written as just publish
}.value
But apart from fixing your code, you should be aware that there is a special setting for the functionality you want, it's called skip:
publish/skip := isSnapshot.value
Another thing to notice is the scoping. If you want to override docker:publish, which is the same as Docker/publish in the new syntax, you should add the Docker/ scope prefix to every mention of publish in the code above.
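Putting both points together, the skip-based variant scoped to the Docker packaging might look like this (a sketch, assuming the Docker configuration comes from sbt-native-packager as in the question):

```scala
// Skip only the Docker-scoped publish for snapshot versions;
// docker:publishLocal and the plain publish task are unaffected
Docker / publish / skip := isSnapshot.value
```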
I have a project performing integration tests for a bunch of projects which are all bundled in the same multi-project build with it. The integration tests run through a regular main (object Runner extends App).
I would like to be able to run them from the root project of the multi-project build through a task or command named integrationTest, so I try:
val integrationTest = taskKey[Unit]("Executes integration tests.")
lazy val root = (project in file("."))
  .aggregate(projIntegrationTest, projA, projB, ...)
  .settings(
    integrationTest := (run in Compile in projIntegrationTest).value
  )
Which does nothing when I issue integrationTest on the prompt, only emitting:
[success] Total time: 0 s, completed Oct 23, 2015 12:31:21 AM
How can I find out why run is not executed when my custom task integrationTest runs?
Oddly, if I replace run with compile or publishLocal in integrationTest := (run in Compile in projIntegrationTest).value above, my custom task behaves as expected and compiles or publishes when it is executed.
It doesn't work because run is an InputTask, not a regular Task.
You need to do:
integrationTest := (run in Compile in projIntegrationTest).toTask("").value
This is covered in the "Get a Task from an InputTask" section of http://www.scala-sbt.org/0.13/docs/Input-Tasks.html.
As of sbt 0.13.13 your code gives:
warning: `value` is deprecated for an input task. Use `evaluated` or `inputTaskValue`.
This is a nice improvement; earlier versions of sbt let this pass, making it hard to troubleshoot. (But note that the deprecation message suggests a different solution than I've used here; I haven't investigated that discrepancy. Can someone shed some light on that?)
I'm having a problem trying to understand the concept of scope in sbt. I want a task to be run under a specific scope and to be able to access scoped settings, for example:
build.sbt
name := "Superapp"
name in Test := "Testapp"
val printScopedKey = TaskKey[Unit]("psk", "Print Scoped Key")
printScopedKey := println("***** [APP NAME] " + name.value)
I'd expect the following:
> test:psk
> ***** [APP NAME] Testapp
Instead of the actual:
> ***** [APP NAME] Superapp
How can I do this in sbt? Is that even possible?
OP wrote "I'd expect the following:"
> test:psk
> ***** [APP NAME] Testapp
Without actually defining the psk task in the Test configuration, sbt looks for the psk task first in the Global configuration, then in the order of the project's configurations, which by default is Seq(Compile, Runtime, Test, Provided, Optional).
So the following (and @Jacek Laskowski's answer too) describes how one can go about defining tasks in multiple scopes without code duplication. A setting can be scoped along three axes (project, configuration, and task). The project axis doesn't come into play as much, so we'll discuss configuration and task here.
It's recommended that task-specific settings are scoped to a task to encourage reuse of keys. For example:
test in assembly := {}
In the above, the test key is scoped to the assembly task to control the tests that are run before creating a fat JAR. You can define a "task generator" method that takes a key and creates a graph of settings around it:
def assemblyTask(key: TaskKey[File]): Initialize[Task[File]] = Def.task {
  val t = (test in key).value
  val s = (streams in key).value
  Assembly(
    (outputPath in key).value, (assemblyOption in key).value,
    (packageOptions in key).value, (assembledMappings in key).value,
    s.cacheDirectory, s.log
  )
}
I use that to define assembly, packageScala, and packageDependency tasks.
lazy val baseAssemblySettings: Seq[sbt.Def.Setting[_]] = Seq(
  assembly := Assembly.assemblyTask(assembly).value,
  packageScala := Assembly.assemblyTask(packageScala).value,
  ....
)
So far baseAssemblySettings is configuration-neutral.
If I wanted to scope it in configurations like Compile and Test, I'd call inConfig(conf)(settings) like this:
lazy val assemblySettings: Seq[sbt.Def.Setting[_]] =
  inConfig(Compile)(baseAssemblySettings) ++
  inConfig(Test)(baseAssemblySettings)
Now you have multiple task graphs in multiple configurations.
Thanks for the question! I'd initially thought I'd know the answer and then realized it's not so simple. I had to look around for a solution.
I use sbt 0.13.2-RC1.
> about
[info] This is sbt 0.13.2-RC1
[info] The current project is {file:/C:/dev/sandbox/0.13.2/}root-0-13-2 0.1-SNAPSHOT
[info] The current project is built against Scala 2.11.0-RC3
[info] Available Plugins: org.sbtidea.SbtIdeaPlugin, de.johoop.jacoco4sbt.JacocoPlugin, com.timushev.sbt.updates.UpdatesPlugin
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.3
I found the solution in Mark Harrah's response to a similar question on the sbt mailing list that boils down to the following changes in build.sbt:
scalaVersion := "2.11.0-RC3"
name := "Superapp"
name in Test := "Testapp"
name in Runtime := "Runtimeapp"
lazy val psk = taskKey[Unit]("Print Scoped Key")
val pskSetting = psk := println("***** [APP NAME] " + name.value)
// https://groups.google.com/d/msg/simple-build-tool/A87FFV4Sw4k/KPtygikQvogJ
val myPsks = Seq(Compile, Test, Runtime) flatMap { conf =>
  inConfig(conf)(Seq(pskSetting))
}
myPsks
When the build file is loaded, sbt will automagically know that when you're executing psk, its dependency is name in Compile, while test:psk depends on name in Test. Pretty clever.
> psk
***** [APP NAME] Superapp
[success] Total time: 0 s, completed 2014-03-26 21:27:37
> test:psk
***** [APP NAME] Testapp
[success] Total time: 0 s, completed 2014-03-26 21:27:41
> runtime:psk
***** [APP NAME] Runtimeapp
[success] Total time: 0 s, completed 2014-03-26 21:27:44
Use inspect to dig deeper. It's always quite useful to know how it works under the hood (which is not that hard to understand once you start using right tools, like inspect).
I am trying to use a JVM option in a forked test, which has been set externally to SBT prior to its launch. I'm also setting additional JVM options like so:
javaOptions in ThisBuild ++= List("-Xmx3072m")
To my understanding, based on the sbt documentation, the JVM options provided to the sbt process should be available to the forked process:
By default, a forked process uses the same Java and Scala versions being used for the build and the working directory and JVM options of the current process.
However, I can't seem to retrieve those "external" JVM options in the forked tests, i.e. System.getProperty("foo") will always return null. Given that I am trying to pass along a password, I can't set it directly in the build file. My questions therefore are:
is there an SBT task / key to access the JVM options passed to the JVM in which SBT is running? That way I would attempt to add the key to the javaOptions
is there any other means by which to pass external Java Options to a forked test?
You may control your options with testGrouping. Below is a copy-and-paste from one of my projects. It properly handles hierarchical projects, and a root project without tests, too. Options are merged from javaOptions in run and a test.options file. This allows modifying arguments without reloading the project. That project takes more than a minute to load, so I use test.options to quickly switch between production and debug mode with -Xrunjdwp:transport=dt_...
testGrouping in Test <<= (definedTests in Test, javaOptions in run, baseDirectory in LocalRootProject) map { (tests, javaOptions, baseDirectory) ⇒
  if (tests.nonEmpty) {
    val testOptionsFile = baseDirectory / "test.options"
    val externalOptions = if (testOptionsFile.exists()) {
      val source = scala.io.Source.fromFile(testOptionsFile)
      val options = source.getLines().toIndexedSeq
      source.close()
      options
    } else Nil
    tests map { test ⇒
      new Tests.Group(
        name = test.name,
        tests = Seq(test),
        // runPolicy = Tests.InProcess)
        runPolicy = Tests.SubProcess(javaOptions = javaOptions ++ externalOptions))
    }
  } else {
    Seq(new Tests.Group(
      name = "Empty",
      tests = Seq(),
      runPolicy = Tests.InProcess))
  }
},
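Regarding the first question above — reading the options of the JVM that sbt itself runs in — one possible approach is the standard management API, whose result could be merged into javaOptions for the forked tests. This is a sketch; which options to forward (here, only -D system properties) is an assumption:

```scala
import java.lang.management.ManagementFactory
import scala.collection.JavaConverters._

// JVM arguments of the currently running process (e.g. the sbt JVM)
val currentJvmArgs: List[String] =
  ManagementFactory.getRuntimeMXBean.getInputArguments.asScala.toList

// Keep only -D system properties to pass along to the forked test JVM
val forwardedProps: List[String] = currentJvmArgs.filter(_.startsWith("-D"))
```

In a build, forwardedProps could then be appended to the javaOptions used in the Tests.SubProcess run policy above.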