There is an sbt project declaration:
lazy val myProject = Project("myProject", file("someRoot"))
  .enablePlugins(...)
  .settings(...)
It has a taskKey that extracts some dependencies to the file system.
My problem is that at the moment sbt loads I can't determine all the dependencies; that can only be done after a private command alias is executed:
addCommandAlias("resolveDependencies", "; resolveDependenciesTask; TODO: update myProject dependencies and reload it")
Is there any way to do that?
Actually, disregard my comment on your question. You can use a command to modify the state of the build, so after you run it, the changes it made stay.
Something along these lines:
// in your build.sbt
commands += Command.command("yourCustomCommand")(state =>
  Project.extract(state).append(
    Seq(libraryDependencies += // settings you want to modify
      "com.lihaoyi" % "ammonite-repl" % "0.5.7" cross CrossVersion.full),
    state))
Then call it with sbt yourCustomCommand.
The state instance you return from the command becomes the new state of the build, i.e. if you've added some dependencies, the build will see them.
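Tying this back to your alias: an alias can chain tasks and commands, so (as a hedged sketch, reusing the names from your question) you could run your extraction task first and then the state-modifying command:
addCommandAlias("resolveDependencies", "; resolveDependenciesTask; yourCustomCommand")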
Related
I have a PlayFramework 2.7 application which is built by sbt.
For accessing the database, I am using JOOQ. As you know, JOOQ reads my database schema and generates the Java source classes, which then are used in my application.
The application only compiles, if the database classes are present.
I am generating the classes with this custom sbt task:
// https://mvnrepository.com/artifact/org.jooq/jooq-meta
libraryDependencies += "org.jooq" % "jooq-meta" % "3.12.1"
lazy val generateJOOQ = taskKey[Seq[File]]("Generate JooQ classes")
generateJOOQ := {
  (runner in Compile).value.run("org.jooq.codegen.GenerationTool",
    (fullClasspath in Compile).value.files,
    Array("conf/db.conf.xml"),
    streams.value.log).failed foreach (sys error _.getMessage)
  ((sourceManaged.value / "main/generated") ** "*.java").get
}
I googled around and found the script above and modified it a little bit according to my needs, but I do not really understand it, since sbt/Scala are new to me.
The problem now is that when I start generateJOOQ, sbt tries to compile the project first, which fails because the database classes are missing. So what I have to do is comment out all the code which uses the generated classes, execute the task (which compiles the project and generates the database classes), and then re-enable the commented-out code. This is a pain!
I guess the problem is the command (runner in Compile), but I did not find any way to execute my custom task WITHOUT compiling first.
Please help! Thank you!
Usually, when you want to generate sources, you should use a source generator, see https://www.scala-sbt.org/1.x/docs/Howto-Generating-Files.html
sourceGenerators in Compile += generateJOOQ.taskValue
Doing that automatically causes SBT to execute your task first before trying to compile the Scala/Java sources.
Then, you should not really use the runner task, since that is used to run your project, which depends on the compile task, which needs to execute first of course.
You should add the jooq-meta library as a dependency of the build, not of your sources. That means you should add the libraryDependencies += "org.jooq" % "jooq-meta" % "3.12.1" line e.g. to project/jooq.sbt.
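For example, a minimal sketch of that file (the file name is arbitrary; note also that the GenerationTool class itself ships in the jooq-codegen artifact, so you will likely need that dependency as well):
// project/jooq.sbt — make JOOQ's code generation classes available to the build itself
libraryDependencies += "org.jooq" % "jooq-meta" % "3.12.1"
// assumption: GenerationTool lives in jooq-codegen, which pulls in jooq-meta
libraryDependencies += "org.jooq" % "jooq-codegen" % "3.12.1"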
Then, you can simply call the GenerationTool of JOOQ as usual in your task:
// build.sbt
generateJOOQ := {
  org.jooq.codegen.GenerationTool.main(Array("conf/db.conf.xml"))
  ((sourceManaged.value / "main/generated") ** "*.java").get
}
I have an sbt project where docker:publishLocal will create a docker image on my machine for testing, and docker:publish will publish the image to a repository and also publish jar files from the build to a repository.
If my project is a snapshot, I would like to disable publishing to the repositories, while still being able to build the local image.
ThisBuild / publishArtifact := ! isSnapshot.value
does the right thing for the publish command, but it also disables publishLocal.
I want to write something like
if (isSnapshot.value) {
  publish := { }
}
but that gives me an error that I do not understand at all:
[info] Loading project definition from /Users/dev/project
/Users/dev/build.sbt:1: error: type mismatch;
found : Unit
required: sbt.internal.DslEntry
if (isSnapshot.value) {
^
Past experience dictates that redefining publish to conditionally call the original version won't work, as
publish := {
  if (!isSnapshot.value) publish.value
}
gives warnings that the task is always evaluated.
Is there a way to do this?
The problem with this code is that it evaluates publish.value regardless of the if structure. I recommend reading the documentation on task dependencies. If you want to "delay" the evaluation of a task in one of the if branches, you need to use a dynamic task definition:
publish := Def.taskDyn {
  if (isSnapshot.value)
    Def.task {} // doing nothing
  else
    Def.task { publish.value } // could be written as just publish
}.value
But apart from fixing your code, you should be aware that there is a special setting for exactly the functionality you want: it's called skip:
publish/skip := isSnapshot.value
Another thing to note is the scoping. If you want to override docker:publish, which is the same as Docker/publish in the new syntax, you should add this Docker/ scope prefix to every mention of publish in the code above.
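For example, a hedged sketch of the Docker-scoped variants (assuming the Docker scope provided by sbt-native-packager):
// dynamic-task variant, scoped to Docker
Docker / publish := Def.taskDyn {
  if (isSnapshot.value) Def.task {} // do nothing for snapshots
  else Def.task { (Docker / publish).value }
}.value
// or, equivalently, with the skip setting
Docker / publish / skip := isSnapshot.value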
I have a multi-project build in SBT where some projects should aggregate dependencies and contain no code. So then clients could depend on these projects as a single dependency instead of directly depending on all of their aggregated dependencies. With Maven, this is a common pattern, e.g. when using Spring Boot.
In SBT, I figured I can suppress the generation of the empty artifacts by adding this setting to these projects:
packagedArtifacts := Classpaths.packaged(Seq(makePom)).value
However, the makePom task writes <packaging>jar</packaging> in the generated POM. But now that there is no JAR anymore, this should read <packaging>pom</packaging> instead.
How can I do this?
This question is a bit old, but I just came across the same issue and found a solution. The original answer does point to the right page where this info can be found, but here is an example. It uses the pomPostProcess setting to transform the generated POM right before it is written to disk. Essentially, we loop over all the XML nodes, looking for the element we care about and then rewrite it.
import scala.xml.{Node => XmlNode, NodeSeq => XmlNodeSeq, _}
import scala.xml.transform._

pomPostProcess := { node: XmlNode =>
  val rule = new RewriteRule {
    override def transform(n: XmlNode): XmlNodeSeq = n match {
      case e: Elem if e.label == "packaging" =>
        <packaging>pom</packaging>
      case _ => n
    }
  }
  new RuleTransformer(rule).transform(node).head
}
Maybe you could modify the resulting POM as described here: Modifying the generated POM
You can disable publishing of the default artifacts (JAR, sources, and docs), then opt in explicitly to publishing the POM. sbt then produces and publishes only the POM, with <packaging>pom</packaging>.
// This project has no sources; I want <packaging>pom</packaging> with dependencies
lazy val bundle = project
  .dependsOn(moduleA, moduleB)
  .settings(
    publishArtifact := false, // disable jar, sources, docs
    publishArtifact in makePom := true
  )
lazy val moduleA = project
lazy val moduleB = project
lazy val moduleC = project
Run sbt bundle/publishM2 to verify the POM in ~/.m2/repository.
I dare say this is almost intuitive, a rare moment of pleasant surprise with sbt 😅
I confirmed this with current sbt 1.3.9 and with 1.0.1, the oldest launcher I happen to have installed on my machine.
The Artifacts page in the reference docs may be helpful, perhaps this trick should be added there.
I am trying to use phantomjs, as installed via npm, to run my unit tests for Scala.js.
When I run the tests I am getting the following error:
/usr/bin/env: node: No such file or directory
I believe that is because of how phantomjs, when installed with npm, loads node.
Here is the first line from phantomjs:
#!/usr/bin/env node
If I change that first line to hardcode the path to the node executable (this involves modifying a file installed by npm, so it's a temporary solution at best):
#!/home/bjackman/cgta/opt/node/default/bin/node
Everything works.
I am using phantom.js btw because moment.js doesn't work in the NodeJSEnv.
Workaround:
After looking through the plugin source, here is the workaround.
I am forwarding the environment from sbt to the PhantomJSEnv:
import scala.scalajs.sbtplugin.ScalaJSPlugin._
import scala.scalajs.sbtplugin.env.nodejs.NodeJSEnv
import scala.scalajs.sbtplugin.env.phantomjs.PhantomJSEnv
import scala.collection.JavaConverters._
val env = System.getenv().asScala.toList.map { case (k, v) => s"$k=$v" }

olibCross.sjs.settings(
  ScalaJSKeys.requiresDOM := true,
  libraryDependencies += "org.webjars" % "momentjs" % "2.7.0",
  ScalaJSKeys.jsDependencies += "org.webjars" % "momentjs" % "2.7.0" / "moment.js",
  ScalaJSKeys.postLinkJSEnv := {
    if (ScalaJSKeys.requiresDOM.value) new PhantomJSEnv(None, env)
    else new NodeJSEnv
  }
)
With this I am able to use moment.js in my unit tests.
UPDATE: The relevant bug in Scala.js (#865) has been fixed. This should work without a workaround.
This is indeed a bug in Scala.js (issue #865). For future reference; if you would like to modify the environment of a jsEnv, you have two options (this applies to Node.js and PhantomJS equally):
Pass in additional environment variables as arguments (just like in @KingCub's example):
new PhantomJSEnv(None, env)
// env: Map[String, String]
Passed-in values will take precedence over default values.
Override getVMEnv:
protected def getVMEnv(args: RunJSArgs): Map[String, String] =
  sys.env ++ additionalEnv // this is the default
This will allow you to:
Read/Modify the environment provided by super.getVMEnv
Make your environment depend on the arguments to the runJS method.
The same applies to arguments that are passed to the executable.
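As a hedged sketch of option 2 (assuming the plugin API shown above, and that PhantomJSEnv can be subclassed with its default constructor):
// hypothetical subclass; getVMEnv's signature comes from the answer above
class MyPhantomJSEnv extends PhantomJSEnv {
  override protected def getVMEnv(args: RunJSArgs): Map[String, String] =
    super.getVMEnv(args) + ("PATH" -> sys.env.getOrElse("PATH", "")) // e.g. forward PATH
}
Then point ScalaJSKeys.postLinkJSEnv at new MyPhantomJSEnv instead of new PhantomJSEnv.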
In my Build.scala, I have:
override def rootProject = Some(frontendProject)
I'm trying to convert to the newer build.sbt format, but don't know the equivalent of this line. How do I set the project for sbt to load by default when using build.sbt?
I'm still not sure I understood you right, but you mentioned a multi-project build, so I assume you want to define a root project which aggregates subprojects. Here is how you can do that (in your root build.sbt):
lazy val root = project.in(file(".")).aggregate(subProject1, subProject2)

lazy val subProject1 = project in file("subProject1")
lazy val subProject2 = project in file("subProject2")
See sbt documentation about multi-projects.
Then, if you want to set the default project loaded on sbt startup to a subproject, in addition to your answer to this SO question, I can suggest:
running sbt with the shell command sbt "project XXX",
or adding this line to your build.sbt:
onLoad in Global := { Command.process("project XXX", _: State) } compose (onLoad in Global).value
In both cases sbt first loads the root project and then the subproject.
I've found the following script to be helpful:
#!/usr/bin/env bash
exec "sbt" "project mysubproject" "shell"
exit $?