Per the SBT documentation, "A project is defined by declaring a lazy val of type Project."
That is certainly the norm, and it is what we are doing, but I was wondering what, if any, the reason is that it needs to be lazy.
Using a regular val instead of a lazy val seems to work. Of course, using strict vals means the project definitions are initialized in order, so forward references don't work and projects must be defined in dependency order.
For a relatively large build, with 53 interdependent projects, having ordering enforced is actually a Good Thing™, so I was wondering if there's an actual reason for using lazy val -- besides allowing definitions to occur in arbitrary order.
This is a common "best practice". I haven't seen anybody state the reason explicitly, but from my experience the practice is related to a few features of sbt:
sbt parses any *.sbt file
sbt evaluates an *.sbt file top to bottom
you can create dependencies between definitions across multiple .sbt files
Now imagine you want to structure your build.sbt for readability. We have some common settings and three projects (a root and two subprojects):
val root = project.in(file("."))
  .settings(commonSettings)
  .aggregate(api, server)

val api = project.in(file("api"))
  .settings(commonSettings)
  .settings(name := "api")

val server = project.in(file("server"))
  .settings(commonSettings)
  .settings(name := "server")
  .dependsOn(api)

val commonSettings = Seq(organization := "com.example")
sbt won't start, because several things are wrong in this build.sbt:
The api and server projects are referenced in the root project before they are defined.
commonSettings is referenced in all projects before its definition.
Without making everything lazy, it gets hard to refactor your build files. This is the reason all sbt documentation uses lazy vals everywhere: to avoid confusion and frustration for first-time users.
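For comparison, the same build declared with lazy vals loads fine even though nothing is reordered, because each definition is only evaluated when first referenced:

lazy val root = project.in(file("."))
  .settings(commonSettings)
  .aggregate(api, server)

lazy val api = project.in(file("api"))
  .settings(commonSettings)
  .settings(name := "api")

lazy val server = project.in(file("server"))
  .settings(commonSettings)
  .settings(name := "server")
  .dependsOn(api)

lazy val commonSettings = Seq(organization := "com.example")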
hope that helps,
Muki
Related
In a cross-built Scala.js server/client project, I want changes to some sources to restart the server, and changes to other sources to trigger the packaging process without the restart. Separate tasks will not help, because each would do one or the other, and I want both at the same time.
In more detail:
I've got a Scala.js crossProject. I'm using the following to ensure the server can serve the built JavaScript:
val app = crossProject.settings(...)

lazy val appJS = app.js

lazy val jsFile = fastOptJS in (appJS, Compile)

lazy val appJVM = app.jvm.settings(
  (resources in Compile) += jsFile.value.data,
  (resources in Compile) += jsFile.value.data.toPath.resolveSibling(jsFile.value.data.name + ".map").toFile,
  (resources in Compile) += (packageJSDependencies in (appJS, Compile)).value
)
If I run ~ appJVM/compile:packageBin::packageConfiguration then changes to the JavaScript source are immediately compiled and placed in the appJVM target/classes dir, so a refresh of the browser gets my new code - brilliant.
However, I would also like to use the sbt-revolver plugin to restart the server if I edit server-side code. But there's the rub: if I use ~ ;appJVM/compile:packageBin::packageConfiguration;appJVM/reStart, then changes to the client-side source restart the server, which I don't want. Yet if I remove the client-side project from the transitive watch, it no longer notices when I change the client-side source.
Is there a way to define watchTransitiveSources differently per task?
~ is actually a command that watches the transitive sources of the base project and then synchronously runs everything passed as an argument to it when those change, before re-running the original input (including ~). It does not make any information about what has changed available to those command line inputs (difficult to see how it could).
Consequently, the solution I came to was to write a new watch command. It too needs to watch all sources, but it then conditionally chooses what to do based on which files have changed.
I've hacked together something thoroughly ugly that does this, and will look at making it more legible, general, and tested, and at turning it into a plugin. In the meantime, anyone trying to follow my path can use this public gist: https://gist.github.com/LeisureMonitoringAdmin/0eb2e775e47b40f07d9e6d58d17b6d52
Are you sure you are using sbt-resolver and not sbt-revolver? The second one allows controlling the triggered sources using:
watchSources - defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
watchTransitiveSources - then combines the watchSources for the current project and all execution and classpath dependencies (see the .scala build definition for details on inter-project dependencies).
Source: http://www.scala-sbt.org/0.13/docs/Triggered-Execution.html
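Building on that quoted documentation, a hedged sketch of one possibility (untested, and the path filter is only an assumption about the cross-project layout): narrow watchSources in the JVM project so that a triggered ~reStart ignores client-side edits.

// in appJVM's settings: drop the Scala.js client's files from the watch list,
// so triggered execution ignores client-side edits; adjust the "js/src"
// fragment to match the actual layout
watchSources := watchSources.value.filterNot(_.getAbsolutePath.contains("js/src"))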
I want to use libisabelle to invoke Isabelle from Scala. However, by default (i.e., using the invocation as described in the tutorial), libisabelle will download a fresh Isabelle installation.
I wish to use an existing (read-only) Isabelle configuration instead. I tried the following:
val path = "/opt/Isabelle2016-1"
val setup = Setup.detect(
  Platform.genericPlatform(new File(path).toPath),
  Version.Stable("2016-1")
).right.get
val resources = Resources.dumpIsabelleResources().right.get
val environment = Await.result(setup.makeEnvironment(resources), Duration.Inf)
val config = Configuration.simple("Example")
System.build(environment, config)
val system = System.create(environment, config)
I am not sure whether this is how I am supposed to set things up, but in any case, it does not work:
java.nio.file.AccessDeniedException: /opt/Isabelle2016-1/.lock
So libisabelle wants to write to the Isabelle installation. I want the code to work even with a read-only installation.
How can I get libisabelle to work in the above situation?
Setup.detect will attempt to lock the installation so that no two processes can write into it at the same time.
Using a genericPlatform probably doesn't do what you think, because the path you pass there will be used for everything that libisabelle obtains from or writes to disk, including resources.
Luckily, instantiating a Setup manually is quite simple:
import java.nio.file.Paths

val setup = Setup(
  Paths.get("/opt/Isabelle2016-1"),
  Platform.guess.get,
  Version.Stable("2016-1")
)
With that incantation, you'll use the global installation in /opt/Isabelle2016-1, but nothing is written there. $ISABELLE_HOME_USER etc. will point towards ~/.local/share/libisabelle on Linux.
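If I read the original snippet correctly, the rest of it should then work unchanged against this manually constructed Setup, along these lines (a sketch reusing the question's own calls):

// same calls as in the question, now against the manual Setup
val resources = Resources.dumpIsabelleResources().right.get
val environment = Await.result(setup.makeEnvironment(resources), Duration.Inf)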
Given a build.sbt which is committed to a code repository, what is a general pattern which allows users to override setting values defined in this script?
On the surface, this appears to have been answered in Where should local settings (per user, and not version-controlled) go in newer (0.10+) versions of SBT?. The answer there is to define two scripts in the same directory, build.sbt and local.sbt, and rely on the fact that sbt will combine them. This may work for augmenting values (e.g., appending to lists), but I don't see how it works for overriding a value: if a setting is assigned in both scripts, I don't know which of the two values survives after sbt has combined them.
It could be that I'm missing something very simple.
I'd recommend using the global directory ~/.sbt/0.13, where .sbt files are processed during project load, after the other files in the project itself.
I found ~/.sbt/0.13/global.sbt a good place for global settings - the name alone hints at its purpose.
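As a minimal sketch (the values are placeholders, and this relies on the load order described above, where user-level files are applied after the project's own .sbt files, so a same-key assignment here wins):

// ~/.sbt/0.13/global.sbt - per-user settings, kept out of version control
// (the organization value and resolver URL below are examples only)
organization := "com.example.local"
resolvers += "corp-proxy" at "https://nexus.example.com/repository/maven-public/"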
I'm writing an sbt plugin for assembling loosely coupled modules/services for runtime and testing. Such a module could be a project (in this multi-project build) or a ModuleID (resolved through Ivy). The project structure may look like this:
root
  foo
  bar
  foobar
There is no classpath dependency between the projects. Let's say project foo is self-contained. Project bar needs foo to be started in order to test bar. And project foobar needs foo, bar, and a module referenced by org.sample.mymodule-1.0.0 to be started.
The projects foo, bar, and foobar are declared within root (and have other project dependencies not listed here, but none among themselves). To implement this, we need to:
Declare a setting (looseCoupledDependencies) that can be set inside a subproject (for instance foobar) that can reference other modules as subprojects or artifacts.
For each of these, find the classpath, start a JVM with that classpath, and start the module at run/test time.
I'm still stuck on #1: (1) I cannot find a type that can represent both Project and ModuleID, and (2) I cannot reference projects defined in root from within foobar (I currently have a SettingKey[Seq[Project]] in the settings, which is also not the right type).
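To make #1 concrete, this is roughly the shape of key I'm after (just a sketch; the wrapper type and key name are my own illustration, not working code):

import sbt._

// a dependency that is either a subproject or an Ivy-resolved artifact
sealed trait LooseDependency
case class ProjectDep(ref: ProjectReference) extends LooseDependency
case class ModuleDep(module: ModuleID) extends LooseDependency

val looseCoupledDependencies =
  settingKey[Seq[LooseDependency]]("modules/projects to start at run/test time")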
How could I achieve a setting that takes care of both types and allows references from subprojects? Thanks.
I'm using a pretty recent version of SBT (it seems to be hard to figure out exactly which version it is). I want to pass system properties to my application with sbt run as follows:
sbt -Dmyprop=x run
How could I do that?
SBT's runner doesn't normally create new processes, so you also have to tell it to fork if you want to control the arguments that are passed. You can add something like this to your build settings:
fork := true
javaOptions += "-Dmyprop=x"
There's more detail on forking processes in the SBT documentation.
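If you would rather forward a property given on the sbt command line (as in sbt -Dmyprop=x run) to the forked JVM instead of hard-coding it, something like this sketch should work (myprop is just the example property from the question):

// copy myprop from sbt's own JVM into the forked application's JVM, if set
javaOptions ++= sys.props.get("myprop").map(v => s"-Dmyprop=$v").toSeq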
I found the best way to be adding this to build.sbt:
// important to use ~= so that any other initializations aren't dropped;
// the _ discards the meaningless () value previously assigned to 'initialize'
initialize ~= { _ =>
  System.setProperty("config.file", "debug.conf")
}
Related: when doing this to change which Typesafe Config file gets loaded (my use case), one also needs to manually include the default config. For this, Typesafe Config's suggested include "application" wasn't enough, but include classpath("application.conf") worked. I thought I'd mention it, since others may well want to override system properties for precisely the same reason.
Source: discussion on the sbt mailing list
Thanks for the pointer, this actually helped me solve a somewhat related problem with Scala Tests.
It turned out that sbt does fork the tests when there are sub-projects (see my code), and some of the tests then fail to pick up the system property.
So with sbt -Dsomething="some value" test, some of the tests would fail because they could not find something in the system properties (it happened to be my DB URI, so it kind of mattered!).
This was driving me nuts, so I thought I'd post it here for future reference for others (as @akauppi correctly noted, chances are high that "others" may well be me in a few weeks!).
The fix was to add the following to build.sbt:
fork in Test := false
I think the best way is to use the JAVA_OPTS environment variable:
# update the java options (keeping any previous options)
export JAVA_OPTS="${JAVA_OPTS} -Dmyprop=x"

# now run without any extra option
sbt run
You can pass system properties at the end of the sbt command:
sbt run -Dmyprop=x
If you also have to pass program arguments to a main class, just put the system properties after the quotes again:
sbt "runMain com.example.MyClass -p param-value" -Dmyprop=x