I have a project admp which aggregates 3 subprojects:
lazy val admp = (project in file("."))
  .aggregate(common, regression, integration)
  .settings(commonSettings)
When I execute the test:console command, test classes from the subprojects are not included:
sbt:admp> test:console
[info] Starting scala interpreter...
Welcome to Scala 2.11.9 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_151).
Type in expressions for evaluation. Or try :help.
scala> import me.enreach.qa.Aerospike._
<console>:11: error: not found: value me
import me.enreach.qa.Aerospike._
^
Only when I run the common/test:console command does it load the classes:
sbt:admp> common/test:console
[info] Starting scala interpreter...
Welcome to Scala 2.11.9 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_151).
Type in expressions for evaluation. Or try :help.
scala> import me.enreach.qa.Aerospike._
import me.enreach.qa.Aerospike._
Is there a way to load classes from all sub-projects?
You can achieve this by adding dependencies on your subprojects in the test scope. Add dependsOn to your admp project definition:
lazy val admp = (project in file("."))
  .aggregate(common, regression, integration)
  .dependsOn(
    common % "test->test",
    regression % "test->test",
    integration % "test->test"
  )
  .settings(commonSettings)
This says that admp's test configuration depends on each subproject's test configuration, so their test classes end up on admp's test classpath. You can read more about configuration mappings in the sbt documentation.
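For example (an illustrative variant, not needed for this question), a mapping that wires up both the main and test configurations of a subproject would look like:

  common % "compile->compile;test->test"

For comparison, a plain dependsOn(common) with no explicit mapping is equivalent to common % "compile->compile".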
Now when you run admp/test:console you should have access to all subprojects' test sources.
I'm trying to create a new Configuration in an SBT Scala project with its own main class. Here are the basic requirements:
Production application code is located in <project root>/src/main/scala/com/example/Main.scala:
package com.example
object Main extends App {
  println("Hello from Main!")
}
The main class I'm trying to run should be located in <project root>/src/qa/scala/com/example/QAMain.scala:
package com.example
object QAMain extends App {
  println("Hello from QA!")
}
(As suggested by the path, the actual use-case for this is a version of the application for QA to run that bypasses certain time-consuming operations.)
This main class should be runnable by executing sbt qa:run in the project root directory.
(Nice to have): The classpath of the running application should not contain any of the test classes defined under src/test.
Here's a build.sbt that I feel ought to work, but doesn't:
lazy val QA = config("qa").extend(Compile)
lazy val root = project.in(file("."))
  .configs(QA)
  .settings(
    (sourceDirectories in (QA, compile)) += baseDirectory.value / "src" / "qa",
    (mainClass in (QA, run)) := Some("com.example.QAMain"),
    (mainClass in Compile) := Some("com.example.Main")
  )
Unfortunately, the result is
> sbt qa:run
...
[info] Running com.example.Main
Hello from Main!
> sbt "qa:runMain com.example.QAMain"
...
[info] Running com.example.QAMain
[error] (run-main-0) java.lang.ClassNotFoundException: com.example.QAMain
[error] java.lang.ClassNotFoundException: com.example.QAMain
[error] at java.lang.ClassLoader.findClass(ClassLoader.java:530)
[error] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[error] Nonzero exit code: 1
[error] (Compile / runMain) Nonzero exit code: 1
That last line is interesting, because it looks like SBT is running the task scoped to the Compile configuration instead of my custom QA configuration. sbt inspect confirms this.
My assumption has been that since a configuration's compile task uses the sourceDirectory setting, an override of that setting will force an override of any task downstream of it. This assumption might be wrong in a couple of different ways:
sourceDirectory may not be upstream of compile, but maybe some other setting is that I could change;
compile might need to be explicitly overridden anyway.
It's not clear exactly which settings are upstream of compile, but there are apparently enough of them that you need to pull in sbt.Defaults:
lazy val QA = config("qa").extend(Compile)
lazy val root = project.in(file("."))
  .configs(QA)
  .settings(
    inConfig(QA)(Defaults.compileSettings) : _*
  )
This achieves the desired behavior. To test this, add the following classes:
<project root>/src/main/scala/com/example/Water.scala
package com.example
class Water {
  def drink(): Unit = {
    println("Cool and refreshing")
  }
}
<project root>/src/qa/scala/com/example/Poison.scala
package com.example
class Poison {
  def drink(): Unit = {
    println("You have died of dysentery.")
  }
}
Then SBT is happy to build an instance of Water into QAMain, but will not find Poison to build into Main.
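Two follow-up notes, based on my reading of sbt's Defaults (treat the details as assumptions): inConfig(QA)(Defaults.compileSettings) derives the source directory from the configuration's name, so src/qa/scala is picked up without the explicit sourceDirectories override from the question. And if you want plain qa:run to launch QAMain by default, here's a sketch that layers the mainClass setting from the question on top of the generated defaults:

lazy val QA = config("qa").extend(Compile)
lazy val root = project.in(file("."))
  .configs(QA)
  .settings(inConfig(QA)(Defaults.compileSettings) : _*)
  .settings(
    // assumption: run in QA picks this up once QA has full compile settings
    mainClass in (QA, run) := Some("com.example.QAMain")
  )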
I am trying to set up a multi-project build that includes a sub-project whose build.sbt imports a class defined in a Dependencies.scala file in its project directory. When I run sbt on the sub-project everything is fine, but when I run sbt on the root project I get an error stating that Dependencies is not found. Here is my root build.sbt:
name := "sbtTest"
organization := "com.test"
version := "0.1"
lazy val foo = project
Here is foo's build.sbt:
import Dependencies._
name := "foo"
version := "0.2"
scalaVersion := "2.10.6"
Dependencies.scala is in foo/project, and here is the exact error I get:
/Users/xyz/git/sbtTest/foo/build.sbt:1: error: not found: object Dependencies
import Dependencies._
^
[error] Type error in expression
Has anyone run into this problem?
I fixed this by making my build.sbt look like this:
lazy val otherProject = RootProject(file("../otherproject"))
lazy val rootProject = (project in file("."))
  // dependsOn allows the root project to use code from otherProject
  .dependsOn(otherProject)
  // aggregation runs tasks of the root project on aggregated projects as well
  .aggregate(otherProject)
In sbt you can also define all dependencies in a separate file. By convention this file lives at project/Dependencies.scala, in the same directory as plugins.sbt. Note that when you build from the root, only the root project's project/ directory is on the build classpath, so the file must live there rather than in the sub-project's project directory. Then import Dependencies._ works in any build.sbt of the build.
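For illustration, a minimal Dependencies.scala of that shape (the object contents, artifacts and versions are made-up examples):

// project/Dependencies.scala
import sbt._

object Dependencies {
  // illustrative artifact and version; substitute your own
  val scalaTest = "org.scalatest" %% "scalatest" % "2.2.6" % "test"
  val fooDeps = Seq(scalaTest)
}

A build.sbt can then use it with import Dependencies._ at the top and libraryDependencies ++= fooDeps in its settings.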
We are trying to make a fat jar file containing one small scala source file and a ton of dependencies (simple mapreduce example using spark and cassandra):
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import com.datastax.spark.connector._
import org.apache.spark.SparkConf

object VMProcessProject {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .set("spark.cassandra.connection.host", "127.0.0.1")
      .set("spark.executor.extraClassPath", "C:\\Users\\SNCUser\\dataquest\\ScalaProjects\\lib\\spark-cassandra-connector-assembly-1.3.0-M2-SNAPSHOT.jar")
    println("got config")
    val sc = new SparkContext("spark://US-L15-0027:7077", "test", conf)
    println("Got spark context")
    val rdd = sc.cassandraTable("test_ks", "test_col")
    println("Got RDDs")
    println(rdd.count())
    val newRDD = rdd.map(x => 1)
    val count1 = newRDD.reduce((x, y) => x + y)
  }
}
We do not have a build.sbt file, instead putting jars into a lib folder and source files in the src/main/scala directory and running with sbt run. Our assembly.sbt file looks as follows:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")
When we run sbt assembly we get the following error message:
...
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: java heap space
at java.util.concurrent...
We're not sure how to change the jvm settings to increase the memory since we are using sbt assembly to make the jar. Also, if there is something egregiously wrong with how we are writing the code or building our project that'd help us out a lot too; there's been so many headaches trying to set up a basic spark program!
sbt is essentially a Java process, so you can tune its runtime heap size to address the OutOfMemoryError.
For 0.13.x, the default memory options sbt uses are
-Xms1024m -Xmx1024m -XX:ReservedCodeCacheSize=128m -XX:MaxPermSize=256m.
And you can enlarge the heap size by doing something like
sbt -J-Xms2048m -J-Xmx2048m assembly
I was including spark as an unmanaged dependency (putting the jar file in the lib folder), which used a lot of memory because it is a huge jar.
Instead, I made a build.sbt file which includes spark as a provided dependency.
Secondly, I created the environment variable JAVA_OPTS with the value -Xms256m -Xmx4g, which sets the minimum heap size to 256 megabytes while allowing the heap to grow to a maximum of 4 gigabytes. These two changes combined allowed me to create a jar file with sbt assembly.
More info on provided dependencies:
https://github.com/sbt/sbt-assembly
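For reference, a minimal build.sbt sketch along those lines (the artifact names follow the question; the versions are illustrative assumptions):

scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  // "provided" keeps the huge Spark jar out of the assembled fat jar
  "org.apache.spark" %% "spark-core" % "1.3.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.3.0-M2"
)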
I hit this issue before. In my environment, setting JAVA_OPTS didn't work.
The following worked for me:
set SBT_OPTS="-Xmx4G"
sbt assembly
After that, there were no more out-of-memory errors.
This works for me:
sbt -mem 2000 "set test in assembly := {}" assembly
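If you'd rather not pass the flag on every invocation, the sbt launcher script can also read options from an .sbtopts file in the project root, one option per line (a sketch, assuming your sbt script supports .sbtopts):

-mem 2000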
I'm writing a Task that needs access to the Project - it needs to iterate through the dependencies to do some side-effect specific to our build. I need to be able to work out transitive internal and external dependencies (i.e. modules and jars) of children modules of thisProject.
I'm currently doing something like this to pass the other things that I need (name and Ivy-managed deps via libraryDependencies):
myTask := runMyTask(
  (name in Compile).value,
  (libraryDependencies in Compile).value
)
I still need another parameter like
(project in Compile)
but such a key does not exist.
How am I supposed to get the Project?
NB I realise this is probably not possible - without an evil hack involving named lookup of projects from a manually maintained hashmap - because of the Project/Task/Phase axis, but worth asking anyway in case there is a clean solution.
Use thisProject or any other lazy val that you've defined in the build.
> help thisProject
Provides the current project for the referencing scope.
> inspect thisProject
[info] Setting: sbt.ResolvedProject = Project(id runtime-assembly, base: C:\dev\sandbox\runtime-assembly, configurations: List(compile, runtime, test, provided, optional), plugins: List(<none>), autoPlugins: List(sbt.plugins.CorePlugin, sbt.plugins.IvyPlugin, sbt.plugins.JvmPlugin, sbt.plugins.JUnitXmlReportPlugin))
[info] Description:
[info] Provides the current project for the referencing scope.
[info] Provided by:
[info] {file:/C:/dev/sandbox/runtime-assembly/}runtime-assembly/*:thisProject
[info] Defined at:
[info] (sbt.Load) Load.scala:210
[info] Reverse dependencies:
[info] *:ivyConfigurations
[info] *:name
[info] *:organization
[info] *:cacheDirectory
[info] *:baseDirectory
[info] Delegates:
[info] *:thisProject
[info] {.}/*:thisProject
[info] */*:thisProject
Give it a try in consoleProject as follows:
> consoleProject
[info] Starting scala interpreter...
[info]
import sbt._
import Keys._
import dsl._
import _root_.org.sbtidea.SbtIdeaPlugin._
import _root_.de.johoop.jacoco4sbt.JacocoPlugin._
import _root_.com.timushev.sbt.updates.UpdatesPlugin._
import _root_.sbtassembly.Plugin._
import _root_.sbt.plugins.IvyPlugin
import _root_.sbt.plugins.JvmPlugin
import _root_.sbt.plugins.CorePlugin
import _root_.sbt.plugins.JUnitXmlReportPlugin
import currentState._
import extracted._
import cpHelpers._
Welcome to Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_60).
Type in expressions to have them evaluated.
Type :help for more information.
scala> thisProject
res0: sbt.SettingKey[sbt.ResolvedProject] = sbt.SettingKey$$anon$4@1f3eff94
scala> thisProject.eval
res1: sbt.ResolvedProject = Project(id runtime-assembly, base: C:\dev\sandbox\runtime-assembly, configurations: List(compile, runtime, test, provided, optional), plugins: List(<none>), autoPlugins: List(sbt.plugins.CorePlugin, sbt.plugins.IvyPlugin, sbt.plugins.JvmPlugin, sbt.plugins.JUnitXmlReportPlugin))
scala> thisProject.eval.dependencies
res2: Seq[sbt.ClasspathDep[sbt.ProjectRef]] = List()
There's also the configurations field, which holds the list of available configurations for the project. Use it if you need to query for the value of a setting, say libraryDependencies, across configurations.
scala> thisProject.eval.configurations
res3: Seq[sbt.Configuration] = List(compile, runtime, test, provided, optional)
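Putting this together for the original question, here's a rough sketch of a task that walks the current project's internal and external dependencies (the key name myDepsReport and the output format are made up for illustration):

lazy val myDepsReport = taskKey[Unit]("Prints internal and external dependencies of this project")

myDepsReport := {
  val proj = thisProject.value  // sbt.ResolvedProject for the current scope
  println(s"Project ${proj.id}")
  // internal (inter-project) dependencies
  proj.dependencies.foreach(dep => println(s"  internal: ${dep.project}"))
  // external (Ivy-managed) dependencies
  libraryDependencies.value.foreach(m => println(s"  external: $m"))
}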
You may also want to read about ScopeFilter in the sbt documentation section Getting values from multiple scopes. Including a sample from the page:
lazy val core = project
lazy val util = project
lazy val root = project.settings(
  sources := {
    val filter = ScopeFilter(inProjects(core, util), inConfigurations(Compile))
    // each sources definition is of type Seq[File],
    // giving us a Seq[Seq[File]] that we then flatten to Seq[File]
    val allSources: Seq[Seq[File]] = sources.all(filter).value
    allSources.flatten
  }
)
I'd like to configure Typesafe Activator and its bundled tooling not to use my user home directory - I mean ~/.activator (configuration?), ~/.sbt (sbt configuration?) and especially ~/.ivy2, which I'd like to share between my two OSes.
Typesafe "documentation" is of little help.
Need help for both Windows and Linux, please.
From Command Line Options in the official documentation of sbt:
sbt.global.base - The directory containing global settings and plugins (default: ~/.sbt/0.13)
sbt.ivy.home - The directory containing the local Ivy repository and artifact cache (default: ~/.ivy2)
It appears that ~/.activator is set and used in the startup scripts and that's where I'd change the value.
It also appears (in sbt/sbt.boot.properties in activator-launch-1.2.1.jar) that the value of ivy-home is ${user.home}/.ivy2:
[ivy]
ivy-home: ${user.home}/.ivy2
checksums: ${sbt.checksums-sha1,md5}
override-build-repos: ${sbt.override.build.repos-false}
repository-config: ${sbt.repository.config-${sbt.global.base-${user.home}/.sbt}/repositories}
It means that without some development it's only possible to change sbt.global.base.
➜ minimal-scala activator -Dsbt.global.base=./sbt -Dsbt.ivy.home=./ivy2 about
[info] Loading project definition from /Users/jacek/sandbox/sbt-launcher/minimal-scala/project
[info] Set current project to minimal-scala (in build file:/Users/jacek/sandbox/sbt-launcher/minimal-scala/)
[info] This is sbt 0.13.5
[info] The current project is {file:/Users/jacek/sandbox/sbt-launcher/minimal-scala/}minimal-scala 1.0
[info] The current project is built against Scala 2.11.1
[info] Available Plugins: sbt.plugins.IvyPlugin, sbt.plugins.JvmPlugin, sbt.plugins.CorePlugin, sbt.plugins.JUnitXmlReportPlugin
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.4
If you want to see under the hood, you could query for the current values of the home directories for sbt and Ivy with consoleProject command (it assumes you started activator with activator -Dsbt.global.base=./sbt -Dsbt.ivy.home=./ivy2):
> consoleProject
[info] Starting scala interpreter...
[info]
import sbt._
import Keys._
import _root_.sbt.plugins.IvyPlugin
import _root_.sbt.plugins.JvmPlugin
import _root_.sbt.plugins.CorePlugin
import _root_.sbt.plugins.JUnitXmlReportPlugin
import currentState._
import extracted._
import cpHelpers._
Welcome to Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_60).
Type in expressions to have them evaluated.
Type :help for more information.
scala> appConfiguration.eval.provider.scalaProvider.launcher.bootDirectory
res0: java.io.File = /Users/jacek/sandbox/sbt-launcher/minimal-scala/sbt/boot
scala> appConfiguration.eval.provider.scalaProvider.launcher.ivyHome
res1: java.io.File = /Users/jacek/.ivy2
If you're really intent on convincing Activator to use sbt.ivy.home, you have to change sbt/sbt.boot.properties in activator-launch-1.2.2.jar. Just follow these steps:
Unpack sbt/sbt.boot.properties out of activator-launch-1.2.2.jar.
jar -xvf activator-launch-1.2.2.jar sbt/sbt.boot.properties
Edit sbt/sbt.boot.properties and replace ivy-home under [ivy].
ivy-home: ${sbt.ivy.home-${user.home}/.ivy2}
Add the changed sbt/sbt.boot.properties to activator-launch-1.2.2.jar.
jar -uvf activator-launch-1.2.2.jar sbt/sbt.boot.properties
With the change, -Dsbt.ivy.home=./ivy2 works fine.
scala> appConfiguration.eval.provider.scalaProvider.launcher.bootDirectory
res0: java.io.File = /Users/jacek/sandbox/sbt-launcher/minimal-scala/sbt/boot
scala> appConfiguration.eval.provider.scalaProvider.launcher.ivyHome
res1: java.io.File = /Users/jacek/sandbox/sbt-launcher/minimal-scala/ivy2
I was experimenting with this today. After a while, it seems to me like this could be the best thing to do:
Windows:
setx _JAVA_OPTIONS "-Duser.home=C:/my/preferred/home/"
Linux:
export _JAVA_OPTIONS='-Duser.home=/local/home/me'
Then you should be good to go for any Java program that wants to store data in your home directory.
As an addition to Jacek's answer, another way that worked for me to set the .ivy2 directory was to use the sbt ivyConfiguration task. It returns configuration settings related to ivy, including the path to the ivy home (the one which defaults to ~/.ivy2).
Simply add these few lines to the build.sbt file in your project:
ivyConfiguration ~= { originalIvyConfiguration =>
  val config = originalIvyConfiguration.asInstanceOf[InlineIvyConfiguration]
  val ivyHome = file("./.ivy2")
  val ivyPaths = new IvyPaths(config.paths.baseDirectory, Some(ivyHome))
  new InlineIvyConfiguration(ivyPaths, config.resolvers, config.otherResolvers,
    config.moduleConfigurations, config.localOnly, config.lock,
    config.checksums, config.resolutionCacheDir, config.log)
}
It returns a new ivy configuration identical to the original one, but with the right path to the ivy home directory (here ./.ivy2, so it'll be located just next to the build.sbt file). This way, when sbt uses the ivyConfiguration task to get the ivy configuration, the path to the .ivy2 directory will be the one set above.
It worked for me using sbt 0.13.5 and 0.13.8.
Note: for sbt versions 0.13.6 and above, the construction of the InlineIvyConfiguration needs an additional parameter to avoid a deprecation warning, so you might want to change the last expression to:
new InlineIvyConfiguration(ivyPaths, config.resolvers, config.otherResolvers,
  config.moduleConfigurations, config.localOnly, config.lock,
  config.checksums, config.resolutionCacheDir, config.updateOptions, config.log)
(note the additional config.updateOptions)
I had the same problem on Mac. I deleted the .activator directory in my home directory, along with all related folders, then ran the activator run command in the terminal. Activator re-downloaded everything I had deleted, and that solved the problem.