I know that in sbt you can exclude specific transitive dependencies of a library your project uses, like this:
val kafka = "org.apache.kafka" % "kafka-clients" % "0.9.0.1" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx")
)
How can I do this for an unmanaged jar? I put an unmanaged library jar 'something.jar' in my project's lib directory, but sbt is giving me errors about some dependencies of something.jar that I don't care about. How can I exclude these?
Related
I'd like to create an SBT project with inheritance and shared dependencies.
With Maven's POM files, there is the idea of Project Inheritance where you can set a parent project. I'd like to do the same thing with SBT.
The xchange-stream library uses Maven's Project Inheritance to resolve subproject dependencies when compiled from the parent project.
Here is my idea of what the file structure would look like:
sbt-project/
  project/
    dependencies.scala  # Contains dependencies common to all projects
  build.sbt             # Contains definition of parent project with references
                        # to subprojects
  subproject1/
    build.sbt           # Contains `subproject3` as a dependency
  subproject2/
    build.sbt           # Contains `subproject3` as a dependency
  subproject3/
    build.sbt           # Is a dependency for `subproject1` and `subproject2`
Where subproject1 and subproject2 can include subproject3 in their dependency lists like this:
libraryDependencies += "tld.organization" % "subproject3" % "1.0.0"
Such that when subproject1 or subproject2 is compiled by invoking sbt compile from within its subdirectory, or when the parent sbt-project is compiled from the main sbt-project directory, subproject3 will be compiled and published locally with SBT, or otherwise be made available to the projects that need it.
Also, how would shared dependencies be specified in sbt-project/build.sbt or anywhere in the sbt-project/project directory, such that they are useable within subproject1 and subproject2, when invoking sbt compile within those subdirectories?
The following examples don't help answer either of the above points:
jbruggem/sbt-multiproject-example:
Uses recursive build.sbt files, but doesn't share dependencies among child projects.
Defining Multi-project Builds with sbt: pbassiner/sbt-multi-project-example:
Uses a single build.sbt file for the projects in their subdirectories.
sachabarber/SBT_MultiProject_Demo:
Uses a single build.sbt file.
Such that when subproject1 or subproject2 are compiled by invoking sbt compile from within their subdirectories...
Maybe Maven is meant to be used together with the shell environment and the cd command, but that's not how sbt works, at least as of sbt 1.x in 2019.
The sbt way is to use sbt as an interactive shell, and start it at the top level. You can then either invoke compilation as subproject1/compile, or switch into it using project subproject1, and call compile in there.
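For example, an interactive session started from the sbt-project root might look like this (project names taken from the layout above):
$ sbt
sbt:sbt-project> subproject1/compile
sbt:sbt-project> project subproject2
sbt:subproject2> compile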
house plugin
A feature similar to a parent POM can be achieved by creating a custom plugin.
package com.example

import sbt._
import Keys._

object FooProjectPlugin extends AutoPlugin {
  override def requires = plugins.JvmPlugin

  val commonsIo = "commons-io" % "commons-io" % "2.6"

  // applied once per build, like a parent POM's shared <groupId>
  override def buildSettings: Seq[Def.Setting[_]] = Seq(
    organization := "com.example"
  )

  // applied to every project that enables the plugin,
  // like a parent POM's shared <dependencies>
  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    libraryDependencies += commonsIo
  )
}
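This plugin file lives in the build definition, for example as project/FooProjectPlugin.scala. Since it doesn't override trigger, each subproject opts in explicitly; a minimal sketch of what that could look like (names taken from the question's layout, purely illustrative):
// subproject1/build.sbt
lazy val subproject1 = (project in file("."))
  .enablePlugins(com.example.FooProjectPlugin)
  .settings(
    name := "subproject1"
  )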
sbt-sriracha
It's not exactly what you are asking for, but I have an experimental plugin that allows you to switch between source dependency and binary dependency. See hot source dependencies using sbt-sriracha.
Using that you could create three individual sbt builds for project1, project2, and project3, all located inside the $HOME/workspace directory.
ThisBuild / scalaVersion := "2.12.8"
ThisBuild / version := "0.1.1-SNAPSHOT"

lazy val project3Ref = ProjectRef(workspaceDirectory / "project3", "project3")
lazy val project3Lib = "tld.organization" %% "project3" % "0.1.0"

lazy val project1 = (project in file("."))
  .enablePlugins(FooProjectPlugin)
  .sourceDependency(project3Ref, project3Lib)
  .settings(
    name := "project1"
  )
With this setup, you can launch sbt -Dsbt.sourcemode=true and it will pick up project3 as a subproject.
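For completeness, sbt-sriracha itself would be added to each build's project/plugins.sbt; something along these lines (the version shown is illustrative, check the plugin's README for the current one):
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-sriracha" % "0.1.0")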
You can use the Mecha super-repo concept. Take a look at the setup and docs here: https://github.com/storm-enroute/mecha
The basic idea is that you can combine dependent sbt projects (each with its own build.sbt) under a single root super-repo sbt project:
/root
  /project/plugins.sbt
  repos.conf
  /project1
    /src/..
    /project/plugins.sbt
    build.sbt
  /project2
    /src/..
    /project/plugins.sbt
    build.sbt
Please note that there is no build.sbt in the root folder!
Instead, there is a repos.conf file. It contains the definitions of the sub-repos and looks like the following:
root {
  dir = "."
  origin = ""
  mirrors = []
}

project1 {
  dir = "project1"
  origin = "git@github.com:some_user/project1.git"
  mirrors = []
}

project2 {
  dir = "project2"
  origin = "git@github.com:some_user/project2.git"
  mirrors = []
}
Then you can specify the inter-project, source-level dependencies within the individual projects.
There are two approaches: a dependencies.conf file, or declaring the dependencies in the build source code.
For more details, please see the docs.
I'm attempting to use the sbt-aspectj plugin with the sbt native packager and am running into an issue where the associated -javaagent path to the AspectJ load-time weaver jar references an Ivy cache location rather than something packaged.
That is, after running sbt stage, executing the staged application via bash -x target/universal/stage/bin/myapp/ results in this javaagent:
exec java -javaagent:/home/myuser/.ivy2/cache/org.aspectj/aspectjweaver/jars/aspectjweaver-1.8.10.jar -cp /home/myuser/myproject/target/universal/stage/lib/org.aspectj.aspectjweaver-1.8.10.jar:/home/myuser/myproject/target/universal/stage/lib/otherlibs.jar myorg.MyMainApp args
My target platform is Heroku where the artifacts are built before being effectively 'pushed' out to individual 'dynos' (very analogous to a docker setup). The issue here is that the resulting -javaagent path was valid on the machine in which the 'staged' deployable was built, but will not exist where it's ultimately run.
How can one configure the sbt-aspectj plugin to reference a packaged lib rather than one from the ivy cache?
Current configuration:
project/plugins.sbt:
addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.10.6")
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.1.5")
build.sbt (selected parts):
import com.typesafe.sbt.SbtAspectj._
lazy val root = (project in file(".")).settings(
aspectjSettings,
javaOptions in Runtime ++= { AspectjKeys.weaverOptions in Aspectj }.value,
// see: https://github.com/sbt/sbt-native-packager/issues/598#issuecomment-111584866
javaOptions in Universal ++= { AspectjKeys.weaverOptions in Aspectj }.value
.map { "-J" + _ },
fork in run := true
)
Update
I've tried several approaches including pulling the relevant output for javaOptions from existing mappings, but the result is a cyclical dependency error thrown by sbt.
I have something that technically solves my problem but feels unsatisfactory. As of now, I'm including an aspectjweaver dependency directly and using the sbt-native-packager concept of bashScriptExtraDefines to append an appropriate javaagent:
updated build.sbt:
import com.typesafe.sbt.SbtAspectj._
lazy val root = (project in file(".")).settings(
aspectjSettings,
bashScriptExtraDefines += scriptClasspath.value
.filter(_.contains("aspectjweaver"))
.headOption
.map("addJava -javaagent:${lib_dir}/" + _)
.getOrElse(""),
fork in run := true
)
You can add the following settings in your sbt config:
.settings(
  retrieveManaged := true,
  libraryDependencies += "org.aspectj" % "aspectjweaver" % aspectJWeaverV
)
The AspectJ weaver JAR will be copied to ./lib_managed/jars/org.aspectj/aspectjweaver/aspectjweaver-[aspectJWeaverV].jar in your project root.
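You can then point the -javaagent flag at that in-project location instead of the Ivy cache. A rough sketch, assuming the aspectJWeaverV value used above and the run task; for the packaged start script you would feed the same path into bashScriptExtraDefines as in the question's workaround:
// build.sbt (sketch)
javaOptions in run += {
  val weaver = baseDirectory.value / "lib_managed" / "jars" /
    "org.aspectj" / "aspectjweaver" / s"aspectjweaver-${aspectJWeaverV}.jar"
  s"-javaagent:$weaver"
}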
I actually solved this by using the sbt-javaagent plugin to add agents to the runtime.
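For reference, a minimal sketch of that approach (plugin and weaver versions are illustrative; check the sbt-javaagent README for the current coordinates):
// project/plugins.sbt
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.5")

// build.sbt
lazy val root = (project in file("."))
  .enablePlugins(JavaAppPackaging, JavaAgent)

javaAgents += "org.aspectj" % "aspectjweaver" % "1.8.10"
The plugin is designed to resolve and package the agent jar and have the native-packager start script pass a -javaagent flag pointing at the packaged copy rather than the Ivy cache.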
We are trying to make a fat jar containing one small Scala source file and a ton of dependencies (a simple MapReduce example using Spark and Cassandra):
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import com.datastax.spark.connector._
import org.apache.spark.SparkConf
object VMProcessProject {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .set("spark.cassandra.connection.host", "127.0.0.1")
      .set("spark.executor.extraClassPath", "C:\\Users\\SNCUser\\dataquest\\ScalaProjects\\lib\\spark-cassandra-connector-assembly-1.3.0-M2-SNAPSHOT.jar")
    println("got config")
    val sc = new SparkContext("spark://US-L15-0027:7077", "test", conf)
    println("Got spark context")
    val rdd = sc.cassandraTable("test_ks", "test_col")
    println("Got RDDs")
    println(rdd.count())
    val newRDD = rdd.map(x => 1)
    val count1 = newRDD.reduce((x, y) => x + y)
  }
}
We do not have a build.sbt file; instead we put jars into a lib folder and source files in the src/main/scala directory, and run with sbt run. Our assembly.sbt file looks as follows:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")
When we run sbt assembly we get the following error message:
...
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: java heap space
at java.util.concurrent...
We're not sure how to change the jvm settings to increase the memory since we are using sbt assembly to make the jar. Also, if there is something egregiously wrong with how we are writing the code or building our project that'd help us out a lot too; there's been so many headaches trying to set up a basic spark program!
sbt is essentially a Java process. You can try to tune your sbt runtime heap size for the OutOfMemory issues.
For 0.13.x, the default memory options sbt uses are
-Xms1024m -Xmx1024m -XX:ReservedCodeCacheSize=128m -XX:MaxPermSize=256m.
And you can enlarge the heap size by doing something like
sbt -J-Xms2048m -J-Xmx2048m assembly
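If you don't want to pass those flags on every invocation, many sbt launchers also pick them up from an .sbtopts file in the project root (one option per line) or from the SBT_OPTS environment variable; whether this works depends on your launcher. For example:
# .sbtopts
-J-Xms2048m
-J-Xmx2048m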
I was including Spark as an unmanaged dependency (putting the jar file in the lib folder), which used a lot of memory because it is a huge jar.
Instead, I made a build.sbt file which included Spark as a provided dependency.
Secondly, I created the environment variable JAVA_OPTS with the value -Xms256m -Xmx4g, which sets the minimum heap size to 256 megabytes while allowing the heap to grow to a maximum of 4 gigabytes. These two changes combined allowed me to create a jar file with sbt assembly.
More info on provided dependencies:
https://github.com/sbt/sbt-assembly
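A minimal build.sbt along those lines might look like the following; the artifact names and versions are illustrative for the Spark/Cassandra setup above, so adjust them to whatever you actually run:
name := "vm-process-project"

scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  // "provided": on the compile classpath, but left out of the assembly jar
  "org.apache.spark" %% "spark-core" % "1.3.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.3.0"
)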
I met this issue before. For my environment, setting JAVA_OPTS didn't work.
I used the command below and it works:
set SBT_OPTS="-Xmx4G"
sbt assembly
There is no out-of-memory issue anymore.
This works for me:
sbt -mem 2000 "set test in assembly := {}" assembly
I am totally new to SBT. Suppose I have three Scala projects: project_a, project_b, project_c. How should I go about building all three projects into one jar file? Suppose I use project_a as the root project. The directory structure is like
--project_a
    --build.sbt
--project_b
--project_c
Following the instructions on the sbt website, I created a build.sbt file, which looks something like
lazy val root = (project.in(file("."))).aggregate(project_b, project_c)
lazy val project_b = project
lazy val project_c = project
I put the build.sbt under project_a. When I run sbt clean compile under project_a, new (kind of empty) project_b and project_c folders are created under the project_a folder. However, in the build.sbt file, I meant project_b and project_c to refer to the original folders I had already created, which contain the source and test code, and which are outside project_a.
Can someone let me know what I did wrong?
Thanks
First, your multi-project setup is not right.
The Getting Started guide says:
Aggregation means that running a task on the aggregate project will also run it on the aggregated projects.
If you have project_a that uses project_b and project_c, then you need root in addition to project_a, project_b, and project_c.
Root can aggregate all three (a, b, and c), but it only aggregates commands given to sbt shell, for instance for compiling all three at the same time.
project_a should be set up to depend on project_b and project_c.
Here's an example:
lazy val commonSettings = Seq(
  scalaVersion := "2.11.4",
  organization := "com.example"
)

lazy val root = (project in file(".")).
  aggregate(project_a, project_b, project_c).
  settings(commonSettings: _*)

lazy val project_a = project.
  dependsOn(project_b, project_c).
  settings(commonSettings: _*).
  settings(
    // your settings here
  )

lazy val project_b = project.
  settings(commonSettings: _*)

lazy val project_c = project.
  settings(commonSettings: _*)
How should I go about building all three projects into one jar file?
If you just want *.class files from your own projects, you can see an example on Macro Projects.
If you want *.class files and library dependencies, you need sbt-assembly.
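A rough sketch of wiring sbt-assembly into the build above (the plugin version is illustrative):
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")
Then running project_a/assembly should produce a single jar with the classes of project_a, project_b, and project_c plus their library dependencies, since project_a depends on the other two.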
I'm attempting to scope a dependency to a module in the same project using SBT's configurations.
In production, this dependency is satisfied by a jar on the classpath, but during dev it would be nice to do server/config-a:run or server/config-b:run to select the dependency manually.
Currently, I have something like this:
lazy val configA = config("config-a") extend Runtime
lazy val configB = config("config-b") extend Runtime
lazy val DevConfigA = Project(id = "dev-config-a", base = file("dev-config-a"))
lazy val DevConfigB = Project(id = "dev-config-b", base = file("dev-config-b"))
lazy val server = Project(id = "server",
base = file("server"),
dependencies = Seq(common))
.configs(configA, configB)
.dependsOn(DevConfigA % configA, DevConfigB % configB)
DevConfigA and DevConfigB bring in resources used for configuration. We want exactly one of them to be loaded. The goal is that server/config-a:run would depend on DevConfigA module, and not DevConfigB.
I had to move the configs and dependsOn out of the call to Project.apply to get it to compile. After that, the DevConfig* dependencies aren't showing up when I run server/config-a:run, or if I call show server/config-a:dependency-classpath.
Is there a way to make inter-module dependencies dependent on the config?
Yes, there's a way to make dependencies configuration-dependent: use libraryDependencies scoped to the configuration.
I'm using the latest stable release of SBT.
[server]> show sbtVersion
[info] 0.13.1
Let's assume you need different versions of a library, e.g. scalaz, based upon the configuration you execute run in. As a matter of fact, you don't have to worry about the task itself, only about the dependencies available in a given configuration, and since libraryDependencies drives them, that's what I'm going to use.
[server]> help libraryDependencies
Declares managed dependencies.
Here's the build.sbt that gives what you want.
build.sbt
lazy val configA = config("config-a") extend Runtime
lazy val configB = config("config-b") extend Runtime
lazy val server = project in file(".") configs(configA, configB)
val scalaz705 = "org.scalaz" %% "scalaz-core" % "7.0.5"
val scalaz710_M5 = "org.scalaz" %% "scalaz-core" % "7.1.0-M5"
libraryDependencies in configA += scalaz705
libraryDependencies in configB += scalaz710_M5
With the above build.sbt, sbt lets us pick different versions of Scalaz based upon the configuration.
[server]> show libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.3)
[server]> show config-a:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.3, org.scalaz:scalaz-core:7.0.5)
[server]> show config-b:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.3, org.scalaz:scalaz-core:7.1.0-M5)