Using macro-paradise and cross-compiling with 2.12/2.13 - sbt

Since Scala 2.13, macro-paradise has been inlined in the compiler and is available via a compiler flag:
Compile / scalacOptions += "-Ymacro-annotations"
For reference, in previous versions of Scala, macro-paradise was available via a compiler plugin:
addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full)
What is the canonical way to conditionally add either the first setting or the second, depending on the Scala version, in a build targeting both Scala 2.12 and 2.13?
I would like to write the following but it doesn’t work:
CrossVersion.partialVersion(scalaVersion.value) match {
  case Some((2, n)) if n >= 13 => Compile / scalacOptions += "-Ymacro-annotations"
  case _ => addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full)
}
It fails with the following error:
error: `value` can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
CrossVersion.partialVersion(scalaVersion.value) match {
^
In the meantime, I can use the following workaround, but I wish a simpler solution were supported:
Compile / scalacOptions ++= {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, n)) if n >= 13 => "-Ymacro-annotations" :: Nil
    case _ => Nil
  }
}

libraryDependencies ++= {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, n)) if n >= 13 => Nil
    case _ => compilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full) :: Nil
  }
}

If you would like to write

CrossVersion.partialVersion(scalaVersion.value) match {
  case Some((2, n)) if n >= 13 => Compile / scalacOptions += "-Ymacro-annotations"
  case _ => addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full)
}

one option is to define a custom sbt command like so:
def compileWithMacroParadise: Command = Command.command("compileWithMacroParadise") { state =>
  import Project._
  val extractedState = extract(state)
  val stateWithMacroParadise = CrossVersion.partialVersion(extractedState.get(scalaVersion)) match {
    case Some((2, n)) if n >= 13 =>
      extractedState.appendWithSession(Seq(Compile / scalacOptions += "-Ymacro-annotations"), state)
    case _ =>
      extractedState.appendWithSession(addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full), state)
  }
  val (stateAfterCompileWithMacroParadise, _) =
    extract(stateWithMacroParadise).runTask(Compile / compile, stateWithMacroParadise)
  stateAfterCompileWithMacroParadise
}

commands ++= Seq(compileWithMacroParadise)
addCommandAlias("compile", "compileWithMacroParadise")
sbt compile should now make appropriate modifications to build state (stateWithMacroParadise) before running the compile task.

Fully working example; add this code in a Compiler.scala file in your project/ directory:
import sbt._
import sbt.Keys._

object Compiler extends AutoPlugin {
  override def trigger = allRequirements
  override def projectSettings: Seq[Def.Setting[_]] =
    Seq(
      libraryDependencies ++= (CrossVersion.partialVersion(scalaVersion.value) match {
        case Some((2, x)) if x < 13 =>
          Seq(
            compilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full),
            "org.scala-lang.modules" %% "scala-collection-compat" % "2.1.6"
          )
        case _ => Nil
      }),
      Compile / scalacOptions ++= (CrossVersion.partialVersion(scalaVersion.value) match {
        case Some((2, x)) if x >= 13 =>
          Seq("-Ymacro-annotations")
        case _ => Nil
      })
    )
}
Note that you have to use projectSettings; buildSettings will not work.
The scala-collection-compat dependency is another one you typically want when cross-compiling 2.12 and 2.13. It lets you write

import scala.jdk.CollectionConverters._

instead of the deprecated scala.collection.JavaConverters.
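For illustration, a minimal snippet (the JavaInterop object name is hypothetical, not from the answer) that compiles unchanged on 2.12, where scala-collection-compat supplies the import, and on 2.13, where it is in the standard library:

import scala.jdk.CollectionConverters._ // on 2.12 this comes from scala-collection-compat

object JavaInterop {
  // Convert the java.util.Set returned by Properties into a Scala List.
  def propertyKeys(props: java.util.Properties): List[String] =
    props.stringPropertyNames().asScala.toList
}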

Related

Build a Jar with and without provided dependencies

I have an sbt project with Spark dependencies. These dependencies are provided at runtime, and hence I import them under the provided scope.
val hadoop = Seq("org.apache.hadoop" % "hadoop-client" % "3.3.1" % Provided)
val spark = Seq(
  "org.apache.spark" %% "spark-core" % SparkVersion % Provided,
  "org.apache.spark" %% "spark-sql" % SparkVersion % Provided,
  "org.apache.spark" %% "spark-mllib" % SparkVersion % Provided
)

lazy val coreDto = (project in file("xxxx"))
  .enablePlugins(BuildInfoPlugin)
  .enablePlugins(PackPlugin)
  .settings(
    name := "xxxx",
    moduleName := "xxxx",
    version := "1.0",
    libraryDependencies ++= (hadoop ++ spark)
  )
All is well until now. Now I have a new scenario where I have to publish the jar to our Maven repository, and I am able to publish it successfully. The issue is the provided scope: the compile-time dependencies are not set appropriately in the published POM.
Question: how do I configure the build so that the provided scope is ignored during publishing, but still honored when I package?
Using this to publish, in case it is helpful:
lazy val publishSettings = Seq(
  publishMavenStyle := true,
  publishTo := {
    val url = "https://xxxxl/maven/v1/"
    if (version.value.trim.toUpperCase.endsWith("SNAPSHOT"))
      Some("snapshots".at(url))
    else
      Some("releases".at(url))
  },
  aetherDeploy / logLevel := Level.Info,
  aetherOldVersionMethod := true,
  credentials += Credentials(Path.userHome / ".sbt" / ".credentials")
)
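The question is reproduced here without an accepted answer. One approach that is sometimes used (a sketch, not from the original thread; it assumes sbt 1.x and the built-in pomPostProcess setting) is to strip the <scope>provided</scope> elements from the generated POM, so the published artifact declares ordinary compile dependencies while packaging tasks still honor the Provided configuration:

import scala.xml.{Elem, Node}
import scala.xml.transform.{RewriteRule, RuleTransformer}

// Sketch: drop <scope>provided</scope> from every dependency in the
// published POM; local packaging (pack, assembly) still sees Provided.
pomPostProcess := { node: Node =>
  object dropProvidedScope extends RewriteRule {
    override def transform(n: Node): Seq[Node] = n match {
      case e: Elem if e.label == "scope" && e.text == "provided" => Seq.empty
      case other => other
    }
  }
  new RuleTransformer(dropProvidedScope).transform(node).head
}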

Conditional scalacSettings / settingKey

I want my scalacOptions to be stricter (more linting) when I issue my own command validate.
What is the best way to achieve that?
A new scope (strict) did work, but it requires compiling the project twice when you issue test, so that's not an option.
An sbt custom command allows for temporary modification of the build state, which can be discarded after the command finishes:
def validate: Command = Command.command("validate") { state =>
  import Project._
  val stateWithStrictScalacSettings =
    extract(state).appendWithSession(
      Seq(Compile / scalacOptions ++= Seq(
        "-Ywarn-unused:imports",
        "-Xfatal-warnings",
        "..."
      )),
      state
    )
  val (s, _) = extract(stateWithStrictScalacSettings).runTask(Test / test, stateWithStrictScalacSettings)
  s
}

commands ++= Seq(validate)
or, more succinctly, using the :: convenience method for State transformations:
commands += Command.command("validate") { state =>
  """set scalacOptions in Compile := Seq("-Ywarn-unused:imports", "-Xfatal-warnings", "...")""" ::
    "test" :: state
}
This way we can use sbt test during development, while our CI hooks into sbt validate which uses stateWithStrictScalacSettings.
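Note that the succinct variant uses := in the set command, which replaces the project's scalacOptions for the session; if you would rather append the strict flags to whatever is already configured, a ++= form (a sketch along the same lines, not from the original answer) also works:

commands += Command.command("validate") { state =>
  // ++= appends the strict flags for the session instead of replacing
  // the project's regular scalacOptions.
  """set scalacOptions in Compile ++= Seq("-Ywarn-unused:imports", "-Xfatal-warnings")""" ::
    "test" :: state
}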

Excluding a dependency in creating fat jar using SBT

I am writing an Akka application. While creating a fat jar of the application, I don't want the Scala libraries to be packaged with the jar. My build.sbt looks as follows:
lazy val root = (project in file(".")).
  settings(
    name := "akka-app",
    version := "1.0",
    scalaVersion := "2.10.4",
    mainClass in Compile := Some("sample.hello.HelloWorld")
  )

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.4" % "provided",
  "com.typesafe" % "config" % "1.2.1"
)

// META-INF discarding
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
  }
}
But sbt still packages the Scala library into the jar. I want only the com.typesafe config library to be present in the jar. Any solution on how to achieve this?
You can exclude Scala by modifying the option in the assemblyOption setting:
assemblyOption in assembly :=
(assemblyOption in assembly).value.copy(includeScala = false)
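For reference, sbt-assembly 1.x removed the case-class copy method from AssemblyOption; on current versions the equivalent (a sketch, assuming an sbt-assembly 1.x where AssemblyOption exposes withIncludeScala) is:

// Same effect with slash syntax on sbt 1.x / sbt-assembly 1.x.
assembly / assemblyOption :=
  (assembly / assemblyOption).value.withIncludeScala(false)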

Why is the error "Not a valid command: assembly-merge-strategy"?

I have the following build.sbt file.
import AssemblyKeys._
name := "approxstrmatch"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0"
resolvers += "AkkaRepository" at "http://repo.akka.io/releases/"
// My merge strategy is specified here.
lazy val app = Project("approxstrmatch", file("approxstrmatch"),
  settings = buildSettings ++ assemblySettings ++ Seq(
    mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
      {
        case PathList("javax", "servlet", xs @ _*) => MergeStrategy.first
        case PathList("javax", "transaction", xs @ _*) => MergeStrategy.first
        case PathList("javax", "mail", xs @ _*) => MergeStrategy.first
        case PathList("javax", "activation", xs @ _*) => MergeStrategy.first
        case PathList(ps @ _*) if ps.last endsWith ".html" => MergeStrategy.first
        case "application.conf" => MergeStrategy.concat
        case "unwanted.txt" => MergeStrategy.discard
        case x => old(x)
      }
    }
  )
)
mainClass in assembly := Some("approxstrmatch.JaccardScore")
// jarName in assembly := "approstrmatch.jar"
When I execute the command sbt assembly-merge-strategy, there is an error I don't understand. Any help is appreciated.
approxstrmatch]$ sbt assembly-merge-strategy
[info] Loading project definition from /apps/sameert/software/approxstrmatch/project
[info] Set current project to approxstrmatch (in build file:/apps/sameert/software/approxstrmatch/)
[error] Not a valid command: assembly-merge-strategy
[error] No such setting/task
My understanding tells me there's no assembly-merge-strategy task in the sbt-assembly plugin (I can only suspect you use that plugin in your build).
Execute assembly as described in https://github.com/sbt/sbt-assembly#assembly-task as "an awesome new assembly task which will compile your project, run your tests, and then pack your class files and all your dependencies into a single JAR file".
There is a setting named assemblyMergeStrategy (aka assembly-merge-strategy). It's just that you won't directly use it. The way sbt-assembly uses it is scoped to assembly task:
mergeStrategy in assembly <<= ....
So here's what you have to do to call it from the shell:
$ sbt assembly::assemblyMergeStrategy
[info] blabla other things...
[info] <function1>
Adding assemblySettings to your build.sbt will help.
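For reference, on sbt 1.x with a current sbt-assembly the same customization is written with slash syntax and := (a sketch, not part of the original answers; sbt supports the self-reference to the key's previous value):

assembly / assemblyMergeStrategy := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x =>
    // Fall back to the plugin's default strategy for everything else.
    val oldStrategy = (assembly / assemblyMergeStrategy).value
    oldStrategy(x)
}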

How to tell sbteclipse to ignore src/main/java?

How can I get the sbteclipse plugin to stop adding/creating the src/main/java and src/test/java folders in the Eclipse .classpath?
I don't have these folders, and when I run the eclipse command the plugin creates them and adds them to the Eclipse .classpath.
build.sbt file
name := "myproject"
version := "1.0"
scalaVersion := "2.10.1"
resolvers += "google-api-services" at "http://google-api-client-libraries.appspot.com/mavenrepo"
libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test"
libraryDependencies += "junit" % "junit" % "4.10" % "test"
libraryDependencies += "com.novocode" % "junit-interface" % "0.10-M1" % "test"
EclipseKeys.createSrc := EclipseCreateSrc.ValueSet(EclipseCreateSrc.Unmanaged, EclipseCreateSrc.Source, EclipseCreateSrc.Resource)
project/plugins.sbt file
resolvers += Classpaths.typesafeResolver
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.2.0")
Thanks.
It is sbt's default behavior to have both Java and Scala sources on the classpath; their appearing in Eclipse is just a consequence.
It can be changed with (for a Java-only project):
unmanagedSourceDirectories in Compile := (javaSource in Compile).value :: Nil
or (for a Scala-only project):
unmanagedSourceDirectories in Compile := (scalaSource in Compile).value :: Nil
or they can be removed entirely:
unmanagedSourceDirectories in Compile := Nil
You can do it like this:
unmanagedSourceDirectories in Test <<= (sourceDirectory){ src => src / "somerandompathfortestsources" :: Nil}
To see what they are (in the sbt console):
show unmanagedSourceDirectories
show sources
...
To see what defines them:
inspect unmanagedSourceDirectories
...
More about:
http://www.scala-sbt.org/0.13.0/docs/Detailed-Topics/Java-Sources.html
