What does a bare line on its own do in sbt?

In a .sbt file I often have lines copy-pasted from readmes without knowing what they actually do. An example is, after adding sbt-revolver to plugins.sbt, writing the line
Revolver.settings
My current understanding is that this magically adds re-start and re-stop commands to sbt. However, I have been led to understand that a line in an .sbt file does not, in fact, perform magic, but rather creates a key and associates a value with it.
What keys does such a line set, and to what value? What would be the equivalent statement in a .scala build definition?

*.sbt files can contain bare DslEntry expressions, which include Setting[T] and Seq[Setting[T]].
An expression like someString := "a" or someSeq += "b" is a Setting for a specific type T.
These settings are values, though: they describe transformations (change, add, append, etc.) of different parts of the build, which get folded into the build state and structure.
In your example, Revolver.settings is a Seq[Setting[_]] that defines the default setup for using sbt-revolver.
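For illustration, a minimal build.sbt could mix both kinds of bare entries like this (the project name and dependency below are made up):

// build.sbt -- every bare expression is a DslEntry that sbt folds into the build
name := "demo"                                       // a single Setting[String]

libraryDependencies += "junit" % "junit" % "4.11"    // appends one element to a Seq setting

Revolver.settings                                    // a whole Seq[Setting[_]] contributed by sbt-revolver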
If you set it up in a project/*.scala build definition, you need to assign it to the root project, which is either:
the sole project in your build, or
the project that aggregates all other (sub)projects.
Therefore it would look something like:
import sbt._, Keys._

object Build extends Build {
  val bippy = project in file(".") settings (Revolver.settings: _*)
}


premake: how to get a list of the defined configurations?

i.e., I'd like a hypothetical function get_configurations() that would let me do something like this in my premake5.lua:
workspace "myworkspace"
configurations { "debug", "release" }
project "myproject"
configurations { "projconfig" }
for _, cfg in ipairs(get_configurations()) do
print(cfg)
end
...and have it output:
debug
release
projconfig
Is this possible? I saw that there is premake.configset, but it's not clear how to use it...
No, that is not possible. In your simple example it looks easy, but in larger projects there may be many considerations that go into deciding what values end up in that list. It could be filtered by target platform, or toolset, or any number of other variables. Some later block may remove a value that was set earlier. The actual list can't be determined until after all scripts have been run and the final configuration is compiled for export.
However, Premake is just Lua, so you can always define a list of configurations and associate it with a variable, or wrap your settings up in a function and pass in the values.
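As a rough sketch of that approach (names are made up), you could keep the list in an ordinary Lua table, hand it to Premake, and iterate over it yourself:

-- premake5.lua: the configuration list lives in a plain Lua variable
local my_configurations = { "debug", "release", "projconfig" }

workspace "myworkspace"
    configurations(my_configurations)

project "myproject"

-- the same table is available to ordinary Lua code
for _, cfg in ipairs(my_configurations) do
    print(cfg)
end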

How to make Flow understand code written for Node.js?

I'm just getting started with Flow, trying to introduce it into an existing Node codebase.
Here are two lines Flow complains about:
import Module from 'module';
const nodeVersion = Number(process.versions.node.split('.')[0]);
The warnings about these lines are, respectively:
module. Required module not found
call of method `split`. Method cannot be called on possibly null value
So it seems like Flow isn't aware of some things that are standard in a Node environment (e.g. process.versions.node is guaranteed to be a string, and there is definitely a Node builtin called module).
But then again, Flow's configuration docs suggest it's Node-aware by default. And I have plenty of other stuff like import fs from 'fs'; which does not cause any warning. So what am I doing wrong?
The fs module works as expected because Flow comes with built-in definitions for it; see declare module "fs" here: https://github.com/facebook/flow/blob/master/lib/node.js#L624
Regarding process.versions.node, you can see in the same file that the versions key is typed as a map of nullable strings, with no mention of the specific node property: versions : { [key: string] : ?string };. So you'll need to either make a PR to improve this definition, or adjust your code for the possibility of that value being null.
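For the second option, here is a small sketch of the kind of adjustment Flow accepts (the NaN fallback is arbitrary):

// Flow narrows ?string to string once we guard against null/undefined
const rawVersion = process.versions.node;
const nodeVersion = rawVersion != null
  ? Number(rawVersion.split('.')[0])
  : NaN;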
I guess the answer about the "module" module is obvious now – there are no built-in definitions for it in Flow's lib/node.js. You could write your own definitions and optionally send a PR with them to the Flow team. You can also try searching GitHub for these; someone might have done the work already.
That lib directory is very useful by the way, it has Flow definitions for DOM and other stuff as well.
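If you do write your own definitions, even a minimal stub makes the "Required module not found" error go away. The file path below assumes flow-typed/ is picked up as a library directory (either by default or via [libs] in your .flowconfig), and the any type can be tightened later:

// flow-typed/module.js -- hand-written stub for the Node built-in "module"
declare module "module" {
  declare module.exports: any;
}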

Add external QML module path to QQmlApplicationEngine

I have an external QML module composed of graphical elements. I'd like to find its path in order to add it to my QQmlApplicationEngine. Is there a way to do this?
QQmlApplicationEngine engine;
engine.addImportPath("externalQmlModulePath");
With this, I'll be able to import the graphical elements from my qrc's QML files (which are of course inside the project).
Take a look at QStandardPaths. Having your resources (whether your own or 3rd party) relative to those paths makes them consistently available on target systems. The suggested path for application-specific data is QStandardPaths::AppDataLocation.
In CMake you could add a custom post-build command to copy all your resources (again, no difference if your own or 3rd party):
add_custom_command(TARGET ${MY_APP} POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_directory
            ${MY_APP_RES_SOURCE_DIR} ${MY_APP_RES_DEST_DIR})
Edit
QStandardPaths::AppDataLocation is of course just an enum value to specify which standard path you are looking for. To actually get the app data path, use the standardLocations method like this:
auto appDataPath = QStandardPaths::standardLocations(QStandardPaths::AppDataLocation).first();
Finally, add your app's resource folder as import path (as you already did) and you're done:
engine.addImportPath(appDataPath + "/res_dir_name");
Note: On Mac you can get away more easily by putting resources in the application bundle.

How to make a Setting of list of values depend on the value of a task

I want to set a SettingKey[Seq[Tuple2[String, String]]] called IzPack.variables of a 3rd party plugin called sbt-izpack.
The documentation shows how to set this setting:
IzPack.variables in IzPack.Config <+= name {name => ("projectName", name)}
I think that the <+= syntax is old; there is no explanation of it in the sbt 0.13.5 documentation.
How can I append values that depend on tasks?
This is the syntax I'm using now:
IzPack.variables in IzPack.Config ++= Seq(
  ("appVersion", mySetting1.value),
  ("cocoonXconf", mySetting2.value)
)
but it complains when trying to use a task value with this message:
A setting cannot depend on a task
If it's a Setting, it must be known at project load, since Settings are only computed once. Tasks, on the other hand, are computed each time they run. A Setting cannot depend on a Task, but a Task can depend on a Setting.
See http://www.scala-sbt.org/0.13/tutorial/More-About-Settings.html#Tasks+with+dependencies.
The solution I finally went with was to refactor my own code, so that a task that generates a file is split into a setting that defines the output file and a task that creates the file contents.
The setting is used to initialize IzPack.variables, while my custom task is made a dependency of the task that uses IzPack.variables.
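A minimal sketch of that split (the key names and file are made up for illustration):

// build.sbt -- the output path is a setting, so IzPack.variables can use it at load time
val configOutput = settingKey[File]("Location of the generated config file")
val generateConfig = taskKey[File]("Writes the config file contents")

configOutput := target.value / "app-config.xml"

IzPack.variables in IzPack.Config += ("configFile", configOutput.value.getAbsolutePath)

// the contents are produced at run time by the task; the task that consumes
// IzPack.variables is then made to depend on generateConfig
generateConfig := {
  val out = configOutput.value
  IO.write(out, "<config/>")   // real contents elided
  out
}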

How do I change Closure Compiler compile options not exported to command line?

I found that some options in CompilerOptions are not exported to the command line.
For example, alias all strings is available in the Closure Compiler's Java API (CompilerOptions), but I have no idea how to set it from the command line.
I know I can write a new Java class, like:
Compiler c = new Compiler();
CompilerOptions opt = new CompilerOptions();
opt.setAliasAllStrings(true);
c.compile(.....);
However, I would then have to handle the command-line args myself.
Is there a simpler approach?
============================
In order to try the alias-all-strings option, I wrote a simple command-line application based on compiler.jar.
However, the result I got when turning on alias-all-strings is not what I expected.
For example:
a["prototype"]["say"]=function(){
var a="something string";
}
Given the above code, "something string" will be replaced by a variable, like this:
var xx="something string";
....
var a=xx;
....
This is fine, but what about the string "say"? How does the Closure Compiler know whether it should be aliased (replaced with a variable) or exported (the method kept accessible under that name)?
This is the compiled code now:
a.prototype.say = function() { .... };
It seems that it exports it.
While I want this:
var a="prototype",b="say",c="something string";
xx[a][b]=function(){.....}
In fact, this is the Google Maps-like compilation.
Is this possible?
Not all options are available from the command line; this includes aliasAllStrings. For some of them you have the following options:
Build a custom version of the compiler
Use the Java API (see the sketch after this list)
Use plovr
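A rough sketch of the Java API route, reusing the stock command-line handling by subclassing CommandLineRunner (the class name is made up; check the exact signatures against your compiler.jar version):

import com.google.javascript.jscomp.CommandLineRunner;
import com.google.javascript.jscomp.CompilerOptions;

// Accepts the usual command-line flags, but also flips on options that have no flag.
public class AliasAllStringsRunner extends CommandLineRunner {

  AliasAllStringsRunner(String[] args) {
    super(args);
  }

  @Override
  protected CompilerOptions createOptions() {
    CompilerOptions options = super.createOptions();
    options.setAliasAllStrings(true);   // not exposed as a command-line flag
    return options;
  }

  public static void main(String[] args) {
    AliasAllStringsRunner runner = new AliasAllStringsRunner(args);
    if (runner.shouldRunCompiler()) {
      runner.run();
    } else {
      System.exit(-1);
    }
  }
}

Compile it against compiler.jar and invoke it instead of the stock jar; the normal flags should keep working.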
Getting the same level of compression and obfuscation as the Maps API requires code written specifically for the compiler. When properly written, you'll see property and namespace collapsing, prototype aliasing and a whole host of others. For an example of the style of code that will optimize that way, take a look at the Closure Library.
Modifying http://code.google.com/p/closure-compiler/source/browse/trunk/src/com/google/javascript/jscomp/CompilationLevel.java?r=706 is usually easy enough if you just want to play with something.
Plovr (a Closure build tool) provides an option called experimental-compiler-options, which is documented as follows:
The Closure Compiler contains many options that are only available programmatically in Java. Many of these options are experimental or not finalized, so they may not be a permanent part of the API. Nevertheless, many of them will be useful to you today, so plovr attempts to expose them via the experimental-compiler-options option. Under the hood it uses reflection in Java, so it is fairly hacky, but in practice it is a convenient way to experiment with Closure Compiler options without writing Java code.
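In a plovr config file that option is a JSON map from CompilerOptions names to values. Assuming the reflection lookup resolves aliasAllStrings to the corresponding setter, something like the following should work (unverified, treat it as an assumption and check the plovr docs):

{
  "id": "my-build",
  "inputs": "main.js",
  "experimental-compiler-options": {
    "aliasAllStrings": true
  }
}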
