Does sbt have something like Gradle's processResources task with ReplaceTokens support?

We are moving to Scala/SBT from a Java/Gradle stack. Our Gradle builds were leveraging a task called processResources and an Ant filter named ReplaceTokens to dynamically replace tokens in a checked-in .properties file without actually changing the .properties file itself (only the output copy). The Gradle task looks like:
processResources {
    def whoami = System.getProperty('user.name')
    def hostname = InetAddress.getLocalHost().getHostName()
    def buildTimestamp = new Date().format('yyyy-MM-dd HH:mm:ss z')
    filter ReplaceTokens, tokens: [
        "buildsig.version"    : project.version,
        "buildsig.classifier" : project.classifier,
        "buildsig.timestamp"  : buildTimestamp,
        "buildsig.user"       : whoami,
        "buildsig.system"     : hostname,
        "buildsig.tag"        : buildTag
    ]
}
This task locates all the template files in the src/main/resources directory, performs the requisite substitutions, and outputs the results at build/resources/main. In other words, it transforms src/main/resources/buildsig.properties from...
buildsig.version=#buildsig.version#
buildsig.classifier=#buildsig.classifier#
buildsig.timestamp=#buildsig.timestamp#
buildsig.user=#buildsig.user#
buildsig.system=#buildsig.system#
buildsig.tag=#buildsig.tag#
...to build/resources/main/buildsig.properties...
buildsig.version=1.6.5
buildsig.classifier=RELEASE
buildsig.timestamp=2013-05-06 09:46:52 PDT
buildsig.user=jenkins
buildsig.system=bobk-mbp.local
buildsig.tag=dev
Which, ultimately, finds its way into the WAR file at WEB-INF/classes/buildsig.properties. This works like a champ to record build specific information in a Properties file which gets loaded from the classpath at runtime.
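For reference, the file is consumed at runtime along these lines (a minimal Scala sketch; the object name is illustrative, not part of the original question):
object BuildSig {
  // Load buildsig.properties from the classpath; keys match the file above.
  val props = new java.util.Properties()
  private val in = getClass.getResourceAsStream("/buildsig.properties")
  try props.load(in) finally in.close()
  def version: String = props.getProperty("buildsig.version")
}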
What do I do in SBT to get something like this done? I'm new to Scala / SBT so please forgive me if this seems a stupid question. At the end of the day what I need is a means of pulling some information from the environment on which I build and placing that information into a properties file that is classpath loadable at runtime. Any insights you can give to help me get this done are greatly appreciated.

The sbt-buildinfo plugin is a good option. The README shows an example of how to define custom mappings and mappings that should run on each compile. In addition to the straightforward addition of normal settings like version shown there, you will want a section like this:
buildInfoKeys ++= Seq[BuildInfoKey](
  "hostname" -> java.net.InetAddress.getLocalHost().getHostName(),
  "whoami" -> System.getProperty("user.name"),
  BuildInfoKey.action("buildTimestamp") {
    java.text.DateFormat.getDateTimeInstance.format(new java.util.Date())
  }
)
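For context, the surrounding wiring in build.sbt might look like the following with a recent sbt and sbt-buildinfo (older plugin versions used buildInfoSettings instead); the package name "buildsig" and the exact key selection here are illustrative assumptions, not part of the original answer:
// Sketch: enable sbt-buildinfo and generate an object named buildsig.BuildInfo.
enablePlugins(BuildInfoPlugin)

buildInfoPackage := "buildsig" // hypothetical package for the generated object

buildInfoKeys := Seq[BuildInfoKey](
  name, version,
  "whoami" -> System.getProperty("user.name"),
  "hostname" -> java.net.InetAddress.getLocalHost().getHostName()
)
Note that sbt-buildinfo generates a Scala object (here buildsig.BuildInfo) rather than a .properties file, so runtime code reads fields instead of loading a resource.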

Would the following be what you're looking for?
sbt-editsource: An SBT plugin for editing files
sbt-editsource is a text substitution plugin for SBT 0.11.x and
greater. In a way, it’s a poor man’s sed(1), for SBT. It provides the
ability to apply line-by-line substitutions to a source text file,
producing an edited output file. It supports two kinds of edits:
- Variable substitution, where ${var} is replaced by a value.
- sed-like regular expression substitution.
This is from Community Plugins.
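If what you ultimately need is a .properties file on the runtime classpath, plain sbt can also generate one without any plugin. A minimal sketch (sbt 0.13 syntax; the key names simply mirror the question's buildsig.properties, with the classifier and tag entries omitted):
// Sketch: generate buildsig.properties into managed resources so it ends up
// on the classpath (and in WEB-INF/classes for a WAR).
resourceGenerators in Compile += Def.task {
  val file = (resourceManaged in Compile).value / "buildsig.properties"
  val timestamp = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss z").format(new java.util.Date())
  val contents =
    s"""buildsig.version=${version.value}
       |buildsig.timestamp=$timestamp
       |buildsig.user=${System.getProperty("user.name")}
       |buildsig.system=${java.net.InetAddress.getLocalHost().getHostName()}
       |""".stripMargin
  IO.write(file, contents)
  Seq(file)
}.taskValue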

Related

nginx module: compiles but fails to link a new library

I have modified an nginx module so that it depends on a library.... let's call the library that I need libx.
I have modified auto/os/linux so that I can detect whether libx is present, by adding something like:
+ngx_feature="libx"
+ngx_feature_name="NGX_HAVE_LIBX"
+ngx_feature_run=no
+ngx_feature_incs="#include <libx.h>"
+ngx_feature_path=
+ngx_feature_libs=-lx
+ngx_feature_test="libx_init();"
+. auto/feature
Then, in the module code I do #ifs checking for NGX_HAVE_LIBX... something like:
#if (NGX_HAVE_LIBX)
libx_init();
#endif
And it works like a charm.... when I run auto/configure I get that the library is detected with something like:
checking for libx... found
and it compiles, BUT at link time -lx is not included in the flags sent to cc/ld when building the final objs/nginx binary. I would have expected that, after the feature is detected in auto/os/linux, it would automagically be added to the linking phase when creating the Makefile... but apparently that's not the case, so I know I am missing something. What additional step do I need to do to pull it off?
This is on nginx 1.19.2 (well, master branch from nginx mirror).
I think I got it.
I needed to add something like this:
+ngx_feature="libx"
+ngx_feature_name="NGX_HAVE_LIBX"
+ngx_feature_run=no
+ngx_feature_incs="#include <libx.h>"
+ngx_feature_path=
+ngx_feature_libs=-lx
+ngx_feature_test="libx_init();"
+. auto/feature
if [ $ngx_found = yes ]; then
    CORE_LIBS="$CORE_LIBS -lx"
    NGX_LIBDL="-lx"
fi

Fail a grunt build when debug prints exist in source

I am working on a PHP/Javascript project where I've nicely set up a build workflow. It involves testing, minifying, compressing into the final zip deliverable, and a whole lot of other nice stuff.
I want to build a task that fails when certain patterns exist in the source code. I would like to look for any print_r(), error_log(), var_dump(), etc. calls, and halt the build process if there are any. Perhaps later I would like to check for things in JavaScript or CSS, so this is not only a PHP question.
I know it can be done with grunt-shell and grep but I'd like to know the following:
Are there any grunt plugins specific to this task? Ideally I would like to be able to specify a list of regexes per file type, and to set whether to continue or fail the build on pattern match.
How do others tackle the problem of double-checking the packaged source for the most common debug statements or other patterns?
Not a complete answer to my question, but I've recently come across this grunt plugin which is somewhat related. It removes console.log statements from JavaScript. Haven't tried it yet. Looks good. I still would like to know if there's something similar for PHP though.
http://grunt-tasks.com/grunt-remove-logging-calls/
Edit: Seeing as there are only tumbleweeds rolling in the wind here, I'm posting my workaround, which is based on grunt-shell. However, this is not what I was looking for, and it's not perfect because it doesn't do proper syntax parsing:
shell: {
    check_debug_prints: {
        command: '(! (egrep -r "var_dump|print_r|error_log" --include=*.php src || egrep -r "console\\.\\w+|debugger;" --include=*.js src)) || (echo "Debug prints in source - build aborted" && false)'
    }
},
and
grunt.loadNpmTasks('grunt-shell');
Edit 2: I finally found the exact grunt plugin I was looking for: grunt-search. There is a failOnMatch boolean option that lets you indicate whether a particular regex pattern should cause the build to fail when found.

Ada File Names vs. Package Names

I inherited an Ada program where the source file names and package names don't follow the default naming convention. Ada is new to me, so I may be missing something simple, but I can't see it in the GNAT Pro User's Guide. (This similar question didn't help me.)
Here are a couple of examples:
File Name: C_Comm_Config_S.Ada
Package Name: Comm_Configuration
File Name: D_Bus_Buffers_S.Ada
Package Name: Bus_Buffers
I think I have the _S.Ada and _B.Ada sorted out, but I can't find anything in the program source or build files that shows the binding between the package name and the rest of the file name.
When I compile a file that doesn't use any other packages, I get a warning: file name does not match unit name... This appears to come from the prefix of C_ or D_, in this particular case.
Also, I'm not clear on whether the prefixes C_ and D_ have any special meaning in the context of Ada, but if they do, I'd like to know about it.
So I appear to have two issues: the prefix of C_ or D_, and, in some cases, the rest of the file name not matching the package name.
You could use gnatname: see the User’s Guide.
I copied subdirectories a/ and d/ from the ACATS test suite to a working directory and created a project file p.gpr:
project p is
   for source_dirs use ("a", "d");
end p;
and ran gnatname with
gnatname -P p -d a -d d \*.ada
which resulted in an edited p.gpr and two new files, p_naming.gpr and p_source_list.txt. These are rather long, but look like
p.gpr:
with "p_naming.gpr";
project P is
for Source_List_File use "p_source_list.txt";
for Source_Dirs use ("a", "d");
package Naming renames P_Naming.Naming;
end P;
p_naming.gpr:
project P_Naming is
   for Source_Files use ();
   package Naming is
      for Body ("d4a004b") use "d4a004b.ada";
      for Body ("d4a004a") use "d4a004a.ada";
      for Body ("d4a002b") use "d4a002b.ada";
      ...
      for Body ("aa2010a_parent.boolean") use "aa2010a.ada" at 4;
      for Body ("aa2010a_parent") use "aa2010a.ada" at 3;
      for Spec ("aa2010a_parent") use "aa2010a.ada" at 2;
      for Spec ("aa2010a_typedef") use "aa2010a.ada" at 1;
      ...
      for Body ("a22006d") use "a22006d.ada";
      for Body ("a22006c") use "a22006c.ada";
      for Body ("a22006b") use "a22006b.ada";
   end Naming;
end P_Naming;
The for Body ("aa2010a_parent") use "aa2010a.ada" at 3; is used when there’s more than one unit in the source file.
p_source_list.txt:
a22006b.ada
a22006c.ada
a22006d.ada
a27003a.ada
a29003a.ada
...
d4a002b.ada
d4a004a.ada
d4a004b.ada
When building, for example, test d4a004b, you have to use the file name and suffix:
gnatmake -P p d4a004b.ada
The Ada standard does not say anything about source file naming conventions. As it appears that you use GNAT, I assume that you mean the "GNAT default naming convention".
You can tell GNAT about alternatively named files in a Naming package inside your project files.
A simple example:
project OpenID is
   ...
   package Naming is
      for Implementation ("Util.Log.Loggers.Traceback")
         use "util-log-loggers-traceback-gnat.adb";
      for Implementation ("Util.Serialize.IO.XML.Get_Location")
         use "util-serialize-io-xml-get_location-xmlada-4.adb";
   end Naming;
end OpenID;

Can I use sbt's `apiMappings` setting for managed dependencies?

I'd like the ScalaDoc I generate with sbt to link to external libraries, and in sbt 0.13 we have autoAPIMappings which is supposed to add these links for libraries that declare their apiURL. In practice though, none of the libraries I use provide this in their pom/ivy metadata, and I suspect some of these libraries will never do so.
The apiMappings setting is supposed to help with just that, but it is typed as Map[File, URL] and hence geared towards setting doc urls for unmanaged dependencies. Managed dependencies are declared as instances of sbt.ModuleID and cannot be inserted directly in that map.
Can I somehow populate the apiMappings setting with something that will associate a URL with a managed dependency?
A related question: does sbt provide an idiomatic way of getting a File from a ModuleID? I guess I could try to evaluate some classpaths and map the resulting Files back to ModuleIDs, but I hope there is something simpler.
Note: this is related to https://stackoverflow.com/questions/18747265/sbt-scaladoc-configuration-for-the-standard-library/18747266, but that question differs by linking to the scaladoc for the standard library, for which there is a well known File scalaInstance.value.libraryJar, which is not the case in this instance.
I managed to get this working for referencing scalaz and play by doing the following:
apiMappings ++= {
  val cp: Seq[Attributed[File]] = (fullClasspath in Compile).value
  def findManagedDependency(organization: String, name: String): File = {
    (for {
      entry <- cp
      module <- entry.get(moduleID.key)
      if module.organization == organization
      if module.name.startsWith(name)
      jarFile = entry.data
    } yield jarFile).head
  }
  Map(
    findManagedDependency("org.scalaz", "scalaz-core") ->
      url("https://scalazproject.ci.cloudbees.com/job/nightly_2.10/ws/target/scala-2.10/unidoc/"),
    findManagedDependency("com.typesafe.play", "play-json") ->
      url("http://www.playframework.com/documentation/2.2.1/api/scala/")
  )
}
YMMV of course.
The accepted answer is good, but it'll fail when assumptions about exact project dependencies don't hold. Here's a variation that might prove useful:
apiMappings ++= {
  def mappingsFor(organization: String, names: List[String], location: String,
                  revision: (String) => String = identity): Seq[(File, URL)] =
    for {
      entry: Attributed[File] <- (fullClasspath in Compile).value
      module: ModuleID <- entry.get(moduleID.key)
      if module.organization == organization
      if names.exists(module.name.startsWith)
    } yield entry.data -> url(location.format(revision(module.revision)))

  val mappings: Seq[(File, URL)] =
    mappingsFor("org.scala-lang", List("scala-library"), "http://scala-lang.org/api/%s/") ++
    mappingsFor("com.typesafe.akka", List("akka-actor"), "http://doc.akka.io/api/akka/%s/") ++
    mappingsFor("com.typesafe.play", List("play-iteratees", "play-json"),
                "http://playframework.com/documentation/%s/api/scala/index.html",
                _.replaceAll("[\\d]$", "x"))

  mappings.toMap
}
(Including scala-library here is redundant, but useful for illustration purposes.)
If you perform mappings foreach println, you'll get output like (note that I don't have Akka in my dependencies):
(/Users/michaelahlers/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.7.jar,http://scala-lang.org/api/2.11.7/)
(/Users/michaelahlers/.ivy2/cache/com.typesafe.play/play-iteratees_2.11/jars/play-iteratees_2.11-2.4.6.jar,http://playframework.com/documentation/2.4.x/api/scala/)
(/Users/michaelahlers/.ivy2/cache/com.typesafe.play/play-json_2.11/jars/play-json_2.11-2.4.6.jar,http://playframework.com/documentation/2.4.x/api/scala/)
This approach:
- Allows for none or many matches to the module identifier.
- Concisely supports multiple modules that link to the same documentation.
  - Or, with Nil provided for names, all modules for an organization.
- Defers to the module as the version authority.
  - But lets you map over versions as needed, as in the case of Play's libraries, where x is used for the patch number.
Those improvements allow you to create a separate SBT file (call it scaladocMappings.sbt) that can be maintained in a single location and easily copied and pasted into any project.
As an alternative to my last suggestion, the sbt-api-mappings plugin by ThoughtWorks shows a lot of promise. Long term, that's a far more sustainable route than each project maintaining its own set of mappings.
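For completeness, wiring that plugin in is a one-liner in project/plugins.sbt. The coordinates below are from memory and the version selector is a placeholder, so verify both against the plugin's README:
// Assumed coordinates for ThoughtWorks' sbt-api-mappings; check the README for the current release.
addSbtPlugin("com.thoughtworks.sbt-api-mappings" % "sbt-api-mappings" % "latest.release")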

How to use a template in vim

This is really a newbie question, but basically: how do I enable a template for certain file types?
Basically, I just want the template to insert a header of sorts, with some functions that I find useful, libraries loaded, etc.
I interpret
:help template
as saying that I should place this in my vimrc:
au BufNewFile,BufRead ~/.vim/skeleton.R
Running an R script then shows that something could happen, but apparently it does not:
--- Auto-Commands ---
This may be because a template consists of commands (and there are none in skeleton.R); in this case I just want it to insert a text header (which is what skeleton.R consists of).
Sorry if this question is mind-bogglingly stupid ;-/
The command that you've suggested is not going to work: what this will do is run no Vim command whenever you open ~/.vim/skeleton.R
A crude way of achieving what you want would be to use:
:au BufNewFile *.R r ~/.vim/skeleton.R
This will read (:r) your file whenever a new *.R file is created. You want to avoid having BufRead in the autocmd, or it will read the skeleton file into your working file every time you open the file!
There are many plugins that add a lot more control to this process. Being the author and therefore completely biased, I'd recommend this one, but there are plenty of others listed here.
Shameless plug:
They all work in a relatively similar way, but to explain my script:
You install the plugin as described on the linked page and then create some templates in ~/.vim/templates. These templates should have the same extension as the 'target' file, so if it's a template for .R files, call it something like skeleton.R. In your .vimrc, add something like this:
let g:file_template_default = {}
let g:file_template_default['R'] = 'skeleton'
Then create your new .R file (with a filename, so save it if it's new) and enter:
:LoadFileTemplate
You can also skip the .vimrc editing and just do:
:LoadFileTemplate skeleton
See the website for more details.
Assuming that your skeletons are in your ~/.vim/templates/ directory, you can put this snippet in your vimrc file:
augroup templates
    au!
    " read in template files
    autocmd BufNewFile *.* silent! execute '0r ~/.vim/templates/skeleton.'.expand("<afile>:e")
augroup END
Some explanation:
BufNewFile *.* = each time we edit a new file
silent! execute = execute silently, with no error messages if it fails
0r = read the file and insert its content at the top (line 0) of the new file
expand("<afile>:e") = get the extension of the current file name
See also http://vim.wikia.com/wiki/Use_eval_to_create_dynamic_templates
Create a templates subdirectory in your ~/.vim folder
$ mkdir -p ~/.vim/templates
Create a new file in that subdirectory called R.skeleton and put in the header and/or other stuff you want to automagically load upon creating a new ".R" file.
$ vim ~/.vim/templates/R.skeleton
Then, add the following to your ~/.vimrc file, which may have been suggested in a way by "guest"
autocmd BufNewFile * silent! 0r $HOME/.vim/templates/%:e.skeleton
Have a look at my github repository for some more details and other options.
It's just a trick I used to use.
It's cheap, but if you don't know anything about vim and its commands, it's easy to handle.
Create a template file like this:
~/.vim/templates/barney.cpp
where, as you know, barney.cpp holds your template code.
Then add a function like ForBarneyStinson() to the end of your .vimrc file, located at ~/.vimrc.
It should look like:
function ForBarneyStinson()
    :read ~/.vim/templates/barney.cpp
endfunction
Then just use this command in vim:
:call ForBarneyStinson()
and you will see your template.
As an example, I already have two templates for .cpp files:
:call ForBarney()
:call ACM()
Sorry, said too much.
Coding's awesome! :)
Also take a look at https://github.com/aperezdc/vim-template.git.
I use it, have contributed some patches to it, and would argue it's relatively full featured.
What about using the snipmate plugin? See here
There exist many template-file expanders -- you'll also find explanations there on how to implement a rudimentary template-file expander.
For my part, I'm maintaining a fork of muTemplate. For a simple start, just drop a {ft}.template file into {rtp}/template/. If you want to use any (viml) variable or expression, just do. You can even put vim code (and now even functions) into the template file if you wish. Several smart decisions are already implemented for C++ and vim files.
