Custom gradle plugin - buildscript dependencies - gradle-plugin

I am trying to write a custom Gradle plugin (Kotlin / Kotlin DSL) that will allow me to set all sorts of things in the build scripts.
I would like to add dependencies on build tooling, such as detekt, log4j2, ktlint, and even configure them. So for example, including the plugin will add detekt tasks to the build.
I have a basic binary plugin with a task, but I am having difficulty constructing a good web-search to find what I need. Can anyone please:
provide some clues
give me some search terms
point me at an article
or point me at a good example?
Thanks in advance.
Phill
Edit: this plugin class compiles, and seems acceptable to the receiving build script, but no detekt tasks appear in the receiving build.
import org.gradle.api.Plugin
import org.gradle.api.Project

class BuildCommonPlugin : Plugin<Project> {
    override fun apply(project: Project) {
        // Creates a Dependency object, but never adds it to a configuration
        // and never applies the detekt plugin itself.
        project.dependencies
            .create("io.gitlab.arturbosch.detekt:1.19.0")
    }
}
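For reference, here is a minimal sketch of a plugin class that would actually register detekt tasks in the consuming project. It assumes the detekt Gradle plugin (plugin id io.gitlab.arturbosch.detekt, artifact io.gitlab.arturbosch.detekt:detekt-gradle-plugin) is on the plugin's own classpath; the extension property shown follows the usual detekt DSL but is version-dependent, and note that the dependency notation above also appears to be missing the artifact name (group:name:version).

import io.gitlab.arturbosch.detekt.extensions.DetektExtension
import org.gradle.api.Plugin
import org.gradle.api.Project

class BuildCommonPlugin : Plugin<Project> {
    override fun apply(project: Project) {
        // Applying the plugin by id is what registers the detekt tasks in the
        // consuming project; creating a Dependency object alone does not.
        project.pluginManager.apply("io.gitlab.arturbosch.detekt")

        // Optionally pre-configure the extension that the detekt plugin registered.
        project.extensions.configure(DetektExtension::class.java) { detekt ->
            detekt.buildUponDefaultConfig = true
        }
    }
}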
EDIT: As per the comment, I am using a 'buildSrc'-type plugin. The irritating thing is that I need to duplicate this in each Gradle project, and when I find an improvement I need to update them all.
How easy is it to convert a 'buildSrc' plugin into a publishable plugin?
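Converting a buildSrc plugin into a publishable one is mostly a packaging exercise: move the sources into a standalone project, add plugin metadata, and publish to a repository your other builds can reach. A rough sketch of such a project's build.gradle.kts follows; the group, plugin id, and repository URL are hypothetical placeholders.

plugins {
    `kotlin-dsl`              // Kotlin compilation plus Gradle API wiring for plugin code
    `java-gradle-plugin`      // adds the gradlePlugin {} metadata block
    `maven-publish`           // enables publishing to a Maven repository
}

group = "com.example.build"   // hypothetical coordinates
version = "0.1.0"

repositories {
    mavenCentral()
    gradlePluginPortal()
}

dependencies {
    // The detekt Gradle plugin has to be on this project's classpath so that
    // BuildCommonPlugin can apply it by id at runtime.
    implementation("io.gitlab.arturbosch.detekt:detekt-gradle-plugin:1.19.0")
}

gradlePlugin {
    plugins {
        create("buildCommon") {
            id = "com.example.build-common"          // hypothetical plugin id
            implementationClass = "BuildCommonPlugin"
        }
    }
}

publishing {
    repositories {
        maven {
            url = uri("https://repo.example.com/maven")  // hypothetical internal repo
        }
    }
}

Consuming builds could then replace the duplicated buildSrc code with a single plugins { id("com.example.build-common") version "0.1.0" } entry, provided the publishing repository is declared in pluginManagement in settings.gradle.kts.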

Related

I want to generate boilerplate code in my repository pattern project

As the title suggests, I am creating an open-source project in .NET Core 2.0; here is the architecture of it.
Now, it is working fine with everything, including code-first, seeders, Swagger UI, TDD, etc.
But there are many places where I have to add/modify classes when I want to add a new table to the database (see SimpleCRUD.Model > Entities).
So I think I can reduce that boilerplate code, but I am not sure what the best way to do it is.
What have I done so far?
I tried to create a Windows app which checks for newly added entities and generates code for them.
What am I trying to achieve?
Is there any way I can add some kind of code to my current project that will run this check after each build? Is it feasible? Any other suggestions to make it work well?
Reference
I have seen this working in a few other frameworks like Serenity, ASP.NET Boilerplate, etc.
T4 templates can help cut down the boilerplate...
https://dotnetthoughts.net/generate-your-database-entities-using-t4-templates/
You've asked for my help here
I agree with other posters that you might want to look into T4. It sounds like you also want to create an MSBuild task.
I outlined the steps to do this for a different question in a post here.
You can find my code generators under this folder, CodeGen.SessionProxies
The t4 example can be found here: AppSessionPartials.tt
The MSBuild task can be found here: GenerateSessionProxies.cs
I had it generating a NuGet package through the CodeGen.SessionProxies.nuspec. You won't find it on nuget.com; I had a local NuGet repository. It would be helpful for you to look at the corresponding install.ps1 to understand how to set the generator up as an MSBuild task.
Disclaimer: All of the GitHub links are subject to break if I ever decide to clean up that repo.
Cheers

Prevent Atom Editor from auto creating files

At the moment I am experimenting a little with Atom for writing API documentation in RAML. Everything works fine except for one damn thing:
Every time I type a file path (e.g. !include schemas/file.schema), Atom auto-creates the file when I'm not quick enough with typing. So in some cases I have a whole bunch of file zombies in my schema folder. That's kind of annoying.
My setup is standard Atom on a MacBook, with the api-workbench package, which includes a linter as well. I have already had a look at all the settings concerning auto-completion and found nothing there. Google doesn't show any hints either. Any tips?
Best regards,
Chris
It looks like this is a defect in the api-workbench package:
API Workbench creates new schemas while I type their paths. In the example below, I can see two or three files created while I type the full name:
E.g:
schemas:
- myschema: !include schemas/myschema.json
Will create following files:
schemas/my
schemas/mysche
schemas/myschema
schemas/myschemas.json - this file already exists; I created it before. All the other files are redundant and I have to delete them.
The bug is not reproduced with examples, which I can also include in my document. I am having this issue while editing RAML 0.8 files.
If you want to help the package maintainers fix the defect, can I suggest you put together a minimal but complete example that reproduces it? This will make it easier for them to identify and resolve the issue.

Why isn't my wrapped Meteor package working?

I am seeking a little help with wrapping a package for Meteor. It has always been my weakness with this framework. I know it is not difficult; I have read tutorials and some articles like:
https://www.discovermeteor.com/blog/wrapping-npm-packages/
http://www.meteorpedia.com/read/Packaging_existing_Libraries
However, I get lost in exports and such, and it is time to understand!
I tried to wrap this package:
https://github.com/fians/Waves
(I knew that one day I would not find the package already made by someone on Atmosphere :()
So I wanted to do things right, following the guidelines made by dandv. I forked the repo and added the Meteor package files export.js and package.js, following the example of moment, as you can see here:
https://github.com/Voyag3r/Waves
Finally, in my app, I created the local package folder with the Meteor command meteor create --package voyag3r:waves. I tried to call the Waves variable, but it is not defined. I tried with and without a capital letter (as in the source file waves.js). I also tried this.Waves instead of just Waves in export.js, but that did not work either.
There is something I do not understand about namespaces and visibility, I think, and the errors are not displaying useful information this time. Can someone explain it to me? I would like to make a lot of other packages!
Thanks !
Glad to see that you read through those documents above, as they're quite helpful. However, I understand that it can be confusing to work through the details. Hopefully, I can assist you.
I recently finished packaging up a couple of libraries for Meteor, so you should take a look at those repositories as examples:
jspdf:core
jspdf:autotable
More specifically, take a look at the jspdf:core repository above and inspect the meteor-pre.js and meteor-post.js files for how to handle exporting variables:
meteor-pre.js
var window = {};
meteor-post.js
jsPDF = window.jsPDF;
Other important files include package.js and package.json, and of course autopublish.json, for integrating version updates with http://autopublish.meteor.com/, a fantastic tool written by Luca Mussi (@splendido).
Additionally, I would recommend that you review the Official Meteor integration directly from 3rd party libraries discussion and ask @splendido or @dandv for assistance with reserving the namespace for this library.
This process has gone through rapid change over the last few months, and although not perfect, it's improving steadily. I'm encouraged to see that, like me, you want to assist the Meteor ecosystem.

Conditional addSbtPlugin based on scalaVersion

I'm using a plugin (sbt-scapegoat) which only works for Scala 2.11.
Can I have a conditional addSbtPlugin based on scalaVersion? Like:
if (scalaVersion.value.startsWith("2.11")) addSbtPlugin("com.sksamuel.scapegoat" %% "sbt-scapegoat" % "0.94.6")
How can I do this in SBT?
Jianshi
tl;dr It's not possible given the description of the problem.
There are at least two builds involved in an sbt project: the proper build (the one you want to bet your money on) and the meta-build that builds your build. Yes, I know it sounds a little weird, but it's a very powerful concept IMHO.
See sbt is recursive:
The project directory is another build inside your build, which knows how to build your build. To distinguish the builds, we sometimes use the term proper build to refer to your build, and meta-build to refer to the build in project. The projects inside the metabuild can do anything any other project can do. Your build definition is an sbt project.
sbt runs atop Scala and requires a specific version of it. There is no way to change that unless you fancy spending time on things you should really not be touching in the first place :)
What you can do is add the plugin in project/plugins.sbt and then, in the proper build, apply the plugin's settings selectively, based on the scalaVersion of the project's build, not the meta-build's.
It's not as complicated as the answer reads, but explaining simple concepts is usually not an easy task for me. Have fun with sbt! It's going to pay you back very soon when used properly.
Updated answer for 2020: You can use .filter on addSbtPlugin.
For example, the following works:
val scalafixEnabled = System.getProperty("SCALAFIX", "").trim.nonEmpty
addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.9.14").filter(_ => scalafixEnabled)

Frama-C: access to the cil/src/ext modules data and few others questions as well

First of all, I will explain what I would like to do here: given a big C program, I would like to output a list of producers/consumers for a piece of data, and a list of calling/called-by functions for the function where this data is used.
To do this, I am thinking about reusing, in my own plugin, what some Frama-C modules compute, like dataflow.ml or callgraph.ml.
However, as I read the plugin developer documentation, I cannot manage to see how to get access to the data of those modules.
Is an "open.cyl_type" sufficient here in my own plugin?
Moreover, here are my other questions:
I tried using the pdg plugin for my purposes, but when I call it and it says "pdg graph computed", how can I access the result?
Is there anything more documented about the "impact" plugin than the official webpage, in depth, about how it works fundamentally? (I have to say that I am in a pre-project phase, that I installed Frama-C with apt-get on Ubuntu, and that I did not get the impact plugin working; I will see about compiling from the sources.)
By the way, do you think I am using the right method to achieve my goals?
Your question is quite unclear, and this answer is thus very generic. As mentioned in the developer documentation, there are two main classes of plugins: static plugins, which are compiled with the kernel and whose API is exposed in a module (usually with the same name as the plugin) in Db, and dynamic plugins, such as Semantic_callgraph, which register their entry points dynamically through the Dynamic module.
If you do make doc in Frama-C sources (I'm not sure that there is a corresponding package in Ubuntu) you can access documentation for the Db module in FRAMAC_SOURCE_DIR/doc/code/html/Db.html and the list of functions registered by dynamic plugins in FRAMAC_SOURCE_DIR/doc/code/dynamic_plugins/Dynamic_plugins.html.
I think that, following Virgile's advice, you should get the source code anyway, because you will most of the time need to browse the code to find what you are looking for. Besides, you can have a look at the hello_world plug-in (in src/dummy/hello_world) to see an example of a very simple plug-in. You can also find some examples on my web site at https://anne.pacalet.fr/Notes/doku.php?id=notes:0061_frama_c_scripts to find out how to get access to some information in the AST.
