Yocto - Use a variable in a patch

Can one use a Yocto variable, like, say, ${MACHINE}, in a patch file?
This is quite a general question, probably somebody can provide a quick example or just say "impossible".
Thanks.

It's not impossible, but it would be the wrong approach. Yocto variables are best kept to the Yocto build metadata alone; that way, your packages are not tied to the Yocto build environment.

You can achieve what you want by adding a post-patch task that replaces a placeholder, say REPLACEME, with ${MACHINE} using sed or similar.
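For example, a minimal sketch (the file path src/config.h is an assumption for illustration; on older Yocto releases the append is spelled do_patch_append):

do_patch:append() {
    # Substitute the placeholder left by the patch with the actual machine name.
    # ${S} is the unpacked source directory.
    sed -i -e "s/REPLACEME/${MACHINE}/g" ${S}/src/config.h
}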

Related

Control over OpenAPI 3.0 package generation for jersey-jaxrs

I'm using openapi-generator for jersey-jaxrs (OpenAPI 3.0). I'd like to control the package where my code is being generated.
I'm setting the api-package, model-package, package-name, and invoker-package options, all to an xxx.yyy.zzz value.
My problem is that most of the code is generated under gen.xxx.yyy.zzz, and it's not discoverable by the part of the code generated under xxx.yyy.zzz. Implicitly, gen is prepended to the package name. I understand this is convenient in many cases, but not mine. Is there any generator option to avoid this?
I've learned a bit about the Mustache templates and they seem like a possible solution, but maybe a bit too much for my requirements.
Ultimately, I can move the code in gen to the other (non-gen) package manually, and it works, but this is quite inconvenient.
Finally, I found out that you can mark folders in IntelliJ IDEA as "generated sources root", which makes it discoverable to the rest of the project's code.
This doesn't solve my question, but it does solve the problem that originated the question.
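For reference, a sketch of the invocation being discussed (the spec file name is a placeholder; whether sourceFolder is honored depends on the generator, so check openapi-generator config-help -g jaxrs-jersey):

# Hypothetical CLI invocation; openapi.yaml is a placeholder spec
openapi-generator generate \
  -i openapi.yaml \
  -g jaxrs-jersey \
  --api-package xxx.yyy.zzz.api \
  --model-package xxx.yyy.zzz.model \
  --invoker-package xxx.yyy.zzz \
  --additional-properties sourceFolder=src/main/java

The gen segment typically comes from the generator's source-folder layout (src/gen/java) rather than from the package options, which is why overriding sourceFolder may help, and why marking the folder as a generated-sources root in the IDE also works.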

How to import external Java libraries in OpenTest framework?

I would like to find out how I can import external libraries into my tests. For example, if I use a Java library for random name/number generation, how do I go about using it in my tests?
Thanks
Before I answer, I would advise you to avoid using Java code if you can. For example, a random name/number generator is very easy to implement in JavaScript, and you can find plenty of ready-made examples out there. If it's JS code, you can easily embed it in your tests using one of the techniques described here. Even better, you should use the capabilities that are provided out-of-the-box with OpenTest: $random and $randomString.
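To illustrate that first point, a random-name helper is only a few lines of plain JavaScript (this is generic JS, not an OpenTest-specific API):

// Generic helper: build a random lowercase name of the given length
function randomName(length) {
    var chars = "abcdefghijklmnopqrstuvwxyz";
    var name = "";
    for (var i = 0; i < length; i++) {
        name += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return name;
}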
If you really need to use Java code, there are two ways to do it:
The recommended way: create one or more custom OpenTest keywords as described here. This will make it easier for you to maintain your test suite in the future and it also makes it easier for other members of your team to leverage this work in their own tests, especially if they are not familiar with Java.
The "quick and dirty" way: create a user-jars directory in your test actor's working directory and drop the JAR file in there. Then, call your Java code from JavaScript as described here.

Conditional addSbtPlugin based on scalaVersion

I'm using a plugin (sbt-scapegoat) which only works for Scala 2.11.
Can I have a conditional addSbtPlugin based on scalaVersion? Like:
if (scalaVersion.value.startsWith("2.11")) addSbtPlugin("com.sksamuel.scapegoat" %% "sbt-scapegoat" % "0.94.6")
How can I do this in SBT?
Jianshi
tl;dr It's not possible given the description of the problem.
There are at least two build configurations involved in an sbt project: the build proper (the project you actually care about) and the meta-build that builds your build. Yes, I know it sounds a little weird, but it's a very powerful concept IMHO.
See sbt is recursive:
The project directory is another build inside your build, which knows how to build your build. To distinguish the builds, we sometimes use the term proper build to refer to your build, and meta-build to refer to the build in project. The projects inside the metabuild can do anything any other project can do. Your build definition is an sbt project.
sbt runs atop Scala and requires a fixed version of it. There is no way to change that unless you fancy spending time on things you should really not be touching in the first place :)
What you can do is load the plugin in project/plugins.sbt and then, in the project itself, apply the plugin's settings selectively, based on the scalaVersion of the project's build, not the meta-build's.
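The pattern looks roughly like this (a sketch: the plugin is loaded unconditionally in project/plugins.sbt, and a gated setting in build.sbt stands in for the plugin's own settings):

// project/plugins.sbt -- always load the plugin
addSbtPlugin("com.sksamuel.scapegoat" % "sbt-scapegoat" % "0.94.6")

// build.sbt -- apply settings only when the project targets 2.11
scalacOptions ++= {
  if (scalaVersion.value.startsWith("2.11")) Seq("-Xlint")
  else Seq.empty
}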
It's not as complicated as this answer makes it sound, but explaining simple concepts is usually not an easy task for me. Have fun with sbt! It's going to pay you back very soon when used properly.
Updated answer for 2020: You can use .filter on addSbtPlugin.
For example, the following works:
val scalafixEnabled = System.getProperty("SCALAFIX", "").trim.nonEmpty
addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.9.14").filter(_ => scalafixEnabled)

Using two yeoman generators?

Is it possible to use two generators on one project with yeoman?
For example: I want to use the angular-generator but also want to use another generator, whether it be custom or one of the bootstrap generators.
I know you can add dependencies through Bower, but that doesn't add anything to my workflow (e.g. compiling Less), does it?
Yes, it is not only possible but common. For example, when you use a JS-MV* generator in a project (generator-angular, for instance), you will probably also use generators responsible for other things, such as generator-travis-ci or generator-heroku.
Using two generators dedicated to two different JS-MV* frameworks? No, that makes no sense.
You can do it physically; for instance, running generator-ember and then generator-angular in the same directory will result in the Angular generator trying to overwrite files previously generated by generator-ember.
As for the second question: changing the workflow basically means changing the Gruntfile. That can be done by generators, or by you manually.
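For instance, a minimal Gruntfile fragment adding Less compilation might look like this (grunt-contrib-less is assumed to be installed via npm; paths are placeholders):

// Gruntfile.js -- hypothetical Less compilation task
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.initConfig({
    less: {
      dist: {
        files: { 'app/styles/main.css': 'app/styles/main.less' }
      }
    }
  });
  grunt.registerTask('styles', ['less']);
};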

Closure: --namespace Foo does not include Foo.Bar, and related issues

I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the closure compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the closure compiler directly, because my code also requires things like goog.asserts, so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, later the closure compiler, with ADVANCED_OPTIMIZATIONS on, will optimize all these names away. Now I can fix that by adding @export annotations all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated calcdeps.py script; I still use it for some projects.
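A sketch of such an invocation (paths and namespaces are placeholders; the flags are the stock closurebuilder.py ones):

python closurebuilder.py \
  --root=closure-library/ \
  --root=foolibrary-src/ \
  --namespace="FooLibrary.Qux.Rumps" \
  --namespace="FooLibrary.Qux.Scrooge" \
  --output_mode=compiled \
  --compiler_jar=compiler.jar \
  > foolibrary.min.js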
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead-code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
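A minimal sketch of such an exports file, using the question's names (the frobnicate method is hypothetical):

// exports.js -- compiled in only for the standalone distribution build
goog.require('FooLibrary.Bar');

goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
goog.exportProperty(FooLibrary.Bar.prototype, 'frobnicate',
    FooLibrary.Bar.prototype.frobnicate);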
My GeolocationMarker library has an example of this strategy.
