SBT - watch different sources depending on Task

In a cross-built Scala.js server/client project, I want changes to some sources to restart the server, and changes to other sources to trigger the packaging process without a restart. Different tasks will not help, because they will simply do one or the other, and I want both at the same time.
In more detail:
I've got a Scala.js crossProject. I'm using the following to ensure the server can serve the built JavaScript:
val app = crossProject.settings(...)
lazy val appJS = app.js
lazy val jsFile = fastOptJS in (appJS, Compile)
lazy val appJVM = app.jvm.settings(
  (resources in Compile) += jsFile.value.data,
  (resources in Compile) += jsFile.value.data.toPath.resolveSibling(jsFile.value.data.name + ".map").toFile,
  (resources in Compile) += (packageJSDependencies in (appJS, Compile)).value
)
If I run ~ appJVM/compile:packageBin::packageConfiguration then changes to the JavaScript source are immediately compiled and placed in the appJVM target/classes dir, so a refresh of the browser gets my new code - brilliant.
However, I would also like to use the sbt-revolver plugin to restart the server when I edit server-side code. But there's the rub: if I use ~ ;appJVM/compile:packageBin::packageConfiguration;appJVM/reStart then changes to the client-side source restart the server, which I don't want. But if I remove the client-side project from the transitive watch, then it no longer notices when I change the client-side sources.
Is there a way to define watchTransitiveSources differently per task?

~ is actually a command: it watches the transitive sources of the base project and, when those change, synchronously runs everything passed to it as an argument before re-running the original input (including the ~). It does not make any information about what has changed available to those command-line inputs (it is difficult to see how it could).
Consequently, the solution I came to is to write a new watch command. It also needs to watch all sources, but it then conditionally chooses what to do based on which files have changed.
I've hacked together something ugly as anything that does this, but I will look at making it more legible, general, tested, and a plugin. In the meantime, anyone trying to follow my path can use this public gist: https://gist.github.com/LeisureMonitoringAdmin/0eb2e775e47b40f07d9e6d58d17b6d52
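To give a flavor of the shape of it, here is a heavily stripped-down sketch (sbt 0.13 style, untested; the 500 ms poll, the /js/ path test, and the hard-coded command strings are all illustrative assumptions - the gist is more careful):
lazy val conditionalWatch = Command.command("conditionalWatch") { state =>
  val extracted = Project.extract(state)
  // everything sbt would normally watch, client and server alike
  val (s1, watched) = extracted.runTask(watchTransitiveSources, state)
  def stamps = watched.map(f => f -> f.lastModified).toMap
  val before = stamps
  while (stamps == before) Thread.sleep(500) // crude polling, for illustration only
  val changed = stamps.collect { case (f, t) if before.get(f).forall(_ != t) => f }
  // illustrative classification: anything under a /js/ directory is client-side
  val clientOnly = changed.nonEmpty && changed.forall(_.getAbsolutePath.contains("/js/"))
  val toRun =
    if (clientOnly) List("appJVM/compile:packageBin::packageConfiguration")
    else List("appJVM/compile:packageBin::packageConfiguration", "appJVM/reStart")
  toRun ::: "conditionalWatch" :: s1
}
commands += conditionalWatch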

Are you sure you are using sbt-resolver, not sbt-revolver? Because the second one allows controlling the triggered resources using:
watchSources - defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
watchTransitiveSources - then combines the watchSources for the current project and all execution and classpath dependencies (see .scala build definition for details on interProject dependencies).
Source: http://www.scala-sbt.org/0.13/docs/Triggered-Execution.html
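Applied to this build, that suggests trimming the JVM project's own watch list rather than the transitive one - something like this sketch (sbt 0.13 syntax; the /js/ path filter is an illustrative assumption):
lazy val appJVM = app.jvm.settings(
  // keep watching server-side sources only; drop anything under a /js/ directory
  watchSources := watchSources.value.filterNot(_.getAbsolutePath.contains("/js/"))
)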

The transaction currently built is missing an attachment for class - Attempted to find a suitable attachment but could not find any in the storage

Full Error:
transactions.TransactionBuilder - The transaction currently built is missing an attachment for class: com/gibtn/corda/printutilities/PrintLedgerTransaction. Attempted to find a suitable attachment but could not find any in the storage.
This has been asked here and here but I hope to get better clarification.
Problem:
I have built a set of libraries to perform common tasks in my Flows that I include in all my CorDapps. For now I just copy the JARs into each project, make some changes to the gradle files and everything works great.
I recently put together a small library for performing common tasks in Contracts and added the JAR the same way.
This works fine with MockNodes. But when I test with real nodes I get this error in the CRaSH shell and the transaction fails with a NoClassDefFoundError exception.
Question:
Is what I am doing even possible? Or do I always have to keep my utility classes inside the Contracts module in IntelliJ so they are bundled together with the Contracts into a single JAR? That way when the node starts the JAR (containing the Contracts and any utilities) is added to Attachment storage as a single Attachment.
I found a way to solve this. It's a bit dirty but initial testing seems to work. I just created a blank class in my utilities JAR that implements Contract. Its verify() method is empty. Now when the Corda node starts it sees this Contract and adds the JAR to Attachment storage. So from the CRaSH shell if I run:
attachments trustInfo
...my utility JAR will be listed (it wasn't before). I can see that when I use one of the utility methods in a Contract, the utility JAR is included as a separate Attachment in the WireTransaction.
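The marker class itself is trivial - a minimal sketch (Kotlin, assuming corda-core on the classpath; the class name is made up):
import net.corda.core.contracts.Contract
import net.corda.core.transactions.LedgerTransaction

// Never referenced by any state or transaction; it exists only so the node
// treats this utility JAR as a contract JAR and loads it into Attachment storage.
class UtilityAttachmentMarker : Contract {
    override fun verify(tx: LedgerTransaction) {
        // intentionally empty
    }
}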
I'm not crazy about this solution and will probably stop using a utility JAR for Contracts. I'll go back to copying the classes into each project. Nevertheless there is a way to do it. I would just need a more experienced Corda developer to give it their blessing before I'd go forward into production with it.

Qt5: is it possible to define more than one build configuration?

I am working on an application for Mac OS written in C++ using Qt 5.7.1, which is distributed both as a direct download from the website and through the App Store.
Now I have a new requirement to add self-update to the application, which requires building two different versions of the application - one for the App Store (without the self-update mechanism) and one for direct download (with self-update).
I have no problem implementing the self-update, but I am stuck on making a separate build configuration. So the question is: in Qt5, is it possible to define two separate Release configurations (in a single .pro file), and if yes, then how? It also needs to work with qmake, since the builds are automated (with Jenkins).
You cannot really define two Release configurations in a .pro file.
However, you can define different configuration options in your .pro file like this:
foo {
    # something
    DEFINES += FOO
} else {
    # something else
    DEFINES += NOT_FOO
}
bar {
    # another thing
    DEFINES += BAR=42
}
And then when you run qmake, add either CONFIG+=foo or CONFIG+=bar or both.
Finally, in Qt Creator you can define as many build profiles as you want and customize the call to qmake for each profile by adding (or not adding) the CONFIG+=foo options.
For more information check qmake's CONFIG documentation, especially the last example.
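To make the mechanism concrete: the DEFINES entries above become preprocessor macros, so the two builds can branch at compile time (a minimal sketch; the messages stand in for real update logic):
#include <iostream>

int main() {
#ifdef FOO
    // built with: qmake CONFIG+=foo - the direct-download build
    std::cout << "self-update enabled\n";
#else
    // the App Store build: self-update code is compiled out entirely
    std::cout << "self-update disabled\n";
#endif
    return 0;
}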

Order of precedence of Sling run modes

I have a doubt about this question:
Question: What is the correct order of precedence for setting up run modes in AEM? (From left to right, left being the highest.)
A. System property, Sling properties file, jar file
B. jar file, sling properties file, system property
C. Sling properties file, jar file, system property
D. jar file, System property, Sling properties file
Answer : B
I have gone through various docs and done multiple experiments on this.
According to the Adobe documentation, the order is - Sling.properties, system property, jar file.
Meanwhile, this Adobe doc gives a contradictory order - jar file, sling.properties, system property.
Also, the Apache Sling docs say that any property set with the -D option in the form n=v overwrites same-named properties in the sling.properties file, which means the system property has higher precedence than sling.properties.
Now, that is all according to the docs; here is what I experimented with:
I made the path ${dir}/crx-quickstart/conf, created a sling.properties file there with sling.run.modes=publish, renamed the jar file to cq-author-7502.jar, and then ran the jar with the command java -jar cq-author-7502.jar -Dsling.run.modes=prod
These are my observations:
1. When the jar runs, the message Setting 'sling.run.modes' to 'publish' from sling.properties. is shown in the terminal.
2. The instance came up in author mode.
3. When I checked the run mode in the Felix console, it was prod.
I am totally confused about the order of precedence, as everything seems contradictory to me. I would be grateful if anyone could shed some light on it.
Thank you
I think it depends on when we are checking run mode precedence - at installation time or later on a running instance - and on how we are starting the instance. There are two kinds of run modes: installation-time run modes and custom run modes.
Installation-time run modes - as explained in the official run modes documentation and setup instructions, these can be set only once, at installation time. They include author, publish, nosamplecontent, samplecontent.
Custom run modes - your own customized run modes, e.g. dev, qa, prod.
I did some tests (AEM 6.1); precedence works in the following way.
Initial setup
Start the jar (by double-clicking) - here there is no opportunity to set the run mode in sling.properties or a start script the first time, so the JAR name takes precedence.
Unpack the jar and specify the run mode as a system property in a start script - the JAR name doesn't come into the picture here, and there is no option to set the run mode in sling.properties, so the system property takes precedence.
Running instance
Even if we change the run mode in the JAR name, it doesn't change the installation-time run mode. For custom run modes the JAR file name is not applicable; the order of precedence is sling.properties -> the -r option (command-line jar option) -> system properties (start script).
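For reference, the three mechanisms look like this (illustrative jar name and run modes; note that a JVM system property has to appear before -jar, otherwise it reaches the jar as a program argument):
# 1. sling.properties in crx-quickstart/conf:
sling.run.modes=author,dev

# 2. the -r command-line option:
java -jar cq-quickstart-6.1.0.jar -r author,dev

# 3. a JVM system property:
java -Dsling.run.modes=author,dev -jar cq-quickstart-6.1.0.jar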
As far as the question goes (it seems to be an AEM certification question), it is not clear which context it is asking about. The Helpx article is contributed by the community, so its context might be different. As for the Sling documentation link, it seems from that link that the launchpad version in AEM is old, not 2.4.0. Need to ask Adobe to confirm :).
There are two conflicting Adobe articles that say quite different things:
Article 1: (Assumed more recent)
Starting CQ with a specific run mode: If you have defined configurations for multiple run modes then you need to define which is to be used upon startup. There are several methods for specifying which run mode to use; the order of resolution is:
1. sling.properties file
2. -r option
3. system properties (-D)
4. Filename detection
From this Reference: Configure Run Modes
- the answer is C
Article 2:
Behavior when run modes are specified more than one way: The run mode specified in the naming of the jar file takes precedence. If run modes are not specified in the naming of the jar file, the values in the sling.properties file are used. If run modes are not specified in either the naming of the jar file or the sling.properties file, the system property (or JVM argument) is used.
From this Reference: Configure Run Modes
- the answer is B
However, based on my experience and on a process of elimination, I'd go with answer B.

Node.js externs for closure compiler?

Firstly: The "official" (?) node.js externs are located here: https://github.com/google/closure-compiler/tree/master/contrib/nodejs
But I can't use them because of the high number of warnings (and sometimes errors) that are generated. For example, the declaration of the "process" module is very "thin": it has only one property defined on its prototype, and it does not inherit from EventEmitter, so I can't register a callback when, for example, I want to do clean-up on process SIGINT (process.on('SIGINT', callback)).
When I mix several externs files declaring the core modules of Node.js, more and more warnings and errors are raised (I always respect the dependency tree between externs). For example, if I include the events.js and stream.js externs files, an error is raised because the "event" global var is redeclared: once in events and again in stream.
So: what am I doing wrong?
The Closure Compiler I am using is the latest from git, with the --new_type_inf and --env flags activated, among others.
For example: If i include the events.js and stream.js externs files, an error is raised because the "event" global var is redeclared: Once in events and again in stream.
This highlights the core of the problem - and why these externs are not well maintained. The compiler doesn't recognize that these variables are in fact NOT global. The compiler currently does not have a method to correctly interpret externs as modules. They were originally contributed and consumed by a fork of the project that could understand externs as modules.
I am currently working on adding support to the compiler for this and hope to some day soon be able to completely rewrite this answer.
In the meantime, you might be able to work around some of this by adding @suppress {duplicate} annotations to the files. However, keep in mind that they will still be global types.
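For instance, if stream.js redeclares the events global that the events.js externs already declare, the redeclaration could be annotated like this (a sketch):
/** @suppress {duplicate} */
var events;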
If you wish to improve the files (like having process properly extend EventEmitter), I will happily review pull requests for such changes.

Building multiple outputs through the same build process with external config

I'm trying to leverage GruntJS to create a build process that is uniform across multiple teams and projects at my company. The idea here is that we have a config file for each application that only specifies the files that need to be processed and which bundles they need to be concatenated into at the end. The build process would be the same for all apps: pick up the config for the app, then process the files in each bundle using a uniform build process.
For example:
asset.json config file specifies two bundles, "main" with 1.js + 2.js and "secondary" with 2.js and 3.js
Build process says for each bundle, preprocess, minify, then concatenate into a js file based on the bundle
Get output of "main.js" and "secondary.js"
The problem I'm running into is that Grunt takes a "static" configuration and executes it. I've already abstracted out the building of the configuration so that I can add chunks dynamically, but right now I don't see a better way forward than literally looping over each bundle, building a unique task for each section of the build process for each bundle, building up queues of tasks to execute, and then running each task in the queues during the build process. It's definitely possible, but it's a lot of manual work and seems prone to breaking. Is there a way to just execute each task in order as I loop over the bundles? Any better way to achieve the same net result of config + source in, N bundles out?
I want to be clear that I am fully aware that Grunt CAN build multiple files. What I'm trying to do is separate the specification of how many bundles there are from the build steps themselves. Grunt core has to bake these two things together, which means each project would have to go in and alter its build steps rather than just an external configuration. As per the example above, I should be able to swap out the asset.json file specified in step 1 for any config file that has 1, 2, 3, ... N bundles with N files in each one (and potentially specifying a "type" like scripts or styles).
Edit 10/12/13: The Nitty Gritty posted an article yesterday that might be another approach to tackling your issue.
This can be done by passing the module name you want to build as a command-line argument and loading the whole assets file in your Grunt config. Please note this is example code, I have not tested it, so it's possible you will need to correct paths etc. for your case.
Start by turning the assets.json file into a plain JavaScript file, reshaped like so:
module.exports = {
  main: ["1.js", "2.js"],
  secondary: ["2.js", "3.js"]
};
Next, you can pass a command line argument to Grunt, which should specify one of the module names in assets.js. Example:
grunt --bundle=main
Now, you'll need to load in the assets.js file in the Gruntfile:
var assets = require('./assets'); // assuming assets.js is on the same level as your Gruntfile
And then you can get the argument name by using:
var bundle = grunt.option("bundle");
Now you can use bundle as your output file name and assets[bundle] to get the array of files for that bundle.
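Put together, a Gruntfile along these lines could drive one bundle per invocation (a sketch, assuming the grunt-contrib-concat plugin; dist/ and the task layout are illustrative):
var assets = require('./assets');

module.exports = function (grunt) {
  var bundle = grunt.option('bundle') || 'main';

  grunt.initConfig({
    concat: {
      bundle: {
        src: assets[bundle],            // e.g. ["1.js", "2.js"] for main
        dest: 'dist/' + bundle + '.js'  // e.g. dist/main.js
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};
Running grunt --bundle=secondary would then emit dist/secondary.js.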
