I'm facing the following interesting problem:
I would like to create a Grunt task that is executed at various parts of the build/release process.
It's possible that one invocation makes some changes to files based on a pattern, e.g. replacing a version number or Git tag like #develop in some files.
A later invocation in the same build needs to "revert" the above changes back to their original values.
Based on the above I would need to find a way to track the changed files from the first invocation, so I can revert the changes in the second step. I can't use a pattern for the second step to avoid false positives.
Example:
Start a release - many steps are part of this: compile, uglify/minify, bump the version number, create a Git release branch.
Replace the version numbers of some dependencies, e.g. in bower.json - change them from #develop to #1.2.3
Some more steps, including Git commits
Replace the version numbers changed above from #1.2.3 back to #develop
Additional steps to clean up the release
The last step is difficult since I don't want to change any occurrences of #1.2.3 that I did not update in the previous step. For this I would need to store the list of changed files somewhere, either in memory, or in a temporary file in the project root, e.g. something like .grunt-replace. After reading the file in the last step, it could be deleted by the plugin.
Is anybody using a Grunt plugin like this? How would you solve this? Does Grunt have functionality that would support something like the above, or should I just use the Node.js file API?
Are there other patterns I should consider for keeping some kind of state between Grunt invocations?
You can use a file watcher to keep the Grunt process alive and then keep track of state using events.
For instance:
var files = [];

module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      files: ['orig.js'],
      tasks: ['my-task']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('my-task', function () {
    // Execute some task
  });

  grunt.event.on('watch', function (action, filepath, target) {
    // Record the changed file
    files.push(filepath);
  });

  grunt.registerTask('default', ['watch']);
};
You can then keep track of which files have been changed and act accordingly.
I have to add, though, that this is a bit of a hack. Your build flow shouldn't be stateful; you should be able to build from scratch.
I am using the Crashlytics SDK for collecting Android NDK crash reports. Please find the configuration file below.
It works well if I run the following command:
./gradlew crashlyticsUploadSymbolsXXXRelease
Also, if I add assembleXXXRelease.finalizedBy(crashlyticsUploadSymbolsXXXRelease) to the afterEvaluate block, the symbol files are also uploaded after each build.
Here are my questions:
I want this uploading process to be automatic. The issue I am facing is that the finalizedBy approach adds around 10 more minutes to the build time, which I have to avoid. I would prefer to create a new task and invoke it separately after each build, so the APK build time stays the same as now. I tried to do so, but it doesn't seem to work; see the code below. If you have any clue, or I missed something, please let me know.
Is it possible to override the NDK output path based on the flavor? I tried to get the flavor XXX from the task name (e.g. assembleXXXRelease) and assign it to the output path (e.g. XXX/obj), but failed. It seems the flavor is fixed to the default flavor.
Is it possible to pick up the cSYM symbol files and upload them to Crashlytics without the Crashlytics SDK?
apply plugin: 'com.android.application'
apply plugin: 'io.fabric'

// Question 2 will be happening here:
crashlytics {
    enableNdk true
    androidNdkOut "obj"      // override the path based on the flavor name to flavorName/obj?
    androidNdkLibsOut "libs" // override the path based on the flavor name to flavorName/libs?
}

dependencies {
    // ...
    implementation 'com.crashlytics.sdk.android:crashlytics:2.10.1'
    // Add the Crashlytics NDK dependency
    implementation 'com.crashlytics.sdk.android:crashlytics-ndk:2.1.1'
}

// Question 1 will be happening here:
// If I call this task from the command line, it won't work. Nothing will happen.
task upload {
    doLast {
        crashlyticsUploadSymbolsXXXRelease
    }
}

afterEvaluate { project ->
    android.applicationVariants.all { variant ->
        assembleXXXRelease.finalizedBy(crashlyticsUploadSymbolsXXXRelease) // it works here
    }
}
I want this uploading process to be automatic. The issue I am facing is that the finalizedBy approach adds around 10 more minutes to the build time, which I have to avoid. I would prefer to create a new task and invoke it separately after each build, so the APK build time stays the same as now. I tried to do so, but it doesn't seem to work.
I solved this by modifying the task as follows:
task upload {
    doLast {
        // do something whatever
    }
}
upload.finalizedBy(crashlyticsUploadSymbolsXXXRelease)
Is it possible to override the NDK output path based on the flavor? I tried to get the flavor XXX from the task name (e.g. assembleXXXRelease) and assign it to the output path (e.g. XXX/obj), but failed. It seems the flavor is fixed to the default flavor.
I made this work by getting the current flavor name via:
gradle.startParameter.taskNames[0]
Is it possible to pick up the cSYM symbol files and upload them to Crashlytics without the Crashlytics SDK?
I have not yet found a solution for this.
I want this uploading process to be automatic. The issue I am facing is that the finalizedBy approach adds around 10 more minutes to the build time, which I have to avoid. I would prefer to create a new task and invoke it separately after each build, so the APK build time stays the same as now.
I created a Gradle project that allows exactly that: uploading Firebase Crashlytics NDK symbols independent from the build process of your app.
You can check it out here:
https://github.com/triplef/android-upload-ndk-symbols
In a cross-built Scala.js server/client project, I want changes to some sources to restart the server, and changes to other sources to trigger the packaging process without a restart. Different tasks will not help, because each would do one or the other, and I want both at the same time.
In more detail:
I've got a Scala.js crossProject. I'm using the following to ensure the server can serve the built JavaScript:
val app = crossProject.settings(...)

lazy val appJS = app.js
lazy val jsFile = fastOptJS in (appJS, Compile)

lazy val appJVM = app.jvm.settings(
  (resources in Compile) += jsFile.value.data,
  (resources in Compile) += jsFile.value.data.toPath.resolveSibling(jsFile.value.data.name + ".map").toFile,
  (resources in Compile) += (packageJSDependencies in (appJS, Compile)).value
)
If I run ~ appJVM/compile:packageBin::packageConfiguration then changes to the JavaScript source are immediately compiled and placed in the appJVM target/classes dir, so a refresh of the browser gets my new code - brilliant.
However, I would also like to use the sbt-revolver plugin to restart the server when I edit server-side code. But there's the rub: if I use ~ ;appJVM/compile:packageBin::packageConfiguration;appJVM/reStart then changes to the client-side source restart the server, which I don't want. But if I remove the client-side project from the transitive watch, then it no longer notices changes to the client-side code.
Is there a way to define watchTransitiveSources differently per task?
~ is actually a command that watches the transitive sources of the base project and then synchronously runs everything passed as an argument to it when those change, before re-running the original input (including ~). It does not make any information about what has changed available to those command-line inputs (it's difficult to see how it could).
Consequently the solution I came to is to write a new watch command. It also needs to watch all sources, but then conditionally choose what to do based on which files have changed.
I've hacked something ugly as anything together that does this, but will look at making it more legible, general, tested and a Plugin. However, in the meantime anyone trying to follow my path can use this public gist: https://gist.github.com/LeisureMonitoringAdmin/0eb2e775e47b40f07d9e6d58d17b6d52
Are you sure you are using sbt-resolver and not sbt-revolver?
Because the second one allows controlling the triggered resources using:

watchSources - defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.

watchTransitiveSources - combines the watchSources for the current project and all execution and classpath dependencies (see the .scala build definition for details on inter-project dependencies).
Source: http://www.scala-sbt.org/0.13/docs/Triggered-Execution.html
I want to create a file called config.js for the client end of my app, but it should be based on the environment. I've successfully done this for production using the tasks/register/prod.js file, but sailsjs does not seem to have an equivalent dev.js file.
I also can't find much information about this, so I'm hoping there is a standard workaround I'm just not thinking of.
I'm not sure why I found it so confusing, or why I never opened the README.md (duh!) in tasks/, but dev stuff goes in the default task (tasks/register/default.js).
ANSWER: READMEs are named as such for a very good reason.
I'm trying to leverage GruntJS to create a build process that is uniform across multiple teams and projects at my company. The idea here is that we have a config file for each application that only specifies the files that need to be processed and what bundles they need to be concatenated into at the end. The build process would be the same for all apps: pick up the config for the app, then process the files in each bundle using a uniform build process.
For Example:
asset.json config file specifies two bundles, "main" with 1.js + 2.js and "secondary" with 2.js and 3.js
Build process says for each bundle, preprocess, minify, then concatenate into a js file based on the bundle
Get output of "main.js" and "secondary.js"
The problem I'm running into is that Grunt takes a "static" configuration and executes it. I've already abstracted out the building of the configuration so that I can add chunks dynamically, but right now I don't see a better way forward than literally looping over each bundle, building a unique task for each section of the build process for each bundle, building up queues of tasks to execute, and then running each queued task during the build. It's definitely possible, but it's a lot of manual work and seems prone to breaking. Is there a way to just execute each task in order as I loop over the bundles? Is there a better way to achieve the same net result of config + source in, N bundles out?
I want to be clear that I am fully aware that Grunt CAN build multiple files. What I'm trying to do is separate the specification of how many bundles from the build steps themselves. Grunt core has to bake these two things together which means each project would have to go in and alter their build steps rather than an external configuration. As per the example above, I should be able to swap out the asset.json file specified in step 1 for any config file that has 1, 2, 3, ... N bundles with N files in each one (and potentially specifying a "type" like scripts or styles).
Edit 10/12/13: The Nitty Gritty posted an article yesterday that might be another approach to tackling your issue.
This can be done by passing the module name you want to build as a command line argument and loading the whole assets file in your Grunt config. Please note this is example code that I have not tested, so you may need to correct paths etc. for your case.
Start by converting the assets.json file to a plain JavaScript file (assets.js), reformed like so:
module.exports = {
  main: ["1.js", "2.js"],
  secondary: ["2.js", "3.js"]
};
Next, you can pass a command line argument to Grunt, which should specify one of the module names in assets.js. Example:
grunt --bundle=main
Now, you'll need to load in the assets.js file in the Gruntfile:
var assets = require('./assets'); // assuming assets.js is on the same level as your Gruntfile
And then you can get the argument name by using:
var bundle = grunt.option("bundle");
Now you can use bundle as your output file name and assets[bundle] to get the array of files for that bundle.
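Putting the pieces together, the bundle-specific concat config could be derived from the assets map — a sketch in which the output paths and task wiring are assumptions:

```javascript
// Sketch: derive a grunt-contrib-concat config from the assets map
// and a --bundle option. Output paths are hypothetical.
var assets = {                       // normally: require('./assets')
  main: ['1.js', '2.js'],
  secondary: ['2.js', '3.js']
};

function concatConfigFor(bundle) {
  return {
    concat: {
      dist: {
        src: assets[bundle],
        dest: 'dist/' + bundle + '.js'
      }
    }
  };
}

// In the Gruntfile:
// module.exports = function (grunt) {
//   var bundle = grunt.option('bundle') || 'main';
//   grunt.initConfig(concatConfigFor(bundle));
//   grunt.loadNpmTasks('grunt-contrib-concat');
//   grunt.registerTask('default', ['concat']);
// };
```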
I am new to GruntJS. Looking at the tutorials and presentations, I believe it is awesome.
Currently, we are using batch scripts in our web + embedded project, which performs the following tasks:
Merges all the JS files into one.
Merges all the CSS files into one.
Kills the existing .EXE on which our project runs. It's basically a simulator EXE that loads and runs our website, which is packaged as a ZIP file.
Deletes the existing ZIP file.
Creates a new ZIP file containing folders like "html", "lsp" (Lua Server Pages), images, JS (containing only the one merged file), and CSS (containing only the one merged file).
Starts the .EXE. The EXE, once loaded, picks up the ZIP file from a specified directory.
I understand the merging process can be achieved via GruntJS, but I am not sure about starting/killing the EXE. It would be great if somebody could give me pointers on how to get started. Once I'm certain about the process, I can convince my boss.
Thanks for reading.
Having a Grunt-like script that launches your server isn't good practice. In an ideal world, you would separate the build and package phases from launching the server.
But anyway, there are either Grunt plugins to do that, or Node's vanilla child_process module, assuming you use Node to run Grunt.
Using grunt-exec it would look like this:
exec: {
  start_server: {
    command: 'program.exe'
  }
}
Using the vanilla approach:
var spawn = require('child_process').spawn;

var prog = spawn('program.exe');
prog.on('close', function (returnCode) {
  console.log('program.exe terminated with code', returnCode);
});