gulp watch and browserify: watches but doesn't build again - gruntjs

Today is my first day playing around with gulp and grunt, or any task runner for JS. I got to a point where I could change my code in my js files and then run
gulp browserify
and this works fine. However, it was annoying, so I wanted to add a watch so that whenever I change the scripts it automatically runs gulp browserify (or something similar) and I don't have to do it manually. Here is what I did in my gulpfile:
var gulp = require('./gulp')({});

gulp.task('watch', function() {
  // Watch .js files
  gulp.watch('jsfolder/**/*.js', ['scripts']);
});
gulp.task('release', ['build']);
gulp.task('build', ['scripts', 'browserify']);
gulp.task('default', ['watch']);
So when I run
gulp watch
and then save my changes, it gives me:
[14:37:21] Starting 'clean'...
[14:37:21] Finished 'clean' after 3.18 ms
[14:37:21] Starting 'concat'...
[14:37:21] Finished 'concat' after 263 μs
[14:37:21] Starting 'checksum'...
[14:37:21] Finished 'checksum' after 19 ms
[14:37:21] Starting 'scripts'...
[14:37:21] Finished 'scripts' after 455 μs
[14:38:41] Starting 'clean'...
[14:38:41] Finished 'clean' after 2.9 ms
[14:38:41] Starting 'concat'...
[14:38:41] Finished 'concat' after 218 μs
[14:38:41] Starting 'checksum'...
[14:38:41] Finished 'checksum' after 18 ms
[14:38:41] Starting 'scripts'...
[14:38:41] Finished 'scripts' after 302 μs
but my changes never show up on my pages. I am assuming that it's just watching and not building? What am I missing?
EDIT
I added this:
gulp.watch('ui.id.go.com/public/**/*.js', ['scripts','browserify']);
but now it is rebuilding way too often, and even though the page updates, my machine's CPU usage spikes! Any better ideas out there?
thanks

You should use Watchify with Browserify to watch file changes at a much lower performance cost. As your application grows, bundling your codebase takes more and more time, because Browserify rebuilds every file even if only one of them changed.
Watchify only rebuilds what it needs to. The initial build (when you run the gulp task) takes the same time as before, but on each change you'll see the difference.
In a 5,578,610-byte JavaScript application, the initial build takes 6.67s and a rebuild on file change takes ~400ms. With Browserify alone, every change costs 6.67s.
To get started, install the NPM packages:
npm install browserify watchify --save-dev
Import Browserify and Watchify in your gulpfile.js:
var browserify = require('browserify');
var watchify = require('watchify');
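The snippets below also use gutil, source(), buffer(), sourcemaps and size() without showing their requires. Assuming the usual packages behind those names (the answer doesn't name them, so treat this as an assumption), you would also require something like:
// Assumed helper packages (install with: npm install --save-dev gulp-util vinyl-source-stream vinyl-buffer gulp-sourcemaps gulp-size lodash).
var gutil = require('gulp-util');            // gutil.log for build and error logging
var source = require('vinyl-source-stream'); // source() turns the Browserify stream into a vinyl file
var buffer = require('vinyl-buffer');        // buffer() buffers file contents for plugins that need it
var sourcemaps = require('gulp-sourcemaps');
var size = require('gulp-size');             // used below as size(config.size), where config is the answerer's own options object
var _ = require('lodash');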
Initialize the bundler (I'm using Lodash's _ for convenience). client.js is the application entry point here:
var bundler = watchify(browserify(_.assign(watchify.args, {
  entries: ['./src/client.js'],
  debug: true
})));
bundler.on('log', gutil.log); // Output build logs to terminal using gulp-util.
Then create your bundle() function using Watchify:
function bundle() {
  return bundler.bundle()
    // Log errors if they happen.
    .on('error', gutil.log.bind(gutil, 'Browserify Error'))
    .pipe(source('client.js'))
    // Optional, remove if you don't need to buffer file contents.
    .pipe(buffer())
    // Optional, remove if you don't want sourcemaps.
    // Loads map from Browserify file using gulp-sourcemaps.
    .pipe(sourcemaps.init({loadMaps: true}))
    // Add transformation tasks to the pipeline here.
    .pipe(sourcemaps.write('./')) // Writes .map file.
    .pipe(size(config.size)) // Checks output file size with gulp-size.
    .pipe(gulp.dest('./build'));
}
Finally, run the bundler whenever a dependency updates:
gulp.task('scripts', bundle);
gulp.task('watch', ['scripts'], function() {
  bundler.on('update', bundle); // On any dependency update, runs the bundler.
});
Run gulp watch and you're ready to work.
Note: you should only list your entry points in the bundler's entries. Browserify walks the dependency graph and takes care of the rest, so you won't build the same file twice.
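For example, with a hypothetical ./src/util.js that client.js requires, only the entry point belongs in entries (a sketch reusing the bundler setup above):
var bundler = watchify(browserify(_.assign(watchify.args, {
  entries: ['./src/client.js'] // enough: util.js is bundled because client.js requires it
  // not: entries: ['./src/client.js', './src/util.js']
})));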

For me the problem was due to the directory structure; it appears that gulp doesn't handle relative paths that well, at least in my case.
My project was set up something like this:
/
  project/
    gulpfile.js
  src/
    app.js
  build/
    mybuiltfile.js
I ended up moving it all into one folder and that fixed my problem.
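If restructuring isn't an option, another approach (a sketch, not from the original answer, with hypothetical paths) is to build the globs from the gulpfile's own location so they no longer depend on where gulp is launched from or how the sources sit relative to the current working directory:
var path = require('path');

// Anchor the glob to the gulpfile's directory (hypothetical layout with src/ one level up).
// Keep forward slashes in globs, so avoid path.join here on Windows.
var srcGlob = path.resolve(__dirname, '../src').replace(/\\/g, '/') + '/**/*.js';

gulp.task('watch', function() {
  gulp.watch(srcGlob, ['scripts']);
});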

Related

Gulp del and/or run-sequence causes localhost WordPress to break, but works after browser refresh

I've built a gulp project that allows me to build and watch a WordPress theme (bones) in a WordPress installation that is outside the working folder:
Working folder: localhost/sites/testbed/
- dev-bones
- node_modules
- gulpfile.js
- package.json
- package-lock.json
Target folder: localhost/sites/my-wordpress-site/wp-content/themes/
- bones
I installed 'del' and 'run-sequence' so that I could delete the target folder before the build sequence began.
del task:
gulp.task('clean:dirbuild', function() {
  // If build directory is outside the working folder:
  return del.sync(dir.build, {force: true});
  // If inside:
  // return del.sync(dir.build);
});
build task:
gulp.task('build', function (callback) {
  runSequence('clean:dirbuild',
    ['php', 'css', 'js', 'copyroot', 'copytranslation'],
    callback
  );
});
default task:
gulp.task('default', ['build', 'watch']);
After typing 'gulp watch' the browser launches and the WordPress site loads. But I get a white screen with a message saying that the path to bones.php doesn't exist.
I checked the target folder and it does exist.
After scratching my head for a while I manually refreshed the browser and the site loaded correctly.
I've gone through a number of permutations of the code above, always with the same result.
I found this post on Stack Overflow: run-sequence doesn't run gulp tasks in order, but it didn't seem to fit my case.
I've uploaded the full gulpfile to Github.
I'd be very grateful if someone could give it the once-over and see where I'm going wrong.
UPDATE
I went back to a tutorial I got the 'del' task from. In the comments the author had pointed out that he'd missed something from the task: the callback.
So I updated the code:
gulp.task('clean:dirbuild', function(callback) {
  // If build directory is outside the working folder:
  return del.sync(dir.build, {force: true}, callback);
  // If inside:
  // return del.sync(dir.build, callback);
});
and typed 'gulp'.
This time all I got was a white screen and manually refreshing didn't load the site.
OK, I figured it out. The problem was with the order of execution of the 'default' task: some of the 'watch' tasks were firing before, during and after the 'build' task.
So I left everything as it was except I changed:
default task
gulp.task('default', ['build', 'watch']);
to
gulp.task('default', function (callback) {
  runSequence('build', ['watch'], callback);
});
which ensures that the 'watch' task will run after the 'build' task.
Now, everything works as expected.
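As an aside, if you ever move to gulp 4, the same ordering can be expressed without run-sequence, since series is built in (a sketch assuming the same task names):
// gulp 4: 'watch' is guaranteed to start only after 'build' has finished.
gulp.task('default', gulp.series('build', 'watch'));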
Note: I've deleted the example gulpfile.js from GitHub. When I've completed my project and got it on GitHub I'll post a link to it from here.

Grunt error with fs.unlinkSync on Sails

I'm using skipper to receive the files, sharp to resize (and save) them, and fs.unlinkSync to remove the old image. But this time I got a very weird error that concerns me a lot:
error: ** Grunt :: An error occurred. **
error:
Aborted due to warnings.
Running "copy:dev" (copy) task
Warning: Unable to read "assets/images/users/c8e303ca-1036-4f52-88c7-fda7e01b6bba.jpg" file (Error code: ENOENT).
error: Looks like a Grunt error occurred--
error: Please fix it, then restart Sails to continue running tasks (e.g. watching for changes in assets)
error: Or if you're stuck, check out the troubleshooting tips below.
error: Troubleshooting tips:
error:
error: *-> Are "grunt" and related grunt task modules installed locally? Run npm install if you're not sure.
error:
error: *-> You might have a malformed LESS, SASS, CoffeeScript file, etc.
error:
error: *-> Or maybe you don't have permissions to access the .tmp directory?
error: e.g., (edited for privacy)/sails/.tmp ?
error:
error: If you think this might be the case, try running:
error: sudo chown -R 1000 (edited for privacy)/sails/.tmp
Grunt stopped running, and having that happen in production is a big no-no... I believe this is caused by a race condition involving fs.unlinkSync(fname). The error is also intermittent and very hard to reproduce on some machines (IO ops/sec, maybe?).
I have the following controller action:
var id = 1; // for example
req.file('avatar').upload({
  dirname: require('path').resolve(sails.config.appPath, 'assets/images')
}, function(err, files) {
  var avatar = files.pop();
  // file name operations here. output is defined as the path + id + filetype
  // ...
  sharp(avatar.fd)
    .resize(800, 800)
    .toFile(output, (err, info) => {
      if (err) {
        res.badRequest();
      } else {
        fs.unlinkSync(avatar.fd);
        res.ok();
      }
    });
});
Now I've been thinking about a few solutions:
Output the new image directly to .tmp
Unlink the original only once the file exists in .tmp. Explanation: Grunt has already copied the old file, so removing it would be safe!
But I don't know if this is spaghetti code or whether a better solution exists.
EDIT: My solution, as proposed by arbuthnott, was to wrap a controller like this:
get: function(req, res) {
  var filepath = req.path.slice(1, req.path.length);
  // remove '/' root identifier. path.resolve() could be used
  if (fs.existsSync(path.resolve(filepath))) {
    return res.sendfile(filepath);
  } else {
    return res.notFound();
  }
}
I think you are on the right track about the error. You are making some rapid changes in the assets folder. If I read your code right:
Add an image with user-generated filename to assets/images (ex cat.jpg)
Copy/resize the file to an id filename in assets/images (ex abc123.jpg)
Delete the original upload (cat.jpg)
(I don't know the details of sharp, there may be more under the hood there)
If sails is running in dev mode, then Grunt will be trying to watch the whole assets/ folder, and copy all the changes to .tmp/public/. It's easy to imagine Grunt may register a change, but when it gets around to copying the added file (assets/images/cat.jpg) it is already gone.
I have two suggestions for the solution:
One:
Like you suggested, upload your original to the .tmp folder (maybe even a custom subfolder of .tmp). Still place your sized copy into /assets/images/, and it will be copied to /.tmp/public/ where it can be accessed as an asset by the running app. But Grunt will ignore the quick add-then-delete in the .tmp folder.
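A rough sketch of option one, adapted from the controller in the question (the .tmp/uploads subfolder and the output filename are my assumptions, not something the answer specifies):
// Receive the upload outside assets/ so Grunt's watch never sees the short-lived original.
req.file('avatar').upload({
  dirname: require('path').resolve(sails.config.appPath, '.tmp/uploads') // assumed temp location
}, function(err, files) {
  if (err || !files.length) { return res.badRequest(); }
  var avatar = files.pop();
  // The sized copy still goes to assets/images, so it gets synced to .tmp/public as an asset.
  var output = require('path').resolve(sails.config.appPath, 'assets/images', id + '.jpg'); // assumed name
  sharp(avatar.fd)
    .resize(800, 800)
    .toFile(output, function(err, info) {
      if (err) { return res.badRequest(); }
      fs.unlinkSync(avatar.fd); // deleting inside .tmp doesn't disturb the assets copy task
      return res.ok();
    });
});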
Two:
Do a bit of general thinking about both what you want to include in version control, and what Grunt tasks you want to be running in production. Note that if you use sails lift --prod then Grunt watch is turned off by default, and this error would not even occur. Generally, I don't feel like we want Grunt to do too much in production; it is more of a development shortcut. Specifically, Grunt watch can use a lot of resources on a production server.
The note about version control is just that you probably want some of the contents of assets/images/ to be in version control (images used by the site, etc), but maybe not in the case of user-uploaded avatars. Make sure you have a way to differentiate these contents (subdirectories or whatever). Then they can be easily .git-ignore'd or whatever is appropriate.
Hope this helps, good luck!

When running the jasmine task in grunt, I get an error "glob pattern string required"

I am trying to run Jasmine tests using grunt-contrib-jasmine.
Below is my Gruntfile.js code:
module.exports = function(grunt) {
  grunt.initConfig({
    jasmine: {
      // Your project's source files
      src: 'src/**/*.js',
      // Your Jasmine spec files
      specs: 'specs/**/*spec.js',
      // Your spec helper files
      helpers: 'specs/helpers/*.js'
    }
  });

  // Register tasks.
  grunt.loadNpmTasks('grunt-contrib-jasmine');

  // Default task.
  grunt.registerTask('default', 'jasmine');
};
Then I tried running the "grunt jasmine" command from the command prompt, and it fails with the error from the title: "glob pattern string required".
The issue you're describing was reported in this grunt issue. It was then fixed in this commit to grunt-contrib-jasmine on February 13 2016.
As of this writing, the most recent release of grunt-contrib-jasmine is v1.0.0, released on January 26 2016. So, the fix didn't make it into the most recent release, which is what NPM pulls when you do an install.
You can get around this by bypassing the NPM repository and going straight to GitHub for a prerelease version of grunt-contrib-jasmine. You do this by changing your package.json to read:
"dependencies: {
"grunt-contrib-jasmine": "git://github.com/gruntjs/grunt-contrib-jasmine#1e78d891704fa13fe7c7abf4cabf43cefacafcaf"
}
(The commit SHA in the URL just happens to be the most recent one at the time of this writing; feel free to replace it with a later one if you like.)
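After changing package.json, run npm install again so NPM pulls the package from GitHub instead of the registry (you may need to remove the previously installed node_modules/grunt-contrib-jasmine first):
npm install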
Ideally, this solution will become obsolete when grunt-contrib-jasmine releases v1.0.1 (or higher) containing the fix commit. For now, though, this is what fixed the problem for me.

Using the execution directory as the Gruntfile's file base

I'm trying to use Grunt to clean up a large project. For this specific example, I am trying to run unit tests and want to do so only for paths under the current grunt execution directory (i.e., the result of pwd).
I want one Gruntfile at the project root. I know grunt will find and execute this with no problem from any subdirectory. If I define my test runner options to look in "test/", it only runs tests under {project root/}test/. Is there a way to tell a project-level Gruntfile to make its paths (in all or in part) relative to the executing location?
Notes:
I don't need to be told "Why would you do this? Grunt should manage your whole project!" This is a retrofit, and until that halcyon day when it all works, I want/need it piecemeal.
To reiterate, "**/test/" isn't the answer, because I want only the tests under the current grunt execution directory.
--base also won't work, because Grunt will look for the Node packages at the base location.
I have, for similar situations, used a shared configuration JSON file that I've imported with grunt.config.merge(grunt.file.readJSON("../grunt-shared.json"));. However, that requires Gruntfiles in subfolders, as well as a hard-coded path to the shared file (e.g., ../), which seems tenuous.
I could write code to do some directory climbing and path building, but I'd like to make that a last resort.
Here's the solution I came up with (H/T to #firstdoit, https://stackoverflow.com/a/28763634/356016):
Create a single, shared JavaScript file at the root of the project to centralize Grunt behavior.
Each "sub-project" directory has a minimal, boilerplate Gruntfile.js.
Manually adjust Grunt's file base in the shared file to load from one node_modules source.
Gruntfile.js
/**
 * This Gruntfile is largely just to establish a file path base for this
 * directory. In all but the rarest cases, it can simply allow Grunt to
 * "pass-through" to the project-level Gruntfile.
 */
module.exports = function (grunt)
{
  var PATH_TO_ROOT = "../";

  // If customization is needed...
  // grunt.config.init({});
  // grunt.config.merge(require(PATH_TO_ROOT + "grunt-shared.js")(grunt));

  // Otherwise, just use the root...
  grunt.config.init(require(PATH_TO_ROOT + "grunt-shared.js")(grunt));
};
Using a var for PATH_TO_ROOT is largely unnecessary, but it provides a single focus point for using this boilerplate file across sub-projects.
{ROOT}/grunt-shared.js
module.exports = function (grunt)
{
  // load needed Node modules
  var path = require("path");

  var processBase = process.cwd();
  var rootBase = path.dirname(module.filename);

  /*
   * Normally, load-grunt-config also provides the functionality
   * of load-grunt-tasks. However, because of our "root modules"
   * setup, we need the task configurations to happen at a different
   * file base than task (module) loading. We could pass the base
   * for tasks to each task, but it is better to centralize it here.
   *
   * Set the base to the project root, load the modules/tasks, then
   * reset the base and process the configurations.
   *
   * WARNING: This is only compatible with the default base. An explicit base will be lost.
   */
  grunt.file.setBase(rootBase);
  require("load-grunt-tasks")(grunt);

  // Restore file path base.
  grunt.file.setBase(processBase);

  // Read every config file in {rootBase}/grunt/ into Grunt's config.
  var configObj = require("load-grunt-config")(grunt, {
    configPath: path.join(rootBase, "grunt"),
    loadGruntTasks: false
  });

  return configObj;
};
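Because grunt-shared.js restores the file base to process.cwd() before load-grunt-config reads the config files, relative globs inside those files resolve against the directory grunt was launched from. A hypothetical {rootBase}/grunt/jasmine.js showing the effect (the task and globs are illustrative, not part of the original answer):
// {rootBase}/grunt/jasmine.js (hypothetical example)
module.exports = function (grunt, options)
{
  return {
    unit: {
      // Relative to the directory grunt was run from, so only this
      // sub-project's sources and specs are picked up.
      src: "src/**/*.js",
      options: {
        specs: "test/**/*.spec.js"
      }
    }
  };
};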

Grunt uglify - call stack size exceeded

I am trying to use uglify with grunt to concat and minify some files. I have already used npm to install grunt-contrib-uglify.
I have the following in my grunt.js file (I have removed some other tasks for anonymity):
module.exports = function(grunt) {
  'use strict';

  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.initConfig({
    uglify: {
      options: {
        sourceMap: 'app/map/source-map.js'
      },
      files: {
        'app/dist/sourcefiles.min.js': [
          'app/test_js/test.js'
        ]
      }
    }
  });
};
I then run:
grunt uglify
but I keep getting the following error:
Warning: Maximum call stack size exceeded Use --force to continue.
If I use force, the grunt task never stops running.
Can someone tell me where I am going wrong? I am tearing my hair out on this one.
I had the same problem using another Grunt plugin, called recess.
The error message was not explicit:
Warning: Cannot read property 'message' of undefined Use --force to continue.
But verbose mode showed that my task was being called hundreds of times.
The problem was that I created a "cyclic dependency" (causing an infinite loop) when I registered my task.
grunt.registerTask('recess', ['recess']); //does not work => cyclic dependency!
The first parameter of the registerTask method is an "alias task" name and has to be different from the task names defined in the second parameter.
I corrected like this:
grunt.registerTask('my-recess-task', ['recess']);
And I ran the task by calling this (in the command prompt window):
grunt my-recess-task
And then it was OK!
More about the registerTask() method, from the Grunt API:
http://gruntjs.com/api/grunt.task#grunt.task.registertask
I also ran into this problem, and I solved it by removing
grunt.registerTask('uglify', ['uglify']);
Before fixing it, I ran grunt uglify -v to check what happened.
That is how I found the cause: grunt.loadNpmTasks('grunt-contrib-uglify') already registers a task named 'uglify', so adding the alias grunt.registerTask('uglify', ['uglify']); makes the task call itself. ^_^
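In other words, let the plugin keep the 'uglify' name and register any alias under a different name, e.g. (a minimal sketch):
grunt.loadNpmTasks('grunt-contrib-uglify'); // this already registers the 'uglify' task

// Alias it under another name (or make it the default); aliasing 'uglify' to
// itself makes the task call itself until the call stack overflows.
grunt.registerTask('default', ['uglify']);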
