I am using docpad-plugin-grunt with several grunt tasks. I wonder what the best way is to run tasks over all docpad-generated HTML documents without specifying individual files in the gruntfile.js. Is there a routine where docpad hands all filenames over to grunt? Can docpad handle the iteration, or is a separate script needed?
I don't think grunt is the way to go for this; what you really want is to hook into the rendering process that docpad runs for each document. Typically, grunt is used to run a task at the end of the generation to do something that docpad doesn't do out of the box.
To intercept docpad's rendering process you probably want to handle the docpad render event, which you define in the events section of the docpad.coffee file.
Assuming you want to minify your html you might do something like this (more or less lifted from the docpad documentation):
render: (opts) ->
    # Prepare
    {inExtension, outExtension, templateData, file, content} = opts
    docpad = @docpad
    # Render if applicable
    if outExtension in ['html']
        opts.content = minifyFunction(content)
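Note that minifyFunction above is a placeholder, not something docpad provides. As a rough sketch of one way to fill it in (my assumption, using the html-minifier npm package), it could be defined in docpad.coffee like so:
htmlmin = require('html-minifier')  # assumed dependency: npm install html-minifier

minifyFunction = (html) ->
    # Collapse whitespace and strip comments; tune the options as needed
    htmlmin.minify html,
        collapseWhitespace: true
        removeComments: true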
For a new project I am bound to keep things webpack-only, and thus need to find a way to efficiently compile our stylesheets. Basically, in a very gulp-ish way:
gather all less-files including glob-patterns like src/**/*.less (css order may be arbitrary)
also allow import of css files like, say ../node_modules/vendor/**/*.css or 3rdparty/master.less
(If I have to set up a bogus.js entry point for that, fine...)
And with all of that, a typical gulp workflow:
transpile less to css
merge (concat) less and css
have cssnano do its optimization job, with specific cssnano options, e.g. orderedValues: false, normalizeWhitespace: true ...
write styles.css as final output
And of course:
have source-maps survive that chain for a styles.css.map
efficient watching and the usual lazy/incremental compilation (as gulp and webpack have it)
Something I do not need is css modules (css imported into js modules for 'scoped' use, coming out as hash-scoped selectors...)
How can a 'typical' less|css-processing toolchain be done in Webpack?
This SO question has my first attempt where I got stuck in config hell right in the middle after combining...
considerations so far (helpful or not)
I know that, to webpack, any resource including css or even fonts and images is a "module"... Rather than merging my css 'modules' with the actual js (only to painstakingly separate them again later), it might be wiser to have an entry point cssstub.js in parallel – also known as multi-compiler mode.
But that's really where my wisdom ends... doing a sequence of $things on a set of files in webpack seems really hard (unless it's a connected javascript module graph). I did find something on globbing, but that's not enough (merge css, cssnano, ...) and mostly I simply can't glue the pieces together...
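A rough sketch of the direction I'm imagining (using glob, less-loader, css-loader, postcss-loader with cssnano, and mini-css-extract-plugin on a current webpack; I have not verified this):
// webpack.config.js -- untested sketch, not a working setup
const glob = require("glob");
const MiniCssExtractPlugin = require("mini-css-extract-plugin");

module.exports = {
    // bogus entry: pull in every stylesheet via globbing; the js output is throwaway
    entry: glob.sync("./src/**/*.less").concat(glob.sync("./3rdparty/**/*.css")),
    output: { filename: "cssstub.js" },
    devtool: "source-map", // keep maps alive through the whole chain
    module: {
        rules: [
            {
                test: /\.(le|c)ss$/,
                use: [
                    MiniCssExtractPlugin.loader, // emit a real css file instead of js-injected styles
                    { loader: "css-loader", options: { sourceMap: true } },
                    {
                        loader: "postcss-loader", // cssnano runs here
                        options: {
                            postcssOptions: {
                                plugins: [["cssnano", { preset: ["default", { orderedValues: false, normalizeWhitespace: true }] }]],
                            },
                            sourceMap: true,
                        },
                    },
                    { loader: "less-loader", options: { sourceMap: true } },
                ],
            },
        ],
    },
    plugins: [new MiniCssExtractPlugin({ filename: "styles.css" })],
};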
I have used gulp to compile less and create the corresponding maps, as below.
The first step compiles the less and puts the generated css in a tmp folder:
var gulp = require('gulp');
var less = require('gulp-less');

gulp.task('compile-less', function () {
    return gulp.src('./*.less') // path to your less file(s)
        .pipe(less().on('error', function (err) {
            console.log(err);
        }))
        .pipe(gulp.dest('./tmp/'));
});
The second step cache-busts the generated css and creates the map files:
var sourcemaps = require('gulp-sourcemaps');
var CacheBuster = require('gulp-cachebust');
var cachebust = new CacheBuster();

gulp.task('build-css', ['clean'], function () {
    return gulp.src('./tmp/**/*.css')
        .pipe(sourcemaps.init())
        .pipe(cachebust.resources())
        .pipe(sourcemaps.write('./maps'))
        .pipe(gulp.dest('./compiled/css'));
});
If you want, you can add an additional step to concat the generated css, as sketched below.
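For example, a minimal concat step using gulp-concat (my addition; the paths are assumed from the tasks above) could look like:
var concat = require('gulp-concat');

gulp.task('concat-css', function () {
    // merge all compiled css into a single styles.css
    return gulp.src('./compiled/css/**/*.css')
        .pipe(concat('styles.css'))
        .pipe(gulp.dest('./compiled/css'));
});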
I am trying to learn about technologies like Grunt, Gulp, Webpack, and Browserify, but I don't feel there is much difference between them.
In other words, I feel Webpack can do everything that a task runner does. But I still see a huge number of examples where gulp and webpack are used together. What's the reason?
I might be taking things in the wrong direction. Am I missing anything? If so, what?
Grunt and Gulp are actually task runners, and they have differences like config-driven tasks versus stream-based transformations. Each has its own strengths and weaknesses, but at the end of the day, they pretty much help you create tasks that can be run to solve a larger build problem. Most of the time, they have nothing to do with the actual run time of the app; rather, they transform or put files, configs and other things in place so that the run time works as expected. Sometimes they even spawn servers or other processes that you need to run your app.
Webpack and Browserify are package bundlers. Basically, they are designed to run through all of a package's dependencies and concatenate their source into one file that (ideally) can be used in a browser. They are important to modern web development because we use so many libraries that are designed to run with Node.js and the V8 engine. Again, there are pros and cons, and different reasons some developers prefer one or the other (or sometimes both!). Usually the output bundles of these solutions contain some sort of bootstrapping mechanism to help you get to the right file or module in a potentially huge bundle.
The blurred line between runners and bundlers might be that bundlers can also do complex transformations or transpilations while they run, so they can do several things that task runners can do. In fact, between browserify and webpack there are probably around a hundred transformers that you can use to modify your source code. For comparison, there are at least 2,000 gulp plugins listed on npm right now. So you can see that there are clear (hopefully... ;)) definitions of what works best for your application.
That being said, you might see a complex project actually using both task runners and package bundlers at the same time or in tandem. For example, at my office we use gulp to start our project, and webpack is actually run from a specific gulp task that creates the source bundles we need to run our app in the browser. And because our app is isomorphic, we also bundle some of the server code.
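As an illustration of that setup (a sketch, not our actual build; the task name and config path are made up), running webpack from a gulp task can be as simple as calling webpack's Node API:
var gulp = require('gulp');
var webpack = require('webpack');
var webpackConfig = require('./webpack.config.js');

gulp.task('bundle', function (done) {
    // run webpack programmatically and signal gulp when it finishes
    webpack(webpackConfig, function (err, stats) {
        if (err) return done(err);
        console.log(stats.toString({ colors: true }));
        done();
    });
});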
In my humble opinion, you might want to get familiar with all of these technologies, because chances are you will see (and use) all of them in the course of your career.
I've just created my own task runner/bundler.
It's simpler to use than gulp and probably webpack (although I've never used webpack).
It's very simple and has babel, browserify, uglify, minify, and handlebars out of the box.
The syntax looks like this:
const Autumn = require("autumn-wizard");
const w = new Autumn();

const useSourceMap = true; // referenced below but missing from the original snippet
//----------------------------------------
// CSS
//----------------------------------------
var cssFiles = [
    './lib/pluginABC/src/css/**/*.{css,scss}',
];
w.forEach(cssFiles, srcPath => {
    var dstPath = w.replace('/src/', '/dist/', srcPath);
    dstPath = w.replace('.scss', '.css', dstPath);
    dstPath = w.replace('.css', '.min.css', dstPath);
    w.minify(srcPath, dstPath, {
        sourceMap: useSourceMap,
    });
});
//----------------------------------------
// BUNDLE THE JS MODULE
//----------------------------------------
var srcPath = "./lib/pluginABC/src/main.js";
var dstPath = "./lib/pluginABC/dist/bundled.min.js";
w.bundle(srcPath, dstPath, {
    debug: useSourceMap,
});
//----------------------------------------
// CREATE THE HANDLEBARS TEMPLATES
//----------------------------------------
var tplPaths = [
    "./lib/pluginABC/src/templates/**/*.hbs",
];
dstPath = "./lib/pluginABC/dist/templates/bundled.js";
w.precompile(tplPaths, dstPath);
And the doc is here: https://github.com/lingtalfi/Autumn
Hopefully it helps.
As we all know, Meteor's initial payload sent to the client comprises (in production) a concatenated javascript file containing the Meteor platform, packages, and all templates parsed into Meteor's reactive templating system. Server-side rendering, where the templates are rendered to HTML and sent to the client in the initial payload, is on its way but doesn't have an expected release date yet.
I'm looking for a way to "hack" or approximate server-side rendering given the available functionality in Meteor 0.8.x. Specifically, I want to:
enable the page to render its initial contents without first waiting for the several hundred KB Meteor platform javascript file to be downloaded and parsed.
alternatively, modify Meteor so it only sends the templates it needs to render the initial request in the javascript payload, and fetches the remaining templates once render is complete.
The use case is http://q42.com. I recognise Meteor isn't the best fit for static websites like this one but I want to try and see how far I can get anyway. Right now the Meteor platform JS file is over 600 KB in size (±200 KB gzipped) and I'd like to reduce this size if possible.
Note: I'm aware of and already using Arunoda's fast-render package, which is intended to send data with the initial payload. In this case I want to cut down on time-to-first-render by also getting the templates themselves down faster.
This is a little bit tricky, but there are some things you could do. This may not be the best way to do it, but it could help you get started.
Meteor is built with many packages by default, and some of them may not be needed. You can remove standard-app-packages and manually add back just the packages that you need and use, from the list here: https://github.com/meteor/meteor/blob/devel/packages/standard-app-packages/package.js
To cut down the templates, you would have to include only the bare templates that you use, include the other templates separately, and perhaps send the template information down via a collection, using a live observer handle to initialize the templates.
You would have to 'render' the templates on the server side, or store them manually in your collection using Spacebars.compile from the spacebars-compiler package, which is a little tricky but can be done decently.
This should give you a rough idea. I'm not sure how to get past the 'eval' bit of it, though:
HTML file in /private/template.html
<template name="test">
Hello {{name}}
</template>
JS file in /private/template.js
Template.test.name = function() { return "Bob" }
Server side code
var collection = new Meteor.Collection("templates");

var templateData = Assets.getText("template.html");
var templateJs = Assets.getText("template.js");

// Compile the template markup to a render function, serialized as a string
var compiled = Spacebars.compile(templateData).toString();

collection.insert({templateName: "test", data: compiled, js: templateJs});
Client Side code
collection.find().observeChanges({
    added: function (id, fields) {
        var template = fields.data,
            name = fields.templateName,
            js = fields.js;

        Template[name] = UI.Component.extend({
            kind: name,
            render: eval("(" + template + ")"), // parenthesize so eval returns the function
        });

        // run the template's helper code
        eval(js);
    }
});
Then just subscribe to the collection asking for your template, and it should exist. If you use iron-router, I think (not sure) you could make the subscription wait before the template is rendered, so you could get this to work.
Again, this is just a 'hacky' solution. One thing I personally don't like about it is the use of eval, but the javascript needs to run somehow...
You could loop through files in a particular folder using fs = Npm.require('fs') to render each template too.
One alternative would be to inject a <script> tag containing the compiled template and its helpers, so the template gets defined without an explicit eval.
I need to run grunt-bump, which bumps the version number in package.json, and then run grunt-xmlpoke to update a config file with the new version number.
So I have tried a couple of things. Inside the grunt.initConfig I run bump, then I run xmlpoke.
1) xmlpoke takes grunt.file.readJSON('package.json').version
or
2) after bump I run a custom task that adds the new version to a grunt option and xmlpoke takes a value of grunt.options("versionNumber")
In both of these attempts the xml result is the pre-bump version. So xmlpoke is getting its values before the tasks are run and then uses them when its own task is called. But I need it to take the value that is the result of a previous task.
Is there any way to do this?
Ok, I have figured out the (somewhat obvious) solution.
grunt-bump updates package.json, but via its updateConfigs option it can also update the pkg config object that is typically read into initConfig with grunt.file.readJSON at startup. So in the setup of the bump task you specify
{
    updateConfigs: ['pkg']
}
Then in the xmlpoke config I can do
{ xpath: 'myxpath', value: 'blablabla/<%= pkg.version %>' }
and this works. What I was doing before was
{ xpath: 'myxpath', value: 'blablabla/' + grunt.options.versionNumber }
where I had set the version number in a previous task after the bump. Or
{ xpath: 'myxpath', value: 'blablabla/' + grunt.file.readJSON('package.json').version }
Neither of those worked, because both expressions are evaluated once, when initConfig is processed and before bump has run, whereas the <%= %> template is evaluated lazily when the task executes. I guess I was just getting too smart for my own good, as <%= %> is the more common and typical way of accessing parameters from within initConfig.
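For reference, the complete setup might look roughly like this (a sketch; the xpath, target file, and task names are illustrative, not from my actual project):
module.exports = function (grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        bump: {
            options: {
                updateConfigs: ['pkg'] // refresh pkg after package.json is bumped
            }
        },
        xmlpoke: {
            version: {
                options: {
                    xpath: '/configuration/version',
                    value: '<%= pkg.version %>' // evaluated lazily, after bump has run
                },
                files: { 'app.config': 'app.config' }
            }
        }
    });

    grunt.loadNpmTasks('grunt-bump');
    grunt.loadNpmTasks('grunt-xmlpoke');
    grunt.registerTask('release', ['bump', 'xmlpoke']);
};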
Anyway, there you have it. Or I have it.
I would like to move all my output files to a custom location: a Run directory created based on the date and time at run time. The output folder named by datetime is created in the test setup.
I have a function "Process_Output_files" which will move the files to the Run folder (Run1, Run2, Run3 folders).
I have tried using the -d argument and calling "Process_Output_files" as the suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is still using these files.
If I don't use the -d argument, the output files are saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in below folder:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their custom folders. If this is your desire, it can be accomplished at runtime and you won't have to move them as part of your post processing. This will not work in RIDE, unfortunately, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion one shouldn't be using it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, you wish to run, the script you use to do this could be something like:
from time import gmtime, strftime
import os

# strftime returns a string representation of a date-time tuple.
# gmtime returns the date-time tuple representing Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2" % (dts,)
os.system(cmd)
As an aside, if you do intend to do post processing of your files using rebot, be aware you may not need to create intermediate log and report files. The output.xml files contain everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE
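For example (the folder and test names are placeholders), the run plus a later rebot pass could look like:
pybot --log NONE --report NONE --outputdir Run1 test2
rebot --outputdir Run1 Run1/output.xml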
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to a teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define its close() method to call your moveFiles() function, and tell your run to report to the listener with the argument --listener myListener. A sketch of such a listener follows (my illustration; moveFiles and the destination folder are placeholders):
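# myListener.py -- take it into use with: --listener myListener.py
import shutil

class myListener(object):
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.outputs = []

    # Robot Framework reports each finished result file path via these events
    def output_file(self, path):
        self.outputs.append(path)

    def log_file(self, path):
        self.outputs.append(path)

    def report_file(self, path):
        self.outputs.append(path)

    def close(self):
        # last event fired; the files are no longer locked at this point
        self.moveFiles()

    def moveFiles(self):
        for path in self.outputs:
            shutil.move(path, "Run1")  # destination folder is illustrative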
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
In any case, you can write a custom run script that handles moving the files after the test execution; at that point the files are no longer being used by pybot.