Shifting from .bat to grunt - gruntjs

I am new to GruntJS. From the tutorials and presentations I have seen, I believe it is awesome.
Currently, we are using batch scripts in our web + embedded project, which performs the following tasks:
Merges all the JS files into one.
Merges all the CSS files into one.
Kills the existing .EXE on which our project runs. It's basically a simulator's EXE which loads and runs our website; our website is packaged as a ZIP file.
Deletes the existing ZIP file.
Creates a new ZIP file containing folders such as "html", "lsp" (Lua Server Pages), "images", "js" (containing only the single merged JS file), and "css" (containing only the single merged CSS file).
Starts the .EXE. Once loaded, the EXE picks up the ZIP file from a specified directory.
I understand the merging can be achieved via GruntJS, but I am not sure about starting/killing the EXE. It would be great if somebody could give me pointers on how to get started. Once I am certain about the process, I can convince my boss.
Thanks for reading.

Having a grunt-like script that launches your server isn't good practice. In an ideal world, you would separate the build and package phases from launching the server.
But anyway, there are either Grunt plugins to do that, or the vanilla child_process module of Node, assuming you use Node to run Grunt.
Using grunt-exec it would look like this:
exec: {
  start_server: {
    command: 'program.exe'
  }
}
Using the vanilla approach:
var spawn = require('child_process').spawn;

var prog = spawn('program.exe');
prog.on('close', function (returnCode) {
  console.log('program.exe terminated with code', returnCode);
});
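For the remaining duties of the batch script, the usual plugins would be grunt-contrib-concat (merging the JS and CSS), grunt-contrib-clean (deleting the old ZIP), and grunt-contrib-compress (building the new ZIP). Below is a minimal, untested sketch of the whole pipeline; all file names, paths, and the simulator EXE name are placeholders, and taskkill is one way to kill a running process on Windows:

// Gruntfile.js - a sketch, not a drop-in config; adjust paths and names
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      js:  { src: ['src/js/**/*.js'],   dest: 'build/js/app.js'   },
      css: { src: ['src/css/**/*.css'], dest: 'build/css/app.css' }
    },
    clean: {
      zip: ['deploy/site.zip'] // delete the previous package
    },
    compress: {
      zip: {
        options: { archive: 'deploy/site.zip', mode: 'zip' },
        files: [{ expand: true, cwd: 'build/', src: ['**/*'] }]
      }
    },
    exec: {
      // taskkill stops a Windows process by image name; "|| exit 0"
      // keeps the task from failing when the EXE isn't running yet.
      kill_server:  { command: 'taskkill /F /IM simulator.exe || exit 0' },
      start_server: { command: 'start simulator.exe' }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-contrib-compress');
  grunt.loadNpmTasks('grunt-exec');

  grunt.registerTask('default',
    ['exec:kill_server', 'concat', 'clean', 'compress', 'exec:start_server']);
};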

Related

Silverstripe 4 - CSS and JS Requirements. How/what populates the /resources directory?

I have a SS3.x module that I have forked, pulled down from its fork via Composer, and started porting to SS4. So far so good, except when it comes to Requirements.
I'm using the Requirements format found in existing code in another module, which has a colon-separated format as follows:
Requirements::javascript('company/mymodule:javascript/SortableUploadField.js');
This file exists in the module at /vendor/company/mymodule/javascript/SortableUploadField.js. However, on page load I get a 404 in the console, as SS is looking for this file at /resources/company/mymodule/css/SortableUploadField.css, and this does not exist.
I added the following to my composer.json file for the module as I saw other modules doing this:
"extra": {
"installer-name": "sortableuploadfield",
"expose": [
"css",
"javascript"
]
},
Then I ran a composer update, but the /resources directory does not appear for this module (other modules are there), and I can't find any information online on how this is supposed to work.
Edit: As a side note, I wonder if the documentation for Requirements is misleading? It omits this caveat with modules and any mention of the resources directory at all. If that documentation is only meant to cover working with JS/CSS in normal mysite development, it is still a bit confusing, because the code samples use this format everywhere, and surely that wouldn't be a direct URL to something in /vendor.
Found this after tracing code that basically used the /resources directory. The short answer to my query is simply to run:
composer vendor-expose
This calls the VendorExposeTask, which does the copying. The only other place I found this task used is in the VendorPlugin install method, so I assume that, apart from the above command, the only time SS actually does this on your behalf is on the initial install of a module.

SBT - watch different sources depending on Task

In a cross-built Scala.js server/client project, I want changes to some sources to restart the server, and changes to other sources to trigger the packaging process without a restart. Different tasks will not help, because each would simply do one or the other, and I want both at the same time.
In more detail:
I've got a Scala.js crossProject. I'm using the following to ensure the server can serve the built JavaScript:
val app = crossProject.settings(...)

lazy val appJS = app.js
lazy val jsFile = fastOptJS in (appJS, Compile)

lazy val appJVM = app.jvm.settings(
  (resources in Compile) += jsFile.value.data,
  (resources in Compile) += jsFile.value.data.toPath.resolveSibling(jsFile.value.data.name + ".map").toFile,
  (resources in Compile) += (packageJSDependencies in (appJS, Compile)).value
)
If I run ~ appJVM/compile:packageBin::packageConfiguration then changes to the JavaScript source are immediately compiled and placed in the appJVM target/classes dir, so a refresh of the browser gets my new code - brilliant.
However, I would also like to use the sbt-revolver plugin to restart the server if I edit server-side code. But there's the rub - if I use ~ ;appJVM/compile:packageBin::packageConfiguration;appJVM/reStart then changes to the client side source restart the server, which I don't want. But if I remove the client side project from the transitive watch then it no longer notices if I change the client side project.
Is there a way to define watchTransitiveSources differently per task?
~ is actually a command that watches the transitive sources of the base project and then synchronously runs everything passed to it as an argument when those change, before re-running the original input (including the ~). It makes no information about what has changed available to those command-line inputs (it is difficult to see how it could).
Consequently, the solution I came to is to write a new watch command. It also needs to watch all sources, but then conditionally chooses what to do based on which files have changed.
I've hacked together something as ugly as anything that does this, but I will look at making it more legible, general, tested, and a plugin. In the meantime, anyone trying to follow my path can use this public gist: https://gist.github.com/LeisureMonitoringAdmin/0eb2e775e47b40f07d9e6d58d17b6d52
Are you sure you are using sbt-resolver and not sbt-revolver? Because the second one allows controlling the triggered resources using:
watchSources - defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
watchTransitiveSources - then combines the watchSources for the current project and all execution and classpath dependencies (see the .scala build definition for details on inter-project dependencies).
Source: http://www.scala-sbt.org/0.13/docs/Triggered-Execution.html

Building multiple outputs through the same build process with external config

I'm trying to leverage GruntJS to create a build process that is uniform across multiple teams and projects at my company. The idea here is that we have a config file for each application that only specifies the files that need to be processed and which bundles they need to be concatenated into at the end. The build process would be the same for all apps: pick up the config for the app, then process the files in each bundle using a uniform build process.
For Example:
asset.json config file specifies two bundles, "main" with 1.js + 2.js and "secondary" with 2.js and 3.js
Build process says for each bundle, preprocess, minify, then concatenate into a js file based on the bundle
Get output of "main.js" and "secondary.js"
The problem I'm running into is that Grunt takes a "static" configuration and executes it. I've already abstracted out the building of the configuration so that I can add chunks dynamically, but right now I don't see a better way forward than literally looping over each bundle, building a unique task for each section of the build process for each bundle, building up queues of tasks to execute, and then running each queued task during the build. It's definitely possible, but it's a lot of manual work and seems prone to breaking. Is there a way to just execute each task in order as I loop over the bundles? Is there a better way to achieve the same net result of config + source in, N bundles out?
I want to be clear that I am fully aware that Grunt CAN build multiple files. What I'm trying to do is separate the specification of the bundles from the build steps themselves. Grunt core bakes these two things together, which means each project would have to go in and alter its build steps rather than an external configuration. As per the example above, I should be able to swap out the asset.json file specified in step 1 for any config file that has 1, 2, 3, ..., N bundles with N files in each one (potentially also specifying a "type", like scripts or styles).
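One way to get that separation (a hedged sketch, not from the answers below; the asset.json keys and the dist/ output path are assumptions) is to read the bundle config at runtime and generate one task target per bundle. Invoking a multi-target task such as concat with no target then runs every generated target in order:

// Gruntfile.js - hypothetical sketch: one concat target per bundle in asset.json
module.exports = function (grunt) {
  // e.g. { "main": ["1.js", "2.js"], "secondary": ["2.js", "3.js"] }
  var bundles = grunt.file.readJSON('asset.json');
  var concat = {};

  Object.keys(bundles).forEach(function (name) {
    concat[name] = { src: bundles[name], dest: 'dist/' + name + '.js' };
  });

  grunt.initConfig({ concat: concat });
  grunt.loadNpmTasks('grunt-contrib-concat');

  // Running "grunt concat" with no target executes every bundle's target.
  grunt.registerTask('default', ['concat']);
};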
Edit 10/12/13: The Nitty Gritty posted an article yesterday that might be another approach to tackling your issue.
This can be done by passing the bundle name you want to build as a command-line argument and loading the whole assets file in your Grunt config. Please note this is example code; I have not tested it, so you may need to adjust paths etc. for your case.
Start by updating the assets.json file to a plain JavaScript file (assets.js), reshaping it like so:
module.exports = {
  main: ["1.js", "2.js"],
  secondary: ["2.js", "3.js"]
};
Next, you can pass a command line argument to Grunt, which should specify one of the module names in assets.js. Example:
grunt --bundle=main
Now, you'll need to load in the assets.js file in the Gruntfile:
var assets = require('./assets'); // assuming assets.js is on the same level as your Gruntfile
And then you can get the argument name by using:
var bundle = grunt.option("bundle");
Now you can use bundle as your output file name and assets[bundle] to get the array of files for that bundle.
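Tying it together, the concat config might look like this (again an untested sketch; the dist/ output path is an assumption):

grunt.initConfig({
  concat: {
    bundle: {
      src: assets[bundle],
      dest: 'dist/' + bundle + '.js'
    }
  }
});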

RequireJS with multiple pages -- using optimizer

I have my JavaScript organized as described here: https://stackoverflow.com/a/10816983/83897.
I have a JavaScript-heavy ASP.NET application that has multiple different pages (vs. being a single-page application). Each page has different dependencies, so I have a per-page .js file (page1.js, page2.js, etc.). Each has a require() call, declaring its dependencies:
require(['jquery', 'page1Module'], function ($, module) {
  // page1-specific stuff here
});
This works fine. What I'm wondering is, how might the RequireJS build process work? I think I want a per-page "build" .js file (e.g. page1-build.js, page2-build.js, etc.). Is there existing software I can leverage?
The process might look like this:
Compile all dependencies for a given script into one build.js file in a temporary directory.
Calculate an MD5 fingerprint for the compiled file.
Compare that fingerprint with the comparable file in public/assets.
Create an in-memory RequireJS manifest, mapping each module to the compiled file. Append this manifest to the compiled file.
Somehow make production use the build file.
EDIT: After some thought, I'm thinking the RequireJS optimization using Node + r.js will just be part of a larger asset-building process, where the asset building relies on some other, third-party library. The RequireJS optimization will simply be used for certain JavaScript dependencies (i.e. the JavaScript files for each page, including discovered dependencies), perhaps specified in some XML config.
You can create multiple optimized files by specifying the modules in the build profile:

{
  modules: [
    {
      name: "main"
    },
    {
      name: "page1",
      include: ["dep1", "shim2"],
      exclude: ["main"]
    }
  ]
}
Each entry will generate an optimized .js file.
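For a multi-page app, the profile typically also sets appDir/baseUrl/dir so every page's module is optimized in one pass. A hedged sketch (the directory and module names are assumptions), run with node r.js -o build.js:

// build.js - hypothetical multi-page r.js build profile
({
  appDir: 'js',   // source directory containing all pages' modules
  baseUrl: '.',   // module lookup root, relative to appDir
  dir: 'build',   // output directory for the optimized tree
  modules: [
    { name: 'common' },
    { name: 'page1', exclude: ['common'] },
    { name: 'page2', exclude: ['common'] }
  ]
})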
More info here: How to use RequireJS build profile + r.js in a multi-page project

web.config - auto generate a release version

Simple task, but for some reason no simple solution just yet.
We've all got web.config files - and I haven't worked anywhere yet that doesn't have the problem where someone yells across the room "Sh*t, I've just uploaded the wrong web.config file".
Is there a simple way of being able to auto generate a web.config file that will contain the right things for copying to release? An example of these being:
Swap connection string over to use live database
Change
Switch over to use the live/release logging system, and live/release security settings
(in our case we need to change the SessionState mode to InProc from StateServer - this isn't normal)
If you have others, let me know and I'll update this here so it's easy for someone else to find.
Maintaining two config files works, but is a royal pain, and is usually the reason something has gone wrong while you're pushing things live.
Visual Studio 2010 supports something like this via web.config transformations (Web.Debug.config / Web.Release.config files applied per build configuration).
How are you deploying your builds? In my environment this used to be a pain point too, but now we use CruiseControl.NET and script our builds in NAnt. In our script we detect the environment and have different versions of the config settings for each environment. See http://www.mattwrock.com/post/2009/10/22/The-Perfect-Build-Part-3-Continuous-Integration-with-CruiseControlnet-and-NANT-for-Visual-Studio-Projects.aspx for my blog post on the subject of using CruiseControl.NET for build management. Skip to the end for a brief description of how we handle config versions.
In my most recent project I wrote a PowerShell script which loaded the web.config file, modified the necessary XML elements, and saved the file back out again. A bit like this:
param($mode, $src)

$ErrorActionPreference = "Stop"
$config = [xml](Get-Content $src)

if ($mode -eq "Production")
{
    $config.SelectSingleNode("/configuration/system.web/compilation").SetAttribute("debug", "false")
    $config.SelectSingleNode("/configuration/system.web/customErrors").SetAttribute("mode", "Off")
    $config.SelectSingleNode("/configuration/system.net/mailSettings/smtp/network").SetAttribute("host", "live.mail.server")
    $config.SelectSingleNode("/configuration/connectionStrings/add[@name='myConnectionString']").SetAttribute("connectionString", "Server=SQL; Database=Live")
}
elseif ($mode -eq "Testing")
{
    # etc.
}

$config.Save($src)
This script overwrites the input file with the modifications, but it should be easy to modify it to save to a different file if needed. I have a build script that uses web deployment projects to build the web app, outputting the binaries minus the source code to a different folder - then the build script runs this script to rewrite web.config. The result is a folder containing all the files ready to be placed on the production server.
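Assuming the script above is saved as, say, Set-WebConfig.ps1 (the name is hypothetical), the build script would invoke it like:

.\Set-WebConfig.ps1 -mode Production -src .\web.config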
XSLT can be used to produce parameterized XML files. Web.config being an XML file, this approach works.
You can have one .xslt file (containing XPath expressions).
Then there can be different XML files, like:
1. debug.config.xml
2. staging.config.xml
3. release.config.xml
Then, in a post-build event or using some MSBuild tasks, the XSLT can be combined with the appropriate XML file to produce the different web.configs.
A sample debug.config.xml file could be:
<Application.config>
    <DatabaseServer></DatabaseServer>
    <ServiceIP></ServiceIP>
</Application.config>
The .xslt can have XPaths referring to the XML given above.
Have a look at the XSLT transformation. This code can be used in some MSBuild or NAnt tasks, and different web.configs can be produced depending on the input config XML files.
This way you just have to manage the XML files.
There is one overhead: the .xslt file, which mirrors web.config, needs to be maintained, i.e. whenever a tag is added to web.config, the XSLT also needs to be changed.
I don't think you can 100% avoid this.
My last years of work have shown over and over: where humans work, there are failures.
So, here are three ideas from my last company. Not the best, maybe, but better than nothing:
Write a batch file or a C#.NET application that changes your web.config on a double-click
Write a "ToDo on release" list
Do pair-releasing (== pair programming while releasing :))
