Concatenating javascript files using closure compiler - google-closure-compiler

I am trying to use the Closure Compiler to concatenate JavaScript files, possibly before code optimization for better optimization results. I am new to Closure and I can't find a way to concatenate JS files with it. It would be great if it were possible to write a Closure script to concatenate files, move them between folders, etc.
Here's an example of what I'm looking for. These are three JS files:
//A.js
require ("B");
require ("C");
function x(){...} //Calls y() and calls z()
//B.js
require ("C");
export function y(){...} //Calls z()
//C.js
export function z(){...}
I want to concatenate B.js and C.js into one file so that it looks like this:
//A.js
require ("B");
function x(){...} //Calls y() and calls z()
//B.js
export module B{
export function y(){...} //Calls z()
export function z(){...}
}
After this restructuring I would like Closure to optimize and obfuscate A.js and B.js.

The Closure Compiler doesn't yet support ECMAScript 6 modules, and I believe that ES6 dropped the "module" keyword. Did you mean to use ES6 syntax?
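If the goal is simply to let the compiler see all three files as one dependency graph, the pre-ES6 way to express this is with Closure's own goog.provide/goog.require primitives, which the compiler does understand. A minimal sketch (the app.* namespaces are made up for illustration):
// a.js
goog.provide('app.a');
goog.require('app.b');
app.a.x = function() {
  return app.b.y() + app.b.z();
};
// b.js -- B.js and C.js merged into a single provided namespace
goog.provide('app.b');
app.b.y = function() {
  return app.b.z();
};
app.b.z = function() {
  return 42;
};
When both files are passed to a single compilation, the compiler optimizes and renames across the whole graph, which gives much the same effect as concatenating by hand.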

Related

Deno evaluate expression/script into memory

Is there currently a way in Deno to evaluate a value? If I had a long string containing a compacted script, could I evaluate it and initialize its logic into, say, a function? This would be helpful for more dynamic scripting. If eval specifically isn't possible, is there a preferred alternative? I don't want to have to use the CLI or pipe values into the program.
There is an eval() function available in Deno, though I could not find it in the documentation.
Try this:
deno eval "console.log(eval('1 + 1'))"
I believe it behaves like its JavaScript counterpart, and here is the TypeScript definition I could find:
/**
* Evaluates JavaScript code and executes it.
* @param x A String value that contains valid JavaScript code.
*/
declare function eval(x: string): any;
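Since eval() returns the value of the last expression, it can also turn a string of code into a callable function, which is closer to what the question asks for. A minimal sketch (the string and variable names are made up):
// turn a compacted script string into a function at runtime
const src = "(n) => n * 2";
const double = eval(src); // wrapping the arrow in parentheses makes it an expression
console.log(double(21));  // 42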

Protect a function from being overridden in ZSH

I have a file where my ZSH functions are defined, and I source it from my zshrc.
There is a set of helper functions which are used only by other functions in that file.
My question is: how can I keep readable names for those helpers (such as ask, etc.) and be sure that they will not be overridden later in other sourced files?
So, for example I have two functions:
helper() {
  # do something
}

function-i-want-to-use-in-shell() {
  helper # call helper, I want to be sure that it is 'my' helper
  # do something more
}
I want to protect helper so that the functions declared in that file always call 'my' helper.
It would be nice if I could wrap those functions in, for example, a subshell ( ) and then export function-i-want-to-use-in-shell to the parent shell (I know this is impossible).
So I am looking for a convenient way to create something like a scope of their own for those functions, making some of them global and some local.
[EDIT]
I think another example will give a better explanation of the behaviour I want to achieve.
For this second example I have two files: file1.sh and file2.sh.
file1.sh is the same as the example above; file2.sh defines another function named helper. The helper in file1.sh is just a function for local usage (within that file), just a snippet of code. Later, in the shell, I only want to use function-i-want-to-use-in-shell from file1.sh and helper from file2.sh. I do not want helper to be readonly; I just want it to be used locally. Maybe I can make something like a "namespace" for the functions in file1.sh, or somehow achieve JavaScript-like scope lookup behaviour within that file. The only way I see to do it now is to give up the goal of keeping good, readable, self-explanatory names for my helper functions and either give them names that are unlikely to be invented by someone else or use a prefix for those functions. I just wanted to write something like if ask "question"; then rather than if my-local-ask "question"; then in my other functions, and be sure that nothing will break if someone (or I myself) later defines another function named ask.
It's a little heavy-handed, but you can use an autoloaded function to, if not prevent overriding a function, at least "reset" it easily before calling it. For example:
# Assumes that $func_dir is a directory in your fpath.
% echo 'print bar' > $func_dir/helper
% helper () { print 9; }
% helper
9
% unset -f helper
% autoload helper
% helper
bar

Function signature not found despite showing with methods(...)

I am new to Julia, so this might be trivial.
I have a function definition within a module that looks like this (using URIParser):
function add!(graph::Graph,
              subject::URI,
              predicate::URI,
              object::URI)
    ...
end
Outside of the module, I call:
add!(g, URIParser.URI("http://test.org/1"), URIParser.URI("http://test.org/2"), URIParser.URI("http://test.org/1"))
Which gives me this error:
ERROR: no method add!(Graph,URI,URI,URI)
in include at boot.jl:238
in include_from_node1 at loading.jl:114
at /Users/jbaran/src/RDF/src/RDF.jl:79
Weird, because I can see a matching signature:
julia> methods(RDF.add!)
# 4 methods for generic function "add!":
add!(graph::Graph,subject::URI,predicate::URI,object::Number) at /Users/jbaran/src/RDF/src/RDF.jl:29
add!(graph::Graph,subject::URI,predicate::URI,object::String) at /Users/jbaran/src/RDF/src/RDF.jl:36
add!(graph::Graph,subject::URI,predicate::URI,object::URI) at /Users/jbaran/src/RDF/src/RDF.jl:43
add!(graph::Graph,statement::Statement) at /Users/jbaran/src/RDF/src/RDF.jl:68
At first I thought it was my use of object::Union(...), but even when I define three functions with Number, String, and URI, I get this error.
Is there something obvious that I am missing? I am using Julia 0.2.1 x86_64-apple-darwin12.5.0, by the way.
Thanks,
Kim
This looks like you may be getting bitten by the very slight difference between method extension and function shadowing.
Here's the short of it. When you write function add!(::Graph, ...); …; end;, Julia looks at just your local scope and sees if add! is defined. If it is, then it will extend that function with this new method signature. But if it's not already defined locally, then Julia creates a new local variable add! for that function.
As JMW's comment suggests, I bet that you have two independent add! functions: Base.add! and RDF.add!. In your RDF module, you're shadowing the definition of Base.add!. This is similar to how you can name a local variable pi = 3 without affecting the real Base.pi in other scopes. But in this case, you want to merge your methods with the Base.add! function and let multiple dispatch take care of the resolution.
There are two ways to get the method-extension behavior:
1) Within your RDF module's scope, say import Base: add!. This explicitly brings Base.add! into your local scope as add!, allowing method extension.
2) Explicitly define your methods as function Base.add!(graph::Graph, …). I like this form as it more explicitly documents your intention to extend the Base function at the definition site.
This could definitely be better documented. There's a short reference to this in the Modules section, and there's currently a pull request that should be merged soon that will help.

Closure compiler mixes variable names

I have a problem where the Closure Compiler renames a global variable to something like x.sa.xa, but in all the functions where that global variable is referenced, the compiler renames it to something else, like H.sa.xa.
When I view the HTML page I get a JavaScript TypeError: H.sa.xa is undefined.
// Top-level namespace for all the code
var nam = nam || {};

(function($, nam) {
  goog.provide('nam.jsConfig');

  nam.jsConfig.cookies = {"RECENT_ITEMS": "recentitems"};
})($, nam);

(function($, nam) {
  goog.provide('nam.util.cookie');

  nam.util.cookie.readMyCookie = function () {
    var ritems_cookie = nam.util.cookie.JSONCookie.get(nam.jsConfig.cookies['RECENT_ITEMS']);
  };
})($, nam);
Closure Compiled Code:
x.sa = {};
x.sa.xa = {RECENT_ITEMS:"recentitems"};
H.a = {};
H.a.cookie = {};
H.a.Tm = function() {
  var a = H.a.cookie.ja.get(H.sa.xa.RECENT_ITEMS);
};
For some reason the Closure Compiler is referencing H.sa.xa.RECENT_ITEMS instead of x.sa.xa.RECENT_ITEMS
Any reason why the compiler is doing this?
The only way I can interpret your question is that one of two things is happening:
1) There is an issue with the Closure Compiler's obfuscation and minimization of the code, or
2) The error you are seeing comes from JavaScript that runs outside of the code compiled by the Closure Compiler and references a compiled variable directly.
If it is the former, you should isolate the case that is causing variable misalignment and submit it as a bug to Google. All of us using the Closure Compiler would greatly appreciate it.
If instead, as I suspect, it is the latter, you are most likely not exporting the global variable you wish to use outside of the compiled code. The easiest way to do this is to call the goog.exportSymbol() function to make the global variable available outside of the code assembled by the Closure Compiler. For example, if you wished to access the property sandwich.meat.Ham in compiled mode from non-compiled code, you could do the following:
goog.exportSymbol('sandwich.meat.Ham', sandwich.meat.Ham);
Then you could have some code that exists outside of your compiled code that references the exported variable:
(function() {
  var meat = new sandwich.meat.Ham();
})();
Let me guess what you are doing: compiling each file independently in ADVANCED mode. If so, this isn't how ADVANCED mode works. In ADVANCED mode, if you want to share variables and properties between compilation jobs, you need to export them.
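For example, a single ADVANCED compilation over both files would look something like this (file names are illustrative):
java -jar compiler.jar \
  --compilation_level ADVANCED_OPTIMIZATIONS \
  --js jsconfig.js \
  --js cookie.js \
  --js_output_file app.min.js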
There are much more significant issues in the code example you provided. For one,
goog.provide('nam.util.cookie');
was turned into
H.a = {};
H.a.cookie = {};
Yet later this code:
nam.util.cookie.readMyCookie = function () {...
was turned into
H.a.Tm = function() {...
Where one would expect it should be
H.a.cookie.Tm = function() {...
Additionally, the fact that you use nam as the base namespace for both halves of the uncompiled code and that it gets turned into separate x and H namespaces, respectively, also suggests more is at play. Some suggestions:
1) If you wish to use the module pattern, put the provide/require statements outside of the module.
2) Don't manually create namespaces with stuff like var nam = nam || {}; provide does this for you already.
3) As others have mentioned, both files, the one containing nam.jsConfig and the one containing nam.util.cookie, should be included in a single compilation.
4) Make sure you goog.require('nam.jsConfig') in the file with nam.util.cookie.readMyCookie, to ensure the dependency requirements are met (see the sketch below).
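Putting those suggestions together, the cookie file might look something like this (a sketch only; JSONCookie is assumed to be defined elsewhere, as in the question):
// cookie.js
goog.provide('nam.util.cookie');
goog.require('nam.jsConfig');
(function($, nam) {
  nam.util.cookie.readMyCookie = function() {
    return nam.util.cookie.JSONCookie.get(nam.jsConfig.cookies['RECENT_ITEMS']);
  };
})($, nam);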
FWIW, we use Closure in an extensive application with hundreds of files containing interdependencies like this. I would strongly suspect that the issue lies not with the tools but with how they are being used.

Javascript: sync to async converter libs

1) Which is better, streamlinejs (https://github.com/Sage/streamlinejs) or narrative (http://www.neilmix.com/narrativejs/)? Are there any other libs?
2) How do any of these libraries even work?
(I read the docs; I am looking for a simplified explanation of what's going on behind the scenes.)
As far as question #2 goes, these tools generally do three things (a minimal sketch follows the list):
1) parse the JavaScript into an abstract syntax tree (AST)
2) transform the AST
3) stringify the transformed tree back into JavaScript
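In code, steps 1 and 3 look roughly like this with the uglify-js 2.x AST API (an illustrative sketch; the exact API differs between uglify versions):
// parse -> walk -> print, using uglify-js 2.x
var UglifyJS = require('uglify-js');
var code = "console.log(fs.readFile('input.js', _).toString('utf-8'));";
var ast = UglifyJS.parse(code);                       // 1) parse into an AST
ast.walk(new UglifyJS.TreeWalker(function(node) {     // 2) visit nodes; a converter would rewrite them
  if (node instanceof UglifyJS.AST_Call) {
    // e.g. detect calls whose argument list contains the `_` placeholder
  }
}));
console.log(ast.print_to_string({ beautify: true })); // 3) stringify back to JavaScript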
I wrote a partial converter as a learning experience a while back. I used uglify.js to parse into an AST and then the tree walker that lib provides to do the transformations. The transformations were general-purpose and produced code that looked like a state machine, where each step started with a sequence of zero or more sync actions and ended with an async action. E.g. this simple script:
var fs = require('fs');
console.log(fs.readFile('input.js', _).toString('utf-8'));
would get converted to this:
var fs, $v_0;

function s_0() {
  fs = require("fs");
  fs.readFile("input.js", function(err, res) {
    if (err) s_err(err); else {
      $v_0 = res;
      s_1();
    }
  });
}

function s_1() {
  console.log($v_0.toString("utf-8"));
}

s_0();
I imagine that streamline and the like do something very similar. Certain structures (loops, try/catch) need special handling, but the general approach is the same: convert into a state machine.
The issues with this approach that I found were:
1) It's not a local problem, i.e. any async behavior that needs to be handled infects everything all the way up the call stack.
2) You need function metadata, so you either have to make assumptions or require people to annotate their functions in some manner.
