How do I use the Google Closure Compiler to build one file? - google-closure-compiler

The question is a little more complicated than the title suggests, so allow me to elaborate. I have a project that is split into two repositories. They both use Google Closure for dependency management and compilation. I need to deliver a compiled version of project A to project B. Project B does advanced optimizations, so project A must be compiled whitespace-only. The problem is that I can't find a way to satisfy all my requirements for compiled A, which are:
It must be ordered by dependency
There can be no goog base code, i.e., var goog=goog||{}...
Similarly, there can be no goog.provides or goog.requires
It must be whitespace-only compiled
So far I've tried:
Closurebuilder.py
pros: can be whitespace only
problem: Has base code; getting duplicate namespace issues
Compiler.jar
problem: whitespace-only mode keeps goog.provides and requires in
problem: any optimizations of project A break project B
Has anyone made a similar setup work?

I solved it by getting the dependency list from closurebuilder.py and simply concatenating the files together, in order, into one file. It gets shipped to a repository of built artifacts, where B can pick it up via npm, and it gets run through a Closure script (which advanced-optimizes it and splits it into modules).

Related

Using MSBUILD like a classic MAKEfile -- how do I do this?

I'm frustrated by the lack of flexibility in the Visual Studio project/solution, but I realized that now that it uses MSBuild it might be quite powerful, even though that power isn't exposed in the IDE. So I took a look at the MSBuild docs and don't know where to start! I wish there were a Nutshell book for it. Is there a good tutorial someone could point me to?
More specifically, here are the kinds of things I want to do:
Run a utility pre-processor to generate .CPP and .H files, which are then used by a regular C++ project. There are multiple inputs (whose dependencies need to be tracked; specifically, the build should know when a normal .h file the pre-processor reads has changed) and multiple outputs (at least one .cpp and one .h file) that are used as source files in another project.
FWIW, the most complex case involves using Qt in a "normal" C++ project that can be built using VS Express 2010 or MSBUILD directly from a script on a server. Since that is a common library, there might be some guides or whatever to help? Note that a VS plug-in is not useful for the building stage, but could be used to initially generate project files that then rely only on MSBUILD and stuff included with the source code.
Would somebody please point me in the right direction?
--John
It gets worse from there, but that's my first goal.
I found the kind of information I was looking for in the book MSBuild Trickery: 99 Ways to Bend the Build Engine to Your Will by Brian Kretzler.
In the first 18 pages I found a few key pieces of information that, along with the online documentation I'd already gone through, helped clear things up enough to try tackling my project. Details of interest include the order in which MSBuild reads and processes the elements of the file, quick points on when wildcards in items are expanded and how to handle generated files, and how to see what's happening in practical cases or even step through in the debugger.
FWIW, I managed to attack my problem without using the murky ".targets"/rules files that I have yet to understand, using only better-documented features with examples. In particular, a Target with wildcard items doesn't care that the file-name extension isn't registered in any ".targets" file; it is simple enough to copy from an example, and it lets the files be seen in the IDE project and added to the list using the IDE; again, the FileExtension there just works.

designing large projects in OCaml [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
What are best practices for writing large software projects in OCaml?
How do you structure your projects?
What features of OCaml should and should not be used to simplify code management? Exceptions? First-class modules? GADTs? Object types?
Build system? Testing framework? Library stack?
I found great recommendations for Haskell, and I think it would be good to have something similar for OCaml.
I am going to answer for a medium-sized project under the conditions that I am familiar with, that is, between 100K and 1M lines of source code and up to 10 developers. This is what we are using now, for a project started two months ago in August 2013.
Build system and code organization:
one source-able shell script defines PATH and other variables for our project
one .ocamlinit file at the root of our project loads a bunch of libraries when starting a toplevel session
omake, which is fast (with -j option for parallel builds); but we avoid making crazy custom omake plugins
one root Makefile contains all the essential targets (setup, build, test, clean, and more)
one level of subdirectories, not two
most subdirectories build into an OCaml library
some subdirectories contain other things (setup, scripts, etc.)
OCAMLPATH contains the root of the project; each library subdirectory produces a META file, making all OCaml parts of the project accessible from the toplevel using #require (see the sketch after this list).
only one OCaml executable is built for the whole project (saves a lot of linking time; still not sure why)
libraries are installed via a setup script using opam
local opam packages are made for software that is not in the official opam repository
we use an opam switch which is an alias named after our project, avoiding conflicts with other projects on the same machine
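To make the .ocamlinit/#require items above concrete, here is a minimal sketch of such a toplevel setup; the library names (batteries, myproject.util) are assumptions for illustration, not the project's actual names:

    (* .ocamlinit sketch: load findlib so that #require works in the toplevel *)
    #use "topfind";;
    (* load libraries; project libraries are found through OCAMLPATH and their META files *)
    #require "batteries";;
    #require "myproject.util";;   (* hypothetical project library *)

With this in place, starting ocaml at the project root gives a toplevel where every library of the project can be loaded by name.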
Source-code editing:
emacs with opam packages ocp-indent and ocp-index
Source control and management:
we use git and github
all new code is peer-reviewed via github pull requests
tarballs for non-opam non-github libraries are stored in a separate git repository (that can be blown away if history gets too big)
bleeding-edge libraries existing on github are forked into our github account and installed via our own local opam package
Use of OCaml:
OCaml will not compensate for bad programming practices; teaching good taste is beyond the scope of this answer. http://ocaml.org/learn/tutorials/guidelines.html is a good starting point.
OCaml 4.01.0 makes it much easier than before to reuse record field labels and variant constructors (i.e. type t1 = {x:int} type t2 = {x:int;y:int} let t1_of_t2 ({x}:t2) : t1 = {x} now works)
we try to not use camlp4 syntax extensions in our own code
we do not use classes and objects unless mandated by some external library
in theory since OCaml 4.01.0 we should prefer classic variants over polymorphic variants
we use exceptions to indicate errors and let them go through happily until our main server loop catches them and interprets them as "internal error" (default), "bad request", or something else
exceptions such as Exit or Not_found can be used locally when it makes sense, but in module interfaces we prefer to use options (a sketch follows this list).
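A minimal sketch of that interface convention (the table and the function name are invented for illustration): internally the code may rely on Not_found, but the exposed function returns an option.

    (* users is a toy lookup table; find_user would be declared in the .mli as
       val find_user : string -> int option *)
    let users : (string, int) Hashtbl.t = Hashtbl.create 16

    let find_user name =
      try Some (Hashtbl.find users name)
      with Not_found -> None

    let () =
      Hashtbl.add users "alice" 1;
      match find_user "bob" with
      | Some id -> Printf.printf "found %d\n" id
      | None -> print_endline "no such user"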
Libraries, protocols, frameworks:
we use Batteries for all commodity functions that are missing from OCaml's standard library; for the rest we have a "util" library
we use Lwt for asynchronous programming, without the syntax extensions; the bind operator (>>=) is the only operator that we use (a minimal sketch follows this list). If you have to know, we do reluctantly use camlp4 preprocessing for better exception tracking on bind points.
we use HTTP and JSON to communicate with 3rd-party software and we expect every modern service to provide such APIs
for serving HTTP, we run our own SCGI server (ocaml-scgi) behind nginx
as an HTTP client we use Cohttp
for JSON serialization we use atdgen
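For reference, here is a minimal Lwt sketch in the spirit of the Lwt bullet above, using only bind (>>=) and no syntax extension; the build command in the comment is an assumption about a typical findlib setup:

    (* compile with something like:
       ocamlfind ocamlopt -package lwt.unix -linkpkg sketch.ml -o sketch *)
    let ( >>= ) = Lwt.bind

    let task () =
      Lwt_io.printl "start request" >>= fun () ->
      Lwt_unix.sleep 0.1            >>= fun () ->   (* stand-in for real I/O *)
      Lwt_io.printl "done"

    let () = Lwt_main.run (task ())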
"Cloud" services:
we use quite a lot of them as they are usually cheap, easy to interact with, and solve scalability and maintenance problems for us.
Testing:
we have one make/omake target for fast tests and one for slow tests
fast tests are unit tests; each module may provide a "test" function, and a test.ml file runs the list of tests (sketched after this list)
slow tests are those that involve running multiple services; these are crafted specifically for our project, but they cover as much as possible of what a production service does. Everything runs locally on either Linux or MacOS, except for cloud services, for which we find ways to avoid interfering with production.
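A sketch of the fast-test convention mentioned above; the module and its test are invented for illustration:

    (* each module may expose a [test] function returning true on success *)
    module Foo = struct
      let add x y = x + y
      let test () = add 2 2 = 4
    end

    (* the test.ml equivalent: a registry of fast tests, run sequentially *)
    let tests = [ ("Foo", Foo.test) ]

    let () =
      let failures =
        List.fold_left
          (fun n (name, f) ->
             let ok = f () in
             Printf.printf "%-10s %s\n" name (if ok then "OK" else "FAIL");
             if ok then n else n + 1)
          0 tests
      in
      exit (if failures = 0 then 0 else 1)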
Setting this all up is quite a bit of work, especially for someone not familiar with OCaml. There is no framework taking care of all that yet, but at least you get the choice of the tools.
OASIS
To add to Pavel's answer:
Disclaimer: I am the author of OASIS.
OASIS also has oasis2opam, which can help you create an OPAM package quickly, and oasis2debian to create Debian packages. This is extremely useful if you want to create a 'release' target that automates most of the tasks needed to upload a package.
OASIS also ships with a script called oasis-dist.ml that automatically creates a tarball for upload.
You can look at all of this in https://github.com/ocaml.org.
Testing
I use OUnit to do all my tests. This is simple and pretty efficient if you are used to xUnit testing.
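As a small, hedged example of that xUnit style (written against the OUnit2 API; the test itself is just a placeholder):

    (* build with something like:
       ocamlfind ocamlopt -package oUnit -linkpkg test_example.ml -o test_example *)
    open OUnit2

    let test_addition _ctxt =
      assert_equal ~printer:string_of_int 4 (2 + 2)

    let suite =
      "example suite" >::: [ "addition" >:: test_addition ]

    let () = run_test_tt_main suite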
Source control/management
Disclaimer: I am the owner/maintainer of forge.ocamlcore.org (aka forge.o.o)
If you want to use git, I recommend using GitHub. It is really efficient for reviews.
If you use darcs or subversion, you can create an account on forge.o.o.
In both cases, having a public mailing list to which you send all commit notifications is a must-have, so that everyone can see and review them. You can use either Google Groups or a mailing list on forge.o.o.
I recommend having a nice web page (on GitHub or forge.o.o) with OCamldoc documentation built every time you commit. If you have a huge code base, this will help you use the OCamldoc-generated documentation right from the beginning (and fix it quickly).
I recommend creating tarballs when you reach a stable stage. Don't just rely on checking out the latest git/svn version; this tip has saved me hours of work in the past. As Martin said, store all your tarballs in a central place (a git repository is a good idea for that).
This one probably doesn't answer your question completely, but here is my experience regarding the build environment:
I really appreciate OASIS. It has a nice set of features, helping not only to build the project but also to write documentation and support a test environment.
Build system
OASIS generates a setup.ml file from the specification (the _oasis file), which basically works as a build script. It accepts -configure, -build, -test, and -distclean flags. I got quite used to these while working with various GNU and other projects that usually use Makefiles, and I find it convenient that all of them are available automatically here.
Makefiles. Instead of generating setup.ml, it is also possible to generate a Makefile with all of the options described above.
Structure
Usually a project of mine that is built by OASIS has at least these directories: src, _build, scripts, and tests.
In the src directory, all source files are stored together: source (.ml) and interface (.mli) files sit side by side. If the project gets too large, it may be worth introducing more subdirectories.
The _build directory is managed by the OASIS build system. It stores both source and object files, and I like that build artifacts are not mixed in with the source files, so I can easily delete the directory if something goes wrong.
I store multiple shell scripts in the scripts directory. Some of them are for test execution and interface file generation.
All input and output files for tests are stored in a separate directory.
Interfaces/Documentation
The use of interface files (.mli) has both advantages and drawbacks for me. They really help to find type errors, but you also have to edit them when making changes or improvements to your code; forgetting to do so sometimes causes nasty errors.
But the main reason I like interface files is documentation. I use ocamldoc to generate HTML documentation pages automatically (OASIS supports this feature with the -doc flag). In my opinion it is enough to write comments describing each function in the interface, rather than inserting comments in the middle of the code. In OCaml, functions are usually short and concise, and if there is a need to insert extra comments inside one, maybe it is better to split the function.
Also be aware of the -i flag for ocamlc: the compiler can automatically generate an interface file for a module.
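For instance, here is a sketch of a documented interface file of the kind described above; the module and its functions are hypothetical, and ocamlc -i can give you a starting point that you then annotate:

    (* queue_util.mli -- hypothetical interface, documented with ocamldoc comments *)

    (** Bounded FIFO queues over an arbitrary element type. *)

    type 'a t
    (** The type of a bounded queue holding elements of type ['a]. *)

    val create : capacity:int -> 'a t
    (** [create ~capacity] returns an empty queue holding at most [capacity]
        elements.  Raises [Invalid_argument] if [capacity <= 0]. *)

    val push : 'a -> 'a t -> bool
    (** [push x q] adds [x] to [q]; returns [false] if the queue is full. *)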
Tests
I didn't find a reasonable off-the-shelf solution for supporting tests (I would like to have some ocamltest application), so I am using my own scripts for executing and verifying use cases. Fortunately, OASIS supports executing custom commands when setup.ml is run with the -test flag.
I haven't been using OASIS for long, so if anyone knows of other cool features, I would also like to hear about them.
Also, if you are not aware of OPAM, it is definitely worth looking at. Without it, installing and managing new packages is a nightmare.

Dynamic linking in zOS

I have to create a dynamically linked library in z/OS. What are the options to be passed to the compiler?
Also, how do I check whether a library in z/OS is dynamically linked against (dependent on) other libraries?
We have ldd in Linux, which shows this linkage. Is there an ldd equivalent in z/OS land?
You don't say it directly, but I assume you mean a C/C++ DLL. You can do shared libraries in other languages as well (even assembler), but the steps would be different.
First, you need to decide what you want to export. A lot of the IBM examples use the compiler EXPORTALL directive, but be aware this can lead to very slow executables, depending on your coding style. If you don't do EXPORTALL, you'll need #pragma export for anything (code or data) you want to export. Don't forget you can export data (variables) as well as executable functions...sometimes you'll need this to share data with DLL functions.
Then, you need to set your compile options on both client (caller) and DLL to use the DLL linkage...this is the -Wc,DLL compile option and when enabled, it generates extra logic in your program to load and manage the DLL. It's a good idea to also include #pragma csect for your exported functions if you think you'll ever have the need to update the DLL without replacing it entirely.
When you link your DLL, be sure to specify the -Wl,DLL option (there are lots of ways...this part is different if you link in batch - I'm assuming you're building in a make file of some sort). The link will generate the actual DLL, as well as a "side deck" containing "IMPORT" statements for all of your exported functions. You'll need these to link any of the client-side programs that you expect to call the DLL. For example, if your imports are in a file called AAA.x, c89 -Wc,DLL myapp.c AAA.x would compile the calling code, with awareness that functions in AAA.x are off in some sort of DLL.
To your point about DLLs calling other DLLs, don't forget that a DLL can both "serve" and "consume" functions...by including the side deck for functions in other DLLs, you can have a DLL that provides some functions while calling other DLLs to access others.
The actual DLL itself can be in several places depending on the nature of your app. If you're UNIX Services friendly, it's just an executable in LIBPATH. It can also be STEPLIB, LNKLST, LPA and so forth.
If you need to, you can access your DLLs explicitly at runtime using dlopen(), dlsym() and so forth. Generally, this lets you control exactly which DLL you're using (sometimes handy if the user can provide one himself), and it gives you what amounts to function pointers that are resolved within the DLL.
There are some other basic things to consider when linking, such as ensuring that your code is reentrant. Most of these are spelled out in the IBM documentation, and if you build with something like c89 (or an equivalent), the correct options are usually set up for you automatically (in fact, to get a good idea of what's going on, turn on the verbose output and see all the parameters for yourself).
If you need to build up a cross reference of what calls what, the UNIX Services "nm" command can give you that information. If you produce detailed link-edit listings, all the data is in there too when you're building your DLLs.
Good luck!

Structure of NAnt build scripts and solution structure on build server

We're in the process of streamlining/automating build, integration and unit testing as well as deployment.
Our software is developed in Visual Studio, where we use both C# and VB.NET in our projects. A single project can be contained within multiple solutions (e.g., the Utils project is used in both the ProductA and ProductB solutions).
For historical reasons our code repository isn't as well structured as one could have hoped for.
E.g., the Utils project might be located under the ProductA solution (because that's where it was first used), but it was later deemed useful for ProductB development and simply included in the ProductB solution (while still located in a subdirectory of ProductA).
I would like to use continuous integration testing and have set up a CC.NET build server where I intend to use NAnt for creating the actual builds.
Question 1: How should I structure my builds on the build server? Should I instruct CC.NET to retrieve all the projects for ProductB into a single directory, e.g. a file structure similar to
-ProductB
--Utils
--BetterUtils
--Data
or should I opt for a filestructure similar to this
-ProductA
--Utils
-ProductB
--BetterUtils
--Data
and then just have the NAnt build scripts handle the references? Our references in VS don't match the actual locations in the code repository, so it's not possible today to just check out the ProductB solution and build it straight away (unfortunately). I hope this question makes sense.
Question 2: Is it better to check out all the source code located in different projects into a single folder (whilst retaining some kind of structure) and then build everything at once, or to have multiple projects in CC.NET and let the CC.NET server handle dependencies?
Example:
Should I have a separate project in CC.NET for monitoring the automated build/test of the Utils project when it's never released on its own? Or should I just build/test it whilst building it as part of ProductB?
I hope the above makes sense and that you can provide me with some arguments for using either option. We're nowhere near an ideal source code repository structure, and I would prefer to compensate for the lack of repository structure on the build server rather than having to clean up the structure of our repository.
Switching away from VSS is (unfortunately) not an option.
Right now our build consists of either deploying via VS ClickOnce or pressing F5, so just getting the build automated would be a huge step up for us.
Thanks
To answer your first question, I would recommend a separate top-level folder for each build project. The problem with having a single tree matching your source repository is that when your build server is trying to run multiple builds at once, one or more will likely fail due to files in use by other processes. Also, you may run into cases where a build script is pulling an older version of the code. In that instance you don't want a different project to accidentally use the incorrect source version.
If your solutions already reference projects from relative paths, you may end up with a structure like this:
-CCNetBuilds
--ProductASource
---Utils
---...
--ProductBSource
---ProductA
----Utils
---ProductB
----BetterUtils
----Data
In this case, the build for Product B contains part of the Product A source, at the same relative path as your solution already expects. This takes a bit more time to set up in CC.Net, but makes it easier to maintain if the developers have their code set up this way on their machines. The same solution files used in development are used by the build server.
To answer your second question, I prefer Utilities being its own build. If I have unit tests on my Utilities assembly, I would not want them to run for every single product that uses the Utilities. Also, if you have a separate build for Utilities, you can set a dependency in CC.Net so that Product A and B will not attempt to build if the Utilities build is broken. This provides a bit faster feedback that something is wrong.

Automatic BizTalk Versioning in My Build Process

In all of my other .NET apps, my build process (a mixture of NAnt and custom tasks) automatically updates the [AssemblyVersionAttribute] in AssemblyInfo.cs with the current build number before the call to MSBuild, stamping the build number into the version number.
I'm now working on my first BizTalk project and I'd like to do the same thing with the version numbers of the BizTalk assemblies, but I've run into trouble!
First of all, the assembly version numbers are stored in the btproj files, so I did some googling and found www.codeplex.com/biztalk, which looked like the answer to my problem, but there is a deeper problem!
I have a project for my schemas and another for my pipelines; the pipelines project references my schemas project, as I have flat-file dis/assemblers. The problem comes when I update the version numbers: updating them, even from within Visual Studio, does not update the pipeline components' references to the schemas.
So if I update all the version numbers manually in the VS IDE from 1.0.0.0 to 1.1.0.0, the build fails, because the pipeline components' flat-file dis/assemblers still reference the old 1.0.0.0 version of the schemas! They don't automatically update!
Is this really a manual process of updating the version numbers of the BizTalk projects in the property pages, then building the projects and manually updating the references to them in the properties of all the pipeline components that reference them?
This means that I can't have my build process control the build number part of my version numbers!
Or is there a better method of managing the version numbers of the BizTalk assemblies?
I'm sorry to disappoint you, but I've been down the exact same road and I had to give up. I guess it could be possible to achieve, but it would require a lot of changes to both the binding files and other XML files (as you mentioned, and even more if you have published services, etc.).
Maybe it could be possible to wrap all these necessary changes in a build step (an MSBuild step or similar in other build frameworks) - that would be useful!
Developer- :)
We had a similar problem and ended up developing a small utility that changes the version number in all the projects, i.e. *.csproj (AssemblyInfo.cs) and *.btproj, accordingly. Apart from this, it opens and modifies the *.btp files with the new version of the schemas. In a nutshell, all you have to do is configure this utility in your VS.NET Tools menu and execute it.
I guess it's not very difficult to develop such a utility in any .NET language.
Caveat: Do not forget to save the files after updates with the same encoding as they were originally.
Cheers!
Gutted, I thought that might be the case. Maybe BizTalk 2009 projects will play more nicely with updating references when version numbers change.
I started to go through and automate it manually, and when I realised what needed to be done, I took a biiig step back when I realised just how many places I'd have to modify to get it working. Thank god for Undo Checkout.
I do have a standard C# class library included in my project (various helper functions), whose version number I am able to update during my build process, so I'm basically using that one assembly to version the whole application. If anyone wants to know which version is in any environment, they can check the version number of that one assembly.
Not ideal, but it's working.
We've done this successfully on our project - I'll see if I can get the developer of the tool to post details...
This problem arises when you perform an integration build against the latest versions of your dependent components referenced as file references (the schemas here).
Keep in mind that upgrading the assembly version must always be performed manually; that way you are always in charge of changes to assembly versions.
A possible solution to the build-break issue is to file-reference a specific version of a dependent component's build, rather than the latest version, and to use a subst drive and a copy script to get the latest component builds.
For example:
SchemaA, assembly version 1.0.0.0
PipelineA (with pipelinecomponent XMLValidator for example), assembly version 1.0.0.0
PipelineA has a file reference to a subst drive (say the R drive, which maps to a workspace D:\MyComponents) and to version 1.0.0.0 of SchemaA, as follows:
R:\SchemaA\1.0.0.0\SchemaA.dll.
The copy script copies the build output of SchemaA locally to your R drive.
When SchemaA updates to version 1.1.0.0, you don't have any issues, because you still use version 1.0.0.0, and YOU have the choice of when to move to the 1.1.0.0 version of your schema. When you want to upgrade, you alter your copy script and replace the file reference with R:\SchemaA\1.1.0.0\SchemaA.dll.

Resources