What is currently the best build system [closed] - build-process

A few years ago I looked into using some build system that isn't Make, and tools like CMake and SCons seemed pretty primitive. I'd like to find out if the situation has improved. So, under the following criteria, what is currently the best build tool:
platform agnostic: should work on Windows, Linux, and Mac
language agnostic: should have built-in support for common things like building C/C++ and other statically compiled languages. I guess it doesn't need to support the full autotools suite.
extensible: I need to be able to write rules to generate files, e.g. from reStructuredText, LaTeX, custom formats, etc. I don't really care what language I have to write the rules in, but I would prefer a real language rather than a DSL.
I would prefer to avoid writing any XML by hand, which, for example, I think Ant requires.
Freely available (preferably open source)
The term "best" is slightly subjective, but I think answers can be rated objectively by the criteria above.

I'd definitely put my vote in for premake. Although it is not as powerful as its older brothers, its main advantage is absurd simplicity and ease of use. It makes writing multi-compiler, multi-platform code a breeze, and it natively generates Visual Studio solutions, Xcode projects, Makefiles, and others, without any additional work needed.

So, judging purely by the criteria set forth in the question, the build system that seems like the best fit is probably waf - pure Python, provides support for C++ and other languages, general, powerful, not a DSL.
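To give a feel for it, a waf build is driven by a wscript file, which is plain Python with a few conventionally named functions. A minimal sketch for a C++ program (the file and target names are made up, and details vary a bit between waf versions):

    # wscript -- minimal waf build script (sketch; 'main.cpp' and 'app'
    # are illustrative names, not from the original answer)

    def options(opt):
        opt.load('compiler_cxx')    # register C++ compiler detection options

    def configure(conf):
        conf.load('compiler_cxx')   # find a C++ compiler and cache the result

    def build(bld):
        bld.program(source='main.cpp', target='app')  # declare an executable

You then run ./waf configure followed by ./waf build from the project root.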
However, from my personal experience, I prefer CMake for C++ projects. (I tried CMake, SCons, and waf, and liked them in roughly that order). CMake is a general solution, but it has built-in support for C++ that makes it nicer than a more generic solution when you're actually doing C++.
CMake's build model for C++ is more declarative and less imperative, and thus, to me, easier to use. The CMake language syntax isn't great, but a declarative build with odd syntax beats an imperative build in Python. Of the three, CMake also seems to have the best support for "advanced" things like precompiled headers. Setting up precompiled headers reduced my rebuild time by about 70%.
Other pluses for CMake include decent documentation and a sizable community. Many open source libraries have CMake build files either in-tree or provided by the CMake community. There are major projects that already use CMake (OGRE comes to mind), and other major projects, like Boost and LLVM, are in the process of moving to CMake.
Part of the issue I found when experimenting with build systems is that I was trying to build an NPAPI plugin on OS X, and it turns out that very few build systems are set up to give Xcode the exact combination of flags required to do so. CMake, recognizing that Xcode is a complex and moving target, provides a hook for manually setting commands in generated Xcode projects (and Visual Studio ones, I think). This is Very Smart as far as I'm concerned.
Whether you're building a library or an application may also determine which build system is best. Boost still uses a jam-based system, in part because it provides the most comprehensive support for managing build types that are more complex than "Debug" and "Release." Most Boost libraries have five or six different versions, especially on Windows, anticipating people needing compatible libraries that link against different versions of the CRT.
I didn't have any problems with CMake on Windows, but of course your mileage may vary. There's a decent GUI for setting up build dependencies, though it's clunky to use for rebuilds. Luckily there's also a command-line client. What I've settled on so far is to have a thin wrapper Makefile that invokes CMake from an objdir; CMake then generates Makefiles in the objdir, and the original Makefile uses them to do the build. This ensures that people don't accidentally invoke CMake from the source directory and clutter up their repository. Combined with MinGW, this "CMake sandwich" provides a remarkably consistent cross-platform build experience!
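The wrapper is simple enough to sketch; here is a rough Python equivalent of that thin Makefile, just to make the shape of the sandwich concrete (the objdir name matches the description above; everything else is an assumption):

    # configure_and_build.py -- a sketch of the "CMake sandwich" idea:
    # always configure and build out-of-source so the repository stays clean.
    import os
    import subprocess

    src_dir = os.path.abspath(os.path.dirname(__file__))  # assumed source root
    build_dir = os.path.join(src_dir, 'objdir')

    if not os.path.isdir(build_dir):
        os.makedirs(build_dir)

    # Generate Makefiles inside objdir, never in the source tree...
    subprocess.check_call(['cmake', src_dir], cwd=build_dir)
    # ...then run the generated build from there.
    subprocess.check_call(['make'], cwd=build_dir)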

Of course that depends on what your priorities are. If you are looking primarily for ease of use, there are at least two new build systems that hook into the filesystem to automatically track dependencies in a language agnostic fashion.
One is tup:
http://gittup.org/tup/
and the other is fabricate:
http://code.google.com/p/fabricate/
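To give a taste of fabricate: a build script is plain Python, and fabricate discovers dependencies by tracing which files each command reads and writes. A minimal sketch (the compiler and file names are illustrative assumptions):

    # build.py -- minimal fabricate script (sketch; gcc and the file names
    # are made up for illustration)
    from fabricate import *

    def build():
        run('gcc', '-c', 'main.c')         # deps found by tracing file access
        run('gcc', '-o', 'app', 'main.o')  # rerun only when inputs change

    def clean():
        autoclean()                        # delete everything fabricate built

    main()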
The one that seems to be the best performing, portable, and mature (and the one I have actually used) is tup. The guy who wrote it even maintains a toy Linux distro where everything is a git submodule, and everything (including the kernel) is built with tup. From what I've read about the kernel's build system, this is quite an accomplishment.
Also, Tup cleans up old targets and other cruft, and can automatically maintain your .gitignore files. The result is that it becomes trivial to experiment with the layout and names of your targets, and you can confidently jump between git revisions without rebuilding everything. It's written in C.
If you know haskell and are looking for something for very advanced use cases, check out shake:
http://community.haskell.org/~ndm/shake/
Update: I haven't tried it, but this new "buildsome" tool also hooks into the filesystem, and was inspired by tup, so it is relevant:
https://github.com/ElastiLotem/buildsome

CMake
CMake is an extensible, open-source system that manages the build process in an operating-system- and compiler-independent manner.

Gradle seems to match all the criteria mentioned above.
It's a build system that combines the best of Maven and Ant. To me, that's the best.

The Selenium project is moving over to Rake, not because it's the best but because it handles multiple languages slightly better than all the other build tools and is cross-platform (developed in Ruby).
All build tools have their issues, and people learn to live with them. Something that runs on the JVM tends to be really good for building apps, hence Ant, Maven (I know, it's hideous), Ivy, Rake.

FinalBuilder is well known in the Windows world.

smooth build matches most of your requirements.
platform agnostic: yes, it's written in Java
language agnostic: it doesn't support C/C++ yet, only Java, but it is extensible via plugins written in Java, so adding support for more compilers is not a problem
extensible: yes, you can implement a smooth function via a Java plugin; you can also create a smooth function by defining it as an expression built from other smooth functions.
I would prefer to avoid writing any XML: you won't see a single line of it in smooth build
Freely available: yes, Apache 2 license
Disclosure: I'm the author of smooth build.

Related

What is the difference between a cross-platform and a platform-independent framework? The particular case of the Qt application framework

I am a novice in Qt. Reading about what Qt is, I get confused by the meaning of some terms on various websites. In particular, I come across the term cross-platform framework, and often platform-independent framework. I'd really like to understand the difference between these two terms.
In the specific case of Qt, it is said that it is a cross-platform application framework.
Let me give you my thinking; I'd like someone to confirm it.
By "Qt is a cross-platform framework", I understand that the Qt source code is the same for each OS (Windows, Linux, Mac OS ...), but that the compilers used to build the source code differ depending on the OS. Is that true?
This is in contrast to a framework like Java, which is platform-independent since the compiler is the same on any platform or OS.
Please tell me if my understanding is right or wrong about Qt and the meaning of both a cross-platform and a platform-independent framework.
Whether I'm correct or not, I tend to think of it like this:
multi-platform: Different versions (possibly with different codebases) exist for more than one platform.
platform-independent: One codebase exists that tends not to rely on any platform-specific features or libraries, so should work on many different platforms without (source code) changes. The code might rely only on the language features and standard library, for example, so is very generic.
cross-platform: One codebase exists for more than one platform, but it may sometimes use different sections of platform-specific code for different platforms where needed.
I don't think people rigidly follow any specific definitions for these terms, though, and often see them used interchangeably.
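As a toy illustration of that last category, here is what a small platform-specific section might look like in Python (the function and commands are hypothetical, just to show the pattern):

    # One codebase; a small platform-specific section where needed.
    import subprocess
    import sys

    def open_in_file_manager(path):
        """Reveal a path in the native file manager (illustrative only)."""
        if sys.platform.startswith('win'):
            subprocess.call(['explorer', path])
        elif sys.platform == 'darwin':
            subprocess.call(['open', path])
        else:
            subprocess.call(['xdg-open', path])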

Gtk+ vs Qt language bindings

Put shortly: for those familiar with language bindings in Qt and Gtk+ (e.g. Python and Ruby), are there any quality or capability differences?
More background: I know C++ and Qt very well, and have minimal experience with Gtk+. I know C++ is not ideal for language bindings due to the lack of a well-defined ABI (application binary interface). I have also read that Gtk+ was designed to be bound to other languages. So I wonder how this manifests itself in practice. Are the Gtk+ bindings better maintained, or do they work better in some way than their Qt counterparts?
I am presently quite interested in the Go language, and they have started developing Gtk+ bindings. However, bindings for C++ libraries are far away. It makes me wonder whether learning Gtk+ is worth it.
I've used GTK and Qt in C++ and also PyGTK and PyQt in Python quite extensively.
Qt beats GTK hands down: it's a much more flexible, modern, and clean API. GTK is also lacking some features that are important to me. From a framework point of view, I'd recommend Qt.
As for language bindings (I can only speak about Python, since I've never used the Ruby equivalents), I think PyGTK (using Glade and a wrapper like Padraig Brady's libglade) makes GUI programming insanely easy and fun. However, if you can GPL your software (or pay the license fee), then PyQt is also a good option. It's not quite as friendly as PyGTK + Glade: with GTK you can define your UI in Glade as a separate XML file, so you can tweak the UI without touching code, whereas in Qt, if you want to use Qt Designer, you have to generate code using uic, if I remember correctly. Still, the API itself is really, really nice to use and mirrors the Qt framework's clean design very closely.
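To illustrate that Glade workflow: the UI lives in an XML file produced by the Glade designer and is loaded at runtime, so it can be tweaked without touching code. A minimal PyGTK sketch (the .glade file and widget names are hypothetical):

    # Load a Glade-designed UI at runtime with PyGTK (sketch; file and
    # widget names are made up for illustration).
    import gtk
    import gtk.glade

    tree = gtk.glade.XML('mainwindow.glade')   # parse the Glade XML
    window = tree.get_widget('main_window')    # look widgets up by name
    window.connect('destroy', gtk.main_quit)
    window.show_all()
    gtk.main()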
Over all, I'd probably recommend using PyQt over PyGTK, but I may be biased since I much prefer Qt over GTK nowadays, though you could try both out and see which you prefer - they are both almost trivial to get working.
If you are looking for a great book on PyQt, I'd recommend Rapid GUI Programming with Python and Qt.
To summarize: IMHO Qt beats GTK in both quality and capability. Both PyGTK and PyQt are of excellent quality, and their capability mirrors the underlying framework, though PyGTK can load Glade XML files.
I think the GTK bindings are older than the Qt ones (and so a bit more mature), but they are both usable, and your previous knowledge of Qt should be the main factor in your choice.
I developed small GUIs using both Qt and GTK with their Python bindings and found the two equivalent. I have some regrets, though, about the PyQt bindings for Qt containers (QVector, ...), which are not translated into regular Python data structures and thus add a bit of complexity to the code. I don't recall the same issues using PyGTK.
I have worked with both PyQt and PyGTK, and I would say they're both regularly maintained and in sync with their parent frameworks. However, and this is completely subjective, I found working with PyGTK more rewarding than PyQt, even though I hadn't previously written any code using GTK. If you know Qt well, though, go with Qt.
I have been trying a few combinations around Qt: RubyQt, JRuby + Jambi, PyQt. The first one quickly ends up in various segmentation faults. My Qt skills may be the problem, but all in all the seg faults are not very readable. The forum for RubyQt is nearly dead, so don't expect to find much information there.
So I moved to JRuby + Jambi. This worked until, well, I reached some missing functions here and there. Plus, I had to implement a proper signal/connect mechanism for JRuby. So, more or less a hack. Not convincing.
Finally I moved to Python (which I don't like very much). But wow, what a difference. The bindings are up to date, I have yet to encounter a segmentation fault, and the error messages are most of the time very explanatory. So as far as I'm concerned, Python + Qt is a clear winner.
Please note that I was trying these combinations in order to find a proper language/Qt binding so that I could create something production-ready with my commuting hours (roughly 2 hours a day). So my tolerance for small-but-annoying problems such as segmentation faults is zero. I also have to develop on Windows and Linux, so Windows installation is necessary (and once again, Python is a clear winner here).
There are C++ GTK+ bindings; Google for gtkmm.

Are there any libraries/frameworks for SCons?

Each project using SCons seems to be reinventing the wheel.
I would be glad to take someone's directory layout, and/or a solution for variant builds (debug/release), and/or a testing framework, and/or best practices.
Even several not-too-simple examples would help.
You may be interested in Google's Software Construction Toolkit that was made open source in February 2009. It adds new features on top of SCons, such as improved Visual Studio project file generation, unit test functions, and distributed builds with distcc or incredibuild.
The SCons Recipes in the wiki is a good place to start. In addition take a look at other projects which use SCons, e.g. the Ardour build system. If that doesn't cut it, there are a few third party SCons extensions you may want to take a look at:
Parts
Aqualid
To the best of my knowledge, there are no SCons best practices which were agreed upon. The SCons community seems to favor adaptability over "canonicalization". It is not hard to design a decent SCons-based build system from scratch, though. (Once you have understood how VariantDir works, at least.)
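For what it's worth, a common from-scratch pattern for debug/release variant builds looks something like this (a sketch only; the flags and directory layout are one choice among many):

    # SConstruct -- debug/release variant builds, a sketch.
    env = Environment()

    mode = ARGUMENTS.get('mode', 'release')    # invoke as: scons mode=debug
    if mode == 'debug':
        env.Append(CCFLAGS=['-g', '-O0'])
    else:
        env.Append(CCFLAGS=['-O2'])

    # Compile src/ into a per-mode build tree so object files for
    # different variants never collide.
    SConscript('src/SConscript', variant_dir='build/' + mode,
               exports='env', duplicate=0)
    # src/SConscript would then do: Import('env'); env.Program('app', ['main.c'])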
If you are using Eclipse for C++ development you may want to check out this SCons builder plugin (http://nic-nac-project.org/~lothar/eclipse/update/SConsBuilderPlugin.html)
Another good example of SCons use is the build system for the MIT-licensed Godot game engine:
https://github.com/okamstudio/godot/wiki/compiling_intro

What Tools Do You Recommend To Auto-Build Your Application? [closed]

As recently as several years ago, the developers actually made the builds that went to clients. This was obviously a disaster for reasons too numerous to list.
Then when we started to learn the errors of our ways, we looked for a way to auto-build the entire application on a dedicated build machine. The culture at that time was very averse to bringing in outside tools, so we built our own autobuild system by writing a VB app.
This worked fine for a while, until the project's structure started to change, new projects were added, and we needed to build the application in different ways. Then the weaknesses of our hand-rolled autobuilder became apparent and, over time, increasingly onerous. This disease has now progressed to the point where QA (who owns our build process) can't even maintain the autobuilder, because it requires more and more programming skill. Every time we add a project or change something in an existing project, it consumes more developer time just to make it work. There have been days when we were unable to produce a build because the system was broken.
I'm now in a position where I can change this process, and I'm looking to scrap the entire system and put something else in its place. My goals are:
Have an autobuild system that can run with zero human interaction at a specific time every day. It should be able to gather all the source code, compile all the apps, create the setups, put the finished products on a network share, and possibly trigger the automated testing system to kick in (we use QTP).
The autobuild system should be flexible enough to easily adapt to changes in the project without requiring a major overhaul.
It should be simple enough so that QA can own the system and not require developer resources to make changes to how builds are made.
What are your experiences? Can you recommend an autobuild system? Should I have different goals?
I'm currently using CruiseControl integrated with Ant to control project builds. This allows flexibility of build schedules and means you can automate the entire build process fairly easily using Ant scripts. Also, during defect fixing periods you can have CruiseControl set up to watch for source control submissions instead of time periods and build when these occur. This allows developers very quick feedback on defect fixes.
I use FinalBuilder and FinalBuilder Server for nightly builds. It's a bit buggy at times, but if you think it through, it's quite easy to create extensible projects that can build X project type, build its database from change scripts, and deploy it to a testing server.
It can also handle all kinds of weird and wonderful things, like ZIPping a nightly build and uploading it to an FTP server, or creating ISO images automatically.
Definitely look into MSBuild if you're on the Microsoft stack.
Joel is always going on and on about how great FinalBuilder is, so that might be worth a look as well.
We just migrated from a hand-rolled set of Perl scripts to a Buildbot setup. I found it because that's what Google's using for Chrome.
You can do nightlies, or it can integrate with source control to do an isolated test build whenever anybody does a checkin, or a variety of other things. It's also parallel; you can have more than one machine in the build farm, either for specialized duties or just to handle more load.
The entire system is written in Python, so it's platform-agnostic, which is important if you need to do builds on more than one platform. It can do anything you can do from the command line; we have it calling MSBuild for user-mode components, a DDK build for kernel-mode pieces, and running products for unit test builds.
Out of the box it supports most OSS source control tools, but if you're using TFS or something else you may need to modify the package that you install on the slave machines.
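To show the flavor: a Buildbot master is configured in Python by composing steps into a build factory. A rough fragment against the 0.8-era API (the repository URL and commands are made up; a real config is of course larger):

    # Fragment of a Buildbot master.cfg (sketch; names are illustrative).
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source import SVN
    from buildbot.steps.shell import ShellCommand

    factory = BuildFactory()
    factory.addStep(SVN(svnurl='http://example.com/svn/product/trunk'))
    factory.addStep(ShellCommand(command=['msbuild', 'UserMode.sln'],
                                 description='building user-mode components'))
    factory.addStep(ShellCommand(command=['run_unit_tests.bat'],
                                 description='running unit tests'))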
I think you are on the right track here.
Whoever looks after your automated build process needs to have a fundamental understanding of how your solution fits together. This doesn't necessarily mean knowing how to write code or architect solutions, but they will require a solid understanding of how the solution compiles, packages itself etc.
You might need to share responsibility for builds between people or teams to accomplish this. I'd say that a daily build is a "team responsibility".
I'd look at establishing a baseline build configuration which can be extended for "special use" builds (besides just building a release version), e.g. internationalized releases, fxCop/Quality Tools config, build + run Unit Tests, continuous integration builds, a build config to run on developer workstations, etc.
Overall, I'd aim to achieve the following:
Automatic versioning, signing etc
Ability to produce verbose output (logging) to help debug build breaks
On that point - it should handle errors properly, capture as much information and log it properly
Consistency - It should work the same way each time to produce repeatable outcomes
Run in a clean, limited access environment
Well commented/documented so that it can be understood by new staff, etc.
Option to generate release notes, compile metrics, produce reports (if this option is available)
Ability to deploy to multiple environments
Support different ways to obtain source code from source control, e.g. by changeset, label, date, etc
As for tool recommendations, I've used FinalBuilder, Visual Build Pro, MSBuild/Team Build, NAnt, CruiseControl, and CI Factory, plus good old-fashioned batch files.
Each has its pros and cons. I'm not going to make a recommendation, except to say that the products with decent UI support were a little bit easier to work with, but at times far less powerful. If you're working with Visual Studio, MSBuild is very powerful, but has a somewhat steep learning curve.
As for tools delivered with MS Visual Studio, you might want to use MSBuild. Additional community toolsets for MSBuild will even give you the ability to check out code from Subversion and zip the output.
We're using it successfully in our company. Our projects consist of several solutions with 100+ subprojects. Works like a charm.
Visual Build Pro is nice, if your build machines are Windows. I think this would fill the requirement you have about QA owning the system. But don't get me wrong, it's pretty powerful.
We use CruiseControl.NET and UppercuT (which uses NAnt) to do this. UppercuT uses conventions for building so it makes it really easy for someone to get started by answering three questions (What is the solution named? What is the path to source control? What is your company's name?) and you are building.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
We use Hudson for building a big Java web app from Ant build scripts. Hudson is pretty sweet for our purposes. It has a master/slave setup, so builds can be done concurrently (on a timer or on demand). Slave nodes can be any OS/hardware combo, provided the needed build tools are already on them and they're on the network (and won't crash every 10 minutes).
Full web-based interface including live console output, change logs, artifacts from the build are available across the network including previous builds (if successful). Awesomesauce!

How do small software patches correct big software?

One thing I've always wondered about is how software patches work. A lot of software seems to just release new versions of their binaries that need to be installed over older versions, but some software (operating systems like Windows in particular) seems to be able to release very small patches that correct bugs or add functionality to existing software.
Most of the time the patches I see can't possibly replace entire applications, or even small files that are used within applications. To me it seems like the actual binary is being modified.
How are these kinds of patches actually implemented? Could anyone point me to any resources that explain how this works, or is it just as simple as replacing small components such as linked libraries in an application?
I'll probably never need to do a deployment in this manner, but I am curious to find out how it works. If I'm correct in my understanding that patches can really modify only portions of binary files, is this possible to do in .NET? If it is I'd like to learn it since that's the framework I'm most familiar with and I'd like to understand how it works.
This is usually implemented using binary diff algorithms -- diff the most recently released version against the new code. If the user's running the most recent version, you only need to apply the diff. Works particularly well against software, because compiled code is usually pretty similar between versions. Of course, if the user's not running the most recent version you'll have to download the whole thing anyway.
There are a couple implementations of generic binary diff algorithms: bsdiff and xdelta are good open-source implementations. I can't find any implementations for .NET, but since the algorithms in question are pretty platform-agnostic it shouldn't be too difficult to port them if you feel like a project.
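As a sketch of the workflow with those command-line tools (file names are made up; assumes bsdiff/bspatch are installed and on PATH): the vendor diffs the old release against the new one once, and every up-to-date client applies the small patch locally.

    # Produce and apply a binary patch with the bsdiff/bspatch CLI tools.
    import subprocess

    # Vendor side: compute a compact delta between two releases.
    subprocess.check_call(['bsdiff', 'app-1.0.exe', 'app-1.1.exe', 'update.patch'])

    # Client side: rebuild the new binary from the old one plus the patch.
    subprocess.check_call(['bspatch', 'app-1.0.exe', 'app-1.1.exe', 'update.patch'])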
If you are talking about patching Windows applications, then what you want to look at are .MSP files. These are similar to an .MSI, but just patch an application.
Take a look at Patching and Upgrading in the MSDN documents.
What an .MSP file does is load updated files into an application's install. These are typically updated DLLs and resource files, but could include any file.
In addition to patching the installed application, the repair files located in C:\WINDOWS\Installer are updated as well. Then, if the user selects "Repair" from Add/Remove Programs, the updated patch files are used too.
I'm thinking that the binary diff method discussed by John Millikin must be what is used on other operating systems. Although you could make it work on Windows, it would be somewhat alien.
