DVCS - How often and when to commit changes

There is another thread here on StackOverflow dealing with how often to commit changes to source control. I want to put that in the context of using a DVCS like git or mercurial.
How often and when do you commit?
Do you only commit changes when they build correctly?
How often and when do you push your changes (or file a pull request or similar)?
How do you approach developing a complex feature / doing a complex refactoring requiring many places to be touched? Are "private commits" that won't build ok? When finished, do you push them also to the master repository or do you bundle all your changes into a single changeset before pushing?

It depends on the nature of the branch ("line of development") you are working on.
The main advantage of a DVCS (git or mercurial) is the ease with which you can:
branch
merge
So:
1/ How often and when do you commit?
2/ Do you only commit changes when they build correctly?
As many times as necessary on a private branch (for instance, whenever it compiles).
The practice of only committing when unit tests pass is a good one, but it should only apply to an "official" (as in "could be published or 'pushed'") branch: on your private branch, you can commit a gazillion times if you need to.
The only thing is: do an interactive rebase (rebase --interactive) to reorganize your many commits on your private branch before replaying them on your main development branch, where you can pass some tests.
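For example, a typical cleanup before publishing could look like this (a minimal sketch; the branch names and the commit count are assumptions):
# Reorganize the last few commits on the private branch
git checkout my-private-branch
git rebase --interactive HEAD~8   # squash, fixup, reword or reorder as needed
# Then replay the cleaned-up series onto the main development branch
git rebase main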
3/ How often and when do you push your changes (or file a pull request or similar)?
Publication is another matter and should be done with a "clear" history (coherent merges, representing content which compiles and passes some tests).
The branch you publish should be one where the history is never rewritten, always updated.
The pace of publication depends on the nature of the remote branch and on who pulls from it. For instance, if it is for another team, you could push quite often. If it is for a system-wide integration testing team, you will push a lot less often.
4/ How do you approach developing a complex feature / doing a complex refactoring requiring many places to be touched? Are "private commits" that won't build ok? When finished, do you push them also to the master repository or do you bundle all your changes into a single changeset before pushing?
See 1. and 2.: patch first in your own private branch, then reorganize your commits on an official (published) patch branch. One single commit is not always the best option if the patch involves several different "activities" (or bug fixes).

I'd commit changes to my DVCS (my own topic or task branch) very, very often. This way I can use it not only for "delivering changes" but also to help me while I work: for instance, "why was this working 5 minutes ago and isn't anymore?" If you commit often, you can just run a diff.
Also, a technique I found very, very good is using it to "self-document refactors". Let me explain: if you have to do a big refactor on a topic branch and then review the change as a whole (having modified a sizeable set of files), you'd probably get lost. But suppose you commit at every "intermediate step" and document it with a comment; then you're creating a sort of "movie" of your own changes that helps describe what you've done. Huge for reviewers.
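A sketch of how that "movie" can be replayed later (the branch name is an assumption):
git log --oneline --reverse main..refactor-topic   # the refactor, step by step
git show <commit>                                  # inspect any single intermediate step
git diff HEAD~1                                    # "what changed since it last worked?"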

I commit a lot: when adding functions, or even when just reformatting my sources.
I use git and do most of my work on non-shared branches. And when I've added enough little changes that count as a block, I use git rebase to collect the smaller related changes into larger chunks and commit that to the main branches.
This way, I have all the advantages of committed code that I can go backwards or forwards in, but I don't have to commit all my mistakes and backtracks to the main history.
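One way to do that collecting, assuming a non-shared branch (a sketch, not necessarily the author's exact workflow):
git commit --fixup <earlier-commit>           # mark a small change as belonging to an earlier commit
git rebase --interactive --autosquash main    # fold the fixups into their target commits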

How often and when do you commit?
Very frequently. It could be as often as a few times an hour, if the changes I've made work and make up a nice patch. Or it could be every few hours, depending on whether I am spending longer debugging things or experimenting with risky changes.
Do you only commit changes when they build correctly?
Yes, almost always. I can't think of a good reason to check in code that doesn't build correctly. There are plenty of reasons you might check in code that doesn't run correctly (in a branch), though.
How often and when do you push your changes (or file a pull request or similar)?
Normally only when a feature is complete and ready for integration testing. That means it has passed unit tests and other relevant tests, and the code/patch is clean enough that I consider it ready for review.
How do you approach developing a complex feature / doing a complex refactoring requiring many places to be touched? Are "private commits" that won't build ok? When finished, do you push them also to the master repository or do you bundle all your changes into a single changeset before pushing?
I would create a named branch for the feature (which would have traceability across design docs and the issue-tracking system). Commits that don't build would only really be OK on a private branch as an intermediate step, and even there would be exceptional. Currently I don't rebase and merge the entire feature branch into a single changeset, though it is something I'm looking at doing in future. Changes are only pushed once all the appropriate tests have passed.
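Creating such a named branch is a one-liner (the naming scheme tying the branch to the issue tracker is an assumption):
git checkout -b feature/PROJ-123-complex-refactor develop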

I follow this kind of flow
(Flowchart of the check-in workflow; original image: http://img121.imageshack.us/img121/3272/versioncontrolsysbestpr.png)
I guess this says pretty much everything. Basically, I would do a check-in after implementing a full working use case / user story (depending on your software process). The most important thing is that you check in things that work, in the sense that they compile. Never break the build!
Doing a commit after each user story/use case has the advantage that you have better tracking of past versions, and undoing changes is much easier.


Git branch between hotfix and develop

I am currently developing a branching flow based on GitFlow, and I am at the stage of defining how the hotfix branch and the develop branch should interact after a fix in production. Although GitFlow recommends a direct merge, I want to consider an intermediate branch through which to create a Pull Request, so that peer review can be done and committing directly to the "release" and "develop" branches is avoided. The simplest way to do this would be to use a feature branch, but that is not appropriate here.
Does anyone have a suggestion for the name of this branch? I'm considering using a "develop-integration" branch.
First, let's reiterate what Git Flow recommends for hotfix branches:
Create a branch for the hotfix, perhaps hotfix-number or hotfix/number, based on master (or main if you call it that instead).
Merge the hotfix branch into master when you deploy to Production.
Also merge hotfix into release if it exists, or develop if release doesn't exist, and now delete the hotfix branch.
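In Git commands, that flow might look like this (a sketch; the version numbers are assumptions, and --no-ff merges are a Git Flow convention, not a requirement):
git checkout -b hotfix/1.2.1 master      # 1. branch the hotfix from master
# ...commit the fix, deploy to Production...
git checkout master
git merge --no-ff hotfix/1.2.1           # 2. merge into master on deploy
git checkout develop
git merge --no-ff hotfix/1.2.1           # 3. also merge into release/develop...
git branch -d hotfix/1.2.1               # ...then delete the hotfix branch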
For #1, the branch name of the hotfix doesn't actually matter. In Git, branch names are simply there to help us know what a branch is for; you could call your hotfix branch fix-some-bug if you want to. This also applies to your last question:
...does anyone have any suggestions for the definition of the name of this branch that I need?
In the context of an in-between branch, perhaps even more so than a hotfix branch, it really doesn't matter. In one of my repos where I use Git Flow, we named this in-between branch temp/merge-hotfix-into-develop since it only exists for a few minutes.
A note on #3 above about merging: after using Git Flow for a while, my preference for both release and hotfix branches, after merging into master, is to then merge master into develop rather than the release or hotfix branch itself. This is slightly cleaner in the long run, but the end state is identical. Therefore, since I do it this way, my actual branch name in this case would be temp/merge-master-into-develop.
That being said, I'd like to challenge your reasoning for this in the first place:
... although GitFlow recommends that a direct merge be done, I particularly want consider an intermediate branch through which to create a Pull Request and in this way peer review can be done, in addition to avoiding committing directly to the "release" and "develop" branch.
In Git Flow, anytime you merge one shared branch into another, the merge falls into one of 2 categories:
The merge does not have conflicts.
The merge does have conflicts.
In scenario #1 (which hopefully is the most common scenario), there is nothing to be reviewed, so the PR should be basically automatic. In this case simply create a PR from hotfix (or master) to develop and complete it. You can do a sanity check first if you want to, but it should be extremely rare that it would fail at this point. If the sanity check fails, then you could resort to treating this like #2.
In scenario #2, you need to create the in-between branch, as you mentioned. I always test the merge first, and if there are conflicts I'll create a branch, as mentioned above, like temp/merge-release-into-develop or temp/merge-master-into-develop, resolve the conflicts myself (except in rare cases when I can't, in which case I'll pull in other devs), then push out the branch and create the PR into, in this case, develop.
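A sketch of that conflict-case workflow (branch names follow the convention above):
git checkout develop
git merge --no-commit --no-ff master     # test the merge first
git merge --abort                        # back out either way
# if there were conflicts, resolve them on an in-between branch instead:
git checkout -b temp/merge-master-into-develop develop
git merge master                         # resolve the conflicts here and commit
git push -u origin temp/merge-master-into-develop   # then open the PR into develop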

In GitAhead, is there a way to delete a remote branch when I already deleted the corresponding local branch?

I've already deleted a local branch without deleting its upstream branch on GitHub.
Is there a way to delete the remote branch in GitAhead?
In Sourcetree you just right click on the remote branch and choose delete.
Unfortunately no, GitAhead doesn't have an easy way to push a delete except for the little convenience checkmark when you delete the local branch. You would have to resort to the command line or doing it on your remote host.
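From the command line, deleting the remote branch is a single command (the branch name is an assumption):
git push origin --delete my-feature-branch
# or, with the older syntax:
git push origin :my-feature-branch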
This is a major design flaw in Git (in my opinion). The branching concept in mercurial is much saner (no detached heads, named branches, no ability to delete branches - at least not once they have been published).
Think about it: it is a version control system. What you want is to preserve and document development history. Someone created and published a branch for a purpose. So even though git allows far more manipulation of the repository than mercurial (where most such abilities are disabled by default!), just do not use it. Leave the branch! It's OK. It's the way it should be. Anyway, as git is lean here, a branch is just a pointer (not a real named branch like in hg), so it does not take much space.

Demote a build? How to delete promoted builds and run specified script on deletion in Jenkins

In the project I'm working on we have a continuous deployment setup. The goal is to always install the latest working build to production, unless someone manually overrides this functionality.
In order to make this work, we:
Run static code analysis
Run unit tests
Run integration tests
Run automatic UI tests, to the extent this is feasible
If any of the above steps fail, the build process is halted and the build marked as failed. If the installation package is created, it is then installed step by step to
CI --> staging --> production
At each step we run integration and UI tests for the environment, to make sure we didn't introduce something new that fails in the subsequent environments. If none of the tests fail, and N minutes pass without anyone pressing the panic button, the build gets promoted to the next env. If the tests fail, we want to delete the package and discard it completely. The installation packages are, however, delivered to other servers, so we need to run a bunch of remote (shell) scripts to make this step happen.
The problem is that there is a big set of failure cases which we cannot reliably test in the normal automated cycle, e.g. page layout, or integrations that fail only in production, and so on.
So the actual question: How shall I demote/delete builds, once they've been promoted? Is it possible to either run a remote script when doing delete build or use any of the promotion plugins to achieve this functionality? Is there some think-outside-the-box solution for this that I might not have thought about?
Instead of deleting builds manually, you may write a Jenkins job that accepts the build number as a parameter, deletes it, and then does the rest of the housekeeping. You can configure Jenkins access privileges so that people do not delete builds manually by accident.
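Such a job could boil down to a single call against Jenkins' remote API (a sketch; the URL, job name, build number and credentials are assumptions):
# 42 = the build number passed in as the job parameter
curl -X POST -u user:apitoken "https://jenkins.example.com/job/my-job/42/doDelete"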
This might be a very particular case, but we decided against creating a separate job for removing the builds, for the very simple reason of keeping all the logging related to a specific build number in one single place. The setup was the following:
Promotion here means making the installation package (RPM) available to the given server, where auto-update handles the actual upgrade of the package.
We have one main build that builds every time a new commit is available. We had some fine-tuning related to quiet times etc., but basically every new pushed set of commits resulted in a new build. The build contains all the relevant and available testing, which is far from complete, and probably never will be.
Every hour a separate promotion step handles promotion from staging to production. This build kicks off a separate promotion which takes the latest accepted build from CI to staging. There is a 30 min delay before builds are promoted CI --> staging, to prevent accidental promotion of last-second commits. The delays were achieved with some bash find scripting. The promotions run in this order to make sure a build is available in staging for (at least) one hour before going to production.
The actual answer:
The promotion steps were done as separate builds. In order to do a real promotion, rather than a separate build with a separate log, the build kicks off an actual promotion in the main build, using curl to call the remote HTTP API. This leaves the relevant promotion star in the main build log. Using different colors, the promotions are visible at a glance.
To demote builds I decided to create a separate "demote build" promotion step. This would then issue a purple star as a sign of the build being defective, and thus removed. The demotion is done by accessing the correct build in the UI, and pressing the "Remove build" button. No automation has been added to this step, but by creating a separate test step, it would be fairly easy to automate the demotion as well. We, however, have not gotten quite this far yet.
The benefits of this approach include
A build is deleted by accessing the failed build, not by providing parameters. Makes it much easier to document, and get right under pressure
Having a "panic button" like this available for anyone to press, builds trust and ownership for the process not only amongst the developers but also managers and DevOps.
It's dead simple to spot dead builds, as the log is available alongside the other promotion logs
Having all the relevant promotion calls in the same build makes further scripting easier
Acute things we still have to improve include automating the testing in the later stages of the build pipeline, and finding a suitable way of downgrading builds after demotion. E.g. in production, a defective build and demotion must always lead to installing the last good build, which has turned out to be fairly hard to achieve. Production data centers are rarely accessible at this level from the development DC where the CI system sits. Also, stopping and starting the build pipeline must be automated, as otherwise there is a chance of slipping back to the manual state.
Naturally, in the spirit of continuous improvement, there are always things to improve. The whole setup is something of a bash/perl scripting mess, but since it's scripted and repeatable, there is always the option of improving one small piece at a time. The most important thing is the automation, as it allows for incremental steps, which any manual steps more or less prevent.
For anyone looking for an easy way to delete a build with custom steps:
Create a 'defective' promotion.
Make it manually triggered.
Force it to run on the master.
Add a choice parameter DELETE with choices NO and YES.
Add action Execute Shell.
if [ "${DELETE}" = "YES" ]; then
    # TODO: my custom steps
    curl -X POST "${PROMOTED_URL}/doDelete"
fi
To delete a build now, just go to promotions, flip the choice to YES and click approve.

Release Process Improvements

The process of creating a new build and releasing it to production is a critical step in the SDLC but it is often left as an afterthought and varies greatly from one company to the next.
I'm hoping people will share improvements they have made to this process in their organisation, so we can all take steps to 'reduce the pain'.
So the question is: name one painful/time-consuming part of your release process and what you did to improve it.
My example: at a previous employer all developers made database changes on one common development database. Then when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases.
This works reasonably well but the problems with this approach are:-
ALL changes in the Dev database are included, some of which may still be 'works in progress'.
Sometimes developers made conflicting changes (that were not noticed until the release was in production)
It was a time consuming and manual process to create and validate the script (by validate I mean, try to weed out issues like problem 1 and 2).
When there were problems with the script (e.g. ordering issues, such as inserting a record which relies on a foreign-key record that is in the script but not yet run) it took time to 'tweak' it so it ran smoothly.
It's not an ideal scenario for Continuous Integration.
So the solution was:-
Enforce a policy of all changes to the database must be scripted.
A naming convention was important for ensuring the correct running order of the scripts.
Create/Use a tool to run the scripts at release time.
Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes')
The next release after we started this process was much faster with fewer problems, indeed the only problems found were due to people 'breaking the rules', eg not creating a script.
Once the issues with releasing to QA were fixed, when it came time to release to production it was very smooth.
We applied a few other changes (like introducing CI) but this was the most significant, overall we reduced release time from around 3 hours down to a max of 10-15 minutes.
We've done a few things over the past year or so to improve our build process.
Fully automated and complete build. We've always had a nightly "build", but we found that there are different definitions of what constitutes a build. Some consider it compiling; usually people include unit tests, and sometimes other things. We clarified internally that our automated build literally does everything required to go from source control to what we deliver to the customer. The more we automated the various parts, the better the process got and the less we have to do manually when it's time to release (and the fewer worries about forgetting something). For example, our build stamps everything with the svn revision number, compiles the various application parts written in a few different languages, runs unit tests, copies the compile outputs to the appropriate directories for creating our installer, creates the actual installer, copies the installer to our test network, runs the installer on the test machines, and verifies the new version was properly installed.
Delay between code complete and release. Over time we've gradually increased the amount of delay between when we finish coding for a particular release and when that release gets to customers. This provides more dedicated time for testers to test a product that isn't changing much and produces more stable production releases. Source control branch/merge is very important here so the dev team can work on the next version while testers are still working on the last release.
Branch owner. Once we've branched our code to create a release branch and then continued working on trunk for the following release, we assign a single rotating release branch owner that is responsible for verifying all fixes applied to the branch. Every single check-in, regardless of size, must be reviewed by two devs.
We were already using TeamCity (an excellent continuous integration tool) to do our builds, which included unit tests. There were three big improvements worth mentioning:
1) Install kit and one-click UAT deployments
We packaged our app as an install kit using NSIS (not an MSI, which was so much more complicated and unnecessary for our needs). This install kit did everything necessary, like stop IIS, copy the files, put configuration files in the right places, restart IIS, etc. We then created a TeamCity build configuration which ran that install kit remotely on the test server using psexec.
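The remote-run step can be a one-liner (a sketch; the server name, path and credentials are assumptions, and /S is NSIS's standard silent-install switch):
psexec \\test-server -u DOMAIN\deployer -p ****** "C:\deploy\MyApp-Setup.exe" /S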
This allowed our testers to do UAT deployments themselves, as long as they didn't contain database changes - but those were much rarer than code changes.
Production deployments were, of course, more involved and we couldn't automate them this much, but we still used the same install kit, which helped to ensure consistency between UAT and production. If anything was missing or not copied to the right place it was usually picked up in UAT.
2) Automating database deployments
Deploying database changes was a big problem as well. We were already scripting all DB changes, but there were still problems in knowing which scripts were already run and which still needed to be run and in what order. We looked at several tools for this, but ended up rolling our own.
DB scripts were organised in a directory structure by the release number. In addition to the scripts developers were required to add the filename of a script to a text file, one filename per line, which specified the correct order. We wrote a command-line tool which processed this file and executed the scripts against a given DB. It also recorded which scripts it had run (and when) in a special table in the DB and next time it did not run those again. This means that a developer could simply add a DB script, add its name to the text file and run the tool against the UAT DB without running around asking others what scripts they last ran. We used the same tool in production, but of course it was only run once per release.
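A minimal sketch of such a runner in shell, assuming PostgreSQL's psql and a tracking table named run_scripts (the original tool and DB engine are not specified, and connection settings are taken from the environment):
# scripts.txt lists the release's scripts, one filename per line, in order
while read -r script; do
    already_run=$(psql -tA -c "SELECT count(*) FROM run_scripts WHERE name = '$script'")
    if [ "$already_run" -eq 0 ]; then
        psql -v ON_ERROR_STOP=1 -f "releases/1.2/$script" || exit 1   # stop on the first failure
        psql -c "INSERT INTO run_scripts (name, run_at) VALUES ('$script', now())"
    fi
done < releases/1.2/scripts.txt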
The extra step that really made this work well is running the DB deployment as part of the build. Our unit tests ran against a real DB (a very small one, with minimal data). The build script would restore a backup of the DB from the previous release and then run all the scripts for the current release and take a new backup. (In practice it was a little more complicated, because we also had patch releases and the backup was only done for full releases, but the tool was smart enough to handle that.) This ensured that the DB scripts were tested together at every build and if developers made conflicting schema changes it would be picked up quickly.
The only manual steps were at release time: we incremented the release number on the build server and copied the "current DB" backup to make it the "last release" backup. Apart from that we no longer had to worry about the DB used by the build. The UAT database still occasionally had to be restored from backup (eg. since the system couldn't undo the changes for a deleted DB script), but that was fairly rare.
3) Branching for a release
It sounds basic and almost not worth mentioning, yet we weren't doing this to begin with. Merging back changes can certainly be a pain, but not as much of a pain as having a single codebase for today's release and next month's! We also got the person who made the most changes on the release branches to do the merge, which served to remind everyone to keep their release branch commits to an absolute minimum.
Automate your release process wherever possible.
As others have hinted, use different levels of build "depth". For instance, a developer build could make all binaries for running your product on the dev machine, directly from the repository, while an installer build could assemble everything for installation on a new machine.
This could include
binaries,
JAR/WAR archives,
default configuration files,
database scheme installation scripts,
database migration scripts,
OS configuration scripts,
man/hlp pages,
HTML documentation,
PDF documentation
and so on. The installer build can stuff all this into an installable package (InstallShield, ZIP, RPM or whatever) and even build the CD ISOs for physical distribution.
The output of the installer build is what is typically handed over to the test department. Whatever is not included in the installation package (a patch on top of the installation...) is a bug. Challenge your devs to deliver a fault-free installation procedure.
Automated single-step build. The ant build script edits all the installer configuration files and the program files that need changing (versioning), and then builds. No intervention required.
There is still a script run to generate the installers when it's done, but we will eliminate that.
The CD artwork is versioned manually; that needs fixing too.
Agree with previous comments.
Here is what has evolved where I work. This current process has eliminated the 'gotchas' that you've described in your question.
We use ant to pull code from svn (by tag version) and pull in dependencies and build the project (and at times, also to deploy).
The same ant script (passing params) is used for each env (dev, integration, test, prod).
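Invoking the same script per environment might look like this (the target and property names are assumptions):
ant -Denv=dev deploy
ant -Denv=test deploy
ant -Denv=prod deploy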
Project process
Capturing requirements as user 'stories' (helps avoid quibbling over an interpretation of a requirement, when phrased as a meaningful user interaction with the product)
following Agile principles, so that each iteration of the project (2 wks) results in a demo of current functionality and a releasable, if limited, product
manage release stories throughout the project to understand what is in and out of scope (and prevent confusion about last-minute fixes)
(repeat of previous response) Code freeze, then only test (no added features)
Dev process
unit tests
code checkins
scheduled automated builds (CruiseControl, for example)
complete a build/deploy to an integration environment, and run a smoke test
tag the code and communicate to team (for testing and release planning)
Test process
functional testing (selenium, for example)
executing test plans and functional scenarios
One person manages the release process and ensures everyone complies. Additionally, all releases are reviewed a week before launch and are only approved if they pass that review.
Release Process
Approve release for a specific date/time
Review release/rollback plan
run ant with 'production deployment' parameter
execute DB tasks (if any) (these scripts can also be versioned and tagged for production)
execute other system changes / configs
communicate changes
I don't know or practice a formal SDLC, but for me these tools have been indispensable in achieving smooth releases:
Maven for build, with Nexus local repository manager
Hudson for continuous integration, release builds, SCM tagging and build promotion
Sonar for quality metrics.
Tracking database changes to development db schema and managing updates to qa and release via DbMaintain and LiquiBase
On a project where I work we were using Doctrine's (PHP ORM) migrations to upgrade and downgrade the database. We had all manner of problems, as the generated models no longer matched the database schema, causing the migrations to fail completely halfway through.
In the end we decided to write our own super-basic version of the same thing - nothing fancy, just ups and downs that execute SQL. Anyway, it has worked out great (so far - touch wood). Although we were slightly reinventing the wheel by writing our own, the focus on keeping it simple meant that we have had far fewer problems. Now a release is a cinch.
I guess the moral of the story is that it is sometimes OK to reinvent the wheel, as long as you are doing so for a good reason.

Improving Your Build Process

Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
Get source and place in local directory, including necessary DLLs needed for project.
Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use).
Build using Visual Studio
Precompile using command line, copying into what will be a "build" directory
Copy to destination.
Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm going to copy only changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in the earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. It would remove some of the steps necessary in this build I presume (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First get your code compiling in one step using an automated build program (e.g. NAnt or MSBuild). I am not going to debate which one is better. Find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
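For instance, a one-step compile with MSBuild can be as short as this (the solution name is an assumption):
msbuild MySolution.sln /t:Rebuild /p:Configuration=Release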
Figure out how you want your automated build to be triggered, whether by hooking it up to CruiseControl or by running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools to make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple, it is mostly handled by a call to msbuild, and also it creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building visual studio solutions can be done by just specifying them in the main xml file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can just create a new class in .net deriving from Task that overrides the Execute function, and then reference this from your build xml file.
There is a pretty good introduction here: introduction
I've only worked on a couple of .Net projects (I've done mostly Java), but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that as well in whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets latest on the "main" build script, then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control, or only checking it in on a random basis. If you make the build process "get" from source control, it forces you to keep the latest and greatest build script in there.
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
