The process of creating a new build and releasing it to production is a critical step in the SDLC but it is often left as an afterthought and varies greatly from one company to the next.
I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'.
So the question is: what is one painful/time-consuming part of your release process, and what did you do to improve it?
My example: at a previous employer all developers made database changes on one common development database. Then when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases.
This works reasonably well, but the problems with this approach are:
ALL changes in the Dev database are included, some of which may still be 'works in progress'.
Sometimes developers made conflicting changes (that were not noticed until the release was in production).
It was a time-consuming and manual process to create and validate the script (by validate I mean try to weed out issues like problems 1 and 2).
When there were problems with the script (e.g. ordering issues, such as inserting a record that relies on a foreign-key parent created later in the script) it took time to 'tweak' it so it ran smoothly.
It's not an ideal scenario for Continuous Integration.
So the solution was:
Enforce a policy that all changes to the database must be scripted.
A naming convention was important for ensuring the correct running order of the scripts (e.g. prefixing each script with a sequence number, such as 0001_add_customer_table.sql).
Create/use a tool to run the scripts at release time.
Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes').
The next release after we started this process was much faster with fewer problems; indeed, the only problems found were due to people 'breaking the rules', e.g. not creating a script.
Once the issues with releasing to QA were fixed, when it came time to release to production it was very smooth.
We applied a few other changes (like introducing CI) but this was the most significant, overall we reduced release time from around 3 hours down to a max of 10-15 minutes.
We've done a few things over the past year or so to improve our build process.
Fully automated and complete build. We've always had a nightly "build", but we found that there are different definitions of what constitutes a build. Some would consider it just compiling; usually people include unit tests, and sometimes other things. We clarified internally that our automated build literally does everything required to go from source control to what we deliver to the customer. The more we automated the various parts, the better the process became and the less we had to do manually when it was time to release (and the fewer worries about forgetting something). For example, our build stamps everything with the svn revision number, compiles the various application parts written in a few different languages, runs unit tests, copies the compile outputs to the appropriate directories for creating our installer, creates the actual installer, copies the installer to our test network, runs the installer on the test machines, and verifies the new version was properly installed.
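In outline, the whole thing can be driven by one script that chains the steps and stops at the first failure. Here is a hypothetical Python sketch of that idea; every helper-script name, host name, and tool invocation below is a placeholder, not our actual setup:

import subprocess
import sys

def step(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)   # abort the whole build on failure

def main():
    # stamp everything with the source-control revision
    rev = subprocess.check_output(["svnversion", "."], text=True).strip()
    step(["python", "stamp_version.py", "--revision", rev])   # placeholder
    step(["msbuild", "App.sln", "/p:Configuration=Release"])  # compile
    step(["python", "-m", "pytest", "tests"])                 # unit tests
    step(["python", "collect_outputs.py"])   # copy outputs for the installer
    step(["python", "build_installer.py"])   # create the actual installer
    step(["python", "deploy_to_test.py", "--host", "test-vm-01"])
    step(["python", "verify_install.py", "--host", "test-vm-01",
          "--expect-revision", rev])

if __name__ == "__main__":
    main()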
Delay between code complete and release. Over time we've gradually increased the amount of delay between when we finish coding for a particular release and when that release gets to customers. This provides more dedicated time for testers to test a product that isn't changing much and produces more stable production releases. Source control branch/merge is very important here so the dev team can work on the next version while testers are still working on the last release.
Branch owner. Once we've branched our code to create a release branch and then continued working on trunk for the following release, we assign a single rotating release branch owner that is responsible for verifying all fixes applied to the branch. Every single check-in, regardless of size, must be reviewed by two devs.
We were already using TeamCity (an excellent continuous integration tool) to do our builds, which included unit tests. There are three big improvements worth mentioning:
1) Install kit and one-click UAT deployments
We packaged our app as an install kit using NSIS (not an MSI, which would have been much more complicated and was unnecessary for our needs). This install kit did everything necessary: stop IIS, copy the files, put configuration files in the right places, restart IIS, and so on. We then created a TeamCity build configuration which ran that install kit remotely on the test server using psexec.
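The remote step amounts to little more than this sketch (server name, drop path, and credentials are made up; /S is NSIS's standard silent-install switch):

import subprocess

# Run the NSIS install kit on the test server via psexec.
# Host, paths, and credentials below are placeholders.
subprocess.run(
    ["psexec", r"\\uat-web-01", "-accepteula",
     "-u", r"DOMAIN\deploy", "-p", "s3cret",
     "-c", r"\\buildserver\drops\app-setup.exe",  # copy kit to remote box
     "/S"],                                       # NSIS silent install
    check=True)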
This allowed our testers to do UAT deployments themselves, as long as they didn't contain database changes - but those were much rarer than code changes.
Production deployments were, of course, more involved and we couldn't automate them this much, but we still used the same install kit, which helped to ensure consistency between UAT and production. If anything was missing or not copied to the right place it was usually picked up in UAT.
2) Automating database deployments
Deploying database changes was a big problem as well. We were already scripting all DB changes, but there were still problems in knowing which scripts were already run and which still needed to be run and in what order. We looked at several tools for this, but ended up rolling our own.
DB scripts were organised in a directory structure by release number. In addition to the scripts, developers were required to add the filename of each script to a text file, one filename per line, which specified the correct order. We wrote a command-line tool which processed this file and executed the scripts against a given DB. It also recorded which scripts it had run (and when) in a special table in the DB, and the next time it did not run those again. This meant a developer could simply add a DB script, add its name to the text file and run the tool against the UAT DB, without running around asking others which scripts they last ran. We used the same tool in production, but of course it was only run once per release.
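The core of such a tool fits on a page. A minimal sketch of the idea in Python (the directory layout, table name, and connection details are illustrative, and real scripts containing GO batch separators would additionally need batch splitting):

import os
import pyodbc

SCRIPTS_DIR = r"db\scripts\1.4"                   # one directory per release
ORDER_FILE = os.path.join(SCRIPTS_DIR, "run_order.txt")

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=uat-db;DATABASE=App;"
                      "Trusted_Connection=yes", autocommit=True)
cur = conn.cursor()

# The tracking table remembers which scripts have already been applied.
cur.execute("""
    IF OBJECT_ID('dbo.SchemaChanges') IS NULL
        CREATE TABLE dbo.SchemaChanges (
            ScriptName NVARCHAR(260) PRIMARY KEY,
            RunAt      DATETIME NOT NULL DEFAULT GETDATE())""")

done = {r.ScriptName
        for r in cur.execute("SELECT ScriptName FROM dbo.SchemaChanges")}

with open(ORDER_FILE) as order:                    # one filename per line
    for name in (line.strip() for line in order):
        if not name or name in done:
            continue                               # never run a script twice
        with open(os.path.join(SCRIPTS_DIR, name)) as script:
            cur.execute(script.read())
        cur.execute("INSERT INTO dbo.SchemaChanges (ScriptName) VALUES (?)",
                    name)
        print("applied", name)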
The extra step that really made this work well is running the DB deployment as part of the build. Our unit tests ran against a real DB (a very small one, with minimal data). The build script would restore a backup of the DB from the previous release and then run all the scripts for the current release and take a new backup. (In practice it was a little more complicated, because we also had patch releases and the backup was only done for full releases, but the tool was smart enough to handle that.) This ensured that the DB scripts were tested together at every build and if developers made conflicting schema changes it would be picked up quickly.
The only manual steps were at release time: we incremented the release number on the build server and copied the "current DB" backup to make it the "last release" backup. Apart from that we no longer had to worry about the DB used by the build. The UAT database still occasionally had to be restored from backup (e.g. the system couldn't undo the changes from a deleted DB script), but that was fairly rare.
3) Branching for a release
It sounds basic and almost not worth mentioning, yet we weren't doing this to begin with. Merging back changes can certainly be a pain, but not as much of a pain as having a single codebase for today's release and next month's! We also got the person who made the most changes on the release branches to do the merge, which served to remind everyone to keep their release branch commits to an absolute minimum.
Automate your release process wherever possible.
As others have hinted, use different levels of build "depth". For instance, a developer build could produce all the binaries for running your product on the dev machine, directly from the repository, while an installer build could assemble everything for installation on a new machine.
This could include
binaries,
JAR/WAR archives,
default configuration files,
database schema installation scripts,
database migration scripts,
OS configuration scripts,
man/hlp pages,
HTML documentation,
PDF documentation
and so on. The installer build can stuff all this into an installable package (InstallShield, ZIP, RPM or whatever) and even build the CD ISOs for physical distribution.
The output of the installer build is what is typically handed over to the test department. Whatever is not included in the installation package (a patch on top of the installation...) is a bug. Challenge your devs to deliver a fault-free installation procedure.
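For the simplest (ZIP) case, the installer build boils down to sweeping the pieces above into one versioned archive. A Python sketch, where every path and the version number are placeholders:

import os
import zipfile

VERSION = "2.1.0"                       # hypothetical release number
ARTIFACTS = [
    "build/bin",                        # binaries
    "build/app.war",                    # JAR/WAR archives
    "config/defaults",                  # default configuration files
    "db/install",                       # database schema installation scripts
    "db/migrate",                       # database migration scripts
    "os/configure.sh",                  # OS configuration scripts
    "docs/man", "docs/html", "docs/manual.pdf",
]

with zipfile.ZipFile(f"release-{VERSION}.zip", "w",
                     zipfile.ZIP_DEFLATED) as z:
    for item in ARTIFACTS:
        if os.path.isdir(item):
            for root, _, files in os.walk(item):
                for name in files:
                    path = os.path.join(root, name)
                    z.write(path, path)
        else:
            z.write(item, item)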
Automated single-step build. The ant build script edits all the installer configuration files and the program files that need changing (versioning), and then builds. No intervention required.
There is still a script to run to generate the installers when it's done, but we will eliminate that.
The CD artwork is versioned manually; that needs fixing too.
Agree with previous comments.
Here is what has evolved where I work. This current process has eliminated the 'gotchas' that you've described in your question.
We use ant to pull code from svn (by tag), pull in dependencies, and build the project (and at times, also to deploy).
The same ant script is used for each environment (dev, integration, test, prod), passing params - e.g. something like ant -Denv=prod deploy.
Project process
Capturing requirements as user 'stories' (helps avoid quibbling over an interpretation of a requirement, when phrased as a meaningful user interaction with the product)
following Agile principles, so that each iteration of the project (2 wks) results in a demo of current functionality and a releasable, if limited, product
manage release stories throughout the project to understand what is in and out of scope (and prevent confusion about last-minute fixes)
(repeat of previous response) Code freeze, then only test (no added features)
Dev process
unit tests
code checkins
scheduled automated builds (cruise control, for example)
complete a build/deploy to an integration environment, and run a smoke test
tag the code and communicate to team (for testing and release planning)
Test process
functional testing (selenium, for example)
executing test plans and functional scenarios
One person manages the release process, and ensures everyone complies. Additionally, all releases are reviewed a week before launch and are only approved after that review.
Release Process
Approve release for a specific date/time
Review release/rollback plan
run ant with 'production deployment' parameter
execute DB tasks (if any) (these scripts can also be versioned and tagged for production)
execute other system changes / configs
communicate changes
I don't know or practice SDLC, but for me, these tools have been indispensable in achieving smooth releases:
Maven for build, with Nexus local repository manager
Hudson for continuous integration, release builds, SCM tagging and build promotion
Sonar for quality metrics.
Tracking database changes to development db schema and managing updates to qa and release via DbMaintain and LiquiBase
On a project where I work we were using Doctrine's (PHP ORM) migrations to upgrade and downgrade the database. We had all manner of problems, as the generated models no longer matched the database schema, causing the migrations to fail completely halfway through.
In the end we decided to write our own super-basic version of the same thing - nothing fancy, just ups and downs that execute SQL. Anyway, it worked out great (so far - touch wood). Although we were reinventing the wheel slightly by writing our own, the focus on keeping it simple meant that we had far fewer problems. Now a release is a cinch.
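The original was PHP, but the whole idea fits in a few lines of any language. A Python sketch of the "just ups and downs" approach (the migration SQL here is purely illustrative):

# An ordered list of (up, down) SQL pairs and a function that walks
# between schema versions. Kept deliberately dumb: plain SQL only.
MIGRATIONS = [
    ("CREATE TABLE users (id INT PRIMARY KEY, name TEXT)",   # version 1
     "DROP TABLE users"),
    ("ALTER TABLE users ADD COLUMN email TEXT",              # version 2
     "ALTER TABLE users DROP COLUMN email"),
]

def migrate(conn, current, target):
    """Apply ups (or downs) to move the schema from current to target."""
    while current < target:
        conn.execute(MIGRATIONS[current][0])      # run the next "up"
        current += 1
    while current > target:
        conn.execute(MIGRATIONS[current - 1][1])  # undo with a "down"
        current -= 1
    return current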
I guess the moral of the story here is that it is sometimes OK to reinvent the wheel, as long as you are doing so for a good reason.
I am working on an ASP.NET MVC4 project where the same project needs to be deployed to many clients on a daily basis; each client will have its own domain/subdomain, a separate app pool, and a separate DB (MSSQL).
Doing each deployment manually could take at least 1-2 hours if everything goes well. Is there any way I can do this in some automated way?
Moreover, we also need to update all of the apps when a new version is released, maybe one by one or all of them at the same time. However, doing this manually could take weeks, and once we have more clients it will not be possible to do these updates manually.
The update involves suspending the app for some time, taking a full backup of the files and the DB, updating the application code/files in the app folder, upgrading the DB with a script, and then starting the app and running a diagnosis script to check whether the update was successful; if not, we need to check what went wrong.
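In outline, each client's update cycle is something like the sketch below, using the stock Windows tools appcmd, robocopy, and sqlcmd (all pool, path, and DB names are placeholders, diagnose.py stands in for our diagnosis script, and appcmd lives in %windir%\system32\inetsrv):

import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

def copy_tree(src, dst):
    # robocopy exit codes below 8 all mean success, so check manually
    rc = subprocess.run(["robocopy", src, dst, "/MIR"]).returncode
    if rc >= 8:
        raise RuntimeError(f"copy {src} -> {dst} failed ({rc})")

def update_client(pool, app_dir, db, new_build):
    run(["appcmd", "stop", "apppool", f"/apppool.name:{pool}"])   # suspend app
    copy_tree(app_dir, app_dir + "_backup")                       # file backup
    run(["sqlcmd", "-Q",                                          # DB backup
         f"BACKUP DATABASE [{db}] TO DISK='D:\\bak\\{db}.bak'"])
    copy_tree(new_build, app_dir)                                 # new code
    run(["sqlcmd", "-d", db, "-i", "upgrade.sql"])                # DB upgrade
    run(["appcmd", "start", "apppool", f"/apppool.name:{pool}"])  # restart
    run(["python", "diagnose.py", db])    # non-zero exit = investigate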
How can we automate these updates? Any ideas on how to approach this issue would be great.
As a developer for BuildMaster, I can say that this scenario, known as the "Core Version" pattern, is a common one. If you're OK with a paid solution, you can set up deployment plans within the tool that do exactly what you described.
As a more concrete example, we experience this exact situation in a slightly different way. BuildMaster has a set of 60+ extensions that rely on a specific SDK version. In our recent 4.0 release, we had to re-deploy every extension because of breaking API changes within the SDK. This is essentially equivalent to having a bunch of customers and deploying to them all at once. We have set up our deployment plans such that any time we create a new release of the SDK application, we have the option to set a variable that says to build every extension that relies on the SDK.
In BuildMaster, the idea is to promote a build (i.e. an immutable object that travels through various environments like Dev, Test, Staging, Prod) to its final environment, where it becomes the deployed build for the release. In your case, this would be pushing your MVC application to its final environment, and that would then trigger the deployments of all dependent applications (i.e. your customers' instances of your application). Our SDK's plan works exactly this way.
For your scenario, you would only need the single action, "Promote Build". As I mentioned before, any dependents would then be promoted to their final environments, so all your customer deployments would kick off once that action is run during deployment. Our Azure extension's deployment plan for its final environment follows the same pattern, for example.
These plans are marked "Shared", which means every extension we have uses the exact same deployment plan but utilizes different variables to handle the minor differences like names, paths, etc.
Since this is such an enormous topic I could go on for ages, but I think that should be sufficient for your use-case if you wanted to try it out.
There are others, but you could set up Team Foundation Server (TFS) to deploy automated builds.
http://msdn.microsoft.com/en-us/library/ff650529.aspx
I find the easiest way to do this from an MVC project is to create a publish profile.
This is done by right-clicking your project, selecting Publish, and then configuring it to your needs.
Then from TFS you create a new build definition; this kicks off a wizard which takes you through it.
There are quite a few options which would be too long to go into for every scenario.
The main change I usually find the most important is to set an MSBuild argument to deploy with the publish profile, e.g. something like /p:DeployOnBuild=true /p:PublishProfile=MyProfile (where MyProfile is the name of your publish profile).
This can be found at Process > Advanced > MSBuild Arguments.
Once this is configured correctly it's a simple case of right-clicking and queuing a new build to build and deploy.
You will need a different publish profile/build configuration per deployment environment.
For backups I use a powershell script which can be called manually or from TFS.
You also have a drop folder in TFS which keeps a backup of x many releases.
The databases are automatically configured via SQL Server to back up; TBH I didn't set that up, it was a DB admin guy who is also involved with releases.
From the dev testing side I use jMeter (http://jmeter.apache.org/) to run some automated scripts that check that users can log in and view certain screens, just to confirm nothing major has gone wrong. However, there is usually a testing team to run more detailed tests, again not set up by me.
All of the above will probably take you some time to set up, but in the long run it will save you weeks of time over a year.
A free alternative to TFS is http://www.cruisecontrolnet.org/; I have used this in the past too and it is pretty good.
You can automate your .Net deployments with Beanstalk, which will give you a way to trigger deployments with a single click, watch progress, manage permissions and see history of deployments. Check out this guide on the topic:
http://guides.beanstalkapp.com/deployments/deploy-dotnet.html
I hope you will find it useful.
P.S. - I work at Beanstalk.
Does anyone know whether it is 100% safe to replace (copy + paste) an assembly with an updated version of itself, where all version history (AssemblyInfo.vb) is exactly the same, the only difference being that a minor code change took place in one of the aspx.vb files?
It is safe if you are sure you didn't break your existing code (e.g. a method that no longer exists, ...).
If you manually load assemblies make sure to update versions and tokens.
If you are not sure you can duplicate your website into another folder, create a test IIS instance and test the deployment of single files.
Keep in mind that a clean deployment is safer than single-file deployments, which may cause breaking changes if you are not extremely careful. This may not be a problem on test instances but should never be done on Live.
It is definitely possible and very easily doable, but it can easily become a habit that has a negative impact once the production system becomes very large, possibly unstable, and has a large number of users.
There are a couple of ways I would suggest trying or to be mindful of:
Create an installer or multiple installers depending on the projects in your solution. These installers produce the executable(s) to install on the production server. This will be done manually. Back up the previous installers for rollback.
You can create a ClickOnce application. This can be run manually and also can be scheduled using a stable scheduling app.
One can also make use of Visual Studio's publishing function. After compiling code and setting it to release mode, it gets published via file/network/FTP to the production space. This replaces all the markup files and assemblies.
To automate the process you can use a TFS build server and schedule daily or weekly builds. These installations can be run manually, or one can use SMS (Microsoft Systems Management Server) to schedule timely installations.
One can use Windows PowerShell as well.
TFS Build Server and SMS carry cost implications of course, but it will be a small price to pay if problems in a production environment can bring a company down.
There are a couple of ways to do this. See what might work, and get into good habits when it comes to a production environment.
On the project I'm working on, we have a continuous deployment setup. The goal is to always install the latest working build to production, unless someone manually overrides this behaviour.
In order to make this work, we:
Run static code analysis
Run unit tests
Run integration tests
Run automatic UI tests, to the extent this is feasible
If any of the above steps fail, the build process is halted and the build marked as failed. If the installation package is created, it is then installed in steps to
CI --> staging --> production
At each step we run integration and UI tests for the environment, to make sure we didn't introduce something new which fails in the subsequent environments. If none of the tests fail, and N minutes pass without anyone pressing the panic button, the build gets promoted to the next environment. If the tests fail, we want to delete the package and discard it completely. The installation packages are, however, delivered to other servers, so we need to run a bunch of remote (shell) scripts to make this step happen.
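Schematically, each gate between environments does something like the sketch below (the helper shell scripts and the panic-button flag file are placeholders for our actual remote scripts):

import os
import subprocess
import time

PANIC_FLAG = "/var/ci/panic"        # touched by the "panic button"
QUIET_MINUTES = 30                  # N minutes of quiet before promotion

def gate(build_id, env):
    tests = subprocess.run(["./run_env_tests.sh", env, build_id])
    if tests.returncode != 0:
        # the package was already copied to other servers, so the
        # clean-up has to run remotely as well
        subprocess.run(["./discard_build.sh", build_id], check=True)
        return False
    deadline = time.time() + QUIET_MINUTES * 60
    while time.time() < deadline:
        if os.path.exists(PANIC_FLAG):
            subprocess.run(["./discard_build.sh", build_id], check=True)
            return False
        time.sleep(10)
    subprocess.run(["./promote_build.sh", build_id, env], check=True)
    return True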
The problem is that there is a big set of failure cases which we cannot reliably test in the normal automated cycle, e.g. page layout, or integrations that fail only in production, and so on.
So the actual question: How shall I demote/delete builds, once they've been promoted? Is it possible to either run a remote script when doing delete build or use any of the promotion plugins to achieve this functionality? Is there some think-outside-the-box solution for this that I might not have thought about?
Instead of deleting builds manually, you may write a Jenkins job that accepts the build number as a parameter, deletes that build, and then does the rest of the housekeeping. You can configure Jenkins access privileges so that people do not delete builds manually by accident.
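A sketch of what that job's body could look like, using Jenkins' doDelete endpoint (the server URL and credentials are placeholders, and a CSRF crumb may also be required depending on the Jenkins configuration):

import os
import requests

# Jenkins exposes job parameters as environment variables.
JENKINS = "https://jenkins.example.com"           # placeholder server
job = os.environ["TARGET_JOB"]
build = os.environ["TARGET_BUILD_NUMBER"]

resp = requests.post(f"{JENKINS}/job/{job}/{build}/doDelete",
                     auth=("ci-bot", os.environ["CI_API_TOKEN"]))
resp.raise_for_status()
# ...then do the rest of the housekeeping (remove packages from the
# download servers, notify the team, and so on).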
This might be a very particular case, but we decided against creating a separate job for removing the builds, for the very simple reason of keeping all the logging related to a specific build number in one single place. The setup was the following:
Promotion here means make the installation package (RPM) available to the given server, where auto-update handles the actual upgrade of the package.
We have one main build, that builds every time a new commit is available. We had some fine-tuning related to quiet times etc. but basically every new pushed set of commits resulted in a new build. The build contains all the relevant and available testing, which is far from being complete, and probably never will be.
Every hour a separate promotion step handles promotion from staging to production. This build then kicks off a separate promotion which takes the latest accepted build from CI to staging. There is a 30 min delay before builds are promoted CI --> staging, to prevent accidental promotions of last-second commits. The delays were achieved with some bash find scripting. The promotions run in this order to make sure a build is available in staging for (at least) one hour before going to production.
The actual answer:
The promotion steps were done as separate builds. In order to do a real promotion, rather than a separate build with a separate log, the build kicks off an actual promotion in the main build, using curl and calling the remote HTTP API. This leaves the relevant promotion star in the main build log. Using different colors, the promotions are visible at a glance.
To demote builds I decided to create a separate "demote build" promotion step. This issues a purple star as a sign of the build being defective, and thus removed. The demotion is done by accessing the correct build in the UI and pressing the "Remove build" button. No automation has been added to this step, but by creating a separate step it would be fairly easy to automate the demotion as well. We, however, have not gotten quite that far yet.
The benefits of this approach include
A build is deleted by accessing the failed build, not by providing parameters. Makes it much easier to document, and get right under pressure
Having a "panic button" like this available for anyone to press, builds trust and ownership for the process not only amongst the developers but also managers and DevOps.
It's dead simple to spot dead builds, as the log is available alongside the other promotion logs
Having all the relevant promotion calls in the same build makes further scripting easier
Acute things we still have to improve include automating the testing in the later stages of the build pipeline, and also a suitable way of downgrading builds after demotion. E.g. in production, a defective build and a demotion must always lead to installing the last good build, which has turned out to be fairly hard to achieve; production data centers are rarely allowed to be accessible at this level from the development DC where the CI system sits. Also, stopping and starting the build pipeline must be automated, as otherwise there is the chance of slipping back to the manual state.
Naturally, in the spirit of continuous improvement, there are always things to improve. The whole setup is something of a bash/perl scripting mess, but since it's scripted and repeatable, there is always the option of improving one small piece at a time. The most important thing is the automation, as it allows for incremental steps, which any manual steps more or less prevent.
For anyone looking for an easy way to delete a build with custom steps:
Create a 'defective' promotion.
Make it manually triggered.
Force it to run on the master.
Add a choice parameter DELETE with choices NO and YES.
Add action Execute Shell.
if [ "${DELETE}" == "YES" ]; then
# TODO: my custom steps
curl -X POST ${PROMOTED_URL}/doDelete"
fi
To delete a build now, just go to promotions, flip the choice to YES and click approve.
At work we currently use the following deployment strategy:
Run a batch script to clear out all Temporary ASP.NET files
Run a batch script that compiles every ASPX file into its own DLL (ASP.NET Web Site, not Web Application)
Copy each individually changed file (ASPX and DLL) to the appropriate folder on the live server.
Open up the Deployment Scripts folder, run each SQL script (table modifications, stored procs, etc.) manually on the production database.
Say a prayer before going to sleep (joking on this one, maybe)
Test first thing the next morning and hope for the best - fix bugs as they come up.
We have been bitten a few times in the past because someone forgot to run a script, or thought they ran something but didn't, or overwrote a sproc related to some module because there are two files (one in a Sprocs folder and one in a [ModuleName]Related folder), or copied the wrong DLL (since they can have the same names apart from a random alphanumeric suffix generated by .NET).
This seems highly inefficient to me - a lot of manual things and very error prone. It can sometimes take 2-3 or more hours for a developer to perform a deployment (we do them late at night, like around midnight) due to all the manual steps and remembering what files need to be copied, where they need to be copied, what scripts need to be run, making sure scripts are run in the right order, etc.
There has got to be an easier way than taking two hours to copy and paste individual ASPX pages, DLLs, images, stylesheets and the like and run some 30+ SQL scripts manually. We use SVN as our source control system (mainly just for update/commit though, we don't do branching) but have no unit tests or testing strategy. Is there some kind of tool that I can look into to help us make our deployments smoother?
I did not go through all of it, but the "You're deploying it wrong" series from Troy Hunt might be a good place to look.
Points discussed in the series:
Config transforms
Build Automation
Continuous Integration
We have four stages before it can be deployed.
Development
QA
UAT
Production
We have build scripts (inside the Bamboo build server) running against QA and against UAT. Our DBA is the only person who can run create scripts against QA, UAT, and PROD. Anything that goes from QA -> UAT is like a test run of the deployment. UAT gets reverted by copying the production systems down again.
When we release into production we just create a whole new site, point it at the UAT database, and test that environmentally it is working fine. Then when this is working well we flick the 'switch': we point the production IIS record at the new site, and change the DB connection to point at the prod DB.
Because we are using a completely different folder structure, all of our files get copied up, so there is no chance of missing one. Because we have had test runs of the deployment into UAT, we know we haven't missed a DB script (DB scripts are generally combined into one). Because we have tested a shadow copy of the IIS website, we know that environmentally it should work. We can then do all this setup during the day, and do the final switch-flicking at midnight or whenever, reducing the impact on devs.
tl;dr; Automated build and deploy; UAT system for test running deployment; Deployment during work hours; Flick switch/run DB update at midnight.
I am a developer for BuildMaster, a tool which can very easily automate the steps you have outlined above, and we have a limited version free for a team of 5 developers.
Most of your pain points will disappear the moment you set up the deployment automation - mainly the batch script execution and the file-by-file copying. Once you're fully automated, you can even schedule the deployment for night time and only have to worry about it if there's an error in the process (you can set up a notifier for a failed build).
On the database side, you can integrate your database with BuildMaster as well and if you upload the scripts into the tool it will keep track of which ones were run against which database.
To see how to set up a simple web application deployment plan, you can run one of the example applications included. You can also check out: http://inedo.com/support/tutorials/lunchmaster/part-1 to see how to create one yourself - it's slightly outdated since we've made it even easier to get started out-of-the-box but the main concepts are the same.
Please see this blog post and associated talk by Scott Hanselman titled "Web Deployment Made Awesome"
Blog
Video
As for SQL Deployment, you might want to consider one of the following:
RikMigrations
Migrator.NET
FluentMigrator
Mantee Introduction & Source
Have a User Acceptance Testing (UAT) environment which is completely isolated from your development environment, and only accessible to the UAT manager.
Set up a UAT build which you can manually trigger upon each release; when triggered, this should send all your deployment files as well as a deployment checklist to the UAT manager, who will redeploy all files to the UAT environment and run any database upgrade scripts.
Once the application's users and testers have signed off the UAT release, the UAT manager can be authorised to deploy to the PRODUCTION environment using the exact same procedure and checklists as the UAT release. This guarantees that you never miss any deployment steps, and that you test the deployment process prior to moving it into production.
Caveats: I'm in an environment where we can't use MSI, batch, etc. for the final deployment.
Things that helped:
A build server that does the full compilation and runs all unit tests and integration tests. Why find out on deployment night that you have something in an aspx page that doesn't compile? (I admit your question doesn't make it clear whether compilation is happening on deployment night.)
I have a page that administrators can reach that exercises environment and deployment failure points, e.g. connect to db, connect to reporting services, send an email, read and write to the temp folder.
Also, put all the things that the administrator needs to change into a file external from web.config. The connection string and app settings sections natively support a way to do this (i.e. don't reinvent the web.config system just to create a separate file)
Here is an article on how to do better integration tests: http://suburbandestiny.com/Tech/?p=601 There is a ton of good literature on how to do unit tests, but often, if your app already exists, you will have to refactor until unit testing becomes possible. If that isn't an option, then don't be a purist and put together some integration tests that are as fast and repeatable as possible.
Keep your dependencies in bin instead of GAC, since it's easier to tell an administrator to copy files than it is to teach them to administer the GAC.
Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
Get source and place in local directory, including necessary DLLs needed for project.
Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use).
Build using Visual Studio
Precompile using command line, copying into what will be a "build" directory
Copy to destination.
Get any necessary additional resources - mostly things like documents, images, and reports associated with the project - and put them into the directory from step 5. There's a lot of this stuff, and I wasn't sure whether I really wanted to include it in the earlier steps; however, I'm only going to copy changed items, so maybe it's irrelevant.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. It would remove some of the steps necessary in this build I presume (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First get your code compiling in one step using an automated build program (e.g. NAnt/MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
Figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay for it depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple, it is mostly handled by a call to msbuild, and also it creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
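The sanity check itself is tiny. My scripts are PowerShell, but the idea translates directly; a Python rendering, with a hypothetical manifest file naming the environment the package was built for:

import socket
import sys

# Which machines each kind of package may be deployed to (placeholders).
ALLOWED_HOSTS = {
    "test": {"testweb01"},
    "prod": {"prodweb01", "prodweb02"},
}

# package_env.txt would be written by the Package script - a made-up detail.
env = open("package_env.txt").read().strip()
host = socket.gethostname().lower()

if host not in ALLOWED_HOSTS.get(env, set()):
    sys.exit(f"refusing to deploy a '{env}' package on '{host}'")
print(f"sanity check passed: '{env}' package on '{host}'")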
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can just create a new class in .NET deriving from Task that overrides the Execute function, and then reference it from your build XML file.
There is a pretty good introduction here:
introduction
I've only worked on a couple of .NET projects (I've done mostly Java), but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that too in whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
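The nightly driver doesn't need to be clever. In outline it is something like this sketch (branch names, SMTP host, and addresses are placeholders, and build.pl stands in for our actual build script):

import smtplib
import subprocess
from email.message import EmailMessage

def notify(subject, body):
    msg = EmailMessage()
    msg["From"] = "build@example.com"
    msg["To"] = "dev-team@example.com"
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

for branch in ["trunk", "maint-2.1", "maint-2.0"]:
    result = subprocess.run(["perl", "build.pl", branch],
                            capture_output=True, text=True)
    if result.returncode == 0:
        notify(f"Nightly build succeeded: {branch}", "Completed successfully.")
    else:
        notify(f"Nightly build FAILED: {branch}",
               result.stdout + "\n" + result.stderr)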
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out/gets latest on the "main" build script and then launches it.
I say this b/c I see teams just running the latest version of the build script on the server, but either never putting it in source control, or only checking it in on a random basis. If you make the build process 'get' from source control, it will force you to keep the latest and greatest build script in there.
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT