Best way to deploy multiple instances of the same ASP.NET web application - asp.net

Any creative minds out there want to help me think outside the box?
We have an ASP.NET web application which we need to deploy to say 100 machines. The web application is the same except for configuration files (web.config) and a few other files (scripts, mostly).
Our goals are:
To be able to upgrade the web application efficiently by sending out incremental changes which could also easily be reverted (on a per-project or per-file basis) in case of a problem.
Efficient support for binary files (as this web application contains compiled *.dlls)
Not require any compilation on the target machines.
Nice to haves would be:
Easy to remotely initiate an update
Opportunity to execute certain scripts (ex: database migrations) automatically post-update
We currently have scripts which commit a compiled version of each separate instance of the application to source control and copy the appropriate configuration files into each folder. Each instance checks out (and later updates) a specific folder from source control.
This process is good enough for 100 instances, but would not scale to 1000 instances as it is storing clones of all the binary files with every release (one per instance). I'm looking for better ways of doing this.
The most obvious answer appears to be to keep the same process but have only one folder containing the files (one folder for all instances) and have scripts that overwrite the appropriate files after an update on the target machines.
What's a better way of accomplishing this?

Related

Asp.net Deployment Strategies

I have heard lots of strategies for deploying asp.net applications, but am not quite sure which strategy is best for my needs.
I have an asp.net 4 application. I have separate development/staging/production environments (different web.configs). I also need to manage sql server changes. It is possible that I may have more than 1 DB server and more than 1 app server to push changes to. Ideally, I would like to hit a button and say "deploy to staging" or "deploy to production" and it deploys the code/db/config files to the correct servers. Ideally, I'd like there to be some process to roll back in case of a bad release as well.
I have heard xcopy/robocopy strategies, MSDeploy (now called Web Deploy?) strategies, and building MSI packages to deploy.
Which of these seems like the best fit for this type of need?
Method #1
If you have some time to spend, I suggest using CruiseControl.NET. For a while at least, the stackoverflow team used this for deployments.
Method #2
As far as copy strategies go, I recommend using a combination of 7zip and FTP for application and media. 7-Zip is nice, as it allows you to exclude specific files (web.config), folders, and file types, and allows you to compress different files differently. For example, there is no point in compressing a PNG. Note, this does a full deployment every time. So, if you have large media folders, I'd handle them separately.
As for the database, I believe you will have the best luck using SQL Compare by Red Gate. It is a commercial application, but it is very, very good. It has been positively mentioned multiple times on the Stack Overflow podcasts.
Build a CMD file on the development/build server that generates the master 7zip file and FTP's it to a dedicated folder on the staging (or production) server. I end up with multiple calls to 7zip feeding files into a single 7zip file, using different compression methods for each batch.
Build a CMD file for each staging or production server. This file will execute proper file backups, and extract the 7zip file to the proper location.
A deployment to staging will go like this:
Execute your 7zip-prep command file which will trigger FTP upload to a dedicated FTP folder on the staging server
Execute DB changes against the staging database server via scripts generated by SQL Compare
Execute 7zip extraction command file on the staging server
This is the method that I use. I have not invested the time to master CruiseControl.NET, but when I do I will probably use it instead, at least for larger applications. It's not a one-click deployment, but it allows for multiple efficient deployments per day (as I've been doing off and on for a few years now). The 7zip method is nice, because once you have your command files, you can copy them and use them for new projects very quickly.
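For illustration, here is a rough sketch of that packaging step (the answer above uses CMD files; this PowerShell version is only equivalent in spirit, and the paths, archive name and exclude patterns are placeholders to adapt):
    # Placeholder paths; 7z.exe is assumed to be on the PATH.
    $src     = "C:\Builds\MyApp\latest"
    $archive = "C:\Builds\MyApp\release.7z"
    # Pass 1: code and markup at maximum compression, skipping per-server config and media.
    & 7z a $archive "$src\*" -r -mx=9 "-xr!web.config" "-xr!*.png" "-xr!*.jpg"
    # Pass 2: already-compressed media added with "store", so 7-Zip doesn't waste time on PNGs.
    & 7z a $archive "$src\*.png" "$src\*.jpg" -r -mx=0
    # The single resulting archive is then FTP'd to the dedicated folder on the staging
    # (or production) server, where the per-server extraction script picks it up.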

How to keep deployed code on multiple BizTalk front ends in sync?

We have multiple BizTalk 2006 application servers, and I find it almost impossible to keep the versions of our projects in sync on them. It's a tedious process of deploying the MSI packages, importing them, matching up files in the GAC, deploying some registry changes, and if one step is missed or somebody deployed an updated copy of a DLL directly to one server and not another, there's no easy way to tell.
How do others ensure that copies of software between the two servers are the same version?
Some Background:
Our environment has two (non-clustered) BizTalk front-end servers and a separate database back-end. Until recently, though we had both front ends configured, the host instances were stopped on the second server because of some troubleshooting. They've been disabled for a few months, and we've deployed some updated code in the meantime.
This morning, I did a folder diff on the GAC, as well as the folder that holds the local disk copy of the DLLs for our deployed project (C:\OurProject\ on both servers), and everything matched - same file sizes, same timestamps. However, once I turned on the second set of services, it became obvious that Server2 was using an old version of the project DLL - of the next three files processed, two had normal results and one was clearly out of date.
Please help me avoid an aneurysm.
One thing you may want to look into is the BizTalk Deployment Framework.
We are currently building up a new environment with BizTalk 2009, and I started out with a set of MSBuild scripts that handle exporting sources from Subversion, building, and deploying assemblies using BTSTask.
Of course BTSTask lacks a lot of functionality (start/stop applications) but at least for BizTalk 2006 there is BTSControl.
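If it helps, the import itself is a one-liner from a script; treat the exact BTSTask switches below as an assumption to verify against your BizTalk version, and the package and application names are made up:
    # Hypothetical names; check BTSTask's switch names for your BizTalk version.
    BTSTask ImportApp /Package:"C:\Drops\OrderProcessing.msi" /ApplicationName:"OrderProcessing" /Overwrite
    # BTSTask itself cannot start/stop an application, which is where BTSControl
    # (or WMI/ExplorerOM scripting) comes in on BizTalk 2006.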
We use an automated build script whose ultimate end is an MSI with binding files for Dev/Stage/Prod. All released binding files are stored on a share and used to load the BizTalk server by hand. First the app is stopped, the MSI is executed on both servers, and then the MSI is imported. During the import, we specify the environment for the bindings and voila. We've had no issues with loss of sync.
So, I'd suggest taking all of your latest MSIs and re-executing them on the servers where you have differences. Otherwise, try to put a repeatable, by-hand load process in place.

How do you deploy your ASP.NET applications to live servers?

I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested in the workflow happening between the time your Continuous Integration build server drops the binaries at some location and the time the first user request hits these binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action, and in the OnInstall event we run a custom .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (this is done only the first time an application is deployed; a rough equivalent is sketched after this list). We support multiple Web Sites in IIS, Authentication for Virtual Directories, and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
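For reference, the App Pool / Virtual Directory part of that custom install action amounts to IIS 6 metabase scripting through System.DirectoryServices; a rough PowerShell sketch of the same idea follows (all names and paths are placeholders, and method invocation details can vary, so treat it as illustrative only):
    # Placeholder names; talks to the IIS 6 metabase via ADSI, as the install action does.
    $appPools = [ADSI]"IIS://localhost/W3SVC/AppPools"
    $pool = $appPools.Create("IIsApplicationPool", "MyAppPool")
    $pool.SetInfo()
    # Create a virtual directory under the first web site and mark it as an application.
    $root = [ADSI]"IIS://localhost/W3SVC/1/Root"
    $vdir = $root.Create("IIsWebVirtualDir", "MyApp")
    $vdir.Properties["Path"].Value = "C:\inetpub\MyApp"
    $vdir.Properties["AppPoolId"].Value = "MyAppPool"
    $vdir.SetInfo()
    $vdir.AppCreate($true)   # some hosts need $vdir.psbase.Invoke("AppCreate", $true) instead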
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a CSS file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and UAT environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
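For the curious, the two robocopy steps boil down to something like this (paths are placeholders; /L is the "list only" dry run used to circulate the change list, and the real /MIR pass only transfers files that differ):
    # Dry run: report what the next push would change, copy nothing.
    robocopy \\staging\wwwroot\MyApp \\web01\wwwroot\MyApp /MIR /L /NP /LOG:C:\Deploy\next-push.txt
    # Actual push to a node while it is out of the cluster; unchanged files are skipped.
    robocopy \\staging\wwwroot\MyApp \\web01\wwwroot\MyApp /MIR /NP /LOG:C:\Deploy\push-web01.txt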
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish the website to a local folder, zip it, then upload it over FTP. Deployer on the server then extracts the zip, replaces config values (in Web.Config and other files), and that's it.
Of course for the first run you need to connect to the server and set up the IIS website and database, but after that publishing updates is a piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET. Zip it up, SFTP to the server, extract into the right location. For the first deployment, manually set up IIS.
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLL's we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us of the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often, so it's just as easy to spend the one minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, and ASPX pages to linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a Capistrano-like deployment solution I wrote for .NET applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .NET applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all PowerShell, so there aren't any external dependencies
it uses PowerShell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another SCM) is the default way to get the application onto the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using PowerShell's WebAdministration module. It allows us (and you) to modify any aspect of the application pool or website
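As an illustration of what that configuration boils down to (not Unfold's actual code; names and paths are placeholders), the WebAdministration module makes it a few lines of PowerShell:
    # Placeholder names; requires the WebAdministration module (IIS 7+).
    Import-Module WebAdministration
    if (-not (Test-Path "IIS:\AppPools\MyAppPool")) {
        New-WebAppPool -Name "MyAppPool"
    }
    if (-not (Test-Path "IIS:\Sites\MyApp")) {
        New-Website -Name "MyApp" -Port 80 -PhysicalPath "D:\Sites\MyApp\current" -ApplicationPool "MyAppPool"
    }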
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, Unfold does this: every deploy is installed next to the others. That way we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does an SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MSBuild. Setting this up is an adventure, because you need to install the correct MSBuild and then the ASP.NET and MVC DLLs on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> entered in my .csproj files to compile the views, msbuild was randomly crashing, so I had to disable it)
Once the code is compiled the unit tests are run (I'm using nunit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS, roughly as sketched after this list (I've had problems with IIS locking files, and stopping the pool first solved it)
I have separate web.config files for each environment: dev, uat, prod. (I tried using the web transformation stuff with little success.) So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
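The stop/copy/start step mentioned above is roughly the following (pool name and paths are placeholders; Stop-WebAppPool and Start-WebAppPool come from the WebAdministration module):
    # Placeholder names and paths; WebAdministration module required.
    Import-Module WebAdministration
    Stop-WebAppPool -Name "MyAppPool"
    # Plain copy of the build output over the site folder; robocopy works just as well.
    # The point is that IIS holds no file locks while the copy runs.
    Copy-Item -Path "C:\Jenkins\workspace\MyApp\build\*" -Destination "C:\inetpub\wwwroot\MyApp" -Recurse -Force
    Start-WebAppPool -Name "MyAppPool"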
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use WinSCP to synchronise the filesystem between the build box and UAT
I send an HTTP request to the UAT server and make sure I get back a 200
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
HTTP request to the Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
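The tagging and archiving steps are plain one-liners; the repository URL and paths below are placeholders:
    # Placeholder repository URL and paths.
    $stamp = Get-Date -Format "yyyyMMdd-HHmm"
    # Tag the revision that was just deployed (same idea for the UAT-* tags).
    svn copy "https://svn.example.com/repo/trunk" "https://svn.example.com/repo/tags/Prod-$stamp" -m "Prod release $stamp"
    # Zip and store the release; Compress-Archive needs PowerShell 5+, any zip tool will do.
    Compress-Archive -Path "C:\Builds\MyApp\www\*" -DestinationPath "C:\Releases\MyApp-Prod-$stamp.zip"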
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show what revision # caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to ensure that UAT is always up to date with Prod. Still, it's a pain.
There are quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there are a few support batch files needed as part of the process. (These are also checked in.)
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rsync batch file to upload only the changes since the last upload. The beauty of rsync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rsync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you can find the last backed-up version of every file in that folder.
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
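For reference, the core of such an rsync call looks something like this (host, paths and the exclude list are placeholders; --backup-dir produces the "replaced files" folder described above):
    # Upload only changes, skip source artifacts, keep a copy of anything overwritten.
    rsync -avz --exclude='*.cs' --exclude='*.csproj' --exclude='*.sln' --backup --backup-dir=backup/2013-05-01 ./publish/ deployuser@liveserver:/var/www/site/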
I'd recommend NOT just overwriting existing application files but instead creating a directory per version and repointing the IIS application to the new path.
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new appdomain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
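A sketch of the repointing itself, with placeholder names (WebAdministration module, IIS 7+):
    # Each release lands in its own dated folder; the site is then switched to it.
    Import-Module WebAdministration
    $newRelease = "D:\Sites\MyApp\releases\2013-05-01_1430"
    # The old folder stays on disk, so rolling back is just pointing physicalPath at it again.
    Set-ItemProperty "IIS:\Sites\MyApp" -Name physicalPath -Value $newRelease
    # Optional: recycle the pool so cached resources aren't served from the old appdomain.
    Restart-WebAppPool -Name "MyAppPool"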

What is involved with making a "build"?

Where I work we use a set schedule to build our applications. What is involved with builds? What is involved with getting an application to build somewhere other than at the local host?
I came across this just the other day... It helps clarify some things regarding building ASP.NET websites. Not a definitive answer, but worth a look.
http://msdn.microsoft.com/en-us/library/399f057w.aspx:
ASP.NET offers two options for precompiling a site:
Precompiling a site in place. This option is useful for existing sites where you want to enhance performance and perform error checking.
Precompiling a site for deployment. This option creates a special output that you can deploy to a production server.
Additionally, you can precompile a site to make it read-only or updatable. The following sections provide more details about each option.
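For reference, the deployment-style precompile described there is driven by aspnet_compiler.exe; the paths and virtual directory name below are placeholders:
    # Precompile for deployment: -v is the virtual path, -p the physical source, last arg the output.
    aspnet_compiler -v /MySite -p C:\src\MySite C:\precompiled\MySite
    # Add -u for the "updatable" variant, which leaves the .aspx markup editable on the server.
    aspnet_compiler -v /MySite -p C:\src\MySite -u C:\precompiled\MySite-updatable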
The basic purpose of a build is to ensure that a consistent set of all the files needed for the website is deployed together.
This is typically accomplished by retrieving the files from source control, packaging them in an appropriate archive or archives (called artifacts of the build process) and then having scripts which send these files to the production or QA servers where the application is then launched.
Depending on the technology and the project, the build script might compile files, run tests, create archives, copy files to different locations and things of that nature.
There are different tools to support the build process by providing the underlying scripting ability. Build scripting can have some unique properties; for example, a given task should run only once per build even if several parts of a larger build require it. If a task creates the directory that artifacts will be deployed to, the build script engine (such as NAnt) makes sure that even when that task is called multiple times, the directory is created only once per build.
If you want to understand more about builds, look around for some open source projects that use a technology you are familiar with, and look at their build procedure (how you change code and deploy it).
This Joel on Software article is my all-time favorite explanation of builds, daily builds, and build servers.
His central thesis in this and many other articles is that the more effort it takes for you to create a build (which means any compilation or transformation of source files, including changing their location so that they will be production/test ready), the less productive and lower quality your software will be, and the more mistakes you'll be prone to during deployments and builds. Ideally, with a web application, you have a single script that you can double-click; it grabs the most current files from source control, compiles them, and deploys everything to the target location.
Many commercial products exist that can help you toward this end. I like Team Foundation Server, but it may not fit your budget, culture, or programming-language choice. Having a custom build script is not the worst idea either, because it gives your programming team a great view of your process.
There's a wikipedia article on the subject:
In the field of computer software, the term software build refers either to the process of converting source code files into standalone software artifact(s) that can be run on a computer, or the result of doing so. One of the most important steps of a software build is the compilation process, where source code files are converted into executable code.
While for simple programs the process consists of a single file being compiled, for complex software the source code may consist of many files and may be combined in different ways to produce many different versions.
The process of building a computer program is usually managed by a build tool, a program that coordinates and controls other programs. Examples of such a program are make, ant, maven and SCons. The build utility needs to compile and link the various files, in the correct order. If the source code in a particular file has not changed then it may not need to be recompiled (may not rather than need not, because it may itself depend on other files that have changed). Sophisticated build utilities and linkers attempt to refrain from recompiling code that does not need it, to shorten the time required to complete the build. Modern build utilities may be partially integrated into revision control programs like Subversion. A more complex process may involve other programs producing code or data for the build process.

Step-By-Step ASP.NET Automated Build/Deploy

Seems like there are so many different ways of automating one's build/deployment that it becomes difficult to parse through all the different scenarios that people support in tutorials on the web. So I wanted to present the question to the stackoverflow crowd ... what would be the best way to set up an automated build and deployment system using the following configuration:
Visual Studio 2008
Web Application Project
CruiseControl.NET
One of the first things I tried was to have CC.NET automatically zip the output and copy it to the server, but that then requires manual work to unzip at the destination. However, if we try to copy all the files individually, it could potentially take a long time if it's a large application (the build server lives outside of the datacenter, in our office ... I know).
Also of particular interest is how we would support multiple environments as we have dev, qa, uat, and then of course prod.
MSDeploy seems really interesting, but unless I'm interpreting the literature incorrectly, doesn't help in the scenario of deploying from the output of a build server. If anything, it seems like it'll be useful in deploying one build across a build farm ... but even for deploying from one environment to another, one would have to manually change config settings and web service URLs, etc.
I recently spent a few days working on automating deployments at my company.
We use a combination of CruiseControl, NAnt and MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to back up the live site and transfer the new files over.
Our solution is briefly described in an answer to this question Automate Deployment for Web Applications?
You might be interested in MSDeploy. Here's a Scott Hanselman post on this. It's only available as a technical preview at the moment (September 2008) but is worth evaluating against your requirements.
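To make the MSDeploy option a little more concrete, a sync from a build drop to a remote server looks roughly like this (server name and paths are placeholders; -whatif previews the changes without applying them):
    # Preview which files would be added/updated/deleted on the target.
    msdeploy -verb:sync -source:contentPath="C:\Drops\MyApp\latest" -dest:contentPath="D:\wwwroot\MyApp",computerName=WEB01 -whatif
    # Re-run without -whatif to push only the changed files.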
There is another new build tool (a very intelligent wrapper) called NUBuild. It's lightweight, open source and extremely easy to set up, and it provides almost no-touch maintenance. I really like this new tool and we have made it the standard tool for our continuous build and integration process (we have about 400 projects across 75 developers). Try it out.
http://nubuild.codeplex.com/
Easy to use command line interface
Ability to target all .NET Framework versions, i.e. 1.1, 2.0, 3.0 and 3.5
Supports XML-based configuration
Supports both project and file references
Automatically generates the "complete ordered build list" for a given project – no-touch maintenance
Ability to detect and display circular dependencies
Performs parallel builds – automatically decides which of the projects in the generated build list can be built independently
Ability to handle proxy assemblies
Provides visual clues about the build process, e.g. showing "% completed", "current status", etc.
Generates a detailed execution log in both XML and text format
Easily integrated with the CruiseControl.NET continuous integration system
Can use a custom logger like XMLLogger when targeting 2.0+ versions
Ability to parse error logs
Ability to deploy built assemblies to a user-specified location
Ability to synchronize source code with the source-control system
Version management capability
Do you have the ability to run commands remotely? The PsExec utility from Sysinternals would let you run a command-line unzip program on the remote machine. If you have a script that copies the build as a .zip file to the remote site, you would just need one more line for the PsExec call to unzip the files.
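A sketch of that extra line, with placeholder server and paths (PsExec plus an unzip tool such as 7-Zip assumed to be present on the remote box):
    # Copy the build to the remote server, then unzip it in place via PsExec.
    Copy-Item "C:\Builds\MyApp\release.zip" "\\WEB01\d$\Drops\release.zip"
    psexec \\WEB01 cmd /c "7z x d:\Drops\release.zip -od:\wwwroot\MyApp -y"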
I had a related question about getting a deployable set of files from an automated build. I found Web Deployment Projects (links and all in the old question) did what I needed - they're a VS and MSBuild add-on.
This is a common problem (and I wish I had read this question sooner) for all development, not just ASP.NET. As one of its developers, I naturally use BuildMaster with my team for the entire release process, and for most scenarios it's free. Within the tool, we are able to perform all the standard CI builds to create artifacts and then set up an automation process to deploy these artifacts to any one of the 40+ servers we have internally or externally hosted, depending on the specific application or environment.
Since you specifically mentioned deployment to different testing environments, this is a fundamental aspect of the tool. The idea is to model the environment workflow (e.g. Integration -> QA -> Production) you already have in place and essentially promote a build all the way from source control to production. Most times, it's as simple as adding a deployment action that deploys an artifact to the environment, other times it can be much more complex.
You also casually mentioned configuration file changes are part of deployment, which is another built-in component to BuildMaster. The idea we had was to use the tool itself as the central hub for all configuration files and deployments, thus ensuring the latest changes are applied automatically with a simple "deploy configuration files" action in your deployment plan.
One thing you didn't mention with regard to this process is the database deployment aspect. Most ASP.NET applications require an associated database, otherwise they could just be static HTML files. It is crucial that the database schema gets updated to the appropriate database version with every deployment. There is, not surprisingly, a module within BuildMaster that handles this for you as well. The idea is to store DDL-DML scripts within the tool itself, and by executing scripts only once per environment, it ensures that all of your databases across each environment are up-to-date as your builds are deployed through them. Other scripts (e.g. stored procedures, views, triggers, etc.) are essentially code files and therefore belong in source control. These DROP-CREATE-CONFIGURE type scripts can be run each and every time in most cases with a simple deployment action.
Another piece of the deployment puzzle that most developers do not think about is process automation. Many developers need to perform sign-offs or fill out change request forms in order to manually perform these processes. Again, this is all available as part of the automated workflow setup within BuildMaster. You can setup blockers that do not allow promotion to say the QA environment unless all unit tests have passed, or block promotion to the Staging environment unless someone from the QA team approves the build and all issues in your issue tracking tool are resolved/closed for that particular release.
While I realize I left CC.NET out of the answer, our applications are all built and deployed through BuildMaster so we no longer need it, though we could just as easily pick up the artifacts from a drop location and deploy them in later environments.
I see that many people use CC for their .NET projects, but why not use Jenkins and SonarQube? They have all you need. I set all this up in 3 days on a Windows Server 2008 R2 box with MSSQL, Jenkins, VisualSVN and SonarQube.
It all works great and you get all the metrics on your project. SonarQube uses Gallio, Gendarme, FxCop, StyleCop, NDepths and PartCover to get your metrics, and all this is pretty straightforward since SonarQube does it automatically without much configuration.
The setup has one Jenkins job that builds and gathers the Sonar metrics, and another job that deploys automatically to IIS. SonarQube then shows all the metrics for my project. This is a simple MVC4 app, but it works great!
If you want more information I can be more specific, but I think you should at least consider Jenkins. If CC suits you better, you'll at least have looked at a good alternative before you chose.
This whole setup uses MSBuild to build and deploy the apps.

Resources