What's the purpose of an integration server? - axapta

I'm new to DevOps, so forgive me if this is trivial, but given the following workflow, what is the purpose of the integration server?
I've been given the following steps as an example of an approach to DevOps at my organisation:
Developers check in changes to source control (TFS).
Build server checks for changes.
Artefacts of the build are deployed to an "integration server" which has a copy of our ERP on it.
A release management application takes the output from this ERP environment and moves it to test, pre-production, and production environment as and when.
Is this approach correct, and if so, is the purpose of an integration server merely to provide a working implementation of the code, one that isn't used for anything other than moving code onto other servers?

My answer makes some assumptions about what's going on in your environment.
When you check in changes to source control with AX, it adds *.xpo text files containing only the code/objects you changed.
It sounds like your "integration server" is a build/staging server. Imagine these two scenarios:
You have a customization with 3 objects, and you add 2 of the objects to source control and forget one. When you build on the integration server, it could have compile errors because of that missing dependent object.
In your development environment, you create test forms and jobs that are basically junk you are experimenting with. You do not add these objects to source control. You wouldn't want this code to be deployed to your other environments, so the integration environment ensures the code is strictly from the repo.
Doing full compiles/syncs against the integration environment will also help identify issues. Then you can deploy that environment in its entirety to your other environments.
The big thing to realize is that your repo is really only your changes to the base (sys/syp) code. So part of the integration/build process is your code & base code combining.

Related

Best practices and tools to perform code move for ASP.NET projects

I have an ASP.NET project with my own Development, Staging and Production servers.
In all environments, I move code manually. So every time I have to promote a change, I perform the following steps:
Get my latest code from SVN.
Merge the code between lower and to be promoted environment using tools like Beyond Compare.
Then I move the respective ASPX and DLL files and any Stored Procedures or table data manually to Production.
This is a very time consuming process and I would like to get some automated methods for code moves.
Is there a way I can get the code moved from SVN to my servers using automated tools or automated packages?
I am using ASP.NET 2.0 with IIS 7 and SQL Server 2008.
MSBuild can help you with getting the code from SVN and building it. You will need to create simple batch files to run it; alternatively you can use CruiseControl for that.
Manual merges should be avoided. If you are using VS2010, you can use web.config transformations (XDT) to produce the production version of your config files.
I am not a big fan of stored procs. If you can encapsulate the SQL with the code, there is less room for error, rolling back changes is simpler, and deployment is easier. Database schema updates should be done in a batch file and applied automatically.
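To illustrate, here is a hedged sketch of what keeping the SQL with the code can look like - a plain ADO.NET method with an inline, parameterized query instead of a stored procedure. The class, table and column names are invented for the example:
using System.Data.SqlClient;

public class OrderRepository
{
    private readonly string _connectionString;

    public OrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // The query is versioned and deployed with the assembly, so rolling back
    // the code also rolls back the SQL it runs.
    public int CountOpenOrders(int customerId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId AND Status = 'Open'",
            connection))
        {
            command.Parameters.AddWithValue("@customerId", customerId);
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}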
There are multiple ways to deploy: Web Deploy or an MSI file. It depends on how much work is required during the deployment process.
I would look into continuous integration. My favorite because it is simplest to use is TeamCity.
You will still have to do some work with MSBuild.
You can set up the builds to be a button push from the site.
Have the code pushed when you check into svn.
Just about any way you can think of.
I would strongly urge you to use it to ALWAYS build your code and run unit tests on each SVN check-in. It does not have to deploy, but TeamCity will provide constant feedback on the state of your solution.

Coming up with a better ASP.NET deployment strategy

At work we currently use the following deployment strategy:
Run a batch script to clear out all Temporary ASP.NET files
Run a batch script that compiles every ASPX file into its own DLL (ASP.NET Web Site, not Web Application)
Copy each individually changed file (ASPX and DLL) to the appropriate folder on the live server.
Open up the Deployment Scripts folder, run each SQL script (table modifications, stored procs, etc.) manually on the production database.
Say a prayer before going to sleep (joking on this one, maybe)
Test first thing the next morning and hope for the best - fix bugs as they come up.
We have been bitten a few times in the past because someone forgets to run a script, or thinks they ran something but didn't, or overwrites a sproc related to some module because there are two files (one in a Sprocs folder and one in a [ModuleName]Related folder), or copies the wrong DLL (since they can have the same names with a random alphanumeric suffix generated by .NET).
This seems highly inefficient to me - a lot of manual things and very error prone. It can sometimes take 2-3 or more hours for a developer to perform a deployment (we do them late at night, like around midnight) due to all the manual steps and remembering what files need to be copied, where they need to be copied, what scripts need to be run, making sure scripts are run in the right order, etc.
There has got to be an easier way than taking two hours to copy and paste individual ASPX pages, DLLs, images, stylesheets and the like and run some 30+ SQL scripts manually. We use SVN as our source control system (mainly just for update/commit though, we don't do branching) but have no unit tests or testing strategy. Is there some kind of tool that I can look into to help us make our deployments smoother?
I did not go through all of it, but the You're deploying it wrong series from Troy Hunt might be a good place to look.
Points discussed in the series:
Config transforms
Build Automation
Continuous Integration
We have four stages before it can be deployed.
Development
QA
UAT
Production
We have build scripts (inside the Bamboo build server) running against QA and against UAT. Our DBA is the only person who can run create scripts against QA, UAT, and PROD. Anything that goes from QA -> UAT is like a test run of the deployment. UAT gets reverted by copying the production systems down again.
When we release into Production we just create a whole new site, point it at the UAT database, and test that environmentally it is working fine. Then, when this is working well, we flick the 'switch': point the production IIS record at the new site and change the DB connection to point at the Prod DB.
Because we are using a completely different folder structure, all of our files get copied up, so there is no chance of missing one. Because we have had test runs of the deployment into UAT, we know we haven't missed a DB script (DB scripts are generally combined into one). Because we have tested a shadow copy of the IIS website, we know that environmentally it should work. We can then do all this setup during the day, and do the final switch flicking at midnight or whenever, reducing the impact on devs.
tl;dr: Automated build and deploy; UAT system for test-running the deployment; deployment work during office hours; flick the switch/run the DB update at midnight.
I am a developer for BuildMaster, a tool which can very easily automate the steps you have outlined above, and we have a limited version free for a team of 5 developers.
Most of your pain points will disappear the moment you set up the deployment automation - mainly the batch script execution and the file-by-file copying. Once you're fully automated, you can even schedule the deployment for night time and only have to worry about it if there's an error in the process (you can set up a notifier for a failed build).
On the database side, you can integrate your database with BuildMaster as well and if you upload the scripts into the tool it will keep track of which ones were run against which database.
To see how to set up a simple web application deployment plan, you can run one of the example applications included. You can also check out: http://inedo.com/support/tutorials/lunchmaster/part-1 to see how to create one yourself - it's slightly outdated since we've made it even easier to get started out-of-the-box but the main concepts are the same.
Please see this blog post and associated talk by Scott Hanselman titled "Web Deployment Made Awesome"
Blog
Video
As for SQL deployment, you might want to consider one of the following (a FluentMigrator sketch follows the list):
RikMigrations
Migrator.NET
FluentMigrator
Mantee Introduction & Source
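To make the migration approach concrete, here is a minimal FluentMigrator sketch (the table and column names are invented); the migration runner applies Up() to move a database forward, Down() to roll it back, and records which version numbers have already been applied:
using FluentMigrator;

[Migration(20120301120000)]
public class AddCustomersTable : Migration
{
    public override void Up()
    {
        Create.Table("Customers")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Customers");
    }
}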
Have a User Acceptance Testing (UAT) environment which is completely isolated from your development environment, and only accessible to the UAT manager.
Set up a UAT build which you can manually trigger for each release. When triggered, it should send all your deployment files, as well as a deployment checklist, to the UAT manager, who will redeploy all files to the UAT environment and run any database upgrade scripts.
Once the application's users and testers have signed off the UAT release, the UAT manager can be authorised to deploy to the PRODUCTION environment using the exact same procedure and checklists as the UAT release. This guarantees that you never miss any deployment steps, and that you test the deployment process before moving it into production.
Caveats: I'm in an environment where we can't use MSI, batch, etc. for the final deployment.
Things that helped:
A build server that does the full compilation and runs all unit tests and integration tests. Why find out on deployment night that you have something in an aspx page that doesn't compile? (I admit your question doesn't make it clear whether compilation is happening on deployment night.)
I have a page that administrators can reach that exercises environment and deployment failure points, e.g. connecting to the database, connecting to reporting services, sending an email, and reading and writing the temp folder.
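As a rough sketch of such a page (not the answerer's actual code - the connection string name, addresses and output are hypothetical), a WebForms code-behind could simply exercise each failure point in turn:
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.IO;
using System.Net.Mail;

public class EnvironmentCheck : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // 1. Can we reach the database? ("Main" is a hypothetical name)
        var cs = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
        using (var connection = new SqlConnection(cs))
        {
            connection.Open();
            Response.Write("Database: OK<br/>");
        }

        // 2. Can we read and write the temp folder?
        var probe = Path.Combine(Path.GetTempPath(), "deploy_probe.txt");
        File.WriteAllText(probe, DateTime.UtcNow.ToString("o"));
        File.Delete(probe);
        Response.Write("Temp folder: OK<br/>");

        // 3. Can we send mail? (uses the mailSettings section from config)
        new SmtpClient().Send("noreply@example.com", "ops@example.com",
            "Deployment check", "Mail relay reachable.");
        Response.Write("SMTP: OK<br/>");
    }
}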
Also, put all the things that the administrator needs to change into a file external to web.config. The connection strings and app settings sections natively support a way to do this (i.e. don't reinvent the web.config system just to create a separate file).
Here is an article on how to do better integration tests: http://suburbandestiny.com/Tech/?p=601 There is a ton of good literature on how to do unit tests, but often, if your app already exists, you will have to refactor until unit testing becomes possible. If that isn't an option, then don't be a purist and put together some integration tests that are as fast and repeatable as possible.
Keep your dependencies in bin instead of the GAC, since it's easier to tell an administrator to copy files than it is to teach them to administer the GAC.

Integrating Automated Web Testing Into Build Process

I'm looking for suggestions to improve the process of automating functional testing of a website. Here's what I've tried in the past.
I used to have a test project using WATIN. You effectively write what look like "unit tests" and use WATIN to automate a browser to click around your site etc.
Of course, you need a site to be running. So I made the test actually copy the code from my web project to a local directory and started a web server pointing to that directory before any of the tests run.
That way, someone new could simply get latest from our source control and run our build script, and see all the tests run. They could also simply run all the tests from the IDE.
The problem I ran into was that I spent more time maintaining the code that set up the test environment than the tests themselves. Not to mention that it took a long time to run because of all that copying. Also, I needed to test various scenarios including installation, meaning I needed to be able to set the database to various initial states.
I was curious on what you've done to automate functional testing to solve some of these issues and still keep it simple.
MORE DETAILS
Since people asked for more details, here it is. I'm running ASP.NET using Visual Studio and Cassini (the built in web server). My unit tests run in MbUnit (but that's not so important. Could be NUnit or XUnit.NET). Typically, I have a separate unit test framework run all my WATIN tests. In the AssemblyLoad phase, I start the webserver and copy all my web application code locally.
I'm interested in solutions for any platform, but I may need more descriptions on what each thing means. :)
Phil,
Automation can just be hard to maintain, but the more you use your automation for deployment, the more you can leverage it for test setup (and vice versa).
Frankly, it's easier to evolve automation code, factoring and refactoring it into specific, small units of functionality, when you're using a build tool that isn't just driving statically-compiled, pre-factored units of functionality, as is the case with NAnt and MSBuild. This is one of the reasons that many people who were relatively early users of tools like NAnt have moved off to Rake. The freedom to treat build code as any other code - to continually evolve its content and shape - is greater with Rake. You don't end up with the same stasis in automation artifacts as easily or as quickly with Rake, and it's a lot easier to script in Rake than in NAnt or MSBuild.
So, some part of your struggle is inherently bound up in the tools. To keep your automation sensible and maintained, you should be wary of obstructions that static build tools like NAnt and MSBuild impose.
I would suggest that you not couple your test environment bootstrapping to assembly load. That's an inside-out coupling that only serves brief convenience. There's nothing wrong (and likely everything right) with going to the command line and executing the build task that sets up the environment before running tests, whether from the IDE, from the command line, or from an interactive console like the C# REPL from the Mono Project, or from IRB.
Test data setup is just a pain in the butt sometimes, but it has to be done.
You're going to need a library that you can call to create and clean up database state. You can make those calls right from your test code, but I personally tend to avoid doing this because there is more than one good use of test data or sample data control code.
I drive all sample data control from HTTP. I write controllers with actions specifically for controlling sample data and issue GETs against those actions through Selenium. I use these to create and clean up data. I can compose GETs to these actions to create common scenarios of setup data, and I can pass specific values for data as request parameters (or form parameters if needs be).
I keep these controllers in an area that I usually call "test_support".
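A minimal ASP.NET MVC flavored sketch of the idea (the controller, action and parameter names here are invented; the real actions would call into your own domain or service layer):
using System.Web.Mvc;

public class SampleDataController : Controller
{
    // e.g. GET /test_support/SampleData/CreateCompany?name=Acme
    public ActionResult CreateCompany(string name)
    {
        // create the company through the normal domain code here
        return Content("created: " + name);
    }

    // e.g. GET /test_support/SampleData/Reset
    public ActionResult Reset()
    {
        // remove whatever sample data earlier calls created
        return Content("reset");
    }
}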
My automation for deploying the website does not deploy the test_support area or its routes and mapping. As part of my deployment verification automation, I make sure that the test_support code is not in the production app.
I also use the test_support code to automate control over the entire environment - replacing services with fakes, turning off subsystems to simulate failures and failovers, activating or deactivating authentication and access control for functional testing that isn't concerned with these facets, etc.
There's a great secondary value to controlling your web app's sample data or test data from the web: when demoing the app, or when doing exploratory testing, you can create the data scenarios you need just by issuing some GETs against known (or guessable) URLs in the test_support area. Making a disciplined effort to stick to RESTful routes and resource-orientation here will really pay off.
There's a lot more to this functional automation (including test, deployment, demoing, etc.), so the better designed these resources are, the better time you'll have maintaining them over the long haul, and the more opportunities you'll find to leverage them in unforeseen but beneficial ways.
For example, writing domain model code over the semantic model of your web pages will help create much more understandable test code and decrease brittleness. If you do this well, you can use those same models with a variety of different drivers so that you can leverage them in stress tests and load tests as well as functional tests, and use them from the command line as exploratory tools. By the way, this kind of thing is easier to do when you're not bound to driver types as you are when you use a static language. There's a reason why many leading testing thinkers and doers work in Ruby, and why Watir is written in Ruby. Reuse, composition, and expressiveness are much easier to achieve in Ruby than in C# test code. But that's another story.
Let's catch up sometime and talk more about the other 90% of this stuff :)
We used Plasma on one project. It emulates a web server in process - just point it at the root of your web application project.
It was surprisingly stable - no copying files or starting up an out of process server.
Here is how a test using Plasma looks for us...
[Test]
public void Can_log_in() {
    AspNetResponse response = WebApp.ProcessRequest("/Login.aspx");
    AspNetForm form = response.GetForm();

    form["UserName"] = User.UserName;
    form["Password"] = User.Password;

    AspNetResponse loggedIn = WebApp.ProcessRequest(Button.Click(form, "LoginUser"));
    Assert.IsTrue(loggedIn.IsRedirect());

    AspNetResponse homePage = WebApp.ProcessRequest(loggedIn.GetRedirectUrl());
    Assert.AreEqual(homePage.Status, 200);
}
All the "AspNetResponse" and "AspNetForm" classes are included with Plasma.
We are currently using an automated build process for our asp.net mvc application.
We use the following tools:
TeamCity
SVN
nUnit
Selenium
We use an MSBuild script that runs on a build agent, which can be any number of machines.
The msbuild script gets the latest version of code from svn and builds it.
On success it then deploys the artifacts to a given machine/folder and creates the virtual site in IIS.
We then use MSBuild contrib tasks to run sql scripts to install the database and load data, you could also do a restore.
On success we kick off the NUnit tests. The test setup ensures that Selenium is up and running and then drives the Selenium tests in much the same way that WatiN does. Selenium has a good recorder for tests, which can be exported to C#.
The good thing about Selenium is that you can drive Firefox, Chrome and IE rather than being restricted to IE, which was the case with WatiN the last time I looked at it. You can also use Selenium to do load testing with Selenium Grid, so you can reuse the same tests.
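For reference, a recorded Selenium test exported to C# and run under NUnit looks roughly like the following. This is a hedged sketch against the Selenium RC client (the one that talks to a running Selenium server); the URL, element ids and assertion text are made up:
using NUnit.Framework;
using Selenium;

[TestFixture]
public class LoginTests
{
    private ISelenium selenium;

    [SetUp]
    public void StartSelenium()
    {
        // assumes the Selenium server is already running on port 4444
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
        selenium.Start();
    }

    [Test]
    public void Can_log_in()
    {
        selenium.Open("/Login.aspx");
        selenium.Type("UserName", "testuser");
        selenium.Type("Password", "secret");
        selenium.Click("LoginUser");
        selenium.WaitForPageToLoad("30000");
        Assert.IsTrue(selenium.IsTextPresent("Welcome"));
    }

    [TearDown]
    public void StopSelenium()
    {
        selenium.Stop();
    }
}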
On success msbuild then tags the build in svn. TeamCity has a job that runs overnight that will deploy the latest tag to a staging environment ready for the business users to check the project status the following morning.
In a previous life we had NAnt and MSBuild scripts to fully manage the environment (installing Java, Selenium, etc.); however, this takes a lot of time, so as a prerequisite we assume each build agent has these installed. In time we will include these tasks.
Why do you need to copy code? Ditch Cassini and let Visual Studio create a virtual directory for you. Sure the devs must remember to build before running web tests if the web app has changed. We have found that this is not a big deal, especially if you run web tests in CI.
Data is a big challenge. As far as I can see, you must choose between imperfect alternatives. Here's how we handle it. First, I should explain that we are working with a large complex legacy WebForms app. Also I should mention that the domain code is not well-suited for creating test data from within the test project.
This left us with a couple of choices. We could: (a) run data setup scripts under the build, or (b) create all data via web tests using the actual web site. The problem with option (a) is that tests become coupled with scripts at a minute level. It makes my head throb to think about synchronizing web test code with T-SQL. So we went with (b).
One benefit of (b) is that your setup also validates application behavior. The problem is...time.
Ideally tests should be independent, without temporal coupling (can run in any order) and not sharing any context (e.g., common test data). The common way to handle this is to set up and tear down data with every test. After some careful thought, we decided to break this rule.
We use Gallio (MbUnit 3), which provides some nice features that support our strategy. First, it lets you specify execution order at the fixture and test level. We have four "setup" fixtures which are ordered -4, -3, -2, -1. These run in the specified order and before all "non setup" fixtures, which by default have an order of 0.
Our web test project depends on the build script for one thing only: a single well-known username/password. This is a coupling I can live with. As the setup tests run they build up a "data context" object that holds identifiers of data (companies, users, vendors, clients, etc.) that is later used (but never changed) throughout all other fixtures. (By identifiers, I don't necessarily mean keys. In most cases our web UI does not expose unique keys. We must navigate the app using names or other proxies for true identifiers. More on this below.)
Gallio also allows you to specify that a test or fixture depends on another test or fixture. When a precedent fails, the dependent is skipped. This reduces the evil of temporal coupling by preventing "cascading failures", which can cause much confusion.
Creating baseline test data once, instead of before each test, speeds things up a lot. However, the setup tests still might take 10 minutes to run. When I'm working on new tests I want to run and rerun them frequently. Enter another cool Gallio feature: Ambience. Ambience is a wrapper around db4o that provides a very simple way to persist objects. We use it to persist the data context automatically. Thus the setup tests must only be run once between rebuilds of the database. After that you can run any or all other fixtures repeatedly.
So what about cleaning up test data? Don't we need to start from a known state? This is a rule we have found it expedient to break. A strategy that is working for us is to use long random values for things like company name, username, etc. We have found that it is not very difficult to keep a test run inside a logical "data space" such that it does not bump into other data. Certainly I fear the day that I spend hours chasing down a phantom failing test only to find that it's some data collision. It's a trade off that is working for us currently.
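A tiny helper along those lines - every run generates names that are effectively guaranteed not to collide with existing data. The prefix convention is just an example:
using System;

public static class TestData
{
    // e.g. UniqueName("Company") -> "Company_3f2504e04f8911d39a0c0305e82c3301"
    public static string UniqueName(string prefix)
    {
        return prefix + "_" + Guid.NewGuid().ToString("N");
    }
}

// usage inside a setup fixture:
// var companyName = TestData.UniqueName("Company");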
We are using Watin. I quite like it. Another key to success is something Scott Bellware alluded to. As we create tests we are building up an abstract model of our UI. So instead of this:
browser.TextField("ctl0_tab2_newNote").TypeText("foo");
You will see this in our tests:
User.NotesTab.NewNote.TypeText("foo");
This approach provides three benefits. First, we never repeat a magic string. This greatly reduces brittleness. Second, tests are much easier to read and understand. Last, we hide most of the WatiN framework behind our own abstractions. In the second example, only TypeText is a WatiN method. This will make it easier to adapt as the framework changes.
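One hedged way to build that abstraction on top of WatiN (the wrapper class names are invented; only TextField and TypeText are WatiN calls, and the control id is the one from the example above):
using WatiN.Core;

public class UserPages
{
    private readonly IE _browser;

    public UserPages(IE browser)
    {
        _browser = browser;
    }

    public NotesTabModel NotesTab
    {
        get { return new NotesTabModel(_browser); }
    }
}

public class NotesTabModel
{
    private readonly IE _browser;

    public NotesTabModel(IE browser)
    {
        _browser = browser;
    }

    // the only place in the test code that knows the magic control id
    public TextField NewNote
    {
        get { return _browser.TextField("ctl0_tab2_newNote"); }
    }
}

// usage in a test, given User = new UserPages(browser):
// User.NotesTab.NewNote.TypeText("foo");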
Hope this helps.
It was difficult, but not impossible, to build an integration test phase into the build process using maven. What happened was essentially this:
Ignore all JUnit tests in a specific directory unless the integration-test phase fires.
Add a Maven profile to execute the integration tests.
In the pre-integration-test phase:
Start Jetty running the application against a test database.
Start the Selenium server.
Run the Selenium integration tests in the integration-test phase.
Stop the Selenium server.
Stop Jetty.
The difficulty in this step was really setting up Jetty - we couldn't get it to launch straight from a war, so we actually have Jetty unpack the war and then run the server - but it works well and is automated - all you have to do is type mvn -PintegrationTest (that was our integration test profile name) and off it goes.
Do you mean automatically starting testing after the build finishes?
You could write automated scripts to copy the build files to a working IIS site when the build compiles successfully, and then start the automated BVT by calling mstest.exe or other methods.
You could also give AutoItX a try, or a scripting language such as Python or Ruby.

Does anybody create installers to deploy internal asp.net web applications?

I've always deployed my web applications via FTP (sometimes even xcopy), and then manually run database scripts myself.
I started deploying this way in the 90s, but lately I've seen a few web apps with installers. I'm starting to question whether I'm locked into an outdated process. I'm a consultant and my apps are usually internal, so I don't worry about distributing them and having others install them.
But I'm curious; does anybody create installers to deploy internal asp.net web applications?
If so, why? (Voluntarily, mandated, or part of an automation process)
And have you had any problems doing it this way?
Absolutely. We use installers for all of our apps. That way we create the installer and run it on the QA and UAT environments to test, so we know exactly what is going to happen in production. There are no guesses as to what order someone might do something in, or whether they missed a step. It makes things a lot easier.
Oh, and I forgot about the automated process too. We have systems in place (AnthillPro) which automatically deploy it to the proper environments. The QA people don't have to wait for something to be done, because it's all done at 2 am. If they need to rerun the build with updates, the devs check the code in, we push a button, and it's automatically deployed. No waiting for the build engineer because he's in a meeting or sick or whatever.
You always want to have an automated way to build and deploy - it greatly reduces the chances of a one-off error if you forget a certain step. Also, it allows you to offload the deploy to someone else easily without having to teach them 100 customized steps. Whether the project is internal or not, all applications should follow best practices.
Personally I'm a bit like the OP; generally I just deploy using FTP, but in saying that typically my applications are internal, or in the case of other projects, 100% managed by me.
I've also been thinking about this lately however, and have started to think about how using proper deployment may improve the process - having to document a detailed install process can be a real pain.
I use PowerShell and found it really easy to automate lots of tasks. It will probably feel a bit different at the very beginning, but in the end you will see that it's all about the power of the .NET libraries!
I have used the "Web Setup Project" to create an MSI that installed the output of a "Web Deployment Project" for an internal app. Our server admin wasn't up to the task of doing a 50-step manual install. For my current app, my server admin doesn't like the 'black box' feel of MSI installers and prefers getting a pile of files and a 50-step deployment manual. (See a pattern here? Ask your server admin what he wants.)
The Web Setup Project doesn't make it immediately obvious how to install to anything other than the "Default Web Site". Other than that, it made the installation process repeatable and provided a built-in way to roll back (by just running the installer from one version ago).
This of course assumes that your virtual directory doesn't hold any user modified content-- I wouldn't trust an MSI to properly merge user created and new files.
We use the "XCopy" deploy model here, since the Ops folks have their own method of setting up security on a new web application on the server.
However, we did need to use an installer when we had to install a web application that was using a newer version of Crystal Reports, since it had to do something special with a key and we didn't have a full-blown version of CR on the server itself. So keep that in mind when working with third-party apps; they may need some kind of merge module, which an MSI handles easily.
Yep... we have an app that needs a lot of prerequisites set up: web service, Windows service, user accounts, security, folder creation, GAC bits, etc. I rolled it all up into a nice MSI with custom actions that can install and uninstall cleanly. It saved about an hour's worth of work to deploy on a new box.
A lot of the other smaller apps are just deployed by doing Publish Website to a local folder then ftp'ing the contents to the target.
It depends greatly upon the scale of your project, your environment and your internal user base. I rarely deploy with an MSI because we are too small an operation to have multiple environments (except for SharePoint; that's different altogether). We develop and use VS to deploy web apps to a development box; once they are approved, we use VS again to deploy to the live box.
The only proviso is that we have multiple copies of the web.config (suffixed with test, dev and live), and we then delete the suffix off the relevant file depending upon where it's been deployed.
It's probably not the best methodology (I know it's not), but it works and it aids rapid deployment of small to medium sized solutions in a small-scale user environment.
F5ToDebug...
You're saying it's OK to take shortcuts if you don't have time to do it properly?
"Who's going to test the code on the test environment?" You said it yourself that you have config files for _test - why would that not be a suitable test?

Step-By-Step ASP.NET Automated Build/Deploy

Seems like there are so many different ways of automating one's build/deployment that it becomes difficult to parse through all the different scenarios people cover in tutorials on the web. So I wanted to present the question to the Stack Overflow crowd ... what would be the best way to set up an automated build and deployment system using the following configuration:
Visual Studio 2008
Web Application Project
CruiseControl.NET
One of the first things I tried was to have CCNet automatically zip the output and copy it to the server, but then that requires manual work to unzip at the destination. However, if we try to copy all the files individually, it could potentially take a long time if it's a large application (the build server lives outside of the datacenter, in our office ... I know).
Also of particular interest is how we would support multiple environments as we have dev, qa, uat, and then of course prod.
MSDeploy seems really interesting, but unless I'm interpreting the literature incorrectly, it doesn't help in the scenario of deploying from the output of a build server. If anything, it seems like it'll be useful for deploying one build across a web farm ... but even for deploying from one environment to another, one would have to manually change config settings, web service URLs, etc.
I recently spent a few days working on automating deployments at my company.
We use a combination of CruiseControl, NAnt, MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to backup the live site and transfer the new files over.
Our solution is briefly described in an answer to this question Automate Deployment for Web Applications?
You might be interested in MSDeploy. Here's a Scott Hanselman post on this. It's only available as a technical preview at the moment (September 2008) but is worth evaluating against your requirements.
There is another new build tool (a very intelligent wrapper) called NUBuild. It's lightweight, open source, extremely easy to set up, and provides almost no-touch maintenance. I really like this new tool, and we have made it the standard tool for the continuous build and integration process of our projects (we have about 400 projects across 75 developers). Try it out.
http://nubuild.codeplex.com/
Easy to use command line interface
Ability to target all .NET Framework versions, i.e. 1.1, 2.0, 3.0 and 3.5
Supports XML based configuration
Supports both project and file references
Automatically generates the "complete ordered build list" for a given project - no touch maintenance
Ability to detect and display circular dependencies
Performs parallel builds - automatically decides which of the projects in the generated build list can be built independently
Ability to handle proxy assemblies
Provides visual clues to the build process, e.g. showing "% completed", "current status" etc.
Generates detailed execution log in both XML and text format
Easily integrated with the CruiseControl.NET continuous integration system
Can use custom loggers like XMLLogger when targeting 2.0+ versions
Ability to parse error logs
Ability to deploy built assemblies to a user-specified location
Ability to synchronize source code with the source control system
Version management capability
Do you have the ability to run commands remotely? The PsExec utility from Sysinternals would let you run a command-line unzip program on the remote machine. If you have a script that copies the build as a .zip file to the remote site, you would just need one more line for the PsExec call to unzip the files.
I had a related question about getting a deployable set of files from an automated build. I found Web Deployment Projects (links and all in the old question) did what I needed - they're a VS and MSBuild add-on.
This is a common problem (and I wish I had read it sooner) for all development, not just ASP.NET. As one of its developers, I can say that my team naturally uses BuildMaster internally for the entire release process, and for most scenarios it's free. Within the tool, we are able to perform all the standard CI builds to create artifacts and then set up an automation process to deploy these artifacts to any one of the 40+ servers we have internally or externally hosted, depending on the specific application or environment.
Since you specifically mentioned deployment to different testing environments, this is a fundamental aspect of the tool. The idea is to model the environment workflow (e.g. Integration -> QA -> Production) you already have in place and essentially promote a build all the way from source control to production. Most times, it's as simple as adding a deployment action that deploys an artifact to the environment, other times it can be much more complex.
You also casually mentioned configuration file changes are part of deployment, which is another built-in component to BuildMaster. The idea we had was to use the tool itself as the central hub for all configuration files and deployments, thus ensuring the latest changes are applied automatically with a simple "deploy configuration files" action in your deployment plan.
One thing you didn't mention with regard to this process is the database deployment aspect. Most ASP.NET applications require an associated database, otherwise they could just be static HTML files. It is crucial that the database schema gets updated to the appropriate database version with every deployment. There is, not surprisingly, a module within BuildMaster that handles this for you as well. The idea is to store DDL-DML scripts within the tool itself, and by executing scripts only once per environment, it ensures that all of your databases across each environment are up-to-date as your builds are deployed through them. Other scripts (e.g. stored procedures, views, triggers, etc.) are essentially code files and therefore belong in source control. These DROP-CREATE-CONFIGURE type scripts can be run each and every time in most cases with a simple deployment action.
Another piece of the deployment puzzle that most developers do not think about is process automation. Many developers need to perform sign-offs or fill out change request forms in order to manually perform these processes. Again, this is all available as part of the automated workflow setup within BuildMaster. You can setup blockers that do not allow promotion to say the QA environment unless all unit tests have passed, or block promotion to the Staging environment unless someone from the QA team approves the build and all issues in your issue tracking tool are resolved/closed for that particular release.
While I realize I left CC.NET out of the answer, our applications are all built and deployed through BuildMaster so we no longer need it, though we could just as easily pick up the artifacts from a drop location and deploy them in later environments.
I see that many people use CC for their .NET projects, but why not use Jenkins and SonarQube? They have everything you need. I set all this up in 3 days on a Windows Server 2008 R2 box with MSSQL, Jenkins, VisualSVN and SonarQube.
It all works great and you get all the metrics on your project. SonarQube uses Gallio, Gendarme, FxCop, StyleCop, NDepths and PartCover to get your metrics, and all of this is pretty straightforward since SonarQube does it automatically without much configuration.
To give you a feeling for it: Jenkins builds the project and gathers the Sonar metrics, and another Jenkins job deploys automatically to IIS.
SonarQube then shows all the metrics for my project. This is a simple MVC4 app, but it works great!
If you want more information I can be more specific, but I think you should at least consider Jenkins. If CC suits you better, at least you looked at a good alternative before you chose.
This whole setup uses MSBuild to build and deploy the apps.
