Improving Your Build Process

Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
1. Get the source and place it in a local directory, including the DLLs the project needs.
2. Get the config files and rename them as needed (we store them in a special subdirectory that isn't part of the actual application, and they are named according to use).
3. Build using Visual Studio.
4. Precompile using the command line, copying into what will be a "build" directory.
5. Copy to the destination.
6. Get any necessary additional resources - mostly documents, images, and reports associated with the project - and put them into the directory from step 5. There's a lot of this stuff, and I wasn't sure whether to include it in earlier steps; since I'm only going to copy changed items, maybe it's irrelevant.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions? I'll note that we're not currently using a Deployment Project; I presume it would remove some of the steps in this build (like the web.config swapping).

When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First, get your code compiling in one step using an automated build tool (e.g., NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
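To give a sense of what "one step" means in practice, here's a rough sketch of a tiny wrapper that just shells out to MSBuild and returns its exit code (the solution name is a placeholder, and it assumes msbuild is on the PATH; a batch file or an NAnt/MSBuild target does the same job):

using System;
using System.Diagnostics;

class OneStepBuild
{
    static int Main()
    {
        // Placeholder values: adjust the solution name and configuration to your project.
        var msbuild = new ProcessStartInfo
        {
            FileName = "msbuild",                        // assumes msbuild is on the PATH
            Arguments = "MySolution.sln /t:Rebuild /p:Configuration=Release",
            UseShellExecute = false
        };

        using (Process build = Process.Start(msbuild))
        {
            build.WaitForExit();
            // A non-zero exit code means the build failed; surface it to the caller
            // so a scheduled task or CI server can flag the build as broken.
            return build.ExitCode;
        }
    }
}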
Figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point; you might have to pay for it depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, and so on.
Hope this helps.

I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple; it is mostly a call to msbuild, and it also creates my database scripts.
Script 2: Package - this one takes various arguments to package a release for various environments, such as test and subsets of the production environment, which consists of many machines.
Script 3: Deploy - this is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as part of packaging).
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines; they are read-only so they don't accidentally get overwritten. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
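To be clear about why no file switching is needed: when Local.config exists on a machine, its values override the matching keys from web.config at runtime, so application code reads settings the same way in every environment. A minimal sketch (the key name is just an example):

using System.Configuration;   // requires a reference to System.Configuration.dll

public static class Settings
{
    // Returns the value from Local.config when that file is present on the machine,
    // otherwise the default value checked in with web.config.
    public static string SmtpServer
    {
        get { return ConfigurationManager.AppSettings["SmtpServer"]; }
    }
}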

We switched from using a perl script to MSBuild two years ago and haven't looked back.
Visual Studio solutions can be built just by specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can just create a new class in .net deriving from Task that overrides the Execute function, and then reference this from your build xml file.
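To give a rough idea of the shape of such a task (the class and property names here are made up, not from a real project), the C# side looks something like this; the build XML then references the compiled assembly with a UsingTask element and calls the task like a built-in one:

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Hypothetical task that could be extended to copy a site, run tests, etc.
public class DeployWebSite : Task
{
    [Required]
    public string SourceDirectory { get; set; }

    [Required]
    public string TargetDirectory { get; set; }

    public override bool Execute()
    {
        Log.LogMessage(MessageImportance.Normal,
            "Deploying from {0} to {1}", SourceDirectory, TargetDirectory);

        // ... copy files, stop/start IIS, and so on ...

        // Returning false (or calling Log.LogError) fails the build.
        return true;
    }
}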
There is a pretty good introduction here:
introduction

I've only worked on a couple of .NET projects (I've done mostly Java), but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.

Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that as well with whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.

One thing I would suggest is to ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets latest on the "main" build script and then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control or only checking it in on a random basis. If you make the build process "get" from source control, it will force you to keep the latest and greatest build script in there.

Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run on both Windows (as a build task under VS) and Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.

We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT

Related

Updating a single .NET assembly in a production environment

Does anyone know whether it is 100% safe to replace (copy + paste) an assembly with an updated version of itself, where all the version information (AssemblyInfo.vb) is exactly the same and the only difference is a minor code change in one of the aspx.vb files?
It is safe if you are sure you didn't break your existing code (e.g., a method that no longer exists).
If you manually load assemblies make sure to update versions and tokens.
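For example, if anything loads the assembly by its full name, that name has to match what you actually deployed; a small illustration (the assembly name, version and token are placeholders):

using System;
using System.Reflection;

class LoadByFullName
{
    static void Main()
    {
        // The version and public key token must match the assembly actually deployed.
        Assembly asm = Assembly.Load(
            "MyCompany.MyWebApp, Version=1.2.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef");
        Console.WriteLine(asm.FullName);
    }
}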
If you are not sure you can duplicate your website into another folder, create a test IIS instance and test the deployment of single files.
Keep in mind that a clean deployment is safer than single-file deployments, which may cause breaking changes if you are not extremely careful. This may not be a problem on test instances, but it should never be done on Live.
It is definitely possible and easily done, but it can easily become a habit that has a negative impact as the production system becomes very large, possibly unstable, and serves a large number of users.
There are a couple of approaches I would suggest trying or being mindful of:
Create an installer, or multiple installers depending on the projects in your solution. These installers produce the executable(s) to install on the production server. This will be done manually. Back up the previous installers for rollback.
You can create a ClickOnce application. This can be run manually and can also be scheduled using a stable scheduling app.
One can also make use of Visual Studio's publishing function. After compiling the code and setting it to release mode, it gets published via file/network/FTP to the production space. This replaces all the markup files and assemblies.
To automate the process you can use a TFS Build server and schedule daily or weekly builds. These installations can be made manually, or one can use SMS (Microsoft Systems Management Server) to schedule timely installations.
One can use Windows PowerShell as well.
TFS Build Server and SMS carry cost implications, of course, but it is a small price to pay if problems in a production environment can bring a company down.
There are a couple of ways to do this. See what might work, and get into good habits when it comes to a production environment.

Best practices and tools to perform code move for ASP.NET projects

I have an ASP.NET project with my own Development, Staging and Production servers.
In all environments, I move code manually. So every time I have to promote a change, I perform the following steps:
Get my latest code from SVN.
Merge the code between the lower environment and the one to be promoted, using tools like Beyond Compare.
Then I move the respective ASPX and DLL files and any Stored Procedures or table data manually to Production.
This is a very time consuming process and I would like to get some automated methods for code moves.
Is there a way I can get the code moved from SVN to my servers using automated tools or automated packages?
I am using ASP.NET 2.0 with IIS 7 and SQL Server 2008.
msbuild can help you with getting the code from svn and building it. You will need to create simple batch files to run it; alternatively you can use CruiseControl for that.
Manual merges should be avoided. If you are using VS2010, you can use web.config transformations to produce the production version of your config files.
I am not a big fan of stored procs. If you can keep that logic in the code instead, there is less room for error, rolling back changes is easier, and deployment is simpler. Database schema updates should be done in a batch file and applied automatically.
There are multiple ways to deploy: Web Deploy or an MSI file. It depends on how much work is required during the deployment process.
I would look into continuous integration. My favorite because it is simplest to use is TeamCity.
You will still have to do some work with MSBuild.
You can set up the builds to be a button push from the site.
Have the code pushed when you check into svn.
Just about any way you can think of.
I would strongly urge you to use it to ALWAYS build your code and run unit tests on each SVN check-in. It does not have to deploy, but TeamCity will provide constant feedback on the state of your solution.

Release Process Improvements

The process of creating a new build and releasing it to production is a critical step in the SDLC but it is often left as an afterthought and varies greatly from one company to the next.
I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'.
So the question is: name one painful/time-consuming part of your release process, and what did you do to improve it?
My example: at a previous employer all developers made database changes on one common development database. Then when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases.
This works reasonably well but the problems with this approach are:-
ALL changes in the Dev database are included, some of which may still be 'works in progress'.
Sometimes developers made conflicting changes (that were not noticed until the release was in production)
It was a time consuming and manual process to create and validate the script (by validate I mean, try to weed out issues like problem 1 and 2).
When there were problems with the script (eg the order in which things were run such as creating a record which relies on a foreign key record which is in the script but not yet run) it took time to 'tweak' it so it ran smoothly.
It's not an ideal scenario for Continuous Integration.
So the solution was:-
Enforce a policy that all changes to the database must be scripted.
A naming convention was important for ensuring the correct running order of the scripts.
Create/Use a tool to run the scripts at release time.
Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes').
The next release after we started this process was much faster with fewer problems, indeed the only problems found were due to people 'breaking the rules', eg not creating a script.
Once the issues with releasing to QA were fixed, when it came time to release to production it was very smooth.
We applied a few other changes (like introducing CI), but this was the most significant; overall we reduced release time from around 3 hours down to a maximum of 10-15 minutes.
We've done a few things over the past year or so to improve our build process.
Fully automated and complete build. We've always had a nightly "build", but we found that there are different definitions for what constitutes a build. Some would consider it just compiling; usually people include unit tests, and sometimes other things. We clarified internally that our automated build literally does everything required to go from source control to what we deliver to the customer. The more we automated the various parts, the better the process is and the less we have to do manually when it's time to release (and the less worry about forgetting something). For example, our build stamps everything with the svn revision number, compiles the various application parts written in a few different languages, runs unit tests, copies the compile outputs to the appropriate directories for creating our installer, creates the actual installer, copies the installer to our test network, runs the installer on the test machines, and verifies that the new version was properly installed.
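As one small, concrete example of that kind of automation (not our actual script, just a sketch with a made-up file name and version scheme), stamping everything with the svn revision can be a tiny step that generates a shared AssemblyInfo file before compilation:

using System;
using System.IO;

class StampVersion
{
    // Usage: StampVersion.exe <svnRevision>
    static void Main(string[] args)
    {
        string revision = args.Length > 0 ? args[0] : "0";

        // Every project links this generated file, so all assemblies share the version.
        File.WriteAllText("CommonAssemblyInfo.cs", string.Format(
            "using System.Reflection;\n" +
            "[assembly: AssemblyVersion(\"1.0.0.{0}\")]\n" +
            "[assembly: AssemblyFileVersion(\"1.0.0.{0}\")]\n",
            revision));
    }
}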
Delay between code complete and release. Over time we've gradually increased the amount of delay between when we finish coding for a particular release and when that release gets to customers. This provides more dedicated time for testers to test a product that isn't changing much and produces more stable production releases. Source control branch/merge is very important here so the dev team can work on the next version while testers are still working on the last release.
Branch owner. Once we've branched our code to create a release branch and then continued working on trunk for the following release, we assign a single, rotating release branch owner who is responsible for verifying all fixes applied to the branch. Every single check-in, regardless of size, must be reviewed by two devs.
We were already using TeamCity (an excellent continuous integration tool) to do our builds, which included unit tests. Three big improvements are worth mentioning:
1) Install kit and one-click UAT deployments
We packaged our app as an install kit using NSIS (not an MSI, which was so much more complicated and unnecessary for our needs). This install kit did everything necessary, like stop IIS, copy the files, put configuration files in the right places, restart IIS, etc. We then created a TeamCity build configuration which ran that install kit remotely on the test server using psexec.
This allowed our testers to do UAT deployments themselves, as long as they didn't contain database changes - but those were much rarer than code changes.
Production deployments were, of course, more involved and we couldn't automate them this much, but we still used the same install kit, which helped to ensure consistency between UAT and production. If anything was missing or not copied to the right place it was usually picked up in UAT.
2) Automating database deployments
Deploying database changes was a big problem as well. We were already scripting all DB changes, but there were still problems in knowing which scripts were already run and which still needed to be run and in what order. We looked at several tools for this, but ended up rolling our own.
DB scripts were organised in a directory structure by the release number. In addition to the scripts developers were required to add the filename of a script to a text file, one filename per line, which specified the correct order. We wrote a command-line tool which processed this file and executed the scripts against a given DB. It also recorded which scripts it had run (and when) in a special table in the DB and next time it did not run those again. This means that a developer could simply add a DB script, add its name to the text file and run the tool against the UAT DB without running around asking others what scripts they last ran. We used the same tool in production, but of course it was only run once per release.
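Our tool isn't public, but the core of something like it is small; here's a rough sketch of the idea (the table and column names are illustrative, and details like splitting scripts on GO batch separators are left out):

using System;
using System.Data.SqlClient;
using System.IO;

class RunDbScripts
{
    // Usage: RunDbScripts.exe <connectionString> <scriptListFile>
    static void Main(string[] args)
    {
        string connectionString = args[0];
        string scriptList = args[1];   // one script filename per line, in execution order

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Tracking table records which scripts have already been applied.
            Execute(conn,
                @"IF OBJECT_ID('ExecutedScripts') IS NULL
                  CREATE TABLE ExecutedScripts (Name NVARCHAR(260) PRIMARY KEY, RunAt DATETIME NOT NULL)");

            foreach (string name in File.ReadAllLines(scriptList))
            {
                if (name.Trim().Length == 0) continue;

                using (var check = new SqlCommand(
                    "SELECT COUNT(*) FROM ExecutedScripts WHERE Name = @name", conn))
                {
                    check.Parameters.AddWithValue("@name", name);
                    if ((int)check.ExecuteScalar() > 0) continue;   // already run, skip it
                }

                Execute(conn, File.ReadAllText(Path.Combine(Path.GetDirectoryName(scriptList), name)));

                using (var record = new SqlCommand(
                    "INSERT INTO ExecutedScripts (Name, RunAt) VALUES (@name, GETDATE())", conn))
                {
                    record.Parameters.AddWithValue("@name", name);
                    record.ExecuteNonQuery();
                }

                Console.WriteLine("Applied " + name);
            }
        }
    }

    static void Execute(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn)) { cmd.ExecuteNonQuery(); }
    }
}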
The extra step that really made this work well is running the DB deployment as part of the build. Our unit tests ran against a real DB (a very small one, with minimal data). The build script would restore a backup of the DB from the previous release and then run all the scripts for the current release and take a new backup. (In practice it was a little more complicated, because we also had patch releases and the backup was only done for full releases, but the tool was smart enough to handle that.) This ensured that the DB scripts were tested together at every build and if developers made conflicting schema changes it would be picked up quickly.
The only manual steps were at release time: we incremented the release number on the build server and copied the "current DB" backup to make it the "last release" backup. Apart from that we no longer had to worry about the DB used by the build. The UAT database still occasionally had to be restored from backup (eg. since the system couldn't undo the changes for a deleted DB script), but that was fairly rare.
3) Branching for a release
It sounds basic and almost not worth mentioning, yet we weren't doing this to begin with. Merging back changes can certainly be a pain, but not as much of a pain as having a single codebase for today's release and next month's! We also got the person who made the most changes on the release branches to do the merge, which served to remind everyone to keep their release branch commits to an absolute minimum.
Automate your release process wherever possible.
As others have hinted, use different levels of build "depth". For instance, a developer build could make all the binaries for running your product on the dev machine, directly from the repository, while an installer build could assemble everything for installation on a new machine.
This could include
binaries,
JAR/WAR archives,
default configuration files,
database scheme installation scripts,
database migration scripts,
OS configuration scripts,
man/hlp pages,
HTML documentation,
PDF documentation
and so on. The installer build can stuff all this into an installable package (InstallShield, ZIP, RPM or whatever) and even build the CD ISOs for physical distribution.
The output of the installer build is what is typically handed over to the test department. Whatever is not included in the installation package (a patch on top of the installation...) is a bug. Challenge your devs to deliver a fault-free installation procedure.
Automated single-step build. The ant build script edits all the installer configuration files and program files that need changes (versioning) and then builds. No intervention required.
There is still a script run to generate the installers when it's done, but we will eliminate that.
The CD artwork is versioned manually; that needs fixing too.
Agree with previous comments.
Here is what has evolved where I work. This current process has eliminated the 'gotchas' that you've described in your question.
We use ant to pull code from svn (by tag version) and pull in dependencies and build the project (and at times, also to deploy).
Same ant script (passing params) is used for each env (dev, integration, test, prod).
Project process
Capturing requirements as user 'stories' (helps avoid quibbling over an interpretation of a requirement, when phrased as a meaningful user interaction with the product)
following Agile principles, so that each iteration of the project (2 wks) results in a demo of current functionality and a releasable, if limited, product
manage release stories throughout the project to understand what is in and out of scope (and prevent confusion about last-minute fixes)
(repeat of previous response) Code freeze, then only test (no added features)
Dev process
unit tests
code checkins
scheduled automated builds (cruise control, for example)
complete a build/deploy to an integration environment, and run a smoke test
tag the code and communicate to team (for testing and release planning)
Test process
functional testing (selenium, for example)
executing test plans and functional scenarios
One person manages the release process and ensures everyone complies. Additionally, all releases are reviewed a week before launch and are only approved after that review.
Release Process
Approve release for a specific date/time
Review release/rollback plan
run ant with 'production deployment' parameter
execute DB tasks (if any) (these scripts can also be versioned and tagged for production)
execute other system changes / configs
communicate changes
I don't know or practice SDLC, but for me, these tools have been indispensable in achieving smooth releases:
Maven for build, with Nexus local repository manager
Hudson for continuous integration, release builds, SCM tagging and build promotion
Sonar for quality metrics.
Tracking changes to the development db schema and managing updates to QA and release via DbMaintain and LiquiBase
On a project where I work we were using Doctrine's (PHP ORM) migrations to upgrade and downgrade the database. We had all manner of problems, as the generated models no longer matched the database schema, causing the migrations to completely fail halfway through.
In the end we decided to write our own super-basic version of the same thing - nothing fancy, just ups and downs that execute SQL. Anyway, it worked out great (so far - touch wood). Although we were reinventing the wheel slightly by writing our own, the fact that the focus was on keeping it simple meant that we have had far fewer problems. Now a release is a cinch.
I guess the moral of the story is that it is sometimes OK to reinvent the wheel, as long as you are doing so for a good reason.

Integrating Automated Web Testing Into Build Process

I'm looking for suggestions to improve the process of automating functional testing of a website. Here's what I've tried in the past.
I used to have a test project using WATIN. You effectively write what look like "unit tests" and use WATIN to automate a browser to click around your site etc.
Of course, you need a site to be running. So I made the test actually copy the code from my web project to a local directory and started a web server pointing to that directory before any of the tests run.
That way, someone new could simply get latest from our source control and run our build script, and see all the tests run. They could also simply run all the tests from the IDE.
The problem I ran into was that I spent more time maintaining the code that set up the test environment than the tests themselves. Not to mention that it took a long time to run because of all that copying. Also, I needed to test various scenarios, including installation, meaning I needed to be able to set the database to various initial states.
I was curious on what you've done to automate functional testing to solve some of these issues and still keep it simple.
MORE DETAILS
Since people asked for more details, here it is. I'm running ASP.NET using Visual Studio and Cassini (the built in web server). My unit tests run in MbUnit (but that's not so important. Could be NUnit or XUnit.NET). Typically, I have a separate unit test framework run all my WATIN tests. In the AssemblyLoad phase, I start the webserver and copy all my web application code locally.
I'm interested in solutions for any platform, but I may need more descriptions on what each thing means. :)
Phil,
Automation can just be hard to maintain, but the more you use your automation for deployment, the more you can leverage it for test setup (and vice versa).
Frankly, it's easier to evolve automation code, factoring it and refactoring it into specific, small units of functionality, when using a build tool that isn't just driving statically-compiled, pre-factored units of functionality, as is the case with NAnt and MSBuild. This is one of the reasons that many people who were relatively early users of tools like NAnt have moved off to Rake. The freedom to treat build code as any other code - to continually evolve its content and shape - is greater with Rake. You don't end up with the same stasis in automation artifacts as easily and as quickly with Rake, and it's a lot easier to script in Rake than NAnt or MSBuild.
So, some part of your struggle is inherently bound up in the tools. To keep your automation sensible and maintained, be wary of the obstructions that static build tools like NAnt and MSBuild impose.
I would suggest that you not couple your test environment bootstrapping to assembly load. That's an inside-out coupling that only serves brief convenience. There's nothing wrong (and likely everything right) with going to the command line and executing the build task that sets up the environment before running tests, whether from the IDE, from the command line, or from an interactive console like the C# REPL from the Mono Project, or from IRB.
Test data setup is simply a pain in the butt sometimes. It has to be done.
You're going to need a library that you can call to create and clean up database state. You can make those calls right from your test code, but I personally tend to avoid doing this because there is more than one good use of test data or sample data control code.
I drive all sample data control from HTTP. I write controllers with actions specifically for controlling sample data and issue GETs against those actions through Selenium. I use these to create and clean up data. I can compose GETs to these actions to create common scenarios of setup data, and I can pass specific values for data as request parameters (or form parameters if needs be).
I keep these controllers in an area that I usually call "test_support".
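As a rough sketch of the shape of that code (ASP.NET MVC here; the controller and action names are made up), the sample-data actions are just ordinary GET-able actions that call into the application's own services:

using System.Web.Mvc;

// Lives in the test_support area, which is excluded from production deployments.
public class SampleDataController : Controller
{
    // GET /test_support/SampleData/CreateCompany?name=Acme
    public ActionResult CreateCompany(string name)
    {
        // In a real app this would call the same domain services the app itself uses,
        // e.g. _companyService.Create(name).
        return Content("created company " + name);
    }

    // GET /test_support/SampleData/Reset
    public ActionResult Reset()
    {
        // Remove or roll back any sample data left over from earlier test runs.
        return Content("reset");
    }
}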
My automation for deploying the website does not deploy the test_support area or its routes and mapping. As part of my deployment verification automation, I make sure that the test_support code is not in the production app.
I also use the test_support code to automate control over the entire environment - replacing services with fakes, turning off subsystems to simulate failures and failovers, activating or deactivating authentication and access control for functional testing that isn't concerned with these facets, etc.
There's a great secondary value to controlling your web app's sample data or test data from the web: when demoing the app, or when doing exploratory testing, you can create the data scenarios you need just by issuing some gets against known (or guessable) urls in the test_support area. Really making a disciplined effort to stick to restful routes and resource-orientation here will really pay off.
There's a lot more to this functional automation (including testing, deployment, demoing, etc.), so the better designed these resources are, the better time you'll have maintaining them over the long haul, and the more opportunities you'll find to leverage them in unforeseen but beneficial ways.
For example, writing domain model code over the semantic model of your web pages will help create much more understandable test code and decrease brittleness. If you do this well, you can use those same models with a variety of different drivers, so that you can leverage them in stress tests and load tests as well as functional tests, and use them from the command line as exploratory tools. By the way, this kind of thing is easier to do when you're not bound to driver types as you are when you use a static language. There's a reason why many leading testing thinkers and doers work in Ruby, and why Watir is written in Ruby. Reuse, composition, and expressiveness are much easier to achieve in Ruby than in C# test code. But that's another story.
Let's catch up sometime and talk more about the other 90% of this stuff :)
We used Plasma on one project. It emulates a web server in process - just point it at the root of your web application project.
It was surprisingly stable - no copying files or starting up an out of process server.
Here is how a test using Plasma looks for us...
[Test]
public void Can_log_in() {
    // Request the login page and fill in the form it returns.
    AspNetResponse response = WebApp.ProcessRequest("/Login.aspx");
    AspNetForm form = response.GetForm();
    form["UserName"] = User.UserName;
    form["Password"] = User.Password;

    // Submitting the login form should redirect...
    AspNetResponse loggedIn = WebApp.ProcessRequest(Button.Click(form, "LoginUser"));
    Assert.IsTrue(loggedIn.IsRedirect());

    // ...and following the redirect should land on the home page.
    AspNetResponse homePage = WebApp.ProcessRequest(loggedIn.GetRedirectUrl());
    Assert.AreEqual(homePage.Status, 200);
}
All the "AspNetResponse" and "AspNetForm" classes are included with Plasma.
We are currently using an automated build process for our asp.net mvc application.
We use the following tools:
TeamCity
SVN
nUnit
Selenium
We use an msbuild script that runs on a build agent, which can be any number of machines.
The msbuild script gets the latest version of code from svn and builds it.
On success it then deploys the artifacts to a given machine/folder and creates the virtual site in IIS.
We then use MSBuild contrib tasks to run sql scripts to install the database and load data, you could also do a restore.
On success we kick off the NUnit tests. The test setup ensures that Selenium is up and running and then drives the Selenium tests in much the same way that Watin does. Selenium has a good test recorder, and recorded tests can be exported to C#.
The good thing about Selenium is that you can drive Firefox, Chrome and IE, rather than being restricted to IE, which was the case with Watin the last time I looked at it. You can also use Selenium to do load testing with Selenium Grid, so you can reuse the same tests.
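A recorded-and-cleaned-up Selenium test driven from NUnit ends up looking roughly like this (this uses the older Selenium RC .NET client; the URL, locators and expected text are placeholders):

using NUnit.Framework;
using Selenium;   // ThoughtWorks Selenium RC .NET client

[TestFixture]
public class LoginTests
{
    private ISelenium selenium;

    [SetUp]
    public void Start()
    {
        // Assumes a Selenium server is already running on localhost:4444.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/MySite/");
        selenium.Start();
    }

    [TearDown]
    public void Stop()
    {
        selenium.Stop();
    }

    [Test]
    public void Can_log_in()
    {
        selenium.Open("/Login.aspx");
        selenium.Type("UserName", "testuser");
        selenium.Type("Password", "secret");
        selenium.Click("LoginButton");
        selenium.WaitForPageToLoad("30000");
        Assert.IsTrue(selenium.IsTextPresent("Welcome"));
    }
}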
On success msbuild then tags the build in svn. TeamCity has a job that runs overnight that will deploy the latest tag to a staging environment ready for the business users to check the project status the following morning.
In a previous life we had nant and msbuild scripts to fully manage the environment (installing Java, Selenium, etc.); however, this takes a lot of time, so as a prerequisite we assume each build agent already has these installed. In time we will include these tasks.
Why do you need to copy code? Ditch Cassini and let Visual Studio create a virtual directory for you. Sure the devs must remember to build before running web tests if the web app has changed. We have found that this is not a big deal, especially if you run web tests in CI.
Data is a big challenge. As far as I can see, you must choose between imperfect alternatives. Here's how we handle it. First, I should explain that we are working with a large complex legacy WebForms app. Also I should mention that the domain code is not well-suited for creating test data from within the test project.
This left us with a couple of choices. We could: (a) run data setup scripts under the build, or (b) create all data via web tests using the actual web site. The problem with option (a) is that tests become coupled with scripts at a minute level. It makes my head throb to think about synchronizing web test code with T-SQL. So we went with (b).
One benefit of (b) is that your setup also validates application behavior. The problem is...time.
Ideally tests should be independent, without temporal coupling (can run in any order) and not sharing any context (e.g., common test data). The common way to handle this is to set up and tear down data with every test. After some careful thought, we decided to break this rule.
We use Gallio (MbUnit 3), which provides some nice features that support our strategy. First, it lets you specify execution order at the fixture and test level. We have four "setup" fixtures which are ordered -4, -3, -2, -1. These run in the specified order and before all "non setup" fixtures, which by default have an order of 0.
Our web test project depends on the build script for one thing only: a single well-known username/password. This is a coupling I can live with. As the setup tests run, they build up a "data context" object that holds identifiers of data (companies, users, vendors, clients, etc.) that is later used (but never changed) throughout all other fixtures. (By identifiers, I don't necessarily mean keys. In most cases our web UI does not expose unique keys. We must navigate the app using names or other proxies for true identifiers. More on this below.)
Gallio also allows you to specify that a test or fixture depends on another test or fixture. When a precedent fails, the dependent is skipped. This reduces the evil of temporal coupling by preventing "cascading failures" which can reap much confusion.
Creating baseline test data once, instead of before each test, speeds things up a lot. However, the setup tests still might take 10 minutes to run. When I'm working on new tests I want to run and rerun them frequently. Enter another cool Gallio feature: Ambience. Ambience is a wrapper around db4o that provides a very simple way to persist objects. We use it to persist the data context automatically. Thus setup tests must only be run once between rebuilds of the database. After that you can run any or all other fixtures repeatedly.
So what about cleaning up test data? Don't we need to start from a known state? This is a rule we have found it expedient to break. A strategy that is working for us is to use long random values for things like company name, username, etc. We have found that it is not very difficult to keep a test run inside a logical "data space" such that it does not bump into other data. Certainly I fear the day that I spend hours chasing down a phantom failing test only to find that it's some data collision. It's a trade off that is working for us currently.
We are using Watin. I quite like it. Another key to success is something Scott Bellware alluded to. As we create tests we are building up an abstract model of our UI. So instead of this:
browser.TextField("ctl0_tab2_newNote").TypeText("foo");
You will see this in our tests:
User.NotesTab.NewNote.TypeText("foo");
This approach provides three benefits. First, we never repeat a magic string. This greatly reduces brittleness. Second, tests are much easier to read and understand. Last, we hide most of the Watin framework behind our own abstractions. In the second example, only TypeText is a Watin method. This will make it easier to adapt as the framework changes.
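The abstraction itself is just a thin layer of classes over the raw Watin calls. A simplified sketch of what could sit behind that one line (the class names are illustrative, and it assumes the WatiN 2.x Browser type); in the tests, User would be a property exposing an instance of UserPages for the current browser session:

using WatiN.Core;

// Thin wrappers so tests never repeat magic strings like element ids.
public class UserPages
{
    private readonly Browser browser;
    public UserPages(Browser browser) { this.browser = browser; }

    public NotesTab NotesTab
    {
        get { return new NotesTab(browser); }
    }
}

public class NotesTab
{
    private readonly Browser browser;
    public NotesTab(Browser browser) { this.browser = browser; }

    // The only place in the test code that knows the real control id.
    public TextField NewNote
    {
        get { return browser.TextField("ctl0_tab2_newNote"); }
    }
}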
Hope this helps.
It was difficult, but not impossible, to build an integration test phase into the build process using maven. What happened was essentially this:
Ignore all JUnit tests in a specific directory unless the integration-test phase fires.
Add a maven profile to execute the integration tests.
For the pre-integration test phase -
Start Jetty running the application hitting a test database.
Start the selenium server
Run selenium integration tests in integration test phase
Stop selenium server
Stop selenium
The difficulty in this step was really setting up Jetty - we couldn't get it to just launch from a war, so we actually had to have Jetty unpack the war and then run the server - but it works well and is automated: all you have to do is type mvn -PintegrationTest (that was our integration test profile name) and off it goes.
Do you mean automatically starting testing after the build finishes?
You could write automated scripts to copy the build files to a working IIS instance once the build has compiled successfully, and then start the automated BVT by calling mstest.exe or other methods.
You could also give AutoItX a try, or a scripting language such as Python or Ruby.

What is involved with making a "build"?

Where I work we use a set schedule to build our applications. What is involved in a build? What is involved in getting an application to build somewhere other than on the local host?
I came across this just the other day... It helps clarify some things regarding building ASP.NET websites. Not a definitive answer, but worth a look.
http://msdn.microsoft.com/en-us/library/399f057w.aspx:
ASP.NET offers two options for precompiling a site:
Precompiling a site in place. This option is useful for existing sites where you want to enhance performance and perform error checking.
Precompiling a site for deployment. This option creates a special output that you can deploy to a production server.
Additionally, you can precompile a site to make it read-only or updatable. The following sections provide more details about each option.
The basic purpose of a build is to ensure that a consistent set of all the files needed for the website is deployed together.
This is typically accomplished by retrieving the files from source control, packaging them in an appropriate archive or archives (called artifacts of the build process) and then having scripts which send these files to the production or QA servers where the application is then launched.
Depending on the technology and the project, the build script might compile files, run tests, create archives, copy files to different locations and things of that nature.
There are different tools to support the build process and provide the underlying scripting ability. Build scripting can sometimes have unique properties, such as ensuring that a given task is only run once even though different parts of a larger build need it to run. For example, with a task that creates a directory for deployment artifacts, the directory is only created once, and the build script engine (such as NAnt) makes sure that even if the task is called multiple times it is only run once per build.
If you want to understand more about builds, look around for some open source projects that use a technology you are familiar with, and look at their build procedure (how you change code and deploy it).
This Joel On Software article is my favorite explanation of builds, daily builds, build servers of all time.
His central thesis in this and many other articles is the more effort it takes for you to create a build (which means any compilation or transformation of source files, including changing their location so that they will be production/test ready) the less productive and lower quality your software will be and the more mistakes you'll be prone to during deployment/builds. Ideally, with a web application, you have a single script that you can double click, it grabs the most current files from source control, compiles them, and deploys everything to the target location.
Many commercial products exist that can help you with this. I like Team Foundation Server, but it may not fit your budget/culture/programming language choice. Having a custom build script is not the worst idea either, because it gives your programming team a great view of your process.
There's a wikipedia article on the subject:
In the field of computer software, the term software build refers either to the process of converting source code files into standalone software artifact(s) that can be run on a computer, or the result of doing so. One of the most important steps of a software build is the compilation process, where source code files are converted into executable code.
While for simple programs the process consists of a single file being compiled, for complex software the source code may consist of many files and may be combined in different ways to produce many different versions.
The process of building a computer program is usually managed by a build tool, a program that coordinates and controls other programs. Examples of such a program are make, ant, maven and SCons. The build utility needs to compile and link the various files, in the correct order. If the source code in a particular file has not changed then it may not need to be recompiled (may not rather than need not, because it may itself depend on other files that have changed). Sophisticated build utilities and linkers attempt to refrain from recompiling code that does not need it, to shorten the time required to complete the build. Modern build utilities may be partially integrated into revision control programs like Subversion. A more complex process may involve other programs producing code or data for the build process.
