How to deploy an ASP.NET application with zero downtime

To deploy a new version of our website we do the following:
Zip up the new code, and upload it to the server.
On the live server, delete all the live code from the IIS website directory.
Extract the new code zipfile into the now empty IIS directory.
This process is all scripted and happens quite quickly, but there can still be 10-20 seconds of downtime while the old files are being deleted and the new files deployed.
Any suggestions on a 0 second downtime method?

You need 2 servers and a load balancer. Here it is in steps:
Direct all traffic to Server 2
Deploy on Server 1
Test Server 1
Direct all traffic to Server 1
Deploy on Server 2
Test Server 2
Direct traffic to both servers
Thing is, even in this case you will still have application restarts and loss of sessions if you are using "sticky sessions". If you have database sessions or a state server, then everything should be fine.

The Microsoft Web Deployment Tool supports this to some degree:
Enables Windows Transactional File System (TxF) support. When TxF support is enabled, file operations are atomic; that is, they either succeed or fail completely. This ensures data integrity and prevents data or files from existing in a "half-way" or corrupted state. In MS Deploy, TxF is disabled by default.
It seems the transaction is for the entire sync. Also, TxF is a feature of Windows Server 2008, so this transaction feature will not work with earlier versions.
I believe it's possible to modify your script for 0-downtime using folders as versions and the IIS metabase:
for an existing path/url:
path: \web\app\v2.0\
url: http://app
Copy new (or modified) website to server under
\web\app\v2.1\
Modify IIS metabase to change the website path
from \web\app\v2.0\
to \web\app\v2.1\
This method offers the following benefits:
In the event new version has a problem, you can easily rollback to v2.0
To deploy to multiple physical or virtual servers, you could use your script for file deployment. Once all servers have the new version, you can simultaneously change all servers' metabases using the Microsoft Web Deployment Tool.
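On IIS 7 or later the same path switch can be scripted instead of edited by hand. A minimal sketch using the WebAdministration PowerShell module, assuming the site is named "app" and the version folders live under C:\web\app (both assumptions):
    Import-Module WebAdministration
    # repoint the site's root at the new version folder; IIS picks it up without stopping the site
    Set-ItemProperty 'IIS:\Sites\app' -Name physicalPath -Value 'C:\web\app\v2.1'
Rolling back is the same call pointed at v2.0.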

You can achieve zero-downtime deployment on a single server by utilizing Application Request Routing in IIS as a software load balancer between two local IIS sites on different ports. This is known as a blue-green deployment strategy, where only one of the two sites is available in the load balancer at any given time. Deploy to the site that is "down", warm it up, and bring it into the load balancer (usually by passing an Application Request Routing health check); then take the original site that was up out of the "pool" (again by making its health check fail).
A full tutorial can be found here.
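A minimal sketch of the traffic switch itself, assuming ARR's health test is configured to request /healthcheck.html from each site and that the blue site (port 8081) is currently live while the green site (port 8082) has just been deployed (names, paths and ports are all assumptions):
    # warm up green, bring it into the pool, then fail blue's health check so ARR drains it
    Invoke-WebRequest 'http://localhost:8082/' -UseBasicParsing | Out-Null
    New-Item 'C:\sites\green\healthcheck.html' -ItemType File -Force | Out-Null   # green starts passing the health check
    Start-Sleep -Seconds 15                                                       # give ARR time to mark green healthy
    Remove-Item 'C:\sites\blue\healthcheck.html'                                  # blue starts failing and drops out of the pool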

I went through this recently and the solution I came up with was to have two sites set up in IIS and to switch between them.
For my configuration, I had a web directory for each A and B site like this:
c:\Intranet\Live A\Interface
c:\Intranet\Live B\Interface
In IIS, I have two identical sites (same ports, authentication etc.), each with its own application pool. One of the sites is running (A) and the other is stopped (B). The live one also has the live host header.
When it comes to deploying to live, I simply publish to the STOPPED site's location. Because I can access the B site using its port, I can pre-warm the site so the first user doesn't cause an application start. Then, using a batch file, I copy the live host header to B, stop A, and start B.
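The batch file could equally be a short PowerShell script. A sketch, assuming the two sites are named "Live A" and "Live B" and the live host header is www.example.com (all assumptions):
    Import-Module WebAdministration
    # give B the live host header, then swap which site is running
    New-WebBinding -Name 'Live B' -Protocol http -Port 80 -HostHeader 'www.example.com'
    Stop-Website 'Live A'
    Start-Website 'Live B'
    Remove-WebBinding -Name 'Live A' -Protocol http -Port 80 -HostHeader 'www.example.com'   # ready for the next swap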

OK so since everyone is downvoting the answer I wrote way back in 2008*...
I will tell you how we do it now in 2014. We no longer use Web Sites because we are using ASP.NET MVC now.
We certainly do not need a load balancer and two servers to do it, that's fine if you have 3 servers for every website you maintain but it's total overkill for most websites.
Also, we don't rely on the latest wizard from Microsoft - too slow, and too much hidden magic, and too prone to changing its name.
Here's how we do it:
We have a post build step that copies generated DLLs into a 'bin-pub' folder.
We use Beyond Compare (which is excellent**) to verify and sync changed files (over FTP because that is widely supported) up to the production server
We have a secure URL on the website containing a button which copies everything in 'bin-pub' to 'bin' (taking a backup first to enable quick rollback). At this point the app restarts itself. Then our ORM checks if there are any tables or columns that need to be added and creates them.
That means only milliseconds of downtime. The app restart can take a second or two, but during the restart requests are buffered, so there is effectively zero downtime.
The whole deployment process takes anywhere from 5 seconds to 30 minutes, depending how many files are changed and how many changes to review.
This way you do not have to copy an entire website to a different directory but just the bin folder. You also have complete control over the process and know exactly what is changing.
**We always do a quick eyeball of the changes we are deploying - as a last-minute double check, so we know what to test and if anything breaks we're ready. We use Beyond Compare because it lets you easily diff files over FTP. I would never do this without BC; you have no idea what you are overwriting.
*Scroll to the bottom to see it :( BTW I would no longer recommend Web Sites because they are slower to build and can crash badly with half compiled temp files. We used them in the past because they allowed more agile file-by-file deployment. Very quick to fix a minor issue and you can see exactly what you are deploying (if using Beyond Compare of course - otherwise forget it).
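For illustration, the 'bin-pub' to 'bin' copy described above could look roughly like this as a stand-alone script rather than a button on the site (paths and folder names are assumptions):
    # back up the current bin for quick rollback, then copy the published DLLs over it
    $site = 'D:\wwwroot\mysite'
    Copy-Item "$site\bin" "$site\bin-backup-$(Get-Date -Format yyyyMMddHHmmss)" -Recurse
    Copy-Item "$site\bin-pub\*" "$site\bin" -Force   # overwriting the DLLs triggers the app restart; requests are buffered meanwhile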

Using Microsoft.Web.Administration's ServerManager class you can develop your own deployment agent.
The trick is to change the PhysicalPath of the VirtualDirectory, which results in an online atomic switch between old and new web apps.
Be aware that this can result in old and new AppDomains executing in parallel!
The problem is how to synchronize changes to databases etc.
By polling for the existence of AppDomains with old or new PhysicalPaths it is possible to detect when the old AppDomain(s) have terminated, and if the new AppDomain(s) have started up.
To force an AppDomain to start you must make an HTTP request (IIS 7.5 supports an Autostart feature).
Now you need a way to block requests for the new AppDomain.
I use a named mutex - which is created and owned by the deployment agent, waited on by the Application_Start of the new web app, and then released by the deployment agent once the database updates have been made.
(I use a marker file in the web app to enable the mutex wait behaviour)
Once the new web app is running I delete the marker file.
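A rough sketch of the agent side of this approach, driven from PowerShell through the same Microsoft.Web.Administration API (the site name, folder, mutex name and marker file below are all assumptions, and the Application_Start code that waits on the mutex is not shown):
    Add-Type -Path "$env:windir\System32\inetsrv\Microsoft.Web.Administration.dll"
    $mutex = New-Object System.Threading.Mutex($true, 'Global\DeploySync')        # created and owned by the agent
    New-Item 'C:\web\app\v2.1\App_Data\deploy.marker' -ItemType File -Force | Out-Null
    $mgr  = New-Object Microsoft.Web.Administration.ServerManager
    $vdir = $mgr.Sites['MySite'].Applications['/'].VirtualDirectories['/']
    $vdir.PhysicalPath = 'C:\web\app\v2.1'                                        # atomic switch to the new version
    $mgr.CommitChanges()
    Invoke-WebRequest 'http://localhost/' -UseBasicParsing | Out-Null             # force the new AppDomain to start
    # ...run the database updates here while Application_Start blocks on the mutex...
    $mutex.ReleaseMutex()
    Remove-Item 'C:\web\app\v2.1\App_Data\deploy.marker'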

The only zero downtime methods I can think of involve hosting on at least 2 servers.

I would refine George's answer a bit, as follows, for a single server:
Use a Web Deployment Project to pre-compile the site into a single DLL
Zip up the new site, and upload it to the server
Unzip it to a new folder located in a folder with the right permissions for the site, so the unzipped files inherit the permissions correctly (perhaps e:\web, with subfolders v20090901, v20090916, etc)
Use IIS Manager to point the site at the new folder
Keep the old folder around for a while, so you can fallback to it in the event of problems
Step 4 will cause the IIS worker process to recycle.
This is only zero downtime if you're not using InProc sessions; use SQL mode instead if you can (even better, avoid session state entirely).
Of course, it's a little more involved when there are multiple servers and/or database changes....

To expand on sklivvz's answer, which relied on having some kind of load balancer (or just a standby copy on the same server):
Direct all traffic to Site/Server 2
Optionally wait a bit, to ensure that as few users as possible have pending workflows on the deployed version
Deploy to Site/Server 1 and warm it up as much as possible
Execute database migrations transactionally (strive to make this possible)
Immediately direct all traffic to Site/Server 1
Deploy to Site/Server 2
Direct traffic to both sites/servers
It is possible to introduce a bit of smoke testing, by creating a database snapshot/copy, but that's not always feasible.
If possible and needed, use "routing differences", such as different tenant URLs (customerX.myapp.net) or different users, to deploy to an unknowing group of guinea pigs first. If nothing fails, release to everyone.
Since database migrations are involved, rolling back to a previous version is often impossible.
There are ways to make applications play nicer in these scenarios, such as using event queues and playback mechanisms, but since we're talking about deploying changes to something that is in use, there's really no foolproof way.

This is how I do it:
Absolute minimum system requirements:
1 server with
1 load balancer/reverse proxy (e.g. nginx) running on port 80
2 ASP.NET-Core/mono reverse-proxy/fastcgi chroot-jails or docker-containers listening on 2 different TCP ports
(or even just two reverse-proxy applications on 2 different TCP ports without any sandbox)
Workflow:
start transaction myupdate
try
Web-Service: Tell all applications on all web-servers to go into primary read-only mode
Application switch to primary read-only mode, and responds
Web sockets begin notifying all clients
Wait for all applications to respond
wait (custom short interval)
Web-Service: Tell all applications on all web-servers to go into secondary read-only mode
Application switch to secondary read-only mode (data-entry fuse)
Updatedb - secondary read-only mode (switches database to read-only)
Web-Service: Create backup of database
Web-Service: Restore backup to new database
Web-Service: Update new database with new schema
Deploy new application to apt-repository
(for windows, you will have to write your own custom deployment web-service)
ssh into every machine in array_of_new_webapps
run apt-get update
then either
apt-get dist-upgrade
OR
apt-get install <packagename>
OR
apt-get install --only-upgrade <packagename>
depending on what you need
-- This deploys the new application to all new chroots (or servers/VMs)
Test: Test new application under test.domain.xxx
-- everything that fails should throw an exception here
commit myupdate;
Web-Service: Tell all applications to send web-socket request to reload the pages to all clients at time x (+/- random number)
#client: notify of reload and that this causes loss of unsaved data, with option to abort
# time x: Switch load balancer from array_of_old_webapps to array_of_new_webapps
Decommission/Recycle array_of_old_webapps, etc.
catch
rollback myupdate
switch to read-write mode
Web-Service: Tell all applications to send web-socket request to unblock read-only mode
end try

A workaround with no downtime that I use regularly is:
Rename the running .NET Core application DLL to filename.dll.backup
Upload the new .dll (the web application stays available and keeps serving requests while the file is being uploaded)
Once the upload is complete, recycle the application pool. This requires either RDP access to the server or a recycle function in your hosting control panel.
IIS overlaps the app pool when recycling, so there usually isn't any downtime during a recycle. Requests still come in without ever knowing the app pool has been recycled, and they are served seamlessly with no downtime.
I am still searching for a better method than this! :)
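The rename-and-recycle steps as a script; a sketch assuming the app lives in D:\wwwroot\myapp, the main assembly is MyApp.dll and the pool is named MyAppPool (all assumptions):
    Import-Module WebAdministration
    $app = 'D:\wwwroot\myapp'
    Rename-Item "$app\MyApp.dll" 'MyApp.dll.backup'      # the running worker keeps its handle on the old file
    Copy-Item '.\publish\MyApp.dll' "$app\MyApp.dll"     # drop the new build into place
    Restart-WebAppPool 'MyAppPool'                       # overlapped recycle: the old worker drains while the new one starts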

IIS/Windows
After trying every possible solution we use this very simple technique:
IIS application points to a folder /app that is a symlink (!) to /app_green
We deploy the app to /app_blue
We change the symlink to point to /app_blue (the app keeps working)
We recycle the application pool
Zero downtime, but the app does choke for 3-5 seconds (JIT compilation and other initialization tasks)
Someone called it a "poor man's blue-green deployment" without a load balancer.
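The swap itself is only a few lines; a sketch assuming C:\web\app is the symlink and MyAppPool is the pool name (both assumptions):
    Import-Module WebAdministration
    cmd /c rmdir C:\web\app                         # removes only the link, not the folder it points to
    cmd /c mklink /D C:\web\app C:\web\app_blue     # repoint the symlink at the freshly deployed version
    Restart-WebAppPool 'MyAppPool'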
Nginx/linux
On nginx/linux we use "proper" blue-green deployment:
nginx reverse proxy points to localhost:3000
we deploy to localhost:3001
warmup the localhost:3001
switch the reverse proxy
shut down localhost:3000
(or use docker)
Both the Windows and Linux solutions can be easily automated with powershell/bash scripts and invoked via GitHub Actions or a similar CI/CD engine.

I would suggest keeping the old files there and simply overwriting them. That way the downtime is limited to single-file overwrite times and there is only ever one file missing at a time.
Not sure this helps in a "web application" though (I think you are saying that's what you're using), which is why we always use "web sites". Also, with "web sites", deploying doesn't restart your site and drop all the user sessions.

Related

Meteor - Transfer users to another replica server if one server goes down

I am using Meteor server v1.8.
I want to create a backup server.
If the main server goes down, users should be automatically transferred to a backup server, to avoid any downtime.
How can I achieve such behaviour?
Thanks in advance.
You could use process-spawning tools like Phusion Passenger to make your application failsafe. If your app crashes, Passenger restarts it immediately.
Some resources on that:
https://github.com/phusion/passenger/wiki/Phusion-Passenger:-Meteor-tutorial
https://www.phusionpassenger.com/docs/tutorials/installation/meteor/
Or use some container orchestration and make your app available on more than one machine. If one instance fails, your app should still be available.
In both cases: install your MongoDB on a separate server. This is also why you need to define the MONGO_URL environment variable on your Meteor deployment, so your app process is separated from the database process.
In such a setup you won't need to "submit" data on failure to a separate server, which I think might not even be a realistic approach in a production environment.

CD-CM setup with merge replication

I am in the process of trying to make the publishing process quicker and simpler for one of our customers, on their Sitecore-based website. Through research I stumbled upon Merge Replication, which might solve some of our issues, but it introduces other issues.
I need your help and guidance to figure out which way is the best!
We've got a CD & CM setup, with 1 CM server which has its own SQL instance, and 2 CD servers with a SQL instance each.
At the moment I have the current setup:
CM (master, web, and core databases). Web is shown only internally on a secure admin URL for the site; this works like a preview site.
CD1 & CD2 are the servers for visiting users, these each have a publishing-target in Sitecore.
When we deploy a release:
1. Deploy new code for CM. Publish templates and potential content changes for Sitecore to Web. Verify and authenticate that everything is correct.
2. Take out CD1 of the Load Balancer, deploy new code for CD1, publish templates and potential changes to Sitecore, verify and authenticate, then put server back into the load balancer.
3. Repeat step 2 for CD2.
4. Deployment done
This process is working OK for us now; we are up and running at all times without downtime on the site.
We've got a few issues with the current setup:
Our search index (Elasticsearch) is populated when CM publishes to Web, so at the moment Elasticsearch can potentially contain data which has not yet been published to the CD servers.
When publishing, the editors could forget to publish to one of the CD servers, which would cause inconsistencies between the servers that we would like to avoid.
Everything needs to be published multiple times for the same environment, which takes up time.
Editors do not know what a CD server is, they just want to have a “preview” and “Live” publishing target.
I've looked into the Merge Replication for Sitecore, and actually also have it working in a test environment. The advantage we want from this is that we only have two publishing targets:
Preview (CM server preview database)
Live (CM server web database, which then gets replicated out into the CD servers web databases)
The Elasticsearch instance will rely on data from CM's web database, which is live data.
We can have an Elasticsearch instance running on preview as well.
The issue here is that now I can't deploy only to CD1 or CD2 when doing a deployment. What if I have breaking Sitecore changes? The site will break if I publish new breaking Sitecore items to a server which hasn't had the new code deployed yet.
How can I get the best of these two worlds? Any ideas?
Do you have an ES for each CD?
If you publish data to a single CD and have a shared ES you will get inconsistency either way.
Otherwise I would make changes to the publish dialog so that only an admin/developer could see the CD servers individually.
Example of normal user:
Preview
Live
Example of admin user:
Preview
Live
CD1
CD2

Updating code on production server when using Go

When I develop and update files on the production server with PHP, I just copy the files on the fly and everything seems to work without interrupting the server.
But to update the code of a Go server application I would need to kill the server, copy the src files to the server, run go install, and then start the server. This would interrupt the service, and if I do it quite often it is going to look very bad to the users of the service.
How can I update files without the downtime when using Go with Go's http server?
PHP is an interpreted language, which means you provide your code in source format and the PHP interpreter will read it and execute it (it may create a more compact binary form so that it doesn't have to analyze the source again when needed).
Go is a compiled language, it compiles into a native executable binary; going further it is statically linked which means every code and library your app is referring to is compiled and linked when the executable is created. This implies you can't just "drop-in" new go modules into a running application.
You have to stop your running application and start the new version. You can however minimize the downtime: only stop the running application when the new version of the executable is already created and ready to be run. You may choose to compile it on a remote machine and upload the binary to the server, or upload the source and compile it on the server, it doesn't matter.
With this you could decrease the downtime to a maximum of few seconds, which your users won't notice. Also you shouldn't update in every hour, you can't really achieve significant updates in just an hour of coding. You could schedule updates daily (or even less frequently), and you could schedule them for hours when your traffic is low.
If even a few seconds downtime is not acceptable to you, then you should look for platforms which handle this for you automatically without any downtime. Check out Google App Engine - Go for example.
The grace library will allow you to do graceful restarts without annoyance for your users: https://github.com/facebookgo/grace
Yet in my experience restarting Go applications is so quick that, unless you have a high-traffic website, it won't cause any trouble.
First of all, don't do it in that order. Copy and install first. Then you could stop the old process and run the new one.
If you run multiple instances of your app, then you can do a rolling update, so that when you bounce one server, the other ones are still serving. A similar approach is to do blue-green deployments, which has the advantage that the code your active cluster is running is always homogeneous (whereas during a rolling deploy, you'll have a mixture until they've all rolled), and you can also do a blue-green deployment where you normally have only one instance of your app (whereas rolling requires more than one). It does however require you to have double the instances during the blue-green switch.
One thing you'll want to take into consideration is any in-flight requests -- you may want to make sure that in-flight requests continue to go to old-code servers until they're finished.
You can also look into Platform-as-a-Service solutions, that can automate a lot of this stuff for you, plus a whole lot more. That way you're not ssh'ing into production servers and copying files around manually. The 12 Factor App principles are always a good place to start when thinking about ops.

Best practice for updating a live website running on IIS

Currently, when we have updates that we want to roll out to our live website, we run a .bat file that copies the entire folder structure up from our development environment to the live servers. This replaces the folder that the virtual directory points to with the new updated one. This is done while the server and IIS are live, and obviously while users are accessing the website.
We occasionally get errors that are caused by files or folders becoming 'locked' immediately after an update, and usually the only option is to stop IIS or reboot the server. We're guessing that this 'locking' is caused by the .bat file attempting to overwrite a file while it is in use by IIS.
Has anybody else experienced this, and/or what would you recommend as the best way to update a live website on the fly with minimal downtime (i.e. almost no downtime at all)?
Thanks.
Reposted as an answer rather than a comment, was having a stupid moment there!
If you can't have downtime, then the best thing is to have another copy of the server running that you can divert users to when you want to update the primary. Then you divert your users back to the primary after the update.
You can use the State Server sitting on another server to ensure that any session state is maintained when they switch from one to the other.
We are currently experimenting with Microsoft's Web Farm Framework, which seems to do this kind of thing very well.
Our setup involves a front end server, a primary and secondary web server, and a separate state server. WFF keeps copies of web apps in sync on both machines and state server ensures that if a user switches servers between requests (or their current server goes offline), that they should not notice the change.
To upgrade the primary, take it out of load balancing, which will divert all of its requests to the secondary server. Do your upgrade, put it back into rotation, and then repeat the process with the second server.
Buy a second webserver, buy a load balancer. Mark 1 server offline, upgrade, bring back online. Mark server 2 offline, upgrade, bring back online.
Another option is to have two folders that you alternate between for every deployment, like in blue-green deployment. So when the blue folder is currently live, you deploy the new codebase to the green folder and then when it is ready you change the IIS settings to point to the green folder.
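A sketch of that switch using appcmd, assuming a site named MySite whose root currently points at the blue folder (names and paths are assumptions); the WebAdministration module's Set-ItemProperty on IIS:\Sites works just as well:
    # repoint the site's root virtual directory at the green folder
    & "$env:windir\system32\inetsrv\appcmd.exe" set vdir 'MySite/' '/physicalPath:D:\sites\green'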

How do you deploy your ASP.NET applications to live servers?

I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested of the workflow happening between the time your Continuous Integration Build server drops the binaries at some location and the time the first user request hits these binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action and in the OnInstall event we run a custom build .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (This task is performed only the first time an application is deployed). We support multiple Web Sites in IIS, Authentication for Virtual Directories and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
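For illustration, the two robocopy passes described above might look something like this (paths and excludes are assumptions):
    $src = '\\buildserver\drops\mysite'
    $dst = 'D:\wwwroot\mysite'
    robocopy $src $dst /MIR /L /NJH /NJS     # dry run: list what the next "push" would change
    robocopy $src $dst /MIR /XD App_Data     # real pass: copies only changed files and mirrors deletions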
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish the website to a local folder, zip it, then upload it over FTP. Deployer on the server then extracts the zip, replaces config values (in Web.Config and other files), and that's it.
Of course, for the first run you need to connect to the server and set up the IIS website and database, but after that publishing updates is a piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET. Zip it up, SFTP to the server, extract into the right location. For the first deployment, manual setup of IIS.
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLLs, we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set-up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a Capistrano-like deployment solution I wrote for .NET applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .NET applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another SCM) is the default way to get the application on the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using Powershell's WebAdministration Module. It allows us (and you) to modify any aspect of the application pool or website
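Not Unfold's actual code, but a sketch of what those WebAdministration calls look like (the names and path are assumptions):
    Import-Module WebAdministration
    New-WebAppPool -Name 'MyAppPool'
    New-Website -Name 'MySite' -Port 80 -PhysicalPath 'D:\sites\MySite' -ApplicationPool 'MyAppPool'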
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this; any deploy is installed next to the others. That way we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MsBuild. Setting this up is an adventure, because you need to install the correct MsBuild and then the ASP.NET and MVC dll's on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> entered in my .csproj files to compile the views, msbuild was randomly crashing, so I had to disable it)
Once the code is compiled the unit tests are run (I'm using nunit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it)
I have separate web.config files for each environment; dev, uat, prod. (I tried using the web transformation stuff with little success). So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use winscp to synchronise the filesystem between the build box and UAT
I send a HTTP request to the UAT server and make sure I get back a 200
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show which revision # caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to always ensure that UAT is always up to date with Prod. Still, it's a pain.
There's quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there's a few support batch files needed as part of the process. (These are also checked in).
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rSync batch file to upload only the changes since the last upload. The beauty of rSync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rSync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you could find the last backed-up version of every file in that folder.
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
I'd recommend NOT just overwriting existing application files but instead create a directory per version and repointing the IIS application to the new path.
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new appdomain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
