I am working on an ASP.NET MVC4 project where the same project needs to be deployed to many clients on a daily basis; each client will have its own domain/subdomain, a separate app pool, and a separate db (MSSQL).
Doing each deployment manually could take at least 1-2 hours if everything goes well. Is there any way I can do this in some automated fashion?
Moreover, we also need to update all of the apps when a new version is released, maybe one by one or all of them at the same time. However, doing this manually could take weeks, and once we have more clients it will not be possible to do these updates manually.
The update involves suspending the app for some time, taking a full backup of the files and db, updating the application files in the app folder, upgrading the db with a script, starting the app again, and running some diagnostic script to check whether the update was successful; if not, we need to find out what went wrong.
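In rough PowerShell, the per-client routine we want to automate would look something like this (only a sketch; the site names, paths, db names and health-check URL are all invented, and it assumes the WebAdministration and SQL Server modules are available):

    # one client's update, end to end
    Import-Module WebAdministration
    Stop-Website "client1.example.com"                                     # suspend the app
    Compress-Archive "D:\Sites\client1\*" "D:\Backups\client1-files.zip"   # back up files
    Invoke-Sqlcmd -Query "BACKUP DATABASE Client1Db TO DISK = N'D:\Backups\Client1Db.bak'"
    Copy-Item "D:\Releases\v2.0\*" "D:\Sites\client1" -Recurse -Force      # update app files
    Invoke-Sqlcmd -InputFile "D:\Releases\v2.0\upgrade.sql" -Database Client1Db
    Start-Website "client1.example.com"
    # diagnostic check: did the update take?
    $check = Invoke-WebRequest "http://client1.example.com/health" -UseBasicParsing
    if ($check.StatusCode -ne 200) { throw "Update verification failed for client1" }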
How can we automate these updates? Any ideas on how to approach this issue would be great.
As a developer for BuildMaster, I can say that this scenario, known as the "Core Version" pattern, is a common one. If you're OK with a paid solution, you can set up deployment plans within the tool that do exactly what you described.
As a more concrete example, we experience this exact situation in a slightly different way. BuildMaster has a set of 60+ extensions that rely on a specific SDK version. In our recent 4.0 release, we had to re-deploy every extension because of breaking API changes within the SDK. This is essentially equivalent to having a bunch of customers and deploying to them all at once. We have set up our deployment plans such that any time we create a new release of the SDK application, we have the option to set a variable that says to build every extension that relies on the SDK:
In BuildMaster, the idea is to promote a build (i.e. an immutable object that travels through various environments like Dev, Test, Staging, Prod) to its final environment (where it becomes the deployed build for the release). In your case, this would be pushing your MVC application to its final environment, and that would then trigger the deployments of all dependent applications (i.e. your customers' instances of your application). For our SDK, the plan looks like this:
For your scenario, you would only need the single action, "Promote Build". As I mentioned before, any dependents would then be promoted to their final environments, so all your customer deployments would kick off once that action is run during deployment. As an example, our Azure extension's deployment plan for its final environment looks like this (internal URLs redacted):
You may have noticed that these plans are marked "Shared", which means every extension we have has the exact same deployment plan, but utilizes different variables to handle the minor differences like names, paths, etc.
Since this is such an enormous topic I could go on for ages, but I think that should be sufficient for your use-case if you wanted to try it out.
There are others, but you could set up Team Foundation Server to deploy automated builds.
http://msdn.microsoft.com/en-us/library/ff650529.aspx
I find the easiest way to do this from an MVC project is to create a publish profile.
This is done by right-clicking your project, selecting Publish, and then configuring it to your needs.
Then from TFS you create a new build definition; this kicks off a wizard which takes you through it.
There are quite a few options which would be too long to go into for every scenario.
The setting I usually find most important is an MSBuild argument that deploys using the publish profile.
This can be found at Process > Advanced > MSBuild Arguments.
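For example, the arguments to build and deploy with a publish profile look like this (the profile name is whatever you called yours):

    /p:DeployOnBuild=true /p:PublishProfile=MyProfile /p:Configuration=Release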
Once this is configured correctly it's a simple case of right-clicking and queueing a new build to build and deploy.
You will need a different publish profile/build configuration per deployment environment.
For backups I use a powershell script which can be called manually or from TFS.
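A trimmed-down sketch of the kind of script I mean (the site path and drop share are made up; Compress-Archive assumes a recent PowerShell):

    # zip the web folder to a backup share; run manually or as a TFS build step
    $stamp = Get-Date -Format "yyyyMMdd-HHmm"
    Compress-Archive -Path "D:\Sites\MyApp\*" -DestinationPath "\\backups\drops\MyApp-$stamp.zip"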
You also have a drop folder in TFS which keeps a backup of the last x releases.
The databases are automatically configured in SQL Server to back up; TBH I didn't set that up, it was a DB admin who is also involved with releases.
From a dev testing side I use jMeter (http://jmeter.apache.org/) to run some automated scripts that check that users can log in and view certain screens, just to confirm nothing major has gone wrong. However, there is usually a testing team to run more detailed tests; again, not set up by me.
All of the above will probably take you some time to set up, but in the long run it will save you weeks of time over a year.
A free alternative to TFS is http://www.cruisecontrolnet.org/; I have used this in the past too and it is pretty good.
You can automate your .Net deployments with Beanstalk, which will give you a way to trigger deployments with a single click, watch progress, manage permissions and see history of deployments. Check out this guide on the topic:
http://guides.beanstalkapp.com/deployments/deploy-dotnet.html
I hope you will find it useful.
P.S. - I work at Beanstalk.
For our company websites we develop in a style that would be most accurately called Continuous Deployment (http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/)
We have a web farm and every site is located on two servers for failover purposes. We wish to automate the process of deploying to our dev server, test server, and prod servers. Our websites are ASP.NET websites (not web applications) so we don't do builds, we just push the pages. We generally change our main site anywhere from 2-20 times a day depending on many variables. These changes can be simple text/html changes or new pages/functionality.
Currently we use TFS for our source control, and each site is its own repository. We don't have branches; you manually push your changes to dev (we develop on local machines) when they need to be reviewed, then after sign-off you check in your changes and push them to prod manually (this ensures that the source files are what is actually on prod, and merges happen right then). Obviously there is a lot of room for error here, and it occasionally happens that a dev forgets a critical file and brings down the entire site.
It seems that most deployment tools are built upon a build process, which isn’t something we are likely to move to since we need to be extremely agile in meeting our business needs. What we’d really like, we think, is something like the following:
Dev gets latest version of website from source control, which is essentially a copy of production
Dev does work on his own box.
When it is ready to be reviewed by the task owner, the dev creates some form of changeset (by checking in the change or something else) and a tool then pushes that change to dev
After review (and possible further changes) the dev is then able to have the tool push all the changes for this task to test (UAT)
After this sign off the changes are automatically pushed to prod and checked into the prod branch, or whatever, for anyone else to get
We’ve toyed around with the idea of having three branches of the code in TFS, but we aren’t sure how you would pull from one branch (prod) but check in to a dev branch (none of us are TFS gurus).
So, how have other people handled this scenario? We are not wedded to TFS as a source control provider if there is something out there that would do what we want, nor do we require a single vertical solution; we are more than happy to plug in different components as long as they can be plugged in, or even develop our own if we have to.
Thanks in advance.
I'm sure you can do this with TFS, but I've successfully done what you're describing with Mercurial, and it works pretty well.
With Mercurial (I'd assume Git is the same way), you can pull from one repo, and push to another. So you'd pull from Prod, do your change, commit locally, push to Dev, and then whenever you're happy, push the changesets into Prod (either from Dev or from Local, doesn't matter).
To get started, you'd have a Prod repo, then clone it into a Dev repo - both of these are on your server. At this point they're identical.
TortoiseHG on Windows gives you the ability to easily choose which repository you're synchronizing with. You can keep both the Prod and Dev repos in your list, and decide which one to use when it's time to pull or push.
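In command form the round trip looks like this (repo URLs are placeholders):

    hg clone https://server/hg/prod mysite     # start from what's live
    # ...make your changes locally...
    hg commit -m "task 1234: header fix"
    hg push https://server/hg/dev              # push for review
    hg push https://server/hg/prod             # after sign-off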
At work we currently use the following deployment strategy:
Run a batch script to clear out all Temporary ASP.NET files
Run a batch script that compiles every ASPX file into its own DLL (ASP.NET Web Site, not Web Application)
Copy each individually changed file (ASPX and DLL) to the appropriate folder on the live server.
Open up the Deployment Scripts folder, run each SQL script (table modifications, stored procs, etc.) manually on the production database.
Say a prayer before going to sleep (joking on this one, maybe)
Test first thing the next morning and hope for the best - fix bugs as they come up.
We have been bitten a few times in the past because someone forgot to run a script, or thought they ran something but didn't, or overwrote a sproc related to some module because there are two files (one in a Sprocs folder and one in a [ModuleName]Related folder), or copied the wrong DLL (since they can have the same names apart from a random alphanumeric suffix generated by .NET).
This seems highly inefficient to me - a lot of manual things and very error prone. It can sometimes take 2-3 or more hours for a developer to perform a deployment (we do them late at night, like around midnight) due to all the manual steps and remembering what files need to be copied, where they need to be copied, what scripts need to be run, making sure scripts are run in the right order, etc.
There has got to be an easier way than taking two hours to copy and paste individual ASPX pages, DLLs, images, stylesheets and the like and run some 30+ SQL scripts manually. We use SVN as our source control system (mainly just for update/commit though, we don't do branching) but have no unit tests or testing strategy. Is there some kind of tool that I can look into to help us make our deployments smoother?
I did not go through all of it, but the "You're deploying it wrong" series from Troy Hunt might be a good place to start.
Points discussed in the series :
Config transforms
Build Automation
Continuous Integration
We have four stages that a release moves through:
Development
QA
UAT
Production
We have build scripts (inside the Bamboo build server) running against QA and against UAT. Our DBA is the only person who can run create scripts against QA, UAT, and PROD. Anything that goes from QA -> UAT is like a test-run deployment. UAT gets reverted by copying the production systems down again.
When we release into Production we just create a whole new site, point it at the UAT database, and test that environmentally it is working fine. Then, when this is working well, we flick the 'switch' and point the production IIS record at the next site, and change the DB connection to point at the Prod DB.
Because we are using a completely different folder structure, all of our files get copied up, so there is no chance of missing one. Because we have had test runs of the deployment into UAT, we know we haven't missed a DB script (DB scripts are generally combined into one). Because we have tested a shadow copy of the IIS website, we know that environmentally it should work. We can then do all this setup during the day and do the final switch flicking at midnight or whenever, reducing the impact on devs.
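The final switch flick can then be a couple of lines against IIS instead of a file copy; a sketch using the WebAdministration module (site and folder names invented):

    Import-Module WebAdministration
    # repoint the production site at the already-tested shadow copy
    Set-ItemProperty "IIS:\Sites\ProdSite" -Name physicalPath -Value "D:\Sites\Release-42"
    # the new folder's web.config already points at the Prod DB by this stage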
tl;dr; Automated build and deploy; UAT system for test running deployment; Deployment during work hours; Flick switch/run DB update at midnight.
I am a developer for BuildMaster, a tool which can very easily automate the steps you have outlined above, and we have a limited version free for a team of 5 developers.
Most of your pain points will disappear the moment you set up the deployment automation - mainly the batch script execution and the file-by-file copying. Once you're fully automated, you can even schedule the deployment for night time and only have to worry about it if there's an error in the process (you can set up a notifier for a failed build).
On the database side, you can integrate your database with BuildMaster as well and if you upload the scripts into the tool it will keep track of which ones were run against which database.
To see how to set up a simple web application deployment plan, you can run one of the example applications included. You can also check out: http://inedo.com/support/tutorials/lunchmaster/part-1 to see how to create one yourself - it's slightly outdated since we've made it even easier to get started out-of-the-box but the main concepts are the same.
Please see this blog post and associated talk by Scott Hanselman titled "Web Deployment Made Awesome"
Blog
Video
As for SQL Deployment, you might want to consider one of the following:
RikMigrations
Migrator.NET
FluentMigrator
Mantee Introduction & Source
Have a User Acceptance Testing (UAT) environment which is completely isolated from your development environment, and only accessible to the UAT manager.
Set up a UAT build which you can manually trigger upon each release; when triggered, this should send all your deployment files as well as a deployment checklist to the UAT manager, who will redeploy all files to the UAT environment and run any database upgrade scripts.
Once the application's users and testers have signed off on the UAT release, the UAT manager can be authorised to deploy to the PRODUCTION environment using the exact same procedure and checklists as the UAT release. This will guarantee that you never miss any deployment steps, and will test the deployment process prior to moving it into production.
Caveats: I'm in an environment where we can't use MSI, batch files, etc. for the final deployment.
Things that helped:
A build server that does the full compilation and runs all unit tests and integration tests. Why find out you have something in an aspx page that doesn't compile on deployment night? (I admit your question doesn't make it clear whether compilation is happening on deployment night.)
I have a page that administrators can reach that exercises environment and deployment failure points, e.g. connect to db, connect to reporting services, send an email, read and write to the temp folder.
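The same probes are easy to run as a script straight after a deployment too; a minimal sketch (the connection string, SMTP host and paths are examples):

    # exercise the usual failure points before the users do
    $conn = New-Object System.Data.SqlClient.SqlConnection -ArgumentList "Server=SQL01;Database=MyAppDb;Integrated Security=true"
    $conn.Open(); $conn.Close()                                       # can we reach the db?
    Send-MailMessage -To "ops@example.com" -From "app@example.com" `
                     -Subject "deploy smoke test" -SmtpServer "smtp01"
    $probe = "D:\Sites\MyApp\App_Data\probe.txt"
    Set-Content $probe "ok"; Get-Content $probe; Remove-Item $probe   # read/write the temp area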
Also, put all the things that the administrator needs to change into a file external to web.config. The connection strings and app settings sections natively support this (the configSource attribute on connectionStrings, the file attribute on appSettings); don't reinvent the web.config system just to create a separate file.
Here is an article on how to do better integration tests: http://suburbandestiny.com/Tech/?p=601 There is a ton of good literature on how to do unit tests, but often if your app already exists, you will have to refactor until unit testing becomes possible. If that isn't an option, then don't be a purist and put together some integration tests that are as fast and repeatable as possible.
Keep your dependencies in bin instead of the GAC, since it's easier to tell an administrator to copy files than it is to teach them to administer the GAC.
I've always deployed my web applications via FTP (sometimes even xcopy), and then manually run database scripts myself.
I started deploying this way in the 90s, but lately I've seen a few web apps with installers. I'm starting to question whether I'm locked into an outdated process. I'm a consultant and my apps are usually internal, so I don't worry about distributing them and having others install them.
But I'm curious: does anybody create installers to deploy internal ASP.NET web applications?
If so, why? (Voluntarily, mandated, or part of an automation process)
And have you had any problems doing it this way?
Absolutely. We use installers for all of our apps. That way we create the installer and run it on the QA and UAT environments to test, and we know exactly what is going to happen in production. There are no guesses as to what order someone might do something in, or whether they missed a step. It makes things a lot easier.
Ooh, I forgot about the automated process too. We have systems in place (AnthillPro) which automatically deploy to the proper environments. The QA people don't have to wait for something to be done, because it's all done at 2 am. If they need to rerun the build with updates, the devs check the code in, we push a button, and it's automatically deployed. No waiting for the build engineer because he's in a meeting or sick or whatever.
You always want to have an automated way to build and deploy - it greatly reduces the chances of a one-off error if you forget a certain step. Also, it allows you to offload the deploy to someone else easily without having to teach them 100 customized steps. Whether the project is internal or not, all applications should follow best practices.
Personally I'm a bit like the OP; generally I just deploy using FTP, but that said, typically my applications are internal or, in the case of other projects, 100% managed by me.
I've also been thinking about this lately, however, and considering how a proper deployment process might improve things; having to document a detailed install process can be a real pain.
I use PowerShell and found it really easy to automate lots of tasks. You will probably find it a bit different at the very beginning, but in the end you will see that it's all about the power of the .NET libraries!
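For example, calling straight into the framework from a script; a small sketch (paths made up):

    # use the BCL's zip support directly from PowerShell to snapshot a site
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::CreateFromDirectory("C:\inetpub\MySite", "D:\Backups\MySite.zip")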
I have used the "Web Setup Project" to create an MSI that installed the output of a "Web Deployment Project" for an internal app. Our server admin wasn't up to doing a 50-step manual install. For my current app, my server admin doesn't like the 'black box' feel of MSI installers and prefers getting a pile of files and a 50-step deployment manual. (See a pattern here? Ask your server admin what he wants.)
The Web Setup Project doesn't make it immediately obvious how to install to anything other than the "Default Website", other than that, it made the installation process repeatable and created a built in way to rollback (by just running the installer from 1 version ago).
This of course assumes that your virtual directory doesn't hold any user-modified content; I wouldn't trust an MSI to properly merge user-created and new files.
We use the "XCopy" deploy model here, since the Ops folks have their own method of setting up security on a new web application on the server.
However, we did need to use an installer when we had to install a web application that was using a newer version of Crystal Reports, since it had to do something special with a key and we didn't have a full-blown version of CR on the server itself. So keep that in mind when working with third-party apps; they may need some kind of merge module, which an MSI handles easily.
Yep... we have an app that needs a lot of prerequisites set up: web service, Windows service, user accounts, security, folder creation, GAC bits, etc. I rolled it all up into a nice MSI with custom actions that can install and uninstall cleanly. Saved about an hour's worth of work to deploy on a new box.
A lot of the other smaller apps are just deployed by doing Publish Website to a local folder then ftp'ing the contents to the target.
It greatly depends upon the scale of your project, your environment, and your internal user base. I rarely deploy with an MSI because we are too small an operation to have multiple environments (except for SharePoint, which is different altogether). We develop and use VS to deploy web apps to a development box; assuming they are approved, we then use VS again to deploy to the live box.
The only proviso is that we have multiple copies of the web.config (appended with test, dev and live), and we then delete the suffix off the relevant file depending upon where it's been deployed.
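The suffix-stripping step itself is easy to script; a sketch assuming the naming convention above:

    # in the deployed folder: keep the config for this environment, drop the rest
    $suffix = "live"                                  # or "dev" / "test"
    Rename-Item "web.config.$suffix" "web.config" -Force
    Remove-Item web.config.dev, web.config.test, web.config.live -ErrorAction Ignore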
It's probably not the best methodology (I know it's not), but it works and it aids rapid deployment of small to medium sized solutions in a small-scale user environment.
F5ToDebug...
You're saying it's OK to take shortcuts if you don't have time to do it properly?
"Who's going to test the code on the test environment?" You said it yourself that you have config files for _test; why would that not be a suitable test?
It seems like there are so many different ways of automating one's build/deployment that it becomes difficult to parse through all the different scenarios that people support in tutorials on the web. So I wanted to present the question to the Stack Overflow crowd: what would be the best way to set up an automated build and deployment system using the following configuration:
Visual Studio 2008
Web Application Project
CruiseControl.NET
One of the first things I tried was to have CCNet automatically zip the output and copy it to the server, but then that requires manual work to unzip at the destination. However, if we try to copy all the files individually, it could potentially take a long time for a large application (the build server lives outside the datacenter, in our office... I know).
Also of particular interest is how we would support multiple environments as we have dev, qa, uat, and then of course prod.
MSDeploy seems really interesting, but unless I'm interpreting the literature incorrectly, doesn't help in the scenario of deploying from the output of a build server. If anything, it seems like it'll be useful in deploying one build across a build farm ... but even for deploying from one environment to another, one would have to manually change config settings and web service URLs, etc.
I recently spent a few days working on automating deployments at my company.
We use a combination of CruiseControl, NAnt, MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to backup the live site and transfer the new files over.
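The transfer step boils down to a couple of commands like these (a sketch; server, paths and package names are invented):

    # package the live site as a backup, then sync the new build over it
    msdeploy.exe -verb:sync -source:contentPath="D:\Sites\MyApp" -dest:package="D:\Backups\MyApp-pre.zip"
    msdeploy.exe -verb:sync -source:contentPath="D:\Build\MyApp" -dest:contentPath="D:\Sites\MyApp",computerName=WEB01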
Our solution is briefly described in an answer to this question Automate Deployment for Web Applications?
You might be interested in MSDeploy. Here's a Scott Hanselman post on this. It's only available as a technical preview at the moment (September 2008) but is worth evaluating against your requirements.
There is another new build tool (a very intelligent wrapper) called NUBuild. It's lightweight, open source, and extremely easy to set up, and it provides almost no-touch maintenance. I really like this new tool, and we have made it the standard tool for the continuous build and integration process of our projects (we have about 400 projects across 75 developers). Try it out.
http://nubuild.codeplex.com/
Easy-to-use command-line interface
Ability to target all .NET Framework versions, i.e. 1.1, 2.0, 3.0 and 3.5
Supports XML-based configuration
Supports both project and file references
Automatically generates the “complete ordered build list” for a given project; no-touch maintenance
Ability to detect and display circular dependencies
Performs parallel builds; automatically decides which of the projects in the generated build list can be built independently
Ability to handle proxy assemblies
Provides visual cues to the build process, e.g. showing “% completed”, “current status”, etc.
Generates a detailed execution log in both XML and text formats
Easily integrated with the CruiseControl.NET continuous integration system
Can use a custom logger like XMLLogger when targeting .NET 2.0+
Ability to parse error logs
Ability to deploy built assemblies to a user-specified location
Ability to synchronize source code with the source control system
Version management capability
Do you have the ability to run commands remotely? The PsExec utility from Sysinternals would let you run a command-line unzip program on the remote machine. If you have a script that copies the build as a .zip file to the remote site, you would just need one more line for the PsExec call to unzip the files.
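Something along these lines (a sketch; it assumes a command-line unzip such as 7-Zip's 7za is already on the remote box, and all names are made up):

    Copy-Item .\site.zip \\webserver\d$\drops\site.zip
    psexec \\webserver cmd /c "7za x d:\drops\site.zip -od:\sites\MyApp -y"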
I had a related question about getting a deployable set of files from an automated build. I found Web Deployment Projects (links and all in the old question) did what I needed - they're a VS and MSBuild add-on.
This is a common problem (and I wish I had read it sooner) for all development, not just ASP.NET. Being one of its developers, my team naturally uses BuildMaster internally for the entire release process, and for most scenarios it's free. Within the tool, we are able to perform all the standard CI builds to create artifacts and then set up an automation process to deploy these artifacts to any one of the 40+ servers we have internally or externally hosted, depending on the specific application or environment.
Since you specifically mentioned deployment to different testing environments, this is a fundamental aspect of the tool. The idea is to model the environment workflow (e.g. Integration -> QA -> Production) you already have in place and essentially promote a build all the way from source control to production. Most times, it's as simple as adding a deployment action that deploys an artifact to the environment, other times it can be much more complex.
You also casually mentioned configuration file changes are part of deployment, which is another built-in component to BuildMaster. The idea we had was to use the tool itself as the central hub for all configuration files and deployments, thus ensuring the latest changes are applied automatically with a simple "deploy configuration files" action in your deployment plan.
One thing you didn't mention with regard to this process is the database deployment aspect. Most ASP.NET applications require an associated database, otherwise they could just be static HTML files. It is crucial that the database schema gets updated to the appropriate database version with every deployment. There is, not surprisingly, a module within BuildMaster that handles this for you as well. The idea is to store DDL-DML scripts within the tool itself, and by executing scripts only once per environment, it ensures that all of your databases across each environment are up-to-date as your builds are deployed through them. Other scripts (e.g. stored procedures, views, triggers, etc.) are essentially code files and therefore belong in source control. These DROP-CREATE-CONFIGURE type scripts can be run each and every time in most cases with a simple deployment action.
Another piece of the deployment puzzle that most developers do not think about is process automation. Many developers need to perform sign-offs or fill out change request forms in order to manually perform these processes. Again, this is all available as part of the automated workflow setup within BuildMaster. You can set up blockers that do not allow promotion to, say, the QA environment unless all unit tests have passed, or block promotion to the Staging environment unless someone from the QA team approves the build and all issues in your issue tracking tool are resolved/closed for that particular release.
While I realize I left out CC.NET from the answer, our applications are all built and deployed through BuildMaster so we no longer need it, though we could just as easily pick up the artifacts from a drop location and deploy them in later environments.
I see that many people use CC for their .NET projects, but why not use Jenkins and SonarQube? They have everything you need. I set all of this up in 3 days on a Win 2008 R2 server with MSSQL, Jenkins, VisualSVN and SonarQube.
It all works great and you get all the metrics on your project. SonarQube uses Gallio, Gendarme, FxCop, StyleCop, NDeps and PartCover to get your metrics, and all of this is pretty straightforward since SonarQube does it automatically without much configuration.
I'll post some pictures for you to get a feeling for it. Here is Jenkins, which builds and gets the Sonar metrics, and another job for deploying automatically to IIS.
And SonarQube, with all the metrics for my project. This is a simple MVC4 app, but it works great!
If you want more information I can be more specific, but I think you should at least consider Jenkins. If CC suits you better, at least you'll have looked at a good alternative before choosing.
This whole setup uses MSBuild to build and deploy the apps.
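The build step in Jenkins is essentially a single MSBuild call; for example (the solution and profile names are examples):

    msbuild MyApp.sln /m /t:Rebuild /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=IIS-Prod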