As part of my overall development-practices review, I'm looking at how best to streamline and automate our ASP.NET web development process.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copy until project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — time and again, environment-specific paths and variables (such as DB names, image upload directories, web server paths, etc.) get copied incorrectly from local to staging to live, with very embarrassing results.
I'm pretty confident I've got no. 1 under control. What about configuration management? Does anyone have any advice on how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt and NUnit with CruiseControl.NET solves a lot of the issues you describe. I think it works well for small groups, and it's all free; you just need to learn how to use the tools.
CruiseControl.NET helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; e.g. when coding a PHP-based site I have a file named config.php-dist. To get the site working at all, you have to copy it and fill in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
Upload directories should be relative if possible; actually, all directories should be relative. I'm not experienced with ASP.NET, but if it's anything like PHP, the current directory is always the directory of the file being requested. If you channel all requests through a single file (e.g. index.asp), this can even be found programmatically. Or you could find it programmatically by using the equivalent of dirname(__FILE__) in your configuration file.
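In ASP.NET (System.Web) the rough equivalent of PHP's dirname(__FILE__) for the application root is HttpRuntime.AppDomainAppPath, or Server.MapPath("~") inside a request. A minimal sketch of a helper built on that (the class and method names are just for illustration):

    using System.IO;
    using System.Web;

    public static class AppPaths
    {
        // Physical root of the web application, e.g. C:\inetpub\wwwroot\acmesite\
        public static string Root
        {
            get { return HttpRuntime.AppDomainAppPath; }
        }

        // Resolve a path stored relative to the application root, so the same
        // config value works unchanged on dev, staging and live.
        public static string Resolve(string relativePath)
        {
            string cleaned = relativePath.TrimStart('~', '/', '\\')
                                         .Replace('/', Path.DirectorySeparatorChar);
            return Path.Combine(Root, cleaned);
        }
    }

With something like this, values such as upload directories can stay relative in configuration and be mapped to the right physical folder on each environment.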
I also recommend installing IIS (or whatever web server you are using) on all development workstations (including the designers'). It makes life easier, as no one can step on anyone else's toes. All one has to do is add test hosts to the hosts file (\windows\system32\drivers\etc\hosts, IIRC) in addition to adding a site to the local IIS. This plays well with version control (check out, add the site to IIS and the hosts file, edit, edit, edit, commit).
One thing that really helps is keeping your paths relative where you can and centralising them where you can't. When working with ASP.NET I have tended to use web.config to store any configuration and path-related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object - it's worth looking in some detail at what the environment makes available to you.
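For example, a root-relative path kept in web.config's appSettings and mapped to a physical folder per environment; the UploadDirectory key and its default value are made-up examples:

    using System.Configuration;
    using System.Web;

    public static class SiteConfig
    {
        // web.config (illustrative):
        //   <appSettings>
        //     <add key="UploadDirectory" value="~/uploads/images" />
        //   </appSettings>
        public static string UploadDirectory
        {
            get
            {
                string configured = ConfigurationManager.AppSettings["UploadDirectory"]
                                    ?? "~/uploads/images";   // sensible default
                return HttpContext.Current.Server.MapPath(configured);
            }
        }
    }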
One way to make sure you don't end up with something that depends on the path name is to have a continuous integration server execute your test suite against your application. Each time it runs, you create a random file path; as soon as someone introduces a dependency on the file path, the build will fail.
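A rough sketch of that trick, assuming a small console helper the build script runs before the test suite; the build-output path is an example value:

    using System;
    using System.IO;

    class DeployToRandomPath
    {
        static void Main(string[] args)
        {
            // Source folder produced by the build; an example value, adjust to taste.
            string buildOutput = args.Length > 0 ? args[0] : @"build\site";

            // A folder name that changes on every CI run, so any hard-coded
            // path dependency in the application makes the test suite fail.
            string target = Path.Combine(Path.GetTempPath(),
                                         "site-" + Guid.NewGuid().ToString("N"));

            foreach (string file in Directory.GetFiles(buildOutput, "*", SearchOption.AllDirectories))
            {
                string relative = file.Substring(buildOutput.Length).TrimStart('\\', '/');
                string destination = Path.Combine(target, relative);
                Directory.CreateDirectory(Path.GetDirectoryName(destination));
                File.Copy(file, destination);
            }

            // The CI job then points the web server and the tests at this folder.
            Console.WriteLine(target);
        }
    }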
I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high level overview of the chain is thus:
Web developer creates the code and checks it in SVN;
Buildserver sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their web server(s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and then superimpose your customer's customisations on top of your standard setup. If it's a brand new install, there is nothing to superimpose.
top-level
|_ standard files            <- this will never be touched or changed by the customer
|    |_ images
|    |_ settings.txt
|_ customer files            <- the customer hacks this to their heart's content
     |_ images
     |_ settings.txt_override
Yes, this does mean that some kind of merging process needs to happen, and there needs to be a script that does it (see the sketch after this list), but this approach has several advantages:
For settings that suddenly become redundant, just issue a warning to that effect.
If a customer has their own logo, provide the ability to specify it in the override file.
The message to customers is clear: stay off the standard files.
If customers request more customisable settings, write the defaults into the override file during upgrades if they do not already exist.
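A hedged sketch of what that merge step could look like in C#. It loads the standard settings.txt and then superimposes any keys found in the customer's settings.txt_override; the key=value file format is an assumption for illustration:

    using System.Collections.Generic;
    using System.IO;

    public static class SettingsMerger
    {
        // Superimpose the customer's overrides on top of the standard settings.
        public static IDictionary<string, string> Merge(string standardFile, string overrideFile)
        {
            var settings = ReadKeyValues(standardFile);

            if (File.Exists(overrideFile))            // brand new install: nothing to superimpose
            {
                foreach (var pair in ReadKeyValues(overrideFile))
                    settings[pair.Key] = pair.Value;  // customer value wins
            }
            return settings;
        }

        static IDictionary<string, string> ReadKeyValues(string path)
        {
            var result = new Dictionary<string, string>();
            foreach (var line in File.ReadAllLines(path))
            {
                var parts = line.Split(new[] { '=' }, 2);
                if (parts.Length == 2)
                    result[parts[0].Trim()] = parts[1].Trim();
            }
            return result;
        }
    }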
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt"; if it exists, the upgrade bat script runs. The version number is contained within the text file. A bit basic, but it should work.
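The same check sketched in C#, in case you'd rather run it from the installer itself rather than a batch file; version.txt living in the deployment target folder is the convention assumed here:

    using System;
    using System.IO;

    class UpgradeCheck
    {
        static void Main(string[] args)
        {
            // args[0] is the deployment target folder passed in by the installer.
            string versionFile = Path.Combine(args[0], "version.txt");

            if (File.Exists(versionFile))
            {
                string installedVersion = File.ReadAllText(versionFile).Trim();
                Console.WriteLine("Upgrade from version " + installedVersion);
                // ...run the upgrade-only steps here (config merge, cache cleanup, etc.)
            }
            else
            {
                Console.WriteLine("Fresh install - skipping upgrade steps.");
            }
        }
    }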
This also has the added advantage of letting you merge the customer's custom settings between versions more elegantly, since you know which properties could be overridden for that particular version.
These are some general suggestions (not specific to msdeploy), but I hope they help:
I think you'll need to provide several installers anyway: for the initial setup and for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this will give you some pointers.
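If you do ship a merge utility, a minimal sketch of the idea using System.Xml.Linq might look like this; the key names and the rule "add new defaults, drop obsolete keys, never overwrite the client's values" are assumptions for illustration:

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public static class WebConfigUpgrader
    {
        // New appSettings introduced in this release, with defaults (illustrative).
        static readonly Dictionary<string, string> NewSettings = new Dictionary<string, string>
        {
            { "EnableNewReportEngine", "false" },
            { "CacheDurationSeconds", "300" },
        };

        // Settings that are obsolete in this release (illustrative).
        static readonly string[] ObsoleteSettings = { "LegacyReportPath" };

        public static void Upgrade(string webConfigPath)
        {
            XDocument doc = XDocument.Load(webConfigPath);
            XElement appSettings = doc.Root.Element("appSettings");
            if (appSettings == null)
            {
                appSettings = new XElement("appSettings");
                doc.Root.Add(appSettings);
            }

            foreach (var setting in NewSettings)
            {
                bool alreadyThere = appSettings.Elements("add")
                    .Any(e => (string)e.Attribute("key") == setting.Key);

                // Never overwrite a value the client has already customised.
                if (!alreadyThere)
                    appSettings.Add(new XElement("add",
                        new XAttribute("key", setting.Key),
                        new XAttribute("value", setting.Value)));
            }

            // Drop settings that no longer exist in the new version.
            appSettings.Elements("add")
                .Where(e => ObsoleteSettings.Contains((string)e.Attribute("key")))
                .ToList()
                .ForEach(e => e.Remove());

            doc.Save(webConfigPath);
        }
    }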
As for the replaced logos and other client customizations, I think the best approach would be to support branding your application; that is, move all branding details to a place that your new/upgrade installers won't touch.
As for the rest of the adjustments made by your clients, they make those at their own risk, so the best help you can offer is a detailed list of changes (maybe even the list of changed files since the previous version) and a how-to article about merging the sources with tools like Araxis Merge or similar.
Or you could create a utility, included in the installer, which tries to do all the tricky merging on the client's machine. I would not recommend this approach, as it takes a lot of effort and resources to maintain.
One more thing: back up the previous client copy before an upgrade, so that even if a client runs into trouble upgrading, it is always possible to roll back. Also provide a good feedback channel your clients can use to report their troubles; that feedback will let you figure out what problems your clients hit and how to make the upgrade process more comfortable for them.
I would build on what the above have said, but I would do it with transformations, and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory: this is the transform that takes the production generic template and creates a web.config for your development server. (It sounds like you are shooting for a CI-type process here.)
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
Is that what you were looking for? Thoughts?
Maybe I'm totally outdated, but for the last four years I've been using a simple FTP upload to publish a new website, without even building it in Visual Studio. Just a bunch of ASPX and CS files, as they are in Visual Studio.
I do understand that compiling the project gives me some security (people with access to the server can't read those files in a text editor) and lets me avoid the first-time compilation, but is that so important?
I mean, someone with access to the server can always do a lot of harm beyond just reading CS files instead of DLLs.
First-time compilation usually takes no more than a minute; just finding the compiled version of the site would take as much time.
Now I'm watching a video on Pluralsight which explains the new MSDeploy tool available for ASP.NET, and I can't see any good reason to use it.
So what's wrong with the old-fashioned way of just sending files via FTP, without compiling or using fancy tools?
I did a speed test, and with MSDeploy I can deploy a website twice as fast as with old-fashioned FTPing. So instead of 4 minutes it takes 2.
Now from another perspective: I already have a live project on the web in which I have to change Default.aspx because of a typo in some HTML tag. Deployment via MSDeploy will take 10 times longer than uploading that one file.
Maybe I'm missing something?
MSDeploy does things which FTPing to a site can't do. Need to change a machine.config? You're unlikely to have FTP write access to the folder which contains it. Want to change a server setting in a server-version-independent manner? FTP won't do that. Etc. FTP works fine for copying files to folders in which you have write access, but that's all it can do.
When you deploy a project you can do a lot of things with it.
You can set up a job in your deploy that packages all your JavaScript into one file and all your CSS into one file.
You can set up a job in your deployment that changes a bunch of config settings to match your production server settings (rather than development settings).
The idea of deployment is that you take your current development website and transform it into a production website without having to do any of that manually.
The most important thing is that when deploying is the only way to update your website, you will never forget to package your JS or to remove some debugging code, because you can't just sneakily update a single file.
Where's the best place for a production ASP.NET application? I mean a place that needs the least permission manipulation on folders, and is probably the experts' choice.
Under C:\inetpub\wwwroot, or C:\inetpub, or elsewhere?
In the development/test phases I usually put it under C:\inetpub\wwwroot and create a new web application without setting bindings. But for the production version, with bindings, I'm not sure where the right place is.
You can put it anywhere you like; the key thing is to ensure that the app pool it runs under is set to run as a low-privileged user (like NT AUTHORITY\NETWORK SERVICE), and that this user has Read (and possibly Browse, if you want it) permission on the folder you put your web app in. Very seldom (if ever) will the user need Write or Modify permissions on the folder.
And on a new system I had a lot of problems modifying batch files and setting permissions.
Setting permissions should not be a problem: set the same basic permissions I mentioned above for the user you want to run the app pool as. You can use PowerShell or WMI for this, and you should use the same permissions no matter what folder you install into.
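If you'd rather do it from .NET code than PowerShell or WMI, a minimal sketch with System.Security.AccessControl (the folder path and account name are example values):

    using System.IO;
    using System.Security.AccessControl;

    class GrantReadAccess
    {
        static void Main()
        {
            // Example values - substitute your own site folder and app pool identity.
            string siteFolder = @"D:\WebSites\www.Foo.com";
            string appPoolUser = @"NT AUTHORITY\NETWORK SERVICE";

            DirectoryInfo dir = new DirectoryInfo(siteFolder);
            DirectorySecurity security = dir.GetAccessControl();

            // Read & execute on the folder, inherited by all files and subfolders.
            security.AddAccessRule(new FileSystemAccessRule(
                appPoolUser,
                FileSystemRights.ReadAndExecute,
                InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
                PropagationFlags.None,
                AccessControlType.Allow));

            dir.SetAccessControl(security);
        }
    }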
You could always wrap all this up into an installer, then it can be as simple as hitting Next.. Next... Finish... in an installer wizard to set up your website on any machine. Doing this in an installer also gives you some certainty that nothing has been missed.
Personally I have a 'Development' folder on my D: drive which is then subdivided into different categories depending on the work. I generally don't use inetpub directory and any permission issues I come across I just set directly onto the relevant folder within my own development structure.
On production environments I've used in the past, we've generally done the same thing. Mainly to help backup scenarios really, but also because there's no strict need to use the default IIS directories - you're free to structure things how you like.
Personally, I always create a new folder (in the root of a drive) called WebSites. I then make sure it has the appropriate permissions for the website process(es) (aka App Pools).
e.g.
C:\
|_ WebSites
     |_ www.Foo.com
     |_ www.Bah.com
It also makes it easier to manage because you don't have to hunt through the folder structures to find any/all websites.
But technically, it can be (more or less) anywhere - just needs to have the correct permissions set.
Bonus Answer
I also remove the Default Website from IIS .. which in effect means I can also delete c:\inetpub\wwwroot.
You can put the website anywhere on the server's hard disk; just make sure it is a secure folder. I also recommend not putting it on the OS drive, in case the OS fails and you need to format it.
C:\inetpub\wwwroot and C:\inetpub are just the default places, nothing more.
It really depends on how the production server is configured and how operations likes to run things over there. Typically we set up a second "data" drive on servers for a few reasons:
a) Back in the old days, there were a lot of canonicalization attacks where the attacker would try to navigate from c:\inetpub to c:\winnt\cmd.exe. Putting things on a different drive prevented this sort of thing.
b) Recovery: if the OS gets hosed, you can pretty easily reinstall/reimage, or move the data disks to another box and get things stood up fast.
c) It's typically a lot easier to do things like swap the non-OS disk if you need more disk space or faster disks or whatever.
Basically, off the OS drive is a good idea. Though virtualization and modern deployment tools make lots of this matter less.
I write a lot of code, and most of it I eventually throw away when I'm done with it. Recently I was thinking that if I just kept every small piece of utility script I wrote, named it, tagged it and filed it in a dev shell, I would never lose the code. On top of that, I wouldn't need to redo something I've already done, which is the main motivation, as I keep finding myself rewriting things I've written before.
Is there an ASP.NET shell-style environment anywhere?
If not, what would be the best way to go about this?
I am looking to be able to do the following:
Write big or small bits of code.
Derive from or chain together already-written code/libraries/services.
Have everything on my desktop (would that mean IIS on the desktop, or is there a lighter-weight mechanism?), synced with the server at home, so if I'm on the move I can still access it and make it part of my day-to-day workflow.
You could build a single solution with many class library projects inside it. Each project would address a specific scenario, something like this:
MyStuff (Solution)
MyStuff.Common
MyStuff.Validation
MyStuff.Web
MyStuff.Encryption
etc.
Then you can put this solution on an online versioning service like Bitbucket or Assembla, so you can access your source code from anywhere, edit it and commit it back to the server. This way you get the advantages of versioning, and your code is stored on a remote server, so even if your hard disk breaks it's not a problem, because what's on the server is what matters.
You should either look into a source control system (Git perhaps?) or into a file storage / syncing / sharing service like DropBox.
DropBox would allow you to access code snippets from wherever you are and works really easily (just drop a file into a folder).
If you need versioning and branching you're going to have to look into a source control system. Since you have a server at home, that should be no problem.
I'm trying desperately to move from VSS to a real source control system. Options include TFS and SVN.
My designers need to keep their ability to modify source files and instantly preview their changes in a browser without having to commit their changes. Using FPSE with VSS, this works flawlessly, since saving a file causes the copy in the working folder on the dev server to be updated, so they can just save and refresh their browser which is pointed at the dev server.
The site in question consists of 350k+ lines of classic ASP code and some new ASP.NET MVC. They only need to be able to modify views within the MVC code, not C#.
Though Expression includes a version of Cassini for local debugging, Cassini does not support classic ASP.
Surely someone has solved this problem before. It can't be necessary to install IIS on each designer's machine (this is absolutely untenable). I need a way to have a common working folder on a dev webserver updated whenever someone saves a file locally, just like using FPSE.
I'd rather not write an FPSE proxy that knows how to talk to TFS/SVN. Any suggestions?
(I know I've asked this question in the past, but I haven't yet found a solution.)
Why the need to copy the source files when they are saved? Why not simply save the files to a network share and work on them directly? If the dev server is constantly being overwritten after every save anyway, surely the effect is the same?
This probably won't be as instantaneous as you'd like, but with TFS you could set up a Continuous Integration (CI) build that builds and deploys the project to a test server on check-in. If you do this, you'll want them checking in to a QA-type branch; then, once they are happy with how things look, they can merge to the mainline branch for the real build and integration.