Development and Test Environment Best Practices?

This question is for ASP.NET and SQL Server developers. What are your best practices with respect to setting up your development and test environment? I'm interested in the following issues:
How many tiers do you recommend and what goes on each tier? Just dev, test, and production, or perhaps dev, test, staging, and production?
Which types of applications and/or servers should run on actual physical hardware and which can get away with a VM?
What are your strategies for loosely coupling users from web sites, web developers from their web/app/DB servers, and DB developers from their DB servers?
How do developers stay "DRY?" (no deodorant jokes, please ;)
What are the pros and cons of putting web, app, and DB servers on their own machines? Does putting servers on separate machines to minimize contention for a machine's resources outweigh the NIC and network latency introduced by splitting them across boxes?
How do you configure your web apps to minimize contention for resources (e.g. virtual directories, separate application pools, etc.)?
How and how often do you refresh your databases on each tier? Do you just refresh the data or both the data and objects?
Thanks.

I can't comment on all of these, but here's what I've found to work best in my experience.
1) Depends on your resources, but ideally I like to have 4.
Dev is hyper flexible and owned by your dev team. It can get updated whenever they feel is best or as features are completed.
QA is updated on a scheduled or delivery basis depending on your process. If you do waterfall, it's updated when you're in the testing phase; if you do iterative agile, it's updated each iteration. It should mimic prod as closely as possible, but you may be able to get away with some compromise (see #2).
Staging should be identical in every way to prod. It should even use real production data if possible (potentially restored from a recent backup of the true production environment.) It should be used for acceptance testing prior to any release.
And finally, Prod.
2) Dev can usually be on a VM, and QA can too most of the time. Staging and prod should match. I've seen folks run prod on VMs before; it depends on your resources and the demand for your app.
3) Our devs use a backup of prod on local SQL Servers for development (there's a minimal restore sketch after this list). This keeps everyone off a central dev SQL server. Dev web and dev SQL are separate boxes (just out of necessity, since they host a bunch of projects); same with QA, Staging, and Prod.
4) A lot of testing and communication. If you have one small/medium team this isn't that hard. If you have lots of teams, look at something like Scrum, formal code reviews, anything to keep communication going between teams. Don't treat DRY issues like suggested fixes; treat them like bugs that need to be fixed. You'll spend way more time maintaining the code than writing it up front, so treat maintenance as a first-class citizen and make sure management is on board with that.
5 & 6) Not really qualified to comment
7) Dev whenever the teams need to; QA and up on a schedule depending on deployments. QA is every iteration/sprint; Staging and Prod are every release.
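For item 3, here is a minimal sketch of the kind of refresh script a dev might run to pull a production backup onto a local SQL Server. The connection string, database name, and backup path are hypothetical placeholders, and you may need WITH MOVE clauses if your local file paths differ from prod.

    using System;
    using System.Data.SqlClient;

    // Refreshes a local development database from a production backup file.
    // All names and paths below are placeholders -- adjust for your setup.
    class RefreshLocalDb
    {
        static void Main()
        {
            const string connectionString =
                "Server=localhost;Database=master;Integrated Security=true";

            // Kick out any open connections (if the database already exists),
            // then restore over it. Add WITH MOVE clauses if the data/log file
            // locations on this box differ from the production paths.
            const string restoreSql = @"
                IF DB_ID(N'MyAppDb') IS NOT NULL
                    ALTER DATABASE [MyAppDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
                RESTORE DATABASE [MyAppDb]
                    FROM DISK = N'C:\Backups\MyAppDb_prod.bak'
                    WITH REPLACE;
                ALTER DATABASE [MyAppDb] SET MULTI_USER;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(restoreSql, connection))
            {
                command.CommandTimeout = 0; // a restore of a large prod backup can run long
                connection.Open();
                command.ExecuteNonQuery();
                Console.WriteLine("Local copy of MyAppDb refreshed from backup.");
            }
        }
    }

Scrub or mask any sensitive production data after the restore if your policies require it.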

Related

Memory consumption differs by environment

I have an MVC4 web application that, when volume is put through it, consumes ~400MB RAM in all environments excluding the production environment. When a similar volume of load is put through it on a production server (hosted externally), the memory utilisation trebles to ~1.2GB and the memory isn't released even when the application is idle. The IIS configuration across all environments is the same.
It's also worth noting that the application, when idle, releases some of that memory in my test environments, but doesn't do the same in production. The RAM gradually increases and tops out at 1.2-1.3GB, but never drops below that - even if traffic is completely routed away from the server.
I have not been able to recreate this issue in any environment other than my third-party hosting platform, but before I conclusively blame the infrastructure and get the hosting company on the case I wondered:
a) Is this a common problem, and why does it happen?
b) How can I see what is using the memory?
c) Would you expect the same code to consume significantly different levels of system resources depending on the platform? (I know my host may have monitoring etc. in production which will perhaps inflate it a little.)
Any help on this is appreciated.
This is a common problem when working across different environments, because system configuration, Windows version, etc. differ from machine to machine.
In this particular case the difference is large, so probably something is looping or memory is not being freed at regular intervals.
A few steps:
Try to get to the root of the problem, i.e. which methods are taking the time or holding the memory. Use a logger like NLog.
Try using a profiler on the database side if you are using SQL Server.
Third, use something like ANTS Performance Profiler.
It also depends on the number of users hitting the site and on possible deadlock conditions.
There can be numerous reasons for this.
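Before handing this over to the hosting company, it can also help to log the process's memory counters on a timer in every environment and compare the curves. Here is a minimal sketch using NLog; the one-minute interval and the idea of starting it from Application_Start are my assumptions, not something from the original question.

    using System;
    using System.Diagnostics;
    using System.Threading;
    using NLog;

    // Periodically logs managed heap size, working set and gen-2 collection count
    // so the growth pattern can be compared between test and production.
    public static class MemoryMonitor
    {
        private static readonly Logger Log = LogManager.GetCurrentClassLogger();
        private static Timer _timer; // keep a reference so the timer isn't collected

        // Call once, e.g. from Application_Start in Global.asax (assumption).
        public static void Start()
        {
            _timer = new Timer(_ =>
            {
                long managedBytes = GC.GetTotalMemory(false);
                long workingSet = Process.GetCurrentProcess().WorkingSet64;
                Log.Info("Managed heap: {0:N0} bytes, working set: {1:N0} bytes, gen-2 collections: {2}",
                         managedBytes, workingSet, GC.CollectionCount(2));
            }, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
        }
    }

If the managed heap stays flat while the working set climbs, the growth is likely unmanaged or fragmentation-related; if the managed heap itself climbs, look for caches or event handlers that never release references. Either way, the comparison narrows down where a profiler like ANTS should look.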

Proper DTAP setup for Content Delivery

I've had this setup, but it didn't seem quite right.
How would you improve Content Delivery (CD) development across multiple .NET (customer) development teams?
CMS Server -> Presentation Server Environments
CMS Production -> Live and Preview websites
CMS Combined Test + Acceptance (internally called "Staging") -> Live ("Staging")
CMS Development (DEV) -> Live (Dev website) and sometimes Developer local machines (laptops)
Expectations and restrictions:
Multiple teams and multiple websites
Single DEV CMS license (typical for customers, I believe?)
Enough CD licenses for each developer
Preferably, developers could program and run changes locally--was this a reasonable expectation?
Worked
We developed ASP.NET pages using the Content Delivery API against the same broker database for local machines and CD DEV. Local machines had the CD DLLs and their own license files, and ran/debugged fine with queries and component presentation calls.
Bad
We occasionally published to both the Dev presentation server and developer machines, which doesn't seem right in hindsight, but I think it was to get schema files onto our local machines. But yes, we didn't trust the Dev broker database.
Problematic:
Local machines sometimes needed Tridion-published pages but we couldn't reliably publish to local machines:
Setting multiple publication destinations for a single "Local Machine" publication target wouldn't work--we'd often take these "servers" home.
VPN blocked access to laptops offsite (used "incoming" folder at the time).
Managing publication targets for each developer and setting up CD for each new laptop was good practice (as in exercise, not necessarily as a good idea) but just a little tedious.
Would these hindsight approaches apply?
Synchronize physical files from Dev to local machines on our own?
Don't run presentation sites locally (localhost) but rather build, upload dll, and test from Dev?
We were simply missing a fourth CMS environment? As much as we liked our Sales Guy, we weren't interested in purchasing another CM license.
How could you better set up .NET CD for several developers in an organization?
Edit: @DominicCronin pointed out this is only a subset of a proper DTAP setup. I updated my terms and created a separate question to clarify DTAP with Tridion.
The answer to this one depends heavily on the publish model you choose.
When using a dynamic model with a framework like DD4T, a single dev environment will suffice. There is one CMS and one CD server in that environment, and everything is published to a broker database. The CD environment could be used as an auto-build system; the developers work purely locally on a localhost website (which gets its data from the dev broker database), and their changes are checked into a VCS (based on which the auto build could be done).
This solution can make do with only a single CMS because there is hardly any code developed on the CMS side (templates are standardized and all work is done on the CD side).
It gets more complex if you are using a static or broker publishing model. Then I think the solution is to split Dev up into Unit-Dev and Dev, as indicated by Nuno and Chris.
This setup requires coding on both the CMS and CD side, so every developer benefits hugely from having their own local CMS and CD environment.
Talk to your Tridion account manager and agree a license package that suits the development model you want to have. Of course, they want to maximise their income, but the various things that get counted are all really meant to ensure that big customers pay accordingly, and smaller customers get something they can afford at a price that reflects the benefits they get. In fact, setting up a well-thought-out development street with a focus on quality is the very thing that will ensure good customer satisfaction and a long-running engagement.
OK - so the account managers still have internal rules to follow, but they also have a fair amount of autonomy in coming to a sensible deal with a customer. I'm not saying this will always work, but it's way better than blindly assuming that they are going to insist on counting every server the same way.
On the technical side - sure, try to have local developer setups and a common master dev server a-la Chris's 5th. These days, your common dev environment should probably be seen as a build/integration server: the first place where the team guarantees all the tests will run.
Requirements for CM and CD development aren't very different, although you may be able to publish to multiple developer targets from one CM if there's not much CM development going on. (This is somewhat true of MVC-ish approaches, but it's no silver bullet.)

Migrating an ASP.Net App to Azure

I'm getting close to finishing a public-facing ASP.NET app and I'm starting to weigh deployment options. I'm an ASP.NET/SQL Server veteran but a noob when it comes to Azure. I'm wondering how others have felt about the learning curve to effectively migrate a local dev ASP.NET/SQL Server app into the Azure cloud.
More specifically:
How steep is the learning curve towards understanding administration and programming concepts, and do you think it's worth the investment?
What is Microsoft's support like if I have catastrophic problems with my cloud infrastructure and my live site is down? My expectation is a large price tag for a not-so-urgent SLA.
Will my non-Azure ASP.Net app require significant modification and/or coupling to run in the Azure environment?
Thanks
I answered a similar question a while back, here. Azure has evolved since then:
Azure's AppFabric Cache is currently in CTP (community technology preview) and will go live some time later this year (sorry, I can't quote a date). With a single configuration change, you'll be able to enable the asp.net session state provider without changing any code, and have your session state available to all of your web role instances.
With Azure v1.3, which rolled out in November, you have the ability to run tasks at startup with elevated privileges (e.g. to run an MSI to install some prerequisite control suite).
For monitoring, you can take advantage of Microsoft System Center, which now supports Azure directly. Alternatively, you can look into 3rd-party options such as AzureWatch.
With Azure's extra-small instance, you can run a site for approx. $44 monthly. You mentioned catastrophic failures and SLA. With Azure, you need a minimum of two instances for SLA to take effect (this is because your virtual machines are located in physically different areas of the data center, in separate fault domains). So you're looking at approx. $90 / month to run a site with 99.95% uptime. Only you can determine whether this is worth it to you. Yes, you can host with a simple hosting provider for significantly less (such as GoDaddy). However, if your site fails there, you have to wait for it to be detected and then installed on a separate box. Also, you share each server with potentially dozens of other tenants, which will impact your site's performance. With Azure, at most 8 tenants will occupy a box, depending on how many cores you configure your virtual machines to use. And it's incredibly simple to scale up or down to handle traffic increases and decreases.
My personal experience is that there isn't much documentation and you have to search through blogs/forums to find answers to more advanced questions. If you have a nicely designed app then there shouldn't be much of a problem with porting - you can google for Azure versions of the ASP.NET providers, e.g. membership.
The biggest disadvantage may be cost: you have to do your maths, but for me it turned out that VPS hosting is much cheaper than Azure.
I would say that unless you get considerable savings on infrastructure, don't move to Azure just for the sake of doing it. A hosted server with SQL and IIS will give you fewer problems and a bit more freedom.
I see an excellent answer by David Makogon already. The following might be helpful for you as well: the last episode of the Connected Show podcast was about migrating World Maps to Azure. If you are considering moving to Azure it is certainly worth listening to, as they explain the challenges they faced during the migration.
You could also take a look at Moving Applications to the Cloud on the Microsoft Windows Azure Platform on MSDN.
Cheers.

How much should the staging environment equal the live one?

Management has decided to go for Windows 2008 64 bit with IIS7 to service our main website.
They want to have it staged on a Windows 2003 server with IIS6. [Edit] Yes, 32-bit is what they are planning for staging. [End Edit]
I want to know what issues, beyond the security issues, I should put forward, suggesting we should opt for the same server in staging as in the live environment.
I have read great posts like this, but I want something I can say in a few bullet points.
That staging and live environments should be the same is easy for any seasoned developer to understand; my problem is that I am trying to explain this to upper-level management people who seem to have already made up their minds...
[Edit]
@Luke:
It's basically a website which gets updated quite often; the whole site is to be staged and tested before deploying to the live environment.
The site is to be left in the hands of the Marketing department (non-developers), who will verify that the site has no issues before deployment.
[Edit++]
Code is ASP.NET, used in 3 important customer ordering pages.
Thanks,
Ric
I hope that's not a 32-bit Windows 2003 staging server you're using to test functionality for a 64-bit Windows 2008 production server, or you are in for a world of pain.
The staging server should be, as far as possible, the equivalent of the production server because what you are using it for is to answer the question "Does this software work on the production environment?" before actually committing to loading it on the production environment.
Answering the question "Does this software work on a server that is almost totally unlike our production server?" is not useful; in reality all you are doing is committing to testing and debugging the software in yet another environment, but one that you won't actually use. It's more work, and in the end you still don't know if it works in your production environment, which is the entire point of having a staging server in the first place.
The more the staging environment matches live, the more issues can be found in test. If you have only a poor match, like what you have here, this limits the kinds of bugs that might be uncovered. For example, suppose there is an incompatibility between 2008 64-bit and some component of the site? You will not find it until you have gone live. This could be too late.
Perhaps you should ask them what they believe a staging environment is. Explain to them that the entire point of a staging environment is to mimic the production environment as well as possible. Explain that if the staging environment is to be drastically different, you might as well not have it. Then if you do not have it, your production site will be used for testing. Tell them that it's really not that big of a deal, just that the site will break a couple times, and possibly have some major security leaks before you get everything fixed due to the lack of proper staging. I'm sure they'll understand.
The general rule is that you can only validate changes that use common subsystems between stage and live. If you are only validating HTML copy changes, and can guarantee that only HTML is being rolled from stage to live, it will probably give you high confidence that the site will work on live.
You have so many differences between stage and live that you can not validate any coding or IIS configuration changes. It will be "push and pray" going to live.
Preferably live and staging should be the same technologies of course (same box?). But what are you staging here, technology or content? If the staging environment is mainly for content then you might get away with both servers not being the same. However, if you're staging technology then you will definitely run into issues where you put stuff live that doesn't work properly. I guess, if the guy with the wallet is willing to be responsible for that, go ahead...
Explain it to the business in terms of risk and money.
The risk of your site encountering issues upon production deploy is known and non-trivial.
The cost of your site going down because of an unforeseen issue is extremely high.
The potential cost of the time it takes your support staff and developers to pinpoint issues each time they're encountered in production because your staging environment isn't answering the right question ("Will my software work in production?") is high, and exacerbates the former.
The late nights and high stress levels repeated failed deployments can incur will lead to an unhappy, unproductive team, which can lead to unacceptably high turnover rates.
The cost of mitigating all of this via the purchase of hardware is relatively low, and many reputable engineers recommend it as a best practice.

Advantages to using virtualization for web development

It's one of those things I see a lot but never really think about. For the purpose of web application development (specifically ASP.NET WebForms/MVC), do you think it's advantageous to do such a thing, and if so, what kind of advantages come out of it?
By virtualization I mean using products like Hyper-V to separate server contexts, like your SQL and web servers, etc.
First question is, virtualization of what? Do you mean server virtualization? Do you mean running VMWare on each dev's laptop with multiple OSes? Do you mean moving everything to the cloud?
Virtualization of servers, in a web app context, is not really different from that in general IT - most of the servers on the Internet, including StackOverload's, are bought to handle peak loads and spend most of their time idling away the cycles, so virtualizing them makes sense once you have more than a certain number of them.
VMWare on the desktop (or Parallels and the like on other operating systems) is superb because your devs can run a full instance of your server environment, including multiple virtual servers connected in a virtual network - this is about as close to the real thing as you can get, minus hardware costs and minus devs messing with each other's servers. For clients, you can use Linux and multiple Windows installs to test various browsers, font sizes, etc. quickly - also a big win.
Moving everything to the cloud makes sense in many cases, but is probably a topic for a separate full-sized question :)
One big advantage I see is that every developer can have his/her own sandbox to work in. If someone messes up his/her sandbox, he/she can take a clean image and all is OK again. So I guess that means there is room to experiment without losing valuable time getting back to the normal setup; you can simply do a rollback.
I'm a bit in doubt about whether you should use virtualisation for production environments. It depends on the application, of course.
The only time I would use a virtual machine for ASP.NET development is if the app required specific setup, such as relying on installed software, weird settings or particular shares. Every developer has their own webserver and can run their own database, so if it's a "basic" webapp I don't see much value in virtuals... it's pretty hard to break anything with a basic web app deployment :)
With a virtual server, you can test your code in a production-like environment. It is also possible to quickly revert back to the original setup. For many applications, it is useful in that time period just after you write the code, but before it goes to production.
I'm a fan of virtualization and use it in testing and production (VMWare and Hyper-V), but over the last year I've found it less important on a dev machine. TFS provides me with all the backup/rollback ability that I need, multiple versions of .NET can now exist on the same machine, and VS2008 can target all those versions.
In a development environment, a virtual environment is useful for putting several different servers on one box: you can have an instance for your web app, one for your services, one for your database, etc. That way it mimics your production environment if you are using separate servers.
One of the benefits of using virtualization in production is that your application is not tied to a specific machine. If you wanted to move your web server instance to another box, it is trivial to do so. You don't need to install or configure things on the new server and hope that everything is set up properly.
One problem I have had though in testing virtual instances is that it can run slower for some applications, specifically engineering apps that like running the CPU at 100%. So test before you leap.
