I have a question about ASP.NET web site and MSSQL database deployment. We are hosting an ASP.NET web site and have developed a new version: some ASP.NET files have changed and the database has been modified slightly. What is the best way to upload the new version of the web site and upgrade the MSSQL database without downtime?
I've managed a large website for the past 5 years with monthly releases and managed to have zero downtime more than 95% of the time. The key, unfortunately, is ensuring that the database is always backwards compatible, but only with the previous release, so you have the opportunity to roll back.
So, if you plan to drop a column, for example, that your application depends on:
Change your application code to not depend on the column, and release that (without removing the column in the database).
In the next release drop the column (as the application no longer relies on it).
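As a rough illustration of step 1, here is a minimal sketch (the Customers table and FaxNumber column are hypothetical) of application code that no longer references the column about to be dropped; selecting explicit columns rather than SELECT * means the drop in the following release cannot break this query:

```csharp
// Release 1: the data access code stops referencing the soon-to-be-dropped
// column (a hypothetical Customers.FaxNumber). Release 2 can then drop it.
using System.Data.SqlClient;

public class CustomerRepository
{
    private readonly string _connectionString;

    public CustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string GetCustomerName(int customerId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @id", connection)) // no reference to FaxNumber
        {
            command.Parameters.AddWithValue("@id", customerId);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}
```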
It takes some discipline from your dev team, but is surprisingly easy to achieve if you have the right environments set up (dev/test/staging/production).
When you release:
Deploy database changes to a staging environment, which is as close to production as possible. Do this preferably in an automated fashion, using something like SQL Compare and SQL Data Compare, so you know the database is completely up to date with your test environment.
Perform "smoke tests" using the old application, but the new database schema, ensuring no major breaking changes have been introduced to the database.
Release your application code.
Smoke test your staging application.
Release to production.
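The smoke tests in step 2 can be as simple as a script that requests a few critical pages of the old application against the upgraded staging schema. A minimal sketch, with placeholder URLs:

```csharp
// Hypothetical smoke test: request a handful of key pages on staging and
// report any that fail, so schema-level breakage is caught before release.
using System;
using System.Net;

public static class SmokeTest
{
    public static void Main()
    {
        string[] urls =
        {
            "http://staging.example.com/",
            "http://staging.example.com/products",
            "http://staging.example.com/account/login"
        };

        foreach (var url in urls)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(url + " -> " + (int)response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                // GetResponse throws for non-2xx responses and connection failures.
                Console.WriteLine("FAILED: " + url + " (" + ex.Message + ")");
                Environment.ExitCode = 1;
            }
        }
    }
}
```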
Another thing we do to ensure zero downtime on the website is blue-green deployment. This involves having 2 folders for each website, updating one and switching the IIS home directory once it is up to date. I've blogged about this here: http://davidduffett.net/post/4833657659/blue-green-deployment-to-iis-with-powershell
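The blog post does the switch with PowerShell; as a rough C# equivalent (the site name and folder path are assumptions), the same repointing can be done with the Microsoft.Web.Administration API:

```csharp
// Blue-green sketch: repoint the IIS site's root virtual directory from the
// live folder to the freshly updated one. Requires a reference to
// Microsoft.Web.Administration and administrative rights.
using Microsoft.Web.Administration;

public static class BlueGreenSwitch
{
    public static void SwitchTo(string newPhysicalPath)
    {
        using (var serverManager = new ServerManager())
        {
            var site = serverManager.Sites["MyWebSite"];              // assumed site name
            var rootVdir = site.Applications["/"].VirtualDirectories["/"];
            rootVdir.PhysicalPath = newPhysicalPath;                  // e.g. @"D:\Sites\MyWebSite-green"
            serverManager.CommitChanges();
        }
    }
}
```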
Don't do it. Period.
Zero-downtime installs are VERY hard to do and involve multiple copies of the database, pre-checking in a staging environment, careful programming, and resyncing the database.
It is pretty much always better to take a small downtime. Stay up late and deploy at 2 in the morning, or wake up earlier. Identify when it is least inconvenient for your users.
100% uptime is VERY expensive to implement in terms of the amount of time spent on it. Unless there is a strict business case for it, occasional downtime is a much saner business decision.
Even large sites like salesforce.com and ebay.com have scheduled maintenance windows in which at least portions of those sites are unavailable for a certain period of time due to changes to the backends.
For eBay, it's every Thursday night and lasts for 4 hours, during which "certain features may be slow or unavailable". For Salesforce, maintenance is scheduled and users are notified as needed.
Depending on your site, you might be better off scheduling a 1-hour window at some late hour when your site is at its lowest traffic level. Notify users ahead of time: 1 week before, 1 day before, and 1 hour before.
Prior to taking it offline, make sure you test your deployment against a copy of your current production database on a different server. This will give you an idea of any problems you could run into, as well as how long it should take. Double that number when notifying users. Run the tests multiple times, not only to confirm how long it will take but also to verify data consistency.
Duffman has a good answer with regard to running versions in parallel for a very short window in order to get the updates pushed. However, there is usually a reason for data model changes, and it's typically better to transform all of your existing data at the time of deployment. Running this transformation might make certain transactions invalid while it's going on and result in corrupted data.
Having gone through many "hot" production pushes I can say with 100% certainty that neither I nor my clients ever want to deal with those again. There is absolutely no room for error.
This may very well be a question that is too broad to answer, but any ideas would be incredibly beneficial. I have a web site where load times are incredibly slow in one environment but not the other. In general, the time to first byte is around 15 seconds on most pages. It takes this long on every page within the entire application, not only on first load. I have been troubleshooting the issue for several days now and feel completely lost as to the actual cause of the latency.
Now for a long explanation about the issue.
The environment is a Frankenstein monster of different sources; too many people have had their hands in it, from what I can gather. I have carefully taken the time to compare the two environments and haven't identified a key difference. There are numerous things at play here, but I can summarize the main components.
It is a .NET web application built using Orchard CMS running within IIS and has a SQL Server backend. A dedicated server hosts the database and another dedicated server hosts the web application itself, which is pretty standard. The main difference between the environments is that the production site is running in Liquid Web and the new development site is running in AWS. Basically, the site will ultimately be migrated to AWS once the latency issues are resolved.
AWS has more than enough resources. In fact, production (Liquid Web) has been running into issues as of late due to CPU usage being nearly maxed out. There are many more resources in AWS, and neither of the servers appears to be using more than 1% or 2% of its available resources. I verified this.
If the issue is within the database, I'm not really sure where else to look. I used SQL Server Profiler on the database server to analyze traffic and no transactions were taking more than a half second, aside from the Audit Logins/Outs (which from my research is normal behavior). The main database queries execute almost immediately after trying to navigate to a page within the site, not 15 seconds later when the page loads.
I had a thought that the network traffic between the AWS application server and the database server could be bottlenecked somewhere. However, resolving the application locally does not improve performance. I thought it could have been an issue with the routing within the domain, such as the way in which DNS is set up, but that does not seem to be the case either... or perhaps it is, and I just haven't figured out the best way to troubleshoot that. Either way, resolving the application on localhost does not improve performance. The page still hangs for 15-20 seconds.
The vRAM usage for the site's application pool and the default app pool certainly does seem on the high end, if that makes a difference.
I have browsed the IIS logs and cannot find anything obvious. Granted, I don't have much experience in IIS and could be missing something. Windows Event logs show me nothing out of the ordinary either. There are some errors in both Liquid Web and AWS in regards to printer drivers not being installed, but those have nothing to do with the application itself.
I am unsure of how to check if it has something to do with the Orchard CMS. Granted, this is just a package/framework that was migrated over into the dev server, directly along with the application itself. I see nothing that would have changed within the environment.
The fact is that the two environments seem identical, yet one is running very slowly based on some factor that I just can't seem to identify.
Thank you!
We have an application deployed on Windows Azure as a Web Role and we are using Pingdom for testing page load times: http://tools.pingdom.com/fpt/
The URL for the application on Windows Azure is: http://www.doctorspring.com.
The load time of the app is usually around 7s.
The database is an SQL Azure database and the role and the database are in the same zone.
Sample pingdom result: http://tools.pingdom.com/fpt/#!/CllGggrMz/http://www.doctorspring.com/
Sample pingdom result (with gzip): http://tools.pingdom.com/fpt/#!/f2TUbR6OX/www.doctorspring.com
Suspecting that Azure could be the problem, we tried free hosting from Somee:
http://www.doctorspring.somee.com
The load time of the app on Somee is around 3.5s.
Sample pingdom result: http://tools.pingdom.com/fpt/#!/o3gZOjTwH/http://www.doctorspring.somee.com/
That is a huge performance issue for us.
Can you please help us understand the problem with Azure, or suggest how we can overcome it?
Thanks,
Manish
In both cases, loading the homepage is unacceptably slow - 3.5 seconds to generate a page is around 10 times slower than you need to be when there's no load on the site. I'd expect the site to crumble under even moderate load with this kind of performance.
Without knowing how the site is constructed, it's hard to explain the reason one environment is faster than the other - but my guess is that whatever is generating the page (some kind of CMS?) is the cause. Azure is known to be a touch slow when doing database queries - though normally this only manifests itself under extreme conditions.
I'd recommend tuning the CMS - especially with caching. We found that Azure is normally pretty fast, but when doing database lookups (e.g. retrieving content for the CMS), it can be variable; if your CMS is doing a LOT of database queries to get the homepage content, it's going to be slow.
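As a rough illustration of the kind of caching that helps here (not Orchard-specific; the cache key and loader delegate are assumptions), database-driven homepage content can be held in memory for a few minutes:

```csharp
// Minimal in-memory caching sketch: avoid hitting the database on every
// homepage request by caching the rendered content for a few minutes.
using System;
using System.Runtime.Caching;

public static class HomepageContentCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string GetRecentContentHtml(Func<string> loadFromDatabase)
    {
        var cached = Cache.Get("homepage-content") as string;
        if (cached != null)
            return cached;

        // Cache miss: load from the database once, then serve from memory.
        var html = loadFromDatabase();
        Cache.Set("homepage-content", html,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) });
        return html;
    }
}
```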
It's also worth running YSlow - there's some low-hanging fruit there for getting performance up.
What services are you running in Azure? Web role, VM, Website? Are you connecting to an Azure Database instance from the homepage (if so, how many distinct calls are you making)? I'm getting around a 7.5-second load time from London, but to be honest even 3 seconds is too slow for the homepage. It's hard to know what's causing the prolonged page load, but if you are connecting to a DB instance there's a great deal you can do, e.g.
Render the page and make some asynchronous calls to spool in additional data.
Make sure your Azure services are running close together.
Consider caching database content to a blob. E.g. for the data in "Medical Questions Answered in Last 24 Hours", if you are pulling this from a DB on every load, you could considerably speed up access by routinely caching it to an HTML file stored in a blob container and injecting it into the page.
If you must make DB calls from the homepage, try to make as few round trips as possible by batching up your queries into a stored procedure (see the sketch at the end of this answer).
I've made a lot of assumptions here, but there are certainly things you could do to drastically improve performance on this page.
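A minimal sketch of the batching idea: a single stored procedure (the name here is a placeholder) returns several result sets in one round trip, which the page then reads with SqlDataReader.NextResult:

```csharp
// One round trip to the database: a single stored procedure returns all the
// result sets the homepage needs, instead of one query per widget.
using System.Data;
using System.Data.SqlClient;

public static class HomepageData
{
    public static void Load(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetHomepageData", connection)) // assumed proc name
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* first result set, e.g. recent questions */ }

                reader.NextResult();
                while (reader.Read()) { /* second result set, e.g. featured content */ }
            }
        }
    }
}
```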
What is the best way to develop a database-based application? There are two approaches:
One common database for all the developers.
A separate database for each developer.
What are the pros and cons of each? And which one is the better way?
Edit: More than one developer is supposed to update the database, and we already have SQL Express 2005 on each developer machine.
Edit: Most of you are suggesting a common database. However, suppose one of the devs has modified the code and the database schema. He has not committed the code changes, but the schema changes have gone to the common database. Won't that possibly break the other developers' code?
Both -
I like a single database that changes are tested on before going live, or before going to a 'formal' test environment. This is your developers' sanity check; it stays up to date with the live system and it makes sure they always consider each other's changes. The rule should be that changes don't go on here if they might break something else.
A database per developer is great (even essential) when more than one developer is making updates. It allows them all the development flexibility they want without breaking things for other developers.
The key is to have a process for moving database changes from development through to your live system, and stick to your process.
Shared database
Simpler
Fewer cases of "It works on my machine".
Forces integration
Issues are found quickly (fail fast)
Individual databases
Changes never affect other developers, but that is also a bad thing from a continuous-integration point of view.
We use a shared development database and it works out nicely. Our schema rarely changes in a way that makes it backwards incompatible, but occasionally a design change will occur before we go live, and we simply ask the other developers to update.
We do have separate development application (web) servers, but they share the same database. Our developers do have the option to use their own database, as they know how to set this up, and will do that on occasion, but only temporarily. The norm, for us, is to share the database.
Thought I'd throw this out there, but why not let every developer host their own instance of SQL Server Developer on their desktops and then have a shared server for each of the other environments (development, QA, and prod)? I think even the basic MSDN that comes with Visual Studio Pro (if you opt for it) includes a license for SQL Server Developer.
The developer can work on their desktop without impacting the others and then you can have them move the code to the next shared environment as you see fit (at will, with daily/weekly builds, etc.).
EDIT:
I should add that the desktop instance allows developers to do things that DBAs often restrict on shared environments. This includes database creation, backup/restore, profiler, etc. These things are not essential, but they allow the developer to become much more productive while reducing the demands they make on your DBAs.
The shared environment is completely necessary for testing - I would not recommend going from desktop to production. But you can add so much by allowing the developers to have 100% control over a given database environment (including isolation from others) with a relatively minor cost.
Depends on your development, testing and maintenance cycles. Also on the size and location of the development team (and of course organization). If you support several versions of the database you might need even more environments.
In the real world, I have found the following approach rather satisfying:
a single central database/application for testing purposes, into which changes from the various developers are periodically merged
local copies for development (so you are free to drop and reload the whole database)
upgrade scripts are maintained for any changes to the schema, auxiliary data, and sample data sets
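A hedged sketch of what maintaining and applying such upgrade scripts can look like, assuming a hypothetical SchemaVersion table and a folder of numbered .sql files (real scripts containing GO batch separators would need to be split before execution):

```csharp
// Minimal upgrade-script runner: applies numbered .sql scripts in order and
// records the last applied version in a (hypothetical) SchemaVersion table.
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class UpgradeRunner
{
    public static void Run(string connectionString, string scriptFolder)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            int currentVersion;
            using (var cmd = new SqlCommand(
                "SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion", connection))
                currentVersion = Convert.ToInt32(cmd.ExecuteScalar());

            // Scripts are named like 0001_AddOrderTable.sql, 0002_DropFaxColumn.sql, ...
            var scripts = Directory.GetFiles(scriptFolder, "*.sql")
                .Select(f => new { FullPath = f, Version = int.Parse(Path.GetFileName(f).Substring(0, 4)) })
                .Where(s => s.Version > currentVersion)
                .OrderBy(s => s.Version);

            foreach (var script in scripts)
            {
                var sql = File.ReadAllText(script.FullPath);
                using (var cmd = new SqlCommand(sql, connection))
                    cmd.ExecuteNonQuery();

                using (var cmd = new SqlCommand(
                    "INSERT INTO dbo.SchemaVersion (Version) VALUES (@v)", connection))
                {
                    cmd.Parameters.AddWithValue("@v", script.Version);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}
```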
Here are some further points:
If two developers (two teams) are working on changes that can affect each other then they should complete their tasks independently and then integrate/merge and test. For this it is much better to have separate development environments (unless they have to work together in which case I consider them to be a part of the same team; still they can work on their own copies of the database and share it if necessary)
If they are working on changes that do not influence each other, they can work on the main server, or on their own local copies of the database.
So, developing on the local copy has all the benefits with no risk in a general case (when you support multiple versions of the system and maintain upgrade scripts anyway).
Still, it is great if you can share test cases, so the ability to dump/restore the database easily and quickly is a big plus.
EDIT:
All of the above assumes that having a copy of the whole system on the local machine for testing purposes is feasible (size, performance, licenses, etc.).
I would opt for solution #1: One common database for all the developers.
Pros
Less expensive for the infrastructure;
Only one dump is required when it's time to refresh the development database;
Everyone develops with the same data, so it closely represents the production environment;
Cons
If one developer performs a bad operation, it could impact a larger number of developers.
As for solution #2: One independent database for each developer:
Pros
This could be useful for new feature development, when development requires isolation;
Cons
More expensive for the company (infrastructure, licences...);
Multiplication of problems caused by overly isolated development environments (works in the developer's environment, but is not integrated);
Multiplication of dumps of the same production copy made by the DBAs.
Considering the above, I would recommend, depending on your company size:
One database for development;
One database for testing the integration;
One database for acceptance tests;
One for new feature development that will perhaps require integration tests.
If your company doesn't require integration tests, then go with acceptance tests; this step is crucial before going to production.
One per developer plus a continuous integration and build server to run unit and integration tests. That gives you the best of both worlds.
Having all developers modify a single dev database quickly becomes less productive once the amount of database change reaches a certain level, because it forces a developer to deploy changes to the shared database before he is ready to check in, which means other parts of the code line may break unnecessarily.
Simple answer:
Have one development database, and if the developers want their own, they can just run their own instance on their own machines. Just be sure to test/publish against the shared one.
We do both:
We use code generation where I'm at and our database is generated as well. So we have an instance on each developer's box where the database is generated. Then we use the scripts that are generated to apply the changes to a central test database. If that goes well we apply the changes to the production database during a release.
What's nice with this approach is that when our "source of truth" is checked in to source control, all the database changes are automatically distributed to the other developers when they rebase and regenerate. It works well for us.
The best way is a single database on the Test/QA server and one database (probably on the developer's local computer) for each developer (so 10 developers work with 10 + 1 databases).
The same approach as for general development: each developer has his own copy of the source code on his local machine.
Also, the multiple-database approach simplifies keeping the database schema in a version control system. We keep the database creation scripts in SVN.
We are using the approach described here:
http://www.sqlaccessories.com/Howto/Version_Control.aspx
You might also want to look at Refactoring Databases. Aside from discussing database changes, the author includes discussions on going from development to production in a way that reduces risk.
Why on earth would you want a separate database for all developers?
Have one common database for all; that way the table structure is consistent and the SQL statements are as well.
The biggest problems with developers having their own databases are:
First, it is unlikely to be the size of the real production database (if you take all the databases we need to work with here, they would take up several hundred gigabytes of space, and I don't have that available on my machine); this causes bad code to be written that will never work on a large database for performance reasons. SQL code should never be written against a data set significantly smaller than the one in prod.
Second, developers who use their own database create problems when they spend a long time developing something and then find out only after they merge with a real database that it affects something else. You find this stuff much faster when you share the environment, so in the end there is less wasted development time.
Third, developers working on related things need to know about the changes you are making, as those changes will affect their work.
When you know you are going to affect others, I think you tend to be more careful about what you do, which is a plus in my book.
Now, the shared database server should have what we call a scratch database: a place where people can create and test table changes. If they are doing something that might need to drop and recreate a table (which should be a rare case!), they can test the process first by copying the table to the scratch database and running their process there, then changing it to run against the real database once they are sure it works. We also often copy a backup of a table to the scratch database before testing a particular change, so we can easily recreate the old data if it goes bad.
I see no advantages at all to using individual databases.
I've recently deployed an ASP.NET application to my shiny new VPS and while I'm happy with the general performance increase that a VPS can give over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific, those numbers should just give a feel for the performance on startup of my app).
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: it appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given idle period, depending on its configuration. I'm not as familiar with IIS 7 and where this is configured, so you might want to do a little research on it (starting point?).
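For what it's worth, the setting in question is the application pool's idle time-out (20 minutes by default), which can be changed in IIS Manager under the pool's Advanced Settings or programmatically. A hedged sketch of the latter, with an assumed pool name:

```csharp
// Sketch: disable the application pool's idle time-out so the worker process
// is not shut down after a period of inactivity (the default is 20 minutes).
// Requires a reference to Microsoft.Web.Administration and admin rights.
using System;
using Microsoft.Web.Administration;

public static class KeepPoolAlive
{
    public static void DisableIdleTimeout(string appPoolName)
    {
        using (var serverManager = new ServerManager())
        {
            var pool = serverManager.ApplicationPools[appPoolName]; // e.g. "MyAppPool" (assumed)
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;          // zero = never time out
            serverManager.CommitChanges();
        }
    }
}
```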
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with a downside, depending on how you interact with your website.
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely the compilation time, then often the most efficient approach is to hit the website yourself after the recycle. Make a call at regular intervals to ensure that it is you who receives the 15-second delay, not your client.
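A minimal sketch of such a keep-alive pinger (the URL and interval are placeholders); it could run as a small console app or scheduled task on any machine that can reach the site:

```csharp
// Minimal keep-alive sketch: request the site every few minutes so the
// application pool never sits idle long enough to be shut down, and so any
// start-up cost is paid by this pinger rather than a real visitor.
using System;
using System.Net;
using System.Threading;

public static class KeepAlivePinger
{
    public static void Main()
    {
        const string url = "http://www.example.com/"; // placeholder URL
        while (true)
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(url);
                    Console.WriteLine("{0}: pinged OK", DateTime.Now);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("{0}: ping failed - {1}", DateTime.Now, ex.Message);
            }
            Thread.Sleep(TimeSpan.FromMinutes(5)); // shorter than the pool's idle time-out
        }
    }
}
```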
I would be surprised if that is all compilation time however (depending on hardware) - do you have a lot of static instances of classes? Do they do a lot of work on start-up?
Either with tracing or profiling you could probably quite quickly work out where the start-up time was spent.
As to why keeping a process around is a bad idea, I believe it's due to clean-up. No matter how well we look after our data or how well behaved the GC is, a good clean-up is performed by restarting the process. Things like fragmentation can go away, and any other resource issues that build up over time are cleared down. Therefore it is quite a bad idea to keep a server process running indefinitely.
I am developing an e-commerce project on ASP.NET 3.5 with C#. I am using a 3-tier structure (Data + Business + UI) to access the data in the database (MSSQL 2005).
Everything goes through stored procedures (CRUD methods).
There is a performance issue here: the project runs very slowly. I couldn't find any problem in the transaction model.
Also, the project is running on shared hosting in an overseas country. The database server and web server are running on different machines, and the database server has nearly 1000 databases on it.
How can I test and find out where the problem is?
Since there are upwards of 1000 databases sharing resources, I would take a stab that that might be your issue. If you connect to your database and it takes 5 seconds to run a simple query, then you can guess the problem.
I would add some stopwatch functionality to a "test page" that runs on your web server. This should give you the basic info to see if there is a bottleneck in waiting for the database to return your query. If you get that far without finding a slow query, then I would suspect the web server.
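A minimal sketch of such a test page's code-behind (the connection string name and query are placeholders), timing just the database round trip so it can be compared with the total page time:

```csharp
// Minimal "test page" sketch: time a simple database round trip from the web
// server so the DB latency can be separated from the rest of the page time.
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Diagnostics;

public partial class TestPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var stopwatch = Stopwatch.StartNew();

        using (var connection = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Default"].ConnectionString)) // assumed name
        using (var command = new SqlCommand("SELECT 1", connection))             // placeholder query
        {
            connection.Open();
            command.ExecuteScalar();
        }

        stopwatch.Stop();
        Response.Write("Database round trip: " + stopwatch.ElapsedMilliseconds + " ms");
    }
}
```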
Your last option would be to set up a simple low-spec machine with the DB and web server on it and just test. Depending on how much traffic your site is getting, you should be able to get a pretty good idea of its response time.
Tools such as YSlow might also be of some help; however, these are usually used more for fine-tuning.
Since you're running on a shared hosting service, I would guess that's where your problem is. You're competing for server resources with every other website and database on those servers.
To make sure, I would set up a local environment that mimics your production environment. Then perform some standard stress tests to see how it performs. If it performs how you would expect, then it is probably your hosting solution.
With shared hosting solutions, you really do get what you pay for. If it's a system that requires a lot more speed than you're getting, you should look at a dedicated hosting solution.
I suggest you take a look at Tracing:
http://davidhayden.com/blog/dave/archive/2005/07/17/2396.aspx
This enables you to see a stack trace (the last picture in the article) and localize your performance bottlenecks.
A quick solution I developed to keep logs of performance on my web app may help you here. I have a web server and DB server running a similar-sounding app. I wrote a web service that runs a "benchmarking" stored procedure and returns the run time. I wrote a win app that runs on my development server that calls the web service, passes it the name of the stored procedure to run, and times how long the whole request takes. The win app writes the data to a log file and runs every 10 minutes as a scheduled task. Extra bells and whistles include automatic emails to team members when performance exceeds the specified threshold 3 consecutive times, fails to connect, and when it recovers to normal performance after a slow period.
This provides a general indication of how a user's experience on the website will be at any given time and serves as a warning bell for the team. Not exactly the best solution, but I wrote it in a couple of hours several months ago and have used the data it creates for troubleshooting purposes many times.
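In the same spirit, here is a compact hedged sketch of such a monitor (the benchmark URL, threshold, and log path are placeholders; the email alerts described above are omitted), which could be run every 10 minutes as a scheduled task:

```csharp
// Compact monitoring sketch: call a benchmarking endpoint, time the call,
// append the result to a log file, and flag slow responses or failures.
using System;
using System.Diagnostics;
using System.IO;
using System.Net;

public static class BenchmarkMonitor
{
    public static void Main()
    {
        const string url = "http://www.example.com/BenchmarkService.asmx/RunBenchmark"; // placeholder
        const int thresholdMs = 3000;                                                   // placeholder

        var stopwatch = Stopwatch.StartNew();
        string status;
        try
        {
            using (var client = new WebClient())
                client.DownloadString(url);
            stopwatch.Stop();
            status = stopwatch.ElapsedMilliseconds > thresholdMs ? "SLOW" : "OK";
        }
        catch (WebException)
        {
            stopwatch.Stop();
            status = "FAILED";
        }

        File.AppendAllText(@"C:\Logs\benchmark.log",                                    // placeholder path
            string.Format("{0}\t{1} ms\t{2}{3}",
                DateTime.Now, stopwatch.ElapsedMilliseconds, status, Environment.NewLine));
    }
}
```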