Setting up a backup DB server in an ASP.NET web.config file

I currently have an asp.net website hosted on two web servers that sit behind a Cisco load balancer. The two web servers reference a single MSSQL database server.
Since this database server is a single point of failure, I'm adding an additional MSSQL server for backup. I would like to set up my web.config files to write everything to both MSSQL servers, but only read from the "primary" database server unless it is unreachable for some reason, at which point the backup MSSQL server would be used.
Is this possible via a web.config file setting, or must this be done in code? Thanks in advance for any help.
New Information:
I just wanted to add further information on this topic after researching it for the past several days - Microsoft TechNet has a good article titled "Implementing Application Failover with Database Mirroring" (http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/implappfailover.mspx#EMD).
This specifically covers the database mirroring feature in Microsoft SQL Server 2005 and the new "Failover Partner" connection string keyword that allows you to specify two server/db instances in a single connection string.
The article is well worth a read if you're interested in implementing this type of feature.
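For illustration only, a minimal ADO.NET sketch of that keyword; the server and database names below are hypothetical placeholders:

using System.Data.SqlClient;

// A minimal sketch of the SQL Server 2005 mirroring syntax described above.
// "PrimaryServer", "MirrorServer" and "MyDatabase" are hypothetical names.
string connectionString =
    "Data Source=PrimaryServer;" +
    "Failover Partner=MirrorServer;" +
    "Initial Catalog=MyDatabase;" +
    "Integrated Security=True;";

using (SqlConnection connection = new SqlConnection(connectionString))
{
    // If PrimaryServer cannot be reached and the mirror currently holds the
    // principal role, the client retries the connection against MirrorServer.
    connection.Open();
}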

What you want is called "failover", where if one database fails your queries are automatically redirected to the other. This is achieved at the database level, not the application. There are a lot of walkthroughs etc. for setting up failover clusters: here's one for SQL 2000, and another for SQL 2005. Basically, once you set it up, the primary database communicates all activity to the secondary one. If the primary fails, the secondary is (almost) up to date and takes over.
The servers form a cluster, and look like a single unit - similar to the way your load-balanced web servers look to the outside world. The backup monitors the primary, and if the primary stops responding, the backup takes over receiving queries. If you're Googling, try also adding the keywords "database mirroring" and "quorum".

It's a bit more complex than that. Does your web page write to the databases, or just read?
If it writes, then you'll have to worry about keeping the two databases synchronized, probably using mirroring or log shipping.
But what you are (in essence) talking about doing is setting up a SQL cluster.

I've written a blog entry that shows how to setup MSSQL Database mirroring as well as how to actually utilize it from a managed code perspective:
http://www.improve.dk/blog/2008/03/23/sql-server-mirroring-a-practical-approach

Nice answer from "Rick", but I just wanted to add my 2 cents of information. Normally, for a failover setup without a lot of expensive equipment, I would set it up like this:
You can have your two SQL Server boxes waiting for requests and a third low-end box running SQL Server 2005 Express as the "health monitor" (the mirroring witness). What that saves you is about $10K for the box and one SQL Server licence. SQL Server Express (as in free) can do the health monitoring between the two database servers without any issues.
That is my setup :)

Related

How to correctly implement a file storage system project in ASP.NET?

On the upload.aspx page I have
conn1.ConnectionString = "Data Source=.\ip-of-remote-database-server;AttachDbFilename=signup.mdf;Integrated Security=True;User Instance=True";
and all the queries are also on the same page; only the database is on another machine.
So is this the correct way of implementing it, or do I have to create all the queries on the other machine and call them from the application?
Any given query might originate from the client code (such as ASP.NET), or it might be stored a priori in the DBMS itself as a VIEW or a stored procedure (or even a trigger).
But no matter where it originated from, the query is always executed by the DBMS server. This way, the DBMS can guarantee the integrity of data and "defend" itself from the bugs in the client code.
The logical separation of client and server is why this model is called client/server, but that doesn't mean they must be separate physical machines - you'll decide that based on expected workload [1] and usage patterns [2].
[1] Distributing the processing to multiple machines might increase performance.
[2] E.g. you might need several "fat" clients around the LAN (communicating with the same database server) to reach all your users. This is less relevant for the Web, where there are additional layers of indirection between users and the database.
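As a rough illustration of the two places a query can live, both forms are still executed by the DBMS. The connection string, table and procedure names below are made up, and connectionString and userId are assumed to be defined elsewhere:

using System.Data;
using System.Data.SqlClient;

// A query defined in the client code (e.g. ASP.NET) but executed by the DBMS:
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT Id, FileName FROM Files WHERE UserId = @userId", conn))
{
    cmd.Parameters.AddWithValue("@userId", userId);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader()) { /* read rows */ }
}

// The same query stored a priori in the database as a stored procedure:
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.GetFilesForUser", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@userId", userId);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader()) { /* read rows */ }
}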
It depends on your infrastructure. If you have SQL Server locally you can use it. I assume that this is a school project, so it does not matter much. In real life it is usually a good idea to separate the web server and the database server.

How can I handle a web application that connects to many SQL Server databases?

I am building an ASP.NET web application that will use SQL Server for data storage. I am inheriting an existing structure and I am not able to modify it very much. The people who use this application are individual companies who have paid to use the application. Each company has about 5 or 10 people who will use the application. There are about 1000 companies. The way that the system is currently structured, every company has their own unique database in the SQL Server instance. The structure of each database is the same. I don't think that this is a good database design but there is nothing I can do about it. There are other applications that hit this database and it would be quite an undertaking to rewrite the DB interfaces for all of those apps.
So my question is how to design the architecture for the new web app. There are times of the month where the site will get a lot of traffic. My feeling is that the site will not perform well at these times, because I am guessing that when we have 500 people from different companies accessing the site simultaneously, they will each have their own unique database connection because they are accessing different SQL Server databases with different connection strings. SQL Server will not use any connection pooling. My impression is that this is bad.
What happens if they were to double their number of customers? How many unique database connections can SQL Server handle? Is this a situation where I should tell the client that they must redesign this if they want to remain scalable?
Thanks,
Corey
You don't have to create separate connections for every DB
I have an app that uses multiple DBs on the same server. I prefix each query with a "USE dbName; "
I've even run queries on two separate DB's in the same call.
As for calling stored procs, it's a slightly different process, since you can't do
Use myDB; spBlahBlah
Instead you have to explicitly change the DB on the connection object. In .NET it looks something like this:
myConnection.ChangeDatabase("otherDBName");
then call your stored procedure.
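A rough sketch of both techniques in one place (the connection string, database and procedure names are placeholders):

using System.Data;
using System.Data.SqlClient;

using (SqlConnection myConnection = new SqlConnection(connectionString))
{
    myConnection.Open();

    // Ad-hoc query: just prefix it with USE to pick the database.
    using (SqlCommand cmd = new SqlCommand("USE dbName; SELECT COUNT(*) FROM Orders;", myConnection))
    {
        int orderCount = (int)cmd.ExecuteScalar();
    }

    // Stored procedure: switch the connection to the other database first.
    myConnection.ChangeDatabase("otherDBName");
    using (SqlCommand proc = new SqlCommand("spBlahBlah", myConnection))
    {
        proc.CommandType = CommandType.StoredProcedure;
        proc.ExecuteNonQuery();
    }
}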
Hopefully, you have a single database for common items. Here, I hope you have a Clients table with IsEnabled, Logo, PersonToCallWhenTheyDontPayBills, etc. Add a column for Database (i.e. catalog) and, while you're at it, Server. Your web application will point to the common database when starting up and build the list of database connections per client. Programmatically build your database connection strings from the Server and Database columns in the table.
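A rough sketch of what that could look like, assuming a Clients table with Server and Database columns in the common database (the table, column and method names here are hypothetical):

using System.Data.SqlClient;

public static class ClientConnections
{
    // Look up the client's Server and Database values from the common database
    // and build the per-client connection string from them.
    public static string GetClientConnectionString(string commonConnectionString, int clientId)
    {
        string server, database;

        using (SqlConnection conn = new SqlConnection(commonConnectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Server, [Database] FROM Clients WHERE ClientId = @clientId", conn))
        {
            cmd.Parameters.AddWithValue("@clientId", clientId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                server = reader.GetString(0);
                database = reader.GetString(1);
            }
        }

        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder
        {
            DataSource = server,
            InitialCatalog = database,
            IntegratedSecurity = true
        };
        return builder.ConnectionString;
    }
}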
UPDATE:
After my discussion with #Neil, I want to point out that my method assumes a singleton database connection. If you don't do this then it would be silly to follow my advice.
Scaling is a complex issue. However, why are you not scaling the web aspect as well? Then the connection pooling is limited to the web application.
edit:
I'm talking about the general case here. I know that pooling occurs at many levels, not just the IDbConnection (http://stackoverflow.com/questions/3526617/are-ado-net-2-0-connection-pools-pre-application-domain-or-per-process). I was wondering whether the questioner had considered scaling at the web application level.

Define Failover SQL Server

We are beginning to test some BC (business continuity) solutions for our SQL Server DBs. We have decided that, where possible, we will use DB mirroring, and that for everything less critical, or where DB mirroring is not possible, we will use log shipping.
I have set up two test SQL Servers to test log shipping, to be able to document procedures and also to establish what needs to change in our client connections to allow failover to the secondary server.
We have a mix of applications that include ASP Classic, ASP.NET, and ODBC. I have come across the fact that ODBC (when using SQLNCLI) has the ability to use a mirror server, and with ASP.NET you can define a failover partner.
Can anyone provide information on how we can achieve failover support for our ASP Classic applications, and can anyone confirm whether the SQLNCLI and ASP.NET failover partner works with SQL log shipping?
I have done some testing in ASP.NET with adding a failover partner to the connection string; however, the application keeps querying the principal server, which makes me think I am missing something, or that this is not supported with log shipping.
My ASP.NET connection string is:
<add name="test-BC_originalConnectionString2" connectionString="Data Source=primary;Failover Partner=secondary;Initial Catalog=test-BC_SQL;User ID=me;Password=passw" providerName="System.Data.SqlClient" />
I would greatly appreciate any assistance anyone is able to provide me.
If there is any further information required, please don't hesitate to let me know.
Thanks,
Matt
From the lack of responses, and also finding nothing further on the internet, I can only assume that log shipping doesn't support this sort of client-side failover configuration.
I have found lots of references to DNS (too slow to update all clients), and load-balancing appliances (too expensive for the project) but not much else.
We are going to introduce DB mirroring for tier 1 services and look at other alternatives, such as log shipping, for lower-tier services, which have a less demanding return-to-operation target and therefore leave us time to change the application connection strings manually.
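For reference, a rough sketch of what a purely application-side fallback could look like when changing the connection string is the only switch point; the helper class and connection strings below are hypothetical, and the secondary would of course need to be recovered and online first:

using System.Data.SqlClient;

public static class FailoverConnection
{
    // Hypothetical manual fallback: try the primary server, and only if the
    // connection itself fails, retry against the secondary server.
    public static SqlConnection OpenWithFallback(string primaryConnectionString,
                                                 string secondaryConnectionString)
    {
        try
        {
            SqlConnection primary = new SqlConnection(primaryConnectionString);
            primary.Open();
            return primary;
        }
        catch (SqlException)
        {
            SqlConnection secondary = new SqlConnection(secondaryConnectionString);
            secondary.Open();
            return secondary;
        }
    }
}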
Thanks,
Matt

ASP.NET performance: counting SQL requests

We had a huge performance problem when deploying our ASP.NET app at a customer site, where the DB sat in a remote location.
We found that it was due to the fact that pages made a ridiculous number of individual SQL queries to the DB. It was never a problem we noticed because usually the web and DB servers are on the same local network (low latency). But in this (suddenly) high-latency configuration, it was very, very slow.
(Notice that each SQL request by itself was fast; it is the number and serial nature of the sequence that is the problem.)
I asked the engineering team to be able to report and maintain a "wall of shame" (or stats) telling us, for each page, the number of SQL requests, so we can use it as a reference. They claim it is expensive.
Can anyone tell me how to maintain or get such a report cheaply and easily?
We are using SQL Server 2005
We have a mix of our own DB access layer and SubSonic.
I know and use the profiler, but that is a bit manual. Asking here if there is a tip on how to automate this, or maybe I am just crazy?
If you are on SQL Server, read up on Profiler.
http://msdn.microsoft.com/en-us/library/ms187929.aspx
Running profiler from the UI is expensive, but you can run traces without the UI and that will give you what you want.
First, check out SubSonic's BatchQuery functionality--it might help alleviate a lot of the stress in the first cut without getting into material modification of your code.
You can schedule trace jobs/dumps from the SQL server's end of things. You can also run perfmon counters to see how many database requests the app is serving.
All that said, I'd try and encourage the customer to move the database (or a mirrored copy of the database) closer to your app. It is probably the cheapest solution in the long term, depending on how thick the app is.
I have had good success using this tool in the past, not sure if the price is right for you but it will uncover any issues you may have:
Spotlight on SQL Server
The MiniProfiler (formerly known as the MVC mini profiler; but it works for both MVC and WebForms) is a must in such a case IMO. If the code creating the database connections is well architected, it's a piece of cake to get it running for almost any ASP.NET application.
It generates a report on each rendered page with profiling stats, including each SQL query sent to the database for the request. You can see it in action on the Stack Exchange Data Explorer pages (top left corner).
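If adding a profiler isn't feasible, a very rough home-grown alternative is to count the queries per request yourself. This sketch assumes every query in the data access layer goes through a single choke point; the helper below is hypothetical (call CountQuery() from that choke point and LogTotal() from Application_EndRequest in Global.asax):

using System.Web;

public static class QueryCounter
{
    private const string Key = "SqlQueryCount";

    // Increment the per-request counter; call this wherever commands are executed.
    public static void CountQuery()
    {
        HttpContext context = HttpContext.Current;
        if (context == null) return;                 // e.g. background threads
        object existing = context.Items[Key];
        int count = existing == null ? 0 : (int)existing;
        context.Items[Key] = count + 1;
    }

    // Write the total for the page; shows up in trace.axd if tracing is enabled.
    public static void LogTotal()
    {
        HttpContext context = HttpContext.Current;
        if (context == null || context.Items[Key] == null) return;
        context.Trace.Write("SQL",
            "Queries on " + context.Request.Path + ": " + (int)context.Items[Key]);
    }
}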

What's the ASP.NET Connection String Format for a Linked Server?

I've got a database server that I am unable to connect to using the credentials I've been provided. However, on the staging version of the same server, there's a linked server that points to the production database. Both the staging server and the linked server have the same schema.
I've been reassured that I should expect to be able to connect to the live server before we go live. Unfortunately, I've reached a point in my development where I need more than the token sample records that are currently in the staging database. So, I was hoping to connect to the linked server.
Thus far, my development against this schema has been against the staging server itself, using Subsonic objects. That all works fine.
I can connect via SQL Server Management Studio to that linked server and execute my queries directly. I can also execute "manual" queries in C# against the linked server by having my connection string hook up to the staging server and running my queries as
SELECT * FROM OpenQuery([LINKEDSERVER],'QUERY')
However, the Subsonic objects are what's enabling me to bring this project in on time and under budget, so I'm not looking to do straight queries in my code.
What I'm looking for is whether there's a way to state the connection string to the linked server. I've looked at lots of forum entries, etc. on the topic and most of the answers seem to completely gloss over the "linked server" portion of the question, focusing on basic connection string syntax.
I don't believe that you can access a linked server directly from an application without the OpenQuery syntax. Depending on the complexity of your schema, it might make sense to write a routine or sproc to populate your staging database with data from your live database.
You might also consider looking at Redgate's SQL Data Generator or any other data gen tool. Redgate's is pretty easy to use.
One other idea - can you get a backup of the live database that you can install in development to do your testing? If it's just data for development and testing that you seek, you probably want to stay away from connecting to your production database at all.
Create testing stored procedures on server B that reference the data on server A via the linked server. e.g. if your regular sproc references a table on Server B say:
databaseA.dbo.tableName
then use the linked servername to reference the same database/table on server A:
linkedServerName.databaseA.dbo.tableName
If server A is identical in its database/table/column names, then you will be able to do this with some quick find/replace work.
Creating a linked server from .NET doesn't make any sense, since a linked server is nothing but a connection from one SQL Server to another server (SQL, file, Excel, Sybase, etc.); in essence it is just a connection string (you can impersonate and do some other stuff when creating a linked server).
One way is to create two connection strings and access the appropriate database when required.
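A minimal sketch of that first option, assuming two named connection strings in web.config (the names and the useDatabaseA flag below are made up):

using System.Configuration;
using System.Data.SqlClient;

// "DatabaseA" and "DatabaseB" are hypothetical entries in <connectionStrings>.
bool useDatabaseA = true;   // decide per operation which database is needed
string name = useDatabaseA ? "DatabaseA" : "DatabaseB";
string connectionString = ConfigurationManager.ConnectionStrings[name].ConnectionString;

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    // ... run commands against the chosen database
}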
The second option is to create a connection for Database A only and create a linked server for Database B in the database.
