I'm a total Unix-way guy, but now our company is creating a new application on an ASP.NET + SQL Server cluster platform.
So I know the general principles and techniques for scaling load, but I want to know the Microsoft background of horizontal scaling.
The question is pretty simple – are there any built-in abilities in ASP.NET to access the least loaded SQL server in a SQL Server cluster?
Any advice, libraries, or links are highly appreciated.
I would also be glad to hear SQL Server best practices or success stories around this theme.
Thank you.
Pavel
SQL Server clustering is not load balancing, it is for high-availability (e.g. one server dies, cluster is still alive).
If you are using SQL Server clustering, the cluster is active/passive, in that only one server in the cluster ever owns the SQL instance, so you can't split load across both of them.
If you have two databases you're using, you can create two SQL instances and have one server in the cluster own one of the two instances, and the other server own the other instance. Then, point connection strings for one database to the first instance, and connection strings for the second database to the second instance. If one of the two instances fails, it will failover to the passive server for that instance.
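To illustrate (instance and database names below are made up), the application side would simply carry one connection string per instance, something like:

using System.Data.SqlClient;

// Hypothetical names: each clustered instance has its own network name,
// so a failover of either instance is transparent to these strings.
const string OrdersDb  = @"Data Source=SQLCLUSTER\INSTANCE1;Initial Catalog=Orders;Integrated Security=SSPI;";
const string ReportsDb = @"Data Source=SQLCLUSTER\INSTANCE2;Initial Catalog=Reports;Integrated Security=SSPI;";

using (var conn = new SqlConnection(OrdersDb))
{
    conn.Open();    // queries against the Orders database land on instance 1
}

using (var conn = new SqlConnection(ReportsDb))
{
    conn.Open();    // queries against the Reports database land on instance 2
}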
An alternative (still not load balancing, but easier to set up, IMO, than clustering) is database mirroring: http://msdn.microsoft.com/en-us/library/ms189852.aspx. For mirroring, you specify the partner server name in the connection string: Data Source=myServerAddress;Initial Catalog=myDataBase;User Id=myUsername;Password=myPassword;Failover Partner=myBackupServerAddress; ADO.NET will automatically switch to the failover partner if the primary fails.
Finally, another option to consider is replication. If you replicate a primary database to several subscribers, you can split your load to the subscribers. There is no built-in functionality that I am aware of to split the load, so your code would need to handle that logic.
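There's no switch to flip for that; purely as a sketch (the server names and the round-robin policy below are made up, not anything built in), the routing could look like:

using System;
using System.Data.SqlClient;
using System.Threading;

// Sketch only: writes go to the replication publisher, reads are spread
// round-robin across the subscriber copies.
static class DbRouter
{
    const string Publisher =
        "Data Source=PUBLISHER;Initial Catalog=App;Integrated Security=SSPI;";

    static readonly string[] Subscribers =
    {
        "Data Source=SUBSCRIBER1;Initial Catalog=App;Integrated Security=SSPI;",
        "Data Source=SUBSCRIBER2;Initial Catalog=App;Integrated Security=SSPI;"
    };

    static int _next;

    // All writes go to the publisher so replication can fan them out.
    public static SqlConnection OpenForWrite()
    {
        var conn = new SqlConnection(Publisher);
        conn.Open();
        return conn;
    }

    // Reads rotate across subscribers; Math.Abs guards against counter wrap-around.
    public static SqlConnection OpenForRead()
    {
        int i = Math.Abs(Interlocked.Increment(ref _next) % Subscribers.Length);
        var conn = new SqlConnection(Subscribers[i]);
        conn.Open();
        return conn;
    }
}

Keep in mind that replicated reads can be slightly stale, so anything that must read its own writes should go to the publisher.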
I'm using JDBC in my application with the business logic (the client). The JDBC connection goes to a database on another machine (the server). In this case, my JDBC code connects directly to the database and stores and retrieves data. This is a two-tier architecture, right?
In another application, for example servlet programming, I simply have a browser on my client machine, which is the presentation layer (client tier). Let me consider my business logic as the application layer (second tier) and the database as the data layer (third tier). I'm still using JDBC to connect my application (business logic) to the database. The second and third tiers now reside on the server.
Going by the above example, in a three-tier architecture only a browser is added and my business logic is kept on the server. I'm not seeing any performance difference beyond that. If I'm wrong, please correct me and explain the exact 2-tier and 3-tier architectures with other examples. Thanks in advance, dear friends.
What you say is right.
Your first example is two-tier.
The second example is three-tier.
A three-tier architecture can represent an important performance gain if the link between browser and server is slower than the link between server and DBMS. This is because usually the business logic needs to make several calls to the DBMS and/or present to the user only a small part of the information returned by the DBMS. Having the business logic in the client while having a slow connection to the DBMS would represent an important performance penalty.
In a typical web scenario, the connection between client and server is usually several times slower than the connection between server and DBMS, and there is your performance gain.
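To make that concrete, here is a small hypothetical sketch (in C#, but the idea is language-neutral): the business tier makes several calls to the data tier per request yet returns only one small value to the browser, so it pays to keep that chatty loop on the fast server-side link.

using System.Collections.Generic;

// Hypothetical example; the two Load* methods stand in for real DBMS queries.
static class OrderService
{
    public static decimal GetOrderTotal(int customerId)
    {
        decimal total = 0;
        foreach (int orderId in LoadOrderIds(customerId))   // one round trip to the DBMS
            total += LoadOrderAmount(orderId);              // one more round trip per order
        return total;   // only this single value crosses the slow browser link
    }

    static IEnumerable<int> LoadOrderIds(int customerId) => new[] { 1, 2, 3 };
    static decimal LoadOrderAmount(int orderId) => 10m * orderId;
}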
I'm designing a new web project and, after studying some options with scalability in mind, I came up with two database solutions:
Local SQLite files carefully designed in a scalable fashion (one new database file for every X users, as writes will depend on user content, with no cross-user data dependence);
Remote MongoDB server (like Mongolab), as my host server doesn't offer MongoDB.
I don't trust the MySQL server at my current shared host, as it comes down very frequently (and I had problems with MySQL on another host, too). For the same reason I'm not going to use Postgres.
Pros of SQLite:
It's local, so it should be faster (I'll take care to use indexes and transactions properly);
I don't need to worry about TCP sniffing, as the Mongo wire protocol is not encrypted;
I don't need to worry about server outage, as SQLite is serverless.
Pros of MongoDB:
It's more easily scalable;
I don't need to worry about splitting databases, as scalability seems natural;
I don't need to worry about schema changes, as Mongo is schemaless and SQLite doesn't fully support ALTER TABLE (especially considering changing many production files, etc.).
I want help making a decision (and maybe considering a third option). Which one is better when write and read operations are growing?
I'm going to use Ruby.
One major risk of the SQLite approach is that as your requirements to scale increase, you will not be able to (easily) deploy on multiple application servers. You may be able to partition your users into separate servers, but if that server were to go down, you would have some subset of users who could not access their data.
Using MongoDB (or any other centralized service) alleviates this problem, as your web servers are stateless -- they can be added or removed at any time to accommodate web load without having to worry about what data lives where.
I have an application that runs normally on my first website.
If I move this application to another server, but I don't move the DB (it stays on server 1), it becomes very slow at retrieving data from SQL.
Is the problem only the network, or is there an issue in my code?
I use ADO.NET with LINQ...
Thanks
If they were residing on the same server before and now they're not, then yeah, it's almost certainly a network issue. Are the servers housed in the same location? I'm taking a guess here since I don't have adequate information, but there's a chance that your intranet isn't configured correctly or you're using external IPs. In that case the request and/or the response are being sent out over the internet when they could be using your or your company's internal network to communicate.
Profile your queries. See how much time it takes to execute each. If your queries return fast, the problem might be your front-end code or the network.
You can log your LINQ queries to the console (or to a TextWriter, for example). Something like:
dataContext.Log = Console.Out;
Then run the queries in SQL Server and see how efficient they are. Do they use indexes? Do they perform table scans? etc.
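If it helps, here is a small sketch of capturing the generated SQL to a file instead, so you can replay the statements in Management Studio with the actual execution plan turned on (MyDataContext, the Orders table, and the log path are just placeholders for your own context):

using System;
using System.IO;
using System.Linq;

using (var log = new StreamWriter(@"C:\temp\linq-queries.sql", append: true))
using (var db = new MyDataContext())          // placeholder for your LINQ to SQL DataContext
{
    db.Log = log;                             // the generated SQL is written to the file
    var recent = db.Orders
                   .Where(o => o.OrderDate > DateTime.UtcNow.AddDays(-7))
                   .ToList();
}   // disposing the writer flushes the captured SQL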
Hoping someone has experience with both SQL Server and PostgreSQL.
Between the two DBs, which one is easier to scale?
Is creating a read-only DB that mirrors the main DB easier or harder than it is in SQL Server?
Seeing as SQL Server can get expensive, I really want to look into PostgreSQL.
Also, are the DB access libraries well written for an ASP.NET application?
(Please no comments along the lines of "do you need the scale", "worry about scaling later", or "don't optimize until you have scaling issues"... I just want to learn from a theoretical standpoint, thanks!)
Currently, setting up a read-only replica is probably easier with SQL Server. There's a lot of work going on to get hot standby and streaming replication into the next release, though.
Regarding scaling, people are using PostgreSQL with massive databases. Skype uses PostgreSQL, and Yahoo has something based on PostgreSQL with several petabytes in it.
I've used PostgreSQL with C# and ASP.NET 2.0, using the DB provider from Devart:
http://www.devart.com/dotconnect/postgresql/
The visual designer has a few teething difficulties but the connectivity was fine.
I have only used SQL Server and not much PostgreSQL, so I can only answer for SQL Server.
When scaling out SQL Server you have a couple of options. You can use peer-to-peer replication between databases, or you can have secondary read-only DB(s), as you mention. The latter option is relatively straightforward to set up using database mirroring or log shipping. Database mirroring also gives you the benefit of automatic switchover on primary DB failure. For an overview, look here:
http://www.microsoft.com/sql/howtobuy/passive-server-failover-support.mspx
http://technet.microsoft.com/en-us/library/cc917680.aspx
http://blogs.technet.com/josebda/archive/2009/04/02/sql-server-2008-database-mirroring.aspx
As for licensing, you only need a license for the standby server if it is actively used for serving queries - you do not need one for a pure standby server.
If you are serious, you can set up a failover cluster with a SAN for storage, but that is not really a load balancing setup in itself.
Here are some links on the general scale up/out topic:
http://www.microsoft.com/sqlserver/2008/en/us/wp-sql-2008-performance-scale.aspx
http://msdn.microsoft.com/en-us/library/aa479364.aspx
ASP.NET libraries are obviously very well written for SQL Server, but I believe good alternatives exist for PostgreSQL as well.
We currently have an ASP.NET Web Application running on a single server. That server is about to hit the danger zone regarding CPU usage, and we want to deploy a second server.
What would be the best way to handle Session State?
We currently run InProc. But that's not an option with 2+ servers, as we want to exclude a single server from the WLB rotation sometimes to do maintenance work. Even though we use sticky load balancing, we would have to wait for all users to exit before we can exclude the server from the WLB rotation.
So I was looking at this MSDN page: http://msdn.microsoft.com/en-us/library/ms178586(VS.80).aspx
I guess my main question is: if we use the State Server mode, can we ensure redundancy by deploying the state server across two servers, to avoid having a single point of failure?
If you want one of the standard options, I'd use SQL Server in a failover cluster. BTW, have you considered MemcacheDB?
SQL state server would be a better option. This link might help: SQL State Server. I don't believe you can run the state server across multiple machines.
Use either Scale Out State Server (faster, better) or SQL state (slower, simpler). But beware if you store any non-serializable objects in session state, because you'll get exceptions after the migration.
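As a small illustration of that caveat (the class name here is made up): anything you put into out-of-process session state (State Server or SQL Server mode) must be marked serializable, for example:

using System;

[Serializable]   // required once session state leaves the worker process
public class CartItem
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

// Usage in a page: Session["Cart"] = new List<CartItem> { ... };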
You might want to look into project Velocity (http://msdn.microsoft.com/en-us/data/cc655792.aspx). It has limited support now because it is on CTP3, but later this year it will be RTM. I highly suggest you watch the MIX09 session about it here.
I would suggest looking into p2p Session State Server - link is here: http://www.codeproject.com/KB/aspnet/p2pstateserver.aspx
Worked for me. The only drawback is that with a large data set it took forever to replicate between peers.