I support a group of developers who are telling me to set up a registry entry for an application they wrote in ASP.NET so it can connect to our SQL backend. Wouldn't it be better to do this with an ODBC connection? Is this lazy programming, or is it common practice?
If all their connection settings live in registry entries, how will I be able to spin up the DRP site if we have an issue? Right now we replicate the content across, and it would be a heck of a lot easier if the DB connections were in ODBC instead of having to redo all these registry entries (there are multiple apps doing this).
Please fill me in. Thanks
Why are they not using the web.config to store connection strings? http://msdn.microsoft.com/en-us/library/ms178411.aspx
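For reference, connection strings normally go in the connectionStrings section of web.config; the server, database, and name values below are placeholders:

```xml
<configuration>
  <connectionStrings>
    <add name="AppDb"
         connectionString="Data Source=SQLSERVER01;Initial Catalog=AppDb;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

The application then reads it via ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString. Standing up the DRP site becomes a matter of deploying a web.config with different server names, with no registry edits at all.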
I'm designing a new web project and, after studying some options with scalability in mind, I came up with two database solutions:
Local SQLite files carefully designed in a scalable fashion (one new database file for each X users, since writes will depend on user content, with no cross-user data dependence);
Remote MongoDB server (like Mongolab), as my host server doesn't serve MongoDB.
I don't trust the MySQL server at my current shared host, as it comes down very frequently (and I had problems with MySQL on another host, too). For the same reason I'm not going to use Postgres.
Pros of SQLite:
It's local, so it should be faster (I'll take care to use indexes and transactions properly);
I don't need to worry about TCP sniffing, as the Mongo wire protocol is not encrypted;
I don't need to worry about server outage, as SQLite is serverless.
Pros of MongoDB:
It's more easily scalable;
I don't need to worry about splitting databases, as scaling out seems natural;
I don't need to worry about schema changes, as Mongo is schemaless and SQLite doesn't fully support ALTER TABLE (especially considering changing many production files, etc.).
I want help making a decision (and maybe considering a third option). Which one is better as write and read volume grows?
I'm going to use Ruby.
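Since you mention Ruby: the "one file per X users" scheme boils down to a deterministic mapping from user id to database file. A minimal sketch (shard size and file naming are made up for illustration):

```ruby
USERS_PER_SHARD = 1000

# Map a user id to the SQLite file that holds that user's data.
# Integer division keeps users 0..999 in shard 0, 1000..1999 in shard 1, etc.
def shard_path(user_id, users_per_shard: USERS_PER_SHARD)
  shard = user_id / users_per_shard
  format("users_%04d.sqlite3", shard)
end
```

For example, shard_path(2500) resolves to "users_0002.sqlite3". The mapping must stay stable once data is written, so pick X before going live; rebalancing SQLite files later means physically moving rows between databases.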
One major risk of the SQLite approach is that as your requirements to scale increase, you will not be able to (easily) deploy on multiple application servers. You may be able to partition your users into separate servers, but if that server were to go down, you would have some subset of users who could not access their data.
Using MongoDB (or any other centralized service) alleviates this problem, as your web servers are stateless -- they can be added or removed at any time to accommodate web load without having to worry about what data lives where.
I'm a total Unix-way guy, but now our company is building a new application on an ASP.NET + SQL Server cluster platform.
So I know the best and most efficient principles and ways to scale load, but I want to know the MS background of horizontal scaling.
The question is pretty simple: are there any built-in abilities in ASP.NET to access the least-loaded SQL Server in a SQL Server cluster?
Any words, libs, links are highly appreciated.
I also would be glad to hear best SQL Server practices or success stories around this theme.
Thank you.
Pavel
SQL Server clustering is not load balancing, it is for high-availability (e.g. one server dies, cluster is still alive).
If you are using SQL Server clustering, the cluster is active/passive, in that only one server in the cluster ever owns the SQL instance, so you can't split load across both of them.
If you have two databases you're using, you can create two SQL instances and have one server in the cluster own one of the two instances, and the other server own the other instance. Then, point connection strings for one database to the first instance, and connection strings for the second database to the second instance. If one of the two instances fails, it will failover to the passive server for that instance.
An alternative (still not load balancing, but easier to set up, IMO, than clustering) is database mirroring: http://msdn.microsoft.com/en-us/library/ms189852.aspx. For mirroring, you specify the partner server name in the connection string: Data Source=myServerAddress;Initial Catalog=myDataBase;User Id=myUsername;Password=myPassword;Failover Partner=myBackupServerAddress; ADO.NET will automatically switch to the failover partner if the primary fails.
Finally, another option to consider is replication. If you replicate a primary database to several subscribers, you can split your load to the subscribers. There is no built-in functionality that I am aware of to split the load, so your code would need to handle that logic.
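The read-splitting logic mentioned above can be as simple as sending writes to the publisher and rotating reads across the subscribers. The idea is language-agnostic; here is a sketch in Ruby with placeholder server names:

```ruby
# Round-robin read splitting across replication subscribers.
# Writes always target the publisher; reads rotate over the subscribers.
class ReadSplitter
  def initialize(primary, replicas)
    @primary  = primary   # publisher: all writes go here
    @replicas = replicas  # subscribers: reads rotate across these
    @next     = 0
  end

  def write_server
    @primary
  end

  def read_server
    server = @replicas[@next % @replicas.size]
    @next += 1
    server
  end
end
```

Note that replication is asynchronous, so reads from a subscriber can lag slightly behind the publisher; route any read-after-write queries to the publisher if that matters.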
A few days ago we had a strange error with SQLite. We use an SQLite database on a network share, with several computers accessing it. Our client reported that the database was gone. A quick look showed that the database was still there, but no computer could access it. It also showed an s3db-journal file, indicating that someone was accessing the db when something happened. The strange thing: the s3db-journal file was locked by the file system (we could not copy or delete it). After restarting all applications, the locked file disappeared as it should.
How does this happen? We would like to somehow deduce how our client got into this situation. We know that there was faulty network cabling to one of the computers.
Thank you for your help.
Tobias
To clarify: several = up to 10 computers
From the "Appropriate uses for SQLite" page:
If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
It very well might be a bug in the network filesystem you're using. Either way, the SQLite developers explicitly recommend against using databases on network filesystems.
The issue is resolved. The database component (zeos) threw an exception and we tried a rollback. Due to the way the component is designed, that is only allowed when you have started a transaction. If you haven't, you get the locked s3db-journal file.
In the end we learned two things: first, never roll back when you did not start a transaction; second, zeos has an InTransaction function to check for that.
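In pseudocode, the guard that would have avoided the locked journal file (InTransaction is the zeos function mentioned above; the surrounding names are illustrative):

```
on exception:
  if Connection.InTransaction then
    Connection.Rollback
  end
  re-raise or handle the exception
```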
Hoping someone has experience with both SQL Server and PostgreSQL.
Between the two DBs, which one is easier to scale?
Is creating a read-only db that mirrors the main db easier or harder in PostgreSQL than in SQL Server?
Seeing as SQL Server can get expensive, I really want to look into PostgreSQL.
Also, are the db access libraries well written for an ASP.NET application?
(Please, no comments along the lines of: do you really need the scale, worry about scaling later, don't optimize until you have scaling issues... I just want to learn from a theoretical standpoint. Thanks!)
Currently, setting up a read-only replica is probably easier with SQL Server. There's a lot of work going on to get hot standby and streaming replication into the next PostgreSQL release, though.
Regarding scaling, people are using PostgreSQL with massive databases. Skype uses PostgreSQL, and Yahoo has something based on PostgreSQL with several petabytes in it.
I've used PostgreSQL with C# and ASP.NET 2.0, using the db provider from Devart:
http://www.devart.com/dotconnect/postgresql/
The visual designer has a few teething difficulties but the connectivity was fine.
I have only used SQL Server and not much PostgreSQL, so I can only answer for SQL Server.
When scaling out SQL Server you have a couple of options. You can use peer-to-peer replication between databases, or you can have one or more secondary read-only DBs, as you mention. The latter option is relatively straightforward to set up using database mirroring or log shipping. Database mirroring also gives you the benefit of automatic failover when the primary DB fails. For an overview, look here:
http://www.microsoft.com/sql/howtobuy/passive-server-failover-support.mspx
http://technet.microsoft.com/en-us/library/cc917680.aspx
http://blogs.technet.com/josebda/archive/2009/04/02/sql-server-2008-database-mirroring.aspx
As for licensing, you only need a license for the standby server if it is actively used for serving queries - you do not need one for a pure standby server.
If you are serious, you can set up a failover cluster with a SAN for storage, but that is not really a load balancing setup in itself.
Here are some links on the general scale up/out topic:
http://www.microsoft.com/sqlserver/2008/en/us/wp-sql-2008-performance-scale.aspx
http://msdn.microsoft.com/en-us/library/aa479364.aspx
ASP.NET libraries are obviously very well written for SQL Server, but I would expect good alternatives to exist for PostgreSQL as well.
What is the best way to handle connection pooling with Oracle 11g and ASP.NET? I'm having issues where Oracle refuses to open any new connections for the web app after a while.
This causes requests to time out and queue up.
EDIT:
Is there anything that I need to do in Oracle to fine tune this?
Since you didn't mention your Oracle config, it's hard to suggest a first course of action, so you need to clarify how many sessions you have:
SELECT username, count(1) FROM v$session GROUP BY username;
Oracle's maximum is controlled by the PROCESSES instance parameter. The default may be something like 150. You may try bumping that to 300 or so for an OLTP web app; however, if you do have a connection leak, that will only delay the inevitable. Also check that PROCESSES is at least as large as the "Max Pool Size" setting in your Oracle ADO.NET connection string. The default for 11g ODP.NET is 100, I think.
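To check the current setting and bump it as described (the value 300 is just the example figure from above; changing PROCESSES requires an instance restart):

```sql
-- current setting
SELECT name, value FROM v$parameter WHERE name = 'processes';

-- raise it; takes effect after the instance is restarted
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
```

On the client side, the pool cap is set in the connection string itself, e.g. Max Pool Size=100; so make sure the two sides agree, leaving some PROCESSES headroom for background and admin sessions.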
Closing the connections is all you need to do. The framework should handle all of the pooling.
Querying v$session will show all outstanding sessions.
How many connections do you have, and how quickly are you creating/disconnecting them?
Shared server is one mechanism that lets multiple end clients share a limited number of connections.