I am connecting to an Oracle database server from SQL Server 2005 using a linked server with the Microsoft OLE DB provider for Oracle.
The table in the Oracle database has a datetime stamp column, so I fetch only the latest records from it by passing a query with a WHERE condition. However, the query takes roughly 6 to 7 minutes.
The table I am querying contains 20 million records.
I know this does not answer your question directly, but maybe you could try connecting directly to Oracle to grab your data.
In the company I work for, we use a wide range of database backends, and we connect to each of them directly. There is a LINQ to Oracle provider available in the community, but the method we use is the Oracle .NET Data Provider.
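As a rough sketch of that approach (this assumes the classic ODP.NET assembly Oracle.DataAccess; the table name big_table, the column stamp_col, and the connection details are made-up placeholders, since the original post doesn't give them), the timestamp filter is applied entirely on the Oracle side:

    using System;
    using Oracle.DataAccess.Client; // ODP.NET; the managed driver uses Oracle.ManagedDataAccess.Client instead

    class LatestRecordsFetcher
    {
        public static void FetchSince(DateTime since)
        {
            var connStr = "User Id=app;Password=secret;Data Source=ORCL"; // placeholder credentials
            using (var conn = new OracleConnection(connStr))
            using (var cmd = new OracleCommand(
                "SELECT * FROM big_table WHERE stamp_col > :since", conn)) // assumed table/column names
            {
                cmd.BindByName = true;
                cmd.Parameters.Add("since", OracleDbType.Date).Value = since;
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // process each recent row here
                    }
                }
            }
        }
    }

Because the WHERE clause runs on the Oracle server itself, only the recent rows travel over the network, which is usually much faster than pulling 20 million rows through a linked server and filtering them there.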
On a side note... LINQ to LDAP is the one I'm waiting for. There are ways to write one comparable to the LINQ to Active Directory libraries, but it's not worth destroying our current model for.
AFAIK, Memcached does not support synchronization with a database (at least not SQL Server or Oracle). We are planning to use Memcached (it is free) with our OLTP database.
In some business processes we do heavy validations that require a lot of data from the database. We cannot keep a static copy of this data because we don't know whether it has been modified, so we fetch it every time, which slows the process down.
One possible solution could be:
Write database triggers that create/update key-named files (table-PK1-PK2-PK3-column) whenever records change
Monitor these file changes with FileSystemWatcher and expire the corresponding key (table-PK1-PK2-PK3-column) so the updated data is fetched
Problem: there would be around 100,000 users using any combination of data for 10 hours, so we will end up with a lot of files, e.g. categ1-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-78-data250, categ2-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-33-data100, etc.
I am expecting at least 5 million files. Now it looks like a pathetic solution :(
Other possibilities are:
Call a web service asynchronously from the trigger, passing the key to be expired
Call an exe from the trigger without waiting for it to finish, and have that exe expire the key. (I have had some success with this approach on SQL Server, using xp_cmdshell to call an exe; calling an exe from an Oracle trigger looks a bit more difficult.)
Still sounds pathetic, doesn't it?
Any intelligent suggestions, please?
It's not clear (to me) whether the use of Memcached is mandatory. I would personally avoid it and instead use SqlDependency and OracleDependency. Both allow you to pass a database command and get notified when the data that the command would return changes.
If Memcached is mandatory, you can still use these two classes to trigger the invalidation.
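As a minimal sketch of the SqlDependency side (the connection string, table, and column names below are placeholders; Service Broker must be enabled on the database for query notifications to work):

    using System;
    using System.Data.SqlClient;

    static class ValidationDataWatcher
    {
        const string connStr = "Data Source=.;Initial Catalog=OltpDb;Integrated Security=SSPI"; // placeholder

        public static void Start()
        {
            SqlDependency.Start(connStr); // start the notification listener once per application
            Register();
        }

        static void Register()
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                // query-notification rules apply: two-part table names, explicit column list, no SELECT *
                "SELECT CategoryId, SubCategoryId, DataValue FROM dbo.ValidationData", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (sender, e) =>
                {
                    // expire the related Memcached key (or local cache entry) here, then re-subscribe,
                    // because a notification fires only once
                    Console.WriteLine("Validation data changed: {0}", e.Info);
                    Register();
                };

                conn.Open();
                using (var reader = cmd.ExecuteReader()) { } // executing the command activates the subscription
            }
        }
    }

OracleDependency in ODP.NET follows essentially the same pattern, backed by Oracle's database change notification feature.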
MS SQL Server has a "Change Tracking" feature that may be of use to you. You enable the database for change tracking and configure which tables you wish to track. SQL Server then creates change records for every update, insert, and delete on a table and lets you query for changes made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than building your own tracking tables. This has been available since SQL Server 2008.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tables and lets you query which fields might have been modified. You can then join on those keys to query the tables for the current data. If you want it to capture the data as well, you can use Change Data Capture, but that requires more overhead and at least SQL Server 2008 Enterprise edition.
Change Data Capture
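As a rough sketch of the change-tracking side (the database, table, and column names below are placeholders, and the one-time setup shown in the comments would normally be done by a DBA):

    using System.Data.SqlClient;

    class ChangeTrackingPoller
    {
        const string connStr = "Data Source=.;Initial Catalog=AppDb;Integrated Security=SSPI"; // placeholder

        // One-time setup, run once against the database:
        //   ALTER DATABASE AppDb SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
        //   ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

        public static void PollChanges(long lastSyncVersion)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                @"SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION, o.Status
                  FROM CHANGETABLE(CHANGES dbo.Orders, @lastVersion) AS ct
                  LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId", conn))
            {
                cmd.Parameters.AddWithValue("@lastVersion", lastSyncVersion);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // expire or refresh the cache entry for this OrderId
                    }
                }
            }
        }
    }

After each poll you would store the value of CHANGE_TRACKING_CURRENT_VERSION() and pass it in as lastSyncVersion the next time.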
I have no experience with Oracle, but I believe it has some change tracking functionality as well. This article might get you started:
20 Using Oracle Streams to Record Table Changes
What is the difference between a transaction in SQL Server and using a transaction in ADO.NET?
Please reply with proper reasoning; I mainly want to know the difference in terms of performance.
If I use a transaction in SQL (BEGIN/COMMIT TRAN) versus the SqlTransaction class in ADO.NET for a similar set of queries, which is better to use?
There is no difference between an ADO.NET transaction and a SQL Server transaction as far as transaction handling goes. Personally, I prefer initiating transactions at the higher level that ADO.NET offers, because it normally gives me greater flexibility in setting the scope of the transaction.
I use SQL Server-level transactions only when I need to update multiple tables, for example when I have a Master table and a Detail table and want to update both.
I have used ADO.NET (.NET-level) transactions in only one project, a SQL Server 2008 project where I needed to save DOC files in the database using the SQL Server 2008 FILESTREAM feature. Enabling FILESTREAM creates a folder on the server and saves all the file data into it, and that folder can only be read by SQL Server.
An ADO.NET transaction is more convenient if you are making changes to multiple databases within a transaction and want to roll all of them back in case of an error.
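To make the comparison concrete, here is a minimal sketch of the ADO.NET form (the connection string, tables, and WHERE clauses are placeholders); the T-SQL form would simply wrap the same two UPDATE statements in BEGIN TRAN ... COMMIT inside one batch or stored procedure:

    using System.Data.SqlClient;

    class MasterDetailUpdater
    {
        public static void Update(string connStr)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (SqlTransaction tx = conn.BeginTransaction())
                {
                    try
                    {
                        using (var cmd = new SqlCommand(
                            "UPDATE dbo.MasterTable SET Total = Total + 1 WHERE MasterId = @id", conn, tx))
                        {
                            cmd.Parameters.AddWithValue("@id", 1);
                            cmd.ExecuteNonQuery();
                        }
                        using (var cmd = new SqlCommand(
                            "UPDATE dbo.DetailTable SET Qty = Qty + 1 WHERE MasterId = @id", conn, tx))
                        {
                            cmd.Parameters.AddWithValue("@id", 1);
                            cmd.ExecuteNonQuery();
                        }
                        tx.Commit(); // both updates succeed or neither does
                    }
                    catch
                    {
                        tx.Rollback();
                        throw;
                    }
                }
            }
        }
    }

Either way the locking and logging happen in SQL Server, so the performance difference between the two forms is normally negligible compared to the cost of the statements themselves.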
Just about everything I've seen relating to ASP.NET's Login control treats it like a black box. I'm interested in seeing the SQL commands issued against ASPNETDB and watching the data flow.
For example, the Login control uses ASPNETDB and the stored procedure dbo.aspnet_Membership_FindUsersByName. I'm not clear on how to call the procedure because it expects @PageIndex and @PageSize parameters (@ApplicationName and @UserNameToMatch make sense to me). I would like to read about the procedure or trace it.
Would anyone know of good reading on the topic, or suggest a path to explore the control?
What you are looking for is called a SQL Server Trace. The Graphical User Interface for SQL Traces is SQL Server Profiler. This only ships with certain versions of SQL Server (for instance, if you have SQL Server Express Edition then you will not have SQL Server Profiler, but you will still be able to utilize the Trace stored procedures and database objects).
Using Profiler (or the trace db objects), you'll be able to filter out certain events and data depending on what you are specifically looking to capture. This will give you all the information you need about the data being transmitted between the server and the client application (in this case, the ASP.NET application).
The events and data that a trace produces can be extremely daunting, especially if you are new to this (which it sounds like you are) and there are a lot of hits against the database. Learn about the Profiler templates you can use and the individual events you can analyze.
If you have access to SQL Server, then fire up Profiler and you can see in real time the SQL statements executed against the database.
Just for good measure, a brief step-by-step guide to starting up Profiler:
Starting up SQL profiler
If you're using SQL Server Express you may not have Profiler; however, here's an open-source alternative (note: I've never used it):
free profiler
If you set the membership system up to use SQL Server (using aspnet_regsql.exe), you can see the stored procedures it uses.
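Once the database objects are there, you can also call one of the procedures yourself and watch what it returns. A minimal sketch (the application name "/" and the search pattern are assumed example values; check your dbo.aspnet_Applications table for the real application name):

    using System.Data;
    using System.Data.SqlClient;

    class MembershipProcDemo
    {
        public static void FindUsers()
        {
            const string connStr = "Data Source=.;Initial Catalog=aspnetdb;Integrated Security=SSPI"; // placeholder
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("dbo.aspnet_Membership_FindUsersByName", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@ApplicationName", "/");      // as registered in dbo.aspnet_Applications
                cmd.Parameters.AddWithValue("@UserNameToMatch", "smith%"); // LIKE-style pattern
                cmd.Parameters.AddWithValue("@PageIndex", 0);              // which page of results
                cmd.Parameters.AddWithValue("@PageSize", 20);              // rows per page
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // each row is one matching membership user
                    }
                }
            }
        }
    }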
I am building an ASP.NET web application that will use SQL Server for data storage. I am inheriting an existing structure and I am not able to modify it very much. The people who use this application are individual companies who have paid to use the application. Each company has about 5 or 10 people who will use the application. There are about 1000 companies. The way that the system is currently structured, every company has their own unique database in the SQL Server instance. The structure of each database is the same. I don't think that this is a good database design but there is nothing I can do about it. There are other applications that hit this database and it would be quite an undertaking to rewrite the DB interfaces for all of those apps.
So my question is how to design the architecture for the new web app. There are times of the month when the site will get a lot of traffic. My feeling is that the site will not perform well at these times, because I am guessing that when we have 500 people from different companies accessing the site simultaneously, they will each have their own unique database connection, since they are accessing different SQL Server databases with different connection strings. SQL Server will not get any benefit from connection pooling. My impression is that this is bad.
What happens if they were to double their number of customers? How many unique database connections can SQL Server handle? Is this a situation where I should tell the client that they must redesign this if they want to remain scalable?
Thanks,
Corey
You don't have to create separate connections for every DB
I have an app that uses multiple DBs on the same server. I prefix each query with a "USE dbName; "
I've even run queries against two separate DBs in the same call.
As for calling stored procs, it's a slightly different process, since you can't do
Use myDB; spBlahBLah
Instead, you have to explicitly change the database on the connection object. In .NET it looks something like this:
myConnection.ChangeDatabase("otherDBName");
then call your stored procedure.
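Putting those two steps together, a rough sketch (the database and procedure names are just placeholders):

    using System.Data;
    using System.Data.SqlClient;

    class ClientDbCaller
    {
        public static DataTable RunClientProc(string connStr, string clientDbName)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                conn.ChangeDatabase(clientDbName); // switch to the client's database on the same server
                using (var cmd = new SqlCommand("dbo.spBlahBlah", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    var table = new DataTable();
                    new SqlDataAdapter(cmd).Fill(table);
                    return table;
                }
            }
        }
    }

Because only the database is switched after opening, all of these calls share one connection string and therefore one connection pool.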
Hopefully, you have a single database for common items. There, I hope you have a Clients table with IsEnabled, Logo, PersonToCallWhenTheyDontPayBills, etc. Add a column for Database (i.e. catalog) and, while you're at it, Server. Your web application will point to the common database when starting up and build the list of database connections per client. Programmatically build your database connection strings from the Server and Database columns in that table.
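A rough sketch of building those per-client connection strings at startup (the Clients table and its columns are the hypothetical ones described above):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class ClientConnectionStrings
    {
        public static Dictionary<int, string> Load(string commonDbConnStr)
        {
            var result = new Dictionary<int, string>();
            using (var conn = new SqlConnection(commonDbConnStr))
            using (var cmd = new SqlCommand(
                "SELECT ClientId, Server, [Database] FROM dbo.Clients WHERE IsEnabled = 1", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var builder = new SqlConnectionStringBuilder
                        {
                            DataSource = reader.GetString(1),     // Server column
                            InitialCatalog = reader.GetString(2), // Database (catalog) column
                            IntegratedSecurity = true             // or set UserID/Password as appropriate
                        };
                        result[reader.GetInt32(0)] = builder.ConnectionString;
                    }
                }
            }
            return result;
        }
    }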
UPDATE:
After my discussion with @Neil, I want to point out that my method assumes a singleton database connection. If you don't do this, then it would be silly to follow my advice.
Scaling is a complex issue. However, why are you not scaling the web tier as well? Then the connection pooling is limited to the web application.
edit:
I'm talking about the general case here. I know that pooling occurs at many levels, not just the IDbConnection (http://stackoverflow.com/questions/3526617/are-ado-net-2-0-connection-pools-pre-application-domain-or-per-process). I was wondering whether the questioner had considered scaling at the web application level.
I've got a database server that I am unable to connect to using the credentials I've been provided. However, on the staging version of the same server, there's a linked server that points to the production database. Both the staging server and the linked server have the same schema.
I've been reassured that I should expect to be able to connect to the live server before we go live. Unfortunately, I've reached a point in my development where I need more than the token sample records that are currently in the staging database. So, I was hoping to connect to the linked server.
Thus far, my development against this schema has been against the staging server itself, using SubSonic objects. That all works fine.
I can connect via SQL Server Management Studio to that linked server and execute my queries directly. I can also execute "manual" queries in C# against the linked server by having my connection string point at the staging server and running my queries as
SELECT * FROM OpenQuery([LINKEDSERVER],'QUERY')
However, the Subsonic objects are what's enabling me to bring this project in on time and under budget, so I'm not looking to do straight queries in my code.
What I'm looking for is whether there's a way to state the connection string to the linked server. I've looked at lots of forum entries, etc. on the topic and most of the answers seem to completely gloss over the "linked server" portion of the question, focusing on basic connection string syntax.
I don't believe that you can access a linked server directly from an application without the OpenQuery syntax. Depending on the complexity of your schema, it might make sense to write a routine or sproc to populate your staging database with data from your live database.
You might also consider looking at Redgate's SQL Data Generator or any other data generation tool. Redgate's is pretty easy to use.
One other idea: can you get a backup of the live database that you can install in development to do your testing? If it's just data for development and testing that you seek, you probably want to stay away from connecting to your production database at all.
Create testing stored procedures on server B that reference the data on server A via the linked server. For example, if your regular sproc references a table on server B, say:
databaseA.dbo.tableName
then use the linked server name to reference the same database/table on server A:
linkedServerName.databaseA.dbo.tableName
If server A is identical in its database/table/column names, then you will be able to do this with some quick find/replace work.
Creating a linked server from .NET doesn't make much sense, since a linked server is nothing but a connection from one SQL Server to another server (SQL, file, Excel, Sybase, etc.); in essence it is just a connection string (though you can impersonate and configure some other options when creating a linked server).
One way is to create two connection strings and access the appropriate database when required.
The second option is to create a connection for database A only and create a linked server for database B inside database A.
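A minimal sketch of the first option, with two connection strings (the server, database, and security settings are placeholders; in practice they would live in web.config or app.config):

    using System.Data;
    using System.Data.SqlClient;

    class TwoDatabaseExample
    {
        const string connStringA = "Data Source=ServerA;Initial Catalog=DatabaseA;Integrated Security=SSPI";
        const string connStringB = "Data Source=ServerA;Initial Catalog=DatabaseB;Integrated Security=SSPI";

        public static DataTable Query(string connectionString, string sql)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table); // Fill opens and closes the connection itself
                return table;
            }
        }

        public static void Demo()
        {
            var a = Query(connStringA, "SELECT TOP 10 * FROM dbo.SomeTableInA"); // assumed table names
            var b = Query(connStringB, "SELECT TOP 10 * FROM dbo.SomeTableInB");
        }
    }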