Let's say I have a dataset in an ASP.NET website (.NET 3.5) with 5 tables, each has roughly 30,000 rows and an average of 12 columns. I want to insert all of the data from the dataset into 5 very-similar-but-not-quite-identical tables in SQL Server 2008. I also want to use LINQ (personal preference - trying to learn something new).
Is it as simple as iterating through the dataset and, for each row, creating a new instance of the associated class, initializing its data with the dataset's row, adding it to the data model, and then doing one giant SubmitChanges at the end?
Are there better ways of doing this with LINQ? Or is this the de-facto standard?
Creating objects and inserting them is fine. But to avoid a gigantic commit at the end, you might want to perform a SubmitChanges() every 100 rows or so.
Alternatively, you could get a copy of Red Gate's "SQL Data Compare" utility if you have the cash. Then you never have to write one of these things again. :-)
Edit 2010-04-19: If you want to use a transaction, I think you should still use my approach instead of a single SubmitChanges(). In this case you'll want to explicitly manage your own transaction in L2S (see http://msdn.microsoft.com/en-us/library/bb386995.aspx). Run your queries in a try/catch and roll back the transaction if you get any failures.
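For what it's worth, here is a rough, untested sketch of what I mean. MyDataContext, Customer and the column names are placeholders for your own LINQ to SQL model (requires System.Data and System.Data.Linq; InsertOnSubmit assumes .NET 3.5 SP1):

// Batched SubmitChanges() inside an explicitly managed LINQ to SQL transaction.
using (var db = new MyDataContext())
{
    db.Connection.Open();
    db.Transaction = db.Connection.BeginTransaction();
    try
    {
        int pending = 0;
        foreach (DataRow row in dataSet.Tables["Customers"].Rows)
        {
            db.Customers.InsertOnSubmit(new Customer
            {
                Name  = (string)row["Name"],
                Email = (string)row["Email"]
            });

            if (++pending % 100 == 0)
                db.SubmitChanges();   // flush every 100 rows instead of one giant commit
        }
        db.SubmitChanges();           // flush the remainder
        db.Transaction.Commit();
    }
    catch
    {
        db.Transaction.Rollback();    // nothing is persisted if any batch fails
        throw;
    }
}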
Two last bits of advice:
Make sure your ASP.NET timeout is set high enough.
Consider printing out some kind of progress indicator. It makes running these kinds of long-running jobs much more palatable.
Linq To Sql doesn't natively have anything like the SqlBulkCopy class. I did a quick search and it looks like there's an implementation for Linq To Sql. No clue if it is any good but it can't hurt to check it out.
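If you're willing to step outside LINQ to SQL for the load itself, plain ADO.NET SqlBulkCopy is only a few lines. A rough sketch (connection string, table and column names are made up, and it assumes System.Data.SqlClient):

// Bulk copy a DataTable from the DataSet straight into the destination table.
using (var bulk = new SqlBulkCopy("Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
{
    bulk.DestinationTableName = "dbo.Customers";

    // Map dataset columns to the slightly different destination columns.
    bulk.ColumnMappings.Add("Name", "FullName");
    bulk.ColumnMappings.Add("Email", "EmailAddress");

    bulk.BatchSize = 1000;
    bulk.WriteToServer(dataSet.Tables["Customers"]);   // takes the DataTable directly
}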
DataContext.ExecuteCommand can be used with an arbitrary SQL statement, so you could do an INSERT ... SELECT.
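Roughly like this, assuming the source rows already live in a staging table on the server (the table names, columns and batchId variable are made up for illustration):

// Set-based copy on the server, issued through the LINQ to SQL DataContext.
using (var db = new MyDataContext())
{
    int rows = db.ExecuteCommand(
        @"INSERT INTO dbo.Customers (FullName, EmailAddress)
          SELECT Name, Email
          FROM   dbo.Staging_Customers
          WHERE  ImportBatchId = {0}", batchId);   // {0} is passed as a parameter
}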
I have a handful of records, 5-10, that I need to take from the user and run a SQL merge statement against. I can think of three ways of accomplishing this.
.NET loop processing one record at a time - wondering what the performance of this would be compared to the other options. I would think it is pretty good given connection pooling?
SQL table type (table-valued parameter) - I have seen these used elsewhere in the project, but as I learned first hand they are a pain when the table definition needs to change, since you have to drop the entire type and recreate it.
XML variable - I have used this in the past. I like it because it is flexible when the definition of the object changes. The .NET side is simple with XmlSerializer, but I am sure there is a performance hit from calling XmlSerializer, and then on the SQL side from using the .nodes() function.
Does anyone know, from personal experience or some reference such as a white paper, which method is the most efficient when inserting/updating records in a database from a .NET application?
For 5-10 items you can use a "classic" multi-row INSERT.
INSERT INTO MyTable
(ColumnA, ColumnB, ColumnC)
VALUES
(@ColumnA_0, @ColumnB_0, @ColumnC_0),
(@ColumnA_1, @ColumnB_1, @ColumnC_1),
(@ColumnA_2, @ColumnB_2, @ColumnC_2)
This is MUCH faster than XML or a DataTable, and it is faster than isolated inserts in a loop.
The limit on the number of rows in a single VALUES list is 1000. If you want more, you need to execute additional statements.
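For completeness, a rough sketch of building that statement from .NET (untested; MyTable, the column names and the items collection are placeholders, and it assumes System.Text and System.Data.SqlClient):

// Build one parameterized multi-row INSERT for a small batch of items.
var sql = new StringBuilder("INSERT INTO MyTable (ColumnA, ColumnB, ColumnC) VALUES ");
using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    for (int i = 0; i < items.Count; i++)
    {
        if (i > 0) sql.Append(", ");
        sql.AppendFormat("(@ColumnA_{0}, @ColumnB_{0}, @ColumnC_{0})", i);

        cmd.Parameters.AddWithValue("@ColumnA_" + i, items[i].A);
        cmd.Parameters.AddWithValue("@ColumnB_" + i, items[i].B);
        cmd.Parameters.AddWithValue("@ColumnC_" + i, items[i].C);
    }

    cmd.CommandText = sql.ToString();
    conn.Open();
    cmd.ExecuteNonQuery();   // one round trip for the whole batch
}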
I have a database that's been populated with data on a server. I've made some changes to the database models and I wish to merge the existing data with the new data without losing everything.
Do you know of an appropriate and no-nonsense method to achieve this?
Well, one method would be to set up replication from each of them to a single database. But that seems pretty time-intensive. I think you might have to write a program to do it (either .NET or SQL), because how would it know which record to take if you have two records with the same primary key?
If you know that there will never be any duplicates, you could just write sql statements to do it. But that's a pretty big if.
You might try something like http://www.red-gate.com/products/sql-development/sql-data-compare/ which will synchronize data and I believe it will also figure out what to do when there are two records. I've had very good luck with Red-Gate products.
Background: I am using an SQLite database in my Flex application. The size of the database is 4 MB and it has 5 tables:
Table 1 has 2,500 records
Table 2 has 8,700 records
Table 3 has 3,000 records
Table 4 has 5,000 records
Table 5 has 2,000 records.
Problem: Whenever I run a SELECT query on any table, it takes approximately 50 seconds to fetch the data. This has made the application quite slow and unresponsive while it fetches the data from the table.
How can I improve the performance of the SQLite database so that the time taken to fetch the data from the tables is reduced?
Thanks
As I said in a comment, without knowing what structures your database consists of and what queries you run against the data, there is nothing we can infer about why your queries take so much time.
However, here is some interesting reading about indexes: Use the Index, Luke!. It tells you what an index is, how you should design your indexes and what benefits you can expect.
Also, if you can post the queries and the table schemas and cardinalities (not the contents), maybe that could help.
Are you using asynchronous or synchronous execution modes? The difference between them is that asynchronous execution runs in the background while your application continues to run. Your application will then have to listen for a dispatched event and then carry out any subsequent operations. In synchronous mode, however, the user will not be able to interact with the application until the database operation is complete since those operations run in the same execution sequence as the application. Synchronous mode is conceptually simpler to implement, but asynchronous mode will yield better usability.
The first time you call SQLStatement.execute() on a SQLStatement instance, the statement is prepared automatically before executing. Subsequent calls will execute faster as long as the SQLStatement.text property has not changed. Reusing the same SQLStatement instance is better than creating new instances again and again. If you need to change your queries, then consider using parameterized statements.
You can also use techniques such as deferring what data you need at runtime. If you only need a subset of data, pull that back first and then retrieve other data as necessary. This may depend on your application scope and what needs you have to fulfill though.
Specifying the database with the table names will prevent the runtime from checking each database to find a matching table if you have multiple databases. It also prevents the runtime from choosing the wrong database. Do SELECT email FROM main.users; instead of SELECT email FROM users; even if you only have one single database. (main is automatically assigned as the database name when you call SQLConnection.open.)
If you happen to be writing lots of changes to the database (multiple INSERT or UPDATE statements), then consider wrapping them in a transaction. Changes will be made in memory by the runtime and then written to disk. If you don't use a transaction, each statement will result in multiple disk writes to the database file, which can be slow and consume a lot of time.
Try to avoid any schema changes. The table definition data is kept at the start of the database file. The runtime loads these definitions when the database connection is opened. Data added to tables is kept after the table definition data in the database file. If you make changes such as adding columns or tables, the new table definitions will be mixed in with table data in the database file. The effect of this is that the runtime has to read the table definition data from different parts of the file rather than just at the beginning. The SQLConnection.compact() method restructures the table definition data so it is at the beginning of the file, but its downside is that it can also consume a lot of time, more so if the database file is large.
Lastly, as Benoit pointed out in his comment, consider improving your SQL queries and the table structure you're using. It would be helpful to know whether your database structure and queries are the actual cause of the slow performance or not. My guess is that you're using synchronous execution; if you switch to asynchronous mode, you'll see better performance, but that doesn't mean it has to stop there.
The Adobe Flex documentation online has more information on improving database performance and best practices working with local SQL databases.
You could try indexing some of the columns used in the WHERE clause of your SELECT statements. You might also try minimizing usage of the LIKE keyword.
If you are joining your tables together, you might try simplifying the table relationships.
Like others have said, it's hard to get specific without knowing more about your schema and the SQL you are using.
How do I pass a dataset object to a stored procedure? The dataset comprises multiple tables and I'll need to be able to access them from within the SQL.
You can use a table-valued parameter for passing a single table in SQL Server 2008: http://msdn.microsoft.com/en-us/library/bb675163.aspx
or
refer to this article and use a SQL CLR procedure to pass a dataset: http://blogs.msdn.com/b/jpapiez/archive/2005/09/26/474059.aspx
It looks like you can do this with SQL Server 2008 or newer (at least with a DataTable). Here are the links:
http://www.eggheadcafe.com/community/aspnet/10/10138579/passing-dataset-to-stored-procedure.aspx
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
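A rough sketch of what the table-valued parameter call can look like from ADO.NET. The type, procedure and column names are made up; you would create the matching table type and procedure on the server yourself, e.g. CREATE TYPE dbo.CustomerTableType AS TABLE (Name nvarchar(100), Email nvarchar(256)) and a proc taking @Customers dbo.CustomerTableType READONLY:

// Pass one DataTable from the DataSet as a table-valued parameter.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.ImportCustomers", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    SqlParameter p = cmd.Parameters.AddWithValue("@Customers", dataSet.Tables["Customers"]);
    p.SqlDbType = SqlDbType.Structured;          // mark it as a table-valued parameter
    p.TypeName  = "dbo.CustomerTableType";       // the server-side table type

    conn.Open();
    cmd.ExecuteNonQuery();
}

Note that you pass one table per parameter, not the whole DataSet; for multiple tables you would use multiple table-valued parameters.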
As the article from MusiGenesis' answer states:
In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data need to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.Net developer recently informed me that there is a SQLBulkCopy object available in .Net to send multiple rows of data to SQL Server at once, the data still can not be passed to a stored proc.
At the risk of stating the obvious, here are two more approaches:
Parametrize your processing procedure
You might re-evaluate whether you truly need to pass a general table variable. While sometimes this cannot be avoided, the reason this is a late addition to SQL Server's feature set is partly that you can usually get around it by structuring your stored procedures and the flow of your data processing.
If you are able to 'parametrize' your process, then you should be able to let stored procedures retrieve the full dataset based on a limited number of parameters.
This will make the process less flexible, but it will also make it more controlled, which is not a bad thing (much as a database that interfaces with applications only at the level of stored procedures is more robust, this approach, by limiting flexibility, reduces the number of possible cases and consequently the number of possibly unhandled cases; read: security holes and general bugs).
Temp tables
Besides the above, there's always the approach with temp tables, which can be more or less complicated depending on the scope of sharing that you need on the data (sharing can be between db users, app users, connections, processes, etc.).
A nice side effect is that such an approach allows persistence of the process (which brings you closer to having undo, redo and the ability to continue interrupted work).
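A rough sketch of the simplest variant of the temp-table idea (illustrative names only): create a #temp table, bulk copy the rows into it, then call the proc on the same connection so it can see the table.

// Session-scoped temp table shared between the client upload and the proc.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var create = new SqlCommand(
        "CREATE TABLE #Incoming (Name nvarchar(100), Email nvarchar(256));", conn))
    {
        create.ExecuteNonQuery();
    }

    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "#Incoming";
        bulk.WriteToServer(dataSet.Tables["Customers"]);
    }

    // The proc reads from #Incoming; it sees the temp table because it runs
    // on the same connection that created it.
    using (var proc = new SqlCommand("dbo.ProcessIncoming", conn))
    {
        proc.CommandType = CommandType.StoredProcedure;
        proc.ExecuteNonQuery();
    }
}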
Is it quicker to make one trip to the database and bring back 3000+ rows, then manipulate them in .NET & LINQ, or quicker to make 6 calls bringing back a couple of hundred rows at a time?
It will entirely depend on the speed of the database, the network bandwidth and latency, the speed of the .NET machine, the actual queries etc.
In other words, we can't give you a truthful general answer. I know which sounds easier to code :)
Unfortunately this is the kind of thing which you can't easily test usefully without having an exact replica of the production environment - most test environments are somewhat different to the production environment, which could seriously change the results.
Is this for one user, or will many users be querying the data? The single database call will scale better under load.
Speed is only one consideration among many.
How flexible is your code? How easy is it to revise and extend when the requirements change? How easy is it for another person to read and maintain your code? How portable is your code? What if you change to a different DBMS, or a different programming language? Are any of these considerations important in your case?
Having said that, go for the single round trip if all other things are equal or unimportant.
You mentioned that the single round trip might result in reading data you don't need. If all the data you need can be described in a single result table, then it should be possible to devise a query that will get that result. That result table might deliver some result data in more than one row, if the query denormalizes the data. In that case, you might gain some speed by obtaining the data in several result tables, and composing the result yourself.
You haven't given enough information to know how much programming effort it will be to compose a single query or to compose the data returned by 6 queries.
As others have said, it depends.
If you know which 6 SQL statements you're going to execute beforehand, you can bundle them into one call to the database, and return multiple result sets using ADO or ADO.NET.
http://support.microsoft.com/kb/311274
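A rough sketch of that pattern with SqlDataReader.NextResult() (query text, connectionString and customerId are illustrative):

// Two SELECTs in one round trip; read the result sets back to back.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT * FROM Orders WHERE CustomerId = @id; " +
    "SELECT * FROM Invoices WHERE CustomerId = @id;", conn))
{
    cmd.Parameters.AddWithValue("@id", customerId);
    conn.Open();

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* first result set: Orders */ }

        reader.NextResult();                     // move to the second result set
        while (reader.Read()) { /* second result set: Invoices */ }
    }
}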
the problem I have here is that I need it all, I just need it displayed separately...
The answer to your question is 1 query for 3000 rows is better than 6 queries for 500 rows. (given that you are bringing all 3000 rows back regardless)
However, there's no way you're going (to want) to display 3000 rows at a time, is there? In all likelihood, irrespective of using Linq, you're going to want to run aggregating queries and get the database to do the work for you. You should hopefully be able to construct the SQL (or Linq query) to perform all required logic in one shot.
Without knowing what you're doing, it's hard to be more specific.
If you absolutely, positively need to bring back all the rows, then investigate the ToLookup() method for your LINQ IQueryable<T>. It's very handy for grouping results in non-standard ways.
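A quick sketch of what I mean (db.Tab, UserId and TypeId are placeholders for your own model):

// One query brings everything back; ToLookup() groups it in memory.
var byType = db.Tab
               .Where(t => t.UserId == 1)
               .ToLookup(t => t.TypeId);        // the single trip to the database happens here

var type2Rows = byType[2];                      // rows for typeid = 2, no extra query
var type5Rows = byType[5];                      // rows for typeid = 5, still no extra query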
Oh, and I highly recommend LINQPad (free) for trying out queries with Linq. It has loads of examples, and it also shows you the sql and lambda forms so you can familiarize yourself with Linq<->lambda form<->Sql.
Well, the answer is always "it depends". Do you want to optimize on the database load or on the application load?
My general answer in this case would be to use queries that are as specific as possible at the database level, and therefore make the 6 calls.
Thx
I was kind of thinking "ball park", but it sounds as though it's a choice thing...the difference is likely small.
I was thinking that getting all the data and manipulating it in .NET would be the best - I have nothing concrete to base this on (hence the question), I just tend to feel that calls to the DB are expensive, and if I know I need all the data...get it in one hit?!?
Part of the problem is that you have not provided sufficient information to give you a precise answer. Obviously, available resources need to be considered.
If you pull 3000 rows infrequently, it might work for you in the short term. However, if there are say 10,000 people that execute the same query (ignoring cache effects), this could become a problem for both the app and db.
Now in the case of something like pagination, it makes sense to pull in just what you need. But that would be a general rule to try to only pull what is necessary. It's much more elegant to use a scalpel instead of a broadsword. =)
If you are talking about a query that has already been run by SQL (so optimized by SQL Server), working with LINQ or a SqlDataReader might actually have the same performance.
The only difference will be "how hard will it be to maintain your code?"
LINQ doesn't send anything to the database until you ask for the result with ".ToList()" or ".ToArray()" or even ".Count()". LINQ dynamically builds your query, so it is exactly the same as having a SqlDataReader but with runtime verification.
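A tiny illustration of that deferred execution (db.Orders is a placeholder for your own DataContext table):

// Nothing hits the database until the query is enumerated.
var query = db.Orders.Where(o => o.Total > 100);   // no SQL sent yet

var list  = query.ToList();    // SQL is generated and executed here
var count = query.Count();     // a second, separate query is executed here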
Rather than speculating, why don't you try both and measure the results?
It depends
1) If your connector implementation precaches a lot of objects AND you have big rows (for example blobs, country polygons, etc.), you have a problem: you have to download a LOT of data. I once optimized code that had this problem; it was downloading megabytes of garbage all the time via localhost, and my software now runs 10 times faster because I made the precaching optional and turned it off.
2) If your rows are small and there is a good chance that you need to read through all 3000, you're better off with one big result set.
3) If you don't use prepared statements, every query has to be parsed! A big result set might be better.
Hope it helped
I always stick to the rule of "bring in what I need" and nothing more...the problem I have here is that I need it all, I just need it displayed separately.
So say...
I have a table with userid and typeid. I want to display all records with a userid, and display on the page in grids say separated by typeid.
At the moment I call a sproc that does "select field1, field2 from tab where userid=1",
then on the page I set the datasource of a grid to from t in tab where typeid=2 select t;
rather than calling a different sproc "select field1, field2 from tab where userid=1 and typeid=2" 6 times.
??
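For what it's worth, a rough sketch of that single-call approach (GetRowsForUser and gridByType are made-up names; gridByType would be something like a Dictionary<int, GridView> keyed by typeid):

// One database call for the userid, then split the rows per typeid in memory
// and bind each grid from its own group.
var allRows = GetRowsForUser(userId).ToList();      // the single trip to the database

var byType = allRows.ToLookup(r => r.TypeId);       // group in memory, no extra queries

foreach (var group in byType)
{
    gridByType[group.Key].DataSource = group.ToList();
    gridByType[group.Key].DataBind();
}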