ASP.NET: Merge with an existing database?

I have a database that's been populated with data on a server. I've made some
changes to the database models and I wish to merge the existing data with the new data without losing everything.
Do you know of an appropriate and no-nonsense method to achieve this?

Well, one method would be to set up replication from each of them into a single database, but that seems pretty time-intensive. I think you might have to write a program to do it (in either .NET or SQL), because otherwise how would it know which record to take when two records share the same primary key?
If you know that there will never be any duplicates, you could just write SQL statements to do it. But that's a pretty big if.
You might try something like http://www.red-gate.com/products/sql-development/sql-data-compare/ which will synchronize data, and I believe it will also figure out what to do when the same record exists on both sides. I've had very good luck with Red Gate products.
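If you do go the hand-written SQL route, here is a minimal sketch of the idea using a T-SQL MERGE fired from .NET. The database, table, and column names (OldDb, NewDb, Customers, Id, Name, Email) are made up for illustration, and MERGE itself requires SQL Server 2008 or later:

using System;
using System.Data.SqlClient;

class MergeSketch
{
    static void Main()
    {
        // MERGE decides per row whether to update the existing record or
        // insert the incoming one -- the "which record wins?" decision above.
        const string sql = @"
MERGE INTO NewDb.dbo.Customers AS target
USING OldDb.dbo.Customers AS source
    ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name, target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, Email) VALUES (source.Id, source.Name, source.Email);";

        using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            Console.WriteLine("{0} rows merged", cmd.ExecuteNonQuery());
        }
    }
}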

Related

SQL Performance from .NET: Insert via loop vs XML vs SQL table type

I have a handful of records, 5-10, that I need to take from the user and run a SQL MERGE statement against. I can think of three ways of accomplishing this.
.NET loop processing one record at a time: wondering how its performance compares with the other options. I would think it is pretty good, given connection pooling?
SQL table type (table-valued parameter): I have seen these used elsewhere in the project, but as I learned first-hand they are a pain when the table definition needs to change, since you have to drop the entire type and recreate it.
XML variable: I have used this in the past. I like it because it is flexible when the definition of the object changes, and the .NET side is simple with XmlSerializer. But there is probably a performance hit in calling XmlSerializer, and again on the SQL side in using the .nodes() function.
Does anyone know, from personal experience or from a reference such as a white paper, which method is the most efficient when inserting/updating records in a database from a .NET application?
For 5-10 items you can use a "classic" multi-row INSERT.
INSERT INTO MyTable
(ColumnA, ColumnB, ColumnC)
VALUES
(@ColumnA_0, @ColumnB_0, @ColumnC_0),
(@ColumnA_1, @ColumnB_1, @ColumnC_1),
(@ColumnA_2, @ColumnB_2, @ColumnC_2)
This is MUCH faster than XML or a DataTable, and faster than isolated inserts in a loop.
The limit is 1000 rows per INSERT statement; if you need more, execute additional statements.
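For what it's worth, here is a rough sketch of building that statement from .NET with one parameter set per record. The table, columns, and MyRecord type are placeholders, not anything from the question:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text;

public class MyRecord   // hypothetical shape of the user's 5-10 records
{
    public string A { get; set; }
    public string B { get; set; }
    public string C { get; set; }
}

public static class BatchInsert
{
    public static void Run(string connStr, IList<MyRecord> items)
    {
        var sql = new StringBuilder(
            "INSERT INTO MyTable (ColumnA, ColumnB, ColumnC) VALUES ");
        using (var conn = new SqlConnection(connStr))
        using (var cmd = conn.CreateCommand())
        {
            for (int i = 0; i < items.Count; i++)
            {
                if (i > 0) sql.Append(", ");
                // one row constructor plus three parameters per record
                sql.AppendFormat("(@A_{0}, @B_{0}, @C_{0})", i);
                cmd.Parameters.AddWithValue("@A_" + i, items[i].A);
                cmd.Parameters.AddWithValue("@B_" + i, items[i].B);
                cmd.Parameters.AddWithValue("@C_" + i, items[i].C);
            }
            cmd.CommandText = sql.ToString();
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Keeping the values parameterized (rather than concatenating them into the SQL) avoids injection and lets SQL Server reuse the plan for a given batch size.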

Axapta: How do temporary or InMemory tables work with multiple simultaneous users?

Could you help me understand this approach?
I have to run a query that performs some operations. I don't want to use containers, since I have read that temporary tables are faster, at least for my case, but I don't understand how they work.
The web service that inserts into the temporary table will be consumed by several people at the same time, with different values for each user (which is exactly why I want to do this). But I don't understand how the temporary table will manage the data for each user. Since it is only one table, if one user calls the web service the table will contain some rows, but another user could call the web service at the same time and fill the table with other values. How does that work?
Is the temporary table kept separately for each user, or how does it behave in my case?
Thanks in advance.
Both kinds of temporary table are scoped: when the table variable (buffer) goes out of scope, the table is dropped. So each user, or each web service call, works with its own instance of the table.
You can find specs here:
https://msdn.microsoft.com/en-us/library/gg845661.aspx
https://msdn.microsoft.com/en-us/library/bb314749.aspx

ASP.NET: Create static collection for table data that doesn't change

I'm creating an ASP.NET MVC app that uses EF to perform all DB tasks.
There are a couple of related tables in the database that never change, and I was thinking of creating a static collection that retrieves the data from those two tables (a few hundred records) the first time it is requested and keeps it in memory, to avoid hitting the database every time.
Since I've read several people saying that you should avoid static objects in ASP.NET, I was wondering whether this is bad practice or acceptable for scenarios like this one (read-only and a small amount of data, which should prevent concurrency problems).
I would also like to know if there are better alternatives.
Thanks.
I have done exactly what you are planning on doing for exactly the same reason. It has been working well for several years already.
Just make sure that you get the initialization of the data right and you should be fine. When initializing, keep in mind:
Don't use locking if at all possible (or your app will deadlock two minutes before you leave on vacation).
You must not, under any circumstances, let a static constructor fail.
Make sure no consumer of your cache has the ability to modify it.
If the data isn't really static and you would actually need to re-read it fairly often, then this might not be the best solution.
Just in case you're wondering, I've used this approach to cache, for instance, country data, currency data (base data, not rates), and sales unit data (pcs, m, kg, etc.). These are all stored in a database but almost never change.
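For reference, here is a minimal sketch of that pattern as I understand it. The Country type and MyDbContext are invented stand-ins for your EF entities and context; the point is the Lazy<T> initialization and the read-only wrapper:

using System;
using System.Collections.ObjectModel;
using System.Linq;
using System.Threading;

public class Country   // hypothetical cached entity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CountryCache
{
    // Lazy<T> with PublicationOnly is thread-safe without explicit locks,
    // and a failed load is NOT cached, so a transient DB error gets
    // retried on the next access instead of poisoning the app for good.
    // The work also happens outside any static constructor.
    private static readonly Lazy<ReadOnlyCollection<Country>> _countries =
        new Lazy<ReadOnlyCollection<Country>>(Load,
            LazyThreadSafetyMode.PublicationOnly);

    public static ReadOnlyCollection<Country> Countries
    {
        get { return _countries.Value; }
    }

    private static ReadOnlyCollection<Country> Load()
    {
        using (var db = new MyDbContext())   // hypothetical EF context
        {
            // ReadOnlyCollection keeps consumers from modifying the cache.
            return new ReadOnlyCollection<Country>(db.Countries.ToList());
        }
    }
}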
Using static objects is not a very good approach. I would use something like RavenDB, which can store your settings or DB data in memory. It has a very small footprint, is very fast, and has full LINQ support.

Use LINQ to insert data from a DataSet into SQL

Let's say I have a DataSet in an ASP.NET website (.NET 3.5) with 5 tables, each of which has roughly 30,000 rows and an average of 12 columns. I want to insert all of the data from the DataSet into 5 very-similar-but-not-quite-identical tables in SQL Server 2008. I also want to use LINQ (personal preference; I'm trying to learn something new).
Is it as simple as iterating through the DataSet and, for each row, creating a new instance of the associated class, initializing it with that row's data, adding it to the data model, and then doing one giant SubmitChanges at the end?
Are there better ways of doing this with LINQ? Or is this the de-facto standard?
Creating objects and inserting them is fine. But to avoid a gigantic commit at the end, you might want to perform a SubmitChanges() every 100 rows or so.
Alternately you could get a copy of Red Gate's "SQL Data Compare" utility if you have the cash. Then you never have to write one of these things again. :-)
Edit 2010-04-19: If you want to use a transaction, I think you should still use my approach instead of a single SubmitChanges(). In this case you'll want to explicitly manage your own transaction in L2S (see http://msdn.microsoft.com/en-us/library/bb386995.aspx). Run your queries in a try/catch and roll back the transaction if you get any failures.
Two last bits of advice:
Make sure your ASP.NET timeout is set high enough.
Consider printing out some kind of progress indicator. It makes these kinds of long-running jobs much more palatable.
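A rough sketch of the batched approach with an explicitly managed transaction, per the MSDN link above. MyDataContext, Widgets, Widget, and the column names are invented; the real names come from your .dbml mapping:

using System.Collections.Generic;
using System.Data;

// db is a LINQ to SQL DataContext; rows come from one DataSet table.
static void Import(MyDataContext db, IEnumerable<DataRow> rows)
{
    db.Connection.Open();
    db.Transaction = db.Connection.BeginTransaction();
    try
    {
        int pending = 0;
        foreach (DataRow row in rows)
        {
            db.Widgets.InsertOnSubmit(new Widget
            {
                Name  = (string)row["Name"],
                Price = (decimal)row["Price"]
            });
            if (++pending % 100 == 0)
                db.SubmitChanges();   // flush every 100 rows
        }
        db.SubmitChanges();           // flush the remainder
        db.Transaction.Commit();      // all batches commit or none do
    }
    catch
    {
        db.Transaction.Rollback();
        throw;
    }
}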
LINQ to SQL doesn't natively have anything like the SqlBulkCopy class. I did a quick search and it looks like there is an implementation out there for LINQ to SQL. No clue whether it is any good, but it can't hurt to check it out.
DataContext.ExecuteCommand can be used with an arbitrary SQL statement, so you could do an INSERT ... SELECT.
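For example (table names invented; the {0} placeholder becomes a real SQL parameter, not string concatenation):

// Set-based INSERT ... SELECT, executed entirely on the server.
db.ExecuteCommand(
    "INSERT INTO TargetTable (Name, Price) " +
    "SELECT Name, Price FROM StagingTable WHERE BatchId = {0}",
    batchId);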

Which is fastest? Data retrieval

Is it quicker to make one trip to the database and bring back 3,000-plus rows, then manipulate them in .NET with LINQ, or to make 6 calls bringing back a couple of hundred rows at a time?
It will entirely depend on the speed of the database, the network bandwidth and latency, the speed of the .NET machine, the actual queries etc.
In other words, we can't give you a truthful general answer. I know which sounds easier to code :)
Unfortunately this is the kind of thing you can't usefully test without an exact replica of the production environment; most test environments differ from production in ways that could seriously change the results.
Is this for one user, or will many users be querying the data? The single database call will scale better under load.
Speed is only one consideration among many.
How flexible is your code? How easy is it to revise and extend when the requirements change? How easy is it for another person to read and maintain? How portable is it? What happens if you change to a different DBMS, or a different programming language? Are any of these considerations important in your case?
Having said that, go for the single round trip if all other things are equal or unimportant.
You mentioned that the single round trip might result in reading data you don't need. If all the data you need can be described in a single result table, then it should be possible to devise a query that produces exactly that result. If the query denormalizes the data, the result table may repeat some values across rows; in that case you might gain some speed by fetching several result tables and composing the final result yourself.
You haven't given enough information to know how much programming effort it will be to compose a single query or to compose the data returned by 6 queries.
As others have said, it depends.
If you know which 6 SQL statements you're going to execute beforehand, you can bundle them into one call to the database, and return multiple result sets using ADO or ADO.NET.
http://support.microsoft.com/kb/311274
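A quick sketch of reading several result sets from one batch; the queries and names are illustrative only:

using System.Data.SqlClient;

static void LoadUserData(string connStr, int userId)
{
    const string batch =
        "SELECT * FROM Orders   WHERE UserId = @id; " +
        "SELECT * FROM Invoices WHERE UserId = @id;";

    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(batch, conn))
    {
        cmd.Parameters.AddWithValue("@id", userId);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            do
            {
                while (reader.Read())
                {
                    // consume the current result set's rows here
                }
            } while (reader.NextResult());   // advance to the next SELECT
        }
    }
}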
The problem I have here is that I need it all; I just need it displayed separately...
The answer to your question is 1 query for 3000 rows is better than 6 queries for 500 rows. (given that you are bringing all 3000 rows back regardless)
However, there's no way you're going (to want) to display 3000 rows at a time, is there? In all likelihood, irrespective of using LINQ, you're going to want to run aggregating queries and get the database to do the work for you. You should be able to construct the SQL (or LINQ query) to perform all the required logic in one shot.
Without knowing what you're doing, it's hard to be more specific.
* If you absolutely, positively need to bring back all the rows, then investigate the ToLookup() method on your LINQ IQueryable<T>. It's very handy for grouping results in non-standard ways.
Oh, and I highly recommend LINQPad (free) for trying out LINQ queries. It has loads of examples, and it also shows you the SQL and lambda forms, so you can familiarize yourself with how the LINQ, lambda, and SQL forms correspond.
Well, the answer is always "it depends". Do you want to optimize for database load or for application load?
My general answer in this case would be to make the queries as specific as possible at the database level, and therefore use the 6 calls.
Thanks.
I was kind of thinking "ball park", but it sounds as though it's a judgment call and the difference is likely small.
I was thinking that getting all the data and manipulating it in .NET would be best. I have nothing concrete to base this on (hence the question); I just tend to feel that calls to the DB are expensive, so if I know I need all the data, get it in one hit?!
Part of the problem is that you have not provided sufficient information to give you a precise answer. Obviously, available resources need to be considered.
If you pull 3,000 rows infrequently, it might work for you in the short term. However, if there are, say, 10,000 people executing the same query (ignoring cache effects), this could become a problem for both the app and the DB.
Now, in a case like pagination, it makes sense to pull in just what you need. As a general rule, try to pull only what is necessary. It's much more elegant to use a scalpel instead of a broadsword. =)
If you are talking about a query that SQL Server has already run (and therefore already optimized), working with LINQ or a SqlDataReader might actually give you the same performance.
The only difference will be "how hard will it be to maintain your code?"
LINQ doesn't send anything to the database until you ask for the result with .ToList(), .ToArray(), or even .Count(). LINQ builds your query dynamically, so it is essentially the same as using a SqlDataReader, but with runtime verification.
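A small illustration of that deferred execution (db and Orders are assumed names):

// No SQL is sent while the query is being composed...
var query = db.Orders.Where(o => o.UserId == userId);
query = query.Where(o => o.Total > 100m);

// ...only here does a single SELECT actually hit the database.
var results = query.ToList();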
Rather than speculating, why don't you try both and measure the results?
It depends:
1) If your connector implementation precaches a lot of objects AND you have big rows (for example blobs, country polygons, etc.), you have a problem: you have to download a LOT of data. I once optimized code that had this problem; it was downloading several megabytes of garbage all the time via localhost, and my software now runs 10 times faster because I made the precaching switchable and turned it off.
2) If your rows are small and there is a good chance you will need to read through all 3000, you're better off with one big result set.
3) If you don't use prepared statements, every query has to be parsed! A big result set might be better here too.
Hope this helps.
I always stick to the rule of "bring in what I need" and nothing more. The problem I have here is that I need it all; I just need it displayed separately.
So, say I have a table with a userid and a typeid. I want to fetch all records for a userid and display them on the page in grids, separated by typeid.
At the moment I call a sproc that does "select field1, field2 from tab where userid=1", then on the page set the datasource of each grid with from t in tab where typeid=2 select t, rather than calling a different sproc ("select field1, field2 from tab where userid=1 and typeid=2") 6 times. Does that make sense?
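That scenario fits the single call plus ToLookup() suggested above; a sketch, with entity and grid names assumed:

// One round trip: every row for this user.
var rows = db.Tabs.Where(t => t.UserId == 1).ToList();

// Group in memory by typeid; no further database calls.
var byType = rows.ToLookup(t => t.TypeId);

grid1.DataSource = byType[1];   // one grid per typeid
grid2.DataSource = byType[2];
grid1.DataBind();
grid2.DataBind();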
