We have an ASP.NET web app that needs to access a database server quite frequently.
Since we need to make multiple queries to the database, our current approach is to fetch all the data in one go when we connect and store it as a DataSet. This saves us from making multiple round trips to the database.
However, we can't use database features like indexes or case-insensitive searches on a DataSet/DataView, and this is now leading to poor performance.
So I'm wondering whether there is any way to improve the performance. Personally, I don't like the idea of storing all the data in the web application, but being a novice in ASP.NET I can't think of a specific solution.
Any help in fixing this is really appreciated.
As a rule, talking to the database server isn't going to cause you a performance issue. It may be that the scripts you are using are slow and that is your issue.
Do you have a script that is taking a long time to run, or a huge data set (millions of rows)?
Applications quite often make 5-10 DB calls per page and perform perfectly well.
A DataReader can provide you access to data in a very efficient manner, and in most cases it's actually faster than the DataSet.
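For example, here is a minimal sketch of streaming rows with a SqlDataReader instead of buffering everything in a DataSet (the connection string, table, and column names below are placeholders, not taken from the question):

    using System.Data.SqlClient;

    // Hypothetical connection string and query; substitute your own.
    const string connectionString = "Server=.;Database=MyAppDb;Integrated Security=true";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Id, Name FROM Products WHERE CategoryId = @categoryId", connection))
    {
        command.Parameters.AddWithValue("@categoryId", 42);
        connection.Open();

        // The reader streams rows forward-only; only the current row is held
        // in memory, unlike a DataSet, which buffers the whole result.
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                int id = reader.GetInt32(0);
                string name = reader.GetString(1);
                // ... use the row ...
            }
        }
    }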
I need to access some data on my ASP.NET website. The data relates to around 50 loan providers.
I could simply build it into the web page for now; however, I know that I will need to re-use it soon, so it's probably better to make it more accessible.
The data will probably only change once in a while, maybe once a month at most. I was looking at the best method of storing the data - database/XML file - and then how to persist it in my site (the cache, perhaps).
I have very little experience so would appreciate any advice.
It's hard to beat a database, and by placing it there, you could easily access it from anywhere you wanted to reuse it. Depending on how you get the updates and what DBMS you are using, you could use something like SSIS (for MS SQL Server) to automate updating the data.
ASP.NET also has a robust API for interacting with a database and using it as a datasource for many of its UI structures.
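If you do go the database route and want to avoid querying it on every request, one common pattern is to load the data once and keep it in the ASP.NET cache with an expiration. A rough sketch, assuming a hypothetical LoanProviders table and connection string:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web;

    public static class LoanProviderData
    {
        // Hypothetical connection string and table name.
        private const string ConnectionString =
            "Server=.;Database=SiteDb;Integrated Security=true";

        public static DataTable GetProviders()
        {
            var cached = HttpRuntime.Cache["LoanProviders"] as DataTable;
            if (cached != null)
                return cached;

            var table = new DataTable();
            using (var connection = new SqlConnection(ConnectionString))
            using (var adapter = new SqlDataAdapter(
                "SELECT Id, Name, Rate FROM LoanProviders", connection))
            {
                adapter.Fill(table);
            }

            // Re-read from the database at most once per hour; since the data
            // only changes about once a month, even a day would be reasonable.
            HttpRuntime.Cache.Insert("LoanProviders", table, null,
                DateTime.UtcNow.AddHours(1),
                System.Web.Caching.Cache.NoSlidingExpiration);

            return table;
        }
    }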
Relational databases are tools for storing data when access to the data needs to be carefully controlled to ensure that it is atomic, consistent, isolated, and durable (ACID). To accomplish this, databases include significant additional infrastructure overhead and processing logic. If you don't need this overhead, why subject your system to it? There is a broad range of other data storage options at your disposal that might be more appropriate, and they should at least be considered in your decision process.
Using ASP.NET, you have access to several other options, including text files, custom configuration files (stored as XML), custom XML, and .NET classes serialized to binary or XML files. The fact that your data changes so infrequently may make one of these options more appropriate. Using one of these options also reduces system coupling: functions dependent on this data no longer depend on the existence of a functioning database.
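As one concrete illustration of the last option, a small class can be serialized to and from an XML file with XmlSerializer. The LoanProvider type and the file path are placeholders for whatever your data actually looks like:

    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    public class LoanProvider
    {
        public string Name { get; set; }
        public decimal Rate { get; set; }
    }

    public static class LoanProviderFile
    {
        private static readonly XmlSerializer Serializer =
            new XmlSerializer(typeof(List<LoanProvider>));

        public static void Save(List<LoanProvider> providers, string path)
        {
            using (var stream = File.Create(path))
                Serializer.Serialize(stream, providers);
        }

        public static List<LoanProvider> Load(string path)
        {
            using (var stream = File.OpenRead(path))
                return (List<LoanProvider>)Serializer.Deserialize(stream);
        }
    }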
I need to store the application settings somewhere, but can't find a satisfying solution. Read-only settings are pretty easy to store in web.config, but what about settings for application administration that should be editable through a web page? Writing to web.config doesn't seem to be a good idea. I have considered storing the settings in a custom XML file, but sensitive information in the settings seems to be a problem, and if multiple users modify the settings at the same time, some kind of file locking has to be involved. Now I am inclined to store the app settings in an MS SQL database; it seems like a secure and scalable solution, but it feels wrong to have a table that stores just one row - the settings. What's your opinion? How would you design that?
Are there any ready to go .NET solutions for storing dynamic web app settings?
Your question is so subjective that I don't even know why I am answering it instead of voting to close. But anyway, a database is a good place. And if you are bored and tired of relational data there are great NoSQL databases out there such as MongoDB and RavenDB that will make this very easy. And if you want a very fast database Redis could be worth checking out.
Storing things in files in a web application is far more difficult than it might look at first. If it is read-only, then web.config could indeed be a good place. But once you start writing, you will have to take into account that a web application is a multithreaded environment where you will have to synchronize access to this file. What looked at first like an easy solution could quickly turn into a nightmare if you want to design it properly. That's why I think a database is a good solution, as it gives you concurrency, security, atomicity, data integrity, ...
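To give an idea of the synchronization you would be signing up for, here is a minimal sketch (my own, not from the answer) that guards an XML settings file with a ReaderWriterLockSlim - and even this ignores multi-server deployments, where a per-process lock no longer helps:

    using System.IO;
    using System.Threading;
    using System.Xml.Linq;

    public static class SettingsFile
    {
        private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
        private const string Path = "App_Data/settings.xml"; // hypothetical path

        public static string Read(string key)
        {
            Lock.EnterReadLock();
            try
            {
                var doc = XDocument.Load(Path);
                return (string)doc.Root.Element(key);
            }
            finally { Lock.ExitReadLock(); }
        }

        public static void Write(string key, string value)
        {
            Lock.EnterWriteLock();
            try
            {
                var doc = XDocument.Load(Path);
                doc.Root.SetElementValue(key, value);
                doc.Save(Path);
            }
            finally { Lock.ExitWriteLock(); }
        }
    }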
I absolutely think that storing settings of a dynamic nature in the database is the right way. Don't feel bad about having one simple table; it can save you a lot of headaches. If you code it smartly you can really benefit from it (but that depends on the type of values you want to store). The only problem with the DB is that someone might modify values directly in the database, but that can easily be handled. For example, I have a "configuration-values" class that I populate from the database on startup and put into the cache with a timeout. After a while I lazily reload it, which catches situations like the one I mentioned above. I hope that makes sense.
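A rough sketch of the kind of class that answer describes - the class name, the AppSettings table, and the five-minute timeout are my own assumptions, not something from the answer:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class ConfigurationValues
    {
        private static Dictionary<string, string> _values;
        private static DateTime _loadedAt;
        private static readonly TimeSpan Timeout = TimeSpan.FromMinutes(5);
        private static readonly object Sync = new object();

        // Hypothetical connection string and table layout (Key, Value columns).
        private const string ConnectionString =
            "Server=.;Database=AppDb;Integrated Security=true";

        public static string Get(string key)
        {
            lock (Sync)
            {
                // Lazily reload after the timeout so direct edits to the table
                // are eventually picked up.
                if (_values == null || DateTime.UtcNow - _loadedAt > Timeout)
                {
                    _values = LoadFromDatabase();
                    _loadedAt = DateTime.UtcNow;
                }
                return _values.TryGetValue(key, out var value) ? value : null;
            }
        }

        private static Dictionary<string, string> LoadFromDatabase()
        {
            var result = new Dictionary<string, string>();
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT [Key], [Value] FROM AppSettings", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        result[reader.GetString(0)] = reader.GetString(1);
                }
            }
            return result;
        }
    }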
A customer has a web-based inventory management system. The system is proprietary and complicated: it has around 100 tables in the DB with complex relationships between them, and ~1,500,000 items.
The customer is reorganising his processes and now needs to make massive updates and manipulations to the data (data changes only, no structural changes). The online screens do not permit such work, since they were designed at the beginning without this requirement in mind.
The database is MS SQL Server 2005, and the application is ASP.NET running on IIS.
One solution is to build new screens for him where he could visualize the data in grids and do the required job on a large number of records. This would let us reuse the already existing functions that deal with single items (we just need to implement a loop). At the moment the customer is aware of 2 kinds of such massive manipulations he wants to do, but says there will be others. This will require design, coding, and testing every time we get a request.
However, the customer's needs are urgent because of some regulatory requirements, so I am wondering whether it would be more efficient to use some kind of mapping between MS SQL and Excel or Access to expose the needed information, make the changes in Excel or Access, then save them back to the DB, perhaps using SSIS to do this.
I am not familiar with SSIS or other technologies that do such things, which is why I can't judge whether the second solution is really more efficient and better than the first. Of course the second solution will also require some work and testing, but will it be quicker and less expensive?
The other question is: are there any other ways to do this?
Any ideas will be greatly appreciated.
Either way, you are going to need testing.
Say you export 40000 products to Excel, he re-organizes them, and then you bring them back into a staging table (or tables) and apply the changes to your SQL tables. Since Excel is basically a freeform system, what happens if he introduces invalid situations? Your update will need to detect it, fail and roll back, or handle it in some specified way.
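As a hedged sketch of that staging approach: bulk-load the edited rows into a staging table with SqlBulkCopy, validate, and apply the update inside a transaction so it can be rolled back. The table names, column names, and the validation rule are invented for illustration:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class StagingImport
    {
        public static void Apply(DataTable editedRows, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // 1. Bulk-load the Excel export into a staging table.
                using (var bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "dbo.Items_Staging";
                    bulkCopy.WriteToServer(editedRows);
                }

                using (var transaction = connection.BeginTransaction())
                {
                    // 2. Validate: count staging rows that violate a rule
                    //    (here, a made-up "quantity must not be negative" rule).
                    var check = new SqlCommand(
                        "SELECT COUNT(*) FROM dbo.Items_Staging WHERE Quantity < 0",
                        connection, transaction);
                    int invalid = (int)check.ExecuteScalar();
                    if (invalid > 0)
                    {
                        transaction.Rollback();
                        throw new InvalidOperationException(
                            invalid + " invalid rows; nothing was applied.");
                    }

                    // 3. Apply the changes to the real table.
                    var update = new SqlCommand(
                        @"UPDATE i SET i.Quantity = s.Quantity, i.Location = s.Location
                          FROM dbo.Items i
                          JOIN dbo.Items_Staging s ON s.ItemId = i.ItemId",
                        connection, transaction);
                    update.ExecuteNonQuery();

                    transaction.Commit();
                }
            }
        }
    }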
Anyway, both your suggestions can be made workable.
Personally, for large changes like this, I prefer to have an experienced database developer develop the changes in straight SQL (either hardcoding or table-driven), test it on production data in a test environment (doing a table compare between before and after) and deploy the same script to production. This also allows the use of the existing stored procedures (you are using SPs, to enforce a consistent interface to this complex database, right?) so that basic database logic already in place is simply re-used.
I doubt Excel will be able to deal with 1.5 million rows.
When you say to visualise the data in grids - how will your customer make the changes? Manually, or is there some automation behind it? I would strongly encourage automation (since you know about only 2 types of changes at the moment). Maybe even a simple standalone "converter" application - don't make it part of the main program - otherwise it will be too tempting for them in the future to manually edit data straight in the DB tables.
Here is a strategy that I think will get you from A to B in the shortest amount of time.
"One solution is to build new screens for him where he could visualize the data in grids and do the required job on a large number of records."
It's rarely a good idea to build an interface into the main system that you will only use once or twice. It takes extra time and you'll probably spend more time maintaining it than using it.
"This would let us reuse the already existing functions that deal with single items (we just need to implement a loop)."
Hack together your own crappy little interface in a .NET Application, whose sole purpose is to fulfill this one task. Keep it around in your "stuff I might use later" folder.
Since you're dealing with such massive amounts of data, make sure you're not running your app from a remote location.
Obtain a copy of SQL 2005 and install it on a virtualization layer. Copy your production database over to this virtualized SQL Server. Take a snapshot of your virtualized copy before you begin testing. Write and test your app against this virtualized copy. Roll back to your original snapshot each time you test. Keep changing your code, testing, and rolling back until your app can flawlessly perform the desired changes.
Then, when the time comes to change the production database, you can sit back and relax while your app does all of the changes. The process will likely take a while, so add some logging so you can check the status as it runs.
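A minimal sketch of what such a throwaway console app might look like, assuming the existing single-item logic is reachable from code (InventoryService.UpdateItem below is a stand-in for it, and the selection query and log format are invented):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.IO;

    // Stand-in for the system's existing single-item business logic.
    public static class InventoryService
    {
        public static void UpdateItem(int itemId)
        {
            // ... call the already existing function here ...
        }
    }

    public static class BulkFixup
    {
        public static void Main()
        {
            // Hypothetical connection string; point it at the virtualized copy
            // while testing, and at production only for the final run.
            const string connectionString =
                "Server=.;Database=Inventory;Integrated Security=true";

            var itemIds = new List<int>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT ItemId FROM dbo.Items WHERE NeedsReorganisation = 1", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                    while (reader.Read())
                        itemIds.Add(reader.GetInt32(0));
            }

            int done = 0;
            using (var log = File.AppendText("bulk-fixup.log"))
            {
                foreach (int id in itemIds)
                {
                    InventoryService.UpdateItem(id);   // reuse the single-item logic

                    done++;
                    if (done % 1000 == 0)
                        log.WriteLine($"{DateTime.Now:u} processed {done}/{itemIds.Count}");
                }
            }
        }
    }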
Oh yeah, make sure you have a fresh backup before you run your big update.
I don't know the first thing about Informatica, but I am looking for ways to resolve the duplication of business logic for inserting and updating records in a table. The problem is doing it in an efficient manner.
1) We have web pages that insert, update and delete records one at a time.
2) We have Informatica ETL load programs that take records from a staging (temp) table and load them. This process is rather a black box to me, but I know that Informatica has efficiencies built in, such as reading fairly large tables into memory, etc., so many records can be validated quickly.
I understand that if the business edits were placed in a web service, that web service could be reused by both the web pages doing CRUD operations and the Informatica load process, but how do you do this efficiently? Passing one record at a time to a web service would kill the ETL efficiency. So would passing thousands of records.
I feel like I am in the dark cause I don't know how Informatica works.
Does anybody have any suggestions?
I'm afraid you'll have to go and learn about Informatica. It may operate in a totally different manner than you imagine, making all your assumptions false.
I recently read this question about SQLite vs MySQL, and the answer pointed out that SQLite doesn't scale well; the official website sort of confirms this.
How scalable is SQLite, and what are its upper limits?
Yesterday I released a small site* to track your rep that used a shared SQLite database for all visitors. Unfortunately, even with the modest load that it put on my host, it ran quite slowly. This is because the entire database was locked every time someone viewed the page, because the page issued updates/inserts. I soon switched to MySQL, and while I haven't had much time to test it out, it seems much more scalable than SQLite. I just remember slow page loads and occasionally getting a "database locked" error when trying to execute queries from the shell in SQLite. That said, I am running another site from SQLite just fine. The difference is that that site is static (i.e. I'm the only one who can change the database), so it works just fine for concurrent reads. Moral of the story: only use SQLite for websites where updates to the database happen rarely (less often than on every page load).
Edit: I just realized that I may not have been fair to SQLite - I didn't index any columns in the SQLite database when I was serving it from a web page. This partially caused the slowdown I was experiencing. However, the observation about database locking stands - if you have particularly onerous updates, SQLite performance won't match MySQL or Postgres.
Another edit: Since I posted this almost 3 months ago, I've had the opportunity to closely examine the scalability of SQLite, and with a few tricks it can be quite scalable. As I mentioned in my first edit, database indexes dramatically reduce query time, but this is more of a general observation about databases than it is about SQLite. However, there is another trick you can use to speed up SQLite: transactions. Whenever you have to do multiple database writes, put them inside a transaction. Instead of writing to (and locking) the file each and every time a write query is issued, the write will only happen once, when the transaction completes.
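For instance, here is a small sketch of that transaction trick using Microsoft.Data.Sqlite (my choice of client library for the example; the original site's stack isn't stated, and the table is made up):

    using Microsoft.Data.Sqlite;

    using (var connection = new SqliteConnection("Data Source=rep.db"))
    {
        connection.Open();

        // Table assumed for the sketch.
        var setup = connection.CreateCommand();
        setup.CommandText =
            "CREATE TABLE IF NOT EXISTS rep_history (user_id INTEGER, rep INTEGER)";
        setup.ExecuteNonQuery();

        // One transaction means one write/lock cycle on the file instead of
        // one per INSERT.
        using (var transaction = connection.BeginTransaction())
        {
            var command = connection.CreateCommand();
            command.Transaction = transaction;
            command.CommandText =
                "INSERT INTO rep_history (user_id, rep) VALUES ($user, $rep)";

            var user = command.CreateParameter();
            user.ParameterName = "$user";
            command.Parameters.Add(user);

            var rep = command.CreateParameter();
            rep.ParameterName = "$rep";
            command.Parameters.Add(rep);

            for (int i = 0; i < 1000; i++)
            {
                user.Value = i;
                rep.Value = i * 10;
                command.ExecuteNonQuery();
            }

            transaction.Commit();
        }
    }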
The site that I mention I released in the first paragraph has been switched back to SQLite, and it's running quite smoothly once I tuned my code in a few places.
* the site is no longer available
SQLite is scalable in terms of a single user; I have a multi-gigabyte database that performs very well, and I haven't had many problems with it.
But it is single-user, so it depends on what kind of scaling you're talking about.
In response to comments: note that there is nothing preventing the use of an SQLite database in a multi-user environment, but every transaction (in effect, every SQL statement that modifies the database) takes a lock on the file, which prevents other users from accessing the database at all.
So if you have lots of modifications being made to the database, you're essentially going to hit scaling problems very quickly. If, on the other hand, you have a lot of read access compared to write access, it might not be so bad.
So SQLite will of course function in a multi-user environment, but it won't perform well.
SQLite drives the sqlite.org web site and others that have lots of traffic. They suggest that if you have fewer than 100k hits per day, SQLite should work fine. And that was written before they delivered the "Write-Ahead Logging" feature.
If you want to speed things up with SQLite, do the following:
Upgrade to SQLite 3.7.x
Enable write-ahead logging
Run the following pragma: "PRAGMA cache_size = Number-of-pages;" The default size (Number-of-pages) is 2000 pages, but if you raise that number, you will raise the amount of data that is served straight out of memory.
You may want to take a look at my video on YouTube called "Improve SQLite Performance With Writeahead Logging" which shows how to use write-ahead logging and demonstrates a 5x speed improvement for writes.
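As a rough sketch of what the last two suggestions look like from code, here issued through Microsoft.Data.Sqlite (my choice of client for the example; the same pragmas can be run from any SQLite client):

    using Microsoft.Data.Sqlite;

    using (var connection = new SqliteConnection("Data Source=app.db"))
    {
        connection.Open();

        var command = connection.CreateCommand();

        // Enable write-ahead logging so readers and a writer can overlap.
        command.CommandText = "PRAGMA journal_mode = WAL;";
        command.ExecuteNonQuery();

        // Raise the page cache above the default so more data is served
        // straight from memory; 10000 pages is an arbitrary example value.
        command.CommandText = "PRAGMA cache_size = 10000;";
        command.ExecuteNonQuery();
    }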
SQLite is a desktop or in-process database. SQL Server, MySQL, Oracle, and their brethren are servers.
Desktop databases are by their nature not good choices for any application that needs to support concurrent write access to the data store. This includes, at some level, most web sites ever created. If you even have to log in for anything, you probably need write access to the DB.
Have you read the SQLite docs - http://www.sqlite.org/whentouse.html ?
"SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic."
SQLite scalability will depend highly on the data used and its format. I've had some tough experiences with extra-long tables (GPS records, one record per second). Experience showed that SQLite would slow down in stages, partly due to the constant rebalancing of the growing binary trees holding the indexes (and with time-stamped indexes, you just know that tree is going to get rebalanced a lot, yet it is vital to your searches). So in the end, at about 1 GB (very ballpark, I know), queries became sluggish in my case. Your mileage will vary.
One thing to remember: despite all the bragging, SQLite is NOT made for data warehousing. There are various uses not recommended for SQLite. The fine people behind SQLite say it themselves:
Another way to look at SQLite is this: SQLite is not designed to replace Oracle. It is designed to replace fopen().
And this leads to the main argument (not quantitative, sorry, but qualitative): SQLite is not for all uses, whereas MySQL can cover many varied uses, even if not ideally. For example, you could have MySQL store Firefox cookies (instead of SQLite), but you'd need that service running all the time. On the other hand, you could run a transactional website on SQLite (as many people do) instead of MySQL, but expect a lot of downtime.
I think that one (in numbers: 1) web server serving hundreds of clients appears on the backend as a single connection to the database, doesn't it?
So there is no concurrent access in the database, and therefore we can say that the database is working in 'single-user mode'. It makes no sense to discuss multi-user access in such a circumstance, and so SQLite works as well as any other server-based database.
Think of it this way: SQLite will be locked every time someone writes to it (it doesn't lock on reads). So if you're serving up a web page or an application that has multiple concurrent users, only one of them can write through your app at a time with SQLite. Right there is a scaling issue. If it's a one-person application, say a music library where you hold hundreds of titles, ratings, information, usage, playing, and play time, then SQLite will scale beautifully, holding thousands if not millions of records (hard drive willing).
MySQL, on the other hand, works well for server apps where people all over will be using it concurrently. It doesn't lock the whole database the way SQLite does, and it is quite large in size. So for your music library MySQL would be overkill, as only one person would see it, unless this is a shared music library where thousands add to or update it. Then MySQL would be the one to use.
So in theory MySQL scales better than SQLite because it can handle multiple users, but it is overkill for a single-user app.
SQLite's website (the part that you referenced) indicates that it can be used for a variety of multi-user situations.
I would say that it can handle quite a bit. In my experience it has always been very fast. Of course, you need to index your tables, and when coding against it you need to make sure you use parameterized queries and the like. Basically the same stuff you would do with any database to improve performance.
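For example, a parameterized lookup through Microsoft.Data.Sqlite might look like this (the library choice, table, and column names are mine, for illustration only):

    using Microsoft.Data.Sqlite;

    using (var connection = new SqliteConnection("Data Source=app.db"))
    {
        connection.Open();

        var command = connection.CreateCommand();
        // The parameter keeps the query plan reusable and avoids SQL injection.
        command.CommandText = "SELECT Title FROM Tracks WHERE ArtistId = $artistId";
        command.Parameters.AddWithValue("$artistId", 7);

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                System.Console.WriteLine(reader.GetString(0));
        }
    }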