SQL Server Express slow performance - ASP.NET

I stress tested a .NET web application. I did this for two reasons: I wanted to see what performance was like under real-world conditions, and also to make sure we hadn't missed any problems during testing. We had 30 concurrent users in the application using it as they would during the normal course of their jobs. Most users had multiple windows of the application open.
10 Users: Not bad
20 Users: Slowing down
30 Users: Very, very slow but no timeouts
It was loaded on the production server, a virtual server with a 2.66 GHz Xeon processor and 2 GB of RAM, running Win2K3 SP2. We have .NET 1.1 and 2.0 loaded and are using SQL Server Express SP1.
We rechecked the indexes on all of the tables afterward, and they were all as they should be.
How can we improve our application's performance?

This is just something that I thought of, but check how much memory SQL Server is using when you have 20+ users. One of the limitations of the Express edition is that it will use at most 1 GB of RAM for its buffer pool, so it might simply be a matter of there not being enough memory available to the server because of that limit.
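If it helps, here is a rough way to check that from T-SQL rather than Task Manager (a sketch assuming SQL Server 2005 or later, where this DMV exists; on a named Express instance the object_name is usually 'MSSQL$SQLEXPRESS:Memory Manager'):

-- Compare how much memory the instance is actually using against its target
SELECT object_name, counter_name, cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');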

You may be running into concurrency issues, depending on how your application runs. Try performing your reads with the NOLOCK table hint (see the sketch after these suggestions).
Try qualifying your columns with table aliases (and avoid the use of SELECT *); this helps SQL Server, as it doesn't have to work out which table each column comes from.
If you aren't already, move to stored procedures; this lets SQL Server cache and reuse execution plans for a given query's normal result set.
Try following the execution plans of your stored procedures to ensure they are using the indexes you think they are.
Run a trace against your database to see what the incoming requests look like. You may notice a particular procedure being run over and over: that's generally a sign the responses should be cached on the client if possible (lookup lists, etc.).
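To illustrate the NOLOCK and column-qualification points, here is a minimal sketch; the Orders/Customers tables and their columns are hypothetical, and keep in mind that NOLOCK permits dirty reads:

-- Qualified columns, an explicit column list, and NOLOCK reads (may see uncommitted data)
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o WITH (NOLOCK)
JOIN dbo.Customers AS c WITH (NOLOCK) ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20080101';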

Update: It looks like SQL Server Express is not the problem, as they were using the same product in a previous version of the application. I think your next step is identifying the bottlenecks. If you are sure it is in the database layer, I would recommend taking a profiler trace and bringing down the execution time of the most expensive queries.
This is another link I use for collecting statistics from SQL Server Dynamic Management Views (DMVs) and related Dynamic Management Functions (DMFs). I'm not sure whether it can be used with the Express edition.
Uncover Hidden Data to Optimize Application Performance.
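I can't speak for exactly what that article covers, but as a general example of the kind of DMV query used to find expensive statements (a sketch assuming SQL Server 2005 or later; as far as I know the DMVs are available in Express too), this lists the most CPU-expensive cached statements:

-- Top 10 cached statements by total CPU time (total_worker_time is in microseconds)
SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;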
Are you using SQL Server Express for a web app? As far as I know, it has some limitations for production deployment.
SQL Server Express is free and can be redistributed by ISV's (subject to agreement). SQL Server Express is ideal for learning and building desktop and small server applications. This edition is the best choice for independent software vendors, non-professional developers, and hobbyists building client applications. If you need more advanced database features, SQL Server Express can be seamlessly upgraded to more sophisticated versions of SQL Server.

I would check disk performance on the virtual server. If that's one of the issues, I would recommend putting the database on a separate spindle.
Update: Move to a separate spindle, or upgrade the SQL Server edition as Gulzar aptly suggests.
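To get a quick feel for whether disk I/O is the bottleneck from inside SQL Server (a sketch assuming SQL Server 2005 or later), you can look at the cumulative I/O stalls per database file:

-- I/O stall time per database file since the instance started
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;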

Make sure you close connections after retrieving data.
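One way to check whether connections are actually being released (a sketch using the SQL Server 2005+ session DMV) is to see how many sessions each application is holding open while the app is under load:

-- Count of open user sessions per host and application
SELECT host_name, program_name, COUNT(*) AS open_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name, program_name
ORDER BY open_sessions DESC;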

Run SQL Profiler to see the queries sent to the database. Look for queries that are:
returning too much data
poorly constructed
executed too many times
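If running SQL Profiler isn't convenient, the plan cache gives a similar picture (a sketch assuming SQL Server 2005 or later): which cached statements run most often and read the most data.

-- Cached statements ordered by how often they execute
SELECT TOP 10
    qs.execution_count,
    qs.total_logical_reads,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;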

Related

Should I consider migrating from SQL Server to Oracle for my ASP.NET applications?

We're upgrading our systems to support clustering and auto failover features. Our business runs .NET 4 applications, web apps and services on SQL Server Express. We can upgrade to SQL Server Standard, but the cost has motivated us to consider other options. Is it a legitimate option to integrate our .NET data layer with ODP.NET? After searching, I have seen a tendentious statement or two in the negative (viz) and yet it would seem that people are doing it anyway. What development features in the Visual Studio IDE will we lose? Thanks for your help!
Well, I have been working with Oracle and MS SQL Server for 20+ years now and have done a lot of projects with both. Some of those projects have been running for more than 10 years, with all the updates, maintenance and so on that implies.
My quick answer is: stay with MS SQL Server. Go to Oracle only if you have a really GOOD TECHNICAL reason, or if you are planning a truly ENORMOUS database, and only if you have enough staff to handle all the administration.
The main reason is that SQL Server is much easier to maintain, and it also integrates very well with the Microsoft environment.
Oracle, in contrast, has a steep learning curve. The handling of Oracle is much more "manual" than MS SQL Server. That's also a good thing in a way, because you are in control of every small detail, but it means you need to learn a lot, or you need to pay experts, and it is not easy to find people who really know what they are doing.
I really like both systems, but as a rule of thumb I normally suggest MS SQL Server.
I've been using .NET with Oracle for years, and I migrate away from it whenever the option is available.
If all your database code is in stored procedures, you call it through the code-behind or a library, and you use ANSI SQL, your migration from MS SQL to Oracle will be fairly painless.
If you use TableAdapters, they rewrite any SQL you put in to the old-school Oracle 8 join syntax (table1, table2, table3 followed by a big WHERE clause for the join conditions); the sketch below shows the difference. There are also some weird bugs where SQL that runs fine in SQL Developer won't work in the TableAdapters.
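For what it's worth, here is a sketch of the two join styles with hypothetical Orders/Customers tables; the ANSI form is the one that ports cleanly between engines:

-- ANSI join syntax (portable between MS SQL and Oracle)
SELECT o.OrderID, c.CustomerName
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID;

-- Old-school comma-join syntax of the kind TableAdapters tend to generate
SELECT o.OrderID, c.CustomerName
FROM Orders o, Customers c
WHERE c.CustomerID = o.CustomerID;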
If you use Entity Framework, migration should be pretty easy, but the MS SQL driver is much better than the Oracle one. There have been several queries I couldn't do through EF in Oracle because of various errors with the current driver.
If you need more info let me know.
Also, if cost is the main reason to consider migration, why not go with MySQL?
Since you are already working with MS SQL, you are probably used to the way it works, whether through Entity Framework or any other data access approach. Yes, of course, Microsoft charges high license fees for it. But if you want to move to another database, that is perfectly alright. I have personally used both MS SQL and MySQL. Initially you might face some syntax-related issues, but remember that the logic for fetching and saving data remains the same. A further benefit is that you get to learn a new system, and for far less money.

Can I install WAMP on Microsoft Azure (Bizspark account)?

I have got a BizSpark account from Microsoft, and it comes with a basic Azure account. I have been told that it can run PHP; however, I would like to use a more tested solution like WAMP. On top of that, I want to run a fairly heavy WordPress / BuddyPress installation (that I hope will bring a lot of traffic :)
Has anyone done something similar to this? If so, what is your experience / pitfalls etc.?
Thanks
Stelios
Yes, you can do this. At the end of the day you are just using Windows Server, so anything that installs there will install in the cloud as well. I have done this myself for hosting WordPress in Windows Azure.
However, there are some pitfalls here, mostly around the M (MySQL). Setting up MySQL in Windows Azure is not really that hard, but you have several considerations around how to make sure it is always available. You can:
- Set up a single instance of MySQL in a role and store the db on local disk (this is a bad idea).
- Set up a single instance of MySQL in a role and store the db on a drive (blob-backed storage).
- Set up 2 instances of MySQL that each point to a shared drive (hot failover). Only one drive will be able to mount, so you have reliability and failover, but only a single instance working for you at any one time.
- Set up 1 MySQL writer on a drive and multiple readers on snapshots of that drive. Put in some logic via connection strings to make sure writes go only to the writer and reads go to the others. Snapshot every X minutes to refresh the readers.
- Set up multiple instances of MySQL and use MySQL's native replication features (each instance storing to local disk), and rely on that if you lose an instance.
There are probably more permutations, but the gist of the problem is how you scale out MySQL to be available and reliable. In Windows Azure, you don't get to rely on the fact that the local disk will always be around or that you will always have the same instance. In fact, you can guarantee that your instances will be down for some period of time each month and eventually, given enough time, you will lose the local disk.
Overall, with multiple instances however, you can guarantee they won't be down simultaneously (to the service SLA level at least). So, you need to make sure MySQL works with multiple instances (or live with single instance downtime) and that your data is backed by blob storage to guarantee it is persisted.
Or you can scrap all that crap and just use SQL Azure, which solves all those problems. So it becomes WASP. SQL Azure can also be more economical for smaller DBs.
Ditto.
Installing MySQL on an Azure role is not a good idea for plenty of reasons, most notably (lack of) scalability and reliability. (That's just for deploying on Azure; MySQL itself is great.)
To set it up even remotely reliably, you're going to need a dedicated instance, which will run you at least $40 a month. Going with SQL Azure is $10/GB, or free if you get an introductory offer or BizSpark.
If you're just looking to play around with a single-instance app, I'd suggest you use SQLite or some other in-memory db instead; it'll be a lot less painful.

Migrate Access to ASP.NET

The current application is a kind of CRM application built on MS Access, for internal use. My job is to migrate it to an ASP.NET web-based application. My boss now requires that we keep Access as the database and develop the ASP.NET code against it.
My question is: are there any disadvantages to using Access as the database in an ASP.NET application (e.g. optimistic concurrency issues)? Should I persuade my boss to upgrade from Access to MS SQL?
Many thanks!
We've used Access as a backend for web sites with good success. It's cheap, can be used effectively by moderately skilled programmers, and you can store the MDB on a document server so it gets backed up.
Most IT people dislike Access, but from a business perspective, Access can be very valuable.
MS Access is notoriously unstable in multiuser environments. A WEB app is by definition heavily multi-user.
So IMHO leaving MS Access as the underlying DB is asking for trouble. At least use SQL Server Express (it is free).
The problem you are going to face in upgrading from Access to MS SQL is that there is a major cost investment involved. If your company already has the infrastructure in place (licensing, hardware...), then you won't have such a hard fight to persuade your boss.
As for a technical answer:
I'd say you need to let your boss know that Access databases aren't ideal for the concurrent usage that a web application implies. My view is that Access is for data that a SMALL set of users will be using for simple data entry and querying. NEVER use Access to build an enterprise-level solution.
If you are planning to upgrade a Microsoft Access database to SQL Server 2008, use the SQL Server Migration Assistant (SSMA) rather than the upsizing wizard built into MS Access.
10+ tips for upsizing an Access database to SQL Server
Your boss probably likes to do ad-hoc stuff with Access / Excel. If you move the DB to SQL Server Express, you can use Access and its linked table feature to let your boss keep doing his ad-hoc work through Access while keeping the data in SQL Server Express. If you keep the linked tables named the same as the old physical ones, all his reports and queries should keep working.
I'm an Access promoter, but not for use on websites, because Jet/ACE is not threadsafe (though Michael Kaplan once said that it is threadsafe if you access it via ADO/OLEDB; I don't quite understand how a database abstraction layer can wash away a characteristic of the underlying database engine it's calling, but if MichKa said it, it's 99% likely to be true).
Now, the exceptions would be if you're using it for prototyping something that will use a different database, or if it's read-only, or is read-write but will only ever have a very small number of users.
Michael Kaplan's website, trigeminal.com, used to use a Jet database as the back end (it may still -- I don't know that MichKa ever changed it), and when that was his main website he reported getting 100K hits a day. But it's a read-only site, so fits my restrictions.
There are so many alternatives, most of them easy to use, that I just don't see the point of trying to use Jet/ACE as the back end for a website. I'd never do it myself (all the websites I'm responsible for use MySQL).
Simply put, go with MSSQL. Express edition is free, and will give you everything you need to migrate away from Access. These articles are talking about Access applications specifically, but the same issues will plague you.
http://resources.zdnet.co.uk/articles/features/0,1000002000,39285074,00.htm
https://web.archive.org/web/1/http://techrepublic%2ecom%2ecom/5208-6230-0.html?forumID=102&threadID=205509&messageID=2136367

What is Sqlite used for?

I don't know how authoritative this is but I found this:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations
and it doesn't seem good to have a lot of connections to SQLite. That seems bad for the web and for most applications that have more than a few users. I'm having a hard time thinking of what SQLite would be used for when you don't need that many connections. Every program I can think of needs users, lots of them sometimes, so what would I use a database for that doesn't allow that many connections? I thought about prototypes, but why would I use that when I can just connect to a larger database? Embedded apps, maybe?
Thank you.
EDIT: Thanks everyone. I looked at the page recommended below but am confused about something.
Under appropriate uses for SQLite it has:
Situations Where SQLite Works Well
•Websites
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
Situations Where Another RDBMS May Work Better
•Client/Server Applications
If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystems implementation contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
The Question:
I'm going to show my ignorance here but what is the difference between these two?
This is answered well by SQLite itself: Appropriate Uses For SQLite.
Another way to look at SQLite is this:
SQLite is not designed to replace Oracle. It is designed to replace fopen().
It's good for situations where you don't have access to a "real" database and still want the power of a relational db. For example, Firefox stores a bunch of information about your settings/history/etc. in an SQLite database. You can't expect everyone who runs Firefox to have MySQL or Postgres installed on their machine.
It's also perfectly capable of running relatively low-traffic, read-heavy websites. Its performance is very good overall; it's more than the large majority of websites need for their traffic levels.
It's often used for embedded applications.
It can be very handy as database-like storage when you have no access to a database service, since SQLite is just a file you store somewhere.
I also find that SQLite is good for getting a prototype application together pretty quickly, without the overhead of having a separate DB server or bogging down a development environment with an instance of MySQL/Oracle/whatever.
It's also easy to pick up and move the database to a different machine if you need to.
The iPhone uses it for call history, SMS messages, contacts, and other types of data. As Ólafur Waage said, it's good for embedded applications on mobile devices because it's lightweight. I have also used it in standalone applications. It's easy to use and available on most platforms.
Think about simple client or desktop apps that could make use of a db, like, as a simple example, an address book. Rather than bundling a huge db engine like MySQL or Postgres with your deliverable, SQLite is very lightweight and easy to include with your finished app.
This FLOSS Weekly podcast episode talks with the creator of SQLite and covers, among other things, the kinds of things you would use it for: everything from file systems for mobile phones to smallish web sites.
In the simplest terms, SQLite is a public-domain software package that provides a relational database management system, or RDBMS. Relational database systems are used to store user-defined records in large tables. In addition to data storage and management, a database engine can process complex query commands that combine data from multiple tables to generate reports and data summaries. Other popular RDBMS products include Oracle Database, IBM’s DB2, and Microsoft’s SQL Server on the commercial side, with MySQL and PostgreSQL being popular open source products.
The “Lite” in SQLite does not refer to its capabilities. Rather, SQLite is lightweight when it comes to setup complexity, administrative overhead, and resource usage.
For detailed info about SQLite, visit the link below:
http://blog.developeronhire.com/what-is-sqlite-sqlite/
Thank you.
What the above two answers say. Expanding slightly on Chad Birch's answer, it's the calls to the SQLite db, and a rather poor implementation of sync(), that cause FF3 to be so slow on Linux.

How to identify performance and concurrency issues on an ASP.NET / IIS / SQL Server website

I would appreciate any advice regarding tools and practices I could use to confirm my recently completed website is performing correctly.
Although I am confident the code is not producing errors and is functionally operating as it should, I have little understanding of how to identify IIS, SQL Server and Windows performance/concurrency issues. For example, if the website were briefly hit by a huge deluge of traffic, how would I know that the event had ever happened, and how would I know whether the website coped with it?
The website was written using ASP.NET 2.0 and C# running on Windows 2003 R2 Standard Edition, SQL Server 2005 Workgroup Edition and IIS 6.
Consider using a logging mechanism that also raises alerts, so that when a database call takes too long (indicating high server load) the logger raises a warning. Check out log4net.
Regarding tools and practices, I recommend Badboy and JMeter as tools for load testing your site. Badboy is simple and can generate URLs that may also be used in JMeter. The latter does a very good job of load testing your site. Run tests over a long period and use different hardware setups to see how adding more web/app servers affects performance.
Also, check out PerfMon, a tool that lets you monitor a local or remote Windows server for contention rate, CPU load and so on.
You can use a load generating tool like WebLoad to capture and then replay (with possible variations through scripting) user interactions with your application's UI with lots of threads and connections.
As mentioned, load generating tools are quite helpful. One thing you can add on the database side is SQL tracing. Set up a test plan with very specific steps, and as you step through your plan, trace the SQL that is running on the server.
This way, you can identify if certain actions are causing unnecessary/duplicate database calls. Also, you may discover very large and non-performant queries being run for very simple actions.
For SQL Server, use the sys.dm_exec_requests DMV and check for CPU usage, reads, writes, blocking, etc.
-- Currently executing requests, with the session (if any) that is blocking each one
SELECT blocking_session_id, wait_type, *
FROM sys.dm_exec_requests
