Slow queries when putting files into "LIVE" environment - asp.net

Here's the situation...
We have a local development server at Location A where we build all our aspx pages. Our databases are located at Location A also.
When testing our files on the development server, our queries run quickly, most in under 1 second.
We have just moved our files up to our live server, which is located at Location B (the databases are still at Location A), and the queries now take anywhere between 5 and 10 times longer than on the development server. Location A is in East Anglia and Location B is in London, roughly 100 miles apart.
Also, on both the dev and live servers, the first query that is run takes a lot longer than the queries that follow.
Any ideas what may be causing the slowness?
EDIT
I've turned tracing on for a few of the pages and it seems that End Load is taking the longest of all the methods, though I'm unsure why.
Unfortunately, I also do not have access to the external server to install SSMS or Oracle SQL Developer there to test any queries.

"the first query that is run takes a lot longer than the rest of the
queries thereafter."
That's the effect of caching. The first query pays the toll of physical IO. Subsequent queries benefit from finding relevant records already in cache, either the DB Buffer Cache or some other OS or architectural buffer.
As for the difference in performance between the two environments, that's probably down to this:
"roughly 100 miles apart"
It is likely the network connection between the two locations is throttling data transfer. You need to talk to your network admin, assuming it's a private connection. If you're using public infrastructure your options are limited.
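One cheap way to confirm (or rule out) the latency theory is to time a trivial query from the live box against the Location A database, then multiply the average round trip by the number of queries a page issues; that gives a lower bound on the page's database time. A minimal C# sketch, assuming a placeholder connection string:
// Rough latency check: time a trivial round trip to the remote database.
// The connection string below is a placeholder -- substitute your own.
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class LatencyCheck
{
    static void Main()
    {
        const string connectionString =
            "Server=LOCATION-A-DB;Database=MyDb;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Warm up once so connection setup doesn't skew the numbers.
            RunTrivialQuery(connection);

            const int iterations = 20;
            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                RunTrivialQuery(connection);
            }
            watch.Stop();

            // Average round-trip cost; multiply by queries-per-page to estimate
            // how much of each page's time is pure network latency.
            Console.WriteLine("Average round trip: {0} ms",
                watch.ElapsedMilliseconds / (double)iterations);
        }
    }

    static void RunTrivialQuery(SqlConnection connection)
    {
        using (var command = new SqlCommand("SELECT 1", connection))
        {
            command.ExecuteScalar();
        }
    }
}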
"seems that End Load is taking the longest of all the methods"
Okay, so I'm not an ASPX expert (I'm here for the [oracle] tag), but some light searching turns up several threads suggesting the culprit may be user controls, as these fire just before the End Load event; see, for instance, this other SO question.
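If you want to narrow down which control or piece of work is eating the time before End Load, you can add your own markers to the trace output rather than relying only on the built-in lifecycle rows. A small WebForms sketch (the page, control and helper names are just illustrative):
// Code-behind for a page with trace="true" in the @Page directive.
// Custom Trace.Write calls show up in the trace output between the
// built-in lifecycle entries, so you can bracket suspect work.
using System;
using System.Web.UI;

public partial class ProductList : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Trace.Write("ProductList", "Begin loading product grid");
        BindProductGrid();   // hypothetical data-binding helper
        Trace.Write("ProductList", "End loading product grid");
    }

    private void BindProductGrid()
    {
        // ... database call and grid binding go here ...
    }
}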

Related

Multiple WordPress site installation

I have a question.
Let's say I want to upload hundreds of thousands of posts/products to WordPress, which will slow down the website's performance, and the database will also keep growing.
What if I split the WordPress site into several separate installations in different subdirectories based on product or post category, so that each site only contains 25-30k posts/products, but there are around 10 such installations? That way each database will be a lot smaller.
Do you think this will perform better than putting everything in a single website?
My server has around 16 GB of RAM and 8 CPU cores.
I don't think it will make any difference, given you will run everything on the same hardware. With multiple machines and one ingress node/load balancer you could route requests to different backend servers based on the product requested, but if you have only one server hosting everything (web server, database, etc.) you will hit the limits of CPU/RAM long before the size of a database table becomes the problem (provided it is properly designed, has indexes, and so on).
However, you can measure the performance of both setups with a load testing tool and compare response times, resource usage, and the database slow query log across the two deployment scenarios.
Data size doesn't have to slow the site. It becomes a matter of how fast you can get the data from the DB. A few things to consider:
Place the database on a dedicated host. If locally hosted, dedicate a crossover cable from the web tier to the DB tier, with a second IP for admin on the database host. You might also consider a managed instance of your database with a cloud provider.
Indexes are your friend. Larger datasets result in longer indexes, but you can make deliberately shortened indexes. Choose a database that supports partitioned indexes. Combine partitioned indexes with the higher I/O operations per second of SSDs for your index partitions, and ensure that every lookup goes through an index, so that performance for large data sets doesn't suffer. How does a partitioned index increase access speed? Instead of traversing a single index from A to S for a query with an S-based where clause, with a partitioned index you might have 26 indexes, one each for A, B, C, ..., and you jump straight to the S partition for the lookup.
Shape your pool size on the PHP/web tier. You have already freed resources by pulling the database onto its own host. The next thing to do is effectively manage your cache of fixed assets, the items that do not change across user sessions: typically style sheets, images, fonts, JavaScript files, and so on. At a minimum, look at putting a cache node in front of your WordPress site. Take a look at Varnish or Nginx for this; I am partial to Varnish, but either should do the trick. If you pair this with a CDN for a multi-generational cache, all the better. If you are in the cloud, each provider has built-in CDN options. You can also widen your bandwidth by placing these fixed assets on a dedicated host and caching that one host, but this would require a lot of modification of your base WordPress image.
There is no reason why you cannot have multiple web front ends with a common database back end. You would need a load balancer to distribute the load, and your first-generation cache would sit in front of the load balancer. Realistically, if all of your queries are index supported and your cache is effectively managed, then you can easily scale to hundreds of concurrent users on moderate hardware. Your most taxing item is going to be the PHP execution that pulls dynamic data for user sessions. Make the queries respond as fast as possible and you keep the lock window for each PHP session small.
Watch your locks per session! You may be at the mercy of a template and how it manages your finite resource pool, but in general: (a) unless 50%+1 of users use something, do not allocate it early; (b) be merciless in cutting sessions to release the session-based locks on memory; (c) pinch your assets until they bleed - no 45 MB images on the front page when a colour-optimised 120 KB compressed image will do the job; (d) watch the repeat-access problem - this applies to subqueries in the database as well as to pages that need hundreds of assets to resolve.
Have you considered other options, such as Drupal? The setup is a bit more complex, but I can vouch for running a dozen distinct websites out of a single Drupal instance with no degradation in performance, using the dedicated database and cache nodes described above, with hundreds of concurrent users on fairly moderate hardware (mini-ITX Atom-based PCs).

Determining what is putting pressure on IIS

I have a dedicated server running both IIS 7.5 and SQL Server 2010. Server CPU load is often near 100%. SQL Server does not take too much, but the w3wp process is taking a significant amount of CPU (often 70%+).
I'd like to find out, what is causing this pressure:
* Too many requests of static files (a CDN could be added)
* Too many ajax requests (I am thinking about comet/web sockets anyways)
* Single asp.net pages consuming too much processing power (should be easy to optimize)
Where would you start looking to find out where to start optimizing?
The easiest possible way is to profile the app in production. Not sure if that is possible in your case. Some options:
Look in the IIS logs at the duration of the requests; long requests are the ones likely to put load on the system (see the log-parsing sketch after this list).
Remote debug w3wp with Visual Studio and pause the debugger 10 times to see where it stops most often; that is the hot spot.
Use XPerf or PerfView to capture (managed) stacks. This has almost no impact on production performance.
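For the log option, the time-taken field in the IIS W3C logs is the quickest thing to mine. A rough C# sketch that pulls the slowest requests out of a log file; the log path and field positions depend on your IIS logging configuration, so treat them as assumptions:
// Scan an IIS W3C log and list the slowest requests by time-taken.
// Field positions depend on which fields are enabled in IIS;
// adjust the indexes after checking the "#Fields:" header line.
using System;
using System.IO;
using System.Linq;

class SlowRequests
{
    static void Main()
    {
        const int uriStemIndex = 4;    // cs-uri-stem (assumption)
        const int timeTakenIndex = 14; // time-taken in ms (assumption)

        var slowest = File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex140101.log")
            .Where(line => !line.StartsWith("#"))
            .Select(line => line.Split(' '))
            .Where(fields => fields.Length > timeTakenIndex)
            .Select(fields => new
            {
                Url = fields[uriStemIndex],
                Ms = int.Parse(fields[timeTakenIndex])
            })
            .OrderByDescending(r => r.Ms)
            .Take(20);

        foreach (var request in slowest)
            Console.WriteLine("{0,8} ms  {1}", request.Ms, request.Url);
    }
}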
A good starting point would be to fire up the development tools (F12 in IE / Chrome) and look at the timings under the Network tab. That will show you a waterfall-style diagram of how the page loaded and should help you identify any particularly slow-loading static files which might sensibly be moved off to a CDN, any unnecessary requests being made, how much time is being spent getting the actual page itself, etc.
After that, profile the application with a performance profiler. A good profiler like ANTS Performance Profiler will let you look at things like execution time / hit counts for different methods, as well as what database queries are being run and how long they’re taking. A new version of ANTS (currently in EAP) will also group that activity by http request so you can see if specific pages need optimisation or are being hit too many times.
You'd also do well to check that caching is working as you intend it so that users aren’t unnecessarily re-requesting pages.
There's also a nice article on ASP.NET performance which you might want to read at http://aspalliance.com/1533_ASPNET_Performance_Tips.7.
Disclaimer: I work for Red Gate which makes ANTS.
I found an easy way to see what's going on on the server.
Nevertheless, the professional way is probably to go and use a profiling tool.
What did I do?
In the IIS Manager console you can get a list of all current worker processes, and if you choose one you can see the requests that process is currently handling. That way I was able to see that it was handling 100 requests in parallel, 70 of which traced back to the same ajax call.
The immediate solution was to reduce the frequency of that call (from every 10 to every 30 seconds). The next step will be to further optimize the call on the server side since I do have other ajax calls with the same frequency (every 10 seconds) which nearly never showed up in the active requests list since they were so fast.
Probably the easiest way to figure it out would be to install New Relic on the server. The trial lasts 30 days I think so it should give you enough time to get to the bottom of this. It'll show you long-running SQL queries, .NET methods, as well as just about everything else you can think of. It makes it very easy to identify bottlenecks.
By the way, I suggested New Relic because it sounds like your problem is in a production environment. New Relic isn't an incredibly detailed profiler. It gathers enough information to be helpful, but not so much as to slow down the server. That makes it well suited to this purpose.
If, however, you could reproduce the problem in a development environment you might try something like the free Eqatec profiler.

How many is too many databases on SQL Server?

I am working with an application where we store our client data in separate SQL databases, one for each client. So far this has worked great; there was even a case where some bad code selected the wrong customer IDs from the database, and since the only data in the database belonged to that client, the damage was not as bad as it could have been. My concern is about the number of databases you can realistically have on one SQL Server.
Is there any additional overhead for each new database you create? Will we eventually hit a wall where we have just too many databases on one server? The SQL Server specs say you can have something like 32,000 databases, but is that realistic? Does anyone have a large number of databases on one server, and what problems have you encountered?
Thanks,
Frank
The upper limits are
disk space
memory
maintenance
Examples:
Rebuilding indexes for 32k databases? When?
If 10% of 32k databases each has an active set of 100 MB of data in memory at one time, you're already at 320 GB of target server memory
knowing which DB you're connected to
...
The effective limit depends on load, usage, database size etc.
Edit: and bandwidth, as Wyatt Barnett mentioned. I forgot about the network, the bottleneck everyone forgets about...
The biggest problem with multiple databases is keeping them all in sync as you make schema changes. As for the realistic number of databases you can have and still have the system work well, as usual, it depends. It depends on how powerful the server is and how large the databases are. Likely you would want to have multiple servers at some point, not just because it will be faster for your clients but because it will put fewer clients at risk at one time if something happens to the server. At what point that is, only your company can decide. Certainly if you start getting a lot of time-outs, another server might be indicated (or fixing your poor queries might also do it). Big clients will often pay a premium to be on a separate server, so consider that in your pricing. We had one client so paranoid about their data that we had to have a separate server that was not even co-located with the other servers. They paid big bucks for that, as we had to rent extra space.
ISPs routinely have one database server that is shared by hundreds or thousands of databases.
Architecturally, this is the right call in general. You've seen the first huge advantage--oftentimes, damage can be limited to a single client and you have near zero risk of a client getting into another client's data. But you are missing the other big advantage--you don't have to keep all the clients on the same database server. When you do get big enough that your server is suffering, you can offload clients onto another box entirely with minimal effort.
I'd also bet you'll run out of bandwidth to manage the databases before your server runs out of steam to handle more of them...
What you are really asking about is scalability. Setting up 32,000 databases on one server is possible, but probably not advantageous and certainly not recommended.
Read - http://www.sql-server-performance.com/articles/clustering/massive_scalability_p1.aspx
I know this is an old thread, but it's the same structure we've had in place for the past 2 years, and we currently run 1,768 databases across 3 servers.
We have the following setup (not including mirrors and so on):
2 web farm servers and 4 content servers
One SQL instance just for a master database of customers, which is queried by customer ID when they access their webpage to get the server/instance and database name their data resides on. This is then stored in the authentication ticket (see the sketch after this list).
3 SQL servers to host the customer databases, with load spread at creation time based on the current total number of learners across all databases on each server (quickly calculated from a licence number field in the master database).
On each SQL server there is a smaller shared master database containing static data used by all clients, which keeps the client databases smaller and makes content updates quicker.
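To make the master-database lookup concrete, here is a rough C# sketch of the routing described above: look the customer up once by ID, build a connection string for their server/database, and cache it (for example in the authentication ticket) afterwards. The table and column names are invented for illustration.
// Hypothetical lookup against a "master" customers database that stores,
// per customer, the SQL Server instance and database name their data lives on.
using System;
using System.Data.SqlClient;

public class CustomerDatabaseLocator
{
    private readonly string _masterConnectionString;

    public CustomerDatabaseLocator(string masterConnectionString)
    {
        _masterConnectionString = masterConnectionString;
    }

    public string GetCustomerConnectionString(int customerId)
    {
        using (var connection = new SqlConnection(_masterConnectionString))
        using (var command = new SqlCommand(
            "SELECT ServerName, DatabaseName FROM Customers WHERE CustomerId = @id",
            connection))
        {
            command.Parameters.AddWithValue("@id", customerId);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    throw new InvalidOperationException("Unknown customer " + customerId);

                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = reader.GetString(0),     // server/instance
                    InitialCatalog = reader.GetString(1), // customer database
                    IntegratedSecurity = true
                };
                // Cache this (e.g. in the forms auth ticket's UserData)
                // so the master database is only hit once per login.
                return builder.ConnectionString;
            }
        }
    }
}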
The biggest thing, as mentioned above, is keeping the database structures synchronised! For this I ended up writing a small .NET Windows Forms tool that looks up all customers in the master database; you paste in the SQL to execute and it loops through each customer, looks up the database location, and runs the script you pasted against it.
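That synchronisation tool boils down to a loop like the following; again a sketch with invented table/column names, and without the logging, transactions and error handling you would want in practice:
// For every customer in the master database, connect to their database
// and run the same schema-change script against it.
// Note: a plain ExecuteNonQuery won't handle GO batch separators.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class SchemaSync
{
    public static void RunAgainstAllCustomers(string masterConnectionString, string script)
    {
        var targets = new List<string>();

        using (var master = new SqlConnection(masterConnectionString))
        using (var command = new SqlCommand(
            "SELECT ServerName, DatabaseName FROM Customers", master))
        {
            master.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    targets.Add(new SqlConnectionStringBuilder
                    {
                        DataSource = reader.GetString(0),
                        InitialCatalog = reader.GetString(1),
                        IntegratedSecurity = true
                    }.ConnectionString);
                }
            }
        }

        foreach (var connectionString in targets)
        {
            using (var customerDb = new SqlConnection(connectionString))
            using (var change = new SqlCommand(script, customerDb))
            {
                customerDb.Open();
                change.ExecuteNonQuery();
                Console.WriteLine("Applied to " + customerDb.Database);
            }
        }
    }
}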
Creating new customers also caused some issues for us, so I ended up building a management system for our sales people which creates a new database from a backup of an inactive "blank" database; that way we get the latest schema without having to re-run the entire database creation script. It then inserts the customer details into the master database, including the location where the database was created, and migrates any old data from older versions of our software. All of this is done on a separate instance before the move, which reduces SQL locking.
We are now moving to a single database for our next version of the software, as database redundancy is near impossible with so many databases! This is a huge thing to consider, as SQL Server creates a couple of waiting tasks per database to mirror your data; once you start multiplying the databases it gets out of hand, the system is tasked almost solely with synchronising, and it can lock up due to the sheer number of threads. See page 30 of the Microsoft document below:
SQLCAT's Guide to High Availability Disaster Recovery.pdf
I do, however, have doubts about moving to a single database, for some of the reasons mentioned above: every single procedure must constantly check that the current customer only has access to their own data, and one small issue (such as table indexing) will now affect every customer. Also, at the minute our customers are spread over 3 servers; a single database means yes, we have redundancy, but if the error is within the database rather than a server going down, then that's every single customer down, not just one customer database.
All in all, it depends on what you're doing and whether you want the redundancy; for me, the redundancy is now key, and in a perfect world everything else (such as an error that breaks the database for everyone) shouldn't happen. We started off expecting only a hundred or so customers to move to the system from the old self-hosted software, and that quickly turned into 200, 500, 1000, 1500... We now have over 750,000 users on our system each year, and in August/September we have over 15,000 concurrent users online (expecting to hit 20,000 this year).
Hope that this is of help to someone along the line :-)
Regards
Liam

ASP.NET performance: counting SQL requests

We had a huge performance problem when deploying our ASP.NET app at a customer site where the DB sat in a remote location.
We found that it was due to the fact that pages made a ridiculous number of individual SQL queries to the DB. It was never a problem we noticed because usually the web and DB servers are on the same local network (low latency). But on this (suddenly) high-latency configuration, it was very, very slow.
(Note that each SQL request by itself was fast; it is the number of requests and the serial nature of the sequence that is the problem.)
I asked the engineering team to report and maintain a "wall of shame" (or stats) telling us, for each page, the number of SQL requests, so we can use it as a reference. They claim it is expensive to build.
Can anyone tell me how to maintain or generate such a report cheaply and easily?
We are using SQL Server 2005
We have a mix of our own DB access layer and subsonic
I know and use the profiler, but that is a bit manual. I'm asking here whether there is a tip on how to automate this, or maybe I'm just crazy?
If you are on SQL Server, read up on Profiler.
http://msdn.microsoft.com/en-us/library/ms187929.aspx
Running profiler from the UI is expensive, but you can run traces without the UI and that will give you what you want.
First, check out SubSonic's BatchQuery functionality--it might help alleviate a lot of the stress in the first cut without getting into material modification of your code.
You can schedule trace jobs/dumps from the SQL server's end of things. You can also run perfmon counters to see how many database requests the app is serving.
All that said, I'd try and encourage the customer to move the database (or a mirrored copy of the database) closer to your app. It is probably the cheapest solution in the long term, depending on how thick the app is.
I have had good success using this tool in the past, not sure if the price is right for you but it will uncover any issues you may have:
Spotlight on SQL Server
The MiniProfiler (formerly known as the MVC Mini Profiler, but it works for both MVC and WebForms) is a must in such a case IMO. If the code creating the database connections is well architected, it's a piece of cake to get it running for almost any ASP.NET application.
It generates a report on each rendered page with profiling stats, including each SQL query sent to the database for the request. You can see it in action on the Stack Exchange Data Explorer pages (top left corner).
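Wiring MiniProfiler up looks roughly like this; the method names are from the older StackExchange.Profiling packages and vary a little between versions, so treat it as a sketch rather than the exact API:
// Global.asax.cs: start/stop a profiling session per request, and hand out
// profiled connections from your data access layer.
using System.Data.Common;
using System.Data.SqlClient;
using System.Web;
using StackExchange.Profiling;
using StackExchange.Profiling.Data;

public class Global : HttpApplication
{
    protected void Application_BeginRequest()
    {
        // You may want to limit profiling to local/admin users in production.
        MiniProfiler.Start();
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop();
    }
}

public static class ConnectionFactory
{
    // Every command issued through the wrapped connection is timed and
    // shows up (with its SQL text) in the per-page MiniProfiler report.
    public static DbConnection GetOpenConnection(string connectionString)
    {
        var connection = new ProfiledDbConnection(
            new SqlConnection(connectionString), MiniProfiler.Current);
        connection.Open();
        return connection;
    }
}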

Running an ASP.NET website with MS SQL Server - when should I worry about scalability?

I run a medium sized website on an ASP.net platform and using MS SQL server to store the data.
My current site stats are:
~ 6000 Page Views a day
~ 10 tables in the SQL server with around 1000 rows per table
~ 4 queries per page served
The hosting machine has 1GB RAM
I expect by the end of 2009 to hit around:
~ 20,000 page views
~ 10 tables and around 4000 rows per table
~ 5 queries per page served
My question is: should I plan for scalability right now? Will the machine hold up until the end of the year with the expected stats?
I know my description is very top level and does not provide insight into the kind of queries etc. But just wanted to know what your gut instinct tells you?
Thanks!
You should always plan for scalability. When to put resources into doing the actual scaling is usually the tough guess.
"Will the machine hold up until the end of the year"
Way too little information to answer this. If a page request takes 30 CPU seconds to process due to massive interaction with a legacy enterprise application through the four queries per page - then there's no way. If it's taking minuscule fractions of a second to serve some static content stored in the cache and your queries are only executed every half hour to refresh the content - then you're good until 2020 at the rate of traffic growth you describe.
My guess is that you're somewhere closer to the latter scenario. 20,000 page hits a day is not really a ton of traffic, but you'll need to benchmark your page and server performance at some point so that you can make the calculations you need.
Things to look at for scaling your site when it is time:
Output Caching (see the sketch after this list)
Optimizing Viewstate
Using Ajax where appropriate
Session optimization
Request, script, CSS and HTML minification
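For the first item in that list, output caching in WebForms can be done declaratively with an @ OutputCache page directive or programmatically from the code-behind; a small sketch (the five-minute duration and the "id" vary key are just examples):
// Programmatic output caching from a WebForms code-behind; the declarative
// equivalent is an <%@ OutputCache Duration="300" VaryByParam="id" %> directive.
using System;
using System.Web;
using System.Web.UI;

public partial class ProductPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Cache the rendered output on the server for five minutes,
        // keeping one cached copy per "id" query-string value.
        Response.Cache.SetCacheability(HttpCacheability.Server);
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(5));
        Response.Cache.SetValidUntilExpires(true);
        Response.Cache.VaryByParams["id"] = true;
    }
}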
Two years ago I saw a relatively new (for two years ago) laptop running IIS and serving up 1100 to 1200 simple dynamic page requests per second. It had been set up by a consulting firm whose business was optimizing ASP.Net websites, but it goes to show you how much you can do.
Essentially, by the end of 2009, you expect to do 100,000 SQL queries per day. This is about 1.157 queries per second.
I am making the assumption that your configuration is "normal" (i.e. you're not doing something funky and these are pretty straightforward SELECT, UPDATE, INSERT, etc), and that your server is running RAID disks.
At 4,000 rows per table this is nothing to SQL Server. You should be just fine. If you want to be proactive about it, put another stick of RAM in the server and bring it up to at least 2 GB; that way IIS and SQL Server have plenty of memory (SQL Server will certainly take advantage of it).
The hosting machine? Does this mean that you have IIS and SQL installed on the same box, or IIS on your host machine with a dedicated SQL Server provided by your hosting company? Either way, I would suggest starting to look at how you might implement a caching layer to minimise the hits (where possible) to the database. Once this is PLANNED (not necessarily implemented), I would then look at how you might build a caching layer around your output (things built in ASP.NET).
If you see a clear and easy path to building caching layers, then this is a quick and easy way to start minimising requests to the database and the load on your web server. I suggest that this cache layer be flexible - read: not tied to anything provided by .NET! Currently I still suggest using memcached Win32. You can install it on your one hosted box easily and configure your cache layer to use local resources (add memory; 1 GB is not enough).
Then, if you find that you really need to squeeze every little bit of performance out of your system, splurge for a second box and split your cache between your current box and the new one (allowing you to keep more in cache). This will give you some room (and time) to grow. Offloading more to the cache should help absorb any future spikes, and with the second box you can now also focus on making your site work in a farmed environment. If you are using local session state, push that into your cache layer so that it won't matter whether a request is served by one box or the other (standard session state is local to the box that manages it).
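The flexible cache layer described above is essentially cache-aside behind your own interface, so you can swap a local in-memory store for memcached later without touching call sites. A hypothetical sketch (the interface and class names are invented here):
// A tiny cache-aside abstraction: callers ask the cache first and only hit
// the database on a miss. Swap the implementation (in-memory, memcached, ...)
// without changing calling code.
using System;
using System.Collections.Concurrent;

public interface ISiteCache
{
    T GetOrAdd<T>(string key, TimeSpan ttl, Func<T> loadFromDatabase);
}

public class InMemorySiteCache : ISiteCache
{
    private class Entry
    {
        public object Value;
        public DateTime ExpiresAtUtc;
    }

    private readonly ConcurrentDictionary<string, Entry> _entries =
        new ConcurrentDictionary<string, Entry>();

    public T GetOrAdd<T>(string key, TimeSpan ttl, Func<T> loadFromDatabase)
    {
        Entry entry;
        if (_entries.TryGetValue(key, out entry) && entry.ExpiresAtUtc > DateTime.UtcNow)
            return (T)entry.Value;

        // Miss (or expired): load from the database and cache the result.
        T value = loadFromDatabase();
        _entries[key] = new Entry { Value = value, ExpiresAtUtc = DateTime.UtcNow.Add(ttl) };
        return value;
    }
}

// Usage:
// var products = cache.GetOrAdd("products:featured", TimeSpan.FromMinutes(10),
//                               () => repository.LoadFeaturedProducts());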
This is a huge subject, so without real details this is all speculation of course! You might be just fine adding better and more hardware to the existing installation.
Have you tried setting up a quick performance test using sample data? 20,000 page views is less than one/sec (assuming even distribution over 8 hours), which is pretty minimal given your small tables. Assuming you're not sending a ton of data with each page view (i.e. a data table with all 1000 rows from one of your tables), you are likely OK.
You may need to increase RAM, but other than running a performance test I wouldn't worry too much about performance right now.
I don't think the load you are describing would be too much of a problem for most machines. Of course it doesn't just depend on the few metrics you outlined but also on query complexity, page size, and a heap of other things.
If you worry about scalability, do some load testing and see how your site handles, say, 10,000 page views per hour (about 3 views per second). It's almost always good to plan ahead, as long as you plan for probable scenarios.
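A quick-and-dirty load test doesn't need special tooling; something like the following is enough to see whether ~3 requests per second even registers on your server (the URL, rate and duration are placeholders):
// Fire a fixed number of requests per second at a page and report timings.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class MiniLoadTest
{
    static async Task Main()
    {
        const string url = "http://localhost/somepage.aspx"; // placeholder
        const int requestsPerSecond = 3;
        const int durationSeconds = 60;

        var client = new HttpClient();
        var timings = new List<long>();

        for (int second = 0; second < durationSeconds; second++)
        {
            // Launch one batch of concurrent requests, then pause a second.
            var batch = Enumerable.Range(0, requestsPerSecond)
                .Select(async _ =>
                {
                    var watch = Stopwatch.StartNew();
                    var response = await client.GetAsync(url);
                    watch.Stop();
                    lock (timings) timings.Add(watch.ElapsedMilliseconds);
                    return response.StatusCode;
                })
                .ToArray();

            await Task.WhenAll(batch);
            await Task.Delay(TimeSpan.FromSeconds(1));
        }

        timings.Sort();
        Console.WriteLine("Requests: {0}", timings.Count);
        Console.WriteLine("Median  : {0} ms", timings[timings.Count / 2]);
        Console.WriteLine("Max     : {0} ms", timings[timings.Count - 1]);
    }
}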
Guts say: given 10 tables with 4,000 rows each and assuming about 2 KB of data per row, the entire database is only 80 MB. Easily cached within the available memory. Assuming everything else about the application is equally simple, you should be able to easily serve hundreds of pages per second.
Engineers say: If you want to know, stress test your application.
