How can I profile an ASP.NET application running on a production server?

I have an ASP.NET application that is consistently using 75%-100% of the CPU on a production server. How can I profile the application to figure out what part of the code is using the most CPU? I have looked at a couple of different tools (Xte Profiler, EQATEC, dotTrace), but they all seem to want you to load and run the application inside the tool and run tests locally, not in production. I want to profile the application while it is running in production, with real users hitting it, to see what is actually going on. Is this possible?
I am a newbie to application profiling so forgive me if I have missed something obvious or am not thinking about this correctly.
Thanks,
Corey

Sam Saffron (one of the Stack Overflow creators) wrote a great command-line tool a while ago, but has unfortunately abandoned it.
A friend of mine forked the code in 2015 to keep it working:
https://github.com/jitbit/cpu-analyzer
(the page has a link to Sam's post explaining how to use it)
The great thing about this tool (besides its no-install portability, command-line interface, etc.) is that it covers ground that APM packages like New Relic do not: they only monitor HTTP requests, so if your app does work on background threads they won't help much.
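As a complementary measure (this is not something the tool above does for you, just a sketch of one way to make background work visible at all), you can add your own lightweight timing to background threads and write it somewhere a request-oriented APM tool would never look. The worker class and its task below are entirely hypothetical:

using System.Diagnostics;
using System.Threading;

// Hypothetical background worker that reports its own timings via
// System.Diagnostics.Trace, since request-oriented APM tools won't see it.
public class QueueWorker
{
    public void Start()
    {
        var thread = new Thread(WorkLoop) { IsBackground = true };
        thread.Start();
    }

    private void WorkLoop()
    {
        while (true)
        {
            var stopwatch = Stopwatch.StartNew();

            ProcessNextBatch();   // placeholder for the real background work

            stopwatch.Stop();
            Trace.WriteLine(string.Format(
                "QueueWorker: batch processed in {0} ms", stopwatch.ElapsedMilliseconds));

            Thread.Sleep(5000);
        }
    }

    private void ProcessNextBatch()
    {
        // ... real work here ...
    }
}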

You should consider taking a memory dump on the production server while it's experiencing high CPU. Check out ADPlus and take a hang dump of the ASP.NET worker process. The dump can then be analyzed with WinDbg or other tools.
I just went through a similar experience where our production servers were experiencing excessive CPU load - a scenario we could not recreate locally or in test/staging environments. It had nothing to do with the database (database CPU was normal). Analyzing the dump file is what clued us in on what was causing the problem (excessive compilation of regex objects by some library we were using).
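For illustration, here is a minimal sketch of the kind of pattern behind our particular finding (the validator class and pattern are hypothetical): constructing a compiled Regex on every call forces repeated compilation and burns CPU under load, while caching a single static instance avoids it.

using System.Text.RegularExpressions;

public static class EmailValidator
{
    // Problematic: a new Regex is built (and compiled) on every call,
    // which wastes CPU when the method is hit on every request.
    public static bool IsValidSlow(string input)
    {
        var regex = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);
        return regex.IsMatch(input);
    }

    // Better: compile once and reuse; a static Regex instance is safe
    // to use for matching from multiple threads.
    private static readonly Regex Email =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    public static bool IsValidFast(string input)
    {
        return Email.IsMatch(input);
    }
}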
This answer would be incomplete without a mention of Tess Ferrandez's debugging blog, which walks through exactly this kind of dump analysis.

My guess is that it has to do with long-running database queries rather than the ASP.NET application itself. In my experience, nine times out of ten this is what I see: it brings the application server to a crawl as resources are consumed and the app has to wait for each query to finish before moving on. Take a look at SQL Server Profiler on the DB server and see if there are any queries that take a long time to execute.
The fix could be as simple as adding an index to a column or some other minor optimization. Once you know the offending query, you can also go back to your code and tweak that section.

For those who still stumble upon this question, it really depends on what you are trying to accomplish.
If a server is running that high on CPU, odds are a standard profiler will bring it to a grinding halt due to its additional overhead.
There are actually three different types of profilers. Standard profilers, lightweight transaction profilers, and APM tools. You can read more about this in my blog post that discusses all 3:
.NET Profilers: 3 types and why you need all of them

It's certainly possible to profile ASP.NET with the EQATEC Profiler. See:
Profiling ASP.NET websites with EQATEC Profiler
EQATEC Profiler instruments your app in a separate step, enabling the app itself to collect its own profiling info; the profiler then merely displays that timing data afterwards.
That means you can run your instrumented ASP.NET app completely independently of the profiler itself.
You could, for example, instrument your app, mail it to your test site in India, have them run it on their server for some days where it will generate timing reports all on its own, and have them mail those reports back to you, which you can then view in the profiler. Pretty neat.
Note: to have the profiled app generate timing snapshots "on its own" it must know when to generate them. By default this happens when Application_End is called in an ASP.NET app. You can also dump snapshots programmatically whenever it suits you by using the EQATEC Profiler API. See the user guide or check out this thread.

You can read about this on the Microsoft Developer Network.
Select the documentation that matches your version of Visual Studio, and verify that profiling is actually included in your edition.
How to: Profile a Web Site or Web Application Using the Performance Wizard

Your best bet is to profile your code on your own machine to identify where it is spending time.
Grab a ten-day free trial of this:
http://www.jetbrains.com/profiler/
Here are some links to get you going:
http://msdn.microsoft.com/en-us/library/ms178643(v=VS.100).aspx
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx

Related

Debugging an ASP.NET website that is running slowly?

We're getting more and more complaints from users that our ASP.NET 4.5.2 website is running slowly or just generally "freezing up." Things look fine from our test servers and from our workstations, but we're probably using better workstation hardware and browsers than our customers. We're running ASP.NET 4.5.2, C#, SQL Server.
What are some areas that we should concentrate on for debugging such a nebulous request? Should I be looking at system performance and resources on the application servers? System performance and resources on the SQL server? We're tracking application page load times, and they don't seem to be excessive or much changed from months ago, even though customer complaints have gone up.
What are some best practices for starting our investigation, and where's the low hanging fruit on improving performance overall?
If your pages are getting slower "sometimes" during the day, I would suggest first checking Performance Monitor on your IIS server. This could easily be an issue with the server hitting its limits (machine or IIS settings). One way of verifying this is to create a sandbox server and have your testers run the application from there.
After that, if you are executing stored procedures, add some monitoring/timing to them to gather a few sample cases, then check whether any of them causes the process to freeze or delay.
I must also mention the possibility of locked tables, so a code review may be in order (the most time-consuming of all the above).
This should give you a hint as to where your issue originates.
Good luck
If you suspect SQL problems, you can run a SQL Server Profiler trace to check what is executing at the moment and whether anything there could be "freezing up" your system. This way you can see what is going on when the system is slow.

Profiling warm-up of ASP.NET MVC3 Application on Azure

Throughout the development of my application, the first-response time has gotten worse and worse; it is now taking 10 minutes to load! I am using Web Deploy to speed up publishing my changes, and from what I've read on MSDN, I understand that this delay is due to compilation and loading of assemblies.
It's an ASP.NET MVC3 application which uses EF Code First, MVC-MiniProfiler, etc. I'm wondering if it's one of these assemblies that is slowing things down.
Is there a way to track down the long-running process plaguing my development/testing process?
As a side note, the issue is nowhere near as bad in the Azure emulator.
Using Windows Azure SDK 1.4 and later, you have the option to enable Profiling for your application (besides IntelliTrace). You can read about some of the options available (in 1.5) in my blog post here, where you will also find a good screenshot showing the option to enable either IntelliTrace or Profiling.
The trick is that you can only have one of them running (either IntelliTrace or Profiling). So I suggest you first run IntelliTrace and inspect the IntelliTrace logs for any exceptions during your application's execution. Then do another deployment using Profiling to catch the most time-consuming methods.
Please note that enabling IntelliTrace/Profiling can only be done as part of the deployment process and cannot be toggled with a simple Web Deploy, so you'll have to make at least two deployments to test both.
It's hard to say what the slowdown is - as Awais mentioned, IntelliTrace is your friend. However, the delay might be unavoidable (I have seen this a number of times). If this is the case, you can add a startup script that will "prime" IIS, preventing the problem when the first user hits the site.
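As an illustration of that "priming" idea, here is a minimal sketch of a warm-up routine you might run from a startup task or scheduled job right after a deployment; the class name and URL list are hypothetical and would need to match your own routes:

using System;
using System.Net;

// Hypothetical warm-up helper: requests a few key pages so the first
// real visitor does not pay the compilation/assembly-loading cost.
public static class SiteWarmup
{
    private static readonly string[] UrlsToPrime =
    {
        "http://localhost/",          // home page forces compilation of shared views
        "http://localhost/Products"   // any other frequently hit routes
    };

    public static void Run()
    {
        using (var client = new WebClient())
        {
            foreach (var url in UrlsToPrime)
            {
                try
                {
                    // The response body is discarded; the point is simply to
                    // force the app to compile and load its assemblies.
                    client.DownloadString(url);
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Warm-up request to {0} failed: {1}", url, ex.Message);
                }
            }
        }
    }
}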

ASP.NET Profiling

I have a slow ASP.NET application in production. I would like to profile the production server to see what is going on, but I don't want to noticeably slow it down.
In general, is it standard practice to profile a production box or just local dev boxes? Also, what programs do you recommend for this?
I can recommend "dynaTrace AJAX Edition 3" for client-side profiling (it's a free and easy tool) and JetBrains dotTrace for server-side profiling. As far as I know, these tools do not noticeably slow down the server.
You can use tracing. It is recommended to check these things on your local machine, but if you want to check something on the server, you can enable tracing briefly via your web.config.
ASP.NET tracing enables you to view diagnostic information about a single request for an ASP.NET page. ASP.NET tracing enables you to follow a page's execution path, display diagnostic information at run time, and debug your application. ASP.NET tracing can be integrated with system-level tracing to provide multiple levels of tracing output in distributed and multi-tier applications.
ASP.NET Tracing Overview
Tracing in ASP.NET
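As a rough sketch of page-level tracing (the page and method names below are placeholders, and tracing must first be enabled via the trace element in web.config or Trace="true" in the @Page directive), you can bracket suspect code with Trace.Write/Trace.Warn calls and then read the timings in the trace output:

using System;
using System.Web.UI;

// Hypothetical page illustrating ASP.NET page tracing.
public partial class Orders : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Trace.Write("Orders", "Starting order lookup");

        LoadOrdersFromDatabase();   // placeholder for the suspect call

        // Trace.Warn entries are highlighted in the trace output,
        // handy for flagging slow sections.
        Trace.Warn("Orders", "Finished order lookup");
    }

    private void LoadOrdersFromDatabase()
    {
        // ... data access goes here ...
    }
}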
I guess the answer is really 'it depends'! I would start by considering whether the program runs slowly just on the production server, or whether it runs slowly on a development environment as well. I would also consider how closely I could get my development/test environment to match the production environment.
Once you've done that, consider whether there are any areas that could represent obvious bottlenecks that you might be able to eliminate. So, for example, is the ASP.NET application backed by some form of database? If it is, you can monitor the performance of the database separately and establish whether that is where the problem lies.
Next, try to be very specific about what you mean by "slow performance". Is it consistently slow (compared to what?), or only when you perform specific actions? This may give you another clue as to where your problem lies, or at least what questions you should be asking.
Having answered lots of these questions, I'd then bust out ANTS Performance Profiler to try and profile what's going on. It has a fairly minimal overhead when profiling an application, and you should only really be running it for a fairly short time anyway, as you'll hopefully by this point have more specific questions you want to answer, or specific actions that you want to dig into.
Your best option is Prefix (http://www.prefix.io). It will let you see all of your SQL queries, logs, HTTP calls, and a lot more.
Other options are Glimpse and MiniProfiler.

Asp.net E-commerce performance

I am developing an e-commerce project on ASP.NET 3.5 with C#. I am using a 3-tier structure (Data + Business + UI) to access the data in the database (MS SQL 2005).
All data access (CRUD methods) goes through stored procedures.
There is a performance issue: the project runs very slowly, and I couldn't find any problem in the transaction model.
The project is also running on shared hosting in an overseas country. The database server and web server are different machines, and the database server hosts nearly 1,000 databases.
How can I test and find out where the problem is?
Since there are upwards of 1,000 databases sharing resources, I would take a stab that this might be your issue. If you connect to your database and it takes 5 seconds to run a simple query, then you can guess the problem.
I would add some stopwatch functionality to a "test page" that runs on your web server (see the sketch below). This should give you the basic info to see whether there is a bottleneck in waiting for the database to return your query. If you have made it that far, then I would suspect your web server.
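A minimal sketch of such a test page, assuming a hypothetical connection-string name ("ShopDb") and a trivial query against a hypothetical Products table (adjust both to your schema):

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Diagnostics;

public partial class DbTimingTest : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // "ShopDb" is a placeholder connection-string name.
        var connectionString =
            ConfigurationManager.ConnectionStrings["ShopDb"].ConnectionString;

        var stopwatch = Stopwatch.StartNew();

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Products", connection))
        {
            connection.Open();
            command.ExecuteScalar();
        }

        stopwatch.Stop();

        // If this number is large for a trivial query, the database
        // (or the link to it) is the likely bottleneck.
        Response.Write("Simple query took " + stopwatch.ElapsedMilliseconds + " ms");
    }
}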
Your last option would be to set up a simple low-spec machine with both the database and web server on it and just test. Depending on how much traffic your site is getting, you should be able to get a pretty good idea of its response time.
Tools such as YSlow might also be of some help however these are usually used more for fine tuning.
Since you're running on a shared hosting service, I would guess that's where your problem is. You're competing for server resources with every other website and database on those servers.
To make sure, I would set up a local environment that mimics your production environment. Then perform some standard stress tests to see how it performs. If it performs how you would expect, then it is probably your hosting solution.
With shared hosting solutions, you really do get what you pay for. If your system requires a lot more speed than you're getting, you should look at a dedicated hosting solution.
I suggest you take a look at Tracing:
http://davidhayden.com/blog/dave/archive/2005/07/17/2396.aspx
This enables you to see a stack trace (the last picture in the article) and localize your performance bottlenecks.
A quick solution I developed to keep logs of performance on my web app may help you here. I have a web server and a DB server running a similar-sounding app. I wrote a web service that runs a "benchmarking" stored procedure and returns the run time, and a Windows app on my development server that calls the web service, passes it the name of the stored procedure to run, and times how long the whole request takes. The Windows app writes the data to a log file and runs every 10 minutes as a scheduled task. Extra bells and whistles include automatic emails to team members when performance exceeds the specified threshold three times in a row, when the service fails to connect, and when it recovers to normal performance after a slow period.
This provides a general indication of how a user's experience on the website will be at any given time and serves as a warning bell for the team. Not exactly the best solution, but I wrote it in a couple of hours several months ago and have used the data it creates for troubleshooting purposes many times.
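A rough sketch of that kind of scheduled check, with the service URL, threshold, and log path all hypothetical (the original used a web service wrapping a benchmarking stored procedure; the email alerts are omitted here):

using System;
using System.Diagnostics;
using System.IO;
using System.Net;

// Hypothetical console/scheduled-task version of the watchdog described above.
class BenchmarkWatchdog
{
    private const string ServiceUrl = "http://www.example.com/Benchmark.asmx/RunBenchmark";
    private const int ThresholdMs = 2000;             // alert threshold (placeholder)
    private const string LogPath = @"C:\Logs\benchmark.log";

    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();
        string status;

        try
        {
            using (var client = new WebClient())
            {
                client.DownloadString(ServiceUrl);    // response body is ignored
            }
            stopwatch.Stop();
            status = stopwatch.ElapsedMilliseconds > ThresholdMs ? "SLOW" : "OK";
        }
        catch (WebException)
        {
            stopwatch.Stop();
            status = "FAILED";
        }

        File.AppendAllText(LogPath, string.Format("{0:u}\t{1}\t{2} ms{3}",
            DateTime.UtcNow, status, stopwatch.ElapsedMilliseconds, Environment.NewLine));
    }
}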

How to identify performance and concurrency issues on an ASP.NET / IIS / SQL Server website

I would appreciate any advice regarding tools and practices I could use to confirm my recently completed website is performing correctly.
Although I am confident the code is not producing errors and is functionally operating as it should, I have little understanding of how to identify IIS, SQL Server and Windows performance/concurrency issues. For example, if the website was briefly hit by a huge deluge of traffic, how would I know that the event had ever happened, and how would I know whether the website coped with it?
The website was written using ASP.NET 2.0 and C# running on Windows 2003 R2 Standard Edition, SQL Server 2005 Workgroup Edition and IIS 6.
Consider using a logging mechanism that also raises alerts, so when a database call takes too long, indicating a high server load, the logger raises a warning. Check out log4net.
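A minimal sketch of that idea using log4net, with the repository class, query, and threshold all placeholders (log4net itself still needs to be configured, e.g. via XmlConfigurator):

using System.Data.SqlClient;
using System.Diagnostics;
using log4net;

public class OrderRepository
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderRepository));
    private const long SlowQueryThresholdMs = 500;    // placeholder threshold

    public int CountOrders(string connectionString)
    {
        var stopwatch = Stopwatch.StartNew();

        int count;
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            count = (int)command.ExecuteScalar();
        }

        stopwatch.Stop();

        // Raise a warning when the call takes suspiciously long,
        // which often indicates the server is under heavy load.
        if (stopwatch.ElapsedMilliseconds > SlowQueryThresholdMs)
        {
            Log.WarnFormat("CountOrders took {0} ms (threshold {1} ms)",
                stopwatch.ElapsedMilliseconds, SlowQueryThresholdMs);
        }

        return count;
    }
}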
Regarding tools and practices, I recommend Badboy and JMeter for load testing your site. Badboy is simple and can generate URLs that may also be used in JMeter; the latter does a very good job of load testing your site. Run tests over a long period and use different hardware setups to see how adding more web/app servers affects performance.
Also, check out PerfMon, a tool that lets you monitor a local or remote Windows server for contention rates, CPU load and so on.
You can use a load-generating tool like WebLoad to capture user interactions with your application's UI and then replay them (with possible variations through scripting) using many threads and connections.
As mentioned, load generation tools are quite helpful. One thing you can add on the database side is SQL tracing. Set up a test plan with very specific steps and, as you step through your plan, trace the SQL that is running on the server.
This way, you can identify if certain actions are causing unnecessary/duplicate database calls. Also, you may discover very large and non-performant queries being run for very simple actions.
For SQL Server, use the sys.dm_exec_requests DMV and check for CPU usage, reads, writes, blocking and so on:
-- Currently executing requests, including which session (if any) is blocking each one
SELECT blocking_session_id, wait_type, *
FROM sys.dm_exec_requests
