I have a blog I made. Recently, I've been noticing some performance problems. I'm getting about 400ms waiting times for the index page, which seems quite high. When I first deployed it (with fewer features, but still), I recall index load times of around 80ms.
Now I would profile it, but the problem is that this only happens in my production environment. In my test environment, the index page only takes 10ms.
How do I go about profiling my production application? I use Apache+mono+mod_mono on Arch Linux with MongoDB. I have a similar test environment except I use xsp.
I'm unsure of where to look: my code, Apache's configuration, or MongoDB? How can I profile my production server to figure out why it's so much slower than my development environment?
Tough to be specific without details, but here's a shot at a general guide:
First I would recommend using something like Firebug for Firefox - there are equivalents in other browsers, but this is my old go-to tool for this kind of thing. Enable it and go to the Net Panel view for a waterfall diagram that will show you a list of every object that is loading on a page (you might have to refresh) - it will also have a blue line showing the render event (when the page becomes visible).
The waterfall should make it pretty obvious where the slow pieces of the page are and armed with that information you can go to the next stage - figuring out why particular pieces are slow.
If plugins are not your thing, or you suspect that it could be something local to your machine causing the issue, then take a look at: http://www.webpagetest.org/
That will give you the ability to remote test from different locations, different browsers, speeds etc. and give you similar detailed results.
If it is a static file being fetched, look at network problems or Apache as the cause. If it is dynamically generated, then look at Apache, ASP.NET, MongoDB, etc.
For Apache, what do the access logs say the response time for the index page is? Assuming Apache 2 or newer, make sure you have %D (and %T if you like) being logged so you can see the time taken to serve the page (from the Apache perspective) at the required level of detail. For more info on that, take a look at the LogFormat directive.
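For example (a sketch only - adapt the fields and log path to your setup), a format that appends the serve time in microseconds via %D would look something like:

    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
    CustomLog /var/log/httpd/access_log timed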
I can't help on the ASP/Mono side, not my thing, but adding debug statements at various points to track the index page generation (assuming it is dynamically generated) would be a pretty standard approach.
For the database, MongoDB by default logs only "slow" queries that take >100ms - if you are trying to track down a sub-100ms response time issue via the logs you will need to adjust that or you will likely get very little. That can be done as follows:
> db.setProfilingLevel(0,20) // leave profiling off, slow threshold=20ms
You can also adjust it as a startup parameter (--slowms) to the mongod process. More information on profiling, which may also help but has overheads, can be found here:
http://www.mongodb.org/display/DOCS/Database+Profiler
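As a startup example (the dbpath is a placeholder), the equivalent mongod invocation would be something like:

    mongod --slowms 20 --dbpath /var/lib/mongodb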
I'd suggest you have a look at Sam Saffron's MiniProfiler. If you use it in your site, it allows you to turn on profiling in production.
By adding sufficient instrumentation to your code, you should then be able to identify which bit is taking the time and then focus your efforts there.
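As a rough sketch (MVC-flavoured; the step names and data-access call are placeholders, and the namespace/exact API differ between MiniProfiler versions), instrumenting an action looks something like this:

    using System.Web.Mvc;
    using StackExchange.Profiling;

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // MiniProfiler.Current is null when profiling is off; Step() tolerates that.
            var profiler = MiniProfiler.Current;
            using (profiler.Step("Load posts"))
            {
                // ... hypothetical data access ...
            }
            using (profiler.Step("Render index"))
            {
                return View();
            }
        }
    }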
Firstly, apologies that I cannot be more specific - if I could, then I might know where to look. I have an ASP.NET web app with a bunch of pages using the AjaxControlToolkit. The speed at which the pages render differs greatly between my two environments. On one it is super slow, taking ~5 seconds to render a relatively simple page which has two grids and a bunch of controls. Everywhere I look, articles say "check your SQL", but that cannot be it, as SQL performance should be common across all environments. I have also narrowed it down to a case where the page is just doing a basic postback, no SQL, and the issue still reproduces: a user clicks Select All and we check a bunch of items in a list. I timed the code-behind for this and it is fast: 00:00:00.0002178.
The two environments are sitting side by side, same location, both have IE9, except one is running on W2K8 and one is W7. That is the only real difference. On W7 the pages are relatively fast to render.
Any pointers greatly appreciated.
EDIT:
Changing the debug to false did have a positive impact.
Debug   Page Time
True    0.143363802770544
False   0.0377570798614921
So what I will do next is systematically look at each component of the application to see where I am making mistakes - SQL, ViewState, etc. I'll update the post with my final findings for those interested.
Thanks All for the help!
I would check the following:
Check the CPU usage on the server through Task Manager (web app and database). Is it maxed out?
Are the servers out of physical memory? Again, Task Manager.
Are the returned pages massive? This is obvious but sometimes it's not. Sheer quantity of HTML can kill a page dead. This could be hidden (ViewState, elements with display:none) or actually on the page, but you are so used to looking at it that you can't see it (see the ViewState sketch after this list).
Get Fiddler onto it. Are you calling any external resources you shouldn't be? Maybe there is a web service you are relying on that is suddenly inaccessible from a particular box. We had a Twitter feed that was timing out that totally killed a site.
Do profile the database. You think it can't be that, but are you sure? Are you sure you're sure? You might not be comparing like with like and could be bringing back huge amounts of data in one test without realising it.
Certain processes are very sensitive to page/data length. For instance, I had a regex that utterly failed and timed the page out once the page reached a certain size. The performance hardly tailed off - it stopped dead (badly written - who did that? Erm, me!).
Is the physical box just dying? Are there any other processes/sites on there that are killing it? Who are you sharing the box with? Is the hard disk on its way out?
Firewalls can be naughty. We have seen massive increases in performance by rebooting a physical firewall. Can you trace those packets?
And there will be more - performance debugging can be a bit of an art. But hey - we're all artists, aren't we?
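On the ViewState point, here is a minimal sketch (the class name and trace message are my own, assuming Web Forms and System.Web) of one way to log how much serialized view state each page is carrying, so bloated pages stand out:

    using System.IO;
    using System.Web.UI;

    // Hypothetical base page: pages that inherit from this log the size of their
    // serialized view state to the diagnostics trace on every request.
    public class InstrumentedPage : Page
    {
        protected override void SavePageStateToPersistenceMedium(object state)
        {
            var formatter = new LosFormatter();
            using (var writer = new StringWriter())
            {
                formatter.Serialize(writer, state);
                System.Diagnostics.Trace.WriteLine(string.Format(
                    "ViewState for {0}: {1} characters", Request.Path, writer.ToString().Length));
            }
            base.SavePageStateToPersistenceMedium(state);
        }
    }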
I would probably first check to see whether I have 'debug' mode switched on in my web.config.
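For reference, the flag lives on the compilation element under system.web (snippet only; the rest of web.config is omitted):

    <system.web>
      <compilation debug="false" />
    </system.web>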
All answers point to "no!". However, some people say having debugging enabled is convenient when errors occur on the server. I am not sure what they mean by this. Do people actually debug live server code? I honestly didn't even know you could. With the website I work on, we use ELMAH for error reporting. When a server error occurs, we are emailed a complete stack trace. After acquiring a rough idea of where and how the error occurred, I will open the local solution containing all the code that's currently deployed to the production environment and debug locally. I never actually debug the code on the server itself, so I am not sure what people mean by that.
I ask this because I just found out today, while consolidating web.config XML, that debug=true is set in the staging and production environments' web.config files. It must have been this way for a few years now and I am wondering what benefits we will experience by turning it off. Could anything possibly depend on debugging being turned on that might break if it is shut off after being enabled for over two years, since the beginning of the project?
It should be fine to turn it off, and you should get a slight performance boost. It sounds like you are doing the right thing using ELMAH. I cannot think of a good reason why you would want to have it turned ON in production... hope that helps.
The "advantage" that people are talking about is that when an error occurs on the site, the default asp.net error page will show you the actual line of code that failed. If you have debug=false, then you will not see any of that information. I think most who would recommend something like this either do not know about logging frameworks like ELMAH (and hence, cannot easily find the cause of errors on the site without this), or they have left it on the production machine in the beginning of the project while they are installing/testing the site, and then forgot to change it later.
However, with a proper logging framework in place, you can still get good error information behind the scenes without presenting it to your end-users in that way. In fact, you don't want to show that kind of information to end-users because a) they won't know what it means, and b) it could be a possible security issue if sensitive aspects of your code are shown (info that might help somebody find vulnerabilities).
The whole System.Diagnostics library depends on this flag. If you do not use any of its functions, you probably will not see any direct effect, but to see the other effects and messages that come from the debug functions you need to monitor at least the Windows event log.
Functions like Debug.Assert and Debug.Fail remain active if you do not turn the debug flag off; they affect performance and can create small issues that you will never notice unless you check the Windows system log.
In our libraries, which are full of asserts, the debug flag is critical.
Also, with the debug flag on, the compiler probably does not make optimizations, which also affects performance.
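As a minimal illustration (a console sketch, assuming a standard build where the DEBUG symbol is defined only for debug configurations; web.config's debug attribute is related but governs ASP.NET page compilation): Debug.Assert and Debug.WriteLine are marked [Conditional("DEBUG")], so in a release build the calls, including the evaluation of their arguments, are removed entirely by the compiler.

    using System.Diagnostics;

    class DebugFlagDemo
    {
        static void Main()
        {
            // No-ops in a release build; active (and costly) in a debug build.
            Debug.Assert(ExpensiveInvariantCheck(), "invariant violated");
            Debug.WriteLine("only emitted when DEBUG is defined");
        }

        static bool ExpensiveInvariantCheck()
        {
            // Never even called in a release build, because the whole
            // Debug.Assert call above is compiled out.
            return true;
        }
    }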
Either an advantage or disadvantage depending on how you look at it is that Webresource.axd type files are not cached when debug="true". You've got the advantage of having the latest files every time and a disadvantage of having the latest files every time.
This is often true with other third-party compression/combining modules as well, because it is easier to debug non-minified JavaScript etc., so they usually only begin to function properly once debug is disabled.
I have tried looking at "related" questions for answers to this but they don't seem to actually be related...
Basically I have a VB.Net application with a catalogue, administration section (which can alter the catalogue, monitor page views etc etc) and other basic pages on the customer front end.
When I compile and run the app on my local machine it seems to compile fairly quickly and run very fast. However when deployed on the server it seems to take forever and a day on the very first page load (no matter what page it is, how many stylesheets / JS files there are, how many images there are, how big the page markup is and so on). After this ALL the pages load really fast. My guess is this is due to having to load the code from scratch; after that, until it is recycled, the application runs perfectly fast. Does anyone have any idea how I could speed this part of the application up? I am afraid that some customers (on slow connections such as my own at less than dial-up speed) may be leaving the site never to return as a result of it not loading fast enough. Any help would be greatly appreciated.
Thanks in advance.
Regards,
Richard
PS If you refer to some of my other questions you will find out a bit more about the system, such as the fact that most of the data is loaded into objects on the first page load - I am slowly sorting this out but it does not appear to be making all that much of a difference. I have considered using Linq-to-SQL instead but that, as far as I know, does not give me too much flexibility. I would rather define my own system architecture and make it specific to the company, rather than working within the restrictions of Linq-to-SQL.
If you can, the quickest, easiest solution is simply to configure the AppDomain not to recycle after a period of inactivity. How this is accomplished differs between IIS 6 and IIS 7.
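On IIS 7, for example, the idle timeout can be turned off from the command line with appcmd - something along these lines (the pool name is a placeholder; verify the exact syntax for your version):

    %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.idleTimeout:00:00:00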
Another option is to write a small utility program that requests a page from your site every four minutes and set it up as a scheduled task on another PC that is on all the time. That will at least prevent the timeout and the consequent AppDomain recycle from happening. It is a hack, to be sure, but sometimes any solution is better than none.
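A minimal keep-alive sketch (the URL is a placeholder; run it from Task Scheduler every few minutes):

    using System;
    using System.Net;

    class KeepAlive
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // Requesting any page keeps the AppDomain warm; the homepage is the obvious choice.
                string html = client.DownloadString("http://www.example.com/");
                Console.WriteLine("Fetched {0} characters at {1}", html.Length, DateTime.Now);
            }
        }
    }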
The proper solution, however, is to precompile your views. How exactly to accomplish and deploy that will depend on the exact type of Visual Studio project your web site is.
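For a Web Site project, for instance, precompilation can be done with the aspnet_compiler tool (paths below are placeholders; Web Application projects already compile code-behind at build time, but views can still be precompiled as part of publishing):

    aspnet_compiler -v /MySite -p C:\source\MySite C:\compiled\MySite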
I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast. But occasionally it is just really, really slow - I mean 5-10 seconds load time kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company. But the problems continue.
I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs then I can see what exactly my website was doing at the time of extreme slowness.
Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?
Every intermittent performance problem I ever had turned out to be caused by something in the database.
You need to check out my blog post Unexplained-SQL-Server-Timeouts-and-Intermittent-Blocking. No, it's not caused by a heavy INSERT or UPDATE process like you would expect.
I would run a database trace for half a day. Yes, the trace has to be done on production, because the problem doesn't usually happen in a low-use environment.
Your trace log rows will have a "Duration" column showing how long an event took. You are looking at the long running ones, and the ones before them that might be holding up the long running ones. Once you find the pattern you need to figure out how things are working.
IIS 7.0 has built-in ETW tracing capability. ETW is the fastest and lowest-overhead logging; it is built into the kernel. With respect to IIS, it can log every call. The best part of ETW is that you can include everything in the system and get a holistic picture of the application and the server. For example, you can include the registry, the file system, and context switching, and get call stacks along with durations.
Here is a basic overview of ETW and one specific to IIS, and I also have a few posts on ETW.
I would start by monitoring ASP.NET-related performance counters. You could even add your own counters to your application if you wanted. Also, look at the number of w3wp.exe processes running at the time of the slowdown versus normal, and look at their memory usage. Sounds to me like a memory leak that eventually results in a termination of the worker process, which of course fixes the problem, temporarily.
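If you do add your own counters, here is a rough sketch (the category and counter names are made up; creating a category requires admin rights and is normally done once at install time):

    using System.Diagnostics;

    static class MySiteCounters
    {
        // One-time setup, e.g. from an installer.
        public static void EnsureCategory()
        {
            if (!PerformanceCounterCategory.Exists("MySite"))
            {
                PerformanceCounterCategory.Create(
                    "MySite", "Custom counters for MySite",
                    PerformanceCounterCategoryType.SingleInstance,
                    "Slow Requests", "Requests that exceeded the slow threshold");
            }
        }

        // Call from the application whenever a slow request is detected.
        public static void RecordSlowRequest()
        {
            using (var counter = new PerformanceCounter("MySite", "Slow Requests", false))
            {
                counter.Increment();
            }
        }
    }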
You don't provide specifics of what your application is doing in terms of the resources (database, networking, files) that it is using. In addition to the steps from the other posters, I would take a look at anything that is happening at "out-of-process" such as:
Database connections
Files opened
Network shares accessed
...basically anything that is not happening in the ASP.NET process.
I would start off with the following list of items:
Turn on ASP.Net Health Monitoring to start getting some metrics & numbers.
Check the memory utilization on the server. Does recycling IIS periodically remove the issue (a memory leak?)?
ELMAH is a good tool to start looking at the exceptions. Also, go through the logs your application might be generating.
Then I would look for anti-virus software running at a particular time, long-running processes which might be slowing down the machine, a database backup schedule, etc.
HTH.
Of course ultimately I just want to solve the intermittent slowness issues (and I'm not yet sure if I have). But in my initial question I was asking for a rather specific logger.
I never did find an answer for that so I wrote my own stopwatch threshold logging. It's not quite as detailed as my initial idea but it has the benefit of being very easy to apply globally to a web application.
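For anyone after something similar, here is a rough sketch of the idea as an IHttpModule (the class name and the 500ms threshold are my own; register it in web.config under httpModules or modules):

    using System;
    using System.Diagnostics;
    using System.Web;

    public class SlowRequestLogModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (s, e) =>
            {
                // Start a stopwatch for this request.
                app.Context.Items["RequestStopwatch"] = Stopwatch.StartNew();
            };
            app.EndRequest += (s, e) =>
            {
                var sw = app.Context.Items["RequestStopwatch"] as Stopwatch;
                if (sw != null && sw.ElapsedMilliseconds > 500)
                {
                    // Only log requests that exceed the threshold.
                    Trace.WriteLine(string.Format("SLOW {0}ms {1}",
                        sw.ElapsedMilliseconds, app.Context.Request.RawUrl));
                }
            };
        }

        public void Dispose() { }
    }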
In my experience, performance-related issues are almost always IO-related and rarely the CPU.
One way to get a gauge on where things are, without writing instrumentation code or installing software, is to use Performance Monitor in Windows to see where the time is being spent.
Another quick way to get a sense of where problems might be is to run a small load test locally on your machine while a code profiler (like the one built into VS) is attached to the process to tell you where all the time is going. I usually find a few "quick wins" with that approach.
For the last few months we've had a weird problem with our website. Once in a while various queries to the database, using ADO.NET DataSets, will throw an error... the most common of which is "Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints."
The data is actually valid though, as without changing anything the error will be intermittent. Further, the "fix" for it is to recycle the app pool on both web servers... so the problem can't be bad data being returned. Once this is done it can run fine for weeks at a time, or break 3 times in one day. There's no consistency to it...
It also seems like newer means of data access, such as Linq 2 SQL, work just fine... though it's hard to tell since the site is using both at the moment. (Working on getting everything over to L2S, but don't have a lot of time to rewrite old components unfortunately...)
So has anyone had anything like this before? Is it something with the load balancing? Maybe something wrong with the servers? (I've forced all connections to each server in turn and experienced the error on both of them.) Could it be something wrong with running in a VM?
Err... ok, so the overall question is: What's causing this and how do I fix it?
Oh, and the website is in .NET 3.5...
Based on what you've said, I would guess that this is related to the load experienced on the servers at the time of the error.
If you can, set up a staging environment that is load balanced like your production servers are. Then start load testing the app.
Also, make sure you have all the latest service packs / updates applied on your production servers. MS has a tendency to not tell us everything they are fixing. Finally, look on MS connect to see if a hotfix corrects the problem you are talking about.
UPDATE:
Load testing can be as simple or complicated as you can afford. What it should do is run through a sequence of pages that perform standard operations on your site in a repeatable way. You usually want to simulate "think" times between each page load / operation that are in line with expected user behavior.
Then you execute the test with a certain number of simultaneous users. While the test is executing, you need to record any errors and the servers' performance counters to get an idea of how the app really performs.
Some links to load testing tools are here. Another list is here.
As a side note, I've seen apps start exhibiting strange behavior under a load of only 5 simultaneous users. It really depends on how the site is built.