This may very well be a question that is too broad to answer, but any ideas would be incredibly beneficial. I have a web site where load times are incredibly slow in one environment but not the other. In general, the time to first byte is around 15 seconds on most pages. It takes this long on every page within the entire application, not only on first load. I have been troubleshooting the issue for several days now and feel completely lost as to the actual cause of the latency.
Now for a long explanation about the issue.
From what I can gather, the environment is a Frankenstein's monster pieced together from different sources, with too many people having had their hands in it. I have carefully taken the time to compare the two environments and haven't identified a key difference. There are numerous things at play here, but I can summarize the main components.
It is a .NET web application built using Orchard CMS, running within IIS with a SQL Server backend. A dedicated server hosts the database and another dedicated server hosts the web application itself, which is pretty standard. The main difference between the environments is that the production site is running at Liquid Web and the new development site is running in AWS. Basically, the site will ultimately be migrated to AWS once the latency issues are resolved.
AWS has more than enough resources. In fact, production (Liquid Web) has been running into issues lately due to CPU usage being nearly maxed out. There are many more resources in AWS, and neither of the servers appears to be using more than 1% or 2% of its available resources. I verified this.
If the issue is within the database, I'm not really sure where else to look. I used SQL Server Profiler on the database server to analyze traffic, and no transactions were taking more than half a second, aside from the Audit Login/Logout events (which, from my research, is normal behavior). The main database queries execute almost immediately after trying to navigate to a page within the site, not 15 seconds later when the page loads.
I had a thought that network traffic between the AWS application server and the database server could be bottlenecked somewhere. I also thought it could be an issue with routing within the domain, such as the way DNS is set up, but that does not seem to be the case either... or perhaps it is, and I just haven't figured out the best way to troubleshoot it. Either way, resolving the application on localhost does not improve performance; the page still hangs for 15-20 seconds.
The vRAM usage for the site's application pool and the default app pool certainly does seem to be on the high end, if that makes a difference.
I have browsed the IIS logs and cannot find anything obvious. Granted, I don't have much experience with IIS and could be missing something. The Windows Event logs show me nothing out of the ordinary either. There are some errors in both Liquid Web and AWS regarding printer drivers not being installed, but those have nothing to do with the application itself.
I am unsure how to check whether it has something to do with Orchard CMS. Granted, this is just a package/framework that was migrated to the dev server along with the application itself, and I see nothing that would have changed within the environment.
The fact is that the two environments seem identical, yet one is running very slowly based on some factor that I just can't seem to identify.
Thank you!
Related
We're getting more and more complaints from users that our ASP.NET 4.5.2 website is running slowly or just generally "freezing up." Things look fine from our test servers and from our workstations, but we're probably using better workstation hardware and browsers than our customers. We're running ASP.NET 4.5.2, C#, SQL Server.
What are some areas that we should concentrate on for debugging such a nebulous request? Should I be looking at system performance and resources on the application servers? System performance and resources on the SQL server? We're tracking application page load times, and they don't seem to be excessive or much changed from months ago, even though customer complaints have gone up.
What are some best practices for starting our investigation, and where's the low-hanging fruit for improving performance overall?
If your page is getting slower "sometimes" during the day, I would suggest first checking Performance Monitor on your IIS server. This could easily be an issue with the server hitting its limits (machine or IIS settings). One way to verify this is to create a sandbox server and run your application from there for your testers.
After that, if you are executing stored procedures, add some monitoring to them to gather a few cases, and then check whether any of them causes the process to freeze or delay.
I must also mention the possibility of locked tables, so a code review may be in order (the most time-consuming of all the above).
This should give you a hint as to where your issue originates.
Good luck
If you suspect SQL problems, you can run SQL Server Profiler to check what is running at the moment and whether there is something that could be "freezing up" your system. This way you can see what is going on when the system is slow.
Reference
We have an application deployed on Windows Azure as a Web Role and we are using Pingdom for testing page load times: http://tools.pingdom.com/fpt/
The url for the application on Windows Azure is: http://www.doctorspring.com .
The load time of the app is usually around 7s.
The database is an SQL Azure database and the role and the database are in the same zone.
Sample pingdom result: http://tools.pingdom.com/fpt/#!/CllGggrMz/http://www.doctorspring.com/
Sample pingdom result (with gzip): http://tools.pingdom.com/fpt/#!/f2TUbR6OX/www.doctorspring.com
Suspecting that Azure could be the problem, we tried free hosting from Somee:
http://www.doctorspring.somee.com
The load time of the app on Somee is around 3.5s.
Sample pingdom result: http://tools.pingdom.com/fpt/#!/o3gZOjTwH/http://www.doctorspring.somee.com/
That is a huge performance issue for us.
Can you please help us understand the problem with Azure, or suggest how we can overcome it?
Thanks,
Manish
In both cases, loading the homepage is unacceptably slow - 3.5 seconds to generate a page is around 10 times slower than you need to be when there's no load on the site. I'd expect the site to crumble under even moderate load with this kind of performance.
Without knowing how the site is constructed, it's hard to explain the reason one environment is faster than the other - but my guess is that whatever is generating the page (some kind of CMS?) is the cause. Azure is known to be a touch slow when doing database queries - though normally this only manifests itself under extreme conditions.
I'd recommend tuning the CMS - especially with caching. We found that Azure is normally pretty fast, but when doing database lookups (e.g. retrieving content for the CMS), it can be variable; if your CMS is doing a LOT of database queries to get the homepage content, it's going to be slow.
It's also worth running YSlow - there's some low-hanging fruit there for getting performance up.
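If the CMS sits on ASP.NET MVC underneath, even plain output caching on the heaviest action is a cheap first step. A minimal sketch, assuming an MVC controller and an arbitrary 5-minute duration:

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        // Cache the rendered homepage for 5 minutes so repeated hits
        // skip the CMS's database lookups entirely.
        [OutputCache(Duration = 300, VaryByParam = "none")]
        public ActionResult Index()
        {
            return View();
        }
    }

Even a short cache window takes most of the database chatter off the hot path for a busy homepage.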
What services are you running in Azure? Web role, VM, Website? Are you connecting to an Azure database instance from the homepage (if so, how many distinct calls are you making)? I'm getting around a 7.5 second load time from London, but to be honest even 3 seconds is too slow for the homepage. It's hard to know what's causing the prolonged page load, but if you are connecting to a DB instance there's a great deal you can do, e.g.:
Render the page and make some asynchronous calls to spool in additional data.
Make sure your Azure services are running close together.
Consider caching database content to a blob. E.g., for the data in "Medical Questions Answered in Last 24 Hours": if you are pulling this from the DB on every load, you could considerably speed up access by routinely caching it to an HTML file stored in a blob container and injecting it into the page.
If you must make DB calls from the homepage, try to make as few round trips as possible by batching your queries into a stored procedure (see the sketch below).
I've made a lot of assumptions here, but there are certainly things you could do to drastically improve performance on this page.
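To illustrate the batching point: a minimal sketch in C#, assuming a single stored procedure (here called usp_GetHomepageData, a made-up name) that returns all of the homepage's data sets in one round trip, read back with SqlDataReader.NextResult():

    using System.Data;
    using System.Data.SqlClient;

    // Sketch: one round trip for all homepage data instead of several.
    // The connection string and procedure name are assumptions.
    public static class HomepageData
    {
        public static void Load(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("usp_GetHomepageData", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* bind first block, e.g. recent questions */ }

                    // NextResult() moves to the next result set returned by the same
                    // procedure call, so no extra network round trip is needed.
                    reader.NextResult();
                    while (reader.Read()) { /* bind second block, e.g. doctor list */ }
                }
            }
        }
    }

Each SELECT the homepage issues as a separate command adds at least one network round trip to the database, which is exactly where Azure's variability tends to show up.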
I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast. But occasionally it is just really, really slow. I mean like 5-10 second load times kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company. But the problems continue.
I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs, I can see exactly what my website was doing at the time of extreme slowness.
Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?
Every intermittent performance problem I ever had turned out to be caused by something in the database.
You need to check out my blog post Unexplained-SQL-Server-Timeouts-and-Intermittent-Blocking. No, it's not caused by a heavy INSERT or UPDATE process like you would expect.
I would run a database trace for half a day. Yes, the trace has to be done on production, because the problem doesn't usually happen in a low-use environment.
Your trace log rows will have a "Duration" column showing how long an event took. You are looking for the long-running ones, and for the ones before them that might be holding up the long-running ones. Once you find the pattern, you need to figure out how things are working.
IIS 7.0 has built-in ETW tracing capability. ETW is the fastest and lowest-overhead logging; it is built into the kernel. With respect to IIS, it can log every call. The best part of ETW is that you can include everything in the system and get a holistic picture of the application and the server. For example, you can include the registry, file system, and context switching, and get call stacks along with durations.
Here is a basic overview of ETW specific to IIS, and I also have a few posts on ETW.
I would start by monitoring the ASP.NET-related performance counters. You could even add your own counters to your application if you wanted. Also, look at the number of w3wp.exe processes running at the time of the slowdown vs. normal, and look at their memory usage. Sounds to me like a memory leak that eventually results in a termination of the worker process, which of course fixes the problem, temporarily.
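As a starting point, here is a rough sketch of sampling a few of those counters from a small console app. The counter and instance names are the standard ones but may need adjusting on your machine (for example, the w3wp instance can be "w3wp#1" when more than one worker process is running):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSample
    {
        static void Main()
        {
            var requestsQueued = new PerformanceCounter("ASP.NET", "Requests Queued");
            var privateBytes   = new PerformanceCounter("Process", "Private Bytes", "w3wp");
            var requestsPerSec = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__");

            // Rate counters need two samples; the first NextValue() always returns 0.
            requestsPerSec.NextValue();
            Thread.Sleep(1000);

            Console.WriteLine("Requests Queued:    {0}", requestsQueued.NextValue());
            Console.WriteLine("w3wp Private Bytes: {0:N0}", privateBytes.NextValue());
            Console.WriteLine("Requests/Sec:       {0:F1}", requestsPerSec.NextValue());
        }
    }

Steadily climbing Private Bytes between app pool recycles would support the memory leak theory.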
You don't provide specifics of what your application is doing in terms of the resources (database, networking, files) that it uses. In addition to the steps from the other posters, I would take a look at anything that is happening "out of process", such as:
Database connections
Files opened
Network shares accessed
...basically anything that is not happening in the ASP.NET process.
I would start off with the following list of items:
Turn on ASP.NET Health Monitoring to start getting some metrics and numbers.
Check the memory utilization on the server. Does recycling the IIS app pool periodically remove the issue (suggesting a memory leak)?
ELMAH is a good tool to start looking at the exceptions. Also, go through any logs your application might be generating.
Then I would look for anti-virus software running at a particular time, long-running processes that might be slowing down the machine, a database backup schedule, etc.
HTH.
Of course ultimately I just want to solve the intermittent slowness issues (and I'm not yet sure if I have). But in my initial question I was asking for a rather specific logger.
I never did find an answer for that so I wrote my own stopwatch threshold logging. It's not quite as detailed as my initial idea but it has the benefit of being very easy to apply globally to a web application.
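For anyone searching for the same thing, here is a minimal sketch of that kind of threshold logging wired up globally in Global.asax; the 2-second threshold and the use of System.Diagnostics.Trace as the sink are placeholders for whatever logger you already have:

    using System;
    using System.Diagnostics;
    using System.Web;

    public class Global : HttpApplication
    {
        private const int ThresholdMs = 2000; // assumed threshold, tune to taste

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            Context.Items["RequestStopwatch"] = Stopwatch.StartNew();
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var sw = Context.Items["RequestStopwatch"] as Stopwatch;
            if (sw == null) return;

            sw.Stop();
            if (sw.ElapsedMilliseconds >= ThresholdMs)
            {
                // Only slow requests get logged, so the overhead stays negligible.
                Trace.WriteLine(string.Format("SLOW {0} ms {1} {2}",
                    sw.ElapsedMilliseconds, Request.HttpMethod, Request.RawUrl));
            }
        }
    }

Because only requests over the threshold are written out, it can stay enabled in production without meaningfully affecting page loads.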
In my experience, performance problems are almost always IO related and rarely the CPU.
One way to get a gauge on where things are, without writing instrumentation code or installing software, is to use Performance Monitor in Windows to see where the time is being spent.
Another quick way to get a sense of where problems might be is to run a small load test locally on your machine while a code profiler (like the one built into VS) is attached to the process to tell you where all the time is going. I usually find a few "quick wins" with that approach.
We currently have a live ASP.NET application (basically a CMS) running on our IIS 7 web server.
Every once in a while (talking every few months) its app pool will go to 100% CPU usage and stay there until the page times out. We've tried increasing the page timeout to 30 minutes in the web.config, but it still just stayed at full CPU, so I'm presuming it's some form of infinite loop.
It is a massive application, one of the biggest we have, and far too large to blindly search for an issue. The prevailing opinion is that since it's so rare we can just restart the app-pool whenever it happens, but I'd much prefer to fix it.
I have access to the code and full administrator access to the hosting server, and the monitoring software we're running gives me plenty of time to be on the server while the issue is taking place, but I can't find any way to get useful data about what's going on at the time without adding a massive constant overhead to the site (which, given that it takes months to happen, isn't really viable).
I'm wondering if anyone has advice on how I could narrow down the search. A stack trace of the currently running threads would be spectacular, but even just a list of the pages that are actively being served would make a huge difference. I can add code to the project to make it more traceable, but logging everything in the hopes of catching it would be unrealistic (it gets a lot of traffic and we don't want to add significant overhead to page loads).
Tess's blog is an excellent resource on debugging production ASP.NET applications.
I think this blog post from her blog will be really helpful in getting started in debugging this problem: Hang debugging walkthrough.
Hope this helps
I recommend using the ASP.NET performance counters (like the request queue and number of requests).
I am developing an e-commerce project on ASP.NET 3.5 with C#. I am using a 3-tier (Data + Business + UI) structure to reach the data in the database (MS SQL 2005).
All data access (CRUD methods) goes through stored procedures.
There is a performance issue here: the project is running very slowly. I couldn't find any problem in the transaction model.
Also, the project is running on shared hosting in an overseas country. The database server and web server are running on different machines. The database server has nearly 1000 databases.
How can I test and learn where the problem is?
Since there are upwards of 1000 databases sharing resources, I would take a stab that this might be your issue. If you connect to your database and it takes 5 seconds to run a simple query, then you can guess the problem.
I would add some stopwatch functionality to a "test page" that runs on your web server. This should give you the basic info to see whether there is a bottleneck in waiting for the database to return your query. If the database comes back quickly, then I would suspect your web server.
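A rough sketch of what such a test page could look like (the connection string name "Main" and the trivial SELECT 1 are placeholders), timing the connection open and a single round trip separately:

    using System;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using System.Web.UI;

    public partial class DbTimingTest : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string cs = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;

            using (var conn = new SqlConnection(cs))
            {
                var connect = Stopwatch.StartNew();
                conn.Open();                          // time to get a connection
                connect.Stop();

                var query = Stopwatch.StartNew();
                using (var cmd = new SqlCommand("SELECT 1", conn))
                {
                    cmd.ExecuteScalar();              // time for a trivial round trip
                }
                query.Stop();

                Response.Write(string.Format("connect: {0} ms, query: {1} ms",
                    connect.ElapsedMilliseconds, query.ElapsedMilliseconds));
            }
        }
    }

If the connect or query time runs into seconds on a trivial statement, the shared database server is the likely culprit; if both are fast while the real pages stay slow, suspicion shifts to the web server.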
Your last option would be to set up a simple low-spec machine with the DB and web server on it and just test. Depending on how much traffic your site is getting, you should be able to get a pretty good idea of its response time.
Tools such as YSlow might also be of some help; however, these are usually used more for fine-tuning.
Since you're running on a shared hosting service, I would guess that's where your problem is. You're competing for server resources with every other website and database on those servers.
To make sure, I would set up a local environment that mimics your production environment. Then perform some standard stress tests to see how it performs. If it performs how you would expect, then it is probably your hosting solution.
With shared hosting solutions, you really do get what you pay for. If it's a system that requires a lot more speed than you're getting, you should look at a dedicated hosting solution.
I suggest you take a look at Tracing:
http://davidhayden.com/blog/dave/archive/2005/07/17/2396.aspx
This enables you to see a stack trace (the last picture in the article) and localize your performance bottlenecks.
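A tiny sketch of what that looks like in practice, assuming ASP.NET page tracing is enabled (Trace="true" on the page directive or a <trace> element in web.config); the trace output shows the elapsed time between each pair of Trace.Write calls, which points at the slow step:

    using System;
    using System.Web.UI;

    public partial class ProductList : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Trace.Write("Perf", "Begin loading products");
            // ... data access and binding go here ...
            Trace.Write("Perf", "End loading products");
        }
    }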
A quick solution I developed to keep logs of performance on my web app may help you here. I have a web server and DB server running a similar-sounding app. I wrote a web service that runs a "benchmarking" stored procedure and returns the run time. I also wrote a Windows app that runs on my development server, calls the web service, passes it the name of the stored procedure to run, and times how long the whole request takes. The app writes the data to a log file and runs every 10 minutes as a scheduled task. Extra bells and whistles include automatic emails to team members when performance exceeds the specified threshold 3 consecutive times, when it fails to connect, and when it recovers to normal performance after a slow period.
This provides a general indication of how a user's experience on the website will be at any given time and serves as a warning bell for the team. Not exactly the best solution, but I wrote it in a couple of hours several months ago and have used the data it creates for troubleshooting purposes many times.
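For reference, a simplified sketch of the scheduled-task side of that setup; the service URL, log path, and threshold are all hypothetical, and the real version also emailed the team after three consecutive slow or failed runs:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;

    class BenchmarkTask
    {
        // Hypothetical endpoint that runs the benchmarking stored procedure.
        const string ServiceUrl = "http://example.com/BenchmarkService.asmx/RunBenchmark";
        const string LogPath = @"C:\Logs\benchmark.csv";
        const int ThresholdMs = 3000;

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            string status = "OK";
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(ServiceUrl); // full round trip, like a user request
                }
            }
            catch (Exception ex)
            {
                status = "FAIL: " + ex.Message;
            }
            sw.Stop();

            if (status == "OK" && sw.ElapsedMilliseconds >= ThresholdMs)
                status = "SLOW";

            File.AppendAllText(LogPath, string.Format("{0:u},{1},{2}{3}",
                DateTime.Now, sw.ElapsedMilliseconds, status, Environment.NewLine));
        }
    }

Run every 10 minutes as a scheduled task, the log builds up exactly the kind of timeline that makes intermittent slow periods visible.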