I have a .NET Framework 4.0, ASP.NET MVC 3 web application hosted on Windows 7 / IIS 7.5. IIS logging is enabled on this machine and set to log in W3C mode.
The application is compiled using the Release configuration and has been deployed to IIS with the <compilation debug="false" /> attribute set explicitly. The Web.config specifies SQL Server based session state.
I have added the following statements to Global.asax in the BeginRequest and EndRequest events respectively. The results, i.e. sw.Elapsed.TotalMilliseconds, are stored in an Application-level list of values. I dump these values out via a debug page and compute their average.
// in BeginRequest
HttpContext.Current.Items.Add("RequestStartEnd", System.Diagnostics.Stopwatch.StartNew());
// in EndRequest
var sw = (System.Diagnostics.Stopwatch)HttpContext.Current.Items["RequestStartEnd"];
sw.Stop();
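For completeness, here is roughly how the two handlers and the Application-level list fit together in Global.asax; the class name, the list name, the null check and the locking are my assumptions, not from the original code:

using System;
using System.Collections.Generic;
using System.Web;

public class MvcApplication : HttpApplication
{
    // shared across all requests, hence the lock below
    private static readonly List<double> RequestTimings = new List<double>();

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpContext.Current.Items["RequestStartEnd"] = System.Diagnostics.Stopwatch.StartNew();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var sw = HttpContext.Current.Items["RequestStartEnd"] as System.Diagnostics.Stopwatch;
        if (sw == null) return;
        sw.Stop();

        lock (RequestTimings)
        {
            RequestTimings.Add(sw.Elapsed.TotalMilliseconds);
        }
    }
}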
I have created a load test which runs a single request against this application with a concurrent user load of 20 users. The test is run in Visual Studio 2010 Ultimate edition.
After running the load test, I am getting an average time-taken as recorded by the stopwatch of 681 milliseconds. The average time-taken as per IIS for these requests (I cleared all logs before running the load test) is 2121 milliseconds. The average time-taken as per IIS tallies with the value shown in the Visual Studio load test report.
The stopwatch time-taken only accounts for 32% of the time-taken reported by the IIS logs / Visual Studio. Where does the other 68% of the time go?
Update 1:
I set the session state to InProc and re-ran the load test. In this scenario the difference between the average time reported by the stopwatch and the average time-taken reported by the IIS logs grew to more than 70%! Where is all that time going?
Update 2:
@Peter - I tried out failed request tracing by adding a trace rule that logs on a status code of 200. Next, I ran the load test with 20 concurrent users for approximately 1.5 minutes, then went through the last 50 trace files and found that the "Time Taken" field in those reports ranged from 750 ms to 1300 ms. The Visual Studio report showed the average time-taken as 2300 ms. In the report, using the compact view, I see that the time taken changes between the following transitions:
(1) AspNetStart -> AspNetAppDomainEnter
(2) ManagedPipelineHandler-start -> ManagedPipelineHandler-end
Item (2) is probably my application's code. Still, there is a big difference between the maximum time-taken per the failed request logs (1300 ms) and the average time-taken shown by Visual Studio (2300 ms). How do I account for that difference? Thanks for this great tip though!
How long are you running your load test for? If you're running it under Windows 7 you may also get different results than under Windows Server. You could be having issues with getting threads allocated to the thread pool under burst loads: .NET immediately allocates threads up to the thread pool minimum and then only slowly allocates up to the thread pool maximum. I believe the default settings for client OSes are different than for server OSes. On a client OS the default minimum thread setting may be equal to the number of cores on your machine, meaning that .NET will then only slowly allocate more threads to meet your burst load.
A simple check would be to let your load test run longer and see if the gap between your 2 measurements narrows.
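If the thread pool ramp-up is the cause, one quick way to test the theory is to raise the thread pool minimum at application start; a sketch (the value 50 is arbitrary, tune it against your own load test rather than treating it as a recommendation):

// Global.asax.cs - raise the thread pool floor so burst loads don't wait for slow thread injection
protected void Application_Start()
{
    int workerMin, ioMin;
    System.Threading.ThreadPool.GetMinThreads(out workerMin, out ioMin);

    // 50 is purely illustrative
    System.Threading.ThreadPool.SetMinThreads(Math.Max(workerMin, 50), Math.Max(ioMin, 50));
}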
There is a better way to look into the internals of your app: "Failed Request Tracing Rules".
http://learn.iis.net/page.aspx/266/troubleshooting-failed-requests-using-tracing-in-iis-7/
With that you can follow exactly what your app is doing inside IIS.
I would suggest looking into MvcMiniProfiler. It's a NuGet package you could add and wire in (almost effortlessly) that really breaks down the execution times of various points in your MVC app. You could see a live view of each request and where any bottlenecks reside.
More info:
http://code.google.com/p/mvc-mini-profiler/
http://www.hanselman.com/blog/NuGetPackageOfTheWeek9ASPNETMiniProfilerFromStackExchangeRocksYourWorld.aspx
http://www.nuget.org/List/Packages/MiniProfiler
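Wiring it up is roughly this (based on the MvcMiniProfiler package of that era; in a real app you would normally only profile local or diagnostic traffic rather than every request):

// Global.asax.cs - start/stop the profiler around each request
using MvcMiniProfiler;

protected void Application_BeginRequest()
{
    MiniProfiler.Start();
}

protected void Application_EndRequest()
{
    MiniProfiler.Stop();
}

// inside a controller action, individual blocks can then be timed like this:
// using (MiniProfiler.Current.Step("Load products"))
// {
//     // expensive work
// }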
It's most likely a combination of the managed pipeline overhead and your network (even localhost or 127.0.0.1); that is probably where the "lost" time can be accounted for.
You mention that the IIS logs tally with your Visual Studio load test figures - both of those involve the network stack and the managed pipeline.
Your stopwatch code only executes inside the ASP.NET context (it starts just before ASP.NET execution begins and stops just before it ends), and does not take into account the IIS overhead of processing a TCP network request, parsing HTTP headers for both the request and response, and the transmission time through the managed pipeline and TCP stack.
[EDIT] The proportion is exaggerated even more if your ASP.NET page execution time is really fast, e.g. it does not consume a lot of CPU relative to the rest of the TCP stack and managed pipeline that marshals the HTTP request for you
If you are having issues with long load times, it may be related to several things; a high number of I/O operations in particular can cause this.
If you are doing database queries with an ORM, try profiling your SQL statements, either in Visual Studio with a SQL profiler or in SQL Server's built-in profiler. If you see a lot of individual queries, consider using Includes in your LINQ queries to bundle some of your data into single queries.
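For example, with Entity Framework (the Orders/OrderLines model below is hypothetical), eager loading turns an N+1 query pattern into a single joined query:

using System.Linq;

// one joined query instead of one query per order
using (var db = new ShopEntities())
{
    var orders = db.Orders
                   .Include("OrderLines")   // string overload available in EF 4.x
                   .Where(o => o.CustomerId == customerId)
                   .ToList();
}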
You may also want to consider using async controllers to improve response times if you have a lot of I/O operations, so your application isn't continuously waiting for each I/O operation to complete:
see wintellect.com/CS/blogs/jprosise/archive/2010/03/29/…
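The article above has the details; as a rough sketch, the MVC 3 async controller pattern looks something like this (controller name, URL and the "data" parameter are made up for illustration):

using System;
using System.Net;
using System.Web.Mvc;

public class ReportsController : AsyncController
{
    public void IndexAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            // hand the result to the Completed method, then signal we are done
            AsyncManager.Parameters["data"] = e.Result;
            AsyncManager.OutstandingOperations.Decrement();
        };
        client.DownloadStringAsync(new Uri("http://example.com/feed"));
    }

    public ActionResult IndexCompleted(string data)
    {
        return Content(data);
    }
}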
There are also better solutions out there for logging than IIS logging, which is a fairly old component. You may want to take a look at log4net for more general logging, or ELMAH for logging errors and exceptions, as these solutions can bundle up a batch of log entries and write several in a single operation, which also improves I/O performance.
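On the log4net side, basic usage looks roughly like this (the controller and message are placeholders; the batching itself comes from the appender you configure, e.g. a buffering appender, not from the code):

using log4net;
using System.Web.Mvc;

public class HomeController : Controller
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(HomeController));

    public ActionResult Index()
    {
        // actual write behaviour (buffered, async, file, db) is decided by the configured appenders;
        // log4net.Config.XmlConfigurator.Configure() is typically called once in Application_Start
        Log.Info("Index requested");
        return View();
    }
}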
I have a COM+ application which is called by a series of ASP Pages. I found this previous post which suggested the following were the ideal set of metrics to monitor:
Errors During Script Runtime
Errors From ASP Preprocessor
Requests Executing
Requests Queued
Sessions Total
Errors From Script Compilers
Debugging Requests
Request Execution Time
Request Wait Time
Requests/Sec
Requests Total
Requests succeeded
Requests Failed Total
Template Cache Hit Rate
Process (inetinfo) Private Bytes
Is this still applicable in IIS 8.5?
Also, they made mention of a tool called PAL and shared a link on CodePlex; however, PAL has since moved off CodePlex and is now available on GitHub here. Any insights from someone who has recently had to track PerfMon stats for a COM+ application are welcome.
Thanks!
Dustin
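For what it's worth, the counters in that list can also be sampled programmatically the same way PerfMon reads them; a small sketch (category and instance names are assumed from a default install, and "inetinfo" comes from the original list):

using System;
using System.Diagnostics;

class CounterCheck
{
    static void Main()
    {
        // "Active Server Pages" is the classic ASP category the listed counters belong to
        var queued       = new PerformanceCounter("Active Server Pages", "Requests Queued");
        var execTime     = new PerformanceCounter("Active Server Pages", "Request Execution Time");
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", "inetinfo");

        Console.WriteLine("Requests Queued: {0}", queued.NextValue());
        Console.WriteLine("Request Execution Time (ms): {0}", execTime.NextValue());
        Console.WriteLine("inetinfo Private Bytes: {0}", privateBytes.NextValue());
    }
}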
I am having some performance issues with my IIS web server. It hangs randomly and I am trying to figure out how to speed the server up. I enabled failed request tracing on the server and set it to generate a log when a request takes over 3 seconds.
The resulting logs (XML) don't show much, but there is a point in the compact performance log that indicates where the server is hanging. Below is the part of the log where the large time loss occurs.
65. i GENERAL_GET_URL_METADATA PhysicalPath="", AccessPerms="513" 17:46:32.577
66. i HANDLER_CHANGED OldHandlerName="", NewHandlerName="ExtensionlessUrlHandler-Integrated-4.0", NewHandlerModules="ManagedPipelineHandler", NewHandlerScriptProcessor="", NewHandlerType="System.Web.Handlers.TransferRequestHandler" 17:46:32.577
67. i VIRTUAL_MODULE_UNRESOLVED Name="FormsAuthentication", Type="System.Web.Security.FormsAuthenticationModule" 17:46:47.771
I am not sure what HANDLER_CHANGED is, but it is taking a long time; any tips on where to start looking would be great.
It is hard to come up with a solution without seeing any of the code. Here are some general hints/tips you can follow to get good performance out of an ASP.NET application.
The fastest way to do a request is to not do it in the first place. Try caching everything that can be cached (see the sketch after this list). There are server-side caches and client-side caches; each has its own uses, but you are not limited to only one type.
Make sure you do not cache and/or keep references to any request-related objects in memory. ASP.NET has a limited number of concurrent requests, and keeping a request reference in memory will hang your server if it runs out of threads.
Close the request as soon as you are done with it
Everything that is not needed by the client at the time of the request should be done in the background
Make sure you have no memory leaks in your application. Garbage collections are often the cause of hangs in ASP.NET applications: while collecting, all running threads are paused, and this is especially true for Gen 2 collections. You can enable background Gen 2 garbage collection.
Isolate the problematic code. Use a profiler and see which type of request is CPU-intensive. Then dig deeper and see what inside that request makes it slow.
In any well-balanced application, objects should either be short-lived or live forever. In the case of an ASP.NET application, the objects created during the course of a request should ideally die within that request or during the next Gen 0 collection.
Consider object pooling for large objects and objects that are long to initialize
Make sure your app pool doesn't crash and restart (look at the IIS logs and/or the Windows event logs)
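As a minimal example of the server-side caching mentioned in the first tip (the controller and the 60-second duration are placeholders):

using System.Web.Mvc;

public class ProductsController : Controller
{
    // caches the rendered output of this action for 60 seconds, per distinct id
    [OutputCache(Duration = 60, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        // load the model here (data access omitted); repeat requests within the
        // cache window are served without re-running this code
        return View(id);
    }
}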
Some useful debugging tools you can use:
LeanSentry. Great for diagnosing ASP.NET server hangs
windbg. High learning curve but by far the most powerful debugging tool you can use
PerfView. Useful for analyzing ETW events like I/O or CPU usage
There are many ways to improve server performance, but before anything else you should start by checking CPU usage during the "hang". An infinite loop in the application code can cause this behavior. Unless there is I/O, locking, or sleeping in the loop, you will be able to see it from the CPU usage, as you will get exactly one full core's worth of CPU usage for each infinite loop.
Help link to improve server performance
More Info:
I can see an entry related to VIRTUAL_MODULE_UNRESOLVED, which is related to bad use of Response.Redirect(url). Also make sure you have deployed your app in integrated mode on IIS.
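If Response.Redirect turns out to be the problem, the usual suspect is the single-argument overload, which ends the response by throwing a ThreadAbortException. A commonly suggested alternative, as a sketch (url stands for whatever redirect target is already in your code):

// endResponse: false avoids the ThreadAbortException; completing the request
// then skips the rest of the pipeline without aborting the thread
Response.Redirect(url, false);
HttpContext.Current.ApplicationInstance.CompleteRequest();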
Here's a simple checklist you might want to consider:
Always pre-compile your site, as opposed to just copying it! You might gain a significant performance boost by compiling your website before deployment: ASP.NET Precompilation Overview
Do not run the production application with debug="true" enabled. When the debug flag is true in your web.config, much more memory is used within the application at runtime, and since some additional debug paths are enabled, code can execute much more slowly
Check your Web.config file to ensure trace is disabled in the <system.web> section
IIS 7.5 comes with the Auto-Start feature. WAS (Windows Process Activation Service) starts all the application pools that are configured to start automatically; ensure that your application pool is configured with startMode="AlwaysRunning" in the IIS 7.5 applicationHost.config. Check out here for more detail.
Every ASP.NET server can also be tuned via the aspnet.config file located in the root of the framework folder. Ensure that Publisher Evidence for Code Access Security (CAS) is set to false in your aspnet.config file; this can speed up the initial page load when you restart the ASP.NET app pool. You can read more about it here.
Also, you might want to try the Application Initialization Module for IIS 7.5. This module, also available on IIS 8.0, can decrease the response time for first requests by pre-loading worker processes
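For the Auto-Start item, ASP.NET 4 on IIS 7.5 also lets you plug in a preload class that warms the application before the first real request arrives; a bare-bones sketch (the class still has to be registered as a serviceAutoStartProvider in applicationHost.config, and the actual warm-up work is up to you):

// referenced from applicationHost.config as a serviceAutoStartProvider
public class Warmup : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // prime caches, open connections, JIT critical paths, etc.
        // this runs when the app pool starts, before the first user request
    }
}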
I have a web service which communicates with a desktop application using .NET Remoting. The clients/users use this web service to insert or update data in the database (through the desktop application, after some processing). The problem I'm facing is that during peak times, which means 2000-3000 calls to this web service to insert/update data within 15 to 20 minutes, I can see the number of threads increase to around 2000 (in the Task Manager entry for w3wp.exe). What could be the possible reason for creating this many threads? Other web services show fewer than 50 threads.
NB: The web service which is causing the issue is in a different application pool.
Thanks
It means many calls are waiting for a response.
Since ASP.NET uses a thread pool, you should only use those threads for fast operations; for anything slower, use async handlers so the thread goes back to the pool and ASP.NET picks the request up again as soon as the result is ready.
It can also be the result of a deadlock in the BRL or a DLL, or anything else. Try putting a breakpoint in the web service code and find where it gets blocked.
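To illustrate the async handler suggestion, here is a sketch of an IHttpAsyncHandler that releases the ASP.NET thread while the database does the insert; the connection string name, SQL and parameter are placeholders, and the remoting call would slot into the same shape:

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

public class InsertHandler : IHttpAsyncHandler
{
    private SqlConnection _conn;
    private SqlCommand _cmd;

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object state)
    {
        // "Asynchronous Processing=true" must be present in the connection string
        _conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString);
        _cmd = new SqlCommand("INSERT INTO Audit (Payload) VALUES (@p)", _conn);
        _cmd.Parameters.AddWithValue("@p", context.Request["payload"]);
        _conn.Open();
        return _cmd.BeginExecuteNonQuery(cb, state);   // no thread is blocked while this runs
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        try { _cmd.EndExecuteNonQuery(result); }       // surfaces any SQL exception
        finally { _conn.Dispose(); }
    }

    public void ProcessRequest(HttpContext context)
    {
        throw new NotSupportedException();             // only the async path is used
    }

    public bool IsReusable
    {
        get { return false; }
    }
}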
Greetings!
I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process.
There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS.
The issue:
When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly.
When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page.
Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic.
Is this normal? If I bump the server RAM up to 4GB, will that be enough?
Will multiple users require even more memory?
Could the culprit be AJAX or the list of objects?
Thanks for any insight you can provide.
Did you try running this in your dev environment under IIS 7.5?
Make sure debug="false", not "true" in your web.config
I think you need to dig out some debugging tools and capture a worker process dump of your production server, you won't be able to properly diagnose this issue with just PerfMon and Task Manager.
I posted this answer on Stack Overflow a while back which should get you started:
CLR out Of Memory Exceptions
Well, after some hard work, I have identified the culprit: our old nemesis the Endless Loop. Of course, if the development environment had thrown an exception, I would have caught and excised the problem - but it didn't.
I would say the lesson learned here is to understand that different versions of IIS and DevEnv respond to errors differently and that we must test the app in the same configuration in which it will be deployed.
Thanks everyone for your feedback.
Under Windows Server 2008 64-bit, IIS 7.0 and .NET 4.0, I have an ASP.NET application (using the ASP.NET thread pool, synchronous request processing) that is long running (> 30 minutes). The web application has no pages; its main purpose is reading huge files (> 1 GB) in chunks (~5 MB) and transferring them to the clients. Code:
// consumer loop (simplified): 'buffer' is the ~5 MB chunk handed over by the producer
// thread, and 'reading' is cleared once the producer reaches the end of the file
while (reading)
{
    Response.OutputStream.Write(buffer, 0, buffer.Length);
    Response.Flush();
}
A single producer / single consumer pattern is implemented, so for each request there are two threads. I don't use the Task library here, but please let me know if it has an advantage over traditional thread creation in this scenario. An HTTP handler (.ashx) is used instead of an (.aspx) page. Under stress testing, CPU utilization is not a problem, but with a single worker process new connections encounter time-outs after 210 concurrent clients. This is solved by web gardening, since I don't use session state. I'm not sure if there's any big issue I've missed, but please let me know what other considerations you would take into account.
For example, maybe IIS closes long-running TCP connections due to a "connection timeout", since normal ASP.NET pages are processed in less than 5 minutes, so I should increase that value.
I appreciate your Ideas.
Personally, I would be looking at a different mechanism for this type of processing. HTTP requests / web applications are NOT designed for this type of thing, and stability is going to be VERY hard to achieve; you have a number of risks that could cause major issues as you work with this type of model.
I would move that processing off to a backend process, so that you are OUTSIDE of the ASP.NET runtime; that way you have more control over start/shutdown, etc.
First: never. NEVER. NEVER! do any processing that takes more than a few seconds on a thread pool thread. There are a limited number of them, and they're used by the system for many things. This is asking for trouble.
Second, while the handler is a good idea, you're a little vague on what you mean by "generate on the fly". Do you mean you are encrypting a file on the fly and this encryption can take 30 minutes? Or do you mean you're pulling data from a database and assembling a file? Or that the download itself takes 30 minutes?
Edit:
As I said, don't use a thread pool thread for anything long running. Create your own thread, or if you're using .NET 4, use a Task and mark it as long running.
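A quick sketch of both options (DoTransfer stands in for the long-running file work, assumed to be a parameterless void method):

// option 1: a dedicated thread outside the thread pool
var worker = new System.Threading.Thread(DoTransfer) { IsBackground = true };
worker.Start();

// option 2: a .NET 4 Task marked LongRunning, which also gets a dedicated thread
// instead of borrowing one from the pool
System.Threading.Tasks.Task.Factory.StartNew(
    DoTransfer,
    System.Threading.Tasks.TaskCreationOptions.LongRunning);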
Long running processes should not be implemented this way. Pass this off to a service that you set up.
IF you do want to have a page hang for a client, consider interfacing from AJAX to something that does not block on IO threads - like node.js.
Push notifications to many clients is not something ASP.NET handles well due to thread usage, hence the node.js suggestion. If your load is low, you have other options.
Use web gardening for more stability of your application.
Turn off caching since you don't have .aspx pages.
It's hard to advise more without a performance analysis. Use the VS built-in profiler and find the bottlenecks.
The Web 1.0 way of dealing with long-running processes is to spawn them off on the server and return immediately. Have the spawned-off service update a database with its progress so that pages on the site can query for progress.
The most common usage of this technique is getting a package delivery. You can't hold the HTTP connection open until my package shows up, so it just gives you a way to query for progress. The background process deals with orchestrating all of the steps it takes for getting the item, wrapping it up, getting it onto a UPS truck, etc. All along the way, each step is recorded in the database. Conceptually, it's the same.
Edit based on question edit: just return a result page immediately and generate the binary on the server in a spawned thread or process. Use Ajax to check whether the file is ready, and when it is, provide a link to it.
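A bare-bones sketch of that pattern (the handlers, the job store and the polling contract are all made up for illustration; the database-backed progress store suggested above would replace the static dictionary):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

// shared progress store; a static dictionary keeps the sketch short
public static class JobStore
{
    public static readonly ConcurrentDictionary<Guid, int> Progress =
        new ConcurrentDictionary<Guid, int>();
}

// kicks off the work and returns immediately with a job id the page can poll via Ajax
public class StartJobHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var jobId = Guid.NewGuid();
        JobStore.Progress[jobId] = 0;

        Task.Factory.StartNew(() => GenerateFile(jobId), TaskCreationOptions.LongRunning);

        context.Response.Write(jobId.ToString());
    }

    private static void GenerateFile(Guid jobId)
    {
        for (int pct = 0; pct <= 100; pct += 10)   // placeholder for the real work
        {
            JobStore.Progress[jobId] = pct;
            Thread.Sleep(1000);
        }
    }

    public bool IsReusable { get { return true; } }
}

// polled via Ajax; returns the current percentage, 100 meaning the file is ready
public class JobProgressHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var jobId = Guid.Parse(context.Request["id"]);
        int pct;
        JobStore.Progress.TryGetValue(jobId, out pct);
        context.Response.Write(pct.ToString());
    }

    public bool IsReusable { get { return true; } }
}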