IIS version: 7.5
Idle Time-out for the application pool: 20 minutes.
Steps:
1. A user visits a page.
2. When the server receives the request, the code creates a new thread to process a complex operation, and at the same time a response is sent to the user saying the request is being processed in the background.
After 20 minutes with no visits to the site, the worker process is shut down, even though the complex operation has not finished.
How can I make IIS treat the worker process as not idle while that thread is still running?
I have the same problem at the moment. The best solution I have found so far is to have the background thread send a new request from the IIS worker process to IIS itself, so the process never looks idle and the background thread is not shut down.
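The keep-alive approach described above can be sketched roughly as follows. This is a minimal, hedged sketch, not production code: the `KeepAlive` class name and the self-URL are assumptions, and the 5-minute interval simply needs to be shorter than the 20-minute idle time-out.

```csharp
// Sketch: while background work is running, periodically send a request to
// the site itself so IIS sees activity and does not idle-shut the process.
using System;
using System.Net;
using System.Threading;

public static class KeepAlive
{
    private static Timer _timer;

    public static void Start(string selfUrl)
    {
        // Ping every 5 minutes, well inside the 20-minute idle time-out.
        _timer = new Timer(_ =>
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(selfUrl); // counts as activity for IIS
                }
            }
            catch (WebException) { /* ignore transient failures */ }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }

    public static void Stop()
    {
        _timer?.Dispose(); // call this when the background work completes
    }
}
```

Note that a cleaner long-term fix is to move the work out of the web process entirely (a Windows service or a queue-based worker), since any in-process approach still loses the work if the pool is recycled for other reasons.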
I am running Windows 8.1 and I have an integration test suite that leverages HostableWebCore to spin up isolated ASP.NET web server processes. For performance reasons, I am launching 8 of these at a time and once they are started up I send a very simple web request to each, which is handled by an MVC application loaded into each. Every instance is listening on a different port.
The problem is that the requests are getting held up (I believe) in HTTP.sys. If I look at Fiddler, I can see all 8 requests immediately (within a couple of milliseconds) hit the ServerGotRequest state. However, the requests sit in this state for 20–100 seconds, depending on how many I run in parallel.
The reason I suspect an HTTP.sys problem is that the amount of time I have to wait for any of them to respond increases with the number of hosting applications I spin up in parallel. If I launch a single hosting application, it starts responding in ~20 seconds. With 2, both start responding in ~30 seconds. With 4, ~40 seconds. With 8, ~100 seconds (which is the default WebClient request timeout).
Because of this long delay, I have enough time to attach a debugger and put a breakpoint in my controller action and that breakpoint will be hit after the 20-100 second delay, suggesting that my process hasn't yet received the request. All of the hosts are sitting idle for those 20-100 seconds after ~5-10 seconds of cold start CPU churning. All of the hosts appear to receive the requests at the same time, as if something was blocking any request from going through and then all of a sudden let everything through.
My problem is, I have been unable to locate any information related to how one can debug HTTP.sys. How can I see what it is doing? What is causing the block? Why is it waiting to forward on the requests to the workers? Why do they all come through together?
Alternatively, if someone has any idea how I can work around this and get the requests to come through immediately (without the waiting) I would very much appreciate it.
Another note: I can see System (PID 4) immediately register to listen on the port I have specified as soon as the hosting applications launch.
Additional Information:
This is what one of my hosting apps looks like under netsh http show servicestate:
Server session ID: FD0000012000004C
Version: 2.0
State: Active
Properties:
Max bandwidth: 4294967295
Timeouts:
Entity body timeout (secs): 120
Drain entity body timeout (secs): 120
Request queue timeout (secs): 120
Idle connection timeout (secs): 120
Header wait timeout (secs): 120
Minimum send rate (bytes/sec): 150
URL groups:
URL group ID: FB00000140000018
State: Active
Request queue name: IntegrationTestAppPool10451{974E3BB1-7774-432B-98DB-99850825B023}
Properties:
Max bandwidth: inherited
Max connections: inherited
Timeouts:
Timeout values inherited
Logging information:
Log directory: C:\inetpub\logs\LogFiles\W3SVC1
Log format: 0
Number of registered URLs: 2
Registered URLs:
HTTP://LOCALHOST:10451/
HTTP://*:10451/
Request queue name: IntegrationTestAppPool10451{974E3BB1-7774-432B-98DB-99850825B023}
Version: 2.0
State: Active
Request queue 503 verbosity level: Basic
Max requests: 1000
Number of active processes attached: 1
Controller process ID: 12812
Process IDs:
12812
Answering this mainly for posterity. It turns out my problem wasn't HTTP.sys at all; it was ASP.NET. ASP.NET takes a shared lock when it compiles files, and that lock is keyed by System.Web.HttpRuntime.AppDomainAppId. Since all of my apps are built dynamically from a common applicationHost.config file, they all end up with the same AppDomainAppId (/LM/W3SVC/1/ROOT). That means they all share one lock, and page compilation effectively runs sequentially across all of the apps.
Because the apps are constantly acquiring and releasing that lock, none of them gets an uninterrupted run to the end, so they all tend to finish around the same time: once one of them finally makes it through, the others are close behind.
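If that diagnosis is right, one workaround is to give each generated host a distinct site ID so each gets its own AppDomainAppId (e.g. /LM/W3SVC/2/ROOT, /LM/W3SVC/3/ROOT) and therefore its own compilation lock. A hypothetical sketch, assuming the per-instance applicationHost.config is produced from a template (the `{SITE_ID}`/`{PORT}` placeholders and the `HostConfigWriter` name are assumptions, not part of any real schema):

```csharp
// Hypothetical sketch: stamp a unique site ID into each generated
// applicationHost.config so the hosts no longer all share /LM/W3SVC/1/ROOT.
using System.IO;

public static class HostConfigWriter
{
    public static void Write(string templatePath, string outputPath,
                             int siteId, int port)
    {
        var config = File.ReadAllText(templatePath)
            .Replace("{SITE_ID}", siteId.ToString())
            .Replace("{PORT}", port.ToString());
        File.WriteAllText(outputPath, config);
    }
}
```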
We have an ASP.NET page with a button that kicks off a process that can take up to 3 minutes to complete. The problem we are running into is that the connection between IE and IIS aborts before the process can complete.
We are looking at improving the performance of the task, or moving it into an asynchronous process, but in the meantime I'm looking for a quick fix that would allow the task to complete.
I have...
changed the executionTimeout in <httpRuntime> to 300 seconds
increased the connection timeout in IIS on the site to 300 seconds
increased the Application Pool timeouts to 300 seconds
but I continue to see (Aborted) in the Result column of the Network monitor in IE after 90 seconds.
As a side note I should point out that the process does complete (I can see in the logs). The user just doesn't know because IE is no longer connected.
Is there something else I am missing?
You have to do the following two things.
In the Web.config file:
<sessionState mode="InProc" timeout="300"/>
In the application pool's advanced settings, make sure the worker process idle time-out is also set to 300 seconds.
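For reference, the request execution limit itself lives in the <httpRuntime> element of Web.config. A fragment like the following (the values are examples) raises it to 300 seconds; note that executionTimeout is only enforced when <compilation debug="false">, so debug builds will not honor it:

```xml
<system.web>
  <compilation debug="false" />
  <httpRuntime executionTimeout="300" />
  <sessionState mode="InProc" timeout="300" />
</system.web>
```

Even with these settings, a client-side timeout (IE aborting after 90 seconds, as described in the question) has to be addressed on the client or by making the operation asynchronous, since no server-side setting can keep the browser waiting.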
There have been some questions about how to detect when an application is shutting down using IRegisteredObject. However, IRegisteredObject.Stop will not be invoked until all active requests complete.
This will be the case for long running requests (pushlet, long polling, web socket), meaning that an app pool recycle can be held up by these requests indefinitely.
Is there a way to detect from a long running request that an application shut down is pending?
I've already tried both IRegisteredObject and polling HostingEnvironment.ShutdownReason; neither fires until active requests have completed.
The Katana/Owin project accesses the internal System.Web.Hosting.UnsafeIISMethods.MgdHasConfigChanged method to detect a shutdown so that long running requests can detect this state.
See Katana's ShutdownDetector and UnsafeIISMethods for a sample implementation.
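A hedged sketch of that reflection trick follows. Since UnsafeIISMethods is an internal type, this can break between framework versions, so every step is guarded and the probe degrades to "unknown" rather than throwing; the `ShutdownProbe` name is an assumption:

```csharp
// Sketch: poll the internal System.Web.Hosting.UnsafeIISMethods.MgdHasConfigChanged
// via reflection so a long-running request can notice that a recycle is pending.
using System;
using System.Reflection;

public static class ShutdownProbe
{
    private static readonly Func<bool> _hasConfigChanged = Build();

    private static Func<bool> Build()
    {
        try
        {
            var type = typeof(System.Web.HttpRuntime).Assembly
                .GetType("System.Web.Hosting.UnsafeIISMethods");
            var method = type?.GetMethod("MgdHasConfigChanged",
                BindingFlags.NonPublic | BindingFlags.Static);
            if (method == null) return null;
            return (Func<bool>)Delegate.CreateDelegate(typeof(Func<bool>), method);
        }
        catch (Exception)
        {
            return null; // internal API not found; treat shutdown state as unknown
        }
    }

    // True if a config change (and therefore a recycle) is pending;
    // false if unchanged or if the internal API could not be bound.
    public static bool IsShutdownPending()
    {
        return _hasConfigChanged != null && _hasConfigChanged();
    }
}
```

A long-polling loop would call IsShutdownPending() on each iteration and end the response gracefully when it returns true, letting the recycle proceed.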
Is it safe to assume that when a user requests an .aspx page via HTTP, that ASP.NET creates at least 1 thread for it?
If so, how long does it last?
If 1000 people make the HTTP request to the same .aspx page, is there some recycling of threads involved, so it doesn't spawn different 1000 threads?
Each request is served by a thread from the ASP.NET thread pool. The idea is that request handling should be short-lived so the thread can be returned to the pool and reused by another incoming request (the pool is not huge; often around 50 threads). So if you have a long-running request, it's important to make an async call to free the thread for other requests; when your long-running operation completes, another thread is taken from the pool to finish up.
Bottom line: if 1000 people make requests at the same time and none of them finish, 50 or so will run and the other 950 will wait.
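The async hand-off described above can be sketched with an async MVC action (this assumes .NET 4.5+ and MVC 4+; the controller and the delay are illustrative stand-ins for real I/O work):

```csharp
// Sketch: while the await is in flight, the request thread returns to the
// pool and can serve other requests; a (possibly different) pool thread
// resumes the action when the awaited work completes.
using System;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    public async Task<ActionResult> Generate()
    {
        // Stand-in for a long-running, I/O-bound operation.
        await Task.Delay(TimeSpan.FromSeconds(30));
        return Content("done");
    }
}
```

This only helps when the long-running work is I/O-bound; CPU-bound work still occupies a thread somewhere, just not a request thread blocked on waiting.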
Given an IIS server which receives heavy traffic and a website has been restarted, what happens to pending requests during the Application_Start event in ASP.NET?
It is my understanding that the first request triggers the application's compilation and startup. Do the other requests just queue up?
Our Application_Start event does a lot of configuration and setup and can take several seconds. Is it bad to have heavy traffic during this time?
It is bad to get heavy traffic during startup. How bad? It depends on how much time you take to start and how much incoming traffic you get.
While your application is starting, check the ASP.NET performance counter "Requests Queued". The more traffic you get, the more requests are queued, up to the limit (5k?). Once the queue is full, any incoming request gets an HTTP 503 right away.
If your startup takes longer than the default request timeout (100s in .NET 2.0+), the requests in the queue will start to time out too, and new ones will take their place.
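One mitigation, if you are on IIS 8+ (or IIS 7.5 with the Application Initialization module installed), is to warm the application up before real traffic hits it, so the expensive Application_Start runs against a synthetic request instead of a user's. A hedged config sketch; the /warmup path is an example, not a required value:

```xml
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/warmup" />
  </applicationInitialization>
</system.webServer>
```

Combined with setting the application pool's Start Mode to AlwaysRunning, this keeps startup cost off the request path for most recycles, though a burst of traffic during the warm-up window will still queue as described above.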