Timeout during longer ASP.Net process - asp.net

We have an ASP.NET page with a button that kicks off a process that can take up to 3 minutes to complete. The problem we are running into is that the connection between IE and IIS aborts before the process can complete.
We are looking at improving the performance of the task or moving it into an asynchronous process, but in the meantime I'm looking for a quick fix that would allow the task to complete.
I have...
changed the executionTimeout in <httpRuntime> to 300 seconds
increased the connection timeout in IIS on the site to 300 seconds
increased the Application Pool timeouts to 300 seconds
but I continue to see (Aborted) in the Result column of the Network monitor in IE after 90 seconds.
As a side note I should point out that the process does complete (I can see it in the logs). The user just doesn't know, because IE is no longer connected.
Is there something else I am missing?

You have to do the following two things.
In the Web.config file:
<sessionState mode="InProc" timeout="300"/>
In the application pool's advanced settings, also set the worker process idle time-out to 300 seconds.
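If the work has to stay synchronous for now, one common stopgap (not part of the answer above, so treat it as an assumption about your setup) is to trickle a little output to the client while the long task runs, so neither IE nor any proxy in between sees a silent connection for 90 seconds. A minimal Web Forms sketch, where RunLongTask is a hypothetical stand-in for the real 3-minute job:

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Web.UI;

    public partial class LongTaskPage : Page
    {
        protected void StartButton_Click(object sender, EventArgs e)
        {
            Response.BufferOutput = false;          // send output as soon as it is written

            foreach (int percent in RunLongTask())  // hypothetical long-running work
            {
                Response.Write("<!-- " + percent + "% complete -->");
                Response.Flush();                   // push bytes to IE so the connection is never silent
            }

            Response.Write("Done.");
        }

        // Stand-in for the real task; yields progress as it goes (roughly 3 minutes in total).
        private IEnumerable<int> RunLongTask()
        {
            for (int percent = 0; percent <= 100; percent += 10)
            {
                Thread.Sleep(TimeSpan.FromSeconds(18));
                yield return percent;
            }
        }
    }

Flushing HTML comments keeps the connection alive without changing what the user sees, but it is only a stopgap; the real fix is still the asynchronous approach you are already planning.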

Related

Request Wait Time is steady in IIS (ASP.NET)

I'm monitoring my application's health in SolarWinds, and the strange thing I'm noticing is that the Request Wait Time counter is steady. It has not changed for quite a few hours, even though there are no requests in the queue and Requests/Sec is not that high. Is there a specific reason for this?
Request Wait Time
The number of milliseconds that the most recent request waited in the queue for processing.
https://msdn.microsoft.com/en-us/library/fxk122b4.aspx
In other words: This value should remain the same until IIS processes another request.
Credit: https://serverfault.com/a/579180
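If you want to see this for yourself rather than rely on the counter description, you can poll the counter directly on the web server. A small console sketch using the standard ASP.NET system counters (category "ASP.NET", no instance name); with an empty queue the wait-time value should simply repeat:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class RequestWaitTimeProbe
    {
        static void Main()
        {
            // Global ASP.NET counters; read-only by default with this constructor.
            using (var waitTime = new PerformanceCounter("ASP.NET", "Request Wait Time"))
            using (var queued = new PerformanceCounter("ASP.NET", "Requests Queued"))
            {
                while (true)
                {
                    // Request Wait Time only changes when a request is dequeued, so on an
                    // idle server the same value will be printed over and over.
                    Console.WriteLine("wait(ms)=" + waitTime.NextValue() + " queued=" + queued.NextValue());
                    Thread.Sleep(5000);
                }
            }
        }
    }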

How to adjust the request queue timeout in IIS

Is there a timeout for an HTTP request that is held in the IIS request queue?
If there is a timeout, what happens when a request stays in the IIS request queue longer than that?
Is it discarded, or is it executed by the server once threads become available?
Good question, I'm surprised it's infinite by default, as a surge would overload IIS with requests (up to the limit, which is 3000 by default).
If you have a well-tuned application, I would say 1-3 seconds is a good range. Users typically don't wait longer than a second anyway; they'll hit refresh. In my case I have a dinosaur with all kinds of clunky reports, so I have set it to 30 seconds.
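For reference, there is also a queue timeout at the HTTP.sys level; it shows up as "Request queue timeout (secs)" in netsh http show servicestate output (see the dump in the HTTP.sys question below). If you self-host on HTTP.sys with HttpListener rather than IIS, you can set that value directly through HttpListenerTimeoutManager; a minimal sketch, with the prefix and the 30-second value purely illustrative:

    using System;
    using System.Net;

    class QueueTimeoutExample
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");   // illustrative prefix

            // Requests that sit in the HTTP.sys kernel queue longer than this are
            // timed out by HTTP.sys instead of waiting indefinitely for a worker.
            listener.TimeoutManager.RequestQueue = TimeSpan.FromSeconds(30);

            listener.Start();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
            listener.Stop();
        }
    }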

How can I debug buffering with HTTP.sys?

I am running Windows 8.1 and I have an integration test suite that leverages HostableWebCore to spin up isolated ASP.NET web server processes. For performance reasons, I am launching 8 of these at a time and once they are started up I send a very simple web request to each, which is handled by an MVC application loaded into each. Every instance is listening on a different port.
The problem is that the requests are getting held up (I believe) in HTTP.sys (or whatever it is called these days). If I look at Fiddler, I can see all 8 requests immediately (within a couple of milliseconds) hit the ServerGotRequest state. However, the requests sit in this state for 20-100 seconds, depending on how many I run in parallel at a time.
The reason I suspect an HTTP.sys problem is that the amount of time I have to wait for any of them to respond increases with the number of hosting applications I spin up in parallel. If I only launch a single hosting application, it will start responding in ~20 seconds. If I spin up 2, they will both start responding in ~30 seconds. If I spin up 4, ~40 seconds. If I spin up 8, ~100 seconds (which is the default WebClient request timeout).
Because of this long delay, I have enough time to attach a debugger and put a breakpoint in my controller action and that breakpoint will be hit after the 20-100 second delay, suggesting that my process hasn't yet received the request. All of the hosts are sitting idle for those 20-100 seconds after ~5-10 seconds of cold start CPU churning. All of the hosts appear to receive the requests at the same time, as if something was blocking any request from going through and then all of a sudden let everything through.
My problem is, I have been unable to locate any information related to how one can debug HTTP.sys. How can I see what it is doing? What is causing the block? Why is it waiting to forward on the requests to the workers? Why do they all come through together?
Alternatively, if someone has any idea how I can work around this and get the requests to come through immediately (without the waiting) I would very much appreciate it.
Another note: I can see System (PID 4) immediately register to listen on the port I have specified as soon as the hosting applications launch.
Additional Information:
This is what one of my hosting apps looks like under netsh http show servicestate
Server session ID: FD0000012000004C
    Version: 2.0
    State: Active
    Properties:
        Max bandwidth: 4294967295
        Timeouts:
            Entity body timeout (secs): 120
            Drain entity body timeout (secs): 120
            Request queue timeout (secs): 120
            Idle connection timeout (secs): 120
            Header wait timeout (secs): 120
            Minimum send rate (bytes/sec): 150
    URL groups:
        URL group ID: FB00000140000018
            State: Active
            Request queue name: IntegrationTestAppPool10451{974E3BB1-7774-432B-98DB-99850825B023}
            Properties:
                Max bandwidth: inherited
                Max connections: inherited
                Timeouts:
                    Timeout values inherited
                Logging information:
                    Log directory: C:\inetpub\logs\LogFiles\W3SVC1
                    Log format: 0
            Number of registered URLs: 2
            Registered URLs:
                HTTP://LOCALHOST:10451/
                HTTP://*:10451/

Request queue name: IntegrationTestAppPool10451{974E3BB1-7774-432B-98DB-99850825B023}
    Version: 2.0
    State: Active
    Request queue 503 verbosity level: Basic
    Max requests: 1000
    Number of active processes attached: 1
    Controller process ID: 12812
    Process IDs:
        12812
Answering this mainly for posterity. It turns out my problem wasn't HTTP.sys at all; it was ASP.NET.

ASP.NET takes a shared lock when it compiles pages, and that lock is identified by System.Web.HttpRuntime.AppDomainAppId. Because all of my apps are built dynamically from a common applicationHost.config file, I believe they all end up with the same AppDomainAppId (/LM/W3SVC/1/ROOT). That means they all share one lock, and page compilation for all of the apps effectively runs sequentially.

Because of the way the apps keep entering and leaving that lock, none of them gets through compilation quickly; they all inch forward together, so they tend to finish at roughly the same time. Once one of them makes it through, the others are close behind and finish just after.
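If anyone wants to check whether their own hosts are sharing that lock, the identifier is readable at runtime. A quick sketch (call it from Application_Start in each hosted instance) that just logs the relevant values; if every instance prints the same AppDomainAppId, they are serializing on the same compilation lock:

    using System.Diagnostics;
    using System.Web;

    public static class AppIdentityLogger
    {
        // Call from Application_Start in Global.asax of each hosted instance.
        public static void Log()
        {
            Trace.WriteLine("AppDomainAppId:   " + HttpRuntime.AppDomainAppId);   // e.g. /LM/W3SVC/1/ROOT
            Trace.WriteLine("AppDomainAppPath: " + HttpRuntime.AppDomainAppPath); // physical path of the app
            Trace.WriteLine("AppDomainId:      " + HttpRuntime.AppDomainId);      // unique per app domain
        }
    }

Giving each test host its own distinct site/application entry, so each gets its own ID, would presumably avoid sharing the lock, but that part is speculation beyond what the answer above confirms.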

Idle time-out for a background thread on IIS

IIS version: 7.5.
Idle time-out for the application pool: 20 minutes.
Steps:
1. A user visits a page.
2. When the server receives the request, the code creates a new thread to process a complex operation, and at the same time a response is sent to the user saying that the request is being processed in the background.
After 20 minutes with no visits to the site, the worker process is shut down, even though the complex operation has not finished.
How can I make IIS understand that the worker process is not idle while such a thread is still running?
I had the same problem. The best solution I have found so far is to have the IIS worker periodically make a new request to the site itself, so the application pool never looks idle and the background thread is not shut down.
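A minimal sketch of that self-ping approach (the URL, the page it points at and the five-minute interval are all assumptions; use any cheap page in the same site and anything comfortably shorter than the idle time-out):

    using System;
    using System.Net;
    using System.Threading;

    public static class KeepAlivePinger
    {
        private static Timer _timer;

        // Start this when the background work begins; stop it when the work completes.
        public static void Start()
        {
            _timer = new Timer(_ =>
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        // Hypothetical lightweight page in the same site; any request
                        // resets the application pool's idle clock, so the worker
                        // process stays alive while the background thread runs.
                        client.DownloadString("http://localhost/keepalive.aspx");
                    }
                }
                catch (WebException) { /* ignore transient ping failures */ }
            }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
        }

        public static void Stop()
        {
            _timer?.Dispose();
        }
    }

Alternatively, setting the application pool's idle time-out to 0 in its advanced settings stops the worker process from idling out at all, which avoids the ping but keeps the process resident permanently.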

Does an ASP.NET HTTP Request Translate to 1 Thread?

Is it safe to assume that when a user requests an .aspx page via HTTP, that ASP.NET creates at least 1 thread for it?
If so, how long does it last?
If 1000 people make the HTTP request to the same .aspx page, is there some recycling of threads involved, so it doesn't spawn 1000 different threads?
Each request is allocated a thread from the ASP.NET thread pool. The idea is that request processing should be short-running, so the thread can quickly be returned to the pool for use by another incoming request (the pool is not huge; usually something like 50 threads). So if you have a long-running request, it's important to make an async call to free the thread for other requests; when the long-running operation completes, you get another thread from the pool to finish up, as in the sketch below.
Bottom line: if 1000 people make requests at the same time and none of them finish, roughly 50 will run and the other 950 will wait.
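A minimal sketch of that async hand-off as an MVC action (the controller and the 30-second delay are illustrative stand-ins for real asynchronous I/O); while the await is pending, this request holds no thread-pool thread:

    using System;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ReportsController : Controller
    {
        public async Task<ActionResult> LongRunning()
        {
            // Stand-in for real async I/O (database call, web service, file read).
            // The request thread goes back to the pool here; a thread is only
            // borrowed again to run the continuation once the awaited work finishes.
            await Task.Delay(TimeSpan.FromSeconds(30));

            return Content("Finished the long-running work.");
        }
    }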
