Fixing slow initial load for IIS - asp.net

IIS has an annoying feature for low-traffic websites: it recycles idle worker processes, so the first visitor to the site after a period of inactivity gets an extremely long delay (30+ seconds).
I've been looking for a solution to the problem and have found these potential solutions:
A. Use the Application Initialization plugin
B. Use Auto-Start with .NET 4
C. Disable the idle-timeout (under IIS Reset)
D. Precompile the site
I'm wondering which of these is preferred, and more importantly, why are there so many solutions to the same problem? (My guess is they aren't, and I'm just not understanding something correctly).
Edit
Performing C seems to be enough to keep my site warmed up, but I've discovered that the real root of my site's slowness has to do with Entity Framework, which I can't seem to figure out why it's going cold. See this question, which unfortunately hadn't been answered at the time, but has since been answered!
I eventually just had to make a warm up script to hit my site occasionally to make sure it stayed speedy.

Options A, B, and D seem to be in the same category: they only influence the initial start time, because they warm up the website by doing things like compilation and loading libraries into memory.
Using C, setting the idle timeout, should be enough so that subsequent requests to the server are served quickly (restarting the app pool takes quite some time, on the order of seconds).
As far as I know, the timeout exists to save memory that other websites running in parallel on that machine might need. The price is that one-time slow load.
Besides the fact that the app pool gets shut down in case of user inactivity, the app pool will also recycle by default every 1740 minutes (29 hours).
From TechNet:
"Internet Information Services (IIS) application pools can be periodically recycled to avoid unstable states that can lead to application crashes, hangs, or memory leaks."
As long as app pool recycling is left on, it should be enough.
But if you really want top notch performance for most components, you should also use something like the Application Initialization Module you mentioned.

Web Hosting Challenge
You have to remember that none of the machine configuration options are available if you are hosted on a shared server as many of us (smaller companies and individuals) are.
ASP.NET MVC Overhead
My site takes at least 30 seconds when it hasn't been hit in over 20 minutes (and the web app has been stopped). It is terrible.
Another Way to Test Performance
There's another way to test if it is your ASP.NET MVC start up or something else. Drop a normal HTML page on your site where you can hit it directly.
If the problem is related to ASP.NET MVC start up then the HTML page will render almost immediately even when the web app hasn't been started.
That's how I first recognized that the problem was in the ASP.NET MVC startup.
I loaded an HTML page at any time and it would load blazing fast. Then, after hitting that HTML page I'd hit one of my ASP.NET MVC URLs and I'd get the Chrome message "Waiting for raddev.us..."
Another Test With Helpful Script
After that I wrote a LINQPad (check out http://linqpad.net for more) script that would hit my web site every 8 minutes (less than the time for the app to unload -- which should be 20 minutes) and I let it run for hours.
While the script was running I hit my web site and every time my site came up blazingly fast. This gives me a good idea that most likely the slowness I was experiencing was because of ASP.NET MVC startup times.
Get LINQPad and you can run the following script -- just change the URL to your own, let it run, and you can test this easily.
Good luck.
NOTE: In LINQPad you'll need to press F4 and add a reference to System.Net to add the library which will retrieve your page.
ALSO: Make sure you change the String URL variable to point at a URL that will load a route from your ASP.NET MVC site so the engine will run.
System.Timers.Timer webKeepAlive = new System.Timers.Timer();
Int64 counter = 0;

void Main()
{
    // Fire quickly the first time; the Elapsed handler switches to every 8 minutes.
    webKeepAlive.Interval = 5000;
    webKeepAlive.Elapsed += WebKeepAlive_Elapsed;
    webKeepAlive.Start();
}

private void WebKeepAlive_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    webKeepAlive.Stop();
    try
    {
        // ONLY the first time it retrieves the content it will print the string
        String finalHtml = GetWebContent();
        if (counter < 1)
        {
            Console.WriteLine(finalHtml);
        }
        counter++;
    }
    finally
    {
        webKeepAlive.Interval = 480000; // every 8 minutes
        webKeepAlive.Start();
    }
}

public String GetWebContent()
{
    try
    {
        String URL = "http://YOURURL.COM";
        WebRequest request = WebRequest.Create(URL);
        // Dispose the response so connections aren't left open between pings.
        using (WebResponse response = request.GetResponse())
        using (Stream data = response.GetResponseStream())
        using (StreamReader sr = new StreamReader(data))
        {
            string html = sr.ReadToEnd();
            Console.WriteLine(String.Format("{0} : success", DateTime.Now));
            return html;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(String.Format("{0} -- GetWebContent() : {1}", DateTime.Now, ex.Message));
        return "fail";
    }
}

Writing a ping service/script to hit your idle website is arguably the best way to go, because you have complete control. The other options you mentioned would only be available if you had leased a dedicated hosting box.
In a shared hosting space, warm-up scripts are the best first-level defense (self-help is the best help). Here is an article which shares an idea of how to do it from your own web application.
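A rough sketch of that idea, assuming you start a timer from Application_Start in Global.asax that periodically requests the site's own URL (the URL and interval below are placeholders). Note that this only helps while the application is still loaded; once the app pool has been unloaded, nothing inside the app can restart it, so an external pinger remains the more reliable option.
private static System.Threading.Timer keepAliveTimer;

protected void Application_Start()
{
    // Ping ourselves every 10 minutes so IIS keeps seeing activity.
    keepAliveTimer = new System.Threading.Timer(_ =>
    {
        try
        {
            using (var client = new System.Net.WebClient())
            {
                client.DownloadString("http://www.example.com/keepalive"); // placeholder URL
            }
        }
        catch
        {
            // Best-effort only; ignore failures.
        }
    }, null, TimeSpan.Zero, TimeSpan.FromMinutes(10));
}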

I'd use B because that in conjunction with worker process recycling means there'd only be a delay while it's recycling. This avoids the delay normally associated with initialization in response to the first request after idle. You also get to keep the benefits of recycling.

A good option to ping the site on a schedule is to use Microsoft Flow, which is free for up to 750 "runs" per month. It is very easy to create a Flow that hits your site every hour to keep it warm. You can even work around their limit of 750 by creating a single flow with delays separating multiple hits of your site.
https://flow.microsoft.com

See this article for tips on how to address performance issues, including startup-related issues covered in its "cold start" section. Most of this applies no matter what type of server you are using, locally or in production.
https://blogs.msdn.microsoft.com/b/mcsuksoldev/2011/01/19/common-performance-issues-on-asp-net-web-sites/
If the application deserializes anything from XML (and that includes web services…), make sure SGEN is run against all binaries involved in deserialization and place the resulting DLLs in the Global Assembly Cache (GAC). This precompiles all the serialization objects used by the assemblies SGEN was run against and caches them in the resulting DLLs. This can give huge time savings on the first deserialization (loading) of config files from disk and on initial calls to web services.
http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx
If any IIS servers do not have outgoing access to the internet, turn off Certificate Revocation List (CRL) checking for Authenticode binaries by adding generatePublisherEvidence="false" to machine.config. Otherwise, every worker process can hang for over 20 seconds during start-up while it times out trying to connect to the internet to obtain a CRL.
http://blogs.msdn.com/amolravande/archive/2008/07/20/startup-performance-disable-the-generatepublisherevidence-property.aspx
http://msdn.microsoft.com/en-us/library/bb629393.aspx
Consider using NGEN on all assemblies. However without careful use this doesn’t give much of a performance gain. This is because the base load addresses of all the binaries that are loaded by each process must be carefully set at build time to not overlap. If the binaries have to be rebased when they are loaded because of address clashes, almost all the performance gains of using NGEN will be lost.
http://msdn.microsoft.com/en-us/magazine/cc163610.aspx

I was getting a consistent 15-second delay on the first request after 4 minutes of inactivity. My problem was that my app was using Windows Integrated Authentication to SQL Server and the service profile was in a different domain than the server. This caused a cross-domain authentication from IIS to SQL upon app initialization, and this was the real source of my delay. I changed to using a SQL login instead of Windows authentication. The delay was immediately gone. I still have all the app initialization settings in place to help improve performance, but they may not have been needed at all in my case.
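For illustration only, a sketch of the kind of connection string change described above (server, database, and login names are placeholders, not from the original post):
// Before: Windows Integrated Authentication, which in the scenario above
// triggered a slow cross-domain authentication from IIS to SQL Server.
var integratedSecurity = "Server=DBSERVER;Database=MyAppDb;Integrated Security=SSPI;";

// After: a SQL Server login, avoiding the cross-domain handshake.
var sqlLogin = "Server=DBSERVER;Database=MyAppDb;User Id=myAppLogin;Password=<secret>;";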

Related

How do I get the web site's root URL from a Quartz.NET job?

I am implementing a scheduler with Quartz.NET in an ASP.NET web project. The web site is purely there to house some WCF services that form the back-end of a WPF desktop application. Therefore, the web site will only be active when the users of the desktop application are active. This means that the web site application is likely to close down during the night. However, I want the scheduler to keep running at all times.
Note that using a Windows service isn't an option due to our hosting, even though that would seem to be the best option overall. I am stuck with something like Quartz.NET (as far as I know).
Whilst reading around about this, I have seen a lot of suggestions to use a scheduled job that calls a page on the site every 19 minutes, to avoid the 20 minute time out. First question is, is this the best way to do it?
If it is, then I have a second question. All of the examples I have seen show a hard-coded URL, which I want to avoid. Ideally, I want the URL to be picked up in code, so that when running in Visual Studio, it will pick up and call the localhost URL, and when deployed, it will pick up the live one.
I know I can put the URL in the web.config file, and use a transform to change this to the live one when deploying, but I was wondering if there was a better way to do it.
This guy seems to have thought it out well:
https://www.mikesdotnetting.com/article/254/scheduled-tasks-in-asp-net-with-quartz-net
I will paste the header paragraph here, in case the link above dies. Then future readers have some piece of concrete info to search on.
A perennial question on the ASP.NET forums concerns how to schedule regular tasks as part of a web application. Typically, the requirement is to send emails once every 24 hours at a particular time each day, but it could actually be anything from tweeting on a schedule to performing maintenance tasks. Equally typically, half a dozen members on the forum dive in with recommendations to install Windows Services or schedule batch files with the Task Scheduler - regardless of the fact that most web site owners are not afforded such privileges as part of their shared hosting plan.
I got the above URL from the link below.
Since you cannot write a Windows service, look at this URL; it has some options.
https://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
It occurred to me that the answer is quite simple. The job itself doesn't know anything about the web context, but the containing application does.
All I needed to do was add a static string property to the job that pings the web site...
public static string WebSiteRootUrl = "";
...then in the Global.asax.cs of the ASP.NET web project, I did the following...
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (KeepWebSiteAliveJob.WebSiteRootUrl == "")
    {
        string uri = HttpContext.Current.Request.Url.AbsoluteUri;
        KeepWebSiteAliveJob.WebSiteRootUrl = uri.Substring(0, uri.IndexOf("/", 8) + 1);
    }
}
As the Request object isn't available when the application starts, I had to do this in Application_BeginRequest, which means that a) it will be fired every time a request is made and b) the URL returned by HttpContext.Current.Request.Url.AbsoluteUri will include the full path to the requested resource, not just the desired root URL.
To get around the first issue, I only set KeepWebSiteAliveJob.WebSiteRootUrl if it hasn't been set yet (i.e. it is an empty string). This will be on the very first request. I don't think this would actually be an issue without the check, as it's so quick that it is unlikely to cause any problems, but it was an easy check, so worth including just in case.
As for the second issue, I took advantage of the fact that a URL will contain two forward slashes between the scheme and the domain, and then a third after the domain and (optional) port. As we are only using http or https for the scheme, the magic number 8 starts the IndexOf() search after the double slashes, meaning that we get back the root URL.
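As an alternative sketch (not part of the original answer), Uri.GetLeftPart can produce the same root URL without the magic number, assuming the site is hosted at the root of its domain:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (KeepWebSiteAliveJob.WebSiteRootUrl == "")
    {
        // GetLeftPart(UriPartial.Authority) returns scheme://host[:port];
        // append the trailing slash to match the substring version above.
        KeepWebSiteAliveJob.WebSiteRootUrl =
            HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority) + "/";
    }
}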
Hope this helps someone.

ASP.NET MVC slow on host?

I have uploaded my ASP.NET MVC (3) site to my host, but the site is a lot slower on the first load of every page (even with no data fetch).
The first time I visit the start page it takes 7.30 s; if I hit reload after 1 min it takes 1.05 s; if I hit reload repeatedly it gives me between 500 ms and 800 ms.
If I return after around 5 min and hit reload, I get a 7 s load again.
If I run the same website from my localhost (IIS 7), I get 1 s the first time and then 650 ms for rapid reloads.
The web page uses a database, but it's the same database in both cases (it is hosted at my host).
The webpage is www.biss.se
Where should I begin to look?
Edit:
This is my Application_Start()
protected void Application_Start()
{
    AccountModel accountModel = new AccountModel();
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    MappingHandler.RegisterMappings();

    #region Register Extra DataNotations for Display Attribute
    ModelMetadataProviders.Current = new DisplayMetaDataProvider();
    #endregion

    if (!accountModel.CheckIfAdminAccountExists("adminAccount"))
    {
        accountModel.CreateUser("adminAccount",
                                "Admin",
                                "Admin",
                                "",
                                "",
                                postCode: "",
                                locationId: "",
                                inactive: false,
                                siteRole: Controllers.SiteRoles.Admin,
                                activatedByUser: true);
    }
}
When the first request hits an ASP.NET application, this application is loaded in memory by the web server by creating an AppDomain and the code inside Application_Start is executed. This process could take more or less time depending on the actions you are performing inside this event and the number of assemblies to be loaded. After a period of inactivity or if certain memory/CPU thresholds are reached IIS could recycle the application and unload it from memory. On the next request the same process repeats.
So basically what you should be looking for is the tasks you are performing inside your Application_Start event, which is executed upon the first request. If those tasks involve I/O operations such as database access, you could log the time it takes to perform them. This way you will be able to pinpoint the exact parts of your code that take a long time, and either fix them if it depends on you, or contact your hosting provider if it is a problem on their side.
The MiniProfiler is a great tool for this profiling purpose.
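For example, a rough sketch (not from the original answer) of timing the steps of the question's Application_Start with System.Diagnostics.Stopwatch, writing the results to the trace output:
protected void Application_Start()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();

    AreaRegistration.RegisterAllAreas();
    System.Diagnostics.Trace.WriteLine("RegisterAllAreas: " + sw.ElapsedMilliseconds + " ms");

    sw.Restart();
    RegisterRoutes(RouteTable.Routes);
    System.Diagnostics.Trace.WriteLine("RegisterRoutes: " + sw.ElapsedMilliseconds + " ms");

    sw.Restart();
    MappingHandler.RegisterMappings();
    System.Diagnostics.Trace.WriteLine("RegisterMappings: " + sw.ElapsedMilliseconds + " ms");

    // ...time the remaining steps (the admin-account check, etc.) the same way.
}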
You should look into your IIS settings.
IIS shuts down any site that has not received requests for a certain period of time.
So if there are no requests for a few minutes, the site will be unloaded from memory and will need to start up again on the next request.
That's the reason you see different behavior on your local and remote machines.
Sometimes hosts lock these settings down to keep the memory usage of the clients on one virtual machine low.
I can't recall the exact setting to change; someone else may be able to give a more precise answer.
I just had the same behavior with an ASP.NET MVC 3 app running on IIS 8 on a Windows Server 2012 server.
If you are sure of what you are doing you can configure IIS to keep your app pool alive.
The solution can be found here on G+
The most important thing is to configure the idle time-out setting for the application pool.
If you go to the application pool's Advanced Settings, you can see the Maximum Worker Processes property and set the value to 2 instead of 1.
I solved my problem in that way.

Considerations for ASP.NET application with long running synchronous requests

Under Windows Server 2008 64-bit, IIS 7.0, and .NET 4.0, I have an ASP.NET application (using the ASP.NET thread pool and synchronous request processing) whose requests are long-running (> 30 minutes). The web application has no pages; its main purpose is reading huge files (> 1 GB) in chunks (~5 MB) and transferring them to clients. Code:
// Sketch of the loop: fileStream is the FileStream opened for the huge file.
byte[] buffer = new byte[5 * 1024 * 1024]; // ~5 MB chunks
int bytesRead;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
    Response.Flush();
}
A single producer / single consumer pattern is implemented, so for each request there are two threads. I don't use the Task library here, but please let me know if it has an advantage over traditional thread creation in this scenario. An HTTP handler (.ashx) is used instead of an (.aspx) page. Under stress testing, CPU utilization is not a problem, but with a single worker process, after 210 concurrent clients, new connections encounter a time-out. This is solved by web gardening, since I don't use session state. I'm not sure if there's any big issue I've missed, but please let me know what other considerations should be taken into account, in your opinion.
For example, maybe IIS closes long-running TCP connections due to a connection timeout; since normal ASP.NET pages are processed in less than 5 minutes, perhaps I should increase that value.
I appreciate your Ideas.
Personally, I would be looking at a different mechanism for this type of processing. HTTP requests and web applications are NOT designed for this type of thing, and stability is going to be VERY hard to achieve; you take on a number of risks that could cause major issues when working with this type of model.
I would move that processing off to a backend process, so that you are OUTSIDE of the asp.net runtime, that way you have more control over start/shutdown, etc.
First, Never. NEVER. NEVER! do any processing that takes more than a few seconds in a thread pool thread. There are a limited number of them, and they're used by the system for many things. This is asking for trouble.
Second, while the handler is a good idea, you're a little vague on what you mean by "generate on the fly" Do you mean you are encrypting a file on the fly and this encryption can take 30 minutes? Or do you mean you're pulling data from a database and assembling a file? Or that the download takes 30 minutes to download?
Edit:
As I said, don't use a thread pool for anything long running. Create your own thread, or if you're using .NET 4 use a Task and specify it as long running.
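For example, a minimal sketch of the .NET 4 option (DoLongRunningWork is a hypothetical method standing in for your processing):
// LongRunning hints the scheduler to use a dedicated thread instead of
// tying up a thread-pool thread for the whole duration of the work.
Task.Factory.StartNew(() => DoLongRunningWork(), TaskCreationOptions.LongRunning);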
Long running processes should not be implemented this way. Pass this off to a service that you set up.
IF you do want to have a page hang for a client, consider interfacing from AJAX to something that does not block on IO threads - like node.js.
Push notifications to many clients is not something ASP.NET can handle due to thread usage, hence my node.js. If your load is low, you have other options.
Use web gardening for more stability in your application.
Turn off caching, since you don't have .aspx pages.
It's hard to advise more without a performance analysis. Use the built-in Visual Studio profiler and find the bottlenecks.
The Web 1.0 way of dealing with long running processes is to spawn them off on the server and return immediately. Have the spawned off service update a database with progress and pages on the site can query for progress.
The most common usage of this technique is getting a package delivery. You can't hold the HTTP connection open until my package shows up, so it just gives you a way to query for progress. The background process deals with orchestrating all of the steps it takes for getting the item, wrapping it up, getting it onto a UPS truck, etc. All along the way, each step is recorded in the database. Conceptually, it's the same.
Edit based on Question Edit: Just return a result page immediately, and generate the binary on the server in a spawned thread or process. Use Ajax to check to see if the file is ready and when it is, provide a link to it.
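A rough sketch of that pattern (all names are hypothetical; an in-memory dictionary stands in for the database suggested above, and as discussed elsewhere in this thread the work is lost if the AppDomain recycles):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web;

public class StartJobHandler : IHttpHandler
{
    // Job status keyed by job id (a database table in the answer's version).
    private static readonly ConcurrentDictionary<Guid, string> Progress =
        new ConcurrentDictionary<Guid, string>();

    public void ProcessRequest(HttpContext context)
    {
        var jobId = Guid.NewGuid();
        Progress[jobId] = "queued";

        // Spawn the long-running generation and return to the client immediately.
        Task.Factory.StartNew(() =>
        {
            Progress[jobId] = "running";
            GenerateBinary(jobId);          // hypothetical long-running step
            Progress[jobId] = "done";
        }, TaskCreationOptions.LongRunning);

        // The page then polls a status endpoint with this id via Ajax.
        context.Response.ContentType = "text/plain";
        context.Response.Write(jobId.ToString());
    }

    public bool IsReusable { get { return true; } }

    private static void GenerateBinary(Guid jobId) { /* generate and store the file */ }
}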

Why is calling a web service slower from a web page?

We have a DLL used as the middle layer between our website front end and our back end ticketing system. The method of insertion into the ticketing system is a bit complicated to explain, but the short version is that it's slow. The best case scenario I've gotten is a 9 second submission time.
The real problem though, is that I can only get that time through a Windows app, not through an ASP.NET web site. I've set up both a Windows test application and a web page for testing, and even though the code is copied between them the web page is consistently submitting in 17-20 seconds, while the windows app is getting 8-11 seconds.
What could be causing that?
EDIT: In response to a couple of the answers...
The call to the web service is taking the bulk of the time, but I have no control over this web service, as it's provided by the ticketing system vendor. I need to find out why the web service takes different amounts of time when it's being called from a different kind of application. The code is exactly the same in both cases; it runs a loop and then reports the times recorded.
The code is:
for (int i = 0; i < numIterations; i++)
{
    startTimes[i] = DateTime.Now;
    try
    {
        cvNum = Clearview.Submit(req, DateTime.Now, DateTime.Now, false);
    }
    catch (Exception ex)
    {
        exceptionCount++;
        lblResult.Text += @"<br />Exception Caught: " + ex.Message + @"<br />";
    }
    endTimes[i] = DateTime.Now;
}
It's the same loop in both cases, and I'm marking the time right before and after the call to the library, which does further processing and then calls the web service. But that processing should be consistent shouldn't it? I have traced during debugging and not seen any delay getting to the actual web service call...
EDIT again: Working with ANTS, in both cases 99.4% of the time is being spent just on the web service call. There appears to be no difference there... except that, when timed, the web page is taking longer than the Windows app.
Potentially the location of the web service in relation to the web server could be having an issue. Also, the page structure and other processing inside your web UI could be having an impact on how long it takes the application to process.
As mentioned logging items on both sides is a great idea, if that doesn't get you what you need, you might try a performance profiler such as Ants Profiler by Red Gate that can help identify the line, method, or class that is using the bulk of the time.
Are you running both on the same machine? Is the middle layer that you are calling located on a remote machine? The durations you mentioned vaguely feel like a DNS timeout issue, where opening a connection incurs the penalty of waiting for the first (down/misaddressed) DNS response to time out. Are you sure that whatever config file/variable points the DLL to the middle layer is the same in both invocations?
I second the suggestion to use Wireshark to see what is going on. You can at least satisfy yourself that the backend processing time is (should be, anyways) the same...
Pepper your application on both sides with logs - that will show you where the time is going. If that doesn't help, use Wireshark to trace the network activity.

Can I use threads to carry out long-running jobs on IIS?

In an ASP.Net application, the user clicks a button on the webpage and this then instantiates an object on the server through the event handler and calls a method on the object.
The method goes off to an external system to do stuff and this could take a while. So, what I would like to do is run that method call in another thread so I can return control to the user with "Your request has been submitted".
I am reasonably happy to do this as fire-and-forget, though it would be even nicer if the user could keep polling the object for status.
What I don't know is if IIS allows my thread to keep running, even if the user session expires.
Imagine, the user fires the event and we instantiate the object on the server and fire the method in a new thread. The user is happy with the "Your request has been submitted" message and closes his browser. Eventually, this user's session will time out on IIS, but the thread may still be running, doing work. Will IIS allow the thread to keep running, or will it kill it and dispose of the object once the user session expires?
EDIT: From the answers and comments, I understand that the best way to do this is to move the long-running processing outside of IIS. Apart from everything else, this deals with the appdomain recycling problem. In practice, I need to get version 1 off the ground in limited time and has to work inside an existing framework, so would like to avoid the service layer, hence the desire to just fire off the thread inside IIS. In practice, "long running" here will only be a few minutes and the concurrency on the website will be low so it should be okay. But, next version definitely will need splitting into a separate service layer.
You can accomplish what you want, but it is typically a bad idea. Several ASP.NET blog and CMS engines take this approach, because they want to be installable on a shared hosting system and not take a dependency on a windows service that needs to be installed. Typically they kick off a long running thread in Global.asax when the app starts, and have that thread process queued up tasks.
In addition to reducing resources available to IIS/ASP.NET to process requests, you also have issues with the thread being killed when the AppDomain is recycled, and then you have to deal with persistence of the task while it is in-flight, as well as starting the work back up when the AppDomain comes back up.
Keep in mind that in many cases the AppDomain is recycled automatically at a default interval, as well as if you update the web.config, etc.
If you can handle the persistence and transactional aspects of your thread being killed at any time, then you can get around the AppDomain recycling by having some external process that makes a request on your site at some interval - so that if the site is recycled you are guaranteed to have it start back up again automatically within X minutes.
Again, this is typically a bad idea.
EDIT: Here are some examples of this technique in action:
Community Server: Using Windows Services vs. Background Thread to Run Code at Scheduled Intervals
Creating a Background Thread When Website First Starts
EDIT (from the far distant future) - These days I would use Hangfire.
I disagree with the accepted answer.
Using a background thread (or a task, started with Task.Factory.StartNew) is fine in ASP.NET. As with all hosting environments, you may want to understand and cooperate with the facilities governing shutdown.
In ASP.NET, you can register work needing to stop gracefully on shutdown using the HostingEnvironment.RegisterObject method. See this article and the comments for a discussion.
(As Gerard points out in his comment, there's now also HostingEnvironment.QueueBackgroundWorkItem that calls down to RegisterObject to register a scheduler for the background item to work on. Overall the new method is nicer since it's task-based.)
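A minimal sketch of that newer API (requires .NET 4.5.2 or later; DoWork is a hypothetical method):
using System.Web.Hosting;

// ASP.NET tracks this work item and delays shutdown (within limits) until it
// completes; the token is signalled when the AppDomain starts shutting down.
HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
{
    DoWork(cancellationToken);   // hypothetical method that honors the token
});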
As for the general theme that you often hear of it being a bad idea, consider the alternative of deploying a windows service (or another kind of extra-process application):
No more trivial deployment with web deploy
Not deployable purely on Azure Websites
Depending on the nature of the background task, the processes will likely have to communicate. That means either some form of IPC or the service will have to access a common database.
Note also that some advanced scenarios might even need the background thread to be running in the same address space as the requests. I see the fact that ASP.NET can do this as a great advantage that has become possible through .NET.
You wouldn't want to use a thread from the IIS thread pool for this task because it would leave that thread unable to process future requests. You could look into Asynchronous Pages in ASP.NET 2.0, but that really wouldn't be the right answer, either. Instead, what it sounds like you would benefit from is looking into Microsoft Message Queuing. Essentially, you would add the task details to the queue and another background process (possibly a Windows Service) would be in charge of carrying out that task. But the bottom line is that the background process is completely isolated from IIS.
I would suggest using Hangfire for such requirements. It's a nice fire-and-forget engine that runs in the background, supports different architectures, and is reliable because it is backed by persistent storage.
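A minimal sketch of what enqueueing a job looks like (SendReport is a hypothetical method; storage configuration is omitted):
using Hangfire;

public class ReportScheduler
{
    // Fire-and-forget: Hangfire persists the job and retries it if it fails
    // or the AppDomain recycles mid-run. Returns the job id.
    public string Schedule(int reportId)
    {
        return BackgroundJob.Enqueue(() => SendReport(reportId));
    }

    // Hypothetical job method; must be public so Hangfire can invoke it later.
    public static void SendReport(int reportId) { /* ... */ }
}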
There is a good thread and sample code here: http://forums.asp.net/t/1534903.aspx?PageIndex=2
I've even toyed with the idea of calling a keep alive page on my website from the thread to help keep the app pool alive. Keep in mind if you are using this method that you need really good recovery handling, because the application could recycle at any time. As many have mentioned this is not the right approach if you have access to other service options, but for shared hosting this may be one of your only options.
To help keep the app pool alive, you could make a request to your own site while the thread is processing. This may help keep the app pool alive if your process runs a long time.
string tempStr = GetUrlPageSource("http://www.mysite.com/keepalive.aspx");

public static string GetUrlPageSource(string url)
{
    string returnString = "";
    try
    {
        Uri uri = new Uri(url);
        if (uri.Scheme == Uri.UriSchemeHttp)
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            CookieContainer cookieJar = new CookieContainer();
            req.CookieContainer = cookieJar;
            //set the request timeout to 60 seconds
            req.Timeout = 60000;
            req.UserAgent = "MyAgent";
            //we do not want to request a persistent connection
            req.KeepAlive = false;
            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream stream = resp.GetResponseStream();
            StreamReader sr = new StreamReader(stream);
            returnString = sr.ReadToEnd();
            sr.Close();
            stream.Close();
            resp.Close();
        }
    }
    catch
    {
        returnString = "";
    }
    return returnString;
}
We started down this path, and it actually worked OK when our app was on one server. When we wanted to scale out to multiple machines (or use multiple w3wp processes in a web garden), we had to re-evaluate and look at how to manage a work queue, error handling, retries, and the tricky problem of locking correctly to ensure only one server picks up the next item.
... we realized we are not in the business of writing background processing engines, so we looked for existing solutions and ended up using the awesome OSS project Hangfire.
Sergey Odinokov has created a real gem which is really easy to get started with, and allows you to swap out the backend of how work is persisted and queued. Hangfire uses background threads, but persists the jobs, handles retries and gives you visibility into the work queue. So hangfire jobs are robust and survive all the vagaries of appdomains being recycled etc.
Its basic setup uses SQL Server as the storage, but you can swap in Redis or MSMQ when it's time to scale up. It also has an excellent UI for visualizing all the jobs and their status, plus allowing you to re-queue jobs.
My point is that while it's entirely possible to do what you want in a background thread, there is a lot of work involved in making it scalable and robust. It's fine for simple workloads, but when things get more complex I much prefer to use a purpose-built library rather than go through this effort.
For some more perspective on options available check out Scott Hanselman's blog which covers off a few options for handling background jobs in asp.net. (He gave hangfire a glowing review)
Also, as referenced by John, it's worth reading Phil Haack's blog on why the approach is problematic and how to gracefully stop work on the thread when the AppDomain is unloaded.
Can you create a windows service to do that task? Then use .NET remoting from the Web Server to call the Windows Service to do the action? If that is the case that is what I would do.
This would eliminate the need to rely on IIS, and limit some of its processing power.
If not then I would force the user to sit there while the process is done. That way you ensure it is completed and not killed by IIS.
There does seem to be one supported way of hosting long-running work in IIS. Workflow Services seem designed for this, especially in conjunction with Windows Server AppFabric. The design allows for application pool recycling by supporting automatic persistence and resumption of the long-running work.
You may run tasks in the background and they will complete even after the request ends. Don't let an uncaught exception be thrown. Normally you want to always throw your exceptions. If an exception is thrown in a new thread then it will crash the IIS worker process - w3wp.exe, because you are no longer in the request's context. That's also then going to kill any other background tasks you have running in addition to in-process memory backed sessions if you are using them. This would be hard to diagnose, which is why the practice is discouraged.
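A minimal sketch of that precaution (DoBackgroundWork and LogError are hypothetical):
using System;
using System.Threading;

var worker = new Thread(() =>
{
    try
    {
        DoBackgroundWork();   // hypothetical long-running work
    }
    catch (Exception ex)
    {
        // Log and swallow: an unhandled exception on a background thread
        // would otherwise take down the whole w3wp.exe worker process.
        LogError(ex);
    }
});
worker.IsBackground = true;
worker.Start();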
Just create a surrogate process to run the async tasks; it doesn't have to be a Windows service (although that is the more optimal approach in most cases). MSMQ is overkill.
