ASP.NET App Pool Overlapped Recycling Timing?

As best I can tell, when a worker process recycles:
a) a new one spins up before the old one shuts down
b) the old one shuts down once all the active requests it's servicing complete
Is the above accurate?
If so: I have data that I store in SQL when Application_End() fires in Global.asax, and I pull this data back in when Application_Start() fires.
The problem is that, based on my testing, the new worker process fires Application_Start() before my old worker process gets a chance to complete its Application_End().
What are best practices for handling this situation?
cheers in advance
edit: I just noticed a setting on IIS 7, 'Disable Overlapped Recycle' - I'm guessing this is the best route

Your description of overlapped recycling is accurate, yes (1). There is a setting for disabling it, but overlapping is intended to prevent HTTP errors during a recycle, and you would be re-introducing those. App pool recycles are a normal occurrence for managed apps (they stem from, among other things, a CLR limitation that prevents unloading assemblies in the same memory space), and you must design for them.
Your technique would be difficult to manage in a web-farm or web-garden scenario.
I think a better design would be to rely on out-of-process storage for the data (using distributed cache products like ScaleOut, AppFabric, and the like) so that all app pools have the same view of the cached data.
(1) -
http://mvolo.com/blogs/serverside/archive/2008/02/25/Starting_2C00_-stopping-and-recycling-IIS-7.0-Web-sites-and-application-pools.aspx
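If a redesign is an option, one way to sidestep the Application_End/Application_Start race entirely is to write the data through to the external store as it changes, rather than only at shutdown. A minimal sketch of the idea (the interface and its members are hypothetical):

// Hypothetical abstraction: every worker process reads and writes the
// same external store, so an overlapped recycle can't lose data and a
// web farm/garden sees one consistent view.
public interface IStateStore
{
    void Save(string key, string value); // write-through on every change
    string Load(string key);             // backed by SQL or a distributed cache
}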

Related

ISessionFactory recreates after app pool recycles

My shared hosting provider set up IIS to recycle the app pool after 3 minutes of idle time.
So my session factory is often recreated (at application startup). As I have about 70-100 entities, it takes about 2-5 seconds to construct the factory, so a cold start of my application is rather long. I don't have access to the IIS settings.
You can offset a lot of the cost of setting up your factory by generating your proxies at build time instead of runtime. This article explains the steps.
Being realistic, the simplest change is to ask that the app-pool isn't recycled so frequently (since this is an expensive operation for your application). I'm sure they've set the timeout very low as a "performance" setting, but really this is generating work and slowing things down.
You might not have access to the IIS settings directly, but this shouldn't stop you from contacting your supplier's technical support and getting it resolved.
If you are in a full-trust environment (doubtful, but the provider may be willing to work with you on this), you can try serializing your configuration so it doesn't need to be rebuilt each time. Merging all your entity mappings into a single XML doc can help too (just do this as a build step so it's not a nightmare to work with the mappings).
More info here: http://nhibernate.info/blog/2009/03/13/an-improvement-on-sessionfactory-initialization.html
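For illustration, a minimal sketch of the serialized-configuration approach (the cache path is an assumption, and you would need to invalidate the file whenever your mappings change, ideally as a build step):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using NHibernate.Cfg;

public static class ConfigurationCache
{
    public static Configuration Load(string cachePath)
    {
        var formatter = new BinaryFormatter();

        // Deserializing a cached Configuration is much faster than
        // re-parsing all of the mapping XML on every cold start.
        if (File.Exists(cachePath))
        {
            using (var stream = File.OpenRead(cachePath))
                return (Configuration)formatter.Deserialize(stream);
        }

        var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
        using (var stream = File.Create(cachePath))
            formatter.Serialize(stream, cfg);
        return cfg;
    }
}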
Have you tried to stop your site from being idle in the first place? I use Uptime Robot, which is free and pings your site every 5 minutes. The benefit of this service is that it only requests the headers of the page you set up as a monitor, and therefore does not affect logging such as Google Analytics.
That said, you will need to test when your app does indeed recycle to see whether Uptime Robot works with your shared hosting provider. The best way is to log every time the session factory is re-built.
Not much you can do; an app pool recycle shuts down your app...
I guess you could try to fool the recycler by having the application do something every 2 minutes 45 seconds, just under the 3-minute idle timeout.

ASP.Net -- monitors/lock or mutex

I have an ASP.NET (C#) application that has a portion of code that modifies a globally accessible resource (like a web.config file). When modifying the resource, to prevent race conditions, only one user is allowed at a time, so I lock the code using a monitor:
lock (Global.globallyAccessibleStaticObject)
{
    // ..modify resource
    // ..save resource
}
I was satisfied with this locking approach, but then I thought: what if this isn't enough? Should I use a mutex instead? I know a mutex is useful for inter-process locking (across many processes and applications), and thus slower, but given the nature of a deployed ASP.NET page (multiple requests at once across multiple app domains), is this necessary?
The answer, it seems, would depend on how ASP.NET pages are handled on the server side. I have done research regarding the HTTP pipeline, app domains, thread pooling, etc., but I remain confused as to whether it is necessary to employ inter-process locking for my synchronization, or whether intra-process locking is sufficient for a web app.
Note: I don't want to get caught up in the specific task, because I want this question to remain general; it can be relevant in many (multi-threading) scenarios. Furthermore, I know there are other ways to accomplish these tasks (async handlers/pages, web services, etc.) that I don't care about right now.
If your application runs in one app pool configured with a single worker process (the default; a web garden would mean several), then it is running in one physical w3wp.exe process, so monitors/lock should be sufficient for guarding the shared resource. With that strategy, you only need to protect across threads running in the same process.
We encountered a situation at work where we had an IIS application configured to run in a single AppDomain, but lock was not sufficient to protect access to a resource.
The reason we think this was happening is that IIS was recycling the AppDomain before the lock was released and kicking off a new AppDomain, so we were getting conflicts.
Changing to use a Mutex has resolved this for us (so far).
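For reference, a cross-process lock with a named mutex looks roughly like this (a minimal sketch; the mutex name is an arbitrary example):

using System.Threading;

public class ResourceWriter
{
    // The "Global\" prefix makes the mutex visible to every process and
    // session on the machine.
    private static readonly Mutex ResourceMutex =
        new Mutex(false, @"Global\MyApp.SharedResourceLock");

    public void ModifyResource()
    {
        ResourceMutex.WaitOne();
        try
        {
            // ..modify resource
            // ..save resource
        }
        finally
        {
            ResourceMutex.ReleaseMutex();
        }
    }
}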

Re-enable ASP.NET session that caused IIS hang

I'm trying to implement some fail safes on a client's web server which is running two of their most important sites (ASP.NET on IIS7). I'm going to set up application pool limiting so that if any w3wp process uses 90%+ CPU for longer than a minute then it gets killed (producing a temporary 503 Service Unavailable message to any visitors), and based on my local testing will be restarted within a minute - a much better solution than having one CPU-hogging process taking down the whole server for any length of time.
This seems to work, however during my fiddling on my local IIS7 instance I've noticed that if a request calls my "Kill.aspx", even when the site comes back up IIS will not serve the session that caused it to hang. I can only restart the test site from a different session - but as soon as I clear my cookies on the "killer" browser I can get to the site again.
So, whatever malicious behaviour IIS is trying to curb with this would not work against an even slightly determined opponent. In most cases, if the excrement does hit the fan, it will be a coding/configuration error and not the fault of the user who happened to request a page at that time.
Therefore, I'd like to turn this feature off as the theoretical user would have no idea that they need to clear their cookies before they can access the site again. I would really appreciate any ideas on how this might be possible.
You should be using ASP.NET Session StateServer instead of InProc (see MSDN for details). That way, your session will run in a different process and won't be affected by an IIS crash.
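For illustration, the web.config change is roughly this (a minimal sketch; 42424 is the ASP.NET State Service's default port, and the service must be running on the target machine):

<!-- Move session state out of the worker process. -->
<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=127.0.0.1:42424"
                timeout="20" />
</system.web>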
Turn what "feature" off? If the worker process is reset (and your using in-proc session) then the session is blown away on a reset.
You might want to investigate moving your session storage to a state server or some other out of process scenario.
Also, you might want to set the application pool to use several worker processes (aka: web garden) this way if one process is killed the others continue serving content.
Next, as another option you might want to set up multiple web servers and load balance them.
Finally, you might want to profile the app to see exactly how they are causing it to spin into nothingness. My guess is that there are a number of code issues you are simply covering up with this idea.

What are some best practices for managing background threads in IIS?

I have written an HttpModule that spawns a background thread. I'm using the thread like a Scheduled Task that runs in-process, which is really handy.
What are best practices for keeping track of this thread? I've never done this before, and I'm a little confused about some aspects of it:
How do I know if the thread is still running? I see it do its job, but is there another way to know if it's still alive? I downloaded ProcMon, but w3wp.exe spawns a boatload of threads, so I had no idea which was my thread. I named it, but that didn't help.
How do I "catch" the thread if it dies? Is there some kind of Dispose method where I can have it write to the EventLog or something if it fails? A "dying declaration" or something?
How do I actively stop the thread? If I want it to stop running this background process, how do I kill it without having to bounce IIS?
Is there any way to start it again, independently of the HttpModule? (I'm guessing the answer to this is no...)
Edit: Just to clarify, the intention is that my thread never goes away. It runs a function, then goes to sleep for a couple minutes, then wakes up and runs the function again. It's not like it's doing one task then ending.
When Jeff built Stack Overflow, he had a similar issue.
His solution was to use cache expiration. You put something in the cache, and when it expires an event is fired on a non-user-facing thread. In the event handler for the expiration, you stick in some code to re-add the item to the cache and do whatever housekeeping work needs to be done for your application.
Using this technique, your subquestions are easily answered:
1. You check the item is still in the cache.
2. If the item is not in the cache, re-add it.
3. Remove the cache item from the cache.
4. Add the item back to the cache.
You could make a small management page to configure these options.
This gives you a nice way to roughly time housekeeping processes in your web application. It doesn't require a separate Windows service, which is a big win.
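A minimal sketch of the cache-expiration trick described above (the key, the interval, and the DoHousekeeping method are assumptions):

using System;
using System.Web;
using System.Web.Caching;

public static class PseudoScheduler
{
    private const string Key = "PseudoSchedulerItem";

    public static void Start()
    {
        // NotRemovable keeps the item from being evicted under memory
        // pressure; it only leaves the cache when it expires.
        HttpRuntime.Cache.Insert(Key, DateTime.UtcNow, null,
            DateTime.UtcNow.AddMinutes(2), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, OnCacheRemoved);
    }

    // Fires on a non-user-facing thread when the item expires.
    private static void OnCacheRemoved(string key, object value,
                                       CacheItemRemovedReason reason)
    {
        DoHousekeeping(); // hypothetical periodic work
        Start();          // re-add the item to schedule the next run
    }

    private static void DoHousekeeping() { /* ... */ }
}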
From my experience, you can get this to work "good enough", but not perfectly. I would recommend implementing your repeating tasks in a Windows service. Depending on what the tasks do, the Windows service might not even have to talk to the web application (or vice versa), e.g. if both work with the same database. Otherwise you could still use, for example, WCF for communication.
The big advantage is that the Windows service starts with the OS, you can easily configure, start, and stop it using the control panel, you have built-in monitoring via the Windows Event Log, you can update the background service and the web application independently, etc.
If this is not an option, e.g. because you are in a shared hosting environment, I would recommend the following:
Start your background thread in Application_Start (Global.asax), and store the thread reference in a static variable.
Wrap every method called on your background thread in try/catch, because since .NET 2.0 every unhandled exception on a background thread will shut down the application. (It will be restarted on the next request, but that slows the next request down, kills all current sessions and caches, and of course no timer will be active until then.)
On every request (implemented as an HttpModule, or in Global.asax again), check the Thread instance in the static variable (is it still != null, is the thread alive and running, etc.). If not, call the restart code. Use locking in the restart path to make sure the thread will not be created twice at the same time; a sketch of the pattern follows below.
Even then you can't be sure that your background thread is always running, if you don't have regular traffic around the clock. Also keep in mind that in a shared hosting environment it is very common to shut down application pools if there is no activity for a few hours. You could try to improve this by setting up a scheduled task on a client machine on your own network doing a light-weight HTTP request on your application every few minutes, just to ensure that your application is always running.
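A minimal sketch of that restart pattern (all names are assumptions; call EnsureRunning from Application_Start and again on each request, e.g. from an HttpModule):

using System;
using System.Threading;

public static class BackgroundTask
{
    private static Thread _worker;
    private static readonly object Sync = new object();

    public static void EnsureRunning()
    {
        // Cheap check first; only lock when a restart looks necessary.
        if (_worker != null && _worker.IsAlive) return;
        lock (Sync)
        {
            if (_worker != null && _worker.IsAlive) return;
            _worker = new Thread(Loop) { IsBackground = true, Name = "BackgroundTask" };
            _worker.Start();
        }
    }

    private static void Loop()
    {
        while (true)
        {
            // Never let an exception escape: since .NET 2.0 an unhandled
            // exception on a background thread tears down the application.
            try { DoWork(); }
            catch (Exception) { /* log it */ }
            Thread.Sleep(TimeSpan.FromMinutes(2));
        }
    }

    private static void DoWork() { /* hypothetical periodic work */ }
}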

Can I use threads to carry out long-running jobs on IIS?

In an ASP.Net application, the user clicks a button on the webpage and this then instantiates an object on the server through the event handler and calls a method on the object.
The method goes off to an external system to do stuff and this could take a while. So, what I would like to do is run that method call in another thread so I can return control to the user with "Your request has been submitted".
I am reasonably happy to do this as fire-and-forget, though it would be even nicer if the user could keep polling the object for status.
What I don't know is if IIS allows my thread to keep running, even if the user session expires.
Imagine: the user fires the event, we instantiate the object on the server, and we fire the method in a new thread. The user is happy with the "Your request has been submitted" message and closes his browser. Eventually, this user's session will time out on IIS, but the thread may still be running, doing work. Will IIS allow the thread to keep running, or will it kill it and dispose of the object once the user session expires?
EDIT: From the answers and comments, I understand that the best way to do this is to move the long-running processing outside of IIS. Apart from everything else, this deals with the appdomain recycling problem. In practice, I need to get version 1 off the ground in limited time and has to work inside an existing framework, so would like to avoid the service layer, hence the desire to just fire off the thread inside IIS. In practice, "long running" here will only be a few minutes and the concurrency on the website will be low so it should be okay. But, next version definitely will need splitting into a separate service layer.
You can accomplish what you want, but it is typically a bad idea. Several ASP.NET blog and CMS engines take this approach, because they want to be installable on a shared hosting system and not take a dependency on a windows service that needs to be installed. Typically they kick off a long running thread in Global.asax when the app starts, and have that thread process queued up tasks.
In addition to reducing resources available to IIS/ASP.NET to process requests, you also have issues with the thread being killed when the AppDomain is recycled, and then you have to deal with persistence of the task while it is in-flight, as well as starting the work back up when the AppDomain comes back up.
Keep in mind that in many cases the AppDomain is recycled automatically at a default interval, as well as if you update the web.config, etc.
If you can handle the persistence and transactional aspects of your thread being killed at any time, then you can get around the AppDomain recycling by having some external process that makes a request on your site at some interval - so that if the site is recycled you are guaranteed to have it start back up again automatically within X minutes.
Again, this is typically a bad idea.
EDIT: Here are some examples of this technique in action:
Community Server: Using Windows Services vs. Background Thread to Run Code at Scheduled Intervals
Creating a Background Thread When Website First Starts
EDIT (from the far distant future) - These days I would use Hangfire.
I disagree with the accepted answer.
Using a background thread (or a task, started with Task.Factory.StartNew) is fine in ASP.NET. As with all hosting environments, you may want to understand and cooperate with the facilities governing shutdown.
In ASP.NET, you can register work needing to stop gracefully on shutdown using the HostingEnvironment.RegisterObject method. See this article and the comments for a discussion.
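A minimal sketch of the RegisterObject pattern (how you signal your own work loop to stop is left as an assumption):

using System.Web.Hosting;

public class BackgroundWorker : IRegisteredObject
{
    public BackgroundWorker()
    {
        HostingEnvironment.RegisterObject(this);
    }

    // ASP.NET calls Stop when the application is shutting down: first
    // with immediate == false, then again with immediate == true if we
    // still haven't unregistered.
    public void Stop(bool immediate)
    {
        RequestStop(); // hypothetical: tell the work loop to finish up
        HostingEnvironment.UnregisterObject(this);
    }

    private void RequestStop() { /* e.g. cancel a CancellationToken */ }
}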
(As Gerard points out in his comment, there's now also HostingEnvironment.QueueBackgroundWorkItem that calls down to RegisterObject to register a scheduler for the background item to work on. Overall the new method is nicer since it's task-based.)
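For illustration, QueueBackgroundWorkItem (available from .NET 4.5.2) looks roughly like this; DoWorkAsync is a hypothetical work method:

using System.Web.Hosting;

HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    // The token is signaled when ASP.NET begins shutting down; the
    // runtime delays shutdown (within limits) while queued work runs.
    await DoWorkAsync(cancellationToken);
});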
As for the general theme that you often hear of it being a bad idea, consider the alternative of deploying a windows service (or another kind of extra-process application):
No more trivial deployment with web deploy
Not deployable purely on Azure Websites
Depending on the nature of the background task, the processes will likely have to communicate. That means either some form of IPC or the service will have to access a common database.
Note also that some advanced scenarios might even need the background thread to be running in the same address space as the requests. I see the fact that ASP.NET can do this as a great advantage that has become possible through .NET.
You wouldn't want to use a thread from the IIS thread pool for this task because it would leave that thread unable to process future requests. You could look into Asynchronous Pages in ASP.NET 2.0, but that really wouldn't be the right answer, either. Instead, what it sounds like you would benefit from is looking into Microsoft Message Queuing. Essentially, you would add the task details to the queue and another background process (possibly a Windows Service) would be in charge of carrying out that task. But the bottom line is that the background process is completely isolated from IIS.
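For illustration, enqueuing a task with System.Messaging looks roughly like this (a minimal sketch; the queue path is an assumption and the queue must already exist, e.g. created with MessageQueue.Create):

using System.Messaging;

using (var queue = new MessageQueue(@".\Private$\longRunningTasks"))
{
    // The body can be any serializable object; a string id keeps it
    // simple. A surrogate process (e.g. a Windows service) reads the
    // same queue and performs the actual work.
    queue.Send("task-42", "New long-running task");
}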
I would suggest using Hangfire for such requirements. It's a nice fire-and-forget engine that runs in the background, supports different architectures, and is reliable because it is backed by persistent storage.
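A minimal sketch of Hangfire's fire-and-forget API (the SQL Server storage setup and connection string name are assumptions):

// One-time storage setup, e.g. at startup (Hangfire.SqlServer package).
GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireDb");

// The job is persisted first and then executed by a Hangfire server,
// so it survives app pool recycles.
BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget!"));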
There is a good thread and sample code here: http://forums.asp.net/t/1534903.aspx?PageIndex=2
I've even toyed with the idea of calling a keep-alive page on my website from the thread to help keep the app pool alive. Keep in mind, if you are using this method, that you need really good recovery handling, because the application could recycle at any time. As many have mentioned, this is not the right approach if you have access to other service options, but for shared hosting it may be one of your only options.
To help keep the app pool alive, you could make a request to your own site while the thread is processing. This may help keep the app pool alive if your process runs a long time.
string tempStr = GetUrlPageSource("http://www.mysite.com/keepalive.aspx");

public static string GetUrlPageSource(string url)
{
    string returnString = "";
    try
    {
        Uri uri = new Uri(url);
        if (uri.Scheme == Uri.UriSchemeHttp)
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            CookieContainer cookieJar = new CookieContainer();
            req.CookieContainer = cookieJar;

            // Set the request timeout to 60 seconds.
            req.Timeout = 60000;
            req.UserAgent = "MyAgent";

            // We do not want to request a persistent connection.
            req.KeepAlive = false;

            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream stream = resp.GetResponseStream();
            StreamReader sr = new StreamReader(stream);
            returnString = sr.ReadToEnd();
            sr.Close();
            stream.Close();
            resp.Close();
        }
    }
    catch
    {
        returnString = "";
    }
    return returnString;
}
We started down this path, and it actually worked OK when our app was on one server. When we wanted to scale out to multiple machines (or use multiple w3wp processes in a web garden) we had to re-evaluate and look at how to manage a work queue, error handling, retries, and the tricky problem of locking correctly to ensure only one server picks up the next item.
...we realized we are not in the business of writing background processing engines, so we looked for existing solutions and ended up using the awesome OSS project Hangfire.
Sergey Odinokov has created a real gem which is really easy to get started with, and which allows you to swap out the backend for how work is persisted and queued. Hangfire uses background threads, but persists the jobs, handles retries, and gives you visibility into the work queue. So Hangfire jobs are robust and survive all the vagaries of appdomains being recycled, etc.
Its basic setup uses SQL Server as the storage, but you can swap that out for Redis or MSMQ when it's time to scale up. It also has an excellent UI for visualizing all the jobs and their status, plus allowing you to re-queue jobs.
My point is that while it's entirely possible to do what you want in a background thread, there is a lot of work involved in making it scalable and robust. It's fine for simple workloads, but when things get more complex I much prefer to use a purpose-built library rather than go through this effort.
For some more perspective on the options available, check out Scott Hanselman's blog, which covers a few options for handling background jobs in ASP.NET. (He gave Hangfire a glowing review.)
Also, as referenced by John, it's worth reading Phil Haack's blog on why the approach is problematic and how to gracefully stop work on the thread when the appdomain is unloaded.
Can you create a Windows service to do that task, and then use .NET Remoting from the web server to call the Windows service to do the action? If that is possible, that is what I would do.
This would eliminate the need to rely on IIS, and avoid tying up some of its processing power.
If not, then I would force the user to sit there while the process runs. That way you ensure it completes and is not killed by IIS.
There does seem to be one supported way of hosting long-running work in IIS. Workflow Services seem designed for this, especially in conjunction with Windows Server AppFabric. The design allows for application pool recycling by supporting automatic persistence and resumption of the long-running work.
You may run tasks in the background and they will complete even after the request ends, but don't let an uncaught exception escape the thread. Normally you want to let your exceptions propagate, but if an exception is thrown on a new thread it will crash the IIS worker process (w3wp.exe), because you are no longer in a request's context. That will also kill any other background tasks you have running, in addition to in-process, memory-backed sessions if you are using them. This would be hard to diagnose, which is why the practice is discouraged.
Just create a surrogate process to run the async tasks; it doesn't have to be a Windows service, although that is the more optimal approach in most cases. MSMQ is way overkill.
