Why is calling a web service slower from a web page? - asp.net

We have a DLL used as the middle layer between our website front end and our back end ticketing system. The method of insertion into the ticketing system is a bit complicated to explain, but the short version is that it's slow. The best case scenario I've gotten is a 9 second submission time.
The real problem, though, is that I can only get that time through a Windows app, not through an ASP.NET web site. I've set up both a Windows test application and a web page for testing, and even though the code is copied between them, the web page is consistently submitting in 17-20 seconds, while the Windows app is getting 8-11 seconds.
What could be causing that?
EDIT: In response to a couple of the answers...
The call to the web service is taking the bulk of the time, but I have no control over this web service as it's provided by the ticketing system vendor. I need to find out why the web service takes different amounts of time when it's called from a different kind of application. The code is exactly the same in both cases: it runs a loop and then reports the times recorded.
The code is:
for (int i = 0; i < numIterations; i++)
{
    startTimes[i] = DateTime.Now;
    try
    {
        cvNum = Clearview.Submit(req, DateTime.Now, DateTime.Now, false);
    }
    catch (Exception ex)
    {
        exceptionCount++;
        lblResult.Text += "<br />Exception Caught: " + ex.Message + "<br />";
    }
    endTimes[i] = DateTime.Now;
}
It's the same loop in both cases, and I'm marking the time right before and after the call to the library, which does further processing and then calls the web service. But that processing should be consistent, shouldn't it? I have traced during debugging and not seen any delay getting to the actual web service call...
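As a side note, DateTime.Now only has coarse (roughly 15 ms) resolution; a Stopwatch gives more reliable per-call timings. A rough sketch of the same loop, reusing the Clearview, req and numIterations objects from the code above:
// Sketch only: the same submission loop, timed with Stopwatch instead of DateTime.Now.
var timings = new List<TimeSpan>();

for (int i = 0; i < numIterations; i++)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    try
    {
        cvNum = Clearview.Submit(req, DateTime.Now, DateTime.Now, false);
    }
    catch (Exception ex)
    {
        exceptionCount++;
        lblResult.Text += "<br />Exception Caught: " + ex.Message + "<br />";
    }
    sw.Stop();
    timings.Add(sw.Elapsed); // elapsed time for this single submission
}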
EDIT Again: Working with ANTS, in both cases 99.4% of the time is being spent on the web service call alone. There appears to be no difference there... except that, when timed, the web page is taking longer than the Windows app.

Potentially the location of the web service in relation to the web server could be an issue. The page structure and other processing inside your web UI could also have an impact on how long the application takes to process the request.
As mentioned, logging on both sides is a great idea. If that doesn't get you what you need, you might try a performance profiler such as ANTS Profiler by Red Gate, which can help identify the line, method, or class that is using the bulk of the time.

Are you running both on the same machine? Is the middle layer that you are calling located on a remote machine? The durations you mentioned vaguely feel like a DNS timeout issue, where opening a connection incurs the penalty of waiting for the first (down or misaddressed) DNS response to time out. Are you sure that whatever config file/variable points the DLL to the middle layer is the same in both invocations?
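To quickly rule the DNS theory in or out, you could time name resolution for the web service host from both the web server and the machine running the Windows app. A rough sketch (the host name is a placeholder for the vendor's service host):
using System;
using System.Diagnostics;
using System.Net;

class DnsCheck
{
    static void Main()
    {
        // Placeholder: replace with the host name of the vendor's web service.
        const string host = "ticketing.example.com";

        var sw = Stopwatch.StartNew();
        IPHostEntry entry = Dns.GetHostEntry(host); // synchronous DNS lookup
        sw.Stop();

        Console.WriteLine("Resolved {0} to {1} address(es) in {2} ms",
            host, entry.AddressList.Length, sw.ElapsedMilliseconds);
    }
}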
I second the suggestion to use Wireshark to see what is going on. You can at least satisfy yourself that the backend processing time is (should be, anyways) the same...

Pepper your application on both sides with logs - that will show you where the time is going. If that doesn't help, use Wireshark to trace the network activity.

Related

How do I get the web site's root URL from a Quartz.NET job?

I am implementing a scheduler with Quartz.NET in an ASP.NET web project. The web site is purely there to house some WCF services that form the back-end of a WPF desktop application. Therefore, the web site will only be active when the users of the desktop application are active. This means that the web site application is likely to close down during the night. However, I want the scheduler to keep running at all times.
Note that using a Windows service isn't an option due to our hosting, even though that would seem to be the best option overall. I am stuck with something like Quartz.NET (as far as I know).
Whilst reading around about this, I have seen a lot of suggestions to use a scheduled job that calls a page on the site every 19 minutes, to avoid the 20 minute time out. First question is, is this the best way to do it?
If it is, then I have a second question. All of the examples I have seen show a hard-coded URL, which I want to avoid. Ideally, I want the URL to be picked up in code, so that when running in Visual Studio, it will pick up and call the localhost URL, and when deployed, it will pick up the live one.
I know I can put the URL in the web.config file, and use a transform to change this to the live one when deploying, but I was wondering if there was a better way to do it.
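For reference, the web.config approach mentioned above is just a one-line lookup; a minimal sketch, with a made-up appSettings key:
// Hypothetical appSettings entry in web.config (a transform would swap in the live URL on deploy):
//   <add key="SiteRootUrl" value="http://localhost:12345/" />
string rootUrl = System.Configuration.ConfigurationManager.AppSettings["SiteRootUrl"];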
This guy seems to have thought it out well:
https://www.mikesdotnetting.com/article/254/scheduled-tasks-in-asp-net-with-quartz-net
I will paste the header paragraph here, in case the link above dies. Then future readers have some piece of concrete info to search on.
A perennial question on the ASP.NET forums concerns how to schedule regular tasks as part of a web application. Typically, the requirement is to send emails once every 24 hours at a particular time each day, but it could actually be anything from tweeting on a schedule to performing maintenance tasks. Equally typically, half a dozen members on the forum dive in with recommendations to install Windows Services or schedule batch files with the Task Scheduler - regardless of the fact that most web site owners are not afforded such privileges as part of their shared hosting plan.
I got the above URL from the link below.
Since you cannot write a Windows service, look at this URL. It has some options.
https://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
It occurred to me that the answer is quite simple. The job itself doesn't know anything about the web context, but the containing application does.
All I needed to do was add a static string property to the job that pings the web site...
public static string WebSiteRootUrl = "";
...then in the Global.asax.cs of the ASP.NET web project, I did the following...
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (KeepWebSiteAliveJob.WebSiteRootUrl == "")
    {
        string uri = HttpContext.Current.Request.Url.AbsoluteUri;
        KeepWebSiteAliveJob.WebSiteRootUrl = uri.Substring(0, uri.IndexOf("/", 8) + 1);
    }
}
As the Request object isn't available when the application starts, I had to do this in Application_BeginRequest, which means that a) it will be fired every time a request is made and b) the URL returned by HttpContext.Current.Request.Url.AbsoluteUri will include the full path to the requested resource, not just the desired root URL.
To get around the first issue, I only set KeepWebSiteAliveJob.WebSiteRootUrl if it hasn't been set yet (i.e. it is an empty string). This will be on the very first request. I don't think this would actually be an issue without the check, as it's so quick that it is unlikely to cause any problems, but it was an easy check, so worth including just in case.
As for the second issue, I took advantage of the fact that a URL will contain two forward slashes between the scheme and the domain, and then a third after the domain and (optional) port. As we are only using http or https for the scheme, the magic number 8 starts the IndexOf() search after the double slashes, meaning that we get back the root URL.
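An equivalent sketch that avoids the magic number altogether, using Uri.GetLeftPart (which returns just the scheme and authority), would be:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (KeepWebSiteAliveJob.WebSiteRootUrl == "")
    {
        // GetLeftPart(UriPartial.Authority) returns e.g. "http://localhost:12345"
        // (scheme + host + port, no path), so no substring arithmetic is needed.
        KeepWebSiteAliveJob.WebSiteRootUrl =
            HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority) + "/";
    }
}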
Hope this helps someone.

Fixing slow initial load for IIS

IIS has an annoying feature for low traffic websites where it recycles unused worker processes, causing the first user to the site after some time to get an extremely long delay (30+ seconds).
I've been looking for a solution to the problem and I've found these potential solutions.
A. Use the Application Initialization plugin
B. Use Auto-Start with .NET 4
C. Disable the idle-timeout (under IIS Reset)
D. Precompile the site
I'm wondering which of these is preferred, and more importantly, why are there so many solutions to the same problem? (My guess is they aren't, and I'm just not understanding something correctly).
Edit
Performing C seems to be enough to keep my site warmed up, but I've discovered that the real root of my site's slowness has to do with Entity Framework, which I can't seem to figure out why it's going cold. See this question, which has since been answered!
I eventually just had to make a warm up script to hit my site occasionally to make sure it stayed speedy.
Options A, B and D seem to be in the same category since they only influence the initial start time: they warm up the website by, for example, compiling it and loading libraries into memory.
Using C, disabling the idle timeout, should be enough to ensure that subsequent requests to the server are served fast (restarting the app pool takes quite some time, on the order of seconds).
As far as I know, the timeout exists to save memory that other websites running in parallel on that machine might need. The price is that one-time slow load.
Besides the fact that the app pool gets shut down in case of user inactivity, the app pool will also recycle by default every 1740 minutes (29 hours).
From technet:
Internet Information Services (IIS) application pools can be periodically recycled to avoid unstable states that can lead to application crashes, hangs, or memory leaks.
As long as app pool recycling is left on, it should be enough.
But if you really want top notch performance for most components, you should also use something like the Application Initialization Module you mentioned.
Web Hosting Challenge
You have to remember that none of the machine configuration options are available if you are hosted on a shared server as many of us (smaller companies and individuals) are.
ASP.NET MVC Overhead
My site takes at least 30 seconds to respond when it hasn't been hit in over 20 minutes (and the web app has been stopped). It is terrible.
Another Way to Test Performance
There's another way to test if it is your ASP.NET MVC start up or something else. Drop a normal HTML page on your site where you can hit it directly.
If the problem is related to ASP.NET MVC start up then the HTML page will render almost immediately even when the web app hasn't been started.
That's how I first recognized that the problem was in the ASP.NET MVC startup.
I loaded an HTML page at any time and it would load blazing fast. Then, after hitting that HTML page I'd hit one of my ASP.NET MVC URLs and I'd get the Chrome message "Waiting for raddev.us..."
Another Test With Helpful Script
After that I wrote a LINQPad (check out http://linqpad.net for more) script that would hit my web site every 8 minutes (less than the time for the app to unload -- which should be 20 minutes) and I let it run for hours.
While the script was running I hit my web site and every time my site came up blazingly fast. This gives me a good idea that most likely the slowness I was experiencing was because of ASP.NET MVC startup times.
Get LinqPad and you can run the following script -- just change the URL to your own and let it run and you can test this easily.
Good luck.
NOTE: In LinqPad you'll need to press F4 and add a reference to System.Net to add the library which will retrieve your page.
ALSO : make sure you change the String URL variable to point at a URL that will load a route from your ASP.NET MVC site so the engine will run.
System.Timers.Timer webKeepAlive = new System.Timers.Timer();
Int64 counter = 0;

void Main()
{
    webKeepAlive.Interval = 5000;
    webKeepAlive.Elapsed += WebKeepAlive_Elapsed;
    webKeepAlive.Start();
}

private void WebKeepAlive_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    webKeepAlive.Stop();
    try
    {
        // ONLY the first time it retrieves the content will it print the string
        String finalHtml = GetWebContent();
        if (counter < 1)
        {
            Console.WriteLine(finalHtml);
        }
        counter++;
    }
    finally
    {
        webKeepAlive.Interval = 480000; // every 8 minutes
        webKeepAlive.Start();
    }
}

public String GetWebContent()
{
    try
    {
        String URL = "http://YOURURL.COM";
        WebRequest request = WebRequest.Create(URL);
        WebResponse response = request.GetResponse();
        Stream data = response.GetResponseStream();
        string html = String.Empty;
        using (StreamReader sr = new StreamReader(data))
        {
            html = sr.ReadToEnd();
        }
        Console.WriteLine(String.Format("{0} : success", DateTime.Now));
        return html;
    }
    catch (Exception ex)
    {
        Console.WriteLine(String.Format("{0} -- GetWebContent() : {1}", DateTime.Now, ex.Message));
        return "fail";
    }
}
Writing a ping service/script to hit your idle website is probably the best way to go, because you will have complete control. The other options you mentioned would only be available if you have leased a dedicated hosting box.
In a shared hosting space, warmup scripts are the best first-level defense (self-help is the best help). Here is an article which shares an idea on how to do it from your own web application.
I'd use B because that in conjunction with worker process recycling means there'd only be a delay while it's recycling. This avoids the delay normally associated with initialization in response to the first request after idle. You also get to keep the benefits of recycling.
A good option to ping the site on a schedule is to use Microsoft Flow, which is free for up to 750 "runs" per month. It is very easy to create a Flow that hits your site every hour to keep it warm. You can even work around their limit of 750 by creating a single flow with delays separating multiple hits of your site.
https://flow.microsoft.com
See this article for tips on how to help performance issues. This includes both performance issues related to starting up, under the "cold start" section. Most of this will matter no matter what type of server you are using, locally or in production.
https://blogs.msdn.microsoft.com/b/mcsuksoldev/2011/01/19/common-performance-issues-on-asp-net-web-sites/
If the application deserializes anything from XML (and that includes web services…) make sure SGEN is run against all binaries involved in deserialization and place the resulting DLLs in the Global Assembly Cache (GAC). This precompiles all the serialization objects used by the assemblies SGEN was run against and caches them in the resulting DLL. This can give huge time savings on the first deserialization (loading) of config files from disk and initial calls to web services.
http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx
If any IIS servers do not have outgoing access to the internet, turn off Certificate Revocation List (CRL) checking for Authenticode binaries by adding generatePublisherEvidence="false" to machine.config. Otherwise every worker process can hang for over 20 seconds during start-up while it times out trying to connect to the internet to obtain a CRL.
http://blogs.msdn.com/amolravande/archive/2008/07/20/startup-performance-disable-the-generatepublisherevidence-property.aspx
http://msdn.microsoft.com/en-us/library/bb629393.aspx
Consider using NGEN on all assemblies. However without careful use this doesn’t give much of a performance gain. This is because the base load addresses of all the binaries that are loaded by each process must be carefully set at build time to not overlap. If the binaries have to be rebased when they are loaded because of address clashes, almost all the performance gains of using NGEN will be lost.
http://msdn.microsoft.com/en-us/magazine/cc163610.aspx
I was getting a consistent 15 second delay on the first request after 4 minutes of inactivity. My problem was that my app was using Windows Integrated Authentication to SQL Server and the service profile was in a different domain than the server. This caused a cross-domain authentication from IIS to SQL upon app initialization - and this was the real source of my delay. I changed to using a SQL login instead of windows authentication. The delay was immediately gone. I still have all the app initialization settings in place to help improve performance but they may have not been needed at all in my case.

When is Response.IsClientConnected slow?

I have a long running ASP response (actually an MVC action) that I want to cancel if the user has navigated away. I think this should be fairly simple:
if (!this.Response.IsClientConnected)
{
    Response.End();
}
However, I've come across various sources stating that this method is slow.
So I ran my own tests (using MVC mini profiler, though you could use your own):
using (var step = MiniProfiler.Current.Step("Response_IsClientConnected"))
    if (!this.Response.IsClientConnected)
    {
        Response.End();
    }
I found that every time I call it, it's consistently very fast: under 1 ms on my developer setup. This is the case whether it returns true or false.
Under what circumstances is Response.IsClientConnected expected to be slow?
I have to support IIS6 - would Response.IsClientConnected be slower on that?
Does anyone know what it's doing under the covers? At a low level I'd expect the TCP/IP stack to know whether the connection is still there, so I'd expect this check to be instant, but does IIS have to do some additional work to check?
Good question. Unfortunately I don't have the answer, but I can provide the following information; hopefully it can be a starting point for knowing what it's doing under the covers.
Response.IsClientConnected checks this by asking the current HttpWorkerRequest worker handling the request.
The worker request can be one of the following types and is created by the ISAPIWorkerRequest.CreateWorkerRequest(IntPtr ecb, bool useOOP) which is called by the ISAPIRuntime.ProcessRequest(IntPtr ecb, int iWRType). This is the entry point from the low level ISAPI to the ASP.NET runtime.
ISAPIWorkerRequestInProcForIIS6 (IIS 6)
ISAPIWorkerRequestInProcForIIS7 (IIS 7 and later)
ISAPIWorkerRequestInProc (before IIS 6)
ISAPIWorkerRequestOutOfProc (out-of-process requests)
For all the in-proc HttpWorkerRequest workers, this call is then directed back to unmanaged code by calling int EcbIsClientConnected(IntPtr pECB), which is located in webengine.dll. pECB is the Extension Control Block (ECB), which provides all the low-level access to the ISAPI request; this reference is initially passed to ISAPIRuntime.ProcessRequest.
Now I can't find any implementation details of the EcbIsClientConnected method, so without that it's impossible to know what it's doing under the covers and how this may differ between versions of IIS. Maybe someone else can explain? I would like to know as well.

track visitor info on every request

Using ASP.NET Web Forms, I want to track visitor info just like Google Analytics does. Of course, I could use Google Analytics for this purpose, but I want to know how I can achieve the same thing with ASP.NET 3.5 and SQL Server 2008.
I want to store the visitor's IP, country, URL referrer, and screen resolution on each page request except postbacks. I am expecting 50k+ visits every day.
My main concern is that I want to do it in a way that does not block the current request.
That is, in general when we save data to the DB, the current request stops on the particular SP-calling statement and only moves ahead when the SP or T-SQL statement finishes executing. I want to follow an "insert and forget" approach: it should insert in the background when I pass parameters to a particular event or function.
I found the alternatives below for this:
1. PageAsynchTask
2. BeginExecuteNonQuery
3. jQuery POST method and a web service (but I am not confident about this, and am wondering how I should go about it)
I hope I've mentioned my problem properly.
Can anybody tell me which one is better approach? Also let me know if you've any other ideas or better approach than the listed one. Your help will be really appreciated.
The problem with any background thread on the server side is that each and every request is going to occupy two threads: one for serving the ASP.NET request and one for logging the stuff you want to log. So you end up having scalability issues due to exhaustion of ASP.NET threads. And logging each and every request in the database is a big no-no.
Best is to just write to log files using some high performance logging library. Logging libraries are highly optimized for multi-threaded logging. They don't produce I/O calls on each and every call. Logs are stored in a memory buffer and flushed periodically. You should use EntLib or Log4net for logging.
You can use an HttpModule that intercepts every GET and POST, and then inside the HttpModule you can check whether the Request.Url is an .aspx page or not. Then you can read Request.Headers["__ASYNCPOST"] and see if it's "true", which means it's an UpdatePanel async update. If all these conditions are true, you just log the request to the log file.
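A minimal sketch of such a module (the class name is illustrative, and the logger call is a stand-in for whichever logging library you pick):
using System;
using System.Web;

// Illustrative module; register it under <httpModules> (classic) or
// <system.webServer><modules> (IIS 7 integrated) in web.config.
public class VisitorLogModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.EndRequest += OnEndRequest;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpRequest request = ((HttpApplication)sender).Request;

        // Only log .aspx requests, and skip UpdatePanel async postbacks as described above.
        bool isAspx = request.Url.AbsolutePath.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase);
        bool isAsyncPostback = request.Headers["__ASYNCPOST"] == "true";
        if (!isAspx || isAsyncPostback)
            return;

        string line = string.Format("{0}\t{1}\t{2}\t{3}",
            DateTime.UtcNow,
            request.UserHostAddress,   // client IP
            request.Url,               // requested URL
            request.UrlReferrer);      // referrer (may be null)

        // Hand the line to a buffered logging library (log4net, EntLib, etc.) rather
        // than hitting the database on every request.
        log4net.LogManager.GetLogger("VisitorLog").Info(line);
    }

    public void Dispose() { }
}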
You can get the client IP from:
HttpContext.Current.Request.UserHostAddress;
or
HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
To get the IP address of the client machine and not the proxy, use the following code:
HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
However you cannot get the country. You will have to log the IP in your log files and then process the log files using some console application or job which will resolve the country of the IP. You need to get some IP->Country database to do the job. I have used http://www.maxmind.com/app/geoip_country before.
For screen size, you will have to rely on some JavaScript. Use a script on each page that finds the size of the screen on the client side and stores it in a cookie.
var screenW = 640, screenH = 480;
if (parseInt(navigator.appVersion) > 3) {
    screenW = screen.width;
    screenH = screen.height;
}
else if (navigator.appName == "Netscape"
    && parseInt(navigator.appVersion) == 3
    && navigator.javaEnabled())
{
    var jToolkit = java.awt.Toolkit.getDefaultToolkit();
    var jScreenSize = jToolkit.getScreenSize();
    screenW = jScreenSize.width;
    screenH = jScreenSize.height;
}
Once you store it in a cookie (I haven't shown that code), you can read the screen dimensions from the HttpModule by using Request.Cookies and then log it in the log file.
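Reading that cookie back on the server is then a couple of lines; a sketch, assuming the script stored the dimensions in a cookie named "screenSize" as "WIDTHxHEIGHT":
// Assumes the client-side script wrote a cookie such as: screenSize=1280x1024
HttpCookie cookie = HttpContext.Current.Request.Cookies["screenSize"];
string screenSize = (cookie != null) ? cookie.Value : "unknown";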
So, this gives you solution for logging IP, screensize, finding country from IP, and filtering UpdatePanel async postback from logging.
Does this give you a complete solution to the problem?
Talking about server side, if you're running on IIS, and you don't need absolute real time information, I recommend you use IIS logs.
There is nothing faster than this, as it has been optimized for performance since IIS 1.0.
You can append your own information to these logs (HttpResponse.AppendToLog), they have a standard format, there is an API if you want to do custom things with them (though you can still use a text parser if you prefer), and there are lots of free tools, for example Microsoft Log Parser, which can transfer the data into a SQL database (among others).
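For example, pushing an extra field into the current request's IIS log entry is a one-liner (a sketch; the field name and value are made up):
// Appends custom data to this request's IIS log entry (it ends up in the
// cs-uri-query field of the W3C log), so no separate log file is needed.
HttpContext.Current.Response.AppendToLog("screenSize=1280x1024");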
The first approach is looking good. (And I recommend it.) But it has 2 downsides:
Request will still block until task completes (or aborts on timeout).
You'll have to register your Task on every page.
The second approach looks inconvenient and might lead to errors. (You have to watch for a situation when your page renders faster than your query is processed. I'm not sure what will happen if your query is not complete when your page object is destroyed and GC goes on finalize() spree... but nothing good, I assume. You can avoid it by waiting for IAsyncResult.IsCompleted after render, but that's inconvenient.)
The third method is plain wrong. You should initiate your logging on the server side while processing the request you're going to log. But you can still call a web service from the server side (or a Windows service).
Personally, I'd like to implement logging in BeginRequest to avoid code duplication, but you need IsPostback... Still there might be a workaround.
You can fire an asynchronous request and not wait for the response.
Here is some code I have implemented for this.
You need to create a web service to do your database operation (or you can use it for your whole event handling).
From the server side, you call the web service asynchronously like this:
Declare a private delegate:
private delegate void ReEntryDelegate(long CaseID, string MessageText);
The method will then call the web service like this:
WebServiceTest.Notification service = new WebServiceTest.Notification();
IAsyncResult handle;
ReEntryDelegate objAscReEntry = new ReEntryDelegate(service.ReEntryNotifications);
handle = objAscReEntry.BeginInvoke(CaseID, MessageText, null, null);
The variable values (CaseID, MessageText) are passed in by the calling method.
Hope this is clear to you.
All the best.

Can I use threads to carry out long-running jobs on IIS?

In an ASP.Net application, the user clicks a button on the webpage and this then instantiates an object on the server through the event handler and calls a method on the object.
The method goes off to an external system to do stuff and this could take a while. So, what I would like to do is run that method call in another thread so I can return control to the user with "Your request has been submitted".
I am reasonably happy to do this as fire-and-forget, though it would be even nicer if the user could keep polling the object for status.
What I don't know is if IIS allows my thread to keep running, even if the user session expires.
Imagine, the user fires the event and we instantiate the object on the server and fire the method in a new thread. The user is happy with the "Your request has been submitted" message and closes his browser. Eventually, this users session will time out on IIS, but the thread may still be running, doing work. Will IIS allow the thread to keep running or will it kill it and dispose of the object once the user session expires?
EDIT: From the answers and comments, I understand that the best way to do this is to move the long-running processing outside of IIS. Apart from everything else, this deals with the appdomain recycling problem. In practice, I need to get version 1 off the ground in limited time and has to work inside an existing framework, so would like to avoid the service layer, hence the desire to just fire off the thread inside IIS. In practice, "long running" here will only be a few minutes and the concurrency on the website will be low so it should be okay. But, next version definitely will need splitting into a separate service layer.
You can accomplish what you want, but it is typically a bad idea. Several ASP.NET blog and CMS engines take this approach, because they want to be installable on a shared hosting system and not take a dependency on a windows service that needs to be installed. Typically they kick off a long running thread in Global.asax when the app starts, and have that thread process queued up tasks.
In addition to reducing resources available to IIS/ASP.NET to process requests, you also have issues with the thread being killed when the AppDomain is recycled, and then you have to deal with persistence of the task while it is in-flight, as well as starting the work back up when the AppDomain comes back up.
Keep in mind that in many cases the AppDomain is recycled automatically at a default interval, as well as if you update the web.config, etc.
If you can handle the persistence and transactional aspects of your thread being killed at any time, then you can get around the AppDomain recycling by having some external process that makes a request on your site at some interval - so that if the site is recycled you are guaranteed to have it start back up again automatically within X minutes.
Again, this is typically a bad idea.
EDIT: Here are some examples of this technique in action:
Community Server: Using Windows Services vs. Background Thread to Run Code at Scheduled Intervals
Creating a Background Thread When Website First Starts
EDIT (from the far distant future) - These days I would use Hangfire.
I disagree with the accepted answer.
Using a background thread (or a task, started with Task.Factory.StartNew) is fine in ASP.NET. As with all hosting environments, you may want to understand and cooperate with the facilities governing shutdown.
In ASP.NET, you can register work needing to stop gracefully on shutdown using the HostingEnvironment.RegisterObject method. See this article and the comments for a discussion.
(As Gerard points out in his comment, there's now also HostingEnvironment.QueueBackgroundWorkItem that calls down to RegisterObject to register a scheduler for the background item to work on. Overall the new method is nicer since it's task-based.)
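A minimal sketch of the task-based variant (requires .NET 4.5.2 or later; the work itself is just a stand-in delay):
using System;
using System.Threading.Tasks;
using System.Web.Hosting;

// ASP.NET tracks work queued this way: it delays appdomain shutdown (up to a grace
// period) until queued items finish, and signals the CancellationToken when shutdown starts.
HostingEnvironment.QueueBackgroundWorkItem(async ct =>
{
    // Stand-in for the real long-running call to the external system.
    await Task.Delay(TimeSpan.FromMinutes(1), ct);
});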
As for the general theme that you often hear of it being a bad idea, consider the alternative of deploying a windows service (or another kind of extra-process application):
No more trivial deployment with web deploy
Not deployable purely on Azure Websites
Depending on the nature of the background task, the processes will likely have to communicate. That means either some form of IPC or the service will have to access a common database.
Note also that some advanced scenarios might even need the background thread to be running in the same address space as the requests. I see the fact that ASP.NET can do this as a great advantage that has become possible through .NET.
You wouldn't want to use a thread from the IIS thread pool for this task because it would leave that thread unable to process future requests. You could look into Asynchronous Pages in ASP.NET 2.0, but that really wouldn't be the right answer, either. Instead, what it sounds like you would benefit from is looking into Microsoft Message Queuing. Essentially, you would add the task details to the queue and another background process (possibly a Windows Service) would be in charge of carrying out that task. But the bottom line is that the background process is completely isolated from IIS.
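For illustration, the enqueue side with System.Messaging might look roughly like this (the queue path and message body are placeholders):
using System.Messaging; // reference System.Messaging.dll

// Placeholder private queue path; create it once if it doesn't exist.
const string queuePath = @".\private$\ticketTasks";
if (!MessageQueue.Exists(queuePath))
    MessageQueue.Create(queuePath);

using (var queue = new MessageQueue(queuePath))
{
    // The web request only enqueues the task details; a separate background process
    // (e.g. a Windows Service) reads the queue and does the slow work.
    queue.Send("task details for the external system", "New ticket task");
}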
I would suggest using Hangfire for such requirements. It's a nice fire-and-forget engine that runs in the background, supports different architectures, and is reliable because it is backed by persistent storage.
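A minimal sketch of fire-and-forget with Hangfire (the storage setup normally lives in your startup code; the connection string name is a placeholder):
using System;
using Hangfire;

// Startup (e.g. OWIN Startup or Global.asax): point Hangfire at persistent storage
// and host a background job server inside the web app.
GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");
// app.UseHangfireServer(); // when using OWIN

// Anywhere in request handling: enqueue fire-and-forget work. The call is serialized
// and persisted, so it survives appdomain recycles and is retried on failure.
BackgroundJob.Enqueue(() => Console.WriteLine("Long-running work goes here"));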
There is a good thread and sample code here: http://forums.asp.net/t/1534903.aspx?PageIndex=2
I've even toyed with the idea of calling a keep alive page on my website from the thread to help keep the app pool alive. Keep in mind if you are using this method that you need really good recovery handling, because the application could recycle at any time. As many have mentioned this is not the right approach if you have access to other service options, but for shared hosting this may be one of your only options.
To help keep the app pool alive, you could make a request to your own site while the thread is processing. This may help keep the app pool alive if your process runs a long time.
string tempStr = GetUrlPageSource("http://www.mysite.com/keepalive.aspx");

public static string GetUrlPageSource(string url)
{
    string returnString = "";
    try
    {
        Uri uri = new Uri(url);
        if (uri.Scheme == Uri.UriSchemeHttp)
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            CookieContainer cookieJar = new CookieContainer();
            req.CookieContainer = cookieJar;

            // set the request timeout to 60 seconds
            req.Timeout = 60000;
            req.UserAgent = "MyAgent";

            // we do not want to request a persistent connection
            req.KeepAlive = false;

            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream stream = resp.GetResponseStream();
            StreamReader sr = new StreamReader(stream);
            returnString = sr.ReadToEnd();
            sr.Close();
            stream.Close();
            resp.Close();
        }
    }
    catch
    {
        returnString = "";
    }
    return returnString;
}
We started down this path, and it actually worked OK when our app was on one server. When we wanted to scale out to multiple machines (or use multiple w3wp processes in a web garden), we had to re-evaluate and look at how to manage a work queue, error handling, retries, and the tricky problem of correctly locking to ensure only one server picks up the next item.
... we realized we are not in the business of writing background processing engines, so we looked for existing solutions and ended up using the awesome OSS project Hangfire.
Sergey Odinokov has created a real gem which is really easy to get started with, and which allows you to swap out the backend for how work is persisted and queued. Hangfire uses background threads, but it persists the jobs, handles retries, and gives you visibility into the work queue. So Hangfire jobs are robust and survive all the vagaries of appdomains being recycled, etc.
Its basic setup uses SQL Server as the storage, but you can swap that out for Redis or MSMQ when it's time to scale up. It also has an excellent UI for visualizing all the jobs and their status, plus it allows you to re-queue jobs.
My point is that while it's entirely possible to do what you want in a background thread, there is a lot of work involved in making it scalable and robust. It's fine for simple workloads, but when things get more complex I much prefer to use a purpose-built library rather than go through this effort.
For some more perspective on the options available, check out Scott Hanselman's blog, which covers a few options for handling background jobs in ASP.NET. (He gave Hangfire a glowing review.)
Also, as referenced by John, it's worth reading Phil Haack's blog on why the approach is problematic and how to gracefully stop work on the thread when the appdomain is unloaded.
Can you create a Windows service to do that task, and then use .NET Remoting from the web server to call the Windows service to do the action? If so, that is what I would do.
This would eliminate the need to rely on IIS and would avoid tying up some of its processing power.
If not, then I would force the user to sit there while the process is done. That way you ensure it is completed and not killed by IIS.
There does seem to be one supported way of hosting long-running work in IIS. Workflow Services seem designed for this, especially in conjunction with Windows Server AppFabric. The design allows for application pool recycling by supporting automatic persistence and resumption of the long-running work.
You may run tasks in the background and they will complete even after the request ends. Don't let an uncaught exception be thrown. Normally you want to let your exceptions propagate, but if an exception is thrown in a new thread, it will crash the IIS worker process (w3wp.exe), because you are no longer in the request's context. That will also kill any other background tasks you have running, in addition to in-process memory-backed sessions if you are using them. This would be hard to diagnose, which is why the practice is discouraged.
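In other words, whatever starts the background work should catch (and log) exceptions at the top level rather than letting them escape the thread; a sketch:
using System;
using System.Threading;

// Fire-and-forget work started from a request. The top-level try/catch is what keeps
// an unhandled exception from tearing down the w3wp.exe worker process.
ThreadPool.QueueUserWorkItem(_ =>
{
    try
    {
        // Stand-in for the real long-running work.
        Thread.Sleep(TimeSpan.FromMinutes(1));
    }
    catch (Exception ex)
    {
        // Log with whatever framework the site already uses; never let it propagate.
        System.Diagnostics.Trace.TraceError("Background task failed: " + ex);
    }
});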
Just create a surrogate process to run the async tasks; it doesn't have to be a Windows service (although that is the more optimal approach in most cases). MSMQ is overkill.
