When is Response.IsClientConnected slow? - asp.net

I have a long running ASP response (actually an MVC action) that I want to cancel if the user has navigated away. I think this should be fairly simple:
if (!this.Response.IsClientConnected)
{
    Response.End();
}
However, I've come across various sources stating that this method is slow.
So I ran my own tests (using MVC mini profiler, though you could use your own):
using (var step = MiniProfiler.Current.Step("Response_IsClientConnected"))
    if (!this.Response.IsClientConnected)
    {
        Response.End();
    }
The tests show that every call is consistently very fast: under 1 ms on my development setup, whether the result is true or false.
Under what circumstances is Response.IsClientConnected expected to be slow?
I have to support IIS6 - would Response.IsClientConnected be slower on that?
Does anyone know what it's doing under the covers? At a low level I'd expect the TCP/IP stack to know whether the connection is still there, so I'd expect this check to be instant, but does IIS have to do some additional work to check?
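For context, the usage I have in mind is roughly the sketch below: a long-running action that checks Response.IsClientConnected between chunks of work and stops early if the caller has gone away. DoNextChunkOfWork and workItems are placeholders, not real code from my project:
// Rough sketch of the intended usage; DoNextChunkOfWork and workItems
// are placeholders for the real long-running work in my action.
public ActionResult LongRunningReport()
{
    foreach (var item in workItems)
    {
        // Stop early if the user has navigated away, rather than calling Response.End().
        if (!Response.IsClientConnected)
        {
            return new EmptyResult();
        }
        DoNextChunkOfWork(item);
    }
    return View();
}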

Good question. Unfortunately I don't have a definitive answer, but I can provide the following information; hopefully it's a starting point for working out what it's doing under the covers.
Response.IsClientConnected checks this by asking the current HttpWorkerRequest that is handling the request.
The worker request can be one of the following types and is created by ISAPIWorkerRequest.CreateWorkerRequest(IntPtr ecb, bool useOOP), which is called by ISAPIRuntime.ProcessRequest(IntPtr ecb, int iWRType). This is the entry point from the low-level ISAPI into the ASP.NET runtime.
ISAPIWorkerRequestInProcForIIS6 - IIS 6
ISAPIWorkerRequestInProcForIIS7 - IIS 7 and later
ISAPIWorkerRequestInProc - versions before IIS 6
ISAPIWorkerRequestOutOfProc - out-of-process requests
For all the in-proc HttpWorkerRequest workers this call is then directed back to unmanaged code by calling int EcbIsClientConnected(IntPtr pECB), which lives in webengine.dll. pECB is the Extension Control Block (ECB) that provides all the low-level access to the ISAPI request; this pointer is initially passed to ISAPIRuntime.ProcessRequest.
Now, I can't find any implementation details of the EcbIsClientConnected method, so without them it's impossible to know what it's doing under the covers and how it may differ between the different versions of IIS. Maybe someone else can explain? I would like to know as well.
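If you want to see at runtime which of those worker request types is actually answering IsClientConnected, HttpContext exposes the HttpWorkerRequest through its IServiceProvider implementation. A small diagnostic sketch (this may return null in some hosts, so treat it as inspection code only):
using System;
using System.Web;

// Sketch: inspect the concrete HttpWorkerRequest serving the current request,
// e.g. to confirm which ISAPIWorkerRequest* class is in play on your server.
public static string DescribeWorkerRequest(HttpContext context)
{
    var provider = (IServiceProvider)context;
    var worker = (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));
    if (worker == null)
    {
        return "No HttpWorkerRequest available in this host";
    }
    return worker.GetType().FullName + ", IsClientConnected=" + worker.IsClientConnected();
}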

Related

Fixing slow initial load for IIS

IIS has an annoying feature for low traffic websites where it recycles unused worker processes, causing the first user to the site after some time to get an extremely long delay (30+ seconds).
I've been looking for a solution to the problem and I've found these potential solutions.
A. Use the Application Initialization plugin
B. Use Auto-Start with .NET 4
C. Disable the idle-timeout (under IIS Reset)
D. Precompile the site
I'm wondering which of these is preferred, and more importantly, why are there so many solutions to the same problem? (My guess is they aren't, and I'm just not understanding something correctly).
Edit
Performing C seems to be enough to keep my site warmed up, but I've discovered that the real root of my site's slowness has to do with Entity Framework, and I can't figure out why it's going cold. See this question - it initially went unanswered, but has since been answered!
I eventually just had to make a warm up script to hit my site occasionally to make sure it stayed speedy.
Options A, B and D seem to be in the same category: they only influence the initial start time, warming up the website by, for example, compiling it and loading libraries into memory.
Using C, adjusting the idle timeout, should be enough so that subsequent requests to the server are served fast (restarting the app pool takes quite some time - on the order of seconds).
As far as I know, the timeout exists to save memory that other websites running in parallel on that machine might need. The price is a one-time slow load.
Besides being shut down after a period of inactivity, the app pool will also recycle by default every 1740 minutes (29 hours).
From technet:
Internet Information Services (IIS) application pools can be periodically recycled to avoid unstable states that can lead to application crashes, hangs, or memory leaks.
As long as app pool recycling is left on, it should be enough.
But if you really want top notch performance for most components, you should also use something like the Application Initialization Module you mentioned.
Web Hosting Challenge
You have to remember that none of the machine configuration options are available if you are hosted on a shared server as many of us (smaller companies and individuals) are.
ASP.NET MVC Overhead
My site takes at least 30 seconds to respond when it hasn't been hit in over 20 minutes (and the web app has been stopped). It is terrible.
Another Way to Test Performance
There's another way to test if it is your ASP.NET MVC start up or something else. Drop a normal HTML page on your site where you can hit it directly.
If the problem is related to ASP.NET MVC start up then the HTML page will render almost immediately even when the web app hasn't been started.
That's how I first recognized that the problem was in the ASP.NET MVC startup.
I could load an HTML page at any time and it would come back blazingly fast. Then, after hitting that HTML page, I'd hit one of my ASP.NET MVC URLs and get the Chrome message "Waiting for raddev.us..."
Another Test With Helpful Script
After that I wrote a LINQPad (check out http://linqpad.net for more) script that would hit my web site every 8 minutes (less than the time for the app to unload -- which should be 20 minutes) and I let it run for hours.
While the script was running I hit my web site and every time it came up blazingly fast. That strongly suggests the slowness I was experiencing was due to ASP.NET MVC startup time.
Get LinqPad and you can run the following script -- just change the URL to your own and let it run and you can test this easily.
Good luck.
NOTE: In LinqPad you'll need to press F4 and add a reference to System.Net to add the library which will retrieve your page.
ALSO : make sure you change the String URL variable to point at a URL that will load a route from your ASP.NET MVC site so the engine will run.
System.Timers.Timer webKeepAlive = new System.Timers.Timer();
Int64 counter = 0;

void Main()
{
    webKeepAlive.Interval = 5000;
    webKeepAlive.Elapsed += WebKeepAlive_Elapsed;
    webKeepAlive.Start();
}

private void WebKeepAlive_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    webKeepAlive.Stop();
    try
    {
        // ONLY the first time it retrieves the content it will print the string
        String finalHtml = GetWebContent();
        if (counter < 1)
        {
            Console.WriteLine(finalHtml);
        }
        counter++;
    }
    finally
    {
        webKeepAlive.Interval = 480000; // every 8 minutes
        webKeepAlive.Start();
    }
}

public String GetWebContent()
{
    try
    {
        String URL = "http://YOURURL.COM";
        WebRequest request = WebRequest.Create(URL);
        WebResponse response = request.GetResponse();
        Stream data = response.GetResponseStream();
        string html = String.Empty;
        using (StreamReader sr = new StreamReader(data))
        {
            html = sr.ReadToEnd();
        }
        Console.WriteLine(String.Format("{0} : success", DateTime.Now));
        return html;
    }
    catch (Exception ex)
    {
        Console.WriteLine(String.Format("{0} -- GetWebContent() : {1}", DateTime.Now, ex.Message));
        return "fail";
    }
}
Writing a ping service/script to hit your idle website is probably the best way to go, because you have complete control. The other options you mentioned would only be available if you had leased a dedicated hosting box.
In a shared hosting space, warmup scripts are the best first level defense (self help is the best help). Here is an article which shares an idea on how to do it from your own web application.
I'd use B because that in conjunction with worker process recycling means there'd only be a delay while it's recycling. This avoids the delay normally associated with initialization in response to the first request after idle. You also get to keep the benefits of recycling.
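For completeness, option B (Auto-Start on IIS 7.5 / .NET 4) boils down to implementing System.Web.Hosting.IProcessHostPreloadClient and registering the class as a serviceAutoStartProvider for your application in applicationHost.config. A minimal sketch of the managed side; the warm-up work inside Preload is a placeholder for whatever your app needs:
using System.Web.Hosting;

// Sketch of an IIS 7.5 auto-start preload client (option B). The class still
// has to be registered as a serviceAutoStartProvider in applicationHost.config.
public class WarmUp : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Placeholder warm-up work: prime caches, touch EF contexts, etc.
        // e.g. using (var db = new MyDbContext()) { db.SomeTable.Any(); }
    }
}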
A good option to ping the site on a schedule is to use Microsoft Flow, which is free for up to 750 "runs" per month. It is very easy to create a Flow that hits your site every hour to keep it warm. You can even work around their limit of 750 by creating a single flow with delays separating multiple hits of your site.
https://flow.microsoft.com
See this article for tips on how to address performance issues, including start-up related problems covered under the "cold start" section. Most of this matters no matter what type of server you are using, locally or in production.
https://blogs.msdn.microsoft.com/b/mcsuksoldev/2011/01/19/common-performance-issues-on-asp-net-web-sites/
If the application deserializes anything from XML (and that includes web services…) make sure SGEN is run against all binaries involved in deserialization and place the resulting DLLs in the Global Assembly Cache (GAC). This precompiles all the serialization objects used by the assemblies SGEN was run against and caches them in the resulting DLL. This can give huge time savings on the first deserialization (loading) of config files from disk and on initial calls to web services.
http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx
If any IIS servers do not have outgoing access to the internet, turn off Certificate Revocation List (CRL) checking for Authenticode binaries by adding generatePublisherEvidence="false" to machine.config. Otherwise every worker process can hang for over 20 seconds during start-up while it times out trying to connect to the internet to obtain a CRL.
http://blogs.msdn.com/amolravande/archive/2008/07/20/startup-performance-disable-the-generatepublisherevidence-property.aspx
http://msdn.microsoft.com/en-us/library/bb629393.aspx
Consider using NGEN on all assemblies. However without careful use this doesn’t give much of a performance gain. This is because the base load addresses of all the binaries that are loaded by each process must be carefully set at build time to not overlap. If the binaries have to be rebased when they are loaded because of address clashes, almost all the performance gains of using NGEN will be lost.
http://msdn.microsoft.com/en-us/magazine/cc163610.aspx
I was getting a consistent 15-second delay on the first request after 4 minutes of inactivity. My problem was that my app was using Windows Integrated Authentication to SQL Server and the service profile was in a different domain than the server. This caused a cross-domain authentication from IIS to SQL upon app initialization - and this was the real source of my delay. I changed to using a SQL login instead of Windows authentication. The delay was immediately gone. I still have all the app initialization settings in place to help improve performance, but they may not have been needed at all in my case.
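In case it helps anyone else, the change was essentially just in the connection string. The values below are hypothetical, but show the shape of the switch:
// Hypothetical connection strings illustrating the switch described above.
// Windows Integrated Authentication (triggered the cross-domain handshake):
var integratedConnStr = "Server=DBSERVER;Database=AppDb;Integrated Security=SSPI;";
// SQL Server login (no cross-domain authentication at app start-up):
var sqlLoginConnStr = "Server=DBSERVER;Database=AppDb;User ID=appUser;Password=<secret>;";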

Does asp.net lifecycle continue if I close the browser in the middle of processing?

I have an ASP.NET web page that connects to a number of databases and uses a number of files. I am not clear what happens if the end user closes the web page before it has finished loading: does the ASP.NET life cycle end, or will the server still try to generate the page and return it to the client? I have reasonable knowledge of the life cycle, but I cannot find any documentation on this.
I am trying to track down a potential memory leak, so I am trying to establish whether all of the code will run, i.e. whether the connections will be disposed, etc.
The code would still run. There is a property, IsClientConnected, on the HttpResponse object that indicates whether the client is still connected; it is useful if you are doing operations like streaming output in a loop.
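A rough sketch of what that looks like when streaming in a loop (inside a page or handler; the file path and chunk size are just illustrative):
// Sketch: stream a large file in chunks and stop if the client disconnects.
// The path and buffer size are illustrative, not from the original question.
using (var fs = System.IO.File.OpenRead(@"C:\temp\bigfile.zip"))
{
    var buffer = new byte[64 * 1024];
    int read;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        if (!Response.IsClientConnected)
        {
            break; // the client has gone away, no point finishing the work
        }
        Response.OutputStream.Write(buffer, 0, read);
        Response.Flush();
    }
}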
Once the request for the page has been received, it will run through the life cycle all the way to Unload. The server has no idea the client isn't there until it tries to send the response.
A unique aspect of this is the Dynamic Compilation portion. You can read up on it here: http://msdn.microsoft.com/en-us/library/ms366723
For more information on the ASP.NET life cycle, look here:
http://msdn.microsoft.com/en-us/library/ms178472.aspx#general_page_lifecycle_stages
So basically, a page is requested, ASP.NET uses dynamic compilation to build the page, and then it attempts to send the page to the client. All of the code you have specified will run, whether or not the client is there to receive the result.
This is a very simplified answer, but that is the basics. Your code is compiled, the request generates the response, then the response is sent. It isn't sent in pieces unless you explicitly tell it to.
Edit: Thanks to Chris Lively for the recommendation on changing the wording.
You mention tracking down a potential memory leak and the word "connection". I'm going to guess you mean a database connection.
You should ALWAYS wrap your connections and commands in using blocks. This guarantees the connection/command is properly disposed of regardless of whether an error occurs, the client disconnects, etc.
There are plenty of examples here, but it boils down to something like:
using (SqlConnection conn = new SqlConnection(connStr)) {
    conn.Open();
    using (SqlCommand cmd = new SqlCommand(sqlText, conn)) { // sqlText = your command text
        // do something here.
    }
}
If, for some reason, your code doesn't allow you to do it this way then I'd suggest the next thing you do is restructure it as you've done it wrong. A common problem is that some people will create a connection object at the top of the page execution then re-use that for the life of the page. This is guaranteed to lead to problems, including but not limited to: errors with the connection pool, loss of memory, random query issues, complete hosing of the app...
Don't worry about the performance of establishing (and discarding) connections at the point you need them in code. ADO.NET maintains a connection pool that is lightning fast and will keep physical connections around for as long as needed, even after your code signals that it's done with them.
Also note: you should use this pattern EVERY TIME you use a class that wraps unmanaged resources. Such classes implement IDisposable.

Problem running Blackberry App without BES

I'm developing a Blackberry Application that does quite a bit of networking, using HttpConnections and InputStreams. I've been testing it in an environment where it has access to a BES, but will be demoing it with only wireless.
Some preliminary testing on a Bold 9000 shows that although the web browser of the phone can get onto the internet, my application cannot. My understanding is that the BES usually handles most of the networking logic, and that the BlackBerry itself isn't very good at it.
I've seen some references to having to add ";interface=wifi" to the urls I am trying to connect to, but when I do this, progressively downloading a large movie file will hang after a few seconds.
Is there anything else that can be done to get a Blackberry Application to work with just wireless? Are there signed classes I could use that could handle this?
Edit
It looks like what is going on is that there is a small chance of any given networking call simply not working (General Socket Exception). The problem is that for large files I'm making many connections, in chunks of 256 KB, so the larger the file, the more likely it is that one of them eventually errors. I'm really not sure how to handle this.
Edit
I've used a workaround in my Connector.open call, using the version of .open that takes a timeout option. If a particular networking call never returns, which was my problem in addition to the exceptions, then it retries after a few seconds; it does the same for the exceptions. This is, at best, a temporary fix, and if anyone knows of a way to improve non-BES networking performance, I'd love to hear it.
A simple solution would be to check for the WiFi Coverage Status
public boolean GetWiFiCoverageStatus() {
    if ((WLANInfo.getWLANState() == WLANInfo.WLAN_STATE_CONNECTED) &&
        RadioInfo.areWAFsSupported(RadioInfo.WAF_WLAN)) {
        // this.connectionString += ";interface=wifi";
        return true;
    } else return false;
}
This ensures that a connection is built only if the device is connected to an access point.
Edit:
Second thing you should check is this Knowledge Base Entry (HTTP 413 Request Entity Too Large)
Third addition: Did you use ;deviceside=true in your connection string? Without an MDS backend you have to use this suffix to ensure a direct TCP/IP connection.

Why is calling a web service slower from a web page?

We have a DLL used as the middle layer between our website front end and our back end ticketing system. The method of insertion into the ticketing system is a bit complicated to explain, but the short version is that it's slow. The best case scenario I've gotten is a 9 second submission time.
The real problem though, is that I can only get that time through a Windows app, not through an ASP.NET web site. I've set up both a Windows test application and a web page for testing, and even though the code is copied between them the web page is consistently submitting in 17-20 seconds, while the windows app is getting 8-11 seconds.
What could be causing that?
EDIT: In response to a couple of the answers...
The call to the web service is taking the bulk of the time, but I have no control over this web service as it's provided by the ticketing system vendor. I need to find out why the web service takes different amounts of time when it's called from different kinds of application. The code is exactly the same in both cases; it runs a loop and then reports the recorded times.
The code is:
for (int i = 0; i < numIterations; i++)
{
    startTimes[i] = DateTime.Now;
    try
    {
        cvNum = Clearview.Submit(req, DateTime.Now, DateTime.Now, false);
    }
    catch (Exception ex)
    {
        exceptionCount++;
        lblResult.Text += @"<br />Exception Caught: " + ex.Message + @"<br />";
    }
    endTimes[i] = DateTime.Now;
}
It's the same loop in both cases, and I'm marking the time right before and after the call to the library, which does further processing and then calls the web service. But that processing should be consistent shouldn't it? I have traced during debugging and not seen any delay getting to the actual web service call...
EDIT again: Working with ANTS, in both cases 99.4% of the time is being spent on the web service call alone. There appears to be no difference there... except that, when timed, the web page takes longer than the Windows app.
Potentially the location of the web service relative to the web server could be an issue. Also, the page structure and other processing inside your web UI could have an impact on how long the application takes to process.
As mentioned, logging on both sides is a great idea; if that doesn't get you what you need, you might try a performance profiler such as ANTS Profiler by Red Gate, which can help identify the line, method, or class that is using the bulk of the time.
Are you running both on the same machine? Is the middle layer that you are calling located on a remote machine? The time durations you mention vaguely feel like a DNS timeout issue, where opening a connection incurs the penalty of waiting for the first (down/misaddressed) DNS response to time out. Are you sure that whatever config file/variable points the DLL at the middle layer is the same in both invocations?
I second the suggestion to use Wireshark to see what is going on. You can at least satisfy yourself that the backend processing time is (should be, anyways) the same...
Pepper your application on both sides with logs - that will show you where the time is going. If that doesn't help, use Wireshark to trace the network activity.
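A cheap way to do the client-side half of that logging is to wrap the web service call in a Stopwatch, something like the sketch below (Clearview.Submit is the call from the question; swap Trace for whatever logging you use):
// Minimal timing sketch around the vendor call; drop it into the existing loop.
var sw = System.Diagnostics.Stopwatch.StartNew();
try
{
    cvNum = Clearview.Submit(req, DateTime.Now, DateTime.Now, false);
}
finally
{
    sw.Stop();
    System.Diagnostics.Trace.WriteLine(
        string.Format("Clearview.Submit took {0} ms", sw.ElapsedMilliseconds));
}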

Can I use threads to carry out long-running jobs on IIS?

In an ASP.Net application, the user clicks a button on the webpage and this then instantiates an object on the server through the event handler and calls a method on the object.
The method goes off to an external system to do stuff and this could take a while. So, what I would like to do is run that method call in another thread so I can return control to the user with "Your request has been submitted".
I am reasonably happy to do this as fire-and-forget, though it would be even nicer if the user could keep polling the object for status.
What I don't know is if IIS allows my thread to keep running, even if the user session expires.
Imagine, the user fires the event and we instantiate the object on the server and fire the method in a new thread. The user is happy with the "Your request has been submitted" message and closes his browser. Eventually, this users session will time out on IIS, but the thread may still be running, doing work. Will IIS allow the thread to keep running or will it kill it and dispose of the object once the user session expires?
EDIT: From the answers and comments, I understand that the best way to do this is to move the long-running processing outside of IIS. Apart from everything else, this deals with the appdomain recycling problem. In practice, I need to get version 1 off the ground in limited time and has to work inside an existing framework, so would like to avoid the service layer, hence the desire to just fire off the thread inside IIS. In practice, "long running" here will only be a few minutes and the concurrency on the website will be low so it should be okay. But, next version definitely will need splitting into a separate service layer.
You can accomplish what you want, but it is typically a bad idea. Several ASP.NET blog and CMS engines take this approach, because they want to be installable on a shared hosting system and not take a dependency on a windows service that needs to be installed. Typically they kick off a long running thread in Global.asax when the app starts, and have that thread process queued up tasks.
In addition to reducing resources available to IIS/ASP.NET to process requests, you also have issues with the thread being killed when the AppDomain is recycled, and then you have to deal with persistence of the task while it is in-flight, as well as starting the work back up when the AppDomain comes back up.
Keep in mind that in many cases the AppDomain is recycled automatically at a default interval, as well as if you update the web.config, etc.
If you can handle the persistence and transactional aspects of your thread being killed at any time, then you can get around the AppDomain recycling by having some external process that makes a request on your site at some interval - so that if the site is recycled you are guaranteed to have it start back up again automatically within X minutes.
Again, this is typically a bad idea.
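To make the pattern concrete, here is a rough sketch of the Global.asax approach those engines use; the queue and worker loop are made up for illustration, and all the caveats above (AppDomain recycles, lost in-flight work) still apply:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Web;

// Rough sketch of the "long-running thread started from Global.asax" pattern.
// WorkQueue and the polling loop are illustrative only.
public class Global : HttpApplication
{
    public static readonly ConcurrentQueue<Action> WorkQueue = new ConcurrentQueue<Action>();

    protected void Application_Start()
    {
        var worker = new Thread(() =>
        {
            while (true)
            {
                Action job;
                if (WorkQueue.TryDequeue(out job))
                {
                    try { job(); }
                    catch { /* log it - never let this thread die on an exception */ }
                }
                else
                {
                    Thread.Sleep(1000); // nothing queued, wait a bit
                }
            }
        });
        worker.IsBackground = true; // don't block worker process shutdown
        worker.Start();
    }
}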
EDIT: Here are some examples of this technique in action:
Community Server: Using Windows Services vs. Background Thread to Run Code at Scheduled Intervals
Creating a Background Thread When Website First Starts
EDIT (from the far distant future) - These days I would use Hangfire.
I disagree with the accepted answer.
Using a background thread (or a task, started with Task.Factory.StartNew) is fine in ASP.NET. As with all hosting environments, you may want to understand and cooperate with the facilities governing shutdown.
In ASP.NET, you can register work needing to stop gracefully on shutdown using the HostingEnvironment.RegisterObject method. See this article and the comments for a discussion.
(As Gerard points out in his comment, there's now also HostingEnvironment.QueueBackgroundWorkItem that calls down to RegisterObject to register a scheduler for the background item to work on. Overall the new method is nicer since it's task-based.)
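For reference, a minimal sketch of the QueueBackgroundWorkItem route (available from .NET 4.5.2). The work loop below is a placeholder; the point is that ASP.NET hands you a CancellationToken that is signalled when shutdown begins and will try to delay AppDomain shutdown briefly for registered work:
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

// Sketch: queue background work that cooperates with ASP.NET shutdown.
public static class BackgroundWork
{
    public static void Kick()
    {
        HostingEnvironment.QueueBackgroundWorkItem(ct => RunAsync(ct));
    }

    private static async Task RunAsync(CancellationToken ct)
    {
        // Keep checking the token so the work stops gracefully when the
        // AppDomain begins to shut down.
        while (!ct.IsCancellationRequested)
        {
            // ... next chunk of work (placeholder) ...
            await Task.Delay(TimeSpan.FromSeconds(5), ct);
        }
    }
}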
As for the general theme that you often hear of it being a bad idea, consider the alternative of deploying a windows service (or another kind of extra-process application):
No more trivial deployment with web deploy
Not deployable purely on Azure Websites
Depending on the nature of the background task, the processes will likely have to communicate. That means either some form of IPC or the service will have to access a common database.
Note also that some advanced scenarios might even need the background thread to be running in the same address space as the requests. I see the fact that ASP.NET can do this as a great advantage that has become possible through .NET.
You wouldn't want to use a thread from the IIS thread pool for this task because it would leave that thread unable to process future requests. You could look into Asynchronous Pages in ASP.NET 2.0, but that really wouldn't be the right answer, either. Instead, what it sounds like you would benefit from is looking into Microsoft Message Queuing. Essentially, you would add the task details to the queue and another background process (possibly a Windows Service) would be in charge of carrying out that task. But the bottom line is that the background process is completely isolated from IIS.
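As a rough sketch of the hand-off half of that design (the queue name and message body are illustrative; a separate Windows Service would do the Receive and the actual work):
using System.Messaging;

// Sketch: the web app only enqueues a task description; a separate
// Windows Service dequeues it and does the long-running work.
// ".\Private$\LongRunningTasks" is an illustrative queue name.
public static void EnqueueTask(string taskDetails)
{
    const string queuePath = @".\Private$\LongRunningTasks";
    if (!MessageQueue.Exists(queuePath))
    {
        MessageQueue.Create(queuePath);
    }
    using (var queue = new MessageQueue(queuePath))
    {
        queue.Send(taskDetails, "long-running task");
    }
}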
I would suggest using Hangfire for such requirements. It's a nice fire-and-forget engine that runs in the background, supports different architectures, and is reliable because it is backed by persistent storage.
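To give an idea of how little code that takes, a minimal sketch using Hangfire's public API; the storage connection string name and the job body are placeholders:
using System;
using Hangfire;

// Sketch: configure Hangfire storage (e.g. in Global.asax / OWIN Startup),
// start a server to process the queue, and enqueue a fire-and-forget job.
GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");
var server = new BackgroundJobServer(); // processes queued jobs in this process

BackgroundJob.Enqueue(() => Console.WriteLine("Long-running work goes here"));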
There is a good thread and sample code here: http://forums.asp.net/t/1534903.aspx?PageIndex=2
I've even toyed with the idea of calling a keep alive page on my website from the thread to help keep the app pool alive. Keep in mind if you are using this method that you need really good recovery handling, because the application could recycle at any time. As many have mentioned this is not the right approach if you have access to other service options, but for shared hosting this may be one of your only options.
To help keep the app pool alive, you could make a request to your own site while the thread is processing. This may help keep the app pool alive if your process runs a long time.
string tempStr = GetUrlPageSource("http://www.mysite.com/keepalive.aspx");

public static string GetUrlPageSource(string url)
{
    string returnString = "";
    try
    {
        Uri uri = new Uri(url);
        if (uri.Scheme == Uri.UriSchemeHttp)
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            CookieContainer cookieJar = new CookieContainer();
            req.CookieContainer = cookieJar;
            //set the request timeout to 60 seconds
            req.Timeout = 60000;
            req.UserAgent = "MyAgent";
            //we do not want to request a persistent connection
            req.KeepAlive = false;
            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream stream = resp.GetResponseStream();
            StreamReader sr = new StreamReader(stream);
            returnString = sr.ReadToEnd();
            sr.Close();
            stream.Close();
            resp.Close();
        }
    }
    catch
    {
        returnString = "";
    }
    return returnString;
}
We started down this path, and it actually worked OK when our app was on one server. When we wanted to scale out to multiple machines (or use multiple w3wp processes in a web garden) we had to re-evaluate and look at how to manage a work queue, error handling, retries, and the tricky problem of locking correctly to ensure only one server picks up the next item.
... we realized we are not in the business of writing background-processing engines, so we looked for existing solutions and ended up using the awesome OSS project Hangfire.
Sergey Odinokov has created a real gem which is really easy to get started with, and it allows you to swap out the backend for how work is persisted and queued. Hangfire uses background threads, but persists the jobs, handles retries, and gives you visibility into the work queue. So Hangfire jobs are robust and survive all the vagaries of AppDomains being recycled, etc.
Its basic setup uses SQL Server as the storage, but you can swap that out for Redis or MSMQ when it's time to scale up. It also has an excellent UI for visualizing all the jobs and their status, plus it allows you to re-queue jobs.
My point is that while it's entirely possible to do what you want in a background thread, there is a lot of work involved in making it scalable and robust. It's fine for simple workloads, but when things get more complex I much prefer to use a purpose-built library rather than go through this effort.
For some more perspective on the options available, check out Scott Hanselman's blog, which covers a few options for handling background jobs in ASP.NET. (He gave Hangfire a glowing review.)
Also, as referenced by John, it's worth reading Phil Haack's blog post on why the approach is problematic and how to gracefully stop work on the thread when the AppDomain is unloaded.
Can you create a Windows service to do that task, and then use .NET Remoting from the web server to call the Windows service to do the action? If you can, that is what I would do.
This would eliminate the need to rely on IIS and would avoid tying up some of its processing power.
If not, then I would force the user to sit there while the process runs. That way you ensure it completes and is not killed by IIS.
There does seem to be one supported way of hosting long-running work in IIS. Workflow Services seem designed for this, especially in conjunction with Windows Server AppFabric. The design allows for application pool recycling by supporting automatic persistence and resumption of the long-running work.
You may run tasks in the background and they will complete even after the request ends. Don't let an unhandled exception be thrown, though. Normally you want exceptions to propagate, but if an exception is thrown on a new thread it will crash the IIS worker process (w3wp.exe), because you are no longer in the request's context. That will also kill any other background tasks you have running, in addition to in-process, memory-backed sessions if you are using them. This would be hard to diagnose, which is why the practice is discouraged.
Just create a surrogate process to run the async tasks; it doesn't have to be a Windows service (although that is the more optimal approach in most cases). MSMQ is way overkill.

Resources