I'm doing some diagnostic logging in the Page_Unload event of an ASP.NET application, and this logging can take a fair bit of time (about 100ms). Will the response stream get held up by the code in the Page_Unload handler? I could do the work asynchronously on the thread pool, but I'd rather not if it won't affect the client's response time.
More information:
@thorkia is correct in that the documentation says Page_Unload is called after the response is sent to the client, but in my testing (as advised by @steve) it does block. I've tried Cassini, IIS Express, and full IIS 7.5 (on a test server), with both release and debug builds, with and without a debugger attached. And, grasping at straws, I tried putting Async="true" in the page directive. I tried with Fiddler (streaming enabled) and without Fiddler, and with both IE9 and Firefox. If the documentation is "correct", then I wonder if it does send the response but perhaps doesn't "finish it off" (whatever that means; I'll need to check the HTTP spec), so the page doesn't render in the browser? But my understanding was that a client browser starts to render the page as it receives the bytes, so that doesn't make sense to me either. I've also tried looking at the code in ILSpy, but I think that might take me a lot of time.
Now I'm intrigued; am I doing something wrong, or is the documentation misleading?
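For what it's worth, this is roughly how I've been timing it from the client side (a rough sketch; the URL is just my local test address, and the page under test has a Page_Unload handler that sleeps, like the one in the answer below):

using System;
using System.Diagnostics;
using System.IO;
using System.Net;

class UnloadTimingTest
{
    static void Main()
    {
        // Hypothetical local test URL; the target page's Page_Unload sleeps for several seconds.
        var request = (HttpWebRequest)WebRequest.Create("http://localhost:12345/Default.aspx");
        var stopwatch = Stopwatch.StartNew();

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // Time until the response headers arrive.
            Console.WriteLine("Headers received after {0} ms", stopwatch.ElapsedMilliseconds);

            reader.ReadToEnd();

            // If Page_Unload holds up the response, this will land after the sleep completes.
            Console.WriteLine("Body fully read after {0} ms", stopwatch.ElapsedMilliseconds);
        }
    }
}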
Why not try it?
protected void Page_Unload(object sender, EventArgs e)
{
    System.Diagnostics.Debug.WriteLine("In Page_Unload");

    // An exaggerated delay: if the response is held up, a 10-second stall is hard to miss.
    System.Threading.Thread.Sleep(10000);

    System.Diagnostics.Debug.WriteLine("Leaving Page_Unload");
}
According to MSDN (http://msdn.microsoft.com/en-us/library/ms178472.aspx) the Page Unload stage is only called after the data has been sent to the client.
Taking a long time to do your logging and clean-up will not affect the client's response time for that request, but it could affect future requests if lots of pages are waiting to be unloaded.
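If the logging does turn out to block, or you simply want it off the request path, a minimal sketch of handing it to the thread pool from Page_Unload might look like this (capture whatever you need from the request first, since HttpContext.Current isn't reliable on the worker thread; WriteDiagnosticLog is a hypothetical stand-in for your ~100ms logging call):

protected void Page_Unload(object sender, EventArgs e)
{
    // Capture request data up front; the page and its context are being torn down.
    string url = Request.RawUrl;
    DateTime requestStarted = Context.Timestamp;

    System.Threading.ThreadPool.QueueUserWorkItem(state =>
    {
        // Hypothetical logging method representing the slow diagnostic work.
        WriteDiagnosticLog(url, requestStarted);
    });
}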
Related
I have set up an extremely basic Web Forms site locally to find out why, when I run a load test through jMeter against a .NET website, the OutputCache directive on the page is ignored.
I'm using this directive at the top of the page:
<%@ OutputCache Duration="3600" VaryByParam="none" %>
To check whether the cache is working, I have a simple log4net call in Page_Load:
protected void Page_Load(object sender, EventArgs e)
{
    log.Debug("In Page_Load");
}
When browsing the site through the web browser, the OutputCache behaves as expected and I get a single entry in the log file:
DEBUG [2014-04-05 15:44:56,905] - In Page_Load
When I continue to refresh the same page in that browser, as well as opening it in other browsers, there are no more log entries, which is the expected behaviour.
I then restarted IIS and executed a very simple jMeter test of 5 threads with a 1000 millisecond delay, and I get 5 "In Page_Load" logs instead of the expected single entry above.
If I restart IIS, open the site in a browser window, and then execute the load test, I get the single log entry from the first manual browser request and no more. It seems that first browser request populates the web server's cache, and the jMeter load test then runs as expected.
Does anyone have experience with jMeter and .Net OutputCache?
I have no experience with .NET OutputCache, but it sounds like your simple JMeter script "looks" different to IIS than your browser does.
I would suggest recording an actual browser session with either the built-in JMeter HTTP(S) recorder [1] (formerly called the JMeter built-in proxy) or with BlazeMeter's JMeter Chrome plugin [2].
I'm guessing playing back an actual browser request might do the trick.
[1] http://jmeter.apache.org/usermanual/component_reference.html#HTTP(S)_Test_Script_Recorder
[2] https://chrome.google.com/webstore/detail/blazemeter-the-load-testi/mbopgmdnpcbohhpnfglgohlbhfongabi?hl=en
Update 1:
Try these scenarios:
A - 1 thread with 5 loops (and not 5 threads with 1 loop)
B - Use a loop controller in the actual script instead of looping with the thread group.
C - Can you try loading the page with a real browser 5 times and totally clearing all cache / cookies / history / etc between loads?
I'm guessing it might have something to do with cookies, though that doesn't quite square with what you said earlier about trying different browsers and not seeing the issue. Maybe there is some client-side cookie those browsers already had?
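One more diagnostic idea from me (an assumption about your page, not something you described): render the execution time into the response itself, so each JMeter sample shows whether it was served from the output cache. If the rendered timestamp changes on every request, the page really is executing each time. RenderedAtLiteral below is a hypothetical asp:Literal control:

protected void Page_Load(object sender, EventArgs e)
{
    log.Debug("In Page_Load");

    // If output caching is working, cached responses all carry the same timestamp.
    RenderedAtLiteral.Text = DateTime.UtcNow.ToString("o");
}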
I am working on a relatively complex ASP.NET Web Forms application which loads user controls dynamically within update panels. I've run into a very peculiar problem with Internet Explorer: after leaving the page idle for exactly one minute, the next request raises a Sys.WebForms.PageRequestManagerParserErrorException JavaScript exception. This doesn't happen in Firefox or Chrome. When the server receives the bad request, the body is actually empty but the headers are still there. The response that comes back is a fresh response you would get from a GET request, which is not what the update panel script is expecting. Any requests made within a minute are fine, and any requests made after the bad request are fine as well.
I do not have any response writes or redirects being executed. I've also tried setting ValidateRequest and EnableEventValidation in the page directive. I've looked into various timeout properties.
The problem resided with how IE handles the NTLM authentication protocol. An optimization in IE that is not present in Chrome or Firefox strips the request body, which creates an unexpected response for my update panels. To solve this issue you must either allow anonymous requests in IIS when using NTLM, or ensure Kerberos is used instead. The KB article explains the issue and how to deal with it: KB251404.
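If it helps anyone hitting the same thing, here's a quick way to check which protocol a given request actually negotiated (a sketch, assuming Windows authentication is enabled and anonymous access is off):

protected void Page_Load(object sender, EventArgs e)
{
    // Reports "NTLM", "Kerberos" or "Negotiate", depending on what was negotiated.
    var identity = Request.LogonUserIdentity;
    if (identity != null && identity.IsAuthenticated)
    {
        System.Diagnostics.Debug.WriteLine(string.Format(
            "Authenticated as {0} via {1}", identity.Name, identity.AuthenticationType));
    }
}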
I know this has been asked and answered many times previously, believe me I've been through all of the posts looking for a solution before asking again.
If a user logs into a page, takes a copy of the URL, logs out then pastes the URL back into the browser, they can get access to the page they had previously visited very briefly before the browser redirects to the login page once more. During this brief window, if they are fast enough with the mouse and can click on a button or other control, they are somehow logged back into the site, no questions asked.
I've tried including the following code suggestion from another thread on the subject in each Page_Load event to avoid caching, but with no success.
private void ExpirePageCache()
{
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetExpires(DateTime.Now - new TimeSpan(1, 0, 0));
    Response.Cache.SetLastModified(DateTime.Now);
    Response.Cache.SetAllowResponseInBrowserHistory(false);
}
Code from logout.aspx is as follows:
protected void Page_Load(object sender, EventArgs e)
{
    FormsAuthentication.SignOut();
    HttpContext.Current.Session.Clear();
    HttpContext.Current.Session.Abandon();
    Response.Redirect("~/Account/Login.aspx");
}
Should I be using Server.Transfer() instead of Response.Redirect()?
I've read somewhere that I'm not allowed to clear the browser history programmatically, so I'm a bit stuck. Anyone have any clues please?
Yeah, that line of code is already included in the Page_Load event of the logout.aspx page. It's the first line of code that gets executed...
I suspect something else is up.
When you call Response.Redirect, none of the page content generated is sent to the client. ASP.NET uses buffering, so as you generate your page it's buffered until you get to the end, at which point the buffer is sent to the client. This allows you to make changes right up until the last moment, e.g. sending a redirect response. So that's not your issue.
Are you using output caching or setting the forms auth ticket to be persistent? If the browser has a cached copy of the content, it will show that rather than hit the server (as caching is designed to do). The minute you hit the server, though, if the cookie is invalid, the server should redirect you somewhere to get a new ticket. If it's not doing that, then somehow it's finding a valid ticket.
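As a quick sanity check on the ticket side, make sure that wherever you issue the cookie you're not asking for a persistent one. A sketch of the usual call (your actual login code will differ; userName is just a placeholder):

// In the login page, after validating the user's credentials.
// The second argument controls persistence: 'false' issues a session-only cookie
// that is not written to disk and dies with the browser.
FormsAuthentication.RedirectFromLoginPage(userName, false);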
You could use Fiddler to monitor the traffic. You can mimic a new browser session by sending requests by hand using Fiddler and removing the session and ticket cookies.
Simon
When working with HTTP modules, has anyone noticed that the final two events in the pipeline -- PreSendRequestHeaders and PreSendRequestContent -- don't always run?
I've verified that code bound to EndRequest will run, but code bound to either PreSendRequestHeaders or PreSendRequestContent will not.
Is there a reason why? I thought perhaps it was a caching issue (with a 304 Not Modified, you don't actually send content...), but I've cleared caches and determined that the server is returning 200 OK, which would indicate that it sent content.
This is a problem because the StatusCode of the response defaults to 200, and my understanding is that it doesn't get updated to something like a 404 or 206 until those two final events. If I check the StatusCode during EndRequest, it always reads 200.
Isn't this related to the IIS 7 integrated pipeline?
To be verified, but I think those events are only triggered when IIS 7 is running in integrated pipeline mode.
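If you want to see which of those events actually fire in a given setup, a throwaway module along these lines should tell you (a sketch; register it in web.config and watch the debug output):

using System.Diagnostics;
using System.Web;

public class PipelineEventLogger : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Log each pipeline event of interest as it fires.
        context.EndRequest += (s, e) => Debug.WriteLine("EndRequest fired");
        context.PreSendRequestHeaders += (s, e) => Debug.WriteLine("PreSendRequestHeaders fired");
        context.PreSendRequestContent += (s, e) => Debug.WriteLine("PreSendRequestContent fired");
    }

    public void Dispose() { }
}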
I'm injecting a cookie header on the PreSendRequestHeaders event and have yet to run into an issue of it not firing...
Maybe it has to do with HttpResponse.BufferOutput. If buffering is turned off, it seems like all of the headers and some of the content would have already been sent by the time these events fire.
This problem started on a different board, but Dave Ward, who was very prompt and helpful there is also here, so I'd like to pick up here for hopefully the last remaining piece of the puzzle.
Basically, I was looking for a way to do constant updates to a web page from a long-running process. I thought AJAX was the way to go, but Dave has a nice article about using JavaScript. I integrated it into my application and it worked great on my client, but NOT on my server at WebHost4Life. I have another server at Brinkster and decided to try it there, and it DOES work. All the code is the same on my client, WebHost4Life, and Brinkster, so there's obviously something going on with WebHost4Life.
I'm planning to write an email to them or request technical support, but I'd like to be proactive and try to figure out what could be going on with their end to cause this difference. I did everything I could with my code to turn off Buffering like Page.Response.BufferOutput = False. What server settings could they have implemented to cause this difference? Is there any way I could circumvent it on my own without their help? If not, what would they need to do?
For reference, a working copy of a simpler version of my application is located at http://www.jasoncomedy.com/javascriptfun/javascriptfun.aspx and the same version that isn't working is located at http://www.tabroom.org/Ajaxfun/Default.aspx. You'll notice that in the working version you get updates with each step, but in the one that doesn't work, it sits there for a long time until everything is done and then pushes all the updates to the client at once ... and that makes me sad.
Hey, Jason. Sorry you're still having trouble with this.
What I would do is set up a simple page like:
protected void Page_Load(object sender, EventArgs e)
{
    for (int i = 0; i < 10; i++)
    {
        Response.Write(i + "<br />");
        Response.Flush();
        Thread.Sleep(1000);
    }
}
As we discussed before, make sure the .aspx file is empty of any markup other than the @Page declaration. Extra markup can sometimes trigger page buffering when it wouldn't normally happen.
Then, point the tech support guys to that file and describe the desired behavior (10 updates, 1 per second). I've found that giving them a simple test case goes a long way toward getting these things resolved.
Definitely let us know what it ends up being. I'm guessing some sort of inline caching or reverse proxy, but I'm curious.
I don't know that you can force buffering - but a reverse proxy server between you and the server would affect buffering (since the buffer then affects the proxy's connection - not your browser's).
I've done some fruitless research on this one, but I'll share my line of thinking in the dim hope that it helps.
IIS is one of the things sitting between client and server in this case, so it might be useful to know what version of IIS is involved in each case -- and to investigate if there's some way that IIS can perform its own buffering on an open connection.
Though it's not quite on the money, this article about IIS 6 vs. IIS 5 is the kind of thing I'm thinking of.
You should make sure that neither IIS nor any other filter is trying to compress your response. It is very possible that your production server has IIS compression enabled for dynamic pages such as those with the .aspx suffix, and your development server does not.
If this is the case, IIS may be waiting for the entire response (or a sizeable chunk) before it attempts to compress and send any result back to the client.
I suggest using Fiddler to monitor the response from your production server and figure out if responses are being gzip'd.
If response compression does turn out to be the problem, you can instruct IIS to ignore compression for specific responses via the Content-Encoding: identity header.
The issue is that IIS will further buffer output (beyond ASP.NET's buffering) if you have dynamic gzip compression turned on (it is by default these days).
Therefore, to stop IIS buffering your response, there's a little hack you can do to fool IIS into thinking the client can't handle compression: overwrite the Request.Headers["Accept-Encoding"] header (yes, Request.Headers, trust me):
Response.BufferOutput = false;
Request.Headers["Accept-Encoding"] = ""; // suppresses gzip compression on output
As it's sending the response, the IIS compression filter checks the request headers for Accept-Encoding: gzip ... and if it's not there, it doesn't compress (and therefore doesn't further buffer the output).
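Putting that together with the flush-per-second test page from earlier in the thread, the whole thing ends up looking something like this (a sketch; as noted above, rewriting Request.Headers is a hack and may only behave under the IIS 7+ integrated pipeline):

protected void Page_Load(object sender, EventArgs e)
{
    // Stop ASP.NET buffering the whole page before sending anything.
    Response.BufferOutput = false;

    // Pretend the client can't accept compressed content, so IIS's dynamic
    // gzip filter doesn't buffer the response while compressing it.
    Request.Headers["Accept-Encoding"] = "";

    for (int i = 0; i < 10; i++)
    {
        Response.Write(i + "<br />");
        Response.Flush();
        System.Threading.Thread.Sleep(1000);
    }
}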