I have set up an extremely basic Web Forms site locally to find out why the OutputCache directive on the page is ignored when I run a load test against a .NET website through JMeter.
I'm using this directive at the top of the page:
<%@ OutputCache Duration="3600" VaryByParam="none" %>
In order to check if the cache is working I have a simple log4net log in the Page Load:
// log4net logger field, shown here for completeness
private static readonly log4net.ILog log =
    log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

protected void Page_Load(object sender, EventArgs e)
{
    log.Debug("In Page_Load");
}
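As an extra cross-check that does not depend on the log file, one could also stamp the time the page was generated into the response; with OutputCache in effect the whole rendered page is cached, so repeated requests (from a browser or from JMeter) should return the same timestamp. A minimal sketch:

protected void Page_Load(object sender, EventArgs e)
{
    log.Debug("In Page_Load");

    // Written into the cached output: this value should only change when
    // the cached copy expires, not on every request.
    Response.Write("Generated at: " + DateTime.Now.ToString("o"));
}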
When browsing the site through a web browser the OutputCache behaves as expected, as I get a single log entry in the file:
DEBUG [2014-04-05 15:44:56,905] - In Page_Load
When I continue to refresh the same page in that browser, and also open it in other browsers, there are no more log entries, which is the expected behaviour.
I then restarted IIS and executed a very simple JMeter test of 5 threads with a delay of 1000 milliseconds, and I get 5 "In Page_Load" log entries instead of the expected 1 as above.
If I restart IIS, open the site in a browser window, and then execute the load test, I get the single log entry from the first manual browser request and no further entries. It seems that first request through the browser populates the web server's cache, and the JMeter load test then behaves as expected.
Does anyone have experience with jMeter and .Net OutputCache?
I have no experience with .NET OutputCache, but it sounds like your simple JMeter script "looks" different to IIS than your browser does.
I would suggest recording the actual browser session with either the built-in JMeter HTTP(S) Test Script Recorder [1] (formerly called the JMeter built-in proxy) or with BlazeMeter's JMeter Chrome plugin [2].
I'm guessing playing back an actual browser request might do the trick.
[1] http://jmeter.apache.org/usermanual/component_reference.html#HTTP(S)_Test_Script_Recorder
[2] https://chrome.google.com/webstore/detail/blazemeter-the-load-testi/mbopgmdnpcbohhpnfglgohlbhfongabi?hl=en
Update 1:
Try these scenarios:
A - 1 thread with 5 loops (and not 5 threads with 1 loop)
B - Use a loop controller in the actual script instead of looping with the thread group.
C - Can you try loading the page with a real browser 5 times and totally clearing all cache / cookies / history / etc between loads?
I'm guessing it might have something to do with cookies, although that doesn't quite square with your earlier observation that different browsers didn't show the issue. Maybe those browsers already had some client-side cookie?
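If none of those scenarios narrows it down, one way to see exactly how the JMeter requests differ from the browser requests (cookies, Accept headers, and so on) is to temporarily log the incoming request headers, for example from the same Page_Load used in the question. A diagnostic sketch only:

protected void Page_Load(object sender, EventArgs e)
{
    log.Debug("In Page_Load");

    // Temporary diagnostics: dump every request header so a browser hit
    // and a JMeter hit can be compared side by side in the log file.
    foreach (string name in Request.Headers.AllKeys)
    {
        log.DebugFormat("Header {0}: {1}", name, Request.Headers[name]);
    }
}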
Related
I am working on a relatively complex asp.net web forms application, which loads user controls dynamically within update panels. I've run into a very peculiar problem with Internet Explorer where after leaving the page idle for exactly one minute you receive a Sys.WebForms.PageRequestManagerParserErrorException javascript exception when the next request is made. This doesn't happen in Firefox and Chrome. When the server receives the bad request, the body is actually empty but the headers are still there. The response that is sent back is a fresh response you would get from a GET request, which is not what the update panel script is expecting. Any requests done within a minute are okay. Also any requests made following the bad request are okay as well.
I do not have any response writes or redirects being executed. I've also tried setting ValidateRequest and EnableEventValidation in the page directive. I've looked into various timeout properties.
The problem resided in how IE handles the NTLM authentication protocol. An optimization in IE that is not present in Chrome and Firefox strips the request body, which creates an unexpected response for my update panels. To solve this issue you must either allow anonymous requests in IIS when using NTLM or ensure Kerberos is used instead. The KB article KB251404 explains the issue and how to deal with it.
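If you want to confirm that stripped request bodies are what your server is actually receiving before changing the authentication setup, a rough diagnostic sketch for Global.asax (the trace output is just an example) is to flag POSTs that arrive with no body:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;

    // The stripped-body requests described above are POSTs whose headers
    // arrive intact but whose body is empty, so ContentLength is 0.
    if (request.HttpMethod == "POST" && request.ContentLength == 0)
    {
        System.Diagnostics.Trace.WriteLine(
            "Empty-body POST for " + request.RawUrl +
            " - possible IE/NTLM re-authentication request");
    }
}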
In the standard redirection scenario, the browser is sending another request to the server.
How can I achieve an internal IIS redirection where all web application lifecycle events (BeginRequest, AuthenticateRequest, ...) are retriggered (not just invoking another handler)?
CLARIFICATION: I mean just redirection inside the same web app.
It depends where in the page lifecycle you are. Within a page you can use Server.Transfer, but it won't work with MVC and it's a bit hacky.
The context has some methods, such as RewritePath, that let you change the path that is executing, but again, there are limitations.
iis.net has a useful article on this.
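As a rough illustration of the two options mentioned above (the target path is only a placeholder):

// Inside a page or handler: executes the target page in place of the
// current one. Page-centric and, as noted, a bit hacky; it does not
// restart the pipeline.
Server.Transfer("~/Target.aspx", true);

// Inside a module (e.g. in BeginRequest): changes the path the rest of
// the pipeline sees, so later handlers run against the new path. Events
// that have already fired (BeginRequest, AuthenticateRequest, ...) are
// not raised again, which is one of the limitations mentioned.
HttpContext.Current.RewritePath("~/Target.aspx");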
I was trying to track down why my site was so painfully slow in IE9 when I pulled out Fiddler and realised that every request is being sent 3 times (twice I get a 401.2 and then a success). I verified this happens in all browsers; it's just that Chrome's speed was masking it (or it could be that this has nothing to do with my site's performance issues in IE).
I've set up breakpoints in my begin/end request handlers and watched a request come in for, say, a CSS file. It is not authenticated and the response goes out with a 401.2. I double-checked that I'm not setting the response status anywhere myself, so somewhere between BeginRequest and EndRequest the status is changing to 401.2.
Note: I have the runAllManagedModulesForAllRequests=true so I can configure compression, however this setting does not affect this (from what I can see from Fiddler).
I am very ignorant of Kerberos/Active Directory in general, but I just cannot fathom that this is a normal handshaking protocol for every single request (perhaps for the first, but not all).
I have scoured the googles and nothing seems to help (adding/removing modules, authentication providers, etc). My site works just fine; it's only once you look under the hood that I see the triplicated requests. Note: this also happens when I deploy to production, so it's not a server-specific issue.
Has anyone ever seen this? Thanks in advance.
I think this is how NTLM authentication works. The process is discussed here. Note that you will want to set AuthPersistSingleRequest to false to cut down on the number of 401s.
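If you want to confirm that the extra 401s are just the NTLM handshake rather than something in your own code, a purely diagnostic sketch for Global.asax is to log each response's status along with whether the client sent credentials:

protected void Application_EndRequest(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;

    // For an NTLM handshake you would expect two 401 responses (no or
    // partial Authorization header) followed by a 200 once the handshake
    // completes on that connection.
    System.Diagnostics.Trace.WriteLine(string.Format(
        "{0} {1} -> {2} (Authorization header present: {3})",
        context.Request.HttpMethod,
        context.Request.RawUrl,
        context.Response.StatusCode,
        context.Request.Headers["Authorization"] != null));
}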
I'm doing some diagnostic logging in the Page_Unload event in an ASP.NET application, and this logging can take a fair bit of time (about 100 ms). Will the response stream get held up by the code in the Page_Unload handler? I could do the work asynchronously using the thread pool, but I'd rather not if it won't affect the client's response time.
More information:
@thorkia is correct in that the documentation says that Page_Unload is called after the response is sent to the client, but in my testing (as advised by @steve) it does block. I've tried Cassini, IIS Express, and full IIS 7.5 (on a test server) with both release and debug builds, with and without a debugger attached. And, grasping at straws, I tried putting Async="true" in the page directive. I tried with Fiddler (streaming enabled) and without Fiddler. I've tried with IE9 and Firefox. If the documentation is "correct" then I wonder if it does send the response but perhaps doesn't "finish it off" (whatever that means; I'll need to check the HTTP spec), and so the page doesn't render in the browser? But my understanding was that a client browser starts to render the page as it receives the bytes, so this doesn't make sense to me either. I've also tried looking at the code in ILSpy, but I think this might take me a lot of time.
Now I'm intrigued; am I doing something wrong, or is the documentation misleading?
Why not try it?
protected void Page_UnLoad(object sender, EventArgs e)
{
System.Diagnostics.Debug.WriteLine("In Page_UnLoad");
System.Threading.Thread.Sleep(10000);
System.Diagnostics.Debug.WriteLine("Leaving Page_UnLoad");
}
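If the browser renders the page immediately and "Leaving Page_UnLoad" only appears in the debug output ten seconds later, the handler is not holding up the response; if the browser waits for the full ten seconds, it is.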
According to MSDN (http://msdn.microsoft.com/en-us/library/ms178472.aspx) the Page Unload stage is only called after the data has been sent to the client.
Taking a long time to do your logging and clean-up will not affect the client's response time for that request, but it could affect future requests if lots of pages are waiting to be unloaded.
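If, as the update to the question suggests, the Unload handler does hold up the response in practice, one option (which the question already hints at) is to push the slow logging onto the thread pool so the request thread is released immediately. A minimal sketch, assuming the logging does not need the Request or Response objects, which should not be touched once the request is finishing:

protected void Page_Unload(object sender, EventArgs e)
{
    // Capture anything the logging needs while the page is still valid;
    // do not use HttpContext, Request or Response from the worker thread.
    string pageName = GetType().Name;

    System.Threading.ThreadPool.QueueUserWorkItem(state =>
    {
        // The slow (~100 ms) diagnostic work now runs off the request
        // thread, so it cannot delay the response even if Unload blocks.
        System.Diagnostics.Trace.WriteLine("Unload diagnostics for " + pageName);
    });
}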
I have an HTTP Module to handle authentication from Facebook, which works fine in classic pipeline mode.
In integrated pipeline mode, however, I'm seeing an additional request pass through for the default document, which is causing the module to fail. We look at the request (from Facebook) to retrieve and validate the user accessing our app. The initial request authenticates fine, but then I see a second request, which lacks the posted form variables, and thus causes authentication to fail.
In integrated pipeline mode, an http request for "/" yields 2 AuthenticateRequests in a row:
A request where AppRelativeCurrentExecutionFilePath = "~/"
A request where AppRelativeCurrentExecutionFilePath = "~/default.aspx"
That second request loses all of the form values, so it fails to authenticate. In classic mode, that second request is the only one that happens, and it preserves the form values.
Any ideas what's going on here?
UPDATE: Here is an image of the trace from module notifications in IIS. Note that my module, FBAuth, is seeing AUTHENTICATE_REQUEST multiple times (I'd expect 2 - one for authenticate and one for postauthenticate, but I get 4).
I'm starting to believe this has something to do with module/filter configuration because I've found a (Vista) box running the same code that doesn't fire these events repeatedly - it behaves as expected. I'm working through trying to figure out what the difference could be...
Thanks!
Tom
My solution was to add the following code at the end of Application_BeginRequest:
if (Request.RawUrl.TrimEnd('/') == HostingEnvironment.ApplicationVirtualPath.TrimEnd('/'))
{
    // Transfer root requests straight to Default.aspx; the second argument
    // (preserveForm: true) keeps the posted form values.
    Server.Transfer(Request.RawUrl + "Default.aspx", true);
}
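Presumably this works because Server.Transfer executes Default.aspx inside the original request, so the pipeline never processes the separate "~/default.aspx" request that arrived without the posted form values.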
DefaultHttpHandler is not supported, so applications relying on sub-classes of DefaultHttpHandler will not be able to serve requests. If your application uses DefaultHttpHandler or handlers that derive from DefaultHttpHandler, it will not function correctly. In Integrated mode, handlers derived from DefaultHttpHandler will not be able to pass the request back to IIS for processing, and instead serve the requested resource as a static file. Integrated mode allows ASP.NET modules to run for all requests without requiring the use of DefaultHttpHandler.
Workaround: Change your application to use modules to perform request processing for all requests, instead of using wildcard mapping to map ASP.NET to all requests and then using DefaultHttpHandler-derived handlers to pass the request back to IIS.
Hmmm, or this could be the issue.
ASP.NET modules in early request processing stages will see requests that previously may have been rejected by IIS prior to entering ASP.NET. This includes modules running in BeginRequest seeing anonymous requests for resources that require authentication. ASP.NET modules can run in any pipeline stages that are available to native IIS modules. Because of this, requests that previously may have been rejected in the authentication stage (such as anonymous requests for resources that require authentication) or other stages prior to entering ASP.NET may run ASP.NET modules. This behavior is by design in order to enable ASP.NET modules to extend IIS in all request processing stages.
Workaround: Change application code to avoid any application-specific problems that arise from seeing requests that may be rejected later on during request processing. This may involve changing modules to subscribe to pipeline events that are raised later during request processing.
http://learn.iis.net/page.aspx/381/aspnet-20-breaking-changes-on-iis-70/