Sys.WebForms.PageRequestManagerParserErrorException with IE - asp.net

I am working on a relatively complex ASP.NET Web Forms application that loads user controls dynamically within update panels. I've run into a very peculiar problem with Internet Explorer: after leaving the page idle for exactly one minute, the next request throws a Sys.WebForms.PageRequestManagerParserErrorException JavaScript exception. This doesn't happen in Firefox or Chrome. When the server receives the bad request, the body is actually empty but the headers are still there. The response sent back is a fresh response, as you would get from a GET request, which is not what the update panel script is expecting. Any requests made within a minute are okay, and any requests made after the bad request are okay as well.
I do not have any Response.Write calls or redirects being executed. I've also tried setting ValidateRequest and EnableEventValidation in the page directive, and I've looked into various timeout properties.

The problem resided in how IE handles the NTLM authentication protocol. An optimization in IE that is not present in Chrome or Firefox strips the request body, which in turn creates an unexpected response for my update panels. To solve this issue you must either allow anonymous requests in IIS when using NTLM, or ensure Kerberos is used instead. Microsoft KB article 251404 explains the issue and how to deal with it.
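If you go the anonymous-authentication route, the IIS 7+ configuration looks roughly like this (a sketch only; the <authentication> section is often locked at the server level, so it may have to go in applicationHost.config or be set through IIS Manager rather than web.config):

```xml
<!-- Sketch: enable Anonymous alongside Windows (NTLM) authentication -->
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" />
      <windowsAuthentication enabled="true" />
    </authentication>
  </security>
</system.webServer>
```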


HTTP POST from app.example.com to localhost: session cookie not sent

I have two Spring Web applications that work together. I'm running the first application from the IDE on localhost, while the second one is running in Docker on app.127.0.0.1.nip.io.
The two applications interact indirectly through the user's browser by redirecting and POSTing between the two apps. This is slightly similar to how an SP and an IdP work together in SAML2.
In my case, the first application on localhost sends a 302 to the second application. After doing some work, the second application sends back an HTML page with a form and JS code to auto-submit it to my first application on localhost. The HTML looks similar to this:
<form method="POST" action="http://localhost:8080/some/path">
...
</form>
My first application is using Spring Session with a session cookie, and this works just fine. However, when the second application makes the browser POST the form, the browser does not send the session cookie with the POST request.
When both applications are running in docker under .127.0.0.1.nip.io, the cookie is sent.
I've tried to find any hint if this behaviour is expected, and what headers or other bits the applications could use to influence this.
At this point this is mostly an annoyance while debugging, but I'm concerned that once the two applications run on different FQDNs and/or different domains, browsers will also block the cookie from being sent.
I've tested this with current versions of Chrome and Firefox.
The problem is the new(ish) SameSite cookie policy, which covers exactly this case: another application POSTing to a host. The default is now SameSite=Lax, which does not allow the first-party cookie to be sent on a cross-site POST request.
The solution is to allow the session cookie to be sent by specifying SameSite=None (which browsers require to be paired with Secure). Be aware, however, that this can create security vulnerabilities. For my application this is not an issue, so I can allow the cookie to always be sent, especially when I run my application in the debugger.
For the production deployment I will be able to tighten this, since both applications will run under the same domain (a.example.com and b.example.com) and both will use TLS, so I can set the session cookie back to SameSite=Lax.
Here's a decent explanation: https://web.dev/samesite-cookies-explained/
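For Spring Session specifically, the SameSite attribute can be set through a CookieSerializer bean. A sketch (class and bean names are mine; note again that browsers only accept SameSite=None over HTTPS with the Secure flag):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.web.http.CookieSerializer;
import org.springframework.session.web.http.DefaultCookieSerializer;

@Configuration
public class SessionCookieConfig {
    // Relax SameSite so cross-site form POSTs carry the session cookie.
    @Bean
    public CookieSerializer cookieSerializer() {
        DefaultCookieSerializer serializer = new DefaultCookieSerializer();
        serializer.setSameSite("None");      // allow cross-site sends
        serializer.setUseSecureCookie(true); // required by browsers with SameSite=None
        return serializer;
    }
}
```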

Chrome dev tools replacing AJAX request with redirect

I've got an app that makes an AJAX call using jQuery. Upon error, in Application_Error, there's a Response.Redirect that sends you to an error page (it's designed for regular page errors, but it fires on failed AJAX requests as well).
In my real project, when I make this AJAX request, the Network tab of the Chrome dev tools shows the original request and URL for a split second, and when the error occurs, that line goes away and gets replaced by the request to GenericError.aspx. In reality, the original call received a 302, and there was a second call to GenericError.aspx - this is confirmed with Fiddler.
I tried to recreate with new small projects, and those always show properly, with both the 302 and the error page showing up as separate lines.
The "Preserve Log" checkbox is checked, but both projects behave the same with it unchecked as well.
The AJAX requests and 302 responses are practically identical between my real project and the small one, so I don't see why Chrome would treat them differently.
Are there any config options or anything I might be missing that could change the way Chrome dev tools would treat AJAX responses with 302 redirects?

Cache control not working when hit refresh in the browser

I'm trying to implement cache control in my application. I've set up a Tomcat filter for all fonts that sets max-age=120.
When I request a font for the first time with the cache cleared, the response carries the max-age directive as expected. I then expected that hitting refresh would not make the browser send the HTTP request again. Instead, the second request carries a
cache-control: max-age=0
header, and the response is served from the server cache. What I'm trying to achieve is to stop the call from leaving the browser entirely.
Am I doing something wrong?
Thanks
Hitting refresh has semantics that depend on the browser you're using, but it will often make a conditional request to make sure the user is seeing a fresh response (because they asked to refresh).
If you want to check cache operation, try navigating to the page, rather than hitting refresh.
OTOH if you don't want refresh to behave like this -- and you really mean it -- Mozilla is prototyping Cache-Control: immutable to do this (but it's early days, and it's Moz-only for the moment).
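The behaviour above can be sketched as the standard HTTP freshness check (helper names are made up; see RFC 9111 for the full rules). Refresh sends Cache-Control: max-age=0 as a request directive, which forces revalidation even when the cached entry is still fresh:

```java
public class FreshnessDemo {
    // Is the cached response still fresh, given its age and the response's max-age?
    static boolean isFresh(long ageSeconds, long responseMaxAge) {
        return ageSeconds < responseMaxAge;
    }

    // Does the request's own max-age directive permit serving the stored response?
    static boolean requestAllowsCachedResponse(long ageSeconds, long requestMaxAge) {
        return ageSeconds <= requestMaxAge;
    }

    public static void main(String[] args) {
        System.out.println(isFresh(60, 120));   // true  - within max-age=120
        System.out.println(isFresh(180, 120));  // false - stale, must revalidate
        // Refresh sends Cache-Control: max-age=0, so even a fresh entry is revalidated:
        System.out.println(requestAllowsCachedResponse(60, 0)); // false
    }
}
```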

3 requests for every resource (2 x 401.2 and 1 x 200) in a windows authenticated asp.net mvc app

I was trying to track down why my site was so painfully slow in IE9 when I pulled out Fiddler and realised that every request is being sent 3 times (twice I get a 401.2 and then a success). I verified this happens in all browsers; it's just that Chrome's speed was masking it (or it could be that this has nothing to do with my site's performance issues in IE).
I've set up breakpoints in my begin/end request handlers, and the request comes in for, say, a CSS file. It is not authenticated and the response goes out with a 401.2. I double-checked that I'm not setting the response status anywhere myself, so somewhere between begin_request and end_request the status changes to 401.2.
Note: I have runAllManagedModulesForAllRequests=true so I can configure compression; however, this setting does not affect this (from what I can see in Fiddler).
I am very ignorant of Kerberos/Active Directory in general, but I just cannot fathom that this is a normal handshaking protocol for every single request (perhaps for the first, but not all).
I have scoured the googles and nothing seems to help (adding/removing modules, authentication providers, etc). My site works just fine; it's only once you look under the hood that I see the triplicated requests. Note: this also happens when I deploy to production, so it's not a server-specific issue.
Has anyone ever seen this? thanks in advance.
I think this is just how NTLM authentication works. The process is discussed here. Note that you will want to set authPersistSingleRequest to false to cut down on the number of 401s.
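For reference, the triplicated requests line up with the NTLM challenge/response sequence, which (without connection-level auth persistence) runs for each request. A rough sketch of one round, token contents elided:

```
C -> S: GET /site.css                                   (no credentials)
S -> C: 401.2  WWW-Authenticate: NTLM                   (server demands auth)
C -> S: GET /site.css  Authorization: NTLM <type-1>     (negotiate)
S -> C: 401.2  WWW-Authenticate: NTLM <type-2>          (challenge)
C -> S: GET /site.css  Authorization: NTLM <type-3>     (response)
S -> C: 200 OK
```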

In what scenario could an AJAX request not have the cookies set by the page which fired the AJAX?

Some small percentage of the time, we see a flow like this, deduced from server logs (I have not been able to reproduce this case with any browser):
At time A, client hits our page:
with no cookies
gets back a response with a Set-Cookie HTTP response header that gives them a session id of B
body has JS to fire an AJAX request /ajax/foo.
At time A + 1 second, client hits us with the AJAX request to /ajax/foo
the referrer is set to the page in step 1 that fired the AJAX, as expected
with no cookies - why?
gets back a response with a Set-Cookie header that gives them a session id of C (expected, since they didn't send us a cookie)
At some time slightly later, all of the client requests are sending either session id B or C - so the problem is not that the browser has cookies turned off.
This seems to be essentially a race condition -- the main page request and the AJAX request come in together very close in time, both with no cookies, and there is a race to set the cookie. One wins and one loses.
What is puzzling to me is how this could happen. My assumption is that by the time the browser has read enough of the response to know that it needs to fire an AJAX request, it has already received the HTTP response headers, and thus the Set-Cookie response header. So it seems to me that the client would always send back the cookie that we set in the page that fired the AJAX request. I just don't see how this could happen unless the browser is not promptly processing the Set-Cookie response.
Like I said, I can't reproduce this in Firefox, Safari, or Chrome, but we do see it several times a day.
There is a newer feature in Google Chrome that could cause this misbehavior: prerendering.
Prerendering is an experimental feature in Chrome (versions 13 and up) that can take hints from a site’s author to speed up the browsing experience of users. A site author includes an element in HTML that instructs Chrome to fetch and render an additional page in advance of the user actually clicking on it.
Even if you do not proactively trigger prerendering yourself, it is still possible that another site will instruct Chrome to prerender your site. If your page is being prerendered, it may or may not ever be shown to the user (depending on if the user clicks the link). In the vast majority of cases, you shouldn’t have to do anything special to handle your page being prerendered—it should just work.
For more information read: http://code.google.com/chrome/whitepapers/prerender.html
Edit:
You could trigger prerender on your page with: http://prerender-test.appspot.com/
a) Does the cookie have an expiration time?
b) If so, have you tried to reproduce it by setting the computer's clock back or forward by more than the TTL of the cookie? (I mean the clock of the computer running the browser, obviously; not the server running the app ... which should be a separate computer whose clock is set accurately.)
I've seen this as well; it seems to be triggered by users with screwed up system clocks. The cookie was sent with an expiration date that, from the browser's perspective, was already past.
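A small sketch of why a fast client clock makes a freshly set cookie disappear immediately (the timestamps and helper name here are made up for illustration):

```java
import java.time.Duration;
import java.time.Instant;

public class CookieSkewDemo {
    // A browser discards a cookie once its own clock passes the Expires time.
    static boolean cookieAlreadyExpired(Instant expiresAt, Instant clientNow) {
        return !clientNow.isBefore(expiresAt);
    }

    public static void main(String[] args) {
        Instant serverNow = Instant.parse("2023-05-01T12:00:00Z");
        // Server sets the session cookie to expire 30 minutes from *server* time.
        Instant expiresAt = serverNow.plus(Duration.ofMinutes(30));

        // Client clock in sync: cookie survives, so the AJAX request carries it.
        System.out.println(cookieAlreadyExpired(expiresAt, serverNow.plusSeconds(1)));  // false

        // Client clock one hour fast: Expires is already in the past, cookie dropped.
        Instant skewedClientNow = serverNow.plus(Duration.ofHours(1));
        System.out.println(cookieAlreadyExpired(expiresAt, skewedClientNow));           // true
    }
}
```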
