Chrome and Firefox caching a 403 - http

I've got a weird bug and wondered if anyone else can think of a cause.
Scenario:
User tries to access restricted content and gets a turn-away page with a 403 status code.
User logs in.
User tries to access the content again; access should now be allowed, but the browser returns the cached turn-away page and 403 response (no hit registered on the server).
After a CTRL+F5, or after waiting a while, the browser returns the correct content.
This is happening in both Firefox and Chrome; I haven't tried Internet Exploder.
I have only reproduced the issue once on my machine, whilst on a Skype call with the testers, but they can reproduce it every time. They are based over in India, though, and have a much slower connection to our test site. Could that be a cause?
I saw a related question, but that one was caused by a Squid proxy; I'm not behind a proxy (although the testers might be).
I'm loath to add cache control headers, as browsers shouldn't cache a 403 according to the HTTP spec, but I need to guarantee that when a user logs in they get the correct content.
Any thoughts on what might be the cause would be greatly appreciated. In the meantime I'll add some cache control headers to the turn-away page just to see if that helps.
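For reference, here's roughly what I'm adding, sketched as Express-style middleware purely for illustration (the framework is just an example, and the auth check is a placeholder, not our real one):

// Mark the 403 turn-away page as uncacheable so the browser
// re-requests it once the user has logged in.
const express = require("express");
const app = express();

app.use(function (req, res, next) {
    if (!req.user) { // placeholder auth check
        res.status(403)
            .set("Cache-Control", "no-cache, no-store, must-revalidate")
            .send("Access denied - please log in.");
        return;
    }
    next();
});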

Related

Preventing Safari prefetching web pages (server side)

When typing a web link into the Safari URL field, the browser attempts to prefetch all links it has previously seen, both GET and POST.
This causes every web link a server supports that is listed in the dropdown as a possible completion to be activated. This is problematic. For example, if a web site has authentication with an /auth/logout link for logging out, then this link can be activated if it appears in the dropdown, logging the user out unintentionally.
Many browsers send a specific header (e.g. 'Purpose: Prefetch' in Chrome) that allows the server side to filter prefetch/preload requests (e.g. return a 503), but Safari doesn't seem to send any distinguishing header field. It also seems to try to prefetch POST requests, which seems very broken to me. GET requests are at least notionally idempotent, but POST requests are supposed to be understood as data-changing.
Has anyone got a solution to this? Please don't suggest that the browser preload feature can be turned off by the end user - that ISN'T a solution from a service delivery perspective.
Has anyone got an explanation as to why browsers would do this and NOT signal the purpose in a header field? (I get why prefetching is a useful UX capability, but not why it's useful while typing URLs, especially for URLs already previously downloaded and thus capable of returning prefetching metadata that would allow a server to selectively disable the capability where appropriate.) From what I can tell, this kind of functionality started to appear with header fields included, but some browsers have since removed this signature. Why? It seems dreadfully broken to me.
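For what it's worth, here's a sketch of the header-based filtering that works for browsers that do send a hint (Express-style, purely illustrative; as noted above, Safari sends neither header, which is the whole problem):

// Reject self-identified prefetch requests with a 503.
// Covers "Purpose" and the newer "Sec-Purpose" variants.
const express = require("express");
const app = express();

app.use(function (req, res, next) {
    var purpose = req.get("Purpose") || req.get("Sec-Purpose");
    if (purpose && /prefetch/i.test(purpose)) {
        return res.status(503).end();
    }
    next();
});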
Thanks.

Sys.WebForms.PageRequestManagerParserErrorException with IE

I am working on a relatively complex ASP.NET Web Forms application which loads user controls dynamically within update panels. I've run into a very peculiar problem with Internet Explorer: after leaving the page idle for exactly one minute, you receive a Sys.WebForms.PageRequestManagerParserErrorException JavaScript exception when the next request is made. This doesn't happen in Firefox or Chrome. When the server receives the bad request, the body is actually empty but the headers are still there. The response that is sent back is the fresh response you would get from a GET request, which is not what the update panel script is expecting. Any requests made within a minute are okay, and any requests made following the bad request are okay as well.
I do not have any response writes or redirects being executed. I've also tried setting ValidateRequest and EnableEventValidation in the page directive. I've looked into various timeout properties.
The problem resided with how IE handles the NTLM authentication protocol. An optimization in IE that is not present in Chrome and Firefox strips the request body, which creates an unexpected response for my update panels. To solve this issue you must either allow anonymous requests in IIS when using NTLM, or ensure Kerberos is used instead. The KB article KB251404 explains the issue and how to deal with it.
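For anyone after the config change, a sketch of the first option (web.config; note that these authentication elements are often locked at the IIS server level and may need unlocking there):

<!-- Keep Windows auth but also allow anonymous requests. -->
<system.webServer>
  <security>
    <authentication>
      <windowsAuthentication enabled="true" />
      <anonymousAuthentication enabled="true" />
    </authentication>
  </security>
</system.webServer>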

3 requests for every resource (2 x 401.2 and 1 x 200) in a windows authenticated asp.net mvc app

I was trying to track down why my site was so painfully slow in IE9 when I pulled out Fiddler and realised that every request is being sent 3 times (twice I get a 401.2 and then a success). I verified this happens in all browsers; it's just that Chrome's speed was masking it (or it could be that this has nothing to do with my site's performance issues in IE).
I've set up breakpoints in my begin/end request handlers, and a request comes in for, say, a CSS file. It is not authenticated, and the response goes out with a 401.2. I double-checked that I'm not setting the response status anywhere myself, so somewhere between begin_request and end_request the status is changing to 401.2.
Note: I have runAllManagedModulesForAllRequests=true so I can configure compression; however, this setting does not affect the behaviour (from what I can see in Fiddler).
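For reference, that setting lives in web.config roughly like this (a sketch, not my actual config):

<system.webServer>
  <!-- run managed modules (e.g. compression) for every request -->
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>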
I am very ignorant about Kerberos/Active Directory in general, but I just cannot fathom that this is a normal handshaking protocol for every single request (perhaps for the first, but not all).
I have scoured the googles and nothing seems to help (adding/removing modules, authentication providers, etc.). I mean, my site works just fine; it's only once you look under the hood that I see the triplicated requests. Note: this also happens when I deploy to production, so it's not a server-specific issue.
Has anyone ever seen this? thanks in advance.
I think this is how NTLM authentication works. The process is discussed here. Note that you will want to set AuthPersistSingleRequest to false to cut down on the number of 401s.
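A sketch of where that lives (web.config; the windowsAuthentication element may be locked at the server level):

<system.webServer>
  <security>
    <authentication>
      <!-- false = keep the NTLM handshake for the whole connection
           instead of re-authenticating on every request -->
      <windowsAuthentication enabled="true" authPersistSingleRequest="false" />
    </authentication>
  </security>
</system.webServer>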

What does "pending" mean for request in Chrome Developer Window?

What does "Pending" mean under the status column in the "Network" tab of Google Chrome Developer window?
This happens when my page script issues a GET request whose response contains content-headers for downloading a CSV file:
Content-type: text/csv;
Content-Disposition: attachment; filename=myfile.csv
This works fine in FF and IE7, downloading the CSV file as expected and opening a file picker to save it, but Chrome does nothing. I confirmed that the server responds to the request, so it appears that Chrome will not process the response.
Curiously, it all works as expected if I type the URL into Chrome's address bar and hit <enter>.
FYI: Chrome 10.0.648.204 on Windows XP
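Based on that last observation, one workaround I'm considering (a sketch; the path is made up) is to let the browser navigate to the URL, as the address bar does, instead of issuing the GET from script:

// A Content-Disposition: attachment response triggers a download
// without replacing the current page.
window.location.href = "/export/myfile.csv"; // hypothetical path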
In my case, I found that the "pending" status was caused by the AdBlock extension. The image that I couldn't get to load had the word "ad" in the URL, so AdBlock kept it from loading.
Disabling AdBlock fixes this issue.
Renaming the file so that it doesn't contain "ad" in the URL also fixes it, and is obviously a better solution. Unless it's an advertisement, in which case you should leave it like that.
I also get this when using the HTTPS Everywhere plugin.
This plugin has a list of sites that are available over https as well as http, so I assume that before the actual request is made, it is already being cancelled somehow.
For example, when I go to http://stackexchange.com, in the developer tools I first see a request with status (terminated). This request has only a few headers - just the GET, User-Agent, and Accept - and no response either.
Then there is a request to https://stackexchange.com with full headers etc.
So I assume the pending status is used for requests that are never actually sent.
I had some problems with pending requests for mp3 files.
I had a list of mp3 files and one player to play them. If I picked a file that had already been downloaded, Chrome would block the request and show "pending request" in the network tab of the developer tools.
All versions of Chrome seem to be affected.
Here is a solution I found:
player[0].setAttribute('src','video.webm?dummy=' + Date.now());
You just add a dummy query string to the end of each URL. This forces Chrome to download the file again.
Another example with the Popcorn player (using jQuery):
url = $(this).find('.url_song').attr('url');
pop = Popcorn.smart( "#player_", url + '?i=' + Date.now());
This works for me; the resource is no longer served from the cache. This should also work the same way for .csv files.
I had the same issue on OS X Mavericks; it turned out that Sophos anti-virus was blocking certain requests. Once I uninstalled it, the issue went away.
If you think it might be caused by an extension, one easy way to test this is to open Chrome with the --disable-extensions flag and see if that fixes the problem. If it doesn't, consider looking beyond the browser to see whether any other application might be causing the problem, specifically security apps, which can interfere with requests.
I had a similar issue with application/json AJAX calls. In FF/IE they were fine, but in Chrome the status in the Developer Network window was always (pending) because a different status code was being returned.
In my case I changed my JSON response to send an HttpStatusCode of 200; then Chrome was fine and the status text changed to 200 OK.
For example, using ASP.NET Web API:
return new HttpResponseMessage(HttpStatusCode.OK)
{
    Content = request.Content
};
A (pending) entry against the time means your request is still in progress; as soon as the response arrives, it is replaced with the total elapsed time.
The fix, for me, was to add the following to the top of the PHP file that was being requested:
header("Cache-Control: no-cache,no-store");
Same problem with Chrome: I had the following code in my HTML page:
<body>
...
<script src="http://myserver/lib/load.js"></script>
...
</body>
But load.js was always in pending status when looking in the Network panel.
I found a workaround using asynchronous loading of load.js:
<body>
...
<script>
// Defer the injection with a 1 ms timeout, then append a new
// <script> element to <head>, so load.js no longer blocks rendering.
setTimeout(function(){
var head, script;
head = document.getElementsByTagName("head")[0];
script = document.createElement("script");
script.src = "http://myserver/lib/load.js";
head.appendChild(script);
}, 1);
</script>
...
</body>
Now it's working fine.
I encountered a similar issue recently.
My app is in Angular 11, and we have a form with some validators that use regexes to validate the data. One data element contained a special character that the regex wasn't handling, and it hung the entire browser. In fact, even though all network calls were successful with 200 OK, Chrome was not showing any response returned by the backend and was also showing the requests in a pending state; there were no console errors or anything. Fixing the regex fixed the issue.
After I found the issue, I googled more about it. Here is more explanation about it:
https://javascript.info/regexp-catastrophic-backtracking
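For illustration, here's the kind of pattern the article warns about (the regex and input are made up, not the ones from our form). Nested quantifiers force the engine to try exponentially many ways to split the input once a match fails:

// (\d+)* can partition a run of digits in exponentially many ways.
var risky = /^(\d+)*$/;
risky.test("123456789012345678901234");    // matches quickly
// Appending "!" makes the match fail, so the engine backtracks
// through every partition - uncommenting this can hang the tab:
// risky.test("123456789012345678901234!");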
I came across this issue when I was debugging a local web application. The issue turned out to be AVG Antivirus and Firewall restrictions. I had to allow an exception through the firewall to get rid of the "Pending" status.
In my case, a simple restart of my browser (Chrome) fixed it; afterwards it worked straight away like magic!
A little bit of context: I happened to refresh my frontend web page and immediately went on to make a change to my API, which caused it to restart. During that window the frontend was making calls to the API, which ended up "pending" because the API was reloading, and the browser cached that pending state. To get out of it I could either set no-cache (which I didn't want to) or simply restart the browser; I chose the restart.
A little background
I encountered this issue when requesting a URL in my Django project. The server is set up using the Apache HTTP server, with basic auth for user authentication.
The URL I was accessing required no authentication, i.e. in my Apache config I had set Require all granted on the URL using the LocationMatch directive.
The issue
The URL I was trying to access returned a 200 status (in the Network tab in Chrome), but the static assets used for styling the requested webpage (CSS, JavaScript, font files, etc.) were not loading and returned a pending status.
Meanwhile, the page loaded partially and kept on loading, all with the basic-auth dialog showing in the browser, even though my URL was granted open access.
What worked for me
Interestingly, as soon as I entered my credentials and logged in, the requested page loaded all the static assets. This made it very clear to me that the static assets directory did NOT have the necessary access permissions.
So I granted access to the static assets directory by updating my Apache config, and then the requested URL and webpage loaded fine (200 status) without any basic-auth dialog or pending status.
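The change was along these lines (a sketch; the path is a placeholder for my real static assets location):

# Open up the static assets the same way the page itself was.
<LocationMatch "^/static/">
    Require all granted
</LocationMatch>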
In my case, there was a pending update for Chrome that kept pages from loading until I restarted the browser. Cheers
I encountered the same problem when requesting certain images from a page. I use JavaScript to set the src attribute of an img object, and if the network is poor, "pending" is displayed in the Network panel of the Chrome developer window. I think it's simply due to the poor network.

How to redirect from HTTPS to HTTP without annoying error messages

I want to redirect users, after HTTPS login, to the HTTP pages on the site. Using HTTPS for the whole site is not going to happen.
What I have so far is the following:
User posts the login form to the secure site
The secure server validates the credentials
The secure server sends a 302 redirect to the client
This works, except that on my machine in IE6 the user gets an error message, because the default is to warn when exiting a secure page. These errors are a usability killer for me and thus a showstopper. I changed it so that step 3 is:
Server sends HTML code with a meta refresh
But this is very slow; even on my local machine it's noticeably slower than doing the 302 redirect.
Is there a better way to accomplish the goal of a hassle-free redirection on standard settings that people use? IE6 represents 20%-25% of our traffic. Also, does anyone have any good info about which browsers will warn and which won't warn for the 302 redirect? I am considering black-listing IE6 so that only it gets the slow meta refresh and everyone else gets the fast 302.
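For reference, the meta refresh in step 3 is roughly this (the URL is a placeholder), with a plain link as a fallback:

<meta http-equiv="refresh" content="0; url=http://example.com/home">
<p>If you are not redirected automatically,
<a href="http://example.com/home">click here to continue</a>.</p>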
Reviving an old topic, but to make it complete, posting the following so other devs can have a choice of implementation.
One way of moving between HTTPS and HTTP without a warning message is to use a client-side redirect with JavaScript.
Steps
User enters login details on an HTTPS form and clicks the login button
The login button posts back to the HTTPS form for login validation; assuming the login is correct, the server redirects to a holding page, also under HTTPS, which displays a message ("please wait while the site redirects you")
This holding page does a JavaScript redirect to the HTTP page (see the sketch below)
No browser warning message will be displayed
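A minimal sketch of such a holding page (the target URL is a placeholder):

<p>Please wait while the site redirects you...</p>
<script>
// A client-side redirect to the plain-HTTP page; done this way,
// the browser shows no "leaving a secure page" warning.
window.location.replace("http://example.com/home");
</script>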
HTH
I am considering black-listing IE6 so that only it gets the slow meta refresh and everyone else gets the fast 302.
I would do something like that. Also include a plain HTML link in the body for accessibility.
Note that some other browsers do give a similar warning about leaving an HTTPS site, but in their case it is accompanied by a (generally pre-ticked) “don't ask me again” button. So by the time they get to your site they will almost certainly have told that warning to disappear. This doesn't make the warning less pointless, but at least it alleviates the problem.
The secure server sends a 302 redirect to the client
You shouldn't send a 302 in response to a POST. A theoretical browser that took the HTTP RFC seriously might respond to that by re-POSTing the form to the new URL. (Which, ironically, would make IE6's warning about information "being retransmitted to a nonsecure site" less misleading.) Instead, use "303 See Other".
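A hedged sketch of what that looks like (Express-style; the route and target URL are made up):

const express = require("express");
const app = express();

// Answer the login POST with an explicit 303 so a spec-conforming
// browser follows the redirect with GET rather than re-POSTing.
app.post("/auth/login", function (req, res) {
    // ...validate credentials (elided)...
    res.redirect(303, "http://example.com/home");
});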
I don't think there's any other way. That error message is for the user's benefit, and is present in IE 7 and Firefox 3 now as well. The only way that I know of to prevent it is to add your site as trusted within the browser.
Update: Oh, so it's not the mixed-content error. I know which one you mean, though I still don't think you can disable it. Generally, security errors are for the user's benefit, to protect them from potentially dangerous sites, and as such cannot be disabled by the (potentially unsafe) website itself.
